Han Zhao

@hanzhao_ml

2,914
Followers
1,230
Following
10
Media
359
Statuses

Assistant Professor @IllinoisCS ; Ph.D. @mldcmu ; Interested in machine learning and AI.

Champaign, IL
Joined May 2012
Pinned Tweet
@hanzhao_ml
Han Zhao
4 months
How to ensure fairness (statistical parity) and privacy (DP) simultaneously? What are the costs of privacy and fairness upon accuracy? Excited to share our #ICML2024 work answering the two questions above! paper: code:
3
13
60
@hanzhao_ml
Han Zhao
4 years
Thrilled to announce that I'll be starting as an Assistant Professor of Computer Science at the University of Illinois at Urbana-Champaign in Fall 2021 @IllinoisCS Huge thanks to all my collaborators who helped me make this happen, and looking forward to future collaborations!
58
14
536
@hanzhao_ml
Han Zhao
2 months
🚨🚨 We are hiring! RT appreciated! Prof. Rui Song () and I will recruit post-doc scientists through Amazon’s post-doc program ().
4
107
380
@hanzhao_ml
Han Zhao
4 years
OK great. #NeurIPS2020 reviews out. This is the first time I got a review with sentences that are not even complete. Also, if you "believe" a result is "well-known and highly studied", please provide at least one reference.
8
3
165
@hanzhao_ml
Han Zhao
5 years
"On Learning Invariant Representations for Domain Adaptation" accepted to @icmlconf with high scores. Curious about the tradeoff in learning domain-invariant representations for adaptation? Check it here:
2
15
111
@hanzhao_ml
Han Zhao
1 year
Happy to share our recent work on understanding linear scalarization vs. multi-objective optimization for multitask learning, to appear at #NeurIPS2023 : . TL;DR: Is linear scalarization always sufficient for MTL? If not, when will it fail? 1/n
Tweet media one
1
14
83
@hanzhao_ml
Han Zhao
4 years
New work: How to defend potential inference attacks on graphs? We propose a method via adversarial learning of representations for this purpose and analyze the potential tradeoff therein. arXiv: code:
Tweet media one
2
8
82
@hanzhao_ml
Han Zhao
4 years
We will present our recent work on understanding learning language-invariant representations for multilingual machine translations at #ICML2020 on Wednesday. Come and join us! Paper: Poster: Joint work with @JunjieHu12 @risteski_a
Tweet media one
3
10
69
@hanzhao_ml
Han Zhao
9 months
Heading to New Orleans for #NeurIPS2023 to present the following works with my students and collaborators! Looking forward to catching up with old friends & making new friends!!
Tweet media one
0
6
60
@hanzhao_ml
Han Zhao
1 year
Excited to share a recent work accepted at #CVPR2023 with the incredible M5 team @AmazonScience . paper: TL;DR: how to construct latent structures between multi-modal data to allow better retrieval and/or downstream task performance?
2
10
54
@hanzhao_ml
Han Zhao
3 years
Proud advisor moment! In this work we attempt to bridge the gap between meta-learning and multitask learning, to improve the training efficiency without losing the flexibility of fast adaptation. Paper and code coming soon, please stay tuned!
@Haoxiang__Wang
Haoxiang Wang
3 years
Passed the Qual exam and have one paper accepted by ICML this week! Grateful to my advisors Prof. Bo Li ( @uiuc_aisecure ) and Prof. Han Zhao ( @hanzhao_ml )!
Tweet media one
Tweet media two
Tweet media three
2
1
42
0
1
54
@hanzhao_ml
Han Zhao
2 years
#ICLR2022 How can learning invariant representations help cross-lingual transfer? Please come to our poster session at 12:30pm for more details, Session 5 Room 2. Joint work with @ruichengxian @elgreco_winter Paper: Code:
Tweet media one
0
7
48
@hanzhao_ml
Han Zhao
2 years
Attending ICML this week. Hope to see and chat with many friends there!
2
1
45
@hanzhao_ml
Han Zhao
2 years
Welcome to @IllinoisCS , @adityaasinha ! Looking forward to working with you on exciting ML research :)
@adityaasinha
Aditya Sinha
2 years
Excited to share that I will be joining the University of Illinois Urbana-Champaign @IllinoisCS for their 2-year research-focused MS program in Computer Science this Fall 2022, where I will be working with Prof. @hanzhao_ml and others on problems in Machine Learning! (1/n)
Tweet media one
9
2
148
1
2
44
@hanzhao_ml
Han Zhao
4 months
Glad to share our latest work on robust multi-task learning under label noise to appear at #ICML2024 ! Congrats to the team! TL;DR: instead of using fixed weight linear scalarization, we propose to use the Chebyshev function over excess risks to scalarize multiple tasks!
@heyifei99
Yifei He
4 months
🎉 Thrilled to announce our work “Robust Multi-Task Learning with Excess Risks” has been accepted at #ICML2024 ! We introduce ExcessMTL, an excess risk based adaptive task weighting method that is robust to label noise. 1/n
1
4
18
0
3
42
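The TL;DR above contrasts fixed-weight linear scalarization with Chebyshev scalarization over excess risks. A toy numeric sketch of the two objectives, with made-up loss values and per-task reference risks (purely illustrative, not from the paper):

```python
import numpy as np

def linear_scalarization(losses, weights):
    # fixed-weight linear scalarization: sum_i w_i * L_i
    return float(np.dot(weights, losses))

def chebyshev_excess(losses, reference, weights):
    # Chebyshev scalarization over excess risks: max_i w_i * (L_i - L_i^*)
    # reference L_i^* stands for each task's own achievable loss, so a noisy
    # task with irreducibly high loss does not dominate training
    excess = np.asarray(losses) - np.asarray(reference)
    return float(np.max(np.asarray(weights) * excess))

# a clean task (low achievable loss) and a label-noise task (high achievable loss)
losses = [0.30, 0.90]
reference = [0.05, 0.80]   # assumed per-task reference risks
w = [1.0, 1.0]

# linear scalarization is dominated by the noisy task's raw loss,
# while the excess-risk view sees the clean task as further from its optimum
print(linear_scalarization(losses, w))          # ≈ 1.2
print(chebyshev_excess(losses, reference, w))   # ≈ 0.25 (the clean task's excess)
```

The design point: the max over *excess* risks, rather than a weighted sum of raw losses, is what makes the objective robust when some tasks are noisy.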
@hanzhao_ml
Han Zhao
30 days
Will be attending my first ACL #ACL2024 from the 12th to the 15th! I'll help present: Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards arXiv: code:
Tweet media one
3
8
42
@hanzhao_ml
Han Zhao
1 year
If you're interested in working with the great @myamada0 and me on problems related to optimal transport and its applications in robustness, fairness, generalization, and/or model interpretability, please do apply!! Plz help RT!
@myamada0
myamada0
1 year
We call for a postdoc/staff scientist position jointly supervised by me at OIST and @hanzhao_ml at UIUC. The successful candidate will spend most of his/her time at OIST and work remotely with Prof. Zhao.
0
7
25
1
12
42
@hanzhao_ml
Han Zhao
5 years
Interested in learning neural nets in a data-dependent and adaptive way? Check out our recent work to appear @ NeurIPS-19 on this topic! TL;DR: maintain two additional covariance matrices for a fully-connected layer and use them to regularize the model. @rsalakhu @mldcmu
1
7
37
@hanzhao_ml
Han Zhao
6 months
It was a blast, thanks for the invitation and organizing!!
@myamada0
myamada0
6 months
Tweet media one
0
1
25
0
1
34
@hanzhao_ml
Han Zhao
2 years
Proud advisor moment! Congrats to @gargi_balasu for joining the 2023 class of Siebel Scholars! Gargi has broad interests in AI/ML and has already done much interesting work in multi-modality selection & domain generalization. Looking forward to your future success!
@gargi_balasu
Gargi Balasubramaniam
2 years
Honoured to be a Siebel Scholar, Class of 2023! #SiebelClassOf2023 Special thanks to @hanzhao_ml and @IllinoisCS . @SiebelScholars #MachineLearning
12
2
92
2
0
33
@hanzhao_ml
Han Zhao
3 years
Excited to share our recent #ICML2021 paper on bridging the performance gap between meta-learning and multitask learning. Joint work w/ @Haoxiang__Wang @uiuc_aisecure arXiv: Code:
@Haoxiang__Wang
Haoxiang Wang
3 years
Excited to share our #ICML2021 paper "Bridging Multi-Task Learning and Meta-Learning"! This paper bridges multi-task learning (MTL) and meta-learning by theory & experiment. Notably, we show that MTL can match SOTA gradient-based meta-learning with 10x less training time! (1/n)👇
Tweet media one
2
12
71
0
2
25
@hanzhao_ml
Han Zhao
4 years
New work #NeurIPS2020 with Y. H. Tsai, @myamada0 , LP Morency, and @rsalakhu on estimating point-wise dependency using neural nets, with applications in self-supervised learning, cross-modal retrieval, etc. paper: video page:
@rsalakhu
Russ Salakhutdinov
4 years
Neural Methods for Point-wise Dependency Estimation: quantitatively measuring how likely two outcomes cooccur with applications to Mutual Information estimation, self-supervised learning & cross-modal retrieval w/t H. Tsai, @hanzhao_ml et al #NeurIPS2020
2
10
85
0
5
25
@hanzhao_ml
Han Zhao
2 years
If you're at #ICML2022 and interested in out-of-distribution generalization, please come to our poster session: Tuesday 6:30-8:00PM EDT, Hall E #537 paper, poster & talk: code: Joint work w/ @Haoxiang__Wang @uiuc_aisecure 👇
@Haoxiang__Wang
Haoxiang Wang
2 years
Check out our #ICML2022 paper: Provable Domain Generalization via Invariant-Feature Subspace Recovery (ISR) [] ISR is a new method for OOD/Domain Generalization & Spurious Correlations, with theoretical guarantees & good empirical performance! 1/14👇
Tweet media one
1
26
146
0
3
25
@hanzhao_ml
Han Zhao
3 years
Excited to announce that we are organizing Machine Learning for Consumers and Markets Workshop (MLCM) at KDD 2021! Please consider submitting your great work on connecting AI & Business to our workshop! The submission deadline is May 10th, 2021. Website:
1
4
24
@hanzhao_ml
Han Zhao
4 years
#neurips2020 Our paper “Model-based Policy Optimization with Unsupervised Model Adaptation” will be presented in NeurIPS 2020 next week! Paper link: Video room: Code: (1/2)
Tweet media one
1
2
21
@hanzhao_ml
Han Zhao
4 years
Honestly this makes life even harder, especially during this global pandemic
@thegautamkamath
Gautam Kamath
4 years
Disgusting.
6
31
198
0
0
23
@hanzhao_ml
Han Zhao
5 months
This is my first time submitting to #ACL and I am sure my experience is not representative, but still, it's new to me -- a reviewer quietly decreased the score without posting anything, not even letting the authors know. At least a one-line justification seems warranted?
7
0
23
@hanzhao_ml
Han Zhao
4 years
Check out this blog post on our recent work of understanding multilingual machine translation, appearing at ICML 2020: Paper: Video:
@mlcmublog
ML@CMU
4 years
Can you learn a "lingua franca" for translating b/w many languages? How many pairs of languages do you need aligned corpora for? A: Not in general, but under natural generative assumption, linear (not quadratic) pairs suffice! Check out the following post:
0
5
30
1
2
19
@hanzhao_ml
Han Zhao
3 years
It has been a fantastic experience working with @gjzhang1 on this work: we try to take into account both the invariant representations as well as the invariant risks to define "transferability" in domain generalization. Camera-ready to come soon!
@gjzhang1
Guojun Zhang
3 years
Happy to share our work "Quantifying and Improving Transferability in Domain Generalization" which has been accepted at #NeurIPS2021 ! Joint work with Han Zhao @hanzhao_ml , Yaoliang Yu and Pascal Poupart. Arxiv: . Camera-ready to come!
2
3
20
0
1
21
@hanzhao_ml
Han Zhao
4 years
If you work on topics pertaining to fairness, interpretability, and responsible CV, we would love for you to submit your recent work to our workshop!
@sympap
Symeon Papadopoulos
4 years
Responsible Computer Vision @CVPR 2021 workshop, deadline 10 March, org by @deeptigp @dubeylicious @ang3linawang Laurens van der Maaten @orussakovsky @judyfhoffman Dhruv Mahajan @hanzhao_ml #CallforPapers #ComputerVision #AlgorithmicBias
0
3
3
0
3
19
@hanzhao_ml
Han Zhao
4 years
Also, there are related discussions in Section 5 of Prof. Yihong Wu's lecture notes:
@ccanonne_
Clément Canonne
4 years
Stuff I wish I had known sooner: "Pinsker's inequality is cancelled," a thread. 🧵 If you want to relate total variation (TV) and Kullback-Leibler divergence (KL), then everyone, from textbooks to Google, will point you to Pinsker's inequality: TV(p,q) ≤ √(½ KL(p||q) ) 1/
Tweet media one
5
122
779
0
2
20
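The quoted thread starts from Pinsker's inequality, TV(p,q) ≤ √(½ KL(p‖q)), which can be sanity-checked numerically. A minimal sketch for Bernoulli distributions (the example pairs are illustrative):

```python
import math

def tv(p, q):
    # total variation distance between Bernoulli(p) and Bernoulli(q)
    return abs(p - q)

def kl(p, q):
    # KL divergence KL(Bern(p) || Bern(q)) in nats
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Pinsker: TV(p, q) <= sqrt(KL(p||q) / 2), for any pair of distributions
for p, q in [(0.5, 0.6), (0.1, 0.9), (0.01, 0.5)]:
    assert tv(p, q) <= math.sqrt(kl(p, q) / 2)
```

The thread's point is that this bound is loose when KL is large (the right-hand side exceeds 1 while TV never does), which is where alternatives discussed there come in.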
@hanzhao_ml
Han Zhao
5 years
Interested in knowing about the inherent tradeoff between fairness and utility in learning fair representations? Come to our poster and talk to me on Thu Dec 12th 05:00 -- 07:00 PM @ East Exhibition Hall B/C #111 ! Paper: Slides:
@zacharylipton
Zachary Lipton
5 years
Fairness Tutorial under way at West Hall A delivered by @sanmikoyejo . Come now! #neurips2019
Tweet media one
0
0
16
0
3
20
@hanzhao_ml
Han Zhao
4 years
Elegant and intuitive explanations! Maybe I am dumb, but can I ask why SGD on NNs leads to min norm solutions?
@daniela_witten
Daniela Witten
4 years
The Bias-Variance Trade-Off & "DOUBLE DESCENT" 🧵 Remember the bias-variance trade-off? It says that models perform well for an "intermediate level of flexibility". You've seen the picture of the U-shape test error curve. We try to hit the "sweet spot" of flexibility. 1/🧵
Tweet media one
59
1K
5K
2
0
20
@hanzhao_ml
Han Zhao
2 years
Happy to share our recent manuscript on constructing fair (in the demographic parity sense) and optimal classifiers under general settings (multi-class, multi-group, agnostic)! Joint work with @ruichengxian & @rlyin0171 . Link:
Tweet media one
@ruichengxian
ruicheng xian
2 years
Fair & optimal classifiers satisfying demographic parity can be learned in 3 simple steps! In , for the general multiclass, multigroup & agnostic setting, we characterize the exact trade-off b/t accuracy and DP by a Wasserstein-barycenter problem... 1/4
Tweet media one
2
6
27
0
2
19
@hanzhao_ml
Han Zhao
3 years
Huge congrats to @uiuc_aisecure !!
@uofigrainger
The Grainger College of Engineering
3 years
Congratulations to professor Bo Li from @IllinoisCS for being named a 2022 Sloan Research Fellow! This award from the Alfred P. Sloan Foundation is one of most prestigious awards available to early-career scientists 🔸
Tweet media one
1
8
111
0
1
19
@hanzhao_ml
Han Zhao
5 years
Please check out our recent blogpost exploring the tradeoff between fairness (statistical parity) and accuracy!
@rsalakhu
Russ Salakhutdinov
5 years
Inherent Tradeoffs in Learning Fair Representations – Machine Learning Blog | ML @CMU | Carnegie Mellon University
0
5
60
0
2
19
@hanzhao_ml
Han Zhao
5 years
Cannot agree more. I truly think probabilistic circuits are the way to go, combining both the richness of deep models and the tractability of reasoning under uncertainty.
@guyvdb
Guy Van den Broeck
5 years
Now that everyone is again into logic/symbols/reasoning vs deep/learning, I'd like to repost my C&T talk: 📽️ I discuss: - some history of this false dilemma in AI - logic and pure learning are *both* brittle - probabilistic world models as middle ground
1
51
254
0
2
18
@hanzhao_ml
Han Zhao
4 years
The funny thing is that another reviewer explicitly said: "no previous work studying this particular question" with a confidence score 5. Curious to see how the discussions between reviewers unfold :)
2
0
17
@hanzhao_ml
Han Zhao
3 months
Thanks for the organization! This is one of the greatest events I've participated in over the last couple of years!
@myamada0
myamada0
3 months
📢Exciting News! We posted most of the MLSS 2024 lectures on YouTube! Please check it! #MLSS2024 #MLSS2024Okinawa #OIST #RIKENAIP
1
33
89
0
0
17
@hanzhao_ml
Han Zhao
3 years
All my acceptance recommendations got overturned as well. The main issue is that the 15% acceptance rate is not consistent at all with the originally suggested rate of 20% to 25% from an earlier email sent to us.
@guyvdb
Guy Van den Broeck
3 years
Sorry #AAAI2022 but a 15% acceptance rate is harmful to the community. Especially when I see lots of weak accept recommendations by the SPC+AC get overturned (~8% among my friend ACs). If SPC+AC, who are experts in the area, think it should be published, why waste everyone's time
20
49
419
2
0
15
@hanzhao_ml
Han Zhao
2 years
Check out our recent work @aistats_conf 2022 on mitigating performance disparity in changing environments (MDPs) by taking into account the feedback loop caused by policies! Poster session: Paper:
@jianfengchi
Jianfeng Chi
2 years
Check out our #AISTATS2022 paper "Towards Return Parity in Markov Decision Processes" at Poster Session 2, room 1, C2 (Mon 28 Mar 10:15 a.m. PDT — 11:45 a.m. PDT) Paper: Poster:
2
0
3
1
0
15
@hanzhao_ml
Han Zhao
2 years
A big congrats to the team!! This could not have happened without your efforts along the way!
@gargi_balasu
Gargi Balasubramaniam
2 years
Excited to share that our work on Greedy Modality Selection via Submodular Maximization has been accepted at UAI 2022! A big thanks to my advisor @hanzhao_ml for the guidance along the way! Co authors: Sam Cheng, Yifei He, Yao-Hung Hubert Tsai (CMU), Han Zhao @UncertaintyInAI
2
0
83
1
0
15
@hanzhao_ml
Han Zhao
2 years
Welcome to our poster session #1123 Hall E for more details and Q&A!
@Haoxiang__Wang
Haoxiang Wang
2 years
Excited to share our #icml work (w/ @uiuc_aisecure & @hanzhao_ml )👇 Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path and Beyond [] The talk will be in Hall F at 11:55-12:00 EDT *today* The poster session is 6:30-8:30PM *today* 1/8🧵
Tweet media one
2
11
36
0
0
14
@hanzhao_ml
Han Zhao
1 month
@nanjiang_cs That's true -- I've never seen this level of harsh reviews before. The highest average score in my AC batch of 13 papers is 5.
0
0
14
@hanzhao_ml
Han Zhao
4 years
Check out this fantastic workshop!
@LotusSapphire
Shanghang Zhang
4 years
We are organizing the 2nd ICML Workshop on Human in the Loop Learning (). We warmly invite you to submit your work. DDL is 6/10. Following the success of last year, the HILL workshop will cover interactive learning, Explainable AI, online active learning.
Tweet media one
Tweet media two
Tweet media three
0
1
13
0
0
13
@hanzhao_ml
Han Zhao
2 years
Under a linear causal model, can the number of classes compensate for the number of training environments in recovering the invariant features? The answer is yes! Come and stop by for more details if you're interested! Paper link:
@gargi_balasu
Gargi Balasubramaniam
2 years
Come checkout our work on domain generalization via invariant feature subspace recovery (ISR-Multiclass) for robustness to spurious correlations in multi-class classification! Happening today virtually at the #NeurIPS2022 DistShift Workshop. w/ @Haoxiang__Wang @hanzhao_ml (1/2)
1
4
23
0
1
14
@hanzhao_ml
Han Zhao
4 years
Come by and join us! Tomorrow at 12:00 PM EST at Poster Session 4.
@jianfengchi
Jianfeng Chi
4 years
#NeurIPS2020 @hanzhao_ml and I will present our paper "Trade-offs and Guarantees of Adversarial Representation Learning for Information Obfuscation" tomorrow at 12:00 PM EST at Poster Session 4. Welcome to attend! Website: Paper:
1
2
8
0
0
12
@hanzhao_ml
Han Zhao
3 years
We're hiring! Candidates from all areas can apply, especially those working in quantum computing, cloud computing and data centers, interactive computing, the interaction of systems/architecture and AI/machine learning, and the social impacts of computing. Please help RT!
@IllinoisCDS
Siebel School of Computing and Data Science
3 years
📢 We’re hiring tenure-track faculty positions at all levels! For more information about these opportunities see: . 🔗 Learn more about #IllinoisCS :
Tweet media one
2
10
21
1
1
11
@hanzhao_ml
Han Zhao
2 years
Domain adversarial methods (domain-invariant features) are not a panacea for sure (and I doubt there exists such a method that works for all settings), but for certain data-generating distributions, e.g., the anti-causal one, they do tend to work (empirically and provably)
@zacharylipton
Zachary Lipton
2 years
Among other insights @shiorisagawa reveals that successful domain adversarial methods don’t work for real-world tasks. Sadly but truthfully, they don’t even work on synthetic tasks as claimed—numbers in papers are profoundly misleading (peeking at target labels to pick runs)
5
8
82
0
0
11
@hanzhao_ml
Han Zhao
4 years
Talks from the AI seminar @SCSatCMU are available online. Thanks @aayushbansal @zicokolter @rsalakhu for all the hard work! You're fantastic!
@aayushbansal
Aayush Bansal
4 years
Thanks also to other speakers who are not here on Twitter! We have uploaded most of the talks online (with @hanzhao_ml @SCSatCMU @CarnegieMellon )
1
7
29
0
2
11
@hanzhao_ml
Han Zhao
1 year
Thanks @myamada0 for the host -- looking forward to the workshop next week!!
@myamada0
myamada0
1 year
Next week at IBISML there will be an invited talk by Han Zhao (UIUC) @hanzhao_ml . Title: Fair and Optimal Prediction via Post-Processing
0
2
7
0
0
10
@hanzhao_ml
Han Zhao
2 years
I will visit OIST this summer, hosted by the great @myamada0 . If you're interested in working together with us on trustworthy AI, please apply!!
@myamada0
myamada0
2 years
If you are interested in working on trustworthy AI, publishing at top ML conferences, and enjoying beautiful Okinawa (with good Japanese food), this position is perfect for you! OIST: MLDS unit:
1
3
10
2
2
11
@hanzhao_ml
Han Zhao
4 years
This paper is probably the one I read most in the last year!
@daibond_alpha
Bo Dai
4 years
Here is a paper with full version of the relationship between probability metrics by @AlisonLGibbs and @mathyawp .
Tweet media one
1
32
108
0
0
10
@hanzhao_ml
Han Zhao
3 years
@zacharylipton @ccanonne_ Indeed the weekly quizzes are gold!
0
0
9
@hanzhao_ml
Han Zhao
10 months
Heading to visit UofT and UWaterloo on Thu and Friday, happy to say hi and meet up if you're around!
1
0
9
@hanzhao_ml
Han Zhao
4 years
Link to the poster: !
@risteski_a
Andrej Risteski
4 years
Can you learn a "lingua franca" for translating b/w many languages? How many pairs of languages do you need aligned corpora for? A: Not in general, but under natural generative assumption, linear (not quadratic) pairs suffice! Wednesday, #ICML2020 , w/ @hanzhao_ml and @JunjieHu12 .
0
1
16
0
1
8
@hanzhao_ml
Han Zhao
2 months
Please send your CV with a paragraph summarizing your past research experience and future research agenda to hanzhao@illinois.edu if interested. Thanks!
0
0
7
@hanzhao_ml
Han Zhao
2 months
We are especially interested in candidates working on genAI, LLMs, and related topics, with a preference given to candidates with rich hands-on experience. The Amazon Postdoctoral Science Program provides recent PhD graduates with a formal avenue to gain industry experience.
1
0
7
@hanzhao_ml
Han Zhao
2 months
The program advances postdoctoral scientists' career development through industry exposure, publishing research, and mentorship from the Amazon Science community, along with competitive compensation.
1
0
5
@hanzhao_ml
Han Zhao
1 year
Shout out to @XiaotianHan1 and @jianfengchi for leading the joint effort to make this benchmark happen! If you want to know the pros and cons of different methods for ensuring fairness constraints, feel free to check the paper and code out!
@XiaotianHan1
Xiaotian (Max) Han
1 year
📢 Looking for easy-to-use fairness baselines? Curious about utility-fairness trade-off control? Unsure about training endpoints? Check out our new benchmark paper for answers!👇 Code: Paper: #AI #MachineLearning #Fairness
1
5
24
0
1
7
@hanzhao_ml
Han Zhao
4 years
This demonstrates exactly the meaning of incompetent racist
@2prime_PKU
Yiping Lu
4 years
It is called #coronavirus #COVID-19… do something to prevent the virus rather than being an incompetent racist.
1
1
11
1
0
7
@hanzhao_ml
Han Zhao
2 years
@WilliamWangNLP It's nice to have such a table aiming to summarize the connection and differences between these concepts. But it still seems to me the "definitions" here are a bit vague and even cyclic
2
0
7
@hanzhao_ml
Han Zhao
2 years
Welcome back and looking forward to your talk!
@YiMaTweets
Yi Ma
2 years
I will be visiting Urbana-Champaign next week and give a distinguished lecture at the CS Department: I actually planned a visit two years ago but cancelled due to the pandemic. It is great that I will be seeing many old friends there again.
2
3
61
0
0
6
@hanzhao_ml
Han Zhao
2 years
@TaliaRinger Even as a Chinese person I don't quite understand what this means lol
0
0
6
@hanzhao_ml
Han Zhao
2 years
@roydanroy @xwang_lk @RealAAAI AAAI may not be the most suitable venue for ML/CV/NLP, but I think it's still considered one of the best for classic AI subareas like search, AGT, etc.
0
0
6
@hanzhao_ml
Han Zhao
3 years
@tetraduzione Oh well, exactly the opposite :(
0
0
5
@hanzhao_ml
Han Zhao
5 years
Big congrats and nice work!!!
@ropeharz
Robert Peharz
5 years
This paper has quite a history. When Martin started his PhD in 2015, and said he'd be interested in Bayesian structure learning in SPNs, my reaction was: interesting -- and challenging 🙂. But here we are, just a few years later 😄, and accepted at #NeurIPS2019 !
1
3
22
0
0
5
@hanzhao_ml
Han Zhao
4 years
@Ismail_Elezi To me simple yet effective is actually the end goal.
0
0
5
@hanzhao_ml
Han Zhao
2 years
@thegautamkamath @NeurIPSConf @icmlconf @iclr_conf It's unfortunate but I have to say it's so disgusting...
0
0
5
@hanzhao_ml
Han Zhao
5 years
Attending the ICML 2019 conference this week at Long Beach. Feel free to reach out if you want to have a chat!
1
0
5
@hanzhao_ml
Han Zhao
2 years
@Haoxiang__Wang @uiuc_aisecure Congrats!! Well-deserved :)
0
0
5
@hanzhao_ml
Han Zhao
5 years
Will be visiting @VectorInst and @UWaterloo in the following two weeks, any friends there to have a chat?
0
0
5
@hanzhao_ml
Han Zhao
5 years
@fchollet Unfortunately that's Chinese not Japanese lol
0
0
5
@hanzhao_ml
Han Zhao
2 years
@nanjiang_cs In my case the system successfully avoids most of my own bids : )
1
0
5
@hanzhao_ml
Han Zhao
3 years
Please feel free to stop by and say hi!
@jianfengchi
Jianfeng Chi
3 years
Check out our #ICML2021 paper "Understanding and Mitigating Accuracy Disparity in Regression" at Poster Session 6, room 7, C0 (9 PM — 11 PM PST, July 22th, Thursday) paper: Poster:
1
0
11
0
0
5
@hanzhao_ml
Han Zhao
5 years
In some cases computing partition functions is not that hard, e.g., in tractable models like Sum-Product Networks/Arithmetic Circuits
@lawrennd
Neil Lawrence
5 years
@ylecun @AvilaGarcez @KyleCranmer @frankdonaldwood If you constrain the systems you look at to not contain partition functions ... then you don't have to compute them ... but the statistical physics you aspire to is full of partition functions ...
0
0
2
1
0
5
@hanzhao_ml
Han Zhao
4 years
This is awesome! Thanks for sharing with the community!
@sangmichaelxie
Sang Michael Xie
4 years
WILDS collects real world distribution shifts to benchmark robust models! I’m particularly excited about the remote sensing datasets (PovertyMap and FMoW) - spatiotemporal shift is a real problem, and space/time shifts compound upon one another. Led by @PangWeiKoh @shiorisagawa
0
2
27
0
0
5
@hanzhao_ml
Han Zhao
2 years
Correction: the poster session has been changed to Thursday instead!
@hanzhao_ml
Han Zhao
2 years
If you're at #ICML2022 and interested in out-of-distribution generalization, please come to our poster session: Tuesday 6:30-8:00PM EDT, Hall E #537 paper, poster & talk: code: Joint work w/ @Haoxiang__Wang @uiuc_aisecure 👇
0
3
25
0
0
4
@hanzhao_ml
Han Zhao
3 years
@zacharylipton @risteski_a Thanks for pointing out my long term mistake!
0
0
4
@hanzhao_ml
Han Zhao
5 years
Please attend if you're @ #CSCW2019 !
@LuSun_Selena
Lu Sun
5 years
#CSCW2019 I will give my first conference talk to share our work on modeling social roles in online communities. This is a joint work with @Bob Kraut and @Diyi_Yang . It will be in the session crowds and collaboration, on Tuesday(Nov.12th) at 4:30pm in Room 415AB.
0
1
22
0
0
4
@hanzhao_ml
Han Zhao
10 months
One of the main hurdles in the wide application of adversarial methods for domain-invariant learning and fair representations is the difficulty in solving the min-max problem, especially under the non-convex-non-concave setting when it comes to NNs.
@davidinouye
David I. Inouye
10 months
Are adversarial losses the only generic method for distribution matching or domain-invariant learning? Towards Practical Non-Adversarial Distribution Alignment via Variational Bounds @Ziyu_Gong_Billy @b_usmn @hanzhao_ml
0
0
8
1
0
4
@hanzhao_ml
Han Zhao
1 year
@nanjiang_cs Last time (~4 years ago) in Shanghai my friend waited 45 mins to buy me a cup. How crazy!
0
0
4
@hanzhao_ml
Han Zhao
1 year
Will simple modality alignment through contrastive learning be sufficient, as investigated in a recent work by @james_y_zou et al. ()? In this paper, we provide a negative answer to the latter (provably),...
1
0
4
@hanzhao_ml
Han Zhao
4 years
Joint work w/ @KeyuluXu @LiaoPeiyuan @rsalakhu Stefanie Jegelka, Geoff Gordon and Tommi Jaakkola.
0
0
4
@hanzhao_ml
Han Zhao
5 years
CFP from 3rd Workshop on Tractable Probabilistic Modeling!
@tetraduzione
antonio vergari - hiring PhD students
5 years
3rd Workshop on Tractable Probabilistic Modeling #TPM19 will be held @ @icmlconf #icml2019 works on #tractable {models, #inference , probabilistic programming #PPL , #neural estimators} are very welcome! with @pmddomingos @dlowd @rahman_tahrima @alejom_ml
1
19
25
0
1
4
@hanzhao_ml
Han Zhao
4 years
Poster session now at , happy to chat!
@hanzhao_ml
Han Zhao
4 years
Link to the poster: !
0
1
8
0
0
4
@hanzhao_ml
Han Zhao
2 years
@EmtiyazKhan @peter_richtarik Hmm interesting -- this probably means the avg scores differ a lot between different tracks. In my batch (~15 papers) an avg score >= 5 makes it top ~15%.
0
0
4
@hanzhao_ml
Han Zhao
4 months
1. Private histogram density estimation to ensure DP
2. Wasserstein barycenter computation to obtain a common output distribution
3. Randomized remapping (post-processing) according to the optimal transport map in Step 2 to ensure SP
Tweet media one
1
0
2
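The three steps above can be sketched end-to-end in one dimension. This is a simplified illustration under assumed choices (Laplace-noised histograms on [0,1] for DP, quantile averaging for the 1-D barycenter, and a deterministic stand-in for the randomized remapping), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_histogram(scores, bins, epsilon):
    # Step 1: histogram density estimate with Laplace noise for epsilon-DP
    counts, edges = np.histogram(scores, bins=bins, range=(0.0, 1.0))
    noisy = np.clip(counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape), 0, None)
    return noisy / noisy.sum(), edges

def quantile_fn(probs, edges, q):
    # inverse CDF of the histogram distribution at quantile levels q
    cdf = np.cumsum(probs)
    idx = np.clip(np.searchsorted(cdf, q), 0, len(probs) - 1)
    return edges[idx]

# two demographic groups with different (synthetic) score distributions
scores = {"A": rng.beta(2, 5, 2000), "B": rng.beta(5, 2, 2000)}
hists = {g: dp_histogram(s, bins=50, epsilon=1.0) for g, s in scores.items()}
w = {g: len(s) / sum(len(t) for t in scores.values()) for g, s in scores.items()}

# Step 2: in 1-D the Wasserstein barycenter averages the groups' quantile functions
q = np.linspace(0.005, 0.995, 200)
bary_q = sum(w[g] * quantile_fn(*hists[g], q) for g in scores)

# Step 3: push each group's scores through T_g = F_bar^{-1} o F_g so every
# group shares the barycenter output distribution (statistical parity)
def remap(s, probs, edges):
    cdf = np.cumsum(probs)
    bin_idx = np.clip(np.searchsorted(edges, s) - 1, 0, len(probs) - 1)
    return np.interp(cdf[bin_idx], q, bary_q)

adjusted = {g: remap(scores[g], *hists[g]) for g in scores}
# after remapping, the group means should be approximately equal
print({g: round(float(a.mean()), 3) for g, a in adjusted.items()})
```

In higher dimensions the barycenter and transport maps require a real OT solver; the 1-D quantile trick is what makes this sketch fit in a few lines.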
@hanzhao_ml
Han Zhao
5 years
Actually reading the book now while retweeting
@THSEA
Town Hall Seattle
5 years
. @mkearnsupenn & @Aaroth invite us to a conversation about how we can better embed human principles into machine code—without halting the advance of data-driven scientific exploration. November 11. $5 tickets:
0
2
3
1
1
4
@hanzhao_ml
Han Zhao
1 year
@myamada0 I’ve spent almost two months this summer visiting OIST myself -- it has a vibrant community filled with passionate researchers, cutting-edge facilities, and an inspiring atmosphere that fosters creativity and collaboration, plus, the view at Okinawa is gorgeous :)
1
0
3
@hanzhao_ml
Han Zhao
3 years
@TaliaRinger @SaugataGhose @camPossible @charith_mendis Looking forward to meeting with you all soon as well :)!
0
0
3
@hanzhao_ml
Han Zhao
5 months
@myamada0 Precisely! That's also why I always tell my students to try different things for their internships.
0
0
3
@hanzhao_ml
Han Zhao
2 years
@gjzhang1 IMO most of the datasets within WILDS are not anti-causal, since anti-causal is often associated with label shift, i.e., the conditional X | Y = y is the same across domains, which does not seem to be true...
0
0
2
@hanzhao_ml
Han Zhao
3 years
@zacharylipton @DjokerNole Well, it's not clear that 2) is true (at least for now)
0
0
3
@hanzhao_ml
Han Zhao
8 months
@jasonhartford Likewise lol
1
0
3
@hanzhao_ml
Han Zhao
4 years
@BooleanAnalysis Yep exactly correct :)
0
0
3