Hadi Salman Profile
Hadi Salman

@hadisalmanX

4,904 Followers · 325 Following · 7 Media · 453 Statuses

Research Scientist @OpenAI . Previously: PhD @MIT @MSFTResearch @UberATG @SCSatCMU @AUB_Lebanon

San Francisco, CA
Joined July 2018
Pinned Tweet
@hadisalmanX
Hadi Salman
1 year
Life update: After a few amazing years doing my PhD @MIT , I am thrilled to share that I've joined @OpenAI ! Super excited for this chapter of my life and can't wait to help shape the future with the amazing talent at OpenAI!
39
24
803
@hadisalmanX
Hadi Salman
8 months
OpenAI is nothing without its people
12
33
598
@hadisalmanX
Hadi Salman
8 months
❤️
@sama
Sam Altman
8 months
i love the openai team so much
5K
4K
73K
5
8
157
@hadisalmanX
Hadi Salman
4 years
Super excited to announce that we have received the *best paper award* @iclr_conf workshop on trustworthy ML! Amazing work by @edwardjhu in collaboration w @adith387 & @TheGregYang ! Talk: Paper: Code:
@NicolasPapernot
Nicolas Papernot
4 years
Improved Wasserstein Attacks and Defenses by J. Edward Hu (Microsoft Research AI); Greg Yang (Microsoft Research AI); Adith Swaminathan (Microsoft Research); Hadi Salman (Microsoft Research)
1
2
27
4
28
133
@hadisalmanX
Hadi Salman
3 years
Watch me discuss *adversarial examples beyond security* at the @MLStreetTalk show! Thank you guys @ecsquendor @ykilcher @RisingSayak for the great effort and for having me on the show! It was a super fun and exciting discussion, and the production quality is really incredible!
@MLStreetTalk
Machine Learning Street Talk
3 years
This week we speak with @hadisalmanX from the @aleks_madry lab at @MIT about *un*-adversarial examples! Exploiting the brittleness of neural networks to make objects more recognisable i.e. robustness beyond security with @ecsquendor @ykilcher @RisingSayak
2
15
76
0
25
85
@hadisalmanX
Hadi Salman
4 years
Check out our new paper on adversarial robustness as a prior for better transfer learning! A great collab w/ @andrew_ilyas @logan_engstrom @akapoor_av8r @aleks_madry We open-source ~70 robust ImageNet models which you might find helpful!
@aleks_madry
Aleksander Madry
4 years
Does adv. robustness help transfer learning? W/ @hadisalmanX @andrew_ilyas @logan_engstrom @akapoor_av8r @MSFTresearch we show l2-robust ImageNet models transfer better, despite lower accuracy: . Blog: Code:
3
35
163
1
23
86
@hadisalmanX
Hadi Salman
4 years
1/4 Wanna get a *provably* robust classifier from your pretrained one? Simply stack our *custom trained denoiser* in front of your model and you're good to go! Paper: Code: w/ @Eric_jie_thu @TheGregYang @akapoor_av8r @zicokolter
1
19
70
@hadisalmanX
Hadi Salman
3 years
Check out *3DB*: our new tool for debugging computer vision models via 3D simulation! A year-long effort from our lab @MIT and @MSFTResearch . We have extensive demos, docs, code and blogpost!
@aleks_madry
Aleksander Madry
3 years
Introducing 3DB, a framework for debugging models using 3D rendering. Reproduce your favorite robustness analyses or design your own analyses/experiments in just a few lines of code! (1/3) Paper: Code: Blog:
3
35
105
0
17
62
@hadisalmanX
Hadi Salman
8 months
I will be giving a keynote at the #NeurIPS2023 MusIML workshop on December 11 at 10:35 am. Pass by if you would like to chat about anything!
@MuslimsinML
Muslims in ML Workshop
8 months
Excited to announce the keynote speakers for the Muslims in Machine Learning (MusIML) workshop at #NeurIPS2023 ! We have four fantastic speakers, covering topics in ML with important societal implications. Come join us on December 11! 🧵👇
9
27
166
0
2
55
@hadisalmanX
Hadi Salman
1 year
A @huggingface demo for our image immunization paper is out! You will be able to:
- edit your images
- check what immunizing your images would do to these edits!
w/ @Alaa_Khaddaj @gpoleclerc @andrew_ilyas @aleks_madry
@aleks_madry
Aleksander Madry
2 years
Last week on @TheDailyShow , @Trevornoah asked @OpenAI @miramurati a (v. important) Q: how can we safeguard against AI-powered photo editing for misinformation? My @MIT students hacked a way to "immunize" photos against edits: (1/8)
24
206
1K
1
18
54
@hadisalmanX
Hadi Salman
1 year
We will present our work on immunization against malicious AI-powered image manipulation at #ICML2023 this *Tuesday*! Swing by if you're attending, we'd be happy to chat with you. w/ @Alaa_Khaddaj @gpoleclerc @andrew_ilyas @aleks_madry
0
12
55
@hadisalmanX
Hadi Salman
8 months
We will be at #NeurIPS2023 ! Fill out the form below if you would like to meet and learn about our efforts @OpenAI .
@aleks_madry
Aleksander Madry
8 months
We're building several efforts at OpenAI: Preparedness, reliable AI deployment research, and AI security research. Up for chatting with us about these at NeurIPS? Fill out this form (by Dec 1):
26
54
470
3
1
48
@hadisalmanX
Hadi Salman
4 years
Come watch our #NeurIPS2020 oral presentation on how adversarial robustness improves transfer learning *today at 6:30PM PT*! Talk: Paper: Code: Blogpost:
@aleks_madry
Aleksander Madry
4 years
Our NeurIPS oral (w/ @hadisalmanX @andrew_ilyas @logan_engstrom @akapoor_av8r ) "Do Adversarially Robust Models Transfer Better" will be live-streamed today at 6:30PM PT, + a poster session at 9PM PT! Paper @ , and shorter 3-min vid @
0
7
52
1
14
48
@hadisalmanX
Hadi Salman
1 year
Blog post: Paper: Code: Huge thanks to @_akhaliq and @huggingface for support and providing the resources to host this demo!
0
4
40
@hadisalmanX
Hadi Salman
1 year
Excited to announce that our image immunization paper is accepted as an *Oral* at #ICML2023 ! 🔥 Come chat with us about it in Hawaii!
@aleks_madry
Aleksander Madry
2 years
Last week on @TheDailyShow , @Trevornoah asked @OpenAI @miramurati a (v. important) Q: how can we safeguard against AI-powered photo editing for misinformation? My @MIT students hacked a way to "immunize" photos against edits: (1/8)
24
206
1K
3
7
43
@hadisalmanX
Hadi Salman
5 years
Check out our updated #NeurIPS2019 spotlight paper! We boost our provable L2-robustness results on CIFAR10 via pre-training on #ImageNet . Our best provably L2-robust model gives SOTA provable linfty robustness at a radius of 2/255
@TheGregYang
Greg Yang
5 years
New SOTA on CIFAR10 for provable robustness for L2 and Linfty adversary: pretrain on imagenet w/ SmoothAdv then finetune on CIFAR10. Adding unlabeled data helps too Code: Follow mah boi @hadisalmanX who made all of this work!
1
12
39
1
7
41
@hadisalmanX
Hadi Salman
4 years
Join me tomorrow during the *live* poster session of @iclr_conf workshop on Trustworthy ML () if you wanna learn more about our work on effective randomized smoothing for pretrained classifiers. Live: Sunday 1-3 PM ET #ICLR2020
@hadisalmanX
Hadi Salman
4 years
1/4 Wanna get a *provably* robust classifier from your pretrained one? Simply stack our *custom trained denoiser* in front of your model and you're good to go! Paper: Code: w/ @Eric_jie_thu @TheGregYang @akapoor_av8r @zicokolter
1
19
70
1
9
39
@hadisalmanX
Hadi Salman
5 years
[Image]
1
3
31
@hadisalmanX
Hadi Salman
8 months
🔥🔥
@OpenAI
OpenAI
8 months
We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo. We are collaborating to figure out the details. Thank you so much for your patience through this.
6K
13K
67K
2
0
30
@hadisalmanX
Hadi Salman
4 years
Check out this @MSFTResearch blog post on our recent work on improving transfer learning!
@MSFTResearch
Microsoft Research
4 years
With little training data or compute, transfer learning is a simple way to obtain performant ML models. Learn how researchers at @MSFTresearch & @MIT found adversarially robust ML models can improve transfer learning on downstream computer vision tasks:
13
213
1K
1
2
27
@hadisalmanX
Hadi Salman
5 years
[1/6] How tight can convex-relaxed robustness verification for neural networks be in practice? We thoroughly investigate this in our new paper! In collaboration w/ @TheGregYang , Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. Special thanks to @ilyaraz2 !
@TheGregYang
Greg Yang
5 years
[1/4] Everybody knows adversarial examples are a problem, and a lot of people tried to provably verify NN robustness. But seems convex relaxation alone runs into a theoretical and empirical barrier --- not tight enough! See our new paper
3
14
57
2
5
25
@hadisalmanX
Hadi Salman
5 years
I am very excited to share our recent provable defense for image classifiers achieving state-of-the-art L2 #provable_adversarial_robustness on #ImageNet ! w/ @TheGregYang @jerryzli @ilyaraz2 @SebastienBubeck Huan Zhang & Pengchuan Zhang @MSFTResearch
@TheGregYang
Greg Yang
5 years
1/ SOTA L2 #ProvableRobustness by adversarially training a neural network convolved with Gaussian noise! paper: code: blog: w/ 💪💖 @hadisalmanX 💖💪 @jerryzli @ilyaraz2 @SebastienBubeck Huan & Pengchuan Zhang
1
22
68
1
2
25
@hadisalmanX
Hadi Salman
2 years
Very excited that our latest work is featured on @Gizmodo ! We demonstrate the feasibility of *immunizing* photos against manipulation by #StableDiffusion . Blog post: Code: w/ @Alaa_Khaddaj @gpoleclerc @andrew_ilyas @aleks_madry
@Gizmodo
Gizmodo
2 years
Who Is Working to End the Threat of AI-Generated Deepfakes, and Why Is It So Difficult?
0
3
14
3
8
24
@hadisalmanX
Hadi Salman
5 years
Are you at #NeurIPS2019 ? Come to my spotlight talk and posters to learn about my work on adversarial robustness! All happening on *Thursday*.
Spotlight: 10:20 AM @ West Exh Hall A
Poster 1: 10:45 AM - 12:45 PM @ East Exh Hall B + C #24
Poster 2: 5-7 PM @ East Exh Hall B + C #152
@hadisalmanX
Hadi Salman
5 years
I am very excited to share that my first two submissions to #NeurIPS got accepted, with one spotlight! (spotlight)
0
4
64
1
5
22
@hadisalmanX
Hadi Salman
3 years
Check out our latest work! We present *Smoothed ViTs* with remarkable certified robustness to adv. patches. We get std. accuracies & inference speeds comparable to non-robust models! Paper: Blog post: Code:
@aleks_madry
Aleksander Madry
3 years
Does certified (patch) robustness need to come at a steep std. accuracy/runtime cost? No, if you leverage ViTs. (And you get better robustness too!) W/ @hadisalmanX , @saachi_jain_ , and @RICEric22 . & Paper:
0
23
65
0
8
21
@hadisalmanX
Hadi Salman
3 years
Today at 6 pm ET, I will talk about our recent work on *Smoothed Vision Transformers* at the ATVA 2021 workshop on Security and Reliability of ML. Join if you are interested in learning about recent advances in certified patch defenses. Zoom link here:
@huan_zhang12
Huan Zhang
3 years
Join us for the ATVA 2021 Workshop on Security and Reliability of Machine Learning (SRML) on Oct 18! Two keynote talks given by David Wagner and @zicokolter + 2 panels + 10 invited talks. See our website for the Zoom link for joining and detailed schedules
3
7
29
0
2
17
@hadisalmanX
Hadi Salman
1 year
If you are at #CVPR2023 , pass by our poster! We would be happy to chat!
@saachi_jain_
Saachi Jain
1 year
Excited to be in Vancouver for #CVPR2023 ! @hadisalmanX and I will be presenting our poster on a data-based perspective on transfer learning on Tuesday (10:30-12). If you're around, drop by and say hi!
0
10
90
0
1
14
@hadisalmanX
Hadi Salman
5 years
Come talk to us @ICLR2019 's SafeML workshop this Monday to learn more about our recent work on convex-relaxed robustness verification for neural networks! Also, check out our new repo accompanying this work! @TheGregYang
@TheGregYang
Greg Yang
5 years
Where would your NN robustness verification algo lie in this plot of the current frontiers? Now you can measure against our convex relaxation barrier explicitly, via our new repo . Talk to us at @ICLR2019 SafeML, Monday 10:30am/4pm, Room R06! @hadisalman94
0
7
25
0
2
14
@hadisalmanX
Hadi Salman
2 years
At #NeurIPS2022 ? Come talk to us about *3DB*:
- Tuesday 11am-1pm: Hall J Poster #1042
- Wednesday 2-2:30pm: @Microsoft 's booth
I will be there with @gpoleclerc @andrew_ilyas @saihv @logan_engstrom , and we would love to chat with you!
0
2
13
@hadisalmanX
Hadi Salman
5 years
Happy to announce that our recent paper "A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks" () is accepted to the SafeML workshop @iclr2019 ! In collaboration w/ @TheGregYang , Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang
@hadisalmanX
Hadi Salman
5 years
[1/6] How tight can convex-relaxed robustness verification for neural networks be in practice? We thoroughly investigate this in our new paper! In collaboration w/ @TheGregYang , Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. Special thanks to @ilyaraz2 !
2
5
25
0
2
10
@hadisalmanX
Hadi Salman
2 years
Check out our new ICLR 2022 paper! We show how vision transformers can avoid the "missingness bias" which usually occurs when one removes part of an image in tasks such as model debugging. Come chat with us at our poster next week, Wednesday 27th at 1:30-3:30pm ET!
@aleks_madry
Aleksander Madry
2 years
What's the right way to remove part of an image? We show that typical strategies distort model predictions and introduce bias when debugging models. Good news: leveraging ViTs enables a way to side-step this bias. Paper: Blog post:
3
16
50
0
0
10
@hadisalmanX
Hadi Salman
4 years
Check out another paper of ours (Randomized Smoothing of All Shapes & Sizes) in the @iclr_conf workshop on trustworthy ML!
@TheGregYang
Greg Yang
4 years
@tonyduan_ @edwardjhu & I will present Randomized Smoothing of All Shapes & Sizes (the #WulffCrystal paper) at ICLR Trustworthy ML workshop! Recording: Poster at 1pm ET! paper: code:
1
1
11
0
0
11
@hadisalmanX
Hadi Salman
1 year
Very exciting with great co-founders! Good luck @akapoor_av8r , @saihv , and team!
@ScaFoAI
Scaled Foundations
1 year
Hello world! We are Scaled Foundations, co-founded by @akapoor_av8r , @saihv , @shuhang0chen , @dnaraya - focusing on building safe and deployable General Robot Intelligence. We'll share official announcements and developments through this handle. Stay tuned! #AI #Robotics
0
5
17
0
0
10
@hadisalmanX
Hadi Salman
2 years
🔥Really cool article by @benjedwards demonstrating the serious implications current generative models can have on our lives. It also highlights our recent work *PhotoGuard* that attempts to solve the photo-editing aspect of this
@arstechnica
Ars Technica
2 years
This is John. He doesn't exist. But AI can easily put a photo of him in any situation we want—and the same process can apply to real people with just a few real photos pulled from social media:
1
50
83
0
1
9
@hadisalmanX
Hadi Salman
4 years
4/4 This approach applies both to the case where one has full access to the pretrained classifier (e.g. API service providers) and to the case where one only has query access (e.g. API users).
0
0
8
@hadisalmanX
Hadi Salman
4 years
Thanks @NicolasPapernot @florian_tramer @carmelatroncoso Nicholas Carlini @ShibaniSan for the great efforts to make this workshop super nice!
0
0
8
@hadisalmanX
Hadi Salman
4 years
3/4 Our defense is simple. By prepending a custom trained denoiser to any off-the-shelf image classifier and using randomized smoothing, we effectively create a new classifier that is guaranteed to be Lp-robust to adversarial examples, without modifying the pretrained classifier.
1
0
8
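For readers unfamiliar with the defense sketched in this thread, here is a rough, hypothetical illustration of the idea (prepend a denoiser, then classify by majority vote over Gaussian-noised copies of the input). It is not the paper's released code; `denoiser`, `classifier`, `sigma`, `n_samples`, and `n_classes` are placeholder names and values.

```python
# Minimal sketch of the "denoiser + randomized smoothing" idea described in the
# thread above. Assumes pretrained torch.nn.Module objects `denoiser` and
# `classifier`; this is NOT the paper's released implementation.
import torch

def smoothed_predict(denoiser, classifier, x, sigma=0.25, n_samples=100, n_classes=1000):
    """Majority-vote prediction for a single image x of shape (C, H, W)."""
    votes = torch.zeros(n_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)            # add isotropic Gaussian noise
            logits = classifier(denoiser(noisy.unsqueeze(0)))  # denoise, then classify
            votes[logits.argmax(dim=1)] += 1                   # tally the predicted class
    return int(votes.argmax())
```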
@hadisalmanX
Hadi Salman
2 years
If you are attending CVPR and would like to learn about our work on certified patch defenses, pass by our poster ( #178 ) this Thursday 2:30-5pm CDT in Hall B2-C! @saachi_jain_ @RICEric22 and I will be there!
@hadisalmanX
Hadi Salman
3 years
Check out our latest work! We present *Smoothed ViTs* with remarkable certified robustness to adv. patches. We get std. accuracies & inference speeds comparable to non-robust models! Paper: Blog post: Code:
0
8
21
0
5
7
@hadisalmanX
Hadi Salman
4 years
2/4 We refer to our defense as *black-box smoothing*, and we demonstrate its effectiveness through extensive experiments on ImageNet and CIFAR-10. We also convert the @Azure , @googlecloud , @awscloud , and @clarifai vision APIs into *provably* robust ones! Try this using our code!
1
0
7
@hadisalmanX
Hadi Salman
5 years
We'd like to give a major shoutout to @deepcohen , Elan Rosenfeld and @zicokolter for building a solid foundation of scalable randomized smoothing and open sourcing this foundation so that we can iterate on top of it, leading to this work today!
0
0
6
@hadisalmanX
Hadi Salman
4 years
Also you can watch our pre-recorded presentation streamed tomorrow Sunday at 6:20 PM ET here
1
0
5
@hadisalmanX
Hadi Salman
5 years
@thegautamkamath @TheGregYang You shouldn't rush announcing the award... Expect to see two posters tomorrow, each the same size as this :p ( @TheGregYang convinced me to print 8ft by 4ft and apparently it doesn't fit...)
1
0
4
@hadisalmanX
Hadi Salman
5 years
[2/6] We unify all existing LP-relaxed verifiers under a general convex relaxation framework.
1
0
1
@hadisalmanX
Hadi Salman
4 years
@sajjad_abdoli @MSFTResearch Thanks Sajjad! This is a great question. I think some level of robustness is retained depending on how you perform the fine-tuning (fixed-feature vs. full network). I believe this paper studies this in more detail. We don't verify this in our paper though.
0
0
2
@hadisalmanX
Hadi Salman
5 years
@mmirman Good question! The confidence level we use for randomized smoothing is so high (with 99.9% chance, a specific certified example is actually robust) that it doesn't really matter practically. One can always rerun certification with an even higher confidence level.
1
1
2
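For context on the confidence level mentioned in this reply: the certification step in standard randomized smoothing (Cohen et al.) typically computes a one-sided Clopper-Pearson lower bound on the top-class probability under noise. A small sketch follows, assuming statsmodels is available; it is not the code referenced in the thread.

```python
# Sketch: one-sided Clopper-Pearson lower bound used when certifying a smoothed
# classifier. With probability >= 1 - alpha, the true probability of the top
# class under Gaussian noise is at least the returned bound; the certified
# radius is then derived from this bound.
from statsmodels.stats.proportion import proportion_confint

def lower_confidence_bound(n_top: int, n_total: int, alpha: float = 0.001) -> float:
    return proportion_confint(n_top, n_total, alpha=2 * alpha, method="beta")[0]

# Example: 990 of 1000 noisy samples voted for the top class.
p_lower = lower_confidence_bound(990, 1000)
```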
@hadisalmanX
Hadi Salman
5 years
[5/6] Our results suggest there is an inherent barrier to tight robustness verification for the large class of methods captured by our framework.
1
0
1
@hadisalmanX
Hadi Salman
5 years
[6/6] Finally, we discuss possible causes of this barrier and potential future directions for bypassing it.
1
0
1
@hadisalmanX
Hadi Salman
5 years
@SebastienBubeck @MSFTResearch @Trevornoah I was casually walking in B99's atrium and saw him in front of the building!
1
0
1
@hadisalmanX
Hadi Salman
5 years
[4/6] We find that the exact solution does not significantly narrow the gap between exact verifiers and existing relaxed verifiers for various networks trained normally or robustly on the MNIST and CIFAR-10 datasets.
1
0
1
@hadisalmanX
Hadi Salman
4 years
@sh_reya @aleks_madry @andrew_ilyas @logan_engstrom @akapoor_av8r @MSFTResearch The total number of datapoints that we train on in a given epoch is fixed; for each image from the training set, we find *one* adversarial example per epoch, and we train only on that (we don't train on the "clean" datapoint). Hope this clarifies it!
1
0
2
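A rough sketch of the per-epoch scheme described in this reply, assuming a standard PyTorch setup (`model`, `criterion`, `optimizer`, `train_loader`) and a hypothetical attack helper `pgd_attack`; this is not the authors' released training code.

```python
# One epoch of the adversarial training described above: every image is
# replaced by exactly one adversarial example and only that copy is trained on,
# so the number of parameter updates matches standard training.
# `pgd_attack(model, x, y)` is a hypothetical helper returning adversarial inputs.
for x, y in train_loader:
    x_adv = pgd_attack(model, x, y)    # one adversarial example per image, per epoch
    loss = criterion(model(x_adv), y)  # no loss term on the clean datapoint
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```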
@hadisalmanX
Hadi Salman
5 years
[3/6] We perform extensive experiments, amounting to more than 22 CPU-years, to obtain the exact solution to the convex-relaxed problem that is optimal within our framework for ReLU networks.
1
0
1
@hadisalmanX
Hadi Salman
3 years
@RisingSayak @ecsquendor @ykilcher Thanks Sayak! It was a great pleasure coming on the show, and I really enjoyed it too!
0
0
1
@hadisalmanX
Hadi Salman
4 years
@jasondeanlee @PreetumNakkiran @aleks_madry @andrew_ilyas @logan_engstrom @akapoor_av8r @MSFTResearch In general, we view robust loss as a way to just enforce a prior (local stability), which seems analogous to what input diversity provides. 2/3
1
0
1
@hadisalmanX
Hadi Salman
1 year
@natanielruizg Thanks Nataniel!
0
0
1
@hadisalmanX
Hadi Salman
4 years
@jasondeanlee @PreetumNakkiran @aleks_madry @andrew_ilyas @logan_engstrom @akapoor_av8r @MSFTResearch There is some evidence, though, that this is not just a "data augmentation effect" in the classical sense, because the natural accuracy of the robust classifier is worse than the standard one's (see also @logan_engstrom 's correction to the above tweet, which I think misreads the graph a bit). 3/3
0
0
1
@hadisalmanX
Hadi Salman
4 years
@LucaAmb @andrew_ilyas @logan_engstrom @akapoor_av8r @aleks_madry Thanks @LucaAmb ! I don't really have an intuition regarding that; I guess because the target dataset usually has a different # of classes than the source dataset, analyzing decision boundaries of the source→target models becomes tricky here?
1
0
1
@hadisalmanX
Hadi Salman
3 years
@ecsquendor Thanks Tim for having me on the show! It was a real pleasure. You are putting fantastic effort into this podcast, with great content and *exceptional production quality*. Great job!
0
0
1
@hadisalmanX
Hadi Salman
2 years
@BlackHC @aleks_madry @TheDailyShow @Trevornoah @OpenAI @miramurati @MIT Actually, the adversarial perturbation is part of the head too, so removing everything but the head (which is already being done by the stable-diffusion model we used to generate these images) won't affect anything!
1
0
1
@hadisalmanX
Hadi Salman
5 years
@ilyaraz2 😂😂
1
0
0
@hadisalmanX
Hadi Salman
4 years
@unsorsodicorda @aleks_madry @andrew_ilyas @logan_engstrom @saihv @akapoor_av8r Thanks Andrea! Oh we had some trouble uploading yesterday, it should be on arXiv tonight!
0
0
1
@hadisalmanX
Hadi Salman
4 years
@sh_reya @aleks_madry @andrew_ilyas @logan_engstrom @akapoor_av8r @MSFTResearch Thanks @sh_reya ! In order to have fair comparisons, we train both standard and robust ImageNet models using the same set of hyperparams (including the number of parameter update steps). The only difference is the objective we optimize for.
2
0
1
@hadisalmanX
Hadi Salman
2 years
@Vertabia @Gizmodo @Alaa_Khaddaj @gpoleclerc @andrew_ilyas @aleks_madry Yeah this would work for that too! Basically, immunizing any photo makes it not "recognizable" by the generative model. And as you pointed out in the 🧵, there will always be an arms race, but the hope is that this will be solved if companies providing these models get in the game.
0
0
1
@hadisalmanX
Hadi Salman
2 years
@randall_balestr @aleks_madry @saachi_jain_ @andrew_ilyas @logan_engstrom @RICEric22 Thanks! Indeed, examining biases caused by data-augmentation techniques is a great focus for studying bias transfer.
0
0
1
@hadisalmanX
Hadi Salman
5 years
@mmirman @TheGregYang @deepcohen We clearly state in the paper that the certification method we use ( @deepcohen 's) gives high-probability results (Sections 2 and 4). We will add a note to Table 3 as well, clearly stating that ours and Carmon et al.'s results hold with high probability.
1
0
1
@hadisalmanX
Hadi Salman
4 years
@LucaAmb @andrew_ilyas @logan_engstrom @akapoor_av8r @aleks_madry The intuition that we have is that enforcing robustness as a prior leads to "nice" robust features which we believe are more consistent/useful across datasets than "non-robust feats". Verifying this precisely/quantitatively is an interesting avenue for future work!
0
0
1