🤔When an algorithm causes harm, is discontinuing it enough to address its harms?
💡Our #FAccT2022 paper introduces the concept of _The Algorithmic Imprint_ to show how algorithmic harms can persist long after the algorithm is discontinued
📜 A tweetorial👇
1/n
Words fail to capture the feeling of solidarity the #YallMasking movement has given me.
For the first time during this pandemic, I have felt solidarity.
For many of us still holding down the fort, it's been a lonely ride.
It's a stark reminder of the strength in numbers.
Is hallucination in LLMs inevitable even with an idealized model architecture and perfect training data?
This work argues YES and offers a formal proof.
Let's dig in ⤵
🧵1/n
🎉 Thrilled to share that I'll be starting my TT faculty journey as an Assistant Professor in Computer Science at @Northeastern's @KhouryCollege of Computing in Fall'25!
I'll also serve as the faculty lead for the Human-centered Computing Initiative in Responsible Governance &
A key sign of how broken CS/AI/ML #PhD programs are is when there is an *unspoken* requirement that, if you want to be competitive, you need to have published already.
I mentor PhD applicants, esp. from the Global South/Majority World.
One super talented applicant asked me:
We ran the study in 2018.
Results were fascinating.
Wrote the paper in 2019.
Then peer review started sh*tting on it.
More times than I care to count.
Testament to the quality of the work that a study from 2018 in XAI gets accepted to #CHI2024.
Good science has a long shelf life.
🚨 Are we evaluating qualitative studies correctly in HCI?
This paper "H is for Human and How (Not) To Evaluate Qualitative Research in HCI" argues we're using the wrong lens, potentially stifling valuable insights.
Let's unpack why and how to fix it.
🧵[1/7]
When I applied to PhD programs, I wish I knew the "person over topic" principle.
That is, given a chance to work with a generous, kind, and supportive advisor, work with them even if there isn't a "perfect" topic match.
Topics can be changed. Personality traits, not so much.
CS Theory prof during an on-site interview: so Upol, why do you think HCI is part of CS?
Me: I'd argue you cannot have CS without HCI
Him: How?
Me: Name one computing system on this planet, or even in space, that works in a vacuum without human interaction
Him: (ponders for a few
@AcademicChatter
Getting a quick win is more important than getting a big win. E.g., a third-author paper relatively quickly is better than waiting years for your first first-author paper. Having a win sooner rather than later builds credibility w/ others. You leverage that to move on to better things.
Mentee: "I want to write a paper for CHI."
Me: "Awesome! Have you read any CHI papers?"
Mentee: "Yep, I’ve read a ton!"
Me: "Nice! Do you know what the form that reviewers fill out looks like—like, what they actually grade you on?"
Mentee: "Uhh... no, not really."
Me: "No
One thing I wish I knew the day I started my PhD in CS/AI:
Reading about the history of technology often leads to novel ideas about the future.
Ignorance of the history often leads to sloppy research.
Starting point? Read Science & Technology Studies (STS) literature.
Foolproof icebreaker at academic conferences:
What's your recent work I should read or cite?
I have used it for 9 years. Never failed. People love talking about their work. This makes the conversation about them (not you) and creates space for a deep dive.
#CHI2024
🎯 Explainable AI suffers from an epidemic. I call it Explainability Washing.
💡Think of it as window dressing—techniques, tools, or processes created to provide the illusion of explainability without delivering it
Let’s use this example
#AI
#ML
#XAI
1/7
🤯 Every Explainable AI (XAI) system has a silent killer
⚡️ It’s called the sociotechnical gap
🧐 How do you detect & diagnose it?
🔥 Our #CSCW2023 paper “Charting the Sociotechnical Gap in XAI” gives you a framework to do it
📰
A Tweetorial ⤵️
1/n
🚨 New pre-print alert! 🚨
Excited to share “The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations”
w/ the amazing team: @samirpassi, @QVeraLiao, Larry Chan, Ethan Lee, @michael_muller, @mark_riedl
🔗
💡Findings at a glance...
1/n
I love CHI.
I love Eid.
Just don't love it when the two coincide.
For three straight years.
Especially when Eid moves 10-12 days every year.
Imagine traveling on Xmas day to catch CHI. Many observing Eid just had to do that today.
#chi2023
#EidMubarak
#CHImubarak
After reading #chi2023 reviews, I'm reminded that
- this is a marathon, not a sprint.
- luck matters: landing on the right 1AC matters more than we'd like to admit
- death by subcommittee is real
- academic peer-reviewing has high entropy
#AcademicTwitter
#phdvoice
Rumors are true! I am on the job market 🚀
As a first-gen PhD, this is simultaneously daunting & exciting 💯
🎯 Top consideration: finding an academic home that values my flavor of XAI and RAI with supportive colleagues.
💌 Pls repost & help me reach beyond the filter bubble
Did you know that Upol Ehsan (@UpolEhsan) is on the faculty job market? He's an amazing researcher in Explainable AI (XAI) and Responsible AI (RAI). You want to hire him.
Keep an eye out for his application materials, or reach out if you don't see it.
May future generations of CS scholars never have to justify that HCI is CS work.
May they never have to justify why qualitative work can be rigorous. If they do, may the burden be symmetric on the quant scholars.
I will do my bit in this group project. Hope others will join.
Thrilled to share that my #CHI2023 paper has been brutally rejected & that we're submitting it elsewhere🔥✌️👑🥳
CHI's loss will be some other conference's gain soon 🥲
Kudos to everyone playing the reviewer roulette 🫡
Congrats to those whose tributes were accepted! 🏆
As I finalize #chi2023 reviews, I'm trying to embody what my mentors taught me
- was I kind & constructive (not an a-hole) in my rigorous feedback?
- did I fulfill my job to be "selective" not "rejective"?
- have I given actionable feedback to make the paper better?
"Isn't the PhD supposed to be the training to learn how to publish & be a good scientist?"
I wish I had a better answer than the one I gave her.
#academia
#academicChatter
#AcademicTwitter
About to embark on the first of N(>=6) on-site job talk visits!
Wish me good luck and good health.
Here's to productive discussions & building new relationships.
No matter how this search ends, I know I'll be better off for it.
Best of luck to everyone on this journey!
Many of you couldn't join us at the #HCXAI workshop at #CHI2021. We received tons of requests to make the videos available online.
We always want to broaden participation.
This is for you.
🎁
I'm really enjoying reviewing in-depth 20+ page papers at #chi2023. Without a strict page limit, some of the qualitative work gets the required legroom to take the analysis to a depth I haven't seen before. Moreover, the paper feels more self-contained than part of a trilogy 😅
Pro reviewing tips from my mentors that you can't find in an official guide (relevant for #CHI2024 reviews):
1) Use "paper-first" language. We evaluate *this version* of the paper, not the authors. E.g., opt for "the paper does not address X" vs "the authors did not do X".
🥳OMG we're back again!
⚡️Submit your best work to the *THIRD* Human-centered Explainable AI (#HCXAI) workshop at #CHI2023
🤔Frustrated that human factors are ignored in Explainable #AI? Let's do something about it!
🎯Deadline: Feb 26, 2023
💡
Pls RT!
1/4
🎉Status update: accepted at #CHI2024!
While the process was demoralizing at times, the friendship with my co-authors forged through the trenches was worth the struggle.
Learn the surprising ways in which people with (or without) an AI background interpret AI explanations⤵️
After five back-to-back paper rejections, finally an acceptance, a strong one at that!
This Reviewer's comment made my day. They didn't have to say it, but they did.
Kindness, indeed, is radical.
Today is a good day; cherishing it.
#AcademicChatter
@PhDVoice
@OpenAcademics
🎉 Excited to be the invited guest at @HarvardHci for their "Tea with Interesting People" series on Nov 8
🌟Eager to discuss two topics close to my heart:
(1) Human-centered Explainable AI
(2) Algorithmic (In)justice
What excites me most is the innovative format where...
1/4
Toxicity was a major problem for GPT-3. If you've used ChatGPT, you may have noticed that it's relatively decent at avoiding toxicity. Is it because of some magical AI trick?
Not really. So what was it? 👇
#ChatGPT
#AI
#ghostwork
#ResponsibleAI
#ML
#OpenAIChatGPT
1/n
🎯Care about Explainable #AI (#XAI) & Responsible AI?
🎁Want to join a *PAID* interview study with experts from @MSFTResearch & @gtcomputing to explore new ways to make AI systems explainable & increase user empowerment?
Sign-up:
Please RT & share!
1/3
AI/HCI/CS/STS friends: what are good papers that problematize the notion of a "ground truth" in AI?
Self-plugs are always welcomed!
Pls repost to break the filter bubble and get more POVs.
🎉 Congrats to all who successfully submitted their CHI papers! May the odds be in your favor.
I opted out of this cycle for many reasons. One of them was my sanity.
But most importantly, I don't think I was able to do justice to the work. The diamond needs more polishing to
Traditionally in ML, building models is the central activity and evaluation is a bit of an afterthought. But the story of ML over the last decade is that models are more general-purpose and more capable. General purpose means you build once but have to evaluate everywhere.
PhD admits: got a campus visit coming up? Or a chance to talk to current PhD students? Here's a hack that I wish I knew when I started:
Try to get a sense of how the program treats the students who are struggling. The programs will advertise their superstars, but...
1/n
Implications for algorithm deployers:
🎯Critically ask: does the problem really *need* an algorithmic intervention?
💯Despite being made out of “soft”ware, algorithms can leave hard imprints on society
💡That is, there’s no simple “undo” button for algorithmic deployments
3/n
Other scientists: this is a really complex concept, how are you going to explain this to your study participants?
Me (explaining the concept at the study): have you seen this meme?
Participant: Say no more.
@PhDVoice
@AcademicChatter
@OpenAcademics
@r2mustbestopped
The #1 mistake I see in Explainable AI papers: adding a user study to your XAI project and claiming it's Human-centered XAI (HCXAI).
There needs to be an argument re: how the work centers human needs for #AI explainability for it to be HCXAI. Slapping on a user study won't cut it
After four years of reading peer reviewed research to try to get a handle on facts instead of propaganda, I really have too much to say about the #PodSaveJon situation for a single thread.
So, I want to talk about dads. 🧵
Academics: liked reading a paper?
Email the author(s). Thank them. Takes <2 mins and makes someone else's day while making you feel you did something good.
I can attest both as a receiver and sender.
We often only take time to criticize.
We should also do it to appreciate.
🎁 This episode of the paper drop covers my favorite part-- the humans behind the work.
💡 Open any paper, what's right after the title? The authors.
🤔 Isn't it odd that we seldom talk about them in paper explainers?
🔥 Time to change that. I'll start
#academicTwitter
1/n
The most helpful #writing advice I ever got was:
- The more complex the thought, the simpler the sentence should be.
- One thought/idea, one sentence.
It came from a Pulitzer Prize winner.
#AcademicChatter
@PhDVoice
#writingtips
Unpopular opinion:
The *first* thing a new PhD student can do is to become a *second* author on a paper ASAP, esp if you don't have prior pub experience.
This approach can produce that vital early win, teach you the unwritten rules of the game, & show you *how* to collaborate
CS Profs: help out new faculty members. What are the mistakes you made when it came to PhD student recruitment when you started as a prof?
Please repost and help this get out of my filter bubble.
🚨 Thrilled to be a co-editor of a Special Issue on Human-centered Explainable AI (#HCXAI) at ACM TiiS.
🎯 It’s on making XAI more inclusive, actionable, & responsible.
🗓️ Deadline: Feb 21, 2022
🔗
Please RT🔊!
🤔 What’re we looking for?👇
(1/4)
🎉 Thrilled to give a talk at @UniofOxford's Responsible Technology Institute on "The Algorithmic Imprint"
🎯 Join us for a deep dive on how algorithmic harms can persist even after the algorithm is destroyed.
🗓️ 3pm UK, Oct 13, 2022
🎟️
@datasociety
@mlatgt
1/3
As DJ Khaled would say: "ANOTHER ONE!" ☝️
🚀 The Human-centered Explainable AI workshop is back at #CHI2024!
🎯 LLMs, LLMs, everywhere. What does it mean for Explainable AI & humans?
📍Submit to #HCXAI & find your community!
🔗 hcxai.jimdosite(.)com
Pls RT. Pro tips ⤵️
1/3
Started my #CHI2024 talk by taking a group selfie with the audience.
Thoughtful audience questions + amazing session chair (
@Jay4w
) are the best gifts one can expect from conf talks.
Care about Explainable AI?
Want to learn about how people with and without an AI background interpret AI explanations? Curious about a new type of harmful effect in XAI?
Where : 313B
When: 12-12:15 pm
You'll get exclusive access to behind-the-scenes details on how this
I did something weird for #chi2024.
I maximized attending non-LLM talks.
I thought I'd regret this, but I had a great time.
But this was harder than expected given the LLM-fication of HCI these days.
Don't get me wrong: HCI of LLMs is crucial to make LLMs usable.
But not every
While writing a paper, I was getting frustrated because I couldn't find the right words to capture a complex thought.
I reminded myself that it's okay to struggle finding words in your 4th language.
I know there are others like me. We need to be kind to ourselves
@PhDVoice
Using the vulnerable, underpaid, overworked RAs as human shields during a pandemic is a reflection of the school's character. The world is watching. Good luck on recruiting. Any deaths will be on your hands. I hope the fees and profits are worth it.
Georgia Tech is using RAs to deliver meals to quarantining students.
Underpaid and overworked RAs are on the front lines of Georgia Tech’s covid response.
GT is conscripting students to fight the outbreak which GT is causing.
#JacketsProtectJackets
#JacketsInfectJackets
🌟My summer highlight at @MSFTResearch has been working w/ the amazing @QVeraLiao, @haldaume3, & @samirpassi
🔥 From difficult conceptual transfers to 40+ interviews, we tackled really hard topics in #XAI & had a hell of a good time doing it!
As a researcher...
1/2
🎉Super excited to welcome & host @QVeraLiao at Georgia Tech for her talk today on #HCXAI, my favorite topic🔥
🎁 She is a cherished & generous mentor of mine and such a joy to collaborate with!
GT peeps, come to the talk & give Vera a Yellow Jackets welcome!
@gvucenter
@ICatGT
My nightmare: we end up creating "Excusable AI" instead of Explainable AI by over-focusing on algorithms.
To prevent this nightmare, we must center our #XAI efforts on the human.
Explainability is a human factor. It's time we treat it as such when making #AI explainable.
I'm starting to see a trend of people using "radical candor" as a disguise for being a "total jerk".
Friendly reminder: radical candor goes hand in hand w/ radical (not ruinous) kindness. Without the kindness, you haven't created the space for effective candor.
#leadership
📢 The videos from the 2nd Human-centered Explainable AI (#HCXAI) workshop at #CHI2022 are up!
🎁If you couldn't join us or just want to relive those amazing discussions, you're in for a treat!
💌 Please RT/share. Help us make #XAI more human-centered.
1/2
Hey #CHI2024 folks, if you are looking for a *free* transcript-generating platform that works really well (way better + faster than YouTube), try Riverside.
📢 Paper decisions are out for the Human-centered Explainable AI (#HCXAI) workshop at #CHI2022
🔥 We received an overwhelming number of submissions this year, which means the world to us. THANK YOU!
🎯 29 papers were accepted. Congrats to the teams!
#HCI
#AI
#XAI
1/4
📢BREAKING: You can attend the *hybrid* Human-centered Explainable AI (#HCXAI) workshop without an accepted paper!
💡Spots are extremely limited so fill this out ASAP:
Why are we opening things up?
💡Over the last 4 years, practitioners & policymakers
🙏 Humble request to PIs/authors with multiple #CHI2023 acceptances:
What are your heuristics when it comes to subcommittee selection? What strategies have worked best? What have failed?
Many of us who come from non-native CHI/HCI labs would love to learn about this.
#HCI
We academics need to chill.
Stop policing folks for not writing "conditionally" when sharing #chi2024 good news on social.
This is a subtle academic toxic trait.
Partake in others' joy. Your joy will increase.
Help with others' pain. The collective pain decreases.
Haven't seen family in 2+ years. About to embark on the first international trip in the pandemic. If there's a time for the vaccine booster to work, that'd be now.
#travel
Unpopular Opinion:
Almost all academic papers do a decent job of describing *what* they found, many do pretty well describing *how* they did it, very few do a stellar job of explaining *why* they are doing it and *why* the community needs it.
@PhDVoice
@OpenAcademics
#AcademicTwitter
#research
What are the latest and best papers for Algorithmic Justice/Fairness/Bias?
Need your help to curate a reading list for (early) PhD students. Bonus points if papers have non-Western perspectives.
Please help amplify this for well-rounded input
#AcademicTwitter
#Ethics
#AI