Visiting Faculty
@GoogleDeepMind
& HCI Prof
@UCLA
. I build interactive AI systems that align w/ human values, assimilate human intents & augment human abilities
If you've read (or skimmed, or sampled) our HGAI paper, or plan to, please help us improve it by filling out a short reader survey!
#GenerativeAI
#LLM
#AI
#CHI2024
#HappyNewYear
A non-minor update of our paper that formulates a research agenda on Human-Centered
#GenerativeAI
.
📑:
🧐: If you read the previous or current version of our paper, please help us by filling in a short reader survey:
You know how copying the BibTeX of a paper on Google Scholar takes multiple clicks?
Well ... I wrote a Chrome extension to make it one click only 🤠
🔗
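For the curious, here's a minimal sketch of the idea as a content script (an illustration, not the extension's actual source). The `.gs_or_cit` / `.gs_ri` selectors and the `.bib` link pattern are hypothetical stand-ins, since Google Scholar's markup is undocumented and changes over time:

```typescript
// content-script.ts — sketch of the one-click BibTeX idea.
// All selectors below are hypothetical placeholders.
async function copyBibTeXForResult(result: HTMLElement): Promise<void> {
  // Open the "Cite" popup the way a user would.
  result.querySelector<HTMLElement>(".gs_or_cit")?.click(); // hypothetical selector

  // Give the popup a moment to render, then locate the BibTeX export link.
  await new Promise((resolve) => setTimeout(resolve, 500));
  const bibLink = document.querySelector<HTMLAnchorElement>('a[href*=".bib"]');
  if (!bibLink) return;

  // Fetch the .bib entry and put it on the clipboard — one click total.
  const bibtex = await (await fetch(bibLink.href)).text();
  await navigator.clipboard.writeText(bibtex);
}

// Inject a one-click button next to every search result.
for (const result of document.querySelectorAll<HTMLElement>(".gs_ri")) { // hypothetical selector
  const button = document.createElement("button");
  button.textContent = "BibTeX";
  button.addEventListener("click", () => void copyBibTeXForResult(result));
  result.append(button);
}
```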
Excited to receive the
@NSF
CAREER Award on Human-AI Interaction & Collaboration in Medicine.
Many thanks to the support of my students and colleagues
@ECE_UCLA
,
@UCLAengineering
, and collaborators
@UCLAHealth
@dgsomucla
.
Now, back to work ...
🚨 Newly-minted PhD alert!
Congrats to Dr.
@_JiahaoLi
on successfully defending his thesis.
PS: Hire him if you are looking for researchers at the intersection of HCI, AI, and robotics.
To appear at
#CHI2024
alt.chi:
HCI Papers Cite HCI Papers, Increasingly So.
We collected citations of CHI, UIST, & CSCW papers published in 2010-20. Finding: HCI papers have become increasingly likely to be cited by HCI papers rather than non-HCI papers.
📃
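If you want to check the trend on your own citation data, here is a rough sketch of the per-year computation (an illustration, not the paper's actual pipeline); the `Citation` shape and the venue list are assumptions:

```typescript
// Rough sketch of the trend computation, not the paper's actual pipeline.
interface Citation {
  citingYear: number;   // year of the citing paper
  citingVenue: string;  // venue of the citing paper
}

const HCI_VENUES = new Set(["CHI", "UIST", "CSCW"]);

// For each year, the fraction of citations that come from HCI venues;
// an upward trend is the "HCI papers cite HCI papers, increasingly so" finding.
function hciCitationShareByYear(citations: Citation[]): Map<number, number> {
  const tally = new Map<number, { hci: number; all: number }>();
  for (const c of citations) {
    const t = tally.get(c.citingYear) ?? { hci: 0, all: 0 };
    t.all += 1;
    if (HCI_VENUES.has(c.citingVenue)) t.hci += 1;
    tally.set(c.citingYear, t);
  }
  const share = new Map<number, number>();
  for (const [year, t] of tally) share.set(year, t.hci / t.all);
  return share;
}
```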
Just shared this with one of my students: 11 of my first 12 CHI submissions got rejected. If there's one thing I learned about reviews, it's this: take them seriously, but don't let them define your value.
#chi2019
Check out our paper that outlines the next-steps for Human-Centered Generative AI (HGAI).
Combining the authors' cross-disciplinary expertise and perspectives, we propose a research agenda for HGAI across three levels--
Thank you
@UCLAchancellor
and thanks
@GoogleAI
for the recognition and support! It’s my great honor to receive this award and work on this exciting project with my wonderful students. 💪💙💛🐻
#GoBruins
Congrats to
@UCLAengineering
assistant prof
@_xiang_chen_
for receiving a Google Research Scholar Program Award! Prof. Chen’s research in human-AI collaborative systems shows us how humans and machines can more easily share information.
Extremely honored and excited to receive the ONR Young Investigator Award on "Knowledge Extraction from Human Interaction with AI". Many thanks to the support of my students, colleagues & collaborators
@UCLA
@UCLAengineering
@ECE_UCLA
!
📢 Looking for a post-doc interested in building tools/systems for any of the following: explainable AI, human-AI interaction/collaboration in specific domains (medicine, health, accessibility), interacting w/ generative AI, teaching AI w/ human knowledge.
#CHI2021
@sig_chi
I'm super excited to share this: I will join
@UCLA
ECE to work on Human-Computer Interaction and sensing as an assistant professor, starting in 2021 after postdoc at Apple. I am very appreciative of all the help from my advisor, collaborators, family and friends!
@UCLAengineering
Most of our CS faculty candidates this year have 100s of stars on their GitHub repos or 100k to millions of installations of the tools or libs they built. I wonder what's the equivalent of that for an HCI candidate …
Join us in welcoming five new faculty to
@UCLA
's Computer Science Department (
@CS_UCLA
). Blaise Pascal-Tine, Saadia Gabriel, Sam Kumar, Eunice Jun and Remy Wang will each bring a unique expertise to the department.
#EngineerChange
Paper reviewers in computer graphics and vision ask for "user studies" to evaluate a method, particularly for papers with artistic or user-focused applications. In this blog post, I argue that these studies are often a waste of everyone's time.
1/N
The
#IEEEVR2020
community deserves a round of applause -- they show that they can really use their research to solve a real-world, large-scale problem 💪👍🥽
Today my class re-enacted the Direct Manipulation vs. Interface Agents debate: 4 students took
@benbendc
's side, 15 argued for
@PattieMaes
& 13 stood with
@erichorvitz
. Looks like interface agents aren't going anywhere; the question is whether direct manipulation will still be with us.
Congratulations to
@UCLAengineering
student Yuan Liang on winning a
#chi2020
Best Paper Honorable Mention for his OralCam project--a collaboration between UCLA HCI and UCLA Design Automation Lab.
Excited to share the preprint of our
#chi2020
paper:
OralCam: Enabling Self-Examination and Awareness of Oral Health Using a Smartphone Camera
By taking a "selfie" of the oral cavity, users can stay informed by an AI about their oral health conditions.
Every year, my Human-Centered
#AI
class re-enacts the
@benbendc
-
@PattieMaes
debate on direct manipulation vs. interface agents.
Here's a summary of this year's ---
Decade 1: enable customers to do what they want to do
Decade 2: enable companies to do what they want to do
Decade 3: enable customers to do what companies want them to do
"Why I'm losing faith in UX" from
@markhurst
If I could reform
@sigchi
formats any way I want 🤔, there would be two tracks:
- Discovery track papers would require user studies at a level of rigor much higher than the current average in CHI papers;
- Inventive track papers would not require any user studies at all.
Really proud of our 1/2
#uist2020
paper--a project started by four
@UCLAengineering
undergrads (led by
@ritamsarmah
)
We built Geno--a developer tool for authoring multimodal interaction on existing web apps.
📃:
🎥:
Bruce
@liu_xingyu
just presented his award-winning paper at
@ACMUIST
#uist2022
---CrossA11y: Identifying Video Accessibility Issues via Cross-modal Grounding.
Chat with him if you'd like to learn more!
Excited to share 1 of 2
#chi2020
accepted papers from UCLA HCI:
CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis
#XAI
#HumanCenteredAI
#MedicalAI
Dear
@ACMUIST
#uist2021
reviewers, please take a look at the "System Contributions" section in the CfP. We should accept more interactive system papers!
Kicking off
#chi2021
by sharing the video of our
#chi2020
work:
CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis
🤔️ How'd you like an AI that guides you through the process of reading an article?
📢 Check out our just-accepted TOCHI paper that explores the design of a human-AI collaborative reading tool!
#ACL2023
#chi2024
Excited to attend
#UIST2023
for the Gen AI Workshop and to present two papers & a demo!
Marvista: a Human-AI Collaborative Reading Tool
🎙️ Tue 9:40AM 🖥️ Demo Mon
XCreation: A Graph-based Crossmodal Generative Creativity Support Tool
🎙️ Tue 10:50AM
Excited to have
@rajan_vaish
and
@hqz
speak at the Mini HCI Seminars of EE209AS. Come join us if you want to learn about social computing, crowdsourcing, and empowering communities with technology.
@UCLAengineering
@uclaieee
Hudson, S. E., & Mankoff, J. (2014). Concepts, Values, and Methods for Technical Human–Computer Interaction Research. In J. S. Olson & W. A. Kellogg (Eds.), Ways of Knowing in HCI (pp. 69–93). Springer.
Excited to share the preprint of our
#chi2020
paper:
CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis
We took a user-centered approach to design a system that explains AI's results to physicians.
“surround yourself with really smart people and listen.” This is not the “listen” from your mom which means “do what I say.” It’s the “listen” that means “hear what is really being said, and carefully consider it.” --NOT
@pikachu_hudson
Dear
@sigchi
/
@ACMUIST
community, given the flexibility of
#remoteteaching
, I am happy to volunteer if you need a guest lecture.
Topics include the following
Noyan Evirgen presented our
@ACMUIST
#uist2022
paper 🦖GANzilla: User-Driven Direction Discovery in Generative Adversarial Networks
Talk to him to learn more!
How to find a needle in a haystack with AI? Perhaps our paper can answer the question!
Glad to announce our paper has been conditionally accepted by
@chi2023
.
Huge thanks to my advisor
@_xiang_chen_
and our pathologist collaborators!
#HCI
#CHI2023
(1/4)
Every year, my Human-AI Interaction class re-runs the classic Direct Manipulation vs. Interface Agents debate. This year, 4 students took
@benbendc
's side, 5 argued for
@PattieMaes
, & 10 stood with
@erichorvitz
...
"The most important question appears not to be 'Where can we use computers?', but 'Where must we use human beings?'. Until this matter is more thoroughly explored, tension between physicians and computer advocates will persist" -- M. S. Blois
In the academic world, UIST does a good job recognizing imperfectly written papers that contain highly innovative ideas.
"Broken is beautiful" from
@ataussig
Excited to share our 2/2 and Jiahao Li's 2nd
#uist2020
paper, in collaboration with
@iamqubick
We present Romeo--a design tool that turns part of a 3D model into a transformable robotic arm to perform user-specified tasks.
📄:
🎥:
10. Finally, don't expect too much.
CHI, after all, is too big a system to be perfect. At the end of the day, don't feel bad if your expectations aren't met. You should feel happy as long as there is something to gain from the experience.
@_JiahaoLi
gave a great talk at Purdue on his PhD work on augmenting everyday objects through robotic augmentation and perception. It provoked many discussions on HRI, human-AR interaction, and accessibility.