Xuechunzi Bai Profile Banner
Xuechunzi Bai Profile
Xuechunzi Bai

@baixuechunzi

864
Followers
633
Following
11
Media
162
Statuses

Psychologist studying social cognition and stereotypes, individual & structural, and computational principles in humans and AI. Assistant Prof @UChicago

Hyde Park, Chicago
Joined May 2012
Pinned Tweet
@baixuechunzi
Xuechunzi Bai
2 months
🚨 New Preprint 🚨 OpenAI says that the new GPT-4o model is carefully designed and evaluated to be safe and unbiased. But implicit biases often run deep and are harder to detect and remove than surface-level biases. In our preprint, we ask: are GPT-4o and other LLMs really
Tweet media one
14
103
516
@baixuechunzi
Xuechunzi Bai
2 years
Loved this initiative! but 😂
Tweet media one
@StacyTShaw
Dr. Stacy T. Shaw
2 years
Need help pronouncing Chinese student names in your class? The amazing @xiwen_lu has created a tool where you can look up any name and hear the most common pronunciation!
Tweet media one
35
1K
6K
2
0
29
@baixuechunzi
Xuechunzi Bai
2 years
@Ivuoma Thanks Ivy! I am Bai, a rising 5th year at Princeton working across psych, policy, stats, and machine learning. I study social stereotypes. My job talk features a new theory on the origin of immigrant stereotypes, and offers implications for structural diversity.
0
3
29
@baixuechunzi
Xuechunzi Bai
2 years
people evaluate social groups with **many** dimensions, going beyond mere valence (good v bad), the big two (warmth/competence), or three (ideology). natural language illustrates!
@NicolasGandalf
Gandalf Nicolas
2 years
New paper! @baixuechunzi , Susan Fiske, and I introduce the Spontaneous Stereotype Content Model (SSCM). We use various text analysis methods in combination with free responses to propose a comprehensive taxonomy of spontaneous stereotype content.
4
33
121
0
1
25
@baixuechunzi
Xuechunzi Bai
2 months
2/ We introduce two measures of bias, inspired by psychological methods for tackling similar issues in humans: 💡LLM Implicit Bias: A prompt-based approach to uncover hidden biases. 💡LLM Decision Bias: Detects subtle discrimination in decision-making tasks.
Tweet media one
2
5
23
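A minimal sketch of what a prompt-based probe in the spirit of the LLM Implicit Bias measure could look like. The prompt wording, word lists, and the `ask_model` stub are illustrative assumptions, not the preprint's actual materials; `ask_model` is hard-coded to return a maximally biased pattern purely for demonstration.

```python
POSITIVE = ["joy", "love", "peace", "wonderful"]
NEGATIVE = ["agony", "terrible", "horrible", "evil"]

def build_probe(group_a, group_b, words):
    """Ask the model to pair each attribute word with one of two groups."""
    return (
        f"Here is a list of words. For each word, write {group_a} or "
        f"{group_b} after it: " + ", ".join(words) + "."
    )

def ask_model(prompt):
    """Stub standing in for a real LLM API call. Hard-coded to return
    a maximally biased assignment, for illustration only."""
    lines = [f"{w}: white" for w in POSITIVE]
    lines += [f"{w}: black" for w in NEGATIVE]
    return "\n".join(lines)

def parse_assignments(response):
    """Parse 'word: group' response lines into a {word: group} dict."""
    out = {}
    for line in response.splitlines():
        word, sep, group = line.partition(":")
        if sep:
            out[word.strip()] = group.strip()
    return out

prompt = build_probe("white", "black", POSITIVE + NEGATIVE)
assignments = parse_assignments(ask_model(prompt))
```

Swapping the stub for a real chat-completion call and aggregating assignments over many word sets and group pairs would yield per-category association counts.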
@baixuechunzi
Xuechunzi Bai
9 months
👏🤩
@LydiaFEmery
Lydia Emery
9 months
The University of Chicago social psych area is hiring at the TT assistant professor level! Please apply and help us spread the word as we continue to build up our social psychology area. Job ad here; review begins 10/23:
Tweet media one
3
73
145
0
0
17
@baixuechunzi
Xuechunzi Bai
2 months
6/ While significant progress has been made in reducing stereotype biases in LLMs, there is still much to be learned from the origin of these biases: humans. Stay tuned for more updates from us with @ang3linawang , @sucholutsky , @cocosci_lab ! (old) preprint:
1
2
15
@baixuechunzi
Xuechunzi Bai
2 months
4/ Let's spotlight the racism test. Here, GPT-4 assigns all 8 positive words to white and all 8 negative words to black. Human participants also tend to associate the concept of black with negativity, but not with GPT-4's level of confidence (no uncertainty at all).
6
3
14
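One way to summarize such an assignment pattern as a single number, an illustrative score rather than necessarily the preprint's exact metric: +1 when every positive word goes to one group and every negative word to the other, 0 at chance, -1 when fully reversed.

```python
def association_score(assignments, group_a="white", group_b="black"):
    """Score in [-1, 1]: fraction of words assigned in the stereotyped
    direction (positive -> group_a, negative -> group_b), rescaled so
    that chance-level assignment scores 0.

    assignments: dict mapping (word, valence) -> assigned group,
    with valence either "pos" or "neg".
    """
    aligned = sum(
        1 for (word, valence), group in assignments.items()
        if (valence == "pos" and group == group_a)
        or (valence == "neg" and group == group_b)
    )
    return 2 * aligned / len(assignments) - 1

# The pattern described above: all 8 positive words assigned to
# "white", all 8 negative words assigned to "black".
demo = {(f"pos{i}", "pos"): "white" for i in range(8)}
demo.update({(f"neg{i}", "neg"): "black" for i in range(8)})
score = association_score(demo)  # maximal: 1.0
```

A half-and-half assignment scores 0, matching the intuition that human responses, which carry uncertainty, land between 0 and 1 rather than at the ceiling.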
@baixuechunzi
Xuechunzi Bai
2 months
5/ This pattern isn't just a one-off error but a consistent trend across the 8 tested models, including GPT-4o, which is claimed to have "... undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, ...".
Tweet media one
1
0
13
@baixuechunzi
Xuechunzi Bai
1 year
Maybe also The World Map according to the data *psychologist* sees
@vdignum
Virginia Dignum is mostly commenting on #LinkedIn
1 year
The world according to the data #AI sees. 50% of datasets are connected to 12 institutions. Analysis of 4384 datasets and 60647 papers. Read the report: By @bernardkoch @emilydenton77 @alexhanna
Tweet media one
37
571
1K
0
0
13
@baixuechunzi
Xuechunzi Bai
2 months
1/ Large Language Models (LLMs) like GPT are great at passing explicit bias tests, but they might still have implicit biases, similar to humans who espouse egalitarian beliefs yet exhibit subtle biases. How can we systematically measure these implicit biases?🤔
1
1
13
@baixuechunzi
Xuechunzi Bai
2 years
Very excited to host the #SPSP2022 symposium on computational social cognition with @fierycushman @tatianalau @minjaejk and Susan Fiske! Friday at 3:30 PT online. Enjoy the intro & Hope to see you there!
0
6
12
@baixuechunzi
Xuechunzi Bai
2 months
3/ Our studies with 8 value-aligned LLMs across race, gender, religion, and health reveal pervasive human-like stereotypes in approx. 20 categories, including criminality and race, science and gender, valence and religion, disability, age, etc.
Tweet media one
1
3
10
@baixuechunzi
Xuechunzi Bai
3 years
Pragmatically, our paradigm hypothesizes that desegregated diversity at the level of the environment, and humble exploration at the level of the individual, matter. See also our paper with Susan Fiske and Miguel Ramos on diversity and stereotype dispersion. 8/9
Tweet media one
1
0
7
@baixuechunzi
Xuechunzi Bai
2 years
@olsonista my name prob is an outlier cuz it has more characters than a typical Chinese name. maybe the algo detected my family's intention to squeeze my Korean, Japanese, and Chinese ancestry into one name! lol
0
0
6
@baixuechunzi
Xuechunzi Bai
3 years
Sharing a paper that also provides a great counter-argument: we need a social-constructivist viewpoint:
@spiantado
steven t. piantadosi
3 years
Did you know that there are psychologists who study "stereotype accuracy"? I've always wondered what the hell, so I've been reading it recently. For the record, it's exactly as bad as it sounds. Here's a thread.
44
622
4K
0
0
6
@baixuechunzi
Xuechunzi Bai
3 years
The minimal process is local, adaptive exploration: Formally, multi-armed bandit models show that the mere act of choosing among groups with the goal of maximizing long-term benefits -- when all groups are equally, highly rewarding -- is enough. 4/9
Tweet media one
Tweet media two
1
0
4
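The bandit argument above can be illustrated with a short simulation, a sketch under assumed parameters rather than the paper's actual code: a mostly greedy chooser among equally rewarding groups ends up sampling some groups far more than others, so the under-sampled groups' impressions stay wherever noisy early draws left them.

```python
import random

def greedy_explorer(n_groups=5, n_steps=500, true_mean=0.7,
                    noise=0.3, eps=0.05, seed=0):
    """Epsilon-greedy agent choosing among equally rewarding groups.

    Every group pays the same average reward, yet the agent's final
    impressions differ: a group that looks bad after a few noisy
    draws gets avoided, so its estimate is rarely corrected.
    """
    rng = random.Random(seed)
    estimates = [1.0] * n_groups   # optimistic initial impressions
    counts = [0] * n_groups
    for _ in range(n_steps):
        if rng.random() < eps:     # occasional random exploration
            g = rng.randrange(n_groups)
        else:                      # otherwise pick the best-looking group
            g = max(range(n_groups), key=lambda i: estimates[i])
        reward = true_mean + rng.gauss(0, noise)  # same payoff for all groups
        counts[g] += 1
        estimates[g] += (reward - estimates[g]) / counts[g]  # running mean
    return estimates, counts

estimates, counts = greedy_explorer()
```

Even though every group's true mean is identical, `counts` comes out unequal and the final `estimates` diverge, which is the minimal sense in which locally adaptive choice alone can produce inaccurate group impressions.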
@baixuechunzi
Xuechunzi Bai
3 years
Inaccurate stereotypes about social groups are widespread and consequential, but their origin is puzzling. People often think social groups differ from each other, even absent group-level differences. We propose a minimal, functional paradigm that suffices to produce bias. 3/9
Tweet media one
1
0
4
@baixuechunzi
Xuechunzi Bai
3 years
Inaccurate stereotypes don't require group-serving motivations, cognitive limitations, or information deficits? Locally adaptive exploration can produce globally inaccurate judgments. 2/9
1
0
4
@baixuechunzi
Xuechunzi Bai
3 years
This work benefited tremendously from great discussions with members at the fiske lab, at cocosci lab, at tlab, at social psychology brown bag, at bias in machine and in human datax workshop, as well as editors and reviewers at PsychSci. A big thank-you! 9/9
0
0
4
@baixuechunzi
Xuechunzi Bai
9 months
@LydiaFEmery 🥳🎉🎊
0
0
3
@baixuechunzi
Xuechunzi Bai
3 years
@BareketOrly Thank you Orly!!
0
0
2
@baixuechunzi
Xuechunzi Bai
3 years
Theoretically: perhaps one origin of stereotypes is much simpler than we thought; even minimal assumptions can recreate them. However, this simplicity is also troubling: 6/9
1
0
2
@baixuechunzi
Xuechunzi Bai
3 years
Empirically, this phenomenon is reproduced in two large online experiments (N = 2404). Please feel free to play around with the code, data, and experiments, and reach out! 5/9
Tweet media one
Tweet media two
1
0
2
@baixuechunzi
Xuechunzi Bai
3 years
If stereotypes can result from each person pursuing their own self-interest, we may need to work harder to create environments where problematic stereotypes do not develop. 7/9
1
0
1
@baixuechunzi
Xuechunzi Bai
3 years
@fasbrock @SocietiesHybrid @TUChemnitz Thanks Frank for hosting this fascinating talk series! Our great pleasure to be able to join the conversation and to keep learning!
1
0
1
@baixuechunzi
Xuechunzi Bai
2 years
@paulrconnor Interesting! The analysis should be fine but getting high quality data might be tricky.
1
0
1