Yuhan Zhang

@YuhanZhang_

803
Followers
450
Following
10
Media
109
Statuses

Working on psycholinguistics.

Cambridge, MA
Joined September 2017
Pinned Tweet
@YuhanZhang_
Yuhan Zhang
3 months
📢New paper in @glossapsycholx w/ Prof. Kate Davidson @TeaAnd_OrCoffee 📢 Ever since I encountered de re/de dicto in semantics, I’ve been amazed by how language flexibly encodes thoughts/beliefs. Here, we reported interesting interpretations of noun phrases in belief reports. 1/6
@glossapsycholx
Glossa Psycholinguistics
3 months
First up is @YuhanZhang_ and Davidson on "Interpreting referential noun phrases in belief reports – the de re/de dicto competition" ...
1
1
5
1
9
27
@YuhanZhang_
Yuhan Zhang
4 months
🙌🏻 I defended my PhD thesis “The Rational Processing of Language Illusions” yesterday!🎓 I am extremely grateful for my amazing advisors @LanguageMIT and @TeaAnd_OrCoffee and dear committee members @roger_p_levy and Kevin Ryan!
Tweet media one
Tweet media two
Tweet media three
Tweet media four
18
4
187
@YuhanZhang_
Yuhan Zhang
2 years
A new paper just came out in Cognition! It has been an incredibly amazing experience working with @raryskin and @LanguageMIT . We provide a noisy-channel explanation for the age-old linguistic illusion called “depth-charge” sentences. Check it out! 🧵1/n
2
24
96
@YuhanZhang_
Yuhan Zhang
3 years
Our department is running two searches for new faculty this year. One is a tenure-track position in syntax and the other is an open-rank position in phonology/phonetics. Research in interfaces, computational/experimental linguistics, or fieldwork is especially encouraged!
0
27
73
@YuhanZhang_
Yuhan Zhang
4 months
This summer I will be the sole instructor of LING101 (The Science of Language: An Introduction) at Harvard Summer School for undergrads! What knowledge do you think cannot be missed for future linguistic/cognitive science researchers or AI/ML practitioners?
5
4
57
@YuhanZhang_
Yuhan Zhang
7 months
#HSP2024 Looking forward to presenting a talk on negative polarity illusion with @LanguageMIT at Ann Arbor! I am thrilled to provide some new data patterns and I'd love to learn what you think about the underlying theory! 😀
1
5
36
@YuhanZhang_
Yuhan Zhang
1 year
It is great news to see that "Linguistics and Computer Science (30.4801)" has been created as a STEM major. My question now is: could linguists who have substantial training in CS/DS but in a non-STEM program be eligible to apply for the OPT extension?
3
4
27
@YuhanZhang_
Yuhan Zhang
10 months
When humans make errors in processing certain sentences, would LLMs imitate by making similar errors or surpass humans and circumvent these errors? We take language illusions as a testing ground. See you at #CoNLL 12/06 at 11am (location: West 1)! #EMNLP2023 #EMNLP23 #emnlp 1/n
1
4
23
@YuhanZhang_
Yuhan Zhang
3 months
If you are interested in a resource-rational explanation of the negative polarity illusion, please check out this HSP talk (Session 1, starting at 1:07 in the video).
@YuhanZhang_
Yuhan Zhang
7 months
#HSP2024 Looking forward to presenting a talk on negative polarity illusion with @LanguageMIT at Ann Arbor! I am thrilled to provide some new data patterns and I'd love to learn what you think about the underlying theory! 😀
1
5
36
0
3
20
@YuhanZhang_
Yuhan Zhang
5 months
I've been here (virtually) for a lot of the insightful talks! Highly recommend it to everyone interested in linguistics, cognitive science, and LLMs. 😃👏
@roger_p_levy
Roger Levy
5 months
New Horizons in Language Science: Large Language Models, Language Structure, and the Cognitive & Neural Basis of Language has started! Theme 1's speakers will present starting NOW. Ben Bergen, Leila Wehbe, Ariel Goldstein, @davidbau ! Tune in on Zoom via
3
11
29
0
1
15
@YuhanZhang_
Yuhan Zhang
2 years
ChatGPT does think the NPI illusion is ungrammatical, but the reason is totally off.
Tweet media one
0
0
9
@YuhanZhang_
Yuhan Zhang
11 months
Hi there, does anyone know any female researcher going to EMNLP this year? I am looking for a roommate and I would really appreciate getting introduced to some potential friends! Thanks! 😊
1
5
8
@YuhanZhang_
Yuhan Zhang
5 months
Don’t miss this great opportunity!
@coryshain
Cory Shain
5 months
🚨JOB ALERT🚨 2-year full-time research coordinator. Help me get my new language-brain lab at @Stanford off the ground! fMRI and coding bg needed. Ideal for post-bacs interested in comp/cog/neuro/lang. Apply by May 31 for full consideration. Please RT!
0
54
107
0
1
8
@YuhanZhang_
Yuhan Zhang
4 months
This thesis wouldn’t have been possible without the intellectual exchanges with, and inspiration from, my wonderful colleagues, collaborators, and mentors. Ⓜ️🧠💻Thank you so much for your generous support and trust! ❤️ I am excited about what’s coming next!!!!
0
0
6
@YuhanZhang_
Yuhan Zhang
2 years
In four experiments, we find that (a) the more plausible the intended meaning of the depth-charge sentence is, the more likely the sentence is to be misinterpreted; (b) the higher the likelihood of our hypothesized noise operations, the higher the misinterpretation rate is. 10/n
1
0
7
@YuhanZhang_
Yuhan Zhang
2 years
Open questions: How does negation work during the online processing of depth-charge sentences? There are so many negations, and how do they affect each other? End of 🧵
0
1
6
@YuhanZhang_
Yuhan Zhang
10 months
... from the gallery has ever made a bronze sculpture". Our results show that LLMs do not consistently show an illusion effect: they seem to be more likely to be tricked by syntactic illusions like the NPI illusion, compared to the other two where semantics matters more. 6/n
1
0
6
@YuhanZhang_
Yuhan Zhang
10 months
Our research focuses on testing three language illusions: the comparative illusion "More people have been to Russia than I have", the depth-charge illusion "No head injury is too trivial to be ignored", and the NPI illusion "The artist who no curator remembered... 5/n
1
0
5
@YuhanZhang_
Yuhan Zhang
2 years
@MasoudJasbi Could we also have a semantics/pragmatics list? 😋
1
0
5
@YuhanZhang_
Yuhan Zhang
10 months
Language illusions are ungrammatical, implausible, or meaning-wise anomalous sentences that are, nonetheless, rated as acceptable by native speakers. While language illusions pose a very interesting question about human language processing, we ask if they can also trick LLMs. 3/n
1
0
5
@YuhanZhang_
Yuhan Zhang
2 years
ChatGPT seems to solve this pronoun resolution task pretty well, with good reasoning.
Tweet media one
Tweet media two
0
0
4
@YuhanZhang_
Yuhan Zhang
10 months
Special thanks to my co-authors @LanguageMIT and @Forrest_L_Davis 😃 Here is more! 2/n
1
0
5
@YuhanZhang_
Yuhan Zhang
2 years
Overall, we provide a promising noisy-channel account for the depth-charge illusion. In the future, we wish to explore more language illusions that could possibly be addressed by this framework. 19/n
1
0
5
@YuhanZhang_
Yuhan Zhang
10 months
This question pushes us to a better understanding of whether LLMs could be viewed as cognitive models of language processing. 4/n
1
0
5
@YuhanZhang_
Yuhan Zhang
3 years
Dear linguists, do you know if there is any tool that automatically generates raw sentences to tree diagrams following the Penn Tree Bank structure? Thanks a bunch!
1
1
4
@YuhanZhang_
Yuhan Zhang
4 years
Interesting to see that Google Docs' automatic grammar correction is also confused by subject-verb agreement. 🤣
Tweet media one
0
0
4
@YuhanZhang_
Yuhan Zhang
4 months
@ludling Thank you Nathan for the recommendation!
0
0
1
@YuhanZhang_
Yuhan Zhang
4 years
This would be an interesting example for structural ambiguity resulting from lexical ambiguity. lol
@JoeBiden
Joe Biden
4 years
Pitch in $5 to help this campaign fly.
Tweet media one
20K
172K
761K
0
0
4
@YuhanZhang_
Yuhan Zhang
4 months
One of my linguistics friends who is now working at Amazon highly recommends this!! 😀
@lingbeyondacad
Linguistics Career Launch
4 months
Early-bird registration for LCL 2024 is open until 6/24/2024!! Ticket portal: This ticket includes access to all panels, workshops, sessions, presentations, mixers, office hours, and other meetups.
1
1
11
1
0
3
@YuhanZhang_
Yuhan Zhang
8 months
When it comes to posting experiment data, code, and analysis, which repository do you use the most? OSF, GitHub, or others? I am aware that computational-related work often chooses GitHub, but what about more psycholinguistic-oriented work? #openscience
3
0
3
@YuhanZhang_
Yuhan Zhang
3 years
Syntax: Phonology/Phonetics:
@YuhanZhang_
Yuhan Zhang
3 years
Our department is running two searches for new faculty this year. One is a tenure-track position in syntax and the other is an open-rank position in phonology/phonetics. Research in interfaces, computational/experimental linguistics, or fieldwork is especially encouraged!
0
27
73
1
3
3
@YuhanZhang_
Yuhan Zhang
4 months
@fusaroli Thank you Riccardo for the recommendation!
0
0
1
@YuhanZhang_
Yuhan Zhang
3 months
Then, we edit the context a bit: Aurora falsely believes that the man is a peasant. This time, some participants didn’t like “Aurora wants to marry a prince” at all! This shows how contextual/pragmatic information can modulate the acceptability of certain belief/desire reports. 😮 4/6
1
0
2
@YuhanZhang_
Yuhan Zhang
4 months
0
0
0
@YuhanZhang_
Yuhan Zhang
3 years
Here is the phonology paper derived from a course project!
@YuhanZhang_
Yuhan Zhang
3 years
Ever wondered if the relationship between vowel reduction and stress shift is direct and clear during morphological transformation? 🧐 My newly published AMP paper says the relationship could be anything but clear! 🤣 Come and take a look!
0
0
1
0
0
2
@YuhanZhang_
Yuhan Zhang
5 years
"The history of linguistics might not be taught correctly to us, or our understanding of it is not complete." The LSA summer institute @LSA2019 helped me realize that and I think we have the obligation to take a critical view about what is known and especially what is not.
0
0
2
@YuhanZhang_
Yuhan Zhang
2 years
In Exp.1, we replicated Paape et al. (2020) (n=58), showing that the depth-charge sentence in English was indeed rated as more plausible than other implausible controls (e.g., Some head injuries are too trivial to be ignored). 11/n
Tweet media one
1
0
2
@YuhanZhang_
Yuhan Zhang
2 years
Most people read this to mean “we should not ignore head injuries no matter how trivial they are”. But the literal meaning is the opposite, which is “ignore head injuries”! (Language Log: ) 3/n
2
0
2
@YuhanZhang_
Yuhan Zhang
7 months
0
0
2
@YuhanZhang_
Yuhan Zhang
2 years
In four experiments, the comprehension of depth-charge sentences is shown to correlate with (i) the plausibility of the intended meaning and (ii) the likelihood of hypothesized noise operations, which accord with predictions from the noisy-channel framework. 18/n
1
0
2
@YuhanZhang_
Yuhan Zhang
3 months
@NogaZaslavsky @NYUPsych @NYUDataScience This is so amazing! Congrats Noga!
1
0
1
@YuhanZhang_
Yuhan Zhang
7 months
@iria_df @LanguageMIT Hi Iria! Thanks for your interest! I believe the program will release the abstract quite soon! 😀
0
0
1
@YuhanZhang_
Yuhan Zhang
2 months
@lelia_glass And with pointing!
0
0
1
@YuhanZhang_
Yuhan Zhang
11 months
@HadasKotek I just heard from her! Such a small world! ☺️
0
0
1
@YuhanZhang_
Yuhan Zhang
3 months
@Sanghee__Kim Congratulations!!!
1
0
1
@YuhanZhang_
Yuhan Zhang
2 years
In Exp.4, we found that the inference rates of the implausible sentences in the two structural substitution conditions positively correlated with their respective noise likelihood. This also supports the noisy-channel account. 17/n
Tweet media one
1
0
1
@YuhanZhang_
Yuhan Zhang
3 years
Ever wondered if the relationship between vowel reduction and stress shift is direct and clear during morphological transformation? 🧐 My newly published AMP paper says the relationship could be anything but clear! 🤣 Come and take a look!
0
0
1
@YuhanZhang_
Yuhan Zhang
2 years
We offer new insights into understanding this illusion. We hypothesize that depth-charge sentences result from "noisy-channel" comprehension processes (Gibson et al., 2013; Levy, 2008; Ryskin et al., 2018, following Shannon 1948), modeled within the Bayesian framework: 8/n
Tweet media one
Tweet media two
1
0
1
@YuhanZhang_
Yuhan Zhang
2 years
Or even, the sentence itself is plausible and makes sense because “too trivial to be ignored” in this context means “so trivial to the extent that a head injury can be ignored” (Cook & Stevenson, 2010; Fortuin, 2014). 7/n
1
0
1
@YuhanZhang_
Yuhan Zhang
3 months
@_jennhu Huge congratulations!!!
1
0
1
@YuhanZhang_
Yuhan Zhang
2 years
Previous theories differ in what causes the misinterpretation, without consensus. For example, “no”, “trivial”, “too…to”, and “ignore” might contain too many negative meanings that overload processing (Wason & Reich, 1979). 5/n
1
0
1
@YuhanZhang_
Yuhan Zhang
2 years
We then proposed two noise operations for the intended sentence (si) to be produced as the depth-charge (sp). First, the intended sentence is “no head injury is so trivial as to be ignored” and the noise edit is a structural substitution (“so…as to” to “too…to”). 13/n
1
0
1
@YuhanZhang_
Yuhan Zhang
5 years
@BeyondProf Planning for the next week is my way to ease the guilt.
0
0
1
@YuhanZhang_
Yuhan Zhang
2 years
Do we know whether SCiL will be held in 2023?
2
0
1
@YuhanZhang_
Yuhan Zhang
3 months
Note: the experimental materials used contexts different from, but similar to, the Sleeping Beauty example. Come and check it out! 6/6
0
0
1
@YuhanZhang_
Yuhan Zhang
11 months
@linguistMasoud Can't agree more!
0
0
1
@YuhanZhang_
Yuhan Zhang
2 years
It is more likely for the intended “no head injury is so trivial as to be ignored” to be produced as the canonical depth-charge sentence “…is too trivial to be ignored” than for the reverse edit from “too…to” to “so…as to”, consistent with how structural frequency interacts with production. 16/n
Tweet media one
1
0
1
@YuhanZhang_
Yuhan Zhang
3 months
and experiments show participants think this statement accurately describes the context. (Doesn’t this reflect how flexibly language encodes thought and desire? The desire can be linguistically encoded as “wants to marry a prince”, even though the proposition is not registered by Aurora!!!) 😮 3/6
1
0
1
@YuhanZhang_
Yuhan Zhang
3 years
A nice article about interpreting the interaction between a categorical variable and a continuous variable in linear regression in R.
0
0
1
@YuhanZhang_
Yuhan Zhang
4 months
@TeaAnd_OrCoffee Thank you Kate!!! You are a fascinating and amazing advisor!! ❤️❤️
0
0
0
@YuhanZhang_
Yuhan Zhang
2 years
In Exp.3, we constructed a 2x2 condition crossing noise type and the operation direction, using a noise-likelihood rating study. We found that structural substitution is more likely than antonym substitution. 15/n
1
0
1
@YuhanZhang_
Yuhan Zhang
3 months
First, a context could be, like in Sleeping Beauty, that Aurora falls in love with a man and that man is a prince, even though Aurora doesn’t know this. Given the context, we can report Aurora’s desire as “Aurora wants to marry a prince” 2/6
1
0
1
@YuhanZhang_
Yuhan Zhang
2 years
It is common that what people interpret from a sentence is not what the sentence literally means. Now, please read this sentence: “no head injury is too trivial to be ignored”. What does this mean? 2/n
1
0
1
@YuhanZhang_
Yuhan Zhang
2 years
In Exp.2, we normed to what extent the intended meanings of depth-charge materials agree with world knowledge (n=31). We found, across 32 items, the higher this rating score, the higher the plausibility rating in Exp.1. The correlation supports a noisy-channel account. 12/n
Tweet media one
Tweet media two
1
0
1
@YuhanZhang_
Yuhan Zhang
3 months
What is more inspiring and worthy of further research is how the flexibility of language to encode thoughts is constrained by other factors, e.g., information accuracy, discrepancies between mind and reality, and even moral judgment inclinations!! 🤩 5/6
1
0
1
@YuhanZhang_
Yuhan Zhang
2 years
Second, the intended sentence is “no head injury is too trivial to be treated” but is produced with an antonym substitution to be “…to be ignored”. We apply common production errors (e.g., Dell & Reich, 1981) to understand the depth-charge illusion. 14/n
Tweet media one
1
0
1
@YuhanZhang_
Yuhan Zhang
2 years
Readers infer the most likely intended meaning of a perceived sentence (sp) by weighing the plausibility of possible alternative sentences (si) against the likelihood of possible sentences being produced with errors into the perceived sentence. 9/n
1
0
1
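The inference described in tweets 8–9 of this thread is the standard noisy-channel application of Bayes' rule (following Gibson et al., 2013); a sketch in the thread's own notation, where s_p is the perceived sentence and s_i a candidate intended sentence:

```latex
% Posterior over intended sentences given the perceived sentence:
P(s_i \mid s_p) \;\propto\;
  \underbrace{P(s_i)}_{\text{prior: plausibility of } s_i}
  \;\times\;
  \underbrace{P(s_p \mid s_i)}_{\text{noise model: likelihood that } s_i \text{ is produced as } s_p}
```

On this account, a highly plausible alternative s_i ("no head injury is so trivial as to be ignored") combined with a high-likelihood noise edit can outweigh the literal parse of the depth-charge sentence s_p.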
@YuhanZhang_
Yuhan Zhang
2 years
This is super last minute but is there anyone coming to the Chicago Linguistic Society this week and still looking for a place to live? I want to find a roommate to stay with at Hyatt. Please DM me if you are interested! 🙂
0
0
1
@YuhanZhang_
Yuhan Zhang
2 years
Or, the comprehension is underspecified and driven by world knowledge (e.g., Sanford & Sturt, 2002; Paape et al. 2020). 6/n
1
0
1
@YuhanZhang_
Yuhan Zhang
2 years
These are called “depth-charge” sentences because processing them is like a depth-charge bomb that explodes in your mind after a while (Sanford & Emmott, 2012). It’s puzzling how a sentence like this can be completely misinterpreted and people don’t notice. 4/n
1
0
1