Freda Shi

@fredahshi

1,909
Followers
697
Following
20
Media
314
Statuses

Assistant Professor @UWCheritonCS & Faculty Member @VectorInst. PhD in CS from @TTIC_Connect, BS from @PKU1898. Ex-@MetaAI, @GoogleDeepMind. Feeder of 3 🐈

Chicago, IL
Joined December 2016
Pinned Tweet
@fredahshi
Freda Shi
27 days
🚨Long thread warning: excited to share that I defended my PhD thesis earlier in May! Here's my thesis, Learning Language Structures through Grounding: 1/
Tweet media one
30
11
298
@fredahshi
Freda Shi
1 year
Personal update: I'll be starting in July 2024 as an Assistant Professor @UWCheritonCS and a Faculty Member @VectorInst ! Looking forward to working with all the amazing folks! Prospective students: if you are interested in NLP and/or comp. linguistics, please consider applying!
33
20
322
@fredahshi
Freda Shi
2 years
Large language models show reasoning abilities in English with chain-of-thought prompting - how about their multilingual reasoning abilities? New preprint📄: Language models are multilingual chain-of-thought reasoners. (1/n)
Tweet media one
5
53
274
@fredahshi
Freda Shi
3 years
Honored to receive the 2021 Google PhD fellowship in natural language processing. Thanks @GoogleAI for the support! Kudos to my advisors and mentors: thanks for teaching me everything over the past years, and for showing me concrete examples of the best researchers---yourselves!
@GoogleAI
Google AI
3 years
Continuing our tradition of supporting outstanding graduate students in their pursuit of research in computer science and related fields, we congratulate our 13th annual PhD Fellowship Program recipients! See the list of 2021 Fellowship recipients below:
15
52
419
9
4
181
@fredahshi
Freda Shi
8 months
🚨(Not Really) Old Paper Alert🚨: sharing our 2-year-old NeurIPS paper that I’m still quite excited about. We learn grounded, neuro-symbolic CCGs from multi-modal data and demonstrate nearly perfect compositional generalization to unseen sentences and scenes. (1/)
Tweet media one
2
17
108
@fredahshi
Freda Shi
2 years
Late post but let’s do this! Happy to share our #EMNLP2022 work on translating natural language to executable code with execution-aware minimum Bayes risk decoding 📝Paper: 📇Code: 📦Data (codex output): (1/n)
Tweet media one
3
20
102
@fredahshi
Freda Shi
4 years
Just got a paper w/ scores 4, 4, 4 rejected by #acl2020nlp , but the comments from the meta-reviewer and all reviewers are super, super constructive. Would like to say thank you to them all!
1
1
93
@fredahshi
Freda Shi
2 years
Though time is quite limited, I'm happy to spend most of my weekend reviewing for #iclr2023 - my assigned papers are all interesting, carefully written, and relevant (to me), like most ICLR papers I've reviewed before - kudos to the ICLR matching system (and my ACs)!
1
0
39
@fredahshi
Freda Shi
1 year
#ACL2023 attendees: Welcome to Canada! 🇨🇦 I'll be at the conference from Monday to Wednesday. First time attending a conference without presenting a paper, and I’m sure I’ll enjoy all the cool presentations. Old & new friends: please don’t hesitate to come & say hi!
1
0
38
@fredahshi
Freda Shi
9 months
Looking forward to visiting tomorrow!
@michigan_AI
MichiganAI
9 months
📢Delighted to host @fredahshi 's #AI Seminar on "Learning Syntactic Structures from Visually Grounded Text and Speech"! TOMORROW, OCT. 24 @ 4pm ET:
Tweet media one
0
2
30
1
1
36
@fredahshi
Freda Shi
8 months
Yes, we are looking for PhD students at Waterloo! Come join us — apply by Dec 1!
@yuntiandeng
Yuntian Deng
8 months
I am hiring NLP/ML PhD students at UWaterloo, home to 5 NLP professors! Apply by Dec 1. Strong consideration will be given to those who can tackle the challenge below: can we use LMs' hidden states to reason about multiple problems simultaneously? Retweets/shares appreciated🥰
Tweet media one
12
133
486
1
0
36
@fredahshi
Freda Shi
2 years
Finally, I'll be presenting this work at EMNLP 2022 in person! Cannot wait to meet old and new friends - come and say hi!
@fredahshi
Freda Shi
2 years
Late post but let’s do this! Happy to share our #EMNLP2022 work on translating natural language to executable code with execution-aware minimum Bayes risk decoding 📝Paper: 📇Code: 📦Data (codex output): (1/n)
Tweet media one
3
20
102
0
4
32
@fredahshi
Freda Shi
2 years
This has been one of the most exciting posters I’ve visited at #EMNLP2022. Neat results showing that syntax and semantics are learnably separated in the frequency spectrum!
@mxmeij
Max
2 years
For #EMNLP2022 , we (w/ @robvanderg , @barbara_plank ) look through differentiable, rainbow-colored glasses to find linguistic timescale profiles for 7 #NLProc tasks across 6 languages 🌈 📑 📽️ 💬 10th Dec 9:00 at Poster Session 7 & 8
Tweet media one
1
3
28
0
0
21
@fredahshi
Freda Shi
5 months
Are there any resources/studies showing which words (in any language) are more likely to be mispronounced (by either native speakers or L2 learners)? Any pointer is appreciated!
2
0
17
@fredahshi
Freda Shi
1 year
I very much enjoyed this paper, and of course, the poster! Large-scale data and LLMs present a fantastic opportunity for studying cultural differences.
@_emliu
Emmy Liu
1 year
"आज-कल NLP Research के साथ बने रहना उतना ही आसान है जितना कि मानसून मॆं भीगने से बचे रहना!" . Did you understand? How about LMs? Our #ACL2023 Findings paper explores multilingual models' cultural understanding through figurative language in 7 langs 🌎(1/9)
Tweet media one
5
39
204
2
0
17
@fredahshi
Freda Shi
3 years
And she feels so lucky to be a student at @TTIC_Connect ;)
@TTIC_Connect
TTIC
3 years
Third-year PhD student Freda Shi bridges the gap between linguistics and computer science in her natural language processing research. Follow the link to learn more: #computerscience #womeninstem
Tweet media one
0
1
9
1
0
15
@fredahshi
Freda Shi
1 year
If you're at ICML, chat with @xinyun_chen_ about this paper at poster session 3, 11am tomorrow!
@dmdohan
David Dohan
1 year
Come by the 11am posters on Wednesday to learn how irrelevant context affects LLMs:
Tweet media one
1
0
20
0
1
14
@fredahshi
Freda Shi
2 years
Surprisingly, PaLM-540B shows decent multilingual reasoning ability, solving >40% of the problems in each of the 10 investigated languages, including underrepresented ones (such as Bengali and Swahili) that cover only <0.01% of the pretraining data tokens. (3/n)
Tweet media one
1
2
15
@fredahshi
Freda Shi
2 years
Back in 2017, when thinking about visually grounded syntax induction (), I dreamed for a second about whether we could parse images in similar ways---apparently too difficult for me then (and now), so I'm super excited to see this! Congrats on the nice work!
@xiaolonw
Xiaolong Wang
2 years
Introducing #CVPR2022 GroupViT: Semantic Segmentation Emerges from Text Supervision 👨‍👩‍👧 Without any pixel label ever, our Grouping ViT can group pixels bottom-up into open-vocabulary semantic segments. The only training data is 30M noisy image-text pairs.
4
128
644
0
0
14
@fredahshi
Freda Shi
2 years
Again, feel free to check out our paper and dataset for more details! Paper📄: Data💾: (8/n)
1
0
13
@fredahshi
Freda Shi
1 year
In the coming year, I'll finish my PhD @TTIC_Connect , and visit @roger_p_levy . Huge thanks to my advisors @kevingimpel and Karen, my mentors @LukeZettlemoyer , @sidawxyz and @denny_zhou , and everyone who helped me along the way!
1
0
13
@fredahshi
Freda Shi
3 years
Same here. Even worse: I feel I'm probably not qualified to review some of them -- no experience in this domain, not quite familiar with recent work, no labmates or close friends working on it -- while papers I thought were relevant were not assigned to me.
@yufanghou
Yufang Hou
3 years
Got 5 papers to review for ARR today, all from different AEs, and the due date is Dec 16! Logged into the system: there's no option to reject the assignment or discuss with AEs to extend the deadline/find a replacement. I wonder what the average review load for Nov is🤔 @ReviewAcl
16
1
56
1
1
12
@fredahshi
Freda Shi
2 years
In this work, we introduce the Multilingual Grade School Math (MGSM) dataset, created by manually translating 250 English GSM8K test examples into 10 typologically diverse languages, and use it to investigate language models’ reasoning abilities. (2/n)
1
0
11
@fredahshi
Freda Shi
2 years
1. Chain-of-thought prompting is essential to the reasoning performance of both GPT-3 and PaLM; notably, reasoning steps in English (EN-CoT) almost always outperform those in the same language as the problem (Native-CoT). (5/n)
Tweet media one
1
1
11
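To make the EN-CoT vs. Native-CoT distinction concrete, here is a minimal Python sketch of how an EN-CoT prompt could be assembled; the exemplar text, the German example, and the function name are illustrative assumptions, not the paper's exact prompts.

```python
# A minimal sketch of EN-CoT prompting: few-shot exemplars whose reasoning
# steps are written in English, prepended to a problem stated in another
# language. The exemplar wording below is a hypothetical illustration.

EN_COT_EXEMPLAR = (
    "Question: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. "
    "How many tennis balls does he have now?\n"
    "Step-by-step answer: Roger starts with 5 balls. 2 cans of 3 balls each "
    "is 6 balls. 5 + 6 = 11. The answer is 11.\n\n"
)

def build_en_cot_prompt(problem: str, exemplars=(EN_COT_EXEMPLAR,)) -> str:
    """Prepend English chain-of-thought exemplars to a (possibly
    non-English) problem and ask for a step-by-step answer."""
    return "".join(exemplars) + f"Question: {problem}\nStep-by-step answer:"

# Usage: a German GSM-style problem, prompted with English reasoning.
# (German: "Lisa has 12 apples and gives 4 away. How many does she have left?")
print(build_en_cot_prompt(
    "Lisa hat 12 Äpfel und verschenkt 4. Wie viele Äpfel hat sie noch?"
))
```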
@fredahshi
Freda Shi
5 years
I'll be talking about Visually Grounded Neural Syntax Acquisition, one of the listed papers, on Monday at 4:00 pm in Session 3E! This is joint work with Jiayuan Mao, @kevingimpel and Karen Livescu. Paper: Project page:
@ACL2019_Italy
ACL2019
5 years
We are delighted to announce the list of papers that have been nominated as candidates for ACL 2019 Best Paper Awards! Check the list at #acl2019nlp
1
34
168
0
1
10
@fredahshi
Freda Shi
27 days
2 great surveys centered on the above 2 senses of grounding, respectively: in the Harnad (1990) sense, , by @ybisk , @universeinanegg , @_jessethomason_ and colleagues; in the Clark & Brennan (1991) sense, , by folks incl. @ybisk 4/
Tweet media one
1
0
10
@fredahshi
Freda Shi
27 days
In my thesis, I discuss a family of tasks---learning language structures from supervision in other sources (i.e., through grounding)---and corresponding methods for each considered task. As many have recognized, grounding is a highly ambiguous term. More in 🧵 2/
1
0
10
@fredahshi
Freda Shi
2 years
Joint work with Mirac Suzgun, @markuseful , Xuezhi Wang, Suraj Srivats, @CrashTheMod3 , @hwchung27 , @yitayml , @seb_ruder , @denny_zhou , @dipanjand , @_jasonwei (9/9)
0
0
9
@fredahshi
Freda Shi
27 days
Prior work has mainly categorized grounding into 2 types: semantic grounding (finding meanings for forms; Harnad, 1990) and communicative grounding (finding common ground in dialogue; Clark and Brennan, 1991 + earlier work in pragmatics). 3/
Tweet media one
1
0
10
@fredahshi
Freda Shi
2 years
The multilingual reasoning abilities of language models also extend to other tasks: on XCOPA, a multilingual commonsense reasoning dataset, PaLM-540B sets a new state of the art (89.9% average accuracy) using only 4 examples, outperforming the prior best by 13.8%. (7/n)
1
0
10
@fredahshi
Freda Shi
27 days
An interesting and counterintuitive example of grounding under this formalization is GroupViT by @Jerry_XU_Jiarui , @xiaolonw , and folks, where an image segmentation model is trained from textual supervision---vision can be grounded in language, too! 8/
Tweet media one
1
0
9
@fredahshi
Freda Shi
27 days
Thanks to @McAllesterDavid , our anti-grounding prof at @TTIC_Connect : thank you for all the inspiring conversations and writings that push back on the idea of grounding, e.g., . I hope (and believe) the grounding I define above is not what you are against :) 10/
1
0
9
@fredahshi
Freda Shi
27 days
@ybisk @universeinanegg @_jessethomason_ One exception is acoustically grounded word embeddings (e.g., Settle et al., 2019), which encode acoustic knowledge into word embeddings. Perhaps no one thinks the pronunciation of a word is its meaning, but still, this is an acceptable usage of "grounding." 5/
Tweet media one
1
0
9
@fredahshi
Freda Shi
27 days
@ybisk @universeinanegg @_jessethomason_ In my thesis, I proposed the following definition of grounding, unifying all the cases above: grounding means processing the primary data X with supervision from a source Y (the ground), where the mutual information I(X; Y) > 0, so that we can find meaningful connections between them. 6/
1
0
9
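As a toy illustration of this definition, the sketch below computes I(X; Y) and H(Y | X) for a made-up joint distribution over words (X) and visual contexts (Y); the distribution and names are purely illustrative assumptions, not data from the thesis.

```python
import math

# Toy joint distribution p(x, y): X is the primary data (a word), Y is the
# ground (a visual context). The numbers are made up for illustration.
p_xy = {
    ("cat", "cat_image"): 0.4,
    ("cat", "dog_image"): 0.1,
    ("dog", "cat_image"): 0.1,
    ("dog", "dog_image"): 0.4,
}
p_x, p_y = {}, {}
for (x, y), p in p_xy.items():
    p_x[x] = p_x.get(x, 0.0) + p  # marginal p(x)
    p_y[y] = p_y.get(y, 0.0) + p  # marginal p(y)

# I(X; Y) = sum_{x,y} p(x,y) log[ p(x,y) / (p(x) p(y)) ]
mi = sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in p_xy.items())

# H(Y | X) = -sum_{x,y} p(x,y) log p(y | x); see tweet 7/ in this thread.
h_y_given_x = -sum(p * math.log2(p / p_x[x]) for (x, y), p in p_xy.items())

print(f"I(X; Y)  = {mi:.3f} bits > 0: the ground tells us something about X")
print(f"H(Y | X) = {h_y_given_x:.3f} bits > 0: the ground is richer than X alone")
```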
@fredahshi
Freda Shi
1 year
Both Heinrich (who didn’t wanna share a seat with others) and I enjoyed your excellent defense talk — huge congrats, Dr. Kanishka Misra!
@kanishkamisra
Kanishka Misra 😶‍🌫️
1 year
Oh and my favorite photo from the defense was taken by @fredahshi -- I hope everyone here enjoys it as much as I did (what a great cat!) 5/6
Tweet media one
1
0
10
1
0
9
@fredahshi
Freda Shi
27 days
@ybisk @universeinanegg @_jessethomason_ In real-world scenarios, the conditional entropy H(Y|X) is almost always > 0, meaning that the ground is usually, from certain perspectives, more complicated than what is to be grounded. 7/
1
0
8
@fredahshi
Freda Shi
2 years
@yoavartzi Congrats on the Best Paper Award!! Super well deserved!
0
0
8
@fredahshi
Freda Shi
2 years
2. When example problems in the same language as the problem of interest are available, use them for prompting. If not, use examples from a diverse set of languages. (6/n)
Tweet media one
1
0
7
@fredahshi
Freda Shi
2 years
In addition, we analyze the effect of the choice of prompting examples and prompting techniques, and highlight the following takeaways. (4/n)
1
0
7
@fredahshi
Freda Shi
27 days
I'm extremely thankful to my advisors Karen and @kevingimpel & my committee members and mentors @lukezettlemoyer and @roger_p_levy , for the great questions and suggestions on my thesis. 12/
1
0
7
@fredahshi
Freda Shi
4 months
Sida is beyond amazing! Go work with him!
@sidawxyz
Sida Wang
4 months
I'm hiring a PhD intern for the FAIR CodeGen (Code Llama) team. Do research on Code LLMs, execution feedback, evaluation, etc. Apply here:
3
31
198
0
0
5
@fredahshi
Freda Shi
27 days
@McAllesterDavid @TTIC_Connect Also here's a quick guide for readers interested in the additional content covered by my thesis. 11/
Tweet media one
1
0
6
@fredahshi
Freda Shi
27 days
To my friends, mentors, coauthors, and everyone who has offered direct or indirect help in the past years: please read my thanks in the acknowledgments, which are probably the most exciting part of every PhD thesis. 14/14
0
0
6
@fredahshi
Freda Shi
6 years
Excited to have the work on tree-based neural sentence modeling (joint with my excellent collaborators Hao Zhou, Jiaze Chen and Lei Li) accepted by #EMNLP2018
0
0
6
@fredahshi
Freda Shi
1 year
I had some difficulty figuring out the horizontal scroll (横批; héng pī)—while I eventually realized that in this case it should be read from left to right, we typically write it from right to left in China :) Happy New Year to my friends who are celebrating!
@LanguageLog
Language Log
1 year
The difficulty of expressing "nothing": This is a clever attempt to write a spring couplet (chūnlián 春聯), not in the usual Sinoglyphs / Chinese characters, but in pictographs: (source) I could figure out about half of the character equivalents (rebuses…
Tweet media one
5
10
40
0
0
5
@fredahshi
Freda Shi
5 months
@akoksal_ Thank you Abdullatif! Definitely checking it out!
0
0
1
@fredahshi
Freda Shi
18 days
@denny_zhou @kchonyc I believe both explanations are valid, although marginalizing over reasoning paths that share the same result is probably the most natural way to think about it. My thesis (p. 123) discusses three explanations of SC and MBR-Exec (…)
1
0
5
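A minimal sketch of that marginalization view: sample several chains of thought, keep only their final answers, and let paths that agree pool their mass. The `sample_reasoning_path` helper is a hypothetical stand-in for sampling from an LM, not any particular implementation.

```python
import random
from collections import Counter

def sample_reasoning_path(problem: str) -> str:
    # Hypothetical placeholder: pretend the LM samples a chain of thought
    # and we extract its final answer.
    return random.choice(["11", "11", "11", "9"])

def self_consistent_answer(problem: str, num_samples: int = 20) -> str:
    # Marginalize over reasoning paths: paths that share the same final
    # answer pool their probability mass, and the heaviest answer wins.
    answers = Counter(sample_reasoning_path(problem) for _ in range(num_samples))
    return answers.most_common(1)[0][0]

print(self_consistent_answer("Roger has 5 tennis balls..."))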
@fredahshi
Freda Shi
4 months
Go Palatino!
@shuyanzhxyc
Shuyan Zhou
4 months
#COLM template is so visually pleasant, the joy of writing 🆙🆙🆙
2
3
53
1
0
5
@fredahshi
Freda Shi
27 days
Special thanks to @MichaelHBowling , Dale Schuurmans, and @nidhihegde for the wonderful discussion on grounding at a dinner a year ago. The conversation has made the term grounding (in my mind) more articulable. 9/
1
0
5
@fredahshi
Freda Shi
1 month
@MorrisAlper @moranynk @ElorHadar @RGiryes Excited to see more work on quantifying visual concreteness! Our ACL'19 work on quantifying text span concreteness and using it for syntactic parsing might also be of interest:
0
0
4
@fredahshi
Freda Shi
6 years
Our work got the same result on sentence encoders!
@gneubig
Graham Neubig
6 years
#EMNLP2018 "A Tree-based Decoder for NMT", a framework for incorporating trees in target side of MT systems. We compare constituency/dependency/non-syntactic binary trees, find surprising result that non-syntactic trees perform best, and try to explain why
Tweet media one
Tweet media two
Tweet media three
Tweet media four
4
32
140
1
0
4
@fredahshi
Freda Shi
4 years
Can't agree more. I voted for "that's syntax", but I wouldn't be happy to see a paper using "syntactic features" to refer to POS tags only, and I've been left unhappy like that more than 3 times.
@carlosgr_nlp
C. Gómez-Rodríguez
4 years
@emilymbender At the very least it's a misleading use of the term. To me it's like doing linear regression and calling it a neural approach... technically true (linear regression can be seen as a 1-neuron neural network) but I don't see why anyone would say it (w/o context) if not to oversell.
1
1
9
0
0
4
@fredahshi
Freda Shi
3 months
And great to see... Yoav brings back the cute llama art!
@yoavartzi
Yoav Artzi (PC-ing COLM)
3 months
The @COLM_conf reviewing period has started. Reviewers should now receive emails, and all papers are now assigned. Thanks to all our ACs who adjusted assignments in the last few days. Happy reviewing all!
Tweet media one
1
7
45
1
0
4
@fredahshi
Freda Shi
5 years
See you then, Sam!
0
0
4
@fredahshi
Freda Shi
8 months
My brain LM is still favo(u)ring “favorite” - you should seriously consider coming to Canada 🍁
@maojiayuan
Jiayuan Mao
8 months
Definitely one of my top 3 favourite papers :) It marries deep learning with a minimal set of universal grammar rules for grounded language learning. It draws inspiration from lexicalist linguistics and cognitive science (bootstrapping from core knowledge).
0
6
57
0
0
4
@fredahshi
Freda Shi
3 years
(and guess which is me w/o exact matching on either first or last name! :)
@fredahshi
Freda Shi
3 years
Honored to receive the 2021 Google PhD fellowship in natural language processing. Thanks @GoogleAI for the support! Kudos to my advisors and mentors: thanks for teaching me everything over the past years, and for showing me concrete examples of the best researchers---yourselves!
9
4
181
2
0
4
@fredahshi
Freda Shi
5 years
Madhur's course is really nice! I'd recommend it to everyone who wishes to review/learn some fundamental mathematical concepts related to machine learning.
@_onionesque
Shubhendu Trivedi
5 years
@EugeneVinitsky Madhur Tulsiani runs a very similar course every other year (this has links to iterations of the class, the latter ones have more refined notes).
1
1
9
0
0
4
@fredahshi
Freda Shi
27 days
Of course, the work covered in this thesis builds on the foundation of the literature—my thanks go to the authors of the papers I cited. I hope I've discussed your work fairly. 13/
1
0
4
@fredahshi
Freda Shi
5 months
@mrdrozdov HUGE congrats, Andrew!! 🎉🍾
0
0
1
@fredahshi
Freda Shi
7 months
@sharonlevy21 Also wonder if this would happen for inanimate objects/subjects as well!
0
0
2
@fredahshi
Freda Shi
8 months
@maojiayuan @jiajunwu_cs @roger_p_levy As in a CCG, each lexical entry has its syntactic type and semantic representation. We induce the syntax and semantics of the questions, execute the neuro-symbolic semantic program against the visual input, and reward the parser if the execution result is correct. (4/)
Tweet media one
1
0
3
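A schematic of that training signal, with `candidate_parses` and `execute` as hypothetical placeholders for the parser and the neuro-symbolic program executor; this is a sketch of the idea, not the paper's actual implementation.

```python
def execution_rewards(question, scene, gold_answer, candidate_parses, execute):
    """Score candidate parses of a question by executing their semantic
    programs against the visual scene."""
    rewards = []
    for parse in candidate_parses(question):
        # Run the parse's semantic program on the scene.
        predicted = execute(parse, scene)
        # Reward the parser when execution recovers the gold answer.
        rewards.append((parse, 1.0 if predicted == gold_answer else 0.0))
    return rewards  # e.g., fed to a REINFORCE-style update of the parser
```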
@fredahshi
Freda Shi
9 months
@tallinzen That’s part of the reason why I started using GitHub to manage my working papers. Another part is the nice combination of VS Code & the LaTeX Workshop extension.
0
0
3
@fredahshi
Freda Shi
2 years
@RTomMcCoy Sometimes I do Ctrl/Cmd+Shift+V for 2) and 3) XD
0
0
3
@fredahshi
Freda Shi
7 months
@UndefBehavior Sorry to hear this! As an alternative, my coauthors and I tried publishing at ML conferences (in our case, NeurIPS) on highly linguistic topics. We got constructive feedback from reviewers, but very little attention to our presentation.
0
0
2
@fredahshi
Freda Shi
7 months
@sharonlevy21 Congrats on the excellent work! I found the Table 1 example very interesting: these four sentences are clearly negative to me, and I can't imagine anyone labeling any of them positive---I wonder if more data could fix this?
2
0
3
@fredahshi
Freda Shi
5 years
@amitmoryossef @emilymbender My advisor Karen Livescu is working on ASL as well:
0
0
3
@fredahshi
Freda Shi
9 months
@kanishkamisra I started using OneNote (not meant for managing todos, though). I just start a new page for the week's todos each Monday morning and copy over leftovers from the prior week.
0
0
2
@fredahshi
Freda Shi
2 months
1
0
2
@fredahshi
Freda Shi
5 years
Great paper and impressive results. Very excited to see it!
@mrdrozdov
Andrew Drozdov
5 years
Now with paper link: And code: New results on unsupervised parsing: +6.5 F1 compared to ON-LSTM (2019), +6 F1 compared to PRLG (2011).
1
16
79
0
0
2
@fredahshi
Freda Shi
26 days
@SonglinYang4 Thanks Sonta! Now I know who to blame if someone calls me Dr Meow at conferences😼
0
0
2
@fredahshi
Freda Shi
6 years
Will also be presented at #EMNLP2018 :
0
0
2
@fredahshi
Freda Shi
1 year
Definitely reach out to @WenhuChen @yuntiandeng and/or me if you’d like to learn more about Waterloo NLP (and probably catch @hllo_wrld & @lintool next time :)
0
0
2
@fredahshi
Freda Shi
10 months
1
0
2
@fredahshi
Freda Shi
5 months
@lukeZhu20 Yes!! Thanks much, Jian! I should’ve pinged you before asking here ;)
0
0
1
@fredahshi
Freda Shi
8 months
@joycjhsu This is cool! Can I ask a quick question - why would humans say "no" to the teaser question? From a quick glance, it could perfectly well be a "wug" to me :)
3
0
2
@fredahshi
Freda Shi
8 months
@colinraffel Congrats Colin! Looking forward to working with you at Vector!
0
0
2
@fredahshi
Freda Shi
8 months
We are also aware that the method comes with efficiency issues in complicated real-world settings, and addressing them is an exciting direction to explore in the future! (13/13)
0
0
2
@fredahshi
Freda Shi
2 years
@denny_zhou Oh wow, I'm surprised that the homework includes a clear chain-of-thought example! Also, I think this is a challenging example for LMs: it's nontrivial to work out the generalization from <-2, /3> operations to <+2, /4>, even for the me of 20 years ago.
1
0
2
@fredahshi
Freda Shi
7 months
@kanishkamisra Yeah, I still remember how surprised I was in a syntax class seeing people have different acceptability judgments of “y’all”.
1
0
2
@fredahshi
Freda Shi
7 months
@universeinanegg I came across this once before — — perhaps it’s something close to what you’re looking for?
1
0
2
@fredahshi
Freda Shi
8 months
In summary, we show what happens when neuro-symbolic models meet grammars (generalized CCGs): significantly improved performance on compositional generalization. (12/)
1
0
2
@fredahshi
Freda Shi
6 years
Excited to see this!
0
0
2
@fredahshi
Freda Shi
8 months
Paper📝: Project page🌐: Work was led by the amazing @maojiayuan , and done jointly with @jiajunwu_cs , @roger_p_levy , and Josh Tenenbaum. (2/)
1
0
2
@fredahshi
Freda Shi
5 years
Happy days!
@TTIC_Connect
TTIC
5 years
Midwest Speech and Language Days | Day 2. Thank you to all participants, speakers and organizers!
Tweet media one
0
0
1
0
0
2
@fredahshi
Freda Shi
2 years
Surprisingly, with access only to program inputs and no ground-truth outputs, MBR-Exec shows significant improvement over all execution-unaware methods on Python program generation, tested on the MBPP dataset. (4/n)
Tweet media one
1
0
1
@fredahshi
Freda Shi
4 years
Just in case people have a similar issue: I solved this by switching browsers -- Win10 + Chrome 81.0.4044.138 doesn't work, but MS Edge does :)
@fredahshi
Freda Shi
4 years
I am trying to make a submission to @emnlp2020 , but the system asks me to fill out the reviewer data form. I filled it out, it said "thank you", I was redirected to the home page, and everything repeated all over again. Does anyone have a similar problem?
0
0
0
0
0
1
@fredahshi
Freda Shi
24 days
@xwang_lk Thank you Eric! Excited to join the PI club😼
0
0
1
@fredahshi
Freda Shi
24 days
@ChengleiSi Thanks Chenglei! Enjoy NAACL :)
0
0
1
@fredahshi
Freda Shi
4 years
I am trying to make a submission to @emnlp2020 , but the system asks me to fill out the reviewer data form. I filled it out, it said "thank you", I was redirected to the home page, and everything repeated all over again. Does anyone have a similar problem?
0
0
0
@fredahshi
Freda Shi
9 months
@wzhao_nlp @denny_zhou Agreed, and that's our motivation for proofreading multiple times before submitting a paper :) I'd love to see more human experiments and rigorous comparisons!
0
0
1
@fredahshi
Freda Shi
2 months
@xiye_nlp @UAlberta @PrincetonPLI Congrats Xi & welcome to Canada🍁!
0
0
1
@fredahshi
Freda Shi
2 years
Language models spread their probability mass over multiple programs with slightly different implementations but the same underlying functionality. When we have only one chance to choose an output program for each natural language sentence, how should we select it? (2/n)
1
0
1
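One answer, sketched below in the spirit of MBR-Exec as I understand it: execute all candidates on shared inputs and keep the one whose behavior agrees most with the rest, so that probability mass spread over behaviorally equivalent programs gets pooled. The `run_program` helper is a hypothetical stand-in for sandboxed execution.

```python
from collections import Counter

def mbr_exec_select(candidates, test_inputs, run_program):
    """Pick the candidate program whose execution results agree most
    with the other sampled candidates."""
    def signature(prog):
        # Summarize a program's behavior by its outputs on the inputs.
        return tuple(run_program(prog, x) for x in test_inputs)

    sigs = [signature(p) for p in candidates]
    counts = Counter(sigs)
    # The most common execution signature wins: mass spread over
    # equivalent implementations is pooled here.
    best = max(range(len(candidates)), key=lambda i: counts[sigs[i]])
    return candidates[best]
```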
@fredahshi
Freda Shi
3 months
@m2saxon @yuvalmarton This is an excellent point! Starting with America ≠ US, I spent some time realizing that America = US in most of my conversations, and then some more time bringing back America ≠ US.
0
0
1
@fredahshi
Freda Shi
26 days
@akoksal_ Thank you Abdullatif!
0
0
1
@fredahshi
Freda Shi
1 year
@ChenhaoTan @ChicagoHAI This is fantastic! Congrats Chenhao!
1
0
1