Hung-yi Lee (李宏毅) Profile
Hung-yi Lee (李宏毅)

@HungyiLee2

3,814 Followers · 19 Following · 11 Media · 89 Statuses

Hung-yi Lee is currently a professor at National Taiwan University. He runs a YouTube channel teaching deep learning in Mandarin.

Joined March 2020
@HungyiLee2
Hung-yi Lee (李宏毅)
7 months
Fine-tuning the LLaMA-2-Chat model may degrade its original capabilities (). But here's a lifeline: Chat Vector () preserves a chat model's original capabilities (it also works on Mistral). Recommended for everyone fine-tuning their LLMs.
[image attached]
2 replies · 81 reposts · 365 likes
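A minimal sketch of the chat-vector idea in Python, assuming Hugging Face transformers and matching LLaMA-2 checkpoints; the third model ID is a hypothetical fine-tuned model, since the tweet's links are elided. The chat vector is the element-wise difference between chat and base weights, added onto the fine-tuned model:

```python
# Sketch only: add a "chat vector" (chat weights minus base weights)
# to a separately fine-tuned model. Model IDs are illustrative.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
chat = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
tuned = AutoModelForCausalLM.from_pretrained("my-org/llama-2-7b-finetuned")  # hypothetical

with torch.no_grad():
    for name, p in tuned.named_parameters():
        # chat vector for this tensor: theta_chat - theta_base
        delta = chat.get_parameter(name) - base.get_parameter(name)
        p.add_(delta)  # theta_tuned + chat vector

tuned.save_pretrained("llama-2-7b-finetuned-plus-chat-vector")
```

Per the tweet, the same arithmetic carries over to Mistral-style checkpoints, provided all three models share an architecture.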
@HungyiLee2
Hung-yi Lee (李宏毅)
6 months
Discover the cutting-edge world of spoken LLMs with this comprehensive survey! link:
[image attached]
3 replies · 58 reposts · 211 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
2 years
I will attend NAACL 2022 to present "Meta Learning for Natural Language Processing: A Survey" (). The overview paper was written by me, Shang-Wen Li, and Ngoc Thang Vu.
0 replies · 40 reposts · 197 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
28 days
I'll give an overview talk on Spoken Language Models at INTERSPEECH 2024! Join me tomorrow, September 3rd, from 13:30 to 14:10 in the "Lasso" room. Slides:
8 replies · 35 reposts · 193 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
3 years
Received the YouTube Silver Creator Award for reaching 100,000 subscribers. When I started uploading videos about DL to YouTube in the fall of 2016, I never imagined this achievement. Thanks to all subscribers. We learn together.
[two images attached]
7 replies · 12 reposts · 181 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
2 years
Two tutorials at INTERSPEECH'22:
"Self-Supervised Representation Learning for Speech Processing" (slides:)
"Neural Speech Synthesis" (slides:)
0 replies · 33 reposts · 173 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
22 days
Congratulations to the SUPERB Team! Our work on the Speech Processing Universal PERformance Benchmark (SUPERB) has been ranked 7th among the most cited papers at INTERSPEECH over the past five years! A big round of applause to everyone involved.
[image attached]
4 replies · 18 reposts · 171 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
2 months
The paper "Self-Supervised Speech Representation Learning: A Review" is among the top 25 most downloaded papers in IEEE JSTSP! The authors will discuss the latest in speech foundation models. Time: 1:00 PM ET, 6 Aug 2024. Registration page:
[two images attached]
0 replies · 19 reposts · 135 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
7 months
Recent years have witnessed significant developments in audio codec models (an overview figure from ). We introduce Codec-SUPERB () to enable fair and comprehensive comparison. Leaderboard:
[image attached]
1 reply · 21 reposts · 121 likes
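For readers unfamiliar with the codec models being compared, here is a hedged round-trip sketch using EnCodec, one codec of the kind Codec-SUPERB benchmarks, following the public facebookresearch/encodec API; the input file name is a placeholder:

```python
# Round-trip through a neural audio codec (EnCodec, 24 kHz model).
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)  # kbps; lower bandwidth -> fewer code streams

wav, sr = torchaudio.load("utterance.wav")  # placeholder path
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

with torch.no_grad():
    frames = model.encode(wav)   # list of (codes, scale) per chunk
    recon = model.decode(frames) # waveform reconstructed from the codes

codes = torch.cat([c for c, _ in frames], dim=-1)  # [B, n_q, T] discrete tokens
print(codes.shape, recon.shape)
```

A benchmark like Codec-SUPERB would then score `recon` against `wav` across many signals and tasks, which is what makes the comparison fair and comprehensive.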
@HungyiLee2
Hung-yi Lee (李宏毅)
3 months
Launched the "Intro to Generative AI" course with 1,000+ students this spring! Thanks to @dcml0714 for serving as head TA. We use LLMs to evaluate assignments, inspired by his ACL paper (); a sketch of the idea follows below. Check out what we learned:
@dcml0714
Cheng Han Chiang (姜成翰)
3 months
❗ New Paper❗ 📄 In '23, we proposed LLM-as-judge for NLP research 🤔 Any real-world applications? 💯 Now, we use LLM as an automatic assignment evaluator in a course with 1000+ students at National Taiwan University, led by @HungyiLee2 with me as a TA 🔗
[image attached]
(quoted tweet: 1 reply · 8 reposts · 49 likes)
0 replies · 14 reposts · 117 likes
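As a rough illustration of LLM-based assignment evaluation (not the course's actual pipeline; the rubric, model name, and example answers are invented for this sketch), one can prompt a judge model with a reference answer and a student answer:

```python
# Hedged sketch of LLM-as-a-judge grading; rubric and model are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "You are a teaching assistant. Compare the student answer with the "
    "reference answer, output a score from 1 (wrong) to 5 (fully correct), "
    "then justify the score in one sentence.\n\n"
    "Reference answer: {ref}\nStudent answer: {ans}"
)

def grade(ref: str, ans: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any sufficiently capable judge model
        messages=[{"role": "user", "content": PROMPT.format(ref=ref, ans=ans)}],
        temperature=0,  # keep grading deterministic across students
    )
    return resp.choices[0].message.content

print(grade("Softmax normalizes attention scores so they sum to 1.",
            "The scores go through a softmax, making them sum to one."))
```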
@HungyiLee2
Hung-yi Lee (李宏毅)
3 months
Exploring task vectors: not just for text LLMs learning new languages (), but also helpful for speech models. Train with domain-specific synthetic data, then adapt to real speech using a task vector ().
[image attached]
2 replies · 20 reposts · 114 likes
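A minimal sketch of the task-vector arithmetic behind this, operating on PyTorch state dicts; the checkpoint names and the scaling coefficient are illustrative. The task vector is tau = theta_finetuned - theta_pretrained, applied as theta_target + lambda * tau:

```python
# Task-vector arithmetic on state dicts (checkpoint names are placeholders;
# assumes all three checkpoints share the same architecture and keys).
import torch

pre = torch.load("pretrained.pt")          # theta_pretrained
ft = torch.load("finetuned_synthetic.pt")  # theta after training on synthetic speech
tgt = torch.load("target.pt")              # model to adapt to real speech

tau = {k: ft[k] - pre[k] for k in pre}             # task vector
adapted = {k: tgt[k] + 0.5 * tau[k] for k in tgt}  # lambda = 0.5, tuned on dev data

torch.save(adapted, "adapted.pt")
```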
@HungyiLee2
Hung-yi Lee (李宏毅)
1 year
Attending #ICASSP2023 in Rhodes, Greece? Don't miss the workshop on "Self-supervision in Audio, Speech & Beyond". Dive deep into the advancements in self-supervised learning. Catch me delivering the workshop keynote @ Jupiter Ballroom, 8:40 a.m. GMT+3.
2 replies · 17 reposts · 96 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
4 months
What's the best token unit for speech in LLMs? Dive into this question at the Codec-SUPERB Challenge at SLT 2024! We're now accepting submissions. For more information, please visit the challenge's webpage:
[image attached]
0 replies · 6 reposts · 91 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
2 years
Abdelrahman Mohamed (Meta), Shinji Watanabe (CMU), Tara Sainath (Google), Karen Livescu (TTIC), Shang-Wen Li (Meta), Shu-wen Yang (NTU), Katrin Kirchhoff (Amazon), and I will give a tutorial about self-supervised learning for speech at NAACL 2022.
0 replies · 15 reposts · 87 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
10 months
Excited to speak at #ASRU2023 tomorrow (December 20) at 11:30 AM (GMT+8) on "The Journey of Advancements in Speech Foundation Models"! We'll explore the evolution of speech foundation models. Below, please find the slides:
2 replies · 23 reposts · 78 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
5 months
Watched OpenAI's demo and was amazed by GPT-4's speech understanding and interaction. Dynamic-SUPERB is collecting speech tasks to challenge foundation models. Submit your innovative tasks to advance speech processing! More info:
3 replies · 13 reposts · 80 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
5 months
Join us for the Dynamic-SUPERB call-for-tasks event. Submit your innovative task to challenge the speech foundation models that can understand task instruction. Let's push the boundaries of what speech foundation models can do!
[image attached]
1 reply · 19 reposts · 75 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
2 months
Congratulations to Cheng Han Chiang ( @dcml0714 ) for winning the Best Paper Award at the ACL 2024 Knowledgeable LMs workshop! This paper tackles the issue I mentioned in my course (): combining correct facts can sometimes result in an incorrect response.
@dcml0714
Cheng Han Chiang (姜成翰)
2 months
🎉 Very honored and flattered to receive the Best Paper Award at the Knowledgeable LMs workshop at #ACL2024. It means A LOT to be granted an award by community members who work on knowledge and LMs. I'll keep working on topics in this direction! Great collaboration with @HungyiLee2
[image attached]
(quoted tweet: 5 replies · 6 reposts · 65 likes)
4 replies · 4 reposts · 70 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
3 years
Self-Supervised Learning for Speech and Audio Processing Workshop @ AAAI 2022
Website:
Submission deadline: November 15th, 2021 (Anywhere on Earth), less than 24 hours away!
Submission website:
Contact: sas.aaai.2022@gmail.com
0 replies · 18 reposts · 64 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
1 year
If you're participating in ICML 2023, do not miss the workshop "What's Left to TEACH (Trustworthy, Enhanced, Adaptable, Capable, and Human-centric) Chatbots?" It's happening today in Room 303. #ICML2023
0 replies · 9 reposts · 60 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
3 years
Workshop on Self-supervised Learning for Audio and Speech Processing @ AAAI 2022 starts at 8:50 a.m., EST (9:50 p.m. GMT+8), February 28. If you want to hear about exciting new advances in self-supervised learning, don't miss it.
1 reply · 8 reposts · 54 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
1 year
Join us for ASRU's satellite event - the Workshop on Speech Foundation Models & Performance Benchmarks (SPARKS), on Dec 16th, 2023, in Taiwan. 📌 Paper Submission: Oct 19th 🔗 Webpage: Tip: When registering for ASRU, tick the SPARKS option. #ASRU
0 replies · 18 reposts · 54 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
5 months
Join the Webinar Series for Advancements in Audio, Speech and Language Technology. Next up: "End-to-End Automatic Speech Recognition" by Dr. Jinyu Li from Microsoft on May 10 @ 1:00 pm EDT (May 11 @ 1:00 am Taiwan time) Register now:
0 replies · 9 reposts · 53 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
2 years
There have been many new developments in pre-trained LMs recently. Cheng-Han Chiang @dcml0714, @YungSungChuang, and I will give a tutorial on the latest advances at AACL-IJCNLP 2022, from 5:00 p.m. to 8:00 p.m. on Nov 20th (Taiwan time).
1 reply · 9 reposts · 50 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
4 months
Webinar Series for Advancements in Audio, Speech, and Language Technology
Next webinar: "Neural Target Speech and Sound Extraction: An Overview"
Speaker: Dr. Marc Delcroix
Time: June 6, 2024, 7:30 PM (NY time)
Register:
1 reply · 6 reposts · 40 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
2 months
SPS SLTC/AASP TC Webinar
Don't miss out on recent advances in speech separation, end-to-end modeling, speaker diarization, and more!
Speaker: Dr. Takuya Yoshioka, Director of Research at AssemblyAI
Time: 1:00 PM ET, 23 July 2024
Register here:
0 replies · 6 reposts · 38 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
10 months
Join us for an enlightening afternoon with distinguished speech researchers Dr. Andreas Stolcke and Prof. Torbjørn Svendsen. Their talks will take place at Barry Lam Hall (博理館) (), R101 (Auditorium), NTU, on December 21st, starting at 2:20 PM. #ASRU2023
[two images attached]
0 replies · 6 reposts · 32 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
26 days
Excited to speak at CHiME 2024, co-located with INTERSPEECH! Join me on Sept 6th, 14:00-15:00, for "Teaching New Skills to Foundation Models: Insights and Experiences." Learn why fine-tuning is more challenging than it seems! Workshop link:
0 replies · 4 reposts · 31 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
3 years
Three years ago, when we first tried to use GANs for unsupervised ASR (), I thought the idea was science fiction. But a few days ago, Facebook AI pushed GAN-based unsupervised ASR to a 5.9% WER on LibriSpeech ().
0 replies · 10 reposts · 28 likes
@HungyiLee2
Hung-yi Lee (李宏毅)
3 years
HuBERT achieves surprisingly good performance on SUPERB (), the speech counterpart of GLUE. As we all know, SuperGLUE was constructed after pre-trained LMs achieved superhuman performance on GLUE. Maybe we have to consider SuperSUPERB now. (A sketch of the SUPERB probing recipe follows below.)
@AIatMeta
AI at Meta
3 years
We are releasing pretrained HuBERT speech representation models and code for recognition and generation. By alternating clustering and prediction steps, HuBERT learns to invent discrete tokens representing continuous spoken input. Learn more:
[image attached]
(quoted tweet: 3 replies · 85 reposts · 286 likes)
0 replies · 2 reposts · 4 likes
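A small sketch of the SUPERB-style recipe the tweet alludes to, using torchaudio's HuBERT bundle; the downstream head, task, and input are placeholders. The upstream model stays frozen and only a lightweight head is trained on its features:

```python
# Frozen HuBERT upstream + trainable downstream head (SUPERB-style probing).
import torch
import torchaudio

bundle = torchaudio.pipelines.HUBERT_BASE
upstream = bundle.get_model().eval()
for p in upstream.parameters():
    p.requires_grad = False  # SUPERB keeps the upstream model frozen

head = torch.nn.Linear(768, 10)  # placeholder 10-class downstream task

wav = torch.randn(1, bundle.sample_rate)  # stand-in for a 1-second utterance
with torch.no_grad():
    feats, _ = upstream.extract_features(wav)  # per-layer features [B, T', 768]
logits = head(feats[-1].mean(dim=1))  # mean-pool over time; only `head` is trained
```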