Shuang Li Profile
Shuang Li

@ShuangL13799063

5,134
Followers
800
Following
27
Media
239
Statuses

Incoming Assistant Professor @UofT, Postdoc @Stanford, PhD @MIT. Working on #Generative_Modeling and #Robot_Learning.

Palo Alto
Joined January 2020
@ShuangL13799063
Shuang Li
2 years
Compositional Visual Generation with Composable Diffusion Models – ECCV 2022 @nanliuuu @du_yilun Antonio Torralba, Joshua B. Tenenbaum. We present Composable-Diffusion, an approach for compositional visual generation by composing multiple diffusion models together.
6
91
550
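A minimal sketch of the composition idea, assuming a diffusion model that exposes a noise-prediction function; model.eps, the operator form, and the weights below are illustrative assumptions rather than the released code:

import torch

def composed_eps(model, x_t, t, prompts, weights, negated=None, neg_weight=0.5):
    # Hedged sketch: combine per-prompt noise predictions around the unconditional
    # prediction (AND = weighted sum of conditional offsets, NOT = subtract an offset).
    eps_uncond = model.eps(x_t, t, cond=None)                # unconditional prediction
    eps = eps_uncond.clone()
    for prompt, w in zip(prompts, weights):                  # conjunction (AND)
        eps = eps + w * (model.eps(x_t, t, cond=prompt) - eps_uncond)
    if negated is not None:                                  # negation (NOT)
        eps = eps - neg_weight * (model.eps(x_t, t, cond=negated) - eps_uncond)
    return eps

The composed prediction can then be dropped into the model's usual sampling loop, which is why composition requires no retraining.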
@ShuangL13799063
Shuang Li
2 years
Composable-Diffusion can compose language to generate 3D assets now! Composable 3D asset generation without any training/fine-tuning (the diffusion model is Point-E). Check our webpage and demo: "A green avocado" AND "A chair"
@ShuangL13799063
Shuang Li
2 years
Compositional Visual Generation with Composable Diffusion Models – ECCV 2022 @nanliuuu @du_yilun Antonio Torralba, Joshua B. Tenenbaum. We present Composable-Diffusion, an approach for compositional visual generation by composing multiple diffusion models together.
6
91
550
3
52
287
@ShuangL13799063
Shuang Li
3 years
Can pre-trained language models (LM) be used as a general framework for tasks across different environments? We study LM pre-training as a general framework for embodied decision-making. @xavierpuigf @du_yilun @clintonjwang @akyurekekin Antonio Torralba @jacobandreas @IMordatch
2
21
164
@ShuangL13799063
Shuang Li
2 years
How can we utilize pre-trained models from different modalities? We introduce a unified framework for composing ensembles of different pre-trained models to solve various multimodal problems in a zero-shot manner. @du_yilun @IMordatch , Antonio, and Josh.
3
25
153
@ShuangL13799063
Shuang Li
3 years
We have two papers accepted at NeurIPS 2021. One of them was selected as a spotlight presentation! Both are about compositional generation using EBMs! Thanks to all the collaborators!
2
1
136
@ShuangL13799063
Shuang Li
3 years
How can we understand the visual relations in a scene? In our NeurIPS spotlight, we present a method to understand scene relations in a compositional and modular way. Website: With @nanliuuu , @du_yilun , Josh Tenenbaum, and Antonio Torralba
3
18
115
@ShuangL13799063
Shuang Li
3 years
Our NeurIPS spotlight paper on composing visual relations is now featured on MIT news! with @nanliuuu , @du_yilun , Joshua Tenenbaum, and Antonio Torralba More information:
@MIT
Massachusetts Institute of Technology (MIT)
3 years
A new machine-learning model, developed at CSAIL, could help robots understand interactions in the world much as humans do. The work moves the field one step closer to enabling machines that can learn from and interact with their environments.
2
42
103
0
11
69
@ShuangL13799063
Shuang Li
3 years
Glad to share our NeurIPS paper "Unsupervised Learning of Compositional Energy Concepts" done with my great collaborators @du_yilun , @yash_j_sharma , Josh Tenenbaum, and @IMordatch . The code is available now:
@du_yilun
Yilun Du
3 years
How can we discover, unsupervised, the underlying objects and global factors of variation in the world? We show how to discover these factors as different energy functions. Website: w/ @ShuangL13799063 , @yash_j_sharma , Josh Tenenbaum, @IMordatch (1/4)
5
46
242
0
4
60
@ShuangL13799063
Shuang Li
2 years
Composing pre-trained models for zero-shot 3D perception, including object searching, self-driving, robot manipulation, and language-guided searching. Led by the amazing @_krishna_murthy Project:
@_krishna_murthy
Krishna Murthy
2 years
Announcing our newest preprint Features from CLIP, DINO are ready for zero-shot 3D perception. ConceptFusion builds open-set multimodal 3D maps by fusing features to 3D. These maps can be queried by text, image, click, and audio📜👇
10
129
601
1
5
59
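As a rough illustration of what open-set querying of such a fused map could look like (assuming each 3D point stores a CLIP-space feature; the names below are hypothetical, not the ConceptFusion API):

import torch
import torch.nn.functional as F

def text_query_heatmap(point_features, text_feature):
    # point_features: (N, D) features fused onto N map points (e.g. CLIP space)
    # text_feature:   (D,) embedding of the text query in the same space
    pts = F.normalize(point_features, dim=-1)
    txt = F.normalize(text_feature, dim=-1)
    return pts @ txt   # cosine similarity: one relevance score per 3D point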
@ShuangL13799063
Shuang Li
3 years
I am very glad to share our recent work ‘3D Neural Scene Representations for Visuomotor Control’, with my amazing collaborators Yunzhu Li ( @YunzhuLiYZ ) Vincent Sitzmann ( @vincesitzmann ), Pulkit Agrawal ( @pulkitology ), Antonio Torralba.
@YunzhuLiYZ
Yunzhu Li
3 years
Introducing “3D Neural Scene Representations for Visuomotor Control”! (w/ video!) We combine implicit neural scene representations with intuitive physics models, enabling visuomotor control of dynamic 3D scenes from out-of-distribution viewpoints. (1/7)
2
34
170
2
7
58
@ShuangL13799063
Shuang Li
2 years
"A chair" AND NOT "Chair legs"
@ShuangL13799063
Shuang Li
2 years
Composable-Diffusion can compose language to generate 3D assets now! Composable 3D asset generation without any training/fine-tuning (the diffusion model is Point-E). Check our webpage and demo: "A green avocado" AND "A chair"
3
52
287
2
7
50
@ShuangL13799063
Shuang Li
1 year
How do we utilize pre-trained models and compose them to solve novel tasks without jointly training them on end-to-end data? Compositional Foundation Models for Hierarchical Planning
@du_yilun
Yilun Du
1 year
A major challenge to constructing foundation models for decision making is data scarcity. We present a “compositional foundation model”, which addresses this by composing existing foundation models, each capturing a sub-part of decision making. (1/4)
4
39
225
0
3
46
@ShuangL13799063
Shuang Li
2 years
Very cool work!
@_akhaliq
AK
2 years
ARF: Artistic Radiance Fields abs: project page: github: create high-quality artistic 3D content by transferring the style of an exemplar image, such as a painting or sketch, to NeRF and its variants
2
133
532
0
1
45
@ShuangL13799063
Shuang Li
2 years
Composable-Diffusion can be applied to many pre-trained diffusion models in a few lines of code. The quality of the results depends on how good the pre-trained model is, so 2D generations are still far superior to 3D.
@ShuangL13799063
Shuang Li
2 years
Composable-Diffusion can compose language to generate 3D assets now! Composable 3D asset generation without any training/fine-tuning (the diffusion model is Point-E). Check our webpage and demo: "A green avocado" AND "A chair"
3
52
287
0
7
40
@ShuangL13799063
Shuang Li
3 years
Come and join the EBM workshop today! I will give an Oral presentation on how to use energy-based models for continual learning. arxiv:
@EBMworkshop
Energy-Based Models
3 years
We are starting soon! Join our live stream 👇
0
4
4
2
5
27
@ShuangL13799063
Shuang Li
2 years
Thanks a lot for the great help and support from Nvidia and MIT.
1
0
27
@ShuangL13799063
Shuang Li
2 years
Composable-Diffusion allows compositional visual generation across a variety of domains, such as language descriptions, objects, and object relations. It can compose multiple concepts during inference and generate images containing the input concepts without further training.
1
4
27
@ShuangL13799063
Shuang Li
2 years
Code + Models + Paper: Diffusion Models are closely related to Energy-based Models (EBMs). Such connections allow us to compose diffusion models using compositional operators as we did in EBMs ().
1
3
27
@ShuangL13799063
Shuang Li
4 years
Our #NeurIPS2020 Spotlight paper on compositional generation. We define compositions of separate models as a set of logical operators, including disjunction, conjunction, and negation over energy functions. Paper: . Website: .
@du_yilun
Yilun Du
4 years
Check out our #NeurIPS2020 spotlight on compositional generation! We compose independently trained generative models on separate datasets together to generate new combinations without retraining. Paper: . Website: . Thread(1/3)
1
6
68
0
1
25
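A hedged sketch of those logical operators over energy functions, with approximate sampling via Langevin dynamics; the exact weights and noise schedules in the paper may differ:

import torch

def conjunction(energies):            # AND: low energy only where every concept has low energy
    return lambda x: sum(E(x) for E in energies)

def disjunction(energies):            # OR: a soft minimum over the individual energies
    return lambda x: -torch.logsumexp(torch.stack([-E(x) for E in energies]), dim=0)

def negation(E_keep, E_neg, alpha=0.1):   # NOT: penalize the negated concept
    return lambda x: E_keep(x) - alpha * E_neg(x)

def langevin_sample(E, x, steps=60, step_size=10.0, noise=0.005):
    # Approximate sample from p(x) proportional to exp(-E(x)) by noisy gradient descent on the composed energy.
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(E(x).sum(), x)[0]
        x = x - step_size * grad + noise * torch.randn_like(x)
    return x.detach()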
@ShuangL13799063
Shuang Li
3 years
I am very glad to share our ICCV work ‘Weakly Supervised Human-Object Interaction Detection in Video via Contrastive Spatiotemporal Regions’, with my amazing collaborators Yilun Du, Antonio Torralba, Josef Sivic, and Bryan Russell.
0
0
25
@ShuangL13799063
Shuang Li
5 months
The EECS Rising Stars workshop will be held at MIT this year. We strongly encourage eligible individuals to apply.
0
0
23
@ShuangL13799063
Shuang Li
2 years
Thanks to UT Austin for organizing this Rising Stars workshop! Very helpful for people who are interested in academic jobs! Strongly recommend!
@utexasece
Texas ECE
2 years
Thank you to all who participated in #RisingStarsinEECS2022 ! @utexasece and @UTCompSci were proud to host these amazing future leaders in academia. #WhatStartsHere
1
11
48
1
0
20
@ShuangL13799063
Shuang Li
2 years
Our ECCV paper "Compositional Visual Generation with Composable Diffusion Models" @nanliuuu @du_yilun has been featured as the lead article on MIT News: Project Page: Demos are available at:
@ShuangL13799063
Shuang Li
2 years
Compositional Visual Generation with Composable Diffusion Models – ECCV 2022 @nanliuuu @du_yilun Antonio Torralba, Joshua B. Tenenbaum. We present Composable-Diffusion, an approach for compositional visual generation by composing multiple diffusion models together.
6
91
550
0
1
19
@ShuangL13799063
Shuang Li
1 year
Check our latest paper, "Improving Factuality and Reasoning in Language Models through Multiagent Debate" .
0
1
18
@ShuangL13799063
Shuang Li
1 year
Debate Between ChatGPT and Bard. While both models generate incorrect responses to the initial GSM8K problem, the debate between ChatGPT and Bard enables them to generate the correct final answer.
@du_yilun
Yilun Du
1 year
We find that multiagent debate can also enable two different instances of language models (chatGPT and Bard) to cooperatively solve a task they individually cannot. (4/5)
2
8
40
1
2
18
@ShuangL13799063
Shuang Li
2 years
Check out our website at for more related papers.
1
12
17
@ShuangL13799063
Shuang Li
2 years
Images generated conditioned on individual sentence descriptions and on their composition. In each example, the first two images are generated conditioned on the individual sentence descriptions, and the last image is generated by composing the sentences.
1
1
16
@ShuangL13799063
Shuang Li
4 months
We are hosting the Social Intelligence in Humans and Robots workshop at #RSS2024 ! Join our hybrid workshop on July 19 to discuss the development of social intelligence for AI systems with our amazing lineup of speakers. Our website is here: .
@SIHRworkshop
Social Intelligence in Humans and Robots
4 months
We are hosting the Social Intelligence in Humans and Robots workshop at #RSS2024 ! Join our hybrid workshop on July 19 to discuss the development of social intelligence for AI systems with our amazing lineup of speakers. Our website is here: .
1
3
8
0
1
15
@ShuangL13799063
Shuang Li
2 years
code is available
@_akhaliq
AK
2 years
github:
2
21
83
0
2
15
@ShuangL13799063
Shuang Li
2 years
Check out our ICML2022 work on "Learning Iterative Reasoning through Energy Minimization".
@du_yilun
Yilun Du
2 years
(1/6) How can we learn to iteratively reason about the problems in the world? In our #ICML2022 paper, we introduce an approach towards iterative reasoning based off energy minimization : Website: w/ @ShuangL13799063 Josh Tenenbaum @IMordatch
7
77
474
0
1
14
@ShuangL13799063
Shuang Li
2 years
We are organizing the 2nd workshop on Social Intelligence in Humans and Robots at RSS 2022! The paper submission deadline is May 27th!
@SIHRworkshop
Social Intelligence in Humans and Robots
2 years
We are hosting the Social Intelligence in Humans and Robots workshop, at #RSS2022 ! Join us on July 1st, virtually or in New York to talk about how to build socially intelligent robots, and how studying human social intelligence can help us get there. Web:
1
5
13
0
2
12
@ShuangL13799063
Shuang Li
2 years
So excited to join #CVPR2022 in-person.
0
0
13
@ShuangL13799063
Shuang Li
4 years
Excited to share our work Watch-And-Help with fantastic collaborators @xavierpuigf , @tianminshu , Zilin Wang, Joshua B. Tenenbaum, Sanja Fidler, and Antonio Torralba! Our work won a Best Paper Award at the #NeurIPS2020 Cooperative AI Workshop!
1
0
13
@ShuangL13799063
Shuang Li
1 year
Can you use an LM to automatically design experiments and interpret unknown functions/neural networks? We find that an LM augmented with only black-box access to functions can infer their structure, acting as a scientist by forming hypotheses and proposing experiments.
@cogconfluence
Sarah Schwettmann
1 year
🚨 NEW PAPER 🚨 Understanding increasingly large and complex neural networks will almost certainly require other AI models (themselves uninterpretable!) How should we evaluate automated interpretability methods? Introducing our new benchmark, FIND:
1
25
94
0
3
12
@ShuangL13799063
Shuang Li
2 years
I am in NOLA! Looking forward to talking to many of you!
@jacobandreas
Jacob Andreas
2 years
I'm not at NeurIPS, but if you're around tomorrow you should (1) check out @ShuangL13799063 's ("oral-equivalent"?) paper on using language model pretraining to improve generalization in interactive planning & decision-making tasks:
1
3
10
0
0
12
@ShuangL13799063
Shuang Li
1 year
Great work! @xavierpuigf
@xavierpuigf
Xavier Puig @ ECCV 🇮🇹
1 year
Thrilled to announce Habitat 3.0, an Embodied AI simulator to study human-robot interaction at scale! Habitat 3.0 is designed to train and evaluate agents to perform tasks along with humans, it includes: - Humanoid simulation - Human interaction tools - Multi-agent benchmarks 1/6
1
5
58
0
1
11
@ShuangL13799063
Shuang Li
2 years
"A couch" AND "A boat"
@ShuangL13799063
Shuang Li
2 years
Composable-Diffusion can compose language to generate 3D assets now! Composable 3D asset generation without any training/fine-tuning (the diffusion model is Point-E). Check our webpage and demo: "A green avocado" AND "A chair"
3
52
287
0
0
10
@ShuangL13799063
Shuang Li
4 years
A big improvement on contrastive divergence training of EBMs. Check out our recent work with @du_yilun and @IMordatch
@du_yilun
Yilun Du
4 years
Check out our new paper on EBM training! We show a neglected term in contrastive divergence significantly improves stability of EBMs. We also propose new tricks for better generation. Website: With @ShuangL13799063 Josh Tenenbaum, @IMordatch (1/4)
2
7
70
1
2
10
@ShuangL13799063
Shuang Li
3 years
We are glad to release VirtualHome 2.3.0 with more features, including procedural generation, physics, better graphics, improved performance, etc. Led by @xavierpuigf @KabirSwain Github: Colab:
@xavierpuigf
Xavier Puig @ ECCV 🇮🇹
3 years
Releasing VirtualHome 2.3.0!! 🚀 The new version, led by @KabirSwain , comes with a lot of updates, including procedural generation, physics, better graphics, improved performance, and time management. API+Demo: A thread with the main updates 👇
1
14
93
0
0
9
@ShuangL13799063
Shuang Li
2 years
Just arrived in NOLA! Happy to chat with many of you!
0
1
9
@ShuangL13799063
Shuang Li
11 months
Big congrats! @YunzhuLiYZ
@YunzhuLiYZ
Yunzhu Li
11 months
🎉 Excited to share that we've won the Best Systems Paper Award at #CoRL2023 for our work on RoboCook! A huge shoutout to the incredible team: @HaochenShi74 (lead), @HarryXu12 , Samuel Clarke, and @jiajunwu_cs .
15
8
215
1
0
9
@ShuangL13799063
Shuang Li
3 years
@fredodurand
Fredo Durand
3 years
Our new work (with fun demo!) on making better line drawing by making them informative, as assessed by a neural network's ability to infer depth and semantics. With Caroline Chan and @phillip_isola
7
37
200
0
0
8
@ShuangL13799063
Shuang Li
3 years
The proposed model allows us to generate/edit complex images by composing multiple relational descriptions. Our model is trained on a single relational description and the composed scene relations (2, 3, and more relational descriptions) are outside the training distribution.
0
0
8
@ShuangL13799063
Shuang Li
4 years
Most promisingly, energy-based models perform well in class-incremental learning without relying on stored data and without using replay, and they don’t need task boundaries.
@IMordatch
Igor Mordatch
4 years
Excited to share our work investigating energy-based models for continual learning and how they are naturally less prone to catastrophic forgetting: with fantastic collaborators @ShuangL13799063 @du_yilun @GMvandeVen and A. Torralba
0
3
38
0
0
8
@ShuangL13799063
Shuang Li
2 years
The Negation operator is also interesting to play with. For example, ‘AND dark’ will give black and white images. ‘NOT dark’ will give colorful images
@TomLikesRobots
TomLikesRobots🤖
2 years
Nearly there. Right number of fingers (if a bit sausage-like?). Rendered at 1024x1024 using Automatic's Highres. fix. Tweaked with composable and negative prompts. #stablediffusion #AIart #fantasy
12
5
103
0
0
8
@ShuangL13799063
Shuang Li
2 years
@karpathy Thanks a lot for your attention. We have interactive demos available on Huggingface now: We are interested in improving models' generalization ability. Here are more related papers:
0
1
8
@ShuangL13799063
Shuang Li
3 years
For combinatorial generalization to out-of-distribution tasks, i.e. tasks involving new combinations of goals, states or objects, we find that LM pre-training improves task completion rates by 43.6% for tasks involving novel goals.
1
0
7
@ShuangL13799063
Shuang Li
3 years
Project page: Paper link:
1
0
7
@ShuangL13799063
Shuang Li
3 years
We propose to use pretrained LMs as a general framework for interactive decision-making by converting all policy inputs into sequential data. The effectiveness of pretraining is not limited to natural strings, but in fact extends to arbitrary sequential encodings.
@IMordatch
Igor Mordatch
3 years
Great work by Shuang and everyone involved! I think a particularly interesting and curious result here is the improvement in combinatorial generalization that LM pre-training induces.
0
0
13
0
1
7
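A minimal sketch of that recipe, assuming a GPT-2 backbone from Hugging Face transformers; the text serialization and discrete action head are illustrative choices (and, per the tweet above, the encoding need not even be a natural-language string):

import torch
from transformers import GPT2Model, GPT2Tokenizer

class LMPolicy(torch.nn.Module):
    # Hedged sketch: serialize goal/observation/history into one sequence, encode it
    # with a pre-trained LM, and predict an action from the final hidden state.
    def __init__(self, num_actions):
        super().__init__()
        self.tok = GPT2Tokenizer.from_pretrained("gpt2")
        self.lm = GPT2Model.from_pretrained("gpt2")
        self.action_head = torch.nn.Linear(self.lm.config.n_embd, num_actions)

    def forward(self, goal, observation, history):
        text = f"goal: {goal} | obs: {observation} | history: {history}"   # one possible encoding
        ids = self.tok(text, return_tensors="pt").input_ids
        h = self.lm(ids).last_hidden_state[:, -1]   # representation of the whole sequence
        return self.action_head(h)                  # logits over discrete actions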
@ShuangL13799063
Shuang Li
2 years
@FelixHill84 Thanks a lot for your attention. Our inference demo is available on Huggingface now: We are interested in improving models' generalization ability. Here are more related works:
0
1
7
@ShuangL13799063
Shuang Li
3 years
Our ‘3D Neural Scene Representations for Visuomotor Control’ has been accepted at CoRL 2021 as an oral presentation. Thanks to all the collaborators!
@ShuangL13799063
Shuang Li
3 years
I am very glad to share our recent work ‘3D Neural Scene Representations for Visuomotor Control’, with my amazing collaborators Yunzhu Li ( @YunzhuLiYZ ) Vincent Sitzmann ( @vincesitzmann ), Pulkit Agrawal ( @pulkitology ), Antonio Torralba.
2
7
58
0
0
7
@ShuangL13799063
Shuang Li
2 years
super cool work
@_akhaliq
AK
2 years
DreamFusion: Text-to-3D using 2D Diffusion paper: abs: project page: DeepDream on a pretrained 2D diffusion model enables text-to-3D synthesis
28
418
2K
0
0
6
@ShuangL13799063
Shuang Li
3 years
Does the combinatorial generalization arise because LMs are effective models of the relations between natural language descriptions of states and actions, or because they provide a more general framework for combinatorial generalization in decision-making?
1
0
6
@ShuangL13799063
Shuang Li
3 years
We find that sequential representations (vs. fixed-dimensional feature vectors) and the LM objective are both important for generalization; however, the input encoding scheme (e.g., a natural language string vs. an arbitrary encoding scheme) has little influence.
0
0
6
@ShuangL13799063
Shuang Li
2 years
Image generation: the generator is a pre-trained diffusion model, and multiple scorers, such as CLIP and image classifiers, provide feedback to the generator. Robot manipulation: the generator is MPC + a world model, and a pre-trained image segmentation model computes scores from multiple camera views.
1
0
5
@ShuangL13799063
Shuang Li
3 years
We compose individual object relations using Energy-based Models. Below, we illustrate the process of composing energies during training for image generation.
1
0
6
@ShuangL13799063
Shuang Li
6 months
@igilitschenski @UofT Welcome to the Bay Area! It was super nice to chat with you.
0
0
6
@ShuangL13799063
Shuang Li
3 years
My collaborator will give another Oral presentation "Improved Contrastive Divergence Training of Energy Based Models".
1
0
6
@ShuangL13799063
Shuang Li
2 years
Video question answering: GPT-2 is used as the generator, and a set of CLIP models are used as scorers. Grade school math: GPT-2 is used as the generator, and a set of question-solution classifiers are used as scorers.
1
0
4
@ShuangL13799063
Shuang Li
3 years
Congratulations to @taochenshh and Pulkit for winning the best paper award at CoRL!
@pulkitology
Pulkit Agrawal
3 years
Congratulations to @taochenshh and Jie for winning the Best Paper Award at CoRL 2021 @corl_conf . It took a lot of grit, but was a lot of fun :)
13
14
272
0
0
5
@ShuangL13799063
Shuang Li
3 years
@wgrathwohl yes, we will put them on arxiv.
0
0
4
@ShuangL13799063
Shuang Li
5 months
0
0
4
@ShuangL13799063
Shuang Li
10 months
@demi_guo_ Congrats! 🎉
1
0
2
@ShuangL13799063
Shuang Li
2 years
RSS Workshop on Social Intelligence in Humans and Robots is happening in 2 days @ #RSS2022 !
@SIHRworkshop
Social Intelligence in Humans and Robots
2 years
Our workshop, Social Intelligence in Humans and Robots, is happening in 2 days @ #RSS2022 ! Join us this Friday 9-6pm ET in person or online to learn about the origins of human social intelligence, and how to build more socially intelligent robots.
2
4
16
0
0
4
@ShuangL13799063
Shuang Li
2 years
"A monitor" AND "A brown couch"
@ShuangL13799063
Shuang Li
2 years
Composable-Diffusion can compose language to generate 3D assets now! Composable 3D asset generation without any training/fine-tuning (the diffusion model is Point-E). Check our webpage and demo: "A green avocado" AND "A chair"
3
52
287
0
0
4
@ShuangL13799063
Shuang Li
2 years
We use pre-trained models as "generators" or "scorers" and compose them via iterative consensus optimization. The generator iteratively constructs proposals, and the scorers give feedback to refine the result. This framework enables zero-shot generalization on multimodal tasks.
1
0
3
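A hedged sketch of that generate-and-score loop; generator.propose and scorer.score stand in for calls to pre-trained models rather than any specific released API:

import torch

def iterative_consensus(generator, scorers, steps=10, num_candidates=8):
    best = None
    for _ in range(steps):
        candidates = generator.propose(best, n=num_candidates)        # refine around the current best (None at first)
        scores = torch.stack([s.score(candidates) for s in scorers])  # (num_scorers, num_candidates)
        consensus = scores.mean(dim=0)                                # simple average as the consensus signal
        best = candidates[consensus.argmax()]
    return best

For image generation, for instance, the generator could be a pre-trained diffusion model and the scorers CLIP plus image classifiers, as described elsewhere in this thread.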
@ShuangL13799063
Shuang Li
10 months
Another cool work by @liuziwei7
@liuziwei7
Ziwei Liu
10 months
🎯Align GenAI with Human Preference🎯 #InstructVideo instructs video diffusion models with human feedback by reward fine-tuning, enhancing the video generation quality/aesthetics - Project: - Paper: - Code:
0
18
83
0
1
4
@ShuangL13799063
Shuang Li
2 years
"A chair" AND "A cake"
@ShuangL13799063
Shuang Li
2 years
Composable-Diffusion can compose language to generate 3D assets now! Composable 3D asset generation without any training/fine-tuning (the diffusion model is Point-E). Check our webpage and demo: "A green avocado" AND "A chair"
3
52
287
0
1
3
@ShuangL13799063
Shuang Li
3 years
We further demonstrate the richness of the learned 3D dynamics model by performing future prediction and novel view synthesis. More results can be found on our webpage:
0
0
3
@ShuangL13799063
Shuang Li
1 year
Can "debate" among multiple language models effectively enhance factuality and reasoning? Check our latest paper, "Improving Factuality and Reasoning in Language Models through Multiagent Debate" .
0
0
3
@ShuangL13799063
Shuang Li
4 years
We are organizing an #icra2021 workshop on Social Intelligence in Humans and Robots. We welcome submissions by April 9th!
@SIHRworkshop
Social Intelligence in Humans and Robots
4 years
Introducing the first workshop on Social Intelligence in Humans and Robots, to be held at #ICRA2021 . We will bring views from multiple disciplines to discuss the origins of human social intelligence and how to build socially intelligent robots.
1
0
3
0
0
3
@ShuangL13799063
Shuang Li
1 year
Multiagent Debate Improves Reasoning and Factual Accuracy.
@du_yilun
Yilun Du
1 year
We prompt each model/agent with the same initial question, and ask each agent to iteratively critique and update their responses given the responses of other agents. We find this improves performance across a set of different reasoning and factuality benchmarks. (2/5)
1
2
24
0
0
3
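A minimal sketch of that debate loop; agent(prompt) stands in for any chat-LM call, and the prompt wording is illustrative:

def multiagent_debate(agents, question, rounds=3):
    answers = [agent(question) for agent in agents]          # round 0: independent answers
    for _ in range(rounds):
        new_answers = []
        for i, agent in enumerate(agents):
            others = "\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (f"{question}\n\nOther agents answered:\n{others}\n\n"
                      "Critique these answers and give your updated answer.")
            new_answers.append(agent(prompt))
        answers = new_answers
    return answers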
@ShuangL13799063
Shuang Li
2 years
Join us at the second workshop on Social Intelligence in Humans and Robots! @RSS2022
@SIHRworkshop
Social Intelligence in Humans and Robots
2 years
The SIHR workshop is starting in 5 minutes! Join us via zoom or in person at 545 Mudd Building, Columbia University.
0
1
3
0
1
3
@ShuangL13799063
Shuang Li
4 years
We are glad to have Professors Anca Dragan, Hyowon Gweon, J. Kiley Hamlin, Dorsa Sadigh, Rebecca Saxe, and Brian Scassellati give talks during the workshop!
@ShuangL13799063
Shuang Li
4 years
I am glad to co-organize this workshop with my great colleagues @xavierpuigf , @tianminshu , @andreea7b , @liushari , Mengxi Li, and Professors Sanja Fidler and Antonio Torralba.
0
0
1
0
1
3
@ShuangL13799063
Shuang Li
1 year
(a) Performance improves as the number of agents involved in the debate increases. (b) Performance improves as the number of rounds of debate increases.
@du_yilun
Yilun Du
1 year
We find that both increasing the number of agents in multiagent debate or the number of rounds of debate both improve performance. (3/5)
2
0
28
0
0
3
@ShuangL13799063
Shuang Li
4 years
Our work Energy-Based Models for Continual Learning will be presented at the NeurIPS Biological and Artificial Reinforcement Learning workshop. Poster sessions: 1:00-3:00pm EST and 17:55-18:55 EST. Come and chat with us!
@ShuangL13799063
Shuang Li
4 years
Most promisingly, energy-based models perform well in class-incremental learning without relying on stored data and without using replay, and they don’t need task boundaries.
0
0
8
1
2
3
@ShuangL13799063
Shuang Li
2 years
"A toilet" AND "A chair"
@ShuangL13799063
Shuang Li
2 years
Composable-Diffusion can compose language to generate 3D assets now! Composable 3D asset generation without any training/fine-tuning (the diffusion model is Point-E). Check our webpage and demo: "A green avocado" AND "A chair"
3
52
287
0
0
2
@ShuangL13799063
Shuang Li
1 year
@ajayj_ @genmoai The results are very impressive! I just threw in a random prompt, and the generated videos are amazing.
0
0
2
@ShuangL13799063
Shuang Li
5 months
@du_yilun Big congratulations!!!🎊
1
0
2
@ShuangL13799063
Shuang Li
4 years
Project: Arxiv:
0
0
2
@ShuangL13799063
Shuang Li
2 years
@CoLLAs_Conf is a great conference for Lifelong Learning, Continual Learning, Meta-Learning, and related domains.
@CoLLAs_Conf
CoLLAs 2025
2 years
We are thrilled to release the list of invited speakers at @CoLLAs_Conf 2022: Yoshua Bengio, Rich Caruana, Claudia Clopath, Abhinav Gupta, @hugo_larochelle , @HanieSedghi , Tinne Tuytelaars. Our registrations are also now open:
0
13
51
0
1
2
@ShuangL13799063
Shuang Li
2 years
@AnimaAnandkumar @NVIDIAAI Thank you so much for your guidance during my internship! 👍
0
0
2
@ShuangL13799063
Shuang Li
3 years
A dynamics model over the learned representation space enables visuomotor control for manipulation tasks involving rigid bodies and fluids. When coupled with an auto-decoding framework, it supports goal specification from viewpoints outside the training distribution.
0
0
1
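A hedged sketch of sampling-based control on top of such a learned representation; dynamics and cost_to_goal stand in for the learned modules, and the random-shooting planner and action space are illustrative:

import torch

def mpc_action(dynamics, cost_to_goal, state, horizon=5, num_samples=256, action_dim=4):
    actions = torch.randn(num_samples, horizon, action_dim)   # candidate action sequences
    costs = torch.zeros(num_samples)
    for i in range(num_samples):
        s = state
        for t in range(horizon):
            s = dynamics(s, actions[i, t])                    # predicted next latent state
            costs[i] += cost_to_goal(s)                       # e.g. distance to the goal representation
    return actions[costs.argmin(), 0]                         # execute only the first action (receding horizon)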
@ShuangL13799063
Shuang Li
10 months
@colinraffel Congrats!
0
0
1
@ShuangL13799063
Shuang Li
4 years
@tianminshu won a best paper award again! 🎉
@tianminshu
Tianmin Shu
4 years
Our paper just won a Best Paper Award at @svrhm2020 ! Congrats to my wonderful collaborators 🎉 And thanks the organizers for the great workshop!
1
10
75
0
0
1
@ShuangL13799063
Shuang Li
11 months
@HarryXu12 Congrats!
1
0
1
@ShuangL13799063
Shuang Li
2 years
@ShunyuYao12 @du_yilun @IMordatch Thanks a lot! Both iterative refinement and multiple scorers are important in this framework.
0
0
1
@ShuangL13799063
Shuang Li
3 years
0
0
1
@ShuangL13799063
Shuang Li
1 year
@mike_walmsley_ @icmlconf @siggraph @UofT @VectorInst @uot Congrats! That's wonderful. Glad to talk about AI for astronomy/citizen science!
0
0
1
@ShuangL13799063
Shuang Li
2 years
@TomLikesRobots Thanks for noticing our work!
1
0
1