Zeyi Liu

@Liu_Zeyi_

656 Followers · 279 Following · 7 Media · 40 Statuses

PhD student @Stanford w/ @SongShuran #Robotics #ComputerVision Class of 2022 @CUSEAS 🦁

New York, NY
Joined November 2021
Pinned Tweet
@Liu_Zeyi_
Zeyi Liu
16 days
🔊 Audio signals contain rich information about daily interactions. Can our robots learn from videos with sound? Introducing ManiWAV, a robotic system that learns contact-rich manipulation skills from in-the-wild audio-visual data. See thread for more details (1/4) 👇
@Liu_Zeyi_
Zeyi Liu
1 year
🤖 Can robots reason about their mistakes by reflecting on past experiences? (1/n) We introduce REFLECT, a framework that leverages Large Language Models for robot failure explanation and correction, based on a summary of multi-sensory data. See below for details and links👇
@Liu_Zeyi_
Zeyi Liu
21 days
Never thought REFLECT could provide such an interesting test case for video gen models🤣 the robot motion's pretty smooth and long-horizon!
@DJiafei
Jiafei Duan @ RSS2024
21 days
Try out #LumaDreamMachine for robotics action generation. Even though there are artifacts in the generated objects, I would say the kinematics of the robot motion are pretty good. Can we use this for robotics data?
@Liu_Zeyi_
Zeyi Liu
1 year
Thanks @_akhaliq for covering our work! We find that LLMs can reliably identify and explain robot failures given a textual summary of the robot's past experiences generated from raw sensory inputs. More results on the project website. Please stay tuned for the code release!
@_akhaliq
AK
1 year
REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction. The ability to detect and analyze failed executions automatically is crucial for an explainable and robust robotic system. Recently, Large Language Models (LLMs)…
@Liu_Zeyi_
Zeyi Liu
5 months
Excited to see such a neat and ready-to-use data collection system added to the robotics community! Looking forward to all the cool things our robots can learn😎
@chichengcc
Cheng Chi
5 months
Can we collect robot data without any robots? Introducing Universal Manipulation Interface (UMI): an open-source $400 system from @Stanford designed to democratize robot data collection. 0 teleop -> autonomously wash dishes (precise), toss (dynamic), and fold clothes (bimanual).
@Liu_Zeyi_
Zeyi Liu
16 days
🫳 The hand-held data collection device synchronously records images from a GoPro camera with a fish-eye lens and audio from a contact microphone embedded in the gripper finger. 🧠 With the collected demonstrations, we train an end-to-end sensorimotor learning model. (2/4)
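A minimal sketch, not the released ManiWAV code, of how such an end-to-end audio-visual sensorimotor policy could be wired up: image and contact-audio streams are encoded separately, fused, and mapped to robot actions. The encoder sizes, the 7-dim action output, and the mel-spectrogram audio input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AudioVisualPolicy(nn.Module):
    """Toy end-to-end policy: (fisheye image, contact-audio spectrogram) -> action."""
    def __init__(self, action_dim: int = 7):
        super().__init__()
        # Vision branch: GoPro frame -> feature vector.
        self.vision_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Audio branch: mel-spectrogram of the contact-microphone signal -> feature vector.
        self.audio_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fused features -> action (e.g., end-effector pose delta plus gripper command).
        self.head = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, action_dim))

    def forward(self, image: torch.Tensor, audio_spec: torch.Tensor) -> torch.Tensor:
        feat = torch.cat([self.vision_enc(image), self.audio_enc(audio_spec)], dim=-1)
        return self.head(feat)

policy = AudioVisualPolicy()
action = policy(torch.randn(1, 3, 224, 224), torch.randn(1, 1, 64, 100))
print(action.shape)  # torch.Size([1, 7])
```

Separate encoders with late fusion is just one reasonable design; the point is that contact audio enters the policy as a first-class observation alongside vision.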
@Liu_Zeyi_
Zeyi Liu
5 months
Cool work on generative gripper design! Impressive that all designs are generated with the same model by just taking as input a 2D/3D shape and a manipulation goal (e.g. shift up, rotate counterclockwise).
@XiaomengXu11
Xiaomeng Xu
5 months
Can we automate task-specific mechanical design without task-specific training? Introducing Dynamics-Guided Diffusion Model for Robot Manipulator Design, a data-driven framework for generating manipulator geometry designs for given manipulation tasks. w. Huy Ha, @SongShuran
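A minimal sketch, not the actual Dynamics-Guided Diffusion Model implementation, of the interface described above: one generative model that takes an object shape encoding plus a manipulation goal and samples a manipulator geometry. The toy MLP denoiser, the goal vocabulary, the flat geometry vector, and the crude denoising schedule are all illustrative assumptions.

```python
import torch
import torch.nn as nn

GOALS = {"shift_up": 0, "rotate_ccw": 1}  # assumed goal vocabulary

class GeometryDenoiser(nn.Module):
    """Predicts noise on a geometry vector, conditioned on shape and goal."""
    def __init__(self, geom_dim=32, shape_dim=64, n_goals=2):
        super().__init__()
        self.goal_emb = nn.Embedding(n_goals, 16)
        self.net = nn.Sequential(
            nn.Linear(geom_dim + shape_dim + 16 + 1, 128), nn.ReLU(),
            nn.Linear(128, geom_dim),
        )

    def forward(self, noisy_geom, shape_feat, goal_id, t):
        cond = torch.cat([noisy_geom, shape_feat, self.goal_emb(goal_id), t], dim=-1)
        return self.net(cond)

@torch.no_grad()
def sample_design(denoiser, shape_feat, goal_id, steps=50, geom_dim=32):
    # Start from noise and iteratively denoise into a geometry vector.
    geom = torch.randn(shape_feat.shape[0], geom_dim)
    for i in reversed(range(steps)):
        t = torch.full((shape_feat.shape[0], 1), i / steps)
        eps = denoiser(geom, shape_feat, goal_id, t)
        geom = geom - eps / steps  # simplified denoising step
    return geom

denoiser = GeometryDenoiser()
shape_feat = torch.randn(1, 64)  # stand-in for a 2D/3D object shape encoding
design = sample_design(denoiser, shape_feat, torch.tensor([GOALS["rotate_ccw"]]))
print(design.shape)  # torch.Size([1, 32])
```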
@Liu_Zeyi_
Zeyi Liu
16 days
🥯 By collecting in-the-wild demonstrations in diverse environments, our policy directly generalizes to unseen in-the-wild environments with several different test-time scenarios. (3/4)
@Liu_Zeyi_
Zeyi Liu
9 months
😍🎃
@SongShuran
Shuran Song
9 months
New group photo. Halloween Edition 👻
@Liu_Zeyi_
Zeyi Liu
1 month
Cool integration of LLMs into articulated object reconstruction; this can be very useful for robotic applications!
@ZhaoMandi
Mandi Zhao
1 month
Here’s something you didn’t know LLMs can do – reconstruct articulated objects! Introducing Real2Code – our new real2sim approach that scalably reconstructs complex, multi-part articulated objects.
@Liu_Zeyi_
Zeyi Liu
1 year
(3/n) We systematically query the LLM with a progressive failure explanation algorithm that handles both execution-level and planning-level failures. Conditioned on the explanation, the LLM generates a correction plan for the robot to complete the task.
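A minimal sketch, not the released REFLECT code, of what such a progressive query could look like: check for an execution-level failure first, escalate to planning-level reasoning only if needed, then condition the correction plan on the explanation. query_llm is a hypothetical stand-in for whichever LLM API is used.

```python
def query_llm(prompt: str) -> str:
    # Hypothetical placeholder: plug in your LLM client here.
    raise NotImplementedError

def explain_and_correct(summary: str, task: str) -> dict:
    # Stage 1: execution-level check against the textual summary of sensory data.
    exec_ans = query_llm(
        f"Task: {task}\nExperience summary:\n{summary}\n"
        "Did any single action fail during execution? Answer and explain."
    )
    if exec_ans.lower().startswith("no"):
        # Stage 2: escalate to planning-level failures (wrong or missing subgoals).
        explanation = query_llm(
            f"Task: {task}\nExperience summary:\n{summary}\n"
            "No single action failed. Explain which planning decision caused the failure."
        )
    else:
        explanation = exec_ans
    # Stage 3: generate a correction plan conditioned on the explanation.
    plan = query_llm(
        f"Task: {task}\nFailure explanation: {explanation}\n"
        "Propose a corrected plan for the robot to complete the task."
    )
    return {"explanation": explanation, "correction_plan": plan}
```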
@Liu_Zeyi_
Zeyi Liu
1 year
(4/n) We evaluate our framework on a variety of tasks in both simulation and real world. More results can be found on the project website.
@Liu_Zeyi_
Zeyi Liu
15 days
@RuohanGao1 Thank you so much, Ruohan, definitely got a lot of inspiration from your line of work on multisensory learning!
@Liu_Zeyi_
Zeyi Liu
15 days
@vrushankdes Yes, it's mostly because of the motor vibration. I agree it could be a good idea to add some noise-absorbing material between the gripper and the robot!
@Liu_Zeyi_
Zeyi Liu
1 year
(2/n) By leveraging foundation models, REFLECT converts unstructured, multi-sensory robot data into a hierarchical textual summary of robot sensory inputs, key events, and subgoals. The summary facilitates quick failure localization and in-context explanation.
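A minimal sketch of the kind of hierarchical textual summary described here: per-timestep sensory captions grouped into key events, which are grouped under subgoals, then flattened into the text handed to the LLM. Field names and example content are illustrative assumptions, not the paper's exact format.

```python
# Illustrative hierarchy: subgoals -> key events -> sensory captions.
summary = {
    "subgoals": [
        {
            "subgoal": "pick up the mug",
            "key_events": [
                {
                    "time": 3.2,
                    "event": "gripper closes on mug handle",
                    "sensory_captions": [
                        "vision: robot gripper above a white mug",
                        "audio: brief contact sound as fingers close",
                    ],
                },
            ],
        },
        {"subgoal": "place the mug on the shelf", "key_events": []},
    ]
}

def to_text(summary: dict) -> str:
    """Flatten the hierarchy into a textual summary an LLM can consume."""
    lines = []
    for sg in summary["subgoals"]:
        lines.append(f"Subgoal: {sg['subgoal']}")
        for ev in sg["key_events"]:
            lines.append(f"  [{ev['time']:.1f}s] {ev['event']}")
            lines.extend(f"    - {cap}" for cap in ev["sensory_captions"])
    return "\n".join(lines)

print(to_text(summary))
```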
@Liu_Zeyi_
Zeyi Liu
16 days
@chris_j_paxton Thank you Chris, this is exactly our motivation!
@Liu_Zeyi_
Zeyi Liu
15 days
@RemiCadene Thanks for featuring our work, Remi!
@Liu_Zeyi_
Zeyi Liu
1 year
@Sylvia_Sparkle @columbianlp @Zhou_Yu_AI Congrats, Siyan! Look forward to seeing you on campus soon❤️
@Liu_Zeyi_
Zeyi Liu
10 days
@wei_tianhao @OliverKroemer Thank you so much, Tianhao!