Shuran Song Profile Banner
Shuran Song Profile
Shuran Song

@SongShuran

8,328
Followers
463
Following
30
Media
237
Statuses

Assistant Professor @Stanford University working on #Robotics #AI #ComputerVision

California
Joined July 2016
Pinned Tweet
@SongShuran
Shuran Song
3 months
One of the common questions I get about UMI is how to apply it to mobile robots, esp. when we don't have a precise IK solver. Check out UMI-on-Legs! With a manipulation-centric whole-body controller, we can put any UMI skill on a legged robot🐕 Video:
@haqhuy
Huy Ha
3 months
I’ve been training dogs since middle school. It’s about time I train robot dogs too 😛 Introducing UMI-on-Legs, an approach for scaling manipulation skills on robot dogs🐶 It can toss, push heavy weights, and make your ~existing~ visuo-motor policies mobile!
12
87
431
0
8
99
@SongShuran
Shuran Song
2 years
The Internet is too fast; I’m still crafting my catchy tweets, and word is already out😂 Well then, now you have it: RoboNinja🥷: Learning an Adaptive Cutting Policy for Multi-Material Objects. 🧵👇 for a few interesting details you might have missed
4
51
364
@SongShuran
Shuran Song
2 years
How do you precisely swing an *unknown* rope to hit a target? It is a challenging task even for us, due to complex system dynamics introduced by object deformation and high-speed dynamic actions. Iterative Residual Policy is our attempt, details🧵⬇️1/n
7
42
325
@SongShuran
Shuran Song
8 months
Check out UMI! 3 things I learned in this project: 1. Wrist-mount cameras can be sufficient for challenging manipulation tasks with the right hardware design. 2. Cross-embodiment policy is possible with the right policy interface. 3. BC can generalize if the data is right.
@chichengcc
Cheng Chi
8 months
Can we collect robot data without any robots? Introducing Universal Manipulation Interface (UMI) An open-source $400 system from @Stanford designed to democratize robot data collection 0 teleop -> autonomously wash dishes (precise), toss (dynamic), and fold clothes (bimanual)
45
356
2K
3
31
246
@SongShuran
Shuran Song
4 years
More robots do not always lead to higher productivity if they don’t collaborate ;) Check out our latest work at #CORL2020 . Despite being trained on 1-4 arms with static tasks, the system generalizes to 5-10 arms with dynamic targets. w/ Huy Ha, Jingxi Xu
4
45
217
@SongShuran
Shuran Song
2 years
DextAIRity: Deformable Manipulation Can be a Breeze! #RSS2022 A different way to manipulate objects using controlled airflow that reaches beyond contact 🤖 w. @Zhenjia_Xu , @chichengcc , @Ben_Burchfiel , @eacousineau , Siyuan Feng @CAIR_lab + @ToyotaResearch 🦾
5
54
194
@SongShuran
Shuran Song
3 months
UMI was named an Outstanding Systems Paper finalist at #RSS2024 . Congratulations team!! 🥳 Hope to see more UMIs running around the world 😊 !
@SongShuran
Shuran Song
8 months
Check out UMI! 3 things I learned in this project: 1. Wrist-mount cameras can be sufficient for challenging manipulation tasks with the right hardware design. 2. Cross-embodiment policy is possible with the right policy interface. 3. BC can generalize if the data is right.
3
31
246
6
8
197
@SongShuran
Shuran Song
3 years
Congratulations to Huy for winning the Best Systems Paper award at CoRL @corl_conf . It was so much fun building FlingBot and seeing the system work :)
9
8
178
@SongShuran
Shuran Song
3 years
Honored to be selected as a Sloan Fellow. I am grateful to my mentors, collaborators, and most importantly my awesome students!! Thank you all!
25
3
171
@SongShuran
Shuran Song
2 years
Diffusion Policy for robots! The most impressive thing to me is how fast we can deploy a new skill with this framework -- and we just keep adding more and more. Cheng has made the framework really easy to use, so you can try it out too. Colab & Github:
@chichengcc
Cheng Chi
2 years
What if the form of visuomotor policy has been the bottleneck for robotic manipulation all along? Diffusion Policy achieves 46.9% improvement vs prior StoA on 11 tasks from 4 benchmarks + 4 real world tasks! (1/7) website : paper:
9
99
538
0
23
162
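The idea behind a diffusion policy can be sketched in a few lines: start a whole action sequence as Gaussian noise and iteratively denoise it, conditioned on the observation. This is a toy illustration only; `toy_eps_model`, the step sizes, and all dimensions are made-up stand-ins, not the actual Diffusion Policy implementation.

```python
import numpy as np

def denoise_actions(eps_model, obs, horizon=8, action_dim=2, steps=10, rng=None):
    """Toy DDPM-style inference loop: begin from pure noise and repeatedly
    subtract the predicted noise, conditioned on the observation."""
    rng = rng if rng is not None else np.random.default_rng(0)
    actions = rng.standard_normal((horizon, action_dim))  # start as noise
    for k in range(steps, 0, -1):
        eps = eps_model(actions, obs, k)        # predicted noise at step k
        actions = actions - (1.0 / steps) * eps  # small step toward the data
    return actions

# Stand-in "noise predictor": treats the residual toward obs as the noise,
# so each denoising step pulls the action sequence toward the observation.
def toy_eps_model(actions, obs, k):
    return actions - obs

obs = np.array([0.5, -0.2])
plan = denoise_actions(toy_eps_model, obs)  # (horizon, action_dim) action plan
```

The real method predicts noise with a learned network conditioned on recent camera observations; the loop structure is the same.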
@SongShuran
Shuran Song
2 years
Montessori busy boards for robots! We're open-sourcing a toy-inspired robot learning environment for developing essential interaction, reasoning, and planning skills. Let's give our robot toddlers toys to play with before asking them for help in the kitchen ;) (1/n)
2
22
164
@SongShuran
Shuran Song
2 years
Semantic Abstraction -- giving CLIP new 3D reasoning capabilities, so your robots can find that “rapid test behind the Harry Potter book.” 😉 w. Huy Ha
@_akhaliq
AK
2 years
Semantic Abstraction: Open-World 3D Scene Understanding from 2D Vision-Language Models abs: project page:
0
44
242
0
29
164
@SongShuran
Shuran Song
3 years
Universal Manipulation Policy Network – a single policy learns to manipulate a diverse set of articulated objects (e.g., fridge, laptop, or drawers) regardless of their joint types or number of links. w. @zhenjia @zhanpeng_he Things we learned 🧵⬇️1/n
3
32
163
@SongShuran
Shuran Song
2 years
Congratulations to @chichengcc for winning the Best Paper Award and @Zhenjia_Xu for the Best Systems Paper Finalist at #RSS2022 !! 🥳🎉
@RoboticsSciSys
Robotics: Science and Systems
2 years
Big congrats to the winners (and the finalists!) of the #RSS2022 awards: - Best paper award: - Best systems paper award: - Best student paper award:
4
18
100
11
8
143
@SongShuran
Shuran Song
2 years
Excited to receive the NSF CAREER award! I'm grateful to all my students @CAIRLab , mentors, and collaborators for making this possible 😊 and thank you, Holly and Bernadette, for writing this nice article that summarizes our research. 🤖
@CUSEAS
Columbia Engineering
2 years
Congrats to our @ColumbiaCompSci Prof Shuran Song @SongShuran , who's won an @NSF CAREER award to enable #Robots to learn on their own and adapt to new environments. @ColumbiaScience @Columbia
3
5
41
10
13
148
@SongShuran
Shuran Song
3 years
Honored to be a Microsoft Research Faculty Fellow!
@MSFTResearch
Microsoft Research
3 years
Congratulations to the 2021 Microsoft Research Faculty Fellows! This fellowship recognizes innovative, promising new faculty whose exceptional talent for innovation identifies them as emerging leaders in their fields. Learn about their research interests:
3
27
99
5
0
148
@SongShuran
Shuran Song
7 months
Embodiment is such a critical component of Embodied Intelligence, but it often gets overlooked. Can robots learn to generate different embodiments (i.e., hardware designs) for different tasks that drastically simplify perception, planning, and control? Check it out ⬇️
@XiaomengXu11
Xiaomeng Xu
7 months
Can we automate task-specific mechanical design without task-specific training? Introducing Dynamics-Guided Diffusion Model for Robot Manipulator Design, a data-driven framework for generating manipulator geometry designs for given manipulation tasks. w. Huy Ha, @SongShuran
7
46
257
1
14
140
@SongShuran
Shuran Song
3 years
Dynamic manipulation turns out to be so much more effective for cloth unfolding! Check out FlingBot -- unfold your shirt in 3 steps! 😉 Code for both simulation & real robots is available! #CORL2021 w/ Huy Ha
6
20
140
@SongShuran
Shuran Song
3 months
Have to share these epic fails ... "We've broken 3 legs, fried 1 Jetson, and ripped one pair of pants, so you don't have to" 😅 Check here for details: 😉
3
11
139
@SongShuran
Shuran Song
4 months
Real2Code -- translating real-world articulated objects to sim using code generation! With the code representation, this method scales well with the number of object parts; check out the 10-drawer table it reconstructed 😉
@ZhaoMandi
Mandi Zhao
4 months
Here’s something you didn’t know LLMs can do – reconstruct articulated objects! Introducing Real2Code – our new real2sim approach that scalably reconstructs complex, multi-part articulated objects.
12
96
521
1
17
138
@SongShuran
Shuran Song
3 months
Don't want to collect hundreds of demonstrations for every object and scenario? Check out EquiBot from @yjy0625 --- leveraging equivariance in diffusion policy to make it sample-efficient and generalizable!
@yjy0625
Jingyun Yang
3 months
Want a robot that learns household tasks by watching you? EquiBot is a ✨ generalizable and 🚰 data-efficient method for visuomotor policy learning, robust to changes in object shapes, lighting, and scene makeup, even from just 5 mins of human videos. 🧵↓
11
73
313
0
17
131
@SongShuran
Shuran Song
3 months
By plugging a $5 contact microphone 🎤into UMI, we can now "hear" 👂all the critical contact events during manipulation and "feel" ☝️the subtle differences on the contact surface. Check out @Liu_Zeyi_ 's new work on ManiWav: Manipulation from In-the-Wild Audio-Visual Data!
@Liu_Zeyi_
Zeyi Liu
3 months
🔊 Audio signals contain rich information about daily interactions. Can our robots learn from videos with sound? Introducing ManiWAV, a robotic system that learns contact-rich manipulation skills from in-the-wild audio-visual data. See thread for more details (1/4) 👇
8
58
312
1
18
123
@SongShuran
Shuran Song
5 months
;)
@huaijiangzhu
huaijiang
5 months
@SongShuran summarized the dynamics in many academic labs so well 😂
2
4
55
0
3
115
@SongShuran
Shuran Song
4 years
Struggling with 2D visual predictive models that keep losing track of objects? Time to try out this 3D dynamic scene representation (DSR) at #CORL2020 . w. @Zhenjia_Xu @zhanpeng_he @jiajunwu_cs
1
23
105
@SongShuran
Shuran Song
7 months
This robot is having a lot of fun! Check out @ruoshi_liu 's PaperBot, a robot that learns to design, fold, and throw a paper airplane 😊✈️, and many other things!
@ruoshi_liu
Ruoshi Liu @ ECCV
7 months
Humans can design tools to solve various real-world tasks, and so should embodied agents. We introduce PaperBot, a framework for learning to create and utilize paper-based tools directly in the real world.
7
29
171
1
9
87
@SongShuran
Shuran Song
11 months
New group photo. Halloween Edition 👻
1
7
84
@SongShuran
Shuran Song
9 months
The tastiest robot demo 🤩 !!
@tonyzzhao
Tony Z. Zhao
9 months
Introducing 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀🏄 -- Hardware! A low-cost, open-source, mobile manipulator. One of the most high-effort projects in my past 5yrs! Not possible without co-lead @zipengfu and @chelseabfinn . At the end, what's better than cooking yourself a meal with the 🤖🧑‍🍳
235
1K
5K
1
9
83
@SongShuran
Shuran Song
2 months
What if your robot hand suddenly lost a finger? 🤕🤖 Wouldn’t it be great if the same policy could still be effective? Check out "GET-Zero" -- by representing the embodiment as a directed graph, a single trained policy can generalize across new designs without retraining 🪄
@austinapatel
Austin Patel
2 months
What if you could control new hand designs without a new policy? Introducing GET-Zero, an embodiment-aware policy that can zero-shot control a wide range of hand designs with a single set of network weights.
1
10
56
0
8
65
@SongShuran
Shuran Song
15 days
Thank you, Deepak! I'm honored to be in such great company :)
@pathak2206
Deepak Pathak
16 days
Also congratulations to @SongShuran -- delighted to be joining the list with her. :)
1
0
39
1
2
64
@SongShuran
Shuran Song
7 months
UMI's pretrained weights are released. We have tested the policy on three different robots: UR5, Franka, and ARX. Time to try it on your robot!! Buy any "espresso cup with saucer" on Amazon, and it should work -- or let @chichengcc know if it doesn't 😉
@chichengcc
Cheng Chi
7 months
Weights drop ⚠️ We released our pre-trained model for the cup arrangement task trained on 1400 demos! We aim to enable anyone to deploy UMI on their robot to arrange any "espresso cup with saucer" they buy on Amazon.
3
24
170
1
3
61
@SongShuran
Shuran Song
7 months
Amazing work on collaborative cooking. The interaction between humans and robots is so natural and smooth; see subtle things like how the robot pauses and waits for the human to pour soup. Very impressive!
@sanjibac
Sanjiban Choudhury
7 months
Cooking in kitchens is fun. BUT doing it collaboratively with two robots is even more satisfying! We introduce MOSAIC, a modular framework that coordinates multiple robots to closely collaborate and cook with humans via natural language interaction and a repository of skills.
5
42
184
1
4
55
@SongShuran
Shuran Song
7 months
Check out @chichengcc 's step-by-step tutorial on building the UMI gripper. We really hope to see more UMIs running in the wild. 😊
@chichengcc
Cheng Chi
7 months
We made a step-by-step video tutorial for building the UMI gripper! Please leave comments on @YouTube if you have any questions
9
25
190
0
3
49
@SongShuran
Shuran Song
3 years
One grasping policy for many (and new!) grippers. Code is available here: Try it out, and let us know if your favorite gripper is missing! w. Zhenjia, Beichun, @submagr
3
4
49
@SongShuran
Shuran Song
2 years
#RSS2022 is happening next week @Columbia ! @chichengcc and @Zhenjia_Xu are presenting Iterative Residual Policy and DextAIRity. Join us for a tour of our lab on Thursday! Our robots are getting dressed for demos 😜
0
6
43
@SongShuran
Shuran Song
1 year
@haqhuy ’s new project: *Scaling up* robot data collection using LLMs for ✅ task decomposition ✅ reward formulation, then *distilling down* into visuomotor policies that ✅ operate from raw sensory input ✅ improve over time. Check out the engaging Q&A here 😉
@haqhuy
Huy Ha
1 year
How can we put robotics on the same scaling trend as large language models while not compromising on rich low-level manipulation and control?
3
43
262
0
6
45
@SongShuran
Shuran Song
1 year
Just like us humans, failures are inevitable for robots as well, and it is important to "REFLECT" on them! Check out @Liu_Zeyi_ and @ArpitBahety 's new project on failure reasoning for robots. The new dataset (RoboFail) and code are out too!
@Liu_Zeyi_
Zeyi Liu
1 year
🤖 Can robots reason about their mistakes by reflecting on past experiences? (1/n) We introduce REFLECT, a framework that leverages Large Language Models for robot failure explanation and correction, based on a summary of multi-sensory data. See below for details and links👇
3
13
98
2
5
38
@SongShuran
Shuran Song
4 years
Can robots learn how to improve their tools (i.e., grippers) to better accomplish a given task? Check out our work “Fit2Form: 3D Generative Model for Robot Gripper Form Design.” at #CORL2020 w. Huy Ha, @submagr
1
9
31
@SongShuran
Shuran Song
3 years
The talk and poster session for FlingBot is tomorrow (8 am in California, 11 am in Boston, 4 pm in London, 1 am in Tokyo). Please drop by and say hi!
@corl_conf
Conference on Robot Learning
3 years
Congratulations to #CoRL2021 best systems paper finalist, "FlingBot: The Unreasonable Effectiveness of Dynamic Manipulation for Cloth Unfolding", Huy Ha, Shuran Song. #robotics #learning #award #research
0
9
28
0
1
31
@SongShuran
Shuran Song
3 years
Code for GarmentNets is out. Check it out!
1
5
30
@SongShuran
Shuran Song
10 months
dense 3D tracking for deformables 👗
@BDuisterhof
Bardienus Duisterhof
10 months
Deformable objects are common in household, industrial and healthcare settings. Tracking them would unlock many applications in robotics, gen-AI, and AR. How? Check out MD-Splatting: a method for dense 3D tracking and dynamic novel view synthesis on deformable cloths. 1/6🧵
3
22
90
1
3
25
@SongShuran
Shuran Song
1 year
TRI's effort on scaling up Diffusion Policies!
@Ken_Goldberg
Ken Goldberg
1 year
Hats off to @ToyotaResearch for exciting results with “Large Behavior Models” and Diffusion Policies:
2
19
136
0
0
26
@SongShuran
Shuran Song
2 years
robot MOO!!
@hausman_k
Karol Hausman
2 years
🚨 🚨 Another new work showcasing bitter lesson 2.0 🚨 🚨 Introducing MOO: We leverage vision-language models (VLMs) to allow robots to manipulate objects they've never interacted with, and in new environments, while learning end-to-end policies. 🧵
3
20
98
0
0
24
@SongShuran
Shuran Song
2 years
sweet demo 🍬🍭🥳 @andyzengtweets
@GoogleAI
Google AI
2 years
From our demo floor at AI@, check out Code as Policies at work. This helper robot is able to compute and execute a task given via natural language. Read more →
13
67
253
0
0
24
@SongShuran
Shuran Song
2 years
#CVPR2022 We are looking for volunteers from the CVPR community (graduate students, university faculty, and researchers) to help us organize *in-person* outreach events!
@CVPR
#CVPR2024
2 years
#CVPR2022 call for volunteers is now up!
0
9
25
1
3
21
@SongShuran
Shuran Song
1 year
It is a Hollywood-level demo. So cool!
@yukez
Yuke Zhu
1 year
Excited to share our latest progress on legged manipulation with humanoids. We created a VR interface to remote control the Draco-3 robot 🤖, which cooks ramen for hungry graduate students at night. We can't wait for the day it will help us at home in the real world! #humanoid
7
67
323
0
1
19
@SongShuran
Shuran Song
2 years
5/5 yes, the failure cases are delicious 🍑🥑🥭
2
2
19
@SongShuran
Shuran Song
2 years
1/5 We have developed a differentiable simulator based on Taichi for multi-material object cutting. It saved us a lot of avocados 😉
1
1
17
@SongShuran
Shuran Song
2 years
Apart from rope swinging, IRP is a general formulation that could work for other dynamic manipulation tasks with deformable objects, like swinging a tablecloth. 5/n
2
0
16
@SongShuran
Shuran Song
3 months
also, check out @yihuai 's documentation on how to run the UMI-on-Legs system on physical hardware!
@YihuaiGao
Yihuai Gao
3 months
@haqhuy We spent a lot of effort on the documentation and hope that people can easily reproduce our work (including hardware!). We discussed our hardware choices and how we fixed all kinds of hardware problems so you don't have to. Please check it out!
1
0
14
0
0
15
@SongShuran
Shuran Song
2 years
4/5 It is amazing teamwork from 5(!) different universities!! Thank you all @Zhenjia_Xu , Zhou Xian, @Xingyu2017 , @chichengcc , @huang_zhiao , @gan_chuang
1
1
15
@SongShuran
Shuran Song
3 years
The deadline for #RSS2022 Workshops & Tutorials is approaching (Feb 18)! Remember to submit your proposal. 🤖
0
2
14
@SongShuran
Shuran Song
3 years
Looking forward to it 🤖
@GRASPlab
GRASP Laboratory
3 years
TOMORROW: Spring 2022 GRASP SFI: Shuran Song ( @SongShuran ), Columbia University, “The Reasonable Effectiveness of Dynamic Manipulation for Deformable Objects” 3/16 @ 3:00 - 4:00pm - Levine 512 & Zoom. See you there!
0
4
26
0
0
13
@SongShuran
Shuran Song
4 years
Cool!
@pathak2206
Deepak Pathak
4 years
RL gets specific to the robot it is trained on. Can a policy be trained to control many agents? Turns out, training (shared) policy for each motor instead of whole robot not only achieves SOTA at train but also transfers to unseen agents w/o fine-tuning!
7
238
961
2
1
12
@SongShuran
Shuran Song
3 years
Grasping dynamic, moving objects: the #IROS2021 talk is today! w. Iretiayo Akinola, @xu_jingxi , and Peter Allen
0
1
11
@SongShuran
Shuran Song
2 years
Stress test on robustness -- we interrupt the system by randomly tying a few knots in the rope after the policy converges on a given goal. Thanks to its iterative formulation, IRP can quickly adapt and regain good performance. 4/n
1
0
11
@SongShuran
Shuran Song
3 months
Nice summary of ManiWAV!
@KevMusgrave
Kevin Musgrave
3 months
Can audio help robots perform tasks better? ManiWAV is a framework for leveraging contact audio to improve performance in various robotic tasks. It consists of a modified Universal Manipulation Interface, an audio augmentation strategy, and a neural network architecture that
0
2
7
0
0
10
@SongShuran
Shuran Song
2 years
2/5 It is important to have an adaptive policy for handling out-of-distribution scenarios. Otherwise, the policy can easily get stuck. See the comparison below with the non-adaptive policy.
1
1
9
@SongShuran
Shuran Song
2 years
3/5 Since we only need the binary contact information, it is possible to implement the real-world system with a low-cost weight sensor ($10) instead of an expensive force-torque sensor ($5000+). Though a precise force sensor could potentially enable more complex behaviors.
2
1
8
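The "binary contact from a cheap sensor" trick above boils down to thresholding a noisy scalar reading. Here is a minimal sketch (the function name, thresholds, and readings are hypothetical, not from the RoboNinja code); hysteresis with two thresholds keeps sensor noise from flickering the contact bit:

```python
def contact_events(readings, on_thresh=5.0, off_thresh=2.0):
    """Convert noisy weight-sensor readings (e.g. grams of force) into a
    binary contact signal using hysteresis thresholding."""
    contact, out = False, []
    for r in readings:
        if not contact and r > on_thresh:
            contact = True          # firm press: contact begins
        elif contact and r < off_thresh:
            contact = False         # clearly released: contact ends
        out.append(contact)
    return out

# Example: press, hold (with noise dipping to 3.0), then release.
sig = contact_events([0.0, 1.2, 6.4, 7.1, 3.0, 1.1, 0.2])
```

Using separate on/off thresholds means a mid-range reading like 3.0 does not toggle the state, which is exactly what you want from a $10 sensor.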
@SongShuran
Shuran Song
2 years
It was great having you here! The talk was great, learned a lot!! 😊
@hila_chefer
Hila Chefer
2 years
I had the pleasure of speaking at @Columbia ’s vision seminar, kindly hosted by @SongShuran , @sy_gadre . My talk focused on using Transformer explainability algorithms to improve performance of downstream tasks (e.g. image editing). Check it out :)
0
2
33
1
0
8
@SongShuran
Shuran Song
2 years
Thank you, Mohit!
@mohito1905
Mohit Shridhar
2 years
Not only is this amazing research, it has all the elements of a great robotics (manipulation) paper that I wish was common practice in the field. Quick Thread:
1
30
193
1
0
7
@SongShuran
Shuran Song
2 years
The policy is trained only in simulation and tested directly on different real-world ropes. Despite the large sim2real gap, IRP can adjust its action based on visual feedback. 3/n
1
1
7
@SongShuran
Shuran Song
3 years
4/n Performing large-scale real-world training is still very challenging. While the policy only takes images as input, the reward used for training still uses the joint state, which can be hard to get from real-world videos ☹️ Ideas are welcome!
1
0
7
@SongShuran
Shuran Song
2 years
Instead of learning the direct mapping from action to trajectory, IRP learns to predict the effect of a delta action on the previously observed trajectory -- swinging faster reaches higher. It then uses this prediction to adjust its action and get closer to the goal iteratively. 2/n
1
0
7
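The iterative loop described in this thread can be sketched in a few lines. This is a toy illustration of the idea only: `toy_predict`, the candidate deltas, and the 1-D "trajectory" are made up, not IRP's learned model.

```python
import numpy as np

def irp_step(predict, traj, action, goal, deltas):
    """One IRP-style iteration: score candidate delta actions by the
    trajectory the model predicts for each, and keep the best one."""
    def cost(d):
        return np.linalg.norm(predict(traj, d) - goal)
    best = min(deltas, key=cost)
    return action + best, predict(traj, best)

# Toy dynamics: a larger action ("swing faster") shifts the apex higher,
# linearly in the delta. IRP's real model is learned from data.
def toy_predict(traj, delta):
    return traj + delta

goal, traj, action = np.array([2.0]), np.array([0.0]), np.array([0.0])
deltas = [np.array([d]) for d in (-0.5, 0.0, 0.5)]
for _ in range(6):
    action, traj = irp_step(toy_predict, traj, action, goal, deltas)
# After a few iterations the predicted trajectory converges on the goal.
```

Because each step only reasons about a *delta* relative to the last observed trajectory, the same loop can absorb sim2real error or mid-episode disturbances: the next observation simply resets the reference.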
@SongShuran
Shuran Song
3 years
3/n In fact, interactions could help in understanding objects’ underlying structure. The figure ⬇️ shows the joint parameters inferred from actions. While the algorithm was never supervised on joint parameters, it is able to estimate them for both revolute and prismatic joints.
1
0
5
@SongShuran
Shuran Song
3 years
2/n The manipulation strategy is surprisingly generalizable across object categories IF the policy learns the right thing. Oftentimes explicit pose estimation or part segmentation is not necessary for effective manipulation.
1
0
5
@SongShuran
Shuran Song
2 years
BusyBoard is procedurally generated using diverse objects with inter-object functional relation pairs. The skills learned from BusyBoard can be applied to real-world objects (2/n)
1
0
5
@SongShuran
Shuran Song
3 years
@animesh_garg @UofTRobotics Love your gif, gonna steal it for my next talk :)) Thank you for inviting me, it was really fun!!
0
0
4
@SongShuran
Shuran Song
2 years
and to environments beyond BusyBoard, like kitchens in AI2-THOR. (3/n) code & paper: w/ @Liu_Zeyi_ @Zhenjia_Xu #CoRL 2022
0
1
4
@SongShuran
Shuran Song
4 years
Paper: Code+Data
0
0
4
@SongShuran
Shuran Song
2 years
As a volunteer, you can help either (1) guide local high school students on a tour of our expo and demo exhibition or (2) mentor college student(s) from local HBCU/MI institutes during their participation at CVPR.
0
0
3
@SongShuran
Shuran Song
5 months
@lucacarlone1 Woohoo!! Congrats!!!
0
0
3
@SongShuran
Shuran Song
2 years
@shahdhruv_ @chris_j_paxton Thank you @shahdhruv_ @chris_j_paxton ! @Zhenjia_Xu is really the magician who made all the magic happen 🥷
0
0
3
@SongShuran
Shuran Song
5 years
@danfei_xu @kevin_zakka @andyzengtweets @photoneo Some of the images are indeed grey-scale images. Photoneo doesn't have a color camera 🙃
0
0
2
@SongShuran
Shuran Song
3 years
@ehsanik Thank you, Kiana, for organizing the series! Many interesting talks, it was fun!
0
0
2
@SongShuran
Shuran Song
4 months
0
0
1
@SongShuran
Shuran Song
2 years
@shahdhruv_ This is so cool! LLM + VLM + VNM!
1
0
1
@SongShuran
Shuran Song
2 years
@hadmoni so cool 🤖
0
0
1
@SongShuran
Shuran Song
3 years
0
0
1
@SongShuran
Shuran Song
3 years
@MahnaSakshay @crazy_sanguine @zhenjia @zhanpeng_he The policy does need to take the position into account (PositionNet). Most of the time it learns to apply force away from the joint axis, but not "furthest".
1
0
1
@SongShuran
Shuran Song
3 years
Remember to submit your Workshop and Tutorial proposal to 🤖 @RSS_Foundation 2022. Less than a week left ⌛️
@SongShuran
Shuran Song
3 years
The deadline for #RSS2022 Workshops & Tutorials is approaching (Feb 18)! Remember to submit your proposal. 🤖
0
2
14
0
0
1