Stanford Vision and Learning Lab

@StanfordSVL

15,018
Followers
148
Following
13
Media
346
Statuses

SVL is led by @drfeifei @silviocinguetta @jcniebles @jiajunwu_cs and works on machine learning, computer vision, robotics and language

Stanford, CA
Joined September 2014
@StanfordSVL
Stanford Vision and Learning Lab
7 years
#TransferLearning is crucial for general #AI, and understanding what transfers to what is crucial for #TransferLearning. Taskonomy (#CVPR18 oral) is one step towards understanding transferability among #perception tasks. Live demo and more:
2
131
269
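The tweet above is about measuring which perception tasks transfer well to which others. Purely as an illustration of the general idea (not the Taskonomy method itself), here is a minimal sketch of building a pairwise transfer-gain matrix; the train_transfer and train_from_scratch helpers are hypothetical placeholders for task-specific training runs that return validation losses.

```python
# Hypothetical sketch: rank source -> target transferability by comparing
# a model fine-tuned from the source task against training from scratch
# on the target task. The training helpers are assumed, not real APIs.
import itertools

def transfer_gain_matrix(tasks, train_transfer, train_from_scratch):
    """Return {(source, target): gain}; gain > 0 means the source task's
    representation helped the target task."""
    scratch_loss = {t: train_from_scratch(t) for t in tasks}
    gains = {}
    for src, tgt in itertools.permutations(tasks, 2):
        transfer_loss = train_transfer(source=src, target=tgt)
        gains[(src, tgt)] = scratch_loss[tgt] - transfer_loss
    return gains
```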
@StanfordSVL
Stanford Vision and Learning Lab
5 years
Introducing the RoboTurk Real Robot Dataset - one of the largest, richest, and most diverse robot manipulation datasets ever collected using human creativity and dexterity! 111 hours · 54 non-expert demonstrators · 2144 demonstrations. Download: [1/2]
1
49
157
@StanfordSVL
Stanford Vision and Learning Lab
1 year
Stanford Vision and Learning Lab is presenting 7 papers at #CORL2023 , including 3 oral presentations, and 3 award nominations, see below:
1
8
73
@StanfordSVL
Stanford Vision and Learning Lab
5 years
[1/2] Our lab has 3 papers accepted to NeurIPS 2019: 1. HYPE: Human Eye Perceptual Evaluation of Generative Models. Zhou and Gordon et al. (Oral) 2. SOCIAL-BIGAT: Multimodal Trajectory Forecasting using Bicycle-GAN and Graph Attention Networks. Kosaraju et al.
2
7
62
@StanfordSVL
Stanford Vision and Learning Lab
7 years
Stanford Vision and Learning Group website is online now! @drfeifei @silviocinguetta @jcniebles
0
19
42
@StanfordSVL
Stanford Vision and Learning Lab
5 years
We are hosting one of the 3 challenges of at CVPR20. Train your navigating agent in our simulator Gibson () and we will test it in the real world! The best solutions will showcase live during CVPR. More info:
1
8
39
@StanfordSVL
Stanford Vision and Learning Lab
5 years
Learning from hints (not demonstrations): A new paper on an important direction of RL for control where expert intuition can be used to guide learning without the need to provide optimal or even complete solutions.
@animesh_garg
Animesh Garg
5 years
Our new work at #Corl2019 will present RL with Ensemble of Suboptimal Teachers -aka- specify as much as you can easily, let learning handle the rest. Blog: Paper: w\ @andrey_kurenkov , A. Mandlekar, @RobobertoMM , @silviocinguetta
1
26
99
0
4
34
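The two tweets above describe guiding RL with hints from an ensemble of suboptimal teachers rather than complete demonstrations. As a rough illustration of the idea (not the AC-Teach algorithm itself), here is a sketch that lets a learned critic pick, at each step, between the learner's own action and the teachers' suggestions; policy, teachers, and critic are assumed callables.

```python
import numpy as np

def act_with_teachers(obs, policy, teachers, critic, epsilon=0.1):
    """Pick between the learner's action and suboptimal teacher hints.

    policy(obs)          -> action proposed by the learning agent
    teachers             -> list of callables mapping obs to suggested actions
    critic(obs, action)  -> scalar value estimate for the action
    """
    candidates = [policy(obs)] + [teacher(obs) for teacher in teachers]
    if np.random.rand() < epsilon:                  # keep exploring occasionally
        return candidates[np.random.randint(len(candidates))]
    values = [critic(obs, a) for a in candidates]   # otherwise trust the critic
    return candidates[int(np.argmax(values))]
```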
@StanfordSVL
Stanford Vision and Learning Lab
5 years
Work from our group in Robot Learning for Manipulation is finalist for best paper award at @icra2019 and is being presented tomorrow in Montreal. @drfeifei @silviocinguetta @animesh_garg @yukez @michellearning @leto__jean
@animesh_garg
Animesh Garg
5 years
Excited to be at #ICRA2019 Best Paper Award talk Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks Paper: Video:
5
33
153
0
6
28
@StanfordSVL
Stanford Vision and Learning Lab
5 years
We are happy to announce our ICCV19 Workshop on Visual Perception for Navigation in Human Environments: The JackRabbot Social Robotics Dataset and Benchmark. Submission deadline August 20. For more info, contact @SHamidRezatofig and Roberto Martin-Martin
0
5
23
@StanfordSVL
Stanford Vision and Learning Lab
6 months
Are you a passionate and experienced researcher in robotics with knowledge in computer vision? Do you want to build impactful robotic systems? Stanford Vision and Learning lab (SVL) is searching for a Postdoctoral Fellow with your skills.
2
3
23
@StanfordSVL
Stanford Vision and Learning Lab
5 years
Our focus on robot learning from a single video example of a task has resulted in a line of work that combines symbolic systems with neural networks
@animesh_garg
Animesh Garg
5 years
This continues our efforts in neuro-symbolic planning for one-shot imitation in multi-step reasoning domains. 1. Neural Task Programs: 2. Neural Task Graphs: 3. Continuous Relaxation of Symbolic Planner:
0
0
17
0
4
22
@StanfordSVL
Stanford Vision and Learning Lab
5 years
The group wins the best paper award at #ICRA2019 for work on multimodal state representations in robot policy learning
@animesh_garg
Animesh Garg
5 years
Tweet media one
11
12
147
0
1
21
@StanfordSVL
Stanford Vision and Learning Lab
6 years
New work on pose tracking from our group
@yukez
Yuke Zhu
6 years
We have just released our new work on 6D pose estimation from RGB-D data -- real-time inference with end-to-end deep models for real-world robot grasping and manipulation! Paper: Code: w/ @danfei_xu @drfeifei @silviocinguetta
Tweet media one
3
23
99
1
3
21
@StanfordSVL
Stanford Vision and Learning Lab
4 months
Stanford Vision and Learning Lab members and alumni: we are organizing a SVL+Alumni meet up at CVPR’24! Please check your emails for more details.
1
1
22
@StanfordSVL
Stanford Vision and Learning Lab
6 years
Congrats @silviocinguetta for the appointment as the inaugural Mindtree Faculty Scholar at Stanford
@silviocinguetta
Silvio Savarese
6 years
I am delighted to share the news that I have been appointed the inaugural Mindtree Faculty Scholar at Stanford. More info:
7
7
114
0
3
20
@StanfordSVL
Stanford Vision and Learning Lab
6 years
@animesh_garg
Animesh Garg
6 years
Check out our paper on Task-Oriented Grasping at #RSS2018 at 930am on Wed 06/27 (find full paper here: )
Tweet media one
0
2
21
0
5
19
@StanfordSVL
Stanford Vision and Learning Lab
5 years
A new work on structuring diverse semantics in 3D space that yielded the 3D Scene Graph! It’s showcased on the Gibson database by annotating the models with diverse semantics using a semi-automated method.
@ir0armeni
Iro Armeni
5 years
In what space should diverse semantics be grounded, and what should the structure be? 3D Scene Graph is a 4-layer structure for unified semantics, 3D space & camera. We demonstrate it on Gibson models with an automated labeling method. Data available to download!
0
4
16
0
7
19
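The 3D Scene Graph above organizes semantics into a 4-layer structure (building, rooms, objects, cameras) grounded in 3D space. A minimal sketch of such a layered container is below; the field names are illustrative assumptions, not the released data schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative 4-layer scene-graph containers (building -> room -> object /
# camera); field names are assumptions, not the released 3D Scene Graph format.
@dataclass
class CameraNode:
    pose: Tuple[float, ...]                      # 6-DoF camera pose

@dataclass
class ObjectNode:
    category: str
    centroid: Tuple[float, float, float]         # location in 3D space

@dataclass
class RoomNode:
    name: str
    objects: List[ObjectNode] = field(default_factory=list)
    cameras: List[CameraNode] = field(default_factory=list)

@dataclass
class BuildingNode:
    name: str
    rooms: List[RoomNode] = field(default_factory=list)
```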
@StanfordSVL
Stanford Vision and Learning Lab
1 year
Composable Part-based Manipulation uses object-part decomposition & part-part correspondences to improve generalization of robotic manipulation across object categories. @Weiyu_Liu_ Poster: Thu 9 2:45-3:30 pm Website: w/ Tucker Hermans & @animesh_garg
0
3
19
@StanfordSVL
Stanford Vision and Learning Lab
6 years
Thanks to @elonmusk for making time, and to @russelljkaplan, @shivon and @karpathy for making it happen
@russelljkaplan
Russell Kaplan
6 years
Fun chat last night at the Tesla party with Elon and some @StanfordCVGL folks #CVPR2018
Tweet media one
2
3
49
1
3
18
@StanfordSVL
Stanford Vision and Learning Lab
5 years
This continues an important line of work in policy learning with large datasets. More importantly, this is the question to answer if we are to create an analog of "ImageNet" for robotics: we need to both collect large datasets and have algorithms to leverage this data!
@animesh_garg
Animesh Garg
5 years
New work exploring whether a policy can be learned only from an offline, off-policy dataset. IRIS: Implicit Reinforcement without Interaction at Scale Video: Seattle Robotics @NvidiaAI @AjayMandlekar @drfeifei B. Boots F. Ramos D. Fox
1
10
46
0
1
18
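The tweets above ask whether a policy can be learned purely from a fixed, off-policy dataset. As a generic illustration of offline policy learning (plain behavior cloning, not the IRIS algorithm), here is a short PyTorch training loop over a static dataset of (observation, action) tensors; dimensions and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

def train_offline(dataset, obs_dim, act_dim, epochs=10, lr=1e-3):
    """Behavior cloning from a fixed dataset -- no environment interaction."""
    policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                           nn.Linear(256, act_dim))
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=True)
    for _ in range(epochs):
        for obs, act in loader:            # samples come only from the dataset
            loss = nn.functional.mse_loss(policy(obs), act)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```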
@StanfordSVL
Stanford Vision and Learning Lab
7 years
Thanks a lot for the support @NvidiaAI
@NVIDIAAIDev
NVIDIA AI Developer
7 years
We did it again! Last night we handed out another 15 NVIDIA V100s to the world's smartest #AI researchers at #ICML2017
Tweet media one
4
43
143
0
3
15
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Joint 2D-3D-Semantic data for scene understanding is out! 70K+ images and 6 mutually registered modalities in 2D&3D:
Tweet media one
0
10
14
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Congratulations to Ashesh, Amir, Silvio and Ashutosh for the best student paper award at CVPR 2016!! Structural-RNN!!
Tweet media one
1
1
11
@StanfordSVL
Stanford Vision and Learning Lab
5 years
Continued efforts in larger-scale crowdsourced robotics for dataset creation in setups where engineered solutions are hard, simulation is tricky, and pure compute has low success. The diversity of human cognitive reasoning and dexterity provides so many ways to do the same task!
@animesh_garg
Animesh Garg
5 years
Scaling crowdsourcing robotics with Human Reasoning & Dexterity for Large-Scale Dataset Creation Blog: Paper: Webpage: @AjayMandlekar @yukez @silviocinguetta @drfeifei J. Booher, M. Spero, A. Tung, A. Gupta
1
41
141
0
1
11
@StanfordSVL
Stanford Vision and Learning Lab
5 years
A thorough evaluation of possible action spaces and their efficiency with RL for manipulation tasks. The answer is the best of both worlds: operational space control in end-effector space with learnable gains fares better than end-to-end image-to-torque
@animesh_garg
Animesh Garg
5 years
Which control spaces work well for RL in contact-rich robot manipulation -- 'Variable Impedance Control in End-Effector Space' #IROS2019 Paper: Video: w\ Roberto M-M., @michellearning , R. Gardener, @silviocinguetta , @leto__jean
Tweet media one
0
15
64
0
2
11
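Both tweets above report that operational-space (end-effector) impedance control with learnable gains works well as an RL action space. Here is a minimal sketch of the underlying control law, tau = J^T (Kp * dx - Kd * x_dot), where the learned policy would output the pose target and the gains at each step; the variable names are illustrative.

```python
import numpy as np

def impedance_torques(jacobian, x_err, x_dot, kp, kd):
    """Map an end-effector-space command to joint torques.

    jacobian : (6, n_joints) manipulator Jacobian
    x_err    : (6,) desired minus current end-effector pose error
    x_dot    : (6,) current end-effector velocity
    kp, kd   : (6,) stiffness and damping gains; in the variable-impedance
               setting these come from the learned policy at every step
    """
    wrench = kp * x_err - kd * x_dot          # spring-damper law in task space
    return jacobian.T @ wrench                # project the wrench into joint space
```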
@StanfordSVL
Stanford Vision and Learning Lab
6 years
@ChrisChoy208
Chris Choy
6 years
We created 4D ConvNets for 3D video perception :D Please check out our paper: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks, @cvpr2019
3
100
361
0
1
10
@StanfordSVL
Stanford Vision and Learning Lab
7 years
PhD student @lynetcha gives a great overview of ongoing work in point cloud segmentation
@twimlai
The TWIML AI Podcast
7 years
Be sure to check out #TWiMLTalk #123 ! Joined by @lynetcha , we discuss her work on SEGCloud, an end-to-end framework that performs 3D point-level segmentation. Head over to to listen!
Tweet media one
0
1
4
0
6
10
@StanfordSVL
Stanford Vision and Learning Lab
6 years
This is a very exciting new direction of research in our group: perception by combining vision with touch for manipulation.
@animesh_garg
Animesh Garg
6 years
Very excited to share our new paper on Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations. Featuring RL on real-robots from scratch in a matter of hours without any simulation! Video: arXiv:
1
5
34
0
3
10
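The tweets above describe learning a shared representation from vision and touch with self-supervision. As a rough illustration only (not the paper's architecture), here is a sketch that encodes each modality separately and fuses the features into one latent vector; layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Illustrative fusion of camera images and force/torque readings."""
    def __init__(self, image_channels=3, force_dim=6, latent_dim=128):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(image_channels, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.touch = nn.Sequential(nn.Linear(force_dim, 64), nn.ReLU())
        self.fuse = nn.Linear(32 + 64, latent_dim)

    def forward(self, image, force):
        features = torch.cat([self.vision(image), self.touch(force)], dim=-1)
        return self.fuse(features)            # shared representation for a policy
```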
@StanfordSVL
Stanford Vision and Learning Lab
6 years
Thanks @ToyotaResearch and @SAILToyota for the exciting new experimental platform!
@animesh_garg
Animesh Garg
6 years
New baby rolls into the SVL lab (@StanfordCVGL) - the @ToyotaResearch HSR. Looking forward to servicing some humans! @drfeifei @silviocinguetta
Tweet media one
4
4
41
0
3
9
@StanfordSVL
Stanford Vision and Learning Lab
7 years
Semantic Segmentation of 3D Point Clouds  #3DV @lynetcha , @ChrisChoy208   Iro Armeni, @JunYoungGwak , @silviocinguetta
Tweet media one
0
1
9
@StanfordSVL
Stanford Vision and Learning Lab
6 years
Congrats to all our graduates!
@drfeifei
Fei-Fei Li
6 years
Happy day for students, and proud day for advisor (i.e. me) and families on @Stanford Computer Science Dept Graduation Ceremony day :) Last picture is for CS231n instructors/TAs. @jcjohnss @syeung10 @cs231n
Tweet media one
Tweet media two
Tweet media three
Tweet media four
1
17
323
0
0
9
@StanfordSVL
Stanford Vision and Learning Lab
7 years
Great work in policy learning for navigation using visual info @ICRA2017 @drfeifei
Tweet media one
Tweet media two
0
2
8
@StanfordSVL
Stanford Vision and Learning Lab
6 years
If you are at #CVPR18 , there is a workshop organized by Berkeley, Stanford, and MIT on Beyond Supervised Learning, with a great lineup of speakers. At Ballroom G. Drop by! #perception #unsupervisedlearning #selfsupervisedlearning #transferlearning
Tweet media one
1
6
8
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Exciting news! CVGL got 6 papers accepted at ECCV, with an amazing unprecedented acceptance rate of ~85%!
0
3
7
@StanfordSVL
Stanford Vision and Learning Lab
7 years
Hot new paper on 3D reconstruction by @NoSleepMusings @ChrisChoy208 @animesh_garg and team
@animesh_garg
Animesh Garg
7 years
Our new work on Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image aka DeformNet
0
1
13
0
1
8
@StanfordSVL
Stanford Vision and Learning Lab
6 years
Congrats to all the authors
@drfeifei
Fei-Fei Li
6 years
Congratulations to Best Paper Award by @zamir_ar @silviocinguetta @StanfordCVGL & collaborators at #CVPR2018 , a computational map of perceptual task transfer learning!!
3
63
215
0
0
8
@StanfordSVL
Stanford Vision and Learning Lab
7 years
SegCloud was a recent Spotlight at 3DV and is the current leader on the reduced-8 benchmark.
@StanfordSVL
Stanford Vision and Learning Lab
7 years
Semantic Segmentation of 3D Point Clouds  #3DV @lynetcha , @ChrisChoy208   Iro Armeni, @JunYoungGwak , @silviocinguetta
Tweet media one
0
1
9
0
1
7
@StanfordSVL
Stanford Vision and Learning Lab
1 year
Humans have the remarkable ability to make and use tools to help them solve tasks. We introduce a framework for robots also to jointly learn to design and use tools via reinforcement learning. Poster: Thu 9th 2:45-3:30 pm Website:
@stephentian_
Stephen Tian
1 year
A robot may be unable to complete a task when limited by its morphology. Remarkably, people and some animals can get around this by not only using but also *designing* tools. We explore whether robots can also do this in our latest work! 🌐 🧵👇
1
14
55
1
1
7
@StanfordSVL
Stanford Vision and Learning Lab
5 years
Lab PI @drfeifei wins the PAMI Longuet-Higgins Prize for the effort on ImageNet with @lijiali_vision @jiadeng @RichardSocher and team. Congrats everyone
0
0
7
@StanfordSVL
Stanford Vision and Learning Lab
7 years
Discussing the remarkable vision capacity of humans with roboticists at @IROS2017 #inspiration
@animesh_garg
Animesh Garg
7 years
Plenary by @drfeifei at @IROS2017 The Cambrian explosion in robots: Robots need eyes - Perception meets Robotics
Tweet media one
0
3
15
0
5
7
@StanfordSVL
Stanford Vision and Learning Lab
5 years
Come check out the poster on recent work in imitation learning at #NeurIPS
@animesh_garg
Animesh Garg
5 years
"AC-Teach: A Bayesian Actor-Critic Method for Policy Learning with an Ensemble of Suboptimal Teachers" Deep RL Workshop - Sat Dec 14 - Paper: @andrey_kurenkov @AjayMandlekar @RobobertoMM @silviocinguetta @StanfordSVL
1
1
6
0
1
7
@StanfordSVL
Stanford Vision and Learning Lab
7 years
@animesh_garg
Animesh Garg
7 years
Exciting lineup of accepted papers at #CoRL2017
0
7
24
0
5
6
@StanfordSVL
Stanford Vision and Learning Lab
6 years
Students from the lab @RanjayKrishna and Apoorva Doradula describe their new work on engagement learning!
@Stanford
Stanford University
6 years
Ranjay Krishna and Apoorva Doradula use conversations as a strategy for training AI systems. They call it engagement learning – an AI “learns what kinds of concepts people like to discuss and how to ask questions to get an informative response.”
Tweet media one
0
17
62
0
2
5
@StanfordSVL
Stanford Vision and Learning Lab
1 year
NOIR is a brain-robot interface that enables humans to use their brain signals to command robots to perform 20 challenging everyday activities, such as cooking, cleaning, & playing games. Poster: Wed 8th 5:15pm Website:
@RuohanZhang76
Ruohan Zhang
1 year
Introducing our new work @corl_conf 2023, a novel brain-robot interface system: NOIR (Neural Signal Operated Intelligent Robots). Website: Paper: 🧠🤖
20
184
754
1
0
6
@StanfordSVL
Stanford Vision and Learning Lab
6 years
Check out the workshop on building causal models to go beyond reinforcement learning in robotics, organized by lab members @animesh_garg and @yukez
@animesh_garg
Animesh Garg
6 years
Excited about the lineup of speakers to explore causality in robotics. RSS Workshop on Causal-Imitation. CfP is out and deadline for posters is Jun 3. Consider submitting! @yukez @ermonste @jiajunwu_cs and Michael Laskey
0
4
20
0
1
6
@StanfordSVL
Stanford Vision and Learning Lab
5 years
[2/2] 3. Regression Planning Networks. Xu et al. Congratulations to all the authors for their amazing work!
0
1
6
@StanfordSVL
Stanford Vision and Learning Lab
6 years
Exciting work on video analysis and another step on the path to having autonomous agents attempting to make us pasta
@animesh_garg
Animesh Garg
6 years
A great talk by Shyamal Buch at #CVPR18 on our paper on Weakly supervised reference aware visual grounding. @jcniebles @drfeifei @deanh619 Lucio Dery from @StanfordCVGL
Tweet media one
Tweet media two
2
12
61
0
2
6
@StanfordSVL
Stanford Vision and Learning Lab
7 years
Fresh work from the lab!
@animesh_garg
Animesh Garg
7 years
Our new work on weakly supervised GANs for 3D shape from images Paper: Podcast:
1
4
5
0
1
5
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Stanford scientists are teaching a robot how not to be awkward in public via @qz
0
2
4
@StanfordSVL
Stanford Vision and Learning Lab
7 years
JR is featured at the #NVIDIA GPU Tech Conference #GTC May 8-11 in Silicon Valley. Stop by!
Tweet media one
0
1
4
@StanfordSVL
Stanford Vision and Learning Lab
6 years
#Stanford's #Jackrabbot is featured in the NYT! @nytimes What Comes After the Roomba?
0
2
5
@StanfordSVL
Stanford Vision and Learning Lab
1 year
VoxPoser uses an LLM+VLM to create 3D value maps in the robot workspace, which guide a motion planner to synthesize behaviors for everyday manipulation tasks w/o requiring robot data. Oral: Wed 8th 11:50 am Poster: Wed 8th 5:15-6:00 pm
@wenlong_huang
Wenlong Huang
1 year
How to harness foundation models for *generalization in the wild* in robot manipulation? Introducing VoxPoser: use LLM+VLM to label affordances and constraints directly in 3D perceptual space for zero-shot robot manipulation in the real world! 🌐 🧵👇
10
143
584
1
2
5
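The tweets above describe composing LLM/VLM-labeled affordances and constraints into 3D value maps that a motion planner then follows. Below is a very rough sketch of that composition over a voxel grid, assuming the affordance and constraint maps are already given as arrays (producing them is the paper's contribution and is not shown here).

```python
import numpy as np

def compose_value_map(affordance, constraints, penalty=10.0):
    """Fold constraint (cost) maps into an affordance map over one voxel grid."""
    value = affordance.astype(float)
    for cost in constraints:
        value -= penalty * cost               # discourage constrained voxels
    return value

def best_target_voxel(value):
    """Pick the highest-value voxel as a target for the motion planner."""
    return np.unravel_index(np.argmax(value), value.shape)
```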
@StanfordSVL
Stanford Vision and Learning Lab
1 year
MimicPlay is an imitation learning algorithm that uses cheap human play data to unlock real-time planning for long-horizon manipulation. Oral: Thu 9th 8:30 am Poster: Thu 9th 2:45-3:30 pm Best paper/Best student paper finalist Best system paper finalist
@chenwang_j
Chen Wang
2 years
How to teach robots to perform long-horizon tasks efficiently and robustly🦾? Introducing MimicPlay - an imitation learning algorithm that uses "cheap human play data". Our approach unlocks both real-time planning through raw perception and strong robustness to disturbances!🧵👇
20
145
734
1
0
5
@StanfordSVL
Stanford Vision and Learning Lab
7 years
Exciting direction of research by SVL, the joint group of @silviocinguetta and @drfeifei
@animesh_garg
Animesh Garg
7 years
Our new paper on Neural Task Programming: Learning to Generalize Across Hierarchical Tasks
Tweet media one
3
67
203
0
0
5
@StanfordSVL
Stanford Vision and Learning Lab
6 years
Glad to share some of our efforts in healthcare
@StanfordAILab
Stanford AI Lab
6 years
Check out our first blog post about SAIL research, about how computer vision can be used to enable smart hospitals By Albert Haque & Michelle Guo ( @mshlguo ), led by professors Terry Platchek ( @TerryPlatchek ), Arnold Milstein, & Fei-Fei Li ( @drfeifei )
1
11
28
0
0
4
@StanfordSVL
Stanford Vision and Learning Lab
1 year
Sequential Dexterity is a system that learns to chain multiple dexterous manipulation policies to tackle long-horizon manipulation tasks in both simulation and the real world. Poster: Tue 7th 2:45-3:30 pm
@chenwang_j
Chen Wang
1 year
How to chain multiple dexterous skills to tackle complex long-horizon manipulation tasks? Imagine retrieving a LEGO block from a pile, rotating it in-hand, and inserting it at the desired location to build a structure. Introducing our new work - Sequential Dexterity 🧵👇
26
90
469
1
0
4
@StanfordSVL
Stanford Vision and Learning Lab
6 years
Lab PI @silviocinguetta meets up with dignitaries from the IT ministry of India to discuss AI challenges for India
@rsprasad
Ravi Shankar Prasad
6 years
It was an extra-ordinary visit to Stanford University. Had a wonderful exchange with its brilliant faculty on use of technology for human development and the application of AI and the challenge it poses. The faculty was deeply impressed with India's story of digital inclusion.
Tweet media one
Tweet media two
Tweet media three
94
323
2K
0
0
4
@StanfordSVL
Stanford Vision and Learning Lab
10 years
Four papers have been accepted in CVPR 2015 including one oral presentation! Congratulations to all the authors!
0
0
4
@StanfordSVL
Stanford Vision and Learning Lab
6 years
@animesh_garg
Animesh Garg
6 years
This is being presented at #ICRA2018 by @StanfordCVGL student @danfei_xu on Wed 05/23 in Brisbane. Come say hi if you are attending -
0
3
19
0
1
4
@StanfordSVL
Stanford Vision and Learning Lab
5 years
A new effort from SVL and the JackRabbot team! New dataset and benchmark for robot perception in human environments. The winners of the first challenge on pedestrian detection and tracking will be presented at our workshop at #ICCV19 !
@RobobertoMM
Roberto
5 years
Want to see what JackRabbot sees? Finally a comprehensive dataset from the point of view of a mobile manipulating robot!
0
2
9
0
0
4
@StanfordSVL
Stanford Vision and Learning Lab
5 years
New study that analyzes different action spaces for RL in robot manipulation in the quest to find the best one. Guess what? The best method is a combination of operational space control (1986) with learned adaptive gain tuning. To appear at #IROS2019
@animesh_garg
Animesh Garg
5 years
What action space should we use for contact-rich manipulation? We show that Variable Impedance Control in End-Effector Space outperforms most other choices. Paper: w\ R. Martin-Martin, @michellearning , R. Gardner, @silviocinguetta , @leto__jean @StanfordSVL
Tweet media one
0
11
37
0
1
4
@StanfordSVL
Stanford Vision and Learning Lab
4 years
RL in multitask domains with shared underlying latent information can be made more effective through learning action space manifolds. Check out this effort on Learning latent Action Spaces at #ICRA2021
@animesh_garg
Animesh Garg
4 years
When you try to open a new door, do you try to yank it up? Likely, no. Why should your robot continue to do so! Check out our #ICRA2021 paper on learning action spaces for efficient contact-rich manipulation. paper:
1
10
64
0
2
4
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Stanford’s ‘Jackrabbot’ robot will attempt to learn the arcane and unspoken rules via @techcrunch
0
0
3
@StanfordSVL
Stanford Vision and Learning Lab
9 years
Our paper "Robust Single-View Instance Recognition", by D. Held, S. Thrun , S. Savarese, has been accepted to ICRA
0
1
3
@StanfordSVL
Stanford Vision and Learning Lab
5 years
Exciting new line of work from the group on 3D vision at #CVPR2019 by @lynetcha @SHamidRezatofig @silviocinguetta
@lynetcha
Lyne Tchapmi
5 years
Announcing the launch of "Completion3D: Stanford 3D Point Cloud Completion Benchmark” . Come check out our associated paper "TopNet: Structural Point Cloud Decoder" at #CVPR2019 ! @silviocinguetta @SHamidRezatofig @StanfordSVL
Tweet media one
1
2
10
0
0
2
@StanfordSVL
Stanford Vision and Learning Lab
5 years
Come check out the lab's new work at #ICRA2019 14:40-15:55, Grasping II 1.2.12 MoB1-12, Room 220 @silviocinguetta @ken_goldberg @animesh_garg @andrey_kurenkov
@animesh_garg
Animesh Garg
5 years
Presenting on Monday at #ICRA2019 Mechanical Search: Multi-Step Retrieval of a Target Object Occluded by Clutter. w\ @ken_goldberg @andrey_kurenkov Paper: Video:
1
3
14
0
0
3
@StanfordSVL
Stanford Vision and Learning Lab
1 year
Do you know how to make a dumpling🥟? Our robot🤖does! RoboCook is a robot system designed for long-horizon manipulation of elasto-plastic objects with a variety of tools. Oral: Tue 7th 8:50 am Poster: Tue 7th 2:45-3:30 pm Best system paper finalist
@HaochenShi74
Haochen Shi
1 year
Do you know how to make a dumpling🥟? Our robot🤖does! Introducing RoboCook: Long-Horizon Elasto-Plastic Object Manipulation with Diverse Tools. Project website: Here we show how RoboCook makes a dumpling under external human perturbation. Thread🧵👇
4
39
172
1
0
3
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Congrats @silviocinguetta. Looking forward to exciting research at the Stanford AI Center
0
2
3
@StanfordSVL
Stanford Vision and Learning Lab
8 years
#3DV is sold out. Action packed schedule. See you all next week.
@ozansener
Ozan Sener
8 years
Vaoww!! #3DV is sold out, a great conference is waiting for all of us. See you all next Tuesday.
0
0
2
0
0
2
@StanfordSVL
Stanford Vision and Learning Lab
8 years
#3DV2016 Richard Szeliski Keynote on 3D Reconstruction for image based rendering
Tweet media one
0
0
2
@StanfordSVL
Stanford Vision and Learning Lab
5 years
Something all conferences should do
@animesh_garg
Animesh Garg
5 years
Robotics leading the way in inclusiveness by providing child care services at #ICRA2019
Tweet media one
1
8
40
0
0
2
@StanfordSVL
Stanford Vision and Learning Lab
8 years
3DV program and tutorials are now online. Register now and mark your calendars for Oct 25-28
@silviocinguetta
Silvio Savarese
8 years
The program for #3DV2016 is now available! Very exciting list of speakers, talks, demos, tutorials, and exhibitors!
0
10
9
0
4
2
@StanfordSVL
Stanford Vision and Learning Lab
6 years
A very timely workshop on causality and robotics. Consider participating! Also see a recent piece on Judea Pearl's perspective on causality models in AI
@animesh_garg
Animesh Garg
6 years
Excited about the lineup of speakers to explore causality in robotics. RSS Workshop on Causal-Imitation. CfP is out and deadline for posters is Jun 3. Consider submitting! @yukez @ermonste @jiajunwu_cs and Michael Laskey
0
4
20
0
1
2
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Our work on deep layout estimation has been picked up and featured in CVPR Daily:
0
2
2
@StanfordSVL
Stanford Vision and Learning Lab
6 years
Exciting workshop at #rss2018 from lab folks @animesh_garg and @yukez
@animesh_garg
Animesh Garg
6 years
Elias Bareinboim from Purdue explaining the causal hierarchy from the Judea Pearl school of thought at the workshop on causality and imitation #RSS2018
Tweet media one
Tweet media two
0
1
6
0
0
2
@StanfordSVL
Stanford Vision and Learning Lab
8 years
@ChrisChoy208 from CVGL presenting at #eccv2016 . Great Job Chris!
Tweet media one
0
1
2
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Good news! We got two papers accepted to NIPS'16 (1 poster and 1 oral)
0
1
2
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Attending @cvpr2016 ? Vote for our 2019 CVPR Bid! More info at
0
0
2
@StanfordSVL
Stanford Vision and Learning Lab
9 years
The #3dv2016 website is live!
0
2
2
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Our paper won the Google runner-up best paper award at the Vision from Satellite to Street 2015 Workshop!
1
1
1
@StanfordSVL
Stanford Vision and Learning Lab
7 years
Congrats Ajay
@animesh_garg
Animesh Garg
7 years
Ajay Mandlekar from @StanfordCVGL gives a great talk at @IROS2017 @drfeifei @silviocinguetta
Tweet media one
Tweet media two
0
1
8
0
0
1
@StanfordSVL
Stanford Vision and Learning Lab
8 years
The deadline for early-bird registration for 3DV 2016 is approaching (9/16). Do not forget to register here #3dv2016
0
2
1
@StanfordSVL
Stanford Vision and Learning Lab
9 years
5 papers accepted to #cvpr2016 : 2 oral, 2 spotlight, 1 poster!
0
1
1
@StanfordSVL
Stanford Vision and Learning Lab
8 years
#3DV Program is online now. Exciting keynotes and talks starting next Tues 10/25
0
0
1
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Stanford team teaching robots manners via @abc7newsbayarea
0
0
1
@StanfordSVL
Stanford Vision and Learning Lab
7 years
New work from our group on zero-shot transfer in RL for robotics
@animesh_garg
Animesh Garg
7 years
Our new work on Zero-shot transfer across stochastic dynamics
Tweet media one
0
2
10
0
1
1
@StanfordSVL
Stanford Vision and Learning Lab
8 years
@silviocinguetta adding to the great conversation on AR/VR in vision at the @ToyotaResearch workshop
Tweet media one
0
1
1
@StanfordSVL
Stanford Vision and Learning Lab
8 years
We got the Stanford CIFE Seed Research Award to conduct research on parsing of buildings. See our latest work on it:
0
0
1
@StanfordSVL
Stanford Vision and Learning Lab
8 years
3DV 2016 is sold out. We look forward to seeing you all next Tuesday! #3dv2016
0
1
1
@StanfordSVL
Stanford Vision and Learning Lab
8 years
This Monday (10/17) we'll discuss building parsing, its potential and the AEC/FM industry at "The Modern Architect" radio show @KZSU 10-11AM
0
0
1
@StanfordSVL
Stanford Vision and Learning Lab
6 months
Stanford SVL is a vibrant community of faculty, postdocs and students. Our alumni have landed in prestigious positions in academia and industry. We strive for an integrative and diverse group. We especially encourage applications from traditionally underrepresented groups in AI.
1
0
1
@StanfordSVL
Stanford Vision and Learning Lab
8 years
BBC News - Jackrabbot: Why this robot is watching how you move
0
1
1
@StanfordSVL
Stanford Vision and Learning Lab
8 years
Exciting new datasets for progress in videos
0
2
1