Noah Snavely

@Jimantha

7,602
Followers
860
Following
55
Media
857
Statuses

3D vision fanatic. Professor @cornell_tech & Researcher @GoogleAI . He or they.

New York, NY
Joined June 2008
@Jimantha
Noah Snavely
4 years
Hello, view synthesis devotees. I invite you to some new work at @eccvconf . We gather tourist photos of famous landmarks and learn a new neural 3D representation that can synthesize new views with natural, modifiable lighting. We call it "Crowdsampling the Plenoptic Function".
10
124
569
@Jimantha
Noah Snavely
5 years
It turns out that YouTube has tons of videos of people pretending to be statues. This is great for learning about the 3D shape of people! Cool new work from @zl548 at CVPR19 from his Google internship.
2
53
235
@Jimantha
Noah Snavely
4 years
Attention all looking glass lovers: This tweet is a shameless plug for a CVPR 2020 paper that asks a dumb question and finds an interesting answer. Can you tell if an image has been horizontally flipped or not?
Tweet media one
10
41
235
@Jimantha
Noah Snavely
3 years
We couldn't find a Fundamental matrix visualizer online, so we made one for our vision course. If you are an F-matrix fan, take a look & tell us if you find any problems. And please send pointers to other demos! (Credits: Alek Curless, Sri Chakra Kumar)
9
38
231
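For the curious, the core operation behind any F-matrix visualizer is mapping a point in one image to its epipolar line in the other. Here is a minimal NumPy sketch of that mapping, using a made-up rank-2 fundamental matrix and made-up points purely for illustration (this is not the demo's actual code):

```python
import numpy as np

# A made-up rank-2 fundamental matrix for illustration only;
# a real F would come from calibration or 8-point estimation.
F = np.array([[ 0.0, -0.1,  0.2],
              [ 0.1,  0.0, -0.3],
              [-0.2,  0.3,  0.0]])

# A point in image 1, in homogeneous coordinates (u, v, 1).
x = np.array([100.0, 50.0, 1.0])

# Its epipolar line in image 2: l = F @ x, where l = (a, b, c)
# encodes the line a*u + b*v + c = 0.
l = F @ x

# A candidate match x' in image 2 should satisfy x'^T F x = 0;
# the residual measures its deviation from the epipolar constraint.
x_prime = np.array([120.0, 60.0, 1.0])
residual = float(x_prime @ F @ x)
```

Drawing `l` over image 2 for each point you click in image 1 is essentially all such a demo does.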
@Jimantha
Noah Snavely
5 years
Dear typesetting fanatics: I wrote a short LaTeX style guide with some tips & tricks that I find useful for making short and nice-looking papers. If you are working on ECCV papers or the like, maybe it will be useful to you, too.
5
60
213
@Jimantha
Noah Snavely
2 years
For stylization fans, @KaiZhang9546 's work called ARF: Artistic Radiance Fields is on Tuesday's docket at @eccvconf . It achieves nice, view-consistent 2D-to-3D style transfer results by fine-tuning a radiance field so that projections resemble the style of an input source image.
@_akhaliq
AK
2 years
ARF: Artistic Radiance Fields abs: project page: github: create high-quality artistic 3D content by transferring the style of an exemplar image, such as a painting or sketch, to NeRF and its variants
2
133
532
3
28
184
@Jimantha
Noah Snavely
7 months
Thank you to @TheOfficialACM for this really kind honor!
@cornell_tech
Cornell Tech
7 months
In honor of his technical achievements, Associate Professor Noah Snavely was recently named a 2023 @TheOfficialACM Fellow. Congrats, Noah! @Jimantha @Cornell #ACM #Fellowship #CS #Faculty #CornellTech
1
6
31
26
5
171
@Jimantha
Noah Snavely
1 year
Do you have the blues because you are getting broken 3D models from COLMAP or other 3D reconstruction pipelines? Ruojin has a nice new paper and codebase that can help! We invite you to check out our work on doppelganger images here:
@ruojin8
Ruojin Cai
1 year
Check out our #ICCV2023 paper called Doppelgangers. We train a classifier to detect distinct but visually similar image pairs ("doppelgangers") and apply it to SfM disambiguation, enabling COLMAP to create correct 3D models in hard cases. Project page:
2
36
191
1
22
132
@Jimantha
Noah Snavely
4 years
Dear city lovers: here's new work at @eccvconf on observing many images of a city over time, and learning to factor lighting effects from scene appearance. This factorization lets us relight new images, even from new cities. Here we learn from NYC and create a full day in Paris.
7
19
131
@Jimantha
Noah Snavely
2 years
To all the CVPR-heads out there -- check out @KaiZhang9546 's work on inverse rendering in this morning's oral session! Relightable 3D meshes from photos, with really beautiful results.
@_akhaliq
AK
2 years
IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images abs: project page:
0
33
148
1
13
129
@Jimantha
Noah Snavely
3 months
This is so cool -- congrats to @zhengqi_li , Richard Tucker, and @holynski_ !
@GoogleAI
Google AI
3 months
Congratulations to @zhengqi_li , Richard Tucker, @Jimantha , and @holynski_ . Their paper “Generative Image Dynamics” received the #CVPR2024 Best Paper Award. Read the paper:
Tweet media one
Tweet media two
8
29
180
7
4
119
@Jimantha
Noah Snavely
1 year
Really proud of @QianqianWang5 for the Best Student Paper Award-winning work she presented at the final session of @ICCVConference . Wonderful job, @QianqianWang5 ! And congrats to authors @ruojin8 , Yen-Yu Chang, @zhengqi_li , @BharathHarihar3 , and @holynski_ .
@holynski_
Aleksander Holynski
1 year
. @QianqianWang5 's 🎉Best Student Paper🎉 is being presented at #ICCV2023 tomorrow (Friday)! ▶️"Tracking Everything Everywhere All At Once"◀️ w/ Yen-Yu Chang, @ruojin8 @zhengqi_li @BharathHarihar3 @Jimantha Friday Afternoon Oral & Poster! Come say hi!
1
21
170
7
0
118
@Jimantha
Noah Snavely
3 years
Fitting a CVPR paper in exactly 8 pages is a real Procrustes-style exercise.
6
2
117
@Jimantha
Noah Snavely
4 years
Greetings from View Synthesis Land! Richard Tucker and I had a fun @cvpr2020 paper (from @GoogleAI ) called "Single-View View Synthesis with Multiplane Images". The code (and Colab) is now available. Have fun out there! web: Colab:
0
25
107
@Jimantha
Noah Snavely
5 years
Got an urge to render the world from Internet photo collections? The source code for @moustafaMeshry 's CVPR2019 best paper finalist is now available: . Have fun out there!
1
28
107
@Jimantha
Noah Snavely
5 years
Learned about Notre Dame Cathedral through computer vision and structure from motion, of all things, many years before I ever got a chance to visit. Very sad day.
Tweet media one
2
20
105
@Jimantha
Noah Snavely
1 year
Zhengqi’s new work is a very cool approach to single-image animation—these videos are really nifty! This work turns a still image into a looping video by predicting frequency-space motion. It can also make your image interactive. The demo is really nice!
@zhengqi_li
Zhengqi Li
1 year
Excited to share our work on Generative Image Dynamics! We learn a generative image-space prior for scene dynamics, which can turn a still photo into a seamless looping video or let you interact with objects in the picture. Check out the interactive demo:
19
154
855
4
8
96
@Jimantha
Noah Snavely
2 years
This is so cool! Check out Richard Bowen's work today at #3DV2022 . It considers what possible flow fields could arise if you were to hit a hypothetical "play" button on a still image. @3DVconf
@_akhaliq
AK
3 years
Dimensions of Motion: Learning to Predict a Subspace of Optical Flow from a Single Image abs:
Tweet media one
0
14
83
3
10
93
@Jimantha
Noah Snavely
4 years
Hey there—code and data for our Crowdsampling the Plenoptic Function paper from @eccvconf is now available for all you tourism-heads out there. github link:
@Jimantha
Noah Snavely
4 years
Hello, view synthesis devotees. I invite you to some new work at @eccvconf . We gather tourist photos of famous landmarks and learn a new neural 3D representation that can synthesize new views with natural, modifiable lighting. We call it "Crowdsampling the Plenoptic Function".
10
124
569
2
22
92
@Jimantha
Noah Snavely
5 years
Hello to all you light field lovers out there! We have new work with John Flynn and others on high-quality view synthesis from a camera array. We use soft layers to make nice pictures. Presented in Tuesday's afternoon oral session at @cvpr19 .
2
22
91
@Jimantha
Noah Snavely
4 years
I have a real soft spot for epipolar geometry—and so this tweet is a crass advertisement for some work of ours at @eccvconf that I think is nice. The idea is to learn local feature descriptors from pairs of images with known camera poses—no ground truth correspondence required.
6
11
91
@Jimantha
Noah Snavely
4 years
I think it is pretty neat. This is work from Cornell Tech with @zl548 , @XianWenqi , and @AbeDavis . You can find out more at , or watch this wonderful teaser video made by @AbeDavis .
8
20
83
@Jimantha
Noah Snavely
2 months
This is so cool! Check out @boyang_deng 's wonderful work on generating Streetscapes -- tours through imaginary street scenes, conditioned on a desired city layout and a text description. I like this wintry result a lot!
@boyang_deng
Boyang Deng
2 months
Thought about generating realistic 3D urban neighbourhoods from maps, dawn to dusk, rain or shine? Putting heavy snow on the streets of Barcelona? Or making Paris look like NYC? We built a Streetscapes system that does all these. See . (Showreel w/ 🔊 ↓)
3
18
114
0
4
83
@Jimantha
Noah Snavely
4 years
Maybe it's just me, but the award for the computer vision project whose webpage has survived the longest without breaking goes to "3D Photography on your Desk" by Jean-Yves Bouguet and Pietro Perona (1998).
2
6
82
@Jimantha
Noah Snavely
6 years
In need of many examples of camera trajectories from videos? Check out our new RealEstate10K dataset! . This is the kind of data we used in our recent Stereo Magnification work on view synthesis (with Tinghui Zhou).
2
38
80
@Jimantha
Noah Snavely
4 years
These results look amazing!
@_akhaliq
AK
4 years
NeX: Real-time View Synthesis with Neural Basis Expansion pdf: abs:
Tweet media one
7
68
301
1
11
79
@Jimantha
Noah Snavely
1 year
I'm really proud of @zhengqi_li , who put his heart into the DynIBaR work that got the Best Paper Honorable Mention nod at CVPR. And I'm really sad that he couldn't be there to experience it due to circumstances beyond his control. Thanks for the nice photo and note, @jon_barron !
0
2
78
@Jimantha
Noah Snavely
4 years
Hi everyone. I'm helping to organize tomorrow's ECCV 4D Vision Workshop. We have a lineup of great papers and speakers—some real vision enthusiasts—including @RaquelUrtasun , Michael Ryoo, @davsca1 , Drago Anguelov, @mapo1 , @xiaolonw , & Tom Funkhouser.
2
12
66
@Jimantha
Noah Snavely
7 years
Zhengqi Li (Cornell PhD student) presents: MegaDepth! Big (100K+), diverse dataset of RGBD images derived from Internet multi-view stereo. Good for training RGB -> depth, generalizable to other datasets (e.g. KITTI). Web: , arXiv:
Tweet media one
Tweet media two
1
17
65
@Jimantha
Noah Snavely
4 years
This is work with Zhiqiu Lin, Jin Sun, and @abedavis . You can check it out at or visit the CVPR Q&A on "Visual Chirality" on Thursday. Or watch this nice teaser video from @AbeDavis . Thanks! Now back to your timeline.
2
3
64
@Jimantha
Noah Snavely
2 years
I like this cool space-time view synthesis work from lead author @QianqianWang5 and some other friendly people.
@_akhaliq
AK
2 years
3D Moments from Near-Duplicate Photos abs: project page:
2
30
148
1
6
61
@Jimantha
Noah Snavely
4 years
Shamelessly plugging this talk tomorrow (Wednesday). My hat is off to the 3DGV organizers for putting together a great series of talks on cool 3D vision-style work!
@YasutakaFuruka1
Yasutaka Furukawa
4 years
@3_dgv Seminar in 2 days! @Jimantha Noah Snavely will talk about "The Plenoptic Camera", joined by @_pratul_ Pratul Srinivasan and Rick Szeliski. Please distribute the news to students/members in your groups. YouTube: 3/10 11am Pacific 3/10 19:00 UK
0
11
40
1
3
60
@Jimantha
Noah Snavely
2 years
This is so cool -- congrats, @QianqianWang5 !
@CornellCIS
Cornell Bowers Computing and Information Science
2 years
The Google Ph.D. Fellowship Program has selected @QianqianWang5 as one of its 2022 fellows. “I hope that my technology can enable us to create a rich and realistic virtual world,” - Qianqian Wang, computer science Ph.D. student at @Cornell_tech Read more:
Tweet media one
0
0
27
5
1
57
@Jimantha
Noah Snavely
5 years
Hello to all you fashion-heads out there! We invite you to our new ICCV paper on analyzing clothing in millions of photos around the world. We can discover world events and festivals purely from apparel! With U. Mall, K. Matzen, B. Hariharan & K. Bala.
Tweet media one
Tweet media two
Tweet media three
1
10
54
@Jimantha
Noah Snavely
4 years
For all you inverse rendering fanatics out there, some great work on recovering shape, glossy material, and lighting from multiple photos. This is this work of @KaiZhang9546 , Fujun Luan, @QianqianWang5 , and Kavita Bala from @cs_cornell , @CornellECE , and @cornell_tech .
@_akhaliq
AK
4 years
PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting pdf: abs: project page:
0
8
64
1
7
55
@Jimantha
Noah Snavely
4 years
Someone in NYC today was showing off some very precocious parallel parking skills.
Tweet media one
1
4
53
@Jimantha
Noah Snavely
4 years
We invite you to check out this nice work on view synthesis for dynamic scenes! Work from @zl548 during his Adobe internship with @oliverwang81 and @simon_niklaus .
@simon_niklaus
Simon Niklaus
4 years
Really great work we did with @zl548 on practical novel view synthesis in space and time. Take any video and move the camera, or slow down the time, or both! Website: With: @zl548 , @oliverwang81 , @Jimantha
6
44
266
2
2
53
@Jimantha
Noah Snavely
4 years
Very nice work led by @QianqianWang5 on generalizing NeRF by incorporating principles from classic image-based rendering. Along with other work like pixelNeRF and GRF, I'm excited by these demonstrations of cross-scene generalization. (And I love this example miniature scene!)
@jon_barron
Jon Barron
4 years
Training NeRFs per-scene is so 2020. Inspired by image based rendering, IBRNet does amortized inference for view synthesis by learning how to look at input images at render time. 15% drop in error, 80% fewer FLOPs than NeRF. Great work @QianqianWang5 !
2
82
455
0
6
54
@Jimantha
Noah Snavely
6 years
In case you missed it at ECCV: @zl548 has a new dataset called CGIntrinsics. Ludicrously high-quality CG renderings for learning intrinsic images. You can predict state-of-the-art intrinsic images on real photos just by training on CG data! #ECCV2018
Tweet media one
0
23
52
@Jimantha
Noah Snavely
4 years
It feels like CVPR20 ended 9 years ago—but I'm only now checking it out. I recommend the great FATE tutorial. On top of the challenges outlined, I imagine there are hurdles even in spearheading such a tutorial—thank you, @timnitGebru & @cephaloponderer !
0
4
51
@Jimantha
Noah Snavely
4 years
I got the chance to read this paper in detail recently, and it is really cool, especially for all you feature matching–heads out there! I love the idea of computing descriptors on the basis of two images at once. Nice work, @oliviawiles1 , Sebastien Ehrhardt, and Andrew Zisserman!
@ducha_aiki
Dmytro Mishkin 🇺🇦 @ECCV2024
4 years
D2D: Learning to find good correspondences for image matching and manipulation @oliviawiles1 , Sebastien Ehrhardt, Andrew Zisserman, @Oxford_VGG Idea: extract features conditionally on 2nd image. 1/
Tweet media one
Tweet media two
Tweet media three
3
26
75
0
9
51
@Jimantha
Noah Snavely
2 years
This nice work on surface reconstruction from Internet photos is being presented today @siggraph !
@JiamingSuen
Jiaming Sun
2 years
Glad to share our work “Neural 3D Reconstruction in the Wild” in SIGGRAPH 2022! We show that with a clever sampling strategy, neural-based 3D reconstruction can be better and faster than COLMAP. Check out the project page at: .
7
55
319
0
2
49
@Jimantha
Noah Snavely
4 years
Very creative use of data!
@_akhaliq
AK
4 years
Reconstructing 3D Human Pose by Watching Humans in the Mirror pdf: abs: project page:
2
26
114
1
2
43
@Jimantha
Noah Snavely
4 years
I've been thinking about this horrible situation all day, and can't imagine what Timnit is going through.
2
0
43
@Jimantha
Noah Snavely
4 years
Wow -- these results look amazing!
@duck
Daniel Duckworth
4 years
Our paper, “NeRF in the Wild”, is out! NeRF-W is a method for reconstructing 3D scenes from internet photography. We apply it to the kinds of photos you might take on vacation: tourists, poor lighting, filters, and all. (1/n)
75
1K
6K
1
5
43
@Jimantha
Noah Snavely
4 years
I'm really excited about this work!
@akanazawa
Angjoo Kanazawa
4 years
View synthesis is super cool! How can we push it further to generate the world *far* beyond the edges of an image? We present Infinite Nature, a method that combines image synthesis and 3D to generate long videos of natural scenes from a single image.
19
457
2K
2
1
42
@Jimantha
Noah Snavely
3 years
If you are a big fan of solids of revolution like me, you might like this very nice work from Shangzhe on modeling them from single images.
@elliottszwu
Elliott / Shangzhe Wu
3 years
Let's turn photos of ancient "revolutionary" (rotationally symmetric) artefacts into 3D and rotate them, or even change the lighting! Our model learns to de-render a single image of a vase into shape, albedo, material & lighting, from just a single-image collection. #CVPR2021
4
39
256
0
5
39
@Jimantha
Noah Snavely
4 years
Totally agree with Beth: "The violence directed at Asian Americans, especially women, children and elderly, is against the very core values America is built on. This is why I am standing up and speaking up today."
0
2
38
@Jimantha
Noah Snavely
5 years
Very cool dataset of historical stereo image pairs!
@CSProfKGD
Kosta Derpanis at #ECCV2024 Milan 🇮🇹
5 years
Xuan Luo, Yanmeng Kong, Jason Lawrence, Ricardo Martin-Brualla, Steve Seitz, KeystoneDepth: Visualizing History in 3D
Tweet media one
0
7
24
1
11
37
@Jimantha
Noah Snavely
3 years
Hi all, please consider nominating yourself to be reviewer for #CVPR2022 . And please pass the word along, especially to those whose voices are not well represented in the vision community. This is one way to help guide the field.
@CVPR
#CVPR2024
3 years
#CVPR2022 is seeking additional reviewers. If interested or you want to nominate someone, please fill out the following reviewer nomination form:
Tweet media one
1
43
85
0
11
34
@Jimantha
Noah Snavely
1 year
Really nice work from @zhengqi_li that gets very impressive results on view synthesis for dynamic scenes!
@zhengqi_li
Zhengqi Li
1 year
Check out our CVPR 2023 Award Candidate paper, DynIBaR! DynIBaR takes monocular videos of dynamic scenes and renders novel views in space and time. It addresses limitations of prior dynamic NeRF methods, rendering much higher quality views.
3
92
448
0
1
33
@Jimantha
Noah Snavely
4 years
I love these wonderful interpolations and this very nice project!
@BenMildenhall
Ben Mildenhall
4 years
From our latest project, an homage to the original Photo Tourism visualizations by @Jimantha et al. - interpolating between camera pose, focal length, aspect ratio, and scene appearance from different tourist images. More details at @_pratul_ @jon_barron
4
8
52
0
1
34
@Jimantha
Noah Snavely
5 years
I'm a big logo head! I keep seeing ads for Zenni on the train. I'm intrigued by how the stylized Z and N are exact mirror images here, but not in "real life" - one has horizontal lines, the other vertical. Yet no problem interpreting the logo. Cool Gestalt-style logic at work!
Tweet media one
0
1
33
@Jimantha
Noah Snavely
4 years
There are few things I find more terror-inducing than cold calling people -- but I am finding that making US election-related volunteer calls leads to some pretty nice conversations. Some folks just want to chat right now.
0
0
34
@Jimantha
Noah Snavely
10 months
This was really cool! Thank you for organizing @elliottszwu , @ruoshi_liu , and @Haian_Jin ! It was nice seeing everyone at @cornell_tech .
@ruoshi_liu
Ruoshi Liu @ ECCV
10 months
Organizing the first NYC vision workshop was super fun! Shout out to other organizers @elliottszwu @Haian_Jin and especially @Jimantha for the generous support!
Tweet media one
3
8
65
0
2
32
@Jimantha
Noah Snavely
3 years
A bunch of us are hanging out inside a nice #ICCV2021 video chat saloon type interface chatting about Infinite Nature right now!
@akanazawa
Angjoo Kanazawa
4 years
View synthesis is super cool! How can we push it further to generate the world *far* beyond the edges of an image? We present Infinite Nature, a method that combines image synthesis and 3D to generate long videos of natural scenes from a single image.
19
457
2K
0
3
32
@Jimantha
Noah Snavely
3 months
This work led by @Haian_Jin is really nice. It takes text-to-image models and teases out their capability to light objects in a controllable way, much like Zero123 does for camera viewpoint. I'm really surprised that conditioning on environment maps can work this well!
@Haian_Jin
Haian Jin
3 months
Check out our recent work “Neural Gaffer: Relighting Any Object via Diffusion” 📷🌈, an end-to-end 2D relighting diffusion model that accurately relights any object in a single image under various lighting conditions. 🧵1/N: Website:
3
18
79
1
5
32
@Jimantha
Noah Snavely
4 years
Yes, I hope our international students (and H-1B holders) can breathe a little easier now.
@informor
Mor Naaman (@[email protected])
4 years
I'm thinking tonight about our international students, PhD and MS students who over the last few years have faced so much uncertainty about their very presence in this country. This is a great moment for them, and a great moment for US universities and the US economy.
14
205
2K
1
0
31
@Jimantha
Noah Snavely
4 years
Is it just me or are ECCV decisions up already? @eccvconf
6
2
30
@Jimantha
Noah Snavely
2 years
This really fun work — Infinite Nature synthesis trained on still photos — with @zhengqi_li , @QianqianWang5 , and @akanazawa will be live at #ECCV2022 on Wednesday! Thanks, and have a great day.
@akanazawa
Angjoo Kanazawa
2 years
A new follow up to infinite nature is out! This time we show how an infinite nature model can be trained on *single image* collections, without any multi-view or video supervision at training time! We call it infinite nature 𝘻𝘦𝘳𝘰 since it requires no video 🙂 #ECCV2022 oral
3
17
193
0
0
30
@Jimantha
Noah Snavely
2 months
I'm a big fan of work on visual discovery, and this work on using diffusion models for data mining is really cool!
@shiryginosar
Shiry Ginosar
2 months
Image synthesis models can be used for visual data mining! See our new #ECCV2024 paper: "Diffusion Models as Data Mining Tools." Project page: Paper: 1/9
Tweet media one
4
21
89
0
1
31
@Jimantha
Noah Snavely
3 years
Happy Lunar New Year, and to all you astronomy fanatics a question -- If you and your family lived for generations in a village on the far side of the Moon, would you realize that the Earth existed?
3
0
29
@Jimantha
Noah Snavely
4 years
These depth maps look amazing! Nice work, @XuanLuo14 and co-authors!
@jbhuang0604
Jia-Bin Huang
4 years
Check out our #SIGGRAPH2020 paper on Consistent Video Depth Estimation. Our geometrically consistent depth enables cool video effects to a whole new level! Video: Paper: Project page:
10
212
887
1
2
29
@Jimantha
Noah Snavely
4 years
We call it "Learning to Factorize and Relight a City". This is the nice work of first author @ndrewLiu at @googleai , and @shiryginosar , @TinghuiZhou , and Alyosha Efros. See more nice videos at !
2
5
28
@Jimantha
Noah Snavely
3 years
Reminder about this CVPR registration support program -- please apply by April 15, 2022 if you'd like to be considered for a registration fee waiver! I hope that this effort can help increase inclusivity of CVPR. Application is here:
@CVPR
#CVPR2024
3 years
#CVPR2022 is committed to supporting students from communities that do not traditionally attend CVPR through waived registration fees, to foster a more inclusive, diverse and equitable conference. 1/2
3
34
126
1
10
29
@Jimantha
Noah Snavely
1 year
If I'm not mistaken, ICCV camera-ready papers this year can be 9 pages + references (not 8)? If that is right, that is a first, and a very welcome, cool, and nice change!
3
1
28
@Jimantha
Noah Snavely
2 years
CVPR Academy is in progress! For all you first-time CVPR fanatics. In Room 202, or accessible through the virtual platform. All the best to you all.
@CVPR
#CVPR2024
2 years
#CVPR2022 Pre-Conference Workshop for First-Time Attendees: On Monday 6/20 AM, Room 202
Tweet media one
0
2
17
0
7
28
@Jimantha
Noah Snavely
3 years
For all you photography fanatics out there, a nice blog post about the photo with the longest known sightline captured to date -- 443km, from the Pyrenees to the French Alps.
1
3
28
@Jimantha
Noah Snavely
4 years
Hi there. Code for our @eccvconf work from @cornelltech on learning where people could appear in an image is now online. Our (cool) method learns to predict potential people purely from observing data like Waymo's Open Dataset. Code below—have a good day!
@ElorHadar
Hadar Averbuch-Elor
4 years
Where could people walk? Excited to share our @eccvconf paper on learning contextual walkability: "Hidden Footprints: Learning Contextual Walkability from 3D Human Trails" Arxiv: Website: With: Jin Sun, @QianqianWang5 , @Jimantha
1
13
41
0
5
28
@Jimantha
Noah Snavely
4 years
The upshot is that images seem to be full of low- and high-level chirality cues, and deep networks are pretty good at guessing when an image has been flipped. You might care if you're into data augmentation, image forensics, or self-supervision (or if you are a huge mirror-head).
1
6
27
@Jimantha
Noah Snavely
3 years
For Lunar New Year/Spring Festival, another Moon-based note to all you Moon lunatics out there. One of my favorite gifs on Wikipedia is this one illustrating the apparent wobble of the Moon over the course of a month, called libration.
3
1
27
@Jimantha
Noah Snavely
2 years
Tomorrow is Election Tuesday in the US. I have tons of extra candy. If you see me and show me an "I Voted" sticker, I will try to give you some candy! I have Skittles and Baby Ruths.
1
0
27
@Jimantha
Noah Snavely
3 years
I am sorry for buzz marketing, but if you are looking for a TV show for a 3-5 year old, I recommend a math-themed PBS program called Peg + Cat. Our 4-year-old loves it (and the songs are catchy).
2
0
27
@Jimantha
Noah Snavely
3 years
This workshop was so cool! My live talk had some technical difficulties, so if you want to see a clean version of a talk on how to tell if you are in a mirror universe (and a bunch of other great talks on less obscure topics), please check this out!
@DaveLindell
David Lindell
3 years
If you missed the Computational Cameras and Displays Workshop at #CVPR2021 , you can still watch the recorded talks at
1
11
50
5
7
27
@Jimantha
Noah Snavely
3 years
This looks like a wonderful program for computer graphics PhD students and postdocs! Application deadline on April 4, 2022 at
@wigraphorg
WiGRAPH
3 years
Announcing WiGRAPH's Rising Stars in Computer Graphics! Ph.D. students and postdocs of underrepresented genders: apply for a two-year program of mentorship and workshops co-located with SIGGRAPH 2022&2023. Travel support provided. #WiGRAPHRisingStars [1/6]
Tweet media one
3
79
195
0
6
26
@Jimantha
Noah Snavely
4 years
I've personally benefited a ton from @timnitGebru and her work. Her earlier work on estimating demographics at scale from Street View has been a big inspiration to me. Her more recent work in ethics is truly foundational, and has helped me think about the world differently.
1
2
25
@Jimantha
Noah Snavely
2 years
If you're a big lover of video decomposition, check out @vickie_ye_ 's nice paper on Deformable Sprites in the CVPR afternoon oral session. We are huge fans of layered video representations!
1
3
24
@Jimantha
Noah Snavely
7 years
Oops.
Tweet media one
0
6
24
@Jimantha
Noah Snavely
9 months
This work looks really interesting!
@JeromeRevaud
Jerome Revaud
10 months
We believe we did a breakthrough in Geometric 3D Vision: meet DUSt3R, an all-in-one 3D Reconstruction method
Tweet media one
13
55
262
1
3
24
@Jimantha
Noah Snavely
4 years
A deep network takes two images, learns to search for 2D matches between them, and then a loss function decides how much it likes the matches based on how much they deviate from the epipolar constraints derived from the camera poses, as in the visualization below.
3
1
22
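The pose-supervised matching loss described above can be sketched with a standard first-order epipolar residual, the Sampson distance. The following is a generic NumPy illustration of that residual, not the paper's actual training code, and the fundamental matrix and points are made up:

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """First-order geometric error of a putative match (x1, x2)
    under fundamental matrix F; points are homogeneous 3-vectors."""
    Fx1 = F @ x1          # epipolar line of x1 in image 2
    Ftx2 = F.T @ x2       # epipolar line of x2 in image 1
    num = float(x2 @ F @ x1) ** 2
    den = Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2
    return num / den

# Illustrative rank-2 F and a match lying exactly on its epipolar
# line, so the distance is ~0; a training loss would average this
# residual over all predicted matches in a batch.
F = np.array([[ 0.0, -0.1,  0.2],
              [ 0.1,  0.0, -0.3],
              [-0.2,  0.3,  0.0]])
x1 = np.array([100.0, 50.0, 1.0])
a, b, c = F @ x1                    # line a*u + b*v + c = 0 in image 2
x2 = np.array([0.0, -c / b, 1.0])   # a point on that line

loss = sampson_distance(F, x1, x2)  # near zero: consistent match
```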
@Jimantha
Noah Snavely
4 years
@CSProfKGD Thanks, Kosta! Yes, there was an appearance from a 4-year-old who wasn't happy that I wasn't in play mode. (I moved to a different room, but forgot my trackball, so she started controlling my computer remotely.) I'm glad people seemed to be understanding of the heightened chaos.
4
0
21
@Jimantha
Noah Snavely
3 years
That page also has a stunning photo of the Earth that looks like CGI but is actually a photo from the Lunar Reconnaissance Orbiter. This photo is new to me but is really amazing!
Tweet media one
1
3
20
@Jimantha
Noah Snavely
3 years
@taiyasaki @ICCV_2021 Reviewers rock! Thank you so much for your hard work. Some didn't chime into discussions, but I think that's because CVPR and life were happening. Also, I initiated discussions late... between CVPR and sick kids/self, I was a bad AC ☹️. But many reviewers chimed in anyway. Thank you!
1
0
20
@Jimantha
Noah Snavely
4 years
Thanks, Ben! And I should note that the original idea for multiplane images came from John Flynn, working with Graham Fyffe and @debfx . That idea was also presaged in John's prior DeepStereo view synthesis method, as well as Soft3D from Penner and Zhang.
@BenMildenhall
Ben Mildenhall
4 years
Great overview from @fdellaert ! I'd also like to highlight @TinghuiZhou and @Jimantha et al. for bringing volume rendering into deep learning for view synthesis with their paper Stereo Magnification in 2018.
2
2
28
1
1
20
@Jimantha
Noah Snavely
3 years
Hi all, ICCV21 isn't even over yet, but #CVPR2022 will be here before we know it, and deadlines for proposing workshops & tutorials are coming up soon. It would be great if the organizers had a diverse set of proposals on a range of topics, including societal impacts of CV.
1
3
20
@Jimantha
Noah Snavely
10 months
This idea is really cool!
@dorverbin
Dor Verbin
10 months
Introducing Eclipse, a method for recovering lighting and materials even from diffuse objects! The key idea is that standard "NeRF-like" data has all we need: a photographer moving around a scene to capture it causes "accidental" lighting variations. (1/3)
5
37
353
0
1
20
@Jimantha
Noah Snavely
2 years
Congrats, @ruojin8 ! This is so cool!
@cornell_tech
Cornell Tech
2 years
Congratulations to CS PhD student Ruojin Cai for being selected as a 2022 Snap Research Fellow! 🎉 Learn more: #CS #PhD #Fellowship #SnapResearch #OnlyAtCornellTech #EngineeredToMatter @Cornell
Tweet media one
Tweet media two
1
2
11
1
1
20
@Jimantha
Noah Snavely
7 years
And for all y'all intrinsic image fanatics out there -- Zhengqi also has a cool new paper on learning intrinsic images supervised with time-lapse data: "Learning Intrinsic Image Decomposition by Watching the World". Web: , arXiv:
Tweet media one
0
6
19
@Jimantha
Noah Snavely
3 years
A reminder that @CVPR 2022 workshop proposals are due tomorrow, October 19, at 11:59pm Pacific. Thank you! More info here:
0
8
19
@Jimantha
Noah Snavely
3 years
Hope you are all doing well out there. ECCV22 workshop proposals are due tomorrow! It would be wonderful to see a diverse range of workshops at the conference.
@eccvconf
European Conference on Computer Vision #ECCV2024
3 years
#ECCV2022 Call for Workshop Proposals
Tweet media one
1
10
46
1
4
18
@Jimantha
Noah Snavely
2 years
Hey everybody -- there is still time to apply to the CVPR 2022 travel grant program. More details in the form:
@CVPR
#CVPR2024
2 years
#CVPR2022 is now accepting applications for travel grants. Decisions will be made on a rolling basis, so please apply soon, and no later than 5/13 at 11:59 CST.  Links:  (student)  (advisor) Source:
Tweet media one
0
12
44
1
9
19
@Jimantha
Noah Snavely
4 years
She is a guiding light in computer vision and learning, and she deserves our support.
0
0
19
@Jimantha
Noah Snavely
4 months
It was a pleasure to attend this wonderful defense! Are you saying that is a hat in the third photo?
@pesarlin
Paul-Edouard Sarlin
4 months
I defended my PhD thesis last week 🥳 Thank you to everyone that made this possible, including my advisor @mapo1 , examiners @Jimantha @quantombone and Daniel Cremers, and the amazing @cvg_ethz . As per the tradition, I received a nice commemorative hat 🎓 Now time for vacations 😎
Tweet media one
Tweet media two
Tweet media three
31
7
355
1
0
18
@Jimantha
Noah Snavely
4 years
For y'all @eccvconf attendees, @zl548 , @XianWenqi , and I are hanging out at the Zoom poster right now.
@Jimantha
Noah Snavely
4 years
Hello, view synthesis devotees. I invite you to some new work at @eccvconf . We gather tourist photos of famous landmarks and learn a new neural 3D representation that can synthesize new views with natural, modifiable lighting. We call it "Crowdsampling the Plenoptic Function".
10
124
569
0
4
18
@Jimantha
Noah Snavely
4 years
For all you ECCV rebuttal writers out there... seems like reviewers can see your rebuttals in OpenReview even before the end of the rebuttal period, which may be surprising behavior to CMT-heads like me.
@dantkz
Daniyar Turmukhambetov
4 years
@CSProfKGD As a reviewer I saw initial rebuttal from authors, and then edited version. So, you can edit, but whatever you have entered is already visible to the reviewers.
1
0
2
2
2
18
@Jimantha
Noah Snavely
6 years
Layers! We love layers. On that note, a shameless plug for Shubham Tulsiani's work on *layered scene inference* -- predicting geometry in the form of *layered* depth maps from single images. #ECCV2018 #layers
Tweet media one
0
3
18
@Jimantha
Noah Snavely
3 years
@ak92501 I like this. I always wanted something like this to get ground truth data for predicting sun direction from images.
0
0
16
@Jimantha
Noah Snavely
4 years
I heard some states are rightfully changing up their flag design, so it seemed like a good time to remind folks that my home state of Arizona has the best flag, and Wisconsin has the worst one.
Tweet media one
Tweet media two
2
0
16
@Jimantha
Noah Snavely
4 years
... low-level artifacts due to JPEG compression and Bayer demosaicing, and high-level elements like shirt collars, musical instruments, and even eye gaze and hair part (?)
Tweet media one
Tweet media two
Tweet media three
Tweet media four
3
2
15