Ming-Yu Liu

@liu_mingyu

7,877
Followers
491
Following
79
Media
911
Statuses

Tweets are my own.

San Jose, CA
Joined December 2015
@liu_mingyu
Ming-Yu Liu
7 years
Given a content photo and a style photo, the algorithm transfers the style of the style photo to the content photo, generating a stylized output that looks as if it were captured by a camera. #DeepLearning #style Code: Paper:
76
3K
8K
@liu_mingyu
Ming-Yu Liu
5 years
The #GauGAN beta version is now available to everyone as a web service via the #NVIDIA AI Playground. A short illustration video is available. May everybody have fun with the app! #GAN #SPADE
20
579
2K
@liu_mingyu
Ming-Yu Liu
5 years
A #GauGAN timelapse video created by @neilbickford from @nvidia Again, the live demo is available at
15
362
1K
@liu_mingyu
Ming-Yu Liu
7 years
MUNIT: Multimodal unsupervised image-to-image translation. Learn to translate one input dog image to a distribution of cat images without paired training data. paper: code: by @xunhuang1995 @SergeBelongie @jankautz #DeepLearning
16
632
1K
@liu_mingyu
Ming-Yu Liu
6 years
Check out our #CVPR19 oral paper on a new conditional normalization layer for semantic image synthesis #SPADE and its demo app #GauGAN paper website video1 @tcwang0509 @junyanz89
13
324
872
@liu_mingyu
Ming-Yu Liu
5 years
Check out our new #GAN work on translating images to unseen domains at test time with a few example images. Live demo Project page Paper Video #NVIDIA
15
258
799
@liu_mingyu
Ming-Yu Liu
4 years
Check out our new work on face-vid2vid, a neural talking-head model for video conferencing that is 10x more bandwidth-efficient than H.264 arxiv project video @tcwang0509 @arunmallya #GAN
14
149
743
@liu_mingyu
Ming-Yu Liu
5 years
#GauGAN meets #3D A cool demonstration by Jay Axe, a 3D artist @nvidia Online demo is still running at
7
124
527
@liu_mingyu
Ming-Yu Liu
7 years
Snowy-to-summery image translation. Results from our NIPS 17 paper (). Full video available in
7
207
488
@liu_mingyu
Ming-Yu Liu
6 years
Using a conditional GAN to generate HD-resolution videos. code: video:
@Miles_Brundage
Miles Brundage
6 years
"Video-to-Video Synthesis," Wang et al.: Impressive stuff from NVIDIA.
3
14
75
5
136
364
@liu_mingyu
Ming-Yu Liu
5 years
At CVPR 2019, the #StyleGAN paper won a best paper honorable mention, and the #SPADE / #GauGAN paper was a best paper finalist. Congratulations to all the GAN authors! @nvidia
8
46
342
@liu_mingyu
Ming-Yu Liu
4 years
I am looking for highly motivated researchers who are interested in and have experience with AI for content creation (image/3D/audio/XXX) to join our research team. If interested, please send your CV to my NVIDIA email.
16
45
341
@liu_mingyu
Ming-Yu Liu
5 years
Glad to share our #NeurIPS2019 paper on few-shot vid2vid, where we address the scalability issue of our #vid2vid . Now, with one model and as few as one example image provided at test time, we can render the motion of a target subject. Code coming soon.
5
99
302
@liu_mingyu
Ming-Yu Liu
6 years
Glad to see that our #GAN research works enable people to "generate realistic dance videos of NBA players for in-game entertainment." #pix2pixHD , #vid2vid
2
64
278
@liu_mingyu
Ming-Yu Liu
3 years
The #GauGAN model is now freely available in a standalone app called #CANVAS @NVIDIAAI Visit the website and download a copy. Turn your doodles into beautiful landscapes!!! #GAN
4
71
258
@liu_mingyu
Ming-Yu Liu
5 years
Woohoo! GauGAN won the Best of What's New Award by Popular Science Magazine!!! If you haven't tried GauGAN, please visit
@NVIDIAAI
NVIDIA AI
5 years
The real-time #AI art sensation #GauGAN won @PopSci Magazine "Best of What's New Award" in the engineering category. See how NVIDIA researchers developed the first AI model that can produce complex images with only a few brushstrokes.
4
63
198
8
47
235
@liu_mingyu
Ming-Yu Liu
6 years
May the power of #TitanRTX be with you @goodfellow_ian
2
12
227
@liu_mingyu
Ming-Yu Liu
5 years
Check out PointFlow for point cloud generation video code project Brought to you by @YangGuandao @xunhuang1995 Zekun Hao @SergeBelongie Bharath Hariharan
1
49
226
@liu_mingyu
Ming-Yu Liu
1 year
My team is hiring. If you want to join our effort in building better 3D models to digitize the real world, here is the link for the application
@DrJimFan
Jim Fan
1 year
To give you a sense of how fast AI for 3D modeling is advancing: the field went from the left (original NeRF-reconstructed mesh) to right (Neuralangelo from NVIDIA) in 3 years. Transporting reality into high-fidelity simulation is no longer a pipe dream.
33
197
1K
2
16
223
@liu_mingyu
Ming-Yu Liu
3 years
Wondering what comes next after #StyleGAN ? Check out this awesome paper from my colleagues on #AliasFreeGAN . It addresses the texture-sticking issue in image synthesis and further advances the rendering quality
3
32
215
@liu_mingyu
Ming-Yu Liu
6 years
We will be presenting our work on Multimodal Unsupervised Image-to-Image Translation (MUNIT) on P-1B-06 from 4-6pm on Monday at #ECCV2018 Please drop by if you want to know more details about our work. Code is available at
1
78
201
@liu_mingyu
Ming-Yu Liu
7 years
pix2pixHD code is now available in paper Brought to you by @tcwang0509 @junyanz89 @jankautz @ctnzr
3
74
187
@liu_mingyu
Ming-Yu Liu
5 years
So happy to share the news that @NVIDIADesign #GauGAN won both the best real-time live demo award and the people's choice best demo award in #SIGGRAPH2019 Real-Time Live. @junyanz89 @gavriilklimov @tcwang0509 @chrisjhebert Special thanks to @goodfellow_ian for bringing #GAN to this world.
9
23
175
@liu_mingyu
Ming-Yu Liu
4 years
OpenAI's DALL·E is so impressive. The text2image generation capability is beyond imagination. Big congrats to @OpenAI and @ilyasut for the amazing work.
2
44
171
@liu_mingyu
Ming-Yu Liu
5 years
We will be doing a poster session for our #ICCV2019 paper on FUNIT: Few-shot Unsupervised Image-to-Image Translation () at Hall 1B. Poster 139. Hope to see you there.
4
39
165
@liu_mingyu
Ming-Yu Liu
6 years
If you want to drive a #GAN in a virtual city, please come to the #NVIDIA booth at #NeurIPS . We use #vid2vid to convert segmentation masks from a game engine to images in real time. A steering wheel and pedals are provided.
3
52
162
@liu_mingyu
Ming-Yu Liu
6 years
We made several updates to #FastPhotoStyle : 20x faster now, an interface for using a semantic segmenter for automatic mask generation, and a new tutorial. Code Paper Will be presented at #ECCV2018 @leexiaoju #DeepLearning #style
2
47
157
@liu_mingyu
Ming-Yu Liu
5 years
I am looking for a summer PhD intern with experience in semantic segmentation, instance segmentation, panoptic segmentation, human pose estimation, or optical flow estimation to work together on a research project. Please DM me if interested.
4
44
150
@liu_mingyu
Ming-Yu Liu
4 years
My awesome colleagues have now released the #PyTorch version of StyleGAN2-ADA. (The initial release was in #TensorFlow .) ADA uses clever data augmentation to help address limited-sample problems in #GAN training.
2
34
143
@liu_mingyu
Ming-Yu Liu
2 years
I’m looking for researchers with experience and a strong passion for large-scale image-text models to join our research team in CA. Strong knowledge of diffusion models, contrastive learning, or data curation is preferred. Teamwork first, extreme hard-core, and perfection-driven.
6
23
149
@liu_mingyu
Ming-Yu Liu
3 years
We are looking for several Ph.D. interns for spring/summer/fall 2022. We plan to cover several topics, including neural rendering, zero-shot segmentation, text-image modeling, speech/music generation, and denoising diffusion models. If interested, send your CV to me.
2
34
137
@liu_mingyu
Ming-Yu Liu
2 years
Very happy to share #NVIDIA Picasso. It took a whole team effort to make it happen. We are hiring. DM me if you are interested in joining our effort to build SOTA foundation models.
4
29
122
@liu_mingyu
Ming-Yu Liu
4 years
Would like to share our ECCV Spotlight paper on COCO-FUNIT: Few-Shot Unsupervised Image Translation with a Content Conditioned Style Encoder Joint work with @ksaitoboston1 and @saenko_group Project Video Code coming soon! (1/N)
5
34
120
@liu_mingyu
Ming-Yu Liu
5 years
I will be giving a talk on #GauGAN and its backbone algorithm #SPADE in room 501 at #SIGGRAPH2019 at 2pm today (Sunday 7/28). If you are interested in how we designed a GAN to achieve the task and the ideas behind it, please come.
1
18
113
@liu_mingyu
Ming-Yu Liu
1 year
Very proud of the team ( @chenhsuan_lin @mli0603 Thomas Müller, Alex Evans) that brought this invention to life, which Time Magazine now recognizes as one of the best inventions of 2023. We are hiring researchers of different seniority to join our mission to democratize content
2
7
113
@liu_mingyu
Ming-Yu Liu
6 years
Going to present several #GAN works in NVIDIA’s #GTC19 conference, including #StyleGAN , #vid2vid , and several other new GAN works that we have NOT announced. Register by 2/8 for early bird pricing and use discount code NVMINGYUL for an additional 25% off:
4
24
113
@liu_mingyu
Ming-Yu Liu
7 years
MoCoGAN for mapping random vectors to videos Code/Docker Image: Paper: #DeepLearning #GAN
0
46
103
@liu_mingyu
Ming-Yu Liu
7 years
Our unsupervised image translation code is now available in Thanks to @goodfellow_ian for letting us use his portrait.
1
39
102
@liu_mingyu
Ming-Yu Liu
1 year
The NVIDIA PhD Graduate Fellowship application is open. Apply now.
0
24
101
@liu_mingyu
Ming-Yu Liu
6 years
We will present several #GAN works in NVIDIA’s #GTC19 conference, including #StyleGAN , #vid2vid , and several other new GAN works that we have NOT announced. Register by 2/8 for early bird pricing and use discount code NVMINGYUL for an additional 25% off:
0
23
99
@liu_mingyu
Ming-Yu Liu
4 years
Kudos to @tcwang0509 and @arunmallya Excited to see a new chapter of video conferencing.
2
11
88
@liu_mingyu
Ming-Yu Liu
3 years
Congrats to @phillip_isola and @georgiagkioxari on winning the CVPR Young Researcher Awards!!!
0
3
81
@liu_mingyu
Ming-Yu Liu
7 years
Check out our MoCoGAN work on motion and content decomposed random video generation via GAN.
0
38
78
@liu_mingyu
Ming-Yu Liu
6 years
4
16
77
@liu_mingyu
Ming-Yu Liu
5 years
We are running a tutorial on deep learning for content creation at CVPR on Sunday. We have a set of amazing speakers including @phillip_isola @jtompkin Tero Karras, @FidlerSanja Sylvain Paris @tcwang0509 @junyanz89 @elishechtman Please come join us.
2
21
73
@liu_mingyu
Ming-Yu Liu
6 years
2
12
69
@liu_mingyu
Ming-Yu Liu
4 years
1/4 Excited to share our #ECCV2020 paper on world-consistent #vid2vid , on generating consistent renderings of a 3D world. @NVIDIAAI #GAN with @arunmallya @tcwang0509 Karan Sapra paper project video
1
18
65
@liu_mingyu
Ming-Yu Liu
6 years
Congrats @tcwang0509 and @leexiaoju for their amazing works. vid2vid: FastPhotoStyle:
@leexiaoju
Yijun Li
6 years
[Vid2Vid] and [PhotoWCT] are among the 25 Best Data Science and Machine Learning GitHub Repositories from 2018~Congrats @liu_mingyu
0
10
34
2
13
65
@liu_mingyu
Ming-Yu Liu
6 years
The PWC-Net code, which won the optical flow estimation competition in the CVPR'18 Robust Vision Challenge, is now available on GitHub. Code: Paper: Challenge:
1
35
65
@liu_mingyu
Ming-Yu Liu
5 years
Want to win a #QuadroRTX 6000? Join the online #GauGAN contest hosted by @NVIDIA 1. Turn your doodle into a photorealistic masterpiece with GauGAN: () 2. Share your AI artwork on Twitter with #SIGGRAPH2019 , #GauGAN & @NVIDIADesign
0
22
64
@liu_mingyu
Ming-Yu Liu
3 years
Interested in learning more about our #CVPR2021 paper on #FaceVid2Vid ? We welcome you to test out the algorithm yourself. Our online demo is available at
0
10
61
@liu_mingyu
Ming-Yu Liu
3 years
Happy to announce that GANcraft code is released!!!
@arunmallya
Arun Mallya
3 years
Code for #GANcraft (ICCV'21) has been released at , with pretrained models & training instructions. You can even import your own worlds and make them real! This also includes updates to the #Imaginaire repo to make it faster, better, and more awesome!
3
53
293
2
5
59
@liu_mingyu
Ming-Yu Liu
6 years
I will give a talk on video-to-video synthesis on #ECCV2018 Chalearn Looking at People workshop () at 5pm today. Please drop by if you are interested in knowing more details about the work. Slides available at
1
8
56
@liu_mingyu
Ming-Yu Liu
4 years
Bill is indeed a role model for many of us. While I was complaining about the inconvenience of working from home, he was making an impact.
0
6
52
@liu_mingyu
Ming-Yu Liu
3 years
So excited to share that our deep learning-based digital avatar demo won the Best-in-Show.
@siggraph
ACM SIGGRAPH
3 years
A final congrats of the night to the jury-voted Best in Show winner, "I am AI: AI-Driven Digital Avatar Made Easy". Cheers to the team at @nvidia ! #SIGGRAPH2021 #RealTimeLive
1
16
78
5
8
49
@liu_mingyu
Ming-Yu Liu
3 years
We have made the TalkingHead-1KH dataset used in our face-vid2vid paper available. We also provide the face-vid2vid results on this dataset for benchmarking.
2
8
43
@liu_mingyu
Ming-Yu Liu
4 years
The ultimate goal of #GANcraft is to turn #Minecraft gamers into 3D artists.
@arunmallya
Arun Mallya
4 years
Introducing GANcraft, a method to convert user-created semantic 3D block worlds, like those from Minecraft, to realistic-looking worlds, without paired training data! arxiv: webpage: by @zekunhao19951 , @SergeBelongie , @liu_mingyu
11
216
793
1
4
43
@liu_mingyu
Ming-Yu Liu
4 years
@tcwang0509 and I are looking for a 2021 summer Ph.D. intern to work on an exciting image and video synthesis project. Relevant experience with the topic and strong implementation skills are required. If interested, please DM @tcwang0509
3
11
41
@liu_mingyu
Ming-Yu Liu
4 years
Our face-vid2vid in action
@zkerravala
Zeus Kerravala
4 years
Check out video with or without Maxine. #GTC21 #GTC @nvidia
0
7
19
0
5
39
@liu_mingyu
Ming-Yu Liu
6 years
I will give a talk on multimodal image domain transfer in #ECCV2018 TaskCV workshop at 2pm (). Please come if you are interested in our recent research in the domain. Slides: Location: Room N1095ZG, Technical University of Munich
0
9
40
@liu_mingyu
Ming-Yu Liu
6 years
We will be presenting #SPADE (the method behind #GauGAN ) and #vid2vid at #GDC19 this afternoon (from 3:30pm to 4:30pm). Please come if you are interested. Slides are available at and
1
8
38
@liu_mingyu
Ming-Yu Liu
5 years
Since it needs as few as one example image, it can be used to make da Vinci's Mona Lisa talk.
0
11
36
@liu_mingyu
Ming-Yu Liu
1 year
Super excited about GettyImages launching their commercially safe generative AI offerings. GettyImages is a customer of NVIDIA Picasso Foundry services, and the Picasso Research team is looking for hard-core GenAI talents to join our force to build world-class GenAI capabilities
@liu_mingyu
Ming-Yu Liu
1 year
0
3
18
2
4
36
@liu_mingyu
Ming-Yu Liu
2 years
I am looking for several experienced data curation and processing engineers to join our generative AI effort in the visual space. Strong knowledge of computer vision, computer graphics, and machine learning is required. If interested, please DM me.
3
7
36
@liu_mingyu
Ming-Yu Liu
4 years
We try to provide some tips on how to train your networks faster and fit more data onto your GPUs through this tutorial. We want to help you maximize your GPU investment. Feedback is very welcome.
@arunmallya
Arun Mallya
4 years
Learn how to speed up your neural network training by 2-15x with no loss in accuracy! 🤯🤩 The new and updated "Accelerating Computer Vision with Mixed Precision" tutorial is online as part of #ECCV2020 . Ft. talks by @liu_mingyu @tcwang0509 @shalinidemello
3
41
158
0
6
36
@liu_mingyu
Ming-Yu Liu
5 years
We will be giving a tutorial on how to use mixed precision (fp32 + fp16) for training deep networks for vision applications at #ICCV2019 on Saturday morning. Want to train your networks faster with your current GPUs? Please come check it out.
1
10
35
@liu_mingyu
Ming-Yu Liu
1 year
I will be at #CVPR2023 until the end of day Wednesday. If you are interested in joining our effort, please drop me a DM. We might be able to meet in person.
@liu_mingyu
Ming-Yu Liu
1 year
My team is hiring. If you want to join our effort in building better 3D models to digitalize the real world, here is the link for the application
2
16
223
0
4
31
@liu_mingyu
Ming-Yu Liu
7 years
CVPR 2018 AI City Challenge tracks: 1) traffic flow analysis, 2) anomaly detection, 3) multi-camera vehicle detection and re-ID
0
18
32
@liu_mingyu
Ming-Yu Liu
5 years
Glad to share the news that our #GauGAN work on translating user doodles into photorealistic images is now in exhibition in #ArsElectronica in Linz, Austria. Welcome to test it out if you are around. #NVIDIA
0
5
31
@liu_mingyu
Ming-Yu Liu
4 years
NVIDIA is arguably the best place to work at the junction of computer graphics and deep learning research. Apply!
@luminohope
Koki Nagano
4 years
NVIDIA Research is hiring for Research Scientist, Deep Learning & Computer Graphics. Are you excited about generating photorealistic graphics using Deep Learning and also interested in fighting against visual misinformation? Come join @NVIDIA Research!
4
57
248
1
2
30
@liu_mingyu
Ming-Yu Liu
4 years
Since many papers have their videos on YouTube, it would be great if we could create a YouTube video list based on the official program to simulate a more complete conference experience that is free to everybody.
@ankurhandos
Ankur Handa
4 years
eccv 2020 papers are available here
1
10
44
1
3
28
@liu_mingyu
Ming-Yu Liu
7 years
Check out CASENet Full video: Paper: (CVPR'17) Code: #DeepLearning
0
3
27
@liu_mingyu
Ming-Yu Liu
4 years
Online demo available at
@liu_mingyu
Ming-Yu Liu
4 years
Check out our new work on face-vid2vid, a neural talking-head model for video conferencing that is 10x more bandwidth efficient than H264 arxiv project video @tcwang0509 @arunmallya #GAN
14
149
743
0
6
25
@liu_mingyu
Ming-Yu Liu
4 years
Check out this awesome NeurIPS paper from Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, @jaakkolehtinen , Timo Aila on using data augmentation for training GANs with limited data. code:
@NVIDIAAIDev
NVIDIA AI Developer
4 years
NVIDIA Research developed a #GAN training breakthrough: a new technique called ADA that needs only a few thousand training images to generate high-resolution images. Learn how: #NeurIPS2020
1
60
228
1
1
24
@liu_mingyu
Ming-Yu Liu
5 years
The same model can be used for generating dancing videos of different people even for the person it didn't see during training.
2
5
23
@liu_mingyu
Ming-Yu Liu
5 years
There will be a workshop on image and video synthesis in ICCV 2019 (). Looking forward to your participation. @shiryginosar @junyanz89 @AaronHertzmann
@shiryginosar
Shiry Ginosar
5 years
Our ICCV 19 workshop - "Image and Video Synthesis: How? Why? and What If?" . Check out the excellent lineup of speakers and submit your accidental art created as a side effect of synthesis research to win NVIDIA GPUs!
0
7
33
0
7
23
@liu_mingyu
Ming-Yu Liu
3 years
Great opportunity to join my wife’s early-stage startup. She is a very dedicated, hard-working founder with great vision and execution.
0
5
22
@liu_mingyu
Ming-Yu Liu
3 years
Here is the video of our RTL demo that was selected Best-in-Show in #SIGGRAPH2021 I AM AI: Digital Avatar Made Easy SIGGRAPH Real-Time Live Demo presented by @ctnzr @arunmallya and Kevin Shih
1
5
22
@liu_mingyu
Ming-Yu Liu
4 years
Don’t miss this keynote. You will find a lot of exciting new things.
@NVIDIAGTC
NVIDIA GTC
4 years
Only one hour away... Be sure to tune in for the #GTC21 keynote with #NVIDIA Founder and CEO Jensen Huang at 8:30 a.m. PDT.
8
71
241
0
0
22
@liu_mingyu
Ming-Yu Liu
6 years
@avsa @genekogan @tcwang0509 @junyanz89 I think it will still take some time to reach production quality. But our engineering team will release this demo as a free online service this summer for people to try out.
2
1
21
@liu_mingyu
Ming-Yu Liu
4 years
Very proud of our colleagues. “NVIDIA won every test across all six application areas for data center and edge computing systems in the second version of MLPerf Inference.”
@NVIDIAAI
NVIDIA AI
4 years
NVIDIA extends lead on #MLPerf inference benchmark for computer vision, conversational AI, and recommender workloads, breakthrough performance enables businesses to move #AI from research to production.
0
63
194
0
1
20
@liu_mingyu
Ming-Yu Liu
4 years
Congrats to the winners of the ECCV2020 awards.
@CSProfKGD
Kosta Derpanis
4 years
2
52
189
0
0
20
@liu_mingyu
Ming-Yu Liu
4 years
An accurate and efficient simulation environment is one key to successful reinforcement learning research and applications. This one is exciting!
@viktor_m81
ViktorM🇺🇦
4 years
Do you want to train tasks on a single PC that was possible to solve only using big CPU clusters? Proud to share the project I've been working on for the last 2 years, end-to-end GPU accelerated simulator for RL and control:
15
74
387
0
4
18
@liu_mingyu
Ming-Yu Liu
6 years
@ankurhandos @hardmaru Many training photos contain trees near bodies of water with reflections. The #GAN eventually learns this correlation in the distribution. Interestingly, the reflection was only learned in later epochs.
1
1
19
@liu_mingyu
Ming-Yu Liu
3 years
Well deserved and big congrats!!!
@BelongieLab
Belongie Lab
3 years
Zekun Hao has been awarded the @NVIDIA PhD scholarship for “developing algorithms that learn from real-world visual data and apply that knowledge to help human creators build photorealistic 3D worlds.” Congratulations @ZekunHao19951 @cs_cornell !
0
6
42
1
1
18