Good news! We've open-sourced our TGS (Triplane meets Gaussian Splatting) code on GitHub and uploaded the model to Hugging Face. See how the code and model can help your 3D GenAI research:
🔗 GitHub:
🔗 Hugging Face:
Play with our TGS (Triplane meets Gaussian Splatting) on the new Gradio demo!
Fast 3D generation from a single image and real-time online 3DGS viewing.
Get hands-on:
Check out TGS (Triplane meets Gaussian Splatting) for single-view 3D reconstruction!
Introducing a hybrid Triplane-Gaussian representation to achieve generalizable 3DGS.
The result? High-quality 3D from single images in mere tenths of a second!
🔗:
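The hybrid representation pairs triplane features with per-point Gaussian attributes: query three axis-aligned feature planes at each 3D point, fuse the features, and decode Gaussian parameters from them. A minimal NumPy sketch of that idea — the resolution, channel count, and toy decoder below are illustrative assumptions, not the released TGS architecture:

```python
import numpy as np

RES, C = 32, 8          # triplane resolution and feature channels (toy values)
planes = {ax: np.random.randn(RES, RES, C) * 0.1 for ax in ("xy", "xz", "yz")}

def sample_plane(plane, u, v):
    """Nearest-neighbor lookup on one feature plane (u, v in [-1, 1])."""
    i = np.clip(((u + 1) / 2 * (RES - 1)).astype(int), 0, RES - 1)
    j = np.clip(((v + 1) / 2 * (RES - 1)).astype(int), 0, RES - 1)
    return plane[i, j]                      # (N, C)

def triplane_features(points):
    """Sum the three plane features for a batch of 3D points."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return (sample_plane(planes["xy"], x, y)
            + sample_plane(planes["xz"], x, z)
            + sample_plane(planes["yz"], y, z))

def decode_gaussians(points):
    """Map fused features to toy Gaussian attributes (offset, scale, opacity)."""
    f = triplane_features(points)           # (N, C)
    offset = np.tanh(f[:, :3]) * 0.05       # small position refinement
    scale = np.exp(f[:, 3:6] - 3.0)         # strictly positive scales
    opacity = 1 / (1 + np.exp(-f[:, 6]))    # sigmoid opacity in (0, 1)
    return points + offset, scale, opacity

pts = np.random.uniform(-1, 1, size=(1024, 3))
centers, scales, alphas = decode_gaussians(pts)
```

Decoding Gaussians from shared triplane features (rather than optimizing them per scene) is what makes the 3DGS output feed-forward, hence generalizable across input images.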
Today we are releasing TripoSR in collaboration with
@tripoAI
. TripoSR is a new image-to-3D model capable of creating high quality outputs in less than a second.
Learn more here:
Amidst the wave of cool 3DGS research, we're bringing a slightly different recipe, "VMesh", a hybrid volume-mesh representation, to
#SIGGRAPHASIA2023
.
Expect efficient rendering, compact storage, and high-fidelity geometry representation.
🔗
3D Gaussian Splatting is cool, dynamic 3D Gaussians are even cooler, and we make dynamic 3D Gaussians editable!
"SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes": our approach for high-fidelity, editable dynamic scenes.
🔗:
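The sparse-control idea can be sketched as follows: a handful of control points carry the motion, and each Gaussian moves by a distance-weighted blend of its nearest controls (LBS-style skinning). Everything below — the kernel, the neighbor count, the random motions — is an illustrative assumption, not the SC-GS implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
gaussians = rng.uniform(-1, 1, size=(500, 3))   # Gaussian centers
controls = rng.uniform(-1, 1, size=(16, 3))     # sparse control points
motion = rng.normal(0, 0.1, size=(16, 3))       # per-control translation

def blend_weights(points, controls, k=4, sigma=0.3):
    """Gaussian-kernel weights over each point's k nearest control points."""
    d2 = ((points[:, None, :] - controls[None, :, :]) ** 2).sum(-1)  # (N, K)
    idx = np.argsort(d2, axis=1)[:, :k]          # k nearest controls per point
    w = np.zeros_like(d2)
    rows = np.arange(len(points))[:, None]
    w[rows, idx] = np.exp(-d2[rows, idx] / (2 * sigma**2))
    return w / w.sum(axis=1, keepdims=True)      # normalized skinning weights

def deform(points, controls, motion):
    """Translate each Gaussian by the blended motion of its nearby controls."""
    w = blend_weights(points, controls)          # (N, K)
    return points + w @ motion                   # (N, 3)

moved = deform(gaussians, controls, motion)
```

Driving thousands of Gaussians from a few control points is what makes the motion compact and, crucially, editable — dragging a control point deforms the scene coherently.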
@ThatRosaryGirl
@JohannJaegerson
@thecathguy
I just made a 3D model of your husband's anime figurine using Tripo ()
Get drafts in seconds and final models in 5 mins - currently free to try! Check it out, it's cool:
Really excited about recent works in 3DGS for Human Avatars! Yet, there's still room for polygon rasterization with stronger graphics pipeline compatibility and better geometric fidelity.
At
#SIGGRAPHAsia2023
, we're introducing BakedAvatar - it achieves real-time rendering on
Proud to announce that Tripo AI is now officially launched in GPT Store. 💫
Tripo AI is the only GPT that can generate 3D models so far. Feel free to give it a try!
#ChatGPT
#OpenAI
#GPTStore
#GPT4
#tripoai
Thanks
@_akhaliq
for sharing our work.
Text to relightable 3D asset generation is here!
With
#UniDream
, we're turning textual descriptions into realistic 3D objects with enhanced realism and relighting capabilities.
UniDream: Unifying Diffusion Priors for Relightable Text-to-3D Generation
paper page:
Recent advancements in text-to-3D generation technology have significantly advanced the conversion of textual descriptions into imaginative, well-geometrical, and finely…
Meet us at the Thursday PM session at 4:30pm
#029
!
We introduce an explicit 3D shape prior to CLIP-guided 3D optimization methods, generating imaginative 3D content with better visual quality and shape accuracy.
#CVPR2023
🚀 Excited to share
#threestudio
- A unified framework for 3D content generation!
🔗 Check it out: 🎉
🤝 Collaboration between Tsinghua University, Tencent ARC Lab, and Tencent AI Lab.
💡 Stay tuned for more!
#3D
#GenerativeAI
ShowRoom3D: Text to High-Quality 3D Room Generation Using 3D Priors
paper page:
We introduce ShowRoom3D, a three-stage approach for generating high-quality 3D room-scale scenes from texts. Previous methods using 2D diffusion priors to optimize neural…
Introduce Tripo, the advanced 3D Foundation Model.
👣
Tripo can generate textured 3D mesh models in 8 seconds. Moreover, refinement takes only 5 minutes, giving you 3D models that rival the quality of handcrafted ones, with an impressive success rate of 95% and beyond. 🚀
Thrilled to share that 𝗮𝗹𝗹 (𝟲/𝟲) submissions from our VAST AI Research team made it to
#CVPR2024
!
Covering 𝗧𝗚𝗦, 𝗣𝗜𝟯𝗗, 𝗦𝗖-𝗚𝗦, 𝗪𝗼𝗻𝗱𝗲𝗿𝟯𝗗, 𝗘𝗽𝗶𝗗𝗶𝗳𝗳, and 𝗗𝗿𝗲𝗮𝗺𝗖𝗼𝗺𝗽𝗼𝘀𝗲𝗿. Kudos to the team & collaborators!
Details: (1/n)
VAST AI Research released model for Triplane Meets Gaussian Splatting on Hugging Face
Fast and Generalizable Single-View 3D Reconstruction with Transformers
model:
demo:
Appreciating the entire workflow’s stunning results! Proud to see Tripo’s
@tripoai
image-to-3D capabilities helped in creating the mesh.
Let’s continue shaping an unimaginable future together.
A generated mesh using an image made with Leonardo AI. The image was re-projected onto the mesh as the texture had lost some definition. The jawline is clearly not so successful here, but besides that this is a pretty crazy look at a very near future where AI gen works in 3D at a…
NeRF-Texture will be part of the "Material Rendering" session on Wednesday (8/9) at 2pm in Petree Hall D. Looking forward to discussing our work and seeing you there! 😊
#SIGGRAPH2023
🌟 Harnessing Tech for Good: ARC Lab is thrilled to be a part of the team integrating cutting-edge AI to restore a stunning 4,500-year-old statue.
#TechForGood
#Innovation
#tencent
#ARCLab
Excited to share our
#CVPR2023
paper SurfelNeRF! We introduce a novel approach to online photorealistic reconstruction of indoor scenes.
Project page: .
Don't miss our poster at Tuesday AM session, TUE-AM-011.
Amazing work 4D-fy!🎉 It's fantastic to see threestudio contributing to such incredible 4D generation work. Seems it's time for a dimensionality expansion for threestudio too - t̶h̶r̶e̶e̶34studio? 😆
📢📢📢 thrilled to announce "𝟒𝐃-𝐟𝐲: 𝐓𝐞𝐱𝐭-𝐭𝐨-𝟒𝐃 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐨𝐧 𝐔𝐬𝐢𝐧𝐠 𝐇𝐲𝐛𝐫𝐢𝐝 𝐒𝐜𝐨𝐫𝐞 𝐃𝐢𝐬𝐭𝐢𝐥𝐥𝐚𝐭𝐢𝐨𝐧 𝐒𝐚𝐦𝐩𝐥𝐢𝐧𝐠"
Way to start
@sherwinbahmani
🎉
PhD co-advised with
@DaveLindell
at
#UofT
.
Exciting times for DreamAvatar! Pushing the boundaries yet again in controllable 3D avatar generation. 𝐀𝐧𝐝 𝐠𝐫𝐞𝐚𝐭 𝐧𝐞𝐰𝐬 – 𝐭𝐡𝐞 𝐜𝐨𝐝𝐞 𝐢𝐬 𝐧𝐨𝐰 𝐚𝐯𝐚𝐢𝐥𝐚𝐛𝐥𝐞! Hats off to the team
@yukangcao
,
@kaihan_vis
, for elevating this project to new heights!
💃Glad to re-introduce DreamAvatar!💃
DreamAvatar was the first to generate controllable 3D avatars via diffusion models.
We are now happy to bring it to a higher level!
- Project:
- Code:
- arXiv:
The new extension system for threestudio is here! It marks the start of an even more collaborative workflow. Eager to see the diverse 3D Generative AI tech our community will bring 👀
Link:
Excited to announce that threestudio now supports an extension system: . This is just the beginning 🚀. We aim to update various 3D representations and diffusion models with the power of the 3D generation community 💪. Let's build the future together!
@NickADobos
@Kwebbelkop
If you're looking for "image to 3D", Tripo () might be a handy tool.
Quick drafts in seconds, final models in 5 mins. Currently free to try!
See how it works on one of your figurines:
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation
propose to model the 3D parameter as a random variable instead of a constant as in SDS, and present variational score distillation (VSD), a principled particle-based variational…
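As commonly written, the SDS and VSD gradients differ only in the term subtracted from the pretrained score — a fixed noise sample versus the score of a camera-conditioned model fit to the current 3D distribution. The notation below (renderer g, weighting w(t), pretrained denoiser ε_φ, fine-tuned ε_lora) is a hedged sketch of the standard formulation, not a quote from the paper:

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta) \approx
\mathbb{E}_{t,\epsilon,c}\!\left[ w(t)\,
\big(\epsilon_\phi(x_t; y, t) - \epsilon\big)\,
\frac{\partial g(\theta, c)}{\partial \theta} \right]

\nabla_\theta \mathcal{L}_{\mathrm{VSD}}(\theta) \approx
\mathbb{E}_{t,\epsilon,c}\!\left[ w(t)\,
\big(\epsilon_\phi(x_t; y, t) - \epsilon_{\mathrm{lora}}(x_t; y, t, c)\big)\,
\frac{\partial g(\theta, c)}{\partial \theta} \right]
```

Replacing the fixed noise ε with the learned ε_lora is what turns the point estimate of SDS into a distribution over 3D parameters, which is where the extra fidelity and diversity come from.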
Hard to believe I once thought about bringing every cutting-edge technique into threestudio 🤯 The good news: threestudio is making a change so that we can keep unifying all the great works in one framework 😉 Stay tuned!
I created a cool JRPG-style 3D AI scene while playing with
@tripoai
this holiday. It's probably the best 3D AI tool I've tried so far:
Midjourney concept-> TripoAI -> models/meshes
Midjourney -> marigold(depth maps) -> scenario
Mixamo -> animation
#GodotEngine
-> scene.
@MrForExample
Thanks for bringing TGS into ComfyUI! Merging 3D workflows into ComfyUI is indeed exciting and holds great potential. We've had similar thoughts and would love to explore this further.
Perhaps, we could collaborate on enhancing 3D capabilities in ComfyUI together?