Happy to announce DreamFusion, our new method for Text-to-3D! dreamfusion3d.github.io We optimize a NeRF from scratch using a pretrained text-to-image diffusion model. No 3D data needed! Joint work w/ the incredible team of @BenMildenhall @ajayj_ @jon_barron #dreamfusion
DreamFusion generates 3D models from diverse text prompts. Check out our gallery of hundreds of 3D models: dreamfusion3d.github.io/gallery.html
We build on Dream Fields, replacing CLIP with a new loss computed from the Imagen text-to-image diffusion model (imagen.research.google): x.com/ajayj_/status/…
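The diffusion-based loss can be sketched as Score Distillation Sampling: noise the rendered image, ask the frozen diffusion model to predict the noise, and push the difference back into the renderer. A minimal numpy sketch, where `denoise_fn`, `alpha_bar`, and the weighting `w` are illustrative stand-ins, not the official implementation:

```python
import numpy as np

def sds_gradient(rendered_image, denoise_fn, alpha_bar, rng):
    """Score Distillation Sampling gradient (illustrative sketch).

    denoise_fn is a hypothetical stand-in for the frozen text-conditioned
    diffusion model's noise predictor eps_hat(noisy_image, t).
    alpha_bar is the cumulative noise schedule, indexed by timestep t.
    """
    t = rng.integers(low=1, high=len(alpha_bar))      # random diffusion timestep
    a = alpha_bar[t]
    eps = rng.standard_normal(rendered_image.shape)   # Gaussian noise
    noisy = np.sqrt(a) * rendered_image + np.sqrt(1.0 - a) * eps
    eps_hat = denoise_fn(noisy, t)                    # frozen model's prediction
    # SDS: treat (eps_hat - eps) as the gradient w.r.t. the rendered image,
    # skipping backprop through the diffusion model itself.
    w = 1.0 - a                                       # one common weighting choice
    return w * (eps_hat - eps)
```

This gradient is then backpropagated through the differentiable renderer into the NeRF parameters; no gradient flows through the diffusion model.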
The 3D model we generate is an improved NeRF that outputs a volume with density, color, and surface normals:
DreamFusion represents appearance as a material color, which can be combined with normals for rendering under different lighting conditions:
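Relighting from a material color plus normals amounts to diffuse (Lambertian) shading. A small sketch under that assumption; the function and parameter names here are illustrative, not DreamFusion's API:

```python
import numpy as np

def diffuse_shade(albedo, normals, light_dir, light_color, ambient):
    """Lambertian shading sketch: material color (albedo) scaled by the
    clamped dot product of the surface normal and the light direction,
    plus an ambient term. Shapes: albedo/normals are (..., 3)."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)                      # normalize light direction
    # n.l, clamped to zero for surfaces facing away from the light
    lambert = np.clip(normals @ l, 0.0, None)[..., None]
    return albedo * (ambient + light_color * lambert)
```

Varying `light_dir` at render time is what lets the same generated geometry be shown under different lighting conditions.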
We can even take several 3D models generated by DreamFusion and compose them into new scenes: