Text-to-3D

DreamFusion: Text-to-3D using 2D Diffusion

https://arxiv.org/pdf/2209.14988

Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models

https://arxiv.org/pdf/2212.14704

Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation

https://arxiv.org/abs/2303.13873

Compared with earlier Text-to-3D work, it generates higher-quality 3D models: it abandons NeRF (with its known drawbacks) in favor of the same rendering approach used in GET3D, and adds a step that provides photorealistic rendering.

DreamBooth3D: Subject-Driven Text-to-3D Generation

https://arxiv.org/abs/2303.13508

Generates 3D assets matching a text description while keeping the subject consistent with user-provided images; combines DreamBooth with DreamFusion through a fairly involved multi-stage pipeline (sketched below).

3D-CLFusion: Fast Text-to-3D Rendering with Contrastive Latent Diffusion

https://arxiv.org/abs/2303.11938

Feeds a CLIP embedding into the diffusion model as side conditioning and maps it to the latent space of a latent NeRF (e.g., EG3D, StyleNeRF), achieving text-to-3D generation.

Debiasing Scores and Prompts of 2D Diffusion for Robust Text-to-3D Generation

https://arxiv.org/pdf/2303.15413

View Synthesis

Partial-View Object View Synthesis via Filtered Inversion

https://arxiv.org/abs/2304.00673

Generative Novel View Synthesis with 3D-Aware Diffusion Models

https://arxiv.org/abs/2304.02602

Consistent View Synthesis with Pose-Guided Diffusion Models

CVPR 2023: https://arxiv.org/abs/2303.17598

Enhanced Stable View Synthesis

https://arxiv.org/abs/2303.17094

Decoupling Dynamic Monocular Videos for Dynamic View Synthesis

https://arxiv.org/abs/2304.01716

Efficient View Synthesis and 3D-based Multi-Frame Denoising with Multiplane Feature Representations

https://arxiv.org/abs/2303.18139