TY - GEN
T1 - SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation
T2 - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
AU - Cheng, Yen Chi
AU - Lee, Hsin Ying
AU - Tulyakov, Sergey
AU - Schwing, Alexander
AU - Gui, Liangyan
N1 - Although the results look promising and exciting, there are quite a few future directions for improvement. First, SDFusion is trained on high-quality signed distance function representations. To make the model more general and to enable the use of more diverse data, a model that operates on various 3D representations simultaneously is desirable. Another future direction is related to the diversity of the data: we currently apply SDFusion on object-centric data. It is interesting to apply the model to more challenging scenarios (e.g., entire 3D scenes). Finally, we believe there is room to further explore how to combine models trained on 2D and 3D data. Acknowledgements: Work supported in part by NSF under Grants 2008387, 2045586, 2106825, MRI 1725729, and NIFA award 2020-67021-32799. Thanks to NVIDIA for providing a GPU for debugging.
PY - 2023
Y1 - 2023
N2 - In this work, we present a novel framework built to simplify 3D asset generation for amateur users. To enable interactive generation, our method supports a variety of input modalities that can be easily provided by a human, including images, text, partially observed shapes, and combinations of these, further allowing users to adjust the strength of each input. At the core of our approach is an encoder-decoder, compressing 3D shapes into a compact latent representation, upon which a diffusion model is learned. To enable a variety of multimodal inputs, we employ task-specific encoders with dropout followed by a cross-attention mechanism. Due to its flexibility, our model naturally supports a variety of tasks, outperforming prior works on shape completion, image-based 3D reconstruction, and text-to-3D. Most interestingly, our model can combine all these tasks into one Swiss-army-knife tool, enabling the user to perform shape generation using incomplete shapes, images, and textual descriptions at the same time, providing the relative weights for each input and facilitating interactivity. Despite our approach being shape-only, we further show an efficient method to texture the generated shape using large-scale text-to-image models.
AB - In this work, we present a novel framework built to simplify 3D asset generation for amateur users. To enable interactive generation, our method supports a variety of input modalities that can be easily provided by a human, including images, text, partially observed shapes, and combinations of these, further allowing users to adjust the strength of each input. At the core of our approach is an encoder-decoder, compressing 3D shapes into a compact latent representation, upon which a diffusion model is learned. To enable a variety of multimodal inputs, we employ task-specific encoders with dropout followed by a cross-attention mechanism. Due to its flexibility, our model naturally supports a variety of tasks, outperforming prior works on shape completion, image-based 3D reconstruction, and text-to-3D. Most interestingly, our model can combine all these tasks into one Swiss-army-knife tool, enabling the user to perform shape generation using incomplete shapes, images, and textual descriptions at the same time, providing the relative weights for each input and facilitating interactivity. Despite our approach being shape-only, we further show an efficient method to texture the generated shape using large-scale text-to-image models.
KW - Image and video synthesis and generation
UR - http://www.scopus.com/inward/record.url?scp=85168549236&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85168549236&partnerID=8YFLogxK
U2 - 10.1109/CVPR52729.2023.00433
DO - 10.1109/CVPR52729.2023.00433
M3 - Conference contribution
AN - SCOPUS:85168549236
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 4456
EP - 4465
BT - Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
PB - IEEE Computer Society
Y2 - 18 June 2023 through 22 June 2023
ER -