In this project, we developed a prototype to explore the feasibility of generating detailed 3D meshes from rough, quick sketches and doodles.
We collected a small set of 3D models for training, then augmented the dataset by applying a series of 3D transformations and targeted mesh deformations. We also converted these models into watertight meshes using custom algorithms and generated corresponding human-like doodles for each model, both manually and through automated processes.
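As one illustration of the augmentation step, a point-cloud version of a model can be randomly rotated, scaled, and jittered. The sketch below is a hypothetical minimal example; the function name, parameter ranges, and noise level are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

def augment(points: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply a random rotation, scale, and jitter to an (N, 3) point cloud.

    Illustrative augmentation only; ranges are assumed, not the project's.
    """
    theta = rng.uniform(0.0, 2.0 * np.pi)  # random rotation about the z-axis
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    scale = rng.uniform(0.8, 1.2)                  # mild uniform scaling
    jitter = rng.normal(0.0, 0.01, points.shape)   # small Gaussian noise
    return (points @ rot.T) * scale + jitter

rng = np.random.default_rng(0)
cloud = rng.random((1024, 3))
augmented = augment(cloud, rng)
print(augmented.shape)  # (1024, 3)
```

Each call produces a distinct variant of the same shape, which is a common way to stretch a small 3D dataset.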
We utilized a Variational Autoencoder (VAE) combined with a PointNet++ architecture, the state of the art at the time. We customized the model to accept sketch inputs and to output 3D meshes instead of point clouds.
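To make the VAE component concrete, the core idea is to encode an input into a mean and log-variance, sample a latent code via the reparameterization trick, and decode that code into geometry. The toy sketch below uses fixed random linear maps and illustrative dimensions as stand-ins for the real encoder and decoder; none of these sizes or weights reflect the actual trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, feat_dim, out_points = 8, 32, 64  # illustrative sizes only

# Toy "encoder" head: fixed linear maps from a global feature to (mu, logvar).
W_mu = rng.normal(0, 0.1, (feat_dim, latent_dim))
W_lv = rng.normal(0, 0.1, (feat_dim, latent_dim))
# Toy "decoder": a linear map from the latent code to a flat set of 3D points.
W_dec = rng.normal(0, 0.1, (latent_dim, out_points * 3))

feature = rng.normal(size=feat_dim)  # stand-in for a PointNet++-style global feature
mu, logvar = feature @ W_mu, feature @ W_lv

# Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
eps = rng.normal(size=latent_dim)
z = mu + np.exp(0.5 * logvar) * eps

points = (z @ W_dec).reshape(out_points, 3)
print(points.shape)  # (64, 3)
```

In the real system, the decoder would produce mesh vertices and connectivity rather than a raw point set, and the weights would be learned rather than fixed.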
Despite the limited dataset, our approach successfully demonstrated AI’s potential to transform simple sketches into detailed 3D models.