News

In traditional 3D modeling, the process often begins with a concept sketch or a 2D image. Designers then have to manually interpret these flat visuals and reconstruct them in a 3D environment.
The AI research labs at Facebook, Nvidia, and startups like Threedy.ai have at various points tried their hand at the challenge of 2D-object-to-3D-shape conversion. But in a new preprint paper, a ...
These include bringing in 3D models to make them fit in with 2D artwork, being able to use 3D geometry for rotoscoping or reference, using 3D cameras and layout, and using Blender’s modifiers on ...
Researchers have written an algorithm to derive 3D graphics from 2D data, quickly and at scale. Microsoft researchers claim to have devised an AI able to generate better 3D shapes from 2D images and ...
Specifically, MonoCon can identify 3D objects in 2D images and place each one in a "bounding box," which marks the outermost edges of the relevant object for the AI.
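The bounding box idea itself is simple to illustrate. The sketch below is not MonoCon's code — just a minimal, hypothetical example of computing an axis-aligned 2D bounding box (the outermost edges) from a set of pixel coordinates:

```python
def bounding_box_2d(points):
    """Axis-aligned 2D bounding box: the outermost edges of a set of (x, y) points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))  # (x_min, y_min, x_max, y_max)

# Illustrative pixel coordinates belonging to one detected object
object_pixels = [(12, 40), (85, 38), (50, 90), (20, 77)]
print(bounding_box_2d(object_pixels))  # → (12, 38, 85, 90)
```

A detector like MonoCon goes further, predicting oriented 3D boxes from a single image, but the 2D box above captures the basic "tightest rectangle around the object" notion the snippet describes.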
An AI method enables the generation of sharp, high-quality 3D shapes that are closer to the quality of the best 2D image models. Previous approaches typically generated blurry or cartoonish 3D shapes.
[Carson Katri] has a fantastic solution for easily adding textures to 3D scenes in Blender: have an image-generating AI create the texture on demand, and do it for you. As shown here, two featureless b…
Nvidia researchers have created a rendering framework that uses AI to take 2D information and transform it into a 3D object accurately. The system is called DIB-R, ...