Many people are searching for easier ways to generate 3D models. A new software project called 3-Sweep, under development at an Israeli university, makes it possible to create 3D models from a single 2D photograph.

The research project’s live demo generated lots of interest, both at its debut at the SIGGRAPH Asia 2013 computer graphics conference and online. A YouTube demo shows its operation and some of its limitations, and already has more than a million and a half views. The video, a must-watch, demonstrates how a human quickly traces and “sweeps” objects in an image, forming a model that can be resized, rotated, or moved around.

The research group had primarily interior design, gaming and 3D modeling software applications in mind during development, but many other fields could potentially use it. Tao Chen was one of the graduate students working on the project while he attended the Interdisciplinary Center in Israel. He’s now a post-doc researcher at Columbia University, and he spoke to GDP about how this technology could eventually be useful to GIS and other geospatial communities.

Other Methods for Image-Based 3D Modeling
Currently, to form a 3D CAD model from a 2D image, a user would make a high-resolution scan of the image and use 3D CAD software such as SolidWorks, Pro/ENGINEER or AutoCAD that supports user-input images. Modeling requires many scanned views that share a common origin in 3D space. It also requires users who understand projection and have modeling experience with the software.

Another way is to use software that allows images to serve as a reference for an artist to complete the modeling. Chen says tools like SketchUp and 3ds Max are the most useful for accomplishing this, but even these are hard for new users to learn, and they don’t extract features from the images.

According to Chen, a few other groups are researching image-based modeling, and some are developing automated tools that don’t require human interaction. But those tools have difficulty with complicated objects because they assume an image contains specific features before extracting the points needed to model it.

Much Easier for Novice Modelers
This research project evolved from the desire to create models from user-drawn sketches. Chen says they wanted the resulting models to look better than the sketches themselves. The group found a unique way to do this.

With 3-Sweep, a user draws three sweeps, each defining one dimension of a primitive object, such as a cylinder or box, in a 2D image. Edge detection snaps the strokes to object outlines, automating most of the interaction. While the image itself contains no 3D information, geometric constraints such as parallel lines on a surface let the software estimate depth and generate the 3D model.
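The actual system couples these strokes with detected image edges and solves for depth; as a rough illustration of the sweep idea alone, here is a minimal Python sketch (function names and structure are hypothetical, not from the 3-Sweep code) that turns a sampled sweep stroke and a radius into the vertex rings of a generalized cylinder:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sweep_cylinder(axis_points, radius, segments=16):
    """At each sample of the sweep stroke, place a circle of the given
    radius perpendicular to the local stroke direction; the stacked
    rings form the surface of a generalized cylinder."""
    rings = []
    for i, p in enumerate(axis_points):
        # Local tangent from neighboring stroke samples
        nxt = axis_points[min(i + 1, len(axis_points) - 1)]
        prv = axis_points[max(i - 1, 0)]
        t = normalize(tuple(nxt[j] - prv[j] for j in range(3)))
        # Build an orthonormal frame (u, v) perpendicular to the tangent
        up = (0.0, 0.0, 1.0) if abs(t[2]) < 0.9 else (1.0, 0.0, 0.0)
        u = normalize(cross(t, up))
        v = cross(t, u)
        ring = []
        for k in range(segments):
            a = 2.0 * math.pi * k / segments
            ring.append(tuple(p[j] + radius * (math.cos(a) * u[j]
                              + math.sin(a) * v[j]) for j in range(3)))
        rings.append(ring)
    return rings

# A straight vertical sweep stroke yields a plain upright cylinder
axis = [(0.0, 0.0, float(z)) for z in range(5)]
mesh = sweep_cylinder(axis, radius=1.0)
```

In 3-Sweep itself, the radius and axis are not typed in but recovered from the user's strokes after they snap to image edges; this sketch only shows how a swept axis and cross-section become geometry.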

Potentially Useful for GIS/Construction Applications
Chen says users can decompose most objects into cubes, cylinders or coins, but that’s not always possible for freeform shapes. The approach struggles with fine details such as small trees or other natural items that lack parallel features. While some objects may not be compatible with this method, 3-Sweep works well with man-made objects that have sharp edges and good definition. Think pipelines, wind turbines and other structures.

Someone arriving at a construction or surveying job site could take a picture with a phone or tablet, quickly create a model, and use it to correct or update a larger, inaccurate model. Chen says the method would currently struggle on mobile devices, which have limited computational power, but he expects that limitation to be solved in time.

3-Sweep could be very useful for generating all or parts of streets, curbs, roads, adjacent buildings or plots, making it a helpful planning tool. A user can input a single photo of a building or a plot and use 3-Sweep technology to generate parts of the building.

This technology makes it easy to model a simple rectangular building, but for more complex structures and small details, more than one image may be needed. The group created a single model of a Chinese tower by merging one image of the main part of the building with another that focused on its details.

Chen says that current mesh generating methods for modeling landscapes are very slow. “If a user can input an image, 3D modeling is much faster. Sometimes 3D mesh data may be missing points. Hand sketches or images might be easier to obtain, or all that’s available. Our group looked to see if we can combine image data with scanning data to make both techniques work better and found that it is helpful to have the 3D grid point data with the depth of the object,” he says.

Project Direction
3-Sweep has no current public release, and Chen says the research group wants to improve how users interact with the software by making it more user-friendly. Currently, the three professors at the Israeli university involved with the research are raising funds to continue software development. Watch the Computer Science Lab’s 3-Sweep project page for updates.