From 2-D to 3-D with a few clicks

By Peter Kim

Every day millions of students log on to the social networking Web site Facebook to view photos of friends and family.

However, thanks to current engineering research, future users may be able to upload 3-D photos that allow the viewer to see behind, above and to the side of a person’s head.

Led by Thomas Huang, electrical and computer engineering professor, University scientists are researching ways to teach computers to look at a 2-D photo of a person’s face and use its previous knowledge of humans to generate a 3-D model of the person’s whole head.

“Humans already do this automatically. We can easily recall an image of a face and imagine it from a back or from a side view. People can do this easily because we have seen lots of faces in our lives,” said Yuxiao Hu, graduate student. “We are teaching computers to learn this. We give it a 3-D database (of different face images), so it can reconstruct the face.”

They do this by capturing hundreds of images of different people’s heads from various angles with a 3-D laser scanner. Scientists then put all of the scans into a database from which computers can draw to create 3-D models of human heads and faces.
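The article does not describe the group’s actual algorithm, but database-driven face reconstruction is often illustrated with a statistical model: each scanned head becomes a vector of 3-D coordinates, a low-dimensional basis is learned from the scan database, and that basis is fitted to the partial information in a flat photo. The sketch below is a hypothetical toy version of that idea using synthetic data, not the researchers’ method.

```python
import numpy as np

# Toy illustration: learn a statistical head model from a "database"
# of 3-D scans, then reconstruct a full head from a partial (2-D-like)
# observation. All data here is synthetic.

rng = np.random.default_rng(0)

n_scans, n_points = 200, 50                # tiny synthetic scan database
scans = rng.normal(size=(n_scans, n_points * 3))

# Statistical model: mean head plus the strongest modes of variation.
mean_head = scans.mean(axis=0)
_, _, components = np.linalg.svd(scans - mean_head, full_matrices=False)
basis = components[:10]                    # keep 10 principal modes

# A new head that the model has never seen, but that lies in its span.
true_head = mean_head + basis.T @ rng.normal(size=10)

# A flat photo only constrains part of the geometry; here we pretend
# the first two-thirds of the coordinates are the observable ones.
observed_idx = np.arange(n_points * 2)
observed = true_head[observed_idx]

# Fit the model coefficients to the partial observation, then use them
# to predict the full, unseen 3-D geometry.
A = basis.T[observed_idx]
coeffs, *_ = np.linalg.lstsq(A, observed - mean_head[observed_idx],
                             rcond=None)
reconstructed = mean_head + basis.T @ coeffs
```

Because the unknown head lies in the span of the learned basis, the partial observation is enough to pin down the coefficients and recover the hidden geometry; real photos add lighting, expression, and projection effects that make the fit far harder, as the researchers note below.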

Once computers can reconstruct a 3-D head or face from a flat image, other objects will be easier to reconstruct, Hu added.

In the future, people may be able to take just a couple of photos of a room, upload them to a computer and then virtually walk into the room and look around.

“Human faces are not rigid. Lighting effects or strong emotions can make it hard to reconstruct them,” Hu said. “Humans are more perceptive of deformations in the face than other objects, so once we can do faces, other things won’t be so difficult.”

Once the technology for reconstructing faces is complete, its effect could ripple through everything from online gaming to criminal justice.

In online computer games, such as Second Life or World of Warcraft, where players can create personalized characters, a player may wish to create a character in their own likeness. Face modeling could be used to take a picture of the player and instantly create a virtual 3-D character that looks exactly like them, Hu said.

The reverse of this process is also possible and would be useful in identifying criminals from security videos.

“When we watch a security video, we can’t expect to see exactly the front face of a criminal. We could (use a 3-D face model to) calculate variations of the criminal’s face from that angle and definitely improve the accuracy of recognizing that face,” said Zhihong Zeng, postdoctoral researcher at the University.
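Zeng’s point can be sketched geometrically: given a 3-D head model, a recognizer can rotate it to match the camera’s viewpoint instead of hoping the video happens to catch a frontal face. The snippet below is a minimal, hypothetical illustration of that pose-normalization idea with a few made-up landmarks, not code from the University project.

```python
import numpy as np

def yaw_rotation(degrees):
    """Rotation matrix about the vertical (y) axis."""
    t = np.radians(degrees)
    return np.array([[ np.cos(t), 0.0, np.sin(t)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(t), 0.0, np.cos(t)]])

# A few 3-D landmarks of a synthetic frontal head model.
frontal = np.array([[ 0.0, 0.0, 1.0],   # nose tip
                    [-0.5, 0.3, 0.7],   # left eye
                    [ 0.5, 0.3, 0.7]])  # right eye

# Render the model at the camera's estimated 30-degree viewing angle,
# producing a template that matches what the security video shows.
rotated = frontal @ yaw_rotation(30.0).T

# Dropping z gives the 2-D landmark layout the camera would see.
projected = rotated[:, :2]
```

Rotating the model by the negated angle brings it back to the frontal pose, which is exactly the "calculate variations of the face from that angle" step Zeng describes.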

Another large part of the research involves teaching computers to recognize and generate facial expressions and emotions, which could improve speech recognition in word processors such as Microsoft Word.

“People always use facial expressions to express what they are saying,” Zeng said. “Recognizing facial expressions (through 3-D computer models) will improve the accuracy of speech recognition.”

If computers could recognize expressions and emotions, they could also produce 3-D faces that move and speak with emotion, Hu said. Hu believes that this is one step forward for artificial intelligence technology.

“Robots always have neutral expressions. We can always tell when it is a robot,” Hu said. “Maybe in the future, we will not be able to tell if it is a person or a robot.”