Google-funded research will scan clothing and behavior

Two faculty members have received awards from Research at Google, the Internet giant’s R&D division, to support computer science projects. Each award provides up to $150,000 for one year to cover the tuition of a graduate student working on the project, along with opportunities to collaborate with Google researchers and engineers.

Tanzeem Choudhury, associate professor of information science, will develop an application for Google Glass to monitor nonverbal behavior and coach the wearer on how to avoid costly errors in interpersonal relations.

Subtle cues like body language, eye contact and tone of voice can influence the impression a person makes in job interviews, doctor-patient conversations, public speaking and social situations. Google Glass, Choudhury says, offers ways to monitor such behavior. Its camera can report what the wearer is looking at, and a face-detection algorithm can determine whether the person is making eye contact. A gyroscope in the device can track head movements; nodding “yes” or shaking “no” suggests the wearer is paying attention. Google Glass also carries a microphone, and Choudhury will apply voice-analysis tools she has already developed for mobile phones.
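
To make the head-movement idea concrete, here is a minimal sketch of one way a nod might be told apart from a shake using raw gyroscope samples. The axis convention, window length and threshold are illustrative assumptions, not details of Choudhury’s system.

```python
import numpy as np

# Assumed axis convention (not from the article): with the device worn
# level, index 0 is pitch (nodding "yes") and index 1 is yaw (shaking "no").
PITCH, YAW = 0, 1

def classify_head_motion(gyro, threshold=0.5):
    """Label a window of gyroscope samples as 'nod', 'shake' or 'still'.

    gyro: (N, 3) array of angular velocities in rad/s, one row per sample.
    threshold: minimum RMS rotation rate to count as deliberate motion;
               a placeholder value a real system would have to calibrate.
    """
    rms = np.sqrt(np.mean(gyro ** 2, axis=0))  # per-axis RMS rotation rate
    if max(rms[PITCH], rms[YAW]) < threshold:
        return "still"
    # Whichever rotation axis dominates decides nod vs. shake.
    return "nod" if rms[PITCH] > rms[YAW] else "shake"

# Example: a synthetic one-second window (50 Hz) of nodding motion.
t = np.linspace(0, 1, 50)
nod_window = np.column_stack([
    2.0 * np.sin(2 * np.pi * 2 * t),   # strong pitch oscillation
    0.05 * np.random.randn(50),        # small yaw noise
    0.05 * np.random.randn(50),        # small roll noise
])
print(classify_head_motion(nod_window))  # -> "nod"
```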

Although messages like “you’re not paying attention,” “speak slower” or “make eye contact” could be flashed to the wearer, Choudhury says the challenge is to provide feedback that is helpful rather than distracting. Instead of flashing text, for example, the device might cast a color-coded tint over the display. She will conduct experiments to compare several approaches. To preserve privacy, she notes, none of the images or conversations used in the analysis will be recorded.
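
As an illustration of the color-coded idea, a behavioral score could drive an overlay tint instead of a text message. The score scale, colors and transparency curve below are invented for the sketch and are not the design Choudhury will test.

```python
def overlay_tint(attention_score):
    """Map an attention score in [0, 1] to an RGBA overlay color.

    Low scores tint the display red as a gentle cue; high scores fade the
    tint out. The colors and alpha curve are illustrative assumptions,
    not the project's actual design.
    """
    s = min(max(attention_score, 0.0), 1.0)
    red, green = int(255 * (1 - s)), int(255 * s)
    alpha = int(80 * (1 - s))  # more transparent as attention improves
    return (red, green, 0, alpha)

print(overlay_tint(0.2))  # strong reddish tint: (204, 51, 0, 64)
print(overlay_tint(0.9))  # barely visible green: (25, 229, 0, 8)
```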

Stephen Marschner, professor of computer science and a specialist in computer graphics, aims to develop a method for 3-D scanning of soft, deformable objects like a shoe, backpack or pair of jeans. The ultimate goal is to scan clothing into working models that could be draped on 3-D animated characters or tested for fit against a body scan of a particular person.

Scanning rigid objects has become routine, using 3-D cameras or software that combines multiple still images taken from different angles into a 3-D model. The model can be fed to a 3-D printer to make a copy or used as the basis for a computer graphics image.
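
For a sense of what that routine rigid pipeline involves, the core step for a 3-D camera is back-projecting each depth pixel through the camera intrinsics into a point in space. A minimal sketch of that step, with made-up pinhole-camera parameters:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into 3-D points (pinhole camera model).

    depth: (H, W) array of depths in meters (0 = no measurement).
    fx, fy, cx, cy: camera intrinsics; the values used below are assumed,
    not taken from any scanner mentioned in the article.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Tiny synthetic depth map: a flat surface 1.5 m from the camera.
depth = np.full((4, 4), 1.5)
cloud = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```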

In scanning clothing and other soft objects, there’s good news and bad news. It’s easy to move the object around to scan front and back, outside and inside, and in a variety of poses. But the computer can get confused about which points in the object are supposed to be connected: If the flap of a handbag is raised, should the front of the bag come with it?
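
That confusion shows up even in a toy example: naive nearest-neighbor matching between two poses of a hinged flap pairs the flap’s tip with the bag body rather than with its true counterpart on the raised flap. The 2-D geometry below is invented purely to illustrate the failure mode.

```python
import numpy as np

# Toy 2-D "handbag": body points along the top edge, flap hinged at x = 3.
body = np.array([[x, 0.0] for x in np.linspace(0, 4, 9)])
flap_closed = np.array([[3 + t, 0.1] for t in np.linspace(0, 1, 5)])  # pose A
flap_open = np.array([[3.0, 0.1 + t] for t in np.linspace(0, 1, 5)])  # pose B

pose_b = np.vstack([body, flap_open])

def nearest_neighbor(p, cloud):
    """Index of the point in `cloud` closest to `p`."""
    return int(np.argmin(np.linalg.norm(cloud - p, axis=1)))

# Match the flap tip from the closed pose into the open pose.
tip = flap_closed[-1]                          # (4.0, 0.1)
match = pose_b[nearest_neighbor(tip, pose_b)]
# Prints [4. 0.], a body point, instead of the raised tip at (3.0, 1.1):
print(tip, "->", match)
```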

Marschner and his team plan to combine static scans of an object in various positions with video of it in motion to capture the whole shape, inside and out, and end up with a working model of how it moves.

In the year supported by the grant, they plan to build a working system that can scan simple objects – a shoe, a cloth pouch – with the goal of moving on to clothing.

“A system that can acquire realistic, animatable models of existing garments on a large scale has the potential to completely change the online marketing of clothing,” they said.

Media Contact

Syl Kacapyr