Think “Westworld.”
In the show, futuristic artificial intelligence has advanced to the point where machines are indistinguishable from humans. Is that possible?
Maybe, thanks to research from a University of Houston professor.
Zhigang Deng, the director of graduate studies in the Department of Computer Science, is breaking new ground in behavioral science by quantifying eye contact in multi-person conversations.
“I’m trying to understand human behavior from a computational standpoint and, based on the results, understand how humans and computers can work together,” Deng said.
Rather than taking the qualitative approach that behavioral scientists typically use when analyzing how humans interact, Deng records his human subjects on highly sensitive cameras and uses computers to map out their every eye movement.
His computers then crunch the data to give him a high-quality quantitative representation of how humans use eye contact to facilitate communication when there is more than one person involved in a conversation. Then, he applies the findings to computer-generated human avatars.
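The article does not detail Deng’s actual algorithms, but as a rough illustration of what “quantifying eye contact” might involve, the sketch below classifies, frame by frame, which conversation partner a tracked gaze ray is aimed at, then summarizes the result as a distribution. Every name, threshold and data layout here is an assumption made for illustration, not the team’s method.

```python
import numpy as np

def gaze_targets(gaze_origins, gaze_dirs, positions, names, max_angle_deg=10.0):
    """Label, per frame, which participant (if any) the subject is looking at.

    gaze_origins: (T, 3) eye positions per frame
    gaze_dirs:    (T, 3) unit gaze direction vectors per frame
    positions:    (P, 3) head positions of the other participants
    names:        list of P participant labels
    """
    labels = []
    cos_thresh = np.cos(np.radians(max_angle_deg))
    for origin, direction in zip(gaze_origins, gaze_dirs):
        # Unit vectors from the subject's eyes to each participant.
        to_people = (positions - origin).astype(float)
        to_people /= np.linalg.norm(to_people, axis=1, keepdims=True)
        # Alignment between the gaze ray and each participant direction.
        cos_sim = to_people @ direction
        best = int(np.argmax(cos_sim))
        labels.append(names[best] if cos_sim[best] >= cos_thresh else "elsewhere")
    return labels

def gaze_distribution(labels):
    """Fraction of frames spent looking at each target -- one simple
    quantitative summary of gaze behavior in a conversation."""
    unique, counts = np.unique(labels, return_counts=True)
    return dict(zip(unique, counts / len(labels)))
```

Applied to a recorded conversation, the per-target fractions give a compact statistic (who looks at whom, and how often) that could then be used to drive an avatar’s gaze.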
This research, Deng said, could drastically change the way video games look and feel, enabling developers to create ultra-lifelike, on-screen human simulations.
“(The findings from this research) could make avatars more natural and believable,” said Yu Ding, Deng’s postdoctoral researcher. “In the industry, animations are produced manually by artists. It is very time-consuming and expensive, and the produced animation can be applied only to delicately planned scenarios.”
Deng and his team hope that computers can use the findings from this research to create virtual people capable of displaying human-like behaviors without a graphic animator having to design every minuscule motion.
Programs built on the findings of this research could dramatically cut costs for Hollywood films like “The Lord of the Rings” or James Cameron’s “Avatar,” which relied on expensive motion-capture technology.
“It can automatically generate the animation of multi-party conversations from the speech information alone, including the hand gestures, lip-sync, facial expressions and eye gaze direction,” said Yuting Zhang, a second-year doctoral candidate under Deng. “That is almost everything during a conversation. So in the fields of film and games, we don’t need to capture real humans’ behaviors anymore, which costs much time and labor.”
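Zhang’s description suggests a pipeline that maps speech and turn-taking information onto animation channels. As a toy illustration of just the gaze channel, a rule-based sketch could assign gaze targets from speaking turns, with the speaker addressing listeners and listeners mostly watching the speaker. The names, probabilities and structure below are invented for illustration and are not the team’s model.

```python
import random

def generate_gaze_track(turns, participants, look_away_prob=0.2, seed=0):
    """Toy rule-based generator: given speaking turns, produce a gaze target
    for every participant in each turn. (Illustrative only.)

    turns: list of (speaker, duration_seconds) tuples
    """
    rng = random.Random(seed)
    track = []
    for speaker, duration in turns:
        listeners = [p for p in participants if p != speaker]
        frame = {speaker: rng.choice(listeners)}   # speaker picks a listener
        for listener in listeners:
            if rng.random() < look_away_prob:
                frame[listener] = "away"           # brief gaze aversion
            else:
                frame[listener] = speaker          # attend to the speaker
        track.append((duration, frame))
    return track

if __name__ == "__main__":
    people = ["Ana", "Ben", "Cho"]
    turns = [("Ana", 2.5), ("Ben", 1.0), ("Ana", 3.2)]
    for duration, frame in generate_gaze_track(turns, people):
        print(f"{duration:>4}s  {frame}")
```

A data-driven system like the one Zhang describes would replace these hand-written rules with behavior learned from the recorded conversations.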
The applications of this research reach beyond the film and gaming industries; it could also revolutionize virtual education, training and medicine, Deng said.
The presence of a human-like gaze could help many people learn more effectively, especially in subjects that typically require another person or an actor to teach, like a medical student learning proper bedside manner, Deng said. It could even aid in the diagnosis of autism, a notoriously hard-to-diagnose disorder.
Deng even foresees a future in which computers work in tandem with detectives to uncover the truth, analyzing a suspect’s eye movements and body language to serve as a more accurate polygraph test.
“As long as I can transfer expert knowledge of criminal investigation or identification into the computer, then the police could use this application,” Deng said.
It may not sound like “Westworld” just yet, but Deng hopes that in the future, robots will communicate in humanity’s native language: eye contact.
“In the future, we can make social or humanoid robots that have a normal human gaze, the gaze I’m familiar with,” Deng said.