Facial expressions of virtual characters can capture those of their human counterparts more realistically, thanks to new research from the Universidad Autónoma del Estado de México (UAEM), the Autonomous University of Mexico State.
Over the years, technology has made video games increasingly realistic, and motion capture in particular has been used to better replicate human behaviour in the virtual world. This has been especially evident in the area of facial expressions, with recent developments from companies such as Rockstar Games and Naughty Dog leading the way.
Rockstar Games’ L.A. Noire uses MotionScan technology to capture actors’ facial expressions, allowing players, cast as detectives, to better spot a suspect’s lie. For Naughty Dog’s The Last of Us, special animation was used to capture the movement of actors’ facial muscles individually for a more realistic result.
While these developments have certainly brought a new level of reality to video games, the technology has, up to this point, been dependent on actors. But what if virtual characters could be made to be more realistic without the help of actors?
“I would like to see a character that is expressing itself, not precaptured, not generating canned expressions or creating them from semantic rules, but creating expressions by the same things that create our expressions,” says Javier von der Pahlen, director of creative research and development at the Central Studios division of Activision, as quoted by IEEE Spectrum in May.
Writer Tekla Perry goes on to say that “a truly digital character, one that does its own acting rather than conveying the acting of another actor, will be created only by merging artificial-intelligence technology that passes the Turing test in both verbal and nonverbal responses — physical gestures, facial expressions and so on — with perfectly rendered computer graphics. That will be really hard.”
Could UAEM’s new research provide us with this kind of technology?
The university’s department of computer science teamed up with the Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional (Center for Research and Advanced Studies of the National Polytechnic Institute) and the University of Guadalajara, with students serving as human models.
Together, they are working on a virtual project known as ‘juego serio’ (‘serious game’) which, according to UAEM engineer Marco Antonio Ramos Corchado, is for educational, scientific and civil purposes. But the research can also be used to make the facial expressions of video game characters more realistic.
Currently, virtual characters mimic human behaviour through programmed commands or scripts, which, Ramos Corchado says, result in a “robotic” reaction. UAEM’s research, on the other hand, studies the 43 muscles involved in human facial behaviour to generate more realistic expressions and emotions.
Students are fitted with tactile sensors that pick up the tiny electrical pulses generated as they perform different gestures, while a 3D camera captures the gestures themselves.
The information is then translated into numerical data and entered into a kinesic model designed by UAEM, which accounts for the influence of emotions, attitudes and moods on human behaviour, and for how these vary with social context.
This is then used to animate the expressions and gestures of virtual characters, conveying emotions such as happiness, sadness, fear, anger, surprise and disgust.
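In broad strokes, the pipeline described above turns muscle-activation readings into numbers and matches them against emotion patterns before driving the animation. The minimal sketch below illustrates only that matching idea; the muscle names, activation profiles and nearest-profile classification are illustrative assumptions, not UAEM’s actual kinesic model.

```python
# Illustrative sketch only: match a set of facial-muscle activation readings
# against simplified per-emotion "profiles". Profiles and muscle names are
# hypothetical stand-ins, not UAEM's kinesic model.
from math import dist

# Hypothetical activation levels (0.0-1.0) for three muscles per emotion:
# (zygomaticus, corrugator, frontalis)
EMOTION_PROFILES = {
    "happiness": (0.9, 0.1, 0.2),
    "sadness":   (0.1, 0.7, 0.4),
    "anger":     (0.2, 0.9, 0.1),
    "surprise":  (0.3, 0.1, 0.9),
}

def classify_expression(reading):
    """Return the emotion whose profile lies nearest (Euclidean distance)
    to the given tuple of sensor readings."""
    return min(EMOTION_PROFILES, key=lambda e: dist(EMOTION_PROFILES[e], reading))

if __name__ == "__main__":
    sample = (0.85, 0.15, 0.25)  # readings captured for one gesture
    print(classify_expression(sample))
```

In practice, the numerical data from the sensors and 3D camera would be far richer than three values, and the model would drive continuous animation rather than pick a single label; the sketch only shows how readings can be compared against stored expression patterns.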
Ramos Corchado and his team are busy simulating natural disasters and other scenarios as part of their ‘serious game’, capturing the varied range of human expressions. Meanwhile, players may want to get ready for an improved sense of reality in less serious, but equally important, games too.