Dalhousie


Research in motion

- March 17, 2009

Prof. Aaron Newman and research assistant Hazlin Zaini go through the motions of sign language in front of a green screen. (Nick Pearce Photo)

Gollum. King Kong. Jar Jar Binks. And nearly every character in modern video games.

Motion capture technology has advanced dramatically in the past decade, to the point where digital characters in film and gaming are approaching photo-realism. But Aaron Newman sees the technology's potential for more than just entertainment. The Dalhousie psychologist and Canada Research Chair in Cognitive Neuroscience is using motion capture to help better understand sign language and other forms of gesture-based communication.

"Sign languages use the same abstract rules and patterns as spoken languages, but they're coming in through an entirely different channel: sight as opposed to sound," he says. "How does that shape your brain and its capacity for human language?"

One of the challenges in studying the effects of sign language on the brain is that human gestures are often aided or affected by other stimuli like facial expressions. To overcome this, Dr. Newman decided he needed to prepare his own short videos that strip away everything but the most basic movement. These videos would then be shown to study participants hooked up to an EEG system that monitors brain activity.

"We can see instantly how people react, within milliseconds," he says. "We see the blips of activity coming from different areas of the brain, and that helps us better understand the processes by which people understand these gestures. We want to know where and when symbolic communication crosses the threshold into full-blown language in the brain."

To get the clarity he required in these videos, Dr. Newman used funding from NSERC to purchase a professional motion capture system from a Fredericton company called Measurand, whose major clients are in the film and gaming industries. (Their equipment has even been used by NASA.) Just like you'd see in a behind-the-scenes DVD feature, Dr. Newman and his team hooked their assistants up with dozens of fibre-optic sensors and tracked their movement down to the pinky finger.

But Dr. Newman's lab is full of psychology students, not the programming experts needed to turn all this data into usable video. Coincidentally, Measurand's animation director Carl Callewart was hosting a training seminar on motion capture animation for students at the Centre for the Arts and Technology in Halifax. The two decided to partner on the project. The seminar took place over three days, during which the 20 students prepared over 80 short video clips from Dr. Newman's motion capture data.

Over the next several months, Dr. Newman plans to test the videos on both sign language users and people who don't know sign language at all, and he's already put together a second set of motion capture data that he hopes to turn into another set of animations. He is also planning to use the motion capture data in collaboration with researchers at the National Institute on Deafness and Other Communication Disorders in the U.S.