Sometimes, the ability (or lack thereof) to intuit the emotional responses of those around us is the most readily apparent difference between neurotypical people and those with autism. The innate recognition of what others are feeling based on their behavior, choice of words, and basic changes in facial expression is the first step towards empathy, that all-important social skill.
People with autism often struggle with reading facial expressions. For some, simply making eye contact long enough to internalize expressive subtleties results in sensory overload, and one cannot comprehend what one does not even see. There can also be a basic disconnect between a visual cue and the interpretation thereof.
Although I am not autistic myself, nor are my kids, I have experienced the frustration of my children not getting what they wanted when they were nonverbal, screaming infants: a cry could mean so many things! My responses were guesswork and trial and error, and they often missed the mark the first time around. Misunderstanding nonverbal cues must be difficult on so many levels for autistic people trying to make their way through a social world.
Lijun Yin is addressing this issue through remarkable facial-recognition technology. Yin is developing webcam technology that can "follow" a user's gaze and interpret what to do next based on the eye path. Now, he is working to extend this capability to catch nuances in facial expression, and thereby interpret emotional states. Working with six basic moods (anger, disgust, fear, joy, sadness, and surprise), Yin is figuring out ways to allow a computer to distinguish among them.
The applications of this sort of high-level artificial intelligence are pretty much endless. In furthering his research, Yin is partnering with psychologist Peter Gerhardstein (Binghamton University) to explore ways these advancements could benefit children with autism. Children with autism are often taught to interpret others’ moods by being shown pictures and photographs; using Yin’s technology, computers could instead generate three-dimensional avatars, based on pictures of familiar people (say, the child’s family members). The avatars’ moods would shift, yielding a much richer level of experience and education for this type of therapy.
Facial-recognition technology would also be of great benefit to parents of nonverbal and not-yet-literate autistic children, eliminating some of the guesswork from identifying the emotion behind a symptomatic behavioral display. Understanding sooner, for instance, that your child is in physical pain rather than emotional distress would be of significant value to any parent, let alone the parent of a child with special needs.
This kind of ingenuity and creativity in the use of modern technology gives me tremendous hope for improved quality of life for families dealing with autism in the foreseeable future. It will be exciting to follow the Yin and Gerhardstein collaboration and their unique contributions to the world of autism.