Affective computing refers to the study of emotional machines and their impact on human lives. Though the field encompasses a wide range of disciplines, in technology, it is primarily associated with artificial intelligence (AI). Read on to learn more about what this exciting sector and emotion AI have in store for users.
What Is Affective Computing?
Affective computing refers to the study and development of systems that recognize, interpret, process, and simulate human emotions. It aims to give machines the ability to simulate empathy so they can interpret the emotional states of humans, adapt their behavior, and respond appropriately.
How Did the Concept of Affective Computing Come into Being?
The modern concept of affective computing originated from a paper written by Massachusetts Institute of Technology (MIT) scholar Rosalind Picard. Picard is a respected pioneer in the field, having founded and directed the Affective Computing Research Group at MIT Media Lab and several tech startups.
In her paper, Picard stressed the importance of developing computers that can adequately perform tasks, make better decisions, and assist humans effectively by recognizing their emotions. At the time, Picard noted that it would also be beneficial for computers to develop emotional capabilities of their own. However, in separate research published in 2002, she stated that computers need not perceive emotions themselves but rather should be able to meet a person’s “emotional needs.” In that sense, affective systems should address the user’s “empathic, self-awareness, and experiential needs.” After the interaction, the user should simply feel that the computer acknowledged his or her feelings.
How Do Affective Computing-Powered Technologies Work?
Affective computing-enabled systems detect and react to changes in a user’s language, tone of voice, and nuances in facial expression. They do so by collecting user data through physical sensors, such as video cameras and microphones, and analyzing that information against previously gathered data sets.
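To make this concrete, here is a minimal sketch in Python of such a pipeline: a feature vector derived from sensor input is fed to a trained classifier, and the system adapts its reply to the predicted emotion. The feature names, training values, and canned responses are illustrative assumptions, not any vendor’s actual model.

```python
# Minimal sketch of an affective computing pipeline (hypothetical data).
# A real system would extract features from live camera/microphone input.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: [pitch_variance, speech_rate, smile_intensity]
features = [
    [0.80, 1.20, 0.90],  # labeled "happy"
    [0.20, 0.70, 0.10],  # labeled "sad"
    [0.90, 1.50, 0.20],  # labeled "frustrated"
]
labels = ["happy", "sad", "frustrated"]

classifier = RandomForestClassifier(n_estimators=10, random_state=0)
classifier.fit(features, labels)

def respond(sample):
    """Adapt the system's reply to the user's predicted emotional state."""
    emotion = classifier.predict([sample])[0]
    replies = {
        "happy": "Glad to hear that! Anything else I can help with?",
        "sad": "I'm sorry about that. Let me see what I can do.",
        "frustrated": "I understand this is frustrating. Let's fix it together.",
    }
    return replies[emotion]

print(respond([0.85, 1.40, 0.25]))  # closest to the "frustrated" example
```

A production system would train on large labeled data sets and far richer signals, but the structure stays the same: sense, classify, adapt.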
For instance, the voice pattern analysis software Cogito enables call center agents to recognize changes in customers’ moods in real time so they can adapt their spiels and offer better solutions to the customers’ problems. The software grew out of data mining research on “non-linguistic speech features” that its founder conducted in the early 2000s.
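For context, non-linguistic speech features describe how something is said rather than what is said. The sketch below computes two simple examples, frame-level loudness and the share of near-silent frames (a rough proxy for pauses), from a raw audio buffer; the frame size and silence threshold are illustrative assumptions, not Cogito’s method.

```python
# Sketch: two simple non-linguistic speech features from a raw audio buffer.
import numpy as np

def speech_features(signal: np.ndarray, frame_size: int = 400):
    """Return (mean RMS energy, pause ratio) for a mono audio signal."""
    n_frames = len(signal) // frame_size
    frames = signal[: n_frames * frame_size].reshape(n_frames, frame_size)
    rms = np.sqrt(np.mean(frames**2, axis=1))  # loudness per frame
    pause_ratio = np.mean(rms < 0.01)          # share of near-silent frames
    return rms.mean(), pause_ratio

# Hypothetical usage: one second of synthetic audio at 16 kHz.
audio = np.random.uniform(-0.05, 0.05, 16000)
energy, pauses = speech_features(audio)
print(f"energy={energy:.4f}, pause ratio={pauses:.2f}")
```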
Another example of a commercially available affective computing device is MoodLight, an interactive ambient lighting system that responds to biosensor inputs reflecting a user’s current internal state.
What Are Alternatives to Affective Computing?
Since affective computing is still in its nascent stages, companies can rely on alternatives for now, including:
- In marketing, they can run campaign pilots to assess their potential success. The test results can then serve as inputs to improve the campaigns so they work better after the actual launch.
- In customer service, they can launch customer satisfaction surveys. While these may not allow organizations to make real-time adjustments, they still provide valuable customer feedback.
What Are Some Examples of Applications That Use Artificial Emotional Intelligence?
Thousands of companies currently deploy artificial emotional intelligence, or emotion AI, software to provide better customer service. Here are two examples:
- The Virtual Emotion Resource Network (VERN) is a patent-pending sentiment analysis program that uses deep learning to pick up human emotions via facial recognition and voice sensors; it can even detect humor. According to its designers, VERN can be integrated into chatbots, marketing analytics software, and games.
- Affectiva uses a webcam for facial recognition to document and analyze emotional registers. Its application gauges emotional engagement for entertainment and marketing companies so they can improve their content. It offers a Software Development Kit (SDK) for clients.
What Are Known Challenges in Using Artificial Emotional Intelligence in Technologies?
As research leaders have noted in the Harvard Business Review, reading emotions is far from effortless. What one says may not reflect precisely how one feels; a person having an off day may act happy to conceal his or her feelings. If humans have a hard time interpreting facial expressions and emotions, it’s safe to assume that computers probably can’t do any better. As such, maintaining accuracy when measuring emotions is likely to remain a problem for emotion AI.
There’s also the issue of bias. In the past, some AI systems acquired prejudices from “bad data” and the internalized biases of their developers. Researchers noted that one emotion AI they analyzed was prone to associating negative emotions with some ethnic groups. That is one problem AI system makers are working on.
Can Affective Computing-Enabled Devices Intrude on My Personal Space?
It’s natural to be a little paranoid about humanized computers. Emotions, after all, are private, and not everyone wants their feelings exposed or tracked. Emotion AI developers recognize this concern. To assure users that their emotional data is treated with respect, developers are implementing unobtrusive ways of gathering emotional states, and users can choose to opt in to or out of emotion detection technologies, as the sketch below illustrates.
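In practice, opting in can be enforced in software by gating all emotion analysis behind an explicit consent flag. This minimal sketch shows the pattern; the names and placeholder model call are hypothetical, not any real vendor’s API.

```python
# Sketch: emotion detection gated behind explicit user consent.
# All names here are hypothetical; this illustrates the pattern only.
from dataclasses import dataclass

@dataclass
class UserPreferences:
    emotion_detection_opt_in: bool = False  # off by default: users opt in

def run_emotion_model(sensor_data: bytes) -> str:
    """Placeholder standing in for an actual emotion AI model."""
    return "neutral"

def analyze_interaction(prefs: UserPreferences, sensor_data: bytes):
    # Never collect or process emotional data without consent.
    if not prefs.emotion_detection_opt_in:
        return None
    return run_emotion_model(sensor_data)
```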
