Can machines really read our emotions? Funding secured for research into the usefulness of emotion AI

By Alice Yin, FIMS Communications Staff

December 8, 2021

Assistant professor Luke Stark has been awarded an Insight Development Grant from the Social Sciences and Humanities Research Council (SSHRC) for his project, The Political Economy of Emotion AI. The grant will allow Stark to study artificial intelligence systems designed to analyze human emotions and to develop a framework to guide the design and best use of these technologies.

Emotions are at the core of human identity, and the use of artificial intelligence (AI) systems to collect and analyze data about them is becoming increasingly commonplace.

Stark, who has done extensive research into the historical, social, and ethical impacts of AI technologies, notes an alarming lack of research into whether these increasingly popular forms of emotion AI are accurate, ethical, useful, or socially beneficial.

"Few scholars have examined the political economy around collecting and analyzing emotive data via AI systems, and even fewer have considered how these systems can be designed to be more just, ethical, and fair with regards to data about human emotions."

Stark's project will examine the current and future political economy of AI-driven data collection and analysis of emotional expression in the real-world context of AI-enabled hiring systems in Canada. Why are firms choosing to use these technologies, and do they help or hurt us?

He hypothesizes that the theories of human emotion that ground most work on collecting and analyzing emotive data for AI systems are conceptually flawed and unrepresentative of the breadth of interdisciplinary knowledge on human emotions.

"It seems unlikely that machine learning models can make the kinds of predictions their developers often claim they can make about our feelings: how we're feeling in the moment and what relationship our emotions have with our judgement or future behaviour," he posits.

"But that uncertainty doesn't stop customers of these systems, including businesses and governments, from believing these claims as if they're scientific truths."

Real harms can arise when AI systems misunderstand or ignore human emotions, including the potential for individuals to be discriminated against, manipulated, or stripped of their autonomy by biased technologies.

As the use of AI becomes increasingly commonplace, Stark emphasizes the need for scholars, policymakers, and technologists to understand the complexity of human emotions and the digital economy being built on them when designing, critiquing, and regulating AI systems.

"Ideally, I hope that policymakers and technologists will collaborate to restrict the use of emotion recognition technologies to extremely narrow and tailored cases, if they're used at all."

Stark expects the findings from this project to be transformative in multiple arenas: influencing policymaking and regulatory discussions around these technologies, advancing scholarship on artificial intelligence ethics, and providing technologists with recommendations for designing these technologies in the best interests of users.

The grant will support Stark's work over the next two years. SSHRC Insight Development Grants support and foster excellence in social sciences and humanities research intended to deepen, widen, and increase our collective understanding of individuals and societies, and to inform the search for solutions to societal challenges.