Warmth and competence in human-agent cooperation
Abstract

Trust is a central challenge for the development and deployment of artificial intelligence (AI). Recent research argues that AI agents trained with deep reinforcement learning can successfully interact and collaborate with humans. However, these studies define success primarily in terms of “objective” metrics such as task performance, obscuring substantial variation in the levels of trust and subjective preference that different agents generate. To better understand the factors shaping subjective preferences in human-agent cooperation, we train deep reinforcement learning agents in the Coins game, a two-player social dilemma. We recruit participants to play with these agents and measure participants’ social perceptions of the agents they encounter. Drawing inspiration from research in the social sciences and biology, we also design and implement a new “partner choice” framework to elicit revealed preferences: after playing an episode with an agent, participants are asked whether they would like to play the next round with the same agent or by themselves. Perceptions of warmth and competence, two fundamental dimensions of human social perception, predict participants’ preferences above and beyond objective performance metrics. This holds true for both stated and revealed preferences. Given these findings, we recommend that human-agent interaction researchers routinely incorporate the measurement of subjective preferences and social perception into their studies.

Authors' notes