ABSTRACT:
Conversational Agents (CAs) are becoming part of our everyday lives. Approximately 10 percent of users display aggressive behavior toward CAs, such as swearing at them when they produce errors. We conducted two online experiments to better understand user aggression toward CAs. In the first experiment, 175 participants used either a humanlike or a non-humanlike CA. Both CAs worked without errors, and we observed no increase in frustration or user aggression. The second experiment (with 201 participants) was the focus of this study; in it, both CAs produced a series of errors. The results show that frustration with errors drives aggression and that users with higher impulsivity are more likely to become aggressive when frustrated. The results also suggest three pathways through which perceived humanness influences users’ aggression toward CAs. First, perceived humanness directly increases frustration with the CA when it produces errors. Second, perceived humanness increases service satisfaction, which in turn reduces frustration. Third, perceived humanness shapes the nature of aggression once users become frustrated (i.e., users are less likely to use highly offensive words with a more humanlike CA). Our research contributes to the theoretical understanding of the role of anthropomorphism in human–machine interaction, showing that designing a CA to be more humanlike is a double-edged sword that both increases and decreases the frustration leading to aggression, while also serving as a means to reduce the most severe forms of aggression.
Key words and phrases: Conversational agent, chatbot, humanlike design, service encounter, online errors, frustration, aggression, profanity, swearing, insults, computer anthropomorphism