ABSTRACT: Although the accuracy of agent-based expert systems continues to improve, the accuracy of the human experts these systems aid has not. Self-affirmation theory suggests that human expert users may experience threat, causing them to act defensively and ignore a system's conflicting recommendations. Previous research has demonstrated that affirming an individual in an unrelated domain reduces defensiveness and increases objectivity toward conflicting information. Using an affirmation manipulation prior to a credibility assessment task, this study investigated whether experts are threatened by counterattitudinal expert system recommendations. In the study, 178 credibility assessment experts from the American Polygraph Association (n = 134) and the European Union's border security agency Frontex (n = 44) interacted with a deception detection expert system to make a deception judgment, which the system immediately contradicted. Reducing threat before participants made their judgments did not improve accuracy, but it did improve objectivity toward the system. The study demonstrates that human experts are threatened by advanced expert systems that contradict their expertise. As systems increasingly integrate artificial intelligence and inadvertently assail users' expertise and abilities, threat and self-evaluative concerns will become an impediment to technology acceptance.
Key words and phrases: credibility assessment systems, deception detection, expert systems, user anxiety