ABSTRACT:
We propose a theoretical model based on the judge-advisor system (JAS) and empirically examine how algorithmic advice influences human judgment compared with identical advice from humans. We argue that this effect is contingent on the level of transparency, which varies with whether and how the prediction performance of the advice source is presented. Across five controlled behavioral experiments, we show that individuals largely exhibit algorithm appreciation; that is, they follow algorithmic advice to a greater extent than identical human advice, owing to greater trust in algorithmic than in human advisors. Interestingly, neither the trust advantage of algorithmic advisors nor the level of algorithm appreciation decreases when individuals are informed of the algorithm's prediction errors (i.e., when prediction performance is presented in an aggregated format). By contrast, algorithm appreciation declines when the transparency of the advice source's prediction performance is further increased through an elaborated format, plausibly because the greater cognitive load imposed by this format impedes advice taking. Finally, we identify a boundary condition: algorithm appreciation is reduced among individuals with a lower dispositional need for cognition. Our findings have key implications for research and managerial practice.
Key words and phrases: Algorithmic advice, algorithm appreciation, algorithmic transparency, online trust, cognitive load, prediction performance