ABSTRACT:
In an attempt to combat fake news, policymakers in many countries are considering mandating the disclosure of artificial intelligence (AI) recommendations of social media news articles. We conducted two randomized controlled experiments to investigate the effects of labeling social media news stories as recommended by AI. Our results show that an AI recommendation reduced belief in true news articles and had no material effect on belief in fake news. In contrast, a recommendation by an expert increased belief in true news articles but had no effect on belief in fake news articles. A friend recommendation had no effect on fake articles and inconsistent effects on true articles. Belief that an article was true led to news engagement (liking, commenting, sharing), but an AI recommendation weakened this relationship, making confirmation bias the primary factor driving engagement. The trustworthiness of the recommender only partially explained these effects, suggesting that other theoretical mechanisms are at work. This study reveals that explicitly labeling AI curation of social media news stories does not help combat fake news; instead, it is likely to backfire, with the unintended negative effect of decreasing belief in and engagement with true news articles.
Key words and phrases: Fake news, online disinformation, news trustworthiness, AI recommendations, deception detection, AI vs. human