ABSTRACT:
Fake news on social media has become a serious problem, and social media platforms have started to actively implement various interventions to mitigate its impact. This paper focuses on the effectiveness of two platform interventions, namely a content-level intervention (i.e., a fake news flag that applies to a single post) and an account-level intervention (i.e., a forwarding restriction policy that applies to the entire account). Using data collected from China’s largest social media platform, we study the impact of a fake news flag on three fake news dissemination patterns using a propensity score matching method combined with a difference-in-differences approach. We find that after a fake news flag is implemented, fake news is disseminated in a more centralized manner via direct forwards and in a less dispersed manner via indirect forwards, and that flagged fake news posts are forwarded more often by influential users. In addition, compared with truthful news, fake news is disseminated in a less centralized and more dispersed manner and survives for a shorter period after a forwarding restriction policy is implemented. This study provides causal empirical evidence of the effect of a fake news flag on fake news dissemination. We also expand the literature on platform interventions to combat fake news by investigating a less-studied account-level intervention. We discuss the practical implications of our results for social media platform owners and policymakers.
Key words and phrases: Fake News, Fake News Online, Fake News Flag, Forwarding Restriction Policy, Fake News Dissemination, Quasi-Experiment, Online Disinformation, Platform Policies
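
For illustration only, the following is a minimal sketch of the estimation strategy named in the abstract: propensity score matching followed by a difference-in-differences regression on the matched sample. The simulated data and all column names (treated, post, followers, account_age, dissemination) are hypothetical placeholders, not the paper's actual variables or code.

```python
# Illustrative sketch (not the authors' code): PSM followed by a
# difference-in-differences (DiD) regression on the matched sample.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),       # 1 = post received a fake news flag (hypothetical)
    "post": rng.integers(0, 2, n),          # 1 = observation after the flag (hypothetical)
    "followers": rng.lognormal(8, 1, n),    # example matching covariate
    "account_age": rng.uniform(0, 10, n),   # example matching covariate
})
df["dissemination"] = 0.5 * df["treated"] * df["post"] + rng.normal(0, 1, n)

# Step 1: estimate propensity scores from pre-treatment covariates.
X = df[["followers", "account_age"]]
df["pscore"] = LogisticRegression(max_iter=1000).fit(X, df["treated"]).predict_proba(X)[:, 1]

# Step 2: 1-nearest-neighbor matching of treated units to controls on the score.
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

# Step 3: DiD on the matched sample; the coefficient on the interaction
# term treated:post is the estimated effect of the flag on dissemination.
model = smf.ols("dissemination ~ treated * post", data=matched).fit()
print(model.params["treated:post"])
```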