We are delighted to announce the schedule of the Fall 2021 Social Media Lab Guest Speaker Series. The series will feature six selected studies published in the special theme issue of Big Data & Society (BD&S) on “Studying the COVID-19 Infodemic at Scale” (Editors: Anatoliy Gruzd, Manlio De Domenico, Pier Luigi Sacco, Sylvie Briand).
The guest talks will highlight some of the benefits and challenges of detecting and combating the spread of COVID-19 misinformation on social media. In particular, the talks will provide concrete examples of how digital trace data can be used to assess and mitigate infodemic risks and their consequences for individuals and society.
All sessions are free and open to the public, but you will need to register to receive a Google Meet access link to attend each talk.
Guest Talk 1 – Thursday, Oct 7, 2021 @ 2-3pm (ET)
Observatory on Social Media, Indiana University, Bloomington, IN, USA
Authors: Kai-Cheng Yang, Francesco Pierri, Pik-Mai Hui, David Axelrod, Christopher Torres-Lugo, John Bryden, Filippo Menczer
Abstract: The global spread of the novel coronavirus is affected by the spread of related misinformation—the so-called COVID-19 Infodemic—that makes populations more vulnerable to the disease through resistance to mitigation efforts. Here, we analyze the prevalence and diffusion of links to low-credibility content about the pandemic across two major social media platforms, Twitter and Facebook. We characterize cross-platform similarities and differences in popular sources, diffusion patterns, influencers, coordination, and automation. Comparing the two platforms, we find divergence in the prevalence of popular low-credibility sources and suspicious videos. A minority of accounts and pages exert a strong influence on each platform. These misinformation “superspreaders” are often associated with the low-credibility sources and tend to be verified by the platforms. On both platforms, there is evidence of coordinated sharing of Infodemic content. The overt nature of this manipulation points to the need for societal-level solutions in addition to mitigation strategies within the platforms. However, we highlight limits imposed by inconsistent data-access policies on our capability to study harmful manipulations of information ecosystems.
Guest Talk 2 – Thursday, Oct 21, 2021 @ noon-1pm (ET)
Department of Geography & Planning, University of Liverpool, Liverpool, UK
Authors: Mark Green, Elena Musi, Francisco Rowe, Darren Charles, Frances Darlington-Pollock, Chris Kypridemos, Andrew Morse, Patricia Rossini, John Tulloch, Andrew Davies, Emily Dearden, Hendramoorthy Maheswaran, Alex Singleton, Roberto Vivancos, Sally Sheard
Abstract: COVID-19 is unique in that it is the first global pandemic occurring amidst a crowded information environment that has facilitated the proliferation of misinformation on social media. Dangerous misleading narratives have the potential to disrupt ‘official’ information sharing at major government announcements. Using an interrupted time-series design, we test the impact of the announcement of the first UK lockdown (8–8.30 p.m. 23 March 2020) on short-term trends of misinformation on Twitter. We utilise a novel dataset of all COVID-19-related social media posts on Twitter from the UK 48 hours before and 48 hours after the announcement (n = 2,531,888). We find that while the number of tweets increased immediately post announcement, there was no evidence of an increase in misinformation-related tweets. We found an increase in COVID-19-related bot activity post-announcement. Topic modelling of misinformation tweets revealed four distinct clusters: ‘government and policy’, ‘symptoms’, ‘pushing back against misinformation’ and ‘cures and treatments’.
Guest Talk 3 – Thursday, Oct 28, 2021 @ 2-3pm (ET)
Department of Psychology, School of the Biological Sciences, University of Cambridge, Cambridge, UK
Authors: Melisa Basol, Jon Roozenbeek, Manon Berriche, Fatih Uenal, William P. McClanahan and Sander van der Linden
Abstract: Misinformation about the novel coronavirus (COVID-19) is a pressing societal challenge. Across two studies, one preregistered (n1 = 1771 and n2 = 1777), we assess the efficacy of two ‘prebunking’ interventions aimed at improving people’s ability to spot manipulation techniques commonly used in COVID-19 misinformation across three different languages (English, French and German). We find that Go Viral!, a novel five-minute browser game, (a) increases the perceived manipulativeness of misinformation about COVID-19, (b) improves people’s attitudinal certainty (confidence) in their ability to spot misinformation and (c) reduces self-reported willingness to share misinformation with others. The first two effects remain significant for at least one week after gameplay. We also find that reading real-world infographics from UNESCO improves people’s ability and confidence in spotting COVID-19 misinformation (albeit with descriptively smaller effect sizes than the game). Limitations and implications for fake news interventions are discussed.
Guest Talk 4 – Thursday, Nov 4, 2021 @ 2-3pm (ET)
Department of Security and Crime Science, University College London, London, UK
Johns Hopkins University, Washington, DC, USA
Authors: Kacper T Gradoń, Janusz A Hołyst, Wesley R Moy, Julian Sienkiewicz and Krzysztof Suchecki
Abstract: The article explores the concept of infodemics during the COVID-19 pandemic, focusing on the worldwide proliferation of false or inaccurate information throughout the SARS-CoV-2 health crisis. We provide an overview of disinformation, misinformation and malinformation, discuss the notion of “fake news”, and highlight the threats these phenomena pose for health policies and national and international security. We discuss mis-/disinformation as a significant challenge to the public health, intelligence, and policymaking communities and highlight the necessity of designing measures to prevent, interdict, and mitigate such threats. We then present an overview of selected opportunities for applying technology to study and combat disinformation, outlining several approaches currently used to understand, describe, and model the phenomena of misinformation and disinformation. We focus specifically on complex networks, machine learning, data- and text-mining methods for misinformation detection, sentiment analysis, and agent-based models of misinformation spreading and the detection of misinformation sources in the network. We conclude with a set of recommendations supporting the World Health Organization’s initiative on infodemiology. We support the implementation of integrated preventive procedures and the internationalization of infodemic management. We also endorse applying the cross-disciplinary methodology of the Crime Science discipline, supplemented by Big Data analysis and related information technologies, to prevent, disrupt, and detect mis- and disinformation efficiently.
Guest Talk 5 – Thursday, Nov 11, 2021 @ 2-3pm (ET)
Michael Robert Haupt
Department of Cognitive Science, University of California San Diego, La Jolla, USA
Authors: Michael Robert Haupt, Jiawei Li and Tim K Mackey
Abstract: This study investigates the types of misinformation spread on Twitter that evoke scientific authority or evidence when making false claims about the antimalarial drug hydroxychloroquine as a treatment for COVID-19. Specifically, we examined tweets generated after former U.S. President Donald Trump retweeted misinformation about the drug, using an unsupervised machine learning approach called the biterm topic model to cluster tweets into misinformation topics based on textual similarity. The top 10 tweets from each topic cluster were content coded for three types of misinformation categories related to scientific authority: medical endorsements of hydroxychloroquine, scientific information used to support hydroxychloroquine’s use, and a comparison group that included scientific evidence opposing hydroxychloroquine’s use. Results show a much higher volume of tweets featuring medical endorsements and supportive scientific information than accurate and updated scientific evidence, that misinformation-related tweets propagated over a longer time frame, and that the majority of hydroxychloroquine Twitter discourse expressed positive views about the drug. Analysis of Twitter account metadata found that prominent users within the misinformation discourse were more likely to have media or political affiliations and explicitly expressed support for President Trump. Conversely, prominent accounts within the scientific opposition discourse primarily consisted of medical doctors or scientists but had far less influence in the Twitter discourse. Implications of these findings and connections to related social media research are discussed, as well as cognitive mechanisms for understanding susceptibility to misinformation and strategies to combat misinformation spread via online platforms.
Guest Talk 6 – Friday, Nov 19, 2021 @ 2-3pm (ET)
Department of Communication, Loyola University Maryland, Baltimore, MD, USA
Authors: Paola Pascual-Ferrá, Neil Alperstein, Daniel J Barnett, and Rajiv N Rimal
Abstract: Medical and public health professionals recommend wearing face masks to combat the spread of the coronavirus disease of 2019 (COVID-19). While the majority of people in the United States support wearing face masks as an effective tool to combat COVID-19, a smaller percentage regard the recommendation by public health agencies as a government imposition and an infringement on personal liberty. Social media play a significant role in amplifying public health issues, whereby a minority opposed to the recommendation can speak loudly, sometimes through verbal aggression in the form of toxic language. We investigated the role that toxicity plays in the online discourse around wearing face masks. Overall, we found tweets including anti-mask hashtags were significantly more likely to use toxic language, while tweets with pro-mask hashtags were somewhat less toxic, with the exception of #WearADamnMask. We conclude that the tensions between these two positions raise doubt and uncertainty around the issue, which make it difficult for health communicators to break through the clutter in order to combat the infodemic. Public health agencies and other governmental institutions should monitor toxicity trends on social media in order to better ascertain prevailing sentiment toward their recommendations and then apply these data-driven insights to refine and adapt their risk communication messaging toward mask wearing, vaccine uptake, and other interventions.