
Disinformation Impact Assessment

Does the work of “opinion shapers” matter?

Team Members

Paul Ballot, Carlos Eduardo Barros, Andrea Benedetto, Meriam Belkhir, Simon M. Ceh, Roeland Dubèl, Fatima Gaw, Joanne Kuai, Giada Marino, Scott Rodgers, Kris Ruijgrok, Janjira Sombatpoonsiri, Olivia Thompson, Rob Topinka, Guillen Torres, Martin Trans

Contents

Team Members

Contents

Summary of Key Findings

1. Introduction

2. Initial Data Sets

3. Research Questions

4. Methodology

5. Findings

6. Discussion

7. Conclusion

8. References

Summary of Key Findings

Contextualizing ‘anti-woke’ Facebook accounts

  • There are notable limitations to studying misinformation via discrete posts or URLs, at least in relation to the ‘anti-woke’ issue space we studied. Studying misinformation with a focus on posts or URLs risks fetishizing misinformation as pieces of ‘content’ while underplaying context.

  • Misinformation is not merely misleading or incorrect information, but a phenomenon emerging from the cultural milieu of social media users (e.g. in this study, the given Facebook page or group), not to mention the broader technical characteristics of the digital environments across which such content circulates.

  • Link sharing in the ‘anti-woke’ issue space is strongly associated with partisan Facebook pages and groups. An important finding of our qualitative analysis was that a number of partisan Facebook pages were run by two right-wing news publishers (the Daily Wire and the Western Journal) under separate names (e.g. ‘Donald Trump Is My President’), with the affiliation with these publishers ‘cloaked’ to differing degrees.

  • The Misinformation Amplification Factor (MAF) can be seen as a more general performance measure, comparing the performance of a Facebook post against the historical performance of posts from the same account.

  • MAF appears to be most useful in an assumed case of misinformation content being shared by a Facebook account that otherwise shares content that might not be considered as misinformation. However, since our final dataset included a large number of ‘anti-woke’ Facebook accounts, the MAF was less useful, as these accounts consistently share misleading content which might likewise be flagged as misinformation.

Exploring Instances of Coordinated Misinformation Sharing within Defined Issue Spaces on Facebook

  • Large hyper-partisan content producers rely on a network of loosely connected Facebook Pages to distribute problematic content:

    • The Daily Wire

    • The Schiller Institute

    • La Rouche Institute

    • Western Journal

  • The Misinformation Amplification Factor (MAF) was negative (M = -0.24, SD = 6.81) for most of the identified posts, indicating that misinformation content is not actually boosted by Facebook.

  • Qualitative analysis of each account/group/page that shared a particular URL containing misinformation showed that multiple accounts/groups/pages were affiliated with the account/group/page where the URL first appeared. This affiliation manifested in multiple ways: groups formally affiliated with the original group or page, and informally created copy pages/groups identifiable as copies through their content and shared names.

  • Qualitative analysis of the groups/pages or accounts that shared a URL flagged as containing misinformation showed that these were mostly communities (rather than organisations or individuals) revolving around partisan issues (rather than single issues, spam, conspiracy, or religious issues). This may just be reflective of how Facebook is used in general and the breakdown of communities, organisations, individuals, and the issues they are concerned with.

1. Introduction

In the past ten years, there has been an increasing trend among citizens to utilize social media (SM) and instant messaging services (IMs) for consuming and sharing news (Newman et al., 2021). This involvement includes both competition and collaboration with established or alternative media entities in activities such as news creation, evidence gathering, and information verification (Iannelli & Splendore, 2017).

However, despite the initial excitement surrounding this development, scholars began to observe negative implications for democracy's well-being after the 2016 election of Trump and the Brexit referendum. This was primarily attributed to the increased participation of citizens in information cycles, particularly on social media, which led to a surge in the circulation of problematic information (Quandt, 2018). The term "problematic information" encompasses various types of misleading information created intentionally or unintentionally to deceive the audience (Jack, 2017). Social media platforms, in particular, have witnessed the widespread dissemination of problematic information, where it can be legitimized through the personal influence of "friends" (Anspach, 2017) and the algorithmic logic that amplifies the credibility of popular posts, resembling a bandwagon effect (Schmitt-Beck, 2015).

Despite concerns about the implications that the unchecked circulation of problematic information may have on the functioning of democracies, attempts to measure its impact are still limited and highly complex to quantify in a real-world setting (Nimmo, 2020). Some attempts have been made through experimental designs in controlled environments, such as verifying whether exposure to problematic news has effects on citizens' political polarization (Borella et al., 2017; Cook et al., 2017; Rottweiler & Gill, 2022; Zimmermann & Kohring, 2020). Attempts to measure this impact through digital data or mixed techniques are still in their early stages (Eady et al., 2023) and a debatable subject. One of these attempts tried to measure how a piece of misinformation is amplified by social media platforms: the Integrity Institute (2022) Misinformation Amplification Factor (MAF) represents the relationship between the actual engagement associated with a post containing misinformation and the creator's historical content performance.

We thus attempted to calculate the MAF on a set of posts containing URLs that had been fact-checked as false (containing misinformation) and retrieved from previous studies conducted by the team of Mapping Italian News.

This project takes an experimental approach, aiming to address the problem without taking a specific stance.

In the following paragraphs, we will describe the dataset from which we started our analysis and the research questions that guided the study. Additionally, we will provide a brief overview of the methodology employed, present the obtained results, and discuss some key trends and implications. We will also address the limitations of the research design.

2. Initial Data Sets

The project relies on a dataset collected in previous studies - Global CLSB maps 2022 - in which we identified 818 coordinated accounts (115 Facebook Pages and 703 public groups organized in 95 networks) that performed “coordinated link sharing behavior” on Facebook by rapidly sharing at least four different news stories rated as problematic by Facebook third-party fact-checkers between January 2017 and December 2021. We started from posts published by these accounts both in a coordinated and non-coordinated manner.

3. Research Questions
  1. What are the characteristics of Facebook accounts (pages and groups) engaged in coordinated link sharing, and what is the context of this sharing?

    1. What are the key characteristics of and contexts in which ‘anti-woke’ Facebook accounts engage in coordinated link sharing?
    2. What are the challenges of assessing discrete social media posts or URLs for misinformation by applying the Misinformation Amplification Factor (MAF) on datasets generated with CooRnet?
  2. What are the characteristics of Facebook accounts (pages and groups) engaged in coordinated sharing of misinformation in two issue spaces, climate change and Covid-19?
4. Methodology

Coordinated Link Sharing Behaviour

The methodology relies on CooRnet, a CrowdTangle-based R library that detects coordinated link sharing behavior (CLSB) on Facebook and Instagram.

CLSB refers to a specific coordinated activity performed by a network of Facebook pages, groups, and verified public profiles that repeatedly share the same URLs in a very short time from each other.

To detect such networks, we designed, implemented, and tested an algorithm that detects sets of Facebook accounts which have performed coordinated link sharing behaviour by

(1) estimating a time threshold that identifies URL shares performed by multiple distinct entities within an unusually short period of time (as compared to the entire dataset), and

(2) grouping the entities that repeatedly shared the same link within this coordination interval.
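
CooRnet implements this detection in R. For illustration only, a minimal Python sketch of the same two-step logic, assuming a table of shares with hypothetical columns 'url', 'account_id' and 'timestamp', and a configurable repetition threshold, could look as follows:

```python
from collections import defaultdict

import pandas as pd


def detect_clsb(shares: pd.DataFrame, quantile: float = 0.1, min_repetitions: int = 4):
    """Sketch of coordinated link sharing detection (CooRnet does this in R).

    `shares` is assumed to hold one row per URL share, with hypothetical
    columns 'url', 'account_id' and 'timestamp' (a datetime column).
    """
    # Step 1: estimate a coordination interval. For each share, compute the
    # delay since the first share of that URL, then take an "unusually short"
    # delay (here the 10th percentile of non-zero delays) as the threshold.
    delays = shares.groupby("url")["timestamp"].transform(
        lambda ts: (ts - ts.min()).dt.total_seconds()
    )
    threshold = delays[delays > 0].quantile(quantile)

    # Step 2: group entities that repeatedly co-shared the same URL within
    # the coordination interval.
    fast_shares = shares[delays <= threshold]
    co_shares = defaultdict(int)
    for _, group in fast_shares.groupby("url"):
        accounts = sorted(group["account_id"].unique())
        for i, a in enumerate(accounts):
            for b in accounts[i + 1:]:
                co_shares[(a, b)] += 1

    # Keep account pairs that rapidly co-shared at least `min_repetitions` URLs.
    return {pair: n for pair, n in co_shares.items() if n >= min_repetitions}
```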

To measure how a piece of misinformation is amplified by social media platforms, we attempted to calculate the Integrity Institute Misinformation Amplification Factor (MAF) on a set of posts containing URLs which had been fact-checked as false (containing misinformation) and retrieved from previous studies conducted by the team of Mapping Italian News.

The MAF represents the relationship between the actual engagement associated with a post containing misinformation and the post’s anticipated engagement based on the creator's historical content performance. For a specific piece of misinformation content, the MAF is calculated as follows:

MAF = Engagement on misinformation post / Average engagement on posts from creator prior to misinformation post
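
As a concrete, simplified illustration, this ratio can be computed from a table of posts as in the Python sketch below; the column names and the fourteen-day benchmark window are assumptions, and the Integrity Institute's own implementation may differ in detail:

```python
import pandas as pd


def misinformation_amplification_factor(posts: pd.DataFrame, post_id, window_days: int = 14):
    """Sketch of the MAF ratio defined above.

    `posts` is assumed to hold one row per Facebook post, with hypothetical
    columns 'post_id', 'account_id', 'timestamp' and 'engagement'.
    """
    post = posts.loc[posts["post_id"] == post_id].iloc[0]

    # Benchmark: the same account's posts in the window before the flagged post.
    window_start = post["timestamp"] - pd.Timedelta(days=window_days)
    baseline = posts[
        (posts["account_id"] == post["account_id"])
        & (posts["timestamp"] >= window_start)
        & (posts["timestamp"] < post["timestamp"])
    ]
    if baseline.empty:
        return None  # no prior activity in the window: MAF is undefined

    mean_engagement = baseline["engagement"].mean()
    if mean_engagement == 0:
        return None  # zero baseline engagement: the ratio is undefined

    # MAF = engagement on the flagged post / average prior engagement.
    # NB: this plain ratio is non-negative; the negative scores reported in
    # the findings presumably reflect a further transformation (e.g. a log
    # scale), which this sketch does not attempt to reproduce.
    return post["engagement"] / mean_engagement
```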

Contextualizing ‘anti-woke’ Facebook accounts workflow

Data collection

We began by identifying a misinformation issue space within a CooRnet dataset that contained Facebook posts with URLs that had appeared in previous Facebook posts labeled as ‘misinformation’ by fact-checkers.

After initial exploration, we developed a purposefully broad set of keywords for posts related to ‘woke’ (woke, race, and cancel, plus variations, e.g. canceled, cancellation) and the insurrection at the US Capitol (capitol, January, stolen, Jan 6).

We then extracted all URLs related to our keywords from the CooRnet dataset, manually filtered them for relevance, and generated new datasets for ‘woke’ and the US Capitol insurrection using CooRnet's get_ctshares function. It is important to note that CooRnet gathers all URLs included in posts previously flagged as ‘misinformation’, and hence the URLs themselves do not necessarily contain misinformation. However, given our interest in the broader context of misinformation, we decided to explore all URLs in our dataset without subjecting them to additional fact-checking.
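
As an illustration of the keyword filtering step, a simple regular-expression match over the shared URLs can be run before the manual relevance check and the get_ctshares query; the patterns below mirror our keyword sets, while the column name is an assumption about the CooRnet export:

```python
import re

import pandas as pd

# Purposefully broad keyword patterns; the regexes catch variations of the stems.
WOKE_PATTERN = re.compile(r"woke|race|cancel\w*", re.IGNORECASE)
CAPITOL_PATTERN = re.compile(r"capitol|january|stolen|jan[ _-]?6", re.IGNORECASE)


def filter_urls(ct_shares: pd.DataFrame, pattern: re.Pattern) -> list[str]:
    """Return the unique URLs matching a keyword pattern.

    Assumes the CooRnet output exposes the full shared URL in an 'expanded'
    column (an assumption about the export format); matching URLs were then
    filtered manually for relevance.
    """
    urls = ct_shares["expanded"].dropna().unique()
    return [url for url in urls if pattern.search(url)]
```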

The dataset for ‘woke’ was much larger (490 posts, compared to 297 for the insurrection), and initial qualitative coding showed that the ‘woke’ dataset was also more contextually and discursively rich. We therefore limited our data analysis to the ‘woke’ dataset.

Data analysis

We conducted a quali-quantitative analysis of our ‘woke’ dataset. For our qualitative analysis, we manually coded a list of Facebook accounts (173 pages or groups) appearing in the CooRnet output in the following fields:

  • Account identity (organization, individual, community, other)

  • Whether the account appeared to be ‘cloaked’ (e.g. the ‘Fed-up Americans’ page presented as an authentic community, but was in fact run by the publisher Daily Wire, LLC, an entity also responsible for several similar pages)

  • Political bias (partisan, conspiracy, religious, single-issue, parody, other).

We divided this manual coding among the team members, with ambiguous cases cross-checked by a second coder.

For our quantitative analysis, we calculated the MAF for 153 accounts (20 accounts had no activity in the 14 days prior to the post containing the URL).

MAF is intended to represent the relationship between:

  • Engagement on a post containing a URL that was previously included in a post labeled as ‘misinformation’ by Facebook fact-checkers.

  • The average engagement of the account in the fourteen days prior to the post.

The MAF is highly skewed toward negative scores because many posts received no engagement.

Exploring Instances of Coordinated Misinformation Sharing within Defined Issue Spaces on Facebook

Step 1. Dataset: We used a dataset of fact-checked URLs gathered from previous research projects. The dataset also includes all Facebook posts containing these URLs.

Step 2. Language Detection: Reducing the dataset to English posts only.

Step 3. Issue Space: Using expert knowledge and data exploration to identify the topics of interest: Covid-19 and climate change.

Step 4. Keywords: Finding relevant keywords via a qualitative snowballing approach.

Step 5. Filtering: Reducing the dataset to posts containing our identified keywords. After building a keyword list for each issue space, we ran it through 4CAT’s filter-by-value tool to build two subsets. This step only considered words appearing in the “description” metadata field.

Step 6. Network visualization: Both subsets were plotted in Gephi to visualize the connections between information sources (including pages, groups, and profiles) and shared URLs.
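
As a sketch of this final step, the source-to-URL network can be assembled and exported to a Gephi-readable GEXF file; the Python code below assumes the filtered subsets are tables with hypothetical 'account_name' and 'url' columns:

```python
import networkx as nx
import pandas as pd


def build_share_network(posts: pd.DataFrame, path: str = "issue_space.gexf") -> nx.Graph:
    """Bipartite network of information sources and the URLs they shared.

    Assumes the filtered subset has hypothetical columns 'account_name'
    (the page, group or profile) and 'url' (the shared link).
    """
    graph = nx.Graph()
    for _, row in posts.iterrows():
        source, url = row["account_name"], row["url"]
        graph.add_node(source, kind="source")
        graph.add_node(url, kind="url")
        # Weight edges by how often the source shared the URL.
        if graph.has_edge(source, url):
            graph[source][url]["weight"] += 1
        else:
            graph.add_edge(source, url, weight=1)

    nx.write_gexf(graph, path)  # the .gexf file can be opened directly in Gephi
    return graph
```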

5. Findings

5.1 Contextualizing ‘anti-woke’ Facebook accounts workflow

Table 5.1.1. Number of posts and accounts engaged in coordinated link sharing, generated by CooRnet

Accounts engaged in coordinated link sharing      Count
Total number of coordinated posts                 445
Total number of coordinated accounts              173
  Facebook pages                                  94
  Facebook groups                                 79

CooRnet identified 445 coordinated posts from the 9 input links, shared by 173 unique Facebook accounts: 94 Facebook pages and 79 Facebook groups.

Figure 5.1.1. Accounts’ identity

The analyzed accounts were classified under three types: Individually managed accounts (26 accounts, 15%), accounts held by recognized organizations (39 accounts, 23%), and accounts held by a community or group of people sharing the same interest (108 accounts, 62%).

Figure 5.1.2. Type of political bias

Across the 173 analyzed accounts, the majority (134) share “partisan” posts and links, either for or against a given topic. 18 accounts mostly share “conspiracy” content. The remaining accounts share religious/spiritual content (9), single-issue content (6), parody content (4), or other miscellaneous content (2).

Figure 5.1.3. Number of suspected cloaked accounts

We found 29 cloaked accounts: 11 affiliated with the right-wing website the Daily Wire and 18 with the Western Journal. These accounts operate under separate names (e.g. ‘Donald Trump Is My President’ or ‘Conservative Alliance by WJ’), with their affiliation with these publishers cloaked to differing degrees. This appears to be a strategy to amplify partisan content across different Facebook pages, potentially intended to establish the authenticity or legitimacy of that content.

Figure 5.1.4. Distribution of MAF scores among the accounts per political bias

This figure shows the distribution of MAF scores across the different classes of accounts. Most accounts produced coordinated posts that generated slightly less engagement than the average of each account's history. However, some account types (e.g., conspiracy) also shared links that generated above-average engagement. While partisan accounts typically had average engagement, they also produced some of the most and some of the least engaging content.

Figure 5.1.5. Social network analysis of Facebook accounts engaged in coordinated link sharing

The network of Facebook accounts engaged in coordinated sharing is generally loose, except for some highly connected clusters. These include two clusters of news publishers' account networks, namely the Daily Wire sub-network (purple) and the Western Journal sub-network (blue), as well as the PragerU cluster and its coordination with a set of Facebook groups that directed users to another set of external links.

5.2 Exploring Instances of Coordinated Misinformation Sharing within Defined Issue Spaces on Facebook

Figure 5.2.1. Language distribution of Facebook posts

First, we quantified the predominant languages of Facebook posts related to the two identified issue spaces (Figure 5.2.1). This allowed us to retain only English-language content.

Figure 5.2.2. Type of content qualitatively identified in Facebook entities.

A qualitative analysis of the content shows that the majority is partisan in nature, followed by conspiratorial content and spam. 2.8% of the content was no longer available, as it had been removed by Facebook for violating platform policies (Figure 5.2.2).

Figure 5.2.3. Type of Facebook entity (account, page, group) from the qualitative analysis of a subsample.

The analysis of a sub-sample of actors showed that the majority are communities, while only 8.5% are organizations and 6.6% are individuals (Figure 5.2.3).

Figure 5.2.4. MAF scores distribution.

As can be observed in Figure 5.2.4, the distribution of the Misinformation Amplification Factor is more skewed towards negative values compared to positive ones. This data is further supported by the values of the mean (M = -0.24) and standard deviation (SD = 6.81).

6. Discussion

Contextualizing ‘anti-woke’ Facebook accounts workflow

This project investigated the characteristics and contexts of Facebook accounts (pages or groups) associated with ‘anti-woke’ content and engaged in coordinated link sharing. This focus emerged from challenges faced by our group in assessing the circulation of ‘misinformation’ via social media using a calculation of the Misinformation Amplification Factor (MAF). Beginning with a dataset generated via CooRnet, we took a quali-quantitative approach, combining recursive attempts to calculate the MAF from the dataset with a focused qualitative exploration of the Facebook accounts in the ‘anti-woke’ issue space drawn from this dataset.

Our overall finding is that analyzing misinformation requires as much attention to context as content. In other words, a focus on content flagged as misinformation has considerable limitations. Our findings can be summarized as follows:

  • There are notable limitations to studying misinformation via discrete posts or URLs, at least in relation to the ‘anti-woke’ issue space we studied. Studying misinformation with a focus on posts or URLs risks fetishizing misinformation as pieces of ‘content’ while underplaying context.

  • Misinformation is not merely misleading or incorrect information, but a phenomenon emerging from the cultural milieu of social media users (e.g. in this study, the given Facebook page or group), not to mention the broader technical characteristics of the digital environments across which such content circulates.

  • Link sharing in the ‘anti-woke’ issue space is strongly associated with partisan Facebook pages and groups. An important finding of our qualitative analysis was that a number of partisan Facebook pages were run by two right-wing news publishers (the Daily Wire and the Western Journal) under separate names (e.g. ‘Donald Trump Is My President’), with the affiliation with these publishers ‘cloaked’ to differing degrees.

  • The Misinformation Amplification Factor (MAF) can be seen as a more general performance measure, comparing the performance of a Facebook post against the historical performance of posts from the same account.

  • MAF appears to be most useful in an assumed case of misinformation content being shared by a Facebook account that otherwise shares content that might not be considered as misinformation. However, since our final dataset included a large number of ‘anti-woke’ Facebook accounts, the MAF was less useful, as these accounts consistently share misleading content which might likewise be flagged as misinformation.

Exploring Instances of Coordinated Misinformation Sharing within Defined Issue Spaces on Facebook

The unregulated dissemination of problematic information has raised concerns about its potential repercussions for the functioning of democratic societies. Attempts to measure these effects remain limited and have proved difficult to carry out in real-world settings. Several attempts have been made using experimental designs in controlled settings, aiming to establish whether exposure to problematic news contributes to citizens' political polarization. Efforts to quantify this influence through digital data or mixed methods remain in their early stages and are still open to debate.

Takeaways

  • Large hyper-partisan content producers rely on a network of loosely connected Facebook Pages to distribute problematic content. These include The Daily Wire, The Schiller Institute, La Rouche Institute, and Western Journal.

  • The Misinformation Amplification Factor (MAF) was negative (M = -0.24, SD = 6.81) for most of the identified posts, indicating that Facebook does not actually boost misinformation content.

  • Qualitative analysis of each account/group/page that shared a particular URL containing misinformation showed that multiple accounts/groups/pages were affiliated with the account/group/page where the URL first appeared. This affiliation manifested in multiple ways: groups formally affiliated with the original group or page, and informally created copy pages/groups identifiable as copies through their content and shared names.

  • Qualitative analysis of the groups/pages or accounts that shared a URL flagged as containing misinformation showed that these were mostly communities (rather than organisations or individuals) revolving around partisan issues (rather than single issues, spam, conspiracy, or religious issues). This may just be reflective of how Facebook is used in general and the breakdown of communities, organisations, individuals, and the issues they are concerned with.

Limitations

  • Most of the Groups and Pages identified had little to no engagement. Therefore, the MAF, which is calculated as the ratio of current post engagement to mean post engagement over the previous two weeks, might not be a useful measure.

  • In total there were 633,103 posts containing 1,128,826 links. On average, a post thus included 1.8 links, which made it difficult to identify which link was the exact source of misinformation.

  • Benchmarking against the page/group itself is only of limited use, given that many accounts tend to post misinformation on multiple occasions.

7. Conclusion

Both branches of this project revolve around the investigation of misinformation, its dissemination, and its impact on Facebook.
In conclusion, this project underscores the complex interplay between context and content in studying misinformation within the ‘anti-woke’ issue space on Facebook. It highlights the challenges of quantifying the impact of misinformation, reveals the networked dynamics of its dissemination, and underscores the need for nuanced metrics and approaches.
The first study specifically explores the characteristics of Facebook accounts associated with 'anti-woke' content and coordinated link sharing, while also discussing the limitations of studying misinformation solely through content focus.
The second study delves into the tactics used by large hyper-partisan content producers to distribute problematic content through loosely connected Facebook Pages, and examines the Misinformation Amplification Factor (MAF) of the identified posts.
This research highlights the complex nature of misinformation, the role of Facebook in its circulation, and the need to consider context in analyzing and understanding its spread.

Posters

Contextualizing ‘anti-woke’ Facebook accounts workflow

Exploring Instances of Coordinated Misinformation Sharing within Defined Issue Spaces on Facebook

8. References

Anspach, N. M. (2017). The New Personal Influence: How Our Facebook Friends Influence the News We Read. Political Communication, 34(4), 590–606. https://doi.org/10.1080/10584609.2017.1316329

Borella, C. A., & Rossinelli, D. (2017). Fake news, immigration, and opinion polarization. Socioeconomic Challenges, 1(4), 59–72. https://doi.org/10.21272/sec.1(4).59-72.2017

Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PloS One, 12(5), e0175799. https://doi.org/10.1371/journal.pone.0175799

Eady, G., Paskhalis, T., Zilinsky, J., Bonneau, R., Nagler, J., & Tucker, J. A. (2023). Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior. Nature Communications, 14(62), 1–11. https://doi.org/10.1038/s41467-022-35576-9

Iannelli, L., & Splendore, S. (2017). Participation in the Hybrid Political Newsmaking and Its Consequences on Journalism Epistemology. Comunicazioni Sociali, 3(2017), 436–447.

Integrity Institute. (2022). Misinformation Amplification Analysis and Tracking Dashboard. Elections Integrity Program.

Jack, C. (2017). Lexicon of lies: Terms for problematic information. Data & Society, 3. https://apo.org.au/sites/default/files/resource-files/2017/08/apo-nid183786-1180516.pdf

Newman, N., Fletcher, R., Schulz, A., Andi, S., Robertson, C. T., & Nielsen, R. K. (2021). Reuters Institute Digital News Report 2021. https://papers.ssrn.com/abstract=3873260

Nimmo, D. (2020). The Political Persuaders. Routledge. https://play.google.com/store/books/details?id=QDnSDwAAQBAJ

Quandt, T. (2018). Dark Participation. Media and Communication, 6(4), 36–48. https://doi.org/10.17645/mac.v6i4.1519

Rottweiler, B., & Gill, P. (2022). Conspiracy beliefs and violent extremist intentions: The contingent effects of self-efficacy, self-control and law-related morality. Terrorism and Political Violence, 34(7), 1485–1504. https://doi.org/10.1080/09546553.2020.1803288

Schmitt-Beck, R. (2015). Bandwagon Effect. In The International Encyclopedia of Political Communication (pp. 1–5). Wiley. https://doi.org/10.1002/9781118541555.wbiepc015

Zimmermann, F., & Kohring, M. (2020). Mistrust, Disinforming News, and Vote Choice: A Panel Survey on the Origins and Consequences of Believing Disinformation in the 2017 German Parliamentary Election. Political Communication, 37(2), 215–237. https://doi.org/10.1080/10584609.2019.1686095