The Anatomy of (In)direct Harassment: Temporality, Rhetoric and Affordances

Team Members

Eirliani Abdul Rahman, Emillie V. de Keulenaar, Giulia Campaioli, Holy Shum, Huan Lai, Jacopo Sironi, Natalie Kerby, Penelope Bollini, Tristan Bannerman and Valentina Caiola

1. Introduction

When Elon Musk purchased Twitter in October 2022, the platform had 330 million users. Harassment, hate, and negativity had always been common on the platform, but after Musk acquired it and changed its moderation policies, researchers at the Anti-Defamation League found rising levels of hate speech.

On December 8, 2022, Anne Collier posted a tweet linking to a letter of resignation from three members of Twitter's Trust and Safety Council, an advisory council comprising 100 external individual experts and organisations on matters of online safety and content moderation. In the letter, the three members (Anne Collier, Eirliani Abdul Rahman, and Lesley Podesta) explained the reason for their resignation: the meteoric rise in hate speech following Elon Musk's purchase of the platform made it impossible, in good conscience, to continue in their roles.

On December 9, 2022, Mike Cernovich replied to Collier’s tweet, reposting an article that was two years old at the time with the comment “You all belong in jail”. The New York Post article claimed that the platform had failed to remove child sexual abuse material (CSAM) despite several reports from the victims. Cernovich used the article to support veiled accusations that the Council’s members were responsible for the crime, even though, as advisory council members, they were not involved in any of Twitter’s daily operational decisions.

Less than an hour later, Elon Musk responded to Cernovich, tweeting: “It’s a crime that they refused to take action on child exploitation for years!”, indirectly accusing the Council members of being responsible for the presence of CSAM content on the platform.

In the 24 hours that followed Cernovich’s tweet, the three women were mentioned or tagged in a growing “swarming” of tweets that included insults, defamation, and threats of doxxing and physical aggression. Moreover, 2,198 memes and other pieces of visual content were tweeted, some of which contained cheap fakes sexualizing the three women who resigned from the Council as well as Chief Legal Officer Vijaya Gadde, whom Musk had fired in October 2022.

Musk dissolved the Council entirely four days later.

Twitter’s rules prohibit “the targeted harassment of someone, or incit[ing] other people to do so”, including targeting people with abuse or harassment online, behavior that urges offline action such as physical harassment, “content that otherwise sexualizes an individual without their consent”, and “repetitive usage of insults or profanity where the context is to harass or intimidate others”. In practice, a user accessing the Twitter Help Center to report a case of harassment may report at most five harassing incidents. There is currently no way for a target to report being harassed by multiple accounts at once, nor for trusted third parties to report on a target’s behalf. The latter would be valuable, since reviewing such tweets is retraumatizing for targets; it would be helpful if targets could enlist trusted friends to report on their behalf instead.

Moreover, there is no legal basis for holding influential profiles or public figures accountable for indirectly and covertly inciting an attack on a person or a group, despite the fact that this kind of ideologically driven hate speech increases the likelihood that people will violently and unpredictably attack the targets of vicious claims. Investigating the dynamics of online harassment and the role of platform affordances in enabling it can help detect this kind of phenomenon, and thus support victims in seeking recourse and holding social media platforms accountable.

Building on Marwick’s model of “morally motivated networked harassment” (2021), we argue that the case under study can be described as “indirect swarming”, a form of networked harassment wherein, in response to a newsworthy event, influential accounts, which we call “amplifiers”, signal to their followers to harass a target via coded or masked language. Amplifiers use coded language to covertly signal their followers, who carry out the swarming via harassing tweets. Retweets and replies play a fundamental part, as they allow users to quickly contribute to the harassment campaign by amplifying the visibility of the amplifiers’ message.

Online & Networked Harassment

Online harassment refers to “unwanted contact that is used to create an intimidating, annoying, frightening, and even hostile environment” through digital means (Lenhart et al., 2016). Online harassment affects the mental health and threatens the physical integrity of victims, and it affects society at large by creating hostile online environments that silence not only victims, but also people who have witnessed harassment online and even those who have not (Lenhart et al., 2016). Research shows that young people, women, racialized and minoritized people (e.g. LGBT+ people) face higher levels of online harassment and are more affected by the negative consequences of online harassment (Lenhart et al., 2016; EIGE, 2017).

While legal and technical definitions of harassment presume that harassment is carried out by one person, Marwick’s (2021) model of Morally Motivated Networked Harassment (MMNH) contemplates harassment carried out by a network of individuals and legitimized by “amplifiers”, which she defines as highly followed accounts and influencers who accuse the target of violating the morals of the community.

In Marwick’s model, morally motivated networked harassment identifies a situation wherein “a member of a social network or online community accuses a target of violating their network’s norms, triggering moral outrage. Network members send harassing messages to the target, reinforcing their adherence to the norm and signaling network membership.” (Marwick, 2021, p. 1). However, the model does not explain how the amplification happens, and fails to account for cases of harassment instigated by amplifiers through coded and masked language, rather than with direct accusations.

2. Research Questions

What is indirect swarming? What evidence is there that this is different from networked harassment?

Starting from the lived experience of Eirliani, we analyze how this case of networked harassment unfolded over time to provide a measurable characterization of the dynamics of indirect swarming. We consider the evolution of the case as seen “from the perspective of the targets” by collecting and analyzing all the tweets that mentioned Eirliani, Anne Collier, and Lesley Podesta, that is, the “swarming” of tweets, retweets, replies, and quotes that targeted them. Embracing the importance of embodied experience (D’Ignazio and Klein, 2021), we start from the lived experience of harassment and aim to identify measurable indicators to describe, detect, predict, and prevent (in)direct swarming.

Moreover, we analyze the rhetoric of the tweets through and beyond Twitter’s definition of harassment to evaluate the harassing character of the swarming and the limitations of Twitter’s moderation policy, and to uncover the presence of tropes, veiled attacks, and conspiracy theories.

See proposed working model of “indirect swarming” (Annex A).

Research objectives

The general objective of this study was to describe the episode of harassment through quantitative indicators and develop a conceptual model for indirect harassment/swarming.

Specifically, the research aimed to analyze:

Temporality: describe how the indirect harassment unfolds over time
  • Escalation: understand the role of influential profiles such as Musk and Cernovich in the amplification of the swarming;
  • Affordances: define which Twitter affordances play a role in indirect harassment;
  • Aggressors: identify the top ten harassers and describe how they engage with indirect harassment over time.
Rhetoric: identify the rhetoric used in this event to evaluate the harassing nature of the swarming, according to 1) Twitter’s definition of harassment, and 2) the definition of online political violence proposed by AzMina Magazine and InternetLab (2021), two non-profit organizations from Brazil dedicated to defending digital and human rights.

3. Methodology and initial datasets

The tweets were collected to investigate the case study of online harassment directed at Eirliani Abdul Rahman, Anne Collier and Lesley Podesta, former members of Twitter's Trust and Safety Council.

The initial dataset consisted of 126,984 tweets directed at Eirliani Abdul Rahman, Anne Collier and Lesley Podesta between 7 December 2022 and 5 May 2023. To collect the sample, we queried Twitter API v.3 using 4CAT for replies to, retweets of, and quote tweets of Elon Musk and Mike Cernovich, and for mentions of @eirliani, @annecollier, and @podesta_lesley, between December 2022 and May 2023.
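Once collected, the corpus must be reduced to the study window and the targets' mentions. The sketch below illustrates that filtering step; the field names (`created_at`, `text`) and the helper function are our own assumptions for illustration, not 4CAT's actual export schema:

```python
from datetime import datetime, timezone

# Illustrative sketch: filter an exported tweet list down to the study
# window and the three targets' mentions. Field names are assumptions.
TARGETS = {"@eirliani", "@annecollier", "@podesta_lesley"}
START = datetime(2022, 12, 7, tzinfo=timezone.utc)
END = datetime(2023, 5, 5, tzinfo=timezone.utc)

def in_scope(tweet: dict) -> bool:
    """Keep tweets posted in the study window that mention any target."""
    posted = datetime.fromisoformat(tweet["created_at"])
    mentions = {w.lower() for w in tweet["text"].split() if w.startswith("@")}
    return START <= posted <= END and bool(mentions & TARGETS)

# Toy data, for illustration only.
tweets = [
    {"created_at": "2022-12-09T17:40:00+00:00", "text": "You all belong in jail @annecollier"},
    {"created_at": "2022-11-01T00:00:00+00:00", "text": "hello @annecollier"},
]
corpus = [t for t in tweets if in_scope(t)]
```

A production pipeline would parse mentions from the API's entities field rather than splitting on whitespace, which misses mentions followed by punctuation.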

Temporality

  • Affordances: we used RawGraphs to visualize the minute-by-minute lifecycle of 1) the sum of replies, retweets, quote tweets, and likes, and 2) replies, retweets, quote tweets, and likes as separate entities.

  • Escalation: visualizations over 1) the whole time frame of the corpus, 2) a selected timeframe of the 7 days with the highest activity, 3) a selected timeframe of the 24 hours with highest activity.

  • Aggressors: we used ChatGPT’s auto-completion API to code tweets by harassment type, according to Twitter’s definition of harassment. We then plotted the top harassers’ engagement (calculated as the sum of retweets, replies, comments, and likes) minute by minute over a selected timeframe (from 3:30 pm December 9, 2022 to 3:30 pm December 10, 2022).
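The aggregation behind these plots can be sketched as follows. This is a minimal stand-in for the actual pipeline; the field names (`author`, `created_at`, and the four engagement counts) are illustrative assumptions:

```python
from collections import Counter, defaultdict

def engagement(tweet: dict) -> int:
    # Sum of the four engagement counts used in the report.
    return sum(tweet[k] for k in ("retweets", "replies", "quotes", "likes"))

def top_harassers(tweets: list, n: int = 10) -> list:
    """Rank authors by the total engagement their tweets received."""
    totals = Counter()
    for t in tweets:
        totals[t["author"]] += engagement(t)
    return [user for user, _ in totals.most_common(n)]

def per_minute(tweets: list, author: str) -> dict:
    """Engagement per minute ('YYYY-MM-DDTHH:MM') for one author's tweets."""
    series = defaultdict(int)
    for t in tweets:
        if t["author"] == author:
            series[t["created_at"][:16]] += engagement(t)
    return dict(series)
```

The per-minute series can then be exported and visualized, for instance in RawGraphs.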

Rhetoric

  • Referring to Twitter’s definition of ‘harassment’, a sample of the corpus (the 10% most and 10% least engaged tweets) was coded using ChatGPT’s auto-completion API. We used ChatGPT to analyze the presence of harassing content in the corpus, using Twitter’s definition of harassment to construct the following prompt:

“Does this post (t1) target @eirliani, @podesta_lesley or @annecollier with malicious, unreciprocated, and intended attempts to humiliate or degrade them, (t2) mention or tag @eirliani, @podesta_lesley or @annecollier with malicious content, (i1) target @eirliani, @podesta_lesley or @annecollier with insults or profanity, (e1) encourage others to harass or target @eirliani, @podesta_lesley or @annecollier with abuse, (e2) call to target @eirliani, @podesta_lesley or @annecollier with abuse or harassment online or (e3) urge offline action, such as physical harassment? Answer with t1, t2, i1, e1, e2, e3 or a combination of these.”
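The model's free-text replies then have to be parsed back into label sets before they can be counted. A minimal sketch of that post-processing step; the label vocabulary comes from the prompt above, while the parsing logic is our own illustration, not the exact pipeline used:

```python
import re

# Valid codes from the harassment prompt: t = targeting, i = insults,
# e = encouraging others. Anything else in a model reply is ignored.
VALID = {"t1", "t2", "i1", "e1", "e2", "e3"}

def parse_codes(reply: str) -> set:
    """Extract the set of harassment codes from a model's free-text reply."""
    found = set(re.findall(r"\b([tie]\d)\b", reply.lower()))
    return found & VALID
```

Restricting matches to the known vocabulary guards against the model inventing codes that are not in the prompt.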
  • Based on the results of this first round of coding, which identified which tweets constituted harassment, we used ChatGPT to perform a second round of coding to understand whether tweets that were not categorized as harassment could be categorized as other forms of hateful speech.
  • Looking at the tweets, we noticed that indirect swarming manifested in different ways, in particular through tweets that (a) questioned @eirliani's, @podesta_lesley's or @annecollier's principles and morality; (b) insinuated a link between them and pedophilia; (c) addressed them and their resignation sarcastically; (d) dismissed and belittled them or addressed them in patronizing ways; (e) wished that they would go to jail; (f) expressed relief or indifference about their departure.

  • Because of time and financial limitations, we decided to focus on one manifestation of indirect swarming: sarcasm. We developed a new prompt for ChatGPT to analyze the presence of sarcasm:

“Does this post (s1) sarcastically address @eirliani, @podesta_lesley or @annecollier or (s2) mocks @eirliani, @podesta_lesley or @annecollier integrity or (s3) contain sarcasm? Answer with s1, s2, s3 or a combination of these.”
  • To understand which articles and other off-Twitter sources were used to support or oppose the harassment, we built a user-to-URL network. Each edge between a user and a mentioned URL is tagged according to whether the user credits or discredits that URL.

  • To understand the positioning of different users with respect to the harassment, in relation to other users, we built a user-to-user network.

  • To characterize the rhetoric of this case of harassment, we performed a content analysis of all the tweets containing the phrase “you are a…”, retaining the 3 words before and the 3 words after the phrase and removing stopwords. Words were then categorized by theme, showing the predominance of particular slurs used to attack the three women.
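The context-window extraction described above can be sketched as follows; the stopword list and sample tweets are illustrative placeholders, not the study's actual lexicon:

```python
import re
from collections import Counter

# Sketch of the "you are a..." analysis: keep a 3-word window on each
# side of the phrase, drop stopwords, and count what remains.
STOPWORDS = {"a", "an", "the", "and", "or", "to", "of", "you", "are"}

def context_window(text: str, width: int = 3) -> list:
    """Return the non-stopword context around the first 'you are a'."""
    words = re.findall(r"[a-z']+", text.lower())
    for i in range(len(words) - 2):
        if words[i:i + 3] == ["you", "are", "a"]:
            window = words[max(0, i - width):i] + words[i + 3:i + 3 + width]
            return [w for w in window if w not in STOPWORDS]
    return []

# Toy corpus, for illustration only.
counts = Counter()
for tweet in ["You are a disgrace to this council", "Honestly you are a joke"]:
    counts.update(context_window(tweet))
```

Aggregating the resulting counts across the corpus surfaces the recurring slurs that followed the phrase.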

4. Findings

Temporality and escalation

The harassment of the three women escalated over a short period of time and was catalyzed by amplifier accounts, reaching peaks of more than 400 posts per minute within 24 hours of the first amplifier’s tweet. Amplifiers used outdated newspaper articles to lend credibility to their accusations against the targets, amplifying the diffusion of defamatory accusations of negligence, corruption, collusion with mainstream media, and pedophilia. On December 8, 2022, Anne Collier tweets the announcement that she, Eirliani Abdul Rahman, and Lesley Podesta have resigned from the Twitter Trust and Safety Council, linking to a Net Family News article, written by Collier, that explains their decision. Her tweet is retweeted 7,139 times.

Fig. 1. The escalation of indirect swarming from December 8, 2022, day of Anne Collier's resignation letter, to May 5, 2023.

Fig. 2 and 3. The escalation of indirect swarming in the hottest 24 hours and over the first 7 days.

The next day, 9 December at 17:40, Mike Cernovich tweets a reply stating “You all belong in jail” and links to a New York Post article from 2021 claiming that Twitter refused to remove child sexual abuse material because it did not violate the platform’s policies. As engagement starts to rise, Elon Musk responds to Cernovich, tweeting “It’s a crime that they refused to take action on child exploitation for years!”. In the minute after Musk’s tweet, the combined rate of replies, retweets, and quote tweets spikes from below 50 per minute to over 400 per minute. His tweet receives 17,563 retweets.

Over the next 9 hours, the swarming continues with rapid drops in the frequency of posting followed by new spikes of 200-300 total posts per minute. Over the course of 24 hours, the swarming decreases and stabilizes below 50 posts per minute for about 10 hours (from 4 am to 12 pm). Around 12 pm, the number of posts peaks again at around 50 posts per minute, then dies out quickly. Within the next seven days, the frequency of posting remains below 5 posts per minute, albeit with some peaks.

Affordances

Over the next 24 hours, retweets and replies become key mechanisms in indirectly harassing the resigning members of Twitter’s Trust and Safety Council. Quote tweets play a much smaller role. Retweets (Fig. 4) allow Twitter users to spread a message without adding commentary, and often signal an endorsement of the content. Either way, retweets are an important tool for amplification, creating a flurry of notifications for the targets of the harassment and simultaneously exposing them to more potential harassers. Replies (Fig. 5) play a more direct role in harassment than retweets, given that a user comments on the inflammatory tweet. This both piles on further harassment and also increases amplification by raising the tweet’s engagement metrics, which likely triggers the Twitter algorithm to spread the tweet. While the affordances of Twitter allow for positive messages to spread quickly, they also enable harassment campaigns to proliferate without users needing to participate in direct harassment.

Fig. 4. Retweets by minute in the first few days

Fig. 5. Replies by minute within the first few days

Aggressors

Elon Musk acted as an amplifier of the attack, with 30% of the texts in the quote tweets classified as direct harassment (shown in red). Most participants in the swarm merely liked or retweeted Musk's and Cernovich's tweets; we have classified this behavior as indirect swarming.

“As one of those targeted by Cernovich and Musk, the experience of being swarmed in this way was shocking. Anne, Lesley and I were stating the facts: that the staggering rise in hate speech on Twitter is not tenable and we cannot, in full conscience, remain as members of Twitter's Trust and Safety Council. We received threatening emails and also vitriol on Facebook and LinkedIn. An author of one of these emails described graphically how he wanted to see us die in three different ways and hoped that we would get doxxed. I had to get the FBI involved for our safety.”

~ Eirliani A. Rahman

Fig. 4. The formation of swarming

Fig. 5 Two sides of the swarm

Rhetoric

Harassment and sarcasm in the swarming

Online harassment refers to a broad spectrum of abusive behaviors enabled by technology platforms to target a user or users. Data and Society identified ten types of harassment online including physical threats, name-calling, impersonation, spreading rumors, and encouraging others to harass a target. Networked harassment describes a form of online harassment against an individual or group of individuals, which is “encouraged, promoted, or instigated” by members of an online network or community. Twitter and other social media platforms do not currently recognize this form of harassment.

Fig 6. The swarming of a sample of the tweets in the corpus that were determined to be harassment or not harassment according to Twitter’s guidelines

This analysis showed which parts of the corpus were categorized as harassment by ChatGPT, according to Twitter’s definition of harassment. However, Twitter’s definition is limited to direct and explicit harassment, so many tweets that contributed to the swarming in an indirect way were not detected as harassing by the previous prompt.

While each one of these tweets might not be recognized as harassment in a strict sense, they contribute to the swarming in three ways. First, conspiratorial narratives are indirectly instigated by amplifiers and then used directly by their followers to harass the three women of the Council, mobilizing disgust, which in turn has been shown to play a role in the incitement to violence. Second, sarcasm, belittling, and dismissing are commonly used to undermine a woman’s reputation and perpetuate hierarchical gender relations, creating a hostile environment that pushes women to withdraw from the public sphere (EIGE, 2022). Finally, the coordinated nature of the tweets contributed to the evolution and magnitude of the swarming.

Fig 7. The portion of the swarming that was not categorized as harassment, according to Twitter's definition

We ran the sarcasm prompt on a dataset consisting of the tweets that had not been categorized as harassment in the previous analysis. A large share of these tweets were categorized as sarcasm. Specifically, referring to the categories of sarcasm used to build the prompt (s1, s2, s3), the vast majority of tweets were categorized as s1, ‘sarcastically addressing’ the women; many were identified as s2, ‘mocking the integrity’ of the women; and far fewer were categorized as s3, ‘containing sarcasm’. None of the tweets were categorized as a combination of the three forms of sarcasm.

Fig 8. The swarming of tweets categorized as sarcasm, where tweets (s1) sarcastically address @eirliani, @podesta_lesley or @annecollier, (s2) mock their integrity, or (s3) contain sarcasm

Building the defamatory dossier with memes, collages, and symbols

Images, memes, and other graphic material were used as purported proof of the accusations directed at the targets, to humiliate them with 'cheap fakes', and to signal other talking points of the current online political arena in the US, for example 'Biden Laptop Matters' and 'Pizzagate'. We discerned various themes, including (a) the false accusation that Lesley Podesta is related to John Podesta, the then Chair of Hillary Clinton's 2016 Democratic presidential campaign, alongside pictures of Lesley Podesta herself and of the Clintons; and (b) the linking of the three targets to the alt-right 'groomers' conspiracy, wherein gay people are accused of grooming children for sexual abuse. We also recognized screenshots of critical news coverage of Elon Musk and platform safety, suggesting the presence of an otherwise less visible critique of Musk.

Fig. 9. A collage of the top 500 most mentioned memes in the dataset

5. Discussion

Indirect harassment was catalyzed by amplifier accounts via coded/masked language (e.g. “You all belong in jail” combined with tagging the targets) or other linguistic innovations such as memes and emojis. By using such indirect or coded language, users can potentially shield themselves from accusations of harassment, making it more difficult to hold them accountable for their harmful actions.

Amplifiers' followers carry out both direct and indirect harassment. We have shown that some of the top harassers tweeted repeatedly. Furthermore, Twitter's affordances boost the reach of amplifiers' posts through retweets and replies, exposing the targets to other potentially harmful actors.

We argue that a key characteristic of indirect swarming is the sudden nature of the attack: the stochastic nature of the harassment means that it takes place over a short period of time, in this case a 24-hour period. There is no build-up to the attack; rather, it erupts after initial amplification by actors with an audience that is ready for activation. We have also shown that amplifiers use irrelevant information and/or old newspaper articles to build credibility for their accusations against the target. This is key, for it validates their attack through institutional means, even when taken out of context.

6. Conclusions

We were able to show that the analyzed case constitutes indirect swarming, a form of networked harassment catalyzed by amplifier accounts via coded/masked language, signaling their followers, who then carry out direct and indirect harassment, leveraging Twitter’s affordances of replies and retweets. The harassment escalates abruptly following amplifiers’ tweets, and it dies out in the span of a few days. The harassing tweets contained both content that is harassing according to Twitter’s definition and more veiled forms of harassment, such as sarcastic comments regarding the targets’ contested position.

Among the limitations of the study is the lack of validation of the analysis conducted by ChatGPT using the prompts. This method was used experimentally for the Digital Methods Initiative Summer School, but it would not be sound for a publication without a thorough validation of the method, including via manual verification.

More remains to be understood about indirect swarming. When amplifiers post and use "coded" language, do they know what the consequences could be, in terms of what their followers may be capable of doing? How do you hold the platform’s owner(s) accountable?

7. References

AzMina Magazine and InternetLab. (2021). Monitora: Report on Online Political Violence on the Pages and Profiles of Candidates in the 2020 Municipal Elections. São Paulo.

Brady, W. J., Crockett, M. J., and van Bavel, J. J. (2020). The MAD Model of Moral Contagion: The Role of Motivation, Attention, and Design in the Spread of Moralized Content Online. Perspectives on Psychological Science, 15(4), 978–1010. DOI: 10.1177/1745691620917336.

European Institute for Gender Equality. (2022). Combating Cyber Violence against Women and Girls. DOI: 10.2839/827864.

European Institute for Gender Equality. (2017). Cyber Violence against Women and Girls. DOI: 10.2839/876816.

Lenhart, A., Ybarra, M. L., Zickuhr, K., and Price-Feeney, M. (2016). Online Harassment, Digital Abuse, and Cyberstalking in America. Data & Society Research Institute.

Lewis, R., Marwick, A. E., and Partin, W. C. (2020). “We Dissect Stupidity and Respond to It”: Response Videos and Networked Harassment on YouTube. American Behavioral Scientist, 65, 735–756.

Marwick, A. E. and Caplan, R. (2018). Drinking Male Tears: Language, the Manosphere, and Networked Harassment. Feminist Media Studies, 18(4), 543–559. DOI: 10.1080/14680777.2018.1450568.

Marwick, A. E. and boyd, d. (2014). ‘It’s Just Drama’: Teen Perspectives on Conflict and Aggression in a Networked Era. Journal of Youth Studies, 17(9), 1187–1204. DOI: 10.1080/13676261.2014.901493.

Marwick, A. E. (2021). Morally Motivated Networked Harassment as Normative Reinforcement. Social Media + Society, April 2021. https://journals.sagepub.com/doi/full/10.1177/20563051211021378.

Dissecting In(Direct) Harassment: Actors & Narratives

Team Members

Eirliani Abdul Rahman, Giulia Campaioli, Paul Ballot, Malin Holm and Elena Aversa

1. Introduction

On December 8, 2022 Eirliani Abdul Rahman and two other members resigned from Twitter’s Trust and Safety Council, speaking up against the meteoric rise in hate speech after Elon Musk's purchase of the platform. In response, Musk indirectly instigated harassment against the three women on Twitter, and dissolved the Council entirely four days later.

In Week 1, we examined the anatomy of indirect swarming and its escalation in terms of temporality and rhetoric, showing the sudden and explosive nature of the harassment, the catalyzing role of amplifiers, and the affordances that enable continuous and fast engagement with the harassment, such as ‘like’ and ‘retweet’. We also found that amplifiers succeeded in amplifying particular rhetoric and that top harassers largely employed memes.

However, various aspects of this event remain unclear. Marwick (2021) claims that there is temporal coordination in networked harassment. Our experimental work in Week 1 supports that claim, showing that the swarming began abruptly after the amplifiers' tweets. But was the swarming intentionally coordinated? When amplifiers post and use "coded" language, do they know what the consequences could be, in terms of what their followers may be capable of doing?

In Week 2, we interrogate the narratives embedded in the text and images of the tweets and replies. We also conduct further analyses on the accounts who posted the most, thus contributing to the virality of the conversation, to identify inflection points that changed the direction of the conversation either against or in support of the three women. Through a process of meaning making, we sought to understand whether the narratives changed over time as the attacks intensified. Was there any evidence of any form of coordination between them?

Background

In Week 1 of the Digital Methods Initiative's Summer School, we analyzed this case using the framework of Marwick's (2021) model of Morally Motivated Networked Harassment, and found that the model constitutes a useful framework for understanding networked harassment, but that its taxonomy should include a new category we call "indirect swarming". Indirect swarming identifies a case of networked harassment where a) amplifiers play a central role in the sudden amplification of harassment towards targets; and b) they achieve this by signaling to their followers through coded/masked language, creating plausible deniability around their behavior.

The ADL Center for Technology & Society claims that the Twitter Trust and Safety Council case constitutes what they describe as stochastic harassment, wherein influential accounts weaponize talking points to incite harassment. Stochastic harassment would differ from stochastic terrorism with the latter referring to “the use of mass media to provoke random acts of ideologically motivated violence that are statistically predictable but individually unpredictable.”

Mainstream social media platforms have become places of extreme amplification of polarizing, inflammatory content and conspiratorial narratives (Theocharis et al., 2021) that mobilize strong negative affects like disgust and anger. We noticed in Week 1 that 1) disgust and related keywords made up a cluster of the most used words in the corpus, and 2) themes typical of internet conspiracies, such as Pizzagate and QAnon, appeared in memes and collages. Considering that these kinds of discourses are weaponized in stochastic harassment to incite violence, and that the use of coded and masked language is a linguistic strategy of online anti-X communities used to avoid moderation (Manrique et al., 2023), we decided to dive deeper into the narratives and actors that contributed to the indirect swarming.

The Facebook Files provided evidence for what researchers had been investigating and claiming for years: that the company knew about the societal and individual consequences of its platforms' mechanisms and did not act on them, privileging engagement and profit over users' health. However, big tech 'whistleblowers' face risks online and offline for their testimonies (Hughes, 2023; Giugni, 2022). It is important to continue investigating how social media platforms influence the diffusion of harassment and violence, in order to inform content moderation policies.

2. Research Questions

Based on Week 1’s case study of indirect swarming, the aim of the Week 2’s project was to characterize the profiles of significant actors and identify relevant narratives. We interrogated the narratives embedded in the memes and the text of the tweets and replies. Through a process of meaning-making, we sought to understand whether the narratives changed over time as the attacks intensified. Was there evidence of any form of coordination between them?

At the same time, we looked more closely at which top accounts contributed to the virality of the conversation, and tried to understand whether there were any inflection points where the most retweeted tweets changed the direction of the conversation, either in support of the three women or against them.

RQ1: Who were the top actors who contributed to the virality of the attacks?

RQ2: What were the narratives that were driving the harassment against the three women?

3. Methodology and initial datasets

The initial dataset consisted of 126,984 tweets directed at Eirliani Abdul Rahman, Anne Collier and Lesley Podesta. These tweets were posted between 7 December 2022 and 5 May 2023 as part of a harassment campaign instigated by Elon Musk after the three women resigned from Twitter's Trust and Safety Council, speaking out against the meteoric rise in hate speech following Musk's purchase of the platform. Using the Twitter Academic API, tweets were sourced by querying Twitter API v.3 via 4CAT for replies to, retweets of, and quote tweets of Elon Musk and Mike Cernovich, and for mentions of @eirliani, @annecollier, and @podesta_lesley, between December 2022 and May 2023.

Analysis of the most active harassers. The ten most active users were identified from the corpus based on the highest number of tweets in the dataset; we then analyzed their impact based on the engagement their tweets received. Their user profiles were then qualitatively and manually analyzed one by one to understand their political ideology and how they engaged in the campaign.
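The ranking step can be sketched as follows; the field names are illustrative assumptions, and the actual analysis also involved manual review of each profile:

```python
from collections import Counter

# Sketch: rank users by tweet count in the corpus, then attach the total
# engagement their tweets received. Field names are illustrative.
def most_active(tweets: list, n: int = 10) -> list:
    """Return (author, tweet_count, total_engagement) for the n busiest users."""
    counts = Counter(t["author"] for t in tweets)
    impact = Counter()
    for t in tweets:
        impact[t["author"]] += t["retweets"] + t["replies"] + t["likes"]
    return [(user, count, impact[user]) for user, count in counts.most_common(n)]
```

Separating activity (tweet count) from impact (engagement received) makes it possible to distinguish prolific posters from influential ones.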

Analysis of the most prevalent narratives. The analysis of narratives was twofold: on one hand, we conducted an in-depth analysis of the narratives contained in the text of original tweets; on the other hand, we analyzed the narratives contained in the images. Finally, we compared the findings to show how images are used to convey narratives that are not identified in the text of tweets.

1. The most dominant narratives driving the harassment were identified by an inductive analysis of a sample of the initial corpus.

To obtain the sample, we first excluded retweets and replies from the initial dataset, generating a corpus of 70,000 original tweets; we then took a random sample of 1% (n=700) of this corpus.

The identification of the most dominant narratives in the corpus started with a manual coding of the random sample of original tweets (n=700). The coding was carried out by two researchers in the team who read the sampled corpus in its entirety. The analysis focused on finding repeated patterns of meaning in the tweets that formed part of the harassment against the targets (Braun & Clarke, 2006). This manual analysis led to identifying three narratives that were consistently present in the sample; we then used the keyword analysis conducted in Week 1 (see Report Week 1) for the words associated with the sentence ‘you are a…’ to identify narrative-specific keywords.

After defining these narrative-specific keywords, we utilized them as proxies for particular narratives. To do so, we ran an R script to filter the corpus for said keywords in order to categorize every tweet into one narrative or a mixture of different narratives.
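The team implemented this filtering step as an R script; as a conceptual illustration, a Python equivalent of the keyword-as-proxy categorization might look like the sketch below. The keyword lists here are invented for the example; the actual narrative-specific keywords came from Week 1's "you are a…" keyword analysis and are not reproduced.

```python
# Illustrative keyword lists only; not the project's actual keywords.
NARRATIVES = {
    "council_responsible_for_csam": ["jail", "child exploitation", "csam"],
    "democrats_are_pedophiles": ["democrat", "satanic", "pizzagate"],
    "left_leaning_moderation_bias": ["censor", "shadow ban", "free speech"],
}

def label_tweet(text):
    """Assign a tweet to every narrative whose proxy keywords it contains,
    so a tweet can fall into one narrative or a mixture of narratives."""
    text = text.lower()
    return {
        name for name, keywords in NARRATIVES.items()
        if any(kw in text for kw in keywords)
    }

print(label_tweet("You should be in jail for child exploitation"))
# → {'council_responsible_for_csam'}
```

Because a tweet is matched against every keyword list, the output is a set: tweets mixing several narratives receive multiple labels, matching the report's observation that some tweets combine narratives.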

2. The narratives in the images were identified using a mixed methods approach.

Based on a collage of the top 500 most mentioned images used in the swarming, we created an initial list of common themes, symbols, genres (memes, collages, pictures, etc), and representations of significant actors (e.g. Lesley Podesta, Elon Musk).

We found (a) memes, pictures, and collages used to support the accusation that Lesley Podesta is related to John Podesta, the then chair of Hillary Clinton's 2016 Democratic presidential campaign, alongside pictures of Lesley Podesta herself and of the Clintons; (b) memes and collages linking the three targets to the alt-right 'groomers' conspiracy, wherein gay people are accused of grooming children for sexual abuse; (c) screenshots and memes promoting Elon Musk as the exposer of pedophiles; and (d) various memes, images, screenshots, and collages with references to PizzaGate, QAnon, and other known alt-right narratives (e.g. “Biden Laptop Matters”).

Then, we used PixPlot to group the corpus of images (n=2,918). The automated analysis resulted in 10 clusters of images. We conducted a manual analysis of a portion of the pictures in each cluster to develop labels for the clusters. Finally, we chose a sample of significant images to exemplify the narratives contained in the images and compared these narratives to those identified through the textual analysis.
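PixPlot itself embeds each image with a neural network and projects the embeddings into a 2D layout before clustering; as a conceptual stand-in for that clustering step, here is a minimal k-means over toy feature vectors. The vectors, the cluster count, and the whole setup are illustrative assumptions, not PixPlot's actual pipeline.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means over feature vectors. A conceptual stand-in only:
    PixPlot uses neural-network image embeddings, not raw toy vectors."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data
    for _ in range(iters):
        # Assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Two obvious groups of toy "image feature" vectors
pts = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 4.9)]
groups = kmeans(pts, k=2)
```

On these toy points the two well-separated groups are recovered regardless of initialization; in the actual analysis, the 10 machine-generated clusters were then labeled manually, since cluster membership alone says nothing about what a cluster means.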

4. Findings

Post-truth narratives and hidden misogyny

The three women were accused of being responsible for CSAM on the platform, indirectly with coded language by the amplifiers, and directly with images and sexualized misogyny by the followers. These claims were backed up with a 'defamatory dossier' made of memes, collages, and screenshots (see Poster 1). This week's analysis shows how tropes and talking points of the alt-right were used to support the attack on the three women and, in turn, how the 'dossier of evidence' (Marwick, 2021) on the three targets was used to lend proof to a system of conspiratorial beliefs such as QAnon and PizzaGate.

Twitter’s Council is responsible for CSAM on Twitter: 'You all belong in jail.'

Cernovich's influence is evident in the most common accusation directed at the women who resigned from Twitter's Trust and Safety Council: that the Council is responsible for CSAM on the platform and that its members therefore "belong in jail":

@annecollier You should be in jail for child exploitation.

This narrative, or moral accusation, legitimized users in inciting and acting out further online and offline harassment against them:

@annecollier @eirliani @podesta_lesley You three should have your hard drives searched.

“Democrats are pedophiles (and they belong in jail)”

Another dominant narrative supported the idea that Democrats and left-leaning public figures (including the three targets) are involved in child sexual abuse and Satanic rituals, and thus also 'belong in jail'.

@annecollier @eirliani @podesta_lesley @annecollier you’re as big a piece of 💩 as one gets. You and your leftist colleagues belong in jail.

According to this narrative, the women of the Council must be criminals, like the Democrats, because Elon Musk said so. And why would Elon Musk lie?

@jack @elonmusk @Cernovich @annecollier @eirliani @podesta_lesley Prove its false! Elon has no reason to lie. Elon Musk was a strong Democrat. Why would he change parties for no reason unless he discovered something like we all did. I was a Democrat my whole adult life until I found out the truth. The Democratic Party has been abducted by evil

“Lesley Podesta is the niece of a pedophile and thus a child abuser herself”

Lesley Podesta was falsely accused of being the niece of John Podesta, the chair of Hillary Clinton's presidential campaign. John Podesta is associated in these conspiracy theories with child abuse and CSAM, and the accusation is supported by a 'dossier of evidence' (Marwick, 2021) constructed from collages of photos, 'cheap fakes', and memes.

@elonmusk @Cernovich @annecollier @eirliani @podesta_lesley pssssssttt. If the name is Podesta, then I'm sure they DID "take action" probably not the right kind...

“Twitter’s moderation has a left-leaning bias”

Twitter's Trust and Safety Council, prior to Musk, was accused of censoring, deplatforming, and shadow banning right-leaning accounts while favoring left-leaning ones. After buying the platform, Musk released a series of internal documents, the so-called "Twitter Files", which he claimed proved the Council's left-wing collusion in moderation. Many technology journalists disputed this, arguing that the reported evidence actually showed Twitter's policy team struggling to make a tough call. In contrast, right-wing voices said the documents confirmed Twitter's liberal bias.

"@annecollier @eirliani @podesta_lesley Bye, Have a great life… but we won’t be crying you are gone. It’s ridiculous to think only democrats have a voice and can say what they want all while you silence republicans. That’s a given. The way you shaped our elections , NOBODY will forgive you. Period."

This narrative also surfaced in accusations using coded or masked language.

@elonmusk @Cernovich @annecollier @eirliani @podesta_lesley .@elonmusk Thank you, survivors voices have been silenced for to long. This is an epidemic world wide, we need to fight back and protect these survivors at all cost.

“Hate speech moderation, which is actually a way to impede free speech, was prioritized over child sexual abuse moderation on Twitter.”

The Council was portrayed as privileging the moderation of hate speech over moderation of child sexual abuse; hate speech moderation is seen as a way left-leaning Twitter would be censoring the free speech of alt-right and right-leaning people.

@annecollier @eirliani @podesta_lesley You allowed child pornography to circulate freely on Twitter while you banned conservative accounts because you don't agree with the policies. In a just world you would be going to jail.

Sexualizing misogyny

We found a ‘hidden narrative’ of sexualizing misogyny, marked by common tropes of sexualized misogyny online (Zanello, 2020), such as the sexual objectification of women and sexualized ‘cheap fakes’ of women involved in the swarming or in Twitter's moderation.

We call this “hidden” in the sense that we did not find this narrative embedded in the text of the tweets themselves. Rather, this narrative was only made visible through images, particularly memes.

A recurring image photoshopped the head of Vijaya Gadde, the chief legal officer whom Musk fired when he purchased the platform in October 2022, onto the body of an unknown woman. In the picture, the woman is on all fours on a bed, and a smiling man is pulling her hair from behind.

Criticisms of Elon Musk

We also found some instances of tweets that were critical of Musk and concerned about the effects his acquisition of the platform would have.

@TiMoudou @annecollier @Ronilj261 @eirliani @podesta_lesley Soon Twitter will be nothing more than an Elon Musk echo chamber.

5. Conclusions

We interrogated the narratives embedded in the memes and in the text of the tweets and replies. Through a process of meaning-making, we identified the underlying narratives described above, centered on post-truth claims and hidden misogyny. The strongest underpinning narrative was the accusation that the three women who resigned from Twitter’s Council were supporting pedophilia.

When we manually examined which accounts contributed most to the virality of the conversation, we found that three of the top ten posters were in support of the targets. Even so, the other seven harassers were able to push the conversation such that the swarming was negative overall. In the short time available, we were not able to determine whether the harassers coordinated in any form.

6. References

Brady, W. J., Crockett, M. J., & Van Bavel, J. J. (2020). The MAD model of moral contagion: The role of motivation, attention, and design in the spread of moralized content online. Perspectives on Psychological Science, 15(4), 978–1010. https://doi.org/10.1177/1745691620917336

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.

Lenhart, A., Ybarra, M. L., Zickuhr, K., & Price-Feeney, M. (2016). Online harassment, digital abuse, and cyberstalking in America. Data & Society Research Institute.

Lewis, R., Marwick, A. E., & Partin, W. C. (2021). “We dissect stupidity and respond to it”: Response videos and networked harassment on YouTube. American Behavioral Scientist, 65(5), 735–756.

Marwick, A. E., & Caplan, R. (2018). Drinking male tears: Language, the manosphere, and networked harassment. Feminist Media Studies, 18(4), 543–559. https://doi.org/10.1080/14680777.2018.1450568

Marwick, A., & boyd, d. (2014). ‘It’s just drama’: Teen perspectives on conflict and aggression in a networked era. Journal of Youth Studies, 17(9), 1187–1204. https://doi.org/10.1080/13676261.2014.901493

Marwick, A. E. (2021). Morally motivated networked harassment as normative reinforcement. Social Media + Society, 7(2). https://doi.org/10.1177/20563051211021378

Topic revision: r1 - 21 Aug 2023, EmilieDeKeulenaar