Mapping Mobs - Technological affordances, metrics, and digital violence against journalists.
Darja Wischerath, Rosalie Dielesen, Danique Leenstra, Bobby Uilen, Zarah Noorani, Luisa Garcia Amaya, Camilla Folena, Sofia Ompolasvili, Ashraf Sahli, Carla Roman, Marloes Geboers, Tomás Dodds, Gen Lemaire, Lotte Timmermans, Dilina Janadith
The Dutch ‘Persveilig’ (safe press) organization recently reported 262 cases of aggression and violence against members of the press in the Netherlands, double the number of cases reported in 2020 (Persveilig, 2021). In the race for clicks, a human toll is paid. Journalists find themselves in limbo: they are nudged, or even pushed, to lean into the platform logic of maximizing engagement, which may make them more vulnerable to (personal) attacks. Newsrooms take into account audience metrics that are partly derived from social platforms. Editorial decisions (Christin, 2020; Petre, 2015), as well as the production of news (Poell & Nieborg, 2018), are affected to varying degrees by journalists ‘leaning into’ what is popular according to audience metrics. At the same time, (digital) violence against journalists is on the rise (Miller, 2021; Waisbord, 2020). Mobs can quickly emerge and ‘impose’ censorship. In this project, we study online hostility toward the press through a ‘platform affordances approach’, assuming that the architectures of platforms shape and constrain violence toward journalists in specific ways. Alongside this, we assess which issues are prone to igniting mobs and what role clickbait and other ‘content styles’ play in mobs going off.
We will not focus so much on regulatory tactics of banning or de-platforming; rather, we aim to explore the relationship between engagement intensity (which, in turn, shapes the audience metrics informing journalists) and its predictive value for hostility toward journalists. In the Dutch cases, three-quarters of the reported incidents included personal threats against the journalist, in 29% the news outlet was also attacked, and in 37% the journalistic profession in general was a target of aggression. In 3 out of 10 cases the violence took place on social media, mainly Twitter and Facebook. Starting from these data, we zoom in on who is involved, which societal issues get entangled in the attacks on journalists, and what the discourses of the vitriolic messages are. For non-Dutch-speaking participants, we widen our net and also collect social media data tied to the upcoming Chilean elections, as well as English-language cases.
"Elites, norms, and audience. All of these figure centrally in journalism’s sense of self in largely Western or global Northern liberal democracies, and each of them are fatigued. They need to be cast aside and rethought. We argue that the reliance on elites is faulty and that it furthers journalism’s reliance on sectors of the public that aren’t representative. We argue that the reliance on norms isn't helpful because norms are thought to unfold in a perfect aspired condition that isn't at all reflective of what journalists are actually going through. And we argue that the audience isn't there in the way that journalism has typically thought or expected." -
Barbie Zelizer on The Journalism Manifesto (2021)
As digital technologies have made their way into news production, allowing news organizations to measure audience behaviors and engagement in real time, click-based and editorial goals have become increasingly intertwined. Media organizations now embrace social media platforms to connect with and expand their audiences (Lysak et al., 2012). The popularisation of quantitative audience measurement has modified news making and distribution to encompass the newly “measured” audience preferences and opinions. In writing for a datafied audience driven by metrics, journalists have stopped writing, according to their values, for a general audience. Instead, reporters write in an attempt to satisfy the algorithm that quantifies such publics (Gallagher, 2017), and therefore for an imperfect datafied audience. Anderson (2011) refers to this phenomenon as ‘algorithmic publics’, a concept that encapsulates the influence of algorithms and their metrics on journalists’ perceptions of the audience, and the influence those perceptions in turn have on news making.
However, while social media platforms have made it possible to connect these new algorithmic publics to newsrooms, they have also played a role in the increase in hostility toward the press in general, and in the abuse and harassment of journalists in particular. Virtual mobs, at times aided by the algorithms behind these platforms, have found a place to connect online and target reporters across different media organizations (Miller, 2021; Waisbord, 2021).
Online harassment affects journalists globally, but it also presents local and national particularities (Waisbord, 2020). We expand on existing ethnographic research on newsrooms in Chile, which focused on how journalists perceive and cope with online vitriol (Dodds, forthcoming), and build on it by moving toward questions about the role that platform affordances play. We understand violence broadly, meaning that we include discourses that undermine trust in the ‘mainstream press’ and the public role of independent journalism in democracies.
2. Research Questions
Can we find correlations between resonance in terms of engagement intensities on the one hand and the extent and kinds of hostilities on platforms on the other hand?
Do we see differences between countries in the kinds of hate that are prominent?
What is the role of personal characteristics and perceived political leanings of journalists and news outlets respectively, in the kinds of allegations that are made?
3. Methodology and initial datasets
Data collection and sampling
We queried the Twitter handles of several journalists and news outlets based in three countries: the Netherlands, the UK, and Chile, mostly for the date range November 26, 2021 to January 5, 2022 (with some datasets differing where they would otherwise grow to an unmanageable size). This time span includes several events that influenced user activity intensities, such as the Chilean presidential elections, a lockdown in the Netherlands, and the resignation of a famous female political editor at the BBC, attacked for her ties to Boris Johnson.
For Facebook data, we used CrowdTangle to query a page dedicated to a hashtag that was prominent in the UK dataset: #defundtheBBC. With the dataset resulting from this search, we could see which news sources were shared by this page. We then used the CrowdTangle Chrome extension to assess which other pages were sharing the same news articles as this page. See also the Findings section.
Mónica Pérez (T13)
Carola Urrejola (T13)
Consuelo Saavedra (Radio Duna)
Juan Manuel Astorga
Monserrat Álvarez (CHV)
Mauricio Bustamente (Cooperativa)
Sebastián Esnaola (Cooperativa)
Fresia Soltof (CNN Chile)
Mirna Schindler (T13)
Leslie Ayala (La Tercera)
Maarten Keulemans (Volkskrant, male, perceived left of center)
Asha ten Broeke (Volkskrant, female, perceived left of center)
Chris Klomp (earlier dataset, as Klomp was taking a break from Twitter)
Diederik de Groot
Paul Brand (ITV, uncovered current govt email scandal)
Ashley Cowburn (The Independent)
Vicki Young (BBC deputy political editor)
We also queried the name of Laura Kuenssberg, a UK journalist, to gather posts about her. By sampling posts with high and low interaction counts, we arrived at a dataset that served as an entry point for manually and qualitatively analyzing the comments these posts received. See also the Findings section.
The different datasets were demarcated through customized queries based on word and hashtag frequencies, which in turn were derived from the two statistics modules in TCAT that produce such outputs. Below we briefly state the specifics for each country.
The hatebase analysis module in 4CAT was used to derive query words. For comparability, we also included general query words that were frequently present throughout the entire dataset: journalist OR media OR fake OR scum OR propaganda. In a second step, the entire dataset was narrowed down through this hate query, which we ran in TCAT.
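As a rough illustration of this narrowing step (not the actual TCAT implementation), the OR-query can be sketched as a case-insensitive word-boundary match over tweet texts. The list-of-dicts input and the `text` field are assumptions for the sake of the example:

```python
import re

# General query words taken from the report; the compiled pattern mimics
# how an OR-query narrows a dataset down to matching tweets.
QUERY_WORDS = ["journalist", "media", "fake", "scum", "propaganda"]
pattern = re.compile(r"\b(" + "|".join(QUERY_WORDS) + r")\b", re.IGNORECASE)

def filter_tweets(rows, text_field="text"):
    """Keep only rows whose text matches at least one query word."""
    return [row for row in rows if pattern.search(row[text_field])]

# Hypothetical sample data: in practice this would be a TCAT export.
tweets = [
    {"text": "The media is all propaganda"},
    {"text": "Lovely weather today"},
]
print(len(filter_tweets(tweets)))  # 1
```

A language-specific hate lexicon (as built for the Dutch data below) would simply extend `QUERY_WORDS`.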
Using the bipartite hashtag/user network output, we could go from TCAT to Gephi, where nodes were colored to separate the two node types and sized by usage and mention frequency.
Extract the most frequently used words via TCAT (word frequency module).
Build Dutch hate lexicons on the basis of these.
Query dataset in TCAT with hate lexicons. [insert Dutch hate query here]
In a second step, the entire dataset was narrowed down through this hate query, which we ran in TCAT. Using the bipartite hashtag/user network output, we could go from TCAT to Gephi, where nodes were colored to separate the two node types and sized by usage and mention frequency.
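The bipartite hashtag/user export that TCAT produces for Gephi can be approximated as follows. This is a minimal stdlib sketch, assuming tweets arrive as (user, text) pairs; the `user:`/`tag:` prefixes are our own device for separating (and later coloring) the two node types:

```python
import re
from collections import Counter

def bipartite_edges(tweets):
    """Build (user, hashtag) edges plus frequency-based node sizes,
    analogous to TCAT's bipartite hashtag/user export for Gephi."""
    edges = Counter()
    node_freq = Counter()
    for user, text in tweets:
        tags = re.findall(r"#(\w+)", text.lower())
        node_freq[f"user:{user}"] += 1  # node size: usage frequency
        for tag in tags:
            edges[(f"user:{user}", f"tag:{tag}")] += 1
            node_freq[f"tag:{tag}"] += 1  # node size: mention frequency
    return edges, node_freq

# Hypothetical sample data for illustration.
tweets = [
    ("alice", "so much #fakenews today"),
    ("bob", "#fakenews and #msm again"),
]
edges, sizes = bipartite_edges(tweets)
```

The edge list can then be written out as a CSV and imported into Gephi, where the prefixes allow coloring users and hashtags differently.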
Chilean data: similar to the Dutch protocol.
CrowdTangle, cross-platform analyses and comments analyses
Laura Kuenssberg of the BBC is used as a case to systematically compare platforms and their associations with harassment. The following steps were taken:
Twitter data collected for the handle @bbclaurak (2021/11/25-2021/12/26): 118,727 items
Ran the TCAT hostname frequency module on the above dataset (per day) and identified notable peaks.
Ran the TCAT hashtag frequency module for journalism/journalist-targeted hashtags (fakenews, defundbbc, scummedia, sacklaura) and recorded notable peaks.
Manually inquired into the potential event/post/news item that triggered the peaks found in steps 2 and 3.
CrowdTangle: frequency of mentions of Laura’s name and Twitter handle on Facebook
CrowdTangle: interactions for prominent harassment tags identified in the Twitter data
Manually inquired into the potential events/posts/news items that triggered the spikes.
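The per-day frequency counts and peak flagging in the steps above can be sketched as below. This is an illustrative stand-in for the TCAT modules, assuming (ISO date, text) tweet pairs; the `factor * mean` threshold is our own simplistic peak heuristic, not TCAT's:

```python
import statistics
from collections import Counter

def daily_hashtag_counts(tweets, hashtag):
    """Count tweets per day carrying a given hashtag
    (TCAT-style over-time hashtag frequency)."""
    counts = Counter()
    for date, text in tweets:
        if f"#{hashtag}" in text.lower():
            counts[date] += 1
    return counts

def peaks(counts, factor=2.0):
    """Flag days whose count exceeds factor * mean (over active days)
    as candidate peaks worth manually inspecting for a trigger event."""
    if not counts:
        return []
    mean = statistics.mean(counts.values())
    return [day for day, c in counts.items() if c > factor * mean]
```

Flagged days would then be inspected manually, as in steps 4 and 7 above.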
A qualitative inquiry into comments
The 10 most and 10 least interacted-with articles were selected from the data gathered via CrowdTangle for Laura’s name.
The comments sections were then thematically analyzed to identify hate speech targeted at journalists (professional, personal, etc.).
Keywords used in these comments were then listed separately (such as liar, clown, c*nt, etc.).
The overall interactions received by these comments were recorded.
Results from the two separate processes above were then cross-analyzed to see whether simultaneously occurring peaks were triggered by the same event, and to compare the nature of the harassment (see the Results section, over-time graphs).
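The cross-analysis of peaks from the two processes can be approximated as a date alignment: peaks from the two platforms that fall within a day of each other are candidate matches for manual follow-up. ISO date strings and the one-day tolerance are assumptions for this sketch:

```python
from datetime import date

def co_occurring_peaks(twitter_peaks, facebook_peaks, tolerance_days=1):
    """Return pairs of peak dates from the two platforms that fall within
    `tolerance_days` of each other - a first pass before manually checking
    whether the same event triggered both."""
    matches = []
    for t in twitter_peaks:
        for f in facebook_peaks:
            delta = abs((date.fromisoformat(t) - date.fromisoformat(f)).days)
            if delta <= tolerance_days:
                matches.append((t, f))
    return matches
```

Each matched pair would still need the manual inquiry described above to confirm a shared trigger.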
Deep dive into #defundthebbc
For the UK section of the project, we dove into the #defundthebbc hashtag, which first emerged from the BBC dataset. Through CrowdTangle we queried "#defundthebbc, #defundbbc, defundthebbc", collecting posts from the same period as the Twitter analysis (25 Nov-5 Jan). From the collected Facebook posts, a public page related to the hashtag emerged, which seemed to contribute to and lead the #defundthebbc campaign online. From the 43 posts collected, we skimmed the posts with more than 50 comments, and for the 20 posts that were left we proceeded with comment analysis. Comments were collected manually by detecting the first 10 hate comments (listing ‘recent ones’ on Facebook) in three directions: (1) hate toward a specific journalist, (2) toward a media outlet, (3) toward journalism in a larger sense. For each post, we collected the first 10 hate comments following this coding protocol: Comment text | Likes | Replies | Keyword | Direction
Judging from the page’s results, the hate movement against the BBC already seemed organized: each post included a self-produced meme summarising the news item against the BBC that the post promoted (see Findings). Moreover, to map to what extent there is, or is not, a consistent infrastructure of hate promoting the campaign to defund the BBC, we used the CrowdTangle extension for Chrome, which allowed us to map where a given article linked in one of the @DefundtheBBC posts was reshared by other pages or verified profiles.
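The resharing map can be sketched as a simple grouping of posts by shared article URL. This is a stdlib stand-in for what the CrowdTangle Chrome extension shows per article; the (page, url) input shape is an assumption:

```python
from collections import defaultdict

def pages_per_link(posts):
    """Group Facebook posts by shared article URL to see which pages
    reshare the same links; links shared by more than one page hint
    at a shared infrastructure behind the campaign."""
    sharing = defaultdict(set)
    for page, url in posts:
        sharing[url].add(page)
    return {url: pages for url, pages in sharing.items() if len(pages) > 1}

# Hypothetical sample data for illustration.
shared = pages_per_link([
    ("DefundTheBBC", "https://example.com/a"),
    ("OtherPage", "https://example.com/a"),
    ("OtherPage", "https://example.com/b"),
])
```

Here only the first URL would surface, since it is reshared by two distinct pages.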
Hashtags used in relation to a specific journalist were plotted over time. This revealed differences between male and female journalists: male journalists seemed to be attacked more in the context of their profession (this was clearer in the words used than in the tags, as the tags pertained to the topics covered by the journalist), whereas female journalists seemed to be targeted through references to their physical appearance; see figures 1-4.
Figure 1: Hashtags over time for Dutch journalist Wierd Duk (De Telegraaf, male). The tags pertain mainly to the pandemic and topics covered by Duk. The orange tags pertain to him being called a ‘whorenalist’ and the like.
Figure 2: Hashtag-user network of the dataset pertaining to Asha ten Broeke (female Dutch journalist, de Volkskrant). Note how her physical appearance gets dragged in through tags related to fat shaming and BMI. This is also the case for the UK journalist Laura Kuenssberg (see figure 4), although there it is barely visible in the tags (see also the Limitations section).
Figure 3: Tags and users surrounding male journalist Maarten Keulemans (de Volkskrant; note that this is the same paper as Asha ten Broeke’s in figure 2).
Figure 4: Hashtags over time in the dataset on female journalist Laura Kuenssberg (BBC). In the words used in tweets, and especially so in Facebook comments (see figure X), her facial features were frequently mentioned in the context of digital violence. In the tags of this figure, Laura is mentioned as ‘Tory Laura’, pertaining to her perceived biased stances. Also interesting is that overall there are more tags relating to attacks on the press in general (the red tags) than in the visualization of hashtags over time for Wierd Duk.
Figure 5: Hashtags over time for Daniel Matamala (male Chilean journalist). Here the attacks also pertain to critique of the press in general (#prensabasure, ‘garbage press’). Attacks on him involve more normative allegations about what a journalist should and should not be (‘Matamala lies’).
As for the influence of the perceived political leanings of media outlets, we could not derive conclusions from interpreting the user/tag networks. See figure 6 of NOS and Telegraaf, both Dutch outlets, perceived as pro-government (NOS) and right-wing/conservative (Telegraaf). Both feature tags such as msm (mainstream media, mostly used as an attack), fake news, and even state propaganda; however, we would need more context on how these words are used and at whom they are directed. The hypothesis is that in the Telegraaf dataset, msm is mostly used to attack other media (other than De Telegraaf), whereas such tags in the NOS dataset would be directed at NOS itself.