Am I shadow banned? Studying online discussions on a contested form of content moderation


Team Members

Laura Savolainen, University of Helsinki
Mea Lakso, University of Helsinki
Clémence De Grandi, University of Amsterdam
Karlygash Nurdilda, University of Amsterdam
Caitlin Harzem, University of Amsterdam

Contents

Team Members

Contents

Abstract

1. Introduction

2. Initial Data Sets

2.1. The description of datasets

2.2. Limitations of the datasets

3. Research Questions

4. Methodology

4.1. The methodological process

4.2. Reddit as a data source

5. Findings

5.1 First-order folk theories: the formation and exchange of speculative knowledge

5.2 Uncertainty and contestation

5.3 Second-order folk theories: politicizing algorithms

5.4 Being affected: emotional and other real-world consequences

5.5 Shadow banning among a marginalised community: the case of sex workers on Reddit

6. Discussion

7. Conclusion

8. References

Abstract


How does the lack of transparency in content moderation and platform governance manifest in users' experiences of platforms? In this article, we present findings on how regular users of Instagram, TikTok and YouTube make sense of the mysterious shadow ban. We found that there is no single coherent understanding of what a shadow ban entails or whether it even exists. However, for some groups, such as content creators reliant on online visibility and people in non-normative professions such as sex workers, the 'shadow ban' is more than a temporary inconvenience. Our analysis demonstrates how the majority of affective, communicative and economic repercussions of content moderation and deprioritization – typically thought to result from the shortcomings of individuals – are better understood in a collective register: as symptoms of sociotechnical systems that foreground commercial gains over the wellbeing of their everyday users.

1. Introduction

'Shadowbanning. It's not a thing, right?' went a question received by Adam Mosseri, Head of Instagram, during a virtual Q&A session in February 2020. The question refers to a form of social media content moderation where posts or comments are made strategically undiscoverable to everyone except their posters. While content removals, hashtag bans or account suspensions trigger strong negative reactions and calls for explanations, from the perspective of platforms, difficult-to-detect shadow banning could seemingly eliminate the problem of angry or resistant responses to content moderation. 'Shadow banning is not a thing', Mosseri replied – joining the ranks of other platform representatives denying the existence of the practice: 'If someone follows you on Instagram, your photos and videos can show up in their feed if they keep using their feed. Being in Explore [Instagram's algorithmically curated recommendation page] is not guaranteed for anyone. Sometimes you'll get lucky, sometimes you won't'. At the same time, though, users continue to accuse platforms of shadow banning their content (Cook, 2020; Bambrough, 2020). And, in fact, Instagram has stated that while some posts 'may not go against our Community Guidelines, they might not be appropriate for our global community, and we'll limit those types of posts from being recommended on Explore and hashtag pages' (Instagram, 2021). The platform uses a 'variety of signals, for example, if an account has recently gone against our Community Guidelines, to determine which posts and accounts can be recommended to the community' (ibid.).

Shadow banning remains an under-researched topic, even though related issues such as content moderation (Roberts, 2018; Gerrard and Thornham, 2020), platform governance (Beer, 2009; Katzenbach and Ulbricht, 2019) and (content curation) algorithms (Gillespie, 2013; Hallinan and Striphas, 2016) have been widely addressed. In this paper, we analyse online discussions on shadow banning on the popular discussion forum Reddit to understand how users experience, find evidence for, and make sense of the controversial practice on different social media platforms. In taking this perspective, we are inspired by current discussions on platform governance that, instead of treating algorithms and content moderation as inherently inaccessible 'black boxes', study how they become objects of everyday experience, practical knowledge and intentional action on users' part (Bucher, 2018; Bishop, 2019; Myers West, 2018; Gerrard, 2018). Indeed, on social media platforms, meeting one's ends – from information access and basic self-presentation goals to professional advancement – increasingly requires taking platforms' moderation practices and algorithmic logics into account.

Researchers have, for example, highlighted how users form beliefs or folk knowledge about algorithmic processes through practical engagements, everyday conversations, and news reporting about social media algorithms (Eslami et al., 2016; Bucher, 2018; Bishop, 2019; Lomborg and Kapsch, 2019). These beliefs, in turn, shape behaviour in algorithmically mediated spaces, and so mapping them is necessary for understanding user activities. In an analysis of the formation and exchange of algorithmic knowledge among YouTube beauty vloggers, Bishop (2019) argues that 'communally and socially informed theories and strategies pertaining to recommendation algorithms' can even be used as 'proof of how algorithms work and have worked'. However, as we will show, the concept of shadow banning is too contested a term to allow one to infer, based solely on user discussions, whether the practice actually exists as such. Indeed, instead of seeking to establish epistemic certainty around shadow banning, we take the term's instability as our point of departure. We approach 'shadow banning' as both a folk construct and a manifestation of algorithmic culture, asking what the experiences, theories and opinions articulated in connection with the term reveal about contemporary platform politics and relations. While we use the term primarily as an access point for understanding how and when content moderation and curation algorithms 'come to matter' (Bucher, 2018: 6), the results of our study may still provide speculative insight into social media platforms' rapidly developing content moderation practices and policies, guiding further investigations into them.

Moreover, marginalised and other communities who have spoken out about being negatively affected by platform policies in general have raised similar concerns vis-à-vis shadow banning (e.g. Akpan, 2020; Tierney, 2018; Cook, 2019). Therefore, we include in our study an analysis of one such community: sex workers. It is vital that suspected biases are not ignored but studied further, so that practices that may produce systematic inequalities can be discovered, called out and changed for the better. Evidence of individual cases does not automatically mean that social media platforms are purposefully discriminatory (although that is, of course, a possibility that should be considered). Nevertheless, such claims should be taken seriously. They may indicate platform logics that are unknown or invisible to 'average' users, but still a defining feature of the experiences of specific communities. Overall, our discussion highlights user agency and its limits in relation to (algorithmic) content moderation and curation. Our analysis demonstrates how the majority of affective, communicative and economic repercussions of content moderation and deprioritization – typically thought to result from the shortcomings of individuals – are better understood in a collective register: as symptoms of sociotechnical systems that foreground commercial gains over the wellbeing of their everyday users.

2. Initial Data Sets

2.1. The description of datasets

We had two initial datasets of Reddit posts and comments, created by manually combining multiple smaller datasets generated by specific queries in the 4CAT software (Peeters and Hagen, 2018). No time period was specified in the search tool, resulting in posts from 2012 until 6 January 2021, the date of the query. The two complete datasets are available from the authors. All user data was automatically anonymized by 4CAT.

The first dataset represents Reddit discussion about shadow banning on three popular social media platforms. It consists of Reddit posts and comments that mention 'shadow ban' in the subreddits r/Instagram, r/InstagramMarketing, r/TikTok, r/TikTokHelp, r/YouTube and r/YouTubers. This initial dataset consisted of 1618 posts. We excluded Twitter-related subreddits and alternate spellings of 'shadow ban', since the dataset was already abundant for the scope of this research.

The second, much smaller, dataset represents the discussion about shadow banning within specific Reddit communities of sex workers and pole dancers. It consists of Reddit submissions and comments that mention 'shadow ban', 'shadow banned', 'shadowban' or 'shadowbanned' in the subreddits r/poledancing, r/Stripper, r/Strippers, r/CamModel, r/AskAnEscort, r/SexWorkers, r/OnlyFans101, r/exoticdancers, r/CamModeling, r/CamGirlProblems, r/WitchySexWork, r/HighEndEscorts, r/SexWorkersOnly, r/SellerCircleStage, r/LegalizeSexForCash, r/MaleSexWorkersOnly, r/SexWorkersOfTwitter, r/CamModelCommunity, r/SexWorkersAnonymous, r/sexworknews and r/talesfromsexwork. The initial dataset consisted of 480 posts. The alternate spellings of 'shadow ban' and a Twitter-specific sex worker subreddit were included because, after testing the results of different queries, we found that they added relevant data to our final dataset.
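To illustrate the retrieval step, the sketch below shows how posts mentioning a query term could be fetched directly from the Pushshift Reddit API that 4CAT builds on (see Section 2.2). It is a minimal approximation only: the helper function, the subreddit subset and the parameter choices are ours for illustration, not the exact 4CAT configuration used to build the datasets.

    import requests

    # Pushshift's public search endpoints for Reddit submissions and comments.
    ENDPOINTS = {
        "submission": "https://api.pushshift.io/reddit/search/submission/",
        "comment": "https://api.pushshift.io/reddit/search/comment/",
    }

    def fetch(kind, query, subreddit, size=100, before=None):
        """Fetch up to `size` Reddit objects matching `query` from one subreddit."""
        params = {"q": query, "subreddit": subreddit, "size": size}
        if before is not None:
            params["before"] = before  # unix timestamp, for paging backwards in time
        response = requests.get(ENDPOINTS[kind], params=params)
        response.raise_for_status()
        return response.json()["data"]

    # Example: the first dataset's query term across a subset of its subreddits.
    for sub in ["Instagram", "TikTokHelp", "YouTubers"]:
        posts = fetch("submission", '"shadow ban"', sub)
        print(sub, len(posts))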

2.2. Limitations of the datasets

4CAT is a tool suitable for creating datasets from multiple thread-based online platforms. It is created and run by the Open Intelligence Lab at the University of Amsterdam as part of the project Opinion Dynamics and Cultural Conflict in European Spaces (ODYCCEUS), funded by the European Research Council, and it is still under development (Open Intelligence Lab, 2021a). The creators of the tool state that the data it captures should be, generally speaking, quite complete: 'all posts made since scraping started are available, as well as some older posts that were included in the initial scrapes' (Open Intelligence Lab, 2021b). As shadow banning is quite a new concept, possible gaps in older posts should not be a significant limitation to our research.

For its Reddit queries, 4CAT uses another piece of software: the Pushshift.io Reddit API, maintained by Jason M. Baumgartner (Baumgartner, 2018). The Pushshift API works as a big data storage of copies of Reddit objects, copying every submission and comment once it is posted to Reddit. This feature has limitations, however: metadata such as scores, edits or removals is not updated after a post is copied, so the data can differ from what is displayed on Reddit (u/inspiredby, 2019). It is noteworthy that the Pushshift API also contains only public subreddits, excluding private communities, and that its searches are not case-sensitive by default (Baumgartner, 2018). In addition, at the time of the research, Pushshift was warning its users that it was in the middle of moving and re-indexing old data onto a new, upgraded server cluster, which might have created occasional windows in which old data was unavailable until it had been migrated (u/Stuck_In_the_Matrix, 2020).

We acknowledged these limitations but did not consider them a fundamental obstacle to our study. Even though metadata can in some cases be an invaluable source of information (Sánchez Querubín, 2021), our primary focus was not on Reddit metadata but on the content of the posts. We were also aware of the potential issues that Pushshift.io might have had during our queries through 4CAT, but at the time of the research we did not observe particular oddities in the query results. Additionally, we obtained abundant research material that began to saturate as we went through it; thus, we believe that even if some data had been missing, it would not have affected the essence of our analysis results. The possible loss of material in the sex worker dataset might have had a more significant impact on our research, since it was a scarcer source in the first place, but we added different spellings of 'shadow ban' and ran queries on different days to compensate for the possible loss of data.

3. Research Questions

How is shadow banning discussed, made sense of, and evaluated by users, and what can we infer about the practice itself through user experiences?


  • How is shadow banning discussed in a community that faces discrimination, e.g. sex workers? Are there differences between mainstream users and this community?

4. Methodology

4.1. The methodological process

Our aim was to study how shadow banning is discussed and made sense of in certain platform-related and sex work related subreddits. We wanted an entry point to a phenomenon that is discussed considerably on social media but about which little is officially known, and to see what that entry point could show us about the relationships between users and certain social media platforms. To understand the nature of users' knowledge, beliefs and experiences closely, comprehensively and critically, we applied qualitative methods: qualitative content analysis (e.g. Prior, 2014) of the Reddit posts and inductive categorization of their contents.

The known strengths of qualitative research, such as context sensitivity and in-depth insight into complex social phenomena, and its limitations, such as the limited generalizability of findings (e.g. Chowdhury, 2015), also apply to our study. However, the data was scraped with the digital tool 4CAT, which filtered an extensive number of Reddit posts and comments for us, resulting in a presumably quite complete set of all the posts that mention shadow ban in the given subreddits (from 2012 to January 2021), which perhaps adds a complementary layer of credibility to our analysis results. The practical limitations of our study relate to the short timespan for completing this research, making it more vulnerable to errors and inevitably limited in scope.

We used a mix of methods in our research. As a first filter, we used the data capture tool 4CAT to retrieve all Reddit posts using the term 'shadow ban' from six popular Instagram, YouTube and TikTok related subreddits, resulting in 1618 posts. We assembled the lists of relevant subreddits for both datasets from various internet and online media sources, but ultimately there is no way of knowing whether all relevant subreddits are included. We also ran searches for three alternate spellings of 'shadow ban'; however, this resulted in over 7000 posts. After familiarizing ourselves with the data, it became evident that for the purpose and scope of this study, the initial dataset of 1618 posts was sufficient.
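For reference, the four spelling variants can be captured with a single case-insensitive pattern; the sketch below is illustrative and the function name is ours. Pushshift searches are case-insensitive by default, which the flag mirrors.

    import re

    # Covers all four query variants: 'shadow ban', 'shadow banned',
    # 'shadowban' and 'shadowbanned'.
    SHADOW_BAN_RE = re.compile(r"shadow ?ban(?:ned)?", re.IGNORECASE)

    def mentions_shadow_ban(text):
        return bool(SHADOW_BAN_RE.search(text))

    assert mentions_shadow_ban("I think I got Shadowbanned on IG")
    assert not mentions_shadow_ban("my account got banned outright")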

The Reddit posts were then qualitatively assessed, separated by platform, and further manually categorised according to how the post discussed shadow banning. The categories were: beliefs and working theories of shadow banning; evidence of shadow banning; advice and strategies to avoid or end a shadow ban; disbelief in the shadow ban's existence or in how it is discussed; more personal, emotional accounts of shadow banning; consequences users reported as a result of being shadow banned; and requests for help and advice. The categories were determined based on a preliminary exploration of the data and refined, when necessary, as we went through it. We excluded posts and comments that contained very little information or that referred to shadow banning within Reddit itself.
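Although the coding itself was done manually, the scheme can be summarised as a simple lookup from short codes to category descriptions, which also makes tallying the codes straightforward. The snippet below is purely illustrative of how the categories were organised, not an automated classifier we used.

    from collections import Counter

    # The seven codes used in the manual categorisation.
    CODES = {
        "theory": "beliefs and working theories of shadow banning",
        "evidence": "evidence of shadow banning",
        "strategy": "advice and strategies to avoid or end a shadow ban",
        "disbelief": "disbelief in the shadow ban's existence",
        "emotion": "personal, emotional accounts of shadow banning",
        "consequence": "reported consequences of being shadow banned",
        "help": "requests for help and advice",
    }

    def tally(coded_posts):
        """Count how many posts were assigned to each code."""
        return Counter(code for _post_id, code in coded_posts)

    # e.g. tally([("t3_abc", "strategy"), ("t3_def", "help")])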

Additionally, to examine specific communities that have spoken out about being affected by shadow banning, 21 sex work and pole dancing related subreddits were searched for all posts containing 'shadow ban', 'shadow banned', 'shadowban' or 'shadowbanned', resulting in 480 posts. More iterations of the key term were used here, since a query for 'shadow ban' alone produced a very limited quantity of results, though of promising quality. The sex work dataset mainly consisted of Twitter-related posts, but we focused primarily on Instagram, since it was the focus of our study. Interestingly, there was no relevant material about TikTok or YouTube. The different iterations of the keyword did not change the quality of the posts collected but allowed for more relevant data to be studied. Once the posts of both datasets were categorised, their contents were further qualitatively examined by category. Comparisons were drawn between the social media platforms and between the two datasets. Finally, we formed a synthesis of the analysis results.

4.2. Reddit as a data source

Reddit is a popular online platform on which people can discuss and share content anonymously in different communities, or subreddits. The platform resembles an online forum: users can post, comment and vote on each other's content in any chosen community and start or join new subreddits according to their interests (Reddit Inc., 2021). As Amaya et al. (2019), among others, have noted, Reddit is an attractive source of research data for several reasons. It is very popular – the 18th most popular website in the world, just ahead of Netflix (Alexa Internet Inc., 2021a), and the 7th most popular in the U.S., just behind Facebook (Alexa Internet Inc., 2021b), as of 4 February 2021 – which makes it a relevant and abundant source of information.

Perhaps one of the major advantages of Reddit data from the point of view of our research is that the platform is organized around communities of specific interest (Amaya et al., 2019: 2), which makes finding relevant discussions relatively easy once the relevant Reddit communities have been identified. The forum-like affordances create a fruitful opportunity to study organic and lively online discussion inside specific communities, which is valuable for our study since we are interested in material that contains users' experiences and beliefs that are individually expressed but socially negotiated.

The subreddit structure, combined with the anonymity of discussion, is especially useful for researching sensitive or stigmatized matters; Reddit data has previously been used to study, for example, discussion in pro-eating disorder communities (Sowles et al., 2018), the sexual identities of men (Robards, 2018), membership in mental health communities (Park et al., 2018) and 'lived' atheism on Reddit (Lundmark and LeDrew, 2019). Anonymity allows (but does not necessarily result in) open and honest accounts of delicate matters (Amaya et al., 2019: 2) and peer support without fear of being personally stigmatized (e.g. Tanis, 2008). In this respect, studying the discussions inside multiple sex work related online communities can be an invaluable way to access intimate experiences, realities and knowledge that would otherwise not be accessible or known to outsiders of the community. To secure the anonymity of users, the data was anonymised, and direct quotes are presented in this paper with minimal information to prevent the tracing of users.

However, Reddit data has some limitations. It does not provide socio-demographic background data on its users, and the user base is in all likelihood skewed towards men and the young (Amaya et al., 2019). In addition, Reddit itself is not a neutral ground for social interaction and self-expression; its affordances shape the form and content produced on it. Indeed, Adrianne Massanari (2017), for example, has shown how Reddit's design, algorithm and platform politics implicitly support misogynistic 'toxic technocultures' online.

5. Findings

5.1 First-order folk theories: the formation and exchange of speculative knowledge

Users came to Reddit to ask for advice on whether they were shadow banned and what to do about it, and to exchange beliefs, strategies and advice pertaining to shadow banning. The structure of the conversation was strikingly similar even when users were talking about different platforms. Discussions on shadow banning did, however, reflect the platforms' distinct affordances. For example, among Instagram users, suspicions of shadow bans were typically triggered by an otherwise unexplainable decrease in the number of likes or new followers one obtained. Meanwhile, in the TikTok subreddits, a drop in views and no longer getting onto the For You Page (FYP) – TikTok's personalised, algorithmically curated front page – as one used to was thought to indicate a shadow ban. While in the case of TikTok and Instagram shadow bans were typically discussed in relation to a whole account losing visibility, among YouTube users shadow banning was understood as something that could happen to comments and videos in addition to channels.

The reported decreases in engagement could be staggering – for example, one TikTok user described the number of views their videos received dropping from as many as 50 million to as low as 50. The opacity, arbitrariness and unaccountability of algorithmic governance drove users to ask for help on Reddit, in an effort to regain control over otherwise unexplainable circumstances. Interestingly, the term 'shadow banning' was used even by those who didn't believe they were shadow banned per se, but who felt that something equally 'shady' and opaque was happening to their accounts: 'I'm wondering what's wrong, I don't think it's a shadow ban because I still get views on old posts but it's still super weird' (TikTok user).

Others replied, sharing their assessments of the situation and practical advice. Here, the concept of the 'folk theory' – which refers to everyday conceptions of how the world works and has been applied in many studies of everyday life with algorithms (Ytre-Arne and Moe, 2020; Eslami et al., 2016; Toff and Nielsen, 2018) – is especially relevant. While folk theories are often cursory and unsystematic, the word 'theory' nonetheless hints 'toward models or principles intended to hold up in the face of various empirical realities' (Ytre-Arne and Moe, 2020: 5). Perhaps due to the hands-on nature of the conversations, the most common beliefs articulated in relation to shadow banning were relatively technical and individualised. They were typically accompanied by corresponding strategies for how to avoid or 'cure' a shadow ban. This finding attests to the productive nature of algorithm-related beliefs in shaping user behaviour and thus partaking in algorithmic world-making (Eslami et al., 2017; Bucher, 2018). We call these technical beliefs 'first-order' folk theories.

We found that users had developed elaborate tactics for diagnosing and attempting to resolve shadow bans – or, as one Instagram user put it, for 'fighting back to normal'. Most commonly, one was advised to start by confirming that one indeed is shadow banned. This could be done by making a post with a unique hashtag and then using another account, or asking another person, to check whether the post shows up in search. TikTok also offers an 'analytics' service for its users, which helped them make inferences about their situation. The analytics tab shows, for example, an account's growth curve, as well as a breakdown of where the traffic to one's posts came from. If traffic gained through the FYP had diminished to very low levels, this was thought to indicate a so-called 'soft shadow ban' – discussed widely among Instagram users as well – where one's visibility is algorithmically inhibited but not altogether suppressed. Users also made their own attempts at reverse-engineering algorithms, i.e. 'examining what data are fed into an algorithm and what output is produced' (Kitchin, 2017: 24). They could, for example, repost an image that had previously been algorithmically promoted and inspect its performance. Some users made summaries of other users' experiences, or experimented with the platforms' algorithms, in an effort to gain more objective, aperspectival knowledge:

‘I’ve been doing testing on multiple different accounts from the middle of September to Thanksgiving break. … This is the information I’ve gathered running these small test accounts.’ (TikTok user)

First-order folk theories articulated many potential technical explanations for shadow bans. It was typically believed that Instagram shadow banned or 'soft shadow banned' users who utilised third-party applications to, for example, grow their following or upload images from a computer instead of a smartphone, or who engaged in 'bot-like' behaviour. Certain hashtags could likewise be shadow banned, in which case Instagram or TikTok posts using them wouldn't show up to others. Consequently, it was advised that one should stop using external applications; avoid using shadow banned hashtags; or avoid certain behaviours, such as excessive hashtag use, in order not to seem like a computer in the eyes of the algorithm. Especially among users of Instagram, it was believed that previous content takedowns – whether or not they had resulted from actual/legitimate violations of community guidelines – caused a period of restricted visibility that could last anywhere between two days and 'forever'. Meanwhile, TikTok users advised one another to delete all content that might be in any way offensive (e.g. videos containing curse words, alcohol, or depictions of someone getting hurt) – or, intriguingly, any videos that were not 'well-lit'. Indeed, in the comments we analysed, algorithms began to emerge not as objective, efficient and knowable, but as authoritarian and highly capricious entities, leaving users struggling to understand their quirks or, in the words of one user, 'follow their mysterious rules':

I've never seen a platform punish its users this much for the smallest things … Why do they feel the need to shadow ban us for the smallest things? (Instagram user)


5.2 Uncertainty and contestation

I’m really stretching my brain to try and figure out what the problematic content is. Is it because my older buddy falls down? Is that depicting harm to a child? Does it [the algorithm] not like the music because it’s not part of their library? Is it because it shows (part?) of a license plate on a car, and that’s identifiable information? (TikTok user)

As the above quote illustrates, while algorithms are ubiquitous, unavoidable and noticed by users, it may nonetheless be impossible to effectively know or address them. Indeed, rather than resulting in 'algorithmic expertise', the 'algorithmic gossip' (Bishop, 2019) that users' first-order folk theories and their corresponding strategies constituted was often indeterminate in tone:

'Nobody knows what triggers shadow ban, how it can be lifted or how long it takes. Also, you will get no support when reporting the problem. All you can do now is to wait (and report the issue although it probably has no effect)' (Instagram user)

This uncertainty was reflected in the fact that advice could be contradictory: for example, while some advised others to continue posting as usual during a shadow ban, others warned that one should abstain from all activity in order to avoid further penalisation. Suggested strategies didn't necessarily work, with shadow banned users saying that they had tried every piece of advice they had found, but to no avail. The uncertain, hard-to-control and seemingly incurable nature of perceived shadow bans made users posit all sorts of hypothetical practices and technical agencies organising their experience. For example, it was believed that Instagram and YouTube had databases of users with 'non-optimal standing' whose content they monitored more intensely, or that shadow bans were not account-specific but connected to a device or an IP address. This would explain why creating a new account wasn't necessarily enough to free oneself from algorithmic deprioritization.

Perhaps most interestingly, the very definition of what constituted a shadow ban – or whether the practice even existed – was itself uncertain and contested. For some users, many experiences that others claimed to be shadow banning were simply how the algorithm is supposed to work: to filter out irrelevant and non-engaging content.

'Always check your analytics. If it has at least 1% FYP source, you're not shadowbanned, the algorithm just doesn't like the content you've posted.' (TikTok user)

'Up until recently YouTubers got a free ride, having their content promoted to all age groups. Because of recent scandals a lot of videos have been given a mature rating and no longer showing to young people. So it's not a shadow-ban but more of a got bonus views to inappropriate audience until it was fixed to what it should be' (YouTube user)

'So many people blame low views on shadow ban. Gives them a comfortable excuse to justify their bad content not performing well.' (TikTok user)

Some users even sided with platforms' official statements, voicing that the practice of shadow banning doesn't exist in any form and is 'just a myth of the users' (Instagram user). Disbelief in shadow banning is a folk theory about algorithms in and of itself – one that draws on the 'meritocratic framework', the conception that 'talent will rise to the top' (Littler, 2013: 52). Here, 'the subtext is not to worry about the technical requirements of algorithmic visibility: just simply make good content' (Bishop, 2019: 2600).


5.3 Second-order folk theories: politicizing algorithms

If 'first-order' folk theories about shadow banning were typically technical and individualised – centering on individual behaviours and on technical details and explanations (e.g. shadow bans are caused by the improper use of affordances like hashtags) – what we call 'second-order' folk theories were less interested in practicalities, or in correcting the behaviours of individual users. Rather, they tried to place the shadow ban in a broader context that exceeded first-order theories' focus on the individual user. They concerned themselves with the motivations behind shadow banning, typically articulating the shadow ban as a tool for discriminating against certain users or content. For example, one distinct type of second-order folk theory saw shadow bans as driving the commercialisation, and thus standardisation, of content:

'Not a fan of his channel at all … but it's deplorable from YouTube to ban a channel just because he's brash and offensive. … The only way to be accepted by YouTube is to make cookie cutter vlogs or fortnite videos lol' (YouTube user)

'Did you ever take a good look at what they promote on their homepage? Not what is beneficial to mental growth but what takes people's attention and time and changes their brains into shapeless jelly.' (YouTube user)

Further, shadow bans were believed to have been weaponized against specific political groups or ideologies. These second-order theories were most often voiced by conservative or right-wing users:

'They [YouTube and Google] still shadow ban people, mostly for ideological "crimes". No one ever sees your videos, no one ever reads your comments, you become persona non grata forever. Welcome to the liberal wasteland that is Google.' (YouTube user)

Second-order folk theories easily veered towards the conspiratorial, sometimes self-consciously so. For example, one user explained covering 'content about the military' and mentioning 'something about US military and the South Pacific' in one of their TikTok videos' comment threads: 'Now this may be on conspiracy theory levels, but seeing as TikTok is owned by a Chinese company I was wondering if it could have triggered something in their shadow ban algorithms…'. Characteristics of conspiracy theories include, for example, disbelief in the 'official account', the assumption of nefarious intent, and the overinterpretation of random occurrences as parts of a broader pattern (Cook et al., 2020). While these 'algorithmic conspiracies' – which indeed challenge platforms' official narratives of shadow bans as non-existent, assume that social media companies have a nefarious agenda, and see intent in occurrences that might as well have been glitches or false positives – can at first glance seem silly or untenable, there is something highly revelatory about the fact that users develop them. While often seen in a negative light, conspiracy theories have been conceptualised as tools of resistance employed by the oppressed (Turner, 1993), as well as 'poor person's cognitive mapping' (Jameson, 1988: 356), i.e. unsophisticated and misguided attempts at imagining and representing the abstract systems underlying and shaping lived experience. These approaches highlight conspiracy theories as, if not rational, at least intelligible responses to unequal power relations, to unknowns and crises of representation, and to information asymmetries. Indeed, and as demonstrated, if folk theories – understood as a 'cultural resource from which we can construct strategies for action' (Nielsen, 2016: 842) – fail to help people operate online, is it a surprise that they turn towards conspiracy theories instead?

A further, related dimension of second-order folk theories was that they identified shadow bans as resulting from platforms’ abuse of power rather than the mishaps of individual users. As such, they had more political/subversive potential than first-order folk theories and algorithmic gossip.

'Right now, it [YouTube] is getting away with everything it wants just because it is a big corporation. Can you imagine what would happen if the government decided to strip you away from growth opportunities or shadow ban you?' (YouTube user)

'You may have noticed a pattern where social media platforms start with a reverse chronological feed and then move to an algorithmic feed after hitting a critical mass of users and eliminating all competing services. Once they make that switch, said platform goes from good to evil. … Reverse chronology is a free democracy. "Optimized" is corporate fascism' (YouTube user)

5.4 Being affected: emotional and other real-world consequences

In the previous sections, we've highlighted how users exchange information about, contest the definition of, and form speculative knowledge of shadow bans. In doing so, we've been careful not to overemphasise users' algorithmic expertise, but have sketched out the uncertainty that permeates their attempts at knowing and goal-oriented action. However, recent research on everyday life with algorithms stresses not just the rational and cognitive, but also the emotional dimension of sense-making in relation to datafication and algorithms (Bucher, 2018; Ytre-Arne and Moe, 2020; Lomborg and Kapsch, 2019; Ruckenstein and Granroth, 2020). Emotions have epistemic value for social researchers. From a sociological perspective, emotions are meaningful responses to social conditions, occurrences and relationships that have significance and consequences in terms of lived lives. Tracing expressions of feelings like frustration and hopelessness can, then, be used to map both algorithmic and user agency, and their limits.

Algorithms have the ability to affect people; they become ‘strangely tangible in their capacity to create certain affective impulses, statements, protests, sensations and feelings of anger, confusion or joy’ (Bucher, 2018: 94). Indeed, we found that people didn’t come to Reddit merely to exchange and accumulate information and develop and revise folk theories, but also to share the affective repercussions of perceived shadow bans, and seek out emotional support. The strongest emotions were expressed in tandem with the negative impacts of algorithmic deprioritization on everyday life. Users were saddened by their sudden inability to communicate with others, or disappointed and angered about the hard work they had put into developing their profiles going to waste:

I just need answers because this page is my baby and I don't wanna see it die. (Instagram user)

As Instagram is a huge platform for artists, giving massive opportunities, it really makes me depressed, sad and unmotivated … it would be nice to have a legit audience with decent popularity so I actually have a chance to be spotted by recruiters etc. (Instagram user)

People who used social media platforms for professional purposes were undoubtedly hit the hardest by perceived shadow bans. However, the users were bound by a shared feeling of continuous contingency and uncertainty. From the perspective of this finding – of the inability to satisfyingly understand or act on one's circumstances – the algorithmic gossip analysed here comes across as an attempt at gaining a (however fictitious) sense of control and agency in circumstances that one can, in reality, do very little to manage or change.

5.5 Shadow banning among a marginalised community: the case of sex workers on Reddit

While the Instagram, YouTube and TikTok related subreddits were meant for anyone interested in these topics, Reddit's sex work communities were mostly directed at other sex workers. We wanted to include sex worker subreddits in this study to see if and how their experiences and knowledge of shadow banning differed from those of other users, and whether this could tell us something about their realities on these platforms. Sex work, and prostitution as one of its forms, has long historical continuities of stigmatization, political contestation, criminalization and marginalization (e.g. Ditmore, 2010), and it is thus valuable to study how these continuities might transfer into online environments. The fact that certain forms of sex work, such as prostitution, are still criminalized in several countries (ProCon.org, 2018), including almost all U.S. states (McKinley, 2019), makes the question of sex workers in online environments particularly tense and many-sided.

‘Adding on to this, don't post your onlyfans link directly in your bio or you'll get shadowbanned. It happened to me. I'm pretty sure Instagram's algorithm just does it automatically for certain websites.’ (r/SexWorkersOnly)

In line with the findings from the first dataset, people usually understood shadow banning as something that negatively affects their likes, follower engagement, and the visibility or findability of their posts or accounts. Similarly, feelings of frustration, uncertainty and discouragement were common, often related to not understanding the reasons behind shadow banning or to the resulting economic losses. A diverse, though not entirely unanimous, range of tactical 'tips and tricks' was exchanged for navigating a shadow ban or the risk of one. Furthermore, first-order folk theories were predominant: people often associated the shadow ban with something technical that they or other users had 'done wrong', which had triggered it. Although the triggering mechanisms were often not explicitly speculated on further, the supposition that certain content and behaviour is auto-detected and/or flagged was visible.

‘Keep the posts suggestive but not nude. Influencers get away with a lot more but IG has a super annoying algorithm and if you get reported enough you will get shadow banned or a full ban’ (r/CamGirlProblems)

There was an essential difference, however. Even though some tactics, such as avoiding spam-bot-like behaviour, were similar across the datasets, the sex workers' tactics suggested that anything that might indicate sex work was perceived as a potential trigger for shadow banning, or outright banning, on Instagram, even if the content was otherwise moderate: putting a direct link to sex work associated pages (e.g. OnlyFans) in one's bio; using sex work related and/or too sexually explicit, 'broken' or 'banned' hashtags; posting too sexually explicit or nude content; and using certain 'suggestive' emojis. A few users felt that the platforms tolerated nude content unequally, in favour of influencers.

'A lot sex work pages get shadowbanned unfortunately' (r/SexWorkersOnly)

'In closing I'll add that we [sexworkers]'re being shadowbanned on Twitter and deleted on Instagram' (r/SexWorkers)

'Instagram is....VERY touchy. They will delete your account for no reason with zero warning' (r/SellerCircleStage)

The crucial difference between the two datasets, however, was that the existence of the shadow ban was not questioned at all in the sex work subreddits (zero posts were categorized under disbelief), whereas in the platform subreddits it was rather contested. The shadow ban was perceived as a self-evident fact and a risk that needed to be coped with accordingly, although no single fool-proof or particularly efficient tactic was reported or agreed upon. The shadow ban was often interlinked with the banning of content or accounts, and experiences of being removed and/or shadow banned multiple times on the platforms were not uncommon. In fact, Instagram specifically was seen as particularly hostile towards sex workers, having zero tolerance for anything explicitly sex work related, while Twitter seemed slightly more accepting.

'[--] it sucks that they [Instagram] hate sex workers so much.' (r/SexWorkersOnly)

Explicit second-order theories were otherwise rare. The shadow ban was understood as a platform's tool: to execute and enforce certain policies, e.g. excluding sex workers from the platforms; to delete or hide certain content related to nudity in order to appear 'family friendly'; or to avoid the legal consequences of the new counter-sex-trafficking SESTA/FOSTA legislation. It was also seen as a manifestation of misogynistic culture and patriarchy. However, while the perceived hostility of platforms towards sex workers was quite a common perception in the sex work related subreddits, the other second-order theories were mentioned in only a few individual accounts. Perhaps surprisingly, the shadow ban and the policies of the platforms were rarely problematized in political terms, and calls for collective action were extremely rare.

6. Discussion

The impacts of algorithms and content moderation unfold in real-life contexts, and therefore, in order to improve content moderation practices, we need to study users' experiences. Our research shows that content moderation assemblages (cf. Gerrard and Thornham, 2020) generate feelings of misrecognition, frustration and hopelessness at a mass scale. Users don't want to game the rules but to follow them; however, they aren't given the chance. Even if shadow banning weren't real – as Instagram, TikTok and other major social media platforms have stated – or were less common than users claim, the popularity of the concept undoubtedly indicates that in users' lives, platforms' visibility algorithms and moderation practices routinely register as self-interested, threatening and arbitrary. Uncertainty gives rise to speculation and rumours. Here, Bishop's (2019) argument about 'algorithmic gossip' as a response to uneven power relations between users and platforms is especially relevant. However, we would be careful about emphasising the empowering qualities of 'shadow banning gossip': based on our data, instead of enabling users to reach financial consistency and visibility on social media platforms (ibid.: 2602), users' struggles for knowledge and self-determination are often felt to be unproductive. Indeed, and as noted, no effective consensus is reached among users about the meaning, definition, or even the existence of 'shadow banning'. However, as a folk construct, despite its ambiguity – or perhaps exactly because of it – the term enables people to come together to articulate and exchange personal experiences and consequences of algorithmic power. By aggregating these first-person accounts of suspected 'shadow bans', we begin to see their structural origins. Based on our data, we cannot say anything definitive about the existence of the practice of shadow banning on the platforms studied. Yet our analysis nonetheless demonstrates that the majority of affective, communicative and professional repercussions of content moderation and deprioritization – typically thought to result from individual shortcomings – are better understood in a collective register: as symptoms of sociotechnical systems that foreground commercial gains over the wellbeing of most everyday users.

The findings of this research suggest some significant differences in user experiences between sex workers and others. Exceptionally, the existence of the shadow ban was not contested at all among the sex workers, which might indicate that the punitive actions of the platforms are experienced more by sex work communities than by other users. Furthermore, unlike the average user, sex workers seemed to struggle not only with the shadow ban but also with the deletion of their accounts, both of which commonly happened multiple times. The sex workers perceived the platforms as hostile towards them and attributed this to their profession. Rather than being problematized, the hostility and the risk of shadow bans and bans seemed to be inseparable features of sex workers' everyday experience, 'part of the job' that needed to be coped with individually. In light of historical continuities and current legislation in the U.S. and many other countries, it would hardly be surprising if the platforms sought to exclude sex workers, or at least restrict their visibility, thus replicating the marginalization of this community in online spaces.

7. Conclusion

In this paper, we have analysed how social media users experience, find evidence for, and make sense of the contested practice of shadow banning. Our research suggests that social media users strive to comply with platforms' algorithmically enforced rules and values, but are not given sufficient guidance. This is exactly why they came to Reddit: to acquire information about and make sense of other social media platforms' moderation and deprioritization practices. At times, they managed to gain some (however illusory and fleeting) sense of agency within the unequal power relationship between platform and user – but more often, they were left confused and frustrated. While the analysed discussions do not allow us to give a definitive verdict on the existence of shadow banning itself, they do point to the emerging contours of algorithmic content moderation and curation practices on major social media platforms, and they give rise to important questions of both an ethical and an empirical nature. Should platforms have a responsibility to disclose to users how their content is algorithmically 'seen' and evaluated? What would be the societal, technical or economic trade-offs of optimising content moderation/curation algorithms for a better user experience? For example, could meaningful transparency (Suzor et al., 2019) for the majority of users be reached without empowering a malevolent minority seeking to illegitimately exploit the affordances of social media for, e.g., political or commercial gain? Moreover, it would be important to study further whether certain minorities are policed more heavily than others, since there might be systematic biases that reinforce inequalities. Based on our research, we have some preliminary indication that social media platforms are experienced very differently by sex workers and 'average users'. As one of the sex workers put it: 'I have a friend who's all into the political side of Twitter, and I asked them if they ever have to worry about being "shadowbanned". They had never heard of the term before… So, it's a little surprising that certain areas of Twitter haven't even heard of the term "shadowbanned" since it apparently doesn't happen at all to them'.

8. References

Akpan, P. (2020, October 9). How Shadow Banning Affects People From Different Communities. Bustle. Accessed 7.2.2021. https://www.bustle.com/life/what-is-shadowbanning-and-how-does-it-work.

Alexa Internet Inc. (2021a February 4). The top 500 sites on the web. Amazon.com. Accessed 4.2.2021. https://www.alexa.com/topsites.

Alexa Internet Inc. (2021b February 4). Top Sites in United States. Amazon.com. Accessed 4.2.2021. https://www.alexa.com/topsites/countries/US.

Amaya, A., Bach, R., Keusch, F., & Kreuter, F. (2019). New Data Sources in Social Science Research: Things to Know Before Working With Reddit Data. Social Science Computer Review. https://doi.org/10.1177/0894439319893305.

Bambrough, B. (2020, April 30). Twitter Accused Of ‘Shadow-Banning’ Bitcoin And Crypto Accounts. Forbes.

Baumgartner, J. M. (2018). Pushshift Reddit API v4.0 Documentation. Accessed 7.2.2021. https://reddit-api.readthedocs.io/en/latest/#how-many-objects-are-indexed-on-the-back-end.

Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985-1002.

Bishop, S. (2019). Managing visibility on YouTube through algorithmic gossip. New media & society, 21(11-12), 2589-2606.

Bucher, T. (2018). If... then: Algorithmic power and politics. Oxford University Press.

Chowdhury, M. F. (2015). Coding, sorting and sifting of qualitative data analysis: debates and discussion. Quality & Quantity, 49, 1135–1143.

Cook, J. (2020, February 25). Instagram’s CEO Says Shadow Banning ‘Is Not A Thing’. That’s Not True. HuffPost.

Cook, J., van der Linden, S., Lewandowsky, S. and Ecker, U. (2020, May 15). Coronavirus, ‘Pandemic’, and the seven traits of conspiratorial thinking. The Conversation.

Ditmore, M.H. (2010). Prostitution and Sex Work. ABC-CLIO, LLC: Westport.

Eslami, M., Karahalios, K., Sandvig, C., Vaccaro, K., Rickman, A., Hamilton, K., & Kirlik, A. (2016). First I 'like' it, then I hide it: Folk Theories of Social Feeds. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2371-2382.

Gerrard, Y., & Thornham, H. (2020). Content moderation: Social media’s sexist assemblages. New Media & Society, 22(7), 1266-1286.

Gerrard, Y. (2018). Beyond the hashtag: Circumventing content moderation on social media. New Media & Society, 20(12), 4492-4511.

Gillespie, T. (2013). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies (pp. 167–193). Cambridge: The MIT Press.

Hallinan, B., & Striphas, T. (2016). Recommended for you: The Netflix Prize and the production of algorithmic culture. New media & society, 18(1), 117-137.

Instagram. (2021). Why are certain posts on Instagram not appearing in Explore and hashtag pages? Accessible at: https://www.facebook.com/help/Instagram/613868662393739?helpref=uf_permalink

Jameson, F. (1988). Cognitive mapping. Marxism and the Interpretation of Culture, 348.

Katzenbach, C., & Ulbricht, L. (2019). Algorithmic governance. Internet Policy Review, 8(4), 1-18.

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29.

Lomborg, S., & Kapsch, P. H. (2019). Decoding algorithms. Media, Culture & Society, 42(5), 745-761.

Lundmark, E. and LeDrew, S. (2019) ‘Unorganized atheism and the secular movement: Reddit as a site for studying ‘lived atheism’’, Social Compass, 66(1), pp. 112–129. doi: 10.1177/0037768618816096.

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. doi: 10.1177/1461444815608807.

McKinley, J. (2019, May 31). Could Prostitution Be Next to Be Decriminalized? New York Times. Accessed 7.2.2021. https://www.nytimes.com/2019/05/31/nyregion/presidential-candidates-prostitution.html.

Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366-4383.

Nielsen, R. K. (2016). Folk theories of journalism. Journalism Studies, 17(7), 840–848.

Open Intelligence Lab (2021a). What is 4CAT?. The University of Amsterdam. Accessed 7.2.2021. https://wiki.digitalmethods.net/Dmi/Tool4CAT.

Open Intelligence Lab. (2021b). Frequently Asked Questions. The University of Amsterdam. Accessed 7.2.2021. https://wiki.digitalmethods.net/Dmi/Tool4CAT.

Park, A., Conway, M., & Chen, A. T. (2018). Examining thematic similarity, difference, and membership in three online mental health communities from Reddit: A text mining and visualization approach. Computers in Human Behavior, 78, 98–112.

Peeters, S., & Hagen, S. (2018). 4CAT: Capture and Analysis Toolkit. Computer software. Vers. 1.0.

Prior, L. (2014). Content Analysis. In P. Leavy (Ed.), The Oxford Handbook of Qualitative Research (1st edn). Oxford: Oxford University Press.

ProCon.org. (2018, April 23). Countries and Their Prostitution Policies. Britannica ProCon.org. Accessed 7.2.2021. https://prostitution.procon.org/countries-and-their-prostitution-policies/#.

Reddit Inc. (2021). About. Accessed 7.2.2021. https://www.redditinc.com/.

Robards, B. (2018) ‘‘Totally straight’: Contested sexual identities on social media site Reddit’, Sexualities, 21(1–2), pp. 49–67. doi: 10.1177/1363460716678563.

Roberts, S. T. (2018). Digital detritus: 'Error' and the logic of opacity in social media content moderation. First Monday, 23(3). https://doi.org/10.5210/fm.v23i3.8283.

Ruckenstein, M., & Granroth, J. (2020). Algorithms, advertising and the intimacy of surveillance. Journal of Cultural Economy, 13(1), 12-24.

Sánchez Querubín, N. (2021 January 4). Close-reading user-generated (meta)data [Live tutorial]. Digital Methods Winter School 2021. The University of Amsterdam.

Sowles, S., McLeary, M., Optican, A., Cahn, E., Krauss, M., Fitzsimmons-Craft, E., ... Cavazos-Rehg, P. (2018). A content analysis of an online pro-eating disorder community on Reddit. Body Image, 24, 137–144.

Suzor, N. P., West, S. M., Quodling, A., & York, J. (2019). What do we mean when we talk about transparency? Toward meaningful transparency in commercial content moderation. International Journal of Communication, 13, 18.

Tanis, M. (2008). Health-Related On-Line Forums: What's the Big Attraction? Journal of Health Communication, 13(7), 698–714.

Tierney, A. (2018, April 2). Sex Workers Say They're Being Pushed Off Social Media Platforms. Vice. Accessed 7.2.2021. https://www.vice.com/en/article/3kjawb/sex-workers-say-theyre-being-pushed-off-social-media-platforms.

Toff, B., & Nielsen, R. K. (2018). 'I Just Google it': Folk theories of distributed discovery. Journal of Communication, 68(3), 636–657.

Turner, P. A. (1993). I heard it through the grapevine: Rumor in African-American culture. Univ of California Press.

u/inspiredby. (2019, April 14). New to Pushshift? Read this! FAQ. r/pushshift, Reddit. Accessed 7.2.2021. https://www.reddit.com/r/pushshift/comments/bcxguf/new_to_pushshift_read_this_faq/.

u/Stuck_In_the_Matrix. (2020, November 7). Growing pains and moving forward to bigger and better performance. r/pushshift, Reddit. Accessed 7.2.2021. https://www.reddit.com/r/pushshift/comments/jplcs1/growing_pains_and_moving_forward_to_bigger_and/.

Ytre-Arne, B., & Moe, H. (2020). Folk theories of algorithms: Understanding digital irritation. Media, Culture & Society. Advance online publication. doi: 10.1177/0163443720972314.
