Emillie de Keulenaar (OILab, University of Groningen and UN Innovation Cell)
Ivan Kisjes (University of Amsterdam)
Sarah Vorndran (University of Amsterdam)
On January 8th, 2023, enraged citizens stormed government buildings in Brasília, the capital of Brazil. Prior to the January riot, Bolsonaro supporters had vehemently demanded military intervention following Luiz Inácio Lula da Silva’s (Lula) narrow win over Jair Bolsonaro, by a margin of 1.80 percentage points, on October 30th, 2022. What began as occupations of military headquarters and road blockades across cities gradually turned into an organized event, coordinated and broadcast across platforms. The country now stands at odds with the looming memory of the ‘ditadura militar’ of 1964-1985. Will history repeat itself - and if so, to what extent?
How were demands for military intervention normalized in the months leading up to January 8, 2023?
To what extent were the riots of January 8 planned? How were they planned?
What was the participation of military personnel, police officers and other security forces in these riots?
What are the core narratives and types of rhetoric that have come to normalize violence in the riots?
Five social media platforms were selected for analysis. The first three are Facebook, Instagram and YouTube, complemented by a small number of posts from WhatsApp. These platforms remain among the most popular in Brazil (Semrush, 2023; Similarweb, 2023). Telegram and GETTR, on the other hand, are specifically relevant for the kinds of content analysed in this study, which are more prone to content moderation on the former platforms. These include allegations of fraud in the elections, militaristic content, and attacks on institutions such as the Supreme Federal Court (STF) and the Superior Electoral Court (TSE) [cite]. One limitation of this selection of platforms is the absence of TikTok, which, upon consultation of our data, we found was frequently cited by users as a source of ‘proof’ of electoral fraud and other problematic content.
Each of these platforms was accessed with a different data collection tool. Facebook and Instagram were accessed with Meta’s own data collection tool, CrowdTangle. YouTube data was collected with youtube-dl (Gonzalez et al., 2023), a Python library used to scrape various types of YouTube content (videos, transcripts, comments, video rankings, etc.). Telegram data was collected through the Telegram API, via UN-CAT, a data collection and analysis tool based on 4CAT (Peeters et al., 2021). Twitter data was collected with the Twitter Academic API. GETTR data, finally, was collected with the platform’s own API.
The limitations of this data collection configuration stem from the constraints of each platform’s API and features. Each platform API returns a different amount of results. Twitter’s Academic API is the most generous, offering access to its entire archive (minus moderated content), with the only limit being a cap of 10 million Tweets per month. CrowdTangle also offers access to its archive, though what gets included or excluded remains subject to speculation: CrowdTangle itself has stated that it includes pages with significant numbers of followers, though some research has found otherwise [cite]. Finally, Telegram data was not always collected consistently, in at least two respects. First, unlike CrowdTangle, youtube-dl and the Twitter Academic API, Telegram’s API allows for the collection of posts from a selection of channels, rather than from posts at large. The data we obtained came from an expert list of about  channels, compiled by researchers from the Federal University of Rio de Janeiro. Second, channel admins used Telegram’s auto-deletion feature to routinely erase posts, for fear of persecution. This means that we lost posts in instances where we did not maintain a daily collection of Telegram posts [see FIGURE x].
The data we have collected are posts that mention any one of a list of relevant keywords, found here. These keywords, or queries, are meant to capture incitement to violence, hyper-antagonistic rhetoric and distrust in the Brazilian electoral process. This approach is popularly referred to as the “firehose approach”, and is the most widely used in social media-based data science and digital methods [CITE]. The themes or types of rhetoric we aimed to capture were: (1) allegations of fraud in the elections (“#BrazilWantsTheCode”, “#BrazilWasStolen”); (2) anti-media rhetoric (“#globolixo”, “#midiapodre”); (3) anti-opposition rhetoric (“#esquerdacriminosa”); (4) slander against Brazilian institutions, particularly the Supreme Federal Court and the Superior Electoral Court (“#forastf”, “Barroso na cadeia”, “#forarepublica”); (5) Bolsonaro campaign hashtags (“#bolsonaro22”); (6) calls for a military coup (“#intervençãomilitarjá”); (7) calls to join the January 8 riots (“#festadaselma”, “#brazilianspring”); (8) calls to join the strikes of November-December (“#vemproquartel”); (9) climate change denialism (“#mudançaclimáticafakenews”); (10) conspiratorial language, particularly targeting the United Nations (“#agenda2030”, for example, refers to a conspiracy theory according to which the WEF’s “Agenda 2030” is a co-opted plan to rob individuals of various freedoms); (11) general culture wars, particularly targeting women’s rights (“#naoaoaborto”); (12) indigenous rights (“#marcotemporalsim”); and (13) pro-armament rhetoric (“#armaspelavida”, etc.). It is crucial to stress that Bolsonaro campaign hashtags and keywords referring to pro-armament, anti-abortion and other culture wars issues are included not because they run counter to the SDGs or are problematic in themselves — they are legitimate expressions of a diverse ideological spectrum. They are included because they tend to co-occur with the key themes studied in this report, mentioned above.
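The capture step described above can be sketched minimally as follows. This is an illustration only: the posts and the keyword subset are hypothetical, and the real query list contains 172 entries.

```python
# Hypothetical subset of the full 172-keyword query list.
QUERIES = ["#brazilwasstolen", "#forastf", "barroso na cadeia", "#intervençãomilitarjá"]

def matches_query(text, queries=QUERIES):
    """Return True if the post text mentions any query, case-insensitively."""
    lowered = text.lower()
    return any(q in lowered for q in queries)

# Hypothetical posts, as returned by a platform API.
posts = [
    "Eles roubaram! #BrazilWasStolen",
    "Bom dia a todos",
    "Queremos Barroso na cadeia já",
]

# Keep only posts mentioning at least one query.
relevant = [p for p in posts if matches_query(p)]
```

In practice each platform API applies this filtering server-side; the sketch shows the logical operation that the queries perform.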
The process of gathering relevant keywords, or query design, must go through several iterations before keywords can be considered reliable (King et al. [CITE]). This is because words tend to be used in a variety of contexts. The keyword “Barroso na cadeia”, for example, may well be used to say “Eu não quero o [Barroso na cadeia]”. A keyword alone cannot guarantee relevant results. One must instead thoroughly contextualise the usage of every keyword to ensure that it means what one assumes it means. This is done by collecting social media posts in a few iterations, in which one finds other, more relevant keywords. This process is known as snowballing, because it starts with a small list of keywords that tends to accumulate yet more keywords. If one searches social media using the keyword “Barroso na cadeia”, for example, one may find that the resulting Tweets, YouTube videos and Facebook, Instagram and Telegram posts also mention keywords like #forastf, #foraalexandredemoraes, or even #intervençãomilitarjá. Hashtags, in particular, are good query candidates because they tend to be strong indicators of partisanship. When users mention #intervençãomilitarjá, they most likely mean to call for a military coup.
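One round of snowballing can be sketched as counting the hashtags that co-occur with a seed query, then promoting the frequent ones to candidate queries for the next iteration. The posts below are hypothetical.

```python
import re
from collections import Counter

SEED = "barroso na cadeia"

# Hypothetical posts retrieved with the seed query.
posts = [
    "Barroso na cadeia! #forastf #intervençãomilitarjá",
    "Chega! Barroso na cadeia #forastf",
    "barroso na cadeia #foraalexandredemoraes",
]

def cooccurring_hashtags(posts, seed):
    """Count hashtags in posts that also mention the seed query."""
    counts = Counter()
    for p in posts:
        if seed in p.lower():
            # \w matches accented characters under Python 3's Unicode default.
            counts.update(h.lower() for h in re.findall(r"#\w+", p))
    return counts

candidates = cooccurring_hashtags(posts, SEED)
# Frequent co-occurring hashtags become candidate queries for the next round,
# subject to the same manual contextualisation as the seed.
```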
Our query design underwent at least three iterations. We began collecting YouTube videos and posts from GETTR, Twitter, Facebook, Instagram, and Telegram groups using the original keywords from the 2020 research project, as well as additional keywords aimed at a broader set of more contemporary issues (mentioned above). The latter were gathered either through expert knowledge of the political scene in Brazilian social media, particularly in consultation with local researchers, or by loosely browsing social media posts and manually collecting a few relevant keywords and hashtags. This initial list of queries contained 172 keywords. With those, we obtained a set of posts from which we extracted the 500 most mentioned hashtags or words (n-grams). We manually selected and added some of these 500 terms to our list. Other keywords, particularly those used to find support for a military coup or the participation of army personnel in the riots, were obtained by filtering posts that mentioned military titles: “general”, “brigadeiro”, “tenente”, “major”, “soldado”, “comandante” and “coronel”. We excluded “capitão” to avoid retrieving posts (solely) about Bolsonaro.
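The military-title filter can be sketched as a word-boundary match over the listed titles, with “capitão” deliberately left out. The example posts are hypothetical.

```python
import re

# Titles from the query design; "capitão" is excluded to avoid
# retrieving posts that are (solely) about Bolsonaro.
TITLES = ["general", "brigadeiro", "tenente", "major", "soldado", "comandante", "coronel"]
TITLE_RE = re.compile(r"\b(" + "|".join(TITLES) + r")\b", re.IGNORECASE)

def mentions_military_title(text):
    """True if the post mentions any of the listed military titles."""
    return TITLE_RE.search(text) is not None

# Hypothetical posts.
posts = [
    "O general falou no quartel hoje",
    "Mito! O capitão voltará",
    "Tenente apoia os caminhoneiros",
]
military_posts = [p for p in posts if mentions_military_title(p)]
```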
Every process of data collection needs to include “data cleaning”, a step in which irrelevant posts are removed from datasets. Cleaning was done by removing posts containing noisy keywords, which can be seen in [Sheet x]. We also removed approximately 300 Telegram channels from our original expert list. This was done by manually inspecting each of these channels and confirming whether their content was in line with our query list.
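Both cleaning operations can be sketched in one pass: dropping posts that contain noisy keywords, and dropping posts from channels that did not survive manual vetting. The noisy keywords, channel names and records below are hypothetical stand-ins for [Sheet x] and the expert list.

```python
# Hypothetical noisy keywords (see [Sheet x]) and vetted channels.
NOISY = ["receita de bolo", "novela"]
VETTED_CHANNELS = {"canal_a", "canal_b"}

# Hypothetical collected records.
records = [
    {"channel": "canal_a", "text": "#ForaSTF agora"},
    {"channel": "canal_a", "text": "melhor receita de bolo"},
    {"channel": "canal_z", "text": "#ForaSTF"},
]

def clean(records):
    """Drop posts mentioning noisy keywords or coming from unvetted channels."""
    return [
        r for r in records
        if r["channel"] in VETTED_CHANNELS
        and not any(n in r["text"].lower() for n in NOISY)
    ]

cleaned = clean(records)
```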
By 2021, new research questions had emerged that made it necessary to update the original methodology. First, the very concept of ‘bots’ or ‘coordinated inauthentic accounts’ had undergone significant scrutiny within computer science research and Internet studies. The first and most consequential objection, by [CITE ET AL.], argued that bot research is subject to (a) identifying significant numbers of false positives, i.e., users who are wrongly classified as bots; and, more broadly, (b) ever-changing disinformation strategies that constantly render bot detection obsolete. In our own research [see FIGURE X], we found that some users were classified as ‘bots’ when they were in fact real humans acting as bots. One example is sycophantic and hyper-partisan behaviour, which often translates into users who constantly retweet key accounts (for example, Bolsonaro), follow many accounts but have few followers, and do not post much original content. Bot behaviour is also prone to constant changes and adaptations. When the Brazilian news outlet Globo published a study claiming that 55% of online Bolsonaro supporters were likely bots, many of them — bots and non-bots — responded defiantly with a campaign to disseminate the hashtag ‘#EuSouRobodoBolsonaro’ (#IamABolsonaroBot).
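The false-positive problem can be illustrated with the kind of naive heuristic that bot classifiers often resort to. The fields, thresholds and example profile below are entirely hypothetical; the point is that a hyper-partisan human supporter can trip every signal.

```python
def bot_score(user):
    """Crude bot heuristic: high retweet share, skewed follow ratio,
    little original content. 3/3 signals = 'likely bot' under this rule."""
    retweet_share = user["retweets"] / max(user["posts"], 1)
    follow_ratio = user["following"] / max(user["followers"], 1)
    original_share = user["original_posts"] / max(user["posts"], 1)
    score = 0
    if retweet_share > 0.9:
        score += 1
    if follow_ratio > 10:
        score += 1
    if original_share < 0.05:
        score += 1
    return score

# A hypothetical hyper-partisan human: retweets a key account constantly,
# follows many, has few followers, posts little original content.
human_superfan = {"posts": 5000, "retweets": 4900, "original_posts": 100,
                  "following": 3000, "followers": 120}
```

A user like this would be scored 3/3 and classified as a bot despite being human, which is exactly the false-positive pattern described above.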
Second, the choice of bot analysis to study political extremism in Brazil is arguably problematic in itself, because it tends to underestimate the authenticity, and thus the gravity, of such political movements. The resurgence of Brazilian militarism as a political culture, the belief that the elections were fraudulent, or the idea that the United Nations is involved in international (‘globalist’) conspiracies to undermine the national sovereignty of Brazil may well be outlandish narratives co-amplified by bots. But they emerge from sincere and profound sentiments of distrust and antagonism, which are arguably the root problem stimulating the consumption of disinformation.
For these reasons, the 2021-2022 research project adds further analyses beyond bot detection. One of these consisted of tracing how CIB strategies have changed since 2020, looking in particular at what (and how many) hashtags likely bots have used over time; periods of significant bot activity between 2021 and 2022; issues and narratives amplified by likely bots; which actors and issues likely bots tend to target; and how users have tended to define bots over time.
Beyond bot analysis, we also looked at the presence of different types of rhetoric over time. This analysis consisted of counting the number of posts, per platform, that mention keywords we associated with a type of rhetoric, or theme. If a post mentions the hashtag ‘#SOSFFAA’, for example, it will be classified as a ‘call for a military coup’. When posts contain multiple keywords, each ascribed to a different theme, we assign that post multiple themes. This analysis rendered [Figure x], which shows the number of posts per theme in broad strokes. To keep our analysis consistent with the SDGs, we also looked at which SDGs these themes targeted or were related to [FIGURE X].
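The theme-assignment step, including multi-theme posts, can be sketched as follows. The keyword-to-theme mapping shown is a hypothetical two-theme excerpt of the full list.

```python
from collections import Counter

# Hypothetical excerpt of the keyword-to-theme mapping.
THEMES = {
    "calls for a military coup": ["#sosffaa", "#intervençãomilitarjá"],
    "allegations of fraud": ["#brazilwasstolen", "#brazilwantsthecode"],
}

def themes_of(text):
    """Return every theme whose keywords appear in the post text."""
    lowered = text.lower()
    return [t for t, kws in THEMES.items() if any(k in lowered for k in kws)]

# Hypothetical posts.
posts = [
    {"platform": "telegram", "text": "#SOSFFAA #BrazilWasStolen"},
    {"platform": "twitter", "text": "#BrazilWantsTheCode"},
]

# Count posts per (platform, theme); a post may contribute to several themes.
counts = Counter()
for p in posts:
    for theme in themes_of(p["text"]):
        counts[(p["platform"], theme)] += 1
```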
To enrich this analysis, we manually studied and assigned themes to the 500 most engaged-with posts per platform. This exercise allowed us to find themes we may not have found without a close inspection of posts, and to look at the content of a representative sample of many millions of posts across all platforms. The output includes [Figures x, y and z], which illustrate in detail (x) how the January 8 attacks were planned across all five platforms; (y) how military personnel partook in the planning of the attacks and adjacent events; and (z) how ‘proof’ of electoral fraud was assembled and how allegations of fraud persisted despite legislative and platform content moderation.
Another important aspect of this study is platform and judicial content moderation against (unfounded) allegations of fraud and other problematic content. Social media platforms have had to revamp their content moderation systems somewhat dramatically since at least 2018, when platform CEOs acknowledged that the links between “online content” and “offline events” had become undeniably real [CITE]. This became all the more obvious in early 2021, when nearly all mainstream social media platforms decided to remove or ‘deplatform’ users or content related to the January 6th Capitol Hill riots. Together with the failures of moderation in Myanmar in 2018, this event set a new bar for platform content moderation to prevent similar events from happening again — particularly in highly polarised elections around the world. This study addresses this problem by inquiring how platforms moderated problematic content throughout the elections, what they targeted, and — given the many online incentives to join the January 8th attacks in Brasília — how they failed in their task.
Legislative interference in content moderation is also key, given that a large portion of content was moderated not of the platforms’ own volition, but by order of the Supreme Federal Court and the Superior Electoral Court. Their active presence online led many users to ‘innovate’ strategies for evading content moderation, for example by using a combination of more and less moderated platforms (Facebook and Telegram; YouTube and GETTR), or by increasingly obfuscating their conversations when speaking of evidence of fraud, planning the January 8 riots, or other sensitive matters.
The kinds of content moderation we capture for analysis depend on the platform. We used a combination of API calls, manual annotations and web scrapers such as Selenium to systematically collect traces of content moderation on Facebook, Instagram, Twitter, YouTube and Telegram. On Twitter, content moderation may be visible in post or user statuses — over time, some may gain labels (such as “Visit the STF website for more information”), while others may have been removed or temporarily suspended. On YouTube, videos may be removed, sometimes with a clear justification (“This video has been removed for violating the YouTube Community Guidelines”). They may also be demoted, meaning they are downgraded in search and recommendation results to disincentivise engagement with problematic content. On Telegram, content moderation is done primarily under the order of the STF and TSE, and targets entire channels instead of posts. In the lead-up to the January 8 attacks, users began to moderate themselves, for example by using increasingly cryptic language, routinely deleting their posts, turning their groups private, or deleting their entire channels.
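Detecting such traces amounts to comparing snapshots of the same post across collection rounds and looking for known moderation markers. A minimal sketch, assuming hypothetical snapshot strings and a hand-maintained marker list (the YouTube string is the one quoted above; the label heuristic is an illustrative assumption):

```python
# Hand-maintained markers that signal removal in a scraped snapshot.
REMOVAL_MARKERS = [
    "This video has been removed for violating the YouTube Community Guidelines",
    "account suspended",
]
# Hypothetical label fragment, as in "Visit the STF website for more information".
LABEL_MARKER = "for more information"

def moderation_trace(old_snapshot, new_snapshot):
    """Classify what happened to a post between two collection rounds."""
    if any(m.lower() in new_snapshot.lower() for m in REMOVAL_MARKERS):
        return "removed"
    if new_snapshot != old_snapshot and LABEL_MARKER in new_snapshot.lower():
        return "labelled"
    return "unchanged"
```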
One important limitation is that, in our results, values are sometimes presented on an absolute scale [see e.g. figures x, y and z]. These results may be biased in favour of themes with many keywords. There are, for example, 79 keywords for the theme “calls for a military coup”, but only 2 for “indigenous rights”. This means that, unless very few posts on social media relate to the latter theme, we will obtain a majority of posts categorised as “calls for a military coup”.
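One way to mitigate this bias is to normalise absolute counts by the number of keywords querying each theme. The raw counts below are hypothetical, chosen to show how two very different absolute totals can hide identical per-keyword rates.

```python
# Hypothetical raw post counts per theme, and the (real) keyword list sizes.
raw_counts = {"calls for a military coup": 7900, "indigenous rights": 200}
keywords_per_theme = {"calls for a military coup": 79, "indigenous rights": 2}

def per_keyword_rate(raw, n_keywords):
    """Normalise absolute theme counts by how many keywords query each theme."""
    return {t: raw[t] / n_keywords[t] for t in raw}

rates = per_keyword_rate(raw_counts, keywords_per_theme)
# 7900/79 and 200/2 both yield 100 posts per keyword: the themes are
# equally prevalent per keyword despite a 39x gap in absolute counts.
```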
We begin by outlining six main findings: (1) the general change of tides in political discussions from 2020 to 2023, whereby “culture war” issues were gradually replaced with attacks on Brazilian institutions and calls for a military coup; (2) changes in CIB strategies, whereby the lines between ‘bots’ and hyper-partisan user behaviour became more blurred; (3) the persistence of allegations of fraud in the elections despite legislative and platform content moderation; (4) how the January 8th attacks were planned across five platforms; (5) how military personnel partook in the attacks; and (6) failures in platform content moderation.
Main figure here
Deep polarisation between Bolsonaro supporters and opponents tied to left-wing or moderate political currents (for example, the so-called “terceira via”, led sporadically by centre-left politician Ciro Gomes, centre-right politician Simone Tebet, or ex-judge Sérgio Moro) became fertile ground for sentiments of distrust regarding the elections. Bolsonaro himself had voiced doubts that the US elections of 2020 were clean [CITE]. He began to voice concerns about the integrity of Brazilian elections in March of 2020, claiming that he could have won the 2018 elections in the first round [CITE]. Such claims were not unprecedented: in 2014, Dilma Rousseff’s opponent Aécio Neves had made similar, though milder, allegations [cit]. After 2015, however, distrust in the Brazilian electoral system was amplified by the profound distrust of large parts of the Brazilian population towards not just Lula’s Workers’ Party, but the “established” political class as a whole. In this context, it was easy to presume that an “outsider” like Bolsonaro — who had been stabbed during his 2018 campaign — would be systematically prevented from taking office by obscure forces, whether by death or by electoral loss.
On social media, allegations of fraud unfolded across four key events: (1) allegations of fraud during the vote itself; (2) a purported reservist general, Koury, claiming on his Telegram group b-38 that the only way to confirm fraud was to consult the “source code” of electronic voting machines; (3) a highly influential presentation by Argentinian political consultant Fernando Cerimedo, alleging massive voter fraud; and (4) a formal request by Bolsonaro’s Liberal Party to annul the second round of votes, on the grounds that an internal investigation had found evidence of fraud.
During the days of the vote, Telegram users frequently exchanged “evidence” of fraud: video snippets of strange events in voting booths, testimonials of Brazilians who were allegedly told to vote only for Lula, or who allegedly used a machine that selected Lula as a candidate despite their choice for Bolsonaro. When the federal highway police (PRF) were seen blocking roads in the North East, preventing some voters from reaching voting booths, Telegram groups instead saw one of many measures needed to prevent massive fraud. Such “evidence” was not moderated at the time. It had been months earlier, when the STF ordered Telegram to shut down the group b-38, before reinstating it. Many other available Telegram channels also propagated such information.
On October 31st, the purported reservist general Koury, administrator of the reinstated Telegram group b-38, organised an open group call (as he or other admins did daily) in which he explained that Bolsonaro still had a few options to contest the elections based on allegations of fraud, and that one of the most important ways to verify fraud was to check the “source code” of electronic voting machines [see note 1 in figure x]. Soon, hashtags like #investiguecodigofonte, #opovoexigeinvestigaçaonocodigofonte and #militaresfaçamtestenocodigofonte began to circulate across platforms, as did “BrazilWasStolen”, a hashtag designed to attract international support. A call for general strikes was already being organised across Telegram channels, and was initiated almost immediately after Lula was declared winner of the elections [see video x and y]. This strike would last at least two more months, until the January 8 riots, and one of its main demands — besides a military coup — was that the STF reveal the “source code”.
By November 4, “final” evidence of fraud appeared in an elaborate presentation disseminated by an Argentinian political consultant named Fernando Cerimedo. In an hour-long presentation, Cerimedo used public data released by the TSE to argue that voting machines were geared to produce votes for either Lula or Bolsonaro, depending on their year of fabrication: machines fabricated before 2020 showed a majority of votes for Lula, while those fabricated after that date showed a majority for Bolsonaro. This revealed important irregularities, he argued, since the tendency was visible even within the same cities or neighbourhoods. Users on Twitter, YouTube, Facebook, Instagram, Telegram and GETTR rushed to further disseminate Cerimedo’s presentation while STF-led moderation removed it from social media platforms.
On November 15, Bolsonaro’s Liberal Party (PL) made similar accusations in a report compiled by an organisation by the name of “Instituto Voto Legal” (IVL). IVL presented the same evidence as Cerimedo, but framed it as the result of a lengthy and thorough scientific study led by accredited experts, such as IVL president and engineer Carlos Rocha [cit]. The report did not make as much impact on social media as Cerimedo’s presentation, but it did help bring Cerimedo’s arguments into more mainstream areas of the Brazilian media environment. While Cerimedo’s claims continued to circulate across GETTR and Telegram, it was PL’s report that received the most attention from accredited news media, the STF, the TSE and even Bolsonaro himself, who later used it when requesting the annulment of the election results [cite]. Eventually, TSE president Alexandre de Moraes dismissed the report and ordered the Liberal Party to pay a fine for litigating in “bad faith” [research].
Why, despite multiple instances of content moderation, were such allegations so persistent? One probable cause is that the belief that the 2022 elections would be rigged against Bolsonaro was difficult to dispel, whether by “fact checking” or by proactive content moderation. The Supreme Federal Court, through inquiries led by Minister Alexandre de Moraes, sought to discipline unfounded claims of electoral fraud, for example by pressuring social media platforms to moderate such content. Social media platforms complied (some, such as Telegram, reluctantly) by outlining a number of specific policies for the Brazilian elections. The measures listed in these policies are the same as those that platforms had outlined for the U.S. elections: Twitter’s “Civic Integrity” policy (initially) flagged claims of electoral fraud with “educative” (“pre-bunk”) content, such as links to the Superior Electoral Court or the Supreme Federal Court [cit]; Meta, the owner of Facebook and Instagram, likewise used flagging features to “re-educate” users about the reality of the electoral process [cit]; and YouTube sought to delete, and eventually suspend, claims of electoral fraud, as well as incentives not to partake in the elections [cit]. Telegram, finally, was threatened with a ban in Brazil in the summer of 2021 unless it abided by the STF’s directives to remove certain channels.
Arguably, the belief in evidence of fraud may have persisted precisely because of these measures. Multiple instances of content removal — “deplatforming” — led Bolsonaro supporters to distrust major social media platforms (Facebook, Instagram, YouTube, Twitter) as politically corrupt “Big Tech”. The platforms’ collaboration with the TSE, the STF, and “mainstream” news media such as Estadão, Reuters and UOL [cit Meta] was seen as further proof of their lack of neutrality as “open” or “public spheres” for political debate. When Elon Musk took over Twitter in late October 2022, for example, Paulo Figueiredo Filho (the great-grandson of military head of state João Figueiredo) inquired why his new company had been “imposing a draconian ideological censorship of the Brazilian people's right to free speech” [cit]. Musk, who had at that point aspired to a “non-partisan” form of content moderation, replied that it was “possible” that Twitter had previously given preference to left-wing politicians during the elections [cit] — feeding into pre-existing suspicions of platform partisanship.
Gradually, moderation came to be associated with political censorship or persecution. The fact that the STF or TSE would order the removal of “evidence” of voter fraud and other key information was, in this sense, the very proof needed to confirm the veracity of these claims. Users suspected that they were persecuted for holding a “censored truth” that the STF and “the world” at large did not want Brazilians to know.
Main figure here
From 2015, a major investigation presided over by then-federal judge Sérgio Moro incriminated Lula and three Workers’ Party administrations for systemic state corruption. This cast a dark shadow over Lula, who, despite finishing his government with an approval rating of 80%, came to be perceived as an emblem of Brazil’s historical problems with corruption and poor political representation of civil society by the state. Key figures of Brazil’s redemocratization process — including new political parties established in 1979, some of whose key members partook in the Diretas Já movement — were cast as a corrupt “establishment” or “political class”, with little or no interest in the actual welfare of the Brazilian people and the long-term future of the Brazilian nation.
Such events helped revive a historical perception according to which Brazil cannot be governed by politicians, but only by personnel promising long-term political, social and economic stability. Historically, the Brazilian army has enjoyed support as an alternative to republican politicians, due to its discipline, its claim to ideological neutrality, and its defense of socially conservative values [CITE]. To parts of the 55% of Brazilians who voted for Bolsonaro in the 2018 elections, these values were seen as the only alternative able to put a final stop to Brazil’s culture of corruption, its endemic crime and other structural issues that decades of republican politicians had only profited from.
Ex-President Bolsonaro became a modern emblem of Brazil’s newly resurgent militaristic political tradition. A former army captain, he emerged in viral 2015 YouTube videos in ways analogous to transgressive politicians in the U.S., Europe, India and the Philippines. He became famous for reviving elements of Brazil’s military regime as a form of radical subculture, defending the use of torture and other human rights abuses committed by the military dictatorship from 1964 to the late 1970s. This kind of populist rhetoric, delivered in the same exasperated tone as that of “the people”, moulded his image as a transparent and simple man who “tells it like it is”, and who, like many, believed that the best way to govern Brazil is with a no-fuss, heavy-handed approach against corruption and the vagaries of socially progressive ideas.
Bolsonaro’s combination of populist tropes with militaristic ideals helped rebrand a once-feared authoritarian regime into a form of “military populism” [CITE] – a direct democracy led by well-meaning and incorruptible military officers, or a semblance of such. After 1985, the military dictatorship of 1964-1985 had strongly lost favour in public opinion; public education, the arts and Brazilian politics pushed for a critical perspective on this historical period [CITE], and Dilma Rousseff, who was impeached in 2016, had pushed heavily for a truth commission to address human rights violations committed by the military in the 1960s and 1970s. This form of “military populism” was reinforced by revisionist interpretations of the military regime, according to which torture was a means to a good end (to punish “terrorism”), and the 1964 military coup was in reality a popular “counter-coup” against an imminent communist revolution.
One of the reasons why the normalisation of militarism in Brazilian political culture is problematic is that it hampers an already difficult process of truth and reconciliation between the victims and perpetrators of Brazil’s military regime. Though the military dictatorship ended in 1985, what remained was a law of amnesty for all military personnel involved in human rights violations, particularly torture. This law arguably turned the page on the military regime too quickly, in that it left a portion of the Brazilian population — Brazilians who were not targeted by anti-subversive disciplinary measures, as well as generations born after 1985 — oblivious to the persecution and other injustices committed against their counterparts. This contributes to a division between two radically different lived experiences of Brazilian history, which counts as an important factor in current-day political polarisation around social justice, historical reparations and related issues.
But even though the authoritarian character of Brazil’s military regime was largely dismissed by Bolsonaro’s government [cite], calling for a military coup of the same kind as that of 1964 remained controversial in the Brazilian public sphere [CITE]. Bolsonaro himself distanced himself from his supporters when called out for not countering demands for a military coup at his rallies. When asked why, he claimed that these supporters were marginal and that they were protected by freedom of speech [CITE]:
“Look, our rallies barely make any noise. You will barely hear a tin can rolling in the streets. I consider this freedom of expression. These calls for “article 142” [an article of the constitution describing the role of the armed forces within the Brazilian state] to be triggered – what is this, article 142? It’s an article of the constitution. One that I don’t interpret in the same way as a very few of my supporters do. When some people call for congress to be closed down, it’s their freedom of expression. I won’t incentivize this kind of discourse. To me, this is part of democracy. What’s not OK is me threatening to close congress or the Supreme Federal Court. So, Bonner [TV presenter], there is nothing to see here. I see this as freedom of expression. Just like when people call for AI5 [the 1968 decree by which the military regime suspended constitutional guarantees]. AI5 doesn’t even exist anymore. If you want to punish someone for raising a banner that says “AI5”, that’s something that I think will lead nowhere. Now, on the other side of the political spectrum…” (JN entrevista Jair Bolsonaro (PL), candidato à reeleição, 2022)
Main figure here
Lula wins the elections on October 30th. There is widespread belief among Bolsonaristas that the elections were fraudulent or instrumentalised in his favor. The next day, a purported reservist general, Gen. Koury, says on the Telegram group b-38 that no one will know the extent of the fraud unless the source code of electronic voting machines is revealed. This leads Bolsonaristas to demand the “source code” of the electronic machines. By early November, truckers have begun a strike, and Bolsonaristas support them by camping in front of military barracks. The strategy behind these initiatives is to use popular pressure to push the military to stage a coup, just as they believe happened in 1964.
Bolsonaro enters two months of silence; in his absence, Telegram users engage in interpretations of statements by military personnel or allied politicians, "reading between the lines" to find evidence of their prophecy (that the military would take over) coming true. Users also exchange videos by or of military personnel allegedly supporting the strikes or giving tips on how best to interpret the Brazilian constitution or law in favour of a military coup. Some Telegram users say that generals in the barracks came out to defend them when military police tried to evict them from their occupations.
When Lula is inaugurated, all their hypotheses are disproved. Some still believe that Bolsonaro placed Gen. Heleno at the helm, and that the inauguration was fake. But in the face of widespread disappointment, users move on to another strategy. They unearth videos of Olavo de Carvalho offering an alternative interpretation of the Brazilian constitution: the military, a neutral institution, will only take over if the people act first. The people must stage a coup -- a "popular coup" -- that will then legitimize a military takeover. They must invade Congress and not let any deputies, senators, etc., get in until the military intervenes. Users take up this theory, especially as some express exasperation at the ineffectiveness of "pacific" protests, arguing that the use of violence is the only way to obtain visible results. Coups in Sri Lanka and Thailand are cited as references.
Telegram groups begin to plan an invasion of Brasília. Smaller, private groups are created to exchange practicalities: free convoys from all around Brazil are offered to Telegram group members, sometimes in exchange for donations; the time, date and place of the riots are set (first, a gathering at the military barracks of Brasília on Saturday; then, a march towards Brasília on Sunday by 14:00); entrepreneurs and people in the agricultural sector are called in to help finance the riots. The dates, January 7th and 8th, are settled on January 4th.
As plans become more and more precise, users begin to use coded language (e.g., culinary recipes whose different ingredients refer to different actions to be taken on Sunday the 8th). On Sunday morning, some members were seen filming the Esplanada, alleging that a few military police officers had told them that they would not intervene and that they were on their side.
Each platform had a different use. While Facebook and Instagram were used for ‘broadcasting’ calls to join the January 8 riots, Telegram was used to plan them in detail and in a more or less coordinated fashion. YouTube’s long-video format allowed for the uploading of educational videos, opinion pieces, and on-location reporting on the riots. Meanwhile, Gettr was used by political figures and others to share conspiracies and conservative statement pieces. The language used on each of these platforms becomes more or less explicit depending on each platform’s content moderation. On Twitter, Instagram, Facebook and YouTube, we find content by high-profile politicians and pundits such as Magno Malta encouraging the riots a few days before they took place.
As Telegram is known for its lack of central moderation (Rogers, 2020) and affords simple auto-deletion of messages, rioters formed a variety of groups on this platform, often dedicated to spreading general information or disseminating potentially incriminating content.
By contrast, mainstream social media platforms such as Facebook, Instagram and Twitter were used by prominent Brazilian politicians and popular news stations to offer political support. Jovem Pan, the main Brazilian radio station, based in São Paulo, used YouTube to cover the riots, while other channels prophesied a coup d’état and produced entertainment in support of the rioters.
Bolsonaro supporters made use of Twitter’s platform vernacular (Rogers 2019, 19) by using the same hashtags en masse. That is, by attempting to make hashtags such as #BrazilWasStolen and #BrazilianSpring trend, disappointed voters could broadcast their general dissatisfaction and call for action. Similar and identical hashtags were also used on Instagram, Facebook and even YouTube, showing how broadcasting had to occur across platforms to generate traction. Although hashtags were often combined, they reference distinct events: while #BrazilWasStolen refers to an allegedly fraudulent election, #BrazilianSpring and #MilitaryInterventionAlready reference past and future riots in Brazil and globally. More specifically, #LulaCriminal and #GloboTrash were used to directly link President Lula to his prior incarceration and the media conglomerate Globo to alleged fake news. These hashtags show how agenda-setting is continuous across platforms, much as cross-platform pollination of the same (audio-)visual content is not uncommon. Nevertheless, the attached individual commentary often differs depending on the platform culture encouraged by its affordances, such as degrees of anonymity and moderation practices.
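As a rough illustration of how such cross-platform hashtag use can be tallied, the sketch below extracts hashtags from post texts and counts them per platform. The `posts` list and the lower-casing choice are illustrative assumptions for this sketch, not the study's actual pipeline or data.

```python
import re
from collections import Counter

HASHTAG_RE = re.compile(r"#\w+")

def hashtag_counts_by_platform(posts):
    """Count hashtag usage per platform from (platform, text) pairs.
    Hashtags are lower-cased so that #BrazilWasStolen and
    #brazilwasstolen are merged into one count."""
    counts = {}
    for platform, text in posts:
        tags = [t.lower() for t in HASHTAG_RE.findall(text)]
        counts.setdefault(platform, Counter()).update(tags)
    return counts

# Toy example posts (hypothetical, for illustration only)
posts = [
    ("twitter", "They stole it #BrazilWasStolen #BrazilianSpring"),
    ("instagram", "#brazilwasstolen"),
    ("twitter", "#BrazilWasStolen"),
]
by_platform = hashtag_counts_by_platform(posts)
```

Comparing the resulting counters across platforms would surface the kind of cross-platform hashtag overlap described above.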
Main figure here
Platforms such as Facebook, Instagram, YouTube and Twitter, relying on moderation policies crafted for the 2020 U.S. elections, primarily moderated content that made allegations of electoral fraud, especially by redirecting users to the website of the STF. Posts by users or military officers calling for a coup ("golpista" content) were barely, if at all, moderated. We find a "dislocated" form of moderation, whereby American platforms are unfamiliar with Brazilian political history and therefore fail to detect content that is problematic in a Brazilian (or other foreign) context.
Despite efforts by both platforms and Brazilian legislators to moderate insurrectionist content, content could still flow from more to less moderated platforms. For example, posts on YouTube, Facebook, Twitter and Instagram frequently link to non-moderated, "alt-tech" alternatives like Bitchute, Rumble, Gettr and, of course, Telegram. Conversely, Telegram material frequently links back to YouTube. This builds a somewhat resilient flow of problematic content across the Web.
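One minimal way to trace such cross-platform link flows is to extract outbound URLs from post texts and tally the domains linked to. The `ALT_TECH` grouping and the example messages below are hypothetical assumptions for illustration, not the study's classification scheme.

```python
import re
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

# Hypothetical grouping of "alt-tech" domains, for illustration
ALT_TECH = {"bitchute.com", "rumble.com", "gettr.com", "t.me"}

def outbound_domains(messages):
    """Tally the domains linked to in a list of message texts."""
    tally = Counter()
    for text in messages:
        for url in URL_RE.findall(text):
            domain = urlparse(url).netloc.lower().removeprefix("www.")
            if domain:
                tally[domain] += 1
    return tally

# Toy example messages (hypothetical)
msgs = [
    "watch this https://www.bitchute.com/video/abc",
    "see https://youtube.com/watch?v=xyz and https://rumble.com/v1",
]
tally = outbound_domains(msgs)
alt_share = sum(n for d, n in tally.items() if d in ALT_TECH) / sum(tally.values())
```

Run per source platform, such a tally would make visible the directionality of links, e.g. mainstream posts pointing outward to alt-tech hosts and Telegram pointing back to YouTube.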