As in earlier projects looking into Syrian war imagery and its circulation and amplification on social platforms, a kind of visual platform spillover occurred within the discursive space of solidarity with Ukraine (see also this). We dub this an ‘Instafication of Twitter’: particular styles of visual expressions of solidarity, usually omnipresent on Instagram, spill over to Twitter, reducing platform specificity in the visual sphere. In terms of narratives, the typical refugee- and ‘pray-for’-tags make way over time for politically fueled parties that take over the solidarity tag to tie it into antagonistic, or at least more ambivalent, stances toward the situation.
We are ‘sniffing’ out patterns such as the presence of images of injured people in English-language tweets, whereas in Russian-language tweets we encounter more images of deceased people, pointing to more graphic imagery co-occurring with particular languages. Note, however, that follow-up research should close-read for the intents with which such tweets are uploaded, to reveal other factors of influence that might better explain these differences. We should also reset the tag queries to include different political stances, as our query skews heavily toward the pro-Ukrainian sphere; we expect this would give us very different distributions of graphic imagery of death across languages.
Assessing the TikTok space seemed to point to an intensification of identity performance (a dynamic present across social platforms, see among many others Jurgenson, 2019; Wahl-Jorgensen, 2019; Papacharissi, 2015; boyd, 2010) at the expense of representation of the actual conflict and its consequences on the ground (see also Van Dijck, 2008). The platform-specific videos culminate in a strangely affectively charged environment in which ‘absurd’ juxtapositions abound, evolving out of the combination of representations of warfare on the one hand and (occasionally) funny music tunes or trivial gestures on the other.
Multi-modal in that it merges textual, (dynamic) visual, and auditive features, TikTok-native visual content is networked through both hashtags and sounds, creating ambiguous forms of affective alignment and various overlapping layers of mediation and resonance (see Hennion, 2003; Paasonen, 2019). This provides a valuable entry point into the study of what we identify as ‘sound spaces’ or context-specific environments of musical/affective attunement which permit various forms of emotional expression and social bonding (e.g., through related practices of cross-hashtagging and sound-linking). Below we discuss several methods for augmenting TikTok video analysis with both auditive and textual layers of TikTok-native engagement, by analyzing sound-co-hashtag relations of the top 10 videos connected through Tom Odell’s Another Love, which became a TikTok anthem for Ukrainians during the war (see also Cork 2022).
Also interesting is the Ukrainian-language TikTok space, which contrasts with Ukrainian-language Twitter. Whereas in-group talk of Ukrainians (or at least Ukrainian speakers) on Twitter takes place through static artworks and poems, on TikTok we see a new solidarity expression appearing in which soldiers are vehicles for in-group communication of support amongst Ukrainians or the diaspora. This is interesting as soldier imagery is almost completely absent on Ukrainian-language Twitter, perhaps pointing to how static soldier imagery lacks the added modalities that TikTok videos allow for, modalities that alter the soldier into a ‘friend and defender, a family member’.
“On May 8th, Zelensky releases a black-and-white video that I started watching but then paused indefinitely. I can no longer stomach all these more-and-more beautiful Presidential videos. Something important is lost precisely where he struggles to grasp your attention with that upgraded camerawork and his significantly improved speeches. The video tries to achieve the impossible – speaking to so many different audiences that it feels a little bit schizophrenic. We are burn-out here, quite literally so. But the rest of the world is bored. Two months. This is how long it can pay attention to a war”
- From the INC blog series ‘Dispatches from the place of immanence’, by Svitlana Matviyenko
The above fragment touches on a question that numerous scholars have spent their lives working on: the role of war imagery in war fatigue (Scarry, Sontag, Butler, McCosker). Though exemplary of hypermediated spaces, the participatory logic of social platforms —in contrast to television— holds features that would potentially negate felt inefficacy toward distant suffering if we follow Sontag’s argument (2003, p. 91) that: “People become less responsive to horrors of war if one feels that there is nothing ‘we’ can do […] then one starts to get bored, cynical, apathetic.” Social platforms allow people to detach from apathy and invite users to interact with images of suffering to greater or lesser extents, and always in particular ways. We use emoji buttons or hashtags when sharing images. We engage in reworking news photographs, altering their communicative messages to construct new ones, and so on. Although responding to and reworking images provides a way to engage with tragedy, Sandrine Boudana et al. (2017) and Barbie Zelizer (2006), amongst others, critique such practices of reproduction as ‘leading to a dilution or distortion of meaning.’ Luc Boltanski (1999) argued that the aestheticization of what we see in the media emotionally and morally insulates viewers from the suffering of others. This is furthered by Chouliaraki (2013), who sees a consumer-driven solidarity emerging, that is: a solidarity that can only be extended to victims that are ‘like us’, drawing on notions of a common humanity but in doing so excluding distant others.
In previous projects (insert LINKS) images of war and suffering were studied in relation to the affective affordances of platforms, the latter pertaining to the interplay between the affect architecture of a particular platform and the allowed-for usage of such features as posts, hashtags, buttons, and ‘remix practices’. Particular tendencies emerged from three distinct empirical analyses (2022, thesis), such as 1) the crowding out of images of actual victims by users’ visual emotive expressions on Instagram, 2) the ambiguous collective usage of Facebook’s Reaction buttons, and 3) the steep power asymmetries of Twitter’s architecture, which privileges representations of suffering disseminated by already powerful actors. With the Russian invasion of Ukraine, there is, unfortunately, ample reason to continue assessing how social platforms affect, impact, and shape the visuality of contemporary war and suffering. The emergence of TikTok as a prominent platform for the mediation of the war in Ukraine has been extensively studied in a myriad of ways, oftentimes focusing on the ‘information war’ that is raging across the larger web.
Moving away from mapping influential actors behind particular (micro-targeted) political messaging, this project aimed to assess the resonance dynamics of content on two platforms: Twitter and TikTok.
The participatory affordances of social media are defined by the communicative structures of platforms. These commodify majority preferences, often relying on a social graph model in which content is dubbed engaging depending on its resonance within friend or follower networks (see Twitter, among many other social giants such as Facebook). TikTok, however, breaks away from this structure: its communicative architecture does not center a networked social graph in which content needs to be shared within your friend or follower network in order to attract engagement. In a recent commentary article in The New Yorker, Cal Newport describes the app’s “energetic embrace of shallowness” as making it more likely, in the long term, “to become the answer to a trivia question than a sustained cultural force.”
With that, amplification is not tied to friend or follower networks. All this makes TikTok an entirely new space to research, with a different cadence, language, and style of presentation (Stokel-Walker, Guardian, 2022). On top of this, the urgency to assess how war is communicated and represented flows simply from the steep uptake of TikTok as a place where people go: apart from becoming the fastest-growing social network, reaching 40% of 18–24s, a staggering 15% of these youngsters use the platform for news (Reuters Digital News Report, 2022).
TikTok as designed for war?
In a Wired article, Chris Stokel-Walker emphasizes how TikTok is ‘designed for war’ due to the platform’s ease of use and instant character: “As Russia prepared to invade Ukraine, it became a boon for open source investigators trying to track troop movements, and has provided immediate, quickfire footage of what’s happening as Ukrainians fight for their future [...] If Facebook is bloated, Instagram is curated, and YouTube requires a shedload of equipment and editing time, TikTok is quick and dirty—the kind of video platform that can shape perceptions of how a conflict is unfolding.”
This project aims to expand on the methodological protocols developed and applied in earlier research (2016-2022) into the Syrian war and how images of that conflict proliferated on Instagram, Twitter, and Facebook. Dubbed by a myriad of news outlets the ‘first TikTok war’, the war in Ukraine gives rise to questions around this platform’s aesthetic and memetic affordances (sound, stickers!) and how these impact the representation of war within solidarity frameworks.
Chapter 1 CONTENT DYNAMICS
RQ: How is the #standwithukraine narrative evolving over time in terms of resonance and content characteristics? (Twitter & TikTok)
RQ: Looking at TikTok what space do photography and expressive visual artworks take in?
RQ: How are particular image objects (Twitter) used in different language spaces?
Chapter 2 RESONANCE DYNAMICS
RQ: How resonant are videos and image tweets over time?
RQ: How are particular sounds (TikTok) connected to particular narratives?
Chapter 3 TACTICAL DYNAMICS
RQ: How are tags and languages strategically used to make visible the horrors of warfare on the ground?
RQ: In what ways are these graphic images censored?
We worked with two main data sets: one for Twitter (4cat, Twitter V2 API) and one for TikTok (Zeeschuimer; TikTok Scraper). The Twitter dataset spans tweets tagged with #StandWithUkraine published between February 20, 2022, and July 10, 2022 (giving us roughly 1.6 million tweets).
The TikTok dataset was gathered using Zeeschuimer and TikTok scraper. We arrived at a TikTok query by blending both tags derived from an expert list of Ukrainian, Russian, and English language hashtags (with many thanks to Sofia Romansky) and selecting some tags based on co-occurrence and prominence on Twitter for discursive comparability purposes.
The data set was cut into six timeframes for workability. These timeframes reflect the following timespans:
Feb 1 - Mar 1
Mar 2 - Mar 16
Mar 17 - Mar 31
Apr 1 - Apr 21
Apr 22 - May 21
May 22 - Jun 10
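For reproducibility, the cut into timeframes can be sketched in a few lines of Python (a minimal sketch; the boundary dates follow the list above, while how posting dates are parsed from the export is left out):

```python
from bisect import bisect_right
from datetime import date

# Start dates of the six timeframes listed above; END closes the last one.
STARTS = [date(2022, 2, 1), date(2022, 3, 2), date(2022, 3, 17),
          date(2022, 4, 1), date(2022, 4, 22), date(2022, 5, 22)]
END = date(2022, 6, 10)

def timeframe(d):
    """Return the 1-based timeframe index for a posting date, or None if out of range."""
    if d < STARTS[0] or d > END:
        return None
    return bisect_right(STARTS, d)
```

A post dated, say, March 10 then falls into timeframe 2, and anything after June 10 is discarded.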
Tweets were filtered for those containing images using the 4cat filter (contains type column images). A stream graph of hashtag occurrences over time was created to get a longitudinal sense of the discursive dynamics. The generic, persistent tags emerging from the rank flow of hashtag occurrences over time informed a selection of tags from which image walls were then created.
Then we created co-tag networks using 4cat’s co-tag network module. The modularity classes and degree ranks from these analyses informed our queries to further narrow down the dataset, making possible the creation of image-label networks, which require quite heavy sampling to retain legibility.
Going from modularity and the top 3 degree ranks, we selected three tags specific to each of Ukrainian-, Russian-, and English-language tweets (languages tend to cluster organically in the co-tag network). Importantly, in this selection we discarded names of cities and politicians and very generic concepts such as #war. In this way, we boiled the selection down to affectively charged tags. The top 3 tags ranking highest in degree and engagement (RTs) were then taken for each of the three languages, giving us 3 tags per language to narrow down the larger datasets.
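The degree ranks we read off in Gephi can also be approximated directly from the data; a minimal sketch, where `tag_lists` (a list of per-tweet hashtag lists) is a hypothetical input:

```python
from itertools import combinations

def cotag_degrees(tag_lists):
    """Degree per hashtag in the co-tag network: the number of distinct
    other hashtags it co-occurs with across all tweets."""
    partners = {}
    for tags in tag_lists:
        # Each pair of hashtags in one tweet forms an edge in the co-tag network.
        for a, b in combinations(sorted(set(tags)), 2):
            partners.setdefault(a, set()).add(b)
            partners.setdefault(b, set()).add(a)
    return {tag: len(p) for tag, p in partners.items()}
```

Sorting the resulting dictionary by value reproduces the degree ranking used for tag selection.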
How to arrive at an image-label network in Gephi going from 4cat? Aka here come more detailed notes to ourselves and future researchers (thank us later):
1. Select the filtered dataset with only images (see filter function).
2. Filter this new dataset by the three hashtags of your language (filter by value, column hashtags, contains, separate by a comma, and run). Open the new dataset.
3. Filter a random sample of 300 tweets. Open the new dataset.
4. Go to Visual > set max to 300 > click ONLY “images”, uncheck the box ‘column’.
5. From downloaded images > expand all > vision api. Run the vision api (insert key). We chose the features Label and text detection; more = more expensive. This will give you a JSON with image file names and labels.
6. First, convert the vision results from JSON to CSV (it is the second ‘to csv’ option, not the first!!).
7. Again use the expand tree button, then create a custom network: from the Image file column to the Annotations column label annotation, choose directed, tick the split value box, leave the rest as is, run!
!! Don’t forget to check the awesome preview option in 4cat to get a feel of your network’s shape. Images will appear in Gephi as long as you have the image preview plugin installed from the plugins library in Gephi.
Now put the downloaded images zip on your device (download images in 4cat function). The directory is the path that you insert in the “Render nodes as images” option in the preview module of Gephi.
HACK ALERT (for windows users at least): path in the image preview plugin on Gephi is the path of one of the images, so go to your images folder, right-click on the image, then copy the path.
Disappearing graphs in the preview problems? Click window>graph for the reappearance of your network.
If you want to get rid of image file names that remain after rendering nodes as images: retain vision labels in Preview through filter > category (insert image) select all > right click, edit all nodes > label, tap the three dots box > keep no value, OK > done!
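For those who prefer scripting the JSON-to-edge-list step over the 4cat interface, a rough equivalent can be sketched in Python. The response layout assumed here (a mapping of image file names to Vision results with `labelAnnotations`) is a simplification of the actual 4cat output; the field names `description` and `score` follow the Google Vision API label format:

```python
import csv
import json

def vision_labels_to_edges(vision_json_path, edges_csv_path, min_score=0.5):
    """Flatten Vision labelAnnotations into a Gephi-ready edge list
    (Source = image file name, Target = detected label)."""
    with open(vision_json_path) as f:
        results = json.load(f)  # assumed layout: {filename: {"labelAnnotations": [...]}}
    with open(edges_csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Source", "Target", "Weight"])
        for image, annotations in results.items():
            for ann in annotations.get("labelAnnotations", []):
                # Keep only labels the API is reasonably confident about.
                if ann.get("score", 0) >= min_score:
                    writer.writerow([image, ann["description"], ann["score"]])
```

The resulting CSV imports directly into Gephi as a directed edge list.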
The TikTok dataset for #standwithukraine was scraped using TikTok scraper, producing a folder of 967 video thumbnails (default covers), video files, and attached metadata. In our analysis, we focused on the temporality (timestamps), sounds (musicMeta.musicName), hashtags (extracted from post text), engagement metrics (Digg count), and aesthetic properties of (video) content (see the findings section).
In order to attend to the temporal and aesthetic specificity of TikTok engagement, we used the methodological protocols for studying TikTok vernaculars (see also here) through different plotting and montage techniques afforded by ImageJ and focusing on the linkages of video content with the time of posting and the intensity of engagement (measured by the count of likes or ‘Digg count’).
Using the ImageMontage macro for ImageJ in combination with the command-line tool ffmpeg, we visualized the storyboards for the 10 most liked TikTok videos connected through Tom Odell’s Another Love (see the same protocol and the findings section for a discussion; see also Bainotti et al. 2021). To attend to the specificity of video content, the 10 videos were merged into one video file using an online video converter. This input was then deconstructed into individual image frames using ffmpeg: the output is a new folder of images that can be visualized with the ImageMontage macro for ImageJ.
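The frame-extraction step can also be scripted rather than run by hand. A sketch that builds the ffmpeg call used to keep every 25th frame (the file and folder names are hypothetical; ffmpeg itself must be on the PATH to run it):

```python
import subprocess

def extract_frames_cmd(video_path, out_dir, every_n=25):
    """Build the ffmpeg call that keeps every n-th frame of a video:
    the select filter drops frames unless frame number n is a multiple
    of every_n; -vsync vfr prevents dropped frames from being duplicated."""
    return [
        "ffmpeg", "-i", video_path,
        "-vf", f"select=not(mod(n\\,{every_n}))",
        "-vsync", "vfr",
        f"{out_dir}/frame_%04d.png",
    ]

# To actually run it:
# subprocess.run(extract_frames_cmd("merged.mp4", "frames"), check=True)
```

The numbered PNGs land in the output folder in storyboard order, ready for ImageJ.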
#standwithukraine as a visual prompt to outline tactical dynamics of support
During the week we outlined two preliminary visual frameworks to detect imagery (1) used to support the Ukrainian cause through the colors of its flag and (2) depicting the war’s consequences to raise awareness of the conflict among a wider audience. The experimental frameworks were applied to two batches of pictures: the first framework was applied to pictures from Twitter, where samples of recurring behaviors were identified, and to cover images from TikTok; the second was applied within the various language spheres identified by Twitter hashtags (Russian, English, Ukrainian).
Color sampling to detect visual support to the cause
The first visual framework investigates symbolic uses of colors from the Ukrainian flag to signify support. Inspired by a picture that went viral on VKontakte in Russia, portraying a lady wearing a blue scarf on her head and a yellow jacket, we constructed an automated “Action” in Photoshop to sample and isolate these symbolic signifiers in the pictures from the dataset.
Figure 1. Inspired by the sample picture, the Action selects all colors matching a range of colors sampled from a series of pictures from the dataset that cover the “yellow” and “blue” color spaces. After sampling these colors, the Action isolates them and removes the background (turning it black). The result is an image that only contains blue, yellow, and black.
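Outside Photoshop, the same color-sampling logic can be approximated with NumPy. This is a minimal sketch: the RGB thresholds below are rough stand-ins for the color ranges actually sampled in the Action, not the sampled values themselves:

```python
import numpy as np

def isolate_flag_colors(rgb):
    """Keep roughly 'flag blue' and 'flag yellow' pixels of an RGB array
    (H, W, 3, uint8) and blacken everything else, mirroring the Action's
    sample-isolate-blacken steps. Thresholds are rough assumptions."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    blue = (b > 120) & (b > r + 40) & (b > g + 20)
    yellow = (r > 150) & (g > 120) & (b < 100)
    out = np.zeros_like(rgb)          # black background
    mask = blue | yellow
    out[mask] = rgb[mask]             # keep only the matched colors
    return out
```

As in the figure above, the output contains only blue, yellow, and black.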
Subject sampling to detect infiltration of reportage imagery
The second visual framework involved using a cutout technique to identify imagery from the battlefield itself. Starting from hashtag spheres, one for each language, we queried the Google Vision API result for the following terms: “soldier”, “military”, “destruction”, and “building”. Within these terms, we identified two categories of subjects within reportage photographs: soldiers and the devastated environment. Using another Photoshop Action, we isolated these two categories from each other to arrange them in a grid where the horizontal position represents the timeframe in which the pictures were posted.
Figure 2. Structure of the visualization: the two categories divide the space into two vertical areas that are then divided by the temporal dimension organized according to the initial query from 4CAT.
Protocol on the sound Sankey
A Sankey diagram focusing on the extent to which #standwithukraine hashtags and sounds are attuned to one another on TikTok was created based on the exploration of a bi-partite sound-hashtag Gephi network for #standwithukraine. After deleting the main node “original sound”, we selected four resonant songs (two Ukrainian and two international) based on their frequent appearance together with #standwithukraine co-hashtags. The resulting sound-hashtag relations were exported as edges and then represented as flows using RawGraphs’ Sankey. For each sound, the 15 most used hashtags were selected. Shared co-hashtags were placed in the middle; sound-specific hashtags were placed at the margins.
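The edge export behind the Sankey can be sketched as follows; the structure of `posts` (dicts carrying a sound name and a hashtag list) is a hypothetical stand-in for the scraped metadata:

```python
from collections import Counter

def sankey_edges(posts, sounds, top_n=15):
    """From (sound, hashtags) records, build (sound, hashtag, weight) flows,
    keeping only the top_n most used hashtags per selected sound."""
    counts = {s: Counter() for s in sounds}
    for post in posts:
        sound = post["sound"]
        if sound in counts:
            counts[sound].update(post["hashtags"])
    return [(s, tag, n)
            for s, c in counts.items()
            for tag, n in c.most_common(top_n)]
```

The returned triples map directly onto RawGraphs’ source, target, and flow-size columns.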
The rank flow of hashtags in the Twitter space yields two relevant findings. First, solidarity tags relating to refugees and their needs are prominent only in the first two months; they move out of sight later on, when more political tags appear (even pro-Russian tags hijacking the #standwithukraine tag that was our point of entry into the data!!). This reflects findings from earlier research where emotive tags make way for more critical and politically laden tags over time. The amplification and engagement dynamics native to the business model of social platforms then mean that these more critical conversations take place when the attention of the masses is already elsewhere.
Figure 3: Twitter hashtags over time
For hashtags over time on TikTok, striking differences with Twitter appear when one sees how the controversy around Ukraine’s (presumedly unfair) Eurovision win completely overpowers any other discursive frame in May. Similar to Twitter, we see the disappearance of tags relating to emotive solidarity with victims. Also interesting is how, later in the time frame, Ukrainian tags dominate even though the queried tag for this rank flow is English and was not narrowed down using language-specific tags, pointing to a shift from global to more local attention for this conflict.
Figure 4: rank flow for TikTok
Twitter image-objects networks over time and linguistic analyses: in-group and out-group visual talk
Themed clusters included, but were not limited to: soldiers; destroyed buildings and military vehicles; art & illustrations; flags, their colors & related content; and cats. These ‘themes’ reflect the content clusters that were present in the image networks and that proved persistent and relevant throughout the narrative over the six timeframes mentioned in section 4, apart from the soldiers and military vehicles that were missing in the Ukrainian-language spaces. Interesting is how the Ukrainian sphere has a more limited set of themed clusters, very much focused on poems and solidarity artworks, pointing to how Ukrainians amongst themselves (in-group speech) share visuals pertaining to hope, resilience, and support, whereas it is presumed that in the Russian- and English-language spaces Ukrainians target out-groups, or in other words speak to a different audience, set outside their in-group. Of course, both of the latter spaces are largely constituted of non-Ukrainian speakers as well.
The sad-faces theme specific to the English-language space is also of interest, as it pertains to the typical humanitarian campaigning imagery that attains dominance in the more globally oriented English sphere.
Figure 5: over-time shifts in object clusters per language space.
Tactical dynamics in standing with Ukraine
This particular technique allowed us to isolate four main families of behavior in supporting Ukraine through social media:
Pictures of protests. When the war started, a flood of protests was held in various countries, including Russia. Many pictures posted on social media documented these protests by photographing flags carried by crowds.
Public displays of support by governments. Similar to pictures of protests, pictures of public displays portray public spaces that were used by governments to show support for Ukraine. These pictures focus their attention on buildings or other public spaces that are lit up by Ukrainian colors.
Symbolic clothing. Like the lady in the subway picture, many users on social media recreated their own take on this way of displaying support by wearing the colors of the Ukrainian flag in various contexts.
Flag recreations through expressive media. As the last category, the Ukrainian flag was recreated expressively with many different materials: from digitally produced imagery to carefully arranged photography, such as sunflowers against a blue sky.
Figure 6: An example of (from left to right): protests, symbolic clothing, and public displays of support.
In the English sphere, a high number of pictures were independent advertisements exposing companies that were still maintaining economic relationships with Russia after the war started. A particular example of this kind of content is an ad portraying a Russian soldier wearing a vest from an American company.
A different trend emerged from the comparison between the Russian and Ukrainian spheres: while the latter has virtually no images from the battlefield, the former has the highest number of photographs posted, some of them graphic.
Figure 8: figures evolving over time for the Russian and Ukrainian spheres; note the lack of soldiers in the Ukrainian-language sphere.
We speculate that the people using these hashtags are supporters of Ukraine who try to reach a Russian audience to expose them to the horrors of the battlefield. An indicator of this behavior is the censoring done to the photographs themselves: some hide the identity of the soldiers (presumably to protect them) or the background, to avoid revealing the locations where these pictures were taken. Some pictures censor graphic content in order to avoid being removed by automated algorithms.
Figure 9: ways of censoring.
To arrange the videos in accordance with the temporality of posting and intensity of engagement, we first plotted the video thumbnails as blank data points with ImageJ, highlighting the temporal shifts in relations of relevance (time of posting on the x-axis) and content-specific engaging potential (Digg count on the y-axis). The top 10 outliers representing resonant posts were then replaced with actual video content formatted as GIFs (see high-res viz). Meaningful in this visualization is the distribution of images in space (see Manovich 2011, 2020), with the denser rhythms of posting in the dimension of ordinary engagement (low visibility of individual posts and high frequency of posting per month). Visible outliers are scattered in the sparser areas of the plot, pointing to the adjustment of platform-specific trends such as ‘time-travel’ (video 4) and ‘fitness challenge’ (video 5) to the shared experience of war as mediated through TikTok (note that sensitive content was de-identified). The three most liked videos containing before-and-after stickers (Ukraine before and after the war) were posted in March and April 2022, constituting the peak of engagement with #standwithukraine during the beginning of the Russian invasion of Ukraine (note that the curve starts to flatten out already in May). While acknowledging the API-based limitations of our dataset (967 posts are hardly representative), we interpret these temporal dynamics as indicators of affective intensity initially building up around the war tragedy and soon stagnating into TikTok-native tactics of ‘trend-attunement’ (more on this logic here). Here, the question of how (long) we stand with Ukraine is also a question of platform-mediated relational dynamics constituting the notion of algorithmically curated imitation publics (see also Zulli & Zulli 2021).
Figure 10: A scatter plot of 967 #standwithukraine videos represented as data points and plotted with ImageJ (timestamp on the x-axis; Digg count (likes) on the y-axis). The top ten videos were manually replaced with GIFs; sensitive content was de-identified.
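The data preparation behind this plot (ordering posts by like count and flagging the top outliers to be swapped for GIFs) can be sketched as follows; the field names `createTime` and `diggCount` follow TikTok's post metadata, while the overall structure of `posts` is an assumption:

```python
def scatter_points(posts, top_n=10):
    """Turn TikTok post metadata into (x, y, is_outlier) tuples for a
    time-vs-engagement plot: x = posting timestamp, y = digg (like) count.
    The top_n most liked posts are flagged for replacement with GIFs."""
    order = sorted(range(len(posts)), key=lambda i: posts[i]["diggCount"],
                   reverse=True)
    top = set(order[:top_n])
    return [(posts[i]["createTime"], posts[i]["diggCount"], i in top)
            for i in range(len(posts))]
```

The flagged tuples correspond to the ten data points that were manually replaced in ImageJ.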
In order to understand patterns of imitation over time based on the similarity of visual artifacts, we complemented this analysis with a time-bound montage representing all video thumbnails side-by-side (see below). We used the month of publication and the chronological order of posting to arrange the images, displaying persistent repetitions in use of blue and yellow colors and contextual appropriations of ‘before-and-after the war’-stickers in March and April. To highlight the color patterns, we created a two-layer visualization using the method of color-sampling to detect visual signifiers in pro-Ukrainian posts and superimposing the resulting image over the first layer of the montage.
Figure 11: An image montage of 967 #standwithukraine videos chronologically arranged within each timeframe and separated into groups by the month of publication. The second montage layer focusing on the repetition of color arrangements was created through automated color sampling with Photoshop.
The resulting GIF-visualization (see high-res viz) allows us to address the questions: Which vernacular styles dominate a given timeframe (stickers) and which are persistent over time? Allowing for a comparison between video thumbnails within and across different time frames, it shows the persistence of body images in combination with the national colors of Ukraine (documentations of public support and gatherings in the streets being frequently used).
TikTok Sound Analysis
TikTok engagement is multi-modal in that it merges textual, (dynamic) visual, and auditive features. Platform-native visual content is networked through both hashtags and sounds, creating ambiguous forms of affective alignment and various overlapping layers of mediation and resonance.
The Sankey visualization below provides a network interpretation of co-hashtag relations between four resonant #standwithukraine sounds (two international songs highlighted in yellow and two Ukrainian songs highlighted in blue). Node size represents the extent to which hashtags (in the middle) and sounds (left and right) are attuned to one another, with Ukrainian hashtags such as #українськийтікток (trans. Ukrainian TikTok) and #славаукраїні (trans. glory to Ukraine) being specific to the Eurovision-winning Ukrainian song Stefania (by Kalush Orchestra) and ambient sound Доброго вечора (full title Доброго вечора ми з України, trans. Good evening, we’re from Ukraine, by BAYRAKTAR MUSIC UA). Flow size represents the number of times a hashtag appears in relation to a given song.
Figure 12: A Sankey diagram focusing on sound-co-hashtag relations (created with Rawgraphs).
In terms of its analytical potential, this technique provides a valuable entry point into the study of what we identify as ‘sound spaces’ or context-specific environments of musical/affective attunement which permit various forms of emotional expression and social bonding. Frequently shared co-hashtags such as the omnipresent #fyp used in combination with #Ukraine, #standwithukraine, #zelensky, and #war connect the Ukrainian ‘sound space’ with the well-known international hits Unstoppable (by Sia) and Another Love (by Tom Odell), the latter becoming a TikTok anthem for Ukrainians during the first month of the war (see also Cork 2022).
A montage of video frames (see high res viz) extracted from 10 most liked #standwithukraine videos connected through Another Love presents the dynamics of imitation rendered visible both in the storyboards of the videos and through corresponding co-hashtags. The persistence of video footage documenting protests against the Russian invasion of Ukraine points to the appropriation of Another Love as an expression of hope and solidarity.
By displaying sequences of shots side by side in accordance with the storyboard of a given video (each 25th frame extracted using ffmpeg), this analytical technique allows us to study sound-specific video styles across ten resonant Another Love #standwithukraine posts (note that the resonance on TikTok can be measured e.g., by the count of likes, comments or by play count as well as by the repetitions in use of a specific video genre/filter/storyboard with low engagement metrics but high frequency of posting).
Figure 13: A montage of video frames extracted from the 10 most liked Another Love #standwithukraine videos visualized with ImageJ. Frames were extracted with ffmpeg (each 25th frame per video).

Interesting is the Ukrainian-language space on TikTok (note, we leave the generic #standwithukraine query here): in contrast with Ukrainian-language Twitter, soldiers do appear. And more than merely appearing, they play a dominant role in the most dugg videos, with 5 out of the top 7 videos coming from a soldier-influencer-celeb… So whereas in-group talk of Ukrainians (or at least Ukrainian speakers) on Twitter takes place through static artworks, here we see a new solidarity expression appearing in which soldiers are vehicles for in-group communication of support.
Also noticeable is that this soldier’s content consists of very typical TikTok videos, focusing on fun, lighthearted, and sometimes silly short footage. This contrasts with the perceived difficulty and seriousness of military life. While real war-zone videos also appeared on these military celebrities’ accounts, they tend to attract fewer viewers and risk being blocked by the app as explicit content. It is arguable that the popularization of military celebrities has a double-edged effect: on the one hand, it has shortened the distance between the general public and the military context, as soldiers are now on social media just as the general public is; on the other hand, the entertainment spirit of TikTok limits what can be disclosed to the public. The strength of the affective connections between military celebrities and their audience is also questionable.