
03  Bursting Bubbles



It would be a mistake to associate micro-influencers exclusively with private markets: the Russian government actively engages them, both as targets of political repression and as instruments of propaganda, following the example of other states (Sanovich, 2017). Thanks to the relative freedom of Russian cyberspace compared with Russian broadcast television, independent politicians exist within the same economy of influencers and micro-influencers; the best-known example is Alexey Navalny, who first gained fame as a blogger. The flourishing Russian cyberspace came to the attention of the Russian government in 2009, when then-president Dmitry Medvedev started blogs across several social networks (Sanovich, 2017). Medvedev's spectacular failure to become an online influencer - his blogs were flooded with backlash and trolling from an audience far more critical than anything the government had faced on state-run TV - initiated the rise of pro-government bots and trolls. Their task was to post "diversionary comments in high profile opposition blogs", which were already flourishing beyond the reach of the Russian state, and to engage in "retweeting and reposting pro-government messages" (Sanovich, 2017). This bot ecosystem was later redeployed in the 2016 U.S. election, inaugurating the so-called 'post-truth' era with its new notions of truth and verification.


The Russian state's cyberfight with political influencers and micro-influencers showed how codependent truth and contemporary modes of warfighting are. To manufacture something fake that poses as authentic, or to denounce it - in other words, to engage in cyberwar - one has to define both categories, even if not explicitly. "Cyberwar is a weaponisation of information that always threatens to destroy truth", and thereby defines it, forcing it to change and adapt (Matviyenko & Dyer-Witheford, 2019). Indeed, this malleable truth is tightly connected to the one discussed earlier: truth as a product of engagement rates rather than of the intrinsic qualities of content. Just as the 'ad is a blueprint of fake news', the 'truth' of social media is the blueprint of the 'post-truth' era of cyberwar. Fake news can only be defined through the rates of engagement it attracts. There is nothing intrinsically fake about fake news: it may be a satirical joke or spam, spread by someone (or something) that does not believe in it. There have been many cases of major media outlets unironically picking up satirical articles and reporting them as real, turning into fake news something that was never produced as fake. Indeed, as the researchers Jonathan Gray, Liliana Bounegru and Tommaso Venturini suggest, the primary distribution of fake news is carried out by those who do not actually believe in it: debunkers on one side of the political spectrum and trolls on the other both claim not to support the views they are spreading (Gray, Bounegru & Venturini, 2020). Trolls - unpaid 4chan volunteers as well as workers hired by governments (for instance, the Russian 'Internet Research Agency' and the more recent 'Panorama') or by private companies (see the practices of sock-puppetry, shilling and astroturfing) - far outnumber those who spread messages 'genuinely' (Venturini, 2019).


Fake news, therefore, is junk content rendered fake by its viral engagement rather than by its deceptiveness - not all junk content makes it into the media as fake news. Such virality, despite how it is portrayed in mainstream media, is far from all-pervasive and is enormously hard to achieve. Cyberspace is a foam consisting of a multitude of information bubbles that do not collapse into one globe (Matviyenko, 2021). Making something intentionally pierce those bubbles is a complicated task: echo chambers are very hard to control, and it is easier to burst a bubble than to blow one. This is why Russia had to augment human trolls, who nevertheless remain a large part of the governmental agenda, with automated bots. Weaponising the "well established and innovative industry of spam and search optimisation", already in place by the mid-2000s, became vital for the governmental protocols of augmenting 'truth', i.e. 'engagement'. The Russian state prioritised doctoring the results of recommendation algorithms and distributing propaganda through advertising, making virality the state's own measure of how successful a propaganda campaign had been. Virality is more precisely captured by 'scalability', a term proposed by Samuel Woolley and Philip Howard to describe computational propaganda (Woolley & Howard, 2019: 14). Scalability implies rapid distribution enabled by automation, but does not ascribe to computational propaganda a power it does not possess.
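
To make 'scalability' concrete, here is a toy sketch of the asymmetry that automation creates; every figure and name in it is a hypothetical illustration, not data drawn from the studies cited above.

```typescript
// Toy model of 'scalability': automated reposting versus organic sharing.
// All numbers and names here are hypothetical, chosen only to illustrate
// the asymmetry, not taken from any of the studies cited in the text.

interface Campaign {
  organicSharers: number;      // humans who genuinely repost a message
  bots: number;                // automated accounts in the operation
  postsPerHumanPerDay: number;
  postsPerBotPerDay: number;
}

// Daily impressions, under the crude assumption that every post
// reaches the same fixed audience.
function dailyImpressions(c: Campaign, audiencePerPost: number): number {
  const humanPosts = c.organicSharers * c.postsPerHumanPerDay;
  const botPosts = c.bots * c.postsPerBotPerDay;
  return (humanPosts + botPosts) * audiencePerPost;
}

// 10,000 genuine sharers posting once a day...
const organicOnly: Campaign = {
  organicSharers: 10_000, bots: 0,
  postsPerHumanPerDay: 1, postsPerBotPerDay: 0,
};

// ...versus 100 genuine sharers backed by 5,000 bots posting 200 times a day.
const botBoosted: Campaign = {
  organicSharers: 100, bots: 5_000,
  postsPerHumanPerDay: 1, postsPerBotPerDay: 200,
};

console.log(dailyImpressions(organicOnly, 50)); // 500,000 impressions
console.log(dailyImpressions(botBoosted, 50));  // 50,005,000 impressions
```

The specific figures are irrelevant; what matters is that a small operation running automated accounts can out-post a vastly larger pool of genuine sharers, which is precisely the property that 'scalability' names and 'virality' obscures.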


This scalability is directly dependent on advertising and tracking infrastructures. "While collective virality is a constant and essential dimension of social existence, 'junk information' is a relatively new phenomenon, because only recently virality has become the object of a complex system dedicated to its production and circulation" (Venturini, 2019: 20). Social network companies are, at large, advertising companies - 'vehicles for intrusive digital advertising and data collection practices' (Gray et al., 2020: 334). These advertising infrastructures target and promote fake news and computational propaganda in practically the same ways as regular news. An 'invisible data mining infrastructure' of 'cookies, beacons and other devices' creates the 'filter bubbles' and 'echo chambers' across which junk news is scaled (Gray et al., 2020: 332). "The resulting network does not suggest a sharp, binary distinction between the tracking practices of mainstream and junk news producers but rather a range of different audience marketplace configurations which they share and through which they can be differentiated" (Gray et al., 2020: 332). A solution, therefore, comes from the tradition of ad blockers, as one can see in projects realised for the Russian VKontakte and for YouTube. Observer (for YouTube) and Chef's Trap (for VKontakte) work as browser extensions powered by continually updated databases. They mark fake comments and likes, allowing users to ignore trolls as they would any other form of spam. As truth in cyberwar is defined by engagement, the only way to debunk digital propaganda is to suffocate the attention it receives.
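
A minimal sketch of how such an extension might be built is given below. It assumes a hypothetical blocklist endpoint and a generic data-author-id attribute on comment elements; the real selectors, data formats and detection logic of Observer and Chef's Trap are their own and are not reproduced here.

```typescript
// Minimal sketch of a troll-flagging content script in the spirit of
// Observer / Chef's Trap. The endpoint, DOM selector and data shape are
// hypothetical; a real extension would match each platform's markup.

const BLOCKLIST_URL = "https://example.org/troll-ids.json"; // hypothetical feed

let trollIds: Set<string> = new Set();

// Pull the continually updated database of flagged account IDs.
async function refreshBlocklist(): Promise<void> {
  const res = await fetch(BLOCKLIST_URL);
  const ids: string[] = await res.json(); // e.g. ["id123", "id456"]
  trollIds = new Set(ids);
}

// Visually mark comments whose author appears in the blocklist,
// so users can skip them the way ad blockers let them skip ads.
function flagComments(): void {
  document.querySelectorAll<HTMLElement>("[data-author-id]").forEach((el) => {
    const author = el.dataset.authorId ?? "";
    if (trollIds.has(author)) {
      el.style.opacity = "0.3";
      el.setAttribute("title", "Flagged as a suspected troll account");
    }
  });
}

// Re-run on dynamically loaded comments (infinite scroll, etc.).
const observer = new MutationObserver(flagComments);

refreshBlocklist().then(() => {
  flagComments();
  observer.observe(document.body, { childList: true, subtree: true });
});
```

The design mirrors ad blocking: detection is centralised in a shared, continually updated list, while the extension itself performs only cheap local matching, which is what lets users starve flagged accounts of attention at the speed of ordinary browsing.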
