03

Veritable machines



In 2016, parallel to the rise of micro-influencers, another genealogy became definitive for the new policies of verification marks. Emily Rosamond draws attention to the deployment of reputation warfare in the 2016 U.S. elections, following the tradition of scholars who treat the cyberwar strategies deployed in U.S. elections as defining for contemporary cyber warfighting. For American companies, 2016 indeed brought forward the urge to verify the truth of the content a user produces, no less than the user themself. The need to authenticate content was immediately addressed by the U.S. social media companies that found themselves amid a never-ending wave of scandals associated with fake news and trolling. They needed to invent new verification policies, which did not directly imply new rules but rather broadened the functions ascribed to the verification process. Blue ticks were made responsible for the content produced and became connected to the mitigation of fake news and trolling, modes of cyber warfighting that became most famous after the 2016 U.S. election. Nevertheless, there is a multitude of reputational warfare cases that are not part of such a U.S.-centric genealogy. For instance, FindFace, the Russian surveillance company in charge of all major governmental surveillance projects in 2020, gained a significant part of its fame in 2016, when its service was used to extensively deanonymise and harass Russian porn actresses. This suggests that Russian social networks do not follow the same verification timeframe as the one in the U.S.


The infrastructure of cyberwarfare is impossible to disentangle from machinic infrastructures of truth. This is true not only in the sense that verification marks are now discussed as a way to address a seemingly omnipresent cyberwarfare. Truth and contemporary modes of warfighting are codependent, as truth is always present in cyberwarfare as negative space. To manufacture or denounce something fake posing as authentic, in other words, to engage in cyberwar, one must define those categories, even if not explicitly. "Cyberwar is a weaponisation of information that always threatens to destroy truth", and therefore defines it by forcing it to change and adapt (Matviyenko & Dyer-Witheford, 2019). Simultaneously, the truth in these cyberwar infrastructures obeys the same rules as the truth of M.I.T., defined through engagement. These shared modes of truth as engagement can be explained through virality/tracking loops, as I have outlined earlier.


Approaching cyberwarfare as M.I.T. shows the counterproductiveness of attempts to protect users by targeting the content of cyberwarfare rather than the infrastructure of its distribution. Fake news cannot be defined by its production or consumption, the qualities that constitute content. At the point of production, fake news might be intended as irony or satire, while addressing fake news through consumption implies that someone needs to trust the fake news piece while engaging with it. There have been many cases of major media unironically picking up satirical news articles and presenting them as real. Indeed, as researchers Jonathan Gray, Liliana Bounegru, and Tommaso Venturini suggest, the primary distribution of fake news is achieved by those who do not actually believe in it: debunkers of fake news on one side of the political spectrum and trolls on the other claim not to support the views they are promoting (Gray, Bounegru & Venturini, 2020). Trolls hired by governments (for instance, the Russian 'Internet Research Agency' and the more contemporary 'Panorama') or private companies (see sock-puppetry, shilling and astroturfing) far outnumber those who spread messages 'genuinely' (Venturini, 2019). Furthermore, the culture of unpaid trolling also implies truth as engagement rather than trust in the content: the idea is to provoke a reaction rather than to convince someone.


The turn away from the production and consumption of information, counterproductive in an analysis of fake news, towards an investigation of its material distribution was proposed by the infrastructural turn in media studies, as overviewed by Lisa Parks and Nicole Starosielski (Parks & Starosielski, 2017). The infrastructural perspective allows one to see circulation as an active process, continuously maintained to stay in action, that defines how information spreads physically. This focus on circulation, i.e. the infrastructural perspective, allowed Gray, Bounegru and Venturini to dissect fake news as junk content that is rendered fake by its 'spreadability' rather than its deceptiveness: not all junk content makes it into the media as fake news. 'Spreadability' is set out by social media infrastructures, made 'to maximise the virality of online contents'. Such virality is not all-pervasive, despite how it is portrayed in mainstream media, precisely because of the way information infrastructures function. Virality is more precisely captured as 'scalability', a term proposed by Philip Howard and Samuel Woolley to describe computational propaganda (Woolley & Howard, 2019: 14). Scalability implies rapid distribution enabled by automation, but does not ascribe to computational propaganda a power it does not possess.


The choice of terminology is vital in this case, as cyberwarfare operates through a numbing discourse, or 'cyberspeak' (Matviyenko & Dyer-Witheford, 2019). It trains us to be resilient "against hostile intrusion through social media service upgrades, improved smartphones, and the steady surrender of individual and social freedoms to enhanced powers of the security forces", in a similar manner as, "in previous decades, we were habituated to apocalyptic prospects by an anodyne 'nukespeak'". To ascribe to cyberwarfare more power than it actually has welcomes, on the one hand, helplessness and the normalisation of total control (which is not there but is yet to come) and, on the other, promotes the very tools it aims to counter. The deepfake hoax allowed social media companies to invest millions in training facial recognition algorithms to fight non-existent enemies, turning their backs on ongoing misinformation campaigns. The fake news paradigm sparked a wave of fake news debunking that facilitated the flourishing of junk news (Engelhardt, 2020; Gray et al., 2020). "Academic interpretation, too, can be weaponised; those who stare into the abyss of cyberwar soon find it staring back at them" (Matviyenko & Dyer-Witheford, 2019). Therefore, when dealing with infrastructures that enable 'veritable fog machines', the automated agents that scale computational propaganda, one must be careful not to spread the fog further.

