Circuits of truth


This relational nature of truth dates back to the actuarial logic of information infrastructures, a hot topic of debate before the proliferation of algorithmic prediction. Rooted in statistical knowledge about populations, actuaries' work has been meant to calculate and minimise risks for the insurance industry since the slave trade of the 19th century; that work is now heavily automated (Ochigame, 2020). Actuarial logic was inductive by design: generalise a person's future behaviour from a set of parameters. Such inductive reasoning later became embedded in digital information infrastructures (Rouvroy, 2011). Reading accounts of the 'actuarial age', one can trace the specific notion of truth that prefigures the one we are dealing with today (Harcourt, 2007; Rouvroy, 2011). It was not truth in the sense of a control group whose behaviour provides a baseline against which the prediction can be measured, what is usually referred to as "ground truth". Actuarial logic, indeed, was defined by "the lack of ground truth", as the detection systems meant "to evaluate the validity of detection mechanisms aimed ... to detect users' preferences and consumption propensities (in a marketing scenario) ... also impact on the material or cognitive preconditions of actions" (Rouvroy, 2011). The act of prediction actively influences the measured population, making it impossible to create the conditions needed to learn what would have happened without the prediction. Translated to the current situation: we will never know whether a person would have committed a crime if a drone had not already killed them on the basis of their metadata (Pasquinelli, 2017).



This notion of truth is then inherited by the algorithms that constitute the work of current information infrastructures, including social media networks. The notion of truth they propose, therefore, "is not opposed to falsity or error of the prediction": verified accounts are opposed to accounts without a blue mark not because unverified accounts are fake (Amoore, 2019). Truth is instead defined by the probability of an action, computed from the data being analysed. For such a system, it does not matter whether its initial prediction was accurate or whether the observed action was itself the result of the prediction. This implies that it might be easier to 'adapt consumers to what the market has to offer rather than to adapt market offers to the genuine needs and preferences of consumers' (Rouvroy, 2011).


In 'machinic infrastructures of truth', 'truth' stands for the predicted 'increase of the engagement', defined earlier as 'influence' or 'prominence' (Venturini, 2019). The way truth is defined in these machinic infrastructures does not correspond to the 'quality' or 'authenticity' of accounts, as it "is not opposed to falsity" (Amoore, 2019). Online platforms are defined by "radical behaviourism": "they don't care about why their users engage with them, how engagement is generated, or what engagement even means - the only thing that matters is increasing their measures of clicking, viewing, scrolling" (Venturini, 2019). More stories from social media whistleblowers are yet to follow the "I have blood on my hands" outcry, in which an ex-Facebook employee called out the company for declining to take action against fake accounts interfering in politics. These accounts were factually fake, but 'true' in the sense that they generated genuinely high volumes of user engagement. This follows from the fact that 'the expected clickthrough rate (the probability that users will engage with the ad)' defines how much profit will be gained (Venturini, 2019). Therefore, the adaptation of consumers - users, in the terms of social network companies - comes in the form of a 'positive feedback between tracking and engagement' (Venturini, 2019). One way to highlight this understanding of truth as engagement is through the changes in verification policy and their connection to how social media platforms perceive engagement. The tool, invented by Twitter 'to help with cases of mistaken identity or impersonation', has changed significantly since 2009 along with the changing structure of tracking/engagement loops, even as the icon, borrowed by other social networks, has remained the same.
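The economic mechanism behind this can be made concrete with a small arithmetic sketch. The following is an illustrative model, not any platform's actual code: in a standard ad auction, ads are typically ranked by expected value, the predicted clickthrough rate multiplied by the advertiser's bid, so the engagement prediction itself directly determines revenue. All names and numbers here are hypothetical.

```python
def expected_value(pctr: float, bid: float) -> float:
    """Expected revenue from one ad impression: P(click) * price per click.

    This is the generic pCTR * bid heuristic, assumed here for illustration;
    real platforms add further quality and pacing factors.
    """
    return pctr * bid


def rank_ads(ads: list[dict]) -> list[dict]:
    """Order candidate ads by predicted engagement value, highest first."""
    return sorted(
        ads,
        key=lambda ad: expected_value(ad["pctr"], ad["bid"]),
        reverse=True,
    )


# Two hypothetical ads: B bids less money but is predicted to engage more.
ads = [
    {"name": "A", "pctr": 0.02, "bid": 1.00},  # expected value: 0.020
    {"name": "B", "pctr": 0.05, "bid": 0.50},  # expected value: 0.025
]

winner = rank_ads(ads)[0]["name"]  # "B" wins despite the lower bid
```

The point the sketch illustrates is that nothing in the ranking asks whether the predicted click is desirable, authentic, or informed; a higher engagement probability is simply worth more, which is what ties the 'truth' of the prediction to profit rather than to accuracy about users' genuine preferences.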


When social networks introduced blue tick verification systems, these platforms were completing the transition from the 1990s era, characterised by a clear-cut distinction between '"posters", the minority of individuals contributing to the life of digital communities, and "lurkers," the silent majority who just read their discussions' (Venturini, 2019:21). In the 2000s, social networks aimed to capitalise on the thinning of this distinction, making it easier for 'lurkers' to produce their own content by reacting through 'social buttons', in order to maximise and quantify users' engagement - one of the main assets of the contemporary digital economy. Nevertheless, at the time verification ticks were introduced in 2009, the distinction between 'posters', rebranded as influencers, and ordinary users remained. Even though both could by then generate engagement, that engagement was expected to vary drastically with follower counts, which were highly polarised. Therefore, the fact that verification marks were granted exclusively to celebrities, with no publicly available explanation of the criteria and no opportunity to apply, did not provoke public debate, as the two categories had no space in which to come into tension with one another.