In the saturated cyberscape of post-truth politics, social media platforms use blue verification marks to help users discriminate between 'true' and 'fake'. However, like all binary rating systems, verification cannot explicitly define these terms within its own self-referential logic. Polarisation arises only when 'true' and 'false' become enclosed in a single circuit, creating an artificial opposition -- 'true' as what is 'not false', and vice versa. The relational nature of these polarities produces practical paradoxes, as exemplified by Facebook failing to delete fake propaganda accounts despite its repeated promises to combat false news.
This project attempts to reverse engineer the mechanisms of truth production by looking at the broader digital infrastructures of which they are part. Drawing on the notion of hardware interfaces, it reframes the blue tick as an information-processing device whose function is to encode and decode signals. It focuses on the inputs -- the user interactions -- that power this circuit, revealing how engagement rates determine who or what is considered 'true'.
On this domain you can find: interviews analysing infrastructure as a method of media research, the militarisation of cyberspace, and counter-propaganda resistance; facial filters, useful for engaging in cyber conflict; and an in-depth investigation dissecting the history and present of blue ticks. Investigation table of contents: