Earlier this year, the Sri Lankan government declared a ten-day state of emergency after a series of anti-Muslim attacks took place across the country. The violence was said to have been sparked and inflamed by warped and false stories spread on Facebook. One fake story revolved around the alleged police seizure of 23,000 sterilisation pills from a Muslim pharmacist, cited as proof of a plot to wipe out the country's majority Buddhist Sinhalese population.
The 2016 US election served as a cautionary tale of the perniciousness of "fake news" (a term ironically popularised by Trump himself) in the current attention economy, where our attention is the currency traded and the truth is peripheral in the unregulated, unmanaged world of online advertising.
Other social media platforms such as Twitter, 4chan and Reddit have also been implicated in the spread of digital disinformation and subsequent violence. This is somewhat bittersweet considering the role these same platforms once played in democratising information and upholding democratic ideals in authoritarian states around the world.
Beyond the ongoing discussion around privacy and the ethics of data harvesting, nearly a year on from the Cambridge Analytica scandal, the deadly impact of fake news stories on Facebook in particular is amplified in places such as Sri Lanka and Myanmar. There, ethnic tensions are already high, and Facebook's Free Basics service (which gives those who cannot afford internet data limited free connectivity via the Facebook app) means that Facebook is synonymous with the internet and is the sole source of information for many.
Consider, too, the advancements in video editing technology that will further cloud our truth crisis. Known as deepfakes or face retargeting, this technology can alter or replace people's speech, faces and facial expressions to create uncanny videos of scenes that never actually happened. Deepfakes have been used to create fake celebrity porn, seamlessly swapping a person's face into pornographic scenes, in what the writer Foer argues is perhaps "one of the cruelest, most invasive forms of identity theft invented in the internet era". The technology has become so advanced that manipulated videos are almost impossible for a casual observer to distinguish from real footage.
More worrisome still, the developers of such technologies intend to democratise them, drastically lowering the technical threshold needed to create fake videos and thereby increasing their prevalence online.
This development undermines our historic reliance on video as a reasonably reliable medium for portraying reality. Video, the moving image captured in real time, is often a key element of evidence in defence and prosecution cases. Beyond the ethics of digital spectatorship and of consuming suffering as a precondition for support or empathy, our trust in video is what lets it elicit such emotive responses from audiences, whether it is smartphone footage of shelling in Syria or recordings of police brutality. One example is the livestreamed aftermath of the police shooting of Philando Castile in 2016, which provoked an outrage over police brutality that countless earlier pieces of evidence had failed to stir. Similarly, manipulated or misrepresented videos have provided the trigger for social conflagrations like those in Sri Lanka, meaning the proliferation of falsehoods will acquire a "whole new, explosive emotional intensity".
The looming pervasiveness of fabricated videos will therefore relegate the privileged position video once occupied, as a portrayal of a common reality, to the past. We are moving further into a world where our eyes routinely deceive us, and manipulated videos will generate understandable suspicion about everything we watch. These doubts can be further exploited by those intent on spreading disinformation to muddy the waters between truth and falsehood. As we hasten towards a world beyond truth, perhaps it is as Zeynep Tufekci argues: the most effective forms of censorship today involve "meddling with trust and attention, not muzzling speech itself".
How, then, do the high levels of scepticism needed to navigate the unending cycles of content affect our ability to empathise with the suffering of others, when any story, video or image can be dismissed as fake from the get-go? This is something I already struggle with as a virtual spectator of the horrors in Syria, which others callously dismiss as fake even when they are true. How do we maintain our humanity in honouring victims of violence while acknowledging the emotional manipulation that can harness the dynamics of viral outrage for political causes?
Companies are attempting to use technology to counter the fake-news epidemic. Mark Zuckerberg has promised the US Congress that Facebook will use artificial intelligence to help spot and control the spread of fake news. Gfycat, a video hosting and editing platform, is already running AI over submitted videos to spot fakes. Another start-up, Factom, aims to use blockchain to attest that a video existed at a certain time and was taken by a particular camera, which digitally signed the data.
If it gains wider adoption, Factom may well help change how the law defines truth and how digital evidence is authenticated and validated in court. However, such attempts may be in vain once fake news has already been released and circulated on social media. A study in Science found that humans preferentially spread fake news over real news, with falsehoods travelling around six times faster on Twitter, attesting to what Jonathan Swift once wrote satirically: "Falsehood flies, and truth comes limping after it". The same holds for tabloid headlines, which often play generously with the facts, causing hysteria and outrage, only to be corrected a couple of days or weeks later in a corner of page 20, once the public's attention has moved on but the false story remains cemented in the collective memory.
Arguably, then, this truth crisis is not a modern one; from wars to epistemological debates, competing truths have been fought over and contested for centuries. But the frightening speed at which false information now spreads through the web, coupled with algorithm-enabled "echo chambers" and a human tendency to cling to stories that flatter our existing worldview, is continuously undermining whatever fragile consensus and trust in social institutions had been achieved.