There are early reports of VR deepfake pornography circulating the internet. If that phrase means nothing to you yet, it probably means we have a shockingly dystopian future ahead of us. Like, Black Mirror level dystopian. The concept of the deepfake has been around for over a year now. In short, people with coding experience have used artificial intelligence to copy and paste one person's face onto another, not just in photos but in videos. Many of these are very convincing, like this fake Obama video.
A daunting task
In anticipation of this new reality, a coalition of academic institutions, tech firms, and nonprofits is developing ways to spot misleading AI-generated media. Their work suggests that detection tools are a viable short-term solution, but that the deepfake arms race is just beginning. The best AI-produced prose used to be closer to Mad Libs than The Grapes of Wrath, but cutting-edge language models can now write with humanlike pith and cogency. Given a semantic context, such a model predicts which words are most likely to appear next in a sentence, essentially writing its own text. Detection tools turn this around: if the words in a sample being evaluated consistently match the model's top 10, top 100, or top 1,000 predicted words, an indicator turns green, yellow, or red, respectively.
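The rank-based idea behind such detectors can be sketched with a toy model. The snippet below is purely illustrative (a real detector uses a large neural language model, not a bigram counter, and the thresholds here are invented): each word is bucketed by how highly the model ranked it given the previous word, mirroring the green/yellow/red indicator described above. Words the model finds predictable suggest machine generation; surprising words suggest a human author.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-word frequencies for each word in a training corpus."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def rank_words(model, text, top_green=1, top_yellow=3):
    """Label each word by the rank the model assigns it given its predecessor.

    green  = within the top `top_green` predictions (very predictable)
    yellow = within the top `top_yellow`
    red    = lower-ranked or unseen (more surprising, more human-like)
    """
    words = text.split()
    labels = []
    for prev, nxt in zip(words, words[1:]):
        ranked = [w for w, _ in model[prev].most_common()]
        if nxt in ranked[:top_green]:
            labels.append((nxt, "green"))
        elif nxt in ranked[:top_yellow]:
            labels.append((nxt, "yellow"))
        else:
            labels.append((nxt, "red"))
    return labels

corpus = "the cat sat on the mat the cat sat on the rug the cat ran"
model = train_bigram(corpus)
print(rank_words(model, "the cat sat on the mat"))
# [('cat', 'green'), ('sat', 'green'), ('on', 'green'), ('the', 'green'), ('mat', 'yellow')]
```

A mostly green reading, as here, is exactly the fingerprint a detector flags as suspicious.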
Shortly before, a manipulated video of US Speaker of the House Nancy Pelosi, edited to make her sound as if she were drunk, circulated on the internet. The video, which was even shared by President Trump and members of the GOP, was later verified as a hoax. But with the quality of these videos rapidly improving, deepfakes have sparked wider epistemological discussions about the future of knowledge and truth. These developments in AI and machine learning (ML) are taking place alongside giant leaps in immersive-experience technologies, better known as augmented reality (AR) and virtual reality (VR). Here, I offer a few thoughts on how the marriage of these technologies might transform our understanding of reality and presence. At the heart of most deepfakes are generative adversarial networks (GANs), which pit two neural networks against each other: one generates replicas or manipulations of natural objects and people, while the other learns to tell fakes from real examples, until the generated output looks highly realistic. FaceApp, for instance, the controversial mobile app that creates face transformations of photographs, works with GANs. Yet there is a reason why deepfakes have sparked so much discussion recently. I spoke to Henry Ajder from Deeptrace Labs, who explained to me that, to date, replicas such as the ones mentioned above can still be identified with the naked eye, but AI-generated synthetic media is soon expected to replicate photos, video footage, or even human voices beyond the point of human detection, at least without the help of relevant software.
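To make the adversarial setup concrete, here is a deliberately tiny sketch in plain Python. It is not anything a real deepfake pipeline would use: the "generator" is a single affine function turning noise into numbers, the "discriminator" a single logistic unit, and the "data" just samples from a 1-D Gaussian. All parameter names and hyperparameters are invented for illustration; real systems replace each scalar with a deep convolutional network.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

# Generator: noise z -> fake sample, parameters (gw, gb)
gw, gb = random.gauss(0, 1), 0.0
# Discriminator: sample x -> probability x is real, parameters (dw, db)
dw, db = random.gauss(0, 1), 0.0

lr, n = 0.01, 32
for step in range(2000):
    real = [random.gauss(4.0, 0.5) for _ in range(n)]  # "real" data
    z = [random.gauss(0, 1) for _ in range(n)]         # noise
    fake = [gw * zi + gb for zi in z]                  # generated samples

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    grad_dw = grad_db = 0.0
    for x in real:
        d = sigmoid(dw * x + db)
        grad_dw += (d - 1) * x
        grad_db += (d - 1)
    for x in fake:
        d = sigmoid(dw * x + db)
        grad_dw += d * x
        grad_db += d
    dw -= lr * grad_dw / n
    db -= lr * grad_db / n

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator
    grad_gw = grad_gb = 0.0
    for zi in z:
        d = sigmoid(dw * (gw * zi + gb) + db)
        grad_gw += (d - 1) * dw * zi
        grad_gb += (d - 1) * dw
    gw -= lr * grad_gw / n
    gb -= lr * grad_gb / n
```

The two update steps pull in opposite directions, which is the whole trick: as the discriminator gets better at spotting fakes, the generator is forced to produce samples that look more like the real data.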
Samuel Woolley's The Reality Game documents an online world awash with alternative facts, deepfakes and other digitally disseminated disinformation, and explores how to limit the damage in the future. Woolley uses the term 'computational propaganda' for his research field, and argues that "The next wave of technology will enable more potent ways of attacking reality than ever". Woolley stresses that humans are still the key factor: a bot, a VR app, a convincing digital assistant, whatever the tool may be, can either control or liberate channels of communication, depending on "who is behind the digital wheel". Tools are not sentient, he points out (not yet, anyway), and there's always a person behind a Twitter bot or a VR game. The creators of social media platforms may have intended to connect people and advance democracy, as well as make money, but it turns out those platforms "could also be used to control people, to harass them, and to silence them".