A South Korean news channel recently replaced regular anchor Kim Joo-Ha with a deepfake version of herself. Viewers were warned ahead of time, and while some were amazed by the realism, others worried that the real Kim Joo-Ha might find herself out of a job.
CGI effects used to require huge teams, but editing video has become increasingly easy. As deepfakes become more and more common online, concern is growing about what this will mean in the metaverse, where a convincing enough avatar could be used to fake someone’s entire identity.
As an alternative to the Queen’s typical festive broadcast, Channel 4 used a dancing deepfake to present the 2020 speech. Intended as a stark warning about the dangers of fake news, it met with decidedly mixed responses. Some criticised the broadcast for potentially making the public think deepfakes are a more widespread problem than they actually are, while others felt it delivered an important message about scrutinising anything presented as fact.
Identifying fake news is becoming increasingly difficult as faked footage improves and more and more people get their news from social media rather than from fact-checked news outlets. Fake news has been rife during the war in Ukraine as people around the world turn to the media to stay informed about the conflict. It is often spread unintentionally by people who are simply unaware that the content isn’t real, while some is spread deliberately.
A deepfake video of Ukrainian President Volodymyr Zelensky emerged last month in which an unconvincing Zelensky asked Ukrainians to put down their weapons. This deepfake was a particularly poor one – parts of his body were more pixelated than others, his head was too large, and his movements were stilted. Because the footage was so crude, it was an easy win for Meta (owner of Facebook and Instagram), which ‘quickly reviewed and removed’ it. Zelensky described the video as a ‘childish provocation’ on his official Instagram account, but the dangers of the technology’s use in a time of war are alarming.
While that deepfake was easy to identify as fake with even a small dose of scepticism, deepfakes will inevitably get better. An increase in scepticism risks eroding trust in the media, while remaining too trusting risks the proliferation of fake news. Navigating and identifying fake news is incredibly complex, so the logical step would be to attempt to police its spread. This, however, would be an immense and complicated task.
One way to reduce the impact of deepfakes would be to limit access to the tools used to create them, but those tools are becoming increasingly common. Before too much longer, we may all carry the tools required to make a reasonably convincing deepfake in our pockets.
An alternative to limiting access to these tools would be to ban altered media, but an outright ban on synthetic media is not that simple. Henry Ajder, a researcher who spent years looking into the malicious uses of synthetic media, explained that ‘if you ban synthetic media you ban all Instagram filters, you ban the computational photography on your camera and your smartphone, you ban the dinosaurs in Jurassic Park.’ These technologies have become something most of us use every single day, and they are only becoming more widespread.
To quote Ajder, ‘the future will be synthesised and there’s no sugarcoating the challenges ahead.’
The potential issues surrounding deepfakes grow even more complex in the metaverse.
In a realm where everyone is represented by a digital avatar, spotting fakes will be far more difficult, and the consequences of a convincing fake may be far greater.
Companies are already working to design hyper-realistic avatars that you can use to represent yourself in the metaverse, including the synthetic media company Metaphysic. Metaphysic founder Chris Ume created the Deep Tom Cruise videos that went viral on TikTok last year; these videos showcased the capabilities of the technology, but also highlighted its potentially nefarious applications were it to wind up in the wrong hands.
In the metaverse, realistic fakes could easily be used to mimic someone’s identity. When we are all represented by avatars, copying someone’s avatar could enable malicious actors to access their personal data or commit outright identity fraud.
To combat the potential theft of our personal likenesses – and the biometric data used to create them – Metaphysic makes it possible to securely store your avatar as an NFT. This theoretically means that users can keep ownership of their image. With the metaverse rapidly growing, implementing legislation that will keep this data secure is a matter of urgency.
Currently, the issue remains relatively small and truly convincing deepfakes are somewhat isolated incidents, but as the technology improves it will be vital that people’s likenesses are protected. Hopefully, as the industry grows and legitimate companies such as Metaphysic blossom along with it, legislation will be put in place to protect people’s likenesses. However, even drafting the current iteration of the Online Safety Bill has been a lengthy process, so creating equivalent legislation that covers the entire metaverse will be no easy task.