Deepfakes: what to do when nothing is what it seems
They are increasingly finding their way into the hybrid aggression campaigns of states and criminal organisations. Deepfakes, audio-visual forgeries fabricated with AI, pose a technological security threat that can only be countered with equally sophisticated technology.
The most recent controversy over the use of extremely realistic deepfakes comes from TikTok. The Chinese social media platform currently hosts an account, Unreal Reeves, featuring videos of the actor Keanu Reeves in mundane situations: singing, listening to music or joking about his career.
Journalist Eric Hal Schwartz notes on his Voicebot blog that barely a third of the users commenting on the videos seem aware that they are not watching the real Reeves but an imitation generated with synthetic media. Although the account name makes the fakery explicit, many commenters appear convinced they are seeing the actor himself, or "a twin brother separated at birth".
As Schwartz explains, this is not a malicious deepfake, i.e. a counterfeit passed off as real with criminal intent, but a recreational application, apparently built with Unreal Engine, Epic Games' popular 3D graphics engine. Last summer, the US startup Metaphysic competed in the 17th season of the reality TV show America's Got Talent (AGT) by performing deepfakes of deceased artists such as Elvis Presley, and the act made it to the finals. The audience found the virtual resurrection of the King of Rock irresistible. The performance also relied on the state-of-the-art Respeecher voice synthesis software, so that nothing in it was real: neither the images nor the music.
Increasingly convincing (and unsettling)
These two anecdotes perfectly illustrate how far deepfakes have come since the technology first became available around 1997. Even the synthesised images of Barack Obama that caused a stir in 2018 now look rudimentary compared with what can be generated today with a sophisticated graphics engine and, yes, extensive use of artificial intelligence algorithms based on deep learning.
Devin Coldewey, a writer at the technology news site TechCrunch, explains that speech synthesis, which has lagged behind image synthesis in realism, has also just taken a decisive step forward with systems such as VALL-E, "which makes acoustic deepfakes simple, fast and trivial".
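VALL-E itself has not been released publicly, but open-source systems already give a taste of how trivial zero-shot voice cloning has become. The following is a minimal sketch using the open-source Coqui TTS library, not VALL-E; the model name reflects a recent release of that library, and the file paths are illustrative assumptions.

```python
# Minimal zero-shot voice cloning sketch with the open-source Coqui TTS
# library (pip install TTS). Not VALL-E, which has not been released;
# file paths here are hypothetical placeholders.
from TTS.api import TTS

# XTTS v2 is a multilingual model that clones a voice from a short sample.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="I never said this. Not a single word of it.",
    speaker_wav="three_second_reference.wav",  # hypothetical voice sample
    language="en",
    file_path="cloned_voice.wav",
)
```

That a handful of lines and a few seconds of reference audio are all it takes is precisely why Coldewey calls acoustic deepfakes "simple, fast and trivial".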
Every possible use, malicious or not
Deepfakes, like any other technological development, are a neutral tool. The technology has been put to benign uses, such as the Obama image syntheses mentioned above, voiced by film director Jordan Peele, which were produced precisely to raise public awareness of the associated risks.
Far more alarming, of course, is how criminal organisations are using deepfakes, a practice detected with increasing intensity for at least three years. The most common scheme is deepfake blackmail: sending fabricated images or voice clips of someone the criminals claim to have kidnapped. Identity theft is also becoming more frequent, as a group of researchers report in the article How Underground Groups Use Stolen Identities and Deepfakes, published by Trend Micro, whether to defeat control and verification systems or to inflict reputational damage.
In the public sphere, deepfakes have proven to be a very effective tool for propaganda and disinformation. Some cases, such as the smear campaigns using synthesised images of Mauricio Macri, Angela Merkel or Donald Trump, can be as effective as they are anecdotal. The truly ominous cases are politically motivated deepfakes that pretend to be real, such as the fake videos of Joe Biden allegedly showing signs of advanced cognitive impairment that circulated on social media during the 2020 US presidential campaign.
The effectiveness of such poisoning campaigns became clear in a Fox News poll conducted days after the videos were revealed to be fakes: 37% of respondents who had seen the footage remained convinced it was real, despite an official denial.
An unstoppable trend?
Moravec explains that the number of deepfake videos detected online has continued to grow geometrically since 2018: there were approximately 14,000 in 2019, and today there are hundreds of thousands. Our ability to detect them has also declined. In June 2020, the Deepfake Detection Challenge, a collaborative initiative to develop synthetic-image detection software, yielded promising results: the best entries, created by independent developers and by computer engineers from universities such as Oxford, Cornell and Berkeley, detected deepfakes with an accuracy of up to 82%. Current results are significantly lower.
This again proves that with disruptive technologies open to criminal use, the capacity for aggression tends to outpace the capacity to respond. Sophisticated systems based on artificial neural networks, such as Expression Manipulation Detection (EMD), developed by a group of engineers at the University of California, Riverside, achieved very promising initial results. However, that initial advantage tends to fade as soon as deepfake creators adapt, folding the criteria used by the detection programmes back into their own artificial intelligence algorithms.
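To make this cat-and-mouse dynamic concrete, here is a minimal sketch of the kind of frame-level neural classifier such detectors are typically built on. This is not EMD itself, whose architecture is more involved; the dataset folder layout is a hypothetical assumption for illustration.

```python
# Minimal sketch of a frame-level deepfake classifier, assuming a folder of
# face crops labelled real/fake. Not EMD; just the common baseline pattern.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Standard ImageNet preprocessing for a pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: faces/real/*.jpg and faces/fake/*.jpg
dataset = datasets.ImageFolder("faces", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Pretrained ResNet-18 with its final layer swapped for a binary head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # short demo run
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The weakness is built into the design: such a model learns whatever statistical irregularities separate today's fakes from real footage, and nothing stops a forger from training a generator against its outputs until those irregularities disappear.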
A glimmer of hope
This past November, Intel unveiled FakeCatcher, a detection platform that claims to detect deepfakes with an accuracy of over 96%. Ilke Demir, a senior researcher at Intel Labs, noted at the product launch that, for the first time, detection software intended for home use "delivers results in a matter of milliseconds".
According to Demir, "most of the deep learning-based detectors that have been used so far try to detect irregularities in videos, which is becoming less and less effective, given the increasing technical perfection of deepfake creation applications". FakeCatcher, in contrast, compares fake videos with real ones by paying special attention to "blood flow", i.e. the subtle pigmentation changes that occur in the human body as the heart pumps blood, a signal the research literature calls photoplethysmography (PPG).
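To give a sense of what such a blood-flow signal looks like in practice, here is a minimal sketch of the general idea behind remote photoplethysmography: average the green channel over a detected face region frame by frame, then check for a dominant frequency in the human heart-rate band. This illustrates the principle only, not Intel's pipeline; the input filename and the 30 fps frame rate are assumptions.

```python
# Minimal sketch of remote photoplethysmography (rPPG): real faces carry a
# faint periodic colour change driven by the heartbeat; many deepfakes do not.
# Illustrative only -- FakeCatcher's actual pipeline is far more elaborate.
import cv2
import numpy as np

FPS = 30.0  # assumed frame rate of the input clip
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input file
signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        continue
    x, y, w, h = faces[0]
    # Mean green-channel intensity over the face: a crude proxy for the
    # skin-tone fluctuations caused by blood flow.
    signal.append(frame[y:y + h, x:x + w, 1].mean())
cap.release()

sig = np.asarray(signal, dtype=np.float64)
if len(sig) < FPS * 2:
    raise SystemExit("Clip too short or no face detected.")
sig -= sig.mean()

# A live face should show a clear spectral peak in the plausible heart-rate
# band, roughly 0.7-3 Hz (42-180 bpm).
freqs = np.fft.rfftfreq(len(sig), d=1.0 / FPS)
power = np.abs(np.fft.rfft(sig)) ** 2
band = (freqs >= 0.7) & (freqs <= 3.0)
peak_hz = freqs[band][np.argmax(power[band])]
print(f"Dominant pulse-band frequency: {peak_hz:.2f} Hz (~{peak_hz * 60:.0f} bpm)")
```

A live face should produce a clear spectral peak around its pulse rate, while purely synthetic faces often lack any coherent signal in that band. If these good omens from FakeCatcher prove true, it will confirm the premise of this article: problems posed by technology are best solved with technology.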