AI and Deepfakes: a dangerous duo in the wrong hands
The rapid growth of Artificial Intelligence has made it possible to transform key sectors such as medicine, finance, energy, and transportation. In the current landscape, Artificial Intelligence has immense potential to build a better society. However, AI is a double-edged sword: it can also be exploited to develop solutions with negative or malicious purposes.
In cybersecurity, Artificial Intelligence has enabled the evolution and refinement of social engineering techniques, making cyberattack campaigns more effective and harder to detect.
In this context, new AI-driven frauds have emerged, most notably the creation of deepfakes.
What is a deepfake?
The term deepfake is a combination of the terms deep learning and fake. It refers to the technique of manipulating audiovisual content using complex Artificial Intelligence algorithms, such as Deep Learning, to produce a highly realistic result. As a consequence, it is very difficult for an ordinary human observer to determine whether the audiovisual content they are consuming is genuine or fake.
In this context, deepfake technology for identity theft has spread across digital platforms and into personal communication, the film industry, and the corporate and government sectors. While it opens up opportunities for creativity and innovation in digital content production, it also presents considerable risks to privacy and security.
Although deepface and deepvoice are not widely established terms within the deepfake field, it is useful to explain them separately to understand their applications in the world of cybersecurity:
- Deepfaces: highly realistic images generated from scratch that depict completely fictitious people or scenes.
- Deepvoices: synthetic human voices, generated from written text, that sound very natural. Beyond creating a voice from scratch, it is possible to imitate a real person's voice by training the AI on samples of it: with just a few minutes of audio, anyone could clone a person's voice and compromise their security and privacy.
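To make the underlying Deep Learning technique more concrete, the sketch below shows the generator/discriminator pair at the heart of the generative adversarial networks (GANs) commonly used for deepface-style image synthesis. It is a minimal, untrained toy with illustrative layer sizes chosen for this example, not a real deepfake system: the generator maps random noise to a synthetic "image", and the discriminator scores how "real" that image looks.

```python
# Minimal, untrained sketch of a GAN-style generator/discriminator pair.
# All sizes are illustrative assumptions; real systems are vastly larger
# and are trained adversarially on huge image datasets.
import numpy as np

rng = np.random.default_rng(seed=42)

LATENT_DIM = 64        # size of the random noise vector fed to the generator
IMG_PIXELS = 28 * 28   # a tiny grayscale "image", flattened to a vector

# Randomly initialised weights stand in for what training would learn.
W_gen = rng.normal(scale=0.1, size=(LATENT_DIM, IMG_PIXELS))
W_disc = rng.normal(scale=0.1, size=(IMG_PIXELS, 1))

def generate(noise):
    """Generator: map noise vectors to synthetic images with pixels in [-1, 1]."""
    return np.tanh(noise @ W_gen)

def discriminate(images):
    """Discriminator: score each image's 'realness' as a probability in (0, 1)."""
    logits = images @ W_disc
    return 1.0 / (1.0 + np.exp(-logits))   # sigmoid

noise = rng.normal(size=(8, LATENT_DIM))   # a batch of 8 noise vectors
fakes = generate(noise)                    # shape (8, 784): 8 fake images
scores = discriminate(fakes)               # shape (8, 1): realness scores
print(fakes.shape, scores.shape)
```

During real training, the two networks compete: the discriminator learns to tell real photos from generated ones, while the generator learns to fool it, which is what eventually yields the photorealistic fictitious faces described above.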
Real cases of deepfake and AI attacks
Some well-known cases of deepfakes, that is, the fraudulent alteration or creation of images, audio, and video, include the following:
- Deepfake of Vladimir Putin and Donald Trump: In 2019, a deepfake video depicting Vladimir Putin and Donald Trump was shared online, in which both political leaders appeared to discuss serious topics such as arms control and international politics. This type of content highlights how deepfakes can be used to misinform and manipulate public opinion.
- Deepfake of actors in pornographic films: One of the most controversial uses of deepfakes has been the creation of fake pornographic videos showing celebrities and public figures in compromising scenes, as in the cases of Emma Watson, Rosalía, and Taylor Swift. This type of content has raised legal and ethical concerns about privacy and consent.
- Barack Obama deepfake: In 2018, a deepfake video of former US President Barack Obama was created by Jordan Peele's Monkeypaw Productions. In the video, Obama appears to make unusual statements and warns about the dangers of deepfakes, which was done to raise awareness of this technology and its possible malicious uses. The video is publicly available on YouTube.
- One of the most notable cases of voice spoofing involved the CEO of a British energy company in March 2019. In this incident, fraudsters used artificial intelligence to imitate the voice of the CEO's superior. The CEO received a phone call from his supposed boss instructing him to transfer €220,000 to an external bank account. Given how credible and realistic the voice on the call sounded, he made the transfer without verifying the caller's identity.