AI and Deepfakes: a dangerous duo in the wrong hands

Celia Catalán



The rapid growth of Artificial Intelligence has made it possible to transform key sectors such as medicine, the economy, energy, and transportation. In the current landscape, Artificial Intelligence has immense potential to build a better society. However, AI is a powerful double-edged tool that can also be exploited to develop solutions with negative or malicious purposes.

In cybersecurity, Artificial Intelligence has driven the evolution and refinement of social engineering techniques, enabling cyberattack campaigns that are more effective and harder to detect.

In this context, new AI-driven frauds have emerged, most notably the creation of deepfakes.

What is a deepfake?

The term deepfake combines deep learning and fake. It refers to the technique of manipulating audiovisual content with complex Artificial Intelligence algorithms, such as deep learning, to produce a highly realistic result. As a consequence, it is very difficult for an ordinary human observer to determine whether the audiovisual content they are consuming is genuine or fake.

In this context, deepfake technology for identity theft has spread across digital platforms and into personal communications, the film industry, and the corporate and government sectors. While it opens up opportunities for creativity and innovation in digital content production, it also poses considerable risks to privacy and security.

Although deepface and deepvoice are not widely recognized terms within the deepfake field, it is worth explaining them separately to understand their applications in cybersecurity:

  • Deepfaces: highly realistic images of people created entirely from scratch that are completely fictitious.
  • Deepvoices: this technology generates natural-sounding synthetic human voices from written text. Beyond generating a voice from scratch, it is possible to fake the voice of a real person by training the AI with samples of that voice. With just a few minutes of audio, any user could clone someone's voice and compromise their security and privacy (a minimal sketch of this follows the list).
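To illustrate how low the barrier has become, here is a minimal sketch of zero-shot voice cloning, assuming the open-source Coqui TTS library and its XTTS v2 model (neither is named in this article, and the file names are hypothetical placeholders); it is an example under those assumptions, not a recipe tied to any specific tool:

```python
# Minimal voice-cloning sketch with the open-source Coqui TTS library
# (pip install TTS). XTTS v2 supports zero-shot cloning: a short reference
# recording of a speaker is enough to synthesize new speech in their voice.
from TTS.api import TTS

# Load the multilingual XTTS v2 model (downloaded on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# "reference_voice.wav" is a hypothetical placeholder for a few seconds
# of the target speaker's real voice.
tts.tts_to_file(
    text="This sentence was never spoken by the person you are hearing.",
    speaker_wav="reference_voice.wav",
    language="en",
    file_path="cloned_output.wav",
)
```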


Real cases of deepfake and AI attacks

Some well-known cases of deepfakes, that is, the fraudulent alteration or creation of images, audio, and video, are the following:

  • Deepfake of Vladimir Putin and Donald Trump: in 2019, a deepfake video depicting Vladimir Putin and Donald Trump was shared online, in which both political leaders appeared to discuss serious topics such as arms control and international politics. This type of content highlights how deepfakes can be used to misinform and manipulate public opinion.
  • Deepfakes of actors in pornographic films: one of the most controversial uses of deepfakes has been the creation of fake pornographic videos showing celebrities and public figures in compromising scenes, as happened to Emma Watson, Rosalía and Taylor Swift. This type of content has raised legal and ethical concerns about privacy and consent.
  • Barack Obama deepfake: in 2018, a deepfake video of former US President Barack Obama was created by Jordan Peele's Monkeypaw Productions. In the video, Obama appears to make unusual statements and warns about the dangers of deepfakes; it was produced to raise awareness of this technology and its possible malicious uses. The video is available on YouTube.
  • One of the most notable cases of voice spoofing involved the CEO of a British energy company in March 2019. In this incident, fraudsters used artificial intelligence to imitate the voice of the CEO's boss. The executive received a phone call from his supposed boss instructing him to transfer €220,000 to an external bank account. Given how credible and realistic the voice on the phone sounded, he made the transfer without verifying the caller's identity.


One more step in deepfake…

Conventional deepfakes have become so sophisticated that deepfake exercises are now carried out within video calls, altering an individual's appearance and voice in real time during a live call.

A major problem arises here, since this type of scam becomes far harder to detect at scale. Although awareness of these impersonation exercises is still low, the high-profile deepfake cases of recent years make it likely that an Internet user will distrust videos or images showing unusual content, or that a senior manager will be more hesitant when faced with a call from their boss requesting, once again, unusual things. Real-time deepfakes, however, currently pose a great challenge because of how difficult they are to identify. And you... would you doubt the identity of the person you are seeing and hearing on the other side of the screen in real time?

However, this technique still has small flaws with head movement and perspective. 90º head rotations or hands passing in front of the face are everyday gestures that deepfake models cannot yet replicate in a completely realistic way. So, while detection techniques for this type of attack are being developed and improved, it can be useful to ask the person on the other end of a video call to turn their head or raise an arm (although this is not the most recommended solution!), as sketched below.
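As a rough illustration of that heuristic, the following is a minimal sketch, assuming Python with OpenCV and its bundled Haar cascades (tools not mentioned in this article), that flags frames in which a profile face appears, i.e., the moment the caller actually turns their head:

```python
# Minimal sketch of the "turn your head" challenge: OpenCV's bundled Haar
# cascade for profile faces fires when a head is rotated sideways, a pose
# that some real-time face-swap models still struggle to render well.
import cv2

profile = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_profileface.xml"
)

cap = cv2.VideoCapture(0)  # local webcam standing in for the call feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = profile.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        print("Profile detected: the caller turned their head")
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
```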

Deepfakes available to everyone

One of the most worrying aspects of deepfake is its accessibility. Currently, there are many free or very low-cost software options and applications that allow users to create deepfakes with relative ease.

There are platforms, such as https://thispersonnotexist.org/, that generate portraits which, despite looking like images of real people, are completely fake.
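Under the hood, such platforms typically sample a random latent vector and map it through a trained generator network (a GAN). The toy sketch below, a minimal untrained DCGAN-style generator in PyTorch (an assumption; the site's actual model is not disclosed here), only shows the mechanism; production services rely on large trained models:

```python
# Toy sketch of GAN-based face generation: a random latent vector is
# mapped through a generator network to an image. This tiny generator is
# untrained (its output is noise); with trained weights, each random draw
# would yield a new photorealistic face of a person who does not exist.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.ReLU(),  # 1x1 -> 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # 4x4 -> 8x8
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),            # 8x8 -> 16x16 RGB
        )

    def forward(self, z):
        return self.net(z)

g = TinyGenerator()
z = torch.randn(1, 100, 1, 1)  # random latent vector: a new "identity" per draw
fake_image = g(z)
print(fake_image.shape)        # torch.Size([1, 3, 16, 16])
```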

Other well-known tools for modifying visual content are myEdit and PowerDirector. Combined with voice cloners such as Vidnoz, these tools are the perfect kit for cybercriminals seeking to forge and impersonate identities in the digital sphere, and even in government settings.

Finally, DeepFaceLive is a real-time deepfake version of the popular DeepFaceLab software.
A simple search for deepfake resources reveals hundreds of online tools and guides for easily creating new setups and hoaxes.

One more thing…

Two years ago, a Chinese multinational suffered a deepfake-based attack in which it was defrauded of 26 million dollars.

Given the media impact of that attack, and as part of the fight to raise awareness of emerging digital threats, Zerolynx had the opportunity to develop a proof of concept on deepfakes in Teams video calls. The lynxes Javier Martín, Álvaro Caseiro and Daniel Rico, using tools accessible to any Internet user, carried out an impersonation exercise in a safe, controlled environment as an awareness campaign for a company's senior executives.

Impersonating the CEO live during a video call is an exercise that teaches the workforce to be wary of suspicious or unusual events, such as the approval of new invoices, the delegation of management positions, or changes to supplier or customer data. A meeting at which the CEO may or may not be present is a good opportunity to test the security awareness of senior management while training everyone present on risks that will become increasingly common.

Alba Vara, cybersecurity analyst at Zerolynx.

