As technology advances and offers endless possibilities for all business sectors, cybercriminals also take advantage of its evolution to enhance their attack strategies.
This is the case with Deepfake, a threat that uses Artificial Intelligence and Machine Learning resources to simulate real communication between individuals and carry out cyber scams.
It resembles phishing attacks, which aim to solicit and steal confidential information, causing serious damage to victims.
While phishing typically takes the form of a cleverly written email targeting a specific person, a Deepfake is a well-crafted video or voice recording that can be used to defame, extort, or scam.
This type of attack can cause companies not only reputational but also heavy financial damage. This is what happened to a UK energy company in 2019: cybercriminals used a recording simulating the voice of the company's CEO to request a bank transfer, and the company suffered a loss of €220,000.
To prevent this type of attack, it is critical that organizations are prepared to deal with evolving cyber threats and make employees aware of the power of Deepfake attacks and how they can negatively impact the entire enterprise.
Lack of information facilitates attacks
According to the survey "Infodemic and the impacts on digital life", a large part of the Latin American respondents said they cannot recognize when a video has been edited using Deepfake techniques. The survey also shows that 66% of Brazilians are unaware of the existence of this technique.
According to the report, video and audio manipulation technologies in themselves are not malicious, as they allow the film industry, for example, to offer increasingly incredible experiences. However, the use of Deepfake tends to become increasingly unnoticeable and, as with any innovative technology, its misuse carries risks.
How are Deepfake attacks carried out?
The following are the most common types of Deepfake attack:
In audio recordings, cybercriminals impersonate another person, in attacks known as Deepvoice or Audiofake. New technological capabilities allow them to reproduce person-specific characteristics such as voice tone and mannerisms. In this way they can trick the victim into disclosing confidential information or authorizing financial transactions.
In video recordings, criminals use the victim's face, manipulating images and sound to bypass biometric authentication or attack the victim's reputation and credibility. These videos often go viral more easily, as social networks and YouTube frequently classify them as parodies and do not remove them.
Therefore, they have a high potential to become a weapon in the hands of cybercriminals and malicious people.
The results become more worrying if one considers that, in addition to videos being shared on social networks or WhatsApp, fraud has already been reported on job search platforms. Criminals use the technique to create fake profiles in order to trick victims and gain access to their information.
One should also take into account the incidents where Deepfake has been used to imitate the voice of business leaders or public figures with the intention of creating or amplifying misinformation. This risk is likely to grow as the elections in Brazil approach.
If in the past a faked video was easily exposed by its crude montage, today, with Artificial Intelligence and Machine Learning resources, the distinction is much harder to make.
Criminal use of Deepfake
The Europol Innovation Lab (European Union Agency for Law Enforcement Cooperation) recently published the report 'Facing Reality? Law enforcement and the challenge of deepfakes'. The document provides insight into the criminal use of Deepfake technology in serious crimes such as CEO fraud, evidence tampering, and the production of non-consensual pornography. Advances in Artificial Intelligence and the public availability of large databases of images and videos mean that the volume and quality of tampered content are increasing, facilitating the proliferation of these crimes.
The report further states that authorities will need to improve the skills and technologies available to law enforcement officers if they are to combat the criminal use of Deepfakes. Examples of these new capabilities range from implementing technical and organizational safeguards against video tampering, to developing Deepfake detection software that also uses Artificial Intelligence.
A debate is now underway about creating certification for videos and collaborative content. Authentication methods, including blockchain, may be useful in combating Deepfake attacks.
Blockchain can be used to require users to provide proof of their identity before content can be disseminated on their behalf. Blockchain tools can also be used to verify whether content has been edited or altered from its original version. However, experts argue that decentralization of authentication is key, so that no single entity has full authority to validate content.
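The verification idea described above rests on cryptographic fingerprinting: if a hash of the original content is recorded somewhere tamper-resistant (such as a blockchain), any later edit changes the hash and the check fails. A minimal sketch in Python, with illustrative byte strings standing in for real video data:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 digest acts as a tamper-evident fingerprint of the content:
    # even a one-bit change produces a completely different hash
    return hashlib.sha256(data).hexdigest()

original = b"frame-bytes-of-the-original-video"
recorded = fingerprint(original)  # imagine this anchored on a blockchain at publication

tampered = b"frame-bytes-of-the-EDITED-video"

print(fingerprint(original) == recorded)  # True: content unchanged
print(fingerprint(tampered) == recorded)  # False: any alteration breaks the match
```

The hash alone proves integrity, not authorship; that is why the certification debate pairs fingerprints with identity proofs.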
In addition to blockchain, the use of multi-factor authentication or digital signatures is also a viable way to increase the security of digital content.
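To illustrate the signing idea: a publisher attaches an authentication tag to content, and anyone holding the verification key can check that the content really came from that publisher and was not altered. Real deployments use asymmetric signatures (e.g. Ed25519); the sketch below uses Python's standard-library `hmac` with a hypothetical shared key to show the same verify-before-trust pattern:

```python
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical key, for illustration only

def sign(content: bytes) -> str:
    # The publisher computes this tag when releasing the content
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    # Constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(sign(content), tag)

video = b"original audio/video payload"
tag = sign(video)

print(verify(video, tag))                 # True: authentic and unmodified
print(verify(b"deepfaked payload", tag))  # False: tag no longer matches
```

A Deepfake substituted for the signed original fails verification, because the attacker cannot produce a valid tag without the key.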
It is also important to map network traffic and analyze the data continuously, so that new tools can be added to detect and identify cybercrime automatically. Companies will thus be better prepared to deal with evolving threats.
To find out how to protect your company from cyber threats, contact ISH's team of experts and learn about the best information security solutions.