Deepfake technology, recently identified as a credible cyberthreat, reinforces the notion that an attack can come from anywhere. Though the creation of deepfakes can be traced back to 2017, the technique gained mainstream attention in 2018, when a video featuring former US President Barack Obama and comedian Jordan Peele made the rounds on social media. The video was created by Peele's production company using FakeApp, an AI face-swapping tool, and Adobe After Effects.
According to threat intelligence platform IntSights, while deepfake technology isn't as popular among cyberattackers as other methods, it is an emerging threat that security teams must watch for. IntSights also reported a 43% increase in hacker chatter about deepfakes on dark web forums since 2019.
An amalgamation of the terms "deep learning" and "fake," a deepfake is a synthesized image, video, or audio file manipulated to make the observer believe it is real. Deepfakes are created using deep learning methods and artificial intelligence software.
The Obama-Peele video mentioned above was made to raise awareness of deepfakes: it shows a fabricated Obama warning viewers to be careful about the content they consume online.
Deepfakes are created using deep learning methodologies, most notably one called a generative adversarial network (GAN). A GAN is built from neural networks, machine learning models loosely inspired by the way the human brain processes information.
The GAN used to create deepfakes consists of two competing neural networks, each run by its own algorithm: the generator and the discriminator. In simple terms, the generator creates fake content and sends it to the discriminator, which compares the fake content to the target content and identifies the differences between the two. The generator then tries to eliminate those differences and fool the discriminator with improved fake content. This cycle continues until a near-perfect fake file is generated.
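The adversarial loop described above can be sketched in miniature. The toy below is an illustrative sketch, not a real deepfake generator: the "real" data is just a 1-D Gaussian, the discriminator is logistic regression, and the generator is a linear map, but the generator-versus-discriminator training cycle is the same one a deepfake GAN runs on images.

```python
import math
import random

random.seed(0)

# Assumed toy setup: "real" data comes from a 1-D Gaussian.
REAL_MEAN, REAL_STD = 4.0, 1.25

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Discriminator D(x) = sigmoid(wd*x + bd): outputs P(x is real).
wd, bd = 0.1, 0.0
# Generator G(z) = wg*z + bg with noise z ~ N(0, 1).
wg, bg = 1.0, 0.0

lr_d, lr_g, batch, steps = 0.05, 0.02, 64, 3000

for step in range(steps):
    # --- Train the discriminator to separate real from generated samples ---
    grad_wd = grad_bd = 0.0
    for _ in range(batch):
        x_real = random.gauss(REAL_MEAN, REAL_STD)
        x_fake = wg * random.gauss(0, 1) + bg
        d_real = sigmoid(wd * x_real + bd)
        d_fake = sigmoid(wd * x_fake + bd)
        # Gradients of the loss -log D(real) - log(1 - D(fake))
        grad_wd += (d_real - 1.0) * x_real + d_fake * x_fake
        grad_bd += (d_real - 1.0) + d_fake
    wd -= lr_d * grad_wd / batch
    bd -= lr_d * grad_bd / batch

    # --- Train the generator to fool the updated discriminator ---
    grad_wg = grad_bg = 0.0
    for _ in range(batch):
        z = random.gauss(0, 1)
        x_fake = wg * z + bg
        d_fake = sigmoid(wd * x_fake + bd)
        # Gradient of the non-saturating generator loss -log D(G(z))
        grad_wg += (d_fake - 1.0) * wd * z
        grad_bg += (d_fake - 1.0) * wd
    wg -= lr_g * grad_wg / batch
    bg -= lr_g * grad_bg / batch

# After training, generated samples should cluster near the real mean.
fakes = [wg * random.gauss(0, 1) + bg for _ in range(1000)]
mean_fake = sum(fakes) / len(fakes)
print(f"generated mean ~ {mean_fake:.2f} (real mean = {REAL_MEAN})")
```

The generator starts out producing samples centered at 0; by repeatedly moving in the direction that raises the discriminator's "real" score, it drags its output distribution toward the real data. A deepfake GAN does the same thing, only the generator emits pixels and the discriminator is a deep convolutional network.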
While the detrimental effects of deepfake technology have been acknowledged, so have its advantages, particularly in the film industry. Deepfakes have been viewed as a cheaper alternative to expensive CGI techniques and as a way to digitally revive deceased actors on screen. This has led to the birth of several deepfake applications that anybody can use; creating deepfakes has never been easier.
The accessibility and effectiveness of deepfake technology have resulted in cybercriminals using it for social engineering attacks.
Even though deepfake technology began as an attempt to create fake videos or images, it is now used to clone voices as well. Cybercriminals could use this to carry out social engineering attacks such as vishing (voice phishing) calls or fraudulent payment requests that impersonate an executive's voice.
Apart from these, deepfakes can also have a profound political and social impact, since they can influence the political decisions of the people who view them or cause an immediate reaction among the masses when fake content goes viral.
Dealing with deepfake videos or images can be approached in two ways. One is to prevent authentic content from being misused in fake videos or images, for example by registering it with blockchain mechanisms. The other is to use AI/ML technology to detect whether content has been altered.
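The first approach, registering authentic content so later alterations are detectable, can be illustrated with a blockchain-style hash chain. The names and structure below are purely illustrative (this is a toy ledger, not a real blockchain): each block records a SHA-256 fingerprint of a media file and links to the previous block, so an edited file no longer matches any registered fingerprint.

```python
import hashlib
import json

def fingerprint(data: bytes) -> str:
    """SHA-256 fingerprint of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceChain:
    """Toy hash chain registering fingerprints of authentic media."""

    def __init__(self):
        self.blocks = []  # each block links to the previous block's hash

    def register(self, media: bytes) -> dict:
        prev_hash = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        block = {"media_fingerprint": fingerprint(media), "prev_hash": prev_hash}
        # Hash the block itself so tampering with the ledger is also detectable.
        block["block_hash"] = fingerprint(json.dumps(block, sort_keys=True).encode())
        self.blocks.append(block)
        return block

    def is_registered(self, media: bytes) -> bool:
        fp = fingerprint(media)
        return any(b["media_fingerprint"] == fp for b in self.blocks)

chain = ProvenanceChain()
chain.register(b"authentic interview footage")
found = chain.is_registered(b"authentic interview footage")
not_found = chain.is_registered(b"authentic interview footage with a swapped face")
print(found, not_found)  # True False
```

Because any change to the media bytes changes the fingerprint, a manipulated copy fails the lookup; chaining the block hashes means the registry itself cannot be quietly rewritten either.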
Digital signatures and multi-factor authentication are other suggested methods to prevent access to video, audio, or images that could be used to create convincing deepfakes. While digital signatures are a great way to authenticate binary files, video content may be better served by a watermark than a digital signature, which is possible through blockchain, as mentioned above. Enforcing multi-factor authentication also goes a long way toward preventing unauthorized access to the media assets from which deepfakes can be made.
© 2021 Zoho Corporation Pvt. Ltd. All rights reserved.