Deepfakes: A real threat to private individuals and companies

Deepfakes are videos, images, or audio recordings created with the help of artificial intelligence that look real even though they are fakes. They first became widely known through videos in which the voices and facial features of prominent actors were imitated with computer programs. The video deepfakes with so-called face swaps from 2017 and 2018 were easy to recognize with the naked eye, because the swapped face did not match the body. In recent years, however, the technology has made another leap, and the differences in well-made deepfakes are now barely recognizable. This advanced technology saves the film industry time on reshoots, but it also offers cybercriminals around the world a new starting point for social engineering attacks.

Deepfakes are already a cyber threat

As early as 2019, the credit insurer Euler Hermes drew attention to a case at the UK branch of a German energy company. A cybercriminal had faked the CEO's voice to trick the British managing director into transferring a large sum to a Hungarian supplier's bank account. The scheme corresponds to classic voice phishing, also known as vishing. Apparently, artificial intelligence made it possible to imitate the CEO's voice so perfectly that the British managing director never became suspicious.

This form of CEO fraud has since become the textbook example of deepfakes in cybersecurity. In March 2021, the FBI introduced the term Business Identity Compromise (BIC) for this advanced form of attack, partly to prepare the ground for criminal prosecution. The US agency warns that where deepfake tools are used to create “synthetic” corporate personas or to imitate existing employees, the financial and reputational impact on the affected companies and organizations is likely to be severe.

In the meantime, the danger has also been recognized in Germany, and politicians such as Markus Söder are taking up the topic: “… we have to defend ourselves against so-called deepfakes, where even images and speech are manipulated with artificial intelligence on the internet. There is still no criminal offense covering this. We should introduce one quickly.”

This warning is not exaggerated. Deepfake expert Dr. Lydia Kostopoulos believes the technology is now so good that anyone with enough patience, time, and computing power can use it. She says: “It is very likely that cybercriminals will take advantage of this technology as it becomes more and more accessible to the public. With software like Lyrebird (now part of Descript), you can clone any person's voice if you have an audio file of them. Lots of people give talks on YouTube or have spoken on a podcast, so it is easy to fake them.”

In addition to phishing attacks, deepfakes could also be used for blackmail. Female employees are particularly at risk, as software now exists that can generate pornographic material from images of women and upload it to well-known platforms. It is conceivable, for example, that a managing director or board chair could be coerced into embezzling funds with such an image deepfake. And this is just one of many scenarios, because criminal ingenuity knows no limits.

Expert Kostopoulos says: “In the end it depends on the intention. Criminals think about what they want to achieve and how they will get there beforehand; they do not play around with the technology. Cybercriminal gangs are highly professional and will only use deepfakes once all other social engineering methods have failed.” After all, it is still easier to reach the desired goal with phishing emails, especially in combination with ransomware.

The greatest danger, however, is that deepfakes will be used in real-time communication, whether in voice or video calls. Really well-made deepfakes are simply too difficult for humans to spot. Unfortunately, reliable detection of fakes in real time is still a long way off because, as is so often the case, cybercriminals are one step ahead of the defenders.

Protective measures

The 2019 case shows that deepfakes are already being used in vishing, and there are plenty of examples on video platforms of how far the technology has developed, especially when it comes to combining image and sound. Companies, and above all their management and supervisory boards, should therefore consider now how they can protect themselves personally against social engineering. They should limit their presence on social media, especially with regard to image, video, and audio files, and protect their login data.

In addition, it is important to distinguish which deepfake scenario is involved. If, as in the case described above, it is a vishing scenario aimed at extorting money, an organization can introduce several verification steps that must be completed before transactions are initiated, as sketched below. If a disparaging deepfake of a brand, a political figure, or a company has been made public, the responsible communications department must have a crisis communication plan ready to limit the damage.
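To make this concrete, here is a minimal sketch in Python of what such verification steps could look like: transfers requested over voice or video channels, or above a certain amount, are only released after an out-of-band callback and a second approval (the four-eyes principle). All names here, such as PaymentRequest and verify_via_callback, are illustrative assumptions, not a real API.

# Minimal sketch of an out-of-band payment verification policy.
# All names (PaymentRequest, THRESHOLD_EUR, verify_via_callback)
# are hypothetical, for illustration only.

from dataclasses import dataclass

THRESHOLD_EUR = 10_000  # amounts above this always need extra verification

@dataclass
class PaymentRequest:
    requester: str          # who asked for the transfer (e.g. the "CEO" caller)
    beneficiary_iban: str
    amount_eur: float
    channel: str            # "phone", "video", "email", ...

def verify_via_callback(request: PaymentRequest) -> bool:
    """Call the requester back on a number from the internal directory,
    never on the number the request came from. Stubbed for the sketch."""
    print(f"Calling back {request.requester} via directory number...")
    return input("Identity confirmed on callback? [y/N] ").lower() == "y"

def second_approver_signs_off(request: PaymentRequest) -> bool:
    """Four-eyes principle: a second authorized person reviews the transfer."""
    print(f"Second approver reviews {request.amount_eur} EUR "
          f"to {request.beneficiary_iban}")
    return input("Second approval granted? [y/N] ").lower() == "y"

def release_payment(request: PaymentRequest) -> bool:
    # Requests arriving over voice or video alone are never trusted
    # directly -- exactly the channels deepfakes target.
    if request.channel in {"phone", "video"} or request.amount_eur > THRESHOLD_EUR:
        if not verify_via_callback(request):
            return False
        if not second_approver_signs_off(request):
            return False
    print("Payment released.")
    return True

The key design choice is that the callback uses a number from the internal directory rather than the caller's own number, so that even a perfectly cloned voice cannot confirm itself.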

Most importantly, there needs to be more education, at both the corporate and national level, about how deepfakes work and how easily they can be created. It makes no difference whether the attack relies on cloned voices or faked facial features in an image. Education about disinformation and training in analyzing information would also be very useful. There are many defense strategies, but the right one depends on the situation it is meant to address.

Conclusion

In the end, however, the realization remains that it is difficult for humans to recognize deepfakes. The algorithms that generate them are becoming ever more sophisticated. But technology can also be used to detect deepfakes. Ultimately, as with all technological developments, it is a race, and the well-known cat-and-mouse game between cybercriminals and defenders simply gains another variant. Technology alone will therefore not solve the problem. What is needed is better training for those responsible, celebrities, employees, and ultimately everyone who could become a victim of social engineering. Security awareness training that encourages critical thinking, compliance guidelines with relevant information and recommended actions, and a strong security culture within the company should at least protect organizations from CEO fraud.
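As an illustration of the detection side, the following is a minimal sketch of frame-level deepfake detection in Python: frames are sampled from a video and scored by a binary image classifier, and the scores are averaged. The weights file deepfake_resnet18.pt is a hypothetical placeholder; in practice such a model would have to be fine-tuned on a forensics dataset, and production detectors use considerably more sophisticated methods.

# Minimal sketch of frame-level deepfake detection: sample frames from
# a video and score each with a binary CNN classifier. The weights file
# "deepfake_resnet18.pt" is hypothetical; a real model would be
# fine-tuned on a deepfake forensics dataset.

import cv2
import torch
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(num_classes=2)  # classes: 0 = real, 1 = fake
model.load_state_dict(torch.load("deepfake_resnet18.pt"))  # hypothetical weights
model.eval()

def fake_probability(video_path: str, every_nth: int = 30) -> float:
    """Average the 'fake' probability over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(batch), dim=1)
            scores.append(probs[0, 1].item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

print(f"Estimated fake probability: {fake_probability('clip.mp4'):.2f}")

Averaging per-frame scores is the simplest possible aggregation; real detectors also look at temporal inconsistencies between frames, which single-frame classifiers cannot see.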
