29/10/2024
CTI
The advent of deepfakes
Cybersecurity Insights
Are you afraid of identity theft? From corporate scams to global-scale manipulation, the rise of deepfakes impersonating public figures is also spreading into the world of work to serve threat actors’ interests. These actors rely on well-established business models, especially in the twists and turns of the dark web. In this episode, CWATCH reviews different use cases of these synthetic productions.
A potential for chaos
A game of impersonation
From scamming employees of a small business to feeding a large disinformation campaign designed to influence the outcome of a presidential election, deepfakes are one of the most impressive applications of AI and part of what we call “synthetic media”. Even when used for entertainment purposes, they can still lead to misinformation.
Deepfake-as-a-service
Cybercriminals have already built successful deepfake businesses on underground marketplaces, making the technology more accessible. However, prices remain high (around USD 16,000) and skilled providers are scarce. Terms can be tailored to clients’ specific needs, whether they come for an account takeover or identity fraud. The rise of this type of service is concerning, as people are able to identify fake speech in deepfakes only 73% of the time, according to researchers at University College London. For more information on underground markets, consult our Threat Landscape 2022-2023.
A wide range of actors
Stirring the pot in society
Leaving the depths of the dark web and coming back to the surface, we find that deepfakes can be highly impactful in the public sphere and for businesses. Several types of profiles participate in the dissemination of deepfakes:
State actors
Some researchers from American universities consider that democracies could also use deepfakes to meet their objectives.
Social media users
Fake accounts, legitimate users, or even influencers use this material to discredit other parties.
Influential public persona
These individuals often have privileged platforms from which to express themselves, and their publications can have a substantial impact.
Scammers
In 2024, a threat actor cloned the voice of Ferrari’s CEO in an attempt to initiate a currency hedging transaction.
Another tool added to the arsenal of state actors
In 2024, an American cybersecurity company disclosed that it had hired a man it believed to be a US-based engineer. It turned out that the new hire was a North Korean threat actor looking to infiltrate the company. The Security Operations Center detected a series of malicious activities emanating from the new hire, though no illegal access was gained and no data was exfiltrated.
Deepfake detection on the rise?
Classification algorithms are trained on datasets containing both real and fake content, as illustrated in the sketch below. Several vendors have already developed deepfake detection solutions to respond to the growing need to uncover fraudulent schemes and behaviours.
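A minimal sketch of this training principle, assuming a hypothetical labelled dataset of video frames stored under deepfake_frames/train/real and deepfake_frames/train/fake (the directory names and hyperparameters are illustrative, not taken from any vendor’s product):

# Minimal sketch: training a binary real-vs-fake image classifier with PyTorch.
# Assumes a hypothetical dataset laid out as deepfake_frames/train/{real,fake}/*.jpg.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder maps each sub-directory (real/, fake/) to a class label.
train_set = datasets.ImageFolder("deepfake_frames/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Reuse a pretrained ResNet and replace its head with a two-class output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):  # a few epochs are enough for a demonstration
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")

Commercial detection products combine many such models with audio, metadata and liveness checks, and their real test is performance on manipulated media produced by generation techniques absent from the training data.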
A case study in finance
Beyond organisations’ interests: a can of worms
The appearance of deepfakes in the threat landscape has profound and lasting consequences: they have the potential to disrupt entire economic sectors and markets. As we mentioned in the previous episode, legitimate public figures and members of the establishment can amplify deepfakes even when they are aware of their deceptive nature.
Financial markets can be very sensitive to world events, and deepfakes thrive on sensationalism. In such cases, time is crucial: debunking is never instantaneous, and by the time the record is set straight it is already too late and the index has dipped.
In practice
In 2023, an AI-generated picture showing the Pentagon targeted by an explosion was relayed on X (formerly Twitter), which allegedly triggered a brief drop in the S&P 500. Although the fall was short-lived, it provided an acute sneak peek into deepfakes’ damaging potential.