The Dark Side of Deepfakes: A Halloween Horror Story
Learn how fraudsters can turn you into a deepfake and how to protect yourself.
A woman opened a link sent to her by email and found herself in a porn video, her real name in the title. The video looked real, but it wasn’t actually her; it was a deepfake. Sharing her story with Elle Magazine, she said: “Some pictures even had lengthy and detailed descriptions of me. One stated who my childhood best friend was, another my name and finally one, I discovered to my absolute horror, even shared my home address.”
In another case, two grandparents received a call from someone they thought was their grandson. He said he was in jail with no wallet or cellphone and needed cash for bail. The grandmother, a 73-year-old woman, scrambled to do whatever she could to help, withdrawing all the cash she could from the bank. Later, she learned that her grandson’s voice had been deepfaked and the whole thing was a scam.
Depression, anxiety, financial losses, ruined reputations, cyberbullying: these are just a few of the consequences deepfakes can have. What’s more, deepfakes have been on the rise for several years, fueled by widely available face swap tools, including Midjourney and AI filters on TikTok. In some cases, deepfakes can be detected in time. For instance, companies can employ Liveness Detection to reliably ensure that customers are physically present and genuine. However, this is a relatively narrow use case.
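To make the idea concrete, below is a minimal sketch of one common active liveness check, a blink challenge: a live person blinks on request, while a replayed photo or pre-rendered deepfake stream rarely produces the open-closed-open eye pattern at the right moment. This sketch uses OpenCV’s stock Haar cascades; the function names, thresholds, and overall flow are our own illustrative choices, not any vendor’s actual implementation.

```python
# Illustrative blink-challenge liveness check (not a production system).
# Assumes a webcam and the opencv-python package.
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml"
)

def eyes_visible(frame) -> bool:
    """Crude check: are at least two eyes detected in this frame?"""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(eye_cascade.detectMultiScale(gray, 1.3, 5)) >= 2

def blink_challenge(max_frames: int = 150) -> bool:
    """Return True if an open -> closed -> open eye pattern is observed."""
    cap = cv2.VideoCapture(0)  # default webcam
    state = "waiting_open"
    passed = False
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        visible = eyes_visible(frame)
        if state == "waiting_open" and visible:
            state = "waiting_closed"
        elif state == "waiting_closed" and not visible:
            state = "waiting_reopen"
        elif state == "waiting_reopen" and visible:
            passed = True
            break
    cap.release()
    return passed
```

Haar cascades are crude detectors, so real liveness products rely on learned face-landmark models plus depth, texture, and motion cues; the state machine above only illustrates the challenge-response principle.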
So what happens when deepfake technology falls into the wrong hands?
For years, AI has been systematically used to abuse and harass women. Machine learning (ML) algorithms can now insert women’s images into porn videos without their permission—and, today, deepfake porn is endemic.
An anonymous researcher of non-consensual deepfake porn recently shared an analysis with WIRED detailing how widespread such videos have become. In 2023 so far, 113,000 deepfake porn videos have been uploaded to the most prominent sites indexed by search engines, a 54 percent increase over the 73,000 videos uploaded in all of 2022. The research also forecasts that, by the end of 2023, “the total number of deepfake porn videos will be more than the total number of every other year combined.”
The spread of deepfake porn is made worse by the fact that widely available platforms, including Google, Cloudflare, and Twitch, are used for distribution. Meanwhile, the consequences for victims are severe, including depression, self-injury, and suicide.
Deepfakes are also used to manipulate public opinion and spread disinformation. This can include realistic videos or audio recordings portraying public figures saying or doing things they never have. Deepfake disinformation is often released during critical moments, such as elections and armed conflict, to fan political flames.
Manipulated media can polarize communities and damage the reputations of media outlets, politicians, or even entire nations. Even if certain content is later proven to be disinformation, the damage is often already done, and it is difficult, if not impossible, for a harmed reputation to be restored.
Deepfakes are also used in romance scams, where fake personas deceive victims into forming emotional connections online.
Scammers use deepfake technology to present themselves as an attractive person seeking romance. The goal is to gain the victim’s trust and eventually manipulate them into sending money or divulging personal information. Just recently, a woman from the US state of Utah was convicted in an online romance scam that cost her victims over $6 million.
Of course, an attractive deepfake photo isn’t enough to get the job done. Romance scammers are usually skillful manipulators who use social engineering techniques to get what they want. Therefore, it’s essential to be skeptical when an unknown person approaches you online for any reason.
AI voice-generating apps can analyze what makes a person’s voice unique—including age, gender, and accent—and then re-create the pitch, timbre, and individual sounds of any person’s voice.
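As a rough illustration of what “analyzing a voice” involves, the Python sketch below extracts two of the basic acoustic signals such systems build on: the pitch contour and MFCC timbre features of a speech sample. The file name is a placeholder, and real voice-cloning models learn far richer representations than these summary statistics.

```python
# Illustrative acoustic analysis of a speech sample using librosa.
# "speech_sample.wav" is a placeholder file name.
import librosa
import numpy as np

y, sr = librosa.load("speech_sample.wav", sr=16000)

# Fundamental frequency (pitch) contour via the pYIN algorithm;
# unvoiced frames come back as NaN.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
mean_pitch = np.nanmean(f0)

# MFCCs summarize the spectral envelope, a rough proxy for timbre.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print(f"Mean pitch: {mean_pitch:.1f} Hz")
print(f"MFCC profile (per-coefficient mean): {mfcc.mean(axis=1).round(2)}")
```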
Voice deepfakes are exploited by fraudsters in various ways, including:
- According to Pindrop, a voice interaction tech company, most voice deepfake attacks target credit card service call centers.
- Another common attack vector targets CEOs and other senior stakeholders, who can be duped into approving large financial transactions by a cloned voice.
Deepfakes are increasingly used in advertising and film. In the film industry, deepfakes can be used for visual effects, enabling filmmakers to recreate younger versions of actors or bring deceased actors back to the screen.
In advertising, companies use deepfake technology to insert celebrities’ faces onto the bodies of models to promote their products, creating more attention-grabbing and persuasive content.
However, things can go seriously wrong when these methods are employed without consent. Non-consensual deepfakes can damage the careers of the actors impersonated and undermine the credibility and authenticity of the entertainment and advertising industries.
While there is no way to completely eliminate the risk of deepfake abuse, there are preventative measures that you can take: