Mar 12, 2024

Are You Ready for AI-Generated Fraud? “What the Fraud” Podcast

Dive into the World of Fraud with the "What The Fraud?" Podcast! 🚀 In this episode, we delve into the impact of AI-driven fraud and misinformation online with our guest, Walter Pasquarelli, a renowned expert in generative AI and synthetic realities, who has advised major conglomerates like Google and Meta.

TOM TARANIUK: Hello and welcome once again to What The Fraud?, a podcast by Sumsub where digital fraudsters meet their match. I’m Thomas Taraniuk, responsible for our exciting partnerships here at Sumsub, the global verification platform helping to verify users, businesses, and transactions. In today’s episode, we’re examining the use of AI by fraudsters: how it is driving a new wave of fraud and creating complex challenges in combating misinformation online. At Sumsub, we like to describe AI as a double-edged sword. It can be used to great effect to combat fraud in the first place, but it’s also developing at a rapid rate, and criminals are using it in increasingly sophisticated ways.

One particular case that caught my eye this week involved a finance worker who was tricked into attending a Zoom meeting with, supposedly, his company’s chief financial officer, or CFO, along with a group of top-level executives.

In fact, every person on that call had been deepfaked, and they convinced the finance worker to send them 200 million Hong Kong dollars of the company’s own funds. It seems that every other week there’s a story just like this one, and it’s certainly something that we’d like to explore during this episode.

Today’s guest is Walter Pasquarelli. Walter is a world-renowned expert on generative AI and synthetic realities and has been working as a consultant and advisor for some of the world’s biggest conglomerates, including Google, Meta, Microsoft, and, you may have one in your hand, Coca-Cola. He is also an internationally recognized speaker, often giving talks on the responsible use of AI, AI policy, and data governance.

Walter, welcome. 

WALTER PASQUARELLI: Thank you so much for having me here. Really excited about our conversation. 

TOM TARANIUK: Likewise. Thank you so much for joining us. So, to kick things off, Walter, as someone deeply involved in tech innovation and running an AI-themed YouTube channel, what recent instance of AI-generated fraud have you seen that has truly astounded you?

WALTER PASQUARELLI: Well, that’s an excellent question. What recent case has truly astounded me? I mean, the thing is that you keep looking around and virtually every week there is a new case of some type of fraud or some type of disinformation perpetrated either against organizations or sometimes even against individuals.

Think about it. Just a few weeks ago, there was a case about the illicit use of Taylor Swift’s face in pornographic material, which I think really brought the topic of AI-generated fraud and synthetic media to the center of public consciousness. And I think that even mainstream outlets and ordinary people really felt this is not just a theory anymore.


This is something that is actually happening to people that we know and that are part of our entertainment life, so to speak. But as you said a couple of minutes ago, these cases are continuing now even in the corporate world, where a business in Hong Kong was scammed out of millions of dollars.

I hear of new cases arising every week, and until we have developed a consciousness, strategies, and, crucially, an education about it, so that we’re able to understand what’s at stake, these cases are just going to continue.

TOM TARANIUK: Following up from what you just said, the industry considers AI to be this double-edged sword, serving as a powerful tool both for anti-fraud solution providers and for those committing identity fraud.

Our recent identity fraud report actually discovered that fintech, crypto, and online gaming platforms are particularly vulnerable to AI fraud, with deepfakes posing a significant threat. From your experience, Walter, which industries are most susceptible to AI-generated fraud, including deepfake-related scams, and why?

WALTER PASQUARELLI: Every industry will in some way be affected by deepfakes and AI-generated synthetic media and fraud. I’ll just say this: it’s not a coincidence that the World Economic Forum this year described AI-generated disinformation as the biggest threat to the global economy. And that is because of all the progress that has been happening in the field of AI-generated synthetic media, and a wider democratization of these kinds of capabilities.

Some industries are particularly affected. On the one hand, as we saw last year with Hollywood, there are the creative industries, where these kinds of tools are able to recreate audio, video, and people’s appearances altogether. So there’s really, let’s call it, an economic question about the future of these industries.

If we then look at banking or finance, there’s the issue of fraud, which is obviously omnipresent, as we saw in the case of Hong Kong. These issues keep reappearing: people might be scammed, and there might be phishing emails that just appear authentic and real. But also in the world of politics, this year, that’s going to be a major issue.

They’ve called this year the year of elections, with approximately half of the world’s population casting their votes and ballots. That is going to have massive ramifications: everything that we see out there needs, on some level, to be questioned, and the trust that we ascribe to media, or any type of content, is being second-guessed.

To put it simply, every industry will be affected, but there are also adjacent industries that will be incredibly interested in this field. Take, for example, insurance. For insurance companies, these frauds and incidents obviously represent a particularly hot area, because it’s still primarily unexplored, but there is a real need to provide the right kinds of insurance approaches for the actors and organizations that will be affected by deepfake-related fraud.


TOM TARANIUK: A hundred percent. I think you hit the nail on the head there, especially around the deepfakes you’re seeing on social media, which is running rife with some of these huge celebrities, and obviously people casting their vote and listening to these influencers. Everything hinges on that. With visual and audio deepfakes, we’re seeing these increase hugely, being used to defraud people not only of their vote but out of their own money and possessions as well.

These sorts of scams are rife across the online space. So my next question posed to you: what can the individual do to protect themselves from this?

WALTER PASQUARELLI: Yeah, that’s the million-dollar question. It is, it is. I mean, there are a few things that immediately jump to mind. One thing is, of course, the educational element.

So at this stage, we’re still in a situation where the majority of people can basically spot whether, say, a video or a piece of content is AI-generated. If we look at deepfakes, for example, typically the eyes look unnatural, and the mouth movements look a little bit uncanny. But that’s, of course, not a real solution.

That’s a temporary fix. As these technologies become more advanced, give it a year’s time, and I believe that we’ll reach a stage where video content will become basically indiscernible from authentic content. For individuals, there are a few things that people can do to protect themselves from reputational damage.

One thing is, for example, having their digital identity cryptographically proven. That means that I, as someone who owns his face, his voice, his appearance, can certify that it is actually me when I’m speaking to a camera. When someone uses my face, it follows a certain process, and my identity is cryptographically hashed and signed.
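
To make this concrete, here is a minimal sketch of the kind of cryptographic proof Walter describes: hash the media, sign the digest with the owner’s private key, and let anyone verify it against the public key. The media bytes and key handling are illustrative assumptions, not a description of any specific product.

```python
# Minimal sketch: binding a piece of media to an identity with a digital
# signature. The media bytes here are a stand-in for a real video file.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_media(media: bytes, key: Ed25519PrivateKey) -> bytes:
    """Hash the media and sign the digest with the owner's private key."""
    return key.sign(hashlib.sha256(media).digest())

def verify_media(media: bytes, signature: bytes, public_key) -> bool:
    """Anyone with the public key can check the media is the signed original."""
    try:
        public_key.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False

# The owner signs their video once; any copy that fails verification was
# either tampered with or never came from them.
video = b"...raw video bytes..."  # stand-in for a real recording
key = Ed25519PrivateKey.generate()
sig = sign_media(video, key)
print(verify_media(video, sig, key.public_key()))               # True
print(verify_media(video + b"tamper", sig, key.public_key()))   # False
```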

That’s the very technical element of it. But for the time being, until the technological solutions are there, there’s the educational element: looking at what I share about myself, who I give access to my data, and so on and so forth. That is something that I think will be the priority in the next couple of months.

TOM TARANIUK: We’re also seeing, as you mentioned, celebrities having their likeness used. What legal protections are in place to stop this sort of thing from happening?

WALTER PASQUARELLI: I think this is a really important question because it takes a star to make these kinds of issues well known, but the vast majority of victims are going to be ordinary people.

So we see issues such as, for example, revenge porn taking place. There are other issues, of course, for organizations and corporates: plenty of examples where people have been scammed out of millions of dollars. Now, in the past, there were a number of problems there, primarily because there wasn’t really any legal protection.

What people could mainly do, sometimes, was file for copyright infringement: essentially claiming that my face has been used in a way that I didn’t agree to. Now, thankfully, things are changing. In the United States, there’s now the proposed NO FAKES Act, which would prohibit the use of people’s likenesses for generating this kind of illicit AI content used for fraud.

That’s one breakthrough legislative piece that has come to the fore. With the European Union’s AI Act, similar provisions are coming into play, primarily about labeling and about disclosing the origin of this content. So the idea is that if someone develops a deepfake of me, or of a famous politician or a famous executive, there will be a requirement to say: this piece here is AI-generated. We’re still at the beginning, so we’re not quite there yet in terms of having a full legislative and regulatory environment that makes people feel fully safe, but we’re rapidly advancing. We saw President Biden coming to the fore; he himself has been the victim of several deepfakes over the past year.

TOM TARANIUK: We’ve all seen them, yeah? 

WALTER PASQUARELLI: Yeah, exactly. And there’s been a lot of discussion also happening around U.S. AI policy within the tech community. Even firms such as Adobe have really been frontrunners in trying to promote standards that help people assess the veracity and the provenance of content. So there is a variety of different tools and mechanisms, from regulation to technical ones, being developed in order to be able to verify the authenticity of content.

TOM TARANIUK: It’s great to see and it’s great to hear right now, although some may think it’s a bit slow. But of course, the rise of this AI technology has been meteoric. Sumsub’s recent identity fraud report found that a tenfold increase in deepfakes was detected globally across all industries from 2022 to 2023.

So from your side, Walter, we’d love for you to talk to us about the technology actually being used and applied here, in AI-generated imagery as well as audio.

WALTER PASQUARELLI: As a matter of fact, AI-generated synthetic media and deepfakes aren’t really anything new. I remember the first time I wrote about it was in early 2018, when we were discussing the epistemological implications, the implications for truth and knowledge, back then.

At that point, the technology was primarily based on so-called generative adversarial networks, or GANs, a sub-branch of deep learning, and it used to be a pretty laborious process. There was famously the “deep Tom Cruise” video, the fake Tom Cruise, that went viral.

That required a lot of work. It was not just the click of a button where I would ask, “Hey, can you recreate Tom Cruise?” There was a designer working in the background, using these GANs to make Tom Cruise look hyper-realistic. And even the actor who was impersonating Tom Cruise looked very similar to him to begin with.
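
For readers curious what “generative adversarial” means in practice, here is a minimal sketch of the two-network setup in PyTorch: a generator learns to produce fakes while a discriminator learns to catch them. The shapes, layer sizes, and random stand-in data are illustrative assumptions, not anything from a real deepfake pipeline.

```python
# A minimal sketch of the adversarial setup behind GANs. The random
# "training data" stands in for real images or audio features.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how "real" a sample looks (outputs a logit).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)        # stand-in for real samples
    fake = G(torch.randn(32, latent_dim))

    # Discriminator learns to tell real from fake.
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The two losses pull in opposite directions, which is exactly the cat-and-mouse dynamic discussed later in this episode: as the detector improves, so does the forger.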

Now, what’s happened in the past few years is that there have been advances in foundation models. These advances mean that the capabilities of artificial intelligence have, first of all, massively scaled: the ability of AI to generate content in a hyper-realistic way has drastically increased. But simultaneously, there’s been a democratization of these tools, in the sense that a lot of the tools that can generate audio, images, or video are pretty much at the disposal of ordinary consumers.

A lot of the time, these tools didn’t really have any kind of security guardrails, so anyone could generate pretty much anything, maybe infringing on copyrighted materials or depicting really upsetting images; the guardrails were actually quite thin. These two factors, the advancement of AI and the democratization of these tools, have shifted the issue of fraud and disinformation from something that required high-scale capabilities to something that can now happen at the grassroots level.

TOM TARANIUK: Let’s maybe take a step to the other end of this double-edged sword. In your view specifically, what are the most effective AI technologies currently used for fraud detection, and how are fraudsters adapting to these methods?

WALTER PASQUARELLI: That is a fascinating question and one that I think is actually crucial.

If you ask me, the main way AI can be useful in fraud detection is for quantitative data analysis and recognizing patterns: say, in financial fraud, understanding whether there’s some kind of illicit activity based on quantitative data. When it comes to detecting AI-generated content, say audio, video, or images, AI detection sounds like the plausible solution.

It sounds like the most tempting approach, but it’s fundamentally unreliable. If we look at text generators, for example: OpenAI had an accuracy rate for their own detection tool that, I think, was about 26%. They took it down, so it’s very unreliable. When it comes to AI-generated videos, Meta launched a competition years ago, before these kinds of tools had even advanced this far, and the tool which won that competition with the highest accuracy rate was, I think, just about 50% accurate.

So the problem there is that, in many cases, the same technology that provides the solution is also the technology that creates these AI-generated fraudulent activities and media. So what’s the solution here? I think a few years ago we actually started moving away from the idea of using an AI tool that detects whether a piece of content is synthetically generated.

What we’re now doing is starting much earlier: we try to prove the authenticity of a piece of content from the moment it is made. One example is the C2PA standard, as it’s called. That’s basically an embedded cryptographic hash: if I’m a photographer, for example, this hash is put on the photograph, and it tells you everything about the provenance of the file, say where it was taken and at what time, and it includes a digital signature.

And every time that photograph is shared, or every time it is changed, that change is recorded on the piece of media itself. A lot of companies are adopting it. Adobe, for example, is currently adopting it across Photoshop, and a lot of other firms are following through.
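
Here is a simplified illustration of the provenance idea behind C2PA-style standards: a signed manifest created at capture time, with a new signed entry appended on every edit. This sketches the concept only; it is not the actual C2PA format or API.

```python
# Toy provenance manifest: each claim records a content hash and is signed,
# so a verifier can replay the chain of edits. Names are hypothetical.
import hashlib, json, time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def _signed_claim(claim: dict, key: Ed25519PrivateKey) -> dict:
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def new_manifest(media: bytes, author: str, key: Ed25519PrivateKey) -> list:
    """Initial provenance record, created when the media is captured."""
    return [_signed_claim(
        {"author": author, "captured_at": time.time(),
         "sha256": hashlib.sha256(media).hexdigest()}, key)]

def record_edit(manifest: list, media: bytes, tool: str,
                key: Ed25519PrivateKey) -> None:
    """Append a signed entry each time the media is modified."""
    manifest.append(_signed_claim(
        {"tool": tool, "edited_at": time.time(),
         "sha256": hashlib.sha256(media).hexdigest()}, key))

# Usage: the manifest travels with the file, so anyone can check each
# signature and hash against the bytes they actually received.
key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
manifest = new_manifest(photo, "Jane Photographer", key)
edited = photo + b" (cropped)"
record_edit(manifest, edited, "PhotoEditor 1.0", key)
print(json.dumps([m["claim"] for m in manifest], indent=2))
```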

And of course, that requires scale. It means that everyone has to use it, and ideally all of the hardware providers, such as Apple, would adopt this kind of standard as well. But the shift here, and I think this is crucial to understand, is a transition away from using AI to detect AI-generated content, toward focusing on the origin of the piece of content or media.

TOM TARANIUK: Welcome back to What The Fraud. I’m Thomas Taraniuk, and I’m joined here today by Walter Pasquarelli. 

WALTER PASQUARELLI: Thank you, Thomas. Good to be here with you. 

TOM TARANIUK: I was going to ask you, and this is probably quite interesting for our listeners today: how can businesses best choose a technology that is future-proof and also counteracts deepfakes?

WALTER PASQUARELLI: Yeah, that’s a great question. The first thing that I would recommend, and that I typically advise my clients and organizations worried about deepfakes and AI-generated fraud, is to first of all conduct an assessment of the actual risks. And to understand the risks, we have to understand: what is our business strategy, what are our use cases, and what specific security protocols do we already have in place?

If you’re a firm that relies extensively on, let’s say, calls for making financial decisions, which I do not recommend any firm do, or if you’re a firm that operates extensively remotely, there’s obviously a much higher risk. If you’re a firm that operates more face to face, this might be slightly different, but there might be other issues, such as reputational damage, if you have a CEO or a CFO who is very much in the spotlight.

So start by creating a risk profile of exactly the issues you might be facing. That’s the number one thing. And that requires understanding deepfakes; it requires understanding synthetic media and synthetic realities and their capabilities. When it comes to choosing the right kind of technology, I would say there are a number of things companies can do.

One is understanding the standards that are out there. C2PA is one of them; it is going to be very popular, it is growing incredibly rapidly, and from what we know so far it seems incredibly promising. Then there are other kinds of solutions, such as KYC, or biometrics, for example, which you can integrate into your solutions and business processes so that you tackle the issue from multiple facets.

The final thing, and I keep saying this to everyone I speak to, is of course the educational element. If you have your CEO or your CFO calling you and asking you to transfer such-and-such an amount, it’s probably not a good idea to follow through unless you have a verified, authenticated channel of communications through which he or she is actually making this request.

TOM TARANIUK: 100%. And it can’t just be an email signature either. Talking briefly about AI in transaction monitoring: it has become a major global problem. In 2023 alone, total losses as a result of transaction fraud were around 48 billion, and that’s in US dollars as well.

So as we know, AI tools help businesses monitor their customer behavior after origination, the first touchpoint of getting them on board, right? And therefore accurately identify and prevent risks when it comes to fraud. What is transaction fraud, and when and how does it occur?

WALTER PASQUARELLI: I think AI is a particularly interesting application here, primarily because of the quantitative nature of transactions. Artificial intelligence has been used for transaction fraud detection in the past. If we want to bring the idea of AI-generated content and deepfakes into the mix, there is obviously, as we discussed earlier, the issue of audio and voice clones that can be used to facilitate certain kinds of transactions that might otherwise never have taken place.

But I think the key thing for artificial intelligence is that whenever information happens, or is captured, in the form of numerical, quantitative data, that’s an area where artificial intelligence is quite strong. Say you’re an organization with a certain kind of transaction history: at this point in the year, these and these suppliers are going to be paid.

Or at this stage in time, these kinds of transactions typically happen; they tend to increase in the month of March and go down in the month of September, for example. And then there are a number of anomalies taking place, in terms of the identity of either the originator of a transaction or the recipient.

That is something that AI is incredibly powerful at, and it is where AI is going to be the alpha and omega of this kind of verification against transaction fraud.
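
Here is a minimal sketch of the seasonal-baseline idea Walter describes: flag transactions that sit far outside the statistical norm for their month. The data, column names, and z-score threshold are illustrative assumptions.

```python
# Toy anomaly detector: compare each transaction to the mean and standard
# deviation of its own month, and flag large deviations.
import numpy as np
import pandas as pd

def flag_anomalies(df: pd.DataFrame, z_threshold: float = 3.0) -> pd.DataFrame:
    """Expects 'month' and 'amount' columns; returns rows far off the mean."""
    stats = df.groupby("month")["amount"].agg(["mean", "std"])
    merged = df.join(stats, on="month")
    z = (merged["amount"] - merged["mean"]) / merged["std"].replace(0, np.nan)
    return df[z.abs() > z_threshold]

# Usage: a payment that is routine in March may be an anomaly in September.
rng = np.random.default_rng(0)
history = pd.DataFrame({
    "month": ["Mar"] * 50 + ["Sep"] * 50,
    "amount": np.concatenate([rng.normal(10_000, 500, 50),
                              rng.normal(2_000, 300, 50)]),
})
history.loc[99, "amount"] = 9_500  # a September payment at March levels
print(flag_anomalies(history))
```

Real systems use far richer features (counterparties, velocity, device data) and learned models, but the principle is the same: the more history you have, the sharper the baseline.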

TOM TARANIUK: 100%. And you’ve probably heard stories about businesses using Excel sheets for a lot of their quantitative data, right? Yes. But obviously that’s not a good way to go.

And with the adoption of AI, they can retrospectively go back, look at these massive amounts of data, and look into the patterns around fraudulent actors out there. Would you be able to give me a brief outlook on what data businesses could use, or the structure behind their attempts to protect themselves from transaction fraud?

WALTER PASQUARELLI: I think when it comes to using AI in any kind of organization, data is where the success begins and ends. And I can tell you, data about one single transaction is worthless. Data about one single individual is worthless. It’s not going to help you advance significantly, because it doesn’t have statistical reliability.

It gets quite interesting when we actually take the data of an entire organization, its whole history, maybe even of an entire industry where there’s quite a lot of data about financial transactions taking place, because that allows us to benchmark, provided that data is anonymized.

When it comes to businesses that want to use AI for fraud detection, the more data they have overall, the better. Think of it: AI ultimately, and I know it’s wildly unpopular to say this, is like statistics with computing power. And the more representations we have of different kinds of transactions, the more typologies of fraudulent activity we can create, and the more personas behind fraudulent transactions we can actually imagine.

And that helps us understand whenever there’s something wildly off the statistical mean, so we can identify that pattern as abnormal. Now, the thing is that data as a topic is wildly unsexy, and a lot of organizations say, “We just want to have AI.” But the problem is that your house needs to be in order first.

And that means we need to create the right kind of foundation, where data about any kind of transaction is readily available, not only to the people who actually carry out these transactions, but to anyone else who wants to implement security measures through AI systems as a whole.

TOM TARANIUK: Now, perhaps looking to the future of AI, I would love to hear your predictions on how AI technology will be used by fraudsters going forward.

WALTER PASQUARELLI: It is interesting, because if we look at the market around AI, it’s been booming since late 2022 like nothing I’ve ever seen before.

And it keeps booming as well; it keeps growing and growing. A few months ago, someone said AI has reached peak hype. I don’t see this hype going down.

TOM TARANIUK: Do you think we’re still going to be spouting the AI buzzwords in the next 12 months? 

WALTER PASQUARELLI: The thing is that there are still more and more investments going in.

And even if some companies’ bubbles burst, the cat’s out of the bag, as we say, right? That means the innovation has happened; the progress has already taken place. A lot of the tools will probably become better and better, for consumers but also for fraudsters. I currently see a couple of major patterns evolving over the next few years.

One is, I think, something we could compare to a cat-and-mouse game. As AI tools become better, and if they stay open source without the right kinds of safeguards, that will inevitably create increased opportunities for fraudsters, who will be able to use these tools to perpetrate illicit activities.

The other trend is that businesses and people are, of course, going to become increasingly aware of this. Policy will come in; regulation will seek to counter these kinds of illicit activities. And that will probably catch up, but then AI will advance again, and so fraudsters will again advance in the types of activities they carry out.

So it’s always going to be this cat-and-mouse game between fraudsters on one side and awareness, policy, and regulation on the other. I would say, perhaps, that there is a real argument to be made that there are going to be leaders and laggards in terms of AI security. Those who invest in the right kinds of security protocols, and in awareness and education, within an organization or even a country, are going to be the ones that are most future-proof.

Fraudsters will come after the organizations and countries that ultimately have not understood how to defend themselves from these kinds of activities.

TOM TARANIUK: To sum up, would you be able to give our listeners, and perhaps myself as well, your top three tips for protecting themselves and their businesses from AI-generated fraud in 2024?

WALTER PASQUARELLI: Absolutely. The first step I always recommend for any organization that wants to protect itself, but also to generally stay afloat in this new day and age, is to first of all demystify artificial intelligence. We’re not looking at The Terminator, and we’re not looking at some magic silver bullet.

We’re looking at a particular kind of technology, and if we want to be aware of and protect ourselves against its risks, but also seize its opportunities, then we need to understand it at the highest level of the organization, all the way down to the intern who joined yesterday. That’s step one. Step number two: be aware of the technological solutions that are out there, and be aware of the collaborative initiatives that are happening.

Again, I think standards such as C2PA are very promising. They’re spreading incredibly quickly, and I think that’s probably the future of authentication and AI verification. But of course, there are also other tools on the market that might serve as an add-on to these kinds of standards, which are increasingly becoming part of joint initiatives, as well as of regulation and policy.

And the third thing is: educate your people about the risks that are actually out there. Depending on your organization’s use case, risk profile, and exposure, develop the right kinds of internal policies that will help you future-proof your business and at least mitigate the risks that AI fraudsters pose to your organization.

TOM TARANIUK: Absolutely. And from our last guest speaker, we heard that employees are the last line of defense. At the end of the day, they have consumer profiles as well, and they need to protect themselves too. That was a fantastic and intriguing talk. I’ve learned a lot, and I hope our listeners have too. Walter, that was wonderful. Thank you for your time.

WALTER PASQUARELLI: Thank you so much for having me. 

TOM TARANIUK: So what can you expect from the next episode of What The Fraud? Well, we’ll be taking a deep dive into the world of fraud in online gaming. We’ll discuss fake accounts, engagement manipulation, the spread of misinformation, and what the major online gaming companies are beginning to implement to stop all of the above from taking place.

Tags: AI, Deepfakes, Fraud Prevention, KYC, Machine Learning