Oct 02, 2025 • 22 min read

Agentic AI: Redefining Fraud Defence | “What The Fraud?” Podcast

Dive into the world of fraud with the "What The Fraud?" Podcast! 🚀 In this episode, Tom is joined by Will Lawrence, CEO and co-founder of Greenlite AI. Together, they explore how AI agents fight high-volume fraud, reconcile data, and free investigators for complex cases.

THOMAS TARANIUK: Hello and welcome to What The Fraud?, a podcast from Sumsub, where financial fraudsters meet their match. I’m Thomas Taraniuk, currently responsible for some of our very exciting partnerships here at Sumsub, a global verification platform helping to verify users, businesses, and transactions. Today we’re unpacking the challenges at the heart of Agentic AI, and joining me is Will Lawrence, CEO and co-founder of Greenlite AI, whose company has actually built AI agents designed to spot fraud and streamline compliance for both banks and fintechs. So, Will, great to have you here today on What The Fraud?

WILL LAWRENCE: Oh, what a pleasure. Thank you so much for having me.

THOMAS TARANIUK: The pleasure is all ours. I have a couple of questions to kick off. Today’s episode is timely with everything going on at the moment, because billions are being invested in Agentic AI across many different use cases and from many different sources, and it’s seen by many as the next frontier in financial services compliance.

But before we get into the details, Will, for those who might not be familiar, what exactly is Agentic AI? How is it already changing the way we tackle fraud?

What is Agentic AI?

WILL LAWRENCE: Definitely. I’m really excited to be here, and I’ll unpack this by giving a few stories that might be helpful.

First off, a quick introduction: I’m Will, CEO and co-founder of Greenlite AI. Prior to Greenlite, I worked on the product team focused on anti-money laundering and financial crime over at Facebook—now Meta. AI has been around in our space—fraud and financial crime—for a very long time.

When we talked about AI 10 or 15 years ago, it referred to a different technology: machine learning, generally speaking. More specifically, a form of machine learning called supervised machine learning, which crunches large amounts of data to help determine whether a new transaction, document, or ID is fraudulent or has any issues. The key part is that it’s really a predictive engine built on data. That’s what we would have called AI a decade ago.

Suggested read: Machine Learning and Artificial Intelligence in Fraud Detection and Anti-Money Laundering Compliance

In 2023, we started to see generative AI. Generative AI was really interesting, the ChatGPTs of the world, because it helps you understand unstructured data and create unstructured data. What was really cool about ChatGPT is that you can talk to it in human language, upload documents, or take a picture of something, and it can quickly understand that previously unstructured input and respond with novel text.

It’s a really exciting technology. In fraud and financial crime, we saw it used to summarize documents or cases and help write narratives.

Now, Agentic AI is the next version. All of these fall under the AI umbrella, but the main thing with Agentic AI is it’s not just about answering questions—it’s about taking actions.

It’s a big leap forward because it uses technologies from supervised machine learning and generative AI to understand both structured and unstructured data and then take actions against that. It’s no longer just summarizing and describing; it’s actually able to take actions.

If you go one layer down into the technology stack, what does “taking actions” mean? It’s using capabilities from the generative AI world to understand unstructured data and perform sequential tasks.

You might ask ChatGPT, “What’s my favorite recipe?” or “Write a story for my daughter,” and it gives you an answer. In the Agentic AI world, you could say, “Publish a story for my daughter on my website.” It would then complete several steps: write the story, create a website, configure DNS and hosting, upload it to your domain platform, design the website, and publish the article.
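To make that sequencing concrete, here is a minimal Python sketch of the plan-then-act loop described above. It’s illustrative only: the call_llm() helper and the tool names are hypothetical stand-ins, not any real product’s API.

```python
# A minimal, hypothetical agent loop: the model proposes an ordered plan,
# and the runtime executes each step with a tool. call_llm() and the
# TOOLS entries are stand-ins for a real model API and real integrations.

def call_llm(prompt: str) -> str:
    # Stub for a language-model call; returns a fixed plan so the
    # sketch runs end to end.
    return ("write_story: a story for my daughter\n"
            "create_site: story page with DNS and hosting\n"
            "publish: story page")

TOOLS = {
    "write_story": lambda args: f"drafted: {args}",
    "create_site": lambda args: f"site ready: {args}",
    "publish": lambda args: f"live: {args}",
}

def run_agent(goal: str) -> list[str]:
    # Ask the model to decompose the goal into ordered tool calls...
    plan = call_llm(f"Break this goal into steps using {list(TOOLS)}: {goal}")
    results = []
    # ...then act on each step in sequence, which is what separates an
    # agent from a chat model that only answers questions.
    for step in plan.splitlines():
        tool, _, args = step.partition(":")
        results.append(TOOLS[tool.strip()](args.strip()))
    return results

print(run_agent("Publish a story for my daughter on my website"))
```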

It’s much more action-oriented, which is really exciting. There’s so much more we could go into, but you can see why this is such an exciting new technology for the world of fraud and financial crime.

THOMAS TARANIUK: It’s definitely going to open a lot of doors for people. But of course, with those sorts of decision-making steps being taken by an agent, there need to be checks, not roadblocks, but bumps in the way, to make sure that the agent is making decisions on behalf of a human who has actually verified themselves and approved the actions they want to take.

I mean, we’re seeing Agentic AI discussed more and more in the context of fraud prevention and verification at the same time. Drawing on your previous work at Meta and now at Greenlite, how is Greenlite actually putting Agentic AI to work for clients? And what kinds of problems are you helping to solve that traditional tools couldn’t?

What kinds of problems can Agentic AI solve that traditional tools can’t?

WILL LAWRENCE: So if we look at any fraud, compliance, or risk department, there are the teams that are defining things like: what is our tolerance for certain risky activities, or what does our regulator require us to do in certain geographies? Those are the policy teams, broadly. But then there are always the operations teams: fraud operations, risk operations, compliance operations. If you zoom into what the operational teams are doing, they’re tackling repeated events at a fairly large scale.

Generally speaking, these might be standard operating procedures for completing a fraud investigation, reviewing an ID, or completing an AML investigation. If we go a layer deeper into what’s in those operating procedures, it might be things like reviewing transactions, reviewing documents, and doing research.

I laid it out that way because what an agent today is very good at is following standard operating procedures in a very auditable and high-throughput way. And so, I challenge anyone who’s listening here to think about your program today: where do you have operating procedures and people doing repeated steps? That’s where an agent is going to be very helpful.

At Greenlite, we spend a lot of our time in the world of financial crime. Very common examples of what a Greenlite agent is doing in production every single day include reviewing AML transaction monitoring alerts—transactions flagged for potential criminal activities. Your team has the operating procedures for how you want to review them, and you can now use AI to do that at scale.

Similarly, we do a lot of high-risk customer reviews. Let’s say you onboard a complicated institution from a part of the world deemed high risk—you’re probably going to need to do incremental review steps against them. That’s the type of task a Greenlite agent is really skilled at.

There are a couple of different advantages. The first is just a faster response. In that example of institutional customer onboarding, you might be able to close it out in 5, 10, or 20 minutes versus the six-hour SLA that your team currently has. That’s a good example of one value driver.

The second advantage is increased auditability. One thing that’s nice about an agent: it will log every step. “I opened the browser, I went to Google, I searched this string, I found these six results on Google.” When humans are doing that investigation, they don’t have the time to log every single step or make notes all the way through. Auditability goes up.
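As an illustration of that audit trail, here is a small sketch of how an agent’s steps could be logged as structured events. The action names and fields are invented for the example, not a real system’s schema.

```python
# Sketch: every action the agent takes becomes a structured, timestamped
# event, producing the step-by-step record a busy human rarely writes up.

import json
from datetime import datetime, timezone

audit_log: list[dict] = []

def logged_step(action: str, detail: str, result: str) -> None:
    """Append one investigation step to the case's audit trail."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "detail": detail,
        "result": result,
    })

# The agent narrates its work as it goes, e.g. during open-source research:
logged_step("open_browser", "started headless session", "ok")
logged_step("search", 'query="ACME Ltd sanctions"', "6 results")
logged_step("review_result", "result #1: news article", "no adverse media")

print(json.dumps(audit_log, indent=2))  # a full, replayable record of the review
```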

The final advantage is efficiency: you can get more investigations or reviews done in a given hour. That’s what’s really exciting, and that’s where teams are often using Greenlite today—for those types of standard operating procedure executions in a financial crime program.

THOMAS TARANIUK: Wonderful to hear. Beyond triage and these other applications of Agentic AI, there’s scope for actually spotting subtle fraud signs that humans previously couldn’t detect, right? From that perspective, what are these subtle fraud signs that humans would otherwise miss, which Agentic AI at Greenlite can spot?

What subtle signs of fraud can Agentic AI spot that humans miss?

WILL LAWRENCE: So I think the main thing that Agentic AI is going to be good at is pulling together data from multiple sources and reconciling it. When you’re looking at a fraud case, let’s take an identity document: you’re looking at the metadata associated with that ID document. When was it created? What updates may have taken place to it? What file format is it in? That’s one part: looking at the text and the metadata.

The second part is the image itself. One of our customers was in the office last week, telling us how they’re starting to see a lot more synthetic identities. A human could always verify it because there are subtle visual cues when it’s a deepfake: the eyes might look a little off, like Gollum’s. So it’s a visual element that might be helpful.

The third part is: who’s submitting that document? What do we know about that entity? Could we do a little incremental research on the person submitting that ID? Are they trustworthy? Are they on any lists we should know about? That’s the third component.

The fourth, I think, is device intelligence, which is very helpful—pulling together all the information you know about the device. I say four different categories here because they’re not novel categories. We’re already looking at them today, but we don’t really pull them together well because they’re all in different formats: some might be text, some images, some open-source research. That’s what you might call unstructured data. Pulling it together and structuring it is what language models and agents are very good at.
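A rough sketch of that structuring step might look like the following, with invented field names standing in for real signals: four categories that usually live in different formats, gathered into one record a model can reason over.

```python
# Sketch: reconcile document metadata, image checks, submitter research,
# and device intelligence into a single structured record. All fields
# are illustrative examples, not a production schema.

from dataclasses import dataclass, field

@dataclass
class IdReviewSignals:
    doc_metadata: dict = field(default_factory=dict)        # creation date, edits, file format
    image_flags: list[str] = field(default_factory=list)    # visual cues, e.g. deepfake artefacts
    submitter_findings: list[str] = field(default_factory=list)  # open-source research
    device_signals: dict = field(default_factory=dict)      # device intelligence

    def summary(self) -> str:
        """Flatten the mixed-format signals into one prompt-ready view."""
        return (f"metadata={self.doc_metadata}; image={self.image_flags}; "
                f"submitter={self.submitter_findings}; device={self.device_signals}")

signals = IdReviewSignals(
    doc_metadata={"created": "2025-09-30", "edited_with": "image editor"},
    image_flags=["eyes inconsistent with lighting"],
    submitter_findings=["no watchlist hits"],
    device_signals={"emulator": True},
)
print(signals.summary())  # one structured view across all four sources
```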

Sometimes I like to think about it this way: agents don’t reinvent the wheel. They’re not going to create novel processes you’ve never seen; they’re just going to automate the same steps your human team is already doing. So that’s how we think about it: not novel signals, just existing signals combined in a faster, higher-throughput way.

THOMAS TARANIUK: Absolutely. Humans will remain critical, as you mentioned, especially when looking at deepfakes. But when we’re looking at millions of real attacks, with hundreds of thousands of documents, fake users, and bots filtered through different systems, it can be very difficult to maintain both customer trust and operational efficiency. As automation increases, what is the role of the human in the loop, in your opinion? We can’t automate it all, right? I think that’s what you’ve clarified as well.

What’s the role of humans in this circle of automation?

WILL LAWRENCE: The role of the human in the loop just becomes more and more important over time. Today, if you talk to any fraud or compliance leader, they will acknowledge that a lot of their operational teams are spending their time on data collection or data processing. Right? That’s what agents are going to be very good at. Sometimes I think agents are going to be like paralegals, turning every investigator into a lawyer.

Suggested read: Can autonomous AI agents handle end-to-end KYC with minimum human oversight, and will LLM-powered systems replace human analysts?

So rather than compiling all this information and doing all the research themselves, humans can review, redline, connect dots that the paralegal may have missed, and ultimately make decisions faster. The key part here is putting humans in the spot where they can do work they’re uniquely positioned to do, which is reasoning-type tasks focused on decision-making. Because that’s ultimately where humans are really exceptional.

Every single investigator we speak to hates how much manual work they have to do. I’ve never met someone who’s like, “Nope, I have a great process; I’m completely satisfied with it.” It’s always a situation where they have to go through all this manual work to get to the things they love. And we’re excited for agents to help them jump straight to the tasks they enjoy.

THOMAS TARANIUK: And I think the right balance is letting AI handle and streamline the routine checks and triage while humans focus on the edge cases. But when we’re talking about the actual person making the decisions, the accountability has to sit with the human in the loop, right? Which regulators increasingly expect. And you cannot take an AI agent to court, any more than you could take a Tesla to court for choosing one person over another in a crash, right? What’s your opinion here? I mean, decisions shouldn’t sit inside a little black box, but as you mentioned, an AI agent can be held to account far more easily if there’s a full log of its decision-making.

Balancing automation and accountability between humans and Agentic AI

WILL LAWRENCE: Good question. So I think today, what we’re largely seeing our teams do—we work primarily with large enterprises, Fortune 500s, regulated industries—is using humans to make decisions at the end of the day, and that’s a great starting point for most organizations. If we fast forward about three years, I’m pretty confident that agents can be used for more decision-making on lower-risk tasks. We’ve been using AI models to make decisions in fraud and financial crime for a long time.

It’s not a novel concept. When Sumsub does IDV on a given customer, it’s making a decision on whether that ID even needs review. That’s very common. The way we get comfortable using technologies like that for decisioning is through model validation, governance, and model risk management. The technology is not perfect, but as long as we understand the inputs, outputs, model construction, limitations, and the risks we need to keep checking for, we can feel fairly comfortable using it in production.

Today, most teams are using agents to help in decision-making, making investigations much faster. I’m confident that in two to three years, once we understand these technologies more deeply and manage the risks, we can use them for more decision-making tasks on lower-risk activities.

A simple example: handling things like PEP and adverse media alerts. Everyone listening who has dealt with those knows how repetitive and perhaps low-risk they can be. These are great use cases for agents because they can sort through alerts and only flag the ones that need human attention. Today, humans are in the loop for most tasks that agents are tackling.
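As a sketch of that triage pattern, the snippet below auto-closes clearly mismatched alerts and escalates the rest to a human. The scoring function is a stand-in for whatever review an agent actually performs; the threshold and fields are invented.

```python
# Sketch: agent-led L1 triage for PEP/adverse media alerts. Low-scoring,
# clearly mismatched hits are closed with a logged rationale; anything
# ambiguous or high-risk is escalated to a human investigator.

def agent_assess(alert: dict) -> float:
    """Stand-in for the agent's review; returns a match-likelihood score."""
    return 0.9 if alert["dob_match"] else 0.1

def triage(alerts: list[dict], escalate_above: float = 0.5):
    auto_closed, for_humans = [], []
    for alert in alerts:
        bucket = for_humans if agent_assess(alert) >= escalate_above else auto_closed
        bucket.append(alert)
    return auto_closed, for_humans

alerts = [
    {"name": "J. Smith", "dob_match": False},  # watchlist DOB differs
    {"name": "J. Smith", "dob_match": True},   # plausible true match
]
closed, escalated = triage(alerts)
print(len(closed), "auto-closed;", len(escalated), "escalated to a human")
```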

THOMAS TARANIUK: Great to hear. It sounds like, with your previous work in the same remit at Meta, you’ve built these ideas around making your past self’s life much easier, right? From your experience, Will, having built tools inside institutions and worked on AML products before founding Greenlite, what does the day-to-day look like for a risk and compliance officer working with Agentic AI? What was the problem you couldn’t solve back then for yourself but are solving now for the market? And if you were advising your past self, or someone in the industry today, what should their compliance program actually include?

What does the day-to-day look like for a risk and compliance officer working with Agentic AI?

WILL LAWRENCE: Oh, great questions. Okay, so let’s talk about the status quo I recall. Every morning, an investigator would wake up to a new queue of things to investigate, on top of the previous day’s queue they didn’t complete.

You often have things like: this fraud investigation needs to take place, or these transactions look suspicious from an AML perspective. Then they start compiling information—documentation, data, etc. They might start analyzing some of that data, open Excel, try to create pivot tables. They do the data collection and processing. Once they’ve assembled everything into one or multiple spots, they can start decision-making and work through closing the cases. Once they make a decision, they have to write their narrative to close it out in an auditable, high-quality way.

That’s generally what the workflow looks like. By the end of the day, if they haven’t finished their queue, it carries over to the next day. It can be a pretty intense grind, which is why there’s such high churn in fraud and compliance teams. It’s a very aggressive pace for many of these teams.

Cool. So looking at a team that uses Greenlite or AI agents more broadly…

Imagine you wake up in the morning and that same queue has been pre-worked by an agent. All that data collection has taken place. A lot of the baseline analysis is done, those pivot tables have been made. The research and the website reviews have been completed. That packet is now sitting on your desk.

That’s pretty exciting as an idea. One of our customers described it this way: you can never go back to the old world of doing things once you’ve started to have an agent help you with your investigations.

So you jump in, you have this packet on your desk, and you’re like, “Okay, cool. I noticed that, you know, Tom made these weird transactions to Will. Is that within our risk tolerance?” You might say, “Yeah, that is, and that’s okay,” and we can move on. That’s where we can focus much more on decision-making. And so that’s how we generally think about it.

THOMAS TARANIUK: Because fraudsters never stand still, or that’s what we like to say. How do you keep each AI agent current and ready to take on the latest scams, ploys, and maneuvers fraudsters use to defraud both users and businesses, and to outpace regulators and technology?

How do you keep each AI agent ready to take on the latest scams?

WILL LAWRENCE: Yeah. So one thing that’s cool about agents is that they slot fairly well into existing programs. Let’s go back to how a team currently keeps things current. Every now and then, a quality assurance and quality control team reviews how teams are doing those fraud investigations.

They might notice gaps: “Hey, this didn’t account for this new typology that we’re seeing.” And they might identify that gap and then have a training program against it, right? “Hey, we’re seeing this new vector emerge. Let’s talk about it. Here’s how to spot it. Here’s what you should do in that situation.” You’re effectively updating the operating procedures to handle these incremental threats that you’re seeing. But you only get to be in that spot when you are observing how an investigator is currently tackling a given investigation.

So again, I’m saying this quality assurance and quality control piece becomes even more important in a world of agents. Because you’ll still identify new vectors, and it’s up to humans to decide: how do we want to investigate that?

Again, if you’re not doing all this administrative work, you’re going to have more time to think about how you want to investigate these emerging typologies. That’s where I think a lot of fraud professionals need to be spending more time. It’s not for lack of trying; they just don’t have the capacity to do so today.

If they have the ability to say, “Here’s how we want to investigate these synthetic identities, here are the incremental tools we want to be using,” that allows you to actually update your policies and your standard operating procedures.

That’s generally the workflow we see: QA and QC teams review investigations, figure out the different incremental mitigations, teach the agents how to use the incremental tools and mitigations, and the flywheel continues so that you can focus on more proactive risk mitigation rather than just executing your operating procedures.

THOMAS TARANIUK: Most definitely. And AI agents can’t be trained just on retrospective data; the training needs to be continuous. But according to our Sumsub 2025 European Financial Services report, approximately 51% of fintech professionals say that keeping up with regulations is their biggest challenge, with 44% citing high operational costs as well.

So the big question for you today, Will, is: is Agentic AI a breakthrough, or another layer of complexity for compliance teams to manage day to day?

Is Agentic AI a breakthrough, or another layer of complexity for compliance teams to manage day to day?

WILL LAWRENCE: The short answer is yes—it’s kind of both. On one side, it’s a breakthrough in that it gives teams more capacity to focus on proactive work.

I’m a big believer that interpreting guidelines and keeping up with the pace of regulation is something that, frankly, someone inside your own organization is best positioned to handle. This is where a lot of the skilled knowledge work that fraud, risk, and compliance professionals do really shines.

Whenever I talk to compliance and risk leaders, this is the main thing I tell them to focus on: how do you figure out the best path for your institution to chart? Agentic AI is a breakthrough in that it helps give more capacity to focus on these types of questions. I think that’s what we’re really excited about because, again, at Greenlite we focus on solving operational challenges so that you don’t have to think about them as much. You can think about policy challenges, which are the bigger-picture things to be considering.

However, it is also new technology, and anyone in this field knows that when you use new technology, there are incremental risks that you need to manage, as well as incremental controls and oversight you need to put in place.

Now, that would be challenging if there were no offsetting benefit. Fortunately, we’ve just freed up a lot of capacity on the operational side, so we now have people who would love to do that work. What we often see in the teams we work with is that some of these investigators are turned into, essentially, technology leaders.

Suggested read: Know Your Machines: AI Agents and the Rising Insider Threat in Banking and Crypto

How do you oversee the AI? How do you make sure that it’s using the same policies? How do you continue to tune it over time? If agents weren’t freeing capacity on the operational side, this would be a very challenging technology to adopt. But because that is taking place, you have the ability to add in those resources and those controls to manage the risks you might be seeing.

THOMAS TARANIUK: Most definitely. And following up on whether Agentic AI is a breakthrough: there also needs to be a breakthrough in trust and in the governance around it for the market and regulators to adopt it at scale across the board, right? Or are you seeing that breakthrough happen already, with enterprise businesses adopting this with full trust and the belief that it’s going to solve their day-to-day problems for years to come?

Trust and Agentic AI

WILL LAWRENCE: The short answer is we’re already seeing teams adopt this at scale. We work with several public companies and federally regulated banks who are all using this type of technology in production today. And you might wonder, how can they do this? This is a new technology—how do they feel comfortable?

The general thought here is that, again, we’ve been using AI—across those different categories of AI we described earlier—for a long time. People have been using models for a very, very long time. That means we already have a lot of the controls, validation, and governance processes that we need.

So if I were to use a new transaction monitoring system, I know exactly how I would validate that system, what governance I would put in place for it, and what kind of controls. The same thing happens with agents. They’re a new set of models, right? We know how to use new models, and we know what controls we need to put in place. The risks might be slightly different because they’re processing a different task, but ultimately we know how to do model validation, governance, and controls. That’s what we’re seeing: teams applying the same processes they already have to these new systems.

I like to say: just because AI is a new technology, don’t throw the baby out with the bathwater. You already have great controls in place; just continue to apply them to this new technology. Sometimes I think it’s in the media’s interest to hype this technology up as a completely different thing, and that’s not the case. It’s just a new set of models for us to use in our fight against financial crime and fraud.

And the other part that I think everyone is recognizing is that the bad guys already have access to this and have been using it for a long time at this point. So you’re falling very far behind if you cannot adopt this technology to keep up with those fraud losses and risks being generated through this type of AI.

THOMAS TARANIUK: So the question then is: will regulators ever actually accept AI-first compliance embedded throughout every regulated industry around the world, across financial services, banking, fintech, and of course the even more tightly regulated crypto industry as well?

Will regulators ever accept AI-first compliance across every industry around the world?

WILL LAWRENCE: So if you look at it in a vacuum, it might seem like a crazy concept: why would we accept this new technology? But if you look at the broader picture of financial services, it has never been more costly to run a financial institution, and the regulatory burden keeps going up. What does this mean? It’s very hard for smaller institutions to compete with the big ones.

And so we’re seeing a lack of competition, which regulators, especially here in the US, view as a concern. It’s a systemic concern for the health of the financial system. If regulators want to encourage competition, and want to encourage diversity in where assets are stored and how they’re used, they need to allow smaller institutions to operate at scale.

Against that backdrop, what are some of the tools these smaller financial institutions might adopt? Technology and automation, to gain far more leverage than, say, a Barclays, which can simply throw more people at the problem. Right? And I think that’s really exciting.

Again, if you’re just looking in a vacuum, probably not. But if you look at the overall competitive landscape of financial institutions today, you really need to encourage innovation so smaller institutions can compete with the large ones. We think automation is going to be one of the major levers for doing so.

I’m optimistic. I’m very optimistic that regulators will recognize this and see it as a great tool to increase the stability of financial institutions.

THOMAS TARANIUK: AI will make mistakes, that much is certain. I think the question for the podcast today is: what risks does that create, and how do we limit them, especially in highly regulated sectors such as fintech and banking, where AML and risk officers carry significant responsibility and face heavy penalties from regulators even for minor compliance errors? Can we alleviate some of that responsibility, or will it still fall on these individuals, who must trust AI to complete a job they have historically done themselves?

How do we limit the mistakes Agentic AI will make, and what risks do they create?

WILL LAWRENCE: So, as we help teams roll out Agentic AI solutions, about 80% of the controls, validation, and governance required is similar to what you already have in the traditional model world. So again, continue to use the systems you have in place for adding controls, building governance structures, and managing risks; that will cover a lot. Agentic AI is broadly another tool, another model to help us with incremental tasks.

So again, use the things you already have in place. You don’t have to start from ground zero to use this type of technology. Bring those over. What’s the 20% that’s different? The interesting thing is that Agentic AI and generative AI share a common feature: their outputs are not as predictable as a traditional supervised machine learning model’s.

If you ask, for example, ChatGPT to tell you a joke four different times, it will give you four different jokes. This goes back to what we need to assess for the success of these models: directional correctness. In that example, did it tell you a joke? Yes. You might then ask, how funny is the joke? That’s a secondary concept. The primary question is: did it do the job it needed to do?

When Greenlite is helping teams close out, say, a PEP or an adverse media alert, the investigation won’t read identically every time, but the spirit will be roughly the same, and it will reach the same answer on the other side. That’s what we need to check: the spirit and direction of the answer rather than the exact words.
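One way to picture checking the spirit rather than the exact words is an eval like the sketch below, which compares the decision and the required facts instead of the narrative text. The field names are illustrative, not Greenlite’s actual schema.

```python
# Sketch of a "directional correctness" check: two runs may word the
# narrative differently, but both must reach the same disposition and
# cover the same key facts.

def directionally_correct(output: dict, expected: dict) -> bool:
    if output["decision"] != expected["decision"]:
        return False
    narrative = output["narrative"].lower()
    return all(fact in narrative for fact in expected["required_facts"])

run_a = {"decision": "false_positive",
         "narrative": "DOB differs from the watchlist entry; no media hits."}
run_b = {"decision": "false_positive",
         "narrative": "No adverse media found and the DOB does not match."}
expected = {"decision": "false_positive", "required_facts": ["dob", "media"]}

# Differently worded, same spirit: both runs pass.
print(directionally_correct(run_a, expected), directionally_correct(run_b, expected))
```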

Suggested read: 5 Best Practices for Adverse Media Screening

Another point: having worked on machine learning systems for most of my career, the way you do testing inside an agentic system is quite different from how you do it in traditional transaction monitoring or machine learning processes. In machine learning, generally, you pump in a lot of test data and expect the same outputs on the other side. You then understand how every transformation took place. That’s pretty much it: input and output.

Now, you need to check that every step is done correctly, because there’s variation at every step. Going back to the joke example, every response can be a little different. In agentic systems, you actually have to use evaluations, or “evals,” and benchmarking to ensure the quality of responses is really good.

A specific example: Greenlite reviews websites to assess the nature of a business and determine whether it looks fraudulent. We have a labelled list of websites we know to be fraudulent or legitimate. We consistently run that benchmark through the system to ensure it continues to get the right answers as we make changes and improvements. This ensures incremental launches or model updates perform well.
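A regression benchmark of that shape could be as simple as the following sketch, assuming a labelled case list and a stand-in review function; a real suite would have hundreds of cases and richer metrics.

```python
# Sketch: replay a labelled set of websites through the review pipeline
# after every change, and block the release if accuracy regresses.
# review_website() is a stand-in, and the URLs are invented examples.

BENCHMARK = [
    {"url": "https://legit-example.com", "label": "legitimate"},
    {"url": "https://fake-shop-example.net", "label": "fraudulent"},
    # ...in practice, many more labelled cases
]

def review_website(url: str) -> str:
    """Stand-in for the agent's website review; returns a verdict."""
    return "fraudulent" if "fake" in url else "legitimate"

def run_benchmark(min_accuracy: float = 0.95) -> None:
    correct = sum(review_website(c["url"]) == c["label"] for c in BENCHMARK)
    accuracy = correct / len(BENCHMARK)
    print(f"benchmark accuracy: {accuracy:.0%}")
    # Gate the launch: a model or prompt update must not regress the score.
    assert accuracy >= min_accuracy, "regression detected; do not ship"

run_benchmark()
```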

Another challenge: every company probably reviews a website slightly differently. How do you make sure company-specific logic is incorporated into your model? That’s a newish concept. When you ask the AI to evaluate itself, you might say: at my institution, I care about these 16 things, so also check for those 16 things.
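One possible shape for those institution-specific, prompt-level checks is sketched below; the checklist items are invented examples, not any institution’s actual policy.

```python
# Sketch: each house rule becomes a named assertion against the agent's
# output, layered on top of the generic eval suite.

HOUSE_CHECKLIST = [
    ("sanctions screening mentioned", lambda out: "sanctions" in out),
    ("site ownership verified", lambda out: "registrant" in out),
    ("disposition stated", lambda out: any(d in out for d in ("approve", "decline", "escalate"))),
]

def check_house_rules(agent_output: str) -> list[str]:
    """Return the checklist items the output failed to cover."""
    text = agent_output.lower()
    return [name for name, passed in HOUSE_CHECKLIST if not passed(text)]

out = ("Registrant matches the stated business; sanctions screening clear. "
       "Recommendation: approve.")
missing = check_house_rules(out)
print("all house checks covered" if not missing else f"missing: {missing}")
```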

Essentially, the structure of testing is a little different because you need prompt-level evaluations. Any strong organization will have those types of evaluations in place. I always ask providers: how do you test your Agentic AI system? Tell me your eval strategy and your benchmarking process, and show me why I can trust the results. A capable provider should be able to answer that.

THOMAS TARANIUK: Some amazing points there, Will. I think you’ve taught me a lot. We’ve discussed the upside, benefits, and breakthroughs that could take place.

Do you believe there will be a day when humans, when compliance professionals, won’t be needed in compliance because of AI and Agentic AI? You’ve said humans are still very important for decision-making and responsibility. But of course, paralegals have to work through that kind of workload to become trainee lawyers, lawyers, and then partners. You’re removing a big chunk of work at the bottom of the ladder, which is where a lot of the training, intelligence, and information-gathering experience for new compliance professionals comes from.

Will there be a day when humans and compliance professionals won’t be needed because of Agentic AI?

WILL LAWRENCE: I can confidently say that there’s going to be a very bright future for compliance, risk, and fraud professionals. We’ve already seen this story before; this isn’t the first time a technology shift has taken place, and it’s not even the first technology shift affecting financial services.

I’m always reminded of the rollout of ATMs, when people said, you know, with these ATMs, there would never be a bank teller again. Fast forward to where we are now: there are more bank tellers than there were at the introduction of ATMs. Why is that? Those bank tellers are now spending more time on specialized services. They might help a specific population, like the elderly, access the same tools an ATM might provide. They might focus on advisory services, routing clients to the right person inside the institution, or improving customer success. They can spend more time with each individual client because other clients can go straight to the ATM.

That exact same thing is going to happen in compliance, risk, and fraud.

Fraud is not going anywhere; it’s just getting more rampant. AI is going to help us in that fight by taking some of the grunt work off the table. But that doesn’t mean there won’t be incremental fraud vectors that need to be addressed.

This ties into an interesting question about how new people can enter the field once those traditional training grounds are gone. I think we need to build more specialized training programs that focus less on operations and instead emphasize the skilled tasks we’ve discussed in this conversation: for example, how to interpret FCA requirements for your institution, or how to conduct quality control and quality assurance. Training new grads with these baseline skills will make them successful throughout their careers.

THOMAS TARANIUK: Most definitely. I have one final question before we move on to the outro. Will, as more companies, such as your clients and partners, adopt Agentic AI under the banner of compliance, what’s the single most practical way you see it being used moving forward? What’s the next natural step in the evolution of Agentic AI for compliance and fraud prevention?

What’s the next step in the evolution of Agentic AI for compliance and fraud prevention?

WILL LAWRENCE: I think there’s still a lot of work to be done on the L1 layer of fraud risk and compliance — broadly, the triage layer. If you ask any financial crime or fraud professional, the number one thing keeping their team occupied is false positives: investigations that don’t need review. They’d rather focus on cases that are truly suspicious.

There’s still much to improve there. We spend a lot of our time in this area, and we’re excited to extend the range of L1 investigations AI can handle. That’s the immediate use case. I anticipate that in two to three years, every financial institution will have a triage layer driven by agents, increasing organizational scale and leverage, and making for happier employees who no longer have to wade through all the false positives.

THOMAS TARANIUK: Well, that’s a big commitment, and I love to see it. I think what you’re trying to solve within the industry is going to make a lot of lives easier, give a lot of people confidence in the businesses they entrust with their finances and other resources, and also help people stay safe from fraudsters, which, at the end of the day, is what we’re all trying to do. To close things out as we usually do on the What The Fraud? Podcast, we always like to have a little fun at the end. Five quick-fire questions. No overthinking. Are you up for it?

Quick-fire questions

WILL LAWRENCE: Love it. Let’s do it.

THOMAS TARANIUK: Wonderful. If you could ban one risky online behavior forever, what would it be?

WILL LAWRENCE: I’m going to get in trouble for this one, but online gambling can be crazy considering how little ID verification you need to access the amount of capital you can move. So that’s going to be one of them.

THOMAS TARANIUK: Absolutely. Have you ever been a victim of fraud yourself? Is that also tied to online gambling?

WILL LAWRENCE: Fortunately, I have not, knock on wood. It hasn’t happened to me yet.

THOMAS TARANIUK: Good to hear. I think you’re one of the first guests on our podcast for whom it hasn’t, actually. So, what’s one fraud myth you wish would disappear?

WILL LAWRENCE: Ooh, wow. That’s a challenging one. I think the one I most hope disappears is the idea that young, technology-savvy people are immune to fraud because they know how to spot the very obvious scams. They often have no concept of account takeovers or identity theft.

People are lured into a false sense of confidence because they think that knowing technology protects them, but scams are much more sophisticated.

THOMAS TARANIUK: Definitely. As a follow-up, which type of fraud do you think will grow the fastest in the next five years?

WILL LAWRENCE: It’s got to be identity theft and the use of deepfakes for identity documentation. Some of our customers report entire company documents and IDs being fabricated, and it’s really hard to detect with current systems.

I often think fraud vectors move in lockstep with technological improvements. Right now, generating documents and images has never been easier, and deepfakes are more accessible than ever. That vector terrifies me, but it’s something we all need to fight as an industry.

THOMAS TARANIUK: We should all be fighting it together, especially as more models are released, democratizing access and lowering costs for deepfake and synthetic image tools. We just need to keep up—the arms race, basically. But Will, if you could have any other career other than the one you’re currently in, what would it be? I feel like this is a terrible question because you seem very happy in your current role.

WILL LAWRENCE: I am enjoying it. I love this right now. Honestly, being an investigator sounds really exciting—like a high-level investigator working on complex criminal activities. I have so much respect for some of the investigators we get a chance to work with. They are some of the hardest working, kindest professionals I’ve ever met. The toolkits they have and the mental gymnastics they go through are just amazing. I’d love to learn some of those skills.

THOMAS TARANIUK: Amazing to hear. And of course, they’re going to do even better now that they have Agentic AI by their side, working on some of the cases with them as well.

WILL LAWRENCE: Hey, let’s hope so. I completely agree.

THOMAS TARANIUK: Thank you for joining us. Very happy to discuss some interesting topics and take a glimpse into the future role and life of a compliance expert or fraud analyst.

WILL LAWRENCE: Thank you so much for having me. It was a pleasure.

THOMAS TARANIUK: Great to have Will on this week’s episode of What The Fraud? Podcast. We covered some really important topics, but a key takeaway about Agentic AI is that it won’t replace the day-to-day jobs of those working in compliance. Agents can flag anomalies at scale and speed up decisioning, but humans remain critical for judgment, customer care, and complex cases.

The right balance is letting AI handle routine checks and triage, while humans focus on edge cases, disputes, and regulatory interpretation. Having a human in the loop also provides accountability, which regulators increasingly expect, ensuring decisions aren’t just a black box.

We’re looking at a brighter future, with AI helping fight fraudsters at their own game while keeping compliance decision-making streamlined, accurate, and aligned with business procedures. Join us next time on the What The Fraud? Podcast as we explore how scammers are stepping up their game in fintech and why educating customers must remain a top priority in combating cybercrime.