• Mar 18, 2026
  • 15 min read

Synthetic Identities: When Data Becomes a Persona | "What The Fraud?" Podcast

Dive into the world of fraud with the 'What The Fraud?' podcast! 🚀 In this episode, Tom is joined by Steve Lenderman to explore the growing threat of synthetic identity fraud—how these identities are created, how they evolve, and why this issue has become a global concern.

THOMAS TARANIUK: Hello and welcome to season five of What The Fraud?, a podcast by Sumsub, where digital fraudsters meet their match.

I'm Thomas, Head of Partnerships here at Sumsub, a global verification platform helping businesses verify users, companies, and transactions. Over the past four seasons, we've looked at emerging trends, done deep dives into the industry, and focused on specific stories and journeys. But as fraudsters continue to upskill, now is the right time to get specific.

A new year brings fresh perspectives and new cases of fraud. This season, we're taking a closer look at the different types of fraud attacks and what they really mean for you. Joining us to kick things off is Steve Lenderman, a synthetic identity specialist and Head of Fraud Prevention at iSolved, the HR and payroll platform helping employers manage their workforce. The perfect place to start. Steve, welcome to What The Fraud?

STEVE LENDERMAN: Exciting to be here. It should be fun. Looking forward to it.

What is synthetic identity, and how are fraudsters using it?

THOMAS TARANIUK: So, Steve, synthetic identity fraud is widely considered one of the fastest-growing and most challenging forms of financial crime globally. It’s something different from the classic identity theft most people are used to. Let’s break it down—what is it from your perspective, and how are fraudsters using it?

STEVE LENDERMAN: Synthetic identity, at its core, is essentially an identity created with fictitious data. There are a number of different definitions floating around the industry, but here in the United States, the Federal Reserve Bank of Boston actually defined synthetic identity fraud. That’s been really important for us, because if you can't define it, you really can't measure it.

But as we've seen synthetic identity fraud grow here in the United States, it has expanded globally. This is now a global problem, and that’s really concerning to all of us. The skill set of creating synthetic identities was honed here in the United States and is now being weaponized globally.

And what is very concerning is that even after 20 years of dealing with synthetic identity here in the States, we still don't have it under control. The first five years of synthetic identity were kind of a playground for the bad guys. Now they are really good at it, and that should be terrifying, because now they are doing it globally.

There is no learning curve for the rest of the globe. They are fully functional at the peak of their game and really causing havoc.

How do synthetic identities work?

THOMAS TARANIUK: We'd love to ask you as well, Steve, how do synthetic identities work? Can a user survive onboarding, demonstrate good behavior, and then after the fact it turns out they’re not real?

STEVE LENDERMAN: Synthetic identities rely on data. Fictitious pieces of data are assembled into an identity, and the days of physical identity are numbered, if not already over.

We used to talk about ourselves in terms of height, weight, eye color, and things of that nature that identified us physically. But as we moved away from in-person transactions at banks or retail locations, everything became a self-service digital platform. That digital channel has allowed synthetics to really flourish, because all we really are to a bank, a retailer, a hotel, or a hospital are literally pieces of data on a screen.

This physical connection of an identity to a person really no longer exists in the same way. Your identity is your digital identity—how you behave digitally, what you look like from a digital perspective, and what that data is.

This data has been aggregated and disseminated into the wild over the last 20 years, and it has allowed synthetics to flourish. That same practice is now taking place globally. While it allows a lot of really nice things to happen from a self-service perspective, it also opens a very large door for synthetic threat actors to get into your systems and live there for as long as they choose.

Some like to play a short game—a couple of weeks or a month or two—and get in and out quickly. Others play the long game. We've seen synthetic identities in accounts for over 20 years, just waiting and being used to carry out other threat actor activities, like building more synthetics or obtaining citizenship.

So it can be a longer or shorter game, but it is really just about data. And once that data is in your ecosystem, it becomes very difficult to remove because it becomes validated.

What is a bust out?

THOMAS TARANIUK: It’s an incredibly interesting segment of work as well, Steve. And with synthetic identities, we keep hearing the term “bust outs.” Would you be able to describe what that means?

STEVE LENDERMAN: A bust out is when a threat actor uses an account up to or beyond its credit limit. So a traditional bust out would be having a $10,000 credit limit or £10,000 limit, maxing that out, and essentially walking away.

We would typically see that scenario with traditional identity theft, where you max the credit line and then dump it and move on. But threat actors have figured out that you can actually go above the credit line using payment regulations to essentially fool systems into giving you more available credit.

For example, here in the States—and I’m sure in other countries—you make a $10,000 purchase on a $10,000 line, then make a $15,000 payment drawn on a bad or empty account. So you essentially overpay the account. That payment is provisionally posted, and before the bank discovers it won't clear, the threat actor spends the additional credit it created.

So now you have a $10,000 line, a $15,000 payment, and another $10,000 transaction. When the payment is returned unpaid and everything settles, the balance might be $20,000 on a $10,000 line.
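The overpayment arithmetic Steve walks through can be sketched as a small worked example. All figures are illustrative, and the provisional-credit timing is a simplified assumption; real settlement windows vary by bank.

```python
# Illustrative sketch of the overpayment bust-out arithmetic described above.
# All numbers are hypothetical.

credit_limit = 10_000

balance = 10_000             # initial purchase maxes out the line
payment = 15_000             # overpayment drawn on a bad or empty account
balance -= payment           # bank provisionally posts the payment
available = credit_limit - balance
print(available)             # phantom available credit before the payment clears

balance += 10_000            # threat actor spends against the phantom credit
balance += payment           # payment is returned unpaid and reversed

print(balance)               # amount owed on the 10,000 line after settlement
```

Running this shows the phantom available credit reaching $15,000 and the final balance settling at $20,000 on a $10,000 line, matching the scenario above.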

Where synthetics and bust outs intertwine is that bust outs can only be used once. Once you burn your identity and your credit score, you can’t do it again. So threat actors figured out how to manipulate identity elements—changing names, dates of birth, social security numbers—making slight adjustments that allow them to create new identities and run bust outs repeatedly without recourse.

And just to close, it’s important to understand that synthetics and bust outs are intertwined, but not every synthetic is a bust out, and not every bust out is a synthetic. That’s very important.

THOMAS TARANIUK: That makes a lot of sense. From your perspective, would you say the synthetic IDs are put in place long before the bust out, or is this a case where it's quite a quick turnover? Because they have to build creditworthiness, right? They have to build trust with the organization. So that takes a lot of pre-investment by the perpetrators behind these synthetic identities as well.

STEVE LENDERMAN: Yeah, I think it depends. Traditional synthetics were living in a credit world for most of their original lives. But with the evolution of fintechs, non-traditional banking, crypto, and now even expansion into healthcare, government, and buy now, pay later—anywhere money is being moved—synthetics are involved.

Traditionally, there was a longer game with credit because it took time to build credit and make the identity more mature. You wanted to establish a higher credit line. If you were just getting a new line of £1,000, is it worth your time? Maybe. But is a £10,000 limit more beneficial? Absolutely. To get there, though, you had to build that credit up.

Now the bad actors have figured out they don’t always need credit anymore. There are other opportunities—retail, banking, fintech—where they can do just as much damage. They can open accounts that typically require a lower level of diligence.

Because most retail accounts and traditional bank accounts are not tied to credit reporting, they can live there without being detected or shared. There are no inquiries or trade lines for the industry to see. I could open 10, 20, or 30 fintech or banking accounts, and nobody would know.

That’s where synthetics have moved—to start their lifecycle, establish themselves, mature, and then springboard into higher-value opportunities like credit.

So typically, we see shorter timelines now, but they still use sleeper accounts. They may sit for a year or two to build into something more powerful.

Do synthetic identities ever really disappear?

THOMAS TARANIUK: If you activate them all at once, at that point I’m guessing they’re burned, right? They could use another account that is completely isolated, but do they just disappear? Or is there a database of synthetic IDs—a repository where businesses can flag them and say no one should work with these identities?

STEVE LENDERMAN: This is the frustrating part about synthetic identity fraud. We’ve known about it for two decades. It’s now a global problem, and still, globally—and even here in the United States—it’s not being addressed appropriately.

We do have a definition of synthetic identity, but even with credit bureaus, there is still no proper way to close an identity as synthetic. They have to mark it as identity theft, which isn’t accurate. And if they mark it as identity theft, it actually validates the identity as legitimate—which it isn’t.

They can’t charge it off as bad debt in a meaningful way, because it goes into collections and essentially disappears. There’s no proper classification.

There’s no way for credit bureaus to clearly identify something as synthetic. That has to change. If we don’t fix that, the industry will never fully understand the scope of the problem.

And fintechs are not reporting to credit bureaus in the same way. Banks have shared information for years, formally and informally, but fintechs and newer industries don’t have that visibility. They’re just starting to experience this type of fraud and don’t fully understand it yet.

To the best of my knowledge, there is no government-sanctioned synthetic identity database. We do manage an informal working group in the United States—the Bust Out Synthetic Identity Working Group—where we track about 16,000 synthetic identities.

There are also vendors with consortium databases, but all of that information is proprietary. No one wants to share data freely, so everyone has bits and pieces of a much larger problem. No one has the full picture.

What types of synthetic identities are causing the most damage?

THOMAS TARANIUK: So from your perspective, what you're saying is these industries have a lot of work to do to catch up. What types of synthetic identities are you seeing in your role that are doing the most damage?

STEVE LENDERMAN: From a payroll perspective, we’re very concerned with synthetic identities being added as employees, or even synthetic entities—fake businesses—being created and populated with synthetic employees.

What threat actors want is legitimacy. One of the best ways to demonstrate legitimacy anywhere in the world is with a pay stub. Pay records feed into government systems and tax agencies.

So it’s critically important for human capital management companies globally to take this seriously. Once we authenticate a synthetic identity with a pay stub, it opens the door for banking and other services, because many institutions ask for proof of income or bank statements.

But more importantly, it pushes that synthetic data into government systems. Governments are not always great at managing data, and once that bad data enters their systems, it becomes very difficult to remove.

From my perspective, that’s one of the most critical risks—accidentally validating synthetic identities and enabling them to be used elsewhere. It’s terrifying how creative these actors are.

Which industries are targeted?

THOMAS TARANIUK: It is certainly terrifying. Historically, this has been seen as a credit problem, but now it’s clearly a broader fraud issue. If you were a fraudster creating synthetic identities, which industries would you target?

STEVE LENDERMAN: The biggest issue right now is lack of collaboration across industries. Threat actors can move across verticals with no friction.

They are definitely targeting fintech, using those accounts to start the lifecycle of synthetic identities. Buy now, pay later is also a major risk because it’s fast-moving and allows larger purchases than before.

You can now finance high-ticket items—cars, rent, even surgery. That opens up huge opportunities for fraud.

Insurance is another area. Traditionally, fraud there involved fake accidents or claims. Now we’re seeing synthetic identities opening life insurance policies and faking deaths using synthetic documents and deepfakes.

Organizations often have payout thresholds where claims are processed automatically if everything appears valid. Fraudsters understand those thresholds and operate just below them.

How do synthetic identities bypass KYC?

THOMAS TARANIUK: You mentioned some interesting outcomes and thresholds. On a technical level, how are synthetic identities bypassing KYC and verification systems?

STEVE LENDERMAN: We need to rewind a bit. I personally manage three synthetic identities—two that I created over 15 years ago manually, and one more recent one using technology.

In the past, it took a lot of effort—building data, creating social profiles, making transactions manually. Now, with AI and large language models, I can automate much of that.

Fraudsters can now create hundreds or thousands of synthetic identities instead of just one. They test systems, see what works, and refine their approach.

They probe onboarding systems, identify thresholds, and adjust identities until they succeed. If an application is rejected, they analyze why and tweak the data before trying again.

This is absolutely a game of cat and mouse. They are constantly learning and adapting.

THOMAS TARANIUK: So you've mentioned as well building your own synthetic identities. Is that something you've experimented with on the agentic side? And could you tell us a little bit more about what it involved and what it taught you?

STEVE LENDERMAN: Not yet. Certainly, agentic AI is a hot topic in the fraud world. And if we don’t think synthetic identity threat actors have already thought about using it, we’re foolish. They absolutely are looking at that moving forward.

Again, it provides another layer of opportunity for these actors to perpetrate fraud. It’s another digital transaction layer that doesn’t involve physical interaction. You’re able to essentially put an identity into a platform and then use agentic AI to make purchases.

We tend to think of agentic AI at first as ordering things from Amazon or DoorDash—smaller ticket items. That’s how buy now, pay later started as well. But in a year or so, when agentic AI has matured further, we’ll see it used for large purchases—MacBooks, crypto, and other high-value items.

Now you have a synthetic identity using agentic AI to make purchases. The merchant or retailer is excited about making the sale, but they don’t understand who—or what—is making the purchase. It’s not a person. It’s a system.

From an identity perspective, my most recent synthetic illustrates where the industry is going. I built a 16-year-old child identity for a synthetic family I had already created. One of the key indicators we use to identify synthetics is that they typically don’t have relatives or associates. So I wanted to test that.

Previously, it took me weeks or months to build identities. This time, using AI tools and prompting, I created a fully functioning 16-year-old identity in seven minutes. Seven minutes.

Now imagine doing that at scale. A threat actor with minimal skill can do this with the right prompts. That’s where things become really concerning.

Could synthetic identities be regulated or controlled?

THOMAS TARANIUK: The sheer size of the problem is astonishing. And the fact that it's not illegal to create a synthetic identity or use it is even more worrying. What do you see as the future—can this be regulated or controlled?

STEVE LENDERMAN: I think we have to go back to what we discussed earlier—reporting and classification, whether that’s through credit bureaus or government systems.

We’ve seen interesting statistics in the United States. Most synthetic identity losses are below $25,000. There’s a reason for that. If a loss exceeds $25,000, banks are required to file a suspicious activity report, which goes to the federal government.

Threat actors know they don’t want that data entering government systems. So they stay below that threshold intentionally.

Even when these cases are reported, there is no proper classification for synthetic identity fraud. It gets lumped into a general category, which means the data effectively disappears.

We also need to recognize that physical identity, financial identity, and digital identity are now separate. Some European countries have done a better job of linking these through ID systems tied to devices and behavior.

In the United States, we rely heavily on a nine-digit number that was originally designed for government benefits. That has become our financial identity system, which creates vulnerabilities.

We caused this problem ourselves, and it’s not something that can be easily reversed. Will it stop? No. It’s not going to stop.

THOMAS TARANIUK: It may not stop, but you're fighting the good fight.

STEVE LENDERMAN: We’re all trying. We’re all fraud fighters.

Can we catch synthetic identities before they cause any harm?

THOMAS TARANIUK: Absolutely. These identities often have multiple lives and play the long game. The key question is whether we can catch them before they cause damage. What warning signs should companies look for?

STEVE LENDERMAN: We mentioned onboarding time as one indicator of synthetic identity. We’re also looking at two key variables.

First, human behavior patterns. People tend to follow consistent routines on their devices. Synthetic identities don’t naturally exhibit those patterns. Behavioral analytics can help identify these anomalies.

Second, device intelligence. A digital identity is tied to a device. When a single device is associated with multiple identities or personas, you can start identifying relationships.

Using graph analytics, you can map connections across accounts and uncover networks of fraudulent activity. These threat actors are not operating a single synthetic identity—they’re running hundreds or thousands.

You start with behavioral signals, then expand into device intelligence and network analysis to uncover broader patterns.
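The device-intelligence step above can be sketched in a few lines: when one device is associated with multiple personas, those identities become candidate nodes in a fraud network. This is a minimal illustration, assuming hypothetical (device_id, identity_id) event records; production systems combine many more signals.

```python
# Minimal sketch of device-to-identity link analysis.
# Event records are hypothetical; real pipelines use far richer telemetry.
from collections import defaultdict

events = [
    ("dev-1", "persona-a"), ("dev-1", "persona-b"), ("dev-1", "persona-c"),
    ("dev-2", "persona-d"),
    ("dev-3", "persona-e"), ("dev-3", "persona-f"),
]

identities_by_device = defaultdict(set)
for device, identity in events:
    identities_by_device[device].add(identity)

# Flag devices tied to more than one persona as candidate ring members;
# graph analytics would then expand outward from these seed nodes.
suspicious = {d: ids for d, ids in identities_by_device.items() if len(ids) > 1}
print(sorted(suspicious))
```

Here `dev-1` and `dev-3` surface as seeds because each is linked to multiple identities; a graph traversal over shared attributes (addresses, phone numbers, payment instruments) would then map the wider network.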

There are also traditional indicators like lack of associates or thin credit files, but those can be manipulated easily, especially with AI.

The biggest challenge is that organizations need to acknowledge the problem first. Many still treat this as credit loss rather than fraud. But once you acknowledge it, you need to invest resources to address it—and many organizations struggle with that.

THOMAS TARANIUK: My concern is that technologies are evolving to mimic real behavior. Could fraudsters use AI agents to replicate human patterns and bypass these signals?

STEVE LENDERMAN: Yes. Some of that I had to do manually in the past—logging in, moving money, posting activity. Now I use AI to automate those behaviors.

I use large language models to generate code that creates bots to manage social media, perform transactions, make purchases, and simulate activity.

And that’s just me working with a few identities. Large organized groups are doing this at scale.

The days of individual fraudsters working alone are over. This is now transnational organized crime and nation-state activity. For example, North Korean actors are creating synthetic identities to gain employment in organizations, not for money, but for intellectual property access.

THOMAS TARANIUK: We've seen in recent reports that fraud is becoming more sophisticated, especially with AI. How has this impacted synthetic identity fraud?

STEVE LENDERMAN: We’re seeing an evolution not just in synthetic identities but in synthetic entities—fake businesses.

This became very clear during COVID relief programs. Fraudsters created synthetic businesses populated with synthetic identities and applied for government loans.

In the United States, PPP fraud became one of the largest fraud cases in history. The same likely happened with similar programs globally.

Creating a business is easier than creating an identity. It can be done in hours. The financial upside is much larger—six-figure loans, large credit lines, significant money movement.

This shift into commercial fraud is significant. Fraudsters can now access much larger sums with less effort.

THOMAS TARANIUK: It certainly seems that traditional crime does not pay, but this type of crime certainly does. Steve, you’ve been involved in some fascinating investigations—from bust outs on one side to credit abuse and synthetic identities on the other. What’s an example that really brought the scale of this problem home to you?

STEVE LENDERMAN: I’ll flash back in time to illustrate that this is not a new problem and to show the scale of it.

Back in 2007, the United States prosecuted its first synthetic identity fraud case. At the time, there were no laws specifically covering synthetic identity fraud, so the case was prosecuted under tax evasion, wire fraud, and RICO statutes.

In short, several credit card banks noticed a pattern of purchases involving gold bullion bars. When we cross-referenced the accounts, we found that the identities being used were tied to student visas, and the individuals associated with those identities were no longer in the country.

Those identities were linked to a specific region. We brought the case to federal agencies and had to explain what synthetic identity fraud was, because it wasn’t well understood at the time.

When authorities searched a residence in Brooklyn, they found multiple gold bars stored in an oven. Further investigation revealed that the gold was being used to fund terror financing, and the identities involved were tied to Pakistani nationals.

That was in 2007. Synthetic identities were already being used for purposes beyond financial fraud. That case really drove home for me that this isn’t just about money. It’s about geopolitical risk and security.

And now, nearly 20 years later, we’re seeing similar patterns with nation-state actors using synthetic identities to infiltrate organizations for intellectual property. It’s no longer just a banking problem. It’s an industry-wide issue.

THOMAS TARANIUK: That’s why I want to talk about your role with the International Association of Financial Crimes Investigators. What are you responsible for there?

STEVE LENDERMAN: The IAFCI has been around since 1968, with about 8,000 members globally. It’s made up of fraud practitioners—people on the front lines, including banks, insurance companies, law enforcement, and prosecutors.

One of the key initiatives is forming working groups focused on specific fraud types. These groups bring together professionals to share insights, best practices, and intelligence. We have groups focused on bust outs, synthetic identity, check fraud, scams, crypto, and more—11 groups in total. As a member, you can participate in any of them. They meet regularly, share information, and collaborate on tackling fraud challenges.

Our synthetic identity working group has around 800 members actively sharing intelligence and strategies. It’s a powerful network.

THOMAS TARANIUK: It sounds like collaboration is critical. How important is it for identifying and reacting to patterns like bust outs?

STEVE LENDERMAN: It’s absolutely critical. What you know is important, but who you know can be even more valuable. Within these groups, there’s a strong level of trust. If I flag an account as fraudulent, others can act on that information quickly.

We share intelligence informally and formally, helping prevent fraud across institutions. The impact has been significant—millions of dollars saved through collaboration. It’s truly a 24/7 effort, and it’s empowering to be part of a network that is actively working together to fight fraud.

Advice for companies trying to stay ahead of synthetic identities

THOMAS TARANIUK: We’ve spoken to many groups focused on consumer fraud, but this is clearly more B2B. What’s your advice for companies trying to stay ahead of synthetic identities?

STEVE LENDERMAN: The most important thing is working closely with product teams during the design of onboarding processes. Fraud prevention shouldn’t be an afterthought.

Fraud teams are often seen as blockers, but the goal is to enable better conversion of legitimate users. You want to design onboarding flows that filter out bad actors early while allowing good users through smoothly.

There are tools available, like Microsoft Clarity, that can help analyze onboarding behavior. While often used for marketing, this data is extremely valuable for fraud detection as well.

Understanding how users interact with your system—timing, behavior, drop-off points—can reveal suspicious patterns.

Ultimately, onboarding bad users is a financial liability. Focusing on quality over quantity leads to better outcomes.

THOMAS TARANIUK: There are multiple teams involved—product, fraud, finance—all balancing different priorities. But ultimately, everyone wants the same thing: growth, safety, and trust.

Quick-fire round

To close things out, Steve, we always like to have a bit of fun. Five quick-fire questions. No overthinking—just answers. Are you ready?

STEVE LENDERMAN: I’m ready.

THOMAS TARANIUK: If you could ban one risky online behavior forever, what would it be?

STEVE LENDERMAN: Bot activity. If we could eliminate unauthorized bot activity, that would be a huge win.

THOMAS TARANIUK: Have you ever been the victim of fraud yourself?

STEVE LENDERMAN: Yes, multiple times. Even as a fraud practitioner, it happens. It’s frustrating, and it gives you a real sense of the customer experience.

THOMAS TARANIUK: What’s one thing about fraud prevention that people underestimate?

STEVE LENDERMAN: The idea that banks have insurance that covers fraud. That’s not true. Fraud losses are operational costs that get passed on to customers.

THOMAS TARANIUK: What has surprised you most about synthetic identities?

STEVE LENDERMAN: How they’ve evolved over the past 20 years—and how much more advanced they’re going to become with emerging technologies like agentic AI.

THOMAS TARANIUK: If you could have any other career, what would it be?

STEVE LENDERMAN: I used to play semi-professional beach volleyball and coach at the collegiate level. I’d probably go back to that, even though it’s a lot of work for not much money.

THOMAS TARANIUK: Sounds like a great lifestyle regardless. Steve, thank you so much for starting season five with such a strong episode.

STEVE LENDERMAN: Thank you.

THOMAS TARANIUK: If you enjoyed today’s episode, make sure to follow the podcast and leave a review. It helps others discover the show and stay informed about the latest fraud trends.

In our next episode, we’ll dive into AI agents—can we fully trust autonomous systems, and who is responsible when things go wrong? Stay safe, and we’ll see you in the next episode.