• Apr 22, 2026
  • 21 min read

When Systems Become the Target: The New Era of Fraud | "What The Fraud?" Podcast

Dive into the world of fraud with the 'What The Fraud?' podcast! 🚀 In this episode, Tom is joined by Maikel Ninaber, Head of Risk and Resilience for EMEA at Mastercard. Together, they discuss camera injection attacks, emulator farms, AI-driven fraud, and reverse-engineered onboarding flows, showing how criminal operations are becoming more structured, automated, and scalable.

THOMAS TARANIUK: Hello and welcome to What The Fraud?, a podcast by Sumsub, where digital fraudsters meet their match. I'm Thomas Taraniuk, Head of Partnerships here at Sumsub, a global verification platform helping businesses verify users, companies, and transactions.

Today, we are examining how modern fraud attacks work—from techniques like camera injections to large-scale emulator farms—and what organizations can do to stay resilient in an ever-changing fraud landscape.

My guest today is Maikel Ninaber, Head of Risk and Resilience for EMEA at MasterCard and, crucially, a former ethical hacker who knows firsthand how fraudsters get in. Maikel, welcome to What The Fraud?

MAIKEL NINABER: Thank you so much for welcoming me.

Just to clarify, everything I'll share today reflects my personal experience and views from working in offensive security and fraud defense. I'm speaking in a personal capacity and not on behalf of my employer or any specific organization. Thank you for having me.

What does being an ethical hacker mean?

THOMAS TARANIUK: Maikel, before we start talking about your role at MasterCard, I have to know about your time as an ethical hacker. For anyone who's not familiar, what does that actually mean, and what does the job involve?

MAIKEL NINABER: It basically means that you hack with the permission of the owner or employer at that moment in time. So usually you get a letter of engagement that includes the scope—so that's your get-out-of-jail-free card.

That document protects you: if you get caught, you can show it, and it proves that what you're doing is legal, right? So we're not doing anything illegal. We're trying to help organizations become more resilient. And to do that, sometimes you have to act as if you were a real threat.

THOMAS TARANIUK: That ethical hacking background is a fascinating way into this conversation because, in a sense, you've gone from testing the walls to building the infrastructure at a serious scale. And MasterCard is bigger, more complex as a beast than most people realize—billions of transactions, millions of fraud attempts. How would you describe the business today, and what does your world look like within it?

MAIKEL NINABER: If I look at the business at MasterCard, it's really a technology company: rooted in the payments and e-commerce space, but also broadening through acquisitions, especially in the areas of fraud and cybersecurity. One of the most surprising things when you study fraud groups is how structured they are. Some groups literally track performance metrics. They measure success rates, onboarding conversions, and detection rates, just like a normal tech company, right?

So they run experiments, improve scripts, and optimize workflows, and we're trying to play into that, right? So that's literally the cat-and-mouse game that any organization, including MasterCard, is taking a role in.

THOMAS TARANIUK: Grand. And from your previous experience as well, I'd love to hear a little bit about how you got into that.

MAIKEL NINABER: I started off as a programmer, and then slowly we realized that I was better at breaking code than building it. So I evolved my career, primarily in the beginning in social engineering, and later it became more dedicated to penetration testing and red teaming. And that literally means that I went to organizations, acted as if I were an employee there, and then, being part of that organization, looked for credentials or other materials that could give me a sort of standing in that organization to pursue persistence.

How important is it to have the right data and the right analysis tools for a company processing payments?

THOMAS TARANIUK: MasterCard processes billions upon billions of payments and transactions, with data points utilized across cybersecurity and risk signaling. How important is it to have the right data and the right analysis tools?

MAIKEL NINABER: It's really eye-opening because you literally see different types of threats in the e-commerce space, in the fraud space, and how they are attacking or using sophisticated methods in the payment world. And then, working at MasterCard, it really showcases that you have so many different anonymized data points that you can attribute and score to create that defense-in-depth layer so needed to stop fraud.

THOMAS TARANIUK: Absolutely. And forged documents and fake identities are still very much in play, right? But now the attacks go even further, targeting the verification systems themselves. From our perspective, last year's Sumsub Identity Fraud Report puts a number on it: a 180% rise in sophisticated multi-step attacks globally.

When did you notice a shift, let's say, and what patterns have you seen emerge?

A shift in identity fraud

MAIKEL NINABER: Historically, identity fraud was about documents—fake passports, edited PDFs, stolen selfies. Companies focused on visual authenticity, right?

Today, the attack surface has shifted. Fraudsters no longer try to trick the document. They try to exploit the workflow itself. So this means they study onboarding flows, SDK behaviors, retry mechanisms, rate limits, and device checks. The goal is not to produce the perfect fake identity, right? It's to pass the verification process.

Suggested read: AI Fake IDs and the New KYC Risk

Here's a story: I once saw a case where attackers discovered that after reinstalling a mobile app, the onboarding retry counter was reset every time. That meant they could attempt verification hundreds of times until the AI score crossed the approval threshold. The identity data was completely real; the attacker simply brute-forced the verification logic until the system accepted it. It's a textbook example of attacking the workflow rather than the document.
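The retry-reset flaw described here has a straightforward server-side fix: key the attempt counter to a stable identity attribute rather than to the app installation. Below is a minimal sketch; all names, limits, and the keying choice are illustrative assumptions, not any vendor's API.

```python
import time
from collections import defaultdict

# Hypothetical server-side retry tracker. Keying attempts to a stable
# identity attribute (e.g., a hash of the document number) rather than
# the app install means reinstalling the app no longer resets the counter.
class RetryLimiter:
    def __init__(self, max_attempts=5, window_seconds=86400):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(list)  # identity_key -> attempt timestamps

    def allow(self, identity_key, now=None):
        now = now if now is not None else time.time()
        # Keep only attempts inside the sliding window.
        recent = [t for t in self.attempts[identity_key] if now - t < self.window]
        self.attempts[identity_key] = recent
        if len(recent) >= self.max_attempts:
            return False  # brute-force budget exhausted for this identity
        recent.append(now)
        return True

limiter = RetryLimiter(max_attempts=3)
results = [limiter.allow("doc-hash-abc", now=1000 + i) for i in range(5)]
# The first three attempts pass; later ones are blocked even if the
# client reinstalls, because the key is the identity, not the device.
```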

THOMAS TARANIUK: Well, that must have cost them an arm and a leg. And Maikel, can you tell us a bit about reverse engineering SDKs as well—the software development kits?

Reverse engineering SDKs

MAIKEL NINABER: On the reverse engineering side, you usually see the different types of API calls happening inside these software packages. What malicious organizations do is find ways to bypass them, right? For instance, camera injections are one of the most interesting techniques because they change the problem entirely.

So normally, a verification system assumes that the camera feed is coming from a physical device. But if an attacker controls the operating system or the runtime environment, they can intercept that camera signal and replace it. So instead of showing a real face on a camera, they inject a synthetic video stream into the application.

Imagine a liveness test asking someone to blink and turn their head. A fraud group trained a deepfake model using stolen identity photos. They then ran the verification app inside a modified Android environment, and that injection generated a video feed. So the AI model saw a perfectly compliant face blinking and turning its head, and the system passed the user as real, even though there was never a real camera involved.

THOMAS TARANIUK: So the user looks real, sounds real, but isn't real. And of course, there are more zero-friction technologies used now, which allow these criminal groups to brute-force systems across the globe. What's being done from your perspective? Is there enough in place to stop these sorts of attacks, or no?

MAIKEL NINABER: As I mentioned earlier, it's really a cat-and-mouse game. So what actually works is that no single control solves identity fraud. The only effective approach is a layered defense.

So that would mean a combination of device integrity checks, runtime attestation, behavioral signals, network intelligence, and continuous monitoring after onboarding.

The behavioral signals layer is an especially interesting one because it captures key indicators of how passwords are typed—the frequency, the speed—or how the phone is actually being held. Is the person a left-handed or a right-handed holder? All of these things create extra insights that become part of a specific identity and can be used as a check.
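As a rough illustration of one such signal, typing cadence can be reduced to inter-keystroke intervals and compared against an enrolled profile. This is a toy sketch, not a production biometric; the timings and the 3-sigma threshold are assumptions chosen for the example.

```python
from statistics import mean, stdev

def dwell_intervals(timestamps_ms):
    # Milliseconds between consecutive keystrokes.
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def z_score_distance(sample, profile_mean, profile_std):
    # How many standard deviations the attempt's mean cadence sits
    # from the enrolled profile.
    if profile_std == 0:
        return 0.0
    return abs(mean(sample) - profile_mean) / profile_std

enrolled = dwell_intervals([0, 180, 350, 520, 710])  # a human's typical cadence
attempt  = dwell_intervals([0, 40, 80, 120, 160])    # scripted: fast and uniform

profile_mean, profile_std = mean(enrolled), stdev(enrolled)
score = z_score_distance(attempt, profile_mean, profile_std)
suspicious = score > 3.0  # flag cadences far outside the enrolled range
```

In practice a real system would combine many such features (dwell time, flight time, pressure, device orientation) rather than a single mean, but the fusion principle is the same.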

THOMAS TARANIUK: Absolutely. It's all of the checks on the backend that the user doesn't provide. For the audience's benefit, could you give us a quick rundown of what a rooted device is?

What is a rooted device?

MAIKEL NINABER: A rooted device is essentially a phone where the normal security restrictions have been removed. So it gives the user full control of the operating system, which can be really useful for developers and researchers. It also means that security protections that apps rely on may no longer be trustworthy. So with rooted devices, additional applications can be installed that you would not want there, right?

These applications could get extra data or read out the various API calls that are actually being utilized by the SDK. So it gives the malicious attacker additional information on how a system works so that later on they can bypass it or inject code or other types of systems to bypass the process at a specific financial institution where onboarding is taking place.
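For illustration, the classic on-device root heuristics look roughly like this. It is sketched in Python for readability (real checks run on-device in Kotlin/Java), and the paths and build-tag convention are the commonly cited ones; note these checks can themselves be tampered with on a controlled device, which is exactly why server-side attestation still matters.

```python
import os

# Well-known filesystem locations where a `su` binary tends to appear
# on rooted Android devices.
SU_PATHS = ["/system/bin/su", "/system/xbin/su", "/sbin/su"]

def looks_rooted(build_tags, path_exists=os.path.exists):
    # Debug-signed firmware commonly carries "test-keys" in its build tags.
    if "test-keys" in build_tags:
        return True
    # A reachable `su` binary means privilege escalation is available.
    return any(path_exists(p) for p in SU_PATHS)

# A stock release build with no su binary is not flagged:
stock = looks_rooted("release-keys", path_exists=lambda p: False)
```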

THOMAS TARANIUK: How well understood is this entire theme, would you say? If we're looking at all of the indicators for a criminal or someone looking to take advantage of a system—they may look real, they may sound real, to all intents and purposes they may be real, but device intelligence may prove otherwise. Rooted devices, right? Camera injections—I mean, could you tell us a little bit more?

MAIKEL NINABER: There are key differences in each market, like comparing, for instance, the Middle East and Europe. What I see is that in Europe there are very tight regulations, right? One of the main differences between fraud in Europe and the Middle East lies in the maturity of the financial ecosystem and regulatory frameworks.

Europe has long-established banking systems and strict regulations—strong customer authentication requirements—which have pushed fraud toward more sophisticated social engineering tactics like account takeover, investment scams, and payment redirection. So attackers often focus on manipulating people rather than bypassing systems because the technical controls are already well developed.

In the Middle East, particularly in fast-growing digital hubs like the Gulf region, the financial and fintech ecosystems are expanding rapidly as governments push digital transformation. So this creates enormous innovation but also opens new attack surfaces while systems and processes are still evolving. As a result, fraud in the region often targets onboarding processes, identity verification systems, and mule account networks rather than purely social engineering.

Suggested read: What Is a Money Mule? Red Flags, Examples, and Prevention in 2025

So in simple terms, fraud tends to exploit people in mature ecosystems and technology in rapidly growing environments. And as digital infrastructure in the Middle East continues to mature, fraud patterns are likely to evolve into the same sophisticated social manipulation techniques already seen in Europe.

THOMAS TARANIUK: So from the perspective of the Middle East, where there are a lot of people who are unbanked moving in that direction, the sophistication of fraud may not need to be as high to target some of these companies. But when we're looking at mature markets with fintech ecosystems that have been around for a while, how well understood are camera injection attacks? Are most organizations across it in these mature markets, or are they still behind? Is it flying under the radar?

How well understood are camera injection attacks in mature markets?

MAIKEL NINABER: It's not flying under the radar. I've been in key intelligence-sharing groups where multiple banks take part. So we're learning from each other. The only downside is that everyone has their own approach, and approaches cost money—so do tools and technologies. And not every financial institution is open to sharing their entire metrics on how they detect everything.

They do give examples of how they tackle it, but they never give the full recipe. And I can understand that—they pay money for it.

I think everyone is also taking the next step, where they're trying to figure out how AI or agentic AI can play a role in this. Because a lot of things were primarily done manually, and now they're moving toward automation—but there are risks to that as well.

THOMAS TARANIUK: Certainly the case. I would love to hear from your perspective as well—what's the most advanced attack that you've seen? What's the most advanced attack that you've also performed, perhaps?

What's the most advanced attack you've ever seen?

MAIKEL NINABER: A few years ago, a large digital platform believed that they had solved identity fraud. They had advanced document verification, selfie liveness checks, and a team of manual reviewers. They had everything. Fraud numbers had dropped significantly, so everyone felt confident.

Then one weekend, something strange happened. New accounts started passing verification at unusually high rates. The documents looked perfect, the faces blinked, people turned their heads exactly as the liveness system asked. But within 48 hours, those accounts started moving money in coordinated patterns—small transactions at first, then larger ones eventually. Thousands of accounts were draining promotional balances and exploiting payout mechanisms.

When investigators looked deeper, they realized something unsettling. Nobody had actually forged a passport. Nobody photoshopped documents. Nobody fooled the human reviewer.

Instead, the attackers had reverse engineered the onboarding workflows. They ran the mobile apps inside emulators, injected synthetic camera feeds, and automated the entire process at machine speed. So the system wasn't defeated by fake identities. It was defeated by its own design assumptions.

That's the shift we're seeing today. Fraud is no longer about convincing a human. It's about feeding a system.

THOMAS TARANIUK: That completely reinforces what we're seeing with the sophistication shift—bypassing injections and all of these emulator scams across the board.

Bypassing verification and tricking cameras, however, is one side of the coin. Fraud is really about scale, right? You mentioned one attack, which is super sophisticated, but emulator farms are where it comes in. We touched on it briefly—thousands of virtual devices all running simultaneously, all at once, each looking like a completely legitimate profile, at least to the systems in place.

What's your experience with these farms, these emulator farms, and what's the most reliable way you see detection working as well?

Detecting emulator farms

MAIKEL NINABER: Another major evolution in the industrialization of fraud is indeed emulator farms. So instead of one person committing fraud, organized groups run hundreds of virtual mobile devices simultaneously.

Each emulator runs a script attempting onboarding using different identities, proxies, and phone numbers. And this continues, right? We've also seen marketplaces that sell these types of boxes. I think it goes for maybe $300—you can buy that in crypto, which makes that organization untraceable, so people can start committing these types of massive crimes.

I even remember in one of our investigations, we saw a setup where attackers ran hundreds of Android emulators on cloud services. Each emulator attempted identity verification continuously. Their success rate was only around 5%, which seems quite low, but when you run thousands of attempts per hour, that still produces hundreds of verified accounts every day.

And this is also one of the reasons why manual reviews can't keep up. A manual review is important, but it cannot scale against automated attacks. A human reviewer might be able to do 20 to 30 cases per hour. Fraud automation can generate thousands of attempts per hour. So the speed mismatch is enormous.
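The speed mismatch is easy to make concrete with the figures mentioned above; the exact numbers below are illustrative, not measurements.

```python
# One farm's automated onboarding throughput versus human review capacity.
attempts_per_hour = 2000        # "thousands of attempts per hour"
success_rate = 0.05             # ~5% slip through verification

verified_accounts_per_day = attempts_per_hour * success_rate * 24
# 2,000 attempts/hour at a 5% pass rate still yields 2,400 verified
# fraudulent accounts per day.

reviewer_cases_per_hour = 25    # a human manages roughly 20-30 cases/hour
reviewers_needed = attempts_per_hour / reviewer_cases_per_hour
# 80 reviewers would be needed just to eyeball one farm's raw attempt volume.
```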

And at the core of this problem, you get the fatigue issue. In one case, a fraud team had dozens of reviewers checking ID submissions all day. The fraud group began experimenting with small changes—slightly different lighting, slightly different head positions.

So over time, reviewers started approving borderline cases because they were seeing thousands of similar submissions every day. You're really fatiguing the manual reviewers from that point of view.

And how to stop it—that's what we mentioned previously: the layered approach of having different metrics being looked at and reviewed, some manual, some automated.

I would actually go for the manual part if you introduce a step-up in the process, because then you utilize your core operations at a later stage and for the most important cases. Because let's be honest—not everyone onboarding and creating an account is a fraudster, right? But to get those cases out, you need to filter the weeds and then do the extra checks, and that can be done manually.
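The step-up idea can be sketched as a simple risk-tiered router: cheap automated signals score every applicant, the clear majority is auto-approved, and only the high-risk slice ever reaches a human. The signal names, weights, and thresholds here are assumptions for illustration, not a real scoring model.

```python
# Hypothetical weights for automated risk signals collected at onboarding.
WEIGHTS = {
    "emulator_detected": 0.5,
    "rooted_device": 0.3,
    "ip_reputation_bad": 0.2,
    "retry_count_high": 0.2,
}

def risk_score(signals):
    # Sum the weight of every signal that fired.
    return sum(w for key, w in WEIGHTS.items() if signals.get(key))

def route(signals, review_threshold=0.3, reject_threshold=0.7):
    score = risk_score(signals)
    if score >= reject_threshold:
        return "reject"
    if score >= review_threshold:
        return "manual_review"  # the step-up: humans see only this slice
    return "approve"            # the honest majority passes friction-free
```

The design choice Maikel argues for is exactly this ordering: automation filters the weeds first, so scarce manual capacity is spent on the borderline, highest-value cases.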

THOMAS TARANIUK: And that's definitely why you need a layered verification system as well. From the perspective of the barriers to entry, you mentioned a couple hundred dollars to buy one of these, right? You can host it in the cloud or have it physically. Does that mean the barrier to entry for criminal groups—not only organized groups, but individuals looking to do this as a side activity—is getting lower and lower?

MAIKEL NINABER: It is. To be honest, traditionally, these farming boxes weren't built with the idea of committing fraud. They were more intended for building a market profile.

So say you're selling a product—you want to do mass advertisements or mass posts related to reviews, because there are a lot of review bots out there, and there are many checks related to this. It's not directly related to fraud.

But as you see in many cases, when something is built for one purpose, it starts being exploited for others it wasn't originally intended for.

In the beginning, we saw these boxes primarily on the dark web, but nowadays, if you just search for them, you'll probably find some on the clear web as well. Organizations are not even afraid to hide themselves when selling these types of products, which is quite concerning at times.

THOMAS TARANIUK: That is super interesting. But from the perspective of these companies, they're probably saying it's used for legal means, right?

MAIKEL NINABER: Exactly.

THOMAS TARANIUK: From the perspective of the victims in this case—because it's not a victimless crime—what sectors are being targeted by emulator farms?

What sectors are targeted by emulator farms?

MAIKEL NINABER: It's mostly the elderly, right? And usually these emulator farms have a pre-stage before they start attacking. So I'm talking about big data breaches—looking at groups like ShinyHunters.

I'll give you an example. In the Netherlands, a large telephone company named Odido was breached through their Salesforce environment. Odido has access to roughly 30% of the Dutch population that uses their phone subscriptions and services.

So when there's a massive data breach like that—the attackers ask for a ransom, it isn't paid, and the data is made publicly available—criminals take that data, use it in their farms, and try to commit fraud that way. Or they approach legitimate users, asking them for identity information or payments that can later be sold or used for other forms of fraud.

THOMAS TARANIUK: And are there certain industries being targeted more than others? Is it a distribution play—they want to target, say, a telecom provider because it has the most end-user profiles and digital data?

MAIKEL NINABER: I think they're targeting any type of company where the customer base is huge.

So if we look at MasterCard, they're being targeted every second. That's why we have a strong team defending MasterCard across several locations, in Europe and globally, which is great to see.

They're also making other companies aware of what information is being targeted and what types of attacks are happening. This information becomes valuable for other organizations to learn from.

But there are still plenty of organizations that keep this to themselves, so we're really happy with new regulations—for instance, for financial institutions like DORA or NIS2—that emphasize the need to be compliant and prepared.

Because when something goes wrong—and it will at some point—you need to have processes in place to handle it. You never know when it will happen.

THOMAS TARANIUK: Absolutely. You just need to be prepared in the meantime. I'm sure you agree AI is here to stay, and with it, the use of agents. How is agentic fraud changing the landscape for many of these businesses? And can platforms like OpenAI be used for illicit purposes?

Agentic fraud

MAIKEL NINABER: I'm a major fan of agentic AI and AI, but AI increases both risk and defense capabilities. Attackers use AI for deepfakes and automation, and defenders use it to detect anomalies across massive datasets. Ultimately, the advantage goes to the organizations that adapt the fastest.

Many organizations still treat identity verification as a single onboarding step, but fraud is a lifecycle problem. Attackers adapt after onboarding, so monitoring must continue after the account is created. That's also a key point we're trying to communicate to companies—that they should continuously monitor what is connected to their network and who is part of their customer base.

Because in the beginning, everything might look fine and even drive revenue, but at some point it can become a risk if you don't keep monitoring it.

THOMAS TARANIUK: Certainly the case. I mean, if we're talking about these emerging technologies, how genuinely confident are you that we can keep up? I say “we” loosely, as in the privatized industry globally.

MAIKEL NINABER: I think there's always room for improvement, right, from that point of view. But I think we do have the right technology and the right people in place all over the world who can contribute to this. It just requires openness. What I'm also seeing in this space is that there are more and more open-source projects related to this, where people learn and build from them. In the beginning, big corporations with the most money were taking the lead, and they were offering solutions at a price.

So I think now, with it being a lot more accessible through open forums and open-source projects, it creates a lot more awareness related to this topic.

THOMAS TARANIUK: Certainly the case. When you're operating at MasterCard scale, human review alone simply can't keep pace. How does artificial intelligence change the picture, and where does human judgment still play a part in this equation?

How does AI help with compliance reviews and anti-fraud investigations?

MAIKEL NINABER: I think it's still part of the main story we talked about earlier—the defense-in-depth and the layered approach.

For instance, at MasterCard, they are really good at providing scores for each type of metric that's out there—whether it's the device location, the IP location. And bear in mind, this is also very important to mention: MasterCard is not a financial institution. They build systems and products that are ultimately used by financial institutions, and it's up to those institutions to utilize these solutions.

Only the financial institution knows who the account holder is and what the person is related to. MasterCard only has access to what they call anonymized numbers and the scores related to them. Only the financial institution can trace that back to a person or an organization.

THOMAS TARANIUK: Certainly, but card rails as well as card networks are at the center of the picture. What do you anticipate being the biggest threat to MasterCard or the likes of Visa, Discover, etc.?

What's the biggest threat to companies like MasterCard?

MAIKEL NINABER: Data compromises. In the earlier days, people were brute-forcing their way into systems. Nowadays, that's not the case anymore. They use real identity information to log in or to gain access to accounts.

And I think that's the part that has shifted. It has become easier for criminals to breach organizations because of existing vulnerabilities that haven't been patched, or because people are not properly trained in security awareness—clicking on malicious links or falling for phishing campaigns.

Because the moment an attacker has access to system data, the first thing they do is exfiltrate that information and put it up for sale. Then it's up to the highest bidder how that data gets used across different organizations or forums, and it can go very wrong from there.

THOMAS TARANIUK: It certainly can go south, and I really appreciate you going into detail here. I want to talk more about the defenses we can put in place and what resilience looks like in the fight against fraud.

Maikel, we’ve covered how fraud has shifted from stealing identities to targeting the infrastructure itself. So what does resilience look like at a company of MasterCard's scale, with millions of transactions happening simultaneously around the world and similarly sophisticated attacks? And how do you prove to customers that you are prepared for all of this?

MAIKEL NINABER: I think it's about showcasing the intelligence metrics you have on patterns that are currently happening in the market, to better inform them about what's coming their way or what's currently at play.

We strongly believe in not operating in silos. Previously, in the industry, you might have had a siloed approach with a fraud operations team and a cybersecurity team. What we're seeing now is that these teams are being brought together. We're bringing fraud operations and cybersecurity operations together because we've seen cases where information was not shared proactively, and in the end, a lot of fraud happened.

So imagine a breach happens, and you don’t inform your fraud department. Months later, fraud occurs that was actually related to that breach. You could have tackled that in a more proactive way.

Previously, companies were addressing this more reactively—after the fraud happened, they would investigate how to stop it. Nowadays, it's more about being informed upfront: we know this is happening, we need to stop it. For instance, at MasterCard, they have a tool called MasterCard Threat Intelligence. This means you are informed proactively and can take action directly.

This relates to things like card testing. To give an example, there are many markets that sell cards, and those cards can later be abused for fraudulent activities. What MasterCard can do is detect if a card is being tested and whether it is being blocked by the issuing bank.

On one side, they can block that specific transaction without blocking the entire card. So if you go to buy a coffee, the card still works, but it's disruptive on the dark web marketplace, where you're breaking that ecosystem.

A buyer becomes uninterested because the card isn’t working in that context. On the other side, the bank is warned proactively before fraud happens: this card is currently being tested on a dark web market—expect fraud in the coming days because the credentials have been compromised.

So I'm talking about the full PAN number, CVC, expiry date—that information is valuable. The bank can then decide whether to accept the risk or reissue the card.
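A card-testing detector of the kind described can be sketched as a simple rule: many low-value authorization attempts on one card, spread across distinct merchants, inside a short window. All thresholds below are illustrative assumptions; real network-level detection fuses far more signals.

```python
def is_card_testing(auths, window_s=300, min_attempts=5,
                    max_amount=2.00, min_merchants=3):
    """Flag the classic testing signature on a single card's auth history.

    `auths` is a time-ordered list of {"ts", "amount", "merchant"} dicts.
    """
    # Look only at the window ending at the most recent attempt.
    recent = [a for a in auths if a["ts"] >= auths[-1]["ts"] - window_s]
    # Card testers probe with tiny amounts to avoid burning the balance.
    small = [a for a in recent if a["amount"] <= max_amount]
    merchants = {a["merchant"] for a in small}
    return len(small) >= min_attempts and len(merchants) >= min_merchants

# Six $1.00 attempts at six different merchants within a minute:
probes = [{"ts": i * 10, "amount": 1.00, "merchant": f"m{i}"} for i in range(6)]
```

This also shows why the surgical response Maikel mentions is possible: the rule fires per transaction pattern, so the network can decline the probing authorizations while an ordinary coffee purchase on the same card still goes through.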

THOMAS TARANIUK: Certainly. The key here is being proactive, as well as reacting when needed. What does good card reissuing look like in terms of fraud resilience, and how many companies are actually achieving it?

What is card reissuing?

MAIKEL NINABER: More and more companies are achieving it. I don’t have exact percentages, but I think we're now above 70%. Europe is a mature market, while others are less mature.

Card reissuing comes at a cost, so it depends on the systems in place and the logistics involved. In the end, it's a financial decision.

From a risk and resilience perspective, cybersecurity and fraud teams are often seen as cost centers. The only way we can contribute is by saving the organization money—and that comes from being proactive.

At MasterCard, we try to show the return on investment by demonstrating how much fraud could have been prevented with these solutions in place.

THOMAS TARANIUK: Certainly the case. We’ve talked about different segments of fraud—the initial onboarding and circumventing KYC, and the ongoing behavioral and transaction monitoring afterward. After a card is issued, users continue interacting, and there are subtle indicators—IP address, device intelligence, and so on. How important are those in detecting and stopping fraud?

How important are subtle indicators, like IP address and device intelligence checks, in fraud prevention?

MAIKEL NINABER: They are very important. For example, if a transaction happens in the Netherlands and then seconds later in Bangladesh, it raises the question—how is that possible?

That could be an indicator. Financial institutions might reach out through their apps or contact the customer directly.
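This check is often called "impossible travel": if the speed implied by two consecutive transactions exceeds anything a traveler could manage, flag the pair. A minimal sketch, with illustrative coordinates and an airliner-speed threshold as the assumption:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth (radius ~6371 km).
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(tx1, tx2, max_kmh=900):  # ~airliner cruise speed
    dist = haversine_km(tx1["lat"], tx1["lon"], tx2["lat"], tx2["lon"])
    hours = max((tx2["ts"] - tx1["ts"]) / 3600, 1e-9)  # avoid divide-by-zero
    return dist / hours > max_kmh

# A transaction in the Netherlands, then one in Bangladesh a minute later:
amsterdam = {"lat": 52.37, "lon": 4.90, "ts": 0}
dhaka = {"lat": 23.81, "lon": 90.41, "ts": 60}
```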

But even that is becoming difficult. Previously, banks would call customers, but with the rise of social engineering, they now say they will never call. Everything happens in the app.

That transition has been challenging, especially because while institutions focus on younger users, older generations are still used to older methods and can be more vulnerable to these attacks.

THOMAS TARANIUK: It’s certainly the case. Even after KYC, about 70% of fraud happens. Layered defenses and user education are critical. In our last season, we discussed whether a certain level of fraud must be accepted. Do you agree with that sentiment?

Should a certain level of fraud be accepted?

MAIKEL NINABER: Yes, I do. Financial institutions need to train models, and just like in cybersecurity, you can never guarantee 100% protection. Attackers are smart—they analyze responses, timings, and find new ways. It’s the cat-and-mouse game we’re continuously playing.

THOMAS TARANIUK: Do you think there will ever be zero acceptable fraud?

MAIKEL NINABER: I don’t think so, at least not in the next 10 years. We already have a lot of technology in place, and yet it still happens.

THOMAS TARANIUK: It’s a double-edged sword—the more advanced we become, the more advanced fraudsters become. What do you imagine the ultimate weapon for fraudsters being in the future?

MAIKEL NINABER: I think agentic AI could play a very big role. A few years ago, AI was the buzzword—now it’s evolving into agentic AI. I’ve seen interesting capabilities where you can segment different roles across AI agents into different functionalities. That’s also part of a layered approach.

We’re not fully removing the human element yet, and it may still need to play a role. But we’re definitely moving into very interesting times, both in e-commerce and in fraud intelligence when it comes to stopping this type of fraud.

THOMAS TARANIUK: Excellent. Looking ahead, what are the fraud trends you are watching most closely for the rest of 2026?

MAIKEL NINABER: One of them is AI-driven identity fraud, right? Because if agentic AI can be used to enrich e-commerce and perform intelligence in the fraud space, it can also be used for malicious purposes.

Automated fraud factories, social engineering at AI scale—where you're actually training agentic AI on how to perform social engineering. For instance, with platforms like OpenClaw, you could instruct it to call large volumes of numbers and extract specific information, creating a voice that sounds like a real person to obtain that information. I've seen it in action—it’s quite capable from that point of view.

Fraud targeting digital identity systems as well—think about the entities that issue driver’s licenses or passports. As you mentioned, it’s both scary and, at the same time, exciting because you get to stop this kind of activity. It’s concerning because if you're part of an organization being attacked, you really need to think about how to adjust.

Also fraud in embedded finance, cross-platform fraud networks—fraud will become more automated, more AI-driven, and more scalable. The biggest challenge for organizations will not just be detecting fraud, but detecting it fast enough. I think that’s the key part.

THOMAS TARANIUK: Certainly: real-time reactions as fraud gets faster. We definitely have our hands full. I’d love to do a quick exercise with you. Picture the future as an arms race: fraud-as-a-service on one side, compliance tech on the other, with Sumsub and Mastercard trying to fight the good fight.

From the perspective of fraud-as-a-service, what would you say is the perfect package in this arms race? We’ve got tools for generating voice and images in real time, deepfakes from advanced models, scripts and agents that can execute, and emulator farms as well.

Is there a full package you would describe as the ultimate “bunker buster” for banks and fintechs?

The ultimate anti-fraud weapon for banks and fintechs

MAIKEL NINABER: I think stopping this trend requires a shift in how organizations think about fraud defense.

First, companies need layered defenses that combine identity verification, device intelligence, behavioral analytics, and network monitoring, rather than relying on a single security control.
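As a rough illustration of what "layered" can mean in practice, here is a minimal sketch of a risk engine that combines several independent signals into one decision instead of trusting any single control. All signal names, weights, and thresholds here are hypothetical examples, not anything Mastercard or Sumsub actually uses.

```python
def layered_risk_score(signals, weights):
    """Weighted average of per-layer risk scores, each in [0, 1]."""
    total = sum(weights.values())
    return sum(signals[name] * w for name, w in weights.items()) / total

# Illustrative per-layer scores for a single login or onboarding attempt.
signals = {
    "identity": 0.2,   # document / liveness verification result
    "device":   0.9,   # emulator or rooted-device indicators
    "behavior": 0.7,   # typing cadence, navigation patterns
    "network":  0.4,   # shared IPs, links to known mule accounts
}
weights = {"identity": 1.0, "device": 2.0, "behavior": 1.5, "network": 1.0}

score = layered_risk_score(signals, weights)          # ~0.63
decision = "step-up review" if score > 0.5 else "allow"
```

The point of the layering is visible here: the identity check alone looks clean (0.2), but the device and behavior layers push the combined score over the threshold, so a camera-injection or emulator-based attack that fools one layer still gets caught by the others.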

Second, organizations must move toward real-time detection and response because automated fraud operates at machine speed, and manual review cannot keep up.

Another key element is collaboration. Fraud networks operate across platforms and industries, so banks, fintechs, telecom providers, and technology companies need to share threat intelligence and fraud indicators more actively.

Disrupting the underlying infrastructure—such as bot networks, mule accounts, and payment channels—is often more effective than addressing individual fraud attempts.

Ultimately, the goal is not to eliminate fraud entirely, which is unrealistic, but to raise the cost and complexity of attacks.

When fraud becomes too difficult or unprofitable to scale, attackers move elsewhere. That economic pressure is one of the most effective long-term defenses against fraud-as-a-service.
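That economic argument can be made concrete with a toy back-of-the-envelope model: an automated campaign stays attractive only while expected revenue exceeds its cost. Every figure below is invented purely for illustration.

```python
def attack_roi(success_rate, payout, attempts, cost_per_attempt):
    """Return on investment for a batch of automated fraud attempts."""
    revenue = success_rate * payout * attempts
    cost = cost_per_attempt * attempts
    return (revenue - cost) / cost

# Poorly defended target: cheap attempts, decent hit rate -> ROI of 9x.
before = attack_roi(success_rate=0.02, payout=500,
                    attempts=10_000, cost_per_attempt=1.0)

# Layered defenses cut the success rate tenfold and make each attempt
# five times more expensive: the same campaign now loses money.
after = attack_roi(success_rate=0.002, payout=500,
                   attempts=10_000, cost_per_attempt=5.0)
```

Nothing about the defense has to be perfect; it only has to flip the sign of the attacker's ROI, at which point the fraud-as-a-service operator redirects the farm at a softer target.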

THOMAS TARANIUK: Excellent. I love the way you framed it—reduce the ROI for fraudsters, and they stop playing the game.

Quick-fire round

To close things out, we always like to have a bit of fun. Five quick-fire questions, no overthinking. Maikel, are you ready?

MAIKEL NINABER: Go for it.

THOMAS TARANIUK: Excellent. So what’s one online fraud prevention technique or use case that you absolutely despise?

MAIKEL NINABER: CAPTCHAs. But there are different types. Some organizations have improved them, like rotating shapes or pattern recognition.

But then you have CAPTCHAs with numbers that are so badly displayed that you end up guessing what you’re seeing. It takes too much time just to figure it out before you can actually access something. So yeah, I really despise that.

THOMAS TARANIUK: Absolutely, I feel the same—especially the “find the bicycle, find the bus” ones. I struggle with those. Have you ever been the victim of fraud yourself?

MAIKEL NINABER: I have not. I usually laugh about it with my friends. When I get a phishing email, I share it with others. At MasterCard, we also have a policy where we’re trained on phishing multiple times a month. They even show what the indicators were and how to spot those emails. So no—but I do enjoy having a laugh about it, even when friends fall victim. I ask them how they managed to fall into that trap.

THOMAS TARANIUK: That’s incredible. You might be the first person I’ve spoken to who hasn’t fallen victim to any kind of fraud.

MAIKEL NINABER: Lucky me.

THOMAS TARANIUK: Exactly—you must be doing something right.

MAIKEL NINABER: My social presence is quite small. I’m not very active on social media, so maybe that helps.

THOMAS TARANIUK: So you don’t have a target on your back. That makes sense. Next question—what’s one thing about fraud prevention that the public completely underestimates?

MAIKEL NINABER: The threat intelligence part—being informed about what is currently happening and acting on it.

I am seeing a shift where people and organizations are becoming more collaborative and sharing more. But there was a time when organizations kept everything to themselves, and it didn’t benefit the ecosystem.

Even if they learned from an incident, they kept that intelligence private. There are also penalties: you’re required to report breaches, and that comes at a cost, so some organizations choose to stay quiet to avoid it. That’s my personal opinion.

THOMAS TARANIUK: That’s really interesting. And what’s one thing you learned as an ethical hacker that you’ve taken into your work today?

MAIKEL NINABER: Be open. Be open to new ideas and new threats. No one knows everything—not even me.

THOMAS TARANIUK: I feel like you know quite a lot.

MAIKEL NINABER: Thank you. But I’ve compromised many organizations and had to explain to boards how it happened. Some of the methods were surprisingly simple.

For example, if you walk into a building where you don’t belong, the first question is how you got in. Sometimes it’s because people were smoking outside and left the door open. Sometimes we dressed up for a festivity and were let in. In Dutch culture, there are popular celebrations like Sinterklaas, where people dress up. It looks friendly, so people let you in.

Nowadays, there are badges and multiple security layers, but some organizations forget that buildings have windows. From outside, you can still observe a lot—you can see people typing credentials, where information is stored, or even transactions. So even without entering the building, you can still gather valuable information, which is quite interesting.

THOMAS TARANIUK: There are so many nuances to this—the digital side and the physical side, which almost feels like espionage.

MAIKEL NINABER: Yes, exactly.

THOMAS TARANIUK: If you could have any other career than the one you’re in now, what would it be?

MAIKEL NINABER: I think being a secret agent or a spy would be interesting.

THOMAS TARANIUK: It sounds like you’re already doing parts of that, just more openly.

MAIKEL NINABER: Exactly. I’m no secret agent, but it would have been cool—a dream job.

THOMAS TARANIUK: Absolutely. Well, thank you so much, Maikel, for joining us on What The Fraud? It’s been a pleasure having you.

MAIKEL NINABER: Thank you so much for having me.