- Sep 18, 2025
- 20 min read
Fraud, Digital Identity, Trust. Where do we go from here? | “What The Fraud?” Podcast
Dive into the world of fraud with the "What The Fraud?" podcast! 🚀 In the first episode of season 4, Tom is joined by Eddie Moxon-Garcia, Product Marketing Lead at Sumsub, and Nikolai Suhharnikov, Head of Technical Pre-Sales at Sumsub. Together, they discuss why trust is at the center of digital identity, how businesses can build it, and what happens when it breaks.
THOMAS TARANIUK: Hello and welcome to season four of What The Fraud?, a podcast by Sumsub where we explore how digital fraudsters meet their match. I’m Thomas Taraniuk, currently responsible for some of our very exciting partnerships here at Sumsub, the global verification platform helping to verify users, businesses, and transactions.
As AI grows more complex and cybercriminals more sophisticated, we need this podcast more than ever. In our first episode back, we’re going deep and tackling one of fraud’s trickiest words: trust. It may sound soft, but in practice, it’s the hardest currency there is. In a world of AI content, bots, and fraud, trust can make or break a platform. It isn’t just a “nice to have”—it’s survival.
So what do we really mean by trust, and how does it connect to digital identity? Most importantly, how can companies earn trust and keep it? Today, I’m joined by two industry insiders: Eddie Moxon-Garcia, our product marketing lead here at Sumsub. You might remember him from season three, where we dug into digital IDs—worth a listen if you missed it. Also with us is Nikolai Suhharnikov, Head of Technical Pre-Sales here at Sumsub. For the past six years, he’s helped some of the world’s biggest companies manage fraud protection, verification, and more.
Welcome back, Eddie, and great to have you here, Nikolai. Today’s episode is all about trust. So, what is it that interests you so much about this topic?
EDDIE MOXON-GARCIA: Hi, Tom. Thanks for having me back. It’s always good to see you and to be given the opportunity to talk about digital ID in general. But trust is something I am deeply passionate about. I do think that reusable digital identity is the foundational work necessary to build actual, trustworthy services.
THOMAS TARANIUK: Excellent. Thank you, Eddie. How about yourself, Nikolai?
NIKOLAI SUHHARNIKOV: Hi, Tom. Thank you so much for having me on. The topic is very close to my heart as well. Having seen how different companies interpret and value trust over many years, it’s always interesting to tackle this topic and hear everyone’s different perspectives—especially with experts like yourself and Eddie on the call.
Today’s fraud landscape
THOMAS TARANIUK: Thank you, Nikolai. Let’s start off with some stats. In the first quarter of this year, Sumsub found synthetic ID fraud in the USA surged by almost 300%. Identity fraud has more than doubled since 2021, with 67% of companies saying they’re seeing more fraud. Nikolai, is this the sort of thing you’re exposed to daily? If you had to explain today’s fraud landscape to someone who last looked at it five years ago, what would shock them the most?
NIKOLAI SUHHARNIKOV: Very good question, Tom. Five years ago, KYC verification meant presenting documents next to your head to prove liveness, authenticity, and presence in real time. Now the landscape has shifted drastically. One of the most popular—and buzzy—words is generative AI. While some call it a miracle technology, it’s also one of the biggest threats to safety and trust. The biggest shift I see is the focus on preventing synthetic fraud and GenAI fraud.
Suggested read: Know Your Machines: AI Agents and the Rising Insider Threat in Banking and Crypto
THOMAS TARANIUK: Would you say synthetic fraud affects different businesses differently depending on geography or verticals?
NIKOLAI SUHHARNIKOV: The short answer is yes. Highly regulated industries like banks and fintechs have been tackling fraud for many years. Newer industries or those just exposed to verification regulations face it differently—they’re often unprepared for the sophistication of modern fraud. So while everyone is affected, some are better prepared due to experience.
What are fraudsters targeting in 2025?
THOMAS TARANIUK: Eddie, from your perspective, last time we spoke you mentioned that what surprises you most about fraud is its sophistication, accessibility, and democratization. Are fraudsters targeting vulnerable industries, chasing low-hanging fruit, or going after the fastest money?
EDDIE MOXON-GARCIA: As technology evolves, so does its use by both good and bad actors. It’s expected that people will exploit weaknesses. As for targets, regulated and unregulated industries behave differently. Regulated businesses—like financial institutions and healthcare—must adhere to laws and compliance frameworks, so they’re better protected. Unregulated businesses, like most e-commerce platforms, are inherently more vulnerable. That knowledge is accessible to fraudsters, making it easier for them to map attack strategies.
All industries are at risk, but the shape of the risk differs. Earlier I spoke about regulated businesses being the ones that keep our economy and society going, but the others account for microeconomies too. Think about marketplaces and car-sharing apps—these are all microeconomies that can be exploited. It’s somebody’s duty to protect the welfare of the users and the owners of these platforms and services—not just for reputational or financial reasons, but to safeguard communities. So yes, I do believe something should be done.
Risks for regulated and unregulated industries
THOMAS TARANIUK: Most definitely. At the end of the day, it’s not just about reputational damage, revenue loss, or other issues, but also protecting communities as a whole.
EDDIE MOXON-GARCIA: I actually think the risks are the same. Reputational damage and revenue loss are the same for a regulated business and an unregulated business; it just looks different. If you think about your main bank provider being the victim of a cyberattack and potentially losing some of your money in your savings account, you know that by default they will have a system and framework in place to protect you and your assets. That gives you a certain level of confidence. You will likely remain with your bank unless something significantly changes or affects your life.
With an unregulated business, it’s the same thing. If they’re the victim of a cyberattack and it affects you and your account, the reputational damage could potentially mean that you switch vendors, go to a different marketplace, or choose a different platform.
So the risk is the same; it just looks different from the user perspective.
NIKOLAI SUHHARNIKOV: From my perspective, the story is a little different, because I’m mostly focused on protecting the businesses themselves. Eddie mentioned protecting the assets of a given individual on whatever service they’re using; meanwhile, businesses are thinking about how to protect their own assets. Fraud and the risks that come with it look very different for different businesses—regulated, unregulated, loosely regulated.
Think about car-sharing: the kind of risk to them might be losing a car or having a vehicle stolen. For a bank, it’s the loss of actual assets—money. Regardless of regulation, they are all thinking about safety, trust, and compliance: how to protect themselves and their users, and how to prevent major damage to the business.
Suggested read: Trust on the Move: Fraud Prevention and Verification in the Mobility Industry (2025)
What businesses fear the most
EDDIE MOXON-GARCIA: Is it fear, do you think? Like, is it because they just need to be compliant or because they’re scared they might lose revenue or part of their customer base?
NIKOLAI SUHHARNIKOV: I think part of it is definitely fear. Every time something like this happens and it gets out—there’s a news article, a publication—companies in the same industry hear about it and think, “I don’t want to be that person; I don’t want the same kind of damage done to my business.” But at the same time, they also have to think about it from every angle: compliance, risk, fraud. They map out every possible bad situation and work backwards: how can we prevent this from happening? How can we prevent it from impacting us in a major way?
So yes, fear is part of it, but mostly it’s about figuring out how to protect themselves.
EDDIE MOXON-GARCIA: I’m infinitely curious about this relationship between fear and the need for compliance. Historic evidence shows that big corporations have been fined for non-compliance. Companies often say, “We can deal with the fine—it’s just money. What we cannot deal with is reputational damage, losing customers, or how it affects our business model.”
For unregulated businesses, it’s not quite the same. They do care about fines and other implications, but the dynamic differs. How much of this fear-compliance relationship do you think applies to unregulated businesses?
NIKOLAI SUHHARNIKOV: It depends on the industry. Loosely regulated companies, for instance, operate near unregulated spaces but still have some regulatory obligations. Fear plays a huge role in their decision-making, which oftentimes benefits them, but sometimes neglects user experience, trust, and the overall user journey.
Eddie, I know you think about this daily—how do you consider trust, fear, and user experience in practice?
Trust, fear and digital identity
EDDIE MOXON-GARCIA: I feel strongly about it because building trust is not easy. Trust is not given; it’s forged over time. And it’s a two-way street: users must trust the platform with their data, and the platform must trust who they onboard. It’s complex, with no silver bullet. But we do plenty of work around it with safety mechanisms that further trust-building initiatives. Trust requires collaboration—regulators, businesses, and users all play a role.
THOMAS TARANIUK: Things are changing fast. What’s changed in verification tech since we last spoke? How are checks moving from something static to something dynamic and ongoing?
EDDIE MOXON-GARCIA: I believe we’re now at a point in time where digital identity is no longer a “nice to have.” It is, or will soon become, a requirement in order for all of us to function properly. Because every day we are bombarded from a hundred different angles—by platforms, services, companies, people, and various other things—asking for our data. We exist predominantly online. What that means is that we are creating all of these vulnerability spots throughout our day. Right?
Every time we sign up for a new account somewhere—even if it’s social media, or just your food delivery app, or your bank—we’re giving away pieces of our personal data. Digital identity, at its very core, aims to centralize all of this, giving users the opportunity to keep their entire digital footprint in one place and have full ownership of it.
What’s important for the businesses interacting with this is that the checks, compliance, and regulation processes that are happening are actually useful for them. What that means is that we can no longer afford to just do the initial verification of an individual and leave it at that. That was ten years ago. KYC and AML were designed as onboarding tools; after someone went through verification, that was it.
However, because—as I mentioned—we interact every day with different services and platforms, and we exist online, there is such a need for these checks to be dynamic and to happen constantly at different touchpoints. We are far beyond just a single human verification. A human identity is only one node in the larger network of behavior that needs to be examined if you’re going to detect fraud early, catch it, or prevent it. There are myriad reasons and checks that should and could happen along the journey of a user on a platform, however long that journey is.
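To make this dynamic, touchpoint-driven model concrete, here is a minimal sketch of how an event on a user's journey might trigger a follow-up check. The event names, rules, and `on_event` function are illustrative assumptions, not a description of any real verification pipeline.

```python
# Minimal sketch of dynamic, touchpoint-driven rechecks.
# Event names and rules are illustrative assumptions.

RECHECK_RULES = {
    "login_from_new_device": "liveness_recheck",
    "payout_over_threshold": "document_recheck",
    "profile_details_changed": "aml_rescreen",
}

def on_event(user_id: str, event: str) -> str | None:
    """Map a user-journey touchpoint to the follow-up check it should trigger."""
    check = RECHECK_RULES.get(event)
    if check:
        print(f"user={user_id}: {event} -> scheduling {check}")
    return check

if __name__ == "__main__":
    on_event("u123", "login_from_new_device")  # schedules liveness_recheck
    on_event("u123", "browse_catalog")         # routine event, no recheck
```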
Suggested read: Digital Identity in 2025: The Complete Guide
THOMAS TARANIUK: I think you’ve hit quite a few good points there, Eddie. And Nikolai, do you have any points as well? The ever-changing nature of onboarding, but also the dynamic nature of user behavior after the fact, means more users will need to be monitored to a certain degree, and monitored smartly as well.
NIKOLAI SUHHARNIKOV: That’s a very interesting point, because I totally agree with Eddie. As these checks become stricter and user verification becomes more commonplace, something like a digital identity solution—or some form of it—is a unique and convenient answer to a lot of user pain points.
What’s real and what’s not?
THOMAS TARANIUK: But now, looking to the future, and considering some of your answers today about synthetic fraud, deepfakes overwhelming companies, and more and more traffic coming from synthetic fraud bots and AI agents rather than real people, it raises a question for both of you: how do we know what’s real and what’s not?
EDDIE MOXON-GARCIA: It’s an excellent question because the answer is, in a lot of cases, companies aren’t protecting themselves correctly. I would say they often rely solely on documentary verification, and then that’s it. Maybe they do a liveness check. But because of everything I said before, I think it’s of paramount importance that it doesn’t stop there.
This is why we have continuous transaction monitoring, event monitoring, and device intelligence—all of these wonderful types of checks that paint a full picture of who somebody is. Being able to just compare a passport photograph against a liveness check that may have been fooled by a very good deepfake is just not enough. You have to look at all these different data points.
Device intelligence, as Nikolai mentioned, is a great thing. It happens pre-screening, before actual KYC incurs any cost for a company, and you can tell whether this person is part of a network of fraudsters, or if they’re trying to verify with the same identity across seven different platforms under different names, and so on.
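As a rough illustration of the pre-screening idea Eddie describes, here is a minimal sketch of device-level checks that run before any paid KYC step. The fingerprinting scheme, thresholds, and decision labels are hypothetical assumptions for the sake of the example.

```python
import hashlib

# Hypothetical sketch of pre-KYC device screening. The fingerprinting,
# thresholds, and decisions are illustrative, not a real product's logic.

KNOWN_FRAUD_DEVICES: set[str] = set()      # fingerprints already tied to fraud rings
IDENTITIES_SEEN: dict[str, set[str]] = {}  # fingerprint -> identities claimed on it

def device_fingerprint(attrs: dict) -> str:
    """Hash stable device attributes (OS, screen, timezone) into one identifier."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()

def prescreen(attrs: dict, claimed_name: str, max_identities: int = 3) -> str:
    """Decide whether to spend money on full KYC, based on device signals alone."""
    fp = device_fingerprint(attrs)
    if fp in KNOWN_FRAUD_DEVICES:
        return "reject"          # device already linked to a fraud network
    seen = IDENTITIES_SEEN.setdefault(fp, set())
    seen.add(claimed_name)
    if len(seen) > max_identities:
        return "manual_review"   # one device, many claimed identities
    return "proceed_to_kyc"      # signals look clean; run the paid checks

if __name__ == "__main__":
    device = {"os": "Android 14", "screen": "1080x2400", "tz": "UTC+1"}
    print(prescreen(device, "Jane Doe"))  # proceed_to_kyc
```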
I think it just goes back to this dynamic element. The way for all companies to truly safeguard their business and their model is by adopting, at the very least, a mental model centered around this dynamic element, and then continuous monitoring and continuous rechecking. It’s not enough to just tick the box of compliance and say, “We have verified this person’s identity.”
THOMAS TARANIUK: Excellent points as well, Eddie. I think, at the end of the day, if fraud is dynamic, if fraudsters are malleable and able to be flexible in their targeting strategies, businesses will need to adapt and implement changes as well.
So, Nikolai, from your perspective, what’s the most surprising tactic you’re seeing right now, today, this week, something that businesses aren’t ready for?
What’s the most surprising fraudster tactic businesses aren’t ready for?
NIKOLAI SUHHARNIKOV: With the rise of generative AI, everybody’s focused on how to prevent synthetic fraud or synthetic images from getting into our systems and being classified or recognized as real, authentic, and genuine.
What a lot of these companies forget is that the threat of traditional fraud, such as documentary fraud or presentation attacks, is still ever-present.
When you look at the fraud that happens to highly regulated businesses like banks and financial institutions, and when you really examine the data, what you find there is very, very scary. Sometimes you see a physical ID or a physical document that has all the signs of being real—it has security features, the data is formatted properly, the color saturation is correct—but it’s a fake document. It’s a fake document created by a group of fraudsters who have acquired the means to create insanely realistic-looking identity documents.
A lot of companies are missing the point these days because they’re overly focused on new terms such as “risk insights,” “risk signals,” and “AI fraud prevention,” while forgetting to go back to the roots and protect themselves against legacy fraud, because that still happens.
Actually, I would say that fraudsters are quite happy that the focus has shifted from traditional styles of fraud to the new wave of fraud that has come about with generative AI.
How should companies rethink trust to cover both humans and nonhumans?
THOMAS TARANIUK: How should companies rethink trust to cover both humans and nonhumans? At the end of the day, Eddie?
EDDIE MOXON-GARCIA: I think users ultimately need to demand a level of transparency from all these platforms and services, right?
The answer is actually very simple: show me, don’t tell me. Just think about all the accounts you’ve signed up for in the last six months, all the platforms you have onboarded to. Typically, during your signup process, they’ll say, “By the way, we’re keeping your data. It’s totally safe. You don’t have to worry about it. On you go.” And what happens is that they bury, deep in the terms and conditions, what is actually happening with your data—whether a third-party processor is interacting with it, you don’t know. The reality is that once you access a platform as a user, you have no idea what’s happened. You have no clue where your passport has gone, who’s looking at it, how many times they’re looking at it, whether they’re going to share it with somebody else, and so on.
The transparency I speak so strongly and passionately about has to be tangible. “Show me, don’t tell me” means you don’t just say to people, “Your data is safe with us.” You give them access to it. You give them the opportunity to revoke access to the business from that data at any point, and you show them how it’s being used.
It’s very simple, but it’s not easy to execute. It demands that people go to these companies and services and say, “Hey, you were looking for customers. I’m happy to come onto this platform and give you my money, but you’ve got to let me be in control of my personal data. You’ve got to let me choose whether you share it, how long you keep it for, and so on.”
THOMAS TARANIUK: Absolutely great example. Thank you for explaining that as well, Eddie.
But Nikolai, switching back to you—you’ve done a huge amount of work with international agencies, Interpol for instance, as well as organizations that are pushing, let’s say, the next stage of what’s needed by different companies—not on a micro level in different countries, but with the European Commission across 27 EU states. Are these businesses clamping down on this new wave of fraud and the new use cases that Eddie is talking about?
Trust in regulated and unregulated industries
NIKOLAI SUHHARNIKOV: I would say that if you want to play the game, you also have to play by the rules. A lot of people and users of different services, both in regulated and unregulated industries, want trust—but they don’t want to give up anything themselves. If we are going to have trust, then we have to establish a very clear sentiment in the space that everybody has a role to play here.
These businesses, as Eddie said, have to make sure they are being transparent and clear about what they’re doing with the data—what data they’re collecting and what they’re not. But users also have to understand that businesses need to know who they’re working with. Users have to prove their authenticity, at least to specific businesses. That way, in these high-risk environments, we can create a cohesive playground where everybody can coexist neatly and safely.
The public sector also plays a huge role, especially in educating users of these companies. If we are moving toward a world where we often don’t know what’s real and what’s not, we all need to be on the same page and on the same side.
THOMAS TARANIUK: Definitely, trust is built through communication—seamless communication.
Right. And Eddie, from your perspective, do you think it’s equally important to collaborate on a macro level, within government organizations, as well as with top and middle-level organizations in these industries, before businesses actually take it into their own hands to move forward?
Collaboration on a macro level
EDDIE MOXON-GARCIA: 100%. There is no other way than to meet each other halfway. Unless there are masses of people saying, “Our data is important to us, and we will no longer engage with you as a business, whoever you are, unless we have full control and visibility of what you’re doing with our PII,” it’s just a recipe for disaster.
So, I do think it’s incredibly important that companies like Sumsub work with these organizations so that, from the top down, they can tighten things for companies. And maybe regulations don’t need to be super strict, but they do need to be clear, and they need to allow room for these companies to improve how they choose to be transparent—because transparency is also a double-edged sword.
We have to remind ourselves that within the compliance space, auditors will come to your company and ask for records of everything a user has done from day one up until day whatever. And sometimes you can’t make that information public to everyone. So there’s also a balance to be found there.
NIKOLAI SUHHARNIKOV: To be honest, you really hit the nail on the head, because what I think happens a lot in that sector is almost an overcomplication of things. They focus a lot on strictness, safety, and the “proper” way of doing things. What that leads to is a lack of flexibility and also a lack of speed—speed of implementing these policies, speed of adapting, speed of amending them.
I believe we can trust them to have our backs and to have our best interests in mind. But at the same time, it’s quite scary to me, the rate at which AI and new technologies are advancing versus the rate at which regulation and policy are advancing. Those two simply don’t add up. I’m afraid there will be time lags between how quickly the technology advances and impacts us as users, and how quickly regulations and the compliance space can catch up to bring that trust and safety back to the users.
EDDIE MOXON-GARCIA: And this is why we need worldwide adoption of usable, reusable digital identity. Because if regulators aren’t fast enough, then the users can be. If companies have digital ID solutions that users can fully trust, rely on, and use to keep their data safe, then we’re doing half the work for them—and the regulators will have to keep up. That’s why we need digital identity in the world.
THOMAS TARANIUK: I think you’re both correct there as well. But we did go into this discussion last time, didn’t we? We talked about the centralized points of failure with government IDs and otherwise, and I think those problems may still persist at the end of the day. But if you think digital identity can prompt these issues to be solved, I’d love to hear a few more points from your side.
EDDIE MOXON-GARCIA: I think it’s the best solution we have today. It’s not the perfect solution, but it’s the best one so far. I understand the concern about having a single point of access where everything is stored, but this is why, personally at least, Sumsub is making sure that there are layers of security you need to go through as a user before you can access your own stuff.
It would be great if you could go somewhere and say, “Hey, this bank has a copy of my passport. Uber has a copy of my driver’s license, or whatever.” And then you could say, “I no longer use that service. Revoke access. You no longer get to see my personal data.”
Trust in the digital space
THOMAS TARANIUK: We’ve covered fraud tactics, we’ve covered the evolution of digital identity, but I’d love to hear your input about trust online in the digital space. What comes to mind first—the technology, the process, or the people? Would love to hear your thoughts, Eddie.
EDDIE MOXON-GARCIA: I just so happen to be someone who works in this space, but I’m also a normal person who has a bunch of online accounts, who signs up for services all the time. And if I’m being 100% honest, there’s always the tiniest bit of anxiety—even when I’m making an Amazon purchase—because I’m exposed to the sophistication of fraud nowadays and all the different weak points these platforms could have. There’s always this level of anxiety I feel when I’m making any sort of transaction online.
However, I’m also a cynic, so I do as much as I can to protect myself, as we said previously. But if I were to venture an educated guess for how most people feel, I would say that trust doesn’t really come into play until after something has happened.
People are, I think, naturally trusting. And everything we’ve discussed today, we’ve discussed because we work here. But this is not how normal people operate. If I want to onboard to a service because I need to perform some action, that’s all I’m thinking of. Whatever happens in between me signing up and me purchasing my new car—I don’t care about it. I only see it as an obstacle, a friction point. I’m just trying to get to my desired action. So, if something does go wrong at checkout or wherever, that’s when I begin to think about trust. That’s my educated guess, but I can’t speak for most people.
NIKOLAI SUHHARNIKOV: I just want to comment on the fact that I think it’s very funny that Eddie, being the cynic, said that people are naturally trusting, or naturally want to trust. I feel like it’s the complete opposite. People are incredibly distrustful of these services and these huge corporations that we interact with on a daily basis.
From my perspective, being a person who interacts with these tools and knows them inside out—knowing how safe they are, knowing how secure they are, knowing how these businesses are always thinking about the users and the safety of their entire platform—I also think it’s always important to make it feel human and make it feel safe, secure, and convenient for the user. And honestly, this is a very boring answer, but: communication. I think communication and user education are always going to be incredibly important because of the level of distrust people have toward these services online.
EDDIE MOXON-GARCIA: I have a question, actually, for both of you. Do you think most people read through T’s and C’s before they tick that box when signing up for a new service? No? Thank you. Which means they’re inherently trusting. They’re trusting that this company will keep their data safe. They’re trusting that their data is not being shared with anyone. If it were the other way around, they would go through the T’s and C’s and, I guarantee, choose not to sign up for the service.
What everyone should do is: open your T’s and C’s, download it, take a screenshot, whatever—and then Command+F search for keywords that matter to you: third-party vendors, data residency (where’s my data being kept, for how long), what is the revoke policy of said access, and so on.
So again, cynic.
NIKOLAI SUHHARNIKOV: What Eddie said there hits so close to home, because when people go through a traditional user verification or onboarding procedure and are suddenly asked to present their government-issued documents or to log into their open banking platform, that natural cynicism and distrust basically deters them from continuing. And we see this very clearly in the conversion metrics and the pass-rate metrics of these solutions. Users are naturally not very trusting of these things. So what Eddie said, I think, is just so spot-on here, because the user experience of any given verification procedure, of any given compliance procedure, is so important. We—and these businesses and everybody else—have to realize that for a normal, day-to-day user, this is not a standard operating procedure. They’re going into something that is completely alien to them if they haven’t done it before, without fully understanding why they’re doing it or how they’re supposed to do it.
So having a good user experience and, again, communicating what exactly you’re doing with this information is, I think, the most important thing about these kinds of services.
UX and communication in iGaming, banking, and other industries
THOMAS TARANIUK: How does it play out in practice, in everyday terms, within iGaming, banking, and other industries? But also, as Eddie pointed out earlier, in unregulated industries such as marketplaces and mobility. I mean, why would we focus on these industries specifically if we were orchestrating trust for users?
NIKOLAI SUHHARNIKOV: First, build trust with a user by letting them onto your platform. You kind of introduce them to everything that’s going to be happening to them. You let them have access to a limited set of things on the platform. And then, as the user moves to specific higher-risk action points or different kinds of options on the platform, that’s when you say, “Sorry, up until now we’ve been okay with you not providing us with certain data or not verifying certain details about yourself. But to do this very risky action, or an action that is highly guarded by compliance requirements, we do need to verify you.”
So I think that’s what trust orchestration is for me—it’s approaching it in a very slow and tempered manner, step by step, and building that trust as the user goes along.
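A minimal sketch of that step-by-step orchestration might look like the following, where each action on a platform maps to a required verification level and a step-up is requested only when the user crosses into riskier territory. The tiers and action names are illustrative assumptions, not any platform's actual policy.

```python
# Illustrative sketch of risk-tiered "trust orchestration": verification is
# requested only at the step that needs it. Tiers and actions are assumptions.

REQUIRED_LEVEL = {
    "browse_catalog": 0,   # no verification needed to look around
    "send_message": 1,     # e.g. confirmed email or phone
    "withdraw_funds": 2,   # e.g. document plus liveness check
    "raise_limits": 3,     # e.g. proof of address or source of funds
}

def gate(action: str, user_level: int) -> str:
    """Allow the action, or name the verification step-up the client should run."""
    needed = REQUIRED_LEVEL.get(action)
    if needed is None:
        return "unknown_action"
    if user_level >= needed:
        return "allow"
    return f"step_up_to_level_{needed}"  # ask for more verification only now

if __name__ == "__main__":
    print(gate("browse_catalog", 0))  # allow
    print(gate("withdraw_funds", 1))  # step_up_to_level_2
```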
EDDIE MOXON-GARCIA: With iGaming, there is huge potential revenue loss, right? So I think it’s a good example to talk about right now. Huge potential revenue loss also means huge potential trust loss, in my opinion.
When we talk about orchestration, it’s not just from the platform or product side. It’s not just from the T’s and C’s. I think it’s across the entirety of what that platform or organization puts out into the world. And that includes their marketing, their website, the UI, the UX inside the product, the conversations that their sales representatives have with other people, the clerks at the office selling whatever product. All of this helps to orchestrate trust.
A good example of that is saying, “Hey, we are X iGaming company, and one of our trusted partners is Y trusted company.” Reputational clout is incredibly important, and that is a way to build trust. And it happens outside of the product—it happens outside of anything that any user is interacting with. So that’s one touchpoint.
Then you have the legal aspect: your T’s and C’s. Are they clear? Are they easy to understand? Do you address the points that you know are important to people straight away? That’s another touchpoint, part of the orchestration.
Then we have inside the product: the UX, the UI. Do you make it easy and accessible for me to perform the actions I want and need to do, and also make it easy and accessible for me to say, “I no longer wish to be part of this relationship”?
Does potential revenue loss mean potential trust loss?
THOMAS TARANIUK: Thank you, Eddie. And of course, mentioning that potential revenue loss is also potential trust loss. Nikolai, do you believe in that statement?
NIKOLAI SUHHARNIKOV: I do, but I think the inverse is even more true, because for the majority of these businesses the user, whether we like it or not, is the source of revenue. So trust loss, more than anything else, is revenue loss as well. If you lose trust in your platform—in how you protect your user base, in how you protect their assets—you’re losing users, and hence you’re losing revenue. So I think the inverse is as true, if not more true, than the original statement.
THOMAS TARANIUK: Most definitely. I do have a final question for both of you, Eddie and Nikolai. How do we keep billions of people secure whilst also making it feel really human and tangible?
EDDIE MOXON-GARCIA: Education. Literacy. It’s very simple. I don’t think any amount of transparency or good UX can cancel out a lack of education and understanding of what is actually happening. So I think it also falls on the companies to educate and make sure that their audience is literate enough to understand whatever level of transparency that company has decided to engage in.
Maybe you’re dealing with the elderly and a pension scheme, for example. You can’t expect them to go through T’s and C’s, tick all the boxes, and do liveness checks. It’s the company’s responsibility to adapt the message to that audience so it’s easy to understand. There are accessibility concerns too—make things readable, easy to find, and clear.
So I believe education is the answer to your question.
NIKOLAI SUHHARNIKOV: It’s very important that we don’t assume we can take a one-size-fits-all approach to every human on this planet. Like Eddie mentioned, digital literacy, and how greatly it differs from person to person, is something businesses have to come to terms with on a regular basis. A verification process for a tech-savvy young person is not the same as for an elderly person.
We have to take a tailored approach, and in order to unify how we’re orchestrating these approaches for different user groups, that responsibility falls—at least largely—on policymakers and regulators. They need to ensure we’re taking the best approach possible for every user, and make sure users understand that’s how they’ll be treated—with their best interests in mind.
Quick-fire round
THOMAS TARANIUK: These are brilliant answers, Eddie and Nikolai. Thank you very much. But I’m not going to let you off the hook yet. Before I let you go, I want you both to answer a few personal questions. For season 4, we’ve got new quickfire questions—five each. Eddie, Nikolai, let’s go. Eddie, you’re not off the hook even though you did it last time. We’re going to make these quick and sweet.
If you could ban one risky online behavior forever, what would it be?
EDDIE MOXON-GARCIA: Bots. It’s becoming increasingly common to have ID-verified profiles on online dating platforms, and what I wish didn’t exist is the lack of that verification. It’s incredibly difficult to trust that the person you’re speaking with is real.
THOMAS TARANIUK: They need KYC, right?
EDDIE MOXON-GARCIA: Not necessarily, but a little ID verification, a little liveness check, wouldn’t hurt—and it’s a timesaver. Online dating is also a huge target for deepfakes, phishing attacks, and so on.
THOMAS TARANIUK: I definitely agree with you, Eddie. But I must remind you, these are quickfire questions. So let’s move to number two.
If fraudsters weren’t criminals, what job would they be great at?
NIKOLAI SUHHARNIKOV: Fraud busting. Or would there not be any fraudsters?
EDDIE MOXON-GARCIA: No—murder mystery parties. Well, the thing is, you never know until you’re playing.
NIKOLAI SUHHARNIKOV: They’d be great at quality assurance—trying to break things, finding vulnerabilities. That’s what they already do, just with malicious intent. Flip it around, and they could do it for good.
THOMAS TARANIUK: Most definitely. So, what’s the one fraud myth you wish would disappear?
NIKOLAI SUHHARNIKOV: That deepfakes and generative AI are the biggest threat. The reason I say that is because we dismiss everything else that’s out there, and it’s a very scary world. We have to think holistically. Deepfakes and GenAI are part of it, but not the whole picture.
THOMAS TARANIUK: Definitely tune in to episode two of season 4. Eddie, how about yourself?
EDDIE MOXON-GARCIA: I’d say the myth that OTPs are not safe. It really bothers me when I hear that, because fraudsters know that’s not true. Yet they use that same channel to target people, saying, “Hey, don’t trust OTPs—click this link instead.” That really bothers me.
THOMAS TARANIUK: Nikolai, back to you. Which type of fraud do you think will grow fastest in the next five years? Eddie, I’d love your thoughts afterwards.
NIKOLAI SUHHARNIKOV: It’s probably going to be—there isn’t an industry term for it yet—but basically part-synthetic, part-real fraud. Fraudsters have some victim data, but they fill in the gaps with GenAI tools.
EDDIE MOXON-GARCIA: Any form of account takeover really scares me—especially after the initial “prove who you are” step. Because of the sheer volume of services and platforms we interact with every day, gaining access to one could mean access to all. Fraudsters know this, and that really worries me.
THOMAS TARANIUK: So, fifth question. Nikolai, what’s the first website you check in the morning?
NIKOLAI SUHHARNIKOV: My answer is so boring. But it’s not sad—it makes me proud, actually, because I think of myself as organized. It’s my calendar. I always want to plan my day, see what I have, and see what I can move around.
EDDIE MOXON-GARCIA: It’s the New York Times. But only because overnight I get so many notifications from them. When I wake up, the first five or ten things I see are from the Times, so it’s the first thing I open. Not by choice.
THOMAS TARANIUK: Excellent. Excellent. I’m not going to tell you mine, because it’s even more boring.
EDDIE MOXON-GARCIA: Please, you have to now.
THOMAS TARANIUK: Well, that’s the benefit of asking the questions. So Eddie, Nikolai, thank you so much for joining me on this episode of What the Fraud? It’s been great to listen, to have the back-and-forth, and to open my eyes to a lot of issues around trust, digital IDs, and some topical matters we can agree or disagree on happily.
We’re just getting started this season, and in the coming episodes we’ll be diving into one of today’s hottest topics—Agentic AI—and exploring what the latest trends in fintech fraud mean for the future.