• Dec 19, 2025
  • 17 min read

Inside Sumsub's 2025 Identity Fraud Report | "What The Fraud?" Podcast

Dive into the world of fraud with the “What The Fraud?” Podcast! 🚀 In this episode, Tom is joined by Artem Popov, Technical Product Manager for Fraud Prevention at Sumsub. Together, they discuss findings from Sumsub’s annual Identity Fraud Report: from AI-generated synthetic identities and deepfakes fooling liveness checks to telemetry tampering, we expose sophisticated techniques targeting industries worldwide.

THOMAS TARANIUK: Hello and welcome to What The Fraud?, a podcast by Sumsub, where digital crooks meet their match. I’m Thomas Taraniuk, currently responsible for some of our very exciting partnerships here at Sumsub, a global verification platform helping businesses to verify users, companies, and transactions.

Today, we are diving into Sumsub’s just-released annual Identity Fraud Report, one of the most comprehensive global studies on fraud. We’ve analyzed over 4 million fraud attempts, surveyed over 300 fraud and risk professionals, and 1,200 users. And the findings? Crude amateur fraud is out, and coordinated professional attacks are in. Joining me today is Artem Popov, our Technical Product Manager for Fraud Prevention Solutions here at Sumsub. Welcome to What The Fraud?

ARTEM POPOV: Happy to be here.

THOMAS TARANIUK: Before we get into the report itself, Artem, can you briefly tell us what you do here at Sumsub, just for our audience, and why this report matters so much to you?

ARTEM POPOV: So at Sumsub, I’m responsible for the whole Fraud Prevention solution and product that we have. Sumsub is a platform with many different solutions, including KYC and transaction monitoring, and I’m responsible for the fraud prevention part of it. This report matters to our team because we, as a KYC provider and verification platform, work globally. Different regions, different teams, and different markets have different needs and different problems. We really want to make sure we see and understand all the different trends, and we love to share them with other teams as well.

THOMAS TARANIUK: Excellent to hear. So let’s dive straight into the findings. Last year, our report identified the democratization of fraud. Anyone with a laptop could become a fraudster today or tomorrow using ready-made toolkits for a very, very low price on these fraud-as-a-service platforms. Artem, you’ve been deep in the report for weeks now. What are the headline findings for this year, in 2025? What should people listening to the podcast right now actually know about this report?

Headline findings of the 2025 Sumsub Identity Fraud Report

ARTEM POPOV: So the big story this year is the shift from amateur and opportunistic to professional. Last year, only around 10% of fraud attempts were considered sophisticated. This year, that jumped by around 180%. So nearly a third of attacks involved something advanced: AI-generated identities, tricks with device environments, everything that helps fraudsters keep their attacks at scale while staying invisible to protection systems.

At the same time, total fraud attempts actually dropped by around 50%. But given this shift in sophistication, I think the damage is way higher than before. Fraudsters now combine different techniques. They combine targeted phishing, deepfake videos, device manipulation, and they use all of it to bypass multiple defenses at the same time and remain invisible. So they are investing a lot of resources. They invest real time, they invest money, they hire people, and they invest in tech.

Suggested read: More Sophisticated—and More AI-Driven—Than Ever: Top Identity Fraud Trends to Watch in 2026

THOMAS TARANIUK: Wow. So we’re talking about fewer attacks, but each one being more advanced than the last, and far more dangerous as well. I mean, what are the most vulnerable targets, in your opinion? Where are the fraudsters actually playing the most now?

The most vulnerable targets

ARTEM POPOV: One of the most vulnerable targets for fraudsters is ID cards. ID cards are responsible for 72% of all fraudulent documents we detected in 2025. They’re everywhere. They’re easy to access, and they’re the quickest entry point for fraudsters who want to, for example, create a synthetic identity.

Whereas passports and driver’s licenses follow with 13% and 10% respectively. They are targeted more in high-value schemes requiring stronger credentials, or maybe cross-border mobility and cross-border financial access. There are also other document types that play supporting roles, like utility bills, and they’re responsible in total for around 5% of verification fraud.

And if we talk about industries, I would say that online media fraud remains high at around 6%. It’s still one of the sectors with the highest fraud rates. The majority of attacks stem from fake accounts, social engineering, impersonation, and manipulation. For example, in March 2025, a Tbilisi boiler room operation used deepfakes and fake social promotions to defraud over 6,000 victims of $35 million. It’s a huge number, and it illustrates the scale of these online media fraud attacks. And this was just one specific actor causing that damage, right?

The dating industry, at 6.3% according to our report, continues to face one of the most emotionally manipulative forms of fraud, which is romance scams. Fraudsters use AI-generated personas and deepfakes to build trust they never could have built before, and then move victims off-platform to gain an advantage over them. And the damage is not only financial loss. You can imagine how it feels to experience this type of romance scam. It highlights how social engineering, emotional manipulation, synthetic identities, and identity fraud are now converging.

Suggested read: Detecting Romance and Dating Scams: A Guide for Dating Platforms and Their Users

THOMAS TARANIUK: It’s certainly the case that it’s never just a financial loss. Right. We’ve talked extensively to victims across this podcast, and of course, day to day, anyone you meet could be a victim of fraud as well. Can you give us a little bit of insight into why fraudsters are targeting these industries and why we’re seeing such damage there?

Why do fraudsters target non-regulated industries?

ARTEM POPOV: As I said before, in 2025 we observed a dramatic 180% year-over-year increase in what we classify as sophisticated fraud. Sophisticated fraud is, in a way, professional fraud—attacks that require knowledge to perform. They combine multiple coordinated techniques:

  • synthetic identities
  • social engineering
  • device or telemetry tampering

Telemetry tampering is basically a fancy way of saying they can manipulate technology to make the environment appear to be something it isn’t. On top of that, they use cross-channel manipulation.

Unlike single-method attempts—where you might just try to forge a document or an ID—these multi-step schemes require considerable effort. They require planning, automation, and adaptability, and they are honestly far harder to detect and contain than opportunistic types of fraud.

THOMAS TARANIUK: The challenge is keeping up, right? Every time we close a loophole, they find a new one to outsmart us. It’s a Tom and Jerry game—excuse the pun. One fraudster gets trapped, another finds a loophole. It’s a constant arms race. Artem, how does your team at Sumsub actually detect these sophisticated attacks, close those loopholes, and continue to look at behavioral patterns? Or is it something else entirely?

How does Sumsub detect sophisticated attacks?

ARTEM POPOV: I would say it’s a combination of different things, both technological and operational. In terms of technology, we look at much more than just an isolated document or a face. We analyze behavioral patterns. We analyze devices and telemetry signals. We try to find as many inconsistencies between the data we observe as possible, because that’s a great way to detect fraud. It’s really hard to fake everything consistently and convincingly at the same time.

It’s also really helpful to have every piece of data you could potentially have in a single place, because one of the hardest challenges for many companies fighting fraud is fragmented data. You might have a KYC provider collecting onboarding selfies in one place, and a fraud prevention provider collecting behavioral data in another. Having everything stored in the same place and used in the same workflow helps us keep up with fraud changes.

THOMAS TARANIUK: Certainly. And if we’re looking at it chronologically, it’s everything from A to Z—the entire lifecycle of the user, from document upload and liveness checks to behavioral insights and telemetry, all the way to being offboarded for whatever reason. The outcome being that you can protect users, communities, and businesses.

But when we look at the report itself, it shows huge differences between regions, right? Fraud globally might follow general trends, but region to region it can be quite diverse. Some countries saw dramatic drops in fraud, while others saw explosive growth. Artem, could you walk us through the regional discrepancies we’re seeing now? Who’s getting better, who’s getting worse, and why?

ARTEM POPOV: I want to start with the “why.” Why do we even see regional differences? This can happen even when two countries in the same region are very close geographically but have massive differences in fraud type and prevalence.

It depends on local regulations. It depends on the financial methods most commonly used in a country. In some countries, you don’t have chargeback fraud because there are no chargebacks. It also depends on the victims—where they are located, how accessible they are, and the economic situation in the country itself. People may be incentivized to commit fraud due to economic vulnerability. All of these factors influence fraud rates and behaviors across different countries.

If we go region by region, let’s start with North America. On paper, it looks very safe. Fraud rates are around 1%, which is among the lowest globally. But deepfakes showed up massively, more than doubling in the US and Canada. So the volume is low, but the sophistication is very high. And again, for the most part, we’re talking about identity fraud. From my experience, romance scam victims are often from the US and Canada. They are targeted heavily because fraudsters can extract more money from people living in economically developed countries.

THOMAS TARANIUK: Definitely. And even in the US, 1% might sound small, but realistically, it isn’t. That can account for hundreds of millions of dollars. The US has the largest economy in the world, so it’s a major target for fraudsters. But can you tell me why the US is still so low according to our report?

ARTEM POPOV: I think it’s low because of the overall maturity of compliance and regulation in these countries, and because many companies operating in North America already have mature protection mechanisms. So while it’s a highly attractive target economically, it’s relatively well protected from opportunistic fraud.

THOMAS TARANIUK: And education plays a role as well, I’d imagine. If we look at the other side of the world—Saudi Arabia and the Middle East—what trends are we seeing there?

ARTEM POPOV: In the Middle East, the growth was even more dramatic than the deepfake trend in the US. In countries like Bahrain and Oman, deepfake activity jumped by several hundred percent. You can really feel the pace of digital adoption being matched by the pace of fraud innovation. Of course, the base might have been low last year, but the trend is very clear—it’s accelerating fast.

Africa is also fascinating. It’s a very fast-evolving region in every sense. There’s been a technological boom over recent years, and we see that reflected in fraud trends as well. Deepfake attempts jumped by several hundred percent in places like Mali and Tanzania. And if we take Kenya as an example, deepfakes already account for close to 10% of all fraud attempts, which is remarkable, considering how new this type of fraud was just a year ago. Could we have predicted it? Probably. But it’s still fascinating to watch it happen in real time.

THOMAS TARANIUK: So, Artem, what if we look at Europe now? What’s happening exactly in this region?

Fraud in Europe and Latin America

ARTEM POPOV: Oh, Europe looks calmer at first, especially compared to Africa. Overall, fraud actually dropped a bit. But again, if we talk about the sophistication part of it, we can see that deepfakes are spreading steadily. About two thirds of people in Europe say they have encountered deepfakes already, which tells you how quickly they’ve become a part of everyday life.

THOMAS TARANIUK: Deepfakes are a problem everywhere though, right, Artem? I mean, if we go to the other side of the Atlantic—Latin America, Brazil, Argentina—are they seeing the same issues as everywhere else in the world? The same, let’s say, smaller amounts in terms of total fraud volume, but also more sophistication?

ARTEM POPOV: Yeah. I think they’re kind of a mixed bag. And as I said before, two countries from the same region can go in exactly opposite directions. Malaysia comes to mind, where fraud grew by almost 200% year over year. And if we look at other countries in APAC, they’re tightening regulations, and there’s some stabilization of fraud rates there. So this shows the difference between countries in the same region.

Also, if we speak about Latin America, there is a noticeable rise in synthetic identities and betting- and iGaming-related fraud, which are closely connected. That’s because there are multiple regulated environments in which iGaming and betting platforms operate in Latin America.

When you have a regulated environment with KYC protection, there is an incentive for fraudsters to avoid that and gain advantages from bonus programs or affiliate fraud. You have this KYC layer that they need to avoid as a first line of defense, which incentivizes them to engage in synthetic identity fraud.

THOMAS TARANIUK: Got it. Overall, it does sound like fraud volumes are coming under control. But in terms of sophistication, fraudsters keep probing for loopholes to slip through, and we’re spotting these challenges as they emerge.

So we’ve established that fraud is getting more sophisticated, but let’s talk about the drivers behind it. I just want to be completely clear on one technology that keeps coming up—of course, in our report and in every single conversation we have—and that is AI, artificial intelligence.

As we discussed, it’s not just about creating deepfakes. It’s also about going beyond documents and mimicking human behavior, whether that’s transactions or human interactions with digital services.

And look, we keep talking about AI constantly. We keep talking about machine learning on this podcast. It’s changed the fraud landscape forever—for us, for our audience, and for anyone who uses digital services. Can you talk us through, Artem, what AI trends have been identified in the fraud report so far, and what AI trends we’re expecting for next year?

ARTEM POPOV: What we see first is that ID documents are being created with near-perfect details. Previously, you needed specific skills to replicate things like fonts, textures, and holograms. Now you have AI that has access to many examples of real documents, so it can replicate them.

And can you imagine that, in just six months, we already see that 2% of all fake documents were generated by AI tools like OpenAI’s models, Claude, and Gemini? And even if these platforms have protections—like, “Hey, we don’t want to give users tools to commit fraud”—there are ways to get around that. There are specific AI prompts that allow you to bypass these protections.

If we talk about trends, this is what’s available on commercial platforms now. But give this a couple of years, and the same level of technology will probably be available as open-source systems. Then everybody will be able to use it and avoid things like watermarks, which commercial AI platforms use to protect people against fraud or deception.

When the same level of technology is available as open source, you don’t have to deal with watermarks. You can generate content on your own PC, change the code, and simply never create those watermarks.

And synthetic videos—we haven’t seen this level of technology before. On one hand, people are having fun with Sora, creating memes on the internet. On the other hand, there are people trying to bypass classic liveness technologies that weren’t ready for this level of deepfake sophistication.

Suggested read: Liveness Detection: A Complete Guide for Fraud Prevention and Compliance in 2025

You can now generate deepfakes in real time. You don’t need to make a pre-recorded video. With a decent enough laptop, you can even bypass video identification, which is still used in some countries like Germany. So visual verification is becoming one of the most vulnerable layers of identity protection, and it requires a lot of effort from verification providers to become stronger and more resilient to these abuses.

THOMAS TARANIUK: That’s really interesting. Of course, when we look at KYC, the predominant verification method is ID plus liveness and face matching. And all of these tools are working almost like freemium services for fraudsters, enabling them to create near-realistic IDs and documents without watermarks.

I know there are some companies placing watermarks that are invisible to the human eye and can be detected by systems, but it’s still quite scary to see all of this happening.

This is at the very beginning of the user journey, whether someone is onboarding to an iGaming company, online media platform, dating app, et cetera. But after that, we’re also seeing AI used by fraudsters to mimic human behavior. How big of a problem do you see that being next year?

ARTEM POPOV: The thing about AI is that there are two sides to the coin. It helps fraudsters do their work, but it also empowers protection mechanisms to become smarter.

For me, the most frightening part is the human element. It’s much easier to deceive people now. So hopefully, as a society, we’ll adapt and evolve and develop much stronger hygiene against these types of attacks.

THOMAS TARANIUK: And when we talk about two sides of the coin, there are the defense mechanisms companies use to onboard and monitor users, and then there’s the other side—the digital users themselves, who need to be educated and smart about their online hygiene, password protection, and digital identity.

And here’s where it gets really interesting. All of these AI use-case shifts—not only during KYC but after the fact as well—are concerning. Major tech companies around the world are racing to stay competitive by releasing ever more powerful AI models, potentially more dangerous when used incorrectly, often with fewer restrictions and, as you mentioned, vulnerable to prompts that let users bypass safeguards.

Are they inadvertently handing fraudsters better tools, Artem?

ARTEM POPOV: In some ways, yes, but it’s probably inevitable. There’s no real way to fully control these technologies. If commercial companies can do it now, it will eventually be available to smaller companies or specialized operations, enabling private AI tools or even jailbreak services for fraud.

There’s definitely a race to the top in AI, and fraudsters are benefiting from it. Protection systems are benefiting too, but this is simply progress. Everybody adapts, and those of us working in protection need to stay on top of it.

THOMAS TARANIUK: It’s an arms race, and I hate to say it, but it’s one we’re fighting every single day. These reports help, but the report also mentions something else—AI fraud agents. They’re starting to appear more frequently this year. Can you tell us a bit more about that?

AI fraud agents: A new trend—the emergence of autonomous fraud systems

ARTEM POPOV: That’s actually one of the biggest changes compared to last year. It’s something really new that we haven’t seen before. You can think of them as early versions of autonomous fraud systems.

Previously, if you were a fraudster and wanted to automate an attack, you’d write specific code for a specific pattern. You’d run it, it would work for a while, then it would get detected, and you’d have to adapt manually. A human was always in the loop.

Now, similar to how coding agents can write entire codebases from basic requirements—testing and adapting automatically—fraud systems will likely evolve the same way. In the future, they’ll be able to adapt to changing protection mechanisms without a human in the loop, which is a really interesting challenge.

Suggested read: Know Your Machines: AI Agents and the Rising Insider Threat in Banking and Crypto

THOMAS TARANIUK: The report also talks a lot about something called telemetry tampering. Can you explain what that is for our audience? It sounds quite technical.

Telemetry tampering

ARTEM POPOV: This is when fraudsters stop attacking the content itself—like the ID document—and start attacking the system around it. A good example is liveness or biometric checks.

Imagine you need to show your face to the camera. Instead of doing that, you inject a pre-recorded video into the camera feed. The next challenge is making the camera appear genuine and not like a virtual camera. That’s telemetry tampering—making the device appear real while injecting fake input.

It’s a completely different level of attack, especially when combined with other techniques we discussed earlier.
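To make the idea concrete, here is a toy sketch of telemetry consistency checking: the system compares signals a device reports about itself and flags contradictions. The field names, the virtual-camera list, and the thresholds are all invented for illustration; they are not Sumsub’s actual detection logic.

```python
# Toy sketch: flag contradictory device telemetry in a verification session.
# All field names and heuristics below are hypothetical examples.

KNOWN_VIRTUAL_CAMERAS = {"obs virtual camera", "manycam", "snap camera"}

def telemetry_flags(telemetry: dict) -> list[str]:
    """Return a list of inconsistency flags for one session's telemetry."""
    flags = []

    # A camera label matching a known virtual-camera product is suspicious.
    camera = telemetry.get("camera_label", "").lower()
    if any(name in camera for name in KNOWN_VIRTUAL_CAMERAS):
        flags.append("virtual_camera_label")

    # A phone claiming a desktop-only camera API contradicts itself.
    if telemetry.get("platform") == "ios" and telemetry.get("camera_api") == "directshow":
        flags.append("platform_camera_mismatch")

    # Reported screen size should be plausible for the claimed platform.
    if telemetry.get("platform") == "android" and telemetry.get("screen_width", 0) > 4000:
        flags.append("implausible_screen")

    return flags

# A session that injects a virtual camera while claiming to be an iPhone:
session = {"camera_label": "OBS Virtual Camera", "platform": "ios",
           "camera_api": "directshow", "screen_width": 390}
print(telemetry_flags(session))
```

The point is exactly what Artem describes: faking one signal is easy, but faking every signal consistently at the same time is hard, and each contradiction is a detection opportunity.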

THOMAS TARANIUK: Definitely hard to spot, especially when these are brand-new fraud techniques. It’s incredibly creative—spoofing cameras, injecting code, making it look like a real camera when it’s actually a pre-recorded video of a person who doesn’t even exist.

It’s super scary. And at the end of the day, it sounds bleak. Fraudsters have AI, they’re attacking systems directly, and they’re becoming more professional. But here’s the thing—we’ve identified these emerging trends in the report, which means we can now identify how to stop them and how to beat them.

So now the crucial question is, Artem: how do we all fight back?

We’re in an arms race, absolutely no doubt. We’re not losing. This isn’t just a problem for one sector or one geography, as we’ve explained. Everyone’s affected—businesses, platforms, users, regulators, right? But what actually works, Artem, against sophisticated fraud?

What actually works against sophisticated fraud

ARTEM POPOV: There are ways to manage that.

The big thing is to use a multi-layered approach and not rely on a single check.

So if we talk about, again, the case with a deepfake injected into a webcam—if you’re analyzing only the selfie itself, you don’t have access to the data showing that the person is probably trying to inject this image using a fake device or a virtual webcam, right? When you have two different checks there, it’s already much easier to manage.

So you definitely need multiple checks across different layers: document checks, biometrics, device intelligence—which is a big one—and behavioral analysis. But still, I think the highest risk, which is really hard to manage, is humans.

THOMAS TARANIUK: Human error, that is.

ARTEM POPOV: Humans are really easy to trick, as we see from all this social engineering and phishing. But at the same time, without humans, AI systems and automated solutions don’t work that well either. You always need that additional feedback to train your specialized fraud prevention systems for your specific case.

So humans should still be in the loop, and that loop needs to be really effective. I would say AI plus a human is a great way to manage things at this moment in time.

THOMAS TARANIUK: It makes it more effective, right? And even on the social engineering side, where businesses are losing hundreds of millions sometimes all at once—believing that their CFO has asked them to move money, et cetera—I honestly think Zoom should include some kind of fraud or deepfake detection within calls, because a lot of this is happening there.

And I do think it’s worth saying that even if you’re not using Sumsub for the entire user verification lifecycle, there are systematic things businesses can do—fundamentals that every business should implement. You need layered protection, as you mentioned earlier. You need to look at every step, from initial pre-screening all the way to ongoing behavioral monitoring, with everything in between.

Not relying on a single check is critical. Having a human in the loop can be useful. Fraudsters are now good enough to beat any one system, right? So that layered response becomes the best overall line of defense.
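The layered decision logic described here can be sketched in a few lines. This is a hedged illustration, not Sumsub’s scoring model: the check names, thresholds, and decision labels are made up, but the shape of the idea is real, since no single check decides the outcome, and several moderately suspicious layers together can be as damning as one strong one.

```python
# Illustrative multi-layer risk decision. Check names and thresholds
# are hypothetical, chosen only to demonstrate the layered approach.

def layered_decision(signals: dict[str, float]) -> str:
    """signals maps a check name to a risk score in [0, 1]."""
    # Any single very strong signal is enough to reject outright.
    if max(signals.values()) >= 0.9:
        return "reject"

    # Several moderately suspicious layers together also reject:
    # faking everything consistently at once is hard.
    suspicious = [s for s in signals.values() if s >= 0.5]
    if len(suspicious) >= 2:
        return "reject"
    if suspicious:
        return "manual_review"
    return "approve"

# Document and behavior look clean, but device and biometrics disagree:
print(layered_decision({"document": 0.2, "biometrics": 0.6,
                        "device": 0.7, "behavior": 0.3}))
```

A deepfake injected through a virtual webcam might score low on the face check alone, but combined with a suspicious device signal the session is rejected, which is exactly the two-checks example Artem gives above.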

ARTEM POPOV: And if we talk about businesses, given how fraudsters are using different techniques to avoid being noticed, observed, or acted upon, businesses are struggling to keep up with the pace. Their core business usually isn’t fraud protection, and you don’t want resources leaking just because you’re trying to enter a new market or operate day to day.

I think this is a big challenge. Previously, some in-house systems worked well enough. Nowadays, implementing and maintaining all these systems is a lot of work.

THOMAS TARANIUK: And updating those legacy systems is expensive too.

ARTEM POPOV: Oh yeah.

THOMAS TARANIUK: So having a solution you can rely on that constantly evolves as threats evolve is essential. Regulators are tightening not only in Europe but globally. Companies are spending more budget protecting their profits and bottom lines through fraud prevention.

But as discussed in previous episodes—especially the one with Boris Montin from Glovo—every company has to accept a certain baseline fraud rate. It’s often the cost of doing business, not just in Europe, not just in delivery services, but also in fintech and banking.

So where’s the balance, Artem? What is that baseline? Should companies keep pouring money into fraud prevention, or is there a point of diminishing returns?

ARTEM POPOV: I think accepting a certain level of fraud is almost a philosophical challenge. Fraudsters will always be slightly ahead in the arms race because they’re more flexible than companies or regulators. They’ll always have some edge.

What really works in fraud prevention is making fraud schemes expensive and technically complex for fraudsters. If it becomes too costly, most fraudsters will simply move on to easier targets. So it’s largely an economic question.

Not everyone needs bank-level security. Some companies accept a certain level of fraud because they want to grow quickly. That’s why adaptive friction is such a good solution.

Instead of a binary approach, where high-risk users are blocked and low-risk users are allowed, adaptive friction lets you assess risk and adjust the number of challenges a user faces before taking an action. This helps manage false positives and gives legitimate users a remediation path.

You can’t eliminate fraud completely. It’s about maintaining a good user experience while protecting against obvious and opportunistic fraud.
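As a minimal sketch of adaptive friction, the step-up ladder below maps an assessed risk score to an escalating set of challenges instead of a binary allow/block. The thresholds and challenge names are hypothetical, assumed only for illustration.

```python
# Hedged sketch of adaptive friction: challenges scale with assessed risk.
# Thresholds and challenge names are invented for this example.

def challenges_for(risk: float) -> list[str]:
    """Map a risk score in [0, 1] to an escalating set of challenges."""
    ladder = [
        (0.2, []),                               # low risk: no friction
        (0.5, ["email_otp"]),                    # light step-up
        (0.8, ["email_otp", "liveness_check"]),  # stronger step-up
    ]
    for threshold, challenges in ladder:
        if risk < threshold:
            return challenges
    # Highest band: full re-verification before the action is allowed.
    return ["email_otp", "liveness_check", "document_recheck"]

print(challenges_for(0.1))   # low-risk user sails through
print(challenges_for(0.65))  # mid-risk user gets stepped up
```

Tuning the ladder is the business decision Artem describes: each extra rung raises the cost for fraudsters but also adds friction for legitimate users, so the thresholds encode how much fraud the company is willing to accept.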

THOMAS TARANIUK: Absolutely. Adaptive friction is a great example—placing small blockers based on how users interact with services or meet risk criteria. But any friction impacts business performance and user acquisition. In saturated markets like fintech, customer stickiness is essential.

Add too much friction, you lose users. Add too little, you lose to fraud. It really is a tug of war.

Looking ahead to 2026, Artem—we’ve talked a lot about the 2025 report. What fraud trends do you think will grow? Which ones will emerge? Crystal ball time—what do you see coming next year?

ARTEM POPOV: These are challenges, but personally, I enjoy tackling them. For companies, it’ll definitely be challenging, and for us at Sumsub too, but that’s what we’ve been doing for over 10 years.

Deepfakes are already becoming normal, and they’ll become even more normal. Eventually, people may not even trust livestreams anymore. For example, recently we tried to hire a backend engineer and encountered candidates trying to pass interviews using deepfakes.

THOMAS TARANIUK: Why would they do that? That’s incredible.

ARTEM POPOV: Like any fraud, it’s about gaining an advantage. Someone might pay another person to pass an interview or secure a job, or just get an initial payment and monetize it.

THOMAS TARANIUK: Makes sense.

ARTEM POPOV: We noticed inconsistencies—unusual accents, things that weren’t obvious in the deepfake itself but stood out to a human observer. It’s getting trickier for sure.

Another trend is synthetic identities being reused across platforms. Fraudsters can build consistent fake personas, create digital footprints, and then use that trust to attack banks, lenders, and more.

THOMAS TARANIUK: Expanding trust networks—and enabling money muling too.

ARTEM POPOV: Exactly. Money muling is something I think is underestimated right now. As KYC becomes stronger, fraudsters recruit real people from vulnerable environments, tricking them into thinking they’re taking legitimate jobs. These people then create accounts using their real identities, which are used for money muling or other fraud schemes.

THOMAS TARANIUK: There’s a lot to unpack there—thank you, Artem. Looking forward, we see trends worsening and new threats emerging. But for anyone listening, stay tuned for our outreach and next year’s fraud report.

Quick-fire round

To close things out, it’s time for our quick-fire questions. Ready, set, go. If you could ban one risky online behavior forever, what would it be?

ARTEM POPOV: Phishing. It’s just too prevalent.

THOMAS TARANIUK: And if you could stop one risky thing that everyday users, like you or me, do online, what would it be?

ARTEM POPOV: The sense of urgency. If someone pressures you into acting quickly, take a one-minute pause. That alone can prevent many scams.

THOMAS TARANIUK: Easier said than done. Next question—have you ever been a victim of fraud?

ARTEM POPOV: My wife recently was. Her credit card details were leaked somehow. She noticed fraudulent payments totaling a few thousand euros, done without 3DS. We went through the chargeback process, which takes time and creates uncertainty. Luckily, we got the money back, but it wasn’t easy.

THOMAS TARANIUK: A headache, but at least you got it back. Next question: What does the public most underestimate about fraud prevention?

ARTEM POPOV: Money muling—recruiting and tricking people into becoming part of fraud schemes without realizing it.

THOMAS TARANIUK: Penultimate question—is AI a bubble that will burst?

ARTEM POPOV: I don’t speculate on investments, but I think society will benefit from AI. In my work, it’s helped enormously. I can prototype faster and do more, which is incredible.

THOMAS TARANIUK: You’re a vibe coder, Artem.

ARTEM POPOV: With AI, yes. And I think we’ll see positive outcomes overall.

THOMAS TARANIUK: Final question: if you could have any career other than your current one, what would it be?

ARTEM POPOV: If money didn’t matter, probably something in the music industry—not as an artist, but helping discover talent, releasing vinyls and CDs.

THOMAS TARANIUK: That sounds wonderful. Thank you, Artem, for joining What The Fraud? It’s been a pleasure.

ARTEM POPOV: Thank you very much for having me.