Aug 21, 2025

Ask Sumsubers: Can autonomous AI agents handle end-to-end KYC with minimal human oversight, and will LLM-powered systems replace human analysts?

Sumsub keeps getting questions from our followers about the specifics of regulatory compliance, verification, automated solutions, and everything in between. We’ve therefore decided to launch a monthly Q&A series, where our legal, tech, and other experts answer your most frequently asked questions. Check out The Sumsuber and our social media for new answers, and don’t forget to ask about the things that interest you. This week, Eugeny Malyutin, Head of LLM at Sumsub, will discuss whether it’s realistic for autonomous AI agents to conduct end-to-end KYC document verification with minimal human oversight, and whether LLM-powered systems will ultimately replace human analysts.

Follow this monthly series and submit your own questions to our LinkedIn.

AI has already changed verification processes—and it will continue to do so. Autonomous AI systems capable of completing tasks without human oversight are often presented as the near future. But in reality, there’s still a big gap between shiny prototypes and the messy day-to-day of real fraud fighting.

Some tasks are already performed better by AI than by humans. Sophisticated deepfakes, for example, are nearly impossible to spot with the naked eye, but AI can handle them. Routine jobs like data extraction are also handled better by machines: they’re more accurate, faster, cheaper, and less prone to unpredictable bias.

If you look at how a modern international company handles verification, the process can be imagined as a chain of interconnected steps: verifying proof of identity, checking authenticity, capturing a selfie, running liveness checks, matching the selfie to the ID, scanning sanctions lists, and finally, making a decision based on regulations and risk scoring. AI—especially LLMs—can already automate many parts of this workflow.
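As a rough illustration, that chain can be modeled as a fail-fast pipeline where every check must pass. Everything in this sketch is hypothetical: the step names, the StepResult interface, and the stubbed confidences are assumptions for illustration, not Sumsub’s actual product.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StepResult:
    passed: bool
    confidence: float  # model confidence in [0, 1]

# Stubs standing in for real document, biometric, and screening models.
def check_document_authenticity(applicant: dict) -> StepResult:
    return StepResult(passed=True, confidence=0.98)

def run_liveness_check(applicant: dict) -> StepResult:
    return StepResult(passed=True, confidence=0.95)

def match_selfie_to_id(applicant: dict) -> StepResult:
    return StepResult(passed=True, confidence=0.97)

def screen_sanctions_lists(applicant: dict) -> StepResult:
    return StepResult(passed=True, confidence=0.99)

PIPELINE: list[Callable[[dict], StepResult]] = [
    check_document_authenticity,
    run_liveness_check,
    match_selfie_to_id,
    screen_sanctions_lists,
]

def verify(applicant: dict) -> str:
    """Run every check in order; a single failed step rejects the applicant."""
    for step in PIPELINE:
        result = step(applicant)
        if not result.passed:
            return f"rejected at {step.__name__}"
    return "approved"

print(verify({"id_document": "...", "selfie": "..."}))  # -> approved
```

But can we leave the entire process to AI alone? Nope, and here’s why.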

First—responsibility. Operating in regulated industries means being accountable. If an AI gets something wrong, will it pay fines of up to 10% of global revenue under the UK Online Safety Act? I don’t think so—that liability falls on you.

Second—performance. An AI that achieves 97% accuracy at a single step will see its performance collapse when it has to string multiple decisions together. Ten sequential steps push that accuracy down to roughly 74% (0.97^10 ≈ 0.74). Real-world verification flows often involve twenty steps or more, which makes the odds of flawless execution unacceptably low. Add in the fact that AI agents are supposed to make plans, not just follow them, and you’re looking at twenty opportunities for error, with only about a 54% chance of getting everything right. That’s not good enough.
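The arithmetic behind this is simple compounding, assuming each step succeeds independently:

```python
# End-to-end accuracy of a chain of independent steps decays geometrically.
per_step = 0.97
for steps in (1, 10, 20):
    print(f"{steps:>2} steps -> {per_step ** steps:.0%} end-to-end")
# Output:
#  1 steps -> 97% end-to-end
# 10 steps -> 74% end-to-end
# 20 steps -> 54% end-to-end
```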

Last but not least—there are deeper technical issues: long-context limitations, hallucinations, contradictions between the data the models were trained on and current regulations, and the inability of current systems to measure their own uncertainty. The raw ‘just let agents do it’ approach fails here.

So, does that mean AI is a total failure in online verification? Not at all. We just need to move past the naive idea of full replacement and start focusing on collaboration. That means designing models that can raise flags when they’re uncertain, measuring performance carefully, automating the tasks machines do best, and leaving the rest to humans. It’s about building workflows with a human touch, not removing the human altogether.
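As a minimal sketch of what that flag-raising could look like, assuming a model that exposes a calibrated confidence score (the threshold and interface here are illustrative assumptions, not a production design):

```python
# Confidence-gated automation: the machine resolves only what it is sure
# about; everything else is escalated to a human analyst.
CONFIDENCE_THRESHOLD = 0.99  # illustrative value, not a recommendation

def route(decision: str, confidence: float) -> str:
    """Auto-resolve confident decisions; flag uncertain ones for a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return decision        # clear-cut case: automate it
    return "human_review"      # uncertain case: raise a flag and escalate

print(route("approve", 0.999))  # -> approve
print(route("reject", 0.87))    # -> human_review
```

The threshold encodes exactly the division of labor described above: machines take the high-confidence, repetitive decisions, and humans keep the ambiguous ones.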

Will this reduce the number of anti-fincrime specialists we need? Also no. Fraud is growing and becoming democratized—the barriers to entry for deepfake models are shockingly low. What we really want is to stop wasting skilled human effort on repetitive, routine work and reinvest that time into analytics, discovery, and deeper investigations. Modern AI can help here too: making advanced tools more accessible, helping detect anomalies in event streams, and even drafting reports.

So my vision of the future is this: humans will not be eliminated from the process. Instead, we’ll get a super-human specialist, one empowered with AI tools and ready to fight increasingly sophisticated fraud. That’s it.

Eugeny Malyutin

Head of LLM at Sumsub