Fighting Fraud Takes Two: Why Academia and Business Must Build AI Together

In this article, Slava Pirogov, Machine Learning Engineer at Sumsub, discusses how academia and business can team up to combat fraud with AI.


Business and academia have always been intertwined, often in ways we forget. One of my favorite examples is William Sealy Gosset, a brewer at Guinness, who in 1908 created the Student’s t-distribution to improve beer quality control. Pure research was born inside a brewery and went on to shape modern statistics. A century later, Bitcoin, which began as a research whitepaper, now supports real-world markets and payments. 

Today, in machine learning, AI, and fraud prevention, the situation is no different. The world’s largest platform companies understand this: Google spent $49.3 billion on R&D in 2024 (about 14% of its revenue), while Meta spent $43.9 billion (26.7% of revenue). Their investment in centralized, academic-style research remains crucial to building tomorrow’s AI.

Academia inspires, business grounds

Each side brings something essential. Business grounds research with real problems, data, and constraints, while academia inspires innovation. Consider large language models: long before ChatGPT, they were niche research tools. Only when wrapped into a practical, conversational format did they spark a global AI boom—transforming OpenAI from a non-profit research group into one of the world’s most influential companies.

Another crucial role for academia is benchmarking. Open datasets and peer-reviewed results let the whole world see not only the claims but also the evidence. Without that transparency, saying a fraud-detection model achieves a false-positive rate of 0.1% means very little.

Research already powers our work

In practice, the line between research and production is almost gone. Many of today’s default tools were research papers just a few years ago. For instance, if you want to search images by text, you use CLIP—no longer just “research,” but infrastructure.
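To make this concrete, here is a minimal sketch of text-to-image matching with the open-source CLIP weights via Hugging Face transformers. The checkpoint name is a public model; the image paths and the query are illustrative placeholders, not our production setup.

```python
# Minimal sketch: matching a text query against images with CLIP
# (Hugging Face transformers). Image paths and the query are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

images = [Image.open(path) for path in ["document_photo.jpg", "selfie.jpg"]]
query = "a passport-style photo of a person"

inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text[i, j] is the similarity of query i to image j.
scores = outputs.logits_per_text.softmax(dim=-1)
best = scores.argmax().item()
print(f"Best match: image {best} (score {scores[0, best]:.3f})")
```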

In our own AI team, we constantly monitor research papers across many topics. Robust fine-tuning, for example, is probably the most important area for keeping our models among the best in the industry. We often experiment with works that are only a few weeks old and have zero citations. If the idea is strong, we try it. This has become standard practice for companies determined to stay at the forefront.
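One example of such a technique, purely for illustration: weight-space ensembling (the WiSE-FT idea), where fine-tuned weights are interpolated with the original pre-trained weights rather than replacing them outright. The sketch below shows that published idea in a few lines of PyTorch; it is not a description of our production recipe.

```python
# Illustrative sketch of weight-space ensembling (the WiSE-FT idea):
# blend pre-trained and fine-tuned weights instead of keeping only the
# fine-tuned ones. One published technique, not our production recipe.
import copy

def interpolate_weights(pretrained_model, finetuned_model, alpha=0.5):
    """Return a model with weights (1 - alpha) * pretrained + alpha * finetuned."""
    merged = copy.deepcopy(pretrained_model)
    merged_state = merged.state_dict()
    pre_state = pretrained_model.state_dict()
    fin_state = finetuned_model.state_dict()
    for name, tensor in merged_state.items():
        if tensor.dtype.is_floating_point:  # skip integer buffers such as step counters
            merged_state[name] = (1 - alpha) * pre_state[name] + alpha * fin_state[name]
    merged.load_state_dict(merged_state)
    return merged

# Typical usage: sweep alpha on held-out (and distribution-shifted) validation data
# and keep the value that best balances in-distribution accuracy and robustness.
```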

And research doesn’t just flow one way. At Sumsub, we’ve also given back, publishing two papers at ICML workshops in 2025 on deepfake detection. These papers directly improved our production systems and, at the same time, helped build a bridge between academic openness and real-world application.

The first, “Evaluating Deepfake Detectors in the Wild”, focuses on open benchmarking in real-world settings. We created a new high-quality deepfake dataset and proposed a flexible evaluation pipeline that checks whether a model’s performance claims truly hold under real conditions. The results revealed a critical insight: open-source state-of-the-art detectors are nowhere near ready for production deployment.
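For illustration only, here is a minimal evaluation loop of the kind such benchmarking relies on: score every sample, then report recall and the false-positive rate at a fixed threshold. The scores and labels below are made up, and this is not the pipeline from the paper.

```python
# Minimal sketch of a detector evaluation loop: given per-sample scores and
# ground-truth labels, report recall and false-positive rate at a threshold.
# The numbers are made up; this is not the pipeline from the paper.
import numpy as np

def evaluate(scores: np.ndarray, labels: np.ndarray, threshold: float = 0.5) -> dict:
    """scores: model confidence that a sample is fake; labels: 1 = fake, 0 = real."""
    preds = scores >= threshold
    tp = np.sum(preds & (labels == 1))
    fp = np.sum(preds & (labels == 0))
    tn = np.sum(~preds & (labels == 0))
    fn = np.sum(~preds & (labels == 1))
    return {
        "recall": tp / max(tp + fn, 1),               # share of deepfakes caught
        "false_positive_rate": fp / max(fp + tn, 1),  # share of real users wrongly flagged
    }

scores = np.array([0.92, 0.15, 0.71, 0.40])
labels = np.array([1, 0, 1, 0])
print(evaluate(scores, labels))
# A detector that looks strong on its original benchmark can degrade sharply on
# in-the-wild data, which is why the same metrics must be re-checked there.
```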

In our second paper, “Visual Language Models as Zero-Shot Deepfake Detectors”, we propose a novel method of using VLMs for classification tasks that achieves superior performance in deepfake detection. The approach is remarkably flexible: it is not tied to a specific task or model and boosts performance across different setups. Together, these works illustrate how cutting-edge academic ideas can immediately strengthen practical systems.
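To give a flavor of the general idea behind the second direction (only a rough illustration, not the method from the paper), an off-the-shelf open VLM can be prompted to classify an image in a zero-shot fashion. The model choice, prompt, and decision rule below are assumptions made for this example.

```python
# Rough illustration of zero-shot classification with an off-the-shelf VLM
# (LLaVA via Hugging Face transformers). The model, prompt, and decision rule
# are assumptions for this example, not the method from the paper.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("face_to_check.jpg")  # placeholder path
prompt = (
    "USER: <image>\nIs this a real photograph of a person or an AI-generated "
    "deepfake? Answer with a single word: real or fake. ASSISTANT:"
)

inputs = processor(text=prompt, images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=5)
answer = processor.decode(output[0], skip_special_tokens=True).split("ASSISTANT:")[-1].strip()
print(answer)  # e.g. "real" or "fake"
```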

We’ve gone further still by launching a collaboration with Constructor University in Bremen, home to one of the most respected research groups in Bayesian methods. Our applied knowledge in the fraud domain complements their theoretical expertise, creating a space where both sides can meet in the middle. The goal is to advance both the science of AI and the practical fight against fraud.

Fraudsters adapt—we must be faster

The AI threat landscape is evolving at breakneck speed. Across 2024 and 2025, Sumsub detected a fourfold increase in deepfake incidents worldwide, with deepfakes accounting for 7% of all fraud attempts. A few years ago, many deepfakes were crude and visibly flawed. Today, they are often indistinguishable to the human eye. Fraudsters are innovating with the same tools as researchers, sometimes even faster. We can’t just respond to fraud; we must anticipate it. Solving tomorrow’s problem is often more important than reacting to today’s.

Joint efforts for a secure future

New laws, such as the EU AI Act, will only increase the demand for academically informed approaches. Research will help companies develop fairer, more transparent, and less biased models, while techniques like synthetic data generation already play an important role in how we train ours.

Another emerging trend is explainability: understanding exactly why we flagged an applicant as fraudulent or detected a deepfake. The next wave of models (such as Vision Language Models) can help make decisions more interpretable, which in turn builds trust with regulators and users.

Walking hand in hand against fraud

History shows that business and academia are most powerful when they move together. In the fight against digital fraud, that lesson has never been more urgent. Companies provide the resources and real-world grounding; universities provide the openness, theory, and innovation.

Fraudsters are adapting every day. To stay ahead, our AI must evolve even faster—and it will only do so if research and business continue to walk hand in hand.

You can read Slava’s published works, “Evaluating Deepfake Detectors in the Wild” and “Visual Language Models as Zero-Shot Deepfake Detectors”, by following these links.
