• Jan 28, 2026
  • 9 min read

From AI Agents to Know Your Agent: Why KYA Is Critical for Secure Autonomous AI

AI agents can act autonomously, for good or for harm, and unverified agents are a growing risk. We break down Know Your Agent and the technical layers behind secure AI agent verification.

Today, AI agents—powerful autonomous software systems—are among the most visible trends in modern technology and are increasingly discussed in the context of fraud and security. As AI agents move from experimental tools to independent actors operating across financial systems, APIs, and enterprise workflows, a fundamental question emerges: what—or who—is actually acting?

Some form of automation has always existed. Historically, it involved predefined actions carried out through scripts, browser tooling, or simulated environments. These capabilities required technical expertise and infrastructure, which limited their use to engineers, specialized companies, and well-resourced fraud operations. As a result, automated behavior was rare among regular users and could safely be treated as a strong risk signal. That assumption is now breaking.

AI agents and browser-based agents have democratized automation and dramatically lowered the barrier to entry. Tasks that once required code or APIs can now be performed using natural language. Agent-driven activity is becoming more accessible, more situational, and harder to distinguish from normal user behavior, which profoundly changes the risk landscape.

From a fraud perspective, automation has always enabled scale and efficiency, successfully powering schemes such as credential testing, account takeovers, phishing, and downstream money movement. These risks were usually manageable because legitimate users rarely behaved this way, and suspicious activity could be blocked or reviewed. That model is also breaking.

AI agents are rapidly becoming the backbone of digital operations, yet most of today’s systems still treat them as opaque, unaccountable black boxes.

Vyacheslav Zholudev

Co-founder and CTO at Sumsub

Today, AI agents can do pretty much anything a user can do in a browser, including initiating transactions and accessing sensitive data—acting with limited or no human oversight. This creates a new challenge for organizations: separating legitimate agent activity from malicious use. Although we’ve already witnessed AI agent-led fraud, most agents are used to help real users save time—submitting multiple payments, purchasing tickets, or completing one-off tasks that do not follow predictable patterns. Blocking all agent-driven behavior will increasingly mean blocking real customers.

This is where Know Your Agent (KYA) comes in. Just as Know Your Customer (KYC) establishes trust and accountability for people, KYA frameworks establish them for autonomous systems.

KYA is the process of verifying AI agents across identity, authentication, authorization, and policy enforcement. It also establishes who is behind the agent, ensuring that only legitimate agents operate within defined guardrails. Sumsub is the only verification solution on the market that offers human binding. 

Sumsub’s AI Agent Verification links each AI agent to a verified human identity—the most secure approach to KYA today. It goes beyond authenticating the agent itself by verifying the real person behind it, delivering greater accountability, trust, and stronger risk prevention.

With AI Agent Verification, Sumsub is the first to bind AI agents to verified human identities at scale. Rather than attempting to blindly trust AI agents themselves, our solution focuses on verifying the humans behind them.

Vyacheslav Zholudev

Co-founder and CTO at Sumsub

As organizations continue to deploy AI agents at scale, KYA is becoming a prerequisite for secure, trustworthy autonomy. Understanding how to verify and govern AI agents is now essential to enabling innovation without enabling abuse.

What is Know Your Agent?

Know Your Agent (KYA) is a risk-based approach to establishing and maintaining trust in AI agents by defining their identity, binding them to responsible entities (human or organizational), and enforcing policy, oversight, and auditability across all autonomous actions. 

KYA mitigates the risks of fraudulent automation across industries (e.g., e-commerce), unauthorized machine access to sensitive services, impersonation of trusted agents, and ungoverned autonomous actions, ensuring that AI agents are identifiable and trusted.

AI agent verification is also a crucial precursor to secure autonomous payments. Without strong identity and authorization controls, allowing agents to move money or enter financial contracts introduces unacceptable risk. With KYA in place, organizations can safely implement agent-driven workflows and simultaneously maintain regulatory and operational guardrails.

The KYA framework is evolving, and multiple approaches are emerging to distinguish “good” automation (legitimate agents acting on behalf of real users) from “bad,” fraudulent automation.

A growing area of focus is human-in-the-loop accountability, where AI-driven automation is explicitly bound to a real, verified human identity. In this model, the system first detects when activity is automated, evaluates its risk level, and applies additional verification only when warranted. In higher-risk scenarios, AI agent verification can require a targeted liveness check to confirm that a real human is present and authorized. This approach prevents deepfakes or synthetic actors from substituting real users and ensures that every autonomous action remains directly attributable to the human ultimately responsible.

Know Your Agent in different contexts

Know Your Agent manifests differently depending on where and how AI agents operate. 

On the open web, bots and agents can interact with any website to browse content, collect information, or perform lightweight automated actions. In this environment, the primary concern is often basic identification and traffic legitimacy rather than strict permissioning.

In corporate settings, the focus shifts toward strong authentication and fine-grained authorization. AI agents may access internal systems, APIs, or sensitive data, making it critical to precisely define who the agent represents, what resources it can access, and which actions it is permitted to perform.

A third and rapidly growing context is mass automation, including AI-powered browsers and agent frameworks augmented with Model Context Protocol (MCP) integrations. Here, agents increasingly act on behalf of users across many external services, often without continuous human supervision. In these scenarios, technical identity alone is not sufficient. For sensitive actions, such as submitting forms, executing transactions, or modifying accounts, systems must also answer a higher-order question: who authorized this action?

This is where KYA extends beyond traditional agent authentication into explicit authorization and accountability, ensuring that automated actions are not only technically valid but also legitimately approved and traceable to a responsible human or organization.

Enforcing this distinction in real systems requires a layered technical model that starts with agent identity and extends to authentication and authorization. Let’s dive into the technical details.

What is an AI agent identity?

An AI agent identity is a verifiable, persistent representation of an AI agent that allows systems to recognize the agent, authenticate it, and enforce what it is allowed to do.

In practice, AI agent identity consists of two distinct but connected identities:

  1. “Machine” identity: Who or what is the AI agent as a technical entity? (cryptographic credentials, keys, metadata, scopes, policies)
  2. Human identity: Who is the real person or organization that operates, authorizes, or is accountable for that agent?

“Machine” identity governs how the agent authenticates and is authorized within systems, while human identity establishes accountability through Human-Delegated Authentication (agents acting on behalf of a user), where authority is explicitly derived from a real, verified person.

A complete KYA approach connects both. Most existing systems focus on the “machine” identity alone, but it is equally important to establish and verify the human identity behind the agent.
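To make this concrete, here is a minimal sketch, in Python, of what a combined agent identity record could look like. All class and field names are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class MachineIdentity:
    agent_id: str        # unique, non-shared identifier
    public_key_pem: str  # cryptographic credential used for authentication
    scopes: list[str]    # what the agent is allowed to do
    metadata: dict       # vendor, version, declared purpose, etc.

@dataclass
class HumanIdentity:
    applicant_id: str    # verified person or organization behind the agent
    verified: bool       # passed identity verification / liveness

@dataclass
class AgentIdentity:
    machine: MachineIdentity
    human: HumanIdentity  # accountability link: who authorized this agent
```

The point of the structure is the link itself: every machine credential is tied to a human identity that can be re-verified when risk demands it.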

Next, we examine how an AI agent identity is authenticated and authorized in practice.

What is AI agent authentication?

AI agent authentication is the process of verifying that an agent is who it claims to be before granting access to systems, APIs, or resources. Unlike human authentication, this typically relies on machine-to-machine authentication mechanisms.

Common approaches include cryptographic credentials such as private keys, signed tokens, or certificates. Once authenticated, the system establishes trust in the AI agent identity, but authentication alone does not define what the agent is allowed to do.

In short: authentication answers “Who is the agent?”, not “What can the agent do?”
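As a simplified illustration, here is a minimal token-verification sketch using the PyJWT library (`pip install pyjwt[crypto]`). The audience value and key handling are assumptions for the example, not a reference implementation:

```python
import jwt  # PyJWT

def authenticate_agent(token: str, agent_public_key_pem: str) -> dict:
    """Return the verified claims if the token is authentic, else raise."""
    claims = jwt.decode(
        token,
        agent_public_key_pem,
        algorithms=["RS256"],                # pin the expected algorithm
        audience="https://api.example.com",  # hypothetical audience
    )
    # Authentication succeeded: we now know *which* agent this is.
    # It says nothing yet about what the agent may do.
    return claims
```

Note that successful verification only establishes identity; the permissions still have to be enforced separately, as covered below.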

How to authenticate AI agents

AI agent authentication involves establishing a complete identity lifecycle. The key steps are:

  1. Establishing unique identities: Every AI agent must have a distinct, non-shared identity to enable accountability and auditability.
  2. Implementing M2M authentication: Use proven AI agent authentication methods such as OAuth client credentials or mTLS for secure machine-to-machine interactions.
  3. Securing credential storage: Apply robust credential management and secret management practices to prevent leakage or misuse.
  4. Configuring token lifecycle management: Enforce short-lived tokens, rotation policies, and revocation mechanisms as part of proper token lifecycle control.

These practices help keep authentication secure even as agents scale across environments.
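For illustration, here is a minimal sketch of steps 2 and 4 combined: obtaining a short-lived token via the OAuth client credentials grant and tracking its expiry for rotation. The endpoint, credentials, and scopes are hypothetical:

```python
import time
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"  # hypothetical endpoint

def fetch_agent_token(client_id: str, client_secret: str) -> dict:
    """Obtain a short-lived access token via the client credentials grant."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": "payments:read payments:submit",  # narrow scopes
        },
        timeout=10,
    )
    resp.raise_for_status()
    token = resp.json()
    # Track expiry so the token can be rotated, per lifecycle best practice.
    token["expires_at"] = time.time() + token.get("expires_in", 300)
    return token

def token_expired(token: dict) -> bool:
    return time.time() >= token["expires_at"] - 30  # refresh 30s early
```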

‼️The above methods work best in controlled environments where agents are pre-registered and tightly managed (for example, internal corporate agents or backend services). However, they are not sufficient on their own for more open scenarios—such as AI browsers or consumer-facing agents acting on behalf of users—where additional controls (human binding, step-up verification, risk-based checks) are required to ensure accountability and mitigate abuse.

What is AI agent authorization?

AI agent authorization determines what an authenticated agent is permitted to do—such as which APIs it can call, which data it can access, and which actions it can execute.

Understanding authorization vs authentication is important:

  • Authentication verifies identity
  • Authorization enforces permissions

For AI agents, authorization is necessary because autonomous systems operate continuously and at scale. Without strict authorization boundaries, a single compromised agent could cause systemic damage.

Effective AI agent authorization therefore ensures that even trusted agents can act only within narrowly defined scopes aligned with their intended purpose.
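A minimal sketch of what such scope enforcement can look like, with illustrative agent IDs and scope names:

```python
# Granted scopes per agent; in practice this would live in a policy store.
AGENT_SCOPES = {
    "agent-7f3a": {"tickets:purchase", "payments:submit"},
}

def authorize(agent_id: str, required_scope: str) -> bool:
    """Check whether an already-authenticated agent may perform an action."""
    granted = AGENT_SCOPES.get(agent_id, set())
    return required_scope in granted

# Authentication answered "who is the agent?"; this answers "what can it do?"
assert authorize("agent-7f3a", "payments:submit")
assert not authorize("agent-7f3a", "accounts:modify")
```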

To sum up, identity, authentication, and authorization are distinct:

  • AI agent identity = what and who the agent is (“machine” identity + human identity)
  • AI agent authentication = verifying that identity (e.g., cryptographic credentials)
  • AI agent authorization = determining what the agent is allowed to do.

This distinction still holds for AI agents, just as it does for people.

Best practices for AI agent authorization

Strong authorization is needed not only for security, but also to meet emerging compliance and regulatory expectations for AI governance. Recommended AI agent security best practices include:

  • Defining granular permissions: Avoid overprivileged agents by narrowly scoping access.
  • Implementing context-aware authorization: Use transaction size, frequency, or risk signals to dynamically evaluate permissions.
  • Setting time-bound access: Grant permissions only for the duration required to complete a task.
  • Designing human-in-the-loop conditions: Introduce approval checkpoints for high-risk actions, enabling human-in-the-loop AI oversight without blocking automation entirely.

These controls ensure that autonomy is earned, monitored, and reversible. This positions organizations to meet future regulatory requirements if AI agents become explicitly regulated.
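Putting these practices together, here is a minimal decision-function sketch combining granular scopes, a context-aware rule, a time-bound grant, and a human-in-the-loop checkpoint. The thresholds, scope names, and decision labels are illustrative assumptions:

```python
import time
from dataclasses import dataclass

@dataclass
class Grant:
    scopes: set[str]
    max_amount: float  # context-aware limit per transaction
    expires_at: float  # time-bound access

def decide(grant: Grant, scope: str, amount: float) -> str:
    if time.time() > grant.expires_at:
        return "deny: grant expired"
    if scope not in grant.scopes:
        return "deny: out of scope"
    if amount > grant.max_amount:
        # Do not block outright; escalate to a human approval checkpoint.
        return "escalate: human approval required"
    return "allow"

grant = Grant({"payments:submit"}, max_amount=500.0,
              expires_at=time.time() + 3600)  # valid for one hour
print(decide(grant, "payments:submit", 120.0))   # allow
print(decide(grant, "payments:submit", 5000.0))  # escalate
```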

Suggested read: Comprehensive Guide to AI Laws and Regulations Worldwide (2026)

How Sumsub can help: KYA, bound to a human identity

At Sumsub, Know Your Agent means an agent whose activity is explicitly authorized by a real human, here and now. In this model, an AI agent or automated workflow may execute actions, but authorization and responsibility always belong to a real person, and that link can be verified dynamically when risk arises. Rather than focusing solely on identifying the agent itself, this approach ensures that automated actions are explicitly approved by a verified human, in real time. This human-bound agent model makes automation accountable without limiting its usefulness.

In practice, binding AI agent activity to a real identity involves three steps:

  1. Detecting automation
  2. Assessing risk in real time
  3. Binding high-risk activity to a human

Let’s delve into each step.

Step 1️⃣: Detecting automation and agent activity

The first step is determining whether an action is automated.

Explicit declaration: Some agents identify themselves using verifiable credentials. Verified agents can be safely treated as known automation and governed by policy. However, this does not identify who is behind the agent.

Implicit detection: Most malicious automation does not declare itself. In these cases, detection relies on bot mitigation techniques, as AI browsers and agentic tools often try to hide automation to preserve functionality.

Detecting automation under the hood combines multiple signals:

  • Device intelligence
  • Behavioral analytics
  • Session and transaction monitoring
  • Ongoing monitoring

AI browsers use techniques similar to automated browser control and testing frameworks—the difference is intent, not capability.
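As a simplified illustration of how such signals might be aggregated, consider the sketch below. Real detection models are far richer; the signals and weights here are purely illustrative:

```python
def automation_likelihood(signals: dict) -> float:
    """Combine detection signals into a rough automation score in [0, 1]."""
    score = 0.0
    if signals.get("declared_agent"):          # explicit declaration
        score += 0.9
    if signals.get("webdriver_flag"):          # device intelligence
        score += 0.5
    if signals.get("uniform_typing_cadence"):  # behavioral analytics
        score += 0.3
    if signals.get("burst_transactions"):      # transaction monitoring
        score += 0.3
    return min(score, 1.0)

print(automation_likelihood({"webdriver_flag": True,
                             "uniform_typing_cadence": True}))  # 0.8
```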

Step 2️⃣: Real-time risk assessment

Automation can be detected at any stage of the user journey. Signals are continuously aggregated to calculate risk based on automation likelihood, behavior, transaction context, and history.

When risk exceeds a threshold, the system triggers a targeted challenge rather than blocking. The primary challenge is liveness verification, which spots fake identities and confirms that a real human is authorizing the agent’s actions in real time. Additional checks (such as a payment check) may apply for sensitive or payment-related actions.
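A minimal sketch of this decisioning logic, with illustrative thresholds and challenge names:

```python
def decide_action(automation_score: float, amount: float) -> str:
    """Trigger a targeted challenge instead of blocking when risk is high."""
    risk = automation_score * (1.5 if amount > 1000 else 1.0)
    if risk < 0.4:
        return "allow"           # low risk: no friction
    if risk < 0.8:
        return "liveness_check"  # confirm a real human authorizes this
    return "liveness_check+payment_check"  # sensitive / payment-related

print(decide_action(0.3, 50.0))    # allow
print(decide_action(0.6, 50.0))    # liveness_check
print(decide_action(0.8, 2000.0))  # liveness_check+payment_check
```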

Step 3️⃣: Binding automation to a human

Once verified:

  • The agent is explicitly authorized
  • Activity is linked to a verified person

If the verified person is banned, the automation is also banned. If a blocklist is applied, the face—not just the applicant—is blocked.

This creates a clear chain of responsibility: agent → verified user → face.

It is easy to create new agents or users, but hard to fake a face, which makes this link much more robust.

This makes malicious automation accountable while allowing legitimate automation to continue.

This KYA approach works because detection, decisioning, challenges, and attribution all happen within a single platform, and automation can be controlled at scale without breaking user experience or banning agents outright.
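To illustrate the chain of responsibility, here is a minimal sketch of a binding record and face-level blocklist propagation. The structures and IDs are illustrative assumptions, not Sumsub’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class Binding:
    agent_id: str  # the automated actor
    user_id: str   # the verified applicant behind it
    face_id: str   # reference to the biometric template

blocked_faces: set[str] = set()
bindings: list[Binding] = []

def block_user(user_id: str) -> None:
    # Block the face, not just the applicant, so new accounts or new
    # agents created by the same person are also caught.
    for b in bindings:
        if b.user_id == user_id:
            blocked_faces.add(b.face_id)

def is_allowed(binding: Binding) -> bool:
    return binding.face_id not in blocked_faces
```

Blocking at the face level is what makes the control durable: spinning up a fresh agent or account does not reset accountability.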

Take a look at the demo to see how it works in practice. In this video, you’ll see an AI agent automate a batch of wedding-related payments using an agentic browser. Artem creates a transaction prompt from an uploaded spreadsheet, verifies the user’s identity via a liveness check to bind the agent to a human, and then lets the agent automatically process the subsequent transactions once verification is complete.

This approach aligns naturally with Sumsub’s existing identity, fraud, and risk infrastructure, including:

  • Bot Detection: Detects automated behavior and evaluates its risk level without assuming all automation is malicious.
  • Device Intelligence: Collects low-level browser and device signals to identify inconsistencies typical of automated or instrumented environments.
  • Behavioral Analytics: Analyzes interaction patterns within a session, such as mouse movements, typing dynamics, action timing and sequencing, and behavioral entropy.
  • Liveness Detection: Confirms that a real human is present and actively authorizing an action when risk thresholds are crossed.
  • Risk Orchestration: Applies controls dynamically across onboarding, login, payments, and sensitive actions based on real-time risk signals.
  • Session and Transaction Monitoring: Correlates user actions across onboarding, login, configuration changes, and financial transactions.

Together, these capabilities allow businesses to verify who is behind the automation and take action based on the gathered data—not just that automation exists.

Key use cases of Know Your Agent (KYA)

AI agent verification provides control, accountability, and proportional response to automated behavior in the industries where automation is unavoidable, such as:

Fintech & Payments

  • One-off mass payments
  • Automated transaction flows
  • Card draining vs legitimate user scripting
  • High regulatory and fraud sensitivity

E-commerce & Ticketing

  • Ticket scalping and resale
  • Automated purchases
  • Friendly fraud and automated chargebacks
  • Mixed traffic, where automation can be both abusive and legitimate

Automation is everywhere across these industries; the real challenge is smart control and proportional response. Sumsub helps businesses achieve exactly this.

Conclusion

Depending on the risk level assessed through Know Your Agent, targeted responses may include liveness checks, payment method confirmation, and step-up verification proportional to transaction value or observed behavior. This approach delivers better outcomes on both sides. When “bad” automation is identified—such as fraud, abuse, resale, draining, or testing—the activity becomes attributable to a real person, enabling effective enforcement, investigation, and deterrence. When “good” automation is detected—legitimate users simply trying to save time—those users can complete tasks faster without unnecessary friction or a degraded user experience.

AI agents are becoming an inseparable part of digital and financial systems, and that shift requires identity and access controls. AI agent authentication and authorization are foundational for deploying autonomous AI safely, responsibly, and at scale.

By adopting Know Your Agent and general AI agent verification principles, companies enforce meaningful guardrails and unlock the benefits of autonomous AI without sacrificing security or compliance.

FAQ

  • What is an AI agent?

    An AI agent is a software system that can autonomously plan, decide, and execute actions across digital systems to achieve a goal, often without continuous human input.

  • What is Know Your Agent?

    Know Your Agent (KYA) is a framework for verifying, governing, and holding AI agents accountable by ensuring their actions are authorized, auditable, and bound to trusted identities.

  • What is AI agent authentication?

    AI agent authentication is the process of verifying an agent’s identity using machine-to-machine credentials such as cryptographic keys, tokens, or certificates before granting system access.

  • What authentication methods are used for AI agents?

    AI agent authentication commonly relies on OAuth 2.1 with the client credentials flow for secure machine-to-machine access and fine-grained authorization, mutual TLS (mTLS) authentication for high-security and regulated environments, and API key authentication for simpler, low-risk use cases. The appropriate method depends on security requirements, compliance obligations, and the sensitivity of the agent’s actions.

  • What is AI agent authorization?

    AI agent authorization determines what an authenticated agent is allowed to do, including which actions it can perform, which APIs it can access, and under what conditions.

  • What authorization models are used for AI agents?

Common authorization models for AI agents include RBAC (role-based access control), where permissions are grouped into roles; ABAC (attribute-based access control), which evaluates multiple attributes before granting access; and ReBAC (relationship-based access control), which determines access based on relationships between entities. When comparing RBAC vs ABAC, many organizations adopt hybrid approaches to balance simplicity with flexible, context-aware control.

  • How does MCP authorization work?

Authorization in the Model Context Protocol (MCP) uses OAuth 2.1 with PKCE for AI agent interactions, enabling dynamic authorization discovery and secure token exchange. By combining MCP authentication and authorization, MCP security allows AI agents to access tools and services with delegated permissions, without relying on long-lived secrets.

  • How are digital wallets used for AI agent identity?

    For AI agent identity, digital wallet identity systems store reusable digital identity credentials for frictionless, bot-initiated account opening, including interactions with open banking API environments. These wallets can hold new credential forms that define permitted activities and transaction guidelines, helping bind automated behavior to verified identities and enforceable policies. This model is still emerging rather than universally deployed today—especially in regulated financial systems.

  • Which KYA solutions are available on the market?

Several vendors offer Know Your Agent (KYA) solutions focused on verifying AI agent identity. However, Sumsub is the only one providing human binding, which links an AI agent to a verified human identity. The most secure approach to KYA is not just authenticating the agent itself, but also establishing and verifying the real human behind the agent, which provides accountability, trust, and stronger risk prevention.