- Dec 05, 2025
- 22 min read
Comprehensive Guide to AI Laws and Regulations Worldwide (2026)
Discover the current and upcoming international AI laws and policies, and explore how they are set to transform the way businesses and users interact with AI.

Imagine Priya, an HR manager, who now screens resumes in minutes instead of spending hours manually reviewing each one, so she can focus on connecting with the best candidates. And Carlos, a money laundering reporting officer (MLRO), who, with AI flagging suspicious transactions in real time, detects fraud faster and spends less time sifting through endless reports.
There's also Mike, who now uses off-the-shelf AI tools to generate convincing deepfake IDs and fake documents in seconds, making scams easier to launch and harder to spot.
AI adoption has reshaped many industries, bringing both benefits and threats. More than a billion people globally are now using AI tools, according to DataReportal's estimate, while 88% of businesses have adopted AI technologies to some extent. This technology has rapidly become an essential part of everyday life, both at work and at home. It offers a range of benefits, including automating repetitive tasks, speeding up processes, and reducing the potential for human error.
Given the rapid pace at which AI has been adopted across various aspects of our lives, it is not surprising that governments and organizations have been scrambling to catch up. This urgency stems from criminals increasingly using AI for fraud and money laundering (with a 180% rise in sophisticated, AI-driven fraud in the past year) and from the growing need to protect people's fundamental rights from AI-related abuses such as discrimination, privacy violations, and opaque automated decisions.
However, AI can also be a vital tool for those fighting illicit activity, and it has numerous legitimate business applications. The truth is, AI is now critical for combating criminals, as it is the key to keeping up with the scale and sophistication of their latest tactics.
Let's explore how AI is being regulated globally, why this matters for businesses, and how they can remain compliant in 2026.
How is AI regulated worldwide?
72 countries currently have AI policies, according to the international database maintained by the Organisation for Economic Co-operation and Development (OECD). 27 of these countries are member states of the EU, which passed the EU AI Act in 2024. Various international organizations, including the African Union, Association of Southeast Asian Nations, G7, and the UN, are also developing AI initiatives.
However, in most cases, these policies have not yet been translated into legally binding regulations. While the EU is currently enforcing the world's first comprehensive AI regulation, some of the world's largest economies, including the US, appear to be adopting a different regulatory approach.
Getting AI regulation right is a big deal. The AI market is projected to be worth $4.8 trillion by 2033, according to the leading intergovernmental organization UN Trade and Development (UNCTAD). This level of economic opportunity means that when looking at how to regulate AI, precautions against its misuse must be balanced with the need to encourage innovation and positive uses.
This task is especially challenging when considering the fast-paced nature of AI technologies. For example, the EU AI Act was approved in 2024, but its provisions are not yet fully applicable, and new AI technologies have already emerged that the Act did not foresee. The challenge here is to craft "future-proof" regulations.
Another challenge is that different, non-cohesive AI regulations worldwide could generate legal uncertainty, making it harder for businesses to meet their compliance obligations with confidence.
Here are some of the best practices adopted in the past for developing a coordinated approach to AI:
- The OECD AI Principles. These principles are designed to "promote use of AI that is innovative and trustworthy, and that respects human rights and democratic values". They involve commitments to values such as inclusive growth, sustainable development, human rights, security, privacy, transparency, and accountability. They were first adopted in 2019 and updated in 2024.
- The G7 Hiroshima AI Process. This report was developed for the 2023 G7 Summit in Japan. It set out principles to "promote safe, secure, and trustworthy AI worldwide" and to provide guidance on the development and use of the most advanced AI systems. Specific measures suggested include calling on organizations to identify, evaluate, and mitigate risks; publicly report on AI systems' capabilities, limitations, and areas of use; and implement robust security controls. The report was developed based on feedback from G7 members and was published on September 7, 2023.
- The UN AI Advisory Body. Formed in 2023, this high-level group of leading AI experts from around the world was tasked with gathering and analyzing perspectives on AI, then making recommendations to the UN on the international governance of AI. Following an interim report in December 2023, the body published its final report containing its recommendations in September 2024. These recommendations include setting up twice-yearly policy dialogues for governments and key stakeholders, establishing an AI standards exchange to develop definitions and standards for the technology, and creating a capacity development network to increase the availability of expertise as well as compute and training data.
These efforts show a strong international desire to create a coordinated approach to AI. They are likely to influence AI regulations across the world in the coming years, although to what extent is yet to be seen and will undoubtedly vary significantly between countries and regions.
Next, we will take a more in-depth look at how different countries and regions around the world are tackling AI regulation.
How are different countries and regions regulating AI?
The EU has so far taken the lead on passing and implementing an AI-specific regulation. Other leading economies are also looking closely at the matter, with some working towards AI-specific regulations and others taking steps to ensure existing laws will be applied to AI.
Here are some of the most prominent measures to date:
🇪🇺 EU
In July 2024, the EU adopted its Artificial Intelligence Act (the EU AI Act), which is intended to ensure AI applications that can affect the lives of those residing in the EU are "safe, transparent, traceable, non-discriminatory and environmentally friendly".
To this end, the EU AI Act addresses four basic areas:
1️⃣ Banning AI systems that pose an unacceptable risk. This includes applications that utilize techniques such as cognitive behavioral manipulation, social scoring, and real-time, remote biometric identification in public spaces, e.g., facial recognition software, as described in Article 5 of the Act.
2️⃣ Managing the potential for harm from high-risk AI systems. Where an AI system has been assessed as posing a potential risk of negatively impacting people's safety or fundamental rights, based on the risk categories outlined in Article 6 and Annex III of the Act, it will be classified as high-risk. These categories include AI systems whose primary purpose is to manage and operate critical infrastructure, as well as to support employment, law enforcement, education, migration, asylum, and border control. Providers of such AI systems will need to comply with several regulatory requirements to lawfully commercialize them in the EU market. Deployers and other stakeholders related to high-risk AI systems also have regulatory obligations to follow, although these are less burdensome than those applicable to providers. High-risk AI systems will need to be registered in an EU database. However, the high-risk requirements are not yet enforceable at the time of writing.
3️⃣ Imposing transparency requirements on AI systems. These requirements apply to the providers and deployers of AI systems capable of generating synthetic content, who must disclose when content has been produced by such systems (e.g., through the use of watermarks). This ensures transparency and traceability: users should be aware when they are interacting with an AI, and when content is synthetic or manipulated (see the illustrative sketch after this list).
4️⃣ Encouraging AI innovation within the EU. National authorities must provide testing environments for AI companies that closely approximate real-world conditions. This is intended to stimulate AI development by encouraging small and medium-sized enterprises (SMEs) to compete in the market.
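To make the transparency obligation more concrete, here is a minimal, purely illustrative sketch (not a mechanism mandated by the Act; the key name and label format are assumptions) of how a provider might attach both a human-readable disclosure and a machine-verifiable provenance tag to synthetic text:

```python
# Illustrative only: a hypothetical disclosure plus provenance tag for
# AI-generated text. The Act requires disclosure/marking of synthetic
# content; the exact technique (e.g., a watermarking standard) is not
# prescribed here.
import hashlib
import hmac

PROVIDER_KEY = b"example-provider-secret"  # hypothetical signing key

def label_synthetic_text(text: str) -> dict:
    # A keyed hash lets a verifier holding the same key confirm that the
    # content and its AI-generated label have not been tampered with.
    tag = hmac.new(PROVIDER_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return {
        "content": text,
        "disclosure": "This content was generated by an AI system.",
        "provenance_tag": tag,
    }

labeled = label_synthetic_text("Example AI-generated paragraph.")
print(labeled["disclosure"], labeled["provenance_tag"][:12])
```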
⏳ EU AI Act implementation timeline
The EU AI Act was published in July 2024 and came into force on August 1, 2024. However, the Act's provisions did not all become legal requirements straightaway. Instead, they are being phased in over several years.
The prohibition against AI systems that pose an unacceptable risk took effect from February 2, 2025. Rules regarding notified bodies, general-purpose AI (GPAI) models, governance, confidentiality, and penalties for non-compliance took effect on August 2, 2025.
Almost all remaining provisions of the Act were originally due to become enforceable on August 2, 2026, but that date is now under discussion, considering the European Commission's (EC) Digital Omnibus proposal. If the European Parliament approves the EC's proposal, the provisions related to high-risk AI systems will become enforceable only once the necessary standards and guidelines aimed at facilitating compliance with the Act are published, though no later than December 2027.
International implications of the EU AI Act
While the EU AI Act only applies within EU member states, it will affect any international businesses and organizations that wish to operate in the EU. Essentially, they will need to comply with the EU regulations if they wish to do business there.
The EU AI Act is also likely to influence subsequent legislation from other countries. While they may not replicate the Act's terms exactly, other nations will closely monitor the impact of the EU's new rules for AI. The positive impacts and any problems that arise are almost certain to inform legislative approaches in other jurisdictions.
Future of the EU AI Act
One proposal already shaping the Act's future is the European Commission's new digital package, the Digital Omnibus mentioned above. It proposes amendments to various European regulations with the aim of reducing administrative and compliance workloads and streamlining these regulations so that they can be applied more cohesively, ultimately fostering innovation. The package introduces targeted amendments to the EU AI Act, including extending simplified requirements that currently apply to SMEs to larger companies in the small to mid-cap range (SMCs).
While these proposed amendments still need to be voted on in the European Parliament, this illustrates the pace of change in the AI regulatory landscape. Many provisions of the EU AI Act have not yet come into force, and already there are moves to amend the Act.
🇺🇸 US
The US does not currently have any federal AI regulations similar to the EU AI Act. However, a number of states have introduced or are working towards regulations for AI. In general, the Trump administration has taken a pro-AI, low-regulation stance, demonstrated on several occasions, such as the repeal of former President Biden's AI Executive Order and the proposal (now paused) to preempt state AI laws.
This positioning contrasts heavily with the states' hands-on approach towards AI regulatory matters. It is noteworthy, though, that despite the US taking a looser approach to AI regulation in favor of innovation, it has not stayed entirely hands-off. Different US administrations have sought to influence the production and distribution of components essential to AI systems, citing national security concerns.
For example, the Biden administration restricted chip access to what the US perceives as competing nations. In other words, while the US may not regulate AI in the same way as the EU, it does not mean that the US is inactive when it comes to AI regulation.
AI regulation at the US federal level
In July 2025, the Trump administration published its AI Action Plan, setting out its objectives for a wide range of issues relating to AI. Titled "Winning the Race", the plan's main focus is to allow the US to "achieve and maintain unquestioned and unchallenged global technology dominance".
The US AI Action Plan is made up of three "pillars", which are to accelerate AI innovation, build AI infrastructure, and lead in AI international diplomacy and security. It specifically talks about rejecting "bureaucratic red tape" but also about preventing misuse or theft of AI technology and monitoring for "emerging and unforeseen risk from AI". This appears to advocate for minimal AI regulation, with some scope for managing potential risks from the technology.
Previously, US Republican lawmakers attempted to secure a 10-year moratorium on states' ability to regulate AI. While they did not succeed, this attempt, combined with President Trump's comments about overregulation, suggests that the current US administration is leaning towards a light-touch approach to regulating AI companies.
AI regulation at the US state level
48 US states have some form of AI legislation in effect or due to come into effect in the near future. Only Alaska and Ohio do not currently have any AI laws in place, but bills related to AI have been proposed in both states.
Many of these laws are aimed at protecting the general public, such as:
- The California Bot Act of 2019. Prohibiting the use of online bots to deceive people for commercial purposes or to influence votes in elections.
- Colorado's regulation on Protecting Consumers from Unfair Discrimination in Insurance Practices of 2021. Preventing insurance providers from using AI technology that unfairly discriminates against customers based on factors such as race, sex, religion, or disability.
- New York's Local Law 144 of 2023. Regulating the use of automated decision-making tools by employers and employment agencies during hiring processes.
Others are intended to ensure transparency over state governments' use of AI systems, e.g.:
- Vermont's Act on the Use and Oversight of AI in State Government of 2022. Requiring an inventory of all automated decision systems developed, procured, or used by the state government, as well as creating governance functions for the technology. California also has a similar AI inventory rule for high-risk automated decision systems, as do Connecticut, Delaware, Maryland, New York, and Texas.
- California's rules for Law Enforcement Usage of AI. Setting rules around the use of AI to generate reports, along with data protection requirements.
- Kansas's AI Platforms of Concern law. Prohibiting the use of proscribed AI platforms, such as DeepSeek, by state agencies.
- Kentucky's Government Use of AI Law. Including measures such as requiring a state AI governance committee and establishing powers to set and maintain AI policy standards and procedures.
Additionally, California, Colorado, Utah, and Texas have all signed into law legislation to regulate AI governance in private companies, while New York and Virginia have passed AI governance bills that have not yet been signed into law. A further 20 states have introduced AI governance bills that are currently making their way through the legislative process and could become law in the near future.
It remains to be seen how these state-level rules might evolve and expand, as well as the impact that any federal government measures may have. But the range and breadth of these rules, as well as the differences between states, all add to the complex picture of AI regulation in the US today.
🇨🇳 China
China has been active in regulating AI, with many laws already in force. In 2017, it launched its "New Generation Artificial Intelligence Development Plan" with the goal of achieving "global AI leadership". Part of this strategy is to exert influence in the global realm of AI regulation.
Key AI regulations in China
Three major AI regulations have been introduced in China so far:
- Internet Information Service Algorithmic Recommendation Management Provisions (2022). This law was designed to address concerns around the role of algorithms in disseminating information online. Its requirements include the following: providers must register their AI models in China's publicly-owned Algorithm Registry; users must be warned when they are dealing with recommender algorithms; algorithms cannot be used for illegal or harmful purposes; and users must have the right to turn off targeted recommendations, delete tags used to personalize recommendations, and receive an explanation when an algorithm has a major impact on their interests.
- Administration of Deep Synthesis of Internet Information Services (2022). This law deals with concerns around deepfakes within the broader category of "deep synthesis", which includes any synthetically-generated media, not just that which is intended to deceive. It stipulates principles such as that any deep synthesis content must "adhere to the correct political direction", must not "endanger national security and interests, damage the country's image, infringe on social and public interests, or disrupt the economy and social order", and that deep synthesis content must be clearly marked if it has the potential to "cause confusion or mislead the public".
- Administrative Measures for Generative Artificial Intelligence Services (2023). This law focuses on public-facing generative AI services. It places limits on AI services that might "incite subversion of state power" or "overthrow the socialist system", requires that certain types of generative AI models be registered with the national Algorithm Registry and undergo security assessments, and requires that AI-generated content be labeled in accordance with the requirements for deep synthesis (covered above). The law also sets out certain key requirements for providers to follow, including using data from legal sources, not infringing upon intellectual property rights, and obtaining proper consent for the use of personal information.
China's requirements for labeling AI content
In addition to the labeling requirements set out in the laws above, in March 2025, the Cyberspace Administration of China (CAC) published details of a new labeling requirement for AI-generated content. This requirement took effect on September 1, 2025, and obligates providers and distributors that create AI content to label it in both explicit and implicit ways. Explicit labels include indicators such as text, audio, and images that inform users that the content has been generated by AI. Implicit labels, on the other hand, include metadata embedded within AI-generated content that identifies the service provider and assigns a content ID.
Online distributors of content, such as social media platforms, must also put in place mechanisms to detect AI-generated content and enforce these labeling requirements.
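As a rough illustration of what an implicit label could look like in practice, the sketch below embeds a provider identifier and a content ID into a PNG image's metadata using the Pillow library. The field names are assumptions for illustration; they do not reflect the official Chinese technical standard.

```python
# Hypothetical sketch of an implicit label: provider ID and content ID
# embedded as PNG metadata. Key names are illustrative, not the official
# CAC / national-standard fields.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def add_implicit_label(src: str, dst: str, provider_id: str, content_id: str) -> None:
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text("AIGC-Provider", provider_id)  # who generated the content
    meta.add_text("AIGC-ContentID", content_id)  # unique content identifier
    meta.add_text("AIGC-Generated", "true")      # machine-readable flag
    img.save(dst, pnginfo=meta)

# Usage (file paths are placeholders):
# add_implicit_label("generated.png", "labeled.png", "provider-001", "c0ffee42")
```

A detector on a distribution platform could then read the same metadata fields back to flag unlabeled or mislabeled uploads.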
Creating a "Clean Internet"
In February 2025, the CAC announced a series of special enforcement measures to help create a "Qinglang" (Clean Internet). This includes various actions, one being "rectifying the abuse of AI technology," which is focused on generative AI content. Specific provisions included strengthening the identification of AI-generated content, as well as clamping down on the use of AI for purposes such as creating and disseminating false information and online trolling.
This, again, highlights concerns relating to transparency about when content is AI-generated, plus the potential for AI tools to produce content that the Chinese government considers harmful.
Guidelines for responding to generative AI security incidents in China
In December 2024, China's National Technical Committee 260 on Cybersecurity published a set of draft Guidelines for Emergency Response to Generative Artificial Intelligence Services. This is part of a public consultation and provides non-binding guidance for providers of generative AI services on how to classify and respond to security incidents involving this technology.
The Guidelines set out ten types of security incidents involving generative AI, including cyberattacks, data breaches, and information security incidents. A system of four levels of incident has also been proposed, from general incidents all the way up to significant incidents. The Guidelines outline a four-stage response to generative AI security incidents, which are preparing, monitoring, emergency handling, and reviewing incidents once they have occurred.
Once the consultation process from these draft guidelines is concluded, they may form the basis of regulatory requirements going forward.
China's AI plans
In 2016, China published its Three-Year Guidance for Internet Plus AI plan. This set objectives to enhance the country's AI hardware capacity, create an AI technology market worth billions of dollars, build platforms for key AI resources and innovation, and promote AI applications in targeted socioeconomic areas.
The following year, China produced its National New AI Generation Plan. This aimed to bring China's AI industry in line with international competitors by 2020, make China a world leader in certain AI fields by 2025, and establish the country as the primary global center for AI innovation by 2030.
Most recently, in 2021, China revealed its latest Five-Year Plan for National Economic and Social Development, which has significant implications for AI in the country. While the plan does not specifically mention AI, it did include provisions to increase research and development spending by up to 7% each year, boost the digital economy to a 10% share of GDP, improve information infrastructure, and enhance data capabilities. According to one report, China has invested $6.1 billion in data center projects, suggesting that this could be a key part of the current five-year plan.
The impact of China's AI plans
China is currently second only to the US in AI investment and leads the world in AI data center clusters, with 230 compared to 187 for the US in second place, and just 18 for France in third place. This suggests its approach to AI is paying off.
However, China is ranked only 7th for global computing power, which is something it is likely to want to address in the near future. China's next five-year plan will be published in 2026 and may contain important insights into the future direction of its efforts in this sector.
🇬🇧 UK
The UK does not currently have any AI-specific regulations in place, but it has been looking at the issue for a number of years. As the world's third-largest AI market, the UK has a lot at stake when it comes to getting regulation of the sector right.
The current landscape for UK AI regulation
At the moment, the UK is taking a sector-by-sector approach to AI regulation. This means that existing regulators are required to interpret, implement, and enforce key principles around the use of AI in their sectors, as set out in the previous UK government's whitepaper "A pro-innovation approach to AI regulation".
The UK is also actively looking at how to deal with the AI sector. The government-backed AI Security Institute (AISI) is dedicated to researching AI and building infrastructure, with the goal to "understand the capabilities and impacts of advanced AI and to develop and test risk mitigations". Its findings are likely to influence the future of AI regulation in the UK.
Will the UK create a dedicated AI regulation?
The current Labour government in the UK promised to introduce a limited AI law as part of its 2024 election manifesto. However, these plans have now been delayed with no firm timeline in place for when such legislation might be introduced.
An Artificial Intelligence (Regulation) Bill has been proposed in the UK as a Private Member's Bill by opposition politician Lord Holmes of Richmond, but it does not originate with the current UK government, so there is no guarantee it will ever become law.
Setting a direction of travel for AI regulation in the UK
The previous UK government published a National AI Strategy in 2021, setting out a "ten-year plan to make Britain a global AI superpower". This followed on from an earlier AI Sector Deal, published in 2018 but subsequently withdrawn by the new UK government.
The UK's National AI Strategy aimed to invest in and plan for the long-term needs of the AI sector, support the transition to an economy capitalizing on AI, and develop effective national and international governance standards.
Now the current UK government has published its own AI Opportunities Action Plan as part of its broader Industrial Strategy. This plan is based on three pillars that are intended to support the growth of AI, increase its uptake in the private and public sectors, and keep the UK ahead of other countries internationally.
These pillars involve:
- Creating "AI Growth Zones". These are designed to speed up planning approval for data centres, ensure they have the energy they need, and attract international investment.
- Setting up a new digital centre of government. This will sit within the government Department for Science, Innovation and Technology (DSIT). It will look for new ways to integrate AI in the public sector and encourage adoption in the private sector.
- Establishing a team to keep the UK at the forefront of new AI technology. They will have the task of ensuring the UK is an attractive place for AI companies to do business, for example, by guaranteeing access to key resources such as energy and data.
While this latest UK government initiative makes a lot of promises about creating an environment that is friendly to AI companies, this has not yet translated into any legislation placed before Parliament. It could, therefore, be years before any regulation changes are in place.
Balancing innovation against IP in the UK
Another area the UK government is exploring is the relationship between AI systems and copyright laws.
Many creatives are worried about the possibility that AI systems may have been trained on their copyrighted work without their permission. For example, 59% of UK novelists believe generative AI has been trained on their work without permission or remuneration.
However, AI innovators often argue that their systems cannot be created cost-effectively without free access to this type of data. In the US, it has been argued that restricted access to copyright-protected works for the purposes of training AI would "weaken America's technological edge and slow down innovation".
In an attempt to balance these competing interests, the UK government carried out a consultation on copyright and AI in 2024. The consultation aimed to support copyright holders' control over, and right to payment for, the use of their works, while enabling the development of world-leading AI models and promoting increased transparency and trust between the creative and technology sectors.
Proposed approaches arising from this consultation include a data mining exception and rights reservation package. This would allow AI models to use copyright-protected works for training purposes where they have lawful access to those works, e.g., if they are legally available online or via a subscription to a service such as an online library. Rights holders would have the option to reserve their rights, meaning a license would be required for their works to be used for training AI.
Again, the recommendations put forward following the consultation have not yet been enacted into legislation, so it remains to be seen what final form the rules on the issue might take. However, given the high profile of the conflict and the potential financial rewards at stake, the matter is unlikely to simply go away.
🇸🇬 Singapore
Singapore does not have any AI-specific regulations yet, but it has previously published two national AI strategies, the first in 2019 and the second in 2023. It has also introduced guidance on AI governance and developed an AI governance testing framework, invested heavily in AI-friendly infrastructure, and participated in the Association of Southeast Asian Nations (ASEAN) protocols on AI governance and ethics.
Current AI initiatives in Singapore
Project Moonshot is a toolkit launched in 2024 by the government-backed AI Verify Foundation. It is an open-source tool designed for benchmarking, red-teaming, and testing baselines in relation to large language models (LLMs). The toolkit allows developers to test LLMs on various core competencies, such as language and context understanding.
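To give a flavor of what baseline benchmarking of an LLM involves, here is a generic, self-contained harness. This is not the Project Moonshot API; the function and scoring method are assumptions for illustration only.

```python
# Generic illustration of LLM baseline benchmarking, in the spirit of
# toolkits like Project Moonshot. `model` is any callable mapping a
# prompt to a response; scoring is a naive keyword check.
from typing import Callable

def run_baseline(model: Callable[[str], str], cases: list[tuple[str, str]]) -> float:
    hits = 0
    for prompt, expected_keyword in cases:
        if expected_keyword.lower() in model(prompt).lower():
            hits += 1
    return hits / len(cases)  # fraction of test cases passed

# Usage with a stub model standing in for a real LLM:
stub = lambda prompt: "Paris is the capital of France."
print(run_baseline(stub, [("What is the capital of France?", "paris")]))  # 1.0
```

Real toolkits replace the keyword check with curated benchmark datasets and red-teaming attack modules, but the workflow is the same: run prompts, score responses, aggregate.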
In 2024, Singapore's Personal Data Protection Commission released Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems. These clarified expectations around the use of personal data for the purposes of AI development, including issues such as seeking consent from consumers for the use of their data, and supported businesses in complying with the terms of the Personal Data Protection Act 2012 (PDPA).
Singapore's national AI strategies
2019's National AI Strategy 1.0 outlined plans for increasing the use of AI to transform Singapore's economy. Its objectives included making Singapore a leader in scalable, impactful AI technology through five national AI projects. These focused on intelligent freight forwarding, predicting and managing chronic disease, border clearance operations, municipal services, and personalized education.
This strategy was refreshed in 2023 into the National AI Strategy 2.0, partly in response to the growth of generative AI. Concerns the second version of the strategy sought to address include potential safety and security risks from generative AI applications, as well as international competition for the resources and talent needed for AI innovation. The refreshed strategy set new objectives to "selectively develop peaks of excellence in AI, to advance the field and maximise value creation" and to "raise up individuals, businesses, and communities to use AI with confidence, discernment, and trust".
These strategies reveal a strong commitment to being an international leader in AI while also encouraging adoption of the technology by businesses and the general public. Singapore is also taking real steps to make its strategy objectives a reality.
How Singapore is investing in AI-friendly initiatives
In 2024, the nation committed to a S$100 million investment (~US$77.2 million) in upgrading its National Broadband Network to support innovations and opportunities from digital technologies, including AI. In 2025, Singapore's government announced plans to build an AI-fluent workforce to support growth in its digital economy, reskilling and empowering workers to create "better, safer jobs" for more people. Also in 2025, Singapore introduced new tools to support businesses with data protection and allow them to test real-world applications in an Expanded Global AI Assurance Sandbox.
These investments in facilitating the growth of AI could give Singapore a fighting chance of being a future world leader in AI.
Guidelines on AI best practices in Singapore
Singapore is taking a leading role in setting AI governance standards. This includes publishing a Model AI Governance Framework for Generative AI in 2024, which introduces nine proposed dimensions to support a "comprehensive and trusted AI ecosystem". These dimensions cover issues such as accountability, data quality and training data, incident reporting, security, and harnessing AI for the public good. The goal is to inform AI providers and deployers about governance best practices, as well as to help auditors confirm that such governance is in place.
AI companies in Singapore can also take advantage of the Global AI Assurance Pilot tool released by the AI Verify Foundation, based on emerging regulations and best practices, which allows for the technical testing of generative AI applications. This should help AI innovators proactively tackle potential compliance issues and provide assurance that they are meeting good governance standards.
As part of the Association of Southeast Asian Nations (ASEAN), Singapore has also played a role in developing the ASEAN Guide on AI Governance and Ethics. This sets guiding principles for organizations aiming to design, develop, and deploy AI technologies in the region. These principles include transparency and explainability, fairness and equity, security and safety, privacy and data governance, and accountability and integrity.
This strong focus on AI governance and ethics in Singapore should act as a sign to those looking to invest in the sector that these are issues the national government takes seriously. As such, they may well form the basis for future regulations and compliance obligations.
What are some of the key issues global AI regulations are looking to address in 2026 and going forward?
Some of the key areas of regulatory concern for governments and organizations around the world are:
- Supporting innovation. Including policies to empower research and development of AI tools, such as the EU AI Act requirement for nations to provide companies with suitable AI testing facilities. Also taking into consideration concerns about over-regulation that could stifle innovation, such as those voiced by the US Trump administration.
- Governance and accountability. Such as the EU AI Act's requirement for a risk-based approach, which obliges organizations to put in place suitable risk assessment and mitigation systems based on the level of risk posed by different types of AI applications. Principle 5 of the G7 Hiroshima AI Process similarly calls for AI governance and risk management policies, including a risk-based approach. Accountability means ensuring that developers and implementers of AI systems can be held responsible for their proper functioning and compliance with regulatory requirements. This is covered in Article 17 of the EU AI Act, which includes a requirement for accountability frameworks setting out what responsibilities people within an organization have for regulatory compliance.
- Fraud prevention. For example, the prohibition on using AI bots to deceive people for commercial purposes or to influence elections under the California Bot Act, and Italy's Law No. 132/2025, which contains provisions such as criminalizing the spreading of AI-generated or manipulated content (such as deepfakes) that causes harm.
- Transparency. Including the OECD AI Principle 1.3 on transparency and explainability. This recommends that entities designing, deploying, and using AI systems should be open and clear about issues such as their systems' capabilities and limitations; ensure people are aware when they are interacting with AI; make available the sources of data, factors, processes, and logic used by these applications; and inform anyone adversely affected by AI systems how they can challenge this. The EU AI Act imposes transparency requirements on high-risk AI systems and on the providers and deployers of AI systems that can generate synthetic content, such as the need to inform people when they are interacting with AI systems and to label AI-generated content, while China has also introduced measures for labeling AI content.
- Data protection and hygiene. Such as Principle 11 of the G7 Hiroshima AI Process, which recommends measures to protect data collected and used by AI, including preserving data quality, mitigating risks from biased data sets, and offering transparency around training data. Meanwhile, the European Commission's new digital package is looking to simplify rules around AI, cybersecurity, and data to boost innovation while ensuring that different EU regulations interact cohesively to protect data privacy in AI systems.
- Intellectual property. As is being looked at through measures such as the UK's Copyright and AI consultation, which seeks to balance the rights of IP holders against the needs of AI innovators to affordably access high-quality data.
- Security. For example, OECD AI Principle 1.4, which recommends that AI systems should be "robust, secure and safe throughout their entire lifecycle" to ensure they do not pose security and safety risks through "normal use, foreseeable use or misuse, or other adverse conditions".
- Discrimination. Such as the EU AI Act's focus on mitigating discrimination and bias in high-risk AI systems, the New York State Senate Bill S1169A, which aims to prevent "algorithmic discrimination" by AI systems, and Colorado's law to protect customers from discrimination by AI tools used for automated decision-making by insurance companies.
- Sustainability. Focused on issues such as fostering inclusive growth, promoting social inclusion and equality, and mitigating the potential environmental impacts of AI technology, such as managing surging energy demands from the sector. This is addressed in OECD AI Principle 1.1, which promotes "responsible stewardship of trustworthy AI" with goals such as "advancing inclusion of underrepresented populations, reducing economic, social, gender, and other inequalities, and protecting natural environments".
AI agents: Friends or foes?
"AI agents" are AI-powered programs that act autonomously, without human intervention. While they can be used legitimately (for example, to automate transactions, manage portfolios, support compliance, or serve as powerful allies in fraud prevention by analyzing large volumes of data, adapting to new tactics, and identifying vulnerabilities in real time), they are increasingly being exploited by criminals. Fraudulent AI agents, first detected in 2025, can generate fake IDs and documents, mimic behavior to bypass verification checks, and learn from failed attempts, making them increasingly sophisticated.
The growing use of AI agents raises the question of whether existing AI-specific regulations and KYC rules will need to evolve to address these new AI tools, especially when leveraged by criminals. For now, KYC/AML rules do not regulate AI agents directly, but entities using them may still have compliance obligations.
AI-specific regulations might target agents themselves, but this is still a debated area with no global consensus.
Suggested read: Know Your Machines: AI Agents and the Rising Insider Threat in Banking and Crypto
Why does AI regulatory compliance matter for businesses?
AI regulatory compliance can help businesses protect their customers and the integrity of their platforms. It can also protect their reputations and, of course, avoid the risk of regulatory penalties and other enforcement actions.
The penalties for getting AI compliance wrong can be massive. Under Article 99 of the EU AI Act, non-compliance can result in fines of up to €35 million (~$40 million) or up to 7% of an undertaking's total worldwide annual turnover for the previous financial year. The fine will be based on whichever is the higher of these two amounts.
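As a minimal sketch of that "whichever is higher" rule, note that the cap scales with company size:

```python
# Sketch of the EU AI Act Article 99 maximum-fine logic described above:
# up to EUR 35 million or 7% of prior-year worldwide turnover,
# whichever is higher.
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * annual_worldwide_turnover_eur)

print(max_fine_eur(100_000_000))    # 35000000.0 -> the EUR 35m floor applies
print(max_fine_eur(1_000_000_000))  # 70000000.0 -> 7% of turnover is higher
```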
Businesses will also likely have other compliance obligations that are not directly related to AI but that are impacted by AI-powered techniques used by criminals (including AI agents). For example, in many countries, obliged entities have a duty to prevent fraud, such as under the UK's Economic Crime and Corporate Transparency Act 2023 (ECCTA). If regulated businesses fail to properly protect customers from criminals (a task that in many cases involves the use of AI tools), this could be deemed a compliance breach and lead to regulatory action. It could also amount to a breach of AI regulations if those AI tools are not developed and deployed in a compliant manner.
The reputational harm that can be caused by the misuse of AI tools can also be significant. For example, HR and finance software company Workday is facing legal action from job candidates who claim the AI tool the business used to screen job applicants discriminated against those aged 40 and above. While Workday has denied the claims, this type of situation has the potential to be very damaging, especially where breaches of rules around automated decision-making and other AI processes are involved.
It is therefore crucial that businesses stay on top of the changing AI regulatory landscape. They can then ensure they have the right compliance framework in place to effectively manage any AI regulatory risks.
AI on your side
Against the backdrop of uncertain AI regulations, it's always great to have AI working on your side.
AI-powered tools are now revolutionizing anti-money laundering (AML) and compliance. This is helping a wide range of businesses and organizations to stay ahead of the criminals and meet their regulatory obligations.
Sumsub's solutions integrate AI in a variety of innovative ways. Some of these include:
- Our Advanced Case Management tool, featuring Summy the AI Assistant, which streamlines and unifies risk operations
- Our KYC and KYB solutions that deploy AI to automate user onboarding, making these processes faster, more accurate, and more cost-effective
- Our AI-powered Transaction Monitoring that allows for more flexible, high-volume checks than traditional methods, helping to spot signs of financial crime faster and more accurately.
Whether you need help picking up sophisticated fraud patterns in real time and rapidly identifying fake identities and documents, want to speed up processes, improve their accuracy, and reduce workloads, or want to ensure continuous compliance across jurisdictions, AI can be the answer. Given the changing regulatory landscape for AI that businesses and organizations need to navigate, getting a little help from intelligent automation can make your life much easier.
FAQ
Which countries have AI regulations?
Only the EU has so far passed a comprehensive AI regulation. Some US states have introduced their own local regulations into law, but these could be impacted by the federal government's approach to AI regulation, which has not yet been set into law. China has also introduced limited regulation of AI technology, chiefly relating to the labeling of content produced by generative AI.
Which countries are looking at introducing AI regulations in the near future?
Excluding the EU member states already covered by the EU AI Act, there are 45 countries that currently have initiatives on AI. These are Algeria, Argentina, Armenia, Australia, Brazil, Canada, Chile, China, Colombia, Costa Rica, Egypt, Iceland, India, Indonesia, Israel, Japan, Kazakhstan, Kenya, Korea, Malaysia, Mauritius, Mexico, Morocco, New Zealand, Nigeria, Norway, Peru, Rwanda, Saudi Arabia, Serbia, Singapore, South Africa, Switzerland, Thailand, Tunisia, Türkiye, Uganda, Ukraine, United Arab Emirates, UK, US, Uruguay, Uzbekistan, Vietnam, and Zambia.
How can businesses comply with AI regulations?
Businesses will need to understand the relevant AI regulations in the jurisdictions where they operate and carry out thorough risk analyses to establish how these rules apply to their operations. They will then need to put in place processes and risk frameworks to manage any compliance risks.
Where can I learn more about AI regulations?
There are various resources you can use to keep up to date with AI regulations. These include the OECD AI Policy Navigator, private resources such as global law firm Orrick's US AI Law Tracker, and, of course, The Sumsuber.
What is Sumsub anyway?
Not everyone loves compliance, but we do. Sumsub helps businesses verify users, prevent fraud, and meet regulatory requirements anywhere in the world, without compromises. From neobanks to mobility apps, we make sure honest users get in, and bad actors stay out.


