Expert Corner: AI Regulations in APAC in 2024
Raymond Sun, a LinkedIn 'Top Voice' on AI regulation, discusses the current state of AI regulation in the region, exclusively for The Sumsuber Expert Corner.
AI's potential to transform industries is undeniable. From healthcare and finance to manufacturing and agriculture, AI applications are rapidly changing the way we live and work.
The Asia-Pacific (APAC) region is a particular growth area for AI. According to Market Data Forecast, the APAC AI market is predicted to grow at a compound annual growth rate of 39.94% from 2024 to 2029, with the regional market size expected to reach USD 356.13 billion by 2029 (up from USD 66.38 billion in 2024).
In parallel with this market growth, some APAC governments are working on regulations to mitigate risks and issues around AI, such as privacy and misinformation. Here's a quick update on what's happening in the APAC region (plus the US), which has so far seen a diverse range of approaches: some countries have comprehensive AI regulations, while others take a lighter, piecemeal approach.
“AI” encompasses a spectrum of technologies, from narrow predictive algorithms used in finance to large language models embedded in chatbots that can deliver human-like conversations.
But across the broad spectrum of AI applications, common issues tend to arise around misinformation, privacy violations, intellectual property breaches, biases and discrimination.
Without any sort of mitigations, these issues can cause real harm to individuals (and even broader society).
Governments have a role in using regulation to lessen these AI-related risks. Given the breadth of the AI field, different applications may require different types and levels of regulation.
Clear regulations can foster public trust in AI, encouraging wider adoption and responsible innovation.
But governments need to find the right balance of regulation that sufficiently mitigates risks without slowing down innovation and industry. Each government has its own approach to this balance based on its political, economic and cultural circumstances. We can see these different approaches across APAC.
Let's start with China, the only APAC country that already has comprehensive AI-specific laws in place.
Currently, China has enacted laws on internet recommendation algorithms, "deep synthesis technology" (which covers deepfakes), and generative AI.
Notice that China is regulating specific applications of AI one by one, prioritising applications that are likely to spread misinformation.
This is why these regulations share similar concepts and mechanisms, such as a database for registering algorithms, watermarking and labelling requirements, pre-deployment security assessments, and prohibitions on using these AI applications to harm societal values and individual rights. These regulations are relatively strict by global standards, and China plans to further standardise its domestic AI industry.
China's data protection law also regulates automated decision-making (ADM) systems, requiring them to be transparent, fair and just, and to allow individuals to opt out of such systems.
Despite its leadership in AI research and development, the US has a rather messy patchwork of laws and voluntary frameworks on AI, across both federal and state levels.
But for now, the only development that you need to know is President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Order). In the US, the President has the power to issue these "executive orders" to direct federal agencies to take certain actions. Under this Order, various US government departments have been working on implementing regulations.
These initiatives mostly affect Big Tech and the US public sector, rather than SMEs in the private sector.
But there are some proposals on the horizon. Deepfakes are a hot topic lately. Recent high-profile incidents, such as deepfake pornography of Taylor Swift and scam calls using an AI-generated voice purporting to be President Biden, have highlighted concerns around the misuse of deepfakes to spread misinformation, defame others or violate privacy. AI-generated political content and advertising intended to mislead or sway voters also pose a modern challenge to elections. Federal and state governments have thus been stepping up to control nefarious deepfakes. For example, US federal lawmakers have proposed a bill to protect individuals from non-consensual deepfakes of themselves. Meanwhile, states like California, Virginia, Texas and Michigan have laws criminalising deepfakes used for pornography, political purposes, or both.
Given these separate initiatives, the US is on the path to creating a more comprehensive regulatory environment for AI.
Australia has no dedicated law on AI yet, though it has a framework of AI ethics principles which is mandatory for the federal public sector (but voluntary for everyone else).
Apart from that, Australia relies on existing laws to address specific issues/harms of AI, with recent reforms or developments happening in the privacy, online safety and cybersecurity spaces.
However, in January 2024, in its response to the public consultation on AI regulation, the Australian Government confirmed that it will seek to regulate high-risk AI applications and develop AI safety standards and toolkits for businesses.
Singapore also doesn't have any AI-specific law yet.
For now, Singapore's telco and privacy authorities have been driving Singapore's position on AI, including the release of "Model AI Governance Frameworks" (2020 version and 2024 version) that provide detailed and readily implementable guidance for businesses on how to responsibly develop, deploy and use AI.
Singapore is also known for its practical initiatives, being one of the first countries to roll out software toolkits (e.g. AI Verify) and catalogues (e.g. LLM Evaluation Catalogue) that help businesses implement AI safety at an operational and technical level.
On the fintech side, Singapore's financial regulator, the Monetary Authority of Singapore, has released responsible AI toolkits for the finance sector and is currently working with industry players to develop a generative AI framework.
As a rising powerhouse in tech, India initially favoured limited regulatory intervention in the AI sector. In April 2023, the Indian Ministry of Electronics and IT announced that the Indian government would not consider regulating AI for the time being, due to concerns about stifling innovation.
However, over the past year, India has slowly shifted its position, gradually regulating certain aspects of AI. In August 2023, India enacted its first privacy legislation which imposes data fiduciary obligations on certain AI developers.
Since January 2024, the Indian government has also said it's considering amending its existing IT Act (which regulates electronic commerce) to include new rules around AI.
In parallel, India has been working on a bill called the Digital India Act, which proposes to replace the IT Act and regulate high-risk AI systems.
Most recently, in February 2024, India's IT Minister announced that the government was working on an AI regulatory framework, to be released in June/July 2024. While the actual contents of the framework have yet to be revealed, its announcement marks an overturn of the government's original position from April 2023.
Indonesia is still at the early stage of its AI regulatory journey. In December 2023, the Indonesian Ministry of Communication and Informatics released AI ethic guidelines in the form of a circular letter that was addressed to all public and private electronic system operators engaging in AI-based programming activities. In the same month, the Financial Services Authority issued its own set of AI ethical guidelines which apply to all fintech players in Indonesia.
In Malaysia, it is expected that within the first quarter of 2024 the Ministry of Science, Technology and Innovation will present a comprehensive framework for an AI code of ethics. Apart from that, however, Malaysia has had limited developments in the AI regulation space.
That said, Singapore, Malaysia and Indonesia are part of ASEAN, which recently released its guide to AI governance and ethics. The guide calls for alignment of AI frameworks across ASEAN countries, so it's likely Malaysia will eventually follow a similar trajectory to Indonesia and Singapore.
Over the past year, the APAC region has seen a diverse regulatory landscape for AI, with China leading in comprehensive application-specific laws, the US moving towards a cohesive framework, and other countries ranging from softer approaches to evolving positions.
As AI continues to reshape industries, the region is poised for ongoing regulatory developments. Striking the right balance between regulation and innovation will remain a key challenge, and will differ between countries based on their legal, economic and political circumstances.
But in the short term, these developments might stay fragmented and unaligned, as countries within the region devise their own competitive approaches to attract AI talent and businesses (especially with the intensifying AI competition between the US and China).
However, over the medium to long term, I expect increasing constructive collaboration between APAC governments to align their frameworks, as they recognise the mutual benefit of aligned regional regulation for trade and multinational businesses.