The Future of AI Regulation in Southeast Asia


Single Panel


Session 1
Tue 09:30-11:00 REC A2.05



In April 2021, the European Commission proposed the Artificial Intelligence (AI) Act, the first comprehensive framework to regulate the development and use of AI in the European Union (EU). The proposed Act adopts a risk-based approach: the legal obligations imposed on value-chain participants depend on the level of risk the AI system poses. It establishes rules on data quality, transparency, human oversight, and accountability, and addresses the ethical aspects of AI as well as the challenges of implementing the legislation across a wide range of industries. The Act is expected to be adopted in 2024 and to enter into force in late 2025 or 2026. Once adopted, it would represent the world's first comprehensive set of rules on AI.

Governments in Southeast Asia have been cautious in deciding whether to adopt AI regulation. While some have issued AI guidelines and national policies, no government in the region has enacted a comprehensive AI regulatory framework. Several factors explain this caution. In particular, competing interests, such as the need to facilitate innovation while ensuring the responsible and trustworthy use of AI, are steering countries in the region towards a collaborative approach to AI governance rather than a prescriptive, rule-based one.

The panel critically explores AI policymaking and governance trends in Southeast Asia. It examines the AI guidelines and ethical principles already in place (against the backdrop of the proposed EU legislation) and explores the rationales behind the region's approach to AI governance. Given that adoption of the proposed EU legislation is bound to put pressure on other countries to follow suit, the panel will also consider the road ahead for Southeast Asia. Questions include: Is regulation inevitable? Is legislation always the best response? What safeguards are already in place? And if regulation is indeed the way forward, should it focus only on addressing and managing the risks of AI, or might AI legislation also encourage, promote, and grow what will be beneficial for the economy and society?