
An Indo-Pacific Perspective on AI Safety

Artificial intelligence (AI) is rapidly transforming the Indo-Pacific region, with the potential to revolutionize industries, enhance healthcare, and improve the lives of billions of people. However, as AI becomes more powerful and pervasive, it is important to ensure that it is used safely and responsibly. In April 2021, the Quad launched the Critical and Emerging Technologies Working Group (CETWG) to promote cooperation on critical technologies, including AI. The CETWG has identified several priority areas for collaboration, which include:

  • Developing principles for the responsible development and use of AI

  • Sharing information and expertise on AI safety

  • Promoting research and development in AI safety

The Quad's work on AI safety is a welcome development, but it is important to recognize that a comprehensive approach is needed to address the complex and interconnected challenges of AI safety, which cannot be met by efforts confined to a limited set of geographies.

Analysing varied approaches to AI regulation in key jurisdictions, this article explores the challenges and opportunities of AI safety in the Indo-Pacific region and discusses the need for a coordinated approach to addressing these issues.


The Chinese Approach to AI Regulation

China's three principal regulations on artificial intelligence (AI), covering recommendation algorithms, deep synthesis, and generative AI services, are characterized by a strong emphasis on government control and data ownership. This is reflected in the following key features of these regulations:

Maximality: The Chinese government takes a maximalist approach to regulating AI, seeking to cover all aspects of AI development and deployment. This is evident in the comprehensive scope of the regulations, which address everything from data privacy to algorithm ethics.

Micromanagement: The Chinese government adopts a micromanaged approach to regulating AI, seeking to control the details of how AI systems are developed, deployed, and used. This is evident in the specific requirements imposed on AI service providers, such as the need to obtain licenses and implement specific technical measures.

Public ownership: The Chinese government asserts public ownership over data and algorithms, treating them as public resources that must be managed and controlled by the government. This is reflected in the requirement for AI service providers to register with the government and obtain licenses, as well as the government's right to access and control AI data.


The Biden Administration’s Executive Order on AI

The United States Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is a comprehensive and forward-looking document that outlines several important principles and requirements for the responsible development and use of AI. Its key features include:

  • The Executive Order emphasizes the need for robust and reliable evaluations of AI systems, including post-deployment performance monitoring. This is an important step in ensuring that AI systems are safe and effective in real-world settings. 

  • The Executive Order also calls for the development of effective labeling and content provenance mechanisms to track and flag AI-generated content. This is important for transparency and accountability, and could help prevent the misuse of AI-generated content (a simplified sketch of such a mechanism follows this list).

  • The Executive Order provides a flexible and technology-conscious definition of AI, which helps capture the ever-evolving nature of AI technologies.  

  • The definitions of "synthetic content," "testbed," and "watermarking" provided in the Executive Order are clear and concise, ensuring a common understanding of these key terms in the context of AI systems.
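
To make the idea of labeling and content provenance more concrete, the following is a minimal illustrative sketch, not drawn from the Executive Order itself: a provider attaches a signed provenance record to AI-generated ("synthetic") content so that downstream platforms can flag and verify it. The `PROVIDER_KEY`, function names, and record fields are hypothetical.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key held by the AI service provider.
PROVIDER_KEY = b"example-provider-secret"


def tag_synthetic_content(content: str, model_name: str) -> dict:
    """Attach a simple provenance record to AI-generated content.

    The record carries the generating model, a timestamp, a content digest,
    and an HMAC signature so the record can later be verified.
    """
    record = {
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_tag(content: str, record: dict) -> bool:
    """Check that a provenance record matches the content and carries a valid signature."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("content_sha256") != hashlib.sha256(content.encode()).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


if __name__ == "__main__":
    text = "An AI-generated summary of the Quad's work on AI safety."
    tag = tag_synthetic_content(text, model_name="example-model-v1")
    print(json.dumps(tag, indent=2))
    print("verified:", verify_tag(text, tag))
```

Real-world provenance schemes (for instance, cryptographic watermarks embedded in the content itself) are considerably more sophisticated, but the sketch captures the basic idea the Executive Order points to: machine-readable, verifiable signals that content is synthetic.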


The European Union’s Artificial Intelligence Act

The Artificial Intelligence Act represents a comprehensive pan-European regulatory framework for AI systems. Key features of the Act include the classification of AI into risk-based levels (sketched in simplified form below), with stringent obligations applied to high-risk AI systems. The European Commission adopted a horizontal approach with a proportionate, risk-based methodology, choosing Option 3+ of its impact assessment (a horizontal instrument targeting high-risk AI, supplemented by voluntary codes of conduct) over Option 4 (mandatory requirements for all AI systems), aligning with the need for a balanced regulatory environment conducive to AI innovation.

The Act emphasizes the importance of human oversight, quality risk assessments by AI providers, and robust data governance practices. It addresses the intricate challenges of developing, testing, and monitoring high-risk AI systems, sets out detailed criteria and procedures for compliance, and aims to ensure ethical and safe AI development, safeguarding fundamental rights while responding to the particular requirements of the European AI landscape.
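
As a rough illustration of the Act's risk-based classification, the tiering logic can be pictured as in the sketch below. This is a simplification, not the Act's legal text; the tier descriptions and example use cases are indicative only, and the code names are invented for illustration.

```python
from enum import Enum


class RiskTier(Enum):
    """Simplified version of the AI Act's risk tiers (illustrative, not exhaustive)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: risk management, data governance, human oversight"
    LIMITED = "transparency obligations, e.g. disclosing that content is AI-generated"
    MINIMAL = "no additional obligations"


# Indicative examples only; the Act itself enumerates use cases in its annexes.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI used in recruitment or credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def obligations_for(use_case: str) -> str:
    """Look up the indicative tier for a use case and describe its obligations."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"


if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(obligations_for(case))
```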


The Indian AI Landscape

India's rapidly evolving AI landscape presents opportunities and challenges that demand a thorough re-examination of the nation's regulatory capacity. The proliferation of AI applications across sectors raises concerns about transparency, safety, data processing, privacy, and consent, and calls for regulation that is transparent, safe, and standardized. To govern AI technologies in a technology-neutral way, India needs to reinvent its regulatory capacity and intelligence streams.

Transparency and safety in AI applications are a primary concern for emerging markets. Many AI use cases lack transparency about their commercial viability and safety, particularly around data processing, privacy, consent, and dark patterns, and sector-specific standardization for algorithmic activities and operations is still absent, hindering regulatory interventions and innovation. Enforcing existing sector-specific regulations, starting with data protection and processing, is a practical first step towards effective AI regulation.

Despite legislative advancements in digital sovereignty, digital connectivity, drones, and data protection, the AI and law discourse in India shows limited transformation. Discussions mainly revolve around data protection rights and the civil and criminal liability of digital intermediaries. The government's frameworks to regulate the use and processing of personal and non-personal data, including the Digital Personal Data Protection Act, 2023, and the proposed Digital India Act, reflect the Council of Ministers' commitment to these discussions. However, the focus on AI regulation remains limited, even in frameworks like the one proposed for the National Data Management Office (NDMO).

The absence of self-regulatory Explainable AI or Responsible AI guidelines from key AI and tech market players across the region also underscores the need for a comprehensive and distinctive approach to AI regulation that aligns with India's unique requirements and standards. A recently proposed Artificial Intelligence (Development & Regulation) Bill was drafted in line with these requirements, which are pertinent to developing the Indo-Pacific as an AI hub for the Global South and Democratic Asia.


Establishing a comprehensive Indo-Pacific perspective on AI safety requires collective effort. A shared understanding of AI risks and opportunities must be cultivated through open dialogue and collaboration across the region. Guiding principles, anchored in human dignity and rights, are essential for steering responsible AI development. The Chinese regulatory approach emphasizes government control and public ownership of data, contrasting with the Biden Administration's focus on robust evaluations and transparency. The European Union's AI Act introduces risk-based classification and emphasizes human oversight. India, amid a rapidly evolving AI landscape, requires focused regulatory re-evaluation, and the proposed Artificial Intelligence (Development & Regulation) Bill reflects private initiatives to position the country as an AI hub. By synthesizing these diverse approaches, the Indo-Pacific region can collectively navigate the challenges of AI and ensure its ethical integration.
