Explainable AI for a Free and Open Indo-Pacific

This article proposes that while governments across the Indo-Pacific, including India, focus on strengthening digital public infrastructure and emphasize the need for Responsible AI practices, a shift towards Explainable AI principles is becoming imperative for technology governance, and would widen the pivot of a Free and Open Indo-Pacific.


Responsible AI and the Indo-Pacific’s Pivot for Critical Technologies


The Quad's statements on critical technologies, its commitment to establish standards for emerging and critical technologies, and its unveiling of an Expert Group on Critical and Emerging Technologies make clear that the grouping intends to shape ethical, legal and industrial standards, as well as the economic relationships behind technology transfer and innovation.


This is necessary because many algorithmic operations carried out by sophisticated AI technologies, whether through complex machine learning models, analytics, automation or any other sub-segment, transcend geographies and can be used in ways that intrude upon human environments, both physical and digital. Protecting knowledge and information is also a key priority because, in an interconnected world, AI technologies are the beneficiaries of transnationally available data, and their algorithmic practices and operations evolve over time.


Today, owing to the lack of relevant AI standards in several Indo-Pacific countries, companies across Asia gather data without ethical guarantees. If data is neither handled ethically nor used in an explainable way, neither the consumer nor the regulator understands how these technologies work. For example, the US government is already concerned about the role of companies like TikTok, whose recommendation algorithms are intrusive and could endanger data security. India has banned TikTok since mid-2020, invoking the security exceptions of the General Agreement on Trade in Services (GATS).


This is why, since 2020, countries across the Indo-Pacific have been developing regulations for AI technologies. In India, the NITI Aayog issued Responsible AI guidelines, while Japan and Singapore released their own AI governance frameworks; this is how Responsible AI guidelines became mainstream in the region. However, the disruptive and evolving uses of AI cannot be fully covered by Responsible AI guidelines, because the innovation in these technologies remains uninformed and inexplicable in most use cases.


Moreover, many AI innovations are naturally localized. When companies democratize technology-based services and products, many of them ignore the lack of explainability of such generic narrow AI technologies. And when AI technologies fail to explain their steps, trends and missteps, companies fail to address trust building, knowledge management and data quality under the existing Responsible AI guidelines. Some of the critical risks and issues that arise from the lack of AI explainability:

  • AI technologies have use cases which are fungible

  • Different stakeholders exist for different AI-related disputes, and they are not taken into consideration

  • Various classes of mainstream AI technologies exist, and not all classes are dealt with by every major Asian country that develops and uses AI

  • The role of algorithms in shaping the economic and social value of digital public goods remains unclear and unevenly treated in law


These factors then limit the role of Responsible AI guidelines when self-regulatory or oversight bodies are established to address algorithmic bias. Suppose a technology company asserts that it would like oversight bodies to address how its AI technologies affect market conditions: the lack of market consensus and of business-level regulatory or self-regulatory standards shows that Responsible AI guidelines are not practical but merely symbolic.


Even in the case of algorithmic bias, bias differs for every class of AI technology and may be better dealt with contextually and qualitatively (which again depends on data localisation issues). Even where data attribution is done, absolute conclusions cannot be drawn, because many machine learning models have explainability issues. This is where Responsible AI is a limited and flawed concept, and why focusing on Explainable AI becomes necessary.


Achieving Explainable AI Consensus


Explainable AI is very different from Responsible AI. The larger focus of the concept is to ensure that all technologies under the umbrella of "artificial intelligence" are explicit and explainable about their decision-making and implementation: the algorithmic operations and activities conducted by any AI technology must be understandable to consumers. A lack of explainability is certainly an ethical dilemma, attributable to the "black box" problem (the inexplicability of algorithmic operations and activities) and its legal implications. At the same time, Explainable AI may help uncover the multi-sector policy repercussions that the black box problem has.
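To make the contrast concrete, explainability in its simplest form means decomposing a model's output into contributions a consumer or regulator can inspect. The sketch below is purely illustrative: the feature names, weights and baseline are hypothetical, not drawn from any real recommendation system, and the additive decomposition shown holds exactly only for linear models.

```python
# Minimal sketch of an additive explanation for a linear scoring model.
# All names and numbers here are hypothetical, for illustration only.

def explain_linear(weights, baseline, features):
    """Decompose a linear model's score into per-feature contributions."""
    contributions = {name: w * features[name] for name, w in weights.items()}
    score = baseline + sum(contributions.values())
    return score, contributions

# Hypothetical recommendation score built from three engagement signals.
weights = {"watch_time": 0.6, "likes": 0.3, "shares": 0.1}
features = {"watch_time": 0.8, "likes": 0.5, "shares": 0.2}

score, contribs = explain_linear(weights, baseline=0.1, features=features)
print(round(score, 2))                   # 0.75
print(max(contribs, key=contribs.get))   # watch_time drives the score
```

An "explained" system can answer the regulator's question of *why* a score was produced; a black-box model, by contrast, yields only the score itself, which is precisely the gap Explainable AI principles aim to close.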


Explainable AI may also extend to the specificity of stakeholders, beyond ethical statements and declarations that maintain a status quo on AI governance, because a consensus makes stakeholders, especially public and private actors, responsible for partnering and opting for self-regulatory measures. The regulatory sandboxes that India, for example, has already begun for a few AI technologies can also be attributed to the idea of Explainable AI, clearly showing why explainability of AI technologies is necessary.


To conclude, the Quad and other minilateral forums in the Indo-Pacific region, including the I2U2, may build consensus to shape AI explainability standards, which could do much to promote the safe, resilient and qualitative transmission of ethical data, and safer AI-based products and services, across Asia and Africa.

