Executive Summary

As developed countries rapidly become better equipped to govern safe and beneficial AI systems, developing countries are falling behind in the global AI race and stand at risk of extreme vulnerability. By examining not only “how we can effectively govern AI” but also “who has the power to govern AI”, this article makes a case against AI-accelerated forms of exploitation in low- and middle-income countries, highlights the need for AI governance in highly vulnerable countries, and proposes ways to mitigate the risks of AI-driven hegemons.

This report was written as a custom case for the AI governance hackathon[1].
Introduction
With the exponential growth of AI systems and the potential emergence of superintelligence, our “risk society”, preoccupied with securing a safe future[3], must figure out how to navigate this continuously expanding black box. The coordination and response of the global political economy at this pivotal moment in human history will shape the state of the world for generations to come. Yet as developed countries rapidly become better equipped to govern safe and beneficial AI systems, developing countries are falling behind in the global AI race and stand at risk of extreme vulnerability.
This article brings attention to the impact of transformative artificial intelligence on developing countries and where they stand in the rising AI race dynamics. It also raises questions about whether hegemonic AI systems will repeat the history of colonialism and how we can learn from the legacies of colonialism to prevent it from repeating in this age of digitalization and volatility, uncertainty, complexity, and ambiguity.
By examining not only “how we can effectively govern AI” but also “who has the power to govern AI”, this article will make a case against AI-accelerated forms of exploitation in low- and middle-income countries, highlight the need for AI governance in highly vulnerable countries, and propose ways to mitigate risks of AI-driven hegemons.
Background
“Developing country” is a contested label referring to “a sovereign state with a lesser developed industrial base and a lower Human Development Index (HDI) relative to other countries”. Though taxonomies differ, the term is often used interchangeably with “low- and middle-income countries” (LMICs), which refers to all countries not considered high-income; the World Bank defines high-income countries as those with a gross national income per capita of $13,205 or more in 2023.[4]
Artificial intelligence (AI) is widely understood as the creation of machines capable of sophisticated information processing. To differentiate transformative AI (TAI) from other AI systems, Gruetzemacher and Whittlestone emphasize “practically irreversible change in certain trajectories of human life and progress”. They also argue that the potential emergence of TAI is a particularly neglected topic and propose three levels of AI’s societal impact: narrowly transformative AI, transformative AI, and radically transformative AI. By “AI” and “AI systems”, this article refers to narrowly transformative AI: “AI technology or application with the potential to lead to practically irreversible change focused primarily in a specific domain or sector of society”[5].
Dafoe framed the “AI governance problem” as “the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI”.[6] Open Philanthropy highlighted that AI governance also considers the “social outcomes” and “good outcomes” from transformative AI, especially by reducing potential catastrophic risks from transformative AI[7].
Contemporary researchers such as Couldry and Mejias have argued that data relations constitute a new type of “data colonialism” that exploits people through data in the same way that historical colonialism seized territory and resources and governed subjects for profit.[8] The related notion of “digital colonialism” describes the decentralized collection of data from citizens without their explicit consent “through communication networks developed and owned by Western tech companies” (Silas et al. 2018)[9]. As powers in the East also emerge, colonial power is no longer a contest of East versus West, but of powerful institutions with political and technological dominance over the progress of digitalization. Taking after the legacies of the white man's burden[10], these large-scale digital businesses can use their influence and financial resources to gain access to free data in a country or even an entire geographic region. This data can be exploited for a variety of purposes, including predictive analytics, owing to the absence of data security regulations and infrastructure (Coleman 2019)[11].
In the context of globalization and capitalism fueled by AI systems, the importance of AI governance becomes clearer and more urgent than ever. Political economists such as Robert Gilpin have argued that throughout history, the aims of economic activity are ultimately decided not only by markets and the rules of technical economics, but also by the standards, values, and interests of the social and political institutions in which economic activity takes place.[12] In cyberspace, which has no clear territory, the most powerful will triumph on the web, continue to extract information for their projects, and influence politics. If the AI ecosystem in a country is dominated by foreign investments and big companies, those forces gain more power to influence the politics and governance of AI systems.
Meanwhile, developing countries lack the capacity to ensure that these large projects serve the common good. If we accept that transformative AI can have a large impact on society, it will harm these marginalized and vulnerable groups to an even greater extent.
Analysis
Low- and middle-income countries often lack the financial, political, and technical resources needed to develop effective AI governance policies, particularly for data management and protection. The absence of governance and regulation allows big companies to collect vast amounts of data. These companies leverage the gap to promote their AI products in these countries and to shape cybersecurity laws by lobbying for weaker regulations that serve their interests, letting them sell more products, collect even more data, and build better products. Consequently, developing countries are among the most vulnerable to the exponential growth of transformative AI, which can lead to complicated economic, political, and social problems.
1. AI governance is not a priority of developing countries
Contrary to the rise of AI ethics and the global growth of AI legislation[13], developing countries have been slow to adopt and have paid less attention to AI governance.
According to data reported by the OECD, roughly 19.12% of LMICs worldwide have a plan to govern AI systems. Among the 69 countries with available data, 15 upper-middle-income, 9 lower-middle-income, and 2 low-income countries have developed AI policies and strategies[14] (Fig. 1).
Fig.1. Countries with national AI policies & strategies (adapted from OECD.ai, n=69). Link to dataset.
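The 19.12% figure can be reproduced with a simple share calculation. A minimal sketch in Python, assuming a denominator of roughly 136 LMICs under the World Bank classification (an assumption on our part; the essay does not state the total it divides by):

```python
# Counts of LMICs with national AI policies/strategies, from the OECD.ai
# data cited above (out of 69 countries with available data).
lmics_with_ai_policy = {
    "upper-middle-income": 15,
    "lower-middle-income": 9,
    "low-income": 2,
}

total_with_policy = sum(lmics_with_ai_policy.values())  # 26 countries

# Assumed denominator: ~136 LMICs per the World Bank classification.
# This is a hypothetical reconstruction; the essay does not give it.
TOTAL_LMICS = 136

share = total_with_policy / TOTAL_LMICS
print(f"{total_with_policy}/{TOTAL_LMICS} = {share:.2%}")  # 26/136 = 19.12%
```

If the assumed denominator is right, 26 of 136 LMICs yields exactly the 19.12% cited from the OECD data.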
One plausible reason is that AI governance is not their priority because they are concerned with basic needs and security rather than adapting to innovation and technology.
On the bright side, all 193 UNESCO member states have adopted the organization’s global agreement on the Ethics of Artificial Intelligence, announced by UNESCO chief Audrey Azoulay:
“It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its 193 Member states in its implementation and ask them to report regularly on their progress and practices.”
However, coming from an institution meant to represent neutrality and equity, the Recommendation is voluntary and non-binding. It is unclear to what extent its recommendations have been implemented and how many developing countries have adapted them. This raises the question of how international institutions can effectively ensure inclusivity and diversity for all nations.
2. Developing countries may not have high technological capability, but may have high deployment of technology due to a large volume of foreign investment
Chinese AI development in Africa
China's Digital Silk Road (DSR)[15] is a major infrastructure and investment project aimed at promoting transcontinental connectivity and cooperation. However, it has sparked debates as some argue that it is a tool for neo-colonialism or debt-trap diplomacy, while others see it as an opportunity for regional connectivity and economic expansion.
Researchers have argued that China's export of AI technology is part of a broader strategy to export authoritarianism and undermine democratic values globally. One article highlights the potential for Chinese AI technology to be used for surveillance and repression, particularly against political dissidents and minority groups, and notes the lack of transparency and accountability surrounding the deployment of Chinese AI technology in Africa (Net Politics and Digital and Cyberspace Policy Program 2018)[16]. Other reports discuss the incentives behind DSR initiatives: the People's Republic of China incentivizes private actors, such as the telecommunications firm Huawei, to build digital infrastructure abroad in order to generate security externalities for China. This is supported by a case study of Huawei's involvement in Nigeria in digital infrastructure development, the formulation of digital strategies, and associated standards (Hungerland and Chan 2021)[17].
Furthermore, several actions by large Chinese tech companies have been documented as examples of how current restrictions leave loopholes for ongoing "digital colonialism": historical violations of data privacy laws, limits on the severity of sanctions, unchecked mass concentration of data, a lack of competition enforcement, uninformed consent, and the absence of clearly defined nation-state privacy laws. Multiple projects of this kind have been documented[18].
As foreign values will undoubtedly be integrated into these infrastructures, whether AI-based or not, and can be sold and utilized globally to reinforce the companies' own self-interests, this calls for increased regulation of not only data laws but also the use of AI technology. In order to mitigate these risks, it is crucial for states to prioritize transparency when engaging with foreign companies on AI technology.
U.S. infrastructures across the world
No less than China, the U.S. has run multiple projects in India, Myanmar, the Philippines, Ethiopia, Kenya, South Africa, and other African countries; most notable are the cases of the social networking services Facebook and WhatsApp (Nothias 2022)[19].
Free Basics, a mobile application developed by Facebook, announced the bold goal of "introducing" millions of people in developing countries such as India, Kenya, and Ghana to the internet, touting it as the "first step towards digital equality". It gathered information about users and their activity on the app without informing consumers of how their data would be used (Wanjohi 2017)[20].
According to Alfred McCoy, as cited in Kwet (2019), the quadruplex telegraph, commercial typewriter, Dewey decimal system, biometrics, photographic files, and the Hollerith punched-card machine together made it possible to manage textual, statistical, and analytical data, giving rise to the technological capacity for mass surveillance. Using these technologies, the U.S. military located, recorded, and examined networks of leaders in the Philippines, along with their finances, assets, political affiliations, and family networks. The data was used to quell dissent against the American conquest.
Operation Phakisa Education, the South African government’s plan to transform basic education by providing productivity devices such as laptops and tablets to poor students, has raised concerns about influencing the habits, preferences, and knowledge base of the country’s young and future generations. As they engage with U.S. Big Tech products early on through education, they will grow up favoring the ecosystems, models, and ideologies of these companies (Kwet 2019)[21].
U.S. and China AI investment in Southeast Asia
The U.S. and China have made significant investments in Southeast Asian AI startups, with foreign investments accounting for a higher share of total investment than local investments. Chinese investment in Southeast Asian AI firms has increased over the past decade, although it still lags behind American investment in total transactions. Investors from around the world, including the United States, China, Japan, Germany, and the United Kingdom, are backing Southeast Asian AI startups (Luong, Lee, and Konaev 2023)[22].
Fig. 2: Share of Domestic and Foreign Investments in AI Companies in Singapore, the Philippines, Thailand, Indonesia, Malaysia, and Vietnam (2010–2021) (Luong, Lee, and Konaev 2023).
3. AI systems may exploit technological infrastructures and the gaps in governance in developing countries
With increasingly sophisticated and robust AI systems, technological infrastructures and large pools of data can be abused by interest groups driven by capitalist or political incentives. The 2022 AI Index Report revealed that as of 2021, 9 of the 10 state-of-the-art AI benchmark results it covers were achieved with systems trained on extra data[23]. This trend implicitly favors private-sector actors with access to vast datasets, incentivizing them to mine ever more data, especially at the cost of the personal data of millions of individuals in developing countries. Powerful companies may take advantage of absent or neglected cybersecurity frameworks to push their AI products in these countries and to influence laws and regulations by lobbying for weaker rules that benefit them. Moreover, the import of large resources and intervention by foreign forces may lead to increased interdependency.
The 2021 report on "Ethics and governance of artificial intelligence for health" by WHO highlights the need to exercise caution when overestimating the benefits of AI in healthcare. The report points out that investing in core strategies for achieving universal health coverage should not be compromised in favor of AI. The report also identifies potential challenges and risks associated with AI in healthcare, such as unethical collection and use of health data, algorithmic biases, patient safety concerns, cybersecurity risks, and environmental impacts. Additionally, the WHO report cautions that AI systems trained primarily on data from high-income countries may not perform well in low- and middle-income settings. While investment in AI development is critical, the unregulated use of AI may subjugate the interests of citizens and communities to the commercial interests of technology companies or the surveillance and social control interests of governments[24].
In military and security affairs, the military-industrial complex[25] may accelerate AI adoption from industry to domestic security and then to international security, with rising risks at each step. AI-based arms fueled by great-power forces can then be used to wage proxy wars as a means of securing and strengthening hegemonic power.
In addition to foreign forces, governments pursuing their own agendas and large local corporations may use AI to reinforce totalitarianism.
4. Developing countries are most vulnerable to the global AI race due to the absence of AI governance
As established by the previous arguments, the absence of governance and regulation in cyberspace means that global corporations can exploit their “digital colonies” more easily. Developing countries will face the greatest risks from the exponential growth of transformative AI, leading to potentially complicated economic, political, and social issues.
Discussion & Suggestions
Within the context of the global AI dominance race and long-term risk management for artificial general intelligence (AGI), developing countries would be the most vulnerable.
Governments of developing countries should prioritize the development of AI governance policies and frameworks to ensure that the risks associated with the use of AI technology are mitigated. They should also consider collaborating with developed countries and international organizations to exchange best practices and knowledge-sharing.
To address the issue of data privacy, developing countries can create laws that require foreign companies to adhere to their data privacy laws when operating within their borders, and hold them accountable for any violations. Additionally, governments can promote the use of open-source software to enhance transparency and accountability in the development and deployment of AI systems.
Civil society organizations and the media can also play a critical role in raising awareness and advocating for the development of AI governance policies in developing countries. This can include educating citizens on the potential risks associated with AI technology and advocating for more transparency and accountability in the development and deployment of AI systems.
Ultimately, it is essential that these policies ensure not only that risks are mitigated, but also that the benefits of AI are realized in an equitable and inclusive manner.
In brief, there are multiple opportunities to navigate and coordinate developing countries in emerging AI powers:
On the global level
Actors: Global organizations e.g. UNESCO, OECD, etc., Regional organizations e.g. African Union, ASEAN, etc.
On the local level
Actors: Nation-states, grassroots organizations, etc.
Conclusion
There is no doubt that AI governance is a complex and multifaceted issue that requires attention from all countries, regardless of their level of development. Developing countries may face unique challenges, such as a lack of technological capability and governance frameworks, but they are also vulnerable to the potential negative impacts of AI systems exacerbating existing power structures and inequalities. To address these challenges, there is a need for increased transparency, regulation, and coordination between countries at both the global and local levels. This includes incentivizing companies to prioritize transparency in their engagement with foreign countries, contextualizing international initiatives and guidelines to national AI policies, and mobilizing resources to enhance global equity in AI governance.
Future research
Since AI governance has not received adequate attention in developing countries, and contextualizing policies for these countries involves great complexity and diversity, more research should be conducted to inform better international AI governance. We propose the following research topics:
AI governance landscape:
Contextualizing AI governance:
Case studies:
Risk mitigation:
Acknowledgements
Great appreciation to my peers in the AI Governance reading group, Long, Jack, Chau, Phuc, for providing me with meaningful conversations that inspired this topic.
Thanks to BlueDot Impact for making the AGI Safety Fundamentals free and accessible for all.
Thanks to Apart Research for creating this valuable opportunity.
Thank you for reading this report.
Notes
Author’s note: after the hackathon, I amended some of the wording and added the “U.S. infrastructures across the world” and “U.S. and China AI investment in Southeast Asia” sections to balance the coverage of countries. In total, I spent ~25–30 hours on this essay. ↩︎
Zuboff, Shoshana (January 2019). "Surveillance Capitalism and the Challenge of Collective Action". New Labor Forum. 28 (1): 10–29. doi:10.1177/1095796018819461. ISSN 1095-7960. S2CID 159380755. ↩︎
Beck, Ulrich (1992). Risk Society: Towards a New Modernity. Translated by Ritter, Mark. London: Sage Publications. ISBN 978-0-8039-8346-5. ↩︎
World Bank Country and Lending Groups – World Bank Data Help Desk ↩︎
Gruetzemacher, Ross, and Whittlestone, Jess. “The Transformative Potential of Artificial Intelligence.” Futures, vol. 135, no. 102884, Dec. 2021, doi:https://doi.org/10.1016/j.futures.2021.102884. ↩︎
Dafoe, Allan. “AI Governance: A Research Agenda.” GovAI. Accessed March 27, 2023. https://www.governance.ai/research-paper/agenda. ↩︎
Muehlhauser, Luke. “Our AI Governance Grantmaking so Far - Open Philanthropy.” Open Philanthropy, July 8, 2022. https://www.openphilanthropy.org/research/our-ai-governance-grantmaking-so-far/#our-priorities-within-ai-governance. ↩︎
Couldry, Nick, and Ulises A. Mejias. “Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject.” Television & New Media 20, no. 4 (2018): 336–49. https://doi.org/10.1177/1527476418796632. ↩︎
Digital colonialism on the African continent (ibw21.org) ↩︎
Exploitation and oppression in the facade of altruism. The White Man's Burden by Rudyard Kipling (poetry.com) ↩︎
Coleman, Danielle. “Digital Colonialism: The 21st Century Scramble for Africa through the Extraction and Control of User Data and the Limitations of Data Protection Laws.” Michigan Journal of Race & Law, no. 24.2 (2019): 417. https://doi.org/10.36643/mjrl.24.2.digital. ↩︎
Gilpin, Robert. “Global Political Economy: Understanding the International Economic Order”, Princeton University Press, 2001. ↩︎
The AI Index Report – Artificial Intelligence Index (stanford.edu) ↩︎
OECD’s live repository of AI strategies & policies - OECD.AI. Caveat: it is unclear by what criteria countries appear on the list, i.e. whether they simply have a national AI policy and/or adhere to the 2019 OECD AI Principles. ↩︎
China's Digital Silk Road Initiative | The Tech Arm of the Belt and Road Initiative (cfr.org) ↩︎
Ibid. ↩︎
Hungerland, Nils, and Chan, Kenddrick. “Assessing China's Digital Silk Road: Huawei's Engagement in Nigeria.” LSE Research Online. LSE IDEAS, London School of Economics and Political Science, November 1, 2021. https://eprints.lse.ac.uk/112588/. ↩︎
Gravett, Willem H. “Digital Coloniser? China and Artificial Intelligence in Africa.” Survival 62, no. 6 (2020): 153–78. https://doi.org/10.1080/00396338.2020.1851098. ↩︎
How to Fight Digital Colonialism - Boston Review ↩︎
Free Basics: Facebook’s failure at ‘digital equality’ | Science and Technology | Al Jazeera ↩︎
Kwet, Michael. “Digital colonialism: US empire and the new imperialism in the Global South.” Race & Class 60, no. 4 (2019): 3–26. ↩︎
Ngor Luong, Channing Lee, and Margarita Konaev, "Chinese AI Investment and Commercial Activity in Southeast Asia" (Center for Security and Emerging Technology, February 2023). https://doi.org/10.51593/20210072 ↩︎
The AI Index Report – Artificial Intelligence Index (stanford.edu) ↩︎
Ethics and governance of artificial intelligence for health (who.int) ↩︎
Network of individuals and institutions involved in the production of weapons and military technologies. Military-industrial complex | Britannica ↩︎