Growing awareness of the global impact of advanced artificial intelligence (AI) has inspired public discussions about the need for international governance structures to help manage opportunities and mitigate the risks involved.
Many discussions have drawn on analogies with the International Civil Aviation Organisation (ICAO) in civil aviation; the European Organisation for Nuclear Research (CERN) in particle physics; the International Atomic Energy Agency (IAEA) in nuclear technology; and intergovernmental and multi-stakeholder organisations in many other domains. And yet, while analogies can be a useful start, the technologies emerging from AI will be unlike aviation, particle physics, or nuclear technology.
To succeed with AI governance, we need to better understand what functions international institutions could serve and what forms they might take.
Our latest paper, with collaborators from the University of Oxford, Université de Montréal, University of Toronto, Columbia University, Harvard University, Stanford University, and OpenAI, addresses these questions and investigates how international institutions could help manage the global impact of frontier AI development, and make sure AI’s benefits reach all communities.
Access to certain AI technologies could greatly enhance prosperity and stability, but their benefits may not be evenly distributed or focused on the greatest needs of underrepresented communities or the developing world. Inadequate access to internet services or computing power, and limited availability of machine learning training and expertise, may also prevent certain groups from fully benefiting from advances in AI.
International collaborations could help address these issues by encouraging organisations to develop systems and applications that address the needs of underserved communities, and by helping to reduce the educational, infrastructural, and economic obstacles that prevent such communities from making full use of AI technology.
Additionally, international efforts may be necessary for managing the risks posed by powerful AI capabilities. Without adequate safeguards, some of these capabilities – such as automated software development, chemistry and synthetic biology research, and text and video generation – could be misused to cause harm. Advanced AI systems may also fail in ways that are difficult to anticipate, creating accident risks with potentially international consequences if the technology isn’t deployed responsibly.
International and multi-stakeholder institutions could help advance AI development and deployment protocols that minimise such risks. For instance, they might facilitate global consensus on the threats that different AI capabilities pose to society, and set international standards around the identification and treatment of models with dangerous capabilities. International collaborations on safety research would also further our ability to make systems reliable and resilient to misuse.
Lastly, in situations where states have incentives (for example, deriving from economic competition) to undercut each other's regulatory commitments, international institutions may help support and incentivise best practices, and even monitor compliance with standards.
We explore four complementary institutional models to support global coordination and governance functions: a Commission on Advanced AI, an Advanced AI Governance Organisation, a Frontier AI Collaborative, and an AI Safety Project.
Many important open questions around the viability of these institutional models remain. For example, a Commission on Advanced AI will face significant scientific challenges, given the extreme uncertainty about AI trajectories and capabilities and the limited scientific research on advanced AI issues to date.
The rapid rate of AI progress and limited capacity in the public sector on frontier AI issues could also make it difficult for an Advanced AI Governance Organisation to set standards that keep up with the risk landscape. The many difficulties of international coordination raise questions about how countries will be incentivised to adopt its standards or accept its monitoring.
Likewise, the many obstacles to societies fully harnessing the benefits from advanced AI systems (and other technologies) may keep a Frontier AI Collaborative from optimising its impact. There may also be a difficult tension to manage between sharing the benefits of AI and preventing the proliferation of dangerous systems.
And for the AI Safety Project, it will be important to consider carefully which elements of safety research are best conducted through collaboration versus the individual efforts of companies. Moreover, a Project could struggle to secure adequate access from all relevant developers to the most capable models needed for safety research.
Given the immense global opportunities and challenges presented by AI systems on the horizon, greater discussion is needed among governments and other stakeholders about the role of international institutions and how their functions can further AI governance and coordination.
We hope this research contributes to growing conversations within the international community about ways of ensuring advanced AI is developed for the benefit of humanity.