The Global Affairs project explores the increasingly intertwined relationship between the technology sector and global politics.
Geopolitical actors have always used technology to achieve their goals. But unlike other technologies, artificial intelligence (AI) is much more than a mere tool. We do not want to anthropomorphize AI or assume that it has intentions of its own. It is not yet a moral agent. But it is fast becoming a major determinant of our collective destiny. We recognize that AI is already threatening the foundations of global peace and security, owing to its unique characteristics and its influence on fields ranging from biotechnology to nanotechnology.
The rapid pace of AI development, combined with the scale of new applications (the global AI market is expected to grow more than tenfold between 2020 and 2028), means that AI systems can be deployed widely without significant legal oversight or consideration of their moral implications. This gap, often referred to as the pacing problem, leaves lawmakers and regulators struggling to catch up.
After all, the impact of new technologies is often difficult to predict. Smartphones and social media had entered daily life before we were fully aware of their potential for abuse. Likewise, it took time to understand the implications of facial recognition technology for privacy and human rights.
Some countries will use AI to manipulate public opinion, exploiting the information and surveillance tools at their disposal to restrict freedom of expression.
Looking ahead, we have no idea what problems the innovations currently being explored will create, or how those innovations will interact with one another and with the wider environment.
These problems are especially acute with AI, because learning algorithms often reach conclusions of their own that cannot be anticipated or explained. When unwanted effects occur, it may be difficult or impossible to determine the cause. And systems that continually learn and change their behavior may defy ongoing safety testing and certification.
AI systems can operate with little or no human intervention. One need not read science fiction to imagine dangerous scenarios. Autonomous systems threaten to undermine the principle that there should always be an agent – a person or an organization – who can be held responsible for actions in the world, especially in matters of war and peace. We cannot hold the systems themselves accountable, and those who deploy them will argue that they are not responsible when the systems act in unexpected ways.
In short, we believe that our societies are not ready for AI – politically, legally or morally. The world is not ready for how AI will change the geopolitics and ethics of international relations. We distinguish three ways in which this can happen.
First, the development of AI will change the balance of power between countries. Technology has always shaped geopolitical power. In the nineteenth and early twentieth centuries, the international order rested on new industrial capabilities – steamships, airplanes, and so on. Later, control of oil and gas reserves became more important.
All great powers are well aware of AI’s potential to advance their national agendas. In September 2017, Vladimir Putin told a group of schoolchildren: “Whoever becomes the leader [in AI] will become the ruler of the world.” While the US is currently at the forefront of AI, Chinese tech companies are moving faster and are demonstrably better at developing and deploying certain areas of research, such as facial recognition software.
AI dominance by the superpowers will exacerbate existing structural inequalities and encourage new forms of inequality to emerge. Countries that still lack reliable Internet access and depend on the generosity of richer countries will be left far behind. And AI-powered automation will change employment patterns in ways that benefit some countries more than others.
Second, artificial intelligence will empower a new set of geopolitical players beyond nation-states. In some ways, the leading digital technology companies are already more influential than many countries. As French President Emmanuel Macron asked in March 2019: “Who can single-handedly claim sovereignty against the digital giants?”
The recent invasion of Ukraine is a case in point. National governments responded by imposing economic sanctions on the Russian Federation. But perhaps just as consequential were the decisions by companies such as IBM, Dell, Meta, Apple and Alphabet to suspend operations in the country.
Similarly, when Ukraine feared that an invasion would disrupt its Internet access, it turned for help not to a friendly government but to the tech entrepreneur Elon Musk. In response, Musk activated his Starlink satellite internet service over Ukraine and provided receivers so that the country could continue to communicate.
A digital oligopoly with access to the large and growing data sets that fuel machine-learning algorithms is rapidly emerging. Given their vast wealth, leading companies in the US and China can develop new applications or acquire smaller companies that invent promising tools. Machine-learning systems can also help this AI elite circumvent national regulations.
Third, AI opens up opportunities for new forms of conflict. These range from swaying public opinion and election outcomes in other countries through fake media and manipulated social-media posts, to disrupting other countries’ critical infrastructure, such as electricity, transportation or communications.
Such conflicts will be difficult to manage and will require a complete rethinking of arms-control mechanisms, which were designed for discrete, countable weapons. Current arms-control negotiations presuppose that adversaries have a clear understanding of each other’s capabilities and military needs. But whereas nuclear weapons, for example, are constrained in their development and use, almost anything is possible with AI, because capabilities can evolve quickly and opaquely.
Without binding treaties limiting their deployment, autonomous weapons systems assembled from off-the-shelf components will eventually become available to militaries and non-state actors alike. There is also a high risk that poorly understood autonomous weapons systems will inadvertently trigger conflict or exacerbate existing hostilities.
The only way to mitigate the geopolitical risks of AI and to provide the flexible, comprehensive governance that is needed is through an open dialogue about its benefits, limitations and complexities. The G20 is one potential venue; alternatively, a new international governance mechanism could be established, involving the private sector and other key stakeholders.
It is widely recognized that international security, economic prosperity, the public interest and human well-being depend on controlling the proliferation of lethal weapons systems and on limiting climate change. We believe they will depend at least as much on our collective ability to shape the development and trajectory of AI and other emerging technologies.