J.D. Vance and the Geopolitics of Artificial Intelligence
The incumbent’s full control over a pivotal realm of power implies several risks related to the incumbent’s behavior. The first temptation is to self-justify the strict control over AI technology “for the good of mankind.”

A commentary by Andrea Colli

In a remarkably brief speech at the recent Artificial Intelligence Action Summit in Paris, US Vice-President J.D. Vance outlined the main principles guiding the current US administration regarding Artificial Intelligence.
These principles can be summarized as follows: the US holds a leading position in AI and intends to defend it; the US rejects any attempt to regulate AI multilaterally; AI must be used to preserve freedom of speech and expression; and AI must contribute to job creation in the US first and foremost.
These principles can be interpreted in various ways within the framework of modern geopolitical analysis, resting on the neo-realist premise that endemic interstate competition for leading positions in the hierarchy of world powers requires establishing firm control over strategic realms of power.
The first geopolitical interpretation concerns leadership in cutting-edge technologies. Historically, especially since the late 19th century, the appropriation and application of advanced technologies have been a privileged way for countries, particularly less developed ones, to accelerate their development.
Technology thus became a powerful instrument for countries to achieve sovereignty and welfare for their populations. Popularized as techno-nationalism, this practice involves attracting foreign investments to acquire relevant technologies.
A premise of techno-nationalist practices is the permissiveness of the countries where a given technology was developed, which allow it to circulate freely, without excessive geopolitical concerns.
The story of transistor technology, invented in the US and later migrating to production hubs in Southeast Asia, illustrates this.
A different perspective sees crucial technologies as potentially weaponizable to damage competing powers.
In this framework, multinational companies producing crucial technologies face increasing restrictions, and value chains traditionally based on the free flow of critical technological components are severely disrupted, as seen in the case of cutting-edge semiconductors and critical raw minerals.
Technological knowledge becomes an indispensable tool for securing the incumbent’s leadership at the expense of challenging powers.
This vision, evident in Vance’s speech, leverages the weaponization of technological progress and the preservation of leadership to benefit the job market in the incumbent’s country (the US, in this case).
The Pivot Framework
The second interpretation points to another concept deeply rooted in classic and modern geopolitics literature, introduced over 120 years ago by the British geographer and politician Halford Mackinder, who popularized the term ‘pivot’.
This term refers to a space of power so strategically relevant that controlling it gives the holder access to world domination.
Mackinder had in mind a geographic area on the Eurasian landmass with almost unlimited resources, control of which could give a challenging land power the chance to overcome the dominant sea power of the day, Great Britain.
Since then, the concept of the pivot has been applied to various “spaces,” some of which are “deterritorialized,” such as the mastering of cutting-edge technologies. In the context of great-power competition, the incumbent’s strategy is to maintain a firm grip on the pivot of power, while a credible challenger will try to gain control over it.
Vance may well be unaware of Mackinder or the notion of a pivot space; nonetheless, his arguments about US control over AI vividly echo this geopolitical refrain.
First, AI must be kept under US control. In Vance’s words, “The US possesses all components of the full AI stack, including advanced semiconductor design, frontier algorithms, and transformational applications.” This justifies his blunt rejection of the idea of regulation. Regulation not only means safety in the use of a resource that is still far from fully understood; it also implies sharing information that the incumbent does not want to share.
Additionally, regulation by definition limits the incumbent’s exclusive control over the pivot of power.
The incumbent’s full control over a pivotal realm of power implies several risks related to the incumbent’s behavior. The first temptation is to self-justify the strict control over AI technology “for the good of mankind.”
The second is to link this dominant position to a system that unleashes the creative forces of profit-oriented entrepreneurship, with very little consideration of safety – a term mentioned only twice in Vance’s speech.
The third is the temptation to use this technological superiority as a blackmailing, transactional instrument in managing interstate relations.
Taken together, these risks point directly to a worsening of relations between the US and a significant portion of the international community.
IEP@BU does not express opinions of its own. The opinions expressed in this publication are those of the authors. Any errors or omissions are the responsibility of the authors.