AI Supremacy: Why Democracies Cannot Afford to Lose
This is not about gadgets, but about sovereignty, security, and the survival of free societies.
Every era has its decisive contest. In the 20th century, it was first the race for the atomic bomb. Later, the space race determined prestige, deterrence, and economic might. Further back, the Industrial Revolution tilted the balance of global power, allowing some nations to dominate others through industrial strength.
All of this rested on an earlier foundation: the Western embrace of science and technology from the 16th century onwards. The Scientific Revolution and Enlightenment unleashed a cycle of discovery, innovation, and industrial output. Unlike political or cultural ideas, which endure only when backed by hard power, science is objectively superior—it works. It allowed the West to transform the natural world and, coupled with industrial capacity, gave it a lasting edge in economic output, innovation, and military strength.
That is why Western norms once seemed so dominant that many believed they represented the natural “end state” of human history—the so-called End of History. We know now that this was an illusion. They prevailed not because they were destiny, but because the West had the hard power to defend them, and because it used science and technology to keep asking the right questions in the pursuit of discovery. In short, it was technology supremacy—and its application—that allowed Western democracies to thrive.
In the 21st century, the race for technology supremacy centres on AI—just as the defining race of the 20th was for the atomic bomb. The bomb reshaped geopolitics and produced the uneasy peace that followed, the longest in modern history. AI will play the same role in our century.
Each era has had its decisive technology; this century’s is intelligence itself.
The stakes could not be higher. AI is not just an enabler of prosperity—it is the driver of scientific progress, the backbone of national security, and the foundation of sovereignty in the 21st century. Democracies face a narrow window to secure control of the most powerful force of our age: intelligence, industrialized and deployed at scale.
So this is not a race about gadgets or consumer apps. It is about who builds and controls the infrastructure of intelligence—chips, compute, data, and energy.
But it is also about how that infrastructure is deployed. In the West, we risk trivializing AI into late-capitalist gimmicks—Mark Zuckerberg’s vision of “five AI friends for everyone”—a future of digital distraction. As David Foster Wallace warned in Infinite Jest, a society overdosing on passive, “lethally entertaining” content risks a kind of spiritual death—effortless pleasure replacing meaning and discipline.
Authoritarian regimes have already chosen their different path. China treats AI as a national project, embedding it across military modernization, surveillance, and industrial planning. Russia, meanwhile, is heavily reliant on China’s technology to power its own ambitions. Moscow’s future is tethered to Beijing’s rise—cementing the reality of an authoritarian technology bloc.
Democracies must absolutely keep pace on the military front—falling behind would be catastrophic. But unlike autocracies, the West cannot and should not embrace AI as a tool of social control. The challenge before us is to find a higher purpose: How can these technologies be mobilized in service of democracy itself? How can they renew the meaning of citizenship, national identity, and shared purpose in the 21st century?
History is clear: cultural norms and political systems do not endure because they are morally “good.” They endure only when backed by the hard power to defend and sustain them. For democracies, AI is the next great test—and it will determine whether freedom itself endures.
This is why I am delighted to be moderating a *POWERHOUSE* panel, convened by the University of North Texas’s National Security and Economic Strategy Initiative on September 13th.
What makes this conversation so vital is that these are not abstract debates. The leaders joining me are the ones in the field, doing this work now. They are defending us—and in doing so, they are also defining what our values mean in practice. We owe them a debt, and we owe ourselves a serious conversation about how AI fits not only into national security, but into national sovereignty and democracy itself.
Too often, discussions about AI swing wildly between extremes: claims that it is a scam or has “hit a wall,” versus dystopian fears that it will seize control from humanity. Both miss the point. Humans, context, and intention remain at the centre of how these technologies are built and deployed. The real question is one of intention: how do we, as democracies, choose to use AI?
First, we must secure our national security—because without that, nothing else holds. But beyond that, we have the opportunity to elevate what is possible: to harness AI not for trivial consumer gimmicks, nor authoritarian control, but to renew the democratic project itself.
Joining me:
Colonel Arnel David: Director, Project Maven, NATO Supreme Headquarters Allied Powers Europe (SHAPE)
Dr. Craig Martell: CTO, Lockheed Martin
Colonel Cameron Holt: Former Deputy Assistant Secretary of the U.S. Air Force and Space Force, now President of Exiger (Strategic Markets)
Why this matters:
Each of these leaders represents a critical front in the race for AI Supremacy. Project Maven was one of the U.S. military’s earliest efforts to integrate AI into defense systems, beginning with drone surveillance and expanding into modern battle networks.
Lockheed Martin, as the world’s largest defense and aerospace company, is embedding AI into the next generation of fighter jets, missile defense, and space systems—determining how democracies fight and deter wars.
And the U.S. Air Force and Space Force remain at the cutting edge of innovation, where leaders like Cameron Holt have driven procurement reforms to ensure the U.S. can innovate at the speed of its rivals.
Together, we will explore what is truly at stake—not only for the United States, but for all its allies. Because in the end, AI Supremacy is not about machines. It is about preserving the values that define free societies.
🎥 This conversation will be filmed, and I will share it with you soon. In the meantime, DM me or drop a comment below: how are you thinking about this moment? What questions should I raise with the panel? What perspectives do you think must be part of the conversation?
As an old educationalist, author and now software creator, I have long argued for the teaching of critical thinking, in the belief that youngsters, once adult, will have the skills to research, scrutinise, check and verify the material placed before them. AI-generated material, often deliberately manipulated for financial or political gain, is now being consumed and accepted as truth because it looks and sounds like the once-trusted sources it imitates.
How best should we educate our populations so that they at least have a chance to identify, verify and use truth?
And, linked to that: how can governments influence, persuade and restrain massively rich bad actors in the tech industries who exploit the power they wield for nefarious purposes?