BLETCHLEY SUMMIT: Leaders warn against ‘AI alarmism’ while comparing it to nuclear war
The UK scores a diplomatic coup by hosting the first global AI Summit - existential risk emerges as the key narrative
Happy Friday all!
I’m still on ‘AI tour,’ so my apologies for not being able to write for a few weeks. It’s been pretty crazy — not least because of recent global events. AI is only one of many disruptive forces now reshaping the world in an era of constant change.
I have long predicted that AI would get political, and over the last week here in the UK we have seen the politicization of AI at the highest levels.
AI gets political
The British Prime Minister, Rishi Sunak, pulled off quite the coup when he became the first leader to bring together numerous national delegations for his AI safety summit. All the ‘big guns’ were there: U.S. Vice President Kamala Harris, European Commission President Ursula von der Leyen, and, after much speculation about their attendance, even representatives from China. And let's not forget the heavyweights of the AI industry, from Nvidia to Microsoft. Notably missing in action was President Emmanuel Macron, who seems to be itching to position France as Europe's AI champion, a rivalry I've touched on before.
First, the good bits. Global AI discussions are critical, and the UK summit produced 'The Bletchley Declaration,' with 28 signatories, including power players like the US and China, committing to safe AI development. Though 'safe' is up for interpretation, it’s a diplomatic triumph for Sunak, whose tenure will almost inevitably end in an election next year. Credit is due to the likes of Ian Hogarth (an invaluable AI advisor for the British government) for orchestrating this historic summit, with Bletchley Park, a symbol of legacy and triumph over adversity, as the backdrop.
‘AI death’ rears its head again
But, as the dust settled, the focus remained unshakably on AI's existential threats. The Telegraph put it bluntly in their headline:
"Sunak says people should not be 'alarmist' about risk posed by AI - while saying it could be as dangerous as nuclear war."
Personally, I find this unrelenting fixation on 'AI death' rather counterproductive. It skews the discourse because it isn't the primary concern of the majority of AI researchers, and it paints 'AI' as an ominous force—almost as if 'it' were already a self-governing entity rather than a tool shaped by human hands.
Elon everywhere
And then there's Elon Musk, who never fails to steal the limelight. In a rerun of his captivating performance before the US Senate, Elon once again seized the stage. Rishi Sunak, in an interview that could only be described as rapturous, handed him the floor. I'm no marketing guru, but one can't help but wonder whether the Prime Minister deliberately let Elon eclipse him, gazing at the entrepreneur "more starry-eyed than a SpaceX telescope."
This takes me to another heated debate currently roiling the AI community—monopolies. As concerns about 'AI Safety' escalate, driven in large part by voices like Musk's, questions emerge about the impact on open-source AI development.
It's worth noting that those sounding the alarm about AI's existential risks are often the ones most deeply involved in building AI systems, all the while courting policymakers who may, at times, seem utterly out of their depth.
It's essential to acknowledge that not all AI luminaries share the same concerns about existential threats. The community is splintering, with figures like Yan Le Cunn (Meta) and Andrew Ng (Coursera, ex-Google Brain) pushing back against the 'AI death' narrative.
Yet the real concern here is that the label 'risky' might stifle open-source AI development, further entrenching the dominance of major AI corporations. (For a deeper dive into this topic, check out my conversation with Thomas Wolf, the CSO of Hugging Face.)
AI Education
Here's what I felt was missing at the summit —a more robust focus on AI education. Imagine if those signatories had pledged to help people truly understand the potential of AI. Only a tiny fraction of the world's 8 billion inhabitants possess the know-how to work with and create AI systems, let alone grasp their risks and limitations. It's no wonder that we often perceive AI as an all-powerful, enigmatic, and autonomous entity. Investing in basic understanding could dispel much of the misinformation surrounding AI.
The Geopolitics of AI
And so, this brings us to the grand finale: the geopolitics of AI. I've had the privilege of watching the geopolitics of AI play out at the nation-state level during my recent travels (and I'll write more on that soon). But in my view, the United States remains the undisputed heavyweight in this arena, thanks not only to the dominance and innovation of its tech giants but also to the sway that figures like Musk wield over the global policymaking landscape.
China, however, emerges as the most likely contender to challenge the US for overall dominance, a narrative I'll explore further in the days to come. Other regions are asserting themselves too, the UK among them, as this week's summit showed. But keep a close eye on the Gulf, particularly the UAE: it appointed an AI Minister as far back as 2017 and has backed Falcon 40B, one of the first state-sponsored open-source LLMs, which has performed impressively. Meanwhile, Saudi Arabia is also poised to make significant AI investments in the coming years.
As I often say, this is not just a tale of technology; it's a narrative that delves deep into the heart of humanity itself.
For now,
Namaste,
Nina
"Personally, I find this unrelenting fixation on 'AI death' rather counterproductive. It skews the discourse because it isn't the primary concern of the majority of AI researchers, and it paints 'AI' as an ominous force—almost as if 'it' were already a self-governing entity rather than a tool shaped by human hands."
I find this to be an... interesting take. My chronic concern over AI has hardly been that it will become sentient and decide that "the human is obsolete"; it is *precisely* that it is shaped and wielded by human hands.
The humans who built the Chernobyl reactors knew full well how unforgiving nuclear forces are, and still courted catastrophe on a continental scale in their efforts to cut costs.
We have let climate change slide farther and farther into disaster because the humans who profit from the activities that drive it have their checkbooks in government.
Facebook isn't destructive to the world because it is sentient - it's destructive because it makes far more money when tuned for maximum profit rather than maximum human concern, with damage ranging from teen mental-health harms to election interference in multiple democracies.
What scares me about AI is not sentience - it is that it is powerful, that it will only grow more powerful, and that it is wielded by humans who consistently place profit ahead of compassion and conflate market demands with human needs.