Last week, I discussed the Hero Founder, and now I want to delve into another emerging AI archetype: the 'Political Savior' and the growing toxicity surrounding AI.
It's no surprise that AI has become a political issue. As I've been arguing, the AI revolution is inherently political, transforming society on a macroeconomic level.
Narratives surrounding AI are becoming increasingly politicized as well. It's astonishing how quickly 'AI' has become the trend du jour in geopolitical circles. Suddenly, leaders without a cause have found a shiny new object to seize.
AI's pervasive nature allows it to be spun to support almost any narrative. However, the most popular one by far is the 'existential' narrative: the idea that AI will bring about our demise. As I mentioned last week, this narrative is far from the reality of what is happening.
The Doomsday Narrative
Global politicians are not immune to simplistic analyses of AI. They may be even more susceptible to them, because the doomsday narrative presents an opportunity to emerge as the saviors of humanity.
I have no issue with politicians embracing AI; leadership is essential in this domain. But saying so merely states the obvious, like observing that 'the sky is blue.' Any competent leader must recognize that AI is more than a potential 'extinction risk' and should be well-informed about its broader utility. In short, true global AI leadership requires more than empty rhetoric.
Unfortunately, the 'Political Savior' types often rely on hyperbole rather than substance.
‘Tech Bro Sunak makes drama out of AI crisis’
Consider Rishi Sunak, the Prime Minister of the UK. By all accounts, he’s a smart guy who gets ‘tech.’ Yet this week the Times of London summed up his AI interventions with the headline ‘Tech Bro Sunak makes drama out of AI crisis.’
As the UK battles a sluggish economic outlook and uncertainty about its place on the global stage, AI is emerging as a panacea for all its ills.
On the one hand, the UK is keen to become a beacon for AI innovation. Sunak was among those who courted OpenAI CEO Sam Altman (i.e., ‘God’) the other week, and the British government announced the creation of its own AI Taskforce with £100m in funding for the development of foundation models.
On the other hand, Sunak is not immune to the trend of grandstanding on AI's existential risks. He’s getting increasingly vocal about them, following in the footsteps of Musk et al. (it’s all in vogue these days).
This week, on a visit to the US, came another flourish: Sunak, lobbying Biden for a ‘leadership role’ on AI, announced that the UK will host the world’s first summit on the existential risks of AI. This is reportedly part of a strategy to position “the UK as the natural hub for efforts to regulate the industry on a global scale, one that can provide a bridge between the US and China.”
Hmmm.
I wonder if these global AI summits will end up like climate change summits: good for PR but lacking concrete action and often detached from reality. I hope that the UK's AI Summit brings something meaningful to the table and that Rishi resists the temptation to become a Political Savior.
We will see.
But in the meantime, here are five recommendations I would put to any global leader thinking about AI:
Take the tenor down: Feeding into this kind of moral panic is irresponsible.
Break down and articulate the risks (many non-existential) such that they can be tackled appropriately instead of falling back on the ‘extinction risk’ default.
Use political capital on breakthrough discoveries: lead international collaboration (as with the Covid vaccines) in fields where AI has the potential to transform humanity, e.g., drug discovery, disease prevention, and climate change.
Develop internal AI capacity to expedite, automate, and streamline government processes and to improve efficiency and insight.
Allocate funds for AI R&D and knowledge work. Beyond funding AI research, start thinking seriously about redesigning education and learning on the understanding that AI will transform all knowledge work.
Toxic AI
One consequence of all this grandstanding on AI is that it is increasingly being ‘toxified.’
No wonder, then, that when Apple unveiled its Vision Pro headset this week - a device that Generative AI will undoubtedly power - along with a bunch of AI-powered features, it didn't mention AI at all.
Right now, AI equals scary, bad, and even death. Who can blame Apple for giving it a wide berth?
The extinction threat of AI reminds me of the (likely mythical) story of audiences fleeing the theatre at the first screening of a film of a train pulling into a station. It’s a convenient headline.
With his tech credentials, this should be an open goal for Sunak - to be an adult about the whole thing and not just spout the obvious headline. As you say, we’ll see.
As an attorney, I constantly deal with the conundrum of needing to view data sets as large, fluid masses of information and, at the same time, as individual expressions of ideas and communications. For example, whether something is a piece of evidence or a communication protected by attorney-client privilege is a categorization I need to make for my clients. But that’s not how data is created, stored, or used. Data is by definition plural and typically too voluminous to consider... unless you have technology to assist.
So yeah, AI often gets data categorization wrong. But as a first cut, I’m betting we get to a point where it’s the only possible way to slay the dragon.
And I’m betting others are feeling the same sense that this is tech we need AND need to manage.