The Hero and the AI Apocalypse: How AI Founders Are Hedging Their Bets
Why do the people who are developing AI at breakneck speed keep warning that it could kill us?
Another week, another warning about the risk of extinction from AI. This time, the Center for AI Safety has gathered the great and good from the worlds of AI, science, and technology to support this statement:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Many signatories are devoted researchers and philosophers who have dedicated their lives to studying existential risks. I have the privilege of knowing some of them personally and deeply respect the contributions of others.
I don't doubt that most signatories genuinely believe in the risks they highlight. And let's be clear: those risks are not unfounded. AI could pose an existential threat to humanity even before we achieve AGI or superintelligence, i.e., before the so-called "robot takeover."
So, yes, we must take AI's existential risks seriously.
The Dilemma of Hero Founders 🦸‍♂️🤔
Yet as these statements come to dominate the AI debate, they are starting to ring hollow. The ‘Hero Founder’ who warns humanity about AI while keenly developing it is becoming a bit of a tiresome trope.
Founders and leaders from the likes of OpenAI, Google DeepMind, Inflection, Stability AI, Microsoft, and Google are among the signatories of this latest statement. Notable names include Sam Altman, Emad Mostaque, Bill Gates, and Demis Hassabis.
These Founders are the vanguard of the AI revolution.
Their companies and visions have played a pivotal role in commercialising, productising, and democratising (Generative) AI, particularly in the past six months, following the watershed 'ChatGPT moment.'
Talking the ‘good talk’ on the existential risks of AI while simultaneously engaging in an AI arms race to secure their dominance in the space... seems a little, I don't know, hypocritical?
Take Elon Musk. He signed the 'AI moratorium letter' in March, advocating a six-month pause on advanced AI development, while simultaneously declaring he was working on 'TruthGPT' and scrambling to buy as many GPUs as possible. A few days ago, he said GPUs are "harder to buy than drugs." (He didn't sign this latest statement.)
Saving humanity or profit?
Another intriguing case is OpenAI. Last week, I wrote about how Sam Altman had become the poster boy for Generative AI, feted and celebrated by global leaders — especially when he called upon policymakers to 'regulate' AI.
Yet, how far has OpenAI drifted from its founding mission to protect humanity from the existential risks of AI? It seems to me that the bottom line for the former non-profit these days... is, well... profit.
Historically, OpenAI has been heavily loss-making. In documents obtained by Fortune, its net loss, excluding employee stock options, was projected at $544.5 million in 2022. And the cost of running ChatGPT means those losses are soaring. (One estimate from several months ago put the cost of running ChatGPT at $3 million daily.)
Now, with ChatGPT developing into a commercial product, OpenAI is forecasting $200 million in revenue in 2023 and over $1 billion in revenue in 2024, according to the same documents seen by Fortune.
Regardless of how these projections were calculated, OpenAI (and all the other companies in the AI arms race) can smell the money. I mean, we all can, right?
For a few moments this week, NVIDIA joined the trillion-dollar club, thanks to its GPUs powering the Generative AI revolution. (Interestingly, I don't see its CEO, Jensen Huang, making statements about how AI could end humanity. I think it's a wise play.)
Meanwhile, Goldman Sachs predicts that Generative AI could boost global GDP by $7 trillion (nearly 7%) and lift productivity growth by 1.5 percentage points.
Hedging All AI Bets
So at this point, the cynic in me can't help but wonder if some of the signatories to such grand statements are merely hedging their bets.
If everything goes well, it's all gravy. However, in the face of a near AI apocalypse, or the more probable occurrence of some ‘bad sh**’ happening that doesn't lead to mass extinction, they can always claim: "Well... yes... but we WARNED this could happen."
This distinguishes them from the Founders of the social media platforms (hello, Zuck), who lacked such a defence. They naively proclaimed their inventions to be an unmitigated boon for humanity. Perhaps the Founders of the AI era are a bit savvier.
One final point: reducing this important existential debate to daily headlines about the AI apocalypse is counterproductive. It leads to panic and means we are losing sight of how AI is already reshaping the world in many practical, tangible, and risky (but not necessarily existential) ways.
Consider the increasing use of Generative AI across industries — how it is already impacting human productivity and creativity — and by that same token, the economy and, eventually, the labour market. Similarly, Generative AI is already shaping art, culture, politics, and human experience.
Simply whittling it down to 'AI can kill us all' is not helpful.
That’s why I will delve deeper into the nuance of this debate — interesting conversations coming soon… starting with Gary Marcus.
PS, a little bit of housekeeping: I have decided to keep EGAI a weekly commentary instead of a list of new developments. There are zillions of new AI newsletters that do the latter… I'll focus on the big-picture themes.
Enjoy the weekend wherever you are in the world….
Namaste,
Nina