The 'defenestration' of Sam Altman and the schism in AI
If 'God' himself is vulnerable, can the philosophical differences in AI development ever be bridged?
Happy Monday all,
Lots of informative/speculative takes on the biggest story in tech over the last few days, which came in three acts:
1. Sam Altman's 'defenestration' by OpenAI's Board;
2. A showdown played out live on X over whether he would be reinstated (with lots of heart emojis);
3. And finally, a Satya Nadella 'flex': recruiting Sam and Greg Brockman to lead a new AI research unit within Microsoft.
Apart from the effect this will have on the AI ecosystem at large, on Microsoft's dominance in AI development, and, more specifically, on the future of OpenAI -- I was struck by something far more 'human' in this story.
Ultimately, it boils down to a philosophical/moral disagreement amongst a small coterie of people (fewer than ten?) who happened to be at the helm of a company that's changed the world's perception of AI at mind-numbing speed.
This was their moment. 🚀
And yet, and yet. They fell out in the most human of displays: a disorganised clusterf****.
Despite all the meetings with starry-eyed leaders (I jokingly call Sam Altman 'God'), the billions of dollars invested, and the adulation of millions, Sam Altman is just one man, brilliant though he may be. This weekend, the man exalted as a divine tech oracle suddenly became very human and vulnerable.
This reminds me of how lonely it is at the top of AI development. As the field accelerates, it is worth remembering how strongly these 'lonely few' disagree on fundamental approaches to AI.
Indeed, the number of people who can *really* develop frontier AI systems -- and decide whether to commercialise or 'contain' them -- is infinitesimally small. So small that a falling-out between a mere handful of colleagues reverberates around the world.
How do we interpret the fact that so much influence is wielded by so few? And can we temper the almost messianic zeal we feel for superstar researchers/founders and consider their fallibility, too?
Ultimately, this is why I say AI boils down to people and power.
With such huge incentives and the technology's seeming inevitability, I believe that AI development will not slow down. Similarly, considering the BIG personalities -- and their even stronger convictions -- I don't see an end to the philosophical schisms in AI.
The biggest takeaway we can glean from OpenAI's meltdown is that commercialising AND containing these systems will become *politically* (even if not theoretically) incompatible as the differences between the pioneers of AI development become even more pronounced.
If 'God' himself is vulnerable, can the philosophical differences in AI development ever be bridged?
"Indeed, the number of people who can *really* develop frontier AI systems; whether to commercialise or 'contain' them - is infinitesimally small." -- I am not so sure about that. Yes, the success and attention that OpenAI got is unprecedented, no doubt. But the development of frontier AI systems is a global phenomenon. It is not happening in a vacuum, and it is not being driven by any single group of people. There are many different stakeholders involved in the development of AI.