Immediate thoughts on the OpenAI leadership crisis. Did the OpenAI board decide that they don’t like snake oil, or is the geopolitics of AI working its influence?
On 17 November 2023, the OpenAI board of directors surprised the world and the markets by terminating Sam Altman as CEO. Chief Technology Officer Mira Murati has replaced him on an interim basis until OpenAI recruits a new CEO. The primary responsibility of a board is to appoint and remove the CEO, and a board relies on its CEO for honest information to make informed decisions. The board evidently found fault with Sam Altman, citing a lack of consistent candour (read: a lack of honesty) in his communications with the board, which undermined the board’s ability to fulfil its duties.
"No one should be trusted."
The Board’s Fiduciary Duties
The official announcement, which the board must have endorsed, stated, "The board conducted a deliberative review process and concluded that he [the CEO] was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."
As per the OpenAI press release, OpenAI’s board comprises Chief Scientist Ilya Sutskever and the following independent directors: Quora CEO Adam D’Angelo; technology entrepreneur Tasha McCauley; and Helen Toner of the Georgetown Center for Security and Emerging Technology.
The OpenAI board seems relatively modest in both size and experience for a rapidly expanding and highly successful company in the middle of an imminent funding round to top up the $13 billion already invested in it. A board this small risks fostering echo chambers and groupthink in its decision-making.
The FT reported on 19 November 2023: “The OpenAI board’s abrupt decision to oust Altman and demote Brockman on Friday has drawn attention to its unusual corporate structure and governance. That board oversees a non-profit entity that owns a for-profit company. Unlike a typical for-profit, where fiduciary duties are owed to shareholders, OpenAI’s board is committed to a charter that pledges to ensure AI is developed for the benefit of all humanity.”
Certainly, a board holds a fiduciary duty to both its mission and the company, and boards don’t casually oust their CEOs without reason. Releasing an official statement that directly questions the CEO’s integrity raises a crucial question: what decisions did the board take, and with what consequences, while it believed Mr Altman wasn’t being straightforward with it?
Why do all of us need to know? This technology isn’t just affecting our businesses, operating in our Word files and Microsoft Teams meetings through Copilot, which uses OpenAI technology; it also has broader implications for our society and people’s livelihoods. Everyone has a stake in getting an answer to this question.
Terminating the CEO
The reason behind Altman’s termination remains unclear, but it caught everyone off guard, including the markets, which registered a drop in Microsoft’s share price shortly after the announcement on Friday, 17 November 2023. Microsoft owns 49% of OpenAI and has reportedly invested more than $13 billion.
The announcement is particularly surprising given the recent success of DevDay. For the past four days, ChatGPT Plus has been unable to accept new paid subscriptions due to unexpected demand, leading to a temporary suspension of new sign-ups. The CEO’s dismissal, then, doesn’t seem to stem from poor performance or business-growth challenges. Instead, it may be attributed to a rift between the board and the CEO over the vision and approach to achieving Artificial General Intelligence (AGI), the mission OpenAI’s board reiterates it pursues.
The Wall Street Journal reported on 18 November 2023 that privately, Altman “told people after the announcement that he was shocked and angry about the board’s decision. He also said he felt it was the result of a power struggle between him and members of the board, according to a person familiar with one of the conversations. The change came as a surprise to Microsoft, which has invested $13 billion into OpenAI for a 49% stake. Microsoft’s top executives found out about Altman’s ouster minutes before the announcement, according to a person familiar with the situation.” Bloomberg reports that a number of AI researchers resigned shortly after Altman was dismissed, alongside Brockman, who was initially only demoted from the chairman role but then resigned from OpenAI altogether.
Points for Investors to Consider
On 19 November 2023, the FT reported that investors are urging OpenAI to reinstate Altman. Following his dismissal, Altman swiftly hinted on social media at new ventures, sparking realistic assumptions that he could secure ample financing to compete with OpenAI. That might not be such a bad idea, as more competition in this highly concentrated space can only be good news. Regardless, the OpenAI investors’ stance is strategic: bringing Altman back safeguards their investments in OpenAI by preventing him from becoming a competitor. Needless to say, the same investors can always invest elsewhere.
Taking a macro view, the OpenAI debacle should be seen as more than a typical CEO termination; it is an intricate affair with far-reaching consequences. It underscores three key questions, which I have coined the AI Trinity for Investors:
What’s under the hood? Reliance on AI systems built by OpenAI, or any single provider, should prompt broader questions about the design approach of AI solutions in general and the resilience of a business model. You cannot build a resilient, reliable company when everything sits with one provider (a minimal sketch of the alternative follows this list).
Where’s the invisible hand? The geopolitics of AI is reshaping the global allocation of power and the commercial interests of current players, especially cloud providers, whose revenues are tied to ever-growing compute power. Regulating AI can be used as a tool for carving out that influence. This dynamic will determine whether Altman’s return, if it happens, is driven by commercial pressures.
What is the founders’ philosophy on AI? Three schools of thought shape AI creators’ perspectives: humanism, transhumanism, and longtermism (linked to effective altruism). Investors should ask questions to establish which philosophy the founders subscribe to; the answer reveals what future society those founders are set on building. As an investor: do you want to sponsor their vision? It’s about more than financial returns. It’s about the future you are building for your children. Investors have a duty to know whether they are investing in empowering humanity or algorithmically enslaving it.
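To make the single-provider point concrete, here is a minimal sketch of a provider-abstraction layer with an ordered fallback chain, so that one vendor’s outage or policy change degrades the product rather than breaking it. Everything here is illustrative: the provider interface and its `complete` method are hypothetical placeholders, not any real SDK’s API.

```python
# Minimal sketch of provider abstraction with fallback.
# All class and method names are hypothetical placeholders.
from dataclasses import dataclass
from typing import Protocol


class CompletionProvider(Protocol):
    name: str

    def complete(self, prompt: str) -> str:
        """Return a completion for the prompt, or raise on failure."""
        ...


@dataclass
class ResilientCompleter:
    """Tries each provider in order, so one vendor outage degrades
    the service instead of breaking it."""
    providers: list[CompletionProvider]

    def complete(self, prompt: str) -> str:
        errors: list[str] = []
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except Exception as exc:  # rate limits, outages, policy changes
                errors.append(f"{provider.name}: {exc}")
        raise RuntimeError("All providers failed: " + "; ".join(errors))
```

The design point is that each vendor SDK is wrapped once behind a common interface, so switching or adding providers becomes a configuration change rather than a rewrite.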
I've consistently advocated for investors to employ the "AI Trinity" intellectual lenses during the due diligence process, considering these multifaceted aspects before making AI-related investments.
Where Are Reliability and Economic Value?
In a recent interview, OpenAI board member and current Chief Scientist Ilya Sutskever said the following, around five minutes in:
“Why were things disappointing in terms of real-world impact? My answer would be reliability. [...] Why didn’t things work out? It would be reliability. That you still have to look over the answers and double-check everything. That just really puts a damper on the economic value that can be produced by those systems.”
This statement underpins my message to my clients and in my public talks: generative AI, in its current technological approach, is not 100% reliable. As I said in a recent speech at a leading HR and payroll company’s offsite gathering: “if it is not 100% reliable in 100% of instances, then we need to carefully consider its applications and how we embed it into our business processes. Maybe it’s a business risk we don’t want to take.” I know my stance has disappointed the armchair philosophers and overnight AI experts, but this is the situation we find ourselves in. Leadership needs to know the unvarnished truth.
In recent weeks, the team at OpenAI has introduced a clear disclaimer about the ChatGPT application’s reliability: a new addition, and a clear sign that the issues persist. (A minimal guard-rail pattern for the enterprise is sketched below.)
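What does “look over the answers and double-check everything” mean inside a business process? One common pattern is to gate every model output behind deterministic checks and route anything that fails to a human. The sketch below is illustrative only; the function names and expected fields are hypothetical, and real checks would be specific to your workflow.

```python
# Minimal guard-rail sketch: no unverified model output flows straight
# into a business process. Field names are hypothetical examples.
import json


def checks_pass(raw_output: str) -> bool:
    """Deterministic validation of a model response before it is used."""
    try:
        record = json.loads(raw_output)        # 1. must be well-formed JSON
    except json.JSONDecodeError:
        return False
    required = {"invoice_id", "total", "currency"}
    if not required.issubset(record):          # 2. must have expected fields
        return False
    # 3. values must be plausible
    return isinstance(record["total"], (int, float)) and record["total"] >= 0


def handle_model_output(raw_output: str) -> dict | None:
    """Use a verified output; escalate everything else to human review."""
    if checks_pass(raw_output):
        return json.loads(raw_output)
    print("Output failed validation; routing to human review queue.")
    return None
```

The point is not these specific checks but the architecture: the model proposes, deterministic code and humans dispose, and nothing unverified touches the ledger.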
So, when the chairman of the board is also removed as a consequence of sacking the CEO, it would appear that the two stood on the other side of the “reliability argument” highlighted by the Chief Scientist, who clearly intimates that without reliability there is actually nothing. Without reliability there is no economic value, just smoke and mirrors. Such a finding gives investors and buyers of their solutions pause.
Without safety, there is unmitigated risk in AGI, which is OpenAI’s stated mission. Incidentally, Sutskever’s new priority is to figure out how to stop an artificial superintelligence from going rogue. This, too, should give us all pause to think critically about generative AI in the enterprise.
Selling vs Safety
Sam Altman, a softly spoken, savvy salesman known for closing lucrative deals, safeguarding investments, and delivering results, has shown a profit-oriented approach throughout his career. He achieved notable success, selling his first company, the location-sharing app Loopt, for over $40 million after dropping out of school. The Silicon Valley VC community admires Altman for a Midas touch that consistently generates substantial profits. His name is highly recognisable across the global political spectrum, and his personal brand precedes him.
Ilya Sutskever, on the other hand, is mostly unknown outside AI research, where he is highly respected, with an impressive track record of building what Altman sells at OpenAI. He emphasises the importance of reliability to the economic value of these systems. When members of the AI research community themselves highlight clear limitations, it should prompt closer scrutiny of what these systems can actually do. And you can see why I openly criticise decisions to dismiss staff based on a salesman’s claim that AI could replace their jobs.
Developments since Altman’s dismissal reveal a growing concern about the safety and reliability of OpenAI’s technology. Should these qualities prove absent, the legal and societal implications could be significant. While the OpenAI board prioritises the safe development of AGI, Altman and Brockman lean toward exponential growth and monumental valuations, potentially compromising the technology’s safety and reliability. The board appears to have reached the point where it deemed enough is enough.
What’s next?
No one can say for sure, and if you have served on a board you will know how rarefied the air is, and how volatility spikes, when firing the CEO. Ask any board director about the hardest task they have ever had to do, and invariably firing the CEO sits right up there.
1. Risks, Regulations and Governance
With Altman and Brockman’s departure and the evident schism within OpenAI, we should anticipate continued fragmentation in the AI sector. The force of creative destruction is to be expected in this nascent space.
The debate between open-source and closed-source AI, exemplified by Meta versus OpenAI, will intensify. Both factions will actively lobby governments to influence regulations in their favour (read in the Policy section about the current deadlock in Europe). The momentum behind AI regulation may wane, which could increase risks, particularly around autonomous AI agents within multi-agent ecosystems. In such an autonomous environment, technology is no longer neutral: it is optimised by its creators to reach an objective.
The magnification of AI risk arises from potential disagreements among these autonomous AI agents, posing significant threats to utility infrastructures such as banking, digital money, digital identity, electricity, shipping, and air travel: the foundational components of a well-functioning economy. Should we brace ourselves for crashes and malicious cyber attacks becoming the new normal? Can we afford the risk? What is plan B when a crash happens, for instance in the UK, where the government has recently announced that it won’t regulate AI?
2. Conflicts of Interest?
Despite his abrupt departure, Altman is already involved in various commitments, and one wonders if any may pose a conflict of interest with OpenAI. Notably, Altman owns shares in Worldcoin, a venture that uses iris scanning to amass biometrics in its digital identity database (currently at 2.1 million people and counting), aiming to distribute funds in a post-labour economy. Altman envisions a future where AI disseminates knowledge freely and, with surveillance extensive by implication, wealth is distributed based on needs. Interestingly, while advocating with OpenAI for AI to replace the “median human,” Altman simultaneously builds Worldcoin to address the very consequences OpenAI’s technology is poised to bring about: labour displacement.
If you listen carefully to Altman’s vision of the future, and the vision behind Worldcoin, it is reminiscent of reimagined communist ideals disguised as technological progress, a concept Marx and Lenin might appreciate. In this context, I wouldn’t discount the detail that OpenAI’s Chief Scientist was born in Soviet Russia in the 1980s and that his parents emigrated during what historians identify as the terminal stage of communism: times that were traumatic, painful and unfair to people. This alone would be sufficient to put the two men in disagreement. History may not repeat itself, but it rhymes, especially when observing the layers hidden beneath technological progress and the Fourth Industrial Revolution. Ilya Sutskever is the only member of the OpenAI board with direct links to an era no one wants to remember. He knows that romanticising a new-age communist society is not something you want to get close to.
Altman’s interests also extend to ventures such as Helion, a fusion startup aiming for net electricity by 2024, with a long-term goal of delivering electricity at 1 cent per kilowatt-hour. The age of cheap everything underpins Altman’s vision of the future. Additionally, he is part of a recently announced project with ex-Apple executive Jony Ive to create a more natural way to interact with AI, reportedly backed by over $1 billion in funding from SoftBank CEO Masayoshi Son.
Final Thoughts
Putting aside the drama surrounding OpenAI, has anyone delved into the motivations behind Meta, OpenAI, Anthropic, or Google’s pursuit of AGI? It seems that liberal democracies and the enterprise ecosystem are neither adequately prepared for it nor aligned with it. Do we need it? Do we want to fire 80% of staff and have the government create a new economic model to support them?
We all understand that these companies want to make even more money, and that they have to come up with the next best thing for us to buy so they become monumentally profitable, powerful and potent. We also understand that the big tech incumbents will do everything to protect their market share, including using their influence, the invisible hand, to determine outcomes in their favour. It’s a strategy incumbents often use.
As AI technology becomes more prominent in our collective consciousness, there will be people attempting to sell it to you. They are salesmen. They’ll hype it up and try to convince you. Don’t fall for it. In the enterprise context, exercising caution and honing your critical reasoning skills are crucial to distinguishing genuine value from mere hype. Salesmen will pitch anything, even knowing it might not be worth the price and could lead to trouble. The responsibility falls on you to ensure that what you spend your money on is worthwhile and genuinely necessary.
I consistently highlight the significance of “Algorithms Have Parents®,” a term I coined in 2017. The recent turmoil at OpenAI can be likened to parents having a falling-out. Once the dust settles, we may discover the full range of reasons behind not only OpenAI’s decision to part ways with its CEO but also the character and moral values of the people involved.