What does the OpenAI and Sam Altman Drama Mean for Responsible AI?
The events of the last few weeks have been a sh*tshow, but some good will likely come out of them too.
I didn't want to write this column. There has been so much ink spilled about the OpenAI and Sam Altman drama that I'm sick of letting Sam Altman live rent-free in my head. But since this drama is sucking all the air out of the room, I suppose I don't have much of a choice. I'm not going to spend any time talking about what happened - there are about thirty thousand other articles out there that cover that. I'm going to focus on the outcome of the drama, and the question: What does this mean for responsible AI and AI governance?
Setting the stage
I can't completely get away from a rundown of what happened, because it's important for my points later on. So here goes: On a Friday night, the OpenAI board fired their CEO, Sam Altman. The press release said that Altman had been misrepresenting things to the board, and they couldn't trust him so they were firing him, effective immediately.
Chaos ensued: There were high-profile resignations in support of Altman. A legion of tech bros (without a stake in the situation, I might add) from around the world rallied to Altman's defense on social media. Then Microsoft offered Altman a job that he accepted. Shortly after, OpenAI employees wrote a letter, signed by most of the staff, demanding Altman be reinstated and the board fired. The drama ended with Altman walking away from the Microsoft job offer to return to his post as CEO at OpenAI, while most of the board was fired and replaced.
This action is significant because the board's unique structure was explicitly designed to govern OpenAI and ensure that AI was developed ethically and responsibly. Firing the board for following its mission undermines the narrative that companies like OpenAI can self-regulate.
Notably, the only two women on the board were replaced with men who are likely to be sympathetic to Altman and unlikely to hold him to account in the future. Altman is now, for all intents and purposes, free to do whatever he likes without oversight or accountability. The allegation that he lied and misrepresented things to the board was lost in the resulting chaos.
For the detailed version, check out the article from Max Read.
Friends, this is what we call a Grade A shitshow.
What does this shitshow tell us about AI Ethics, Responsible AI, and AI governance? A lot, mostly about trust. First, we've seen first-hand that we can't trust companies to self-regulate; second, organizations shouldn't trust Sam Altman and OpenAI; and finally, those first two points are going to result in more governance, which will create more responsible AI.
The AI Industry Can't Self-Regulate
This situation was an attempt at self-regulation, and a catastrophic failure of it. The board was concerned about Altman's risky behavior. They didn't trust him to be honest, so they took a self-regulatory action, and the action failed. Instead of the desired outcome of responsible oversight, the board was forced to resign and Altman was reinstated with no accountability for his behavior and no deterrent against similar behavior in the future. The system was designed to self-regulate, but self-regulation only works when the entity being regulated adheres to it. The folks at OpenAI ignored their governing system because it was inconvenient to them.
The self-regulation system failed because the people attempting to enforce the rules were terminated as they tried to enforce them.
We can anticipate swifter and more restrictive regulations to emerge from legislative and regulatory bodies as a result of this situation. In the last few days, the UK submitted an AI regulation bill, and the UN ECE recently supported increasing AI governance. Additionally, both the Chinese and Indian governments are closely watching the situation as it unfolds, with some trepidation. While US regulators have yet to respond, we can be confident based on past actions that they are closely monitoring and evaluating the situation.
The Situation Highlights the Risk for OpenAI's Customers
The events of the last two weeks demonstrated that OpenAI and Sam Altman are unpredictable, unstable, and, as a result, high-risk. With the board's unsuccessful attempt to fire Altman, he is now unchecked and unaccountable, which makes partnering with OpenAI riskier with him at the helm. Altman's cavalier attitude and zero accountability mean he's likely to ignore any regulation in pursuit of his white whale. That lack of accountability for OpenAI's and Altman's actions leads to a single conclusion: you can't trust OpenAI and Sam Altman.
If Altman can't be trusted to be honest with his board, how can any customer expect him to be honest with them, or to abide by their contracts? Can anyone believe him when he says confidential and IP data isn't being used to train his model? He had no problem lying to his board, so he'll have no problem lying to you. If something doesn't suit him or gets in the way of his ambitions, he will ignore it and move forward regardless of the consequences. Sam Altman and OpenAI represent an unquantifiable, unacceptable risk because you cannot trust what they say.
AI Governance and Responsible AI
It seems counter-intuitive, but this situation has likely been a net positive for responsible AI and AI ethics. Whether he realizes it or not, Altman's actions have driven the responsible AI movement forward. Regulators were already skittish about AI, and seeing the lack of accountability for OpenAI and Altman is only going to increase that skittishness. We'll see more regulations coming out sooner to address the gaps in the current governance approach. Businesses aren't going to want to take on the risk of unaccountable and unpredictable AI tools. We will likely see more due diligence requirements in contracts as businesses try to minimize or remove the risk of uncontrolled AI development, stronger contractual obligations to adhere to a responsible AI framework, and repercussions for violating those obligations.
Regulation doesn't happen quickly, but businesses will adjust much more quickly. We've already seen requirements in contracts for the responsible use of AI, and we'll very likely start seeing a lot more. Between the upcoming regulations and contractual requirements, we will see growing demand for AI governance professionals - those responsible for making sure that companies are using AI responsibly.
There are some downsides to regulation. Regulators are unlikely to get it “right” the first time, which means that we will go through some growing pains as we collectively figure out what works, what doesn’t, and how we can move forward together. This will naturally also slow AI development a bit. While I know some folks view this as anathema, I don’t think it’s bad to slow down a smidge or two.
But what about Q*?
First - read this post from Gary Marcus. The important thing to remember here is that AI hype has been at a fever pitch for over a year. People who don't understand the tech make bold claims for the clicks. Q* is no different. No, it's not AGI. We know virtually nothing about it, so any assertions that we've achieved AGI are just guesses.
But it's plausible, or even probable, that the Q* leak was intentional - a way to get the news cycle to move on from the drama, and from the realization that Altman did a really bad thing and absolutely nothing bad happened to him.
I write Byte-Sized Ethics for exactly this situation. There’s so much to unpack, so many considerations, and so many stakeholders. Please consider subscribing to support my work, help me remain independent, and offer more Responsible AI and Tech content in the future.