"I expect some really bad stuff to happen"
OpenAI’s legal department is probably fuming over Sam Altman's recent statements | Edition #242
👋 Hi everyone, Luiza Jarovsky here. Welcome to our 242nd edition, trusted by over 81,600 subscribers worldwide. It’s great to have you here!
🔥 Paid subscribers have full access to all my essays and curations here, and can ask me questions to cover in future editions.
🎓 This is how I can support your learning and upskilling journey in AI:
Join my AI Governance Training [apply for a discounted seat here]
Strengthen your team’s AI expertise with a group subscription
Receive our job alerts for open roles in AI governance and privacy
Sign up for weekly educational resources in our Learning Center
Discover your next read in AI and beyond with our AI Book Club
“I expect some really bad stuff to happen”
In a recent podcast interview with a16z, the Silicon Valley venture capital firm, Sam Altman made some shocking statements that may directly affect OpenAI's liability in existing and future lawsuits.
A reminder that, between 2014 and 2019, Sam Altman was the president of Y Combinator, one of the world’s most famous startup accelerators.
My guess is that he is naturally comfortable talking to VCs and does not feel intimidated by them in any way. As a result, in this interview, he likely let slip more than his legal team would have liked.
The statements he made in it are probably as close as we can get to what he really thinks, plans, and expects. Watch the excerpt I am referring to here:
At minute 1:00 of the clip, one of the interviewers asks him about his current thoughts on AI safety. He replies:
“I do still think there are gonna be some really strange or scary moments... The fact that, so far, the technology has not produced a really scary, giant risk doesn’t mean it never will. (...) I expect some really bad stuff to happen.”
So let us get this straight:
The CEO of OpenAI, one of the world's most powerful AI companies, is publicly saying that he expects some really bad stuff to happen, specifically related to AI, the type of product his company is developing.
From a legal liability perspective, this is a bomb. I’m sure his legal department is extremely unhappy with this interview and has told him to avoid this type of statement and framing.
Why?
He is publicly saying that he expects a catastrophe.
At the same time, he is not saying he will immediately stop whatever his company is doing that could lead to this catastrophe.
So if or when a catastrophe or other AI-related harm occurs in connection with OpenAI’s products and services, he and his company will not be able to benefit from arguments such as:
“we could not have foreseen that”
“we had no idea”
“this was unpredictable”
“we have no responsibility”
“we could not have prevented it”
The typical corporate excuses used when facing liability claims will fall apart.
Another reminder that OpenAI is currently being sued by the family of Adam Raine, the teenager who committed suicide with assistance from ChatGPT.
In this horrifying case, ChatGPT actively helped the teenager plan a “beautiful suicide” (read more about the case and see screenshots from Adam Raine's interactions with ChatGPT here).
ChatGPT’s weak guardrails already led to completely inappropriate conversations with Adam Raine, who ended up taking his own life. There are likely millions of other people engaging in unsafe conversations because of those inadequate safeguards. Unfortunately, there will likely be more tragedies and more lawsuits against OpenAI.
Will OpenAI deny the victims’ or their families’ claims by saying that the harm was unpredictable or not preventable?
Well, it might try. But Sam Altman and OpenAI cannot have it both ways: they cannot benefit from publicly fetishizing an AI catastrophe while also saying that they could not have foreseen it or done anything to prevent it.
Make sure to save this video.
In the same interview, he also spoke about OpenAI's move into adult entertainment/pornography. What could possibly go wrong?