When will we take AI doomers seriously?
That’s a key subtext of Elon Musk’s attempt to shut down OpenAI’s for-profit AI business. His lawyers argue that the organization was set up as a charity focused on AI safety, and lost its way in pursuit of lucre. To prove that, they cite old emails and statements from the organization’s founders about the need for a public-spirited counterweight to Google DeepMind.
Today, they called their only expert witness to speak directly to AI technology: Stuart Russell, a University of California, Berkeley computer science professor who has studied AI for decades. His job was to provide background on AI and establish that the technology is dangerous enough to worry about.
Russell co-signed an open letter in March 2023 calling for a six-month pause in AI research. In a sign of the contradictions at play here, Musk also signed that letter, even as he was launching xAI, his own for-profit AI lab.
Russell told jurors and Judge Yvonne Gonzalez Rogers that there were a variety of risks associated with the development of AI, ranging from cybersecurity threats to problems with misalignment and the winner-take-all nature of developing Artificial General Intelligence (AGI). Ultimately, he said, there was a tension between the pursuit of AGI and safety.
Russell’s larger concerns about the existential threats of unconstrained AI did not get aired in open court: objections from OpenAI’s attorneys led the judge to limit his testimony. But Russell has long been a critic of the arms-race dynamic created by frontier labs around the globe competing to reach AGI first, and has called for governments to regulate the sector more tightly.
OpenAI’s attorneys spent their cross-examination establishing that Russell wasn’t directly evaluating the organization’s corporate structure or its specific safety policies.
But this reporter (as well as the judge and the jurors) will be weighing how much value to place on the connection between corporate greed and AI safety concerns. Nearly every one of OpenAI’s founders has strenuously warned about the risks of AI, while also emphasizing the benefits, trying to build AI as fast as possible, and hatching plans for AI-focused for-profit ventures they would control.
From the outside, one clear issue here is the growing realization inside OpenAI, after its founding, that the organization simply needed more compute spending if it was to succeed. That money could only come from for-profit investors. The founding team’s fear of AGI in the hands of a single organization pushed them to seek the capital that ultimately tore the organization apart, creating the arms race we know today, and bringing us to this lawsuit.
The same dynamic is already playing out at a national level: Senator Bernie Sanders’ push for a law imposing a moratorium on data center construction cites AI fears voiced by Musk, Sam Altman, Geoffrey Hinton, and others. Hodan Omaar, who works at the trade group the Center for Data Innovation, objected to Sanders citing their fears without their hopes, telling TechCrunch that “it’s unclear why the public should discount everything tech billionaires say except when their words can be recruited to fill gaps in a precarious argument.”
Now, both sides of the case are asking the court to do exactly that: take part of Altman’s and Musk’s arguments seriously, but discount the parts that are less useful for their own legal position.
Correction: This article was updated to correct the name of Stuart Russell, a University of California, Berkeley computer science professor.
