At the center of every empire is an ideology, a belief system that propels the system forward and justifies expansion – even when the cost of that expansion directly contradicts the ideology's stated mission.
For European colonial powers, it was Christianity and the promise of saving souls while extracting resources. For today's AI empire, it's artificial general intelligence to "benefit all humanity." And OpenAI is its chief evangelist, spreading zeal across the industry in a way that has reframed how AI is built.
"I was interviewing people whose voices were shaking from the fervor of their beliefs in AGI," Karen Hao, journalist and bestselling author of "Empire of AI," told TechCrunch on a recent episode of Equity.
In her book, Hao likens the AI industry generally, and OpenAI specifically, to an empire.
"The only way to really understand the scope and scale of OpenAI's behavior … is to recognize that they've already grown more powerful than pretty much any nation state in the world, and they've consolidated an extraordinary amount of not just economic power, but also political power," Hao said. "They're terraforming the Earth. They're rewiring our geopolitics, all of our lives. And so you can only describe it as an empire."
OpenAI has described AGI as "a highly autonomous system that outperforms humans at most economically valuable work," one that will somehow "elevate humanity by increasing abundance, turbocharging the economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility."
These nebulous promises have fueled the industry's exponential growth: its massive resource demands, oceans of scraped data, strained energy grids, and willingness to release untested systems into the world. All in service of a future that many experts say may never arrive.
Hao says this path wasn't inevitable, and that scaling isn't the only way to achieve further advances in AI.
"You can also develop new techniques in algorithms," she said. "You can improve the existing algorithms to reduce the amount of data and compute that they need to use."
But that approach would have meant sacrificing speed.
"When you define the quest to build beneficial AGI as one where the victor takes all — which is what OpenAI did — then the most important thing is speed over anything else," Hao said. "Speed over efficiency, speed over safety, speed over exploratory research."
For OpenAI, she said, the best way to guarantee speed was to take existing techniques and "just do the intellectually cheap thing, which is to pump more data, more supercomputers, into those existing techniques."
OpenAI set the stage, and rather than fall behind, other tech companies decided to fall in line.
"And because the AI industry has successfully captured most of the top AI researchers in the world, and those researchers no longer exist in academia, you have an entire discipline now being shaped by the agenda of these companies, rather than by real scientific exploration," Hao said.
The spending has been, and will be, astronomical. Last week, OpenAI said it expects to burn through $115 billion in cash by 2029. Meta said in July that it would spend as much as $72 billion on building AI infrastructure this year. Google expects to hit up to $85 billion in capital expenditures for 2025, most of which will go toward expanding AI and cloud infrastructure.
Meanwhile, the goal posts keep moving, and the loftiest "benefits to humanity" have yet to materialize, even as the harms mount: job loss, concentration of wealth, and AI chatbots that fuel delusions and psychosis. In her book, Hao also documents workers in developing countries like Kenya and Venezuela who were exposed to disturbing content, including child sexual abuse material, and were paid very low wages, around $1 to $2 an hour, in roles like content moderation and data labeling.
Hao said it's a false tradeoff to pit AI progress against present harms, especially when other forms of AI offer real benefits.
She pointed to Google DeepMind's Nobel Prize-winning AlphaFold, which is trained on amino acid sequence data and complex protein folding structures, and can now accurately predict the 3D structure of proteins from their amino acids, a capability profoundly useful for drug discovery and understanding disease.
"These are the types of AI systems that we need," Hao said. "AlphaFold doesn't create mental health crises in people. AlphaFold doesn't lead to colossal environmental harms … because it's trained on significantly less infrastructure. It doesn't create content moderation harms because [the datasets don't have] all of the toxic crap that you hoovered up when you were scraping the internet."
Alongside the quasi-religious commitment to AGI has been a narrative about the importance of racing to beat China in AI, so that Silicon Valley can have a liberalizing effect on the world.
"Really, the opposite has happened," Hao said. "The gap has continued to close between the U.S. and China, and Silicon Valley has had an illiberalizing effect on the world … and the only actor that has come out of it unscathed, you could argue, is Silicon Valley itself."
Of course, many will argue that OpenAI and other AI companies have benefited humanity by releasing ChatGPT and other large language models, which promise enormous gains in productivity by automating tasks like coding, writing, research, customer support, and other knowledge work.
But the way OpenAI is structured (part non-profit, part for-profit) complicates how it defines and measures its impact on humanity. And that's further complicated by the news this week that OpenAI reached an agreement with Microsoft that brings it closer to eventually going public.
Two former OpenAI safety researchers told TechCrunch that they fear the AI lab has begun to conflate its for-profit and non-profit missions: that because people enjoy using ChatGPT and other products built on LLMs, this ticks the box of benefiting humanity.
Hao echoed those concerns, describing the dangers of being so consumed by the mission that reality gets ignored.
"Even as the evidence accumulates that what they're building is actually harming significant amounts of people, the mission continues to paper all of that over," Hao said. "There's something really dangerous and dark about that, of [being] so wrapped up in a belief system that you constructed that you lose touch with reality."