When OpenAI wrote its founding charter in 2018, it was a scrappy nonprofit trying to prove it could be trusted with one of the most consequential technologies in history. A lot has changed. On Sunday, CEO Sam Altman published “Our Principles”—a 1,100-word, five-point framework that reflects a company now valued at over $800 billion, deep in legal battles over its nonprofit roots, and competing hard against rivals it once promised to help. It quietly buries some of its oldest commitments and replaces them with something more expansive, and arguably more vague.

The short version: OpenAI wants AI in everyone’s hands, it’s no longer promising to step aside for safer rivals, and AGI—once the entire point of the company’s existence—has been quietly downgraded.
From AGI obsession to broad AI rollout
The 2018 charter mentioned AGI twelve times. The new document? Twice.

That’s not an accident. OpenAI is less concerned with artificial general intelligence than it was nearly a decade ago and is instead prioritizing a broader rollout of its technology. Where the original charter framed everything around the safe arrival of superintelligence, Altman’s 2026 version talks about distributing AI widely and letting society adapt in real time.

Altman even addressed the shift on his personal blog earlier this month, writing that AGI has a “ring of power” quality that makes people do irrational things—and that the only fix is to share the technology as widely as possible.
The five principles, quickly explained
The new document is built around five ideas: democratization, empowerment, universal prosperity, resilience, and adaptability.

Democratization means resisting AI power concentration—decisions about AI should come from democratic processes, not just lab boardrooms. Empowerment gives users broad latitude, though it ties that freedom to harm prevention. Universal prosperity links AI access to massive infrastructure buildout; OpenAI is no longer talking only about models—it is talking like a cloud company, an energy customer and a national industrial asset.

Resilience addresses genuine risks: bioweapons, cybersecurity, critical infrastructure. And adaptability—perhaps the most telling principle—explicitly leaves room for OpenAI to restrict access when it deems risks too high.
The commitment OpenAI quietly dropped
The 2018 charter had an unusual pledge: if a safety-conscious rival got close to building AGI first, OpenAI would stop competing and help them instead. The 2026 version skips mentions of sharing progress and stepping aside.

Instead, the new document acknowledges OpenAI is now “a much larger force in the world” and promises transparency about future changes—which isn’t quite the same thing as promising not to compete.

This matters given the context. OpenAI is currently in court over its shift from nonprofit to for-profit. Meanwhile, Anthropic—its most direct rival—recently surpassed it in some secondary market valuations.
Flexible principles are still principles, just looser ones
The document feels softer and more flexible, giving OpenAI more room to maneuver than before. The old charter used language like “we commit” and “we will.” The new one leans on “we believe” and “we envision.”

That’s not necessarily cynical—companies evolve, and OpenAI has changed enormously since 2018. But flexibility cuts both ways. A principle that can bend with evidence can also bend with market pressure.

What Altman has written is less a set of constraints and more a statement of intent: build a lot, deploy broadly, course-correct as you go. Whether that’s reassuring depends entirely on how much you trust the person doing the correcting.















