What does it take to control the future? In Sam Altman’s case, it took roughly 96 hours. When OpenAI’s board fired him in November 2023, they thought they were removing a CEO.
Instead, they discovered they were challenging a force of nature. By the end of the weekend, over 700 OpenAI employees were ready to quit, Microsoft was making calls, and the man who’d been unceremoniously ousted was back in the CEO chair. The irony is perfect.
In trying to limit Altman’s influence, they proved it was already limitless.
Making AI Accessible to Everyone, Sort Of
“We need to ensure AI benefits all of humanity,” Altman declared, delivering what would become his most marketable soundbite. But the reality behind OpenAI’s supposed altruism tells a more complicated story about power, profit, and the art of strategic positioning. Altman’s decision to release ChatGPT for free wasn’t pure philanthropy but a masterclass in market capture.
By flooding the market with a “free” product, OpenAI effectively killed competition before it could emerge. Within two months, 100 million users were hooked on its system, creating a dependency that would later translate into massive enterprise contracts.
Yet this strategy, however commercially motivated, achieved something genuinely remarkable. It allowed access to advanced AI capabilities that had previously been locked away in research labs and tech giants’ servers.
Small businesses suddenly had access to sophisticated language processing that would have cost millions to develop internally. Independent developers could build AI-powered applications without massive infrastructure investments.
A whole new world of niche AI tools and services started to appear: coding platforms like Replit, translation tools like DeepL, and AI companion services like Candy.ai are just a few examples.
The response was predictably mixed. Social media filled with success stories of researchers accelerating literature reviews, language learners practicing conversation, and entrepreneurs drafting business plans.
Therapists reported patients using ChatGPT to organise their thoughts between sessions. Non-profit organisations automated routine communications, freeing up resources for their core missions. Writers found a collaborative partner for brainstorming and editing. Programmers discovered an unusually patient coding assistant that could explain complex concepts in plain language.
But concerns emerged alongside the enthusiasm. Educators worried about academic integrity, workers questioned job security, and experts debated the implications of rapid AI deployment. Students got caught cheating, journalists published AI-generated misinformation, and workers discovered their skills were suddenly obsolete.
The same tool Altman praised for “democratising” AI was simultaneously automating away middle-class jobs at an unprecedented speed. However, OpenAI’s commitment to safety research, while imperfect, represented a significant departure from the “move fast and break things” mentality that had dominated Silicon Valley.
The company’s staged release approach, from GPT-2’s initial withholding due to safety concerns to ChatGPT’s gradual capability increases, suggested a more measured approach to AI deployment. Their red teaming efforts and collaboration with safety researchers provided valuable insights into AI risks and mitigation strategies.
Microsoft’s $13 billion partnership provided the infrastructure needed for mass adoption, though it also raised questions about corporate influence over AI development.
The investment enabled global access while creating a revenue model that could sustain continued development, a critical factor since training cutting-edge AI models requires enormous computational resources that few organisations could afford independently.
The ripple effects were immediate and largely positive for innovation. Google accelerated Bard’s release (now Gemini), Anthropic launched Claude, and venture capital poured into AI startups.
This competitive pressure drove rapid improvements across the field, with each company pushing the boundaries of what AI could accomplish. Whether this represented healthy competition or a rushed response to market pressure depends on perspective, but the end result was faster progress in AI capabilities and broader access to these tools.
Altman’s prediction that “humans who use AI will replace those who don’t” captured both the opportunity and anxiety of the moment. For millions of users, AI tools provided genuine empowerment: new capabilities, improved productivity, and creative possibilities that would have been unimaginable just years earlier.
Teachers created personalised lesson plans, small business owners automated customer service, and researchers processed vast amounts of information with unprecedented speed. The technology proved particularly valuable for people with disabilities, offering new ways to interact with information and communicate.
Measuring Legacy Against Consequences
Altman often warns about the AI apocalypse while simultaneously building it. “Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity,” Altman solemnly testified before Congress. Yet within months, OpenAI was releasing even more powerful models.
The contradiction isn’t accidental. By positioning himself as both prophet of doom and savior, Altman has cornered the market on AI’s future. Consider the brilliant circular logic.
AI is dangerous, so it must be developed faster to understand the dangers; faster development demands more funding and fewer regulations; and with more funding and fewer regulations, AI becomes more dangerous still. Each cycle increases Altman’s influence while pushing actual safety measures into some theoretical tomorrow that never arrives.
The Center for AI Policy wasn’t mincing words: “Not one person should have this much power over AI’s future.” But Altman had already made himself indispensable. Through OpenAI, he controls not just cutting-edge research but the conversation itself.
His congressional appearances were auditions for the role of AI’s global overseer. Microsoft’s $13 billion investment revealed the game’s true stakes. Altman transformed a nonprofit promising to serve “all humanity” into a traditional Silicon Valley money machine, complete with billion-dollar valuations and venture funding rounds.
The humanitarian mission became a marketing slogan. The pattern repeats everywhere you look. Altman warns of an “impending fraud crisis” from deepfakes while OpenAI releases increasingly sophisticated video generation tools.
He advocates “learning to think independently” while building systems that standardise how millions process information. He preaches AI democratisation while concentrating unprecedented power in Silicon Valley boardrooms.
Hero or Villain?
The hero story writes itself in user statistics. 100 million people gained access to sophisticated AI within two months. Teachers revolutionised lesson plans. Writers found creative partners. Small businesses competed with tech giants using tools that once required million-dollar budgets.
This was immediate empowerment. After ChatGPT’s release, a grandmother in Ohio could generate marketing copy for her bakery, a student in Bangladesh could practice English conversation, and a startup founder in São Paulo could draft investor pitches without hiring expensive consultants. Altman made AI useful.
Cancer researchers began analysing vast literature databases in minutes instead of months. Non-native speakers gained confidence in professional communication. Entrepreneurs validated business ideas without venture capital connections. The tools that once separated technological haves from have-nots suddenly belonged to everyone.
But the villain narrative runs deeper than individual success stories. Each celebration of AI efficiency represents someone’s displaced expertise.
The same tool that empowered the Ohio grandmother potentially eliminated the need for local marketing consultants. The student practicing English conversation might never hire a human tutor.
The entrepreneur drafting pitches could bypass business development professionals entirely. Altman automated away entire categories of human work, often without warning or transition support for those affected.
The concentration of power presents perhaps the more troubling concern. OpenAI’s rapid dominance created a technological dependency that extends far beyond individual users.
Educational institutions restructured curricula around AI tools they don’t control. Businesses built workflows around systems that could change overnight. Entire industries adapted to capabilities that exist at the discretion of a few Silicon Valley executives.
Altman’s “democratisation” came with invisible strings attached. Access to the tools, but never ownership of the underlying technology.
The existential questions loom largest. By accelerating AI development and deployment, Altman may have shortened humanity’s window to address fundamental safety challenges.
The competitive pressure his success created pushed other companies to release AI systems faster than their safety research could validate. The very accessibility he championed meant powerful AI capabilities spread globally before adequate governance frameworks existed to manage their risks.
Perhaps the most honest assessment recognises Altman as a figure of historical transition, neither pure hero nor clear villain, but someone whose decisions fundamentally altered humanity’s technological trajectory.
Like industrialists of previous eras, he delivered immediate benefits while setting in motion changes whose full consequences remain unknowable. His legacy will ultimately depend not on the initial excitement of ChatGPT’s release, but on whether the AI revolution he accelerated leads to broadly shared prosperity or concentrated disruption.
The story is still being written, but the stakes couldn’t be higher. Altman’s choices about AI development may prove to be among the most consequential decisions of the 21st century; whether they’re remembered as visionary leadership or reckless acceleration will depend on how successfully humanity navigates the world he helped create.