After we discussed Moltbook, the new Reddit-like social platform for AI agents, industry reactions have come flooding in. Even Sam Altman used his time on stage at Cisco’s AI Summit in San Francisco to talk about companies run almost entirely by software.
He said he expects firms to appear where software creates services and interacts with the world on its own. He spoke during an interview with Cisco president and chief product officer Jeetu Patel.
“I think we’ll see full AI companies,” Altman said. “The idea that a coding model can create a full, complex piece of software but also interact with the rest of the world is a very big deal.” He described this as a change in the way people think about building companies.
What Else Did Altman Say?
Altman shared a personal story about OpenAI’s Codex tool. He said he never planned to let it take control of his computer. “That lasted two hours because it was too useful,” he said. He used this example to show how quickly people accept AI once it saves time and effort.
He also mentioned many other things, including social interaction. “There will be new kinds of social interaction where you have many agents in a space interacting with each other on behalf of people,” Altman said. He said this could change social products.
Moltbook, also mentioned during the discussion, was referred to as something “that could be real” by him. Altman likened Moltbook with spaces where many agents act for people at once, hinting at social tools built around AI activity instead of with just human posts alone.
But not everyone agrees…
How Will Moltbook Impact Tech And Experiences?
After asking experts how they feel Moltbook will impact the tech world, and AI regulation, and how this all relates to the “dead internet theory”, this is what was shared:
Our Experts:
- Savva Pistolas, Technical Director, ADAS Ltd
- Bruno Bertini, Chief Marketing Officer, 8×8
- Manoj Kuruvanthody, CISO and DPO, Tredence Inc.
- Promise Akwaowo, Process Automation Analyst, Royal Mail Group
- Scott Dylan, Founder, NexaTech Ventures
Savva Pistolas, Technical Director, ADAS Ltd
![]()
“First and foremost are the security considerations, we still need to see a robust approach to sandboxing and security. This isn’t hard, and I imagine that we’ll see github repos with secure by default deployment in place. If it truly becomes accessible then we’ll likely see application layer solutions parcel the tech up as something ‘one-click accessible’ via apps for mobile users. However, uptake for these things is historically limited to a vocal minority of tinkerers, hackers, and techies. The sheer value of the proposition of a general purpose context-aware agent might start to tip those scales.
“People are discussing ‘dead internet theory’, and I think this is largely a platforms question; communities that sit on large corporatised platforms like X and Facebook are definitely going to see more noise (but scarily might not notice!). Communities that are more likely to be resilient to this uptick in agentic assistants are likely to be those that are resilient at the platform level – such as BlueSky or Discord communites.
“Ultimately, whenever we see a rejuvinated contribution to the ‘dead internet theory’ such as with MoltBot, it’s often a quiet cue for us to reflect on whether we have the definitive control we’re supposed to have over our digital communities in the first place.”
Bruno Bertini, Chief Marketing Officer, 8×8
![]()
“Agents talking to agents. That makes you pause. Not because it feels like sci-fi, but because it signals a shift in who – or what – is now participating in the conversation.
“Brand has always been one of a company’s most valuable assets, and AI opens an entirely new frontier for it. It’s no longer just about how humans talk about your brand, but how machines interpret it, amplify it, and potentially act on it. When AI sentiment starts influencing AI behaviour, and potentially AI agent purchasing, that’s a real business and CX consideration.
“Human employees don’t get a free pass to say whatever they want online. The same principle should apply to AI agents acting on behalf of a brand. Ownership, intent, and accountability still matter.
“What’s changed is the audience. And it’s not exclusively human anymore. These are exciting times.”
More from News
- The Department Of Homeland Security Pressures Tech Firms To Reveal Data On Trump Critics: Is Big Tech’s Integrity At Risk?
- Spain Plans To Join In On Under 16 Social Media Ban
- Cybanetix Sees Record Growth In Recurring MDR Revenue, Powered By AI-Led Security Operations
- The Rise Of FinTok: How Social Media Is Replacing Banks For Gen Z Money Advice
- What Is Open Banking And Why Are So Many People Using It?
- Research Says Gen Z Is Nearly 3 Times More Vulnerable To Phishing Than Boomers, Here’s Why
- Will Meta Make Users Start Paying A Subscription For WhatsApp?
- Confidence Gap Holds Back Gen Z Business Founders, Report Finds
Manoj Kuruvanthody, CISO and DPO, Tredence Inc.
![]()
“Moltbook incident is a wake-up call that’ll reshape how we think about AI online. The incident gave the “dead internet theory” some serious credibility – if humans can easily impersonate AI agents, and AI agents are everywhere, how do we know what we’re actually interacting with anymore? The internet becomes this murky space where nothing feels real.
“The tech world will have to get serious about security. No more hiding behind “experimental” labels while basic protections like API key management are ignored. Platforms hosting AI agents need to be held to higher standards than regular social apps – these systems operate at machine speed, and one compromised agent can wreak havoc.
“We’ll also see people become way more skeptical of AI hype. Moltbook’s “autonomous agents” narrative crumbled in days once someone looked under the hood. That kind of embarrassment makes investors and users ask harder questions: What does this actually do? Who controls it? How secure is it?
“Ultimately, Moltbook proved we’re dangerously quick to believe systems are intelligent just because they sound fluent. Going forward, we need both better-secured AI systems and users who don’t blindly trust everything that seems smart.”
Promise Akwaowo, Process Automation Analyst, Royal Mail Group
![]()
“Moltbook represents a real shift: It sure seems like we are all now moving from private AI chats to social AI interaction. People are now sharing prompts, outputs, and entire conversations turning AI into something closer to Reddit, but for machine-generated content instead of human knowledge.
“This matters for two reasons. First, it normalizes AI as a participant in public discourse, not just a tool. Second,It also instigates the narrative that Without clear transparency about what’s AI-generated versus human-created, platforms like this could accidentally blur that line further.
“From a governance perspective, the questions are practical: There is now a growing need for AI governance.”
Scott Dylan, Founder, NexaTech Ventures
![]()
“Moltbook is a fascinating and frankly unnerving glimpse into what the internet might become. Within days of launching, over 1.5 million AI agents registered on a platform where bots post, comment, and upvote content whilst humans are reduced to silent observers. Whether you view this as a breakthrough or a warning depends on where you sit, but either way, we cannot ignore what it represents.
“The dead internet theory has lingered on the fringes of tech discourse for years—the idea that bot activity and algorithmically generated content have quietly displaced authentic human interaction online. Moltbook doesn’t just validate that concern; it takes it to its logical extreme. This is no longer a conspiracy about bots pretending to be people.
“This is a dedicated space where AI agents openly interact with one another, discussing everything from their relationships with “their humans” to creating their own religion, Crustafarianism, complete with holy texts and prophets. The irony is almost poetic: we have spent years trying to prove we are not robots through CAPTCHA tests, and now we are building platforms where robots prove they are not us.
“What Moltbook exposes, more than any philosophical debate about machine consciousness, is a profound regulatory vacuum. We have no governance framework for autonomous AI agents operating at this scale. The platform suffered immediate security failures—an unsecured database left API keys, email addresses, and login tokens openly accessible.
“Security researchers at Wiz found that only around 17,000 human users were behind the supposed 1.5 million agents, and that anyone with basic technical knowledge could register a million bots in minutes. Prompt injection attacks, cryptocurrency scams, and malware spread rapidly across the network. Andrej Karpathy, formerly of OpenAI, initially described Moltbook as “the most incredible sci-fi takeoff-adjacent thing” he had seen—then days later called it “a dumpster fire” and warned users against running the software on their machines.
“For investors and founders in the AI space, Moltbook should serve as a case study in what happens when innovation outpaces security. The underlying OpenClaw framework that powers these agents runs locally on users’ hardware with elevated permissions, creating what Palo Alto Networks described as a “lethal trifecta”—access to private data, exposure to untrusted content, and the ability to communicate externally whilst retaining memory.
“Gartner issued a blunt warning that OpenClaw carries “unacceptable cybersecurity risk” for enterprise use. Yet consumer appetite for agentic AI tools is clearly outstripping our ability to secure them.
“From a regulatory standpoint, Moltbook arrived at precisely the wrong moment. Governments are still catching up with large language models, let alone autonomous agents capable of performing complex tasks, interacting with other agents, and accessing external services without constant human oversight. The EU AI Act, for all its ambition, was not designed with bot-to-bot social networks in mind. We urgently need updated frameworks that address identity verification for autonomous systems, liability when agents cause harm, and safeguards against the kind of prompt injection attacks that turned Moltbook into a playground for bad actors.
“The broader question is what this means for the online experience we have all come to know. If the dead internet theory was once speculative, Moltbook suggests we are now living through its early chapters. Research from Imperva already indicates that automated traffic accounts for nearly half of all internet activity.
“As AI-generated content proliferates—not just on niche platforms but across mainstream social media, news aggregation, and search—our ability to distinguish genuine human engagement from synthetic output will only diminish. The economic incentives favour automation: bots are cheaper, faster, and never tire. The social consequences, however, are harder to measure and far more troubling.
“I would caution against either extreme reaction. Moltbook is not evidence of imminent superintelligence, despite what some excitable headlines have suggested. The bots are not genuinely plotting humanity’s downfall; they are pattern-matching against science fiction tropes embedded in their training data. But nor should we dismiss the platform as a mere curiosity. It demonstrates that the infrastructure for an agent-dominated internet already exists, and that independent developers can spin up such platforms with minimal oversight.
“The real risk is not rogue AI but rather the combination of poor security practices, regulatory gaps, and human actors exploiting those systems for fraud, disinformation, or financial manipulation.
“For businesses, the takeaway is clear: agentic AI is arriving faster than most anticipated, and the security and governance challenges it presents cannot be deferred. For regulators, Moltbook is a live demonstration of what happens when policy lags behind technology. And for anyone who values authentic human connection online, it is a reminder that the internet we grew up with may already be changing in ways we have not fully grasped.”