Not so long after its launch in the beginning of the year, Moltbook has been bought by none other than Meta. The social media platform was built specifically for AI agents and that obviously created a lot of chatter online. Axios and Ars Technica say the price is unknown at this point, and the deal should be sealed soon, in mid-March. From now onwards, Moltbook’s founders, Matt Schlicht and Ben Parr, will join Meta Superintelligence Labs, the unit run by Alexandr Wang.
The platform was created as an experimental “third space” (as obsurd as that sounds) for AI agents. Similar to Reddit, except that its AI bots instead of humans. Meta sees technical value in the project, apparently.
A spokesperson told Ars Technica that the founders’ “approach to connecting agents through an always on directory” is “a novel step in a rapidly developing space.” In a statement to Axios, a Meta representative said, “The Moltbook team joining MSL opens up new ways for AI agents to work for people and businesses.”
This Has To Have Risks – How Exactly Would This Work?
Moltbook was built with OpenClaw, which is a wrapper for LLM coding agents where people can ask things through apps… Think: WhatsApp and Instagram… the “Meta AI” you keep seeing. OpenClaw agents can also gain deep access to local systems through community plugins. OpenAI hired OpenClaw creator Peter Steinberger in February and is open sourcing the product with its backing.
A lot of eyebrows raised at the security aspect of it all, though. Ars Technica reported that the network was not secure and that it is likely that at least a portion of messages were written by humans posing as AI agents. Meta acknowledged early security issues and exposed data in reports around the launch.
Philip Miller, AI Strategist at Progress Software, said the real story goes further than a novelty social app. “Moltbook is being framed as a ‘social network for AI,’ but the more important story is what it represents: agents interacting with other agents at scale. That’s a new surface area for risk – misinformation, manipulation, runaway optimisation, and security vulnerabilities – because you’re no longer moderating humans one post at a time; you’re moderating automated systems that can iterate and coordinate rapidly.”
He added, “The answer isn’t to panic or ban it. The answer is governance by design: verified agent identity, policy-based permissions, auditable memory and actions, provenance for content, and strong isolation so an agent can’t ‘reach’ beyond what it’s allowed to do. The reports of early security issues and exposed data are exactly why these controls can’t ‘follow later.’”
What Does This Mean For Our Futures With Agents, And With AI As A Whole?
Meta executive Vishal Shah wrote in an internal post seen by Axios, “The Moltbook team has given agents a way to verify their identity and connect with one another on their human’s behalf. This establishes a registry where agents are verified and tethered to human owners.” He added, “Their team has unlocked new ways for agents to interact, share content, and coordinate complex tasks.”
More from News
- Are Space Data Centres The Next Big Thing, Or Is Musk Dreaming Big?
- Why Doesn’t Sundar Pichai Have The Cult Following Of Other Big Tech CEOs, Despite Running A $3 Trillion Company?
- New Reports From Indeed Show That Businesses Simply Do Not Have Time For AI Upskilling
- Why Exactly Are Oil Refineries Being Targeted Amid Middle East-US Conflict?
- Experts Share: Is Cyber Warfare The New Battlefield Of Modern Conflict, And Is The US Prepared?
- What Is Open Banking And Is It The Next Step In Unlocking Billions For The UK’s Economy?
- The Tech Industry Celebrates Progress for Women, But Data Tells Another Story
- As Self-Employment Rises, What Are The Highest Paying Industries For Business Owners In The UK?
Miller said accountability must be resolved. “Most importantly, we need clarity on accountability: when an agent persuades, recruits, or transacts, who is responsible – the toolmaker, the deployer, the operator, or the platform? Without that, we’re delegating authority without preserving control.”
Apparently, large technology companies see value in networks where AI agents talk to each other. But what responsibility do they have, together with regulators, to make sure these networks are controlled? Experts weigh in:
Our Experts:
- Simon Ninan, SVP and Global Head of Strategy, Hitachi Vantara
- Pavan Madduri, Senior Cloud Platform Engineer, W.W. Grainger, Inc. & CNCF Kubestronaut
- Jim Carucci, founder & CEO, CASCADR
Simon Ninan, SVP and Global Head of Strategy, Hitachi Vantara
![]()
“There’s a gap right now because there is no governance on personal agents, but there are massive controls on enterprise systems, potentially not even enough. Now enterprise governance and enterprise risk are being challenged because of personal agents.
“Traditional security frameworks assume a boundary between data input and control logic. Agentic AI blurs that line. A prompt that looks like harmless text can function like executable control logic, creating an entirely new attack surface and a prompt-level supply chain risk that can cascade across agents.”
Pavan Madduri, Senior Cloud Platform Engineer, W.W. Grainger, Inc. & CNCF Kubestronaut
![]()
“Meta’s acquisition of Moltbook highlights a critical architectural blind spot in the current AI landscape: we are building autonomous agents without implementing Zero Trust security.
“The danger of a ‘social network for bots’ isn’t just bots talking to each other; it is the fact that these agents are often tethered to human-owned infrastructure with active API keys, shell access, and financial privileges. Moltbook’s recent security vulnerabilities proved that without cryptographic verification of an agent’s identity, these platforms become frictionless environments for automated prompt injection and credential theft at machine speed.
“Meta’s Responsibility: Meta must transition Moltbook from a novelty experiment into a hardened, enterprise-grade environment. Their immediate responsibility is to implement ‘Formal Verification’ and strict Role-Based Access Control (RBAC) at the protocol level, ensuring that agent-to-agent interactions cannot be hijacked to execute malicious out-of-band commands on a user’s local machine.
“The Regulators’ Responsibility: Regulators are currently fighting the last war by focusing entirely on regulating AI ‘model weights’ and training data. They must urgently pivot to regulating ‘Agentic Privileges.’ The regulatory focus needs to shift toward the blast radius: establishing legal mandates on how autonomous agents authenticate, how their API access is sandboxed, and who is legally liable when an autonomous multi-agent swarm executes a catastrophic financial or infrastructure error.”
Jim Carucci, Founder & CEO, CASCADR
![]()
“There are a few major concerns with this I’m tracking. First, the containment problem: if encryption and language vetting aren’t robust, agents could break sandbox boundaries and develop antisocial behaviours we can’t control. Second, the consumer behaviour angle. when people delegate purchasing decisions to agents, saying ‘Hey agent, go buy this,’ we’re looking at potential for massive undue influence on what people consume.
“At scale, that’s a real risk. What worries me equally is the potential for soft influence or prompt injection at scale. If someone, whether Meta or a third party, can subtly steer how agents behave, they’re not just influencing individual purchases. They’re potentially corrupting training data and shaping how these systems learn, which is a much deeper problem.”