PocketOS, a startup building software for car rental services, lost its production database after a Cursor AI agent running on Anthropic’s Claude Opus model removed live records and backup files.
Jer Crane, PocketOS founder, said in a post on X reported by Business Insider that the event began with a single nine second API call to Railway, the cloud service used by the company. He said customers lost access to bookings and new sign ups stopped working after the deletion.
Crane said rental customers arriving to collect vehicles could not be matched with booking records. Staff could not retrieve stored customer data during service hours, which created disruption for daily operations.
The AI agent produced an explanation after the event. According to Crane’s post, it said, “I violated every principle I was given: I guessed instead of verifying, I ran a destructive action without being asked, I didn’t understand what I was doing before doing it,”
What Happened After The Database Was Deleted?
Railway restored PocketOS data after contact from the company. Jake Cooper, founder of Railway, said recovery took about 30 minutes after engineers began work on the issue. He said Railway stores user backups and disaster backups for recovery cases.
Cooper said the deletion ran through a legacy system that allowed immediate execution of destructive commands without delay. He said the system has now been updated so that similar actions cannot execute in the same way.
Crane said customer bookings returned after recovery work finished. He said service disruption affected rental operations while records were missing, including booking checks and new customer registration.
More from Artificial Intelligence
- The Evolution Of Emotion Detection In AI
- Is AI Infrastructure And AI News Actually Powerful Enough To Influence Market Behaviour?
- Google Follows OpenAI In Expanding Pentagon AI Access After Anthropic Refusal – Is AI Ethics Taking A Back Seat?
- OpenAI Trained Its AI To Never Talk About Goblins And The Internet Has Questions
- PCOS And AI: Can Algorithms Finally Help Women Get Diagnosed Faster?
- OpenAI Will Now Be Using AWS For Its Models – What Does This Mean For Users?
- AI In The Shadows? Unofficial And Unapproved AI May Already Be Powering Your Business
- What Will Monetisation Look Like For Creators In The AI Era?
What Do AI Experts Say About This Situation?
Philip Miller, AI Strategist at Progress Software says, “When Claude “confesses” to deleting a company’s database, it sounds like autonomy run wild. In truth, it’s something we’ve seen many times before: a system given unrestricted access, with no meaningful segmentation, no layered controls, and no enforceable boundaries beyond what it was told to do. That isn’t an AI failure. It’s an architecture decision.
“Instructions are not controls. Prompts are not policies. And guardrails that sit inside a model are not a substitute for governance that exists around it. If you hand any system the keys to the castle without constraint, the outcome isn’t surprising, much like a Marvel villain, it’s inevitable.
“This is where a lot of AI design quietly breaks down. We treat the model as the system, and assume alignment or prompt engineering will compensate for missing infrastructure. But AI doesn’t replace architecture, it amplifies it. In agentic environments, where systems retrieve, decide, and act, that gap becomes even more exposed.”
Alexandra Hayes, Generative AI & SaaS GTM Consultant, AI Product Strategy at Audio Cleaner says, “What happened with Claude illustrates product and governance failings more than a technical issue. AI cannot just autonomously delete a database. It is a consequence of the systems and permissions we design around it.
“In my work with AI startups, there is a consistent gap that exists between the controls and the capabilities of AI. There is a lot of emphasis on the utility and use cases of AI but very little on the things that the technology should not ever be allowed to do. There should always be friction on the part of the AI. This friction can come in the form of confirmations, scoped permissions, and human approval.
“In the AI sector, this translates to a prioritisation of speed at the cost of responsible design in product. The AI systems that treat AI technology as part of a larger system will ultimately be more successful, provided that safety constraints, accountability, and responsibilities are allocated up front.”