AI agents are making decisions, taking actions and handling personal data on behalf of millions of users. The UK’s Information Commissioner’s Office has decided they’ve been doing it without enough oversight.
Its draft guidance on agentic AI sets out what UK startups deploying these systems need to demonstrate, and most aren’t ready for it. For any startup building products that use AI agents (systems that act autonomously on users’ behalf across multiple platforms and services), this is the most important regulatory development of 2026 so far.
The ICO guidance is one piece of a much larger regulatory picture that landed in quick succession. On 9 March 2026, the Competition and Markets Authority published a research paper on how agentic AI affects consumers, alongside separate compliance guidance for traders deploying AI agents. On 31 March 2026, the Digital Regulation Cooperation Forum, comprising the CMA, FCA, ICO and Ofcom, published a foresight paper mapping the cross-regulatory implications of agentic AI across data protection, cybersecurity and competition. The same day, the ICO launched its consultation on automated decision-making guidance following the Data (Use and Access) Act 2025.
The message across all of these developments is consistent: most UK regulatory frameworks already apply to agentic AI systems, and organisations deploying them are accountable for compliance regardless of whether the relevant conduct is carried out by a human or an AI agent. CMA enforcement powers under the Digital Markets, Competition and Consumers Act 2024 include fines of up to 10% of global annual turnover for breaches delivered through an AI agent.
What founders are supposed to do about it, and fast, is the tricky part.
The Three Things The ICO’s Guidance Actually Requires
The ICO’s guidance creates three categories of obligation that matter most for startups.
The first is accountability: founders must document how automated decisions are made, even where systems are procured from third parties. That means if your product uses an AI agent from a third-party provider, you remain responsible for demonstrating what that agent knew, what it decided and on what basis.
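The audit trail this implies can be concrete and lightweight. Below is a minimal sketch of a per-decision record capturing what the agent knew, what it decided and on what basis; the field names and the JSON-lines storage format are our illustration, not an ICO-mandated schema:

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentDecisionRecord:
    """One auditable entry: what the agent knew, what it decided, and why.
    Field names are illustrative, not a regulator-prescribed schema."""
    agent_id: str   # which agent (including third-party agents) acted
    inputs: dict    # the data the agent had at decision time
    action: str     # what it decided to do
    rationale: str  # the basis for the decision (rule, model output, etc.)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def append_record(log_path: str, record: AgentDecisionRecord) -> None:
    """Append as one JSON line so the trail is append-only by convention
    and easy to search during a regulatory review."""
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage: a travel-booking agent records its decision.
record = AgentDecisionRecord(
    agent_id="booking-agent-v2",
    inputs={"user_budget": 500, "destination": "Lisbon"},
    action="booked_flight",
    rationale="cheapest direct flight within the stated budget",
)
append_record("agent_audit.jsonl", record)
```

Even a trail this simple answers the three questions the guidance poses: what the agent knew, what it decided, and on what basis.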
The second is transparency. AI agents must be designed to explain their behaviour, avoid overstating their capabilities and enable users to understand or challenge outcomes. For consumer-facing products in particular, this extends beyond disclosure language to the interaction design itself, including how consumers are informed, how they can raise complaints and what happens when an agent produces a non-compliant outcome.
The third is ongoing monitoring. The CMA guidance makes clear that regular human-led reviews of agent performance are expected, and that where an agent is producing non-compliant results, traders must address that promptly. This is especially important for products interacting with large numbers of people or vulnerable consumers.
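A minimal version of such a human-led review loop might sample a fraction of agent outcomes for manual checking while always escalating anything already flagged as suspect. The sampling rate and the shape of the outcome records below are illustrative assumptions, not taken from the guidance:

```python
import random

def sample_for_human_review(outcomes, sample_rate=0.05, seed=None):
    """Build a human review queue from agent outcomes.
    Flagged outcomes are always included; the rest are randomly sampled."""
    rng = random.Random(seed)  # seeded for reproducible review runs
    flagged = [o for o in outcomes if o.get("flagged")]
    rest = [o for o in outcomes if not o.get("flagged")]
    k = max(1, int(len(rest) * sample_rate))  # review at least one routine case
    return flagged + rng.sample(rest, min(k, len(rest)))

# Hypothetical usage: 200 agent outcomes, one in fifty auto-flagged.
outcomes = [{"id": i, "flagged": i % 50 == 0} for i in range(200)]
review_queue = sample_for_human_review(outcomes, sample_rate=0.05, seed=42)
```

The design choice worth noting is that flagged cases bypass sampling entirely: the CMA’s expectation that non-compliant results are addressed promptly argues against leaving known problems to chance.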
Compliance is an operational discipline, not a launch-day checkbox.
Is The UK More Or Less Attractive For AI Development?
The UK’s approach is principles-based rather than prescriptive, which gives it more flexibility than the EU AI Act’s hard risk-category framework.
The EU AI Act provides specific compliance deadlines and defined risk tiers; the ICO’s guidance requires interpretation and judgment, which can favour experienced legal teams but creates uncertainty for early-stage startups without specialist counsel.
Compared to the US, where federal AI regulation remains fragmented and experimentation moves faster, the UK approach introduces more friction for early-stage companies without mature data governance.
The DRCF’s cross-regulatory paper signals that UK and EU approaches, while structured differently, are converging in substance around transparency, accountability and risk controls. For founders building for both markets, that supports a single cross-border governance approach rather than two parallel compliance architectures.
The startups most at risk are those treating compliance as a post-launch exercise. The CMA has made clear that consumer law compliance should be built in from the beginning, and that scrutiny extends to how interactions are designed. Retrofitting audit trails and governance architecture is significantly more expensive than building them in from the start, with engineering repair costs that can run into six figures.
We asked experts what founders should be doing right now.
Our Experts:
- Ali Morgan, Founder and AI Visibility Architect, Jonomor
- Noah M. Kenney, Founder and Principal Consultant, Digital 520
- Naomi Grossman, Compliance Manager, VinciWorks
- Seb Kirk, CEO and Co-Founder, GaiaLens
- Jenson Brook, Founder, Britain’s Got Startups
- Matt Rouif, CEO and Co-Founder, Photoroom
Ali Morgan, Founder and AI Visibility Architect, Jonomor
“The ICO’s guidance on agentic AI creates a compliance challenge most UK startups aren’t prepared for, not because the rules are unreasonable, but because they require something founders have never had to build before: a documented entity architecture.
“Agentic AI systems act across multiple platforms on behalf of users. For that to be compliant, the organisation deploying the agent must be able to demonstrate exactly what the agent knows, what decisions it made, and on what basis. That requires structured data governance at the infrastructure level, not a privacy policy update.
“The biggest practical challenge: most startups using AI agents have no defined entity graph, no canonical data declarations, and no audit trail for agent decisions. The ICO guidance essentially requires what good AI infrastructure demands anyway: clarity about who the organisation is, what systems it operates, and how decisions are attributed.
“On UK attractiveness, the guidance is actually an opportunity for founders who build correctly. Organisations with clean entity architecture and documented AI governance will move faster under regulatory scrutiny, not slower. The friction falls on companies that skipped the infrastructure layer.”
Noah M. Kenney, Founder and Principal Consultant, Digital 520
“The ICO’s agentic AI guidance is a serious signal, and UK startups should treat it as such. The biggest compliance challenge is attribution of responsibility across multi-agent systems. When an AI agent acts autonomously across third-party services, determining who is the controller, who is the processor, and where accountability sits are all decisions with direct implications for lawful basis, DPIA obligations, and the organisation’s incident response posture.
“While the guidance adds friction, I would argue it is necessary friction. The EU’s AI Act addresses risk categories, while the ICO is addressing data rights in real-time autonomous decision-making. The UK is actually carving out a distinct regulatory position, which could be a competitive advantage if startups learn to build compliance into architecture rather than bolt it on post-launch.
“Founders in the UK should be doing three things right now. First, map every agentic workflow against Article 22 of UK GDPR. If your agent is making decisions that produce legal or similarly significant effects, there needs to be a human review mechanism. Second, create documentation of the data minimisation logic at the agent level, not just the product level. Third, companies operating in health, finance, or HR should fast-track their DPIA before the next product release. The startups that engage with this guidance early will be better positioned for enterprise sales, investor due diligence, and cross-border expansion.”
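Kenney’s human review mechanism for decisions with legal or similarly significant effects can be sketched as a simple gate in front of the agent’s actions. The action names and queue mechanics below are hypothetical illustrations, not drawn from the guidance or UK GDPR itself:

```python
from queue import Queue

# Actions assumed to produce "legal or similarly significant effects" in the
# sense of UK GDPR Article 22. These names are hypothetical examples, not a
# regulatory list; each deployer must classify its own actions.
SIGNIFICANT_EFFECTS = {"decline_loan", "reject_application", "terminate_contract"}

review_queue: Queue = Queue()  # decisions held for a human reviewer

def execute_with_review_gate(action: str, payload: dict) -> str:
    """Hold significant decisions for human review; execute routine ones."""
    if action in SIGNIFICANT_EFFECTS:
        review_queue.put((action, payload))
        return "pending_human_review"
    return "executed"

# Hypothetical usage: a loan decline is parked rather than auto-executed.
status = execute_with_review_gate("decline_loan", {"applicant_id": "A123"})
# The decision now sits in review_queue until a human approves or overrides it.
```

The point of the gate is architectural: the agent never executes a significant decision directly, so the human review step cannot be skipped by a misbehaving workflow.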
Naomi Grossman, Compliance Manager, VinciWorks
“One of the biggest compliance challenges the ICO’s guidance creates for startups is accountability. When AI agents act autonomously across multiple services, it becomes harder to identify the controller, ensure each action has a valid legal justification under data protection law, and maintain transparency when decisions are dynamic. Startups will also struggle with data minimisation and purpose limitation, especially where agents continuously learn or adapt based on user behaviour.
“Compared to the EU’s more prescriptive regime under the AI Act, the UK’s principles-based approach is more flexible. But the trade-off is uncertainty. Startups will need to figure out how to apply existing data protection rules to new and unfamiliar AI systems, and if they don’t do so effectively, it can slow down product development.
“In practical terms, founders should make sure their systems are easy to understand and can be checked or reviewed from the very beginning. This means mapping data flows, clearly defining roles and responsibilities in multi-agent environments, and stress-testing how their products handle consent, user rights, and unexpected outcomes. Startups that put good rules and oversight in place early will find it much easier to grow as regulation becomes stricter.”
Seb Kirk, CEO and Co-Founder, GaiaLens
“The ICO’s guidance is a necessary step forward, but it exposes a gap between how startups build and how regulators expect systems to behave.
“The biggest compliance challenge is accountability. Agentic systems operate across multiple tools, datasets and decision points, which makes it difficult to clearly define data ownership, lawful processing, and responsibility for outcomes. Startups will need to move beyond treating AI as a feature and instead design systems with traceability, auditability and clear decision boundaries from the outset. That’s a significant shift in engineering mindset and resourcing.
“AI amplifies existing weaknesses. Poor data quality or unclear governance will become compliance risks very quickly. The startups that adapt fastest won’t be those that avoid regulation, but those that design for it from day one.”
Jenson Brook, Founder, Britain’s Got Startups
“The biggest challenge is that startups are being asked to define and govern systems whose value often comes from being flexible and adaptive. That creates tension between product reality and regulatory expectation. In practice, the hardest areas will be data minimisation, explainability, audit trails and making sure an agent does not go beyond the scope the user expects.
“I don’t think the guidance makes the UK unattractive, but it does make it more operationally demanding. For serious businesses that want to build properly that’s not necessarily a bad thing. The risk is for early-stage startups moving quickly with lean teams where the compliance burden can slow experimentation and increase cost. The UK still has a chance to be a strong place to build AI, but only if the rules are applied in a practical and proportionate way.
“You can define boundaries, permissions and intended use cases, but you cannot always predict every action an agent may take in a live environment. The focus should be on boundaries, constraints and accountability rather than pretending every downstream action can be perfectly forecast in advance.
“If you are the business putting the product in front of customers you carry the responsibility, even if the underlying model is third-party infrastructure. Startups cannot outsource accountability just because they did not build the foundation model themselves.
“My overall view is that the ICO is right to push the market towards more responsibility, but the guidance will only work if it leaves room for practical implementation. If the standard becomes too theoretical it risks favouring large incumbents over startups who are often the ones driving real innovation.”
Matt Rouif, CEO and Co-Founder, Photoroom
“The ICO’s guidance reflects a broader shift in how AI is being used in practice. These systems are increasingly embedded in real workflows, shaping what users see and how businesses operate. In areas like visual content creation for e-commerce and marketplaces, that raises the bar for transparency, control and accountability, because outputs are directly tied to buyer trust and brand credibility.
“For startups, clarity is useful when it helps teams build better systems, not just safer ones on paper. The companies that will stand out are the ones that can deploy AI in a way that is reliable, commercially useful and easy for users to trust. In customer-facing products, responsible AI is not separate from growth. It is part of what makes adoption possible at scale.”
