Public faith in tech companies running AI without outside control looks low. New polling from the Ada Lovelace Institute shows strong backing for independent oversight and deep unease about self-policing.
The nationally representative research found that 89% support an independent AI regulator with enforcement powers. The polling shows little comfort with private companies setting standards alone. People want authority that can step in when systems cause harm or treat people unfairly.
The research comes at a time when progress on AI legislation in the UK has slowed. According to the Institute, this pause is out of step with public expectations. Many people already see AI tools affecting work, services and personal data, which raises the demand for strong oversight.
Trust in decision making also looks weak. The polling found that 84% fear the government will place partnerships with large technology companies ahead of public interest when shaping AI rules. That belief feeds scepticism about how choices get made.
What Do People Want AI Systems To Prioritise?
Fair treatment tops the list when it comes to public expectations. The Ada Lovelace Institute found that 91% believe AI systems must treat people fairly. When faced with trade-offs, people placed fairness and safety ahead of economic gains or faster development.
This shows that AI no longer feels distant. People see it operating across daily life, which brings expectations similar to those placed on public services rather than consumer apps.
The research also shows strong backing for supervision after systems enter use. People support independent standards, transparency reports and accountability when harm occurs. Oversight should apply before launch and continue during use.
Taken together, the findings tell us that trust depends on rules, monitoring and accountability rather than speed or profit.
How Much Trust Exists In AI Used Across Public Services?
Confidence is even lower when AI enters public services. Research from Nesta’s Centre for Collective Intelligence found that only 40% trust the public sector to use AI responsibly. Opinium Research surveyed more than 2,000 adults across the UK to produce the data.
Public views vary depending on the service. 41% said AI is dangerous and should not operate in public services, while only 29% supported broad use. The NHS received the strongest backing at 38%, followed by transport at 37% and education at 36%. Policing and defence both stood at 28%, while social care reached 29%.
Cost savings do not drive opinion. When asked what matters most before rolling out AI, 46% chose public backing and only 18% prioritised saving money. Political differences appeared too: Labour voters placed stronger weight on public backing, while Conservative voters leaned more towards savings.
The survey also found that 52% want public involvement before AI enters public services. Only 20% want these decisions left to technical specialists.
Can Public Involvement Change Views On AI Tools?
Evidence from direct engagement suggests it can. Nesta’s Centre for Collective Intelligence ran workshops on Magic Notes, an AI transcription and summarisation tool used by social workers. The tool was developed by UK company Beam.
Before the discussions, satisfaction with the existing social care process stood at 13%. After workshops and testing, 74% felt the benefits of the AI tool outweighed the risks. Support rose as people came to understand how the system worked and how reviews took place.
Kathy Peach, Director of the Centre for Collective Intelligence at Nesta, said the government’s AI Adoption plan depends on public backing. She said trust matters most in areas like social care, where acceptance starts low but gains appear strong.
Rachel Astall, Chief Customer Officer at Beam, said 86% of the public and people who use social care felt Magic Notes would benefit services overall. She said the consultation process showed what earns confidence and why public voices are important.
Nuala Polo, UK Public Policy Lead at the Ada Lovelace Institute, also commented: “Our research is clear: there is a major misalignment between what the UK public want and what the government is offering in terms of AI regulation. The government is betting big on AI, but success requires public trust. When people do not trust that government policy will protect them, they are less likely to adopt new technologies, and more likely to lose confidence in public institutions and services, including the government itself.”