The King’s Speech earlier this week set out several key policies for the UK’s growth and stability. Many anticipated that an AI Bill would be announced, but none was mentioned. Instead, the speech focused on a legislative programme centred on security, fairness, and opportunity.
Stability was described as the core of economic policy, with an emphasis on having tax and spending changes reviewed independently. “My Government will legislate to ensure that all major tax and spending changes are subject to an independent assessment by the Office for Budget Responsibility,” the King stated.
The speech also addressed initiatives to promote economic growth, improve infrastructure, and strengthen employment rights. Measures include planning reform to speed up the delivery of infrastructure and housing, and a new deal for working people to ban exploitative practices.
The King mentioned, “My Government is committed to making work pay and will legislate to introduce a new deal for working people.” Other important topics included clean energy transition, strengthening border security, and improving community policing.
Where To From Here?
Although an AI Bill wasn’t directly addressed, experts have shared their views on how AI should be managed and used going forward. Understandably, industry leaders want to see clearer regulation around the latest popular technology.
Our Experts:
Matthew Worsfold, Risk Advisory Partner, Ashurst
Greg Hanson, GVP of EMEA North, Informatica
Arun Kumar, UK Regional Director, ManageEngine
Dr Marc Warner, CEO, Faculty AI
Matthew Worsfold, Risk Advisory Partner, Ashurst
“While the King stopped short of naming a specific AI Bill, businesses nonetheless finally have the clarity they were urgently seeking as to whether the UK is going to legislate on AI. However, there is still a question mark on the how, which could likely take time to clarify and therefore still leave some uncertainty.
“Interestingly, the focus so far appears to be on large language models and general purpose AI, which therefore would make any legislation narrower than the EU AI Act given it focuses predominantly on AI use cases rather than the underlying technology itself.
“However, this also has the potential to create some contention between the UK and EU laws as they both look to wrangle with the opportunities and risk that general purpose AI presents.”
On the new Digital Information and Smart Data Bill, which would enable new, “innovative” uses of data to help boost the economy, Rhiannon Webster, partner and head of UK data privacy at Ashurst, said:
“This is welcome news. The previous proposed data protection and digital information bill had been much criticised for simply tinkering around the edges of the UK GDPR with few benefits.
“However it did contain well received new data governance regimes which: (i) provide a framework for digital verification services; and (ii) pave the way for regulations impacting data access and governance beyond personal data. It would appear the new government has abandoned the previous approach of amending elements of the UK GDPR but retained the proposals regarding the new data governance regimes.”
Greg Hanson, GVP of EMEA North at Informatica, comments on how businesses may need to brace for greater intervention:
“Businesses must now brace for greater intervention and be prepared to demonstrate how they are protecting the integrity of AI systems and large language models. Developing robust foundations and controls for AI tools is a good starting point.
“Bad data could ultimately risk bad outcomes, so organisations need to have full transparency of the data used to train AI models. And just as importantly, businesses need to understand the decisions AI models are making and why.
“It’s also critical that AI is designed, guided, and interpreted from a human perspective. For example, there needs to be careful consideration about whether large language models have been trained on bias-free, inclusive data or whether AI systems can account for a diverse range of emotional responses.
“These are important considerations that will help manage the wider social risks and implications it brings, allowing businesses to tackle some of the spikier challenges that generative AI poses so its transformative powers can be realised.”
Arun Kumar, UK Regional Director at ManageEngine, comments on how legislation could give businesses guidance on prioritising trust and safety:
“This could give businesses guidance on how to prioritise trust and safety, introducing essential guard rails to ensure the safe development and usage of AI. This bill promises to go a long way in helping to tackle the risks that come from a lack of specialised knowledge around this relatively new technology.
“Our recent research showed 45% of IT professionals only have a basic understanding of GenAI technologies and most don’t have governance frameworks in place for AI implementation. Introducing legislation on safety and control mechanisms, such as a requirement to protect the integrity of testing data, will help guide the use of AI so businesses can confidently use it to drive business growth.
“We also need closer collaboration between regulators, governments and industry to build a shared infrastructure – alongside the skills and security practices necessary to keep pace with the ever-evolving cyber security developments. This will offer the most robust defence and protection needed for our society moving forwards.”
Dr Marc Warner, CEO, Faculty AI
“Whilst tighter rules around frontier systems are sensible, Labour must guard against regulatory overreach.
“AI has been safely and successfully used for decades – from predicting travel times and spotting bank fraud to reading patient scans.
“Embracing these “narrow” applications – AI tools with specific, predetermined goals set by humans – should be the priority. Cracking down here would only stifle growth and hamper innovation – as well as robbing the public of better, faster and cheaper public services.
“Starmer should release the handbrake on narrow AI, whilst implementing sensible rules around advanced, more general systems. This Bill looks to be a good start on that journey.”
Businesses seem ready to incorporate AI into their operations, if they haven’t already. What remains is proper regulation and policies that support responsible AI use for all.