MPs have warned the UK government that it must introduce new legislation to control artificial intelligence (AI) or risk falling behind the EU and the US in setting the pace for regulating the technology, The Guardian reports.
Rishi Sunak’s government has been urged to get ahead in the AI arms race as it prepares to host a global AI safety summit this November at Bletchley Park, the site where codebreakers cracked the Enigma cipher.
“risking falling behind”
As artificial intelligence advances at pace, governments around the world are working to stay in the driving seat.
The UK government is no exception. Like other nations, it is trying to put regulations in place that keep the technology in check before it develops beyond its control.
The technology has now risen firmly up the political agenda after breakthroughs in generative AI – the term for tools such as ChatGPT that can generate plausible text, image and audio content from human prompts.
On Thursday, the Science, Innovation and Technology Committee said the regulatory approach outlined in a recent government white paper risked falling behind others.
“The AI white paper should be welcomed as an initial effort to engage with this complex task, but its proposed approach is already risking falling behind the pace of development of AI,” the committee said in an interim report on AI governance.
“This threat is made more acute by the efforts of other jurisdictions, principally the European Union and the United States, to set international standards.”
In the US, the White House has published a blueprint for an AI bill of rights, and the Senate majority leader, Chuck Schumer, has published a framework for developing AI regulations. Elsewhere, the EU – a trendsetter in tech regulation – is pushing ahead with its AI Act.
The committee’s report, whose introductory paragraph was written by the ChatGPT chatbot, lists 12 governance challenges for AI that it says must be addressed by policymakers and should guide the Bletchley summit.
Is The UK Government Doing Enough?
The AI summit at Bletchley Park will be attended by international governments, leading AI firms and researchers.
At the summit, the challenges that must be addressed include:
- bias in AI systems
- systems producing deepfake material that misrepresents someone’s behaviour and opinions
- lack of access to the data and computing power needed to build AI systems
- regulation of open-source AI, where the code behind an AI tool is made freely available to use and adapt
- protecting the copyright of content used to build AI tools
- dealing with the potential for AI systems to create existential threats
The UK government’s AI white paper published in March sets out five guiding principles for managing AI technology: safety, transparency, fairness, accountability, and the ability of newcomers to challenge established players in AI.
However, the white paper stopped short of proposing new legislation to cover AI. Instead, it expects existing regulators – such as the data watchdog and the communications regulator, Ofcom – to apply those principles in their work, with support from the government.
The document also refers to introducing a “statutory duty” on regulators to follow the principles.
But is the UK government doing enough?
The committee has urged the government to include an AI bill in the king’s speech, which sets out the government’s legislative agenda for the next parliamentary year. Otherwise, its report said, “other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer”.
Simply put, if the government does not take action now, legislation may not be enacted until late 2025, nearly three years from the publication of the white paper, the report added.
The Committee’s report also recommended that the Bletchley summit should include as wide a range of countries as possible, amid speculation that China, a big AI and tech power, will not be invited.
Asked this week whether China should be invited to Bletchley Park, Greg Clark, the Conservative chair of the committee, said: “If this is to be the first global AI summit then to have as many voices there as possible, I think would be beneficial.”
“But it needs to be accompanied with a caveat that we don’t expect that some of the security aspects to be resolved at that level. Our recommendation would be that we need a more trusted forum for that”, he concluded.
A government spokesperson said the forthcoming AI summit will address the risks and harms of AI technology. The spokesperson added that the UK government aims to harness AI “safely and responsibly”, and that the white paper sets out a “proportionate and adaptable approach to regulation in the UK”.
The government has also established a taskforce on foundation models – the underlying technology for AI tools such as text and image generators – which will look at the safe development of AI models.