The UK government has outlined its position on the regulation of Artificial Intelligence (AI) in 2023, focusing primarily on “frontier AI.” This stance comes as the nation prepares for a global AI summit in November.
A Principles-Based Approach
According to the white paper, “A pro-innovation approach to AI regulation,” the UK government plans to take a principles-based approach to AI regulation. Rather than introducing new legislation, the government will aim to clarify existing laws concerning AI. The secretary of state for science, innovation, and technology argues that placing rigid legislative requirements could hinder the innovation and rapid response to AI advances.
Some experts believe the government’s approach signals a reluctance to regulate AI in order to attract AI businesses to the UK.
Frontier AI: Potential and Risks
Frontier AI refers to the latest and most advanced AI models. Documents released before the summit detail various risks associated with AI, including AI-generated disinformation and disruptions in the job market. The documents also mention threats like election disruption, erosion of social trust, and increasing global inequalities.
A spokesperson for the Department for Science, Innovation, and Technology stated, “We are focusing on frontier AI at the AI Safety Summit because this is where the most urgent risks from advanced AI lie.”
However, not everyone is on board with this emphasis on frontier AI. Michael Birtwistle, the associate director of law and policy at the Ada Lovelace Institute, argues that focusing solely on frontier AI overlooks current problems with AI.
Current AI Worries Overlooked
There are worries that by focusing mainly on theoretical and future AI issues, present challenges remain unaddressed. AI today has been involved in surveillance that targets specific groups unfairly, discriminatory hiring practices, and the spread of misinformation. Janet Haven, the executive director of the non-profit tech research organization Data & Society, points out, “There are many AI systems causing harm not addressed by regulation.”
Rishi Sunak, echoing the government’s desire to solidify the UK’s position in AI, mentioned that it’s too soon to legislate on AI. He believes that a better understanding of AI is essential before introducing regulations.
US and UK: Waiting for Meaningful Regulation
The US has also been criticized for its hesitancy to enforce strict regulations on AI, concentrating more on hypothetical future harms. Kamala Harris, attending the UK summit, commented on the responsibility of governments to ensure AI’s safe adoption. Harris said, “We have a duty to ensure AI protects the public from potential harm.”
The UK has been drawing on the US approach to AI regulation, a process in which companies have significant influence, including leaders like Elon Musk and Sundar Pichai. Some worry that this could leave companies shaping the rules rather than governments setting clear laws.
Global Competition and the UK’s Position
The desire to lead in AI on the global stage motivates both the US and the UK. The US’s competitive drive stems from concerns that countries such as China might rapidly develop AI systems posing national security risks. Sunak has highlighted the UK’s expertise in AI, especially compared with other Western nations.
Experts believe that the UK’s approach to AI regulation aims to differentiate from the EU post-Brexit. Oliver Marsh, a project lead at AlgorithmWatch, mentions the UK’s need to strike a balance between following the EU and establishing its own distinct path.
EU’s Approach to AI
The EU has been working on the EU AI Act, focusing on a risk-based tiered approach to AI legislation. Sarah Chander from European Digital Rights (EDRi) says that generative AI’s hype has shifted the EU’s focus, causing some member states to reconsider what is deemed high-risk.
Clara Maguire, the executive director of the Citizens, states, “The frontier is here. We see AI’s weaponization today, enabled by many of the companies attending the summit.”
The global discussions on AI continue, with experts urging the UK and US to address the current challenges of AI urgently.