Bletchley Park has become iconic as the place where codebreakers including Alan Turing cracked the Enigma cipher during the Second World War. This November, at the site where Colossus – the world’s first programmable electronic computer – was created, world leaders will meet to discuss the possibilities and risks posed by artificial intelligence (AI).
Addressing Our ‘Existential Threat’
Come this November, world leaders will gather at Britain’s famous Second World War codebreaking base to discuss the potential threat AI poses to human life.
The Science, Innovation and Technology Committee has said concerns around public well-being and national security are among a dozen challenges it has put forward to ministers, warning that these must be addressed ahead of the UK hosting a world-first summit at Bletchley Park.
MPs have also warned that the threat AI poses to humanity should be a focus of any government.
At the chosen site – which was crucial to the development of technology as we know it today – Rishi Sunak and other leaders will address the 12 Challenges. Greg Clark, committee chair and Conservative MP, said he “strongly welcomes” this summit.
However, Clark warns that the government will need to show “greater urgency” to ensure potential legislation doesn’t quickly become outdated as powers including China, the US and the EU consider their own rules around AI.
The 12 Challenges
The 12 challenges put forward by the Science, Innovation and Technology Committee are as follows:
1. Existential threat – if, as some experts have warned, AI poses a major threat to human life, then regulation must provide national security protections.
2. Bias – AI can introduce new or perpetuate existing biases in society.
3. Privacy – sensitive information about individuals or businesses could be used to train AI models.
4. Misrepresentation – language models like ChatGPT may produce material that misrepresents someone’s behaviour, personal views, and character.
5. Data – the sheer amount of data needed to train the most powerful AI.
6. Computing power – similarly, the development of the most powerful AI requires enormous computing power.
7. Transparency – AI models often struggle to explain why they produce a particular result, or where the information comes from.
8. Copyright – generative models, whether they produce text, images, audio, or video, typically make use of existing content, which must be protected so as not to undermine the creative industries.
9. Liability – if AI tools are used to do harm, policy must establish whether the developers or providers are liable.
10. Employment – politicians must anticipate the likely impact on existing jobs that embracing AI will have.
11. Openness – the computer code behind AI models could be made openly available to allow for more dependable regulation and promote transparency and innovation.
12. International coordination – the development of any regulation must be an international undertaking, and the November summit must welcome “as wide a range of countries as possible”.
Looking Forward: AI Opportunities and Government Response
Looking forward to what the future of AI could look like, Mr Clark highlighted healthcare as the area where the “most exciting” opportunities lie.
AI is already used in the healthcare sector for multiple purposes, from reading X-rays and scans to assisting in research and predicting the damaging long-term effects of conditions.
But Mr Clark says that the use of AI in the healthcare sector can be taken a step further. He believes AI can be used to make treatment “increasingly personalised”, but reiterated the report’s concerns around potential biases being incorporated into any AI model’s training data.
“If you’re conducting medical research on a particular sample or ethnic minority, then the data on which AI is trained may mean the recommendations are inaccurate,” he added.
In terms of what else the government can do regarding the use of AI in the future, the Committee said it would publish a finalised set of recommendations “in due course”.
It wants any proposed AI legislation to be put before MPs during the next parliamentary session, which begins in September following the summer recess.
A government spokesperson said it was committed to a “proportionate and adaptable approach to regulation”, and pointed towards an initial £100m fund set aside for the safe development of AI models in the UK.
They added: “AI has enormous potential to change every aspect of our lives, and we owe it to our children and our grandchildren to harness that potential safely and responsibly.”