UK Government Launches £8.5 Million AI Safety Grants

The UK government has introduced a new research funding programme worth £8.5 million, aimed at protecting society from the risks associated with AI while promoting its beneficial uses.

What’s The Funding About?

The government is offering up to £8.5 million in grants to researchers focused on AI safety.

The programme is seeking proposals that address systemic AI safety issues: new ways of ensuring that AI systems are safe and trustworthy, helping to prevent harms such as misinformation and cyberattacks.

Who’s Leading The Charge?

The programme is led by Shahar Avin, an AI safety researcher, and Christopher Summerfield, Research Director at the UK AI Safety Institute.

It is a joint project with UK Research and Innovation and The Alan Turing Institute, with plans to bring in international partners soon.

Big Reveal At AI Seoul Summit

Tech Secretary Michelle Donelan announced the grants at the AI Seoul Summit, co-hosted by the UK and South Korea. This summit focuses on advancing AI safety on a global scale.

The grants will fund projects that explore safe AI deployment and how society can adapt to AI advancements.

Expert Insights From The Summit

“When the UK launched the world’s first AI Safety Institute last year, we committed to an ambitious mission to reap the positive benefits of AI by advancing AI safety,” said Tech Secretary Michelle Donelan.

“We need to think carefully about adapting our systems for a world where AI is embedded in everything we do. This programme is designed to generate ideas for tackling these challenges and ensure great ideas can be put into practice,” said Christopher Summerfield, UK AI Safety Institute Research Director.

Greg Hanson, GVP of EMEA North at Informatica, also spoke on the summit’s progress, saying:

“We’ve reached an inflection point with generative AI. And after the initial hype and excitement, the discourse is starting to change to reflect that AI needs to be designed, guided, and interpreted from a human perspective.

“It’s important to remember that we are still at an early stage of generative AI adoption. It will take time to get the responsibility, guardrails, and controls around AI to the right place as its use evolves.

“However, it’s reassuring to see that organisations, policy makers and technology providers have taken on board the mandate to act responsibly. Now they want to understand how to tackle some of the spikier challenges that generative AI poses so its transformative powers can be realised.

“For example, alongside considering the quality and reliability of data that is feeding models, AI systems also need to be designed with empathy.

“For example, there needs to be careful consideration about whether large language models have been trained on bias-free, inclusive data or whether AI systems can account for a diverse range of emotional responses. These are important considerations that will help manage the wider social risks and implications it brings.”

What Do They Want To Achieve?

The main goal is to explore how society can adapt to the changes brought by AI. Researchers are encouraged to propose innovative solutions to AI-related problems.

This includes fighting the spread of deepfakes and increasing productivity through AI. Promising proposals may receive more funding for long-term projects.

Building A Global AI Safety Network

The UK AI Safety Institute has established partnerships with AI Safety Institutes in the US and Canada. This global network aims to set standards for AI development and promote positive outcomes. The grants programme will also foster collaboration between UK-based researchers and international experts.

What About Systemic AI Safety?

Systemic AI safety focuses on the long-term, wider effects of AI on societal systems and infrastructure, considering not only the AI models themselves but also the environments in which they operate. Addressing these systemic risks is intended to create a safer digital ecosystem.

Who Can Apply, And How?

Researchers interested in applying must be based in the UK and associated with a host organisation. This could be a university, business, civil society group, or government body. International collaboration is encouraged, but lead applicants must be UK-based.

To sum that up, here’s a list of the criteria:

  • Must be associated with a host organisation
  • Based in the UK
  • Open to various fields of research

Expected Benefits

The programme intends to generate a wealth of ideas for addressing AI-related problems.

Successful projects will translate into practical solutions that enhance AI safety. Grantees will have opportunities to collaborate with other researchers and industry partners to bring their ideas to life.

What Have Key Stakeholders Said?

Professor Helen Margetts from The Alan Turing Institute stressed the importance of this initiative in tackling AI risks, and discussed why we need to adapt quickly to an information environment increasingly shaped by AI. Here’s what others said…

“The AI Safety Institute’s work is vital for understanding AI risks and creating solutions to maximise the societal and economic value of AI for all citizens,” said UKRI Chief Executive, Professor Dame Ottoline Leyser.

“This programme will bring safety research into the heart of government, supporting the regulation that will shape the UK’s digital future,” added Professor Helen Margetts.

How To Apply

Researchers can find more details and submit their proposals through the official website of the UK AI Safety Institute. The application process is straightforward, with regular updates and support provided to applicants.