The Laboratory for AI Security Research, or LASR, is a new government project focused on strengthening cybersecurity with AI. Announced at the NATO Cyber Defence Conference in London, it will address the risks that come with AI technology becoming more advanced over time.
LASR brings together experts from government, industry, and academic institutions. It is supported by £8.22 million in government funding and invites collaboration from international allies such as NATO members and the Five Eyes countries.
How Will LASR Help Cybersecurity?
LASR is working to create tools that detect, analyse, and neutralise cyber threats more effectively. Its use of artificial intelligence allows risks to be identified faster and responses to be more precise.
One of the main goals is to develop systems that can predict attacks before they take place. AI allows these systems to process large volumes of data and recognise unusual patterns that may signal a security threat, shortening the time between detection and action.
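LASR has not published details of its systems, but as a rough illustration of this kind of anomaly detection, here is a minimal sketch in Python using scikit-learn's IsolationForest. The traffic features and threshold are hypothetical, chosen purely for the example:

```python
# Minimal sketch of anomaly detection on network-traffic features.
# Illustrative only: the features are hypothetical, not LASR's systems.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per connection: [bytes sent, bytes received, duration (s)]
normal_traffic = np.random.default_rng(0).normal(
    loc=[500, 1500, 2.0], scale=[100, 300, 0.5], size=(1000, 3)
)

# Train on traffic assumed to be benign.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# A burst of outbound data with almost nothing received: a possible exfiltration pattern.
suspicious = np.array([[50_000, 200, 0.1]])
print(model.predict(suspicious))  # -1 means the event is flagged as anomalous
```

The appeal of this approach is that the model only needs examples of normal behaviour; anything that deviates strongly from that baseline is flagged, even if the attack pattern has never been seen before.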
Another goal is to improve automated responses to cyber incidents. With AI, these responses can be activated instantly, helping to prevent damage while human operators assess the situation further.
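To make that split between instant machine action and later human review concrete, here is a small hypothetical sketch. The `block_host` function is a stand-in for a real firewall or EDR API call, and the anomaly-score threshold is invented for the example:

```python
# Sketch of an automated first response: contain a flagged host immediately,
# then queue the event for a human analyst. All names here are hypothetical.
from datetime import datetime, timezone

def block_host(ip: str) -> None:
    # Placeholder: in production this would call a firewall or EDR API.
    print(f"[{datetime.now(timezone.utc).isoformat()}] blocked {ip}")

def notify_analyst(event: dict) -> None:
    print(f"Analyst review queued for {event['src_ip']} (score={event['score']})")

def on_alert(event: dict) -> None:
    if event["score"] < -0.5:          # hypothetical anomaly-score threshold
        block_host(event["src_ip"])    # immediate automated containment
        notify_analyst(event)          # human assessment happens afterwards

on_alert({"src_ip": "203.0.113.7", "score": -0.8})
```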
Who Is Collaborating On LASR?
LASR is a joint effort involving some of the most respected organisations in cybersecurity. Academic institutions such as the Alan Turing Institute and Queen’s University Belfast contribute cutting-edge research, while government agencies like GCHQ and the MOD’s Defence Science and Technology Laboratory offer expertise in applying these findings to real-world scenarios.
International partnerships are central to the lab's mission. Working with NATO members and Five Eyes allies means LASR benefits from shared knowledge and resources, and these relationships also help its innovations reach the rest of the world.
Private industry will also bring additional investment and practical insights to help refine these projects.
What Technologies Are Being Developed By LASR?
One area of progress involves machine learning, which allows systems to recognise patterns in data that humans might miss. This capability is critical for predicting vulnerabilities and identifying potential threats.
Encryption technologies are another priority, with the goal of creating systems that can withstand even the most advanced attacks. These tools will help protect sensitive information and ensure that critical infrastructure remains secure.
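LASR's own designs have not been published, but as a baseline illustration of the authenticated encryption such systems build on, here is a short sketch using Python's cryptography library with AES-256-GCM. The payload and header are placeholders:

```python
# Sketch of authenticated encryption with AES-256-GCM, the kind of primitive
# hardened systems are built on. Illustrative only, not LASR's designs.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # a nonce must never be reused with the same key

ciphertext = aesgcm.encrypt(nonce, b"critical infrastructure telemetry", b"header")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"header")  # raises if tampered with
print(plaintext)
```

The "authenticated" part matters for infrastructure protection: decryption fails loudly if an attacker has modified the ciphertext, rather than silently returning corrupted data.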
LASR is also improving incident-response tools so that organisations can react quickly when attacks occur. These technologies aim to reduce recovery time and limit the damage caused.
How Is LASR Keeping AI’s Downsides In Mind?
AI can be used for both defence and attack, which makes it a difficult area to manage. LASR is addressing these risks by researching ways to prevent the misuse of AI in malicious activities.
For example, the lab is developing tools to counter adversarial AI, where systems are manipulated to act against their intended purpose. Understanding how these attacks work allows researchers to create defences that make such manipulation more difficult.
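One well-known form of such manipulation, not necessarily the one LASR studies, is the fast gradient sign method (FGSM), which nudges an input just enough to change a model's decision. A minimal PyTorch sketch with a hypothetical stand-in classifier:

```python
# Minimal sketch of FGSM, a classic adversarial-example technique.
# The model and data are hypothetical stand-ins; illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2))      # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)   # a benign input
label = torch.tensor([0])

loss = loss_fn(model(x), label)
loss.backward()                              # gradient of the loss w.r.t. the input

epsilon = 0.1                                # perturbation budget
x_adv = x + epsilon * x.grad.sign()          # nudge the input to raise the loss
print(model(x_adv).argmax(dim=1))            # the prediction may now flip
```

Defensive research studies exactly this kind of attack so that models can be trained or filtered to resist small, deliberately crafted perturbations.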
LASR is also working to stop the use of AI for creating advanced malware. By anticipating how attackers might use AI, the lab can develop tools to detect and stop threats before they escalate.