Artificial Intelligence Could Worsen Cyber Threats, Report Warns

According to a UK government report, artificial intelligence (AI) could heighten the threat of cyber-attacks and undermine trust in online content by 2025.

Although some experts remain sceptical about the accuracy of these predictions, the report also suggests that AI could be used by terrorists in the planning of biological or chemical attacks.

Understanding The AI Report
On Thursday, Prime Minister Rishi Sunak is set to highlight both the potential opportunities and threats associated with the emerging technology.

The government’s report focuses on generative AI, the technology that presently fuels popular chatbots and image-generation software.

This report draws from declassified information provided by intelligence agencies and cautions that, by 2025, generative AI could be “used to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological and radiological weapons”.

It says that while firms are working to block this, “the effectiveness of these safeguards vary”.

According to the report, there are obstacles to getting hold of the knowledge, raw materials, and equipment for attacks, but, thanks to AI, those barriers are falling.

By 2025, it’s likely AI will also help create “faster-paced, more effective and larger scale” cyber-attacks, it warns.

This is because AI could help hackers overcome the difficulties of mimicking official language, says Joseph Jarnecki, who researches cyber threats at the Royal United Services Institute.

“There’s a tone that is adopted in bureaucratic language and cybercriminals have found that quite difficult to harness,” said Mr Jarnecki.

AI Convergence: Shaping the Future
The report precedes Mr Sunak’s forthcoming speech on Thursday, during which he is anticipated to outline the UK government’s strategy for ensuring the safety of AI and positioning the UK as a prominent global authority in AI safety.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” Mr Sunak is expected to say.

He will commit to addressing those fears head-on, “making sure you and your children have all the opportunities for a better future that AI can bring”.

The speech will set the scene for a government summit next week to discuss the threat posed by highly advanced AIs.

The summit will focus on regulating what is referred to as “Frontier AI”: advanced AI systems that ministers say “can perform a wide variety of tasks” and “exceed the capabilities of today’s most advanced models”.

Debate over whether such systems could pose a threat to humanity is fierce.

Another newly published (and far more optimistic) report by the Government Office for Science, which advises the prime minister and cabinet, says: “Many experts consider this a risk with very low likelihood and few plausible routes to being realised.”

In contrast to the government’s report, it states that, to pose a real risk to human existence, AI would need some control over vital systems, such as weapons or financial systems.

AI would also need new capabilities, such as the capacity to improve its own programming, the ability to evade human oversight, and a sense of autonomy.

However, it does still note that “there is no consensus on the timelines and plausibility of when specific future capabilities could emerge”.

What About The Here and Now?

Most major AI companies agree that regulation is necessary and are expected to send representatives to the summit.

However, Rachel Coldicutt, a specialist in the social implications of technology, raised questions regarding the summit’s focal point.

She said it placed too much weight on future risk: “It makes loads of sense that technology companies, who stand to lose more by being regulated about the things they’re making in the here-and-now, will focus on long-term risk.”

“And it has felt over the summer, as if the government position has been very strongly aligned, supporting those views,” stated Ms Coldicutt.

But she said the government reports were “moderating some of the fervour” about these futuristic threats and made it clear that there was a gap between “the political position and the actual technical one”.