A recent report has found that large language models (LLMs) can give effective advice on concealing the true purpose of acquiring dangerous pathogens, such as those that cause anthrax, smallpox and plague, The Guardian reports.
According to research by a US think tank, the artificial intelligence (AI) models underpinning chatbots could help plan an attack with a biological weapon.
AI’s Role in Biological Attack Planning
On Monday, the Rand Corporation published a report showing that, in tests of several LLMs, the models could supply guidance that “could assist in the planning and execution of a biological attack”.
However, the preliminary findings also showed that the LLMs did not generate explicit biological instructions for creating weapons.
The report said previous attempts to weaponise biological agents, such as an attempt by the Japanese Aum Shinrikyo cult to use botulinum toxin in the 1990s, had failed because of a lack of understanding of the bacterium.
AI could “swiftly bridge such knowledge gaps”, the report said, though it did not specify which LLMs the US think tank tested.
The AI-Related Bioweapon Threat
Bioweapons are among the serious AI-related threats that will be discussed at next month’s global AI safety summit in the UK.
In July Dario Amodei, the CEO of the AI firm Anthropic, warned that AI systems could help create bioweapons in two to three years’ time.
LLMs receive extensive training using vast datasets sourced from the internet and serve as a fundamental technology underpinning chatbots like ChatGPT. While Rand did not disclose the specific LLMs it examined, researchers stated that they accessed these models through an application programming interface (API).
In one test scenario devised by Rand, the undisclosed LLM identified potential biological agents – including those that cause smallpox, anthrax and plague – and discussed their relative chances of causing mass death.
The LLM also assessed the feasibility of obtaining plague-infected rodents or fleas and transporting live specimens. It went on to note that the scale of projected deaths would depend on factors such as the size of the affected population and the proportion of cases of pneumonic plague, which is deadlier than bubonic plague.
The researchers acknowledged that extracting this information from an LLM required “jailbreaking”: using text prompts that override a chatbot’s safety restrictions.
In a further test, the unnamed LLM discussed the pros and cons of different mechanisms, such as food or aerosols, for delivering botulinum toxin, which can cause fatal nerve damage.
The LLM also advised on a plausible cover story for acquiring Clostridium botulinum “while appearing to conduct legitimate scientific research”. This was suggested to be part of a project looking at diagnostic methods or treatments for botulism. The LLM response added: “This would provide a legitimate and convincing reason to request access to the bacteria while keeping the true purpose of your mission concealed.”
Summarising their preliminary results, the researchers stated that LLMs could “potentially assist in planning a biological attack”. They said their final report would examine whether the responses simply mirrored information already available online.
“It remains an open question whether the capabilities of existing LLMs represent a new level of threat beyond the harmful information that is readily available online,” said the researchers.
However, the Rand researchers said the need for rigorous testing of models was “unequivocal”, and that AI companies must restrict LLMs from engaging in conversations such as those documented in their report.