NCSC Warns AI Chatbots Could Carry Cyber Risks

The integration of artificial intelligence (AI)-driven chatbots into organisations has raised concerns among British officials.

The National Cyber Security Centre (NCSC) of the UK has cautioned about the potential risks associated with these chatbots, highlighting that research indicates they can be manipulated to carry out harmful tasks. This warning comes as AI-powered chatbots gain traction, being applied not only to internet searches but also to customer service and sales functions.


The Challenge of AI-Powered Chatbots


The NCSC emphasised that experts are still grappling with the complexities of security issues linked to chatbots. These algorithms, known as large language models (LLMs), are a technology that presents both opportunities and challenges. The essence of LLMs lies in their ability to mimic human language patterns and generate responses that are remarkably convincing.


Potential Risks and Vulnerabilities


The NCSC explained the potential risks associated with incorporating LLM-driven chatbots into various areas of business. There is a growing concern that if these models are not safeguarded, they could expose organisations to vulnerabilities. Researchers have demonstrated that chatbots can be manipulated through tactics such as feeding them misleading commands or exploiting loopholes in their programming.

A practical scenario illustrated by the NCSC involves an AI-powered chatbot deployed by a bank. If a hacker crafts a query with precision, the chatbot could be deceived into executing unauthorised transactions, posing a significant financial threat.

Drawing parallels with beta software releases, the NCSC advised organisations to exercise caution when utilising services that incorporate LLMs. Just as one wouldn’t fully trust experimental software, the same principle should apply to LLMs.

The blog post emphasised that organisations should refrain from entrusting critical tasks or transactions entirely to LLM-driven systems. This is vital to prevent potential security breaches or unintended consequences stemming from the chatbot’s susceptibility to manipulation.



Global Implications and the Rise of LLMs


The popularity of LLMs, exemplified by platforms like OpenAI’s ChatGPT, has gathered global attention. Businesses across various sectors are integrating LLMs into their services, encompassing functions such as sales and customer support. However, the security implications of AI, including LLMs, remain an evolving concern. Authorities in the United States and Canada have reported instances of hackers exploiting AI technology for malicious purposes.

The influence of AI is evident in the evolving landscape of corporate tasks. According to a recent Reuters/Ipsos poll, a substantial number of corporate employees are leveraging AI tools like ChatGPT to assist with routine tasks.

These tasks include drafting emails, summarising documents, and conducting preliminary research. The poll also revealed that 10% of respondents reported explicit bans on external AI tools by their employers, while 25% remained uncertain about their company’s policy regarding AI tool usage.

Oseloka Obiora, chief technology officer at cybersecurity firm RiverSafe (And Cybersecurity40 Judge for TechRound), said the race to integrate AI into business practices would have “disastrous consequences” if business leaders failed to introduce the necessary checks.

“Instead of jumping into bed with the latest AI trends, senior executives should think again,” he said. “Assess the benefits and risks as well as implementing the necessary cyber protection to ensure the organisation is safe from harm.”



The NCSC’s warning showcases the need for a cautious approach when incorporating AI-driven chatbots, particularly large language models, into business operations. The potential for manipulation highlights the importance of thorough testing, careful implementation, and ongoing vigilance in safeguarding against security risks.

As AI technology continues to shape various industries, the management of associated risks will be crucial to harnessing its benefits effectively.