What Are The Dangers Attached To Meta’s New ChatGPT Rival?

When it burst onto the scene, OpenAI’s ChatGPT took the world by storm. Now, the chatbot is far from the only contender on the global stage: numerous rivals have come for the ChatGPT crown, and the latest version of one of its challengers has just arrived.

Meta, the owner of Facebook, has announced new details of its own version of the artificial intelligence (AI) technology behind ChatGPT: a large language model known as Llama 2. The new model will be open source, meaning it will be free for researchers and companies to use. But why is Meta doing this, and what are the potential risks involved?

Introducing Meta’s Updated Model: Llama 2

Llama 1 came to the fore in February with big shoes to fill: ChatGPT had famously amassed a whopping 100 million users in just two months after its launch last November. Google quickly responded with Bard, its own AI-powered chatbot, and Microsoft launched Bing Chat.

On Tuesday, Meta AI announced Llama 2 on Twitter as “the next generation of Meta’s open source Large Language Model”. The new version is set to rival OpenAI’s latest model, GPT-4, having been trained on 40% more data than Llama 1, according to Meta.

But the big selling point for Llama 2 was Meta’s announcement that it will be open source and “available for free for research & commercial use”. When an LLM is made open source, its code and model weights are made freely available for people to access, use and adapt for their own purposes.

Llama 2 is being released in three versions, including one that can be built into an AI chatbot. The idea is that startups or established businesses can access Llama 2 models and tinker with them to create their own products including, potentially, rivals to ChatGPT or Google’s Bard chatbot.
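To make that concrete, the sketch below shows roughly where such tinkering starts: downloading one of the released checkpoints and generating text from it. It is a minimal illustration, assuming the Hugging Face transformers library and the gated “meta-llama/Llama-2-7b-chat-hf” checkpoint (Meta’s licence must be accepted before it can be downloaded); the prompt format and generation settings are illustrative only.

```python
# A minimal sketch (not Meta's official example) of loading a Llama 2 chat
# model with the Hugging Face transformers library. The checkpoint is gated:
# Meta's licence must be accepted on Hugging Face before it can be downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Llama 2's chat variants expect instructions wrapped in [INST] ... [/INST].
prompt = "[INST] Explain what a large language model is. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short completion; the settings here are illustrative, not tuned.
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

From a starting point like this, a business would typically fine-tune the model on its own data, which is precisely the kind of downstream modification Meta’s open release permits.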

Whilst this opens the door to a more transparent and collaborative AI environment, is a more accessible LLM really what’s best?

“Available for free”: Is It Really What’s Best?

Meta’s decision to open up access to Llama 2 means that businesses and researchers will be able to experiment with the tool far more easily: the code and model weights are freely available to download and modify, setting Llama 2 apart from the closed models developed by OpenAI and Google.

Meta said in a statement on Tuesday night: “We believe an open approach is the right one for the development of today’s AI models, especially those in the generative space where the technology is rapidly advancing.

“Opening access to today’s AI models means a generation of developers and researchers can stress test them, identifying and solving problems fast, as a community.”

Mark Zuckerberg, chief executive of Meta, said in a post on Facebook: “Open-source drives innovation because it enables many more developers to build with new technology.”

“I believe it would unlock more progress if the ecosystem were more open, which is why we’re open-sourcing Llama 2.”

But despite the enthusiasm around the Llama 2 release, not everyone is on board with Meta’s decision to take its model into open-source territory.

Oli Buckley, a professor of cybersecurity at the University of East Anglia, has spoken candidly about his concern that we need a better understanding of AI before source code is made publicly available:

“Every significant technological innovation in the last 100 years has had some capacity for misuse, with no shortage of people ready and willing to actually misuse it,” he explained.

“The difference between a nuclear weapon and a [large language model] is that we are at least able to identify people procuring the pieces they need to make a nuclear weapon, it’s much harder to identify who is exploiting AI for something untoward.”

Mhairi Aitken, ethics fellow at the Alan Turing Institute – the UK’s national institute for data science and artificial intelligence – echoed Professor Buckley’s point, noting that Meta’s openness does not extend to transparency about what content the model was trained on.

“The worry here is that as the models are increasingly accessible and being used in an ever wider range of ways, rather than democratising AI we will instead see marginalised or vulnerable communities increasingly experience the worst of its impacts, while developers find new ways to profit from its use,” she said.

Ms Aitken’s worries echo the wider conversation about the dangers of AI. Many tech leaders, including Elon Musk, have openly voiced concerns over the impact of making AI technology available to all.

Dame Wendy Hall, regius professor of computer science at the University of Southampton, has said that making AI increasingly accessible to the public is “a bit like giving people a template to build a nuclear bomb”.

Whilst Meta’s Llama 2 will come with a responsible use guide for developers, not everyone is convinced this will be enough to ensure the model is safe for public use. So, will Llama 2 be a force for good, or will it spur the production of false information and dangerous content? Only once it is in the hands of the public will we know for sure.