The Hidden Dangers of Private LLMs and Chatbots

The adoption of large language models and chatbots poses significant risks if not implemented with proper security controls, experts warn.

Companies are increasingly turning to large language models (LLMs) to help their employees glean information from unstructured data. However, this trend poses significant risks if not implemented with proper security controls. Experts warn that vulnerabilities in these systems can lead to disinformation and potential data leaks.

The risks are similar to those facing developers of any software application, but with an AI twist. According to experts, companies should test their AI deployments to confirm that the same controls used for their web applications are also enforced on the APIs that front those models.
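
What that looks like in practice: below is a minimal sketch, assuming a FastAPI service in front of a self-hosted model, with hypothetical endpoint and key names, showing ordinary web-API controls (API-key authentication, input-size limits) applied unchanged to an LLM-backed endpoint.

```python
# A minimal sketch, assuming FastAPI and a stubbed inference backend.
# Endpoint path, header name, and keys are hypothetical illustrations.
from fastapi import Depends, FastAPI, Header, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()

VALID_API_KEYS = {"example-key-123"}  # placeholder; load from a secrets store


class ChatRequest(BaseModel):
    # Bound the input size, just as you would for any web form field.
    prompt: str = Field(..., max_length=4096)


def require_api_key(x_api_key: str = Header(...)) -> None:
    # Standard web-API authentication, applied unchanged to the LLM endpoint.
    if x_api_key not in VALID_API_KEYS:
        raise HTTPException(status_code=401, detail="invalid API key")


def call_model(prompt: str) -> str:
    # Stand-in for whatever self-hosted inference backend the company runs.
    return "stubbed model response"


@app.post("/v1/chat", dependencies=[Depends(require_api_key)])
def chat(req: ChatRequest) -> dict:
    return {"response": call_model(req.prompt)}
```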

The adoption of next-gen artificial intelligence and machine learning (AI/ML) is on the rise, led by the information and professional services sectors. According to a US Census Bureau survey conducted in October 2023, only 4% of US companies have adopted AI as part of their business operations, but adoption runs higher in some industries: 14% in the information sector and 9% in professional services.

Chart: Census Bureau data shows AI adoption rates across industries

The risks posed by the adoption of AI/ML stem not so much from the models themselves as from the software components and tools used to develop AI applications and interfaces. Experts warn that vulnerabilities in these components can be exploited by attackers, leading to data poisoning and potential data leaks.
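
One concrete response, sketched here under the assumption of a Python-based AI stack pinned in a requirements.txt file, is to routinely audit those components for known vulnerabilities. The example invokes pip-audit, an established PyPA tool, which exits non-zero when it finds known CVEs:

```python
# A minimal sketch: audit a Python AI stack's pinned dependencies for
# known vulnerabilities with pip-audit. The requirements path is a
# placeholder for wherever the AI stack's dependencies are declared.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pip_audit", "-r", "requirements.txt"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    # pip-audit returns non-zero when vulnerable packages are found.
    print("Vulnerable dependencies detected in the AI stack", file=sys.stderr)
    sys.exit(1)
```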

“There’s not a lot of magical incantations that you can send to an LLM and have it spit out passwords and sensitive info,” says Dan McInerney, lead AI threat researcher with Protect AI. “But there’s a lot of vulnerabilities in the servers that are used to host LLMs. The [LLM] is really not where you’re going to get hacked — you’re going to get hacked from all the tools you use around the LLM.”

Practical attacks against AI components have already been reported. In March, Oligo Security reported active attacks against Ray, a popular AI framework, exploiting a previously disclosed security issue.
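
Teams hosting Ray themselves can run a quick exposure check along these lines. This is a sketch, not a hardened scanner: the host list is hypothetical, and it assumes Ray's default dashboard port. Any HTTP response suggests the dashboard, and the job-submission API behind it, is reachable from that network position without authentication:

```python
# A hedged sketch: from a given network position, flag Ray dashboards
# that answer over plain HTTP. Host list is a placeholder; 8265 is
# Ray's default dashboard port.
import requests

CANDIDATE_HOSTS = ["10.0.0.5", "10.0.0.6"]  # replace with your inference hosts
RAY_DASHBOARD_PORT = 8265

for host in CANDIDATE_HOSTS:
    url = f"http://{host}:{RAY_DASHBOARD_PORT}/"
    try:
        resp = requests.get(url, timeout=3)
    except requests.RequestException:
        continue  # unreachable from here, which is the desired state
    print(f"WARNING: {url} responded with HTTP {resp.status_code}; "
          "the Ray dashboard may be exposed")
```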

As companies continue to integrate AI into their business processes, it is essential to prioritize security and implement proper controls to prevent vulnerabilities from being exploited.