The Dark Side of AI: Hallucinated Packages, Malicious Models, and Insecure Code

AI has revolutionized the world of application development, but it also brings its own set of risks. In this article, we’ll explore three ways AI models can lead to attacks, and what developers can do to stay safe.

Tripping Up AI Models: Hallucination Squatting

Large language models (LLMs) don’t think like humans; they work by finding statistically plausible responses. Sometimes they hallucinate, spitting out convincing but false information. An enterprising threat actor can prompt an LLM until it hallucinates a package name, then publish a malicious package under that very name. All they have to do is wait for unsuspecting developers to install it.

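One practical defence is to confirm that a suggested package actually exists and looks legitimate before installing it. Below is a minimal sketch using the public PyPI JSON API; the package name is a made-up placeholder:

```python
# Minimal sketch: check whether a package an LLM suggested actually exists on
# PyPI before installing it. The package name below is a hypothetical example.
import json
import urllib.request
from urllib.error import HTTPError

def pypi_metadata(package_name: str) -> dict | None:
    """Return PyPI metadata for a package, or None if it does not exist."""
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url) as response:
            return json.load(response)
    except HTTPError as err:
        if err.code == 404:
            return None  # the name the LLM produced may be hallucinated
        raise

meta = pypi_metadata("some-llm-suggested-package")  # hypothetical name
if meta is None:
    print("Package not found: possibly a hallucinated name.")
else:
    info = meta["info"]
    print(f"{info['name']} {info['version']} by {info['author'] or 'unknown'}")
```

Even when the name resolves, it pays to review the project’s age, maintainers, and download history before trusting it.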

Bad Actors and Bad AI Models

Hugging Face provides extensive security measures, including malware scanning, but malicious models still make their way onto the platform, and loading one can hand an attacker remote code execution on your machine. Developers need to be skeptical when downloading AI models and should only use those from trustworthy sources.

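Another safeguard is to prefer weight formats that cannot execute code when loaded. The sketch below assumes PyTorch-format checkpoints already downloaded to disk; the file names are placeholders:

```python
# Minimal sketch: prefer weight formats with no code-execution path on load.
# File names are placeholders for a locally downloaded checkpoint.
import torch
from safetensors.torch import load_file

# Legacy pickle-based checkpoints can run arbitrary code when unpickled.
# weights_only=True restricts loading to plain tensors and basic types.
state_dict = torch.load("model.bin", weights_only=True, map_location="cpu")

# Better still: the safetensors format stores raw tensors only, so loading a
# malicious file cannot trigger code execution.
state_dict = load_file("model.safetensors")
```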

Letting Your Co-Pilot Take the Wheel

Code-producing LLMs have been known to include security vulnerabilities in their outputs. While engineers are working hard to improve these models, developers should still be wary of shipping code they don’t fully understand.

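A classic example of the kind of flaw assistants have been observed to produce is SQL built with string formatting. The sketch below (the table and column names are illustrative) contrasts that pattern with a parameterized query:

```python
# Minimal sketch: the unsafe pattern interpolates user input into SQL, so
# input like "x' OR '1'='1" changes the query's meaning. The safe version
# passes the input as a parameter instead.
import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

def find_user_unsafe(username: str):
    # Vulnerable: user input becomes part of the SQL text itself.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(username: str):
    # Safer: a parameterized query keeps user input as data, not SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```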

Takeaways

In all three examples, inexperienced developers are more likely to be duped by threat actors or LLMs. Scanning code and watching for signs of attack are essential, but nothing beats developer training. Remind your teams to be extremely skeptical of third-party code, whether it’s generated by an LLM or not.


That said, the world of AI is improving all the time. Platform and model developers are aware of these problems, and they keep removing malicious models and improving the security of generated code.

Even so, caution never hurts. Track the AI models in your code base, and stay alert for signs of attack.

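One way to keep track of the models you depend on, sketched below with a placeholder repository and commit hash, is to pin each Hugging Face model to a specific revision so the artifact running in production is the one you actually reviewed:

```python
# Minimal sketch: pin a Hugging Face model to an exact revision. The model ID
# and commit hash are hypothetical placeholders.
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "example-org/example-model"  # hypothetical repository
MODEL_REVISION = "0123456789abcdef0123456789abcdef01234567"  # pinned commit

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=MODEL_REVISION)
model = AutoModel.from_pretrained(MODEL_ID, revision=MODEL_REVISION)
```

Pinning to a commit hash rather than a branch name means an upstream change to the model repository can’t silently change what you ship.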