Guarding Secrets: How Defense Contractors Approach Large Language Models in National Security
Last month, the United States Department of Defense (DoD) announced that it had selected Scale AI to help test and evaluate generative artificial intelligence (AI) for military applications. Generative AI made headlines last year for its ability to create text, images, and other data by utilizing algorithmic models in response to a user’s prompt.
Illustrative image of AI-generated content
This has led to concerns about what it means for content creators, as well as whether any photos and videos could be trusted as being “real” and not AI-generated. However, generative AI still offers the potential to streamline workflows and review troves of information within seconds – an important capability as the world is increasingly data-driven.
The Power of a Large Language Model (LLM)
This includes harnessing the power of so-called “large language models” (LLMs), which are able to review gargantuan troves of information within seconds and crystallize it into a few key points.
“Any forward-thinking company will be studying the potential for LLMs to improve workflow and productivity in ordinary tasks,” explained Dr. Jim Purtilo, associate professor of computer science at the University of Maryland.
LLMs could certainly have a place in the government’s vast networks of federal contractors, where it could aid in the development of new systems and platforms. Data is power, but time is money.
Impact of LLMs on National Security
The ability to condense data so quickly could be a benefit for any company, as it could allow teams to be provided with nearly instantaneous pointers – and perhaps even limiting the flow of information on a need-to-know basis.
“Defense contractors would be no different, but they more than most companies must do so with a careful eye on security,” Purtilo told ClearanceJobs. “LLMs might help with, say, understanding an RFP or drafting a proposal, but doing so involves disclosing substantial information so the model has something to work with. Any prompt after that is a new opportunity for the model to volunteer your secrets.”
This is where any use of LLMs that involve classified information will need to be carefully employed.
Illustrative image of defense contractors
“Because defense contractors deal with government-sensitive data and have to follow strict government guidelines and requirements, most of them are, and may be for a while, in the exploratory phase with generative AI,” said Melissa Ruzzi, director of artificial intelligence at SaaS security provider AppOmni.
Defense contractors will therefore tread carefully when it comes to LLM adoption.
Focus is on Security
The use of LLMs will certainly include greater focus on security – and ensuring that the information can’t be improperly accessed or disseminated.
“Inappropriate disclosure of intellectual property can be a problem for any company, but in the defense industry, it could mean spilling important classified information,” added Purtilo. “They almost certainly would consider building their own models in-house, or at least procure outside services with extreme care. That’s expensive, but not as expensive as compromising national security.”
Illustrative image of Cloudflare’s new defensive AI products
Cloudflare Inc., a global cloud connectivity provider company, today announced the development of Firewall for AI to provide companies a layer of protection for artificial intelligence large language models, with an aim to identify potential attacks before they can tamper with critical functionality or access sensitive data.
The company also announced a suite of new defensive cybersecurity tools that use AI to fight emerging AI threats. Those include detecting anomalies in user behavior, scanning email to flag suspicious messages, and mitigating threats to the organization.
As more companies begin to embrace LLMs and AI models as a central part of digital transformation, the enterprise must face the security risks that come with it. According to a recent Deloitte study, only one in four C-suite-level executives are confident that their organizations are prepared to face the risks that AI poses.
“We’ve entered into the AI arms race, and it’s critical that today’s AI-powered apps – many of which fuel our healthcare, our banking systems, and our grid – are built on secure models,” said Matthew Prince, co-founder and chief executive of Cloudflare. “This protection should be available to all because a secure internet is better for everyone.”
The new Firewall for AI will provide security teams the capability to rapidly detect new threats, the company said, as it could be deployed in front of any LLMs running on Cloudflare’s existing Workers AI offering. Workers AI allows developers to deploy AI models at the edge at scale on Cloudflare’s global network, which in turn puts any LLM running as close to enterprise customers as possible, allowing extremely low-latency responses.
By putting the firewall in front of the LLM, it can scan prompts submitted by users to identify attempts to exploit the model and extract data, the company said. As a result, it can automatically block threats as they happen, with no need for human intervention. Any customer running an LLM on Cloudflare’s Workers can take advantage of the Firewall and be safeguarded by this new capability for free to defend against growing concerns such as prompt injection and other attack vectors.
Illustrative image of AI-powered apps