Meta’s open large language model family, Llama, isn’t “open-source” in a traditional sense, but it’s freely available to download and build on—and national defense agencies are among those putting it to use.
A recent Reuters report detailed how Chinese researchers fine-tuned a Llama model on military records to create a tool for analyzing military intelligence. Meta’s director of public policy called the use “unauthorized.” But three days later, Nick Clegg, Meta’s president of global affairs, announced that Meta will allow use of Llama for U.S. national security.
“It shows that a lot of the guardrails that are put around these models are fluid,” says Ben Brooks, a fellow at Harvard’s Berkman Klein Center for Internet and Society. He adds that “safety and security depends on layers of mitigation.”
Meta isn’t alone in rushing to support U.S. defense
The Reuters investigation found that researchers from China’s Academy of Military Science used the 13-billion-parameter version of Meta’s Llama large language model to develop ChatBIT, an AI tool for military intelligence analysis and decision-making. It’s the first clear evidence of the People’s Liberation Army adapting open AI models for defense purposes.
Meta told Reuters that ChatBIT violated the company’s acceptable use policy, which prohibits use of Llama for (among other things) military, warfare, espionage, and nuclear industries or applications. Three days later, however, Clegg touted Meta’s support of the U.S. defense industry.
It was an odd turn of events, as use of Llama by any military would seem to violate Llama’s acceptable use policy. While Meta has no way to enforce its policy—its models don’t require authorization or authentication for use—the company had, until now, consistently opposed military use.
That’s still true today, but only for militaries outside the U.S. A Meta spokesperson told IEEE Spectrum that Llama’s terms haven’t changed; instead, the company is “waiving the military use policy for the U.S. government and the companies supporting their work.”
Meta isn’t alone in finding a sudden need to support U.S. defense. Anthropic’s Claude 3 and Claude 3.5 models will be used by defense contractor Palantir to sift through secret government data. OpenAI, meanwhile, recently hired former Palantir CISO Dane Stuckey and appointed retired U.S. Army General Paul M. Nakasone to its board of directors.
“All the [major AI companies] are eagerly showing their commitment to U.S. national security, so there’s nothing surprising about Meta’s response. And I think it would’ve been a curious outcome if open AI models were available to potential adversaries while [domestically] having strict national security or defense restrictions,” says Brooks.
What’s next for AI, defense, and regulation?
While Meta’s decision to make Llama available to the U.S. government could help approved military contractors adopt it, it doesn’t put the open AI genie back in the bottle. As the Reuters report shows, Llama models are already being put to use by militaries—authorized or otherwise. Now the question becomes: What, if anything, will regulators do about it?
“By choosing not to secure their cutting-edge technology, Meta is single-handedly fueling a global AI arms race.” —David Evan Harris, California Initiative for Technology and Democracy
David Evan Harris, senior policy advisor to the California Initiative for Technology and Democracy, urged a stronger stance against open models. In 2023, IEEE Spectrum published an article by Harris about the dangers of open AI models.
“By choosing not to secure their cutting-edge technology, Meta is single-handedly fueling a global AI arms race,” says Harris. “It’s not just the top unsecured model that comes from Meta. It’s the top three.” He likens Meta’s decision to make its models freely available to Lockheed Martin giving sophisticated military technology away to U.S. adversaries.
Brooks took the opposite view. He says open models are more transparent and easier to evaluate for opportunities or vulnerabilities. Brooks compared Llama to other popular open-source software, like Linux, which many companies and government agencies build on for custom-tailored applications. “I think the open-source community expects that open is the way forward for sensitive and regulated AI applications,” he says.
Elon Musk enters the scene
While Harris and Brooks have opposite views on regulating open AI, they agreed on one thing: Trump’s election victory is a wild card.
President-elect Trump’s position on AI isn’t yet clear, but Elon Musk—who appeared on stage with Trump several times during his presidential campaign and reportedly wields sizable influence with Trump—is emblematic of the uncertainty around the incoming administration’s position.
“The election results could reset the conversation in unusual ways.” —Ben Brooks, Berkman Klein Center for Internet and Society
Musk owns AI company xAI, maker of the Grok chatbot, and believes AI will be smarter than humans by the end of the decade, yet he spoke in favor of California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which sought broad restrictions on AI development (but was ultimately vetoed by Governor Gavin Newsom). And if that weren’t confusing enough: While Musk supports AI regulation, he prefers open AI models and has a pending lawsuit against OpenAI for (among other claims) the company’s decision to close access to its models.
“The election results could reset the conversation in unusual ways,” says Brooks. “The effective accelerationist culture is going to clash with stop-AI culture in this administration, and that will be very interesting to watch.”
This article was updated on 18 November 2024 to provide additional context to Brooks’ comment about AI guardrails.
Matthew S. Smith is a freelance consumer technology journalist with 17 years of experience and the former Lead Reviews Editor at Digital Trends. An IEEE Spectrum Contributing Editor, he covers consumer tech with a focus on display innovations, artificial intelligence, and augmented reality. A vintage computing enthusiast, Matthew covers retro computers and computer games on his YouTube channel, Computer Gaming Yesterday.