The Pentagon says AI is speeding up its 'kill chain'

Leading AI developers, such as OpenAI and Anthropic, are threading a needle to sell software to the United States military: make the Pentagon more efficient without letting their AI kill people.

Today, their tools are not being used as weapons, but AI is giving the Department of Defense a “significant advantage” in identifying, tracking, and assessing threats, the Pentagon's Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

“We obviously are increasing the ways in which we can speed up the execution of [the] kill chain so that our commanders can respond [at] the right time to protect our forces,” said Plumb.

The “kill chain” refers to the military's process of identifying, tracking, and eliminating threats, involving a complex system of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing phases of the kill chain, according to Plumb.

The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans.

"We've been really clear on what we will and won't use their technologies for," Plumb said, when asked how the Pentagon works with AI model providers.

Nonetheless, these policy changes kicked off a speed-dating round between AI companies and defense contractors.

Meta partnered with Lockheed Martin and Booz Allen, among others, to bring its Llama AI models to defense agencies in November. That same month, Anthropic teamed up with Palantir. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.

As generative AI proves its usefulness in the Pentagon, it could push Silicon Valley to loosen its AI usage policies and allow more military applications.

“Playing through different scenarios is something that generative AI can be helpful with,” said Plumb. “It allows you to take advantage of the full range of tools our commanders have available, but also think creatively about different response options and potential trade-offs in an environment where there's a potential threat, or series of threats, that need to be prosecuted.”

It's unclear whose technology the Pentagon is using for this work; using generative AI in the kill chain (even at the early planning phase) does seem to violate the usage policies of several leading model developers. Anthropic's policy, for example, prohibits using its models to produce or modify “systems designed to cause harm to or loss of human life.”