In the new year, Musk welcomed Hegseth to a meeting at SpaceX headquarters, where Hegseth unveiled a new partnership with Grok, which lately had been spending most of its time removing the clothes of women and children in photographs. The Pentagon, Hegseth said, “will not employ A.I. models that won’t allow you to fight wars.” Semafor reported that this was a specific jab at Anthropic. Shortly thereafter, according to the government’s story, an Administration official received a phone call from a contact at Palantir. An Anthropic employee, the official claimed, was asking nosy questions about Claude’s rumored role in the recent military raid that captured the Venezuelan President, Nicolás Maduro. This inquiry was taken not as a matter of idle curiosity but as an act of insubordination. (Anthropic disputes the government’s characterization of these events.)
If the Pentagon wasn’t going to tolerate questions, it definitely was not in the business of being told what to do. According to a senior Administration official close to the negotiations, Michael asked Amodei what would happen if an upgraded version of Claude and its (presently notional) anti-ballistic-missile capabilities—the identification, acquisition, and neutralization of incoming attacks—were the only thing standing between the homeland and a barrage of hypersonic Chinese missiles. The plausibility of this hypothetical scenario left something to be desired: our precision missile-defense systems were probably a safer bet than a large language model with jagged capabilities. (L.L.M.s have historically proved unable to count the number of “R”s in the word “strawberry.”) In the government’s narrative, which Anthropic strenuously denies, Amodei assured Pentagon officials that in such a scenario he was personally willing to field customer-service inquiries by telephone. The senior official told me, “What do you mean? We have, like, ninety seconds!”
Any residual good will between the Pentagon and Anthropic soon evaporated. On February 14th, Anthropic was told that a failure to accept the government’s demands might result in contract cancellation. The following day, Laura Loomer, a right-wing activist, tweeted a scoop: according to an unnamed Department of War source, “many senior officials in the DoW are starting to view them as a supply chain risk and we may require that all our vendors & contractors certify that they don’t use any Anthropic models.” Such a designation had only ever applied to infrastructure firms, like Huawei or Kaspersky Lab, with ties to adversarial foreign governments; there was no domestic precedent. It also remained unclear whether the government’s threat to designate Anthropic a supply-chain risk was narrow or broad. The former, which would prohibit defense contractors from using Claude in their government workflows, would be annoying for Anthropic, but endurable. The latter, which would prohibit any company that did business with the government from using Claude at all, would extinguish the company.
The Pentagon set a deadline of 5:01 P.M. on Friday, February 27th, for Anthropic to get in line. The consequences of demurral remained murky. The Pentagon could declare the company a supply-chain risk, or it could invoke the Defense Production Act, which would initiate the partial or full nationalization of the company. The position was patently inconsistent: Claude was at once a critical national asset and so dangerous that it merited quarantine. On Thursday, the day before the deadline, Amodei issued a statement refusing to cross the remaining red lines. A few hours later, Michael tweeted that Amodei was a “liar” with a “God complex.”
The two sides nevertheless inched closer to a deal. Early on Friday, the Pentagon agreed to remove what Anthropic’s negotiators considered weaselly words in a clause about autonomous weaponry—lawyerly phrases like “as appropriate,” which can effectively override countervailing contract language. The final point of contention was surveillance. Anthropic was willing to permit Claude to be used to surveil individuals under the jurisdiction of a FISA court, the secretive tribunal that oversees requests for surveillance warrants involving foreign powers or their agents on domestic soil. This deployment of Claude would be subject to national-security laws rather than ordinary commercial or civil statutes. What mattered to Anthropic was a guarantee that Claude would have nothing to do with the analysis of bulk data collected domestically, an issue especially salient to its employees in the context of ongoing ICE raids. The Pentagon’s position was that all of this petty haggling was moot. Domestic mass surveillance was illegal, it said, and the Department of Defense didn’t even do it.