
The public backlash that drove Anthropic’s Claude to the top of Apple’s App Store as users uninstalled ChatGPT will not automatically translate into enterprise churn, said Abhishek Sengupta, vice president at Everest Group. “Public sentiment is often reactionary and not sticky,” he said. “Enterprise decision makers have another risk vector to assess when thinking of their AI stack. Expect national security guidelines to increasingly impact AI sourcing considerations, especially for geopolitically relevant economies.”
The deal and its critics
OpenAI signed the Pentagon agreement on February 27, hours after the Department of War designated rival Anthropic a supply-chain risk over its refusal to allow its models to be used for domestic mass surveillance or fully autonomous weapons. Altman announced the deal that evening, saying the Department of War had agreed to OpenAI’s red lines on surveillance and autonomous weapons. By Monday, he conceded the rollout had been mishandled, saying on X it “looked opportunistic and sloppy.”
Under pressure, OpenAI revised the agreement to explicitly prohibit domestic surveillance of US persons and extended the prohibition to commercially acquired data, including geolocation and browsing history. Intelligence agencies such as the NSA were excluded from the contract’s scope, according to an updated post on OpenAI’s website.
