
However, said Misshra, once the US government is involved, particularly through an entity like the DoD, OpenAI’s ability to prevent certain uses becomes more fragile: “National security carve-outs, classified programs, and sovereign immunity doctrines significantly weaken any attempt to challenge government use purely on ethical grounds. Courts have historically shown deference where national security is invoked, even when private contractors object,” he said.
There have been “similar tensions” in past disputes involving telecom surveillance and defense contracting, where companies relied on contractual language and internal governance but “ultimately had limited leverage once federal authorities asserted statutory powers,” he said.
Unfavorably for OpenAI, there is no clean precedent of a technology provider successfully blocking the federal government from using a tool for security or defense purposes once access was contractually granted, Misshra said. In a direct conflict, he added, national security exceptions would likely override softer commitments around ethical AI use.
