
A senior executive at OpenAI has left the artificial intelligence startup, she announced on social media Saturday, citing a deal it signed with the Pentagon in February allowing its technology to be deployed across the country’s war-making and defense apparatus.
“This wasn’t an easy call,” Caitlin Kalinowski, formerly the leader of the company’s robotics division, said in a post on X. “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people.”
The Defense Department reached its deal with the Sam Altman-led company while locked in a squabble with Anthropic, another AI startup, over restrictions put in place by CEO Dario Amodei barring the use of its tech in mass surveillance of American citizens or in military attacks without human input.
Unable to break the impasse, President Donald Trump in late February directed all federal agencies to stop using Anthropic technology. Last week, the Pentagon designated the startup a supply chain risk, a label usually reserved for entities with ties to foreign adversaries.
And while Altman said the deal with the Pentagon included protections against the technology being used for mass surveillance and autonomous use of force, Kalinowski contended that “the announcement was rushed without the guardrails defined.”
“It’s a governance concern first and foremost,” she said. “These are too important for deals or announcements to be rushed.”
OpenAI confirmed Kalinowski’s departure in a statement but said the company believes its “agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons.”
“We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world,” the company said.
Just one day after severing ties with Anthropic, the White House launched a military operation against Iran in tandem with Israel, reportedly continuing to use the company’s AI tools in its barrage of strikes on the country. Though there is no known instance of the U.S. using AI in a strike during the conflict without human oversight, questions about the role AI companies play in wartime — including who gets to make the rules — have reached a fever pitch.