OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns

OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company’s main competitors.

Sam Altman, OpenAI’s CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input.

Announcing the deal, Altman insisted that OpenAI's agreement with the government included assurances that its technology would not be used for those ends.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. He added that the Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement”.

Altman also said he hoped the Pentagon would “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements”.

If OpenAI's deal does prohibit its systems from being used for unethical ends, the company would appear to have succeeded in securing assurances where Anthropic could not. Altman announced the deal with the government shortly after Trump said he would direct all federal agencies to "IMMEDIATELY CEASE" all use of Anthropic technology.

The Pentagon had demanded Anthropic loosen ethical guidelines on its AI systems or face severe consequences.

The president said on his Truth Social platform: “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the [Pentagon], and force them to obey their Terms of Service instead of our Constitution.”

It remains to be seen how OpenAI staff will respond to the government deal. In its battle with the Trump administration, Anthropic has drawn support from its fiercest rivals. Nearly 500 OpenAI and Google employees signed an open letter saying "we will not be divided".

“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the letter reads. “They’re trying to divide each company with fear that the other will give in.”

Altman sought to reassure OpenAI employees in a memo sent on Friday night.

“Regardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance,” Altman wrote in the memo, which was obtained by Axios.

“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”

Altman added: “We are going to see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.”

Anthropic, which presents itself as the most safety-forward of the leading AI companies, had been mired in months of disagreement with the Pentagon. US defense officials had pushed for unfettered access to Claude's capabilities, which they say can help protect the country. Anthropic, meanwhile, has resisted allowing its product to be used for mass surveillance or for weapons systems that can kill people autonomously.

“No amount of intimidation or punishment from the [Pentagon] will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic said in its statement on Friday night.

“We have tried in good faith to reach an agreement with the [Pentagon], making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above,” the company continued. “To the best of our knowledge, these exceptions have not affected a single government mission to date.”

OpenAI on Friday said it is raising $110bn in a blockbuster funding round which would value the company at $840bn.
