OpenAI Partners with Anduril, Leaving Some Employees Concerned Over Militarization of AI
“OpenAI is partnering with defense tech company Anduril,” wrote the Verge this week, noting that OpenAI “used to describe its mission as saving the world.”
It was Anduril founder Palmer Luckey who advocated for a “warrior class” and autonomous weapons during a talk at Pepperdine University, saying society needs people “excited about enacting violence on others in pursuit of good aims.” The Verge notes it’s OpenAI’s first partnership with a defense contractor “and a significant reversal of its earlier stance towards the military.”
OpenAI’s terms of service once banned “military and warfare” use of its technology, but the company softened that position in January, changing its terms of service to remove the prohibition.
Hours after the announcement, some OpenAI employees “raised ethical concerns about the prospect of AI technology they helped develop being put to military use,” reports the Washington Post. “On an internal company discussion forum, employees pushed back on the deal and asked for more transparency from leaders, messages viewed by The Washington Post show.”
OpenAI has said its work with Anduril will be limited to using AI to enhance systems the defense company sells to the Pentagon to defend U.S. soldiers from drone attacks. Employees at the AI developer asked in internal messages how OpenAI could ensure that Anduril systems aided by its technology wouldn’t also be directed against human-piloted aircraft, or how it could stop the U.S. military from deploying them in other ways. One OpenAI worker said the company appeared to be trying to downplay the clear implications of doing business with a weapons manufacturer, the messages showed. Another said they were concerned the deal would hurt OpenAI’s reputation, according to the messages…
OpenAI executives quickly acknowledged the concerns, messages seen by The Post show, while also writing that the company’s work with Anduril is limited to defensive systems intended to save American lives. Other OpenAI employees in the forum said that they supported the deal and were thankful the company supported internal discussion on the topic. “We are proud to help keep safe the people who risk their lives to keep our families and our country safe,” OpenAI CEO Sam Altman said in a statement…
[OpenAI] has invested heavily in safety testing, and said that the Anduril project was vetted by its policy team. OpenAI has held feedback sessions with employees on its national security work in the past few months, and plans to hold more, said Liz Bourgeois, an OpenAI spokesperson. In the internal discussions seen by The Post, the executives stated that it was important for OpenAI to provide the best technology available to militaries run by democratically elected governments, and that authoritarian governments would not hold back from using AI for military purposes. Some workers countered that the United States has sold weapons to authoritarian allies. By taking on military projects, OpenAI could help the U.S. government understand AI technology better and prepare to defend against its use by potential adversaries, executives also said.

“The debate inside OpenAI comes after the ChatGPT maker and other leading AI developers including Anthropic and Meta changed their policies to allow military use of their technology,” the article points out. It also notes another concern raised in OpenAI’s internal discussion forum.
The comment said that defensive use cases still represented militarization of AI, and noted that the fictional AI system Skynet, which turns on humanity in the Terminator movies, was also originally designed to defend against aerial attacks on North America.