Do I have this right?
-
Do I have this right? Basically, Anthropic (Claude) was asked by the U.S. government to remove safeguards. They said no, knowing this refusal would hurt their business and blacklist them from all government contracts. OpenAI (ChatGPT) was asked the same thing, happily agreed, then scrambled to do damage control on their public image by (probably) lying.
ChatGPT might be a lot more accessible and nice to use, but I think I'll be sticking to Claude from here on out.
-
@alexhall really choosing between slime and slimier here: https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military
-
@alexhall Yes, that is correct.
-
@alexhall My understanding is that the government was concerned Anthropic could pull their API access in the middle of an operation if they didn't like the nature of said operation, so they added some language that would guarantee the DOD could use Claude for anything "within the law." Anthropic thought that language was too broad, so they held a meeting. In the meeting, one of the questions asked was whether or not Anthropic would let the military use Claude to shoot down a nuclear ICBM. The CEO's answer was some form of, "well, call us, and we'll work it out." The Defense Department was, unsurprisingly, not happy with that response. So they penned a basically identical contract for OpenAI, and OpenAI signed it. It's also worth noting that the Pentagon is still using Claude for Epic Fury, so I think all sides are doing a bit of shadowboxing here.
-
@jpellis2008 @alexhall To be fair, Obama would have been just as eager to use them. Same with Clinton, Bush, Biden/Harris, Newsom, etc. We have a uniparty when it comes to automated war.