
Do I have this right?

Uncategorized · 6 Posts · 4 Posters
alexhall@mastodon.social wrote (#1):

Do I have this right? Basically, Anthropic (Claude) was asked by the U.S. government to remove safeguards. They said no, knowing this refusal would hurt their business and blacklist them from all government contracts. OpenAI (ChatGPT) was asked the same thing, happily agreed, then scrambled to do damage control on their public image by (probably) lying.

    ChatGPT might be a lot more accessible and nice to use, but I think I'll be sticking to Claude from here on out.


wronglang@bayes.club wrote (#2):

      @alexhall really choosing between slime and slimier here: https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military


jpellis2008@dragonscave.space wrote (#3):

@alexhall Yes, that is correct.


prism@infosec.exchange wrote (#4):

@alexhall My understanding is that the government was concerned Anthropic could pull its API access in the middle of an operation if it didn't like the nature of said operation, so they added language guaranteeing the DoD could use Claude for anything "within the law." Anthropic thought that language was too broad, so they held a meeting. One of the questions asked in the meeting was whether Anthropic would let the military use Claude to shoot down a nuclear ICBM. The CEO's answer was some form of "well, call us, and we'll work it out." The Defense Department was, unsurprisingly, not happy with that response. So they penned a basically identical contract for OpenAI, and OpenAI signed it. It's also worth noting that the Pentagon is still using Claude for Epic Fury, so I think all sides are doing a bit of shadowboxing here.


jpellis2008@dragonscave.space wrote (#5):

@prism @alexhall This administration would have no problem using Terminator-style hunter-killers if they were available. Empathy is dead.


prism@infosec.exchange wrote (#6):

@jpellis2008 @alexhall To be fair, Obama would have been just as eager to use them. Same with Clinton, Bush, Biden/Harris, Newsom, etc. We have a uniparty when it comes to automated war.

relay@relay.infosec.exchange shared this topic.