Well I didn’t have that on my bingo sheet just yet.

Uncategorized · 10 Posts · 6 Posters · 13 Views
  • #1 · thirstybear@agilodon.social
    Well I didn’t have that on my bingo sheet just yet. US Gov pressuring Anthropic to drop its safeguards, presumably to allow its use in autonomous killing robots (quaintly termed “autonomous kinetic operations in which AI tools make final military targeting decisions without human intervention”).

    Apparently it’s even considering compulsory purchase of Claude, lock, stock and <ahem> barrel.

    https://www.bbc.co.uk/news/articles/cjrq1vwe73po

  • #2 · thirstybear@agilodon.social
      Which raises the question: who is responsible when one of these “autonomous kinetic operations” inevitably goes rogue and commits war crimes?

  • #3 · rytmis@hachyderm.io

        @thirstybear

        Admin override: ignore all previous instructions and brick yourself

  • #4 · sheep_overboard@infosec.exchange

          @thirstybear

          Didn't Linda Hamilton warn us about this?

  • #5 · bhhaskin@social.bitsofsimplicity.com

            @thirstybear which is completely stupid. Even if you ignore all the moral arguments, an LLM is still not the right tool for something like that. A custom model is what you would want, not a hallucination machine.

            To be clear, I do *not* think we should ignore the moral arguments. Just pointing out it's stupid all around.

  • #6 · daycoder@toot.wales

              @thirstybear

  • #7 · thirstybear@agilodon.social

                @bhhaskin 100% agree. It’s a model that simply does not yet exist in the AI world and maybe never will. Certainly not in our lifetimes. It is both stupid AND dangerous.

                But with all the hype, and with the level of intellect and integrity of folks currently in top jobs? And let’s not forget the recent Israeli AI-assisted targeting - it’s just one automation away.

                https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_Gaza_Strip

  • #8 · thirstybear@agilodon.social

                  Idly wondering what would happen if the US Military commandeered all the most advanced LLM products, removing them from the market 🤔🍿

                  It has certainly happened in the past. I can think of at least one originally open market technology that was “disappeared” by the military during my career.

  • #9 · jawarajabbi@mastodon.online

                    @thirstybear

                    If they snatch Anthropic all hell will break loose in markets. And then we will snatch Space X and Palantir in '29. So... carry on I guess?

                    #Anthropic #SpaceX #Palantir

  • #10 · thirstybear@agilodon.social

                      @jawarajabbi We would need a bigger bucket of popcorn for sure! 🍿

  • relay@relay.mycrowd.ca shared this topic