When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens to use to destroy their lives.

34 Posts 16 Posters 40 Views

This topic has been deleted. Only users with topic management privileges can see it.
  • madsenandersc@social.vivaldi.net

    @randahl

No, I don't agree that my stance on LLMs is easily identifiable from our conversation.

    Let's make a test: Describe how you think I feel about AI and LLMs in a paragraph, and then you have my word that I will truthfully describe how I use (or not) LLMs in my everyday life and where I see the dangers in it.

And just to be clear: while being critical of a technology may be visible through public postings, the rest of your argument (having an affair, relationships with a spouse and sister-in-law, etc.) is not - and if it were, there would be no need for anyone to rely on any kind of AI to use it for blackmail.

randahl@mastodon.social
wrote last edited by
    #25

@madsenandersc the reason you see my statement as "pure bullshit" is that you and I are not in the same conversation.

    I opened this thread with a general prediction about the future capabilities of AI systems.

You keep claiming I am wrong because my post does not fully match your experience with the limitations of present-day large language models — which (as you know) are just one of many different AI technologies.

    These are two very different conversations.

    1/2


randahl@mastodon.social
wrote last edited by
      #26

      @madsenandersc
      …
Now I agree with you that there is a lot of hype surrounding LLMs, and I am certainly open to having a conversation about that. But please be aware that the narrow goalposts of present-day LLMs were introduced into this conversation by you, not me.

      2/2


madsenandersc@social.vivaldi.net
wrote last edited by
        #27

        @randahl

        So you are talking about what LLMs may evolve into at some point in the future? Hmmm - I guess anything is possible, but we are still very far away from that point, to be honest.

There is no way I can see LLMs, with their current technology, evolving into what you are describing - that would require a world where the AI has unobstructed access to everything you say or do, online or not, and that in turn would require your devices to be wide open to the AI.

Also, it would require an AI that is much, much more capable of rational thinking than what we have today. I know there is a story going around about someone who asked their LLM to surprise them, and a day later it had created a phone number and called them, exclaiming "SURPRISE!" - but I have yet to see any evidence to support that story at all.

I understand that there is a fear that Microsoft and Google are moving in that direction (Amazon as well, come to think of it), but it would require users to be absolutely indifferent to whatever large tech companies are trying to wrangle out of their devices, and I see things going in the exact opposite direction at the moment.

That said, I could see US customers being screwed over by this, especially if privacy laws there remain basically non-existent, but again - I see a movement in the opposite direction.

• randahl@mastodon.social

          When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens to use to destroy their lives.

          A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being complete, and her never getting elected to Parliament to stop AI mass surveillance.

diana_european@mastodon.social
wrote last edited by
          #28

          Exactly.

• benfulton@mastodon.london

@randahl Which, like most everything AI, was predicted by Isaac Asimov - this time in the short story "The Evitable Conflict", where the machines carefully remove human obstacles to their plans for the good of humanity.

The Evitable Conflict - Wikipedia (en.wikipedia.org)

            #bookstodon #ai #scifi

jimthewhyguy@techfieldday.net
wrote last edited by
            #29

            @benfulton @randahl I'm actually using that short story in my upcoming keynote address in a few weeks. It's a Susan Calvin gem!


jimthewhyguy@techfieldday.net
wrote last edited by
              #30

@randahl As I understand it, China has been using social scores for at least a decade now to punish its citizens on what passes for social media there whenever they show resistance to the party line - e.g. restricting their travel by limiting access to payment kiosks or other services.

LLMs may not be the direct tool governments would use, but there are plenty of surveillance techniques that would work perfectly.

• madsenandersc@social.vivaldi.net
wrote last edited by
                #31

                @violetmadder @randahl

                "A second AI system known as “Where’s Daddy?” tracked Palestinians on the kill list and was purposely designed to help Israel target individuals when they were at home at night with their families."

                I'm not saying that this did not happen - I would just like to know HOW they were tracked?

                AI is not some magic 8 ball that will tell you everything you want to know - you need some kind of technology that will give you the basic information, and THEN an AI system can do some calculations for you.

                Were these individuals tricked into installing an app on their phones? Did Google and Apple provide the data? Amazon and their Alexa? - or did they simply conclude that most people spend their night in their home, and most likely in their bed?

How did the system determine who was a potential target? Did someone eavesdrop on private messages? Real-time decryption of secure chats? Public postings on social media?

Regardless of how you look at this, the problem is not that some kind of AI (which simply isn't that intelligent - it can just recalculate things very quickly) was used - the problem is the collection of all the personal information that is then fed into the AI.

                Prevent illegal information collection - then the AI becomes much, much less useful.

• madsenandersc@social.vivaldi.net
wrote last edited by
                  #32

                  @violetmadder @randahl

Ah - you must be American. Yeah, you're pretty much fucked, I'll give you that.

• madsenandersc@social.vivaldi.net
wrote last edited by
                    #33

                    @violetmadder @randahl

                    I don't disagree with you.

                    I can guarantee you one thing, though: In the current political climate, any kind of pressure coming from the US will face resistance like never before in Europe and in Denmark in particular.

Anything related to large American corporations and the US government is almost instinctively seen as something bad that needs to justify itself before people will even look at it, much less accept it.


npars01@mstdn.social
wrote last edited by
                      #34

                      @randahl

                      It's why age verification is suddenly so popular.

"Cradle to the grave" surveillance, where something stupid a person said at 16 years of age is dredged up to discredit them at 56.
