AI psychosis among the C-suite is really high now.

31 Posts 31 Posters 0 Views
nixcraft@mastodon.social

AI psychosis among the C-suite is really high now. I'm seeing it at work, where they validate everything using AI even though they know it screws up. For example, if I tell them a reboot isn't needed for a CVE because we aren't running the app directly on the server (it's in Docker), they will immediately fact-check me with AI right while we're talking. It's just one example, but I've never seen such bizarre behavior. They treat AI like some divine truth. Has anyone noticed this?

elena@aseachange.com (#17)

@nixCraft Not C-suite related, but possibly more alarming: I'm getting fact-checked by people in my life when talking about things I have experience with (tech stuff, phone plan, whatever). Men I know feel the need to ask LLMs for confirmation. It's mind-boggling.


digitalkrampus@mastodon.social (#18)

      @nixCraft Yep, same thing at my company.

It seems to be due to AI companies trying to build a digital god, and they keep telling everyone they already have.


windhamdavid@mastodon.social (#19)

@nixCraft Also noticing it in the just-average-folk suite too.


thoe@snac.9space.no (#20)
          @nixCraft@mastodon.social

Yeah, the amount of trust they put in those things is absolutely mind-blowing.

admin@mastodon.brk.io (#21)

            @nixCraft <cough>Meta<cough> <- That's one of the primary reasons I left the company.


bigg@mastodon.africa (#22)

              @nixCraft spot on...


mr_grey@social.linux.pizza (#23)

@nixCraft Being constantly bombarded with stress (news, economy, social discourse) makes people lose cognitive ability, and humans take the path of least resistance. Enter AI to help take the load off.


avuko@infosec.exchange (#24)

@nixCraft Yes. The way it is set up, it creates easily digestible, plausible bullshit.

                  Easily, because there is no social, emotional or cognitive friction or effort needed. It starts responding immediately and pleasantly.

Digestible, because it is trained on the most frequently occurring sentences, contexts, and words. No new language, no cognitive effort to understand or investigate underlying concepts, no awkward idiosyncratic language from other humans who think, feel, and express themselves differently.

Plausible, because it is a language model, so the grammar, tone, and words fit expectations with high probability.

                  Bullshit, because the output can be either correct or wrong, but it has no basis in reality.

                  Something makes a certain part of society very susceptible to this.


tekhedd@byteheaven.net (#25)

                    @nixCraft AI is like micromanager crack.


zekapariltisi@mastodon.social (#26)

@nixCraft Trump's team is like this too. The Trump squad is probably out here using some hyper-affirmation AI to make all their big-brain decisions.


pavled@mastodon.social (#27)

@nixCraft Asking the LLM in the middle of a conversation is what gets me. Like, I'm here trying to tell you something, and you have the audacity to fact-check me with a slopbot before I even finish my sentence?
Hell no. I've started simply walking away from such conversations. Go on, talk to your LLM, see how far that gets you.


merospit@infosec.exchange (#28)

@nixCraft Mine lets you know which "agent" every document has to be run through to "correct errors" before it reaches that level. If you don't, they reject it, believing the non-human more than the human.


corvus_ch@social.corvus-ch.name (#29)

@nixCraft Well, the biggest red flag is their urge to fact-check the expert. Using AI is just the icing.


spacejunk@fosstodon.org (#30)

@nixCraft Yep, my colleague too. He stops listening and starts typing while I'm explaining something. It's rude.


davep@infosec.exchange (#31)

                                @nixCraft Yes, I quit.
