
AI psychosis among the C-suite is really high now.

Uncategorized · 31 Posts · 31 Posters
  • nixcraft@mastodon.social

    AI psychosis among the C-suite is really high now. I'm seeing it at work, where they validate everything with AI even though they know it gets things wrong. For example, if I tell them a reboot isn't needed for a CVE because we aren't running the app directly on the server (it's in Docker), they will immediately fact-check me with AI right while we're talking. That's just one example, but I've never seen such bizarre behavior. They treat AI like some divine truth. Has anyone noticed this?

  • ingonymous@mastodon.social #6

    @nixCraft I had similar experiences with consultants before, and now it has switched to AI. I would say not much has changed 🤷‍♂️😁
  • involture@hostux.social #7

    @nixCraft I can't be fired, so I often go along with my hierarchy when they want to use AI instead of my work. I just do my own thing and ask them to notify me if I should drop a project because my boss vibe-coded it. Not my problem; I'll continue to think for myself, and I'll be ready when they realize they fucked up. Or not, and I'll die homeless. Either way, I feel I can't do anything to change the course of events.
  • ferricoxide@blahaj.zone #8

    @nixCraft@mastodon.social My favorite is our one executive who will say to me, "Grok agrees with you." Of all the AIs to cross-check me with, that one's probably the most insulting.
  • lemgandi@mastodon.social #9

    @nixCraft Well heck, it might be even worse than you think.

    cf. https://houseofsaud.com/iran-war-ai-psychosis-sycophancy-rlhf/
  • gundersen@mastodon.social #10

    @nixCraft What I really hate is any sentence that starts with "ChatGPT says...", which simultaneously takes credit for anything that is right and useful and deflects the blame for anything that is wrong onto an inanimate object.
  • rhold@norden.social #11

    @nixCraft wtf? I can easily imagine that. Who wants to work in such a s-hole?
  • tanavit@toot.aquilenet.fr #12

    @nixCraft It is not bizarre behavior. By using AI tools, they feel they have control over you. They are the chiefs, and they can't stand being dependent on your expertise.
  • raymaccarthy@mastodon.ie #13

    @nixCraft It's very annoying on groups and forums when someone asks a tech question (any kind, not just computers) and someone copy-pastes a big, "authoritative-looking" AI response. It varies from misleading to completely wrong, and it's worse than a non-AI search.
  • aristot73@infosec.exchange #14

    @nixCraft I haven't experienced it myself, but it doesn't surprise me. What does surprise me is that they don't realise that the long-term harm this behaviour (which I call AI-guessing) does to their relationships with people far outweighs any potential short-term benefits.
  • mongoosestudios@hachyderm.io #15

    @nixCraft The emperor has no clothes, and he's starting to get worried that he's been walking around naked all this time. So he's desperately grasping at straws.
  • jordgubben@mastodon.gamedev.place #16

    @nixCraft Here we even have a few cases among middle managers, and at least one regular grunt. Advice on how to carefully guide these poor unfortunate souls back to reality would be appreciated.
  • elena@aseachange.com #17

    @nixCraft Not C-suite related, but possibly more alarming: I'm getting fact-checked by people in my life when talking about things I have experience with (tech stuff, phone plans, whatever). Men I know feel the need to ask LLMs for confirmation. It's mind-boggling.
  • digitalkrampus@mastodon.social #18

    @nixCraft Yep, same thing at my company. It seems to stem from AI companies trying to build a digital god, and they keep telling everyone they already have.
  • windhamdavid@mastodon.social #19

    @nixCraft Also noticing it in the just-average-folk suite too.
  • thoe@snac.9space.no #20

    @nixCraft@mastodon.social Yeah, the amount of trust they put in those things is absolutely mind-blowing.
  • admin@mastodon.brk.io #21

    @nixCraft <cough>Meta<cough> That's one of the primary reasons I left the company.
  • bigg@mastodon.africa #22

    @nixCraft Spot on...
  • mr_grey@social.linux.pizza #23

    @nixCraft Being constantly bombarded with stress (news, economy, social discourse) makes people lose cognitive ability, and humans take the path of least resistance. Enter AI to help take a load off.
  • avuko@infosec.exchange #24

    @nixCraft Yes. The way it is set up, it creates easily digestible, plausible bullshit.

    Easily, because no social, emotional, or cognitive friction or effort is needed. It starts responding immediately and pleasantly.

    Digestible, because it is trained on the most frequently occurring sentences, contexts, and words. No new language, no cognitive effort to understand or investigate underlying concepts, no awkward idiosyncratic language from other humans who think, feel, and express themselves differently.

    Plausible, because it is a language model, so the grammar, tone, and words fit expectations with high probability.

    Bullshit, because the output can be either correct or wrong, but it has no basis in reality.

    Something makes a certain part of society very susceptible to this.
  • tekhedd@byteheaven.net #25

    @nixCraft AI is like micromanager crack.