AI psychosis among the C-suite is really high now. I’m seeing it at work, where they validate everything using AI even though they know it screws up. For example, if I tell them a reboot isn’t needed for a CVE because we aren’t running the app directly on the server, it’s in Docker, they will immediately fact-check me with AI right while we’re talking. It’s just one example, but I’ve never seen such bizarre behavior. They treat AI like some divine truth. Has anyone noticed this?
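[For context, the Docker point above reflects standard ops reasoning: a host reboot matters when the kernel or host-level libraries change, but a CVE in a containerized app is typically remediated by rebuilding the image against a patched base and restarting the container. A minimal sketch, assuming a hypothetical image and container both named `myapp`:]

```shell
# Hypothetical example: a CVE in a library bundled inside the app image.
# The host kernel is untouched, so no host reboot is required.

docker pull debian:stable-slim          # refresh the (assumed) patched base image
docker build -t myapp:patched .         # rebuild the app image on the patched base
docker stop myapp && docker rm myapp    # replace the running container
docker run -d --name myapp myapp:patched
```

[A host reboot would only enter the picture for kernel or container-runtime CVEs, which is a different patching path entirely.]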
@nixCraft <cough>Meta<cough> <- That's one of the primary reasons I left the company.
-
@nixCraft spot on...
-
@nixCraft Being constantly bombarded with stress (news, economy, social discourse) makes people lose cognitive ability, and humans take the path of least resistance. Enter AI to help take the load off.
-
@nixCraft yes. The way it is set up, it creates easily digestible plausible bullshit.
Easily, because there is no social, emotional or cognitive friction or effort needed. It starts responding immediately and pleasantly.
Digestible, because it is trained on the most frequently occurring sentences, contexts, and words. No new language, no cognitive effort to understand or investigate underlying concepts, no awkward idiosyncratic language from other humans who think, feel, and express themselves differently.
Plausible, because it is a language model, so the grammar and tone and words fit expectations, with a high probability.
Bullshit, because the output can be either correct or wrong, but it has no basis in reality.
Something makes a certain part of society very susceptible to this.
-
@nixCraft AI is like micromanager crack.
-
@nixCraft Trump's team is like this too. The Trump squad is probably out here using some hyper-affirmation AI to make all their big-brain decisions.
-
@nixCraft Asking the LLM in the middle of a conversation is what gets me. Like, I'm here trying to tell you something and you have the audacity to fact-check me with a slopbot before I even finish my sentence?
Hell no. I've started simply walking away from such conversations. Go on, talk to your LLM, see how far that gets you.
-
@nixCraft Mine lets you know which "agent" every document has to be run through to "correct errors" before it gets to their level. If you don't, they reject it, believing the non-human over the human.
-
@nixCraft Well, the biggest red flag is their urge to fact-check the expert. Using AI is just the icing.
-
@nixCraft Yep, my colleague too. Stops listening and starts typing while I’m explaining something. It’s rude.
-
@nixCraft Yes, I quit.
-