AI psychosis among the C-suite is really high now. I’m seeing it at work, where they validate everything using AI even though they know it screws up. For example, if I tell them a reboot isn't needed for a CVE because we aren’t running the app directly on the server — it's in Docker — they will immediately fact-check me with AI right while we're talking. That's just one example, but I’ve never seen such bizarre behavior. They treat AI like some divine truth. Has anyone noticed this?
@nixCraft in an "at least the AI can keep up with him" kind of way or a "he might be wrong/lying" kind of way?
-
@nixCraft It's almost the same thing as "It was on Facebook, so it must be true!"
Just ignore them.
-
@nixCraft
I had similar experiences with consultants before, and now it has switched to AI. Honestly, though... I'd say not much has changed.
-
@nixCraft
I can't be fired, so I often go along with my hierarchy when they want to use AI instead of my work. I just do my own thing and ask them to notify me if I should drop a project because my boss vibe-coded it. Not my problem — I'll continue to think for myself, and I'll be ready when they realize they fucked up.
Or not, and I'll die homeless. Either way, I feel I can't do anything to change the course of events. -
@nixCraft@mastodon.social
My favorite is the one executive who will say to me, "Grok agrees with you." Of all the AIs to crosscheck me with, that one's probably the most insulting. -
@nixCraft Well heck, it might be even worse than you think..
cf: https://houseofsaud.com/iran-war-ai-psychosis-sycophancy-rlhf/
-
@nixCraft What I really hate is any sentence that starts with "ChatGPT says..." — it takes credit for anything that turns out right and useful, while deflecting the blame for anything wrong onto an inanimate object.
-
@nixCraft wtf?
I can easily imagine that. Who wants to work in such a s-hole?
-
It's not bizarre behavior at all.
By using AI tools, they feel they have control over you. They are the chiefs, and they can't stand being dependent on your expertise. -
@nixCraft
It's very annoying on groups/forums when someone asks a tech question (any kind, not just computers) and someone copy-pastes a big, "authoritative-looking" AI response. It varies from misleading to completely wrong — worse than a non-AI search. -
@nixCraft Haven't experienced it myself, but it doesn't surprise me.
What does surprise me is that they don't realise the long-term harm this behaviour — which I call AI-guessing — does to their relationships with people far outweighs any potential short-term benefits. -
@nixCraft the emperor has no clothes, and he's starting to get worried that he's been walking around naked all this time. So he's desperately grasping at straws.
-
@nixCraft Here we even have a few cases among middle managers, and at least one regular grunt?
Advice on how to carefully guide these poor unfortunate souls back to reality would be appreciated.
-
@nixCraft Not C-suite related, but possibly more alarming: I'm getting fact-checked by people in my life when talking about things I have experience with (tech stuff, phone plans, whatever)... men I know feel the need to ask LLMs for confirmation. It's mind-boggling.
-
@nixCraft Yep, same thing at my company.
It seems to come from AI companies trying to build a digital god — and they keep telling everyone they've succeeded.
-
@nixCraft Also noticing it in the average-folk suite too.
-
@nixCraft@mastodon.social
Yeah, the amount of trust they put in those things is absolutely mind-blowing
-
@nixCraft <cough>Meta<cough> <- That's one of the primary reasons I left the company.
-
@nixCraft spot on...
-
@nixCraft Being constantly bombarded with stress (news, economy, social discourse) makes people lose cognitive ability, and humans take the path of least resistance. Enter AI, to help take the load off.