Spent the day talking to works council members about "AI".

74 Posts 42 Posters 87 Views
  • tante@tldr.nettime.org

    Spent the day talking to works council members about "AI". And it's kinda wild hearing their stories from the wild: management is 100% in "AI can do everything" fantasy land and makes huge plans for how to use "AI" to cut workers, when real projects that supposedly can do 50% of a specific task end up being able to do 8%. And they still go live. It's fucking bonkers. CEOs are really not okay.

    skjeggtroll@mastodon.online
    #47

    @tante

    I'm genuinely starting to wonder whether LinkedIn has a significant share of the blame for this. There is a certain strain of brain-rot (not just related to AI) that seems to be unreasonably prevalent in management, and I'm not sure what other contamination vector there might be.

    • sandzwerg@chaos.social
      #48

      @tante Goals for this year from the top: everyone should use AI. We shall find at least one good use case for AI per team. So much bullshit.

      • netraven@hear-me.social
        #49

        @tante
        The people most likely to be destabilized by LLMs are the people most insulated from contradiction, and executives are professionally insulated from contradiction.

        • snoopj@hachyderm.io

          @aud @tante @glyph well they do have metrics, it's just that they're generally ad-hoc and terrible metrics

          and even when they aren't, Goodhart's Law ensures that relying on them turns the exercise into farce relatively soon.

          arguably that kind of farce is the entire history of the false spring: "simply scale it up" worked surprisingly well, then worked surprisingly well again, and therefore we can extrapolate that it will work forever and [financial irresponsibility] and oops now it's not working anymore oh shit oh fuck uhhhh AGENTS, we're doing agents now! Yea, that's the ticket. (and so on)

          glyph@mastodon.social
          #50

          @SnoopJ @aud @tante there are so many people who are really, actually offering incentives and bonuses for *token use* though. Like it's not just a thing that is happening somewhere, it seems to be one of the more *common* mechanisms.

          I was sure when I first heard about this that it must be some kind of self-dealing kickback scam? But as far as I can tell… no? It's just a thing that managers *actually* think is a good idea? Literally incentivizing direct waste by employees

          • ehproque@neopaquita.es

            @jaredwhite @tante why wouldn't they? The people who bullshit for a living are (ironically) not threatened, they're having the time of their lives instead

            patrickleavy@mastodon.social
            #51

            @ehproque @jaredwhite @tante I've read some messed up stuff today - but this could be the most terrifying.

            • tante@tldr.nettime.org

              Which was really fucked up to see: these folks actually want to protect their organizations from burning a lot of resources on bullshit, and instead fix actual problems in ways that help the workers _and_ the organization. And they have to actively fight management who got their brains ruined on LinkedIn.

              slotos@toot.community
              #52

              @tante I can’t get an answer to a simple question for the last few months: "what’s the goal, and how will you know we’ve achieved it specifically thanks to AI?"

              Because if the work I’ve been doing to remove obstacles to productivity for the last year and a half will get attributed to this bullshit, I’ll start complying maliciously.

              • larsmb@mastodon.online

                @glyph @tante

                "AI is going to make us more productive at shipping our software."

                "Great! Amazing! That must be several phd theses you got there! Well done! Didn't know you had it in you."

                "?!?"

                "Well, I mean, you must have figured out how to measure software development productivity reliably, right? What's our baseline at?"

                slotos@toot.community
                #53

                @larsmb I didn’t know how much I wanted to scream until I read this…

                • caffetino@social.pikaia.org
                  #54

                  @tante what a time to be alive. I'd be interested in seeing how you frame the discussions.

                  • ghostonthehalfshell@masto.ai
                    #55

                    @tante

                    To quote the science fiction writer Larry Niven:

                    Think of it as evolution in action

                    • tante@tldr.nettime.org

                      But: If you have any chance to speak to unions/workers from different domains and organizations do so.
                      It's fascinating how
                      a) different organizations are and operate
                      b) they all end up with the same handful of structural problems

                      ghostonthehalfshell@masto.ai
                      #56

                      @tante

                      Your remarks make me think that employees could make a proposal to investors (and here I am making a pretty big assumption) that they can run the company better than management. They could plan to say this to investors after the first major disaster.

                      The assumption I’m making here is that the investors are all interested in the company doing well, rather than soaking money out of it by playing stock movements based on AI.

                      • tante@tldr.nettime.org

                        @glyph the number of times I asked a CEO/CTO about their "AI" project and how they actually measure cost, or what their measurable criteria for success are, and only got someone looking at me as if I was speaking in tongues, is really scary.

                        Like: Isn't turning everything into metrics and measurements in order to make data driven decisions what management is supposed to do?

                        ghostonthehalfshell@masto.ai
                        #57

                        @tante @glyph

                        Translation: management are non-engineers and they can’t do a cost-benefit analysis.

                        I’d like to point out that apparently mechanical business-management tasks like that are actually done really well by AI.

                        Engineering, not so much.

                          • jrdepriest@infosec.exchange
                            #58

                          @SnoopJ @aud @tante @glyph

                          The thing about agents, from what I understand in talking to vendors about using them, is that to use them correctly you have to build very detailed and specific playbooks for them to "follow".

                          In practice, it seems like most people just think you can Claude your way to success with vibes and vaguery.

                          They seem to think having an agent eliminates the hard part: defining your process in clear language. In truth, it's more important because an agent won't have the "common sense" to not delete and recreate your production database at 4:30 on a Friday before a three day weekend. Or just delete it.

                          This is not even including the identity and access boundaries you need. Like, we are having deep discussions about an agentic solution that would just read help desk tickets and make suggestions to the help desk personnel. We have to consider all the ways prompt injection could abuse its access. And when the agentic AI is telling people what to do, that's a prime target for social engineering. They want it to be able to reboot servers. That's a denial of service attack waiting to happen.

                           An outside vendor we've spent lots of money on is trying to sell us a multi-agent system that management is already in love with, and we have to educate them on the almost unfathomable risk it would create. How are they forgetting everything they've ever learned about risk modeling, threats, fraud, attack surfaces, least privilege, etc.? These are not stupid people, but they are acting like wide-eyed children just because it has the word "AI" attached to it. They should be more skeptical, not less.

                            • glyph@mastodon.social
                              #59

                            @jrdepriest @SnoopJ @aud @tante "what if we could get rid of everything that *wasn't* an insider threat. what if the entire inside was made of threats? would that fix it?"

                              • glyph@mastodon.social
                                #60

                              @jrdepriest @SnoopJ @aud @tante like I'm trying to make light of it with little jokes but that is LITERALLY WHAT IS GOING ON in an absolutely WILD number of places

                                • aud@fire.asta.lgbt
                                  #61

                                @glyph@mastodon.social @SnoopJ@hachyderm.io @jrdepriest@infosec.exchange @tante@tldr.nettime.org "when you think about it, if the call is coming from inside the house, you've really limited the amount of space you have to search to find the threat"

                                  • bubbajet@mastodon.world
                                    #62

                                  @GhostOnTheHalfShell @tante You’re describing an Employee Stock Ownership Plan (ESOP) and when they don’t work out it’s ugly. All your eggs in one basket, etc.

                                    • jrdepriest@infosec.exchange
                                      #63

                                    @glyph @SnoopJ @aud @tante

                                    Before our old CEO retired, he opined that turning agentic AI loose was like giving a toddler admin access.

                                      • aud@fire.asta.lgbt
                                        #64

                                      @SnoopJ@hachyderm.io @jrdepriest@infosec.exchange @tante@tldr.nettime.org @glyph@mastodon.social "what does vertical integration mean to ME? To me, vertical integration is when all of your threats are insider threats. Now that's talking about shareholder value with corporate power."

                                        • ghostonthehalfshell@masto.ai
                                          #65

                                        @bubbajet @tante

                                         ESOPs (d)evolved under specific conditions.

                                         An ESOP does not imply all eggs in one basket.

                                          • paul_ipv6@infosec.exchange
                                            #66

                                          @tante @glyph

                                           that's the story. they want us to believe it's not all ego and greed and vibe management. but ask for actual data and metrics that aren't just a pretty graph unrelated to reality, and they will likely get defensive.

                                          don't question the emperor's new clothes. offer to design their next outfit...

                                          1 Reply Last reply
                                          0