Spent the day talking to workers council members about "AI".

Uncategorized · 74 Posts · 42 Posters · 87 Views
• #3 · tante@tldr.nettime.org, replying to tante@tldr.nettime.org:

  > But it was super fun to lead them through a "this is how you can force reasonable evaluation on 'AI' projects which kills most of them" framework and see how they felt empowered and able to actually do their job again.

  Which was really fucked up to see: These folks actually want to protect their organizations from burning a lot of resources on bullshit instead of fixing actual problems that help the workers _and_ the organization. And they have to actively fight management who got their brains ruined on LinkedIn.
• #4 · cm@chaos.social, replying to tante@tldr.nettime.org:

  @tante is that framework public?
• #5 · gerrymcgovern@mastodon.green, replying to tante@tldr.nettime.org:

  > Spent the day talking to workers council members about "AI". And it's kinda wild hearing their stories from the wild: Management is 100% in fantasy "AI"-can-do-everything land and makes huge plans for how to use "AI" to cut workers, when real projects that supposedly can do 50% of a specific task end up being able to do 8%. And they still go live. It's fucking bonkers. CEOs are really not okay.

  @tante CEOs never were okay. I always found senior management a narcissistic bunch of assholes, always looking for the next cool project to burnish their CVs. Many were totally scared of tech, easily fooled. And many more were full-on tech cultists, because the tech bros were always promising them how they could cut costs and fire people.
• #6 · tante@tldr.nettime.org, replying to cm@chaos.social (#4):

  @cm I have presented it in talks but have not fully formalized it yet.
• #7 · tante@tldr.nettime.org, replying to #6:

  @cm I should write it down, I know, but that takes a lot of time, and time is currently my most limited resource 😞
• #8 · simulo@hci.social, replying to tante@tldr.nettime.org (#6):

  @tante @cm in case one of those talks had a video recording, I'd be happy about a link; otherwise, I look forward to a blogpost at some point
• #9 · tante@tldr.nettime.org, replying to #3:

  But: If you have any chance to speak to unions/workers from different domains and organizations, do so.
  It's fascinating how
  a) different organizations are and how differently they operate, yet
  b) they all end up with the same handful of structural problems.
• #10 · tante@tldr.nettime.org, replying to simulo@hci.social (#8):

  @simulo @cm Nope. It was more of a workshop kind of setup, and in order to allow people to talk freely nothing was recorded.
• #11 · jaredwhite@indieweb.social, replying to tante@tldr.nettime.org (#3):

  @tante 9 times out of 10 (yes, that's an anecdotal stat), the people most resistant to AI-all-the-things are the most talented, most dedicated workers. Orgs who penalize or fire those people are committing self-sabotage. 🙁
• #12 · glyph@mastodon.social, replying to tante@tldr.nettime.org (#3):

  @tante It is a weird time to be alive. I wrote The Futzing Fraction functionally *for free* to help CEOs do their own cost modeling. And they don't even read it themselves — employees read it, and carefully create customized internal presentations to make its framing *even gentler* to their orgs, and it still only works to help soften AI mandates like half the time (at least based on the feedback I have received).
• #13 · glyph@mastodon.social, replying to #12:

  @tante Critics are characterized as surly bomb-throwers when we are trying SO HARD to help corporations succeed, just so they won't make our world *even worse*. It's a literal win-win they are trying to avoid.
• #14 · otherdog@mastodon.social, replying to tante@tldr.nettime.org (the opening post):

  @tante For many founders and CEOs the one thing that irritated them about starting a tech company was having to build an expensive and often ungovernable engineering team. Some (ime) reluctantly embraced the idea by wearing hoodies and paying attention to tech journalism. Others maintained a resentful distance. Most try to micromanage it regardless.

  The fact that AI is being embraced enthusiastically top-down is frankly one of the least surprising developments of my career thus far.
• #15 · tante@tldr.nettime.org, replying to glyph@mastodon.social (#13):

  @glyph you can only help people if they are willing to accept help I guess. But it's tragic.
• #16 · tante@tldr.nettime.org, replying to #15:

  @glyph the number of times I have asked a CEO/CTO about their "AI" project and how they actually measure cost, or what their measurable criteria for success are, and only got someone looking at me as if I was speaking in tongues is really scary.

  Like: Isn't turning everything into metrics and measurements in order to make data-driven decisions what management is supposed to do?
• #17 · glyph@mastodon.social, replying to tante@tldr.nettime.org (#16):

  @tante yeah it's a real "YOU HAD ONE JOB" situation
• #18 · aud@fire.asta.lgbt, replying to glyph@mastodon.social (#17):

  @glyph@mastodon.social @tante@tldr.nettime.org this is the thing that drives me a little batty: "AI", or (mis)applied statistics, is just... well, statistics. And all these "AI experts" never even try to use any sort of metric, much less a statistically rigorous method, to gauge if the damn thing works or not...
• #19 · snoopj@hachyderm.io, replying to aud@fire.asta.lgbt (#18):

  @aud @tante @glyph well they do have metrics, it's just that they're generally ad-hoc and terrible metrics

  and even when they aren't, Goodhart's Law ensures that relying on them turns the exercise into farce relatively soon.

  arguably that kind of farce is the entire history of the false spring: "simply scale it up" worked surprisingly well, then worked surprisingly well again, and therefore we can extrapolate that it will work forever and [financial irresponsibility] and oops now it's not working anymore oh shit oh fuck uhhhh AGENTS, we're doing agents now! Yea, that's the ticket. (and so on)
• #20 · olafke@muenchen.social, replying to tante@tldr.nettime.org (the opening post):

  @tante unfortunately and increasingly, management is most interested in whatever looks good in PowerPoint rather than their product in the real world.
• #21 · snoopj@hachyderm.io, replying to #19:

  @aud @tante @glyph the addition of "vision heads" has always been the brightest example of this to me, and came sooner than the craze for "agents".

  They ran out of runway to scale up on text alone, but clearly adding more parameters was the thing that needed doing. Bolting an entire vision system to the side of the model sure does add a lot of parameters and keeps you on the curve of projected growth.

  It doesn't really solve any problems in a way that might generate revenue, but it demos quite well, and a good demo is all you've ever really needed to separate tech speculators from their cash, *particularly* the ones gambling on "AI" at any point in tech history.
• #22 · aud@fire.asta.lgbt, replying to snoopj@hachyderm.io (#19):

  @SnoopJ@hachyderm.io @tante@tldr.nettime.org @glyph@mastodon.social ah, I meant for the boosters who are "seeing huge gains"; it's always anecdotal, and then any outside measurements of it contradict said anecdotal claims...

  but also,
  yes, what you just said, X 1000. Even the earlier "measurements" were horseshit: "we tested this by making it generate answers {for an extremely well documented standardized test for which answers appear many times in the training corpus} and it got a grade of 45%!" which they claim is good, except that's actually failing, which they never seem to mention...