I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.

Uncategorized · 190 Posts · 72 Posters · 243 Views
glyph@mastodon.social wrote:

    @svines some folks have found my post persuasive to their management and it has helped loosen or eliminate some mandates. it’s not advice to eliminate the mandate but just some rubrics for validating its effectiveness; not everyone is receptive but it might be worth a try?

#55 · svines@gts.svines.rodeo:

@glyph I love the enthusiasm, but I'm a cog in a Fortune 500 and this decision was made many levels above my pay grade. I don't think I can convince my boss, their boss, and their boss to commit career suicide in the current climate 😅

#56 · glyph@mastodon.social:

      @svines you obviously know your role and your relationship to your org better than I do :). but this COULD be pitched in a very non-career-suicidal way, i.e.: “hey boss I love the great-great-grandboss’s AI mandate but wouldn’t it be so cool if we had some actual DATA to show how productive it is making our team? I found this formula online…”

miss_rodent@girlcock.club wrote:

@MrBerard @glyph Psychosis is a broad range? It covers a range of severities - most days, I read to those who don't know me as "kinda weird"; most don't think "schizo" - but on my worse days, I definitely read as psychotic.
But - from *my* side of that, the difference is not 'psychotic' or 'not psychotic', it's just a question of how high the volume & intensity is set. The voices haven't *stopped* - ever - since I was 13, for example.

#57 · miss_rodent@girlcock.club:

@MrBerard @glyph my point being: a lot of the more minor oddities - changes to speech and writing patterns, being swayed more easily by nonsense, groundless beliefs defended disproportionately strongly in a manner resembling delusions being challenged, the cognitive backflips involved in preserving those beliefs against mounting contrary evidence, etc.
All read as potentially 'psychotic' to me - even in the tame case of 'It's bad except this one little niche exception that I'll defend fiercely!'

glyph@mastodon.social wrote:

Cory also correctly points out that "AI psychosis" is probably going to be gatekept by medical-establishment scicomm types soon, because "psychosis" probably isn't the right word and already carries an unwarranted stigma. And indeed, I think the biggest problem with "psychosis" as a metaphor is going to be that the ways in which AI can warp our minds are mostly NOT going to be catastrophic psychosis, and are not going to have good analogs in existing medical literature.

#58 · gundersen@mastodon.social:

@glyph LLMs seem to use many of the same techniques as mentalists, psychics, fortune-tellers and mediums in how they manipulate their victims: suggestion, cold reading, flattery, confidence, and the victims' own confirmation bias and suggestibility. People are influenced by the politeness and the well-structured text into ignoring factual issues, and then, over the course of a conversation, they fix the glaring problems themselves and later attribute the fix to the model.

#59 · miss_rodent@girlcock.club:

@MrBerard @glyph (Poverty of speech, flat affect, disorganized speech/thought, delusions, reduced attention, brain fog, disorientation, confusion, etc. are all pretty common psychosis features - all coming in various degrees, many of which LLM folks seem to exhibit, to various degrees, pretty commonly.)

glyph@mastodon.social wrote:

              The suggestion that the article makes is all about passive monitoring of the amount of time that your LLM projects *actually* take, so you can *know* if you're circling the drain of reprompting and "reasoning". Maybe some people really *are* experiencing this big surge in productivity that just hasn't shown up on anyone's balance sheet yet! But as far as I know, nobody bothers to *check*!

#60 · raphael@mastodon.sdf.org:

              @glyph I like your breakdown in those articles.

              I think that some of the more valuable stuff has been not when juniors prompt and don’t get value, but when seniors prompt, go do something else for a bit while the machine churns for a couple of minutes, and then come back to something that is pretty close to a good solution.

Think about a thing that might take you 15 minutes to kinda menially do (add some CLI bool flag that then needs to get passed down 3 layers in some spot, for example).

#61 · raphael@mastodon.sdf.org:

@glyph lowering of activation energy is how I see that. And while I agree that the futzing is way undercounted (and that, IMO, a lot of this falls over in longer sessions and is just not worth it)… a strong dev who knows exactly what the solution is supposed to look like can get paper-cut-y stuff cleaned up. A lot.

The "whine on Slack about a thing being busted" turns into a fix, and for most of it you can just go get a cup of water or review something in the meantime. Cool party trick, at least.

#62 · raphael@mastodon.sdf.org:

@glyph totally to your point, tho… the party trick might be just that. It feels fun to have progress happen while the laundry is being folded, but in the end I might end up churning anyway.

glyph@mastodon.social wrote:

                    @sabik uh I think that’s the METR one? IIRC not the best methodology but it’s still a kinda interesting result and well worth pursuing further https://arxiv.org/abs/2507.09089

#63 · sabik@rants.au:

                    @glyph
                    Thanks, that's the one!

glyph@mastodon.social wrote:

                      I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.

#64 · nils_berger@sw-development-is.social:

@glyph While I am not aware of any study showing the poisonous character of LLMs, two things are already proven:
1. LLMs have a more detrimental effect on software development than they have benefits. Google's DORA report has shown, multiple years in a row now, that LLM use in SW dev decreases performance and outcomes in most teams.
2. Abuse for malicious intent is rampant, yielding scary propaganda, misinformation, and distraction campaigns, and intensifying the threat from social-engineering attacks.

#65 · svines@gts.svines.rodeo:

@glyph yeah, true. I am in charge of setting OKRs for my team, so productivity etc. is part of that. Another guerrilla tactic I thought about was asking our legal team for their thoughts on AI-generated code now that the US Supreme Court has refused to hear an appeal of "AI code can't be copyrighted" - that potentially means our company no longer has protection, given how much vibe-coded stuff is around now.

#66 · glyph@mastodon.social:

                          @nils_berger have you got a link for that report?

#67 · glyph@mastodon.social:

                            @raphael Believe me, I understand the appeal of the hit of dopamine to get moving when one is stuck. I really want a tool that can do that for me, but I would like to know what other effects it has, and whether it's going to be a net detriment.

#68 · glyph@mastodon.social:

                              @svines oh yeah you definitely won't be able to copyright anything vibe-coded, the outputs are flatly not copyrightable right now in the US. not clear that will actually make a difference given the work-as-a-whole probably is still pretty defensible for a while, but as a way to start putting more bricks in the wall, it's definitely worth raising concerns

glyph@mastodon.social wrote:

                                2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good

#69 · doragasu@mastodon.sdf.org:

@glyph THIS. This is what confuses me the most: I know software devs who have been very risk-averse all their lives embracing LLM coding tools. It's something I cannot understand.

#70 · laprice@beige.party:

                                  @glyph so, where does AI stand on the inventory of cult-like behavior?

                                  Because what you are describing sounds a lot like a cult.

                                  And if you automate the love bombing and the extraction of secrets and instilling or distilling of mission...

                                  Ah, fuck.

mrberard@mastodon.acm.org wrote:

                                    @kirakira @glyph

                                    That's good, mine is 'epistemic thalidomide'

#71 · davidtheeviloverlord@mastodon.social:

                                    @MrBerard @kirakira @glyph

                                    Stochastic Errorism.

#72 · elseweather@mastodon.social:

@glyph Something that has gotten under my skin for the past year or so is seeing code changes - large refactors, porting a legacy tool to Rust, even minor bugfixes - things that would be a struggle to push through the inertia of code review - get fast-tracked when "the AI did it." These are the exact PRs I've written, tried to advocate for, and eventually gave up on. The changes and their risks are the same; I can only conclude that the bar is lower for accepting "AI" contributions.

#73 · mrberard@mastodon.acm.org:

@miss_rodent @glyph

Agreed. But it's the subtle influence on users' views I'm referring to - which was a social media problem before it was an AI issue.

Sure, we can categorise this as "delusions", but I don't know that bundling everything as 'psychosis' helps the debate, in that it flattens the nuances between subtle and overt cases.

Ultimately, we're trying to apply a medical model designed before mass media, DSM updates notwithstanding. Not surprising it reaches the limits of its utility.

glyph@mastodon.social wrote:

                                          @mcc He thinks the technology is capable of many horrors but it can also be useful for pedestrian things.

#74 · cliftonr@wandering.shop:

                                          @glyph @mcc

                                          What I've observed very recently is that even intelligent people, experienced developers - who know perfectly well that LLMs are just generators of text from statistical models of what someone is likely to write - will still pull up AI written search results and proceed on the automatic assumption that whatever they say is correct.

                                          That is not a general observation. That was this morning, with some senior programmers trying to solve a problem that's prolonging a code freeze.
