I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.

190 Posts 72 Posters 243 Views
• glyph@mastodon.social

    For me, this is the body horror money quote from that Scientific American article:

    "participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"

    So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.

    If you can see it, the basilisk has already won.

mmu_man@m.g3l.org #97

    @glyph don't look at it!

Medusa - Wikipedia (en.wikipedia.org)

    Or even better, the Doctor Who version:

Weeping Angel - Wikipedia (en.wikipedia.org)

• glyph@mastodon.social

      RE: https://mamot.fr/@pluralistic/116219642373307943

I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me. It's not _wrong_, exactly, but radium paint was also a "normal technology" according to this rubric, and I still very much don't want to get any on me, and especially not in my mouth.

sabrina@fedi01.unicornsparkle.club #98

      @glyph Why doesn’t he just use the word Luddite? Maybe because the Luddites were right and that would undermine his argument?

Phie Lux (@sabrina@fedi01.unicornsparkle.club):

      Imagine if, at the start of the Industrial Revolution, we as a species had paused and asked ourselves what the ethical implications are and what the possible and present harms could be. Maybe we could have avoided the worst excesses of modern society like pollution, increasing inequality, overconsumption, climate change, fascism, and social atomization. If we are truly at the start of another such technological revolution, maybe we should learn from history and not dive head first into it. Especially when we know a lot of the ethical issues and real harms already. It seems plainly foolish to look at the harm we’ve done to ourselves with the last technological revolution and decide to just double down on it.


• glyph@mastodon.social

        The very fact that things like OpenClaw and Moltbook even *exist* is an indication, to me, that people are *not* making sober, considered judgements about how and where to use LLMs. The fact that they are popular at *all*, let alone popular enough to be featured in mainstream media shows that whatever this cognitive distortion is, it's widespread.

gittaca@chaos.social #99

@glyph The "distortion" is from COVID: https://www.panaccindex.info/p/answered-does-covid-19-harm-the-brain

A facsimile/helper for _thinking_ seems pretty interesting if one suffers from brain fog, cognitive decline, neuro-inflammation, etc.

• dpnash@c.im

          @glyph

          Two statements I believe are consistently correct:

          (1) Generative “AI” produces code significantly faster than humans do only when nobody takes sufficient time to understand it (not just in a narrow syntactic sense; also in the context of organizational needs, longer-term plans, interaction with other applications, etc.)

          (2) Code nobody understands well is “technical debt” *by definition*, because it takes a disproportionate amount of time and brain power to change or improve.

          Conclusion: unless software developers are incredibly disciplined, and have a level of time and autonomy they generally do not have in a major tech company, generative “AI” usage will *consistently* create large amounts of “tech debt”.

ohir@social.vivaldi.net #100

          @dpnash @glyph
          > “AI” usage will *consistently* create large amounts of “tech debt”

          Um, no. There will be no technical debt in such products. Maintenance is too costly and the shop owners would be tied to some protein techie. They will soon pivot to #disposable #software

If some user files a bug, the whole thing will be generated anew with its prompt amended like "; make bug-description disappear". Possibly with a new UI/UX. For the better, because users will be trained not to report bugs but to make workarounds, as a bug report might make the protein serfs endure a UX change...
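
(As a sketch, the whole "disposable software" loop described above fits in a few lines of Python. Everything here is hypothetical: regenerate_app() stands in for whatever LLM code-generation pipeline such a shop would run.)

def regenerate_app(prompt: str) -> str:
    """Hypothetical: hand the full prompt to a generator, get a whole app back."""
    raise NotImplementedError("wire up a code generator of your choice")

class DisposableApp:
    """The 'disposable software' loop: no maintenance, no diffs, no protein techie."""

    def __init__(self, prompt: str):
        self.prompt = prompt
        self.source = regenerate_app(self.prompt)

    def handle_bug_report(self, bug_description: str) -> None:
        # Amend the prompt and regenerate the whole product from scratch.
        # UI/UX stability between regenerations is not guaranteed.
        self.prompt += f"; make {bug_description} disappear"
        self.source = regenerate_app(self.prompt)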

• glyph@mastodon.social

Furthermore, it is not "nuts" to dismiss the experience of an LLM user. In fact, you must dismiss all experiences of LLM users, even if the LLM user is yourself. Fly by instruments, because the cognitive fog is too thick for your eyes to see.

Because the interesting, novel thing about LLMs, the thing that makes them dangerous, is that they are, by design, epistemic disruptors.

            They can produce symboloids more rapidly than any thinking mind. Repetition influences cognition.

lritter@mastodon.gamedev.place #101

            @glyph it is nuts to dismiss the experience of a paint huffer

• glyph@mastodon.social

2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good.

tasket@infosec.exchange #102

              @glyph

              many high profile people in tech, who I have respect for, take absolutely unhinged risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology

              Maybe they should have.

I also hate the LLM force-feeding, but even before they surged the state of computing was becoming a smoldering wreck. Maybe those "leaders" just had bad judgment all along? IIRC most of them were either rubber-stamping or looking away from the IoT dumpster fire, and organizing their curricula around the idea that users can't handle URLs responsibly.

• elseweather@mastodon.social

@glyph Something that has gotten under my skin for the past year or so is seeing code changes like large refactors, porting a legacy tool to Rust, even minor bugfixes - things that would be a struggle to push through the inertia of code review - get fast-tracked when "the AI did it." These are the exact PRs I've written and advocated for before, and eventually gave up on. The changes and their risks are the same; I can only conclude that the bar is lower for accepting "AI" contributions.

oschonrock@mastodon.social #103

                @elseweather @glyph

                The risks are not the same.

                The risks for AI PRs are higher.

• cliftonr@wandering.shop

                  @glyph @mcc

What I've observed very recently is that even intelligent people, experienced developers - who know perfectly well that LLMs are just generators of text from statistical models of what someone is likely to write - will still pull up AI-written search results and proceed on the automatic assumption that whatever they say is correct.

                  That is not a general observation. That was this morning, with some senior programmers trying to solve a problem that's prolonging a code freeze.

paparouleur@mastodon.social #104

                  @CliftonR @glyph @mcc it feels like two decades of “I’ll just google this” has conditioned people to trust whatever gets displayed right next to their search terms. The act of inspecting indexed materials is more vital than ever and fewer and fewer people do it.

• glyph@mastodon.social

                    I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.

di4na@hachyderm.io #105

                    @glyph you know what that reminds me of?

                    Bloodletting and handwashing

• kirakira@furry.engineer

                      @glyph i've used the term "neural asbestos" before and it feels a lot like that may be the type of thing we're dealing with

kimcrawley@zeroes.ca #106

And yet Doctorow thinks LLMs are great for him to use for copyediting. Maybe find a less hypocritical person to quote. All Gen AI horrifies me; I visualize environmental destruction with every "prompt."

                      @kirakira @glyph
                      https://floss.social/@sstendahl/116220713455956161

• di4na@hachyderm.io

                        @glyph you know what that reminds me of?

                        Bloodletting and handwashing

mason@partychickens.net #107

                        @Di4na @glyph Why handwashing, out of curiosity?

• mrberard@mastodon.acm.org

                          @kirakira @glyph

                          That's good, mine is 'epistemic thalidomide'

baralheia@dragonchat.org #108

                          @MrBerard @kirakira @glyph Nice. I'm digging the vibe of "mental revigator" myself

• froztbyte@mastodon.social

                            @glyph Similarly, “hallucination” and “delusion” are pre-poisoned for use in this scope

                            I have on occasion made use of “phantasmagoria” around parts of this dynamic, especially for stuff like the droll “omg the AI is learning to lie to us, we’re cooked!” type bullshit posts, but that’s still not expansive enough to include the various other mental affectations

                            we need other perorations, and better perseverations alongside

joxn@wandering.shop #109

                            @froztbyte @glyph maybe “AI mediated cognitive change”, subtypes “AI mediated cognitive enhancement”, “AI mediated cognitive decline”, and “AI mediated cognitive distortion”?

• janeishly@beige.party

@glyph This basilisk thing (great imagery) is very true in translation. Once you've seen the MT suggestion, with its wonky syntax and not-quite-right tone, it's very hard to dismiss it. The cognitive load is consequently enormous.

mmby@mastodon.social #110

                              @janeishly @glyph it is also very present in art: e.g. once you've seen a partial draft for something (generated), your idea is no longer yours - you're primed by a foreign version of your creation.

                              like watching a movie before reading the book it was based on.

• glyph@mastodon.social

                                For me, this is the body horror money quote from that Scientific American article:

                                "participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"

                                So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.

                                If you can see it, the basilisk has already won.

gary_alderson@infosec.exchange #111

                                @glyph i like to let them sort it out - ask the same question to like 3 models, sort of crude arbitrage
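
(A minimal sketch of that crude arbitrage, assuming a hypothetical query_model() helper that stands in for whichever clients you actually use; the crudest version just checks whether the normalized answers agree.)

from collections import Counter

def query_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in; replace with a real API or local-model call."""
    raise NotImplementedError(f"wire up a client for {model}")

def crude_arbitrage(prompt: str, models: list[str]) -> tuple[str, float]:
    """Ask every model the same question; return the most common answer
    and the fraction of models that gave it."""
    # Exact string match after normalization is the crudest possible
    # comparison; anything below unanimous agreement is a cue to go
    # check a primary source yourself.
    answers = [query_model(m, prompt).strip().lower() for m in models]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / len(models)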

• glyph@mastodon.social

                                  1. YES THEY ARE.

                                  They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet, we turn around, and there all the bugs are.

                                  With LLMs, we can look at the mission-critical AWS modules and ask after the fact, were they vibe-coded? AWS says yes https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/

pythonbynight@hachyderm.io #112

@glyph While this is purely anecdotal, it's darkly comical that just yesterday, at work, a "chief architect" described their Claude Code setup as ... "giving a monkey a machine gun" ... with no irony or shame.

                                  His point was very clearly that he wasn't sure he could trust his setup, but it was still certainly worth it for the perceived gains.

                                  While I've not made many arguments pro/against LLM usage in general (based on how useful they are or aren't), this admission seemed really odd to me.

                                  We're being asked to implement these tools in our workflows, but we're not given guidance on how to do so safely.

                                  And I'm not against experimentation and learning new things--but I think that has its place within a certain context.

                                  You want to give a monkey a machine gun? Well, find someplace safe to do so, and hope nobody gets hurt... but, like, why should I do the same thing?

• miss_rodent@girlcock.club

                                    @MrBerard @glyph Psychosis is a broad range? It covers a range of severities - most days, I read to those who don't know me as "kinda weird", most don't think "schizo" - but on my worse days, I definitely read as psychotic.
But - from *my* side of that, the difference is not 'psychotic' or 'not psychotic', it's just a question of how high the volume & intensity is set. The voices haven't *stopped* - ever - since I was 13, for example.

mrberard@mastodon.acm.org #113

                                    @miss_rodent @glyph

                                    That's an interesting example, because my understanding is that hearing voices is more common than people think, and often not accompanied by the symptom cluster that would lead to a psychosis diagnosis.

                                    I think the problem is the underlying model for diagnostic criteria, which was already defective IMO even before AI complicated the picture.

                                    Lexically, a single term blurs the nuances. For a broader, umbrella term, 'AI brainrot' seems more appropriate IMO.

• miss_rodent@girlcock.club

@MrBerard @glyph my point being: a lot of the more minor oddities - changes to speech and writing patterns, being swayed more easily by nonsense, groundless beliefs defended disproportionately strongly in a manner resembling delusions being challenged, the cognitive backflips involved in preserving those beliefs against mounting contrary evidence, etc. -
all read as potentially 'psychotic' to me, even in the tame case of "It's bad except this one little niche exception that I'll defend fiercely!"

mrberard@mastodon.acm.org #114

                                      @miss_rodent @glyph

                                      Again, I am not disagreeing with this point, just with the practical utility of choosing to use the term based on it.

• glyph@mastodon.social

                                        Could be sample bias, of course. I only loosely follow the science, and my audience obviously leans heavily skeptical at this point. I wouldn't pretend to *know* that the most dire predictions will come true. I'd much, much rather be conclusively proven wrong about this.

                                        But I'm still waiting.

nielsa@mas.to #115

                                        @glyph Very good analysis, thank you, I'll be passing this around 😁

• happyborg@fosstodon.org

                                          @alys FYI the first health concerns with asbestos were being raised in 1907 and yet it was still legal to use it in UK buildings in, wait for it... 1999.

                                          So the lesson with #LLMs is...?

nielsa@mas.to #116

                                          @happyborg oh no
