This is a thread on terms describing various aspects of "AI"

10 Posts 5 Posters 1 Views
  • cellomomoncars@mastodon.social
    #1

    This is a thread on terms describing various aspects of "AI"

    ASBESTOS

    Jonathan Zittrain
    On "AI" in medical innovation.

    “I think of machine learning kind of as asbestos,” said BKC’s Jonathan Zittrain. “It turns out that it’s all over the place, even though at no point did you explicitly install it, and it has possibly some latent bad effects that you might regret later, after it’s already too hard to get it all out.”

    https://cyber.harvard.edu/story/2019-06/what-if-ai-health-care-next-asbestos

    https://www.statnews.com/2019/06/19/what-if-ai-in-health-care-is-next-asbestos/

  • cellomomoncars@mastodon.social
    #2

      LIABILITY

      "Code is a liability, not an asset;
      AI code represents liability production at scale."

      The idea that code is a liability has been around for a long time, but "AI" "coding" supercharges that, writes Cory Doctorow.

      https://pluralistic.net/2026/01/06/1000x-liability/

      " "Writing code" is about making code that runs well. "Software engineering" is about making code that fails well."

  • cellomomoncars@mastodon.social
    #3

        REVERSE CENTAUR

        Cory Doctorow

        "In automation theory jargon, [an AI assisted] radiologist is a "centaur" – a human head grafted onto the tireless, ever-vigilant body of a robot.

        No one who invests in AI expects this to happen. Instead, they want reverse-centaurs: a human who acts as an assistant to a robot.

        That human is there
        – to be blamed for errors.
        – to be a "moral crumple zone".
        – to be an "accountability sink"
        But they're not there to be radiologists.

        https://pluralistic.net/2025/03/18/asbestos-in-the-walls/

  • cellomomoncars@mastodon.social
    #4

          DIGITAL KESSLER SYNDROME

          Anton Danholt Lautrup

          "If we cannot reliably distinguish between synthetic and genuine data, we risk contaminating and diluting decades' worth of data collection."

          https://www.sdu.dk/en/forskning/c-ai-ethics/news-and-events/event-digital-kessler-syndrome.

          AI produces slop more often than it should. If that slop is ingested in subsequent training runs, the output becomes sloppier and sloppier, and good luck unscrambling that egg.

  • n_dimension@infosec.exchange
    #5

    @CelloMomOnCars

    That is entirely the wrong-headed (giggity) approach, IMHO.

    A big part of the man(person)-machine interface is that control and responsibility remain in human hands.

    Not so long ago, the few of us geeks who foresaw where machine brains would take us campaigned in #stopkillerrobots, a campaign to keep human decision making in the military #killchain.
    A campaign that failed spectacularly, in no small part, I am sure, due to uninformed Doctorow analogues dismissing it as unnecessary, farcical puppetry.

    Even now, I actively strive to #regulateAI IRL; human decision making in AI is essential and imperative.
    The "reverse centaur" is a canard, just as the driver of a motorcar is not pulling the cargo by their own muscle.

    AI is not going away, for the same reason we don't see picks and shovels (!) digging infrastructure trenches anymore. Machines have been eating jobs since the 1700s, and it's only scary now because the white collars are on the chopping block.

    I have huge respect for @pluralistic and his role, which he fulfills admirably, as an activist, what we call in Australia a shit-stirrer. His opinions stimulate debate, but keeping an expert in the decision chain, even if it's only a tick box, is a good thing.

    Call it a "moral crumple zone" if you will.
    Removing it altogether is bad, and I am disturbed that anyone would try to make hay of this.
    The alternative is full automation, and I am sure all the #AI "fans" would agree that's a bad thing.

  • peterbroks@mastodon.social
    #6

    @CelloMomOnCars Eating its own slop is exactly what happened when rendered cows were added to cattle feed. Result: Mad Cow Disease.

    I expect we will have to face up to Mad Computer Disease in the future.

  • cellomomoncars@mastodon.social
    #7

                RE: https://infosec.exchange/@realn2s/115886782776932658

                RADIUM FAD

                Via @realn2s :
                https://mastodon.social/@realn2s@infosec.exchange/115886782805958819

  • feoh@oldbytes.space
    #8

                  @CelloMomOnCars This may well be one of the best essays you've ever written.

                  Not perhaps in the absolute sense, but in the sense that never have you crystallized the existential pain of a moment more expertly and eloquently.

    This essay draws the forensic diagrams for the death of the craft of software engineering in mainstream commercial environments: the entry wound, the exit wound, and where the bullet wound up at the crime scene.

                  Thank you.

  • paco@infosec.exchange
    #9

                    @n_dimension Can you say more about this: ‘keeping an expert in the decision chain, if it's only a tick box is a good thing.’

                    Good for whom? Good how?

                    @CelloMomOnCars

  • n_dimension@infosec.exchange
    #10

                      @paco @CelloMomOnCars

    Good for the company running the AI: risk management. If I remember correctly, there have been case law precedents set in the US that 'AI is not responsible for damages'.

    Good for the invader bastards in Ukraine who, at the last moment before a drone turns them into a pile of steaming meat, make a gesture of surrender so the operator yanks back the kill mode.
    Not a matter of idle speculation: AI killer bots are hunting people in Ukraine.
