the AI alignment problem is entirely a smokescreen designed to distract from the capital class alignment problem

Uncategorized · 37 Posts · 20 Posters

  • mcc@mastodon.social (quoted upthread)

    @glyph Even without the "Clyde" problem it's hard to talk about, because there's a historical notion of a probabilistic algorithm, where you have stochastic behavior operating with proven bounds and a provable distribution of behaviors, and the new type of statistics-based software, where the software just sort of does whatever and we don't even discuss it as if it were statistics-based; we call it "intelligence".
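    A minimal sketch, not from the thread, of the older notion mcc describes: a randomized algorithm is stochastic, but its output distribution is provable from its construction. Here Hoeffding's inequality bounds a Monte Carlo estimator's error; no comparable guarantee is derivable for a large trained model, whose output distribution is an empirical artifact of its training data.

    ```python
    import random

    def estimate_pi(n: int) -> float:
        """Monte Carlo estimate of pi: stochastic, but with provable behavior.

        Each sample is an independent Bernoulli trial with success
        probability pi/4, so by Hoeffding's inequality the estimate is
        within eps of pi with probability >= 1 - 2*exp(-n * eps**2 / 8).
        """
        hits = sum(
            1 for _ in range(n)
            if random.random() ** 2 + random.random() ** 2 <= 1.0
        )
        return 4.0 * hits / n

    print(estimate_pi(100_000))  # ~3.14, with a quantifiable failure probability
    ```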

  • #6 · glyph@mastodon.social

    @mcc no disagreement with any of that, but the “AI alignment problem” is specified by its advocates in terms of “universal human values”. the stipulated “alignment” is not with specific user desires or a stated optimization objective, but with those putative (imagined) values.

  • #7 · glyph@mastodon.social (in reply to #6)

    @mcc the first problem of course is that it ignores society and culture and difference and the entire concept of politics[1], but the second issue that I am highlighting here is that *to the extent* that there are sufficiently popular values that we might call them “universal” and “human”, and *to the extent* that we have an entity that actually poses a threat to those values, it is the capital class.

  • #8 · glyph@mastodon.social (in reply to #7)

    @mcc [1]: inb4 somebody says they actually wrestle with those things at extremely exhaustive length: they mostly try to rationalize those things away, which is not the same process.

  • #9 · aud@fire.asta.lgbt (in reply to the topic)

    @glyph@mastodon.social Agreed!! "AI alignment" exists so they can fire and ignore people who are actually concerned with the ethics of how machine learning is made/deployed/used/etc.

    I wish I had some links saved, but Dr. Timnit Gebru has deeeeefinitely written about this, I'm pretty sure... and I wish it was more widely known.

  • #10 · randomgeek@masto.hackers.town (in reply to #8)

    @glyph @mcc resting safe in the assumption that anyone who claims adherence to universal human values hasn't so much as listened to the Bruces' Philosophers Song, and certainly not followed up on the associated readings.

  • #11 · xgranade@wandering.shop (in reply to the topic)

    @glyph

    ML ethics: here's why including ZIP codes in the data used by a classifier is bad

    AI ethics: what if some cryptogod hundreds of millennia in the future gets their feelings hurt by mean posts and decides to invent hell?
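    The ZIP-code line refers to the classic proxy problem: a feature that is not itself protected can reconstruct a protected attribute. A toy sketch, not from the thread, with invented ZIP codes and numbers:

    ```python
    # Drop the protected attribute from the features, and a residentially
    # segregated ZIP code lets a model recover it anyway.
    CENSUS = {  # hypothetical zip -> group shares
        "60601": {"group_a": 0.9, "group_b": 0.1},
        "60621": {"group_a": 0.1, "group_b": 0.9},
    }

    def inferred_group(zip_code: str) -> str:
        """ZIP alone predicts the 'removed' attribute most of the time."""
        shares = CENSUS[zip_code]
        return max(shares, key=shares.get)

    # A classifier trained on ZIP can therefore learn group-correlated
    # decision rules without ever seeing the group itself.
    print(inferred_group("60621"))  # -> "group_b"
    ```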

  • #12 · 3psboyd@mastodon.social (in reply to #6)

    @glyph @mcc At the far end of this are the rationalists going "Logically we need to feed every poor person into a wood chipper so humanity can get to Mars."

  • deshipu@fosstodon.org (quoted upthread)

    @mcc @glyph I think the biases in a random process (or more generally, the particular distribution) can still align with somebody else's biases and/or expectations. People have this thing where when you say "random", they immediately imagine some kind of fair lottery, with every option equally probable.
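    A two-line sketch, not from the thread, of the distinction deshipu draws: both calls below are "random", but only the first matches the fair-lottery intuition.

    ```python
    import random

    fair   = random.choices(["a", "b", "c"], k=10)                     # uniform: each p = 1/3
    biased = random.choices(["a", "b", "c"], weights=[8, 1, 1], k=10)  # still random, mostly "a"
    ```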

  • #13 · travisfw@fosstodon.org

    @deshipu @mcc @glyph yeah, the flat distribution is commonly what's considered random, but really every distribution is an idealized model, even a biased one. Randomness, as statisticians like to talk about it, does not even exist.

  • #14 · xgranade@wandering.shop (in reply to #11)

    @glyph (I hate how little I had to exaggerate to make that joke.)

  • #15 · glyph@mastodon.social (in reply to #14)

    @xgranade I don't think there's an exaggeration here, just some uncharitable phrasing.

  • #16 · glyph@mastodon.social (in reply to #12)

    @3psboyd @mcc I feel a *little* bad for the lesswrongers generally, because this is really judging the community by its worst and most extreme elements, and here we are on fedi (not a group whose most extreme and unpleasant members I would like to represent me), but that faction is certainly … unduly powerful in society right now.

  • #17 · uint8_t@chaos.social (in reply to the topic)

    @glyph the real misaligned superintelligence were the corporations we met along the way

  • #18 · deshipu@fosstodon.org (in reply to #13)

    @travisfw @mcc @glyph are you saying bayesians are not statisticians?

  • #19 · travisfw@fosstodon.org (in reply to #18)

    @deshipu @mcc @glyph them's fightin' words

  • #20 · whbboyd@infosec.exchange (in reply to mcc@mastodon.social, quoted at the top)

    @mcc @glyph LLMs are an epsilon-approximation to an intelligent autonomous system, where epsilon is equal to infinity.

  • #21 · luis_in_brief@social.coop (in reply to the topic)

    @glyph if we talk enough about paperclip maximizers, we can ignore the profit maximizers behind the curtain

  • mcc@mastodon.social (quoted upthread)

    @glyph I do think there is an interesting perspective where computer software based on deterministic execution of instructions *can* be aligned with the goals of a user, but computer software based on a trained statistical model cannot, technically, be aligned with anything at all, as there is inherently random behavior. But we can't conceptualize that problem, because the capital class is lying and saying that their computer has a soul because they named it "Clyde" and drew googly eyes on it.

  • #22 · stilescrisis@mastodon.gamedev.place

    @mcc @glyph I don't think alignment has anything to do with determinism. People are non-deterministic, but a person can absolutely be ethically aligned (or not).

  • #23 · jmeowmeow@hachyderm.io (in reply to #8)

    @glyph the first thing we'll do, is fire all the (actual) ethicists.

  • #24 · davidgerard@circumstances.run (in reply to #18)

    @deshipu @travisfw @mcc @glyph there's people who apply Bayes' theorem and then there's *Bayesians*

  • #25 · mcc@mastodon.social (in reply to #22)

    @stilescrisis @glyph I think a certain sort of predictability is a prerequisite for alignment. Necessary but not sufficient. Humans are not deterministic, but their behavior can be consistent, because they can act with intent. They can have beliefs and moral codes. They can understand their own incentives and the consequences of their actions. You can do things that cause them to understand the consequences of their actions better.