the AI alignment problem is entirely a smokescreen designed to distract from the capital class alignment problem
-
@glyph I do think there is an interesting perspective where computer software based on deterministic execution of instructions *can* be aligned with the goals of a user but computer software based on a trained statistical model cannot, technically, be aligned with anything at all as there is inherently random behavior. But we can't conceptualize that problem because the capital class is lying and saying that their computer has a soul because they named it "Clyde" and drew googly eyes on it
@mcc @glyph I think the biases in a random process (or more generally, the particular distribution) can still align with somebody else's biases and/or expectations. People have this thing where when you say "random", they immediately imagine some kind of fair lottery, with every option equally probable.
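[Editor's aside: the point above, that a process can be genuinely random yet systematically biased, can be sketched in a few lines of Python. Everything here is a hypothetical illustration; the function name and the 9-to-1 odds are invented for the example.]

```python
import random

# A "random" decision procedure that is anything but a fair lottery.
# Both outcomes are possible on any given call, so each individual
# decision looks nondeterministic, yet the distribution is heavily skewed.
def biased_decision(rng: random.Random) -> str:
    # 9-to-1 odds of "approve" over "deny": a particular distribution,
    # not an equal-probability draw.
    return "approve" if rng.random() < 0.9 else "deny"

rng = random.Random(42)  # seeded so the sketch is reproducible
outcomes = [biased_decision(rng) for _ in range(10_000)]
approve_rate = outcomes.count("approve") / len(outcomes)
# approve_rate lands near 0.9, far from the 0.5 a "fair lottery" intuition expects
```

The biases of whoever chose that 0.9 are baked into every "random" output, which is the alignment-with-somebody's-expectations point made above.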
-
@glyph Even without the "Clyde" problem it's hard to talk about, because there's the historical notion of a probabilistic algorithm, where you have stochastic behavior operating within proven bounds and a provable distribution of behaviors, and then the new type of statistics-based software, where the software just sort of does whatever and we don't even discuss it as if it were statistics-based; we call it "intelligence"
@mcc no disagreement with any of that, but the “AI alignment problem” is specified by its advocates in terms of “universal human values”. the stipulated “alignment” is not with specific user desires or a stated optimization objective but with those putative (imagined) values
-
@mcc the first problem of course is that it ignores society and culture and difference and the entire concept of politics[1], but the second issue that I am highlighting here is that *to the extent* that there are sufficiently popular values that we might call them “universal” and “human”, and *to the extent* that there is an entity that actually poses a threat to those values, that entity is the capital class.
-
@mcc [1]: inb4 somebody says they actually wrestle with those things at extremely exhaustive length: they mostly try to rationalize those things away, which is not the same process
-
@glyph@mastodon.social Agreed!! "AI alignment" exists so they can fire and ignore people who are actually concerned with the ethics of how machine learning is made/deployed/used/etc
I wish I had some links saved but Dr. Timnit Gebru has deeeeefinitely written about this, I'm pretty sure... and I wish it was more widely known.
-
ML ethics: here's why including ZIP codes in the data used by a classifier is bad
AI ethics: what if some cryptogod hundreds of millennia in the future gets their feelings hurt by mean posts and decides to invent hell?
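[Editor's aside: the ZIP-code jab above refers to a real phenomenon, proxy discrimination. A minimal synthetic sketch, with all names, ZIP codes, and probabilities invented for illustration: a model that never sees a protected attribute can still reproduce disparities through a correlated feature like ZIP code.]

```python
import random

rng = random.Random(0)

# Synthetic population with segregated housing: group A mostly lives in
# ZIP 10001, group B mostly in ZIP 60601. The protected attribute `group`
# is never given to the classifier.
rows = []
for _ in range(10_000):
    group = rng.choice(["A", "B"])
    zip_code = (
        ("10001" if rng.random() < 0.9 else "60601")
        if group == "A"
        else ("60601" if rng.random() < 0.9 else "10001")
    )
    rows.append((group, zip_code))

# A "classifier" that only looks at zip_code:
def approve(zip_code: str) -> bool:
    return zip_code == "10001"

rate_a = sum(approve(z) for g, z in rows if g == "A") / sum(1 for g, _ in rows if g == "A")
rate_b = sum(approve(z) for g, z in rows if g == "B") / sum(1 for g, _ in rows if g == "B")
# rate_a comes out near 0.9 and rate_b near 0.1: disparate outcomes
# driven entirely by a feature that "isn't" the protected attribute
```

This is why ML-ethics practice treats ZIP codes (and similar proxies) as suspect features, not neutral data.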
-
@glyph (I hate how little I had to exaggerate to make that joke.)
-
@xgranade I don't think there's an exaggeration here, just some uncharitable phrasing
-
@3psboyd @mcc I feel a *little* bad for the lesswrongers generally because this is really judging the community by its worst and most extreme elements, and here we are on fedi (not a group whose most extreme and unpleasant members I would like to represent me) but that faction is certainly … unduly powerful in society right now
-
@glyph the real misaligned superintelligence were the corporations we met along the way
-
@glyph if we talk enough about paperclip maximizers, we can ignore the profit maximizers behind the curtain
-
@glyph the first thing we'll do is fire all the (actual) ethicists.
-