the AI alignment problem is entirely a smokescreen designed to distract from the capital class alignment problem
-
@glyph the real misaligned superintelligence were the corporations we met along the way
-
@glyph Even without the "Clyde" problem it's hard to talk about, because there's the historical notion of a probabilistic algorithm, where you have stochastic behavior operating within proven bounds and a provable distribution of behaviors, and then the new type of statistics-based software, where the software just sort of does whatever and we don't even discuss it as if it were statistics-based: we call it "intelligence"
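(The "historical notion" being contrasted here is something like a randomized primality test: the behavior is stochastic, but the error probability has a proven bound. A minimal sketch, using Miller-Rabin as an illustrative example not mentioned in the thread:)

```python
import random

def is_probably_prime(n, k=20):
    """Miller-Rabin primality test: stochastic behavior, but with a
    *proven* bound -- the chance of wrongly reporting "prime" for a
    composite n is at most 4**-k."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # found a witness: n is definitely composite
    return True  # probably prime, error probability <= 4**-k
```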
-
@glyph if we talk enough about paperclip maximizers, we can ignore the profit maximizers behind the curtain
-
-
@glyph I do think there is an interesting perspective where computer software based on deterministic execution of instructions *can* be aligned with the goals of a user, but computer software based on a trained statistical model cannot, technically, be aligned with anything at all, as there is inherently random behavior. But we can't conceptualize that problem, because the capital class is lying and saying that their computer has a soul because they named it "Clyde" and drew googly eyes on it
-
@mcc [1]: inb4 somebody says they actually wrestle with those things at extremely exhaustive length: they mostly try to rationalize those things away, which is not the same process
@glyph the first thing we'll do, is fire all the (actual) ethicists.
-
-
@stilescrisis @glyph I think a certain sort of predictability is a prerequisite for alignment. Necessary but not sufficient. Humans are not deterministic but their behavior can be consistent, because they can act with intent. They can have beliefs and moral codes. They can understand their own incentives and the consequences of their actions. You can do things that cause them to understand the consequences of their actions better.
-
@stilescrisis @glyph "Models are non-deterministic at the token level but pretty darn consistent at the macro level"
At recreating the structural properties of language, yeah, because that's what the algorithm's for. But the product is not sold as a "structural properties of text simulator". It is sold as an engine for producing meaning. And when it comes to meaning the tokens matter very much, very very much
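(The token-level non-determinism under discussion comes from sampling: instead of always taking the highest-scoring token, the model draws from a temperature-scaled softmax distribution. A toy sketch with hypothetical logits, no real model assumed:)

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample one token index from a temperature-scaled softmax.
    As temperature -> 0 this approaches deterministic argmax;
    higher temperatures flatten the distribution, so the same
    input can yield different tokens on different runs."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                         # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()                     # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```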
-
@flipper @davidgerard @deshipu @travisfw @glyph i (a frequentist) once dated a Bayesian for a while. Nothing was learned from this experience which applies to other situations
-
@3psboyd @mcc I feel a *little* bad for the lesswrongers generally because this is really judging the community by its worst and most extreme elements, and here we are on fedi (not a group whose most extreme and unpleasant members I would like to represent me) but that faction is certainly … unduly powerful in society right now
-
@jaystephens @3psboyd @mcc if they were at least real Benthamites they’d get out the felicific calculus and do the damn arithmetic and not just slosh around a bunch of half-assed Fermi estimates with orders of magnitude instead of numbers
-
@jaystephens @3psboyd @mcc consider this my “born in the dark” Bane speech
-
@glyph @jaystephens @3psboyd @mcc
I know what “felicific calculus” refers to, but every time I see that phrase, I’m annoyed that it refers to generic happiness and not to the number of cats people have (or that they would like to have).
-
@xgranade I don't think there's an exaggeration here, just some uncharitable phrasing
-
ML ethics: here's why including ZIP codes in the data used by a classifier is bad
AI ethics: what if some cryptogod hundreds of millennia in the future gets their feelings hurt by mean posts and decides to invent hell?
@glyph@mastodon.social @xgranade@wandering.shop Eliezer Yudkowsky and his consequences have been a disaster for the human race