the AI alignment problem is entirely a smokescreen designed to distract from the capital class alignment problem
-
@flipper @davidgerard @deshipu @travisfw @glyph i (a frequentist) once dated a Bayesian for a while. Nothing was learned from this experience which applies to other situations
-
@3psboyd @mcc I feel a *little* bad for the lesswrongers generally because this is really judging the community by its worst and most extreme elements, and here we are on fedi (not a group whose most extreme and unpleasant members I would like to represent me) but that faction is certainly … unduly powerful in society right now
-
@jaystephens @3psboyd @mcc if they were at least real Benthamites they’d get out the felicific calculus and do the damn arithmetic and not just slosh around a bunch of half-assed Fermi estimates with orders of magnitude instead of numbers
-
@jaystephens @3psboyd @mcc consider this my “born in the dark” Bane speech
-
@glyph @jaystephens @3psboyd @mcc
I know what “felicific calculus” refers to, but every time I see that phrase, I’m annoyed that it refers to generic happiness and not to the number of cats people have (or that they would like to have).
-
@xgranade I don't think there's an exaggeration here, just some uncharitable phrasing
-
ML ethics: here's why including ZIP codes in the data used by a classifier is bad
AI ethics: what if some cryptogod hundreds of millennia in the future gets their feelings hurt by mean posts and decides to invent hell?
@glyph@mastodon.social @xgranade@wandering.shop Eliezer Yudkowsky and his consequences have been a disaster for the human race
-
@glyph (I hate how little I had to exaggerate to make that joke.)