Usually, when I get interviewed for a piece on something like "AI consciousness" I am relegated to the skeptics box --- some short paragraph near the end. So it is a nice change to see this piece by Holly Baxter
The people building AI think it might be conscious. That’s not the most alarming part
Anthropic’s CEO Dario Amodei says he can’t rule out that its chatbot, Claude, is conscious. A Google engineer is sure he once built a sentient being. Holly Baxter speaks to the experts about whether or not ‘AI welfare’ is a serious pursuit — and what that means for humans
The Independent (www.the-independent.com)
🧵>>
@emilymbender It is *fascinating* how you appear in AI-related media. Smart reporters and tech people know they have to mention you, but they can't engage with your arguments without turning off the hype machine. Thanks for sharing this.
-
@spdrnl No, I was writing a thread about it, as indicated, inter alia, with
🧵>>
I was also talking about an article *I was interviewed in*, as per the top post in my thread.
The post contained more than just the link. Did you only read the link?
@emilymbender Ah, that thread was not visible to me. On my account it just showed that post.
I took the trouble to click through via your profile, and then I could see the thread.
-
I have been sharing the Magic 8 Ball analogy for a while now, but I think this is maybe the first time it's made it to print:
>>
@emilymbender I did not know what a Magic 8 Ball is, so I looked it up: https://en.wikipedia.org/wiki/Magic_8_Ball
-
@spdrnl Good ol' Mastodon. First reply is of course some more mansplaining.
Don't ever mansplain to an internationally known subject-matter expert whose consulting fee schedule for tech bros starts at $2,000 per hour.
-
@dngrs The transformer architecture produced improvements in MT, but I think the best results come from training systems specifically for MT, rather than asking the allegedly "general purpose" (they're not) models to do it.
In a similar vein, what makes people expect that MT between two languages that don't have much useful translated corpus between them should be any good? I mean, what's the conceptual grounding for such beliefs about how language is supposed to work?
-
> a message specifically included for tech bros with startups who want to download all her knowledge about LLMs: “My consulting fee is $2,000/hour. I do not ‘grab coffee’ or ‘jump on the phone’.”
Nice, how many of them took you up on that?
-
@emilymbender When I explain my qualms about GenAI chatbots to others, I usually refer to Clever Hans as a historic example of a situation where an observer falsely attributes "intelligence" to a non-intelligent process.
-
@emilymbender Oh, TIL that there is an AI-related use of the term Clever Hans effect, unrelated to what I meant here. My reason to refer to Clever Hans is how the intelligence (or consciousness?) attributed to the chatbot isn't in the chatbot, but only in the mind of the observer.
-
@emilymbender You mention a $2,000/hr consulting fee. Are you also getting a flood of prospective students you have to turn away?
-
When I read that headline, it gave me the impression that "AI" was going to be declared as more than conscious in some way. I suppose that's just "how you write a headline".
I was pleasantly surprised at how sober Holly Baxter's take on "AI" was. She does not blindly buy into the hype, and she hasn't fallen down the rabbit hole of installing Claude and getting bamboozled by its magical cold reading skills.
I was further surprised to see just how much space was given over to your interview.
Thank you for even taking the time to continue talking to reporters when, as you said, you are often a checkbox just so they can say they did a "both sides".
-
@emilymbender This company is selling a magic 8-ball as "Offline ChatGPT":
CHATGPT MAGIC-8 BALL
After much research and development I have finally made an offline version of ChatGPT. Now you can save water and electricity while carrying one of the world's most powerfully annoying AI chatbots in your pocket. Have every whim affirmed with up to 20 of the most popular ChatGPT responses. Smooth your brain into a frictionless hypermind capable of instant regurgitation via a corporate flattery and theft engine. They're 40 quid, and you can order one here. I have a limited pre-Xmas supply with mo
SPELLING MISTAKES COST LIVES (www.spellingmistakescostlives.com)
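The joke lands because the mechanism really is that simple: pick one of a fixed set of canned replies, ignoring the question entirely. A minimal sketch of such a toy (the reply list here is made up for illustration, not the product's actual 20 responses):

```python
import random

# Illustrative canned replies, in the style of stock chatbot output.
CANNED_REPLIES = [
    "Great question! Here's a balanced overview.",
    "I'm sorry, but I can't help with that.",
    "As a language model, I don't have personal opinions.",
    "Certainly! Let me break that down for you.",
]

def offline_chatgpt(question: str) -> str:
    # The question is never inspected; any apparent relevance of the
    # answer is supplied entirely by the person shaking the ball.
    return random.choice(CANNED_REPLIES)
```

Which is the Magic 8 Ball analogy in miniature: the "intelligence" lives in the reader's interpretation, not in the device.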
-
@emilymbender If you ever wanted to know how religion got started in human civilization, here it is, playing out in real time. Make it spooky, make it hype.
-
My only quibble is that I am (again) paraphrased as if I talked about "AI" as a thing, or used "AI" to refer to language models. I'm sure what I said to Holly Baxter here was that "language models" have these uses. I've asked for a correction.
In general, if you see me quoted/paraphrased in the media and the term "AI" is outside the quotes, that's gonna be a journalist mis-paraphrasing me.
/fin
I am happy to say my request for a correction was honored.

-
@emilymbender Check. Watched Prof. Michael Wooldridge's Royal Society Lecture this AM.
-
@emilymbender Now we just have to make everyone watch the 1980s Twilight Zone episode "Wordplay", in which "dinner" (oops, Scotticism, I mean "lunch" everywhere else but .scot) slowly mutates into "dinosaur". The protagonist is trapped in an existential nightmare not unlike Philip K. Dick's "Ubik".