Usually, when I get interviewed for a piece on something like "AI consciousness" I am relegated to the skeptics box --- some short paragraph near the end. So it is a nice change to see this piece by Holly Baxter:
The people building AI think it might be conscious. That’s not the most alarming part
Anthropic’s CEO Dario Amodei says he can’t rule out that its chatbot, Claude, is conscious. A Google engineer is sure he once built a sentient being. Holly Baxter speaks to the experts about whether or not ‘AI welfare’ is a serious pursuit — and what that means for humans
The Independent (www.the-independent.com)
🧵>>
-
I have been sharing the Magic 8 Ball analogy for a while now, but I think this is maybe the first time it's made it to print:
>>
-
"Technologies of isolation" is due to @hypervisible , but even if I'm careful to tell journalists that, they don't necessarily include the attribution.
>>
-
"Technologies of isolation" is due to @hypervisible , but even if I'm careful to tell journalists that, they don't necessarily include the attribution.
>>I also appreciated O'Niell's comments quoted in the article:
>>

-
My only quibble is that I am (again) paraphrased as if I talked about "AI" as a thing, or used "AI" to refer to language models. I'm sure what I said to Holly Baxter here was that "language models" have these uses. I've asked for a correction.
In general, if you see me quoted/paraphrased in the media and the term "AI" is outside the quotes, that's gonna be a journalist mis-paraphrasing me.
/fin

-
@emilymbender Nicely put. Thank you for standing up for sanity.
-
@emilymbender The real issue here might be that machine learning models pursue a single objective, no matter what.
So the next step is to say that machine learning models are superior because of that single objective.
Being human means not having a single objective. These people are rich and powerful enough to redeclare utilitarianism.
It's all a bit narrow-minded; one long impaired intellectual gooning session.
-
@spdrnl Good ol' Mastodon. First reply is of course some more mansplaining.
-
@emilymbender Oh, that was not my intention.
I was reacting to the caption under the photo; I highly distrust the AI crowd. My post was meant as an inside take, what I think is behind these projections.
My statement was intended to sympathize with your many good insights; I really admire your take on things.
I can remove the post.
-
@emilymbender this is something I've been curious about, if you don't mind the question: do LLMs in particular actually improve upon machine translation? I theorized they would perform worse than more bespoke approaches
Kraftwerk-Das Model Collapse (@dngrs@chaos.social)
@hongminhee@hollo.social is there evidence that LLMs are superior to special purpose machine translation models? In my subjective experience the quality of google translate has gone down recently (but I don't know what tech they are using behind the scenes - I think it's likely they shifted to LLM translation but cannot prove it); apart from that I suspect that since LLM training data is largely untagged for translation this would degrade quality vs. purpose built models.
-
@spdrnl My advice: if you want to do something like that, make it clear in your post who you are addressing your comments to.
You started by clicking "reply" to me, so the default interpretation is that you're replying to me.
Another option is to quote post instead. Or post your own link to the article.
-
@dngrs The transformer architecture produced improvements in MT, but I think the best results come from training systems specifically for MT, rather than asking the allegedly "general purpose" (they're not) models to do it.
-
@emilymbender Noted.
-
@spdrnl p.s. Starting with "The real issue here...." suggests that you think that what I wrote was not the real issue, or somehow beside the point.
-
@emilymbender I really thought you were just pointing to an article by Holly Baxter. These short written messages are not always easy to assess.
-
@spdrnl No, I was writing a thread about it, as indicated, inter alia, with
🧵>>
I also was talking about an article *I was interviewed in*, as per the top post in my thread.
The post contained more than just the link. Did you only read the link?
-
@emilymbender It is *fascinating* how you appear in AI-related media. Smart reporters and tech people know they have to mention you, but they can't engage with your arguments without turning off the hype machine. Thanks for sharing this.
-
@emilymbender Ah, that thread was not visible to me. On my account it just showed that one post.
I made the effort to click through via your profile, and then I could see the thread.
-
@emilymbender I did not know what a Magic 8 Ball is, so I looked it up: https://en.wikipedia.org/wiki/Magic_8_Ball
-
Don't ever mansplain to an internationally known subject-matter expert whose consulting fee schedule for tech bros starts at $2,000 per hour.