  3. Usually, when I get interviewed for a piece on something like "AI consciousness" I am relegated to the skeptics box --- some short paragraph near the end.

Usually, when I get interviewed for a piece on something like "AI consciousness" I am relegated to the skeptics box --- some short paragraph near the end.

Uncategorized
31 Posts 15 Posters 6 Views
  • spdrnl@sigmoid.social

    @emilymbender The real issue here might be that machine learning models pursue a single objective, no matter what.

    So the next step is to say that machine learning models are superior because of that single objective.

    Being human means not having a single objective. These people are rich and powerful enough to redeclare utilitarianism.

    It's all a bit narrow-minded; one long impaired intellectual gooning session.

    emilymbender@dair-community.social
    #8

    @spdrnl Good ol' Mastodon. First reply is of course some more mansplaining.
    • spdrnl@sigmoid.social
      #9

      @emilymbender Oh, that was not my intention.

      I was reacting to the caption under the photo; I highly distrust the AI crowd. My post was meant as an inside take on what I think is behind these projections.

      My statement was intended to sympathize with your many good insights; I really admire your take on things.

      I can remove the post.
      • emilymbender@dair-community.social

        My only quibble is that I am (again) paraphrased as if I talked about "AI" as a thing, or used "AI" to refer to language models. I'm sure what I said to Holly Baxter here was "language models" have these uses. I've asked for a correction.

        In general, if you see me quoted/paraphrased in the media and the term "AI" is outside the quotes, that's gonna be a journalist mis-paraphrasing me.

        /fin

        dngrs@chaos.social
        #10

        @emilymbender this is something I've been curious about, if you don't mind the question: do LLMs in particular actually improve upon machine translation? I theorized they would perform worse than more bespoke approaches.

        Kraftwerk-Das Model Collapse (@dngrs@chaos.social):

        @hongminhee@hollo.social is there evidence that LLMs are superior to special-purpose machine translation models? In my subjective experience the quality of Google Translate has gone down recently (but I don't know what tech they are using behind the scenes; I think it's likely they shifted to LLM translation but cannot prove it). Apart from that, I suspect that since LLM training data is largely untagged for translation, this would degrade quality vs. purpose-built models.

        (chaos.social)
          • emilymbender@dair-community.social
            #11

            @spdrnl My advice is, if you want to do something like that, make it clear in your post who you are addressing your comments to.

            You started by clicking "reply" to me, so the default interpretation is that you're replying to me.

            Another option is to quote-post instead. Or post your own link to the article.
            • emilymbender@dair-community.social
              #12

              @dngrs The transformer architecture produced improvements in MT, but I think the best results come from training systems specifically for MT, rather than asking the allegedly "general purpose" (they're not) models to do it.
              • spdrnl@sigmoid.social
                #13

                @emilymbender Noted.
                • emilymbender@dair-community.social
                  #14

                  @spdrnl p.s. Starting with "The real issue here..." suggests that you think that what I wrote was not the real issue, or somehow beside the point.
                  • spdrnl@sigmoid.social
                    #15

                    @emilymbender I really thought you were just pointing to an article by Holly Baxter. These short written messages are not always easy to assess.
                    • emilymbender@dair-community.social
                      #16

                      @spdrnl No, I was writing a thread about it, as indicated, inter alia, with

                      🧵>>

                      I also was talking about an article *I was interviewed in*, as per the top post in my thread.

                      The post contained more than just the link. Did you only read the link?
                      • emilymbender@dair-community.social

                        Usually, when I get interviewed for a piece on something like "AI consciousness" I am relegated to the skeptics box --- some short paragraph near the end. So it is a nice change to see this piece by Holly Baxter:

                        The people building AI think it might be conscious. That’s not the most alarming part

                        Anthropic’s CEO Dario Amodei says he can’t rule out that its chatbot, Claude, is conscious. A Google engineer is sure he once built a sentient being. Holly Baxter speaks to the experts about whether or not ‘AI welfare’ is a serious pursuit — and what that means for humans

                        The Independent (www.the-independent.com)

                        🧵>>

                        cstrauber@mastodon.social
                        #17

                        @emilymbender It is *fascinating* how you appear in AI-related media. Smart reporters and tech people know they have to mention you, but they can't engage with your arguments without turning off the hype machine. Thanks for sharing this.
                        • spdrnl@sigmoid.social
                          #18

                          @emilymbender Ah, that thread was not visible to me. On my account it just showed that post.

                          I made the effort to click through via your profile, and then I could see the thread.
                          • emilymbender@dair-community.social

                            I have been sharing the Magic 8 Ball analogy for a while now, but I think this is maybe the first time it's made it to print:

                            >>

                            chpietsch@fedifreu.de
                            #19

                            @emilymbender I did not know what a Magic 8 Ball is, so I looked it up: https://en.wikipedia.org/wiki/Magic_8_Ball
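For readers unfamiliar with the toy, the Magic 8 Ball analogy can be sketched in a few lines. This is a hypothetical illustration (names and answer pool invented here, not from the thread or the article): the device returns a confident-sounding canned answer drawn at random, entirely independent of what was asked.

```python
import random

# A fixed pool of canned answers, in the spirit of the original toy.
CANNED_ANSWERS = [
    "It is certainly so.",
    "Outlook good.",
    "Ask again later.",
    "Don't count on it.",
]

def magic_8_ball(question: str, rng: random.Random) -> str:
    """Return a fluent-sounding answer that ignores the question's content."""
    return rng.choice(CANNED_ANSWERS)

# Identical random seeds give identical answers, whatever is asked:
answer_a = magic_8_ball("Is Claude conscious?", random.Random(42))
answer_b = magic_8_ball("Will it rain tomorrow?", random.Random(42))
assert answer_a == answer_b
```

The point of the analogy is visible in the signature: the `question` parameter is accepted but never consulted, so fluency of the output carries no information about understanding of the input.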
                            • pattykimura@beige.party
                              #20

                              @emilymbender

                              ❤

                              @spdrnl

                              Don't ever mansplain to an internationally known subject-matter expert whose consulting fee schedule for tech bros starts at $2,000 per hour.
                              • zombiecide@polyglot.city
                                #21

                                @emilymbender @dngrs

                                In a similar vein, what is it that makes people expect that MT between two languages that don't have much useful translated corpus between them should be any good? I mean, what's the conceptual ground for such beliefs about how language is supposed to work?
                                • gbargoud@masto.nyc
                                  #22

                                  @emilymbender

                                  > a message specifically included for tech bros with startups who want to download all her knowledge about LLMs: “My consulting fee is $2,000/hour. I do not ‘grab coffee’ or ‘jump on the phone’.”

                                  Nice, how many of them took you up on that?
                                  • hzulla@infosec.exchange
                                    #23

                                    @emilymbender When I explain my qualms about GenAI chatbots to others, I usually refer to Clever Hans as a historic example of a situation where an observer falsely attributes "intelligence" to a non-intelligent process.

                                    Clever Hans - Wikipedia (en.wikipedia.org)
                                    • hzulla@infosec.exchange
                                      #24

                                      @emilymbender Oh, TIL that there is an AI-related use of the term Clever Hans effect, unrelated to what I meant here. My reason to refer to Clever Hans is how the intelligence (or consciousness?) attributed to the chatbot isn't in the chatbot, but only in the mind of the observer.
                                      • thalia@discuss.systems
                                        #25

                                        @emilymbender You mention a $2,000/hr consulting fee. Are you also getting a flood of prospective students you have to turn away?
                                        • jrdepriest@infosec.exchange
                                          #26

                                          @emilymbender

                                          When I read that headline, it gave me the impression that "AI" was going to be declared as more than conscious in some way. I suppose that's just "how you write a headline".

                                          I was pleasantly surprised at how sober Holly Baxter's take on "AI" was. She does not blindly buy into the hype, and she hasn't fallen down the rabbit hole of installing Claude and getting bamboozled by its magical cold-reading skills.

                                          I was further surprised to see just how much space was given over to your interview.

                                          Thank you for even taking the time to continue talking to reporters when, as you said, you are often a checkbox just so they can say they did a "both sides".
                                          • robinadams@mathstodon.xyz
                                            #27

                                            @emilymbender This company is selling a magic 8-ball as "Offline ChatGPT":

                                            CHATGPT MAGIC-8 BALL

                                            After much research and development I have finally made an offline version of ChatGPT. Now you can save water and electricity while carrying one of the world's most powerfully annoying AI chatbots in your pocket. Have every whim affirmed with up to 20 of the most popular ChatGPT responses. Smooth your brain into a frictionless hypermind capable of instant regurgitation via a corporate flattery and theft engine. They're 40 quid, and you can order one here. I have a limited pre-Xmas supply with mo

                                            SPELLING MISTAKES COST LIVES (www.spellingmistakescostlives.com)