Usually, when I get interviewed for a piece on something like "AI consciousness" I am relegated to the skeptics box --- some short paragraph near the end.

Uncategorized · 31 Posts · 15 Posters · 6 Views
• emilymbender@dair-community.social wrote:

    @dngrs The transformer architecture produced improvements in MT, but I think the best results come from training systems specifically for MT, rather than asking the allegedly "general purpose" (they're not) models to do it.

  #21 zombiecide@polyglot.city replied:

    @emilymbender @dngrs

    In a similar vein, what is it that makes people expect that MT between two languages that don't have much useful translated corpus between them should be any good? I mean, what's the conceptual ground for such beliefs about how language is supposed to work?
• emilymbender@dair-community.social wrote:

    Usually, when I get interviewed for a piece on something like "AI consciousness" I am relegated to the skeptics box --- some short paragraph near the end. So it is a nice change to see this piece by Holly Baxter:

    The people building AI think it might be conscious. That’s not the most alarming part
    Anthropic’s CEO Dario Amodei says he can’t rule out that its chatbot, Claude, is conscious. A Google engineer is sure he once built a sentient being. Holly Baxter speaks to the experts about whether or not ‘AI welfare’ is a serious pursuit — and what that means for humans
    The Independent (www.the-independent.com)

    🧵>>

  #22 gbargoud@masto.nyc replied:

    @emilymbender

    > a message specifically included for tech bros with startups who want to download all her knowledge about LLMs: “My consulting fee is $2,000/hour. I do not ‘grab coffee’ or ‘jump on the phone’.”

    Nice, how many of them took you up on that?
• emilymbender@dair-community.social wrote:

    I have been sharing the Magic 8 Ball analogy for a while now, but I think this is maybe the first time it's made it to print:

    >>

  #23 hzulla@infosec.exchange replied:

    @emilymbender When I explain my qualms about GenAI chatbots to others, I usually refer to Clever Hans as a historic example of a situation where an observer falsely attributes "intelligence" to a non-intelligent process.

    Clever Hans - Wikipedia (en.wikipedia.org)
• #24 hzulla@infosec.exchange replied to their own post above:

    @emilymbender Oh, TIL that there is an AI-related use of the term "Clever Hans effect", unrelated to what I meant here. My reason to refer to Clever Hans is how the intelligence (or consciousness?) attributed to the chatbot isn't in the chatbot, but only in the mind of the observer.
• #25 thalia@discuss.systems replied to the opening post:

    @emilymbender You mention a $2,000/hr consulting fee. Are you also getting a flood of prospective students you have to turn away?
• #26 jrdepriest@infosec.exchange replied to the opening post:

    @emilymbender

    When I read that headline, it gave me the impression that "AI" was going to be declared as more than conscious in some way. I suppose that's just "how you write a headline".

    I was pleasantly surprised at how sober Holly Baxter's take on "AI" was. She does not blindly buy into the hype, and she hasn't fallen down the rabbit hole of installing Claude and getting bamboozled by its magical cold-reading skills.

    I was further surprised to see just how much space was given over to your interview.

    Thank you for taking the time to continue talking to reporters when, as you said, you are often a checkbox just so they can say they did a "both sides".
• #27 robinadams@mathstodon.xyz replied to the Magic 8 Ball post:

    @emilymbender This company is selling a magic 8-ball as "Offline ChatGPT":

    CHATGPT MAGIC-8 BALL
    After much research and development I have finally made an offline version of ChatGPT. Now you can save water and electricity while carrying one of the world's most powerfully annoying AI chatbots in your pocket. Have every whim affirmed with up to 20 of the most popular ChatGPT responses. Smooth your brain into a frictionless hypermind capable of instant regurgitation via a corporate flattery and theft engine. They're 40 quid, and you can order one here. I have a limited pre-Xmas supply with mo
    SPELLING MISTAKES COST LIVES (www.spellingmistakescostlives.com)
• #28 gcvsa@mstdn.plus replied to the opening post:

    @emilymbender If you ever wanted to know how religion got started in human civilization, here it is, playing out in real time. Make it spooky, make it hype.
• emilymbender@dair-community.social wrote:

    My only quibble is that I am (again) paraphrased as if I talked about "AI" as a thing, or used "AI" to refer to language models. I'm sure what I said to Holly Baxter here was that "language models" have these uses. I've asked for a correction.

    In general, if you see me quoted/paraphrased in the media and the term "AI" is outside the quotes, that's gonna be a journalist mis-paraphrasing me.

    /fin

  #29 emilymbender@dair-community.social continued:

    I am happy to say my request for a correction was honored.
• #30 bms48@mastodon.social replied:

    @emilymbender Check. Watched Prof. Michael Wooldridge's Royal Society Lecture this AM.
• #31 bms48@mastodon.social replied:

    @emilymbender Now we just have to make everyone watch the 1980s Twilight Zone episode "Wordplay", where "dinner" (oops, Scotticism, I mean "lunch" everywhere else but .scot) slowly mutates into "dinosaur". The protagonist is trapped in an existential nightmare not unlike Philip K. Dick's "Ubik".

• relay@relay.mycrowd.ca shared this topic