CIRCLE WITH A DOT


When we use words like "introspection", "hallucination", "understand", "discover", and so on when we're talking about LLMs, we make a dangerous mistake.

Uncategorized · 3 Posts · 3 Posters

  • jitterted@sfba.social wrote (#1):
    When we use words like "introspection", "hallucination", "understand", "discover", and so on when we're talking about LLMs, we make a dangerous mistake. LLMs have no consciousness, agency, or self-awareness, and using such terms can make it seem like they do.

    (Even "writing code" hits different than "generates code".)

    This isn't a pro- or anti-AI comment; it's a truth-vs.-lying (perhaps to oneself) comment. How we (especially the sellers of trained models) talk about these statistical token generators affects how, when, and whether we use them, and what we expect of them.

      thirstybear@agilodon.social wrote (#2):
      @jitterted Agreed. I try to use terms like “generates code”, “statistically likely output”, and of course “stochastic parroting” (a remarkably accurate term).

      I'm still struggling to find a phrase that hits home hard enough for the mistakes, though - currently I usually say it has generated bad output.

        elduvelle@neuromatch.social wrote (#3):

        @jitterted Thank you! This language is so annoying.
