

"You should really try Claude/WhateverLLM before criticizing"

Uncategorized · 8 Posts · 7 Posters
ploum@mamot.fr
#1
    "You should really try Claude/WhateverLLM before criticizing"

    is the new

    "But it contains electrolytes"

armavica@social.sciences.re
#2

@ploum To me, it is even worse than that. In their mind, no criticism holds, and if you still criticize LLMs, it is because you haven't seen the light yet.
"I am never using LLMs because of ethical / philosophical / moral / environmental arguments" -> "you cannot have an opinion without at least trying it once"
"I asked ChatGPT something and it gave me a wrong answer" -> "you should use it more, to learn good prompting"
"I asked a code question and its answer was riddled with bugs" -> "you should try an agent"
etc., ad nauseam. If you have criticism, it is only because you are not a believer yet. To me, it is extremely religion-like.

frankaulux@mastodon.social
#3

@ploum

Good old Idiocracy! It never gets old!
By the same creator, also worth watching: the series "Silicon Valley".
If I remember correctly: 6 seasons!

kafeinnet@mstdn.ingrats.net
#4

          @ploum "you read documentation on paper? like the paper in toilets?"

crankylinuxuser@infosec.exchange
#5

            @ploum

            Better question:

            How many neurons does it take to be "slop"?

I offer a three-"neuron" example: a PID loop, from control systems theory.

It has a training phase, in which it 'learns' the control response from known inputs, and an execution phase, in which it applies what it learned.

Even my 3D printers use PID for the nozzle and bed. So does the oven in my kitchen.

            Is all learning software "evil"? If no, where's the cutoff?
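The PID loop described above fits in a few lines. This is a hypothetical minimal sketch: the gains, plant model, and 200-degree setpoint are invented for illustration, not taken from any of the devices mentioned.

```python
class PID:
    """Minimal discrete PID controller. The 'training' is the hand-tuning
    of kp/ki/kd; the update() loop below is the execution phase."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt               # accumulate past error
        derivative = (error - self.prev_error) / self.dt  # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Execution phase: drive a crude first-order plant (think hotend temperature)
# from 20 degrees toward a 200-degree setpoint.
pid = PID(kp=0.5, ki=0.1, kd=0.05, dt=1.0)
temp = 20.0
for _ in range(500):
    power = pid.update(setpoint=200.0, measured=temp)
    temp += 0.1 * power - 0.01 * (temp - 20.0)  # toy heating/cooling model
```

With these (made-up) gains the loop settles near the setpoint; the integral term is what removes the steady-state error a proportional-only controller would leave.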

jairbubbles@mastodon.social
#6

@ploum Well, you can test it and still criticize 😅

whiteshoulders@piaille.fr
#7

@crankylinuxuser @ploum PID and gradient-descent-optimized learning systems are different in nature, though. Putting PID on the same spectrum as LLMs seems wrong. Or else your spectrum is so broad that you could put any self-regulating system on it (like a toilet flush), making it nearly useless for describing or comparing anything.
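The distinction can be made concrete: a PID's gains are fixed by the designer, whereas a learning system adjusts its own parameters from data by following the gradient of a loss. A hypothetical one-weight sketch of the latter (toy data invented for illustration):

```python
# Fit the single weight w in the model y = w * x to toy data generated by
# y = 2x, by repeatedly stepping against the gradient of the mean squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w, lr = 0.0, 0.05
for _ in range(200):
    # d/dw of mean((w*x - y)^2) over the dataset
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
# w converges to 2.0
```

Nothing in a PID loop resembles this loss-driven parameter update; the "spectrum" argument hinges on conflating the two.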

crankylinuxuser@infosec.exchange
#8

                  @whiteshoulders @ploum

That's kind of the point.

There's intermediate learning software, like k-nearest-neighbors, that is also trained on classified (properly annotated) data and can then report percentage confidences on new inputs. We see this with tools like Merlin birdsong identification.

I even made a 10-position classifier with the MYO myoelectric armband back in 2016. No GPU needed; modest CPU and RAM sufficed, something even a Raspberry Pi 2 could handle.

The point is that this whole debate is being forced into a binary, with positions ranging from "this is amazing" to "horrific garbage". Maybe LLMs could be made more useful if they accurately output confidence percentages and citations?

But again, I'm not going to dismiss everything, nor am I going to trust everything. Both extremes are foolish.
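A k-nearest-neighbors classifier with a percentage confidence, as described above, fits in a dozen lines. This is a toy sketch with made-up 2-D data, not the MYO or Merlin models:

```python
from collections import Counter
import math


def knn_predict(train, query, k=3):
    """train: list of ((x, y), label) pairs.
    Returns (predicted label, confidence as a percentage of the k votes)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    label, count = votes.most_common(1)[0]
    return label, 100.0 * count / k


# Two well-separated clusters of annotated points.
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]

print(knn_predict(train, (0.5, 0.5)))  # → ("A", 100.0)
```

A query landing between the clusters would yield a split vote, e.g. 66.7% — exactly the kind of graded, inspectable output the post is asking LLMs for.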
