"You should really try Claude/WhateverLLM before criticizing"
-
"You should really try Claude/WhateverLLM before criticizing"
is the new
"But it contains electrolytes"

-

@ploum To me, it is even worse than that. In their minds, no criticism holds, and if you still criticize LLMs, it is because you haven't seen the light yet.
"I am never using LLMs because of ethical / philosophical / moral / environmental arguments" -> "you cannot have an opinion without trying it at least once"
"I asked ChatGPT something and it gave me a wrong answer" -> "you should use it more, to learn about good prompting"
"I asked a code question and its answer was riddled with bugs" -> "you should try an agent"
etc., ad nauseam. If you have criticism, it is only because you are not a believer yet. To me, it is extremely religion-like.
-

Good old Idiocracy!
It never gets old!!!
By the same director, also worth watching: the series "Silicon Valley".
From memory: six seasons!
-

@ploum "you read documentation on paper? like the paper in toilets?"
-

Better question:
How many neurons does it take to be "slop"?
I introduce a three-neuron example: a PID loop, from control systems theory.
It has a training phase, in which it "learns" the control from known inputs, and an execution phase, in which it applies what it learned.
Even my 3D printers use PID for the nozzle and bed. My kitchen oven does as well.
Is all learning software "evil"? If no, where's the cutoff?
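The PID loop described above can be sketched in a few lines. This is a minimal illustration only: the plant model, gains, and setpoint below are made-up assumptions standing in for something like a printer's heater, not values from any real device.

```python
# A minimal PID sketch: three "neurons" (P, I, D terms) driving a toy plant.
# Gains and plant constants are illustrative assumptions, not tuned values.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        # Classic PID: output = Kp*e + Ki*integral(e) + Kd*de/dt
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(controller, setpoint=200.0, steps=2000, dt=0.1):
    """Toy first-order plant: a heater block that gains heat from applied
    power and leaks heat toward a 20-degree ambient."""
    temp = 20.0
    for _ in range(steps):
        power = max(0.0, controller.update(setpoint, temp))  # heater can't cool
        temp += (0.5 * power - 0.1 * (temp - 20.0)) * dt
    return temp


final = simulate(PID(kp=2.0, ki=0.5, kd=1.0, dt=0.1))
```

The integral term is what drives the steady-state error to zero here; with P-only control the heat leak would leave the temperature short of the setpoint.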
-

@ploum well, you can test and still criticize

-
@crankylinuxuser @ploum PID and gradient-descent-optimized learning systems are different in nature, though. Putting PID on the same spectrum as LLMs seems wrong. Or the definition of your spectrum is so broad that you could put any self-regulating system on it (like a toilet's float valve), making the spectrum nearly useless for describing or comparing anything.
-
That's kind of the point.
There's intermediate learning software, like K-Nearest-Neighbors, that is also trained on classified (properly annotated) data and can then provide percentage responses on inputs. We see this with tools like Merlin's birdsong identification.
I even made a 10-position classifier with the MYO myoelectric armband back in 2016. No GPU needed; modest CPU and RAM were enough, something even an RPi 2 could easily handle.
The point is that this whole debate is being forced into a binary, with folks ranging from "this is amazing" to "horrific garbage". Maybe LLMs could be made more useful if they output % confidence and citations accurately?
But again, I'm not going to dismiss everything, nor am I going to trust everything. Both actions are foolish.
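The kind of percentage output mentioned above falls out of K-Nearest-Neighbors almost for free: the vote share among the k nearest training points serves as a confidence figure. A minimal sketch, with a made-up toy dataset standing in for properly annotated training data:

```python
# Minimal K-Nearest-Neighbors with a percentage "confidence":
# the fraction of the k nearest labeled points that agree on the answer.
from collections import Counter


def knn_classify(train, query, k=3):
    """train: list of ((x, y), label) pairs; returns (label, confidence %)."""
    # Rank training points by squared Euclidean distance to the query.
    by_dist = sorted(train, key=lambda p: (p[0][0] - query[0]) ** 2
                                        + (p[0][1] - query[1]) ** 2)
    # Majority vote among the k nearest neighbors.
    votes = Counter(label for _, label in by_dist[:k])
    label, count = votes.most_common(1)[0]
    return label, 100.0 * count / k


# Two hand-labeled clusters standing in for annotated training data.
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]

label, confidence = knn_classify(train, (0.5, 0.5), k=3)
```

Nothing here needs a GPU; like the armband classifier, it is a sort, a count, and a division.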