LLMs are absolutely stumbling over shit WolframAlpha was able to do fifteen years ago for a fraction of the cost
-
who would win
running convex optimization on a few hundred million tokens of scraped text
vs
a bare minimum amount of programmatic effort to analyze common query phrases and connect them to the relevant databases
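(the "bare minimum programmatic effort" approach is roughly template matching plus dispatch to a curated data source. a toy sketch, with the patterns and lookup table entirely made up for illustration:)

```python
import re

# Invented lookup table standing in for a curated database.
POPULATION = {"france": 68_000_000, "japan": 125_000_000}

# Query templates mapped to handlers: this is the "programmatic effort".
PATTERNS = [
    (re.compile(r"population of (\w+)", re.I),
     lambda m: POPULATION.get(m.group(1).lower())),
    (re.compile(r"(\d+)\s*\+\s*(\d+)"),
     lambda m: int(m.group(1)) + int(m.group(2))),
]

def answer(query):
    """Match the query against known templates; None means 'no answer'."""
    for pattern, handler in PATTERNS:
        match = pattern.search(query)
        if match is not None:
            return handler(match)
    return None  # an honest "don't know" instead of a hallucination
```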
-
even building robust indices of the texts ingested by an LLM would be a better use of computational resources. recreating information from them probabilistically is just such a dumbfuck idea.
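(a minimal sketch of what "robust indices of the ingested texts" could mean: an inverted index mapping terms back to source documents, so a query retrieves the original text verbatim instead of a probabilistic reconstruction. the documents here are invented examples:)

```python
from collections import defaultdict

docs = {
    "doc1": "the speed of light is 299792458 metres per second",
    "doc2": "magic the gathering cards are printed on cardboard",
}

# Inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def lookup(query):
    """Return ids of documents containing every known query term."""
    hits = [index[t] for t in query.lower().split() if t in index]
    return set.intersection(*hits) if hits else set()
```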
-
when you've promised investors that in a matter of months no one is going to need anything but a hammer, everything looks like a nail
-
i feel like i was inoculated against llm hype because in like 2015 i spent a few hours laughing my ass off at the half-successful attempts to generate MTG cards, and the absolutely unsuccessful experiment where they tried to do the same thing with a database of recipes, and i was like: oh, it works for MTG because nothing in that game refers to anything outside of the symbols being manipulated
-
@tholindeth And when wolfram alpha made mistakes, they were fun, and still usually technically correct!
-
@diogenes @tholindeth and no one gets sick or dies if the mtg cards are wrong
-
@tholindeth @gsuberland it turns out computer programs with a deliberately-constructed ontology are Good, Actually
-
@unlofl @tholindeth big @weirdunits energy
-
@unlofl @tholindeth Heeee!
Is there an open source equivalent for Wolfram Alpha yet? Seems like the kind of fun shit critters would totally build for the hell of it, but I don't know if anyone has, 'cause it's also probably terribly complicated.
-
@unlofl @tholindeth qalc parses that as (32 gram·barns) / (5 tonne·barns)
-