@cap_ybarra @davidgerard i mean maybe but i find most people just flatly do not understand higher-order programming and cannot be made to
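To make the jargon concrete, here's a minimal sketch (my example, not from the thread) of what "higher-order programming" means: functions that take or return other functions, as in a retry decorator.

```python
# Minimal illustration of higher-order programming: `retry` is a function
# that returns a decorator, which in turn wraps another function.
# All names here are invented for the example.

def retry(times):
    """Return a decorator that retries a function up to `times` attempts."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            last = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # broad catch, for illustration only
                    last = exc
            raise last
        return wrapper
    return decorate

calls = []

@retry(3)
def flaky():
    """Fails twice, then succeeds, to show the wrapper retrying."""
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = flaky()  # succeeds on the third attempt
```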
doriantaylor@mastodon.social
-
this is what actual nuance about AI looks like - Dorian Taylor @doriantaylor on how LLM coding is vaguely possible, but of limited usefulness for real work
@cap_ybarra @davidgerard the empirical reality i'm seeing is a) people seem to believe they're useful and b) they're already expensive, and those people are not going to want to pay 10-20x what they're currently paying, and so the situation will equilibrate eventually around locally-runnable models that are "good enough" for rote coding tasks over existing mundane procedural languages.
like i would bet money that the halo of LLM as panacea is eventually going to dissipate, but that will remain.
-
@cap_ybarra that i agree with completely; my own work is completely deterministic and i'm a big proponent of open standards, and resent the fact that every web API is ever so slightly different from every other.
i guess my observation is that people *are* using LLMs to generate code, and will likely continue to (despite being a blunt tool in my opinion), but i suspect when measured from the outside, the net gains are going to vary dramatically.
(@davidgerard has written similar-ish things)
-
2) web api client boilerplate is a consummate pain in the ass, because every vendor does theirs ever so slightly differently. it's also likely one of the things that's super well-represented in the training data, plus it'll either work when you run it or it won't; it's pretty low-risk. the failure mode is i have to correct it by hand (and it isn't like i didn't have the reference docs open contemporaneously anyway).
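A sketch of the "ever so slightly different" point: two hypothetical vendors exposing the same operation with incompatible conventions for auth and pagination. The vendor names, endpoints, and parameters are all invented; the functions just build request descriptions rather than performing real HTTP.

```python
# Two hypothetical vendors, same operation, slightly different conventions.
# This is exactly the kind of rote per-vendor boilerplate discussed above.

def acme_list_widgets(token, page=1):
    """Acme (invented): bearer token in a header, 1-based `page` query param."""
    return {
        "url": "https://api.acme.example/v2/widgets",
        "headers": {"Authorization": f"Bearer {token}"},
        "params": {"page": page},
    }

def globex_list_widgets(api_key, cursor=None):
    """Globex (invented): key as a query param, opaque `cursor` for paging."""
    params = {"api_key": api_key}
    if cursor is not None:
        params["cursor"] = cursor
    return {
        "url": "https://globex.example/api/widgets/list",
        "headers": {},
        "params": params,
    }
```

Neither convention is wrong; they're just arbitrarily different, which is why this code is low-risk to generate and easy to verify by running it.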
-
1) at no point did i say it was *good* for writing unit tests, i just said it was *possible* to generate them; whether they're any good is a separate consideration
like these things are known to make bad tests and even alter tests to pass; my point was you can't get away with skipping test coverage whether you write the tests by hand or it generates them, because of the way it works ("works")
but as we both pointed out, no guarantee generating tests will save any time
-
@cap_ybarra @davidgerard cool, what do they get wrong?