Here's the thing: I believe that you deserve to have access to high quality products and services. You deserve to use products and services that are safe, secure, well-designed and not destroying the ecological, informational or social environment.
@tante I mean… people accepted that for transport, agriculture, entertainment, even education and healthcare. Why stop here?
-
The question is not whether you can create software using LLMs - you can - most software is just boring CRUD shit.
But you do pay a hefty price: in lowered quality (security issues, less maintainable code), in skill decay in the people "guiding" the stochastic parrots, etc. It's not "can AIs create software" but "are we willing to accept worse software running more and more of our lives?"
@tante Good take! But also, like "can you create software" is not really an accurate framing of what the hard part of software was.
Most people could "create software" by looking up a Hello World example. That wouldn't help them solve any real problems though.
LLMs produce software that *looks more like* it solves problems... but security, integrity, legality were kind of always implied parts of the problem.
Like, it takes a weird subtle reframing of the goal to make LLMs look at all useful.
-
@tante By now I’m pretty convinced LLMs can make it easier to produce high quality code than writing high quality code manually. Particularly because the AI is willing to do all the tedious, boring tasks that most developers are often too lazy for. Yes, it also makes it much easier to produce shittier code as well. (1/2)
-
Right now we are seeing way more of the latter because most people haven't learned yet how to produce good AI code and because the bad code sticks out while the good code blends in. But I'm convinced your underlying assumption “AI code = shitty" isn't correct. (2/2)
-
@tante Then there’s the fact that we never had that software. Business has always accepted low-quality products and services. So while I do agree with you, I’m afraid the people who run the software companies simply don’t care.
-
@cjk @tante Honestly I'm not sure about the skill degradation. I think there is a very high chance this is the same “new technology will make the youth stupid" panic that we have seen for centuries with every new technology. Also I really would like to see a deep analysis of how much AI is hurting the environment. I don't trust Sam Altman's numbers, but I also don't buy the “every prompt is burning down a small forest" hyperbole.
-
@tante Yes, if I had Root Cause Analysis training - #BehavioralScience - in earlier education, I and more like me would've made more of a difference.
-
@tante No, truly, Germany has managed to give us great software over the past decades without LLMs.
-
@tante In my opinion, the problem is that most decision-makers don't understand the risks involved. When those in charge lack the necessary knowledge for software development, the empty promises of AI seem like an easy solution.
-
@tante Is that running or ruining?
-
Entropy is somebody else's problem - they'll be comfortable on their yacht. We can't expect sociopaths to care about others, else they wouldn't be sociopaths.
-
@343max @tante
Then, Max, you have no understanding of LLM/Gen AI, or maybe of specifying requirements, designing systems (modules, APIs etc.) and then writing the code, testing & debugging. If it's a project of any size you need a team & management.
There is also documentation.
Actually writing the code is the easiest bit & the only bit the current LLM/Gen AI does, and it does it badly, as it relies on code scraped from elsewhere & statistical shuffling of fragments.
Can't work. It's a technological dead end.
-
@tante Well, well done for admitting that demonstrably, the dog can play the piano. Now we are just talking about how well it plays.
FWIW these LLMs have no need to be consistent with what happened in a previous context. The same LLM, in a new context, will usefully critique, find and fix flaws in what it itself did in the previous context.
The "slop" aspect of LLM output seems to come from blindly shipping what one context produced, when it could instead iterate as, e.g., its own QA manager.
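The iterate-as-QA pattern described above can be sketched roughly as follows. Everything here is hypothetical: `llm` stands in for whatever chat-completion call you actually use, and the scripted stub only demonstrates the control flow, not a real model.

```python
from typing import Callable

def generate_with_qa(llm: Callable[[str], str], task: str, max_rounds: int = 3) -> str:
    """Generate a draft, then repeatedly review it in a fresh context."""
    draft = llm(f"Write code for: {task}")
    for _ in range(max_rounds):
        # The review prompt carries only the draft, not the chat history,
        # so the model critiques its own output "cold".
        review = llm(f"Review this code for flaws; reply LGTM if none:\n{draft}")
        if review.strip() == "LGTM":
            break
        draft = llm(f"Fix this code.\nCode:\n{draft}\nReview:\n{review}")
    return draft

# Scripted stub for demonstration: the first draft gets a critique,
# the revised draft passes review.
_script = iter(["draft_v1", "Bug: off-by-one", "draft_v2", "LGTM"])
stub = lambda prompt: next(_script)
result = generate_with_qa(stub, "binary search")
# result == "draft_v2"
```

The key design point is that the reviewer call is stateless with respect to the generation call, mirroring the "new context" observation above.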
-
@raymaccarthy @tante Oh the “someone disagrees with me so they must be stupid" argument! Amazing. Please go away now.
-
@raymaccarthy @tante @map AI can't create. You create, AI just implements it. I know it is hard to digest but this is everyday work for a lot of us now.
-
I mean, you can also "build a house" by using deck screws to connect some wet Doug fir 2x4s into a "frame" and then stapling on some drywall and siding and draping the whole thing in a plastic tarp.
You will die when it falls on you, but for a time, it was a "house".
-
@map exactly. In a way, accepting responsibility for the code one puts in front of people is accepting the connected duties of care towards those people.
-
@343max @tante
I've designed & written SW for decades and done physical AI courses as well as studying it.
What's your qualification for your amazing claims Max?
Expert systems were the AI of the 1980s and relied on good design and curation of the knowledge of experts. They were too expensive to build and fragile.
I forecast the idea of LLMs 20+ years ago. Chatbots then had data encoded in the program (Eliza, ALICE etc.). I suggested a statistical engine using the Internet as data. A toy.
