Unpopular opinion and I expect there will be a lot of pushback on it, but what's a good (polite) debate if not enlightening?
Do you know how your washing machine works? (If yes, keep quiet for those who don't.)
If the answer's no, I suspect you still know one thing: you trust it to wash your clothes, because that's what it's designed to do.
If you're not a mechanic and yet you drive, you trust that when you do all the right things and push the right buttons, your vehicle is going to move forward and get you to places. If something breaks, do you attempt to tinker with it and fix it? Maybe, but more likely you go to someone who does know.
What's my point then?
AI coding. Humans made a thing that allows non-programmers to have an idea, write that idea out in great detail, and get back something they should of course test thoroughly and, if they like it, maybe share.
The washing machine is similar but not the same. If you put in your powder/detergent and the right colour of clothes and tell it to start, you let it do its thing. It washes your clothes, and hopefully, when you're wearing them at an important meeting, they don't suddenly fall apart, because someone beta-tested that machine before you got it and made sure it didn't rip the seams of your clothes silently, deadly, badly.
AI programs need to be tested just like your expensive machine, and many probably aren't. That is a problem, but dismissing the very idea of AI code out of hand seems odd, at least to me.
Maybe because there's more scope for badness, maybe because you only ever hear the results of all the bad things going on. Like Amazon reviews, the majority of what you see are people unhappy with the product. For every unhappy person there's probably a thousand that just get on with it.
Same for AI badness. For every bad experience, there's probably a few hundred situations where someone made a thing, it just works, nobody cares but you'll never know.
Basically, I feel we need to take a step back, review our hate and our personal biases a tiny bit, and stop crapping all over people for doing things in a way that isn't *your* way.
Before automatic washing machines we had manual ones that took a lot more effort, and before that, people washed by hand. They probably felt exactly the same way. The cycle (if you'll pardon the pun) repeats throughout the centuries and will likely continue forever.
New thing comes along, people hate it, old way was better.
New way becomes old way, new thing comes along, people hate it, old way was better.
Shout at me as you wish.
PS. Wasn't written with AI.
-
@Onj I mostly agree with you on the principle of not dismissing LLM code outright, though I do think the analogy might be slightly misguided/mischosen. It's less like using a washing machine and more like using a hypothetical sort of clothes vending machine that puts together and sews your clothes on-demand. Your clothes are already made when you wash them, and presumably you read the washing instructions on the label and set your washing machine to the right settings. So yes, you're trusting it to follow those settings and to not mess up your expensive clothes, but you're not really having it create anything and the settings are quite limited-scope. I do think there is ethical use of AI, I try to make responsible use of it, as much as its background is very problematic and we ought to be conscious of that, so I definitely agree with you. But I also know that the ratio of shitty to decent AI-coded projects is much, much higher than the ratio of disastrous to successful washing machine cycles. Hey, how many tokens worth of water does a washing machine cycle use? Now that's a thought!
@guilevi Yep, all fair points.
-
@Onj so I agree to a significant extent. The issue is when AI companies are openly saying: AI will write all your code, this will mean you need half the number of devs and half the amount of time to ship the same product because AI is just so damned good at code. Then bosses say, awesome! Let's make loads of talented coders redundant and if the team tells me they still need loads of time for testing because AI code still needs oversight...
@bermudianbrit Yeah, that I don't agree with at all, but I don't know where my level of 'this has to stop' really is. I think it's hard to define.
-
@Onj Somewhat related, but this is the first technology humans have invented without fully understanding how it works. Until now, for every technology we invented, someone knew exactly how it worked. That's why a bunch of AI safety people are working on interpretability, trying to find out how these models work. We still don't know much.
-
@BTowersCoding How did humans make a thing without knowing how it works? Genuine question; this seems odd to me.
@Onj @BTowersCoding We know how random number generators work, but we don't know what number a properly made one will spit out next.
We know how LLMs do what they do, and hence we can be absolutely certain that they are non-deterministic.
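The distinction above can be sketched in a few lines of Python (toy tokens and probabilities, not a real model): greedy decoding always picks the same next token from a distribution, while sampling from that same distribution can pick a different one on each run, which is where the non-determinism comes in.

```python
import random

# Toy illustration: an LLM chooses the next token from a probability
# distribution. The tokens and numbers here are made up.
next_token_probs = {"cat": 0.5, "dog": 0.3, "fish": 0.2}

def greedy_pick(probs):
    # Deterministic: always returns the highest-probability token.
    return max(probs, key=probs.get)

def sampled_pick(probs, rng):
    # Non-deterministic: different runs can return different tokens,
    # weighted by their probabilities.
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
print(greedy_pick(next_token_probs))        # always "cat"
print(sampled_pick(next_token_probs, rng))  # "cat", "dog" or "fish", varies per run
```

We understand both functions completely, yet only the first one's output is predictable, which is roughly the point being made about RNGs and sampled LLM output.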
-
@Onj I think you make some good points here. I fully agree AI output should always be tested.
-
-
@Onj The analogy is a little flawed because you're comparing an end user to a developer. If I create a washing machine with no idea how it works, give it to people, and things break that I then have no idea how to fix, that's on me. Any end user of any program may not know how it works, but they can go to the manufacturer, outline their problems, and hopefully get fixes, workarounds or bug fixes.
@JustinMac84 You can go back to your coding agent, outline the problems and, if done right, get fixes too. Not always, and not always well, but that's what testing's for, isn't it?
-
@Onj So you receive a support ticket and negotiate with your user, while simultaneously submitting a support ticket to your AI of choice and negotiating with that. If you can't reproduce the problem the user is having, what would you do? You wouldn't know how to advise them, and would have to pass on every piece of possibly incorrect, possibly unsafe advice the model gave you and await feedback from the user.
-
@JustinMac84 Yep, but if there were such a thing as Fiverr for coding instead of music, the same thing would apply there. Humans could be just as devious: make something that looks good and works on the outside, steals your crypto on the inside. Not nice.
-
@Onj Exponentially grow that problem for every bug report received, whereas a programmer with innate skill, one with an intimate understanding of software architecture, hardware, browser influence and so on, would be able to have a more direct conversation, offering competent solutions they could be more assured of success with.
What damage would all the wild goose chases and delays do to your brand?
-
@Onj Also, integrating yourself into someone else's system, i.e. curating AI code for errors, carries a higher cognitive load and a higher risk of things being missed than coding it yourself.
-
@Onj I don't understand the point. There is Fiverr for coding; you can commission people to produce software for you. Thing is, with human-coded software, culpability can be traced back. Imagine my shock, my horror, my outrage, when you tell me the software I had my model produce for you introduced vulnerabilities! However did that happen? There's no way for you to prove that I didn't do it on purpose, or that the model didn't mess up.
-
@Onj Proliferating the ability to produce software to many, many more people just exponentially increases the possibility of malice, unintentional vulnerabilities and incompetence. At least the hacker in your example is human, and therefore can be blamed, and had to invest a lot of time to get skilled. Do they want to blow that investment on bad acting?
-
@JustinMac84 Sure, but I think you're doing what most people do right now: absolute, absolute worst-case scenario. I honestly don't know why people do this, other than that it scores points, but OK, point made. It could be terrible. It could be catastrophic. But... what if it just isn't? What if it simply does the job it's intended to do?
-
@Onj Whereas an abusive partner could quite happily blow a day's effort producing a tracker, keylogger or other piece of malicious software with which to infect a partner, ex or rival business.
-
@Onj But anyway, this doesn't address your original point, my answer to which is: it's fine for an end user to have no idea how their product works and to be unable to fix it unaided; much less so for a dev or business supplying people something they have no idea about.
-
@JustinMac84 Lol, come on now. Businesses supply whatever to people all the time, and how hard is it to get help with it when everyone you talk to is just there on work experience or something? We've probably all seen it. No excuse, but you know it's true.
-
@Onj I don't think it is the worst-case scenario. Worst case would be bad acting; second worst, unintentionally introduced vulnerabilities that let in other bad actors; third, software that messes up your machine; fourth, incompatibility with certain setups, and bugs.
-
@Onj The point is that the quote-unquote developer, who has outsourced all the skill to a stochastic parrot, has no idea how to fix any of those issues without arduous, lengthy back-and-forth that may introduce even more problems. Would you do that to a customer? "Andre, I'm having this problem." Knowing that consulting an LLM is like rolling a die, albeit a weighted die, and you wouldn't know if the answer was right or wrong,