Unpopular opinion and I expect there will be a lot of pushback on it, but what's a good (polite) debate if not enlightening?
Do you know how your washing machine works? (If yes, keep quiet for those who don't.)
If the answer's no, I suspect you still know one thing: you trust it to wash your clothes, because, well, that's what it's designed to do.
If you're not a mechanic and yet you drive, you trust that when you do all the right things and push the right buttons, your vehicle is going to move forward and get you places. If something breaks, do you attempt to tinker with it and fix it? Maybe, but more likely you go to someone who does know.
What's my point then?
AI coding. Humans made a thing that allows non-programmers to have an idea, write that idea out in great detail and, from there, get something back that they should of course test thoroughly and, if they like it, maybe share.
The washing machine is similar but not the same. If you put in your powder/detergent and the right colour of clothes and tell it to start, you let it do its thing. It washes your clothes and hopefully, when you're wearing them at an important meeting, they don't suddenly fall apart, because someone beta-tested that machine before you got it and made sure it didn't rip the seams of your clothes silently, deadly, badly.
AI programs need to be tested the same as your expensive machine, and probably many aren't. That is a problem, but dismissing the underlying idea of AI code itself out of hand seems an odd one, at least to me.
Maybe it's because there's more scope for badness, or maybe because you only ever hear the results of all the bad things going on. Like Amazon reviews: the majority of what you see are people unhappy with the product. For every unhappy person there are probably a thousand who just get on with it.
Same for AI badness. For every bad experience, there are probably a few hundred situations where someone made a thing, it just works, nobody cares and you'll never know.
Basically, I feel we maybe need to take a step back, review our hate and our personal biases a tiny bit, and stop crapping all over people for doing things a different way that isn't *your* way.
Before automatic washing machines we had manual ones that took a lot more effort, and before that, people washed by hand. They probably felt exactly the same. The cycle (if you'll pardon the pun) repeats throughout the centuries and will continue to do so, likely forever.
New thing comes along, people hate it, old way was better.
New way becomes old way, new thing comes along, people hate it, old way was better.
Shout at me as you wish.
PS. Wasn't written with AI.
-
@Onj I don't think this is a bad/unpopular opinion at all. Old stuff may be better for older people, and though I do a bit of hobby AIML coding myself, I am having so much fun recreating old games I played on DOS PCs in 1984. I haven't shared them because I'm not even sure anyone would care, but for me, they provide so much good entertainment. When I played the originals, before I recreated them with audio cues, I was dependent on sighted assistance. Not that it was a problem, because I had a younger sister who loved playing the games as much as I did... but the feeling of being able to do stuff independently, stuff I would have had to ask for help with when I was younger, leads to so much happiness, if happiness is even the right word. Satisfaction, more likely. Yes, there are people who do bad things with AI, but every tool can be used positively or negatively. Computer programming itself, even before AI, could be used positively or negatively. If memory serves, the first computer virus was created in 1981. Again: computer program, tool. A tool can be used beneficially or harmfully. AI is a tool, just like everything else; even electricity is a tool that can do both good and bad.
@Sozhami Yep, a good way of looking at it.
-
@Onj The thing is that no one knows how the "AI" works because the processes can't be observed. The reason I'm able to run my washing machine without understanding how it works is because there are a sufficient number of people who do understand how it works and can fix it when it breaks.
@BTowersCoding How did humans make a thing without knowing how it works? Genuine question. This seems odd to me.
-
@Onj Absolutely!!
-
@Onj Well, the engineers who build the language models know how the algorithms work, but when a model is used to perform a task, the neural network accomplishes it in a way that is completely opaque.
There is a field known as "explainable artificial intelligence" which attempts to solve this, but as far as I'm aware it's still mostly theoretical.
-
@BTowersCoding OK that is fascinating and also kinda weird.
-
@Onj Yeah, it's really interesting, because I don't think it's ever happened before that a task can be performed without any connection to its history. It's kind of like in Star Trek, where a primitive culture can be contaminated by giving it technology it didn't develop itself, which can lead to drastic consequences, but in this case we are doing it to ourselves.
-
@Onj So I agree to a significant extent. The issue is when AI companies are openly saying: AI will write all your code, meaning you'll need half the number of devs and half the amount of time to ship the same product, because AI is just so damned good at code. Then bosses say, awesome! Let's make loads of talented coders redundant, and if the team tells me they still need loads of time for testing because AI code still needs oversight...
-
@Onj Then I'll just ignore them and boost the team's targets anyway. Massive companies have been doing this because people at the top are assured that AI can do the work. So it's not so much a problem with AI itself, but a problem with the salesmen foisting it off on companies, and with those at the top not listening to their teams when they say that testing is still needed.
-
@Onj I mostly agree with you on the principle of not dismissing LLM code outright, though I do think the analogy might be slightly mischosen. It's less like using a washing machine and more like using a hypothetical clothes vending machine that puts together and sews your clothes on demand. Your clothes are already made when you wash them, and presumably you read the washing instructions on the label and set your washing machine to the right settings. So yes, you're trusting it to follow those settings and not mess up your expensive clothes, but you're not really having it create anything, and the settings are quite limited in scope. I do think there is ethical use of AI, and I try to use it responsibly, much as its background is very problematic and we ought to be conscious of that, so I definitely agree with you. But I also know that the ratio of shitty to decent AI-coded projects is much, much higher than the ratio of disastrous to successful washing machine cycles. Hey, how many tokens' worth of water does a washing machine cycle use? Now that's a thought!
@guilevi Yep, all fair points.
-
@bermudianbrit Yeah, that I don't agree with at all, but I don't know where my level of 'this has to stop' really is. I think it's hard to define.
-
@Onj Somewhat related, but this is the first technology humans have invented without fully understanding how it works. Until now, for every technology humans invented, someone knew exactly how it worked. That's why there are a bunch of AI safety people working on interpretability, trying to find out how it works. We still don't know much.
-
@Onj @BTowersCoding We know how random number generators work, but we don't know what number a properly made one will spit out next.
We know how LLMs do what they do, and hence we can be absolutely certain that they are non-deterministic.
-
@Onj I think you make some good points here. I fully agree AI should always be tested.
-
@Onj The analogy is a little flawed because you're comparing an end user to a developer. If I create a washing machine with no idea how it works, give it to people, and things break that I then have no idea how to fix, that's on me. Any end user may not know how a program works, but they can go to the manufacturer, outline their problems and hopefully get fixes, workarounds or advice.
@JustinMac84 You can go back to your coding agent and outline the problems too, and if done right, get fixes. Not always, and not always well, but that's what testing's for, isn't it?
-
@Onj So you receive a support ticket and negotiate with your user, while simultaneously submitting a support ticket to your AI of choice and negotiating with that. If you can't duplicate the problem the user is having, what would you do? You wouldn't know how to advise them, and would have to pass on every piece of possibly incorrect, possibly unsafe advice the model gave you and await feedback from the user. Exponentially grow that problem for every bug.
-
@JustinMac84 Yep, but if there were such a thing as Fiverr for coding instead of music, the same thing would apply there. Humans can be just as devious: make something that looks good and works on the outside but steals your crypto on the inside. Not nice.