Unpopular opinion and I expect there will be a lot of pushback on it, but what's a good (polite) debate if not enlightening?
-
@JustinMac84 You're not even wrong, because clearly you've done all the reading, seen all the bad press, and it vindicates your own bias (which circles back to my original post), and that's absolutely your choice to make. I'm not going to change your mind. I just think it's sad that before we can enjoy a new technology, we have to crap all over it first. It happens in every sector when a new thing comes on the scene.
@Onj If Microsoft, itself a staunch proponent of AI, is publishing studies suggesting that AI causes cognitive atrophy and a reduction in critical-thinking skills; if Amazon itself is stumbling over badly generated AI code; if the BBC is testing chatbots and noting failure rates as high as 50%; if proper programmers are noticing the cumulative and, most importantly, hidden errors AI coders are generating...
-
@Onj Opposing tech for the sake of opposing tech is stupid. I would hesitate, though, before describing a breadth of profound and, most importantly, substantiated concerns as negativity and hate. I would argue that charging recklessly into the adoption of a technology, without considering all the ramifications, is equally foolish. For me it's not a black-and-white "no one should use this stuff"; it's about how the stuff is used.
@JustinMac84 Of course it is. If I made a thing, didn't give it a single test, threw it out there and it killed someone's machine, that would be terribly irresponsible. You haven't come up with a single good use case so far, though; your entire response to the thread has been: Amazon screwed up, you could screw up, people are screwing up, your brand would suck if... That's putting problems and limits right at the door before you even step out of the house.
Me, I can't live that way. I think trying a thing and seeing if all it does is suck is better than not knowing at all.
Taking other people's word for it, and again *only* seeing the bad in a thing, well, it speaks for itself.
-
@JustinMac84 I rest my case. You just made all my points for me right there.
-
@Onj To say nothing of the implications for livelihoods and the environment. Don't you think that weight of evidence, already amassing while AI is so young, is worth listening to and being mindful of, rather than dismissing as negativity and hate? The genie is out of the bottle now. It's never going away; wishing it so is pointless. That doesn't mean we have to accept the genie as is, though. It doesn't mean we can't hold out for better.
-
@JustinMac84 You won't get better if we stop it in its tracks. Remember the size of computers when we were young? Talk to your grandparents and ask them the size of computers back then. They had to start out as junk before they got down to the smaller-than-a-fingernail sizes we can produce now.
New things have to suck before they don't.
Old things learnt how not to suck by sucking in the beginning.
-
@Onj That is missing the point of my argument. The issue is not that you might screw up. Bugs, with all the permutations, compatibility issues and so on, are absolutely and completely inevitable. It's not the screw-ups that worry me. It's how the screw-ups happened, and how they are dealt with, that concern me.
-
@jakobrosin Yep, and I have zero problem with that. Transparency is key.
-
@JustinMac84 Only if you let it concern you. If people don't fix the things they're putting out, if what they're putting out sucks so badly it hurts or kills people, don't go near it, ever. You're absolutely not wrong for that.
-
@Onj But again, you're oversimplifying my argument. No one said anything about stopping anything in its tracks. I believe I was quite clear that AI isn't going away, nor should it. Carrying on regardless ("yes, there are a bunch of naysayers, and we're already seeing the ways things are going wrong, people losing their jobs etc, but fuck it, they're just pooping the party, bunch of miseries"),
-
@Onj or stopping everything and banning AI forever, don't have to be the only options. We could appropriately regulate, investigate, move more slowly, and get a healthy AI with a more significant net benefit.
-
@JustinMac84 I'm not oversimplifying anything, I'm stating a fact. You're taking it as a personal slight. The 'you' in my statement is not directed at you personally; it's generic.
I also agree with your last post, so I could hardly be oversimplifying, could I?
Ethical AI is important.
Stuff that is going to be used to kill people, autonomous robots controlled by AI, that is not your friend or anybody's friend. We need to push for that to be banned, and quickly.
-
@Onj I assumed your antepenultimate post was a response to my argument. I don't feel slighted; I just want you to understand that I regard banning all AI forever as impractical, unworkable, and not necessarily beneficial. I do, however, feel that we are adopting it at a reckless pace, and that things like cognitive atrophy; acute job loss; inaccurate decisions, information and unreliable products; environmental destruction
-
@Onj and people being forced to bear the cost of data centres, etc. Worrying about these things is not negativity or hate; it is common, rational sense.
-
@JustinMac84 I personally like that the CEO of Anthropic (the company behind Claude) basically told Trump to 'go fuck yourself', though not in those words, because they didn't want to build no-control robots to spy on the US, and whatever else might come of that. Good for him.
-
@Onj Absolutely agree with you there. And also, I am glad that people are making money off the tools you create. That is cool. My only intent in replying was to assert that there's a difference in requirements and responsibility between someone who owns a thing, like a washing machine, and the people who make the thing. I also think we need to listen to the science and not be in such a hurry.
-
@JustinMac84 You know that thing in the '90s, was it the ESRB, the board for rating video games?
We need a modern equivalent of that for AI: comprehensive, third-party, independent testing that comes up with a proper rating standard, some kind of scale for energy use, and many more things that I'm not qualified to think of.
-
@Onj @JustinMac84 Thing is, I think you can have it both ways. If I write code, I make sure I know how it works and how it was created. AI is a tool for me. It saves me time. But I do know how it works; I'm an engineer and I've coded stuff by hand. I play piano... not as well as you. I know you've used Suno or other tools to play with AI and its creativity. Would you accept a piece of music that AI made as yours?
When I make things with code, I involve myself in the process, but I know I don't have time to do what I did today and write 5000 lines of it by hand. I credit AI as my assistant and as a writer, in my code and in my application. I do, however, and always will, be able to break it apart, know what code was written, and solve the problems that will inevitably come, because I use AI as a tool to make something in the form of an application work. Without some knowledge of programming, though, I would never release it, because I know that it, or some portion of it, will break.
What I'm saying is that AI should be used with care. Know what it's making. Understand how it works. And for goodness' sake, don't do what I know you don't do but others often do: rather than searching Google for whether something already exists that solves their problem, they employ AI to write them a program to do it. For me that's too risky; if I've found an application that has had thousands of people run it, try to break it, and push it to its limits, I'll use it far more readily. I can write support for said app. But the person who would rather AI a solution, yet doesn't know how their new solution works, will eventually get a call, know nothing of why it's breaking on person B's computer, ask AI about it, and be confused, because the AI no longer has the context that allowed it to make the thing.
To make this... thing... shorter: just be careful. Learn about your code and how it works. You'll thank me later.
-
@ner @JustinMac84 No, because I'm probably a hypocrite. If I prompt Suno to make something based on an idea, it isn't mine, but I could probably learn to play it. Coding feels different. It's not; I know that in my head, but it feels more wholesome. I cannot explain why, and I have zero reason for thinking so.
-
@Onj @ner Props for the honesty. But then we come to the interesting question of at what point something becomes yours. Hans Zimmer, John Williams: they write pieces and tell the orchestra what to play. At what point is the prompt detailed enough for the same ownership to be legitimate, when you're just telling the AI what to play?