Unpopular opinion and I expect there will be a lot of pushback on it, but what's a good (polite) debate if not enlightening?

80 Posts 13 Posters 103 Views
  • onj@hear-me.social

    @BTowersCoding OK that is fascinating and also kinda weird.

    btowerscoding@functional.cafe
    #9

    @Onj Yeah, it's really interesting, because I don't think it's ever happened before that it's possible to perform a task without any connection to its history. It's kind of like in Star Trek, where a primitive culture can be contaminated by being given technology it didn't develop itself, which can lead to drastic consequences; but in this case we are doing it to ourselves.

    • onj@hear-me.social

      Unpopular opinion and I expect there will be a lot of pushback on it, but what's a good (polite) debate if not enlightening?

      Do you know how your washing machine works? (If yes, keep quiet for those who don't.)
      If the answer's no, I suspect you still know one thing: you trust it to wash your clothes because, well, that's what it's designed to do.
      If you're not a mechanic and yet you drive, you trust that when you do all the right things and push the right buttons, your vehicle is going to move forward and get you to places. If something breaks, do you attempt to tinker with it and fix it? Maybe, but more likely you go to someone who does know.
      What's my point then?
      AI coding. Humans made a thing that allows non-programmers to have an idea. They can write that idea in great detail and, from there, have something returned that they should of course test thoroughly and, if they like it, maybe share.
      The washing machine is similar but not the same. If you put in your powder/detergent and the right colour of clothes and tell it to start, you let it do its thing. It washes your clothes and hopefully, when you're wearing them at an important meeting, they don't suddenly fall apart, because someone beta-tested that machine ahead of you getting it and made sure it didn't rip the seams of your clothes silently, deadly, badly.
      AI programs need to be tested the same as your expensive machine; probably many aren't. That is a problem, but the underlying idea of AI code itself being dismissed out of hand seems an odd one, at least to me.
      Maybe because there's more scope for badness, maybe because you only ever hear the results of all the bad things going on. Like Amazon reviews, the majority of what you see are people unhappy with the product. For every unhappy person there are probably a thousand that just get on with it.
      Same for AI badness. For every bad experience, there are probably a few hundred situations where someone made a thing, it just works, nobody cares, and you'll never know.
      Basically I feel that we maybe need to take a step back, review our hate and our personal biases a tiny bit, and stop crapping all over people for doing things a different way that isn't *your* way.
      Before automatic washing machines we had manual ones that took a lot more effort, and before that, people washed by hand. They probably felt exactly the same. The cycle (if you'll pardon the pun) repeats throughout the centuries and will continue to do so, likely forever.
      New thing comes along, people hate it, old way was better.
      New way becomes old way, new thing comes along, people hate it, old way was better.

      Shout at me as you wish.
      PS. Wasn't written with AI.

      bermudianbrit@mastodon.online
      #10

      @Onj So I agree to a significant extent. The issue is when AI companies are openly saying: AI will write all your code; this will mean you need half the number of devs and half the amount of time to ship the same product, because AI is just so damned good at code. Then bosses say, awesome! Let's make loads of talented coders redundant. And if the team tells me they still need loads of time for testing because AI code still needs oversight...

      • bermudianbrit@mastodon.online
        wrote last edited by
        #11

        @Onj Then I'll just ignore them and boost the teams' targets anyway. Massive companies have been doing this because people at the top are assured that AI can do the work. So it's not so much a problem of AI itself, but a problem with the salesmen foisting it off on companies, and with those at the top not listening to their teams when they say that testing is still needed.

        • guilevi@dragonscave.space
          wrote last edited by
          #12

          @Onj I mostly agree with you on the principle of not dismissing LLM code outright, though I do think the analogy might be slightly mischosen. It's less like using a washing machine and more like using a hypothetical sort of clothes vending machine that puts together and sews your clothes on demand. Your clothes are already made when you wash them, and presumably you read the washing instructions on the label and set your washing machine to the right settings. So yes, you're trusting it to follow those settings and not mess up your expensive clothes, but you're not really having it create anything, and the settings are quite limited in scope. I do think there is ethical use of AI, and I try to use it responsibly, even though its background is very problematic and we ought to be conscious of that, so I definitely agree with you. But I also know that the ratio of shitty to decent AI-coded projects is much, much higher than the ratio of disastrous to successful washing machine cycles. Hey, how many tokens' worth of water does a washing machine cycle use? Now that's a thought!

          • onj@hear-me.social
            #13

            @guilevi Yep, all fair points.

            • onj@hear-me.social
              #14

              @bermudianbrit Yeah, that I don't agree with at all, but I don't know where my level of 'this has to stop' really is. I think it's hard to define.

              • chikim@mastodon.social
                wrote last edited by
                #15

                @Onj Somewhat related, but this is the first technology humans have invented without fully understanding how it works. Until now, for every technology humans invented, someone knew exactly how it worked. That's why there are a bunch of AI safety people working on interpretability, trying to find out how it works. Still, we don't know much.

                • onj@hear-me.social

                  @BTowersCoding How did humans make a thing without knowing how it works? Genuine question. This seems odd to me.

                  jscholes@dragonscave.space
                  #16

                  @Onj @BTowersCoding We know how random number generators work, but we don't know what number a properly made one will spit out next.

                  We know how LLMs do what they do, and hence we can be absolutely certain that they are non-deterministic.
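The distinction drawn here, knowing the mechanism completely while not knowing the next output, can be sketched in a few lines of Python. The token names and scores below are made up purely for illustration: a softmax turns known scores into an exactly known probability distribution, yet each draw from it is still chance, and only fixing the seed makes the draw repeatable.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for three candidate tokens.
tokens = ["cat", "dog", "fish"]
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)

# The mechanism is fully specified: we know the exact probabilities
# without knowing which token comes out next. Seeding the generator
# is the only thing that makes the sequence of draws reproducible.
rng = random.Random(42)
sample = rng.choices(tokens, weights=probs, k=5)
print(probs)   # the exact, known distribution
print(sample)  # one particular random draw from it
```

In the same spirit, an LLM's output probabilities are fully determined by its weights and input, but sampled generation is a draw from them; run greedily (always taking the most probable token), the same setup becomes deterministic.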

                    • sapphireangel@mastodon.online
                    wrote last edited by
                    #17

                    @Onj I think you make some good points here. I fully agree AI should be tested always.

                      • justinmac84@mastodon.social
                      #18

                      @Onj The analogy is a little flawed because you're comparing an end user to a developer. If I create a washing machine with no idea how it works, give it to people, and things break that I then have no idea how to fix, that's on me. Any end user of any program may not know how it's going to work, but they can go to the manufacturer, outline their problems, and hopefully get fixes or workarounds.

                        • onj@hear-me.social
                        wrote last edited by
                        #19

                        @JustinMac84 You can go back to your coding agent and outline the problems and, if done right, get fixes too. Not always, and not always well, but that's what testing's for, isn't it?

                          • justinmac84@mastodon.social
                          #20

                          @Onj So you receive a support ticket and negotiate with your user, while simultaneously submitting a support ticket to your AI of choice and negotiating with that. If you can't duplicate the problem the user is having, what would you do? You wouldn't know how to advise them, and would have to pass on every piece of possibly incorrect, possibly unsafe advice the model gave you and await feedback from the user. Exponentially grow that problem for every bug...

                            • onj@hear-me.social
                            #21

                            @JustinMac84 Yep, but if there were such a thing as Fiverr for coding instead of music, the same thing would apply there. Humans could be just as devious: make something that looks good and works on the outside but steals your crypto on the inside. Not nice.

                              • justinmac84@mastodon.social
                              #22

                              @Onj exponentially grow that problem for every bug report received, whereas a programmer with innate skill, one with an intimate understanding of software architecture, hardware, browser influence etc., would be able to have a more direct conversation, offering competent solutions they could be more assured of success with.

                              What damage would all the wild goose chases and delays do to your brand?

                              • onj@hear-me.socialO onj@hear-me.social

                                @JustinMac84 You can go back to your coding agent and outline the problems and, if done right, get fixes too. Not always, and not always well, but that's what testing's for, isn't it?

                                J This user is from outside of this forum
                                justinmac84@mastodon.social
                                wrote last edited by
                                #23

                                @Onj Also, integrating yourself into someone else's system, i.e. curating AI code for errors, bears a higher cognitive load and has a higher risk of things being missed than if you code yourself.

                                1 Reply Last reply
                                0
                                • onj@hear-me.socialO onj@hear-me.social

                                  @JustinMac84 Yep, but if there were such a thing as Fiverr for coding instead of music, the same thing would apply there. Humans could be just as devious: make something that looks good and works on the outside, but steals your crypto on the inside. Not nice.

                                  J This user is from outside of this forum
                                  justinmac84@mastodon.social
                                  wrote last edited by
                                  #24

                                  @Onj I don't understand the point. There is Fiverr for coding. You can commission people to produce software for you. Thing is, with human-coded software, culpability can be traced back. Imagine my shock, my horror, my outrage, when you told me the software I had my model produce for you introduced vulnerabilities! However did that happen? There's no way for you to prove that I didn't do it on purpose, or that the model didn't mess up.

                                  onj@hear-me.socialO 1 Reply Last reply
                                  0
                                  • onj@hear-me.socialO onj@hear-me.social

                                    @JustinMac84 Yep, but if there were such a thing as Fiverr for coding instead of music, the same thing would apply there. Humans could be just as devious: make something that looks good and works on the outside, but steals your crypto on the inside. Not nice.

                                    J This user is from outside of this forum
                                    justinmac84@mastodon.social
                                    wrote last edited by
                                    #25

                                    @Onj Proliferating the ability to produce software to many, many more people just exponentially increases the possibility of malice, unintentional vulnerabilities and incompetence. At least the hacker in your example is human, and can therefore be blamed, and had to invest a lot of time to become skilled. Do they want to blow that investment on bad acting?

                                    1 Reply Last reply
                                    0
                                    • J justinmac84@mastodon.social

                                      @Onj I don't understand the point. There is Fiver for coding. You can commission people to produce software for you. Thing is, human-coded software, the culpability can be traced back. Imagine my shock, my horror, my outrage, when you told me the software I had my model produce for you introduced vulnerabilities! However did that happen? there's no way for you to prove that I didn't do it on purpose or that the model didn't mess up.

                                      onj@hear-me.socialO This user is from outside of this forum
                                      onj@hear-me.social
                                      wrote last edited by
                                      #26

                                      @JustinMac84 Sure, but I think you're doing what most people do right now: assuming the absolute worst-case scenario. Honestly, I don't know why people do this, other than that it scores points, but OK, point made. It could be terrible. It could be catastrophic, but... What if it just isn't? What if it simply does the job it's intended to do?

                                      J 3 Replies Last reply
                                      0
                                      • onj@hear-me.socialO onj@hear-me.social

                                        @JustinMac84 Yep, but if there were such a thing as Fiverr for coding instead of music, the same thing would apply there. Humans could be just as devious: make something that looks good and works on the outside, but steals your crypto on the inside. Not nice.

                                        J This user is from outside of this forum
                                        justinmac84@mastodon.social
                                        wrote last edited by
                                        #27

                                        @Onj Whereas an abusive partner could quite happily blow a day's effort to produce a tracker, keylogger or other piece of malicious software with which to infect a partner, an ex, or a rival business.

                                        1 Reply Last reply
                                        0
                                        • onj@hear-me.socialO onj@hear-me.social

                                          @JustinMac84 Yep, but if there were such a thing as Fiverr for coding instead of music, the same thing would apply there. Humans could be just as devious: make something that looks good and works on the outside, but steals your crypto on the inside. Not nice.

                                          J This user is from outside of this forum
                                          justinmac84@mastodon.social
                                          wrote last edited by
                                          #28

                                          @Onj But anyway, this doesn't address your original point, my answer to which is that it's fine for an end user to have no idea how their product works and not to be able to fix it unaided, but much less so for a dev or business supplying people with something they have no idea about.

                                          onj@hear-me.socialO 1 Reply Last reply
                                          0