Don't use LLM generated code in your projects yet!

Uncategorized · 24 Posts · 17 Posters
#1 cwebber@social.coop

Don't use LLM generated code in your projects yet! If for no other reason than that the case law is NOT ESTABLISHED YET.

    I know there was the "copyright laundering" thing that went around a lot, but we actually don't know.

    You'll see commenters everywhere on the internet say that "the US Supreme Court ruled that AI generated output is in the public domain". That's misinfo: they *declined to take on* a case from a lower court coming to that conclusion. The US Supreme Court hasn't yet ruled.

    And this hasn't shaken out in an international setting yet either.

    You may be surprised to hear: I actually think it's more dangerous and empowers centralized AI companies even more if it *isn't* the case that AI output is in the public domain (I'll follow up about that), but regardless, right now we just don't know.

    But despite that, I'm STILL saying that you're putting yourself in legally dubious territory right now if you include LLM generated code, for now. We don't know yet.

#2 promovicz@chaos.social (in reply to #1)

      @cwebber Reckoning! Reckoning!

#3 cwebber@social.coop (in reply to #1)

        That said, I think a lot of people think we can fight AI / LLM output on copyright grounds, and I actually think that's a losing strategy. Copyright almost always helps the big players, and it would here too!

        You can see, they're already counting on and hoping it will be the case.

        What the big players want is for copyright to apply to AI generated output because then *only* the big players can provide LLM services. See also Sam Altman's "running intelligence as a metered utility" pitch.

        And the reason they could do this: *they* can make deals with Disney, Netflix, etc. But open models can't.

        But what about all the "little guys" stuff? Well, when you sign that ToS on GitHub, Stack Overflow, DeviantArt, etc etc etc, all those places, you give them a right to your content too.

        And THOSE places get to sell your rights.

        So fighting on copyright grounds won't be an even playing field. It helps the big AI companies win.

#4 cwebber@social.coop (in reply to #3)

          There are only two strategies which are acceptable: either AI model output is completely illegal because of copyright stuff (this is unlikely to happen because there is now too much money behind it), or AI model output is fully in the public domain, which has its own problems but at least is an even playing field.

          There won't be a middle ground that is safe. Because they want something that looks like a "middle ground", but really, all it does is lock in the big players' control over information, forever.

#5 mhoye@cosocial.ca (in reply to #3)

            @cwebber "The law always bends to capital, and when it doesn't capital buys new laws" is how I've heard that fundamentally expressed. Nobody should be looking at the copyright term extension acts and seeing a tool that benefits the people or the common good.

Copyright Term Extension Act - Wikipedia (en.wikipedia.org)

#6 rusty__shackleford@mastodon.social (in reply to #1)

@cwebber
Biggest enshittification to come: LLM companies trying to claim rights to the Linux kernel and every open-source project their software has touched.

From a copyright perspective, everyone is absolutely insane for doing this.

#7 promovicz@chaos.social (in reply to #4)

                @cwebber I think we should resist socially and politically, for as long as there is a point, and until we figure out "benign LLMs". I'm pretty sure that's possible.

#8 wyatt_h_knott@vermont.masto.host (in reply to #4)

@cwebber so, we will get a middle ground answer. Because what they actually want is to lock in the big players' control over information, forever. Just listen to Altman and his "we see intelligence as a utility that you will pay us for".

This is why Meta and Google are building fiber under the oceans. This is why Amazon wants to be all things to everyone. They want you locked in; they DO NOT LIKE the distributed power that the internet currently gives to individuals.

#9 woozle@toot.cat (in reply to #3)

                    @cwebber This agrees with my intuition on the matter -- the problem is not that content is being "stolen", it's that free AI "labor" "steals" the revenue that creators need in order to survive. For me, that points towards UBI, not reinforcing the highly unjust systems that trickle media revenue back to (a select few) creators.

                    (...speaking as a lifelong creator who almost made $5 playing live one time.)

#10 thomasjwebb@mastodon.social (in reply to #3)

@cwebber Now I feel dumb. This is basically what my concern has been: that a situation would arise where the regulatory or legal situation turns it into an oligopoly and destroys smaller software companies. Yet I didn't consider use of the output as a harm to OSS projects that use it (unless the code quality is bad), so I've been using it in a few OSS repos of mine on the grounds that my day job leaves me insufficient time to do it all myself. And thinking it'll get more expensive.

#11 janl@narrativ.es (in reply to #4)

                        @cwebber I’d settle for: if the models include licensed sources and use those without a license (proprietary or open source) then the model needs to be published openly and usage needs to be free.

#12 rootwyrm@weird.autos (in reply to #1)

                          @cwebber the US is not a country of laws, period. What USPTO says doesn't matter.

                          The EU however, just 3 days ago adopted text. LLM scammers MUST comply with licenses including payment to train on copyrighted work, regardless of location. And purely LLM generated slop *cannot be copyrighted*. There MUST be significant human contribution.

                          So purely LLM generated slop to try and license wash something is pretty much definitively unlawful now.

Protecting copyrighted work and the EU’s creative sector in the age of AI | News | European Parliament (www.europarl.europa.eu)
"To protect the creative sector in the EU, the use of copyrighted work by artificial intelligence requires transparency and fair remuneration, Parliament says."

#13 rootwyrm@weird.autos (in reply to #12)

                            @cwebber and remember, these are the dipshits pissing off the old companies that have infinite dollars by stealing *their* stuff. The people who spent millions turning copyright into a way to maintain monopolies and permanent rent-seeking.
                            The people who have used copyright as a weapon for many decades are decidedly not fans of 'companies' stealing the things they own to generate and sell things based on it.
                            And the LLM grifters absolutely do not have the money to pay them off.

#14 jrconlin@mindof.jrconlin.com (in reply to #4)

                              @cwebber

I fully expect well funded companies to repeatedly challenge "AI cannot be copyrighted because it wasn't human generated", and I expect it will be continually chipped away. That's going to make things stupidly complicated for a lot of non-technical reasons for a long, long time.

The advice I've given is to absolutely and definitively denote exactly what code was AI generated, and keep detailed records of the history around it (including the source and date), because I guarantee that will become the crux of any future decision.

                              Until there's case law established, AI code is a liability.

#15 ohir@social.vivaldi.net (in reply to #1)

@cwebber
It used to not be copyrightable. But considering the nazi track the US is slipping down, the new copyright act prepared by Bezos and Thiel over some bloody drink will say:

1) anything produced by humanity belongs to whomever the tyrant wants, as we have it all in the LLM.
2) any royalties go to us, see above.

                                [1] https://www.copyright.gov/newsnet/2025/1060.html

#16 kennethbousquet@mastodon.social (in reply to #1)

@cwebber In my opinion, the moment that personal information gets out into the public domain without proper consent, this becomes an actionable matter.
AI generated code must be open-source; done this way, it helps everybody to freely create.
The moment the $$$ gets into the picture, you are killing the true creative potential of the people.

#17 raggi@don.rag.pub (in reply to #4)

                                    @cwebber did you read the copyright office opinion doc? What’s your take on what it says?

#18 johannab@cosocial.ca (in reply to #7)

                                      @promovicz @cwebber

                                      There is validity, with all kinds of different framing, to resisting the careless use of a complex and poorly understood technology as the answer to Life, the Universe, and Everything.

                                      I think the thesis at hand though, is that trying to use outdated and inadequate, poorly fit-for-context copyright law as the tool (a technology, heh) to do that is not likely to be productive. It will consume our resources without meeting our purposes.

                                        johannab@cosocial.ca
                                        wrote last edited by
                                        #19

                                        @promovicz @cwebber

                                        Part of the problem still being … what, exactly, IS our purpose in this melee?

                                          martyfouts@mastodon.online
                                          wrote last edited by
                                          #20

                                          @cwebber The UK has a third option: the person operating the AI counts as the author, and the output is copyrighted. It would not surprise me if the industry lobbies more jurisdictions into similar legislation.
