the thing i’m struggling with is AI has crossed a threshold where it’s actually useful for work, gasp, but the discourse has been so poisoned by over-hype and fascism it’s hard to talk about

Uncategorized · 26 Posts · 6 Posters
phillmv@hachyderm.io:

@yoasif i’m happy to engage on the harms.

broadly speaking i think the harms currently outweigh the benefits; as of today, if i could wish the technology away, i think i would. as it is, we need to regulate it more.

that said, does how other people use the tool impact the morality of how i use it? i don’t know. i’m not sending people spam.

i don’t really believe in intellectual property, so we can skip “theft”.

this mostly leaves us with environmental concerns and social upheaval.

as a programmer it feels hypocritical to complain about automation being inherently bad; automating tasks has been my whole career.

the environment is probably the strongest angle, but that’s downstream of not having clean energy. if you could build it all on wind and solar power then it’d be OK.

yoasif@mastodon.social · #14

    RE: https://mastodon.social/@yoasif/116301328058936154

    @phillmv I think that if you don't believe in IP, it's hard to get to a place where you are going to convince people that AI is good, unless you can somehow convince people that IP shouldn't exist.

I can't get there personally, since I know that much of the code powering these models was taken from people who contributed with the knowledge that their contributions would be free forever (copyleft), and I fear that that goes away.

    How does copyleft exist in a world without copyright?

yoasif@mastodon.social · #15

      @phillmv Beyond that, even if you believe in the abolition of copyright, what do we do about the stolen labor? Just ignore that it was stolen?

      It isn't as if the LLM vendors are playing fair here - they knew that people were restricting their works under existing law, and instead of lobbying governments to abolish copyright, they are instead simply taking from the commons.

      Should we simply ignore that?

phillmv@hachyderm.io · #16

@yoasif copyleft is a hack that uses copyright as a way of enforcing contributions back to the commons. i generally license my code (A,L)GPL and i think people who complain about the GPL are fools.

but! the important part is the existence of a commons, not the exact enforcement mechanism - i use a lot of MIT and Apache licensed code too. i prefer it when people are forced to share, but sharing still happens without it.

i won’t go into too much detail because i’m still working on a demo, but my early vibe is that the commons might stand to benefit; i think we’ll be able to use LLMs to clone proprietary software and place it in the commons.

phillmv@hachyderm.io · #17

@yoasif when Aaron Swartz crawled all of JSTOR i thought that was cool. my ideal solution here is making all of JSTOR public.

          i agree that the current equilibrium where only OpenAI and Anthropic get to copy all of JSTOR is deeply unfair.

yoasif@mastodon.social · #18

            @phillmv I disagree and I just wrote about it: https://www.quippd.com/writing/2026/04/08/ai-code-is-hollowing-out-open-source-and-maintainers-are-looking-the-other-way.html

The idea that people will be able to clone proprietary software and place it into the commons is interesting - except that the models are very much copying machines. If the proprietary software is built on innovation not already copied by the commons (and the models), that clone isn't coming out the other end. That means using your brain.

            Besides which, the LLMs aren't going to be cheap forever.

phillmv@hachyderm.io · #19

              @yoasif LLMs are actually quite good at disassembling existing software and translating it into new languages.

              as of today this still requires a lot of human effort but i feel confident that before LLM innovation peters out we’ll be able to clone most things that expose an API

yoasif@mastodon.social · #20

@phillmv Aaron at least had an argument that the works he was pirating were based on foundational research funded by the public (they owed their existence to it) - he wanted to return them to the public.

What is happening with OpenAI/Anthropic is deeply different - they are taking from people and companies who contributed to the commons (and who wanted it to remain there), and are selling it back to the monied interests.

Sort of a reverse Robin Hood - stealing from the poor to give to the rich.

phillmv@hachyderm.io · #21

                  @yoasif yeah i agree - i just think the solution is to do what Aaron was trying to do, not to go back to the status quo

yoasif@mastodon.social · #22

                    @phillmv But not really: https://blog.katanaquant.com/p/your-llm-doesnt-write-correct-code

The LLM reproduces code it has copied into its corpus; it is not producing new works based on language semantics.

                    Monkey see, monkey do.

yoasif@mastodon.social · #23

                      @phillmv How is propping up the LLM companies doing what Aaron was trying to do?

                      Aaron was Robin Hood.

                      The LLM companies are the opposite.

phillmv@hachyderm.io · #24

                        @yoasif this article is complaining about a vibe-coded rust port; i don’t think you can vibe code a port of a project as complex as sqlite just yet.

my claim is more that porting sqlite to rust has gone from a 2-year project to a 3-month project.

yoasif@mastodon.social · #25

                          @phillmv When the code is in the corpus, the LLM generates plausible code.

                          That doesn't mean it is good, or that you can protect it in any way.

If you are saying that people will be able to describe an app and get something plausible out when the code exists in the corpus... perhaps.

                          That assumes that people are interested in feeding the models for free - LLMs copy, so if it isn't already a solved problem, you are still going to need to use your brain.

configures@mindly.social · #26

@phillmv @yoasif it's not just the energy. AI data centers are stealing water from communities that need it badly. AI is a water hog. I can imagine cooling that doesn't use water, but that's not the reality right now.
