the thing i’m struggling with is that AI has crossed a threshold where it’s actually useful for work, gasp, but the discourse has been so poisoned by over-hype and fascism it’s hard to talk about

26 Posts 6 Posters
  • henry@radikal.social

    @phillmv another thing making it hard to talk about imo is that anyone who's successfully boycotted it for those past 12-18 months now has an extremely out-of-date perspective on its capabilities. so it adds up to three different alternate realities talking past each other quite a lot. like i still see people opposing it on the basis that it "doesn't work" when from where i'm sitting we have a _far_ worse problem that all the other problems still apply but that one less and less so

    phillmv@hachyderm.io #6

    @henry it’s still overhyped constantly. it’s a big struggle. hard to communicate that it’s still sloppy but useful

  • phillmv@hachyderm.io

      because this is the fediverse, an ethics disclosure:

      - AI has been very harmful to the open web’s infrastructure
      - it’s plain to see that AI has hurt a lot of people’s cognitive and emotional skills
      - the dumbest and most evil people alive misuse it constantly
      - i don’t really believe in copyright tbh; my ideal compromise is we make every academic paper free for everyone, not just big tech companies
      - so far AI’s externalities outweigh the positives
      - the environmental costs are real but overstated; imho can be reduced to “capitalism is bad for the environment and rich people need to be stopped”

      phillmv@hachyderm.io #7

      (also people really ought to disclose when they use it. nothing makes my blood boil like being asked to review slop they haven’t read, or realizing a blog author’s become prolific because they’re cutting a lot of corners. just disclose!)


        yyj1983@fans.fans #8

        @phillmv Makes sense 😁


          jalcine@todon.eu #9

          @phillmv this is my current dilemma


            yoasif@mastodon.social #10

            RE: https://hachyderm.io/@phillmv/116374969941559197

            @phillmv Quoting you. What is there to talk about after we take all of that into consideration?

            PS: I think it is hard to talk about because there's nothing to talk about besides special pleading.


              phillmv@hachyderm.io #11

              @yoasif the past three-ish years it was extremely impressive but also kind of useless.

              the harms obviously outweighed the benefit.

              now, however, it has caught up to (some of) the hype: i’m feeling excited about the kinds of projects i’ll be able to deliver with good quality.


                yoasif@mastodon.social #12

                @phillmv The harms haven't gone away - it sounds like you are just doing the special pleading thing.


                  phillmv@hachyderm.io #13

                  @yoasif i’m happy to engage on the harms.

                  broadly speaking i think the harms currently outweigh the benefits; as of today, if i could wish the technology away i think i would. as it is, we need to regulate it more.

                  that said, does how other people use the tool impact the morality of how i use it? i don’t know. i’m not sending people spam.

                  i don’t really believe in intellectual property so we can skip “theft”.

                  this mostly leaves us with environmental concerns and social upheaval.

                  as a programmer it feels hypocritical to complain about automation being inherently bad; automating tasks has been my whole career.

                  environment is kind of the strongest angle, but that’s downstream of not having clean energy. if you could build it all on wind and solar power then it’d be OK


                    yoasif@mastodon.social #14

                    RE: https://mastodon.social/@yoasif/116301328058936154

                    @phillmv I think that if you don't believe in IP, it's hard to get to a place where you are going to convince people that AI is good, unless you can somehow convince people that IP shouldn't exist.

                    I can't get there personally, since I know that much of the code powering these models was taken from people who were contributing with the knowledge that their contributions would be free forever (copyleft), and I fear that that goes away.

                    How does copyleft exist in a world without copyright?


                      yoasif@mastodon.social #15

                      @phillmv Beyond that, even if you believe in the abolition of copyright, what do we do about the stolen labor? Just ignore that it was stolen?

                      It isn't as if the LLM vendors are playing fair here - they knew that people were restricting their works under existing law, and instead of lobbying governments to abolish copyright, they are instead simply taking from the commons.

                      Should we simply ignore that?


                        phillmv@hachyderm.io #16

                        @yoasif copyleft is a hack that uses copyright as a way of enforcing contributions back to the commons. i generally license my code (A,L)GPL and i think ppl who complain about the GPL are fools

                        but! the important part is the existence of a commons, not the exact enforcement mechanism - i use a lot of MIT and Apache licensed code too. i prefer it when ppl are forced to share but sharing still happens without it

                        i won’t go into too much detail cos i’m still working on a demo, but my early vibe is the commons might stand to benefit; i think we’ll be able to use LLMs to clone proprietary software and place it in the commons


                          phillmv@hachyderm.io #17

                          @yoasif when Aaron Swartz crawled all of JSTOR i thought that was cool. my ideal solution here is making all of JSTOR public.

                          i agree that the current equilibrium where only OpenAI and Anthropic get to copy all of JSTOR is deeply unfair.


                            yoasif@mastodon.social #18

                            @phillmv I disagree and I just wrote about it: https://www.quippd.com/writing/2026/04/08/ai-code-is-hollowing-out-open-source-and-maintainers-are-looking-the-other-way.html

                            The idea that people will be able to clone proprietary software and place it into the commons is an interesting idea - except for the fact that the models are very much copying machines - if the proprietary software is built on innovation not already copied by the commons (and models), that clone isn't coming out the other end. That means using your brain.

                            Besides which, the LLMs aren't going to be cheap forever.


                              phillmv@hachyderm.io #19

                              @yoasif LLMs are actually quite good at disassembling existing software and translating it into new languages.

                              as of today this still requires a lot of human effort but i feel confident that before LLM innovation peters out we’ll be able to clone most things that expose an API


                                yoasif@mastodon.social #20

                                @phillmv Aaron at least had an argument that the works he was pirating were based on foundational research funded by the public (owing their existence to them) - he wanted to return it to the public.

                                What is happening with OpenAI/Anthropic is deeply different - they are taking from people and companies who contributed to the commons (and who wanted it to remain there), and are selling it back to the monied interests.

                                Sort of a reverse Robin Hood - stealing from the poor to give to the rich.


                                  phillmv@hachyderm.io #21

                                  @yoasif yeah i agree - i just think the solution is to do what Aaron was trying to do, not to go back to the status quo


                                    yoasif@mastodon.social #22

                                    @phillmv But not really: https://blog.katanaquant.com/p/your-llm-doesnt-write-correct-code

                                    The LLM reproduces code it has copied into its corpus, it is not producing new works based on language semantics.

                                    Monkey see, monkey do.


                                      yoasif@mastodon.social #23

                                      @phillmv How is propping up the LLM companies doing what Aaron was trying to do?

                                      Aaron was Robin Hood.

                                      The LLM companies are the opposite.


                                        phillmv@hachyderm.io #24

                                        @yoasif this article is complaining about a vibe-coded rust port; i don’t think you can vibe code a port of a project as complex as sqlite just yet.

                                        my claim is more that porting sqlite to rust has gone from a 2-year project to a 3-month project.


                                          yoasif@mastodon.social #25

                                          @phillmv When the code is in the corpus, the LLM generates plausible code.

                                          That doesn't mean it is good, or that you can protect it in any way.

                                          If you are saying that people will be able to describe an app and produce something plausible if the code exists in the corpus... perhaps.

                                          That assumes that people are interested in feeding the models for free - LLMs copy, so if it isn't already a solved problem, you are still going to need to use your brain.
