a thing i’m struggling with is that AI has crossed a threshold where it’s actually useful for work, gasp, but the discourse has been so poisoned by over-hype and fascism that it’s hard to talk about
-
i would say the hype is about 12-18mo ahead of the tech; opus 4.6 is about as good as people said this stuff was a year ago
ie what ppl said was an urgent reality one year ago has actually finally arrived
@phillmv another thing making it hard to talk about imo is that anyone who's successfully boycotted it for those past 12-18 months now has an extremely out-of-date perspective on its capabilities. so it adds up to three different alternate realities talking past each other quite a lot. like i still see people opposing it on the basis that it "doesn't work", when from where i'm sitting we have a _far_ worse problem: all the other problems still apply, but that one less and less so
-
@phillmv another thing making it hard to talk about imo is that anyone who's successfully boycotted it for those past 12-18 months now has an extremely out-of-date perspective on its capabilities. so it adds up to three different alternate realities talking past each other quite a lot. like i still see people opposing it on the basis that it "doesn't work", when from where i'm sitting we have a _far_ worse problem: all the other problems still apply, but that one less and less so
@henry it’s still overhyped constantly. it’s a big struggle. hard to communicate that it’s still sloppy but useful
-
because this is the fediverse, an ethics disclosure:
- AI has been very harmful to the open web’s infrastructure
- it’s plain to see that AI has hurt a lot of people’s cognitive and emotional skills
- the dumbest and most evil people alive misuse it constantly
- i don’t really believe in copyright tbh; my ideal compromise is making every academic paper free for everyone, not just big tech companies
- so far AI’s externalities outweigh the positives
- the environmental costs are real but overstated; imho it reduces to “capitalism is bad for the environment and rich people need to be stopped”
(also, people really ought to disclose when they use it. nothing makes my blood boil like being asked to review slop someone hasn’t even read, or realizing a blog author’s become prolific because they’re cutting a lot of corners. just disclose!)
-
because this is the fediverse, an ethics disclosure:
- AI has been very harmful to the open web’s infrastructure
- it’s plain to see that AI has hurt a lot of people’s cognitive and emotional skills
- the dumbest and most evil people alive misuse it constantly
- i don’t really believe in copyright tbh; my ideal compromise is making every academic paper free for everyone, not just big tech companies
- so far AI’s externalities outweigh the positives
- the environmental costs are real but overstated; imho it reduces to “capitalism is bad for the environment and rich people need to be stopped”
@phillmv fair point

-
a thing i’m struggling with is that AI has crossed a threshold where it’s actually useful for work, gasp, but the discourse has been so poisoned by over-hype and fascism that it’s hard to talk about
@phillmv this is my current dilemma
-
a thing i’m struggling with is that AI has crossed a threshold where it’s actually useful for work, gasp, but the discourse has been so poisoned by over-hype and fascism that it’s hard to talk about
RE: https://hachyderm.io/@phillmv/116374969941559197
@phillmv Quoting you. What is there to talk about after we take all of that into consideration?
PS: I think it is hard to talk about because there's nothing to talk about besides special pleading.
-
RE: https://hachyderm.io/@phillmv/116374969941559197
@phillmv Quoting you. What is there to talk about after we take all of that into consideration?
PS: I think it is hard to talk about because there's nothing to talk about besides special pleading.
@yoasif for the past three-ish years it was extremely impressive but also kind of useless.
the harms obviously outweighed the benefits.
now, however, it’s caught up to (some of) the hype: i’m feeling excited about the kinds of projects i’ll be able to deliver with good quality.
-
@yoasif for the past three-ish years it was extremely impressive but also kind of useless.
the harms obviously outweighed the benefits.
now, however, it’s caught up to (some of) the hype: i’m feeling excited about the kinds of projects i’ll be able to deliver with good quality.
@phillmv The harms haven't gone away - it sounds like you are just doing the special pleading thing.
-
@phillmv The harms haven't gone away - it sounds like you are just doing the special pleading thing.
@yoasif i’m happy to engage on the harms.
broadly speaking i think the harms currently outweigh the benefits; as of today, if i could wish the technology away, i think i would. as it is, we need to regulate it more.
that said, does how other people use the tool impact the morality of how i use it? i don’t know. i’m not sending people spam.
i don’t really believe in intellectual property so we can skip “theft”.
this mostly leaves us with environmental concerns and social upheaval.
as a programmer it feels hypocritical to moralize about automation being inherently bad; automating tasks has been my whole career.
the environment is kind of the strongest angle, but that’s downstream of not having clean energy. if you could build it all on wind and solar power then it’d be OK
-
@yoasif i’m happy to engage on the harms.
broadly speaking i think the harms currently outweigh the benefits; as of today, if i could wish the technology away, i think i would. as it is, we need to regulate it more.
that said, does how other people use the tool impact the morality of how i use it? i don’t know. i’m not sending people spam.
i don’t really believe in intellectual property so we can skip “theft”.
this mostly leaves us with environmental concerns and social upheaval.
as a programmer it feels hypocritical to moralize about automation being inherently bad; automating tasks has been my whole career.
the environment is kind of the strongest angle, but that’s downstream of not having clean energy. if you could build it all on wind and solar power then it’d be OK
RE: https://mastodon.social/@yoasif/116301328058936154
@phillmv I think that if you don't believe in IP, it's hard to get to a place where you are going to convince people that AI is good, unless you can somehow convince people that IP shouldn't exist.
I can't get there personally, since I know that much of the code powering these models was taken from people who were contributing with the knowledge that their contributions would be free forever (copyleft), and I fear that that goes away.
How does copyleft exist in a world without copyright?
-
RE: https://mastodon.social/@yoasif/116301328058936154
@phillmv I think that if you don't believe in IP, it's hard to get to a place where you are going to convince people that AI is good, unless you can somehow convince people that IP shouldn't exist.
I can't get there personally, since I know that much of the code powering these models was taken from people who were contributing with the knowledge that their contributions would be free forever (copyleft), and I fear that that goes away.
How does copyleft exist in a world without copyright?
@phillmv Beyond that, even if you believe in the abolition of copyright, what do we do about the stolen labor? Just ignore that it was stolen?
It isn't as if the LLM vendors are playing fair here - they knew that people were restricting their works under existing law, and instead of lobbying governments to abolish copyright, they are instead simply taking from the commons.
Should we simply ignore that?
-
RE: https://mastodon.social/@yoasif/116301328058936154
@phillmv I think that if you don't believe in IP, it's hard to get to a place where you are going to convince people that AI is good, unless you can somehow convince people that IP shouldn't exist.
I can't get there personally, since I know that much of the code powering these models was taken from people who were contributing with the knowledge that their contributions would be free forever (copyleft), and I fear that that goes away.
How does copyleft exist in a world without copyright?
@yoasif copyleft is a hack that uses copyright as a way of enforcing contributions back to the commons. i generally license my code (A,L)GPL and i think ppl who complain about the GPL are fools
but! the important part is the existence of a commons, not the exact enforcement mechanism - i use a lot of MIT and Apache licensed code too. i prefer it when ppl are forced to share but sharing still happens without it
i won’t go into too much detail cos i’m still working on a demo, but my early vibe is the commons might stand to benefit; i think we’ll be able to use LLMs to clone proprietary software and place it in the commons
-
@phillmv Beyond that, even if you believe in the abolition of copyright, what do we do about the stolen labor? Just ignore that it was stolen?
It isn't as if the LLM vendors are playing fair here - they knew that people were restricting their works under existing law, and instead of lobbying governments to abolish copyright, they are instead simply taking from the commons.
Should we simply ignore that?
@yoasif when Aaron Swartz crawled all of JSTOR i thought that was cool. my ideal solution here is making all of JSTOR public.
i agree that the current equilibrium where only OpenAI and Anthropic get to copy all of JSTOR is deeply unfair.
-
@yoasif copyleft is a hack that uses copyright as a way of enforcing contributions back to the commons. i generally license my code (A,L)GPL and i think ppl who complain about the GPL are fools
but! the important part is the existence of a commons, not the exact enforcement mechanism - i use a lot of MIT and Apache licensed code too. i prefer it when ppl are forced to share but sharing still happens without it
i won’t go into too much detail cos i’m still working on a demo, but my early vibe is the commons might stand to benefit; i think we’ll be able to use LLMs to clone proprietary software and place it in the commons
@phillmv I disagree and I just wrote about it: https://www.quippd.com/writing/2026/04/08/ai-code-is-hollowing-out-open-source-and-maintainers-are-looking-the-other-way.html
The idea that people will be able to clone proprietary software and place it into the commons is an interesting idea - except for the fact that the models are very much copying machines - if the proprietary software is built on innovation not already copied by the commons (and models), that clone isn't coming out the other end. That means using your brain.
Besides which, the LLMs aren't going to be cheap forever.
-
@phillmv I disagree and I just wrote about it: https://www.quippd.com/writing/2026/04/08/ai-code-is-hollowing-out-open-source-and-maintainers-are-looking-the-other-way.html
The idea that people will be able to clone proprietary software and place it into the commons is an interesting idea - except for the fact that the models are very much copying machines - if the proprietary software is built on innovation not already copied by the commons (and models), that clone isn't coming out the other end. That means using your brain.
Besides which, the LLMs aren't going to be cheap forever.
@yoasif LLMs are actually quite good at disassembling existing software and translating it into new languages.
as of today this still requires a lot of human effort, but i feel confident that before LLM innovation peters out we’ll be able to clone most things that expose an API
-
@yoasif when Aaron Swartz crawled all of JSTOR i thought that was cool. my ideal solution here is making all of JSTOR public.
i agree that the current equilibrium where only OpenAI and Anthropic get to copy all of JSTOR is deeply unfair.
@phillmv Aaron at least had an argument that the works he was pirating was based on foundational research funded by the public (owing their existence to them) - he wanted to return it to the public.
What is happening with OpenAI/Anthropic is deeply different - they are taking from people and companies who contributed to the commons (and who wanted it to remain there), and selling it back to the monied interests.
Sort of a reverse Robin Hood - stealing from the poor to give to the rich.
-
@phillmv Aaron at least had an argument that the works he was pirating was based on foundational research funded by the public (owing their existence to them) - he wanted to return it to the public.
What is happening with OpenAI/Anthropic is deeply different - they are taking from people and companies who contributed to the commons (and who wanted it to remain there), and selling it back to the monied interests.
Sort of a reverse Robin Hood - stealing from the poor to give to the rich.
@yoasif yeah i agree - i just think the solution is to do what Aaron was trying to do, not to go back to the status quo
-
@yoasif LLMs are actually quite good at disassembling existing software and translating it into new languages.
as of today this still requires a lot of human effort, but i feel confident that before LLM innovation peters out we’ll be able to clone most things that expose an API
@phillmv But not really: https://blog.katanaquant.com/p/your-llm-doesnt-write-correct-code
The LLM reproduces code it has copied into its corpus, it is not producing new works based on language semantics.
Monkey see, monkey do.
-
@yoasif yeah i agree - i just think the solution is to do what Aaron was trying to do, not to go back to the status quo
@phillmv How is propping up the LLM companies doing what Aaron was trying to do?
Aaron was Robin Hood.
The LLM companies are the opposite.
-
@phillmv But not really: https://blog.katanaquant.com/p/your-llm-doesnt-write-correct-code
The LLM reproduces code it has copied into its corpus, it is not producing new works based on language semantics.
Monkey see, monkey do.
@yoasif this article is complaining about a vibe-coded rust port; i don’t think you can vibe-code a port of a project as complex as sqlite just yet.
my claim is more that porting sqlite to rust has gone from a two-year project to a three-month project.