starting to wonder if the MLG pro strat of LLM critique is just to go relatively quiet for the next 18 months and wait for the cost of inference to go vertical so everyone has to stop using it regardless
-
@glyph That interval's gonna be rough, though - even if I can keep quietly doing my work myself, I still have to wade through everyone else's slopcode, which is incoming at a whole new pace now
@delta_vee so, if you're dealing with a lot of inbound — I am, honestly, not; for some reason none of my open source projects are receiving much in the way of slop PRs or even security reports — I am curious if *you* have a perspective on the "claude is so much better now" subjective experience. Is the slop of more manageable quality in the last 3 months or so?
-
I don't _think_ they'll be able to fully deskill the whole software industry and become blocking obstructions in every knowledge-work workflow in that time, right?
@glyph one issue with this thought process is "the software industry". It's just not an actual thing, though it kind of seems like one because software people have historically been able to easily pivot between software jobs in various industries.
-
@glyph but the people trying to do what you're talking about don't need to deskill that whole industry, they just need to deskill the people willing to work in software in the industries they care about (the not highly optimized parts of advertising software, etc.).
-
@glyph we don't think so either. it'll be a fight but there are too many of us who actually love knowing stuff.
@ireneista @glyph I mean, just look at all of us who have learned what there already is. And consider how much absolute doo doo the tubes are full of already.
We still love puzzles, we still love to play and make things.
-
@glyph that has mostly been my strategy. I let everyone at work go ham and I just keep working. In 6 to 12 months there will be nothing left of it, so eh.
They are adults, they can burn themselves.
-
@glyph ex: work went from ai-boosterism to "this is mandatory, slop or get fired, all training cancelled in favor of agentic ai coding bot training, etc"
everyone i talk to says "claude has gotten so much better since december" in lockstep
@cap_ybarra @glyph I remember "claude has gotten so much better since december, it used to not be able to do shit" was the same thing they said a year ago at the same time
-
@glyph Many of the great devs I’ve met were self-taught. I’m pretty sure LLMs won’t destroy human curiosity (though social media might make us too lazy). 90% of the people I would call at the top of the game weren’t applying stuff they learned in school. School/uni gave them foundations. Those foundations aren’t gone overnight.
-
@glyph What I wonder is how companies that force their employees to code with these tools are going to transition into a world of expensive inference. Not sure they'll ask everyone to revert to brain use. Sounds more likely that they'll cover the increased costs with layoffs.
-
@miguelgrinberg @glyph The squeeze comes when they've already done those layoffs, and then the tools go vertical.
-
@glyph Quite honestly, the slop has gotten less shit. It's still not good per se, but it's not trivially dismissable. The problem is, it's locally coherent but not necessarily globally; it never quite fits the style, and of course it has that LLM hollowness and lack of intention, which makes it harder to actually read through.
I think that lack of intention is actually my biggest problem (well, and the volume): when a real human writes code, there's (usually) a consistency of intention in there, which gives the thing a form of cohesion, but these don't have any of that. Any function they create, even if it works just fine, doesn't have any of that feel to it, that sense of affordance.
All that makes it much harder to go through, because you have to trace everything yourself, and do all the thinking about how it's going to be in the future which the bot obviously didn't. Does that make sense?