I think the thing that makes me saddest about this whole “use an LLM to generate code” thing is that many of my heroes are no longer my heroes.
-
@samir i keep repeating to myself “my admiration for them was an illusion that needed to die” but clearly: same.
Not only grieving people i admire but also profound friends that I loved and admired.
-
@samir the lesson I keep learning (am I?) is not to have heroes in the first place. But it’s hard and it sucks when they fall.
-
@romeu Yeah, that’s it, it was an illusion. I can appreciate their past work, their writing, whatever, without putting them on a daïs.
A lesson I will learn over and over again, no doubt!
I’m sorry for your losses too. They’re in our heads, but they’re still real.
-
@janl I’m learning it for the third or fourth time. Let’s see if it sticks!
It is hard, and I’m sorry.
-
I'm generally against hyper-large-model-based generative algorithms, for at least 3 reasons:
1. the ethics of training resources and getting training data
2. new class of bugs and security risks, as well as the poor maintainability of generated code
3. deskilling as people rely on these auto-parrots, and no longer learn the actual skill these machines are bad at mimicking
I was surprised to see a world-famous mathematician invest a lot of time in AI-aided proofs. I guess the difference between AI-generated code or human-language content and AI-generated proofs is that the proofs can be checked with proof checkers (which are not AI). I'm still thinking about whether this "idea generation" is a good thing or a bad thing.
-
@samir Proof once more that people don't have hidden depths, only hidden shallows.
-
@lizzy There are some good ones. They’re typically the ones who actually ship code!
And then there’s Anil Dash, who, up until last year, I worshipped.
-
@samir @lizzy Anil Dash has posted absolute junk recently. Ron Jeffries is one of the good ones too. I'm discovering more heroes as I read new thoughtful posts, because they show me the people who are smart *and* compassionate. When I was a young engineer, smart would have been enough for me, but now I'm trying my best to live my personal values.
-
@sanityinc @lizzy Ron Jeffries is still one of my heroes, indeed.

-
@samir agreed
-
@samir @sanityinc @lizzy @RonJeffries is also one of my heroes!
-
@adrian I think there’s both! (I hope there’s both.)
-
@samir I think I'm even more exhausted by the well-meaning, well-reasoned, balanced takes that try to find the positives in LLMs. I appreciate it and also usually try to argue in that way, but specifically for this topic it's draining the life out of me.
-
@dtemme Is it because most of these takes are in bad faith?
Because it’s definitely not in good faith when you ignore negative externalities (climate, etc.), power differential, the abuse required to make the product (model-training sweatshops)…
If anything, I think that one of the crimes here is that we technology people mostly ignored all this when it was in other fields, e.g. fast fashion.
-
@dtemme Maybe that’s it. It’s not just that fast fashion, and fast programming, is bad. It’s that no one is arguing fast fashion is actually great and we should be doing way more of it. But for programming, there are so many people saying exactly this.
-
@samir agreed. Though I'm not sure I think of these as bad faith as such. And maybe that makes it feel worse for me.
I just can't get my head around why people are so eager to prove that their work can easily be replaced by the statistical average of the stolen works of the most prolific of their peers.
-
@dtemme I don’t get it either. Major despair.