I think the thing that makes me saddest about this whole “use an LLM to generate code” thing is that many of my heroes are no longer my heroes.
-
@samir Rob Pike is really really against it
@lizzy There are some good ones. They’re typically the ones who actually ship code!
And then there’s Anil Dash, who up until last year, I worshipped.
-
I think the thing that makes me saddest about this whole “use an LLM to generate code” thing is that many of my heroes are no longer my heroes.
@samir I thought I didn’t have heroes anymore (because most of them were sexual harassers) and then I realised a lot of my anxiety came from people I respected in the field 🫂
We shall learn to not have heroes
-
@samir I thought I didn’t have heroes anymore (because most of them were sexual harassers) and then I realised a lot of my anxiety came from people I respected in the field 🫂
We shall learn to not have heroes
@RosaCtrl Fuck yeah!
-
I think the thing that makes me saddest about this whole “use an LLM to generate code” thing is that many of my heroes are no longer my heroes.
I'm generally against hyper-large-model-based generative algorithms, for at least three reasons:
1. the ethics of training resources and getting training data
2. new class of bugs and security risks, as well as the poor maintainability of generated code
3. deskilling as people rely on these auto-parrots, and no longer learn the actual skill these machines are bad at mimicking
I was surprised to see a world-famous mathematician invest a lot of time in AI-aided proofs. I guess the difference between AI-generated code or human-language content and AI-generated proofs is that proofs can be checked with proof checkers (which are not AI). I'm still thinking about whether this "idea generation" is a good thing or a bad thing.
-
I think the thing that makes me saddest about this whole “use an LLM to generate code” thing is that many of my heroes are no longer my heroes.
@samir I always say nobody is immune to bad judgement. Even our heroes.
Equally, nobody is immune to good judgement. Even those for whom we have already set the ‘bozo-bit’.
Keep an open mind.
-
@samir I always say nobody is immune to bad judgement. Even our heroes.
Equally, nobody is immune to good judgement. Even those for whom we have already set the ‘bozo-bit’.
Keep an open mind.
@thirstybear Yeah, I am not writing these people off, but I am disappointed.
-
I think the thing that makes me saddest about this whole “use an LLM to generate code” thing is that many of my heroes are no longer my heroes.
@samir i keep repeating to myself “my admiration for them was an illusion that needed to die” but clearly: same.
Not only grieving people I admire but also profound friends that I loved and admired.
-
I think the thing that makes me saddest about this whole “use an LLM to generate code” thing is that many of my heroes are no longer my heroes.
@samir the lesson I keep learning (am I?) is not to have heroes in the first place. But it’s hard and it sucks when they fall.
-
@samir i keep repeating to myself “my admiration for them was an illusion that needed to die” but clearly: same.
Not only grieving people I admire but also profound friends that I loved and admired.
@romeu Yeah, that’s it, it was an illusion. I can appreciate their past work, their writing, whatever, without putting them on a dais.
A lesson I will learn over and over again, no doubt!
I’m sorry for your losses too. They’re in our heads, but they’re still real.
-
@samir the lesson I keep learning (am I?) is not to have heroes in the first place. But it’s hard and it sucks when they fall.
@janl I’m learning it for the third or fourth time. Let’s see if it sticks!
It is hard, and I’m sorry.
-
I think the thing that makes me saddest about this whole “use an LLM to generate code” thing is that many of my heroes are no longer my heroes.
@samir Proof once more that people don't have hidden depths, only hidden shallows.
-
@lizzy There are some good ones. They’re typically the ones who actually ship code!
And then there’s Anil Dash, who up until last year, I worshipped.
@samir @lizzy Anil Dash has posted absolute junk recently. Ron Jeffries is one of the good ones too. I'm discovering more heroes as I read new thoughtful posts, because they show me the people who are smart *and* compassionate. When I was a young engineer, smart would have been enough for me, but now I'm trying my best to live my personal values.
-
@samir @lizzy Anil Dash has posted absolute junk recently. Ron Jeffries is one of the good ones too. I'm discovering more heroes as I read new thoughtful posts, because they show me the people who are smart *and* compassionate. When I was a young engineer, smart would have been enough for me, but now I'm trying my best to live my personal values.
@sanityinc @lizzy Ron Jeffries is still one of my heroes, indeed.
-
I think the thing that makes me saddest about this whole “use an LLM to generate code” thing is that many of my heroes are no longer my heroes.
@samir agreed
-
@sanityinc @lizzy Ron Jeffries is still one of my heroes, indeed.
@samir @sanityinc @lizzy @RonJeffries is also one of my heroes!
-
@samir Proof once more that people don't have hidden depths, only hidden shallows.
@adrian I think there’s both! (I hope there’s both.)
-