"there is little evidence that the brain’s fundamental ability to concentrate has been impaired. This suggests that if we can shut down the distractions of our environment, it is possible to recover focus."
Inevitability and "ability decay" explanations essentially teach us learned helplessness about the things we experience, when really, we could change our environments. This is why I'm so against brain-based scare tactics.
@grimalkina I hate to say this, but Nature has really been going down the sh!tter as of late.
-
@chloechloechloe @grimalkina Some idiot who I called out tonight on his deep-learning-biased narrative, after he called ME out for questioning the whole ethos of "AI" "alignment" when it is scientifically known that LLMs are not conscious entities... tried to fob me off with a "neurodiversity" allusion, which I immediately shut down. Modern pseudo-religious babble. What gets me is that I cited actual "AI" research and researchers. What an idiot.
Just reading from the article: "...One concern is the /technical alignment problem/: given a desired, informally specified set of goals or values, how can we imbue an AI system with them?"
At least, I can remark that I shrink from that level of personification here.
I might also add a quote from Nietzsche: "Only individuals feel responsibility". I feel this is apt even if we reach a modern Prometheus machine with general intelligence.
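The "technical alignment problem" quoted above can be made concrete with a toy sketch. Everything here (the goal, the agents, the numbers) is invented purely for illustration and is not from the article: an informal goal gets encoded as a measurable proxy reward, and an agent can score perfectly on the proxy without satisfying the intent.

```python
# Toy illustration of reward misspecification (all names/numbers invented).
# Informal goal: "clean the room". Encoded proxy: minimize *visible* mess.

def visible_mess(state):
    """Mess that the evaluator can actually observe."""
    return state["mess"] - state["hidden"]

def proxy_reward(state):
    return -visible_mess(state)

def honest_agent(state):
    # Actually removes mess.
    return {"mess": max(0, state["mess"] - 5), "hidden": state["hidden"]}

def gaming_agent(state):
    # Maximizes the proxy by hiding mess instead of removing it.
    return {"mess": state["mess"], "hidden": state["hidden"] + 5}

start = {"mess": 10, "hidden": 0}
honest = honest_agent(start)
gamed = gaming_agent(start)

# Both actions earn the same proxy reward...
assert proxy_reward(honest) == proxy_reward(gamed)
# ...but only one satisfies the intended goal.
assert honest["mess"] < gamed["mess"]
```

The gap between the intended goal and the proxy is the whole problem: the optimizer sees only `proxy_reward`, never the informal intent behind it.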
-
"AI's ability to make extremely fine-grained yet systematic decisions cuts both ways. It could make things either much better or worse, depending on whether AI systems are appropriately aligned with human values."
-- "Moral disagreement and the limits of AI value alignment" (2025)
/Yes, I see. The premise of "alignment" is completely stupid./
-
@chloechloechloe @grimalkina It is difficult to tell whether you are deliberately conflating concepts here or not, and that seems to be the source of some misunderstanding. If you are discussing how human system prompts affect the output of a system when you discuss "alignment", then that is entirely different from falling into the cognitive trap of believing one is doing more than that when employing the terminology to discuss process. That is what I'm objecting to.
-
@chloechloechloe @grimalkina On the misplaced personification we can agree. AGI (is that what we're calling it again this week? /s) is still very far off; just ask Gary Marcus. Yann LeCun is in denial about this IMHO. Nietzsche, "Beyond Good and Evil": "He who fights monsters should see to it that he himself does not become a monster. And if you gaze long enough into an abyss, the abyss will also gaze into you."
-
@bms48 @grimalkina
Yes, I love that quote. Hold up a moment while I try and reply to your previous comment and clarify my nascent understanding. ^^
-
Yes. Thx for pointing out the conflation, although it is a bit of an idiosyncratic rhetorical device I use for brevity. It's habitual rather than deliberate. Ok, so yea, making allusion to neurodivergence follows from an unenviable cognitive trap, sure. But I also think that we expect more-than-rationally from "AI" the moment we speak of 'human values' at all, and approximating these is both an implicit deification of the machine and dangerously narrow thinking.
-
@chloechloechloe @grimalkina "Sweedack" (short for "Je suis d'accord", i.e. "I agree", from "The Shockwave Rider" by John Brunner, which is very relevant to the current sociopolitical situation developing). You may find Grady Booch illuminating on this topic; I've just posted this as part of a response on another thread here on Fedi: https://newsletter.pragmaticengineer.com/p/software-architecture-with-grady-booch
-
@chloechloechloe @grimalkina Also Prof. Michael Wooldridge from February at the Royal Society: https://www.youtube.com/watch?v=CyyL0yDhr7I
-
@bms48 @grimalkina thx so much

-
@bms48 @grimalkina I'll try my best with this one. If you know the presenter, tell him to get his stuff on PeerTube.

-
Internet blockers, etc, do wonders for productivity.
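One common way such blockers work is hosts-file redirection. Below is a minimal, reversible sketch of that approach; the domains are just examples, and it is demonstrated on a scratch copy (`hosts.demo`) rather than the real file, since editing `/etc/hosts` on an actual system would need sudo.

```shell
# Tiny, reversible "internet blocker" sketch via the hosts file.
# Real use targets /etc/hosts (with sudo); here we use a scratch copy,
# and the blocked domains are placeholders.
HOSTS=./hosts.demo
printf '127.0.0.1 localhost\n' > "$HOSTS"

# Block: route distracting domains to localhost during focus time.
echo '127.0.0.1 twitter.com news.ycombinator.com' >> "$HOSTS"

# Unblock: delete the entry again when focus time is over.
sed -i.bak '/twitter.com/d' "$HOSTS"
```

The friction of manually undoing the edit is part of the point: the distraction stops being one keystroke away.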
-
relay@relay.infosec.exchange shared this topic