"there is little evidence that the brain’s fundamental ability to concentrate has been impaired.

Uncategorized
17 Posts 4 Posters 29 Views
  • bms48@mastodon.social

    @chloechloechloe @grimalkina Some idiot who I called out tonight on his deep-learning-biased narrative, when he called ME out for questioning the whole ethos of "AI" "alignment" when it is scientifically known that LLMs are not conscious entities... tried to fob me off with a "neurodiversity" allusion, which I immediately shut down. Modern pseudo-religious babble. What gets me is that I cited actual "AI" research and researchers. What an idiot.

  • chloechloechloe@musician.social — #6

    @bms48 @grimalkina

    Just reading from the article: "...One concern is the /technical alignment problem/: given a desired, informally specified set of goals or values, how can we imbue an AI system with them?"

    At least, I can remark that I shrink from that level of personification here.

    I might also add a quote from Nietzsche: "Only individuals feel responsibility." I feel this is apt, even if we reach a modern Prometheus machine with general intelligence.
  • chloechloechloe@musician.social — #7

    @bms48 @grimalkina

    "AI's ability to make extremely fine-grained yet systematic decisions cuts both ways. It could make things either much better or worse, depending on whether AI systems are appropriately aligned with human values."

    -- "Moral disagreement and the limits of AI value alignment" (2025)

    /Yes, I see. The premise of "alignment" is completely stupid./
  • bms48@mastodon.social — #8

    @chloechloechloe @grimalkina It is difficult to tell whether you are deliberately conflating concepts here or not, and that seems to be the source of some misunderstanding. If, when you discuss "alignment", you mean how human system prompts affect the output of a system, then that is entirely different from falling into the cognitive trap of believing one is doing more than that when employing the terminology to describe the process. That is what I'm objecting to.
  • bms48@mastodon.social — #9

    @chloechloechloe @grimalkina On the misplaced personification we can agree. AGI (is that what we're calling it again this week? /s) is still very far off; just ask Gary Marcus. Yann LeCun is in denial about this, IMHO. Nietzsche, "Beyond Good and Evil": "He who fights monsters should see to it that he himself does not become a monster. And if you gaze long enough into an abyss, the abyss will also gaze into you."
  • chloechloechloe@musician.social — #10

    @bms48 @grimalkina
    Yes, I love that quote. Hold up a moment while I try to reply to your previous comment and clarify my nascent understanding. ^^
  • chloechloechloe@musician.social — #11

    @bms48 @grimalkina

    Yes, thanks for pointing out the conflation, although it is a bit of an idiosyncratic rhetorical device I use for brevity. It's habitual rather than deliberate. OK, so yes, alluding to neurodivergence follows from an unenviable cognitive trap, sure. But I also think that we expect more than is rational from "AI" the moment we speak of 'human values' at all, and approximating these is both an implicit deification of the machine and dangerously narrow thinking.
  • bms48@mastodon.social — #12

    @chloechloechloe @grimalkina "Sweedack" (short for "je suis d'accord", from "The Shockwave Rider" by John Brunner, which is very relevant to the current sociopolitical situation developing). You may find Grady Booch illuminating on this topic; I've just posted this as part of a response on another thread here on Fedi: https://newsletter.pragmaticengineer.com/p/software-architecture-with-grady-booch
  • bms48@mastodon.social — #13

    @chloechloechloe @grimalkina Also Prof. Michael Wooldridge from February at the Royal Society: https://www.youtube.com/watch?v=CyyL0yDhr7I
  • chloechloechloe@musician.social — #14

    @bms48 @grimalkina Thanks so much 🙂
  • chloechloechloe@musician.social — #15

    @bms48 @grimalkina I'll try my best with this one. If you know the presenter, tell him to get his stuff on PeerTube. 😛
  • mwl@io.mwl.io — #16

    @Voline @grimalkina

    Internet blockers, etc., do wonders for productivity.
  • grimalkina@mastodon.social — #17

    @Voline @mwl Not as bad as the one that's currently going around, which is a ten-minute Prolific platform task.
  • relay@relay.infosec.exchange shared this topic