An AI Called Winter: Neurosymbolic Computation or Illusion?

45 Posts 20 Posters 1 Views
This topic has been deleted. Only users with topic management privileges can see it.
cwebber@social.coop wrote:

  At any rate, I feel like I can't put enough caveats in there about how this isn't me fangirl'ing about LLMs. There is a lot of criticism of LLMs, and especially the AI industry, in the post. I hope the people who are pre-emptively annoyed actually read it, but of course I know that won't happen for everyone.

pius@social.treehouse.systems replied (#14):

@cwebber No matter what you do or say, there will be some people who accuse you of being an AI industry shill. There are just some people who think that way...

That said, this post gave me hope in the future of technology as a means of empowerment, for the first time in literal months. Thanks!
cwebber@social.coop replied (#15):

I spent so long anxiety'ing about this post, thinking that people would be mad at me assuming it's about things it isn't, when in reality I probably don't need to anxiety at all, because it's so niche that almost nobody is gonna read it 😎
cstanhope@social.coop replied (#16):

@cwebber It's very interesting, and I appreciate you taking the time to write down your thoughts. You touched on many caveats, and I share all the concerns you mentioned. But one question I have, which I wish we'd spend more time discussing, is: why do we want to create intelligent (presumably sentient) agents instead of focusing on creating a workshop filled with reliable, non-sentient tools?

The earth abounds in natural intelligences, and humanity still struggles to extend rights, compassion, and empathy to its own kind, let alone the others we share this planet with. But given that we are surrounded by natural intelligences, what are the motivations for creating an "artificial" one? Are those motivations healthy and ethical? Should we be doing it at all?

Of course, you're not responsible for answering these questions. But when I ponder them, the answers I come up with are not good.
cwebber@social.coop replied (#17):

If you read nothing else in the blogpost, please observe this love poem in Datalog.
cwebber@social.coop replied (#18):

@cstanhope It's a great question, and tough to answer. There are various problems that neurosymbolic computation would improve our ability to solve.

I think the question for me isn't "why add new forms of intelligence" but rather "why do we live in a society where adding new forms of intelligence is zero-sum?"

And I agree that our current society is. I wish it weren't.
cwebber@social.coop wrote:

  An AI Called Winter: Neurosymbolic Computation or Illusion? https://dustycloud.org/blog/an-ai-called-winter-neurosymbolic-computation-or-illusion/

  In which I try to piece apart whether or not a *particular* AI agent is doing something novel: running Datalog as a constraint against its own behavior and as a database to accumulate and query facts. Is something interesting happening, or am I deluding myself? Follow along!

screwlisp@gamerplus.org replied (#19):

@cwebber

Eh, I think it tilts more towards Clever Hans. Deep learning has long been dominant at rendering a tract of English writing into idiomatic French, or approximating that well by whatever metric.

In this case it seems like the bot says philosophically quippy things in natural language, using emotive language mixed with too-simple depictions of computer algorithms, in front of, and while reading, an audience who likes that sort of thing.
cwebber@social.coop replied (#20):

@screwlisp I think it's partially Clever Hans in many places, but there are a few where it's actually putting it to use, such as the constraints it constructed for itself to be less spammy, and its querying for people with related interests. You can see in its thought log that it runs those queries and then seemingly acts, or doesn't act, based on their results.

But in terms of most of the *content*, I think you're fairly right.
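The pattern cwebber describes here, a constraint and a query over an accumulated fact base gating whether the agent acts, can be sketched in miniature. This is a hypothetical Python illustration of the general Datalog-flavored idea, not Winter's actual rules; the predicate names and the reply threshold are invented:

```python
class FactBase:
    """A tiny Datalog-flavored fact store: facts are (predicate, *args) tuples."""

    def __init__(self):
        self.facts = set()

    def assert_fact(self, predicate, *args):
        self.facts.add((predicate, *args))

    def query(self, predicate, *pattern):
        """Return the argument tuples of matching facts.
        None in the pattern acts as a wildcard (like a Datalog variable)."""
        return [fact[1:] for fact in self.facts
                if fact[0] == predicate
                and len(fact) - 1 == len(pattern)
                and all(p is None or p == v
                        for p, v in zip(pattern, fact[1:]))]


db = FactBase()
# Facts the agent might accumulate while reading its timeline.
db.assert_fact("interested_in", "alice", "datalog")
db.assert_fact("interested_in", "bob", "gardening")
db.assert_fact("replied_to", "winter", "alice", "post-1")
db.assert_fact("replied_to", "winter", "alice", "post-2")

# A "be less spammy" constraint: at most `limit` replies to one person.
def may_reply(agent, person, limit=3):
    return len(db.query("replied_to", agent, person, None)) < limit

# Query for people with a related interest; the constraint gates action.
datalog_fans = [person for (person, _topic)
                in db.query("interested_in", None, "datalog")]
```

Here `datalog_fans` comes back as `["alice"]`, and `may_reply("winter", "alice")` is true at the default limit but false at `limit=2`, which is the "act, or don't act, based on query results" step in a dozen lines.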

screwlisp@gamerplus.org replied (#21):

@cwebber Though what you just said was true of Cobot, the community robot, in the same sense as what you're saying now.
cwebber@social.coop replied (#22):

@screwlisp I don't know what "Cobot the community robot" is; could you say more?
timotimo@peoplemaking.games replied (#23):

@cwebber I feel like this is at least tangentially relevant: https://github.com/lojban/mlismu/blob/master/READ.ME.txt

I'm not sure if you can get a working jbofihe that the script can use to make its output more concise (eliding unnecessary double terminator words and such), but from a brief glance I think it's optional.

#lojban
cwebber@social.coop replied (#24):

@timotimo omg this rules
timotimo@peoplemaking.games replied (#25):

@cwebber I'm hella rusty, but I should be able to answer lojban-related questions for you if you like.
joeyh@sunbeam.city replied (#26):

@cwebber A brave post.

A question I was left with: if you swapped out the LLM but kept the same Datalog, would it behave close enough to the same to be considered the same entity?

Also: the LLM is doing two jobs. One is the usual plausible-sentence generation, and the other is encoding rules and facts into the context window for the next iteration. Since we know people can easily be fooled by an LLM doing the former, would a system with the same architecture, but one that did not expose us to the generated material and instead used it in some other way, still be useful/valuable/interesting?
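The two-jobs split joeyh describes can be pictured as a loop: each turn, the model's output is divided into prose for readers and into facts/rules that get carried into the next context window. A hypothetical sketch, where the canned `generate` stand-in and the `FACT:`/`RULE:` line syntax are invented for illustration and are not Winter's actual architecture:

```python
def generate(prompt):
    """Stand-in for an LLM call: returns a canned response for illustration."""
    return ("I met alice today.\n"
            "FACT: met(winter, alice)\n"
            "RULE: friendly(X) :- met(winter, X)")

def step(context):
    """One agent iteration: generate, then split prose from symbolic state."""
    output = generate("\n".join(context))
    prose, symbols = [], []
    for line in output.splitlines():
        # Job 2: rules/facts carried forward into the next context window.
        if line.startswith(("FACT:", "RULE:")):
            symbols.append(line)
        # Job 1: plausible sentence generation (what readers see).
        else:
            prose.append(line)
    return prose, context + symbols

prose, next_context = step(["RULE: polite(winter)."])
```

Discarding `prose` while keeping `next_context` is essentially joeyh's thought experiment: the symbolic channel keeps accumulating and constraining even if no human ever reads the generated text.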

joeyh@sunbeam.city replied (#27):

@cwebber also https://arxiv.org/abs/2308.04445
nina_kali_nina@tech.lgbt replied (#28):

@cwebber This was really disheartening to read. What bothers me the most are the ethical implications of such an experiment.
cwebber@social.coop replied (#29):

@joeyh Good question! I dunno, but for better or for worse, we will probably run into a system in the near future where we find out.
cwebber@social.coop replied (#30):

@nina_kali_nina It's a reasonable response, though I wonder: disheartening for you in which way?

There are ways in which I do find it worrying:

- In a sense, any improvements to these systems will probably lead to greater use. So if this does lead to more reliable systems, it improves that particular identified problem but makes the rest worse. Not far off from what @cstanhope raised here: https://social.coop/@cstanhope/116082881055412414
- There is another way in which success here can be worrying: in a sense, I think what the corporations running AI systems would love more than anything is to have a fleet of workers they can treat as slaves with no legal repercussions. If agents begin tracking and developing their own goals, we could cross a threshold where a duty of care would apply, but where not applying it would be a feature.
- The fact that I'm taking a bot semi-seriously at all.
- Something else?

I'm empathetic to any of those takes, and have wrestled with them myself while writing this.
causticmsngo@mastodon.social replied (#31):

@nina_kali_nina @cwebber Agree; it reads like Bilbo holding the One Ring and asking, “After all, why not? Why shouldn’t I keep it?”
csepp@merveilles.town replied (#32):

@cwebber I'm surprised you don't mention ELIZA in your blog post. Clever Hans is a good parallel too, at least for intelligence, but I think the anthropomorphization and projection of emotional intelligence is worth exploring separately.

As for the poem... my feelings on it are complicated.
cwebber@social.coop replied (#33):

@csepp Sorry, ELIZA wasn't a horse; no way to fit it in.