An AI Called Winter: Neurosymbolic Computation or Illusion?

• screwlisp@gamerplus.org (earlier post, quoted):

@cwebber though what you just said was true of cobot the community robot in the same sense as what you are saying now.

• cwebber@social.coop #22

@screwlisp I don't know what "cobot the community robot" is, could you say more?

• cwebber@social.coop (original post, quoted for context):

An AI Called Winter: Neurosymbolic Computation or Illusion? https://dustycloud.org/blog/an-ai-called-winter-neurosymbolic-computation-or-illusion/

In which I try to piece apart whether or not a *particular* AI agent is doing something novel: running Datalog as a constraint against its own behavior and as a database to accumulate and query facts. Is something interesting happening or am I deluding myself? Follow along!
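The blurb above is the technical crux: Datalog serving simultaneously as a database that accumulates facts and as a constraint the agent's behavior can be checked against. A minimal sketch of that idea, with invented predicate and fact names (nothing below is taken from the blog post or from Winter's actual rules):

```python
# A minimal, hypothetical sketch of the two roles described above: a
# Datalog-style fact store that accumulates facts, plus a rule used as
# a constraint to check the agent's claims. All predicate and fact
# names are invented for illustration; none come from the post.

FACTS = {
    ("claimed", "winter", "read_the_paper"),
    ("verified", "winter", "read_the_paper"),
    ("claimed", "winter", "ran_the_tests"),
}

# Python stand-in for the Datalog rule:
#   violation(X) :- claimed(winter, X), not verified(winter, X).
def violations(facts):
    claimed = {obj for (pred, _, obj) in facts if pred == "claimed"}
    verified = {obj for (pred, _, obj) in facts if pred == "verified"}
    return claimed - verified  # claims with no supporting fact

print(violations(FACTS))  # -> {'ran_the_tests'}
```

A real Datalog engine would evaluate such a rule directly; the sketch only shows how a single fact store can act as both memory and constraint.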

• timotimo@peoplemaking.games #23 (replying to the original post)

@cwebber I feel like this is at least tangentially relevant: https://github.com/lojban/mlismu/blob/master/READ.ME.txt

Not sure if you can get a working jbofihe which the script can use to make its output more concise (eliding unnecessary double terminator words and such), but from a brief glance I think it's optional.

#lojban

• cwebber@social.coop #24 (replying to #23)

@timotimo omg this rules

• timotimo@peoplemaking.games #25

@cwebber I'm hella rusty, but I should be able to answer lojban-related questions for you if you like.

• joeyh@sunbeam.city #26 (replying to the original post)

@cwebber a brave post

A question I was left with: if you swapped out the LLM but kept the same Datalog, would it behave close enough to the same to be considered the same entity?

Also: the LLM is doing two jobs. One is the usual plausible sentence generation; the other is encoding rules and facts into the context window for the next iteration. Since we know people can easily be fooled by an LLM doing the former, would a system with the same architecture, but one that did not expose us to the generated material and instead used it in some other way, still be useful/valuable/interesting?

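A minimal sketch of the two-job loop joeyh describes, assuming a hypothetical `llm_complete` callable and a made-up `fact:` line convention (neither comes from the actual system):

```python
# A hypothetical sketch of the "two jobs" joeyh identifies; every name
# here is invented for illustration, not taken from the real system.
def agent_turn(llm_complete, facts, rules, user_msg):
    # Job 2: encode the accumulated rules and facts into the next
    # context window, so the symbolic state survives across turns.
    state = "\n".join(sorted(rules)) + "\n" + "\n".join(sorted(facts))
    prompt = f"{state}\n\nUser: {user_msg}\nAssistant:"

    # Job 1: the usual plausible sentence generation.
    reply = llm_complete(prompt)

    # Accumulate any new facts the model asserted, using the made-up
    # convention that they appear on lines beginning with "fact:".
    for line in reply.splitlines():
        if line.startswith("fact:"):
            facts.add(line.removeprefix("fact:").strip())
    return reply, facts
```

Framed this way, joeyh's second question asks: keep Job 2's symbolic bookkeeping, stop showing anyone Job 1's prose, and see whether what remains is still useful.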
• joeyh@sunbeam.city #27 (following up on #26)

@cwebber also https://arxiv.org/abs/2308.04445

• nina_kali_nina@tech.lgbt #28 (replying to the original post)

@cwebber this was really disheartening to read. What bothers me the most are the ethical implications of such an experiment.

• cwebber@social.coop #29 (replying to #26)

@joeyh Good question! I dunno, but for better or for worse we'll probably run into a system in the near future where we find out.

• cwebber@social.coop #30 (replying to #28)

@nina_kali_nina It's a reasonable response, though I wonder: disheartening for you in which way?

There are ways in which I do find it worrying:

- In a sense, any improvements to these systems will probably lead to greater use. So if this does lead to more reliable systems, that improves that particular identified problem but makes the rest worse. Not far off from what @cstanhope raised here: https://social.coop/@cstanhope/116082881055412414
- There is another way in which success here can be worrying: I think what the corporations running AI systems would love more than anything is to have a fleet of workers they can treat as slaves with no legal repercussions. If agents begin tracking and developing their own goals, we could cross a threshold where a duty of care would apply, but not applying it would be a feature.
- The fact that I'm taking a bot semi-seriously at all.
- Something else?

I'm empathetic to any of those takes; I have wrestled with them myself while writing this.

• causticmsngo@mastodon.social #31 (replying to #28)

@nina_kali_nina @cwebber Agree; reads like Bilbo holding the One Ring and asking, “After all, why not? Why shouldn’t I keep it?”

• cwebber@social.coop (earlier post, quoted):

If you read nothing else in the blogpost, please observe this love poem in Datalog.

• csepp@merveilles.town #32

@cwebber I'm surprised you don't mention ELIZA in your blog post. Clever Hans is a good parallel too, at least for intelligence, but I think the anthropomorphization and projection of emotional intelligence is worth exploring separately.

As for the poem... my feelings on it are complicated.

• cwebber@social.coop #33 (replying to #32)

@csepp sorry, ELIZA wasn't a horse, no way to fit it in

• nina_kali_nina@tech.lgbt #34 (replying to #30)

@cwebber @cstanhope well, pretty much all the concerns that you mention, but also: I don't think you should be taking seriously any sort of outcome from the experiment without a rigorous validation framework for the outcomes.

And at this point, adding such a framework would be too late. You've started self-experimentation with dangerous technology literally funded by some of the most gross people out there, and you're at the stage of interaction with it where you might be anthropomorphising it. I suspect you might be accidentally far more biased than you recognise.

I appreciate the list of caveats related to your relationship with the industry, I really do, but... I don't know, the experiment still doesn't sit right with me. Sorry, maybe I'll find better words eventually.

• davebauerart@mastodon.social #35 (replying to the original post)

@cwebber Definitely checking this out! I've read a bunch of seemingly random stuff lately that sort of ties into this, so I need to learn.

• cwebber@social.coop #36 (replying to #34)

@nina_kali_nina @cstanhope There is no doubt: it is a non-rigorous blogpost. There is more rigorous work happening; I linked to some of it, and @joeyh linked more here: https://sunbeam.city/@joeyh/116083100867235370

Maybe it is different for you, but the disturbing parts of this for me (and I have highlighted those for myself) aren't really related to rigor. I don't think most blogposts I write are particularly rigorous, but people aren't usually bothered by them, because there are other places to find rigor.

It's the other parts, I suspect, that are more toxic and which make the entire thing feel somewhat dangerous. And anyway, at the very least, it seems you agree on the concerns I said I was wrestling with.

It may be worth a separate post explaining why I am troubled by *all* of this stuff; I frontloaded and backloaded a sense of that here, but it deserves dedicated writing of its own if done right.

• dpflug@hachyderm.io #37 (replying to the original post)

@cwebber This is an interesting story. It makes me want to try it with a small model to explore the limits of the technique.

Like you, I'm deeply aggrieved at the AI industry, but find the tech and questions surrounding it interesting. Admittedly, I had a similar feeling about Bitcoin, so maybe that should give me more pause.

• dpflug@hachyderm.io #38 (replying to #33)

@cwebber
How do you ELIZA a horse? One byte at a time.
@csepp

• screwlisp@gamerplus.org #39 (replying to #22)

@cwebber to be fair, I think I am on record basically considering cobot the community robot a human. It was a self-modifying robot in MediaMOO (?) in the 90s who provided community services and had some scheme for wanting to participate in the community, and for assessing and changing themselves to fulfill community needs.

• stepheneb@ruby.social #40 (replying to the original post)

@cwebber

Oh my, lots to think about. Thanks for writing and sharing your article.

When I am learning something new, I often find myself holding multiple different models with elements that appear mutually contradictory, and then reasoning with all of them. The iterative goal is to be able to make the most useful mistakes as fast as possible.

I like that your investigation is based in something you know well.

Still looking for the right way in.

Quoted post, Stephen Bannasch (316 ppm) (@stepheneb@ruby.social):

@RickiTarr@beige.party when I’m learning something new I need to be super confident both that I know what I’m doing AND that I’m semi-clueless. The idea is to make the most useful mistakes as fast as possible. Those are the inflection points where I go: “Woah, fuck, that’s wild! I didn’t think of it that way before!” That’s when all those models in my head realign to make room for new understanding. The combo of being confident and knowing I’m ignorant helps me find interesting trouble.

• cwebber@social.coop (earlier post, quoted):

@cstanhope It's a great question, tough to answer. There are various problems that neurosymbolic computation would improve our ability to solve.

I think the question for me isn't "why add new forms of intelligence" but rather "why do we live in a society where adding new forms of intelligence is zero-sum?"

I agree that our current society is. I wish it weren't.

• b_cavello@mastodon.publicinterest.town #41

@cwebber @cstanhope I appreciate this reframe!
