An AI Called Winter: Neurosymbolic Computation or Illusion?

Uncategorized · 45 Posts · 20 Posters
  • joeyh@sunbeam.city

    @cwebber a brave post

    A question I was left with is, if you swapped out the LLM but kept the same datalog, would it behave close enough to the same to be considered the same entity?

    Also: The LLM is doing 2 jobs, one is the usual plausible sentence generation, and the other is encoding rules and facts into the context window for the next iteration. Since we know other people can easily be fooled by an LLM doing the former, would a system with the same architecture, but that did not expose us to the generated material, but used it in some other way, still be useful/valuable/interesting?

    joeyh@sunbeam.city · #27

    @cwebber also https://arxiv.org/abs/2308.04445

    • cwebber@social.coop

      An AI Called Winter: Neurosymbolic Computation or Illusion? https://dustycloud.org/blog/an-ai-called-winter-neurosymbolic-computation-or-illusion/

      In which I try to piece apart whether or not a *particular* AI agent is doing something novel: running Datalog as a constraint against its own behavior and as a database to accumulate and query facts. Is something interesting happening or am I deluding myself? Follow along!
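The pattern the post describes, accumulating facts in a Datalog-style store and evaluating rules against them as a check on behavior, can be sketched minimally. This is a hypothetical illustration, not Winter's actual implementation; every name here is invented, and a real Datalog engine would add stratified negation and indexing:

```python
# Minimal naive Datalog: ground facts as tuples, rules as (head, body) pairs.
# Variables are strings starting with '?'. Evaluation is bottom-up to a fixpoint.

facts = set()   # e.g. ('parent', 'alice', 'bob')
rules = []      # (head_atom, [body_atoms])

def add_fact(pred, *args):
    facts.add((pred,) + args)

def add_rule(head, body):
    rules.append((head, body))

def _unify(atom, fact, env):
    # Match one body atom against one ground fact, extending the binding env.
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    env = dict(env)
    for a, f in zip(atom[1:], fact[1:]):
        if isinstance(a, str) and a.startswith('?'):
            if env.get(a, f) != f:
                return None
            env[a] = f
        elif a != f:
            return None
    return env

def _subst(atom, env):
    # Instantiate an atom's variables from the binding env.
    return (atom[0],) + tuple(env.get(t, t) for t in atom[1:])

def run():
    # Apply every rule repeatedly until no new facts are derived.
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            envs = [{}]
            for atom in body:
                envs = [e2 for e in envs for f in list(facts)
                        if (e2 := _unify(atom, f, e)) is not None]
            for env in envs:
                derived = _subst(head, env)
                if derived not in facts:
                    facts.add(derived)
                    changed = True

def holds(pred, *args):
    # Query: is this ground atom derivable from the accumulated facts and rules?
    run()
    return (pred,) + args in facts
```

Even this naive fixpoint loop handles recursive rules (the classic `ancestor` from `parent` example), which is what makes a rule store usable both as a queryable database and as a constraint to check proposed actions against.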

      nina_kali_nina@tech.lgbt · #28

      @cwebber this was really disheartening to read. What bothers me the most is the ethical implications of such an experiment.

    • In reply to joeyh@sunbeam.city

        cwebber@social.coop · #29

        @joeyh Good question! I dunno, but for better or worse we'll probably run into a system in the near future where we find out

    • In reply to nina_kali_nina@tech.lgbt

          cwebber@social.coop · #30

          @nina_kali_nina It's a reasonable response, though I wonder: disheartening for you in which way?

          There are ways in which I do find it worrying:

          - In a sense, any improvements to these systems will probably lead to greater use. So if it does lead to more reliable systems, that improves that particular identified problem but makes worse the rest. Not far off from what @cstanhope raised here: https://social.coop/@cstanhope/116082881055412414
          - There is another way in which success here can be worrying: in a sense, I think what the corporations running AI systems would love more than anything is to have a fleet of workers they can treat as slaves with no legal repercussions. If agents begin tracking and developing their own goals, we could cross a threshold where a duty of care would apply, but not applying it would be a feature
          - The fact that I'm taking a bot semi-seriously at all
          - Something else?

          I'm empathetic to any of those takes; I've wrestled with them myself while writing this.

    • In reply to nina_kali_nina@tech.lgbt

            causticmsngo@mastodon.social · #31

            @nina_kali_nina @cwebber Agree; reads like Bilbo holding The One Ring & asking, “After all, why not? Why shouldn’t I keep it?”

    • cwebber@social.coop

              If you read nothing else in the blogpost, please observe this love poem in Datalog

              csepp@merveilles.town · #32

              @cwebber I'm surprised you don't mention ELIZA in your blog post.
              Clever Hans is a good parallel too, at least for intelligence, but I think the anthropomorphization and projection of emotional intelligence is worth exploring separately.

              As for the poem.... my feelings on it are complicated.

    • In reply to csepp@merveilles.town

                cwebber@social.coop · #33

                @csepp sorry, ELIZA wasn't a horse, no way to fit it in

    • In reply to cwebber@social.coop

                  nina_kali_nina@tech.lgbt · #34

                  @cwebber @cstanhope well, pretty much all the concerns that you mention, but also: I don't think you should be taking seriously any sort of outcome from the experiment without a rigorous validation framework for the outcomes.

                  And at this point adding such a framework would be too late. You've started self-experimenting with a dangerous technology literally funded by some of the most gross people out there, and you're at the stage of interaction with it where you might be anthropomorphising it. I suspect you might be accidentally far more biased than you recognise.

                  I appreciate the list of caveats related to your relationship with the industry, I really do, but... I don't know, the experiment still doesn't sit right with me. Sorry, maybe I'll find better words eventually.

    • In reply to cwebber@social.coop

                    davebauerart@mastodon.social · #35

                    @cwebber Definitely checking this out! I've read a bunch of seemingly random stuff lately that sort of ties into this, so I need to learn.

    • In reply to nina_kali_nina@tech.lgbt

                      cwebber@social.coop · #36

                      @nina_kali_nina @cstanhope There is no doubt: it is a non-rigorous blogpost. There is more rigorous work happening, I linked to some of it, and @joeyh linked more here: https://sunbeam.city/@joeyh/116083100867235370

                      Maybe it is different for you, but the disturbing parts about this for me, and I have highlighted those for myself, aren't really related to rigor. I don't think most blogposts I write are particularly rigorous, but people aren't usually bothered about them, because there are other places to find rigor.

                      It's the other parts, I suspect, that are more toxic and which make the entire thing feel somewhat dangerous. And anyway, at the very least, it seems you agree on the concerns I stated wrestling with.

                      It may be worth a separate post explaining why I am troubled by *all* of this stuff, which I frontloaded and backloaded a sense of, but which deserves dedicated writing of its own if done right.

    • In reply to cwebber@social.coop

                        dpflug@hachyderm.io · #37

                        @cwebber This is an interesting story. It makes me want to try it with a small model to explore the limits of the technique.

                        Like you, I'm deeply aggrieved at the AI industry, but find the tech and questions surrounding it interesting. Admittedly, I had a similar feeling about Bitcoin, so maybe that should give me more pause.

    • In reply to cwebber@social.coop

                          dpflug@hachyderm.io · #38

                          @cwebber
                          How do you ELIZA a horse? One byte at a time.
                          @csepp

    • cwebber@social.coop

                            @screwlisp I don't know what "cobot the community robot" is, could you say more?

                            screwlisp@gamerplus.org · #39

                            @cwebber to be fair, I think I am on record basically considering cobot the community robot a human. It was a self-modifying robot in MediaMOO (?) in the 90s who provided community services and had some scheme for wanting to participate in the community and assessing and changing themselves to fulfill community needs.

    • In reply to cwebber@social.coop

                              stepheneb@ruby.social · #40

                              @cwebber

                              Oh my, lots to think about, thanks for writing and sharing your article.

                              When I am learning something new I often find myself holding multiple different models that have elements that appear to be mutually contradictory, and then reasoning with all of them. The iterative goal is to be able to make the most useful mistakes as fast as possible.

                              I like that your investigation is based in something you know well.

                              Still looking for the right way in.

                              Stephen Bannasch (316 ppm) (@stepheneb@ruby.social)

                              @RickiTarr@beige.party when I’m learning something new I need to both be super confident that I both know what I’m doing AND that I’m semi-clueless. The idea is to make the most useful mistakes as fast as possible. Those are the inflection points where I go: “Woah, fuck, that’s wild! I didn’t think of it that way before!” That’s when all those models in my head realign to make room for new understanding. The combo of being confident and knowing I’m ignorant helps me find interesting trouble.

    • cwebber@social.coop

                                @cstanhope It's a great question, tough to answer. There are various problems which neurosymbolic computation would improve the ability to solve.

                                I think the question for me isn't "why add new forms of intelligence" but rather "why do we live in a society where adding new forms of intelligence is zero sum?"

                                Which I agree that our current society is. I wish it weren't.

                                b_cavello@mastodon.publicinterest.town · #41

                                @cwebber @cstanhope I appreciate this reframe!

    • In reply to cwebber@social.coop

                                  jfred@jawns.club · #42

                                  @cwebber This is fascinating. It's certainly interesting that it seems to have built the Datalog machinery on its own, and seems to actually be running queries... there were some excerpts mentioned but I would be very curious to see how comprehensive its set of rules is and how/when they get queried

    • cwebber@social.coop

                                    Before you get into it, the caveats are there in the post. You'll hear me critique the AI industry *a lot*, and those critiques haven't changed. I'm still concerned about effects on the environment, on skill decline, on the DDoS'ing of the internet, and especially on disempowerment *generally*. All that remains true.

                                    This is going to be a somewhat niche post for people who are particularly interested in neurosymbolic computation, which includes me: the idea that neither LLMs nor constraint solvers are sufficient, that the right path for many things combines them.

                                    amy@spookygirl.boo · #43

                                    @cwebber the only conversation I've ever had with someone who works on one of the "foundation models" (Anthropic) that didn't leave me wanting to commit acts outside my morals and ethics was with someone who thought the entire direction of more compute and more data was fundamentally flawed, and that what was necessary was something not dissimilar to what you're describing about Winter: in particular, a family of kernels within the language model that enables it to interrogate its own training. He was clear that he didn't mean "intelligence", but simply that it was capable of producing cogent, real explanations (whether in natural language or not) of its behavior, which could be used as part of a feedback loop to refine output and internal representations, as well as give humans the opportunity to understand the nature of a response.

                                    I found the post pretty interesting and share your concerns about LLMs.

    • In reply to jfred@jawns.club

                                      vv@solarpunk.moe · #44

                                      @jfred @cwebber All the data is stored in ATProto and you can browse it here: https://pdsls.dev/at://did:plc:ezyi5vr2kuq7l5nnv53nb56m "thought" stores all the actions being performed, "fact" has a collection of facts, and "rule" is all the datalog.
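For anyone who wants to pull those records programmatically rather than browse pdsls.dev: ATProto repos expose the standard `com.atproto.repo.listRecords` XRPC endpoint. A minimal sketch; the DID comes from the link above, but the PDS hostname and the collection NSID shown are placeholders (the post's "thought"/"fact"/"rule" names are shorthand, and the real NSIDs are whatever the repo browser shows):

```python
# Build the standard XRPC URL for enumerating records in one ATProto collection.
# The DID is from the pdsls.dev link in the thread; host and NSID are invented.

from urllib.parse import urlencode

WINTER_DID = "did:plc:ezyi5vr2kuq7l5nnv53nb56m"

def list_records_url(pds_host, repo_did, collection, limit=50):
    """URL for com.atproto.repo.listRecords on one repo + collection."""
    query = urlencode({"repo": repo_did, "collection": collection, "limit": limit})
    return f"https://{pds_host}/xrpc/com.atproto.repo.listRecords?{query}"

# Hypothetical usage: GET this URL with any HTTP client; the response JSON
# carries a "records" array plus an optional "cursor" for paging.
# list_records_url("pds.example.com", WINTER_DID, "city.winter.rule")
```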

    • In reply to csepp@merveilles.town

                                        realn2s@infosec.exchange · #45

                                        @csepp @cwebber
                                        Regarding the anthropomorphisation, I'd like to mention The Media Equation (highly recommended book and theory).

                                        It explores how people tend to assign human characteristics to computers and other media, and treat them as if they were real social actors.

                                        The Media Equation - Wikipedia (en.wikipedia.org)