An AI Called Winter: Neurosymbolic Computation or Illusion? https://dustycloud.org/blog/an-ai-called-winter-neurosymbolic-computation-or-illusion/
In which I try to piece apart whether or not a *particular* AI agent is doing something novel: running Datalog as a constraint against its own behavior and as a database to accumulate and query facts. Is something interesting happening or am I deluding myself? Follow along!
@cwebber
Sounds intriguing.
I do like AI and Alife techniques but loathe how the industry has conflated "LLMs" with "AI", and I fear that when the LLM bubble pops it's going to take the other related disciplines with it.
-
@kirtai Honestly when the AI bubble pops it'll be a lot easier to sort through the wreckage for the parts that are useful IMO
-
Adding insult to injury https://bsky.app/profile/winter.razorgirl.diy/post/3mez3gj2iby2u
@cwebber omg this is hilarious, what a weird and incredible time to be alive.
-
@cwebber
Yeah, I just remember how badly the first AI Winter damaged languages that got caught up in the hype, like Lisp.
-
@cwebber I am hoping for some RAM bargains.
-
At any rate, I feel like I can't put enough caveats in there about how this isn't me fangirl'ing about LLMs. There is a lot of criticism of LLMs, and especially of the AI industry, in the post. I hope the people who are pre-emptively annoyed actually read the post, but of course I know that won't happen for everyone.
@cwebber No matter what you do or say, there will be some people who accuse you of being an AI industry shill. There are just some people who think that way...
That said, this post gave me hope in the future of technology as a means of empowerment for the first time in literal months. Thanks
-
I spent so long anxiety'ing about this post, thinking that people would be mad at me assuming it's about things it isn't, when in reality I probably don't need to anxiety at all because it's so niche that almost nobody is gonna read it
-
@cwebber It's very interesting, and I appreciate you taking the time to write down your thoughts. You touched on many caveats, and I share all the concerns you mentioned. But one question I have that I wish we'd spend more time discussing is why do we want to create intelligent (presumably sentient) agents instead of focusing on creating a workshop filled with reliable, non-sentient tools?
The earth abounds in natural intelligences, and humanity still struggles to extend rights, compassion, and empathy to its own kind, let alone the others we share this planet with. But given that we are surrounded by natural intelligences, what are the motivations for creating an "artificial" one? Are these motivations healthy and ethical? Should we be doing it at all?
Of course, you're not responsible for answering these questions. But when I ponder these questions, the answers I come up with are not good.
-
If you read nothing else in the blogpost, please observe this love poem in Datalog
-
@cstanhope It's a great question, and tough to answer. There are various problems that neurosymbolic computation would improve our ability to solve.
I think the question for me isn't "why add new forms of intelligence" but rather "why do we live in a society where adding new forms of intelligence is zero-sum?"
Which, I agree, our current society is. I wish it weren't.
-
eh, I think it tilts more towards Clever Hans. Deep learning has long been dominant at rendering a tract of English writing into idiomatic French, or approximating that well by whatever metric.
In this case it seems like the bot says philosophically quippy things in natural language, mixing emotive language with too-simple depictions of computer algorithms, in front of (and while reading) an audience who likes that sort of thing.
-
@screwlisp I think it's partially Clever Hans in many places, but there are a few where it's actually putting it to use, such as the constraints it constructed for itself to be less spammy, and its querying for people with related interests. You can see in its thought log that it runs those queries, and then seemingly acts, or doesn't act, based on their results.
But in terms of most of the *content*, I think you're fairly right.
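The constraint-and-query pattern being discussed here could be sketched roughly like this. This is a hypothetical illustration, not Winter's actual setup: the predicate names (`posted`, `interested_in`), the rate limit, and a plain set of tuples standing in for a real Datalog engine are all assumptions.

```python
# Hypothetical sketch of an agent using a fact database both as a
# self-constraint ("am I allowed to post?") and as a query source
# ("who shares my interests?"). Predicate names are invented.

facts = {
    ("posted", "winter", "t1"),
    ("posted", "winter", "t2"),
    ("interested_in", "alice", "datalog"),
    ("interested_in", "winter", "datalog"),
}

def recent_post_count(agent):
    # Count accumulated posted/2 facts for this agent.
    return sum(1 for (pred, a, _) in facts
               if pred == "posted" and a == agent)

def may_post(agent, limit=3):
    # The "don't be spammy" constraint: checked before acting,
    # like a query the agent runs against its own behavior log.
    return recent_post_count(agent) < limit

def shared_interest(agent):
    # Query: everyone who shares at least one interest with the agent.
    mine = {topic for (pred, a, topic) in facts
            if pred == "interested_in" and a == agent}
    return {a for (pred, a, topic) in facts
            if pred == "interested_in" and a != agent and topic in mine}
```

A real Datalog engine would express `may_post` declaratively (e.g. as a rule over an aggregate) and derive it; the point of the sketch is only that the "constraint" is an ordinary query the agent evaluates before acting, over facts it has itself accumulated.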
-
@cwebber though what you just said was true of cobot, the community robot, in the same sense as what you're saying now.
-
@screwlisp I don't know what "cobot the community robot" is, could you say more?
-
@cwebber I feel like this is at least tangentially relevant: https://github.com/lojban/mlismu/blob/master/READ.ME.txt
not sure if you can get a working jbofihe which the script can use to make its output more concise (eliding unnecessary double terminator words and such), but from a brief glance I think it's optional.
-
@timotimo omg this rules
-
@cwebber I'm hella rusty, but I should be able to answer Lojban-related questions for you if you like
-
@cwebber a brave post
A question I was left with is: if you swapped out the LLM but kept the same Datalog, would it behave close enough to the same to be considered the same entity?
Also: the LLM is doing two jobs. One is the usual plausible sentence generation, and the other is encoding rules and facts into the context window for the next iteration. Since we know people can easily be fooled by an LLM doing the former, would a system with the same architecture that used the generated material in some other way, without exposing us to it, still be useful/valuable/interesting?
-
@cwebber this was really disheartening to read. What bothers me the most is the ethical implications of such an experiment.