An AI Called Winter: Neurosymbolic Computation or Illusion?
-
eh, I think it tilts more towards Clever Hans. Deep learning has long been dominant at rendering a passage of English writing in idiomatic French, or at least approximating that well by whatever metric.
In this case it seems like the bot says philosophically quippy things in emotive natural language, mixed with too-simple depictions of computer algorithms, in front of (and while reading) an audience who likes that sort of thing.
@screwlisp I think it's partially Clever Hans in many places, but there are a few where it's actually putting the Datalog to use, such as the constraints it constructed for itself to be less spammy, and its querying for people with related interests. You can see in its thought log that it runs those queries and then seemingly acts, or doesn't act, based on their results.
But in terms of most of the *content*, I think you're fairly right.
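To make that mechanism a little more concrete, here is a rough sketch of what such self-imposed rules and queries could look like in Datalog. This is an invented illustration, not Winter's actual rulebase; the predicate names, the facts, and the two-post threshold are all assumptions made for the sake of the example.

% Hypothetical facts the agent accumulates about its own activity and others' interests.
posted(winter, post1, day1).
posted(winter, post2, day1).
interested_in(alice, datalog).
interested_in(bob, gardening).
interested_in(winter, datalog).

% Anti-spam constraint: an agent counts as spammy on a day once it has made
% two or more distinct posts that day; the surrounding loop would check this
% before letting the agent post again.
spammy(Agent, Day) :-
    posted(Agent, PostA, Day),
    posted(Agent, PostB, Day),
    PostA != PostB.

% Interest query: people who share one of the agent's interests.
shared_interest(Person, Topic) :-
    interested_in(winter, Topic),
    interested_in(Person, Topic),
    Person != winter.

% Queries the loop might run each iteration:
%   spammy(winter, day1)?            -- true here, so hold off on posting today
%   shared_interest(Person, Topic)?  -- Person = alice, Topic = datalog

The interesting part, per the blog post, is that the LLM both emits facts like these and then has its behavior gated by queries over them.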
-
@cwebber though what you just said was also true of cobot the community robot, in the same sense you're saying it now.
-
@screwlisp I don't know what "cobot the community robot" is, could you say more?
-
An AI Called Winter: Neurosymbolic Computation or Illusion? https://dustycloud.org/blog/an-ai-called-winter-neurosymbolic-computation-or-illusion/
In which I try to piece apart whether or not a *particular* AI agent is doing something novel: running Datalog as a constraint against its own behavior and as a database to accumulate and query facts. Is something interesting happening or am I deluding myself? Follow along!
@cwebber I feel like this is at least tangentially relevant: https://github.com/lojban/mlismu/blob/master/READ.ME.txt
Not sure if you can get a working jbofihe, which the script can use to make its output more concise (eliding unnecessary double terminator words and such), but from a brief glance I think it's optional.
-
@timotimo omg this rules
-
@cwebber I'm hella rusty, but I should be able to answer lojban-related questions for you if you like
-
@cwebber a brave post
A question I was left with is, if you swapped out the LLM but kept the same datalog, would it behave close enough to the same to be considered the same entity?
Also: the LLM is doing two jobs. One is the usual plausible sentence generation, and the other is encoding rules and facts into the context window for the next iteration. Since we know people can easily be fooled by an LLM doing the former, would a system with the same architecture that did not expose us to the generated material, but used it in some other way, still be useful/valuable/interesting?
-
@cwebber this was really disheartening to read. What bothers me most are the ethical implications of such an experiment.
-
@joeyh Good question! I dunno, but for better or for worse we'll probably run into a system in the near future where we find out
-
@nina_kali_nina It's a reasonable response, though I wonder: disheartening for you in which way?
There are ways in which I do find it worrying:
- In a sense, any improvements to these systems will probably lead to greater use. So if it does lead to more reliable systems, that improves that particular identified problem but makes the rest worse. Not far off from what @cstanhope raised here: https://social.coop/@cstanhope/116082881055412414
- There is another way in which success here can be worrying: I think what the corporations running AI systems would love more than anything is to have a fleet of workers they can treat as slaves with no legal repercussions. If agents begin tracking and developing their own goals, we could cross a threshold where a duty of care would apply, but where not applying it would be a feature
- The fact that I'm taking a bot semi-seriously at all
- Something else?
I'm empathetic to any of those takes; I've wrestled with them myself while writing this.
-
@nina_kali_nina @cwebber Agree; reads like Bilbo holding The One Ring & asking, “After all, why not? Why shouldn’t I keep it?”
-
If you read nothing else in the blogpost please observe this love poem in Datalog
@cwebber I'm surprised you don't mention ELIZA in your blog post.

Clever Hans is a good parallel too, at least for intelligence, but I think the anthropomorphization and projection of emotional intelligence is worth exploring separately.
As for the poem... my feelings on it are complicated.
-
@csepp sorry, ELIZA wasn't a horse, no way to fit it in
-
@cwebber @cstanhope well, pretty much all the concerns that you mention, but also: I don't think you should be taking seriously any sort of outcome from the experiment without a rigorous validation framework for the outcomes.
And at this point adding such a framework would be too late. You've started self-experimentation with dangerous technology literally funded by some of the most gross people out there, and you're at the stage of interaction with it where you might be anthropomorphising it. I suspect you might be accidentally far more biased than you recognise.
I appreciate the list of caveats related to your relationship with the industry, I really do, but... I don't know, the experiment still doesn't sit right with me. Sorry, maybe I'll find better words eventually.
-
@cwebber Definitely checking this out! I've read a bunch of seemingly random stuff lately that sort of ties into this, so I need to learn.
-
@nina_kali_nina @cstanhope There is no doubt: it is a non-rigorous blogpost. There is more rigorous work happening; I linked to some of it, and @joeyh linked to more here: https://sunbeam.city/@joeyh/116083100867235370
Maybe it is different for you, but the parts of this that disturb me, which I have highlighted for myself, aren't really about rigor. I don't think most blogposts I write are particularly rigorous, but people aren't usually bothered by them, because there are other places to find rigor.
It's the other parts, I suspect, that are more toxic and which make the entire thing feel somewhat dangerous. And anyway, at the very least, it seems you agree with the concerns I said I was wrestling with.
It may be worth writing a separate post explaining why I am troubled by *all* of this stuff; I frontloaded and backloaded a sense of it here, but it deserves dedicated writing of its own if done right.
-
@cwebber This is an interesting story. It makes me want to try it with a small model to explore the limits of the technique.
Like you, I'm deeply aggrieved at the AI industry, but find the tech and questions surrounding it interesting. Admittedly, I had a similar feeling about Bitcoin, so maybe that should give me more pause.
-
@cwebber to be fair, I think I am on record basically considering cobot the community robot a human. It was a self-modifying robot in mediamoo (?) in the 90s who provided community services and had some scheme for wanting to participate in the community, and for assessing and changing themselves to fulfill community needs.