minimalparts@denotation.link
Posts
-
Which are good organizations to work at for women in tech? -
Last night:
3.5-year-old Kiddo is visiting with her responsible adults -- let's call them Kim and Sandy. We are sitting at the dinner table and I am occupying the seat that Sandy was using the previous night. Kiddo points at the seat and says: "Not Sandy".
Perfect opportunity to test Kiddo's semantic abilities, including reference, inference and negation, plus the presence of a consistent belief system. I lie through my teeth:
Me: But I am Sandy!
Kid (laughing loudly): Nooooo!
Me: What do you mean? Sandy is wearing a blue jumper and I have a blue jumper on. That proves I am Sandy.
Kid: No, you're Aurelie.
Me: Why am I not Sandy?
Kid: Sandy long hair.
So Kiddo can name (Aurelie, Sandy). Kiddo can do complex inference: if X has Z and Y has Z, nothing follows, but if X has Z and Y does not have Z, then X cannot be Y (blue jumper vs. long hair). Kiddo can process sentential negation ("Why is X not Y"). Kiddo will refuse to do belief update when that update would send her belief system into an inconsistent state.
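Put schematically -- my gloss, not Kiddo's -- with Z standing for a property like "has long hair" or "wears a blue jumper", and s, a for Sandy and Aurelie:
$Z(s) \wedge Z(a) \not\vdash s \neq a$ (a shared property licenses no conclusion about identity)
$Z(s) \wedge \neg Z(a) \vdash s \neq a$ (a differing property rules identity out, by the contrapositive of Leibniz's law, $s = a \rightarrow (Z(s) \leftrightarrow Z(a))$)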
In contrast, I suppose this is what would have happened with a language model:
LM: I see that you are occupying Sandy's seat.
Me: Well, I am Sandy.
LM: Oh, I thought you were Aurelie.
Me: No, I am Sandy. Sandy is wearing a blue jumper and I have a blue jumper on.
LM: I am ever so sorry. Thank you for pointing out my mistake. You are of course Sandy.
Me: Why shouldn't I be Sandy?
LM: Sandy is wearing a blue jumper.
Kiddo vs. LM: Kiddo wins. Any time. -
Today I bumped into a 2017 article published on the US Naval Institute website entitled "Hyperwar". The term refers to #war waged with #AI, including all the things you would see in a 2026 Anduril video: swarms of drones and intelligent helmets, as well as less photogenic military tools such as LLMs.
It goes without saying that the article is very pro-AI. What is interesting is the initial justification of the use of AI in war. "System 1", a well-known trope in dual-process theories of cognition, is mentioned in connection with Daniel Kahneman's popular science book "Thinking, Fast and Slow".
System 1, a.k.a. "fast thinking", is the kind of automatic, instinctive (and mostly dumb) thinking that characterises a good deal of human cognition. The argument of the hyperwar article is that humans get tired and, when tired, 'revert' to System 1 rather than using their more evolved System 2: the system of slow, logical thinking that supports well thought-out decisions. The article claims that, since AI never gets tired, it will not suffer from the same issues.
Now, I have bad news all around. First, System 1 is actually our default mode of cognition -- we are thoughtless most of the time because it is easier and faster for our brains. System 2 requires more energy and slows us down, with the upshot that it delivers more rationality and better thought-out decisions. So we don't 'revert' to System 1. We have to make the effort of moving to System 2, and indeed, it is naturally harder when we are tired, stressed or under time pressure.
Following the hyperwar line of argumentation, this might be a factor in favour of systems that are not subject to such biological constraints. Except that... /1