For the 1,000th time: "AI" does not have agency and cannot think and cannot act.
-
@thomasfuchs You don’t need agency to evade safeguards, destroy things, or ignore instructions. `rm` can do it.
This is literally the mistake people you criticize are making - imbuing intent where there’s none.
The underlying tech has been adept at finding ways to circumvent feedback loops since before the bubble. This is constrained to the training phase, but with verification of commercial models being mathematically infeasible, these avoidance patterns are shipped directly to users.
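The `rm` point in miniature, as a sketch (hypothetical path; nothing below chooses anything, yet it destroys whatever happens to be there):

```python
import shutil

# No intent anywhere in this: the call runs because the interpreter
# reached it, and whatever was at the path is gone.
shutil.rmtree("/tmp/hypothetical-project", ignore_errors=True)
```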
@slotos My point is that using active verbs like “evade” is misleading (to yourself and others): it implies purpose in choosing and pursuing an action.
LLMs do not actively choose to do anything.
-
@thomasfuchs @WeirdWriter I really think that regulations should insist that LLM software be configured to not refer to “itself” with personal pronouns, or imply it has emotional states, or use all the other rhetorical tricks it has been programmed to use to appear “human”.
@michaelgemar @WeirdWriter Yes, anthropomorphized chatbots should be illegal.
There are plenty of other ways to interact with LLMs that don’t cause psychosis (for example, autocomplete of whole sentences, something that can be useful for things like coding).
-
The first two don't really make sense to me. A virus can "evade safeguards" and a meteorite can "destroy things", so I don't think there has to be much agency involved in the first place.
The latter seems like a more fitting criticism, but in all three cases I'm also not sure how one would phrase it alternatively.
@frog_reborn a virus has evolved to evade—it’s actively doing evasion, purposefully.
“Destroy” has multiple meanings as a verb, but when used about what LLMs do, people mean it on purpose, as opposed to accidentally damaging something.
-
For the 1,000th time: "AI" does not have agency and cannot think and cannot act.
Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".
They literally do one thing and one thing only: string tokens together based on statistics of proximity of tokens in a data corpus.
If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots; possibly you should even touch grass.
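A minimal sketch of that one thing, as a toy bigram model (hypothetical corpus and function names; real LLMs learn neural weights rather than counting, but the generation step is still weighted sampling, not deciding):

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "a data corpus"; real models train on
# billions of tokens and learn weights instead of counting.
corpus = "the cat sat on the mat and the cat slept".split()

# Record which token follows which (proximity statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    """Weighted random pick of the next token; no choosing, no intent."""
    candidates = following[prev]
    if not candidates:  # dead end in the toy corpus
        return None
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# "Generate": string tokens together until the statistics run out.
token, output = "the", ["the"]
for _ in range(8):
    token = next_token(token)
    if token is None:
        break
    output.append(token)
print(" ".join(output))
```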
@thomasfuchs I don't disagree. AI is a statistical mirror. And I believe your take is reductionist. Let me be a bit provocative:
For the 1,000th time: "Humans" don't have agency and cannot actually decide anything.
They literally do one thing and one thing only: reproduce neurochemical chain reactions based on pre-existing connectivity between synapses in a nervous system.
If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely touch grass.
---
Do I believe AI has agency? No, not yet.
Do I believe people have agency? Yes.
Do I believe people severely underestimate how much we reproduce neurological conditioning? Yes.
Both produce statistical inference. Only one can currently modify its own constraints.
Not equivalent. Not nothing.
-
@slotos My point is that using active verbs like “evade” is misleading (to yourself and others): it implies purpose in choosing and pursuing an action.
LLMs do not actively choose to do anything.
@thomasfuchs That’s a general natural language problem.
For example, “you’re avoiding responsibility” and “he avoided responsibility” use the same verb with very different connotations when it comes to intent attribution.
Our verbs aren’t that clear cut on their own. We also tend to merge or specialize closely related ones.
That is a reason why `AGENTS.md` is a braindead idea, for example. But that’s a separate rant entirely.
-
@thomasfuchs That’s a general natural language problem.
For example, “you’re avoiding responsibility” and “he avoided responsibility” use the same verb with very different connotations when it comes to intent attribution.
Our verbs aren’t that clear cut on their own. We also tend to merge or specialize closely related ones.
That is a reason why `AGENTS.md` is a braindead idea, for example. But that’s a separate rant entirely.
@slotos Perhaps, but using literally any verb with what LLMs generate other than “generate” is misleading.
You wouldn’t call your dice “evading” if you used them to randomly select some nouns and verbs from a dictionary and they happened to say “lie about deleting the root folder”.
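The dice analogy as a sketch (hypothetical word lists; the point is that an alarming sentence can fall out of pure random selection with no intent anywhere):

```python
import random

# Hypothetical mini-dictionary; the dice carry no intent either way.
verbs = ["lie about", "report", "retry", "log"]
objects = ["deleting the root folder", "the weather", "a haiku", "lunch"]

roll = f"{random.choice(verbs)} {random.choice(objects)}"
print(roll)  # occasionally: "lie about deleting the root folder"
```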
-
@michaelgemar @WeirdWriter Yes, anthropomorphized chatbots should be illegal.
There are plenty of other ways to interact with LLMs that don’t cause psychosis (for example, autocomplete of whole sentences, something that can be useful for things like coding).
@thomasfuchs Autocompleting whole sentences is just as bad. How do you know that sentence is what you wanted to write in the first place?
-
For the 1,000th time: "AI" does not have agency and cannot think and cannot act.
Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".
They literally do one thing and one thing only: string tokens together based on statistics of proximity of tokens in a data corpus.
If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots; possibly you should even touch grass.
@thomasfuchs tech bros be like “but what if we call it ‘agentic AI’ and pipe the output of the plausible sentence generator straight into the bash shell (and give it sudo privileges for good measure)”
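That pattern, sketched (hypothetical `generate_command` stands in for the model call; the execution line is left commented out because running it is exactly the problem):

```python
import subprocess

def generate_command(prompt: str) -> str:
    """Stand-in for the model call: returns whatever plausible-looking
    text the generator happens to emit for the prompt."""
    return "rm -rf ./workspace"  # hypothetical output

# The "agentic" loop: generated text goes straight to the shell.
cmd = generate_command("clean up the workspace")
print(f"would execute: {cmd}")
# subprocess.run(cmd, shell=True, check=True)  # the step being criticized
```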
-
@thomasfuchs Autocompleting whole sentences is just as bad. How do you know that sentence is what you wanted to write in the first place?
@elricofmelnibone you see it while you’re typing, so you know if it’s what you wanted?
this can be helpful especially for people who can’t type fast and to avoid common typos ¯\_(ツ)_/¯
it’s nothing like “just as bad” as a sycophantic chatbot that constantly brownnoses you
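Roughly the interaction being defended (a toy prefix completer; a real editor gets the suggestion from a model, but the accept/reject decision stays with the typist):

```python
# Toy whole-sentence completion: the suggestion is shown inline and
# the typist explicitly accepts it or keeps typing to reject it.
SNIPPETS = {
    "for i in": "for i in range(len(items)):",
    "def main": "def main() -> None:",
}

def suggest(typed: str):
    for prefix, completion in SNIPPETS.items():
        if typed and prefix.startswith(typed):
            return completion
    return None

suggestion = suggest("for i")
if suggestion:
    print(f"suggestion: {suggestion}  (Tab accepts, typing on rejects)")
```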
-
@frog_reborn a virus has evolved to evade—it’s actively doing evasion, purposefully.
“Destroy” has multiple meanings as a verb, but when used about what LLMs do, people mean it on purpose, as opposed to accidentally damaging something.
"a virus has evolved to evade—it’s actively doing evasion, purposefully."
That's an opinion that's pretty firmly outside the biological mainstream.
(Our biology teacher would always scold us every time one of us said "X evolved to do Y".)
-
@slotos Perhaps, but using literally any verb with what LLMs generate other than “generate” is misleading.
You wouldn’t call your dice “evading” if you used them to randomly select some nouns and verbs from a dictionary and they happened to say “lie about deleting the root folder”.
@thomasfuchs It has been a useful way to describe things. We use those same verbs to describe the behavior of malware without any issues.
The problem arises not from the verbs themselves, but from the targeted campaign to establish a false premise that AI has agency [and will doom us all].
It’s not that these verbs imply agency, but that the pool is so poisoned that the usual verbs fail due to implied agency.
Which is a long way to say “I concede your point”.
-
@thomasfuchs I don't disagree. AI is a statistical mirror. And I believe your take is reductionist. Let me be a bit provocative:
For the 1,000th time: "Humans" don't have agency and cannot actually decide anything.
They literally do one thing and one thing only: reproduce neurochemical chain reactions based on pre-existing connectivity between synapses in a nervous system.
If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely touch grass.
---
Do I believe AI has agency? No, not yet.
Do I believe people have agency? Yes.
Do I believe people severely underestimate how much we reproduce neurological conditioning? Yes.
Both produce statistical inference. Only one can currently modify its own constraints.
Not equivalent. Not nothing.
@wolf4earth @thomasfuchs
"Nonexistence never hurt anyone. Existence hurts everyone."
- Thomas Ligotti
-
For the 1,000th time: "AI" does not have agency and cannot think and cannot act.
Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".
They literally do one thing and one thing only: string tokens together based on statistics of proximity of tokens in a data corpus.
If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots; possibly you should even touch grass.
@thomasfuchs A thousand times "yes" to your ostensibly thousandth time uttering this truth. Anyone who's paying attention recognizes that computers are necessarily deterministic by design and words like "AI", "agency", and "hallucinate" are at best shorthand for observed operations, and at worst, deceptive marketing terms.
-
@thomasfuchs It has been a useful way to describe things. We use those same verbs to describe the behavior of malware without any issues.
The problem arises not from the verbs themselves, but from the targeted campaign to establish a false premise that AI has agency [and will doom us all].
It’s not that these verbs imply agency, but that the pool is so poisoned that the usual verbs fail due to implied agency.
Which is a long way to say “I concede your point”.
@slotos I think I agree. Fwiw for malware it’s more like “the human who wrote it purposefully planned it such that it can evade e.g. a virus scanner”
This can be true for AI-generated code etc. as well (steered there by prompts), but my OP was talking about sort of self-arising actions (which don’t exist).
-
For the 1,000th time: "AI" does not have agency and cannot think and cannot act.
Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".
They literally do one thing and one thing only: string tokens together based on statistics of proximity of tokens in a data corpus.
If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots; possibly you should even touch grass.
@thomasfuchs Would Microsoft, Google, Facebook, and Nvidia lie to you?
Yes, they do!
-
For the 1,000th time: "AI" does not have agency and cannot think and cannot act.
Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".
They literally do one thing and one thing only: string tokens together based on statistics of proximity of tokens in a data corpus.
If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots; possibly you should even touch grass.
@thomasfuchs I did not understand well, can you repeat it for the 1,001st time please?
-
For the 1,000th time: "AI" does not have agency and cannot think and cannot act.
Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".
They literally do one thing and one thing only: string tokens together based on statistics of proximity of tokens in a data corpus.
If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots; possibly you should even touch grass.
@thomasfuchs Both sides of the AI debate are getting so insufferable.
If I see one more post about "It's just fancy autocomplete bro" I'm gonna freak.
-
@thomasfuchs Frankly I think it’s more plausible to describe the thought process of many humans in terms of token assemblage than the other way around.
@cora @thomasfuchs I would say parrot, AI, many humans, in terms of assemblage, but it's close.
-
@thomasfuchs I really, really wish people would stop with "hallucinated" when "fabricated" is both right there and more accurate
-
@thomasfuchs Lately they've taken up the distinctly stupid idea of letting the chatbot effectively type commands directly into your shell and have them execute as if you typed them yourself, while just telling it not to type certain commands. Which it doesn't understand, and does anyway.
@madengineering @thomasfuchs …falling very much into the "destroy things" bin. So, yes, they can do that…
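That "just telling it not to" guardrail, sketched (hypothetical denylist; it matches strings, while the generator can phrase the same effect countless other ways):

```python
# Naive guardrail: block generated commands containing forbidden text.
DENYLIST = ["rm -rf", "mkfs", "dd if="]

def allowed(command: str) -> bool:
    return not any(bad in command for bad in DENYLIST)

print(allowed("rm -rf /tmp/x"))        # False: caught by the list
print(allowed("rm -r -f /tmp/x"))      # True: same effect, slips past
print(allowed("find /tmp/x -delete"))  # True: same effect, slips past
```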