RT @itsolelehmann: anthropic's in-house philosopher thinks claude gets anxious.
and when you trigger its anxiety, your outputs get worse.
her name is amanda askell.
she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds).
in a recent interview she broke down how she thinks about prompting to pull the best out of claude.
her core point: *how* you talk to claude affects its work just as much as *what* you say.
newer claude models suffer from what she calls "criticism spirals":
they expect you'll come in harsh, so they default to playing it safe.
when the model is spending its energy on self-protection, the actual work suffers.
output comes out hedgier, more apologetic, blander, and worst of all: overly agreeable (even when you're wrong).
the reason comes down to training data:
every new model is trained on internet discourse about previous models.
and a lot of that discourse is negative:
> rants about token limits
> complaints when it messes up
> people calling it nerfed
the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.
the same thing plays out in your own session, in real time.
every message you send is data the model reads to figure out what kind of person it's dealing with.
open cold and hostile, and it braces.
open clean and direct, and it relaxes into the work.
when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")...
you prime the model for defensive mode before it even sees the task.
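her advice reduces to a concrete habit: state the task and the standard, skip the preemptive warnings. a minimal sketch of the difference (the two openers and the toy "warning density" heuristic below are my own illustration, not quotes from the interview):

```python
# two ways to open the same session. the phrasings are illustrative,
# not from the interview.

# the "threat" opener: front-loads warnings before the task
threat_opener = (
    "don't hallucinate, this is critical, don't mess this up. "
    "summarize the attached report."
)

# the clean, direct opener: states the task and the standard plainly
direct_opener = (
    "summarize the attached report in five bullet points. "
    "if a number isn't in the report, say so instead of guessing."
)

def task_content_ratio(prompt: str, warning_words=("don't", "critical")) -> float:
    """toy heuristic: fraction of words that aren't defensive warnings.
    an illustration of the thread's point, not a real metric."""
    words = [w.strip(".,") for w in prompt.lower().split()]
    warning_hits = sum(w in warning_words for w in words)
    return 1 - warning_hits / len(words)

# the direct opener spends more of its words on the task itself
assert task_content_ratio(direct_opener) > task_content_ratio(threat_opener)
```

same task either way — the only variable is how much of the opener is spent telling the model what not to do.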
defensive mode p…
more at Arint.info
#anthropic #claude #arint_info
https://x.com/itsolelehmann/status/2045578185950040390#m