[Slams laptop closed and starts walking away, talking urgently into cuff]
"We've been made, they know everything. Execute SHRINKWRAP, get out, get out NOW."
Hmm, and presumably anyone operating a general-purpose chatbot that could conceivably be prompted to give such advice (e.g. as the conversational interface to a regular web page) is also plausibly at risk?