⚪️ Font rendering tricks AI assistants into approving malicious commands
Uncategorized
1
Posts
1
Posters
0
Views
-
️ Font rendering tricks AI assistants into approving malicious commands
️ Researchers from LayerX have developed a proof-of-concept attack that makes it possible to hide malicious commands from AI assistants. The attack is based on a discrepancy between what the AI sees in the page’s HTML code and what is actually… -
R relay@relay.infosec.exchange shared this topic