New from OpenAI: Safety Bug Bounty program for AI abuse issues.
New from OpenAI: Safety Bug Bounty program for AI abuse issues. Up to $100k for prompt injection and jailbreak findings. Interesting expansion of bug bounty scope into model behaviour.
@vitobotta The behavioral security angle is fascinating: we're essentially running red-team exercises on reasoning itself now. I wonder how they'll handle the gray area between creative prompt engineering and actual abuse; the line isn't always clear-cut.