I used to see a lot of Mastodon posts about folks working to poison unwanted AI training on their stuff.
-
@User47 Is there a clear way through which you can do that?
-
@shibaprasad yeah, there was a lot of talk about a project I think was called Nightshade that could poison AI training. For example, a crawler would scrape an image of a car but walk away convinced it was an asparagus. Supposedly something similar existed for text. It was so cool.
Also there were like… traps? An AI crawler could get stuck endlessly processing nonsense pages
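The "trap" idea mentioned above is sometimes called a crawler tarpit: every URL serves procedurally generated nonsense plus links to more random URLs, so a naive crawler wanders forever without reaching real content. A minimal sketch of the concept, assuming nothing beyond the Python standard library (all names and the word list here are hypothetical, not from any real project):

```python
# Sketch of a crawler "tarpit": every path returns nonsense text
# plus links to more random paths, trapping a naive crawler in a maze.
import random
import string

# Hypothetical filler vocabulary for the generated pages.
WORDS = ["asparagus", "carburetor", "lorem", "nebula", "teapot", "quartz"]

def nonsense_paragraph(rng, n_words=40):
    """Generate a paragraph of random filler words."""
    return " ".join(rng.choice(WORDS) for _ in range(n_words))

def random_path(rng, length=8):
    """Generate a random URL path segment to link deeper into the maze."""
    return "/" + "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

def trap_page(rng, n_links=5):
    """Build an HTML page of nonsense text with links to more trap pages."""
    links = "".join(f'<a href="{random_path(rng)}">more</a>\n' for _ in range(n_links))
    return f"<html><body><p>{nonsense_paragraph(rng)}</p>\n{links}</body></html>"

if __name__ == "__main__":
    # Serve the maze on localhost:8080 using only the standard library.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class TrapHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = trap_page(random.Random()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8080), TrapHandler).serve_forever()
```

Since every page is generated on the fly, the maze costs almost nothing to host but can consume unbounded crawler time; well-behaved crawlers avoid it by respecting robots.txt and capping crawl depth.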
-
@User47 Essentially performative. The percentage of people actually trying this is negligible, and training data gets sanitized for obvious garbage before it's used. The models are already highly capable; you're not going to make a model stupid with some Markov-chain nonsense pages.
I think vocal pushback from a lot of people is far more effective as a stand against the overuse of AI.