I now have an AI policy: https://github.com/iam-py-test/iam-py-test/blob/main/ai.md
It's the same policy from my my_filters_001 repository, but now it applies to all my repositories and has a severability clause.
Why does it have a severability clause?
Would my threats be more threatening if they weren't filled with typos? So many questions, so little time.
For the purposes of this policy, artificial intelligence refers specifically to generative artificial intelligence. Generative artificial intelligence is a type of AI which learns patterns from its training data and uses those patterns to generate new data, such as images or code. Code created by generative artificial intelligence frequently contains security vulnerabilities and bugs. Text from generative artificial intelligence frequently contains errors and misrepresentations of sources. Generative artificial intelligence is also bad at handling context, meaning it is prone to taking misinformation, satire, roleplay, fiction, and suggestions as fact. Furthermore, for people learning to code, AI use hinders learning. For these reasons and many others, I have chosen to greatly restrict usage of AI, stopping short of a full ban for now.
-