So today I found this cool project on Codeberg and wanted to share it with you!
https://codeberg.org/robida/human.json
It is meant to build a web of people confirming each other as human, so you get a root of trust that extends outward! In this age where many just post AI slop, this is great!
That is why I have already implemented it on my blog, which is hosted on Codeberg Pages! -
@Wolkensteine it would be neat to make it more holistic, so it also extends to art, code, translations and such
i don't want to be exposed to genai anything -
@lumi
Yeah, currently this can only handle webpages, and on a per-domain basis, if I understood correctly. But at least for web content, independent of the type, this could help identify quality content.
Do you have an idea how one could extend this to things beyond webpages? I would be interested to hear your thoughts.
-
@Wolkensteine well, the point here is a web of trust.
human.json could be extended to vouch that nothing on the site was made with genai; not the code, not the artwork, not the translations, nothing at all
for code this could be difficult in the short term; we need to inform non-programmers about how genai is being used in code, and how they can find places to publish where they can be reasonably certain the project is against genai
but i think a human.json like that would be fine for websites
now, if we are talking about sites where content is hosted but the people on the site are not the ones running the site, it gets more thorny
i do think we should have code forges, art websites, etc that completely ban genai in their ToS and vouch for things that way
in the case of code, we could also have a file like VALUES.md which contains the values of the project. i have been thinking about this for a while now, and when i have the spoons i will be drafting templates for it, such that projects can easily adopt a policy with a full ban on genai (as well as being inclusive, anti-capitalist, and all the other good stuff) -
@Wolkensteine it would be very nice if @Codeberg could make a stand here, but i am also aware it can be difficult to enforce such a policy. so i feel enforcement should only be done in obvious cases and after repeated warnings
genai boosters tend to be very obvious about it and they would rather leave the platform than not be able to promote their abusive tech, so that does make enforcement a bit easier
it's also better that they have to lie about their usage of it than that they can be proud of it
this is something i would love to have a discussion on with other people, to try and create a sane policy that will not affect innocent projects -
@lumi @Codeberg
I've read a bit through the project's issues and found that after v0.2.0 they plan to add not only a notes field but also a way to flag bad URLs. Also, I seem to have understood the workings a bit wrong: you can vouch not only for a whole domain but also for subpages on that domain, which is certainly nice for sites where multiple people publish. This makes it more granular, and with notes you could explain why you vouch for a page.
Currently they suggest that all sites add an /ai page to describe their policies (personally, I will just refer people to my blog post about AI, since I am not against neural learning as such but mostly against how it is done right now, and explaining that needs a bunch of text). But since this all seems to be in its early stages, it might change later on.
I would also love it if @Codeberg added a policy for that, but sadly many people want to dismiss that kind of rule just because it appears to be unenforceable.
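(The vouching mechanism described here is essentially a graph traversal. As a rough sketch — the data layout and names below are invented for illustration and are not the project's actual human.json schema — checking whether a page is reachable from someone you trust could look like this:)

```python
from collections import deque

# Hypothetical vouch data: each identity maps to the pages and people its
# owner vouches for. Structure and names are made up for illustration;
# the real human.json format may differ.
vouches = {
    "alice.example": ["bob.example", "blog.example/carol"],
    "bob.example": ["dave.example"],
    "dave.example": [],
}

def is_vouched(target, root, vouches):
    """Breadth-first search: is `target` reachable from the trusted
    `root` by following vouch links?"""
    seen = {root}
    queue = deque([root])
    while queue:
        current = queue.popleft()
        if current == target:
            return True
        for entry in vouches.get(current, []):
            if entry not in seen:
                seen.add(entry)
                queue.append(entry)
    return False
```

(Following the chain alice → bob → dave, `is_vouched("dave.example", "alice.example", vouches)` returns True, while an address nobody vouches for is unreachable. It also shows why one careless vouch taints everything downstream of it, which is where the planned bad-URL flags would come in.)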
-
@Wolkensteine @Codeberg oh that is very neat indeed
it's actually way more enforceable than people expect, because genai boosters tend to be so obvious about it. and i think banning obvious use or promotion of genai is already a great first step. at least don't let them proudly use it
also have a document stating codeberg itself is completely against genai
at that point, why would genai boosters use codeberg? they can just use github, and they can be proud and out about their dehumanization machines there -
@lumi @Codeberg
Personally, I also think that even a rule that might not be realistic to enforce in every scenario will be helpful, since there are likely more good-faith individuals than bad-faith ones. It is the same with speed limits: at least in Germany we have some people who aggravate me greatly because they defend the unlimited speed allowed on the Autobahn, arguing that if there were a limit no one would adhere to it. Yet studies have shown that the mere presence of such a limit, even without constant enforcement, made most people adhere to it or at least drive considerably slower. Since the psychological effects should be quite similar, I would assume such a rule for AI would have a similar impact. Furthermore, as you say, it can actually be enforced to such a degree that using AI becomes enough of a hurdle on Codeberg that people who want to use it just leave. -
@Wolkensteine @Codeberg yeah, do what you practically can and foster an environment toxic to dehumanization machines -
@lumi @Codeberg
Also, since AI crawlers have hurt Codeberg's uptime in the past, Codeberg's users are probably not inclined to speak in favour of AI anyway.
At least Codeberg has this in the ToS:
You must only share content on Codeberg which you have the explicit right under copyright and other laws to share under the legal terms with which the content is made available on Codeberg.
Which in my opinion should already forbid the use of AI on its own, but still, a separate statement would be nice.
And judging from § 2.1.6, Codeberg's ToS already contains things many would count as political (although it basically just says: German law exists, and so do human rights)
-
@Wolkensteine @Codeberg i use codeberg and i absolutely detest the dehumanization machine x)
the copyright angle is a grey area, it might become legal at some point, or might become illegal
it is best to outlaw genai on ethical grounds. because even if it becomes legal, it is still unethical -
@lumi @Wolkensteine it reminds me of https://humanstxt.org/ but without the trust system. Good idea
