So today I found this cool project on Codeberg and wanted to share it with you! https://codeberg.org/robida/human.json It is meant to build a web of people confirming they are human, so you can have a root of trust that extends!

Tags: codeberg, noai, madebyhumans, lowcontent
14 posts, 3 posters, 25 views
This topic has been deleted. Only users with topic management privileges can see it.
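The opening post's idea, a machine-readable file in which a person vouches for other humans, can be sketched roughly like this. To be clear, the field names below are invented for illustration only; the actual schema is defined by the project at https://codeberg.org/robida/human.json.

```python
import json

# Illustrative sketch only: these field names are assumptions, NOT the
# actual schema, which is defined at https://codeberg.org/robida/human.json.
vouch_file = {
    "name": "Alice Example",         # hypothetical: the person vouching
    "url": "https://alice.example",  # hypothetical: their own site
    "vouches": [                     # hypothetical: humans they confirm
        {
            "url": "https://bob.example",
            "note": "met in person, verified human",
        }
    ],
}

# A site would serve something like this at a well-known location,
# e.g. https://alice.example/human.json
print(json.dumps(vouch_file, indent=2))
```

The trust "extends" because the vouched-for person's own file can in turn vouch for others, forming a web rooted in people one already trusts.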
lumi@snug.moe wrote:

    @Wolkensteine well, the point here is a web of trust. human.json could be extended to vouch that nothing in the site was made with genai; not the code, not the artwork, not the translations, nothing at all

    for code this could be difficult in the short term, we need to inform non-programmers about how genai is being used in code and how they can find ways to publish where they can be reasonably certain the project is against genai

    but i think a human.json like that would be fine for websites

    now, if we are talking about sites where content is hosted but the people on the site are not the ones running the site, it gets more thorny

    i do think we should have code forges, art websites, etc that completely ban genai in their ToS and vouch for things that way

    in case of code, we also could have a file like VALUES.md which contains the values of the project. i have been thinking about this for a while now, and when i have the spoons i will be drafting templates for it, such that projects can easily get a policy with a full ban on genai (as well as being inclusive, anti-capitalist, and all the other good stuff)

    lumi@snug.moe wrote (#5):

    @Wolkensteine it would be very nice if @Codeberg could make a stand here, but i am also aware it can be difficult to enforce such a policy. so i feel enforcement should only be done in obvious cases and after repeated warnings

    genai boosters tend to be very obvious about it and they would rather leave the platform than not be able to promote their abusive tech, so that does make enforcement a bit easier

    it's also better that they have to lie about their usage of it than that they can be proud of it

    this is something i would love to have a discussion on with other people, to try and create a sane policy that will not affect innocent projects

      wolkensteine@mastodon.wolkenheim.eu wrote (#6):

      @lumi @Codeberg
      I've read a bit through the project's issues and found that after v0.2.0 they plan to add not only a notes field but also a way to point out bad URLs. Also, I seem to have understood the workings a bit wrong: you can vouch not only for a domain but also for subpages on that domain. This is certainly nice for pages where multiple people might publish. It will at least make vouching more granular, and with notes you could add why you vouch for something.
      Currently they suggest that all sites add an /ai page to describe their policies (personally, I will just refer people to my blog post about AI, since I am not against neural learning but mostly against how it is done right now, and that needs a bunch of text). But since this all seems to be in its early stages, it might change later on.

      I would also love it if @Codeberg added such a policy, but sadly many people want to dismiss that kind of rule just because it appears to be unenforceable.
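The domain-versus-subpage granularity described above can be illustrated with a simple path-scoping check. The function and its rule are a hypothetical sketch, not the project's actual matching logic:

```python
from urllib.parse import urlparse

def vouch_covers(vouched_url: str, page_url: str) -> bool:
    """Hypothetical scoping rule: a vouch covers a page when both share
    the same host and the page's path sits under the vouched path."""
    v, p = urlparse(vouched_url), urlparse(page_url)
    if v.netloc != p.netloc:
        return False
    # Normalize with a trailing slash so "/~alice" cannot match "/~alice2".
    vouched_path = v.path.rstrip("/") + "/"
    page_path = p.path.rstrip("/") + "/"
    return page_path.startswith(vouched_path)

# A vouch for a whole domain covers every page on it...
print(vouch_covers("https://example.org/", "https://example.org/blog/post"))
# ...while a vouch for one author's subpages does not extend to siblings.
print(vouch_covers("https://site.example/~alice/", "https://site.example/~bob/page"))
```

This shows why subpage vouching matters on multi-author hosts: one person can vouch for their own corner without implicitly vouching for everyone else on the same domain.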

        wolkensteine@mastodon.wolkenheim.eu wrote (#7):

        @lumi @Codeberg
        If Codeberg took such a policy to a members' vote (members of the e.V.), then I'd certainly vote for that policy.

          lumi@snug.moe wrote (#8):

          @Wolkensteine @Codeberg oh that is very neat indeed

          it's actually way more enforceable than people expect, because genai boosters tend to be so obvious about it. and i think banning obvious use or promotion of genai is already a great first step. at least don't let them proudly use it

          also have a document stating codeberg itself is completely against genai

          at that point, why would genai boosters use codeberg? they can just use github, and they can be proud and out about their dehumanization machines there

            wolkensteine@mastodon.wolkenheim.eu wrote (#9):

            @lumi @Codeberg
            Ah, and it was proposed to widen the reach of the link: since, by spec, HTML links can also be put into HTTP Link headers, this would allow attaching vouch links to other content floating around on your web server as well.
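For context: HTTP allows the same link relations as HTML's `<link>` element to be sent in a `Link` response header (RFC 8288), which is what makes this work for non-HTML content like images or tarballs. A minimal parsing sketch follows; the rel name "vouched-by" is a placeholder assumption, not the relation the project actually uses:

```python
import re

def find_vouch_links(link_header: str, rel: str = "vouched-by") -> list[str]:
    """Extract target URLs for a given rel from an HTTP Link header.
    The rel "vouched-by" is a hypothetical placeholder. This is a sketch:
    it splits naively on commas, which is fine for simple headers."""
    links = []
    # Each entry looks like: <https://...>; rel="something"; other=params
    for entry in link_header.split(","):
        m = re.match(r'\s*<([^>]+)>(.*)', entry)
        if not m:
            continue
        url, params = m.groups()
        if re.search(r'rel\s*=\s*"?' + re.escape(rel) + r'"?', params):
            links.append(url)
    return links

header = ('<https://alice.example/human.json>; rel="vouched-by", '
          '<https://x.example/s.css>; rel="stylesheet"')
print(find_vouch_links(header))  # ['https://alice.example/human.json']
```

A server could attach such a header to any response, so even a raw file download could advertise the vouch file that covers it.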

              wolkensteine@mastodon.wolkenheim.eu wrote (#10):

              @lumi @Codeberg
              Personally, I also think that even a rule that is not realistic to enforce in each and every scenario will be helpful, since there are likely more good-faith individuals than bad-faith ones. It is the same with speed limits: at least in Germany, some people aggravate me highly by defending the unlimited speed allowed on the Autobahn, claiming that if there were a limit, no one would adhere to it. Yet studies have shown that the mere presence of such a limit, even without constant enforcement, makes most people adhere to it, or at least drive considerably slower. Since the psychological effects should be quite similar, I would assume such a rule for AI would have a similar impact. Furthermore, as you say, it can actually be enforced to such a degree that using AI becomes more of a hurdle on Codeberg, so people who want to use it just leave.

                lumi@snug.moe wrote (#11):

                @Wolkensteine @Codeberg yeah, do what you practically can and foster an environment toxic to dehumanization machines

                  wolkensteine@mastodon.wolkenheim.eu wrote (#12):

                  @lumi @Codeberg
                  Also, since AI crawlers have hurt Codeberg's uptime in the past, Codeberg's users are probably not well-disposed toward AI in general.

                  At least Codeberg has this in its ToS:
                  "You must only share content on Codeberg which you have the explicit right under copyright and other laws to share under the legal terms with which the content is made available on Codeberg."
                  Which, in my opinion, should already forbid the use of AI on its own; but still, a separate statement could be nice.

                  And judging from § 2.1.6, Codeberg's ToS already contains things many would count as political (although it basically just says: German law exists, and so do human rights)

                    lumi@snug.moe wrote (#13):

                    @Wolkensteine @Codeberg i use codeberg and i absolutely detest the dehumanization machine x)

                    the copyright angle is a grey area: it might become legal at some point, or it might become illegal

                    it is best to outlaw genai on ethical grounds. because even if it becomes legal, it is still unethical

                    lumi@snug.moe wrote:

                      @Wolkensteine it would be neat to make it more holistic, so it also extends to art, code, translations and such

                      i don't want to be exposed to genai anything

                      pikselkraft@mastodon.design wrote (#14):

                      @lumi @Wolkensteine it reminds me of https://humanstxt.org/ but without the trust system. Good idea 🙂
