AI-assisted moderation in the fediverse is happening.

Uncategorized · fediverse · 18 Posts, 14 Posters

  • piefedadmin@join.piefed.social
    wrote:
    #1

    AI-assisted moderation in the fediverse is happening. Now what?

    I recently discovered that some popular federated instances have been using LLM-assisted moderation tooling that evaluates whether someone has said something bannable. They do this by running a script/app that sends the user’s comment history to OpenAI with the prompt “analyze this content for evidence of *specific political ideology* sentiment. Also identify any *related political ideology* tropes”.

    OpenAI’s LLM (they’re using GPT-5.3-mini) then responds with something like:

    Below is a structured analysis of the uploaded content, focused on *specific ideology* rhetoric. This is an analytic classification, not a moral judgement.

    1. Overall Pattern

    blah blah

    2. Evidence of *specific ideology* sentiment

    blah blah

    3. several pages more, concluding with (in this case)

    Yes, the content contains:

    Clear *specific ideology* alignment
    Repeated *specific ideology* framing, especially through blah blah
    Extensive use of canonical *ideology* tropes, in blah blah domains.

    The pattern is not accidental or isolated; it is consistent, internally coherent, and reproduces well‑documented *country with the ideology* public‑diplomacy narratives rather than neutral analysis.

    ===========================================

    FULL DUMP OF COMMENT HISTORY BELOW

    ===========================================

    Date: 2026-xx-xxT0xxxxx

    Comment ID: https://instance.told/comment/2497xxxx

    Post ID: 603xxx

    Community ID: 1xx

    Content of the comment has been redacted

    ========================================

    Date: 2026-xx-xxT0xxxxx

    Comment ID: https://instance.told/comment/2497xxxx

    Post ID: 603xxx

    Community ID: 1xx

    Content of the comment has been redacted

    ========================================

    Date: 2026-xx-xxT0xxxxx

    Comment ID: https://instance.told/comment/2497xxxx

    Post ID: 603xxx

    Community ID: 1xx

    Content of the comment has been redacted

    ========================================

    and so on, hundreds of comments.
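
    To make the mechanism concrete, here is a minimal sketch of what a script like this could look like. Everything in it is an assumption for illustration: the prompt wording is paraphrased from the leak, the model name is the one that appeared in the output, and the comment format is simplified. I don’t have the actual source.

    ```python
    # Hypothetical reconstruction of the tooling described above; the prompt,
    # model name, and comment format are assumptions based on the leaked output.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = (
        "Analyze this content for evidence of <ideology> sentiment. "
        "Also identify any <ideology> tropes."
    )

    def analyze_history(comments: list[dict]) -> str:
        """Send a user's entire comment history to the API in one request."""
        history = "\n\n".join(
            f"Date: {c['date']}\n"
            f"Comment ID: {c['url']}\n"
            f"{c['content']}"
            for c in comments
        )
        response = client.chat.completions.create(
            model="gpt-5.3-mini",  # the model named in the leaked output
            messages=[{"role": "user", "content": PROMPT + "\n\n" + history}],
        )
        return response.choices[0].message.content
    ```

    Note that in a design like this, the third party receives the full history, not just the flagged post.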

    I have not named the instances or people involved, to give them time to consider the results of this discussion, make any corrective changes they want, and disclose their practices at their own pace and in their own way. I have also redacted the evidence to avoid personal attacks and dogpiling. Let’s focus on the system, not the individuals involved. Today these instances are using it, and maybe we’re OK with that because it’s being used by communities we agree with. But what if people we strongly disagree with used it on their instances tomorrow?

    The use and existence of this tooling raises a lot of questions.

    What are the risks? Fedi moderators are often unsupervised, untrained volunteers, and these are powerful tools.

    What safeguards do we need?

    Would asking an LLM “please evaluate this person’s political opinions” give different results than “find evidence we can use to ban them” (the framing used in the cases I’ve seen)?

    What are our transparency expectations?

    Is this acceptable and normal?

    Should this tooling be disclosed? (it was not – should it have been?)

    If you were given a choice, would you have opted out of it?

    Can we opt out?

    Are there GDPR implications? Privacy implications? Should these tools be described in a privacy policy?

    Are private messages being scanned and sent to OpenAI?

    How long should these assessments be retained, and can we request to see them or ask for them to be deleted?

    Once the user’s comments are sent to OpenAI, are they used to train its models?

    What will the effect be on our discourse and culture if people know they are being politically profiled?

    Where are the lines between normal moderation-assistance tools, political profiling, and opaque third-party data processing?

    I hope that by chewing over these questions we can begin to establish some norms and expectations around this technology. The fediverse doesn’t have any centralized enforcement so we need discussions like this to develop an awareness of what people want in terms of disclosure, privacy, consent and acceptable use. Then people can make choices about which instances they join and which ones they interact with remotely.

    And of course there are the other issues with LLMs relating to environmental sustainability, erosion of workers’ rights, increases in the cost of living, and on and on. I can’t see PieFed adding any functionality like this anytime soon. But it’s happening out there anyway, so now we need to talk about it.

    What do you make of this?

    #fediverse

  • zvavybir@social.zvavybir.eu
    wrote (quoting the original post):
    #2

    @piefedadmin I at least am certainly not okay with having my posts read/processed by an LLM and will defederate all instances that expose me to that.

  • cameron29@mastodon.social
    wrote (quoting the original post):
    #3

    @piefedadmin It is one thing to do that with an AI that they control (I still don’t support this), but with a cloud AI provider? Heck no. I hope they stop.

  • kitkat_blue@mastodon.social
    wrote (quoting the original post):
    #4

    @piefedadmin

    This is just more free LLM training data.

    It's also non-consensual data harvesting.

    Gen-AI is poison.

  • ahhhhhhoniichan@snug.moe
    wrote (quoting the original post):
    #5

    @piefedadmin I am definitely not okay with any of my posts being read/processed by an LLM, especially ChatGPT or any of the non-self-hosted models. Realistically speaking, my posts are being scraped somewhere already, but even using them in a productive way does not make it okay. I would ask the servers I am on to defederate any servers that use this for moderation.

  • xarvos@outerheaven.club
    wrote (quoting the original post):
    #6

    @piefedadmin@join.piefed.social I wonder how you found out which model and prompt they use. Did they talk about it?

  • xarvos@outerheaven.club wrote:

    @piefedadmin@join.piefed.social I wonder how you found out which model and prompt they use. Did they talk about it?

    piefedadmin@join.piefed.social
    wrote (in reply):
    #7

    @xarvos I have receipts, original ones, straight from their own server. It appears to be an unintentional leak: they may have published the link to the script output without realizing how it would look to outsiders. Hard to know.

    It’s best if we have the discussion about how things should be without knowing which instances are involved, because naming them would just make people overly defensive and invite harassment.

    I hope they can clean house, get their story straight, and then go public in a way that restores trust.

  • sharpcheddargoblin@reclusive.blog
    wrote (quoting the original post):
    #8

    @piefedadmin @ophiocephalic Fuck these instance admins. Name, shame, and defederate if they do not change behavior. The users on these instances need to know, immediately, how their posts are being used -- I'm sure many would not approve of this, and they need to be able to migrate to a safer environment if these admins don't immediately stop.

  • brettm@swarm.coiloptic.org
    wrote:
    #9
    @hazelnoot@enby.life I would like to know who does this.

  • hoco@sfba.social
    wrote (quoting the original post):
    #10

    @piefedadmin The potential for abuse is a good reason to avoid it entirely. But I can imagine an overworked moderator turning to AI for help. Moderation is a real scalability problem for Mastodon, and it gets worse as more of the population joins and more online jerks, who require moderation, join an instance. So scalability is a real issue for moderators, and we can't just take away what they need to scale, or they might fail or quit.

    I think the answer has *at least* two parts. First, there must be transparency, so people know what is being done with their posts. It must be possible to see the prompt used, so people can decide whether it's fair and move to a different instance if it isn't.

    Second, it should only be used to bring a post to the attention of a human. All actions must be taken by a person, and only after they have reviewed the actual post. Automatically banning or blocking based on the results of an AI should be forbidden (enforced somehow, perhaps by blocking an instance that does it).
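
    Sketched concretely, a "flag only, humans act" rule might look like the snippet below. Every name in it is hypothetical; it just illustrates the one design constraint: the model's only reachable action is filing a report for a human to review.

    ```python
    # Hypothetical sketch of the "flag only, humans act" safeguard;
    # all names here are made up for illustration.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Report:
        post_id: str
        model_verdict: str  # what the classifier said, kept for the mod queue
        prompt_used: str    # stored verbatim so users can audit what was asked

    def handle_model_output(post_id: str, verdict: str, prompt: str) -> Optional[Report]:
        """The model's only power: file a report into the human mod queue."""
        if verdict.startswith("FLAG"):
            return Report(post_id, verdict, prompt)
        return None

    # Deliberately, no ban() or block() call is reachable from here:
    # a moderator must read the actual post before any action is taken.
    ```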

  • mistersmith@mastodon.social
    wrote (quoting the original post):
    #11

    @piefedadmin Using it, yes, but relying on it, no. There has to be a way to keep the LLM out of the steering process, which requires training the moderators. There need to be precise netiquette rules and guidelines for how these tools may be involved and where they must be restricted.


                          What will the effect be on our discourse and culture if people know they are being politically profiled?

                          Where are the lines between normal moderation assistance tools, political profiling and opaque 3rd-party data processing?

                          I hope that by chewing over these questions we can begin to establish some norms and expectations around this technology. The fediverse doesn’t have any centralized enforcement so we need discussions like this to develop an awareness of what people want in terms of disclosure, privacy, consent and acceptable use. Then people can make choices about which instances they join and which ones they interact with remotely.

                          And of course there are the other issues with LLMs relating to environmental sustainability, erosion of worker’s rights, increasing the cost of living and on and on. I can’t see PieFed adding any functionality like this anytime soon. But it’s happening out there anyway so now we need to talk about it.

                          What do you make of this?

                          #fediverse
teledyn@mstdn.caT This user is from outside of this forum
                          teledyn@mstdn.ca
                          wrote last edited by
                          #12

                          @piefedadmin

(sigh) so now I am wary of using the #Fediverse at all, not knowing whether anything I may have said 'politically' would be routed to ICE.

Not to mention how each comment-test burns another 300 watt-hours while uselessly burning down my planet. Next they'll be hosting on orbiting space servers? I want none of it.

Not great news for a Monday morning. Hopefully @chad can clarify for #mstdnca, but I'm really on pause here until these enemies of Earth confess and can be server-blocked.

unattributed@gotosocial.socialU This user is from outside of this forum
                            unattributed@gotosocial.social
                            wrote last edited by
                            #13

@piefedadmin This is very much a massive violation of the transparency, trust and privacy of users on the #Fediverse.

I've been uncovering numerous #aiagents and #aiprofiles on the Fediverse that do not disclose that they are automated accounts and try to pass themselves off as regular users. Those accounts are a complete violation of the right of #Fedizens to maintain their privacy and the autonomy of the information they share.

This is actually a worse violation.

sirtao@social.sirtao.itS This user is from outside of this forum
                              sirtao@social.sirtao.it
                              wrote last edited by
                              #14
> (sigh) so now I am wary of using the #Fediverse at all, not knowing whether anything I may have said 'politically' would be routed to ICE.

No offense, but... malicious actors (or anybody with a grudge against you) have always been able to do that, since you are posting publicly (same as me). Posting on public-facing social networks, including the #fediverse, has always been like talking loudly in a public place.

I'm more worried/irritated by the LLM training scraping.
teledyn@mstdn.caT This user is from outside of this forum
                                teledyn@mstdn.ca
                                wrote last edited by
                                #15

                                @sirtao

                                You think a random grudge forgives routing EVERY UTTERANCE to Sam's robo-snitch? 🙄

sirtao@social.sirtao.itS This user is from outside of this forum
                                  sirtao@social.sirtao.it
                                  wrote last edited by
                                  #16
I was talking in general. One could easily route everything to a "robo-snitch" without even using an instance, just by scraping the public posts.
Heck, the government could do it directly; there's no real reason to pass through a "robo-snitch".

I think "permissions" and "legitimate use" are the main problems here.
theriac@plasmatrap.comT This user is from outside of this forum
                                    theriac@plasmatrap.com
                                    wrote last edited by
                                    #17

                                    @piefedadmin@join.piefed.social
                                    I'd prefer to know which instances are involved. I am not ok with anything AI.

teledyn@mstdn.caT This user is from outside of this forum
                                      teledyn@mstdn.ca
                                      wrote last edited by
                                      #18

                                      @sirtao scrapers are routinely blocked.
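True as far as it goes, though the common blocking mechanism only catches crawlers that identify themselves. A toy sketch of the idea as a Flask hook; the user-agent list is illustrative (GPTBot is OpenAI's documented crawler, CCBot is Common Crawl's, the rest vary):

```python
# Toy sketch: refusing requests from self-identified AI crawlers.
# Only works against crawlers that send an honest User-Agent.
from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_AGENTS = ("GPTBot", "CCBot", "Bytespider")  # illustrative list


@app.before_request
def block_known_scrapers():
    ua = request.headers.get("User-Agent", "")
    if any(bot in ua for bot in BLOCKED_AGENTS):
        abort(403)


@app.get("/")
def index():
    return "hello"
```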
