I can't tell you how much it pisses me off when I find an interesting study on problems with AI and their methodology includes running the data through a fucking chatbot.

Uncategorized · 18 Posts · 13 Posters
#1 — davidgerard@circumstances.run:

I can't tell you how much it pisses me off when I find an interesting study on problems with AI and their methodology includes running the data through a fucking chatbot. what the arsing fuck are you even doing, you idiot.
#2 — davidgerard@circumstances.run (replying to #1):

today's - a paper on which chatbots are most politically censorious. looks great! THE ANALYSIS STEP WAS FUCKING CHATGPT
#3 — oxy@social.bsdlab.au (replying to #1):

@davidgerard why would you even bother. Is anyone actually publishing anything who hasn't been one-shotted?
#4 — davidgerard@circumstances.run (replying to #3):

@oxy can't wait for the entire field of machine learning to realise it has to start over again from 2022
#5 — ra@mstdn.social (replying to #4):

@davidgerard @oxy The broken things moved fast.
#6 — meznor@mstdn.social (replying to #4):

@davidgerard @oxy and unfortunately they will have destroyed any credibility they may have gained until then, and no one will trust anything they do ever again (right.. right?!?)
#7 — fayedrake@furry.engineer (replying to #2):

@davidgerard I'm always amazed by shit like this.

You've got Musk out there with Grok openly tuned to be right-wing biased.

You think the other chatbots aren't encoding implicit, unexaminable bias too? The only difference is that while (at least in the case of OpenAI) they're led by sociopathic incompetents, they're at least smart enough to listen to the intelligent evil people in the room.

Only Musk has the level of reality distortion capable of replacing a PR department.
#8 — guerillaontologist@social.coop (replying to #2):

@davidgerard So it's a "chatbot rates other chatbots" paper. Just what we need. 🤦‍♂️ People's brains are so f'n cooked...
#9 — rycochet@furs.social (replying to #7):

@FayeDrake @davidgerard Isn't Gemini using Grokipedia as a source now? Elon's getting control of the front segments of the human centipede of AI, so even if the rest were run to be perfectly neutral, which they aren't, right-wing bias is going to be excreted along the chain.
#10 — fayedrake@furry.engineer (replying to #9):

@Rycochet @davidgerard it's so stupid and obvious.

Just so fucking stupid I lose brain cells every time I think about it.

All we can do is whatever small activism we can, then hide out on the corners of the indie-web and commiserate over how the emperor has no clothes.
#11 — grouchybeast@mastodon.social (replying to #1):

@davidgerard The otherwise great paper on LLMs hallucinating images in image analysis benchmarking made me facepalm when they used ChatGPT to assess some of the output. WHAT ARE YOU DOING? LLMS MAKE THINGS UP WHEN ASKED TO ANALYSE DATA! THAT IS LITERALLY THE ENTIRE THESIS OF YOUR PAPER!
#12 — cliffsesport@mastodon.social (replying to #11):

@Grouchybeast @davidgerard Recursive irony?
#13 — brass75@twit.social (replying to #1):

@davidgerard highlighting problems with AI. @grim_elsewhere
#14 — ghostonthehalfshell@masto.ai (replying to #11):

@Grouchybeast @davidgerard Cue circular firing squad.
#15 — tuban_muzuru@beige.party (replying to #11):

@Grouchybeast @davidgerard The very effing idea, that an LLM is some sort of Answer Machine. Cargo cultists.
#16 — andrei_chiffa@mastodon.social (replying to #11):

@Grouchybeast @davidgerard @reedmideke usually on request by reviewers or senior faculty.
#17 — davidgerard@circumstances.run (replying to #16):

@andrei_chiffa @Grouchybeast @reedmideke they know where the funding comes from
#18 — andrei_chiffa@mastodon.social (replying to #17):

@davidgerard @Grouchybeast @reedmideke not sure about reviewers, but TBH for some senior faculty I have observed what I refer to as "LLM-induced prefrontal cortex ablation". Despite offloading thousands to LLM providers rather than getting a cent from them, they keep insisting that LLMs should be used for everything, criticizing actual human evaluations as something that "could have been done better by GPT 4.X/5.X or Claude".