I can't tell you how much it pisses me off when I find an interesting study on problems with AI and their methodology includes running the data through a fucking chatbot.
-
I can't tell you how much it pisses me off when I find an interesting study on problems with AI and their methodology includes running the data through a fucking chatbot. what the arsing fuck are you even doing, you idiot.
-
I can't tell you how much it pisses me off when I find an interesting study on problems with AI and their methodology includes running the data through a fucking chatbot. what the arsing fuck are you even doing, you idiot.
today's - a paper on which chatbots are most politically censorious. looks great! THE ANALYSIS STEP WAS FUCKING CHATGPT
-
I can't tell you how much it pisses me off when I find an interesting study on problems with AI and their methodology includes running the data through a fucking chatbot. what the arsing fuck are you even doing, you idiot.
@davidgerard why would you even bother. Is anyone actually publishing anything who hasn't been one-shotted?
-
@davidgerard why would you even bother. Is anyone actually publishing anything who hasn't been one-shotted?
@oxy can't wait for the entire field of machine learning to realise it has to start over again from 2022
-
@oxy can't wait for the entire field of machine learning to realise it has to start over again from 2022
@davidgerard @oxy The broken things moved fast.
-
@oxy can't wait for the entire field of machine learning to realise it has to start over again from 2022
@davidgerard @oxy and unfortunately they will have destroyed any credibility they may have gained until then and no one will trust anything they do ever again (right.. right?!?)
-
today's - a paper on which chatbots are most politically censorious. looks great! THE ANALYSIS STEP WAS FUCKING CHATGPT
@davidgerard I’m always amazed by shit like this
You’ve got Musk out there with Grok openly tuned to be right wing biased.
You think the other chatbots aren't encoding implicit, unexaminable bias too? The only difference is that while (at least in the case of OpenAI) they're led by sociopathic incompetents, they're at least smart enough to listen to the intelligent evil people in the room.
Only Musk has the level of reality distortion capable of replacing a PR department.
-
today's - a paper on which chatbots are most politically censorious. looks great! THE ANALYSIS STEP WAS FUCKING CHATGPT
@davidgerard
So it's a "chatbot rates other chatbots" paper. Just what we need.
People's brains are so f'n cooked...
-
@davidgerard I’m always amazed by shit like this
You’ve got Musk out there with Grok openly tuned to be right wing biased.
You think the other chatbots aren't encoding implicit, unexaminable bias too? The only difference is that while (at least in the case of OpenAI) they're led by sociopathic incompetents, they're at least smart enough to listen to the intelligent evil people in the room.
Only Musk has the level of reality distortion capable of replacing a PR department.
@FayeDrake @davidgerard Isn't Gemini using Grokipedia as a source now? Elon's getting control of the front segments of the Human centipede of AI so even if the rest were run to be perfectly neutral, which they aren't, right wing bias is going to be excreted along the chain.
-
@FayeDrake @davidgerard Isn't Gemini using Grokipedia as a source now? Elon's getting control of the front segments of the Human centipede of AI so even if the rest were run to be perfectly neutral, which they aren't, right wing bias is going to be excreted along the chain.
@Rycochet @davidgerard it’s so stupid and obvious.
Just so fucking stupid I lose brain cells every time I think about it.
All we can do is whatever small activism we can, then hide out on the corners of the indie-web and commiserate over how the emperor has no clothes.
-
I can't tell you how much it pisses me off when I find an interesting study on problems with AI and their methodology includes running the data through a fucking chatbot. what the arsing fuck are you even doing, you idiot.
@davidgerard The otherwise great paper on LLMs hallucinating images in image analysis benchmarking made me facepalm when they used ChatGPT to assess some of the output. WHAT ARE YOU DOING? LLMS MAKE THINGS UP WHEN ASKED TO ANALYSE DATA! THAT IS LITERALLY THE ENTIRE THESIS OF YOUR PAPER!
-
@davidgerard The otherwise great paper on LLMs hallucinating images in image analysis benchmarking made me facepalm when they used ChatGPT to assess some of the output. WHAT ARE YOU DOING? LLMS MAKE THINGS UP WHEN ASKED TO ANALYSE DATA! THAT IS LITERALLY THE ENTIRE THESIS OF YOUR PAPER!
@Grouchybeast @davidgerard Recursive irony?
-
I can't tell you how much it pisses me off when I find an interesting study on problems with AI and their methodology includes running the data through a fucking chatbot. what the arsing fuck are you even doing, you idiot.
@davidgerard highlighting problems with AI. @grim_elsewhere
-
@davidgerard The otherwise great paper on LLMs hallucinating images in image analysis benchmarking made me facepalm when they used ChatGPT to assess some of the output. WHAT ARE YOU DOING? LLMS MAKE THINGS UP WHEN ASKED TO ANALYSE DATA! THAT IS LITERALLY THE ENTIRE THESIS OF YOUR PAPER!
Cue circular firing squad
-
@davidgerard The otherwise great paper on LLMs hallucinating images in image analysis benchmarking made me facepalm when they used ChatGPT to assess some of the output. WHAT ARE YOU DOING? LLMS MAKE THINGS UP WHEN ASKED TO ANALYSE DATA! THAT IS LITERALLY THE ENTIRE THESIS OF YOUR PAPER!
The very effing idea that an LLM is some sort of Answer Machine. Cargo cultists.
-
@davidgerard The otherwise great paper on LLMs hallucinating images in image analysis benchmarking made me facepalm when they used ChatGPT to assess some of the output. WHAT ARE YOU DOING? LLMS MAKE THINGS UP WHEN ASKED TO ANALYSE DATA! THAT IS LITERALLY THE ENTIRE THESIS OF YOUR PAPER!
@Grouchybeast @davidgerard @reedmideke usually on request by reviewers or senior faculty.
-
@Grouchybeast @davidgerard @reedmideke usually on request by reviewers or senior faculty.
@andrei_chiffa @Grouchybeast @reedmideke they know where the funding comes from
-
@andrei_chiffa @Grouchybeast @reedmideke they know where the funding comes from
@davidgerard @Grouchybeast @reedmideke not sure for reviewers, but TBH for some senior faculty I have observed what I refer to as "LLM-induced prefrontal cortex ablation". Despite offloading thousands to LLM providers rather than getting a cent from them, they keep insisting that LLMs should be used for everything, criticizing actual human evaluations as something that "could have been done better by GPT 4.X/5.X or Claude".