TIL that saying "holy shit don't use ChatGPT for medical advice" is a "purity test".

davidgerard@circumstances.run:
TIL that saying "holy shit don't use ChatGPT for medical advice" is a "purity test". I didn't know that before. In fact, I still don't.

ianbetteridge@social.vivaldi.net:
    @davidgerard Good grief.

cstross@wandering.shop:
@davidgerard I am pretty sure that OpenAI do not have a license to practice medicine and are not a (human) member of the BMA, so by giving medical advice they (the humans responsible for the software) are potentially committing an imprisonable offense ...

ianbetteridge@social.vivaldi.net:
        @davidgerard It's really quite a thing that we have reached the “have faith, unbeliever” stage of AI already. Although these are mostly also the guys who made “HODL” a thing.

mab_813@fedi.at:
          @davidgerard

          Everyone should have access to medical professionals who take their problems seriously.

          If they have that and still ask ChatGPT for medical advice... sigh

dwm@mastodon.social:
            @cstross @davidgerard

Hmm, and presumably anyone operating a general-purpose chatbot that could conceivably be prompted to give such advice (e.g. as the conversational interface to a regular web page) is also plausibly at risk?

waffelhard@f.reun.de:
@cstross @davidgerard Any coin can give medical advice. I just ask the coin: "Should I take this medicine? Say heads for yes." Then I toss the coin. I hope the people at the coin minting facility get imprisoned for that.

ra@mstdn.social:
                @cstross @davidgerard needs an IANAD subroutine.

photo55@mastodon.social:
                  @cstross @davidgerard
I think that applies to veterinary advice, but not to humans. Hence chiropractic, homeopathy, and assorted woo.
When people complain to the GMC that one of those practitioners is giving bad advice, the GMC says it only has powers over registered medical practitioners.
But there are laws about animals.

gbargoud@masto.nyc:
                    @davidgerard

There is a bill in New York that would make companies that deploy chatbots acting like licensed professionals liable in the same way as those professionals:

                    (www.nysenate.gov)

zzt@mas.to:
                      @davidgerard this one hit close to my heart because I’ve had two family members die in large part because their caretaker ignored medical advice and used awful alternative medicine information from the internet to try and treat them.

                      an LLM can’t do critique. as you’ve said, truth is not a data type in an LLM. all of these models suck in every form of medical crankery available on the internet, mix it with words from authentic medical sources, and present it all as credible.

zzt@mas.to:
                        @davidgerard I know that alternative medicine has a body count; I’ve seen it in the flesh. I know what some of the horseshit on the Internet can do if you’re very desperate or very trusting.

                        the LLM lowers the trust barrier because the crank information is no longer crank flavored, but it’s still dangerous as fuck to follow the advice.

                        I keep seeing LLMs be presented as better than nothing and that’s wrong. I wish the people who needed help could get it, but the LLM is worse than nothing.

jer@chirp.enworld.org:
@cstross @davidgerard Who will you imprison? The CEO? The programmers? The QA team?

One of the big draws of tech is the ability to turn human error (and malfeasance) into "computer error". And society has been trained to believe software errors aren't anyone's fault, so there's no one to hold accountable.

That needs to change. Companies need to be accountable for their "computer errors" - especially when they're baked into design and not actually errors.

drewtowler@mas.to:
                            @davidgerard I don't even know what that means. I'm referring to "purity test".

cstross@wandering.shop:
                              @dwm @davidgerard Yes, although it all depends on whether the GMC (and the Police) have the guts to go after a large foreign corporation with deep pockets. It probably won't happen unless there's a major death-related scandal and/or one of the aforementioned corporations decides to go after the competition, i.e. small locally run and/or open source models with broad training sets.

mabande@mastodon.social:
@waffelhard @cstross @davidgerard …as the coin minting industry and its adherents do often claim that coins can replace doctors.

wronglang@bayes.club:
@Jer @cstross @davidgerard It's the CEO's job to manage legal risk. Imprison the CEO.

jer@chirp.enworld.org:
@wronglang @cstross @davidgerard I actually agree. It would certainly justify the vast amounts of money they make if they had to take personal responsibility for their harmful decisions. It might also make them think a little harder.

cstross@wandering.shop:
                                      @Jer @davidgerard That's a broader corporate liability question. Personally I'd LIKE to see the C-suite and boards of corporations that kill people sentenced to serious prison time. (Lower level staff too, but only if it's found that they made decisions that led to deaths on their own initiative. The directors *are responsible for the company's actions*.)

                                      Going further: the current privileged legal status of corporations is an obscenity and needs to be de-legitimized.

zzt@mas.to:
                                        @davidgerard LLMs get alternative medicine patients to the “I don’t care what you say, *I* feel better” point of no return so much quicker because they don’t know it’s alternative medicine. some of it might even be legitimate medicine that works! and all this does is make them less skeptical until they get output that’s plausible but fatal, or until the damage from what they’ve been doing builds up and they can’t survive anymore. and thanks to the LLM, they’ll fight off anyone who tries to help.

tarmil@mastodon.tarmil.fr:
                                          @zzt @davidgerard Lies are never more effective than when they're sprinkled with truth, and that's exactly the bread and butter of LLMs: truth-flavoured bullshit.
