A woman sues her insurance company for terminating her disability benefits.

66 Posts 23 Posters 2 Views
This topic has been deleted. Only users with topic management privileges can see it.
  • mjd@mathstodon.xyz wrote (#13), quoting krupo@infosec.exchange:

    > @mjd On LinkedIn I saw a breathless post about how the professionals of the future are using autonomous agents to do all this magic....
    >
    > Then I come back here for a reality check.

    @krupo It's an interesting time. Many of the successes are overstated. So are many of the failures. Nobody knows how it will shake out in the end.
  • wcbdata@vis.social wrote (#14), quoting mjd@mathstodon.xyz:

    > A woman sues her insurance company for terminating her disability benefits. They reach a settlement and agree that the suit will be dismissed with prejudice.
    >
    > She decides she doesn't like the settlement and asks her lawyers to reopen the case. They say they can't: it was dismissed, and in the settlement she agreed not to reopen the case.
    >
    > She asks ChatGPT if her attorneys are lying to her. It says they are. She fires them and continues pro se, advised by ChatGPT.
    >
    > ChatGPT generates legal arguments for reopening the case, which she files, along with 21 more motions, a subpoena, and eight other notices and statements.
    >
    > The court denies her motion to reopen the case.
    >
    > Advised by ChatGPT, she files a new suit against the insurance company and submits 44 more motions, memoranda, etc., which include citations to nonexistent cases.
    >
    > Now the insurance company has sued OpenAI for tortious interference with their settlement contract.
    >
    > 🍿
    >
    > https://storage.courtlistener.com/recap/gov.uscourts.ilnd.496515/gov.uscourts.ilnd.496515.1.0_1.pdf

    @mjd The only use case for Generative AI is fraud.
  • mjd@mathstodon.xyz wrote (#15):

    @wcbdata That is demonstrably false.
  • wcbdata@vis.social wrote (#16):

    @mjd Try me. There isn't a use case for it that isn't, at its core, fraud.
  • marshray@infosec.exchange wrote (#17):

    @mjd “41. On October 29, 2025, OPENAI amended the terms and usage policies of ChatGPT to prohibit users from using ChatGPT to provide tailored legal advice. Prior to the October 29, 2025 emendation, ChatGPT’s terms of use did not prohibit users from using ChatGPT to draft legal papers, conduct legal research, provide legal analysis or give legal advice.”
  • wcbdata@vis.social wrote (#18):

    @mjd Couldn't think of even one reasonable candidate in 15 minutes, even with your precious AI right there in front of you? I rest my case.
  • divverent@social.vivaldi.net wrote (#19):

    @mjd TBH I do not think OpenAI should be responsible. They're just providing a fancy random text generator to the public. And it's outright impossible to teach a random text generator to _not_ output a specific kind of text, as whatever you do, there is a way around it.

    The woman should pay all costs, as per the usual "vexatious filings" or "frivolous lawsuits" standards.

    Plus, the law in her state against practicing law without a license starts with "No person shall...". ChatGPT isn't a person.
  • divverent@social.vivaldi.net wrote (#20), quoting rowat_c@mastodon.social:

    > @mjd Levine (12/03/26):
    >
    > Here’s a Perkins Coie memo from last month:
    > > On February 17, 2026, the Southern District of New York, in United States v. Bradley Heppner, held that a criminal defendant's written exchanges with a “publicly available AI platform” are not protected by attorney-client privilege or work product doctrine and, thus, could be inspected by the government.

    @rowat_c @mjd This honestly just makes sense. Not an attorney, no attorney-client privilege.
  • infoseepage@mastodon.social wrote (#21):

    @mjd My guess is her thought process was that her insurance company lowballed her on the settlement offer, and she was forced to accept it anyway because she couldn't afford to pursue it aggressively in court, whereas the insurance company has all the resources in the world and can just sit there and bankrupt anybody trying to seek justice in that manner.
  • sabik@rants.au wrote (#22):

    @divVerent @mjd OpenAI are certainly marketing ChatGPT as being useful, whatever the fine print says, so they do bear some responsibility there.
  • infoseepage@mastodon.social wrote (#23):

    @mjd So, she turned the whole thing around and used an LLM to generate a large quantity of motions and filings which the insurance company now had to analyze and rebut, costing them lots of time and money.
  • divverent@social.vivaldi.net wrote (#24):

    @sabik @mjd It did probably exactly what she asked for in the prompt, so where's the problem? Definitely "useful".

    If she was misinformed by ChatGPT and has to pay penalties for that reason, then _she_ should be the one suing OpenAI, not the insurance company.
  • mjd@mathstodon.xyz wrote (#25):

    @Infoseepage Yes, and the judge and the court clerks also had to deal with them, consuming public resources that should have been allocated to more deserving citizens.
  • mjd@mathstodon.xyz wrote (#26):

    @Infoseepage You made that up out of your head to come to the conclusion you selected beforehand.

    I don't know what actually happened, and neither do you.
  • gyrosgeier@hachyderm.io wrote (#27):

    @mjd Wouldn't that be "tortuous inference"?
  • falcennial@mastodon.social wrote (#28):

    @GyrosGeier @mjd torturous interference
  • falcennial@mastodon.social wrote (#29):

    @mjd @krupo It will be a financial bubble pop, followed by what we will call 'the AI recession,' then limited, appropriate use. The shit is the dot-com and GFC playbook all day.
  • mjd@mathstodon.xyz wrote (#30):

    @marshray I wonder if that will help get them off the hook. If not, it shows that they were aware that what they were doing could be a problem.
  • gyrosgeier@hachyderm.io wrote (#31):

    @falcennial @mjd I mean, because running an AI model is called "inference."
  • mjd@mathstodon.xyz wrote (#32):

    @falcennial @GyrosGeier They're all closely related: they all come from the Latin verb for “to twist”.