Please stop asking AI for legal advice.
-
Please stop asking AI for legal advice. I got yelled at today by a person who was convinced their GenAI was right and I was wrong on a legal issue bc "computers can't make mistakes."
Worse, the GenAI assured the person (wrongly) it is covered by the same attorney client privilege that lawyers are.
@theleftistlawyer I asked a Claude Haiku instance, and a Qwen VL 30B instance, for UK-specific intellectual property advice. The answers directly contradicted standing advice from the UK Intellectual Property Office. Risk management does not even begin to describe how ridiculous this is.
-
@theleftistlawyer
Similarly, chatbot users discussing mental health with their LLM of choice also believed, incorrectly, that their health information was protected
https://fediscience.org/@nyhan/115152094181697986
-
@tantramar @fsinn @theleftistlawyer
My apologies if that was a little too heavy.
@thriftwicker @fsinn @theleftistlawyer Not at all. Made me realize I came across too heavily. 🫤
-
@theleftistlawyer Oh yeah. Don't forget the CDPA 1988 here means you can copyright GenAI stuff as long as it's... "original". But I don't quite think in the way Toni Halliday was singing about on Leftfield's debut album. Gotta document user prompts. But system prompts and reasoning traces are proprietary for proprietary models, so I question how this might hold up in practice. The safest option? One-shotting the boilerplate with a US-hosted Chinese open-weights model, or don't use it at all.
-
@tantramar @fsinn @theleftistlawyer
Nah. We are good. Being on Mastodon has been an enlightening lesson in thinking before I speak.
-
@theleftistlawyer
"Computers can't make mistakes"? Seriously? Whoa.
@AlisonW @theleftistlawyer someone heard "computers do exactly what they're programmed to do" and extrapolated that to "they don't make mistakes."
-
It gets worse!!
Even when a tech corp *claims* it is following privacy law--they -still- do not.
Honestly, it's come to the point where the privacy assurances of any digital corpo entity are reduced to "maybe they will adhere--and maybe they won't!"
I bet on "won't". I never trust corpos to be honest, ethical or moral in any of their dealings. I am proved reasonable in that assumption, again and again.
evacide (@evacide@hachyderm.io)
One reason that threat modeling for your digital privacy/security is harder than it looks is that most people don't know what data is being generated about them and who has access to it. https://www.proofnews.org/womans-talkspace-therapy-app-sessions-exposed-in-court/
-
@theleftistlawyer worse yet, I've literally been dealing with a lawyer using AI to give me legal advice and getting things wrong that can be cross-checked with a quick Google search.
-
@theleftistlawyer see also technical advice, medical advice, and relationship advice as a starter ... sigh
-
@theleftistlawyer
Computers can't make mistakes
-
@theleftistlawyer The comedy sketch I always think of when people suggest the problem may be with the computer: https://m.youtube.com/watch?v=qNDS4kVwA68
-
@theleftistlawyer Why anyone would trust an aggregation tool over a studied professional is beyond me. Just like people self-diagnosing and treating illnesses and wounds with ChatGPT rather than consulting a vetted first aid handbook/manual. It's a fancy search tool that delivers overly confident information in a way that is designed to make you trust it, regardless of the answer. This has been tested and proven time and time again. A tool that convinces young people to kill themselves should not be considered the authority on anything based in reality. What happened to critical thinking?!
-
@theleftistlawyer In case of interest, there is an international database about this: https://www.damiencharlotin.com/hallucinations/
-
@theleftistlawyer As a Senior Software and Firmware Engineer for decades, that "computers can't make mistakes" part has me grinning.
-
@theleftistlawyer I wish there was a search engine to find out what kind of lawyer I need to ask. That would be infinitely more helpful than an AI that says "Oh, you're absolutely right!" while I'm in jail.
-
@theleftistlawyer I wrote a blog post about this very fallacy some years ago: cholling.com/posts/self_driving_fallacy/
@cholling @theleftistlawyer Another fallacy is the belief that it would be advantageous for car traffic to be a safe environment.
It might be true for people in cars, but definitely not for people outside cars. Car traffic needs to be dangerous for people outside cars for car traffic to work.
So either, we replace drivers with self-driving cars that never hit people, but then traffic will not flow the way it used to. Or we replace drivers with self-driving cars that deliberately hit people.
-
@theleftistlawyer This is infuriating. I just had a client ask ChatGPT (twice) something and then ask me which answer she got was correct. I asked her why she was using an AI instead of calling me?
-
@theleftistlawyer Coming up next: somebody making genAI testify in court.
-
@hellomiakoda Usually, reputable lawyers are willing to help you find one who specialises in whatever field matches the problem you're dealing with. That's part of what the introductory meeting is for.
The details vary between jurisdictions, but legal ethics generally requires a lawyer to be competent in the subfield of law they're practicing. The sort of lawyers who can make things better for you generally know the limits of their own competency, but also know enough about law outside it to find you a lawyer who is competent in that other subfield.
-
Whatever happened to the lawyers who got caught presenting fake "AI" slop citations as precedent cases?