Please stop asking AI for legal advice.
-
Please stop asking AI for legal advice. I got yelled at today by a person who was convinced their GenAI was right and I was wrong on a legal issue because "computers can't make mistakes."
Worse, the GenAI assured the person (wrongly) that its conversations are covered by the same attorney-client privilege that a lawyer's are.
@theleftistlawyer see also technical advice, medical advice, and relationship advice as a starter ... sigh
-
@theleftistlawyer Computers can't make mistakes
-
@theleftistlawyer The comedy sketch I always think of when people suggest the problem may be with the computer: https://m.youtube.com/watch?v=qNDS4kVwA68
-
@theleftistlawyer Why anyone would trust an aggregation tool over a studied professional is beyond me. It's just like people self-diagnosing and treating illnesses and wounds with ChatGPT rather than consulting a vetted first aid handbook. It's a fancy search tool that delivers overconfident information in a way designed to make you trust it, regardless of the answer. This has been tested and proven time and time again. A tool that has convinced young people to kill themselves should not be considered an authority on anything based in reality. What happened to critical thinking?!
-
@theleftistlawyer In case of interest, there is an international database about this: https://www.damiencharlotin.com/hallucinations/
-
@theleftistlawyer As someone who has been a Senior Software and Firmware Engineer for decades, I can't help grinning at that "computers can't make mistakes" part.
-
@theleftistlawyer I wish there were a search engine to find out what kind of lawyer I need to ask. That would be infinitely more helpful than an AI that says "Oh, you're absolutely right!" while I'm in jail.
-
@theleftistlawyer I wrote a blog post about this very fallacy some years ago: cholling.com/posts/self_driving_fallacy/
@cholling @theleftistlawyer Another fallacy is the belief that it would be advantageous for car traffic to be a safe environment.
That might be true for people in cars, but definitely not for people outside them. Car traffic needs to be dangerous for people outside cars in order to work.
So either we replace drivers with self-driving cars that never hit people, in which case traffic will not flow the way it used to, or we replace them with self-driving cars that deliberately hit people.
-
@theleftistlawyer This is infuriating. I just had a client ask ChatGPT something (twice) and then ask me which of the answers she got was correct. I asked her why she was using an AI instead of calling me.
-
@theleftistlawyer Coming up next: somebody making genAI testify in court.
-
@hellomiakoda Usually, reputable lawyers are willing to help you find one who specialises in whatever field matches the problem you're dealing with. That's part of what the introductory meeting is for.
The details vary between jurisdictions, but legal ethics generally require a lawyer to be competent in the subfield of law they're practicing. The sort of lawyers who can make things better for you generally know the limits of their competency, but also know enough about law outside it to find a lawyer who is competent in that other subfield.
-
Whatever happened to the lawyers who got caught presenting fake "AI" slop citations as precedent cases?
-
@theleftistlawyer Sigh, I'm sorry you had to deal with that.
People really need to understand that GenAI is a parrot! It spits out patterns based on how humans have responded to similar words in other contexts. It does not understand what you asked, and it has no capacity for discerning whether the pattern it gave back is true or appropriate.
So it makes perfect sense that a GenAI program given a legal question as a prompt might respond that it's covered by attorney-client privilege. It's just parroting.
-
@theleftistlawyer What will exacerbate this problem is the chronic loss of critical thinking across the land.
Too many allow themselves to be driven by the herd - they see justification because 'most people do that'.
Including wearing headphones when crossing the road and not looking, because your eyes are still on your phone.
Hear no car.
See no car.
@NicelyManifest @theleftistlawyer We already have a major deficit of critical thinking skills. Most people prefer the easy way every time, whether that means trusting an authority figure, what the social group says, something they read in a paper, or a social computer.
-
@theleftistlawyer @SRLevine I would just politely suggest they go collect some real-world data to back up their assertions.
-
It gets worse!!
Even when a tech corp *claims* to be following privacy law, it -still- does not.
Honestly, it's come to the point where the privacy assurances of any digital corpo entity are reduced to "maybe they will adhere--and maybe they won't!"
I bet on "won't". I never trust corpos to be honest, ethical, or moral in any of their dealings, and I am proven right in that assumption again and again.
evacide (@evacide@hachyderm.io)
One reason that threat modeling for your digital privacy/security is harder than it looks is that most people don't know what data is being generated about them and who has access to it. https://www.proofnews.org/womans-talkspace-therapy-app-sessions-exposed-in-court/
@theleftistlawyer @kitkat_blue My understanding is that PII is like toxic waste—you can take all the precautions and still have a leak, at which point the law doesn’t really care if you follow best practices and will still hold you liable for the consequences (however inadequate the remedy). The best practice is to minimize the amount of PII (or toxic waste) that you handle or pay through the nose for insurance.
-
Do you have time to explain how a lawyer might use strategic omission and framing that serves the lawyer's interests over the client's?
-
@theleftistlawyer It’s a nightmare. They often sound like Sovereign Citizens

-
@float13 I believe I have heard of two who were harshly sanctioned, but not yet disbarred. There was also a recent case where somebody got caught being coached on how to testify while on the stand, via a phone and a Bluetooth headset; they lost the case (the misbehaving witness was the plaintiff in that particular case, which was fortunate). The person coaching them happened to be a lawyer licensed in Lithuania. I'm inclined to argue that a lawyer should be disbarred for even participating in a stunt like that, but I haven't heard of the Lithuanian bar having taken any action so far.