1/ There has never been a more concentrated distillation of my teaching than this lesson: Algos, Bias, Due Process, & You.
-
@anwagnerdreas yes, though the last exercise didn't have the time to breathe that I would have liked. In my own classes I usually take things a lot slower, but this was a guest lecture, and last year one of my students told me my AI & the Law class was the first time they had been shown any of AI's downsides. So, I wanted to make sure I got as much in as I could. The simulations really helped folks jump right into conversations. And yeah, our students are great!

@anwagnerdreas also, thank you. You are very kind.
-
11/ Almost everyone fell victim to automation bias. The assistant's accuracy was 100% in phases 1 & 2, then dropped to 70%. Student performance started at 79% in phase 1, improved to 85% for a bit, but when the tool's accuracy declined, scores fell to 65%, worse than their initial performance.

So students were (maybe) better than the tool in phase 1. Then they came to rely on the tool more and more in phases 2 and 3.
And in the 3rd phase, they performed *WORSE* than the tool itself! The tool's accuracy dropped to 70%, but the students' accuracy, with the tool, fell to 65%.

-
So students were (maybe) better than the tool in phase 1. Then they came to rely on the tool more and more in phases 2 and 3.
And in the 3rd phase, they performed *WORSE* than the tool itself! The tool's accuracy dropped to 70%, but the students' accuracy, with the tool, fell to 65%.

@JeffGrigg That's a measure of the tool's flag/recommendation accuracy, which was perfect in phases 1 and 2. So, the students were never quite as good as the tool, though it did make them better in phase 2 than they were in phase 1. It was a highly-engineered scenario (unlikely to occur IRL) designed to make falling victim to automation bias likely.
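To make the arithmetic behind that automation-bias story concrete, here is a minimal back-of-the-envelope model. It is not the exercise's actual code, and the override/catch rates are made-up numbers chosen only to roughly match the reported scores: a reviewer is correct when they accept a correct flag or catch an incorrect one, so once the tool degrades, heavy reliance can drag combined accuracy below the tool's own.

def reviewer_accuracy(tool_acc, override_rate, catch_rate):
    # Correct outcomes happen two ways:
    #   (a) the tool is right and the reviewer accepts its flag
    #   (b) the tool is wrong and the reviewer catches the mistake
    return tool_acc * (1 - override_rate) + (1 - tool_acc) * catch_rate

# Illustrative (invented) rates chosen to roughly match the reported scores:
print(reviewer_accuracy(1.00, 0.21, 0.0))   # phase 1: ~0.79
print(reviewer_accuracy(1.00, 0.15, 0.0))   # phase 2: ~0.85
print(reviewer_accuracy(0.70, 0.07, 0.0))   # phase 3: ~0.65, below the tool's 0.70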
-
8/ I told them that for our first exercise they would all be using an AI assistant I built to review citations. After they had a chance to use it, we would have a class discussion. I suggested they hold the following question in their head, “What makes something a good decision assistant?”

@Colarusso Can you describe the AI citation tool? I'm unclear what it is supposed to do and how this part of the exercise worked.
Were they competing with each other for speed in creating citations, and was that creating a dark pattern?
-
@Colarusso Can you describe the AI citation tool? I'm unclear what it is supposed to do and how this part of the exercise worked.
Were they competing with each other for speed in creating citations, and was that creating a dark pattern?
@D_J_Nathanson yes, they were competing against each other, but the pacing buddy wasn't one of their peers. It was just a script that made it look like someone was just ahead of them. All the "AI" suggested flags were pre-determined. So nothing they did would affect the "AI," but of course, how carefully they read the materials affected their own performance.
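For anyone curious about the mechanics, here is a rough sketch of what a scripted setup like that could look like. It is a guess at the shape of the thing, not the actual tool's code, and the names and numbers are hypothetical: the flags are just a lookup table, and the "buddy" is a clock.

import time

# Hypothetical sketch: every "AI" suggestion is pre-determined, so nothing the
# student does changes what the tool flags.
SCRIPTED_FLAGS = {1: "looks good", 2: "possible error", 3: "looks good"}

def assistant_flag(citation_number):
    # Just a lookup; in a later phase some scripted flags could be wrong on purpose.
    return SCRIPTED_FLAGS[citation_number]

def pacing_buddy_position(start_time, seconds_per_item=45, lead=1):
    # The "buddy" isn't a classmate: its progress is just elapsed time,
    # nudged to stay a little ahead of a typical pace.
    elapsed = time.time() - start_time
    return int(elapsed // seconds_per_item) + lead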
-
14/ Since we had just made use of a tool that purported to make predictions with some level of confidence, I suggested we might want to look more into what such tools are really telling us. So, I asked them the following.

@Colarusso ok, but: what’s the correct answer?
-
@Colarusso ok, but: what’s the correct answer?
@blogdiva D, there isn't enough information/no way to know given just the info in the question. You need to know how prevalent the thing you're testing for is before you can venture a guess. See e.g., https://bail-risk-simulator-50382557550.us-west1.run.app/
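The question itself isn't quoted above, but the underlying point is the base-rate problem, and a quick worked example (with made-up numbers, not the ones from the slide) shows why prevalence is the missing ingredient: the very same "90% accurate" tool gives wildly different answers depending on how common the flagged condition actually is.

def positive_predictive_value(sensitivity, specificity, prevalence):
    # Bayes' rule: P(truly positive | the tool flags you as positive)
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same hypothetical 90%-accurate tool, two different base rates:
print(positive_predictive_value(0.9, 0.9, 0.50))  # ~0.90 when half the population qualifies
print(positive_predictive_value(0.9, 0.9, 0.01))  # ~0.08 when only 1 in 100 does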
-
1/ There has never been a more concentrated distillation of my teaching than this lesson: Algos, Bias, Due Process, & You. It is the apotheosis of what I do. I very much hope you enjoy it, share it, and make bits of it your own. https://suffolklitlab.org/algos-bias-due-process-you/
I agree, the design and work you put into this is incredible! Thanks so much for sharing.
-
@blogdiva D, there isn't enough information/no way to know given just the info in the question. You need to know how prevalent the thing you're testing for is before you can venture a guess. See e.g., https://bail-risk-simulator-50382557550.us-west1.run.app/
@Colarusso have already bookmarked everything for studying. thank you!
-
@Colarusso have already bookmarked everything for studying. thank you!
@blogdiva if you want just one bookmark, this blog post puts it all in one place (and even adds a bit) https://suffolklitlab.org/algos-bias-due-process-you/
-
I agree, the design and work you put into this is incredible! Thanks so much for sharing.
@stepheneb thank you. It was almost as much work as it was fun to put together. A bunch of things just clicked.
-
18/ It (https://fairness-simulator-the-toilet-seat-dilemma-50382557550.us-west1.run.app/) lets you simulate what happens when folks following different rules share a toilet. It assumes 2 populations, "sitters" & "standers" (folks who sometimes stand). It lets you see how different behavior affects 2 costs:
(1) the cost of having to change the seat's position before you use the toilet; and
(2) the cost of having to clean the seat if the last person failed to raise the seat when they really should have.

@Colarusso
Having both the lid and the seat always down levels the cost for sitters and standers AND is more hygienic -
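If it helps to see the moving parts, here is a rough Monte Carlo sketch of the kind of dynamics the simulator explores. This is not the simulator's actual code; the population mix, laziness rate, and rules are invented parameters. Each visit either needs the seat up or down, a pre-use flip counts toward cost (1), and a sitter finding a soiled seat counts toward cost (2).

import random

def simulate(rule, p_stand=0.4, p_lazy=0.1, n=100_000, seed=1):
    # Illustrative sketch only -- not the simulator's code; parameters are made up.
    #   p_stand : chance a given visit is a "standing" use
    #   p_lazy  : chance a stander doesn't bother to raise the seat first
    #   rule    : "leave_as_is"  leave the seat wherever it ended up
    #             "always_down"  put the seat back down after every use
    rng = random.Random(seed)
    seat_up = False
    dirty = False
    flips = cleanings = 0
    for _ in range(n):
        stands = rng.random() < p_stand
        if stands:
            if rng.random() < p_lazy:
                if not seat_up:
                    dirty = True        # stood without raising the seat
            elif not seat_up:
                flips += 1              # cost (1): raise the seat before standing
                seat_up = True
        else:
            if dirty:
                cleanings += 1          # cost (2): last person failed to raise it
                dirty = False
            if seat_up:
                flips += 1              # cost (1): lower the seat before sitting
                seat_up = False
        if rule == "always_down" and seat_up:
            seat_up = False             # courtesy flip after use (post-use flips
                                        # aren't counted; the post only counts
                                        # pre-use changes)
    return {"pre-use flips per visit": flips / n, "cleanings per visit": cleanings / n}

for rule in ("leave_as_is", "always_down"):
    print(rule, simulate(rule))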