This is a thread on terms describing various aspects of "AI"
-
ASBESTOS
Jonathan Zittrain
On "AI" in medical innovation.“I think of machine learning kind of as asbestos,” said BKC’s Jonathan Zittrain. “It turns out that it’s all over the place, even though at no point did you explicitly install it, and it has possibly some latent bad effects that you might regret later, after it’s already too hard to get it all out.”
What if AI in health care is the next asbestos?
Artificial intelligence is often hailed as a great catalyst of medical innovation, a way to find cures to diseases that have confounded doctors and make health care more efficient, personalized, and accessible. But what if it turns out to be poison?
Berkman Klein Center (cyber.harvard.edu)
What if AI in health care is the next asbestos?
“I think of machine learning kind of as asbestos,” one speaker said, explaining that AI in medicine could turn out to be as harmful as it is promising.
STAT (www.statnews.com)
-
LIABILITY
"Code is a liability, not an asset;
AI code represents liability production at scale."The idea that code is a liability has been around for a long time, but "AI" "coding" supercharges that, writes Cory Doctorow.
" "Writing code" is about making code that runs well. "Software engineering" is about making code that fails well."
-
REVERSE CENTAUR
Cory Doctorow
"In automation theory jargon, [an AI assisted] radiologist is a "centaur" – a human head grafted onto the tireless, ever-vigilant body of a robot.
No one who invests in AI expects this to happen. Instead, they want reverse-centaurs: a human who acts as an assistant to a robot.
That human is there
– to be blamed for errors.
– to be a "moral crumple zone".
– to be an "accountability sink"
But they're not there to be radiologists."
-
DIGITAL KESSLER SYNDROME
Anton Danholt Lautrup
"If we cannot reliably distinguish between synthetic and genuine data, we risk contaminating and diluting decades' worth of data collection."
https://www.sdu.dk/en/forskning/c-ai-ethics/news-and-events/event-digital-kessler-syndrome.
AI produces slop more often than it should. If it ingests the slop in subsequent trainings, the output becomes sloppier and sloppier, and good luck unscrambling that egg.
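The feedback loop can be shown with a toy simulation (my own sketch, not from the linked SDU event): fit a trivial "model" (a mean and spread) to genuine data, then train every later generation only on samples drawn from the previous generation's model. With no fresh real data, each generation's sampling error becomes the next generation's ground truth, and the fit drifts away from the original distribution.

```python
import random
import statistics


def collapse_demo(generations=8, n=500, seed=1):
    """Toy 'digital Kessler' loop: each generation is fit only to
    synthetic samples from the previous generation's model."""
    random.seed(seed)
    real = [random.gauss(0.0, 1.0) for _ in range(n)]  # genuine data
    mu, sigma = statistics.mean(real), statistics.stdev(real)
    history = [(mu, sigma)]
    for _ in range(generations):
        # No new real data: train only on the previous model's output.
        synthetic = [random.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.mean(synthetic)
        sigma = statistics.stdev(synthetic)
        history.append((mu, sigma))
    return history


history = collapse_demo()
```

The estimates perform a random walk away from the real distribution, and nothing in the loop can pull them back; published analyses of "model collapse" describe the same mechanism at training scale.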
-
That is entirely the wrongheaded (giggity) approach, IMHO.
A big part of the human-machine interface is control and responsibility remaining in human hands.
Not so long ago, the few of us geeks who foresaw where machine brains would take us campaigned in #stopkillerrobots, a campaign to keep human decision making in the military #killchain.
A campaign that failed spectacularly, in no small part, I am sure, due to uninformed Doctorow analogues dismissing it as unnecessary, farcical puppetry. Even now, I actively strive to #regulateAI IRL, and human decision making is essential and imperative in AI.
The "reverse centaur" is a canard, much as a driver of a motorcar is not pulling the cargo by their muscle. AI is not going away, for the same reason we don't see picks and shovels digging infrastructure trenches anymore. Machines have been eating jobs since the 1700s, and it's only scary now because the white collars are on the chopping block.
I have huge respect for @pluralistic and his role, which he fulfills admirably, as an activist, what we call in Australia a shit-stirrer. His opinions stimulate debate, but keeping an expert in the decision chain, if it's only a tick box, is a good thing.
Call it a "moral crumple zone" if you will.
Removing it altogether is bad, and I am disturbed anyone would try to make hay of this.
The alternative is full automation, and I am sure all the #AI "fans" would agree that's a bad thing.
-
@CelloMomOnCars Eating its own slop is exactly what happened when rendered cattle remains were added to cattle feed. Result: Mad Cow Disease.
I expect we will have to face up to Mad Computer Disease in the future.
-
@CelloMomOnCars This may well be one of the best essays you've ever written.
Not perhaps in the absolute sense, but in the sense that never have you crystallized the existential pain of a moment more expertly and eloquently.
This draws forensic diagrams showing the entry and exit wounds and where the bullet wound up at the crime scene for the death of the craft of software engineering in mainstream commercial environments.
Thank you.
-
@n_dimension Can you say more about this: ‘keeping an expert in the decision chain, if it's only a tick box is a good thing.’
Good for whom? Good how?
-
Good for the company running the AI: risk management. If I remember correctly, case law precedent has been set in the US that 'AI is not responsible for damages'.
Good for the invader bastards in Ukraine who, at the last moment before a drone turns them into a pile of steaming meat, make a gesture of surrender, and the operator yanks back the kill mode.
Not a matter of idle speculation: AI killer bots are hunting people in Ukraine.