@bertdriehuis@infosec.exchange

Posts


  • https://www.reuters.com/investigations/ai-enters-operating-room-reports-arise-botched-surgeries-misidentified-body-2026-02-09/
    bertdriehuis@infosec.exchange

    @drgroftehauge @atax1a there are tons of bad models out there; that's a fact. ML is an opaque tool, but an ML model is easier to validate independently: biases can be shown, and results are reproducible within statistical limits. It is as much a science as statistics is, and those methods are equally abused in the domain you refer to.

    The Netherlands, by the way, also has its fair share of problematic ML-based algorithms in the social domain. The biggest issue is not ML itself but the lack of openness and independent validation. If the algorithms had been written in a traditional programming language, the result would not have been different (and we also have failed examples of those in our government's recent past).
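    The point about independent validation can be made concrete: given a model's outputs, anyone can recompute a metric and a per-group disparity, and a fixed seed makes the whole check reproducible. A minimal sketch in plain Python (the data, groups, and the 0.6 prediction rate are entirely synthetic and hypothetical, standing in for a real audit dataset):

    ```python
    import random

    random.seed(42)  # fixed seed: the audit is reproducible bit-for-bit

    # Hypothetical audit records: (predicted_positive, group_label)
    records = [(random.random() < 0.6, random.choice("AB"))
               for _ in range(10_000)]

    def positive_rate(group):
        """Fraction of positive predictions within one group."""
        preds = [pred for pred, g in records if g == group]
        return sum(preds) / len(preds)

    rate_a = positive_rate("A")
    rate_b = positive_rate("B")
    disparity = abs(rate_a - rate_b)  # a crude demographic-parity gap

    # Here predictions are generated independently of the group, so the
    # disparity should be small; a real audit would test it against a
    # statistical bound and flag anything outside it.
    print(f"positive rate A={rate_a:.3f} B={rate_b:.3f} disparity={disparity:.3f}")
    ```

    This is the sense in which an ML model is auditable "within statistical limits": the check itself is a few lines, runs on the model's outputs alone, and any independent party gets the same numbers.
    
    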


  • https://www.reuters.com/investigations/ai-enters-operating-room-reports-arise-botched-surgeries-misidentified-body-2026-02-09/
    bertdriehuis@infosec.exchange

    @atax1a one rather important distinction that is often lost on reporters (as part of the general public) is whether we're dealing with ML or with an LLM. I've seen my share of absolutely bonkers implementations of, well, anything, but I have a hard time believing f'ing LLMs entered the operating theatre.

    I'm undecided on whether I'd prefer to die because of an ML model going off the rails or an old-fashioned coding error like the infamous Therac-25. I've seen code for medical software, and I'm not optimistic either way.

    Frankly, I prefer doctors who don't Google my symptoms during a GP visit, but I'm afraid that is an art that's dying out.
