RE: https://dair-community.social/@alex/116065351796939323
OpenAI (like all the other tech companies) trying to work in the health space without being subject to any of the regulations of the health space.
@emilymbender "... so you can feel more informed and prepared ..."
Not BE more informed, FEEL more informed. This is dark and ghoulish and gets at a core evilness of AI: the obsequious, overconfident presentation of unvetted information, the casual amoral lying, and the greasy faux apology when called on it.

And to be clear, I'm not anthropomorphizing the software - the system is a proxy for its owners and developers. There is a set of humans responsible for creating and deploying and selling these systems, a set who profits from them - the hand inside the puppet head. Regardless of the elaborate statistical text-generation mechanism inside the puppet, someone controls the presentation of results, built that fake human-like interface, added that fake apology, and so on. That's not the LLM; that's deterministic UI programming, designed and built by humans to let you feel the system is more intelligent and feeling than it is.

Remove the raw LLM results from the total response and what you have left is that overconfident, obsequious, amoral framing, which is an intentional choice by the system designers. However obscured, there's still a human hand inside that puppet head, and that human's presence is what transforms the LLM's unvetted extruded text into fraud and lies.
-
@emilymbender Aside: I do QA on nuclear safety software and have worked as a safety analyst for years. I recently had a check-in talk with my manager about concerns with some of our staff's use of rhetoric - forceful presentation, overconfidence, errors of omission, unvetted claims - and the degree to which that may be unacceptably challenging to system safety and team effectiveness. Presentation and framing absolutely matter, and we rely on honest presentation of fact and free argumentation to make responsible, conservative decisions about system design, analysis, and operation.
Any human engineer who was as overconfident, unreliable, and glad-handing as your average commercial chatbot would be fired for cause and walked off the premises. I'm lucky and privileged to work in a highly regulated, safety-focused environment with comparatively strong systems for identifying and fixing problems (i.e. a formal, audited corrective action program). It's by no means perfect, but it's far and away better than anything I've experienced outside this regulated, safety-focused industrial niche. I see so many people outside my niche having AI forced on them, see the low quality of LLM-generated prose and code, and realize how lucky I am to be shielded from that aggressive reduction in quality and in personal and organizational responsibility.
The irony of working on power supply systems expected to be used to support the AI datacenters that implement the very chatbots we vigorously defend against is not lost on me. It's not why I got into this field 40 years ago.
-
@emilymbender we love it when a company like Uber or AirBnB or Amazon or OpenAI, or most of the other companies that popped up alongside OpenAI, thinks it can make the necessary innovation in the field of its choice, but that to do so it *simply must* be able to operate without oversight or regulation. Whether or not that results in a service that makes a significant change to its target market or industry, it always comes at the cost of hurting people. Gig economy companies harm their gig workers by not offering them real employment and by throwing them under the bus when things go wrong. Amazon, through its marketplace and Audible, tolerates a system that harms buyers and sellers and authors and publishers with fees and poor return policies.

Now here come the "AI" companies that think they have the solutions to all these problems we never asked them to solve in the first place, and they'll sneak around or squeeze past any regulation to make it happen. OpenAI doesn't care to be regulated as a health service because they believe regulating their services would be too costly and complicated, and they can't deal with that, whether because they really think "AI" will be this incomprehensible revolutionary thingy or because they simply don't want to waste time that could be spent making money.

All this to say, this is a problem with tech companies. I don't know if it's just tech companies, but they are certainly quite prone to it. They want to grow and grow and not think about regulations or sustainable business models. The latter usually warrants its own rant, but that is a separate rant from this one. OpenAI might not have enough users for its products to justify operating costs, and ChatGPT might not have enough data for another major upgrade, but what does that matter to them when they were always primarily concerned with services that produce growth and appeal to investors?