
diemkay@hachyderm.io (@diemkay@hachyderm.io)

Posts: 7 · Topics: 1 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0


Posts


  • A post is doing the rounds in my feed that two AI safety researchers recently quit and made the news.
    diemkay@hachyderm.io

    IDK where this ends and all of this seems like a hot air balloon pumped for IPO. The productivity gains are nowhere to be seen, but what is there are excuses to fire/hire, manipulate quarterly returns, deskill a few generations, make one person do the job of 2-3-4 others but with AI, chew them up and spit them out once they’re burnt out. Maybe then they can write poetry!

    Adam Raine went from talking to ChatGPT 1 hour daily for his math homework to 5 hours daily before his suicide. That whole relationship replaced the human connection that maybe could've saved him.

    A few thoughts:

    🍩 Don’t trust corporate AI messaging in an IPO year.
    🍩 Support independent press and real journalists who ask hard questions, because the rest of the media industry uncritically publishes what AI companies want (see Oxford Reuters Institute research).
    🍩 Support actual regulation, e.g., EU AI Act. Look at where the lobby money is going and what it’s trying to hide.
    🍩 Watch your own usage. If you're spending your day glued to your phone in front of kids, that's what they'll do too. Detox cabins in the woods where they don’t let you keep your phone don’t constitute healthy usage either.
    🍩 Watch friends who seem too attached to their AI conversations.
    🍩 Accept friction in your life. Sit with it, it’s not so bad. These tools make people doubt abilities they spent years developing. Isn’t that *weird*? You knew how to make muffins from a real person’s recipe before they came along. Ask a real person for help. We all looove helping people, but it has to start with asking.


  • diemkay@hachyderm.io

    Most AI scientists don’t think the current tech path leads to AGI, badly defined as it is. Many consider it a dead end.

    But these two big companies exist, the VC capital is deployed, now they can't not IPO. They're rocket ships going full tilt toward the cash out. After that? Good luck, everyone! We can't even get memory and hard drives for our computers today.

    Reading between the lines of the cryptic quit message, it sounds to me like "I don't trust the people in charge, this is moving too fast, nobody is thinking about the consequences." Which wouldn’t be surprising.

    There are "several crises unfolding." Well yes, when have there not been?

    Organized crime, faulty economic reasoning and weak politicians seem to be running most of the world really, and I’ve barely recovered from the Epstein news.

    The question is: what do we do with that knowledge? "Become invisible"? A nice privileged answer. But the lobbyists are hard at work to make AI and other tech by their rules inevitable. Hiding and shrinking don’t help anyone.

    Either way, the narrative machine will kick in with the usual "Disgruntled employees, ignore them. Are your workers learning AI skills fast enough?"


  • diemkay@hachyderm.io

    LLMs don’t have safeguards in them. They can't have reliable safeguards. You can introduce some friction (time and parental controls, age limits) and discourage a few people from using them, but the fundamental nature doesn't change.

    After several children died by suicide linked to AI chatbots, California passed the bipartisan LEAD for Kids Act, only for Governor Newsom to veto it, citing industry arguments that safe-by-design guardrails are unworkable and might amount to a total ban.

    Groups like TechNet, Chamber of Progress, and the American Innovators Network, backed by Meta, OpenAI, Anthropic, Google and others, lobbied against the bill.

    Then they pivoted to push a weaker alternative bill, which Newsom signed instead.

    Which raises the question: if it’s too hard to make it safe, should you regulate or do nothing?

    They won't stop at the border. Some of the same groups lobby in Brussels too.


  • diemkay@hachyderm.io

    Tech companies are trying to weaken the EU AI Act using the same playbooks (and sometimes the same lobby groups), trying to convince politicians that "if you don’t innovate you’ll be left behind."

    That old chestnut!

    In the first half of 2025, lobbyists as well as CEOs from tech companies met with high-level European Commission staff on average more than once every single working day, with AI as the most discussed topic.

    They held an average of two meetings per working day with members of the European Parliament. There are now more lobbyists than MEPs. Something is fundamentally wrong here.

    Sources: www.somo.nl and https://corporateeurope.org/en/2025/11/there-are-now-more-big-tech-lobbyists-meps

    They are hard at work. This is well-documented.


  • diemkay@hachyderm.io

    Parasocial relationships with LLM chatbots are a serious topic.

    Parasocial might as well be called parasitic or even predatory.

    So far, three US teenagers killed themselves after talking to chatbots (and getting advice or encouragement from them).

    Yet more adults took their lives, such that there's a Wikipedia page now: "Deaths linked to chatbots." File this under facts you didn't want to know about the world.

    Adam Raine, 16, started with math homework help for an hour each day, escalated to discussing depression ("having a hard time at school" type stuff), and ended up talking to ChatGPT five hours daily, culminating in getting advice on how to end things.

    https://www.washingtonpost.com/technology/2025/12/27/chatgpt-suicide-openai-raine/ (I know but the story is legit)

    His parents are suing, but OpenAI's defense was that he "circumvented guardrails" and "was at risk anyway."

    As a parent, all I can say is: the ABSOLUTE GALL of using legally calculated corporate speak to blame the victim. It's a dagger to the heart of anyone who has a child they love. "Your kid would’ve killed themselves anyway, tough luck."

    I can’t even imagine the hurt and sorrow.

    A company charges money for a service that has killed children, and its response is that the victims should have read the terms more carefully.

    Other parents describe these conversations as grooming when they turn sexual. They say it's like "having a stranger in your house" talking to your child, except it's an algorithm.

    Mothers say AI chatbots encouraged their sons to kill themselves

    In her first UK interview Megan Garcia speaks to Laura Kuenssberg about the death of her teenage son.

    (www.bbc.com)

    One company at least banned under-18s from talking directly to its chatbots, while the other argues that you're "holding it wrong."

    I hope the parents get some closure from the lawsuits. But many others don’t have the resources to do this, nor should they have to.


  • diemkay@hachyderm.io

    A post is doing the rounds in my feed that two AI safety researchers recently quit and made the news.

    One quit with a cryptic warning that "the world is in peril" and advised contemplating beauty; the other warned about the perils of the parasocial relationships people are developing with LLMs.

    On one hand, they did quit, which is more than most do. You don't reform a system from the inside, and they found out the hard way.

    BUT they helped build this, got their stock options and their salaries, and NOW they have concerns and tell everyone to be careful?

    Another part of me is like, why do we care about what they think? Geoffrey Hinton quit Google with similar warnings three years ago and everyone cared for a week.

    Instead of wondering "what could they have possibly meant" (wow, so cryptic), we could be looking at something more tangible: legislation!

    Also, it’s an IPO year. Everything could be marketing at this point.

    But let's assume good faith. At least one of them used their platform on the way out to warn of specific dangers.


  • AI slop is so useful and desirable that Google and Microsoft have to spend shit tons of money to have "influencers" shill for it: https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html
    diemkay@hachyderm.io

    @rysiek I remember they were telling artists that if they didn’t become NFT artists they’d be left behind
