  • 0 Votes
    1 Posts
    0 Views
Bloomberg Technology | Jury Finds Meta, Google Liable for Addiction | Bloomberg Tech 3/26/2026
Bloomberg’s Caroline Hyde and Ed Ludlow discuss the jury verdict holding Google and Meta liable for a young woman’s social media addiction. Plus, the case in California was just the start of thousands of similar cases that could affect the businesses of social media companies. And Google researchers tout a new compression technique for LLMs and vector search engines, sending shares of memory and storage companies lower. (Source: Bloomberg)
Read more: https://www.bloomberg.com/news/videos/2026-03-26/bloomberg-tech-3-26-2026-video
#meta #google #california #llms
  • Boost plz!

    Uncategorized llm llms libraries archives
    1
    0 Votes
    1 Posts
    0 Views
lina@neuromatch.social
Boost plz!
Looking for critical scholarship on the use of "AI" by library/archive workers. University libraries in particular, but adjacent and tangentially-relevant-at-best stuff is welcome too. Any format is fine: books, papers, blogposts, whatever. If it's good, gimme all you've got!
Looks like we're gonna have a department-wide conversation about people using LLMs, and it's being framed as "we're all using it, but we're not talking about it, so let's make sure we're all on the same page about using it responsibly" ... I'll of course be pushing the "there's basically no way to use it responsibly" position, and I'd like to arm myself and others with some critical analyses of issues related to its use in library/archive spaces.
#llm #LLMs #ai #libraries #archives
  • 0 Votes
    1 Posts
    2 Views
michalfita@mastodon.social
I asked #Google #Stitch for screens for a simple application based on my spreadsheet's screenshot... 20 minutes of processing ended with an unexpected error and an inability to finish the task.
#Prompt had three lines.
Right. Am I an #LLMs #killer?
  • 0 Votes
    1 Posts
    0 Views
abucci@buc.ci
A good review of reasons insurance companies are pulling back from insuring companies that lean on generative AI. Point 4, "The main problem is not just the error, but the incentive not to see it", is especially damning: use of AI not only obscures audit trails, it sets up perverse incentives against accountability, pushing costs and risk to other parts of an organization, its customers, or society. The net result is that whatever "local" advantages AI may provide turn into downstream risk that cannot be easily accounted for. Insurance companies are (rightly) allergic to this state of affairs.
Another example of how whole-systems thinking is very helpful for parsing the effects of technology changes like this.
https://freakonometrics.hypotheses.org/89367
#AI #GenAI #GenerativeAI #LLMs #AgenticAI #GPT #ChatGPT #Claude #Gemini #ActuarialScience #insurance
  • #democracy #elections #LLMs

    Uncategorized democracy elections llms
    1
    0 Votes
    1 Posts
    0 Views
renatomancer@vmst.io
    #democracy #elections #LLMs https://viterbischool.usc.edu/news/2026/03/usc-study-finds-ai-agents-can-autonomously-coordinate-propaganda-campaigns-without-human-direction/
  • 0 Votes
    3 Posts
    4 Views
mikalai@privacysafe.social
@janriemer I'd articulate the usefulness of Rust's type system as passing a "mental model" from creator to reader/user-developer, with the compiler precisely enforcing the model expressed in types.
Switching to "use of GenAI" tools, the same criteria should apply: if the explicit mental model is articulated in types, human readers can fully grasp it, and the compiler-checked code works, then why should I tell anyone what tool to use?
The reality, of course, is that despite the M in LLM, there is no explicit model.
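The "mental model expressed in types, enforced by the compiler" idea from the post above can be sketched with Rust's typestate pattern. This is a minimal illustration, not from the thread itself; the `Channel`, `connect`, and `send` names are all hypothetical:

```rust
use std::marker::PhantomData;

// Two zero-sized marker types encode the lifecycle states of the model.
struct Disconnected;
struct Connected;

// The state lives in the type, not in a runtime field.
struct Channel<State> {
    _state: PhantomData<State>,
}

impl Channel<Disconnected> {
    fn new() -> Self {
        Channel { _state: PhantomData }
    }
    // Connecting consumes the disconnected channel and yields a connected one.
    fn connect(self) -> Channel<Connected> {
        Channel { _state: PhantomData }
    }
}

impl Channel<Connected> {
    // `send` only exists on a connected channel, so calling it before
    // `connect()` is a compile-time error rather than a runtime bug.
    fn send(&self, msg: &str) -> usize {
        msg.len()
    }
}

fn main() {
    let ch = Channel::new().connect();
    println!("sent {} bytes", ch.send("hello"));
}
```

A reader who has never seen this code can recover the intended lifecycle purely from the types, which is the "mental model in types" point the post is making.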
  • 0 Votes
    1 Posts
    8 Views
concertina226@infosec.exchange
Today the House of Lords has published a report on their inquiry into #AI and its impact on copyright and the creative industries, which lists some very sensible reasons why AI needs to be better regulated.
This relates closely to what I said on @BBCNews 2 weeks ago: the House of Lords has consistently raised concerns in bills about creative imagery being devalued and used by AI models for free, yet somehow these bills disappear into nothing when they get to Parliament.
Full report: https://publications.parliament.uk/pa/ld5901/ldselect/ldcomm/267/267.pdf
TL;DR, they found that:
- The answers produced by AI are not based on it learning anything, and thus shouldn't be treated as such
- Gov UK needs to strengthen licensing, transparency & enforcement of copyright law, not weaken it
- Gov UK's mixed messaging regarding AI is creating big problems, including undermining investment
- Gov UK needs to issue a "clear public statement" that commercial AI developers need to obtain licences in order to use copyrighted works to train AI models
My analysis:
For some very strange reason, we don't seem to value imagery, which makes no sense given copyright laws today. The datasets used to train AI clearly need to be legislated. The reason that images were used to train AI models in the first place was to enable the AI to differentiate objects in a photo, i.e. to understand the difference between a car, a tree, a child, the grass in the background and the sky. But AI models were never meant to replace the creativity, art, design and illustration industries, the film industry, video games etc. All of these things are licensed and copyrighted.
So why is nobody doing anything? Why are we allowing computer algorithms to replace human expression when we have always enforced IP on human-created materials? All governments should enforce legislation on the datasets being used to train AI, and I predict that firms in these industries and multiple other affected industries relating to text IP will eventually band together and sue these tech giants. Because the alternative literally doesn't make any sense.
#Copyright #HouseofLords #LLMs #technews #technology #UKlaw #analysis
  • 0 Votes
    10 Posts
    14 Views
jb@social.lemee.co
@bladecoder @ploum Agreed, but put yourself in the shoes of the manager and the company faced with two "equivalent" devs, one of whom is 2x as productive and the other not. It's hard not to try to convince the dev who doesn't want to change his tools. And then, economically, when the competition catches up with you, you have no choice but to accelerate the same way or to "purge". Nobody has a choice: not the devs, not the managers, not the companies.
  • So, everyone's working right?

    Uncategorized slop llms vibecode claude
    4
    1
    0 Votes
    4 Posts
    10 Views
rolle@mementomori.social
@iamkonstantin https://status.claude.com
Issues the whole day.
  • 0 Votes
    1 Posts
    2 Views
erikjonker@mastodon.social
LLMs are good at deanonymization.
https://arxiv.org/abs/2602.16800
#AI #LLMs #deanonymization #cybersecurity
  • 0 Votes
    1 Posts
    5 Views
neilmadden@infosec.exchange
“What I mean is that if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your own mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that’s really the essence of programming. By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve certainly learned something about it yourself. The teacher usually learns more than the pupil. Isn’t that true?” — Douglas Adams
“It is not knowledge, but the act of learning, not possession, but the act of getting there which generates the greatest satisfaction.” — Carl Friedrich Gauss
“You think you KNOW when you learn, are more sure when you can write, even more when you can teach, but certain when you can program” — Alan Perlis (of course)
Why I don’t use #LLMs for #programming …
  • 0 Votes
    2 Posts
    8 Views
remixtures@tldr.nettime.org
"This result is not surprising. Password generation seems precisely the thing that LLMs shouldn’t be good at. But if AI agents are doing things autonomously, they will be creating accounts. So this is a problem. Actually, the whole process of authenticating an autonomous agent has all sorts of deep problems."
https://www.schneier.com/blog/archives/2026/02/llms-generate-predictable-passwords.html
  • 0 Votes
    1 Posts
    5 Views
r_alb@mastodon.social
What exactly do they teach at business schools these days?
Wanting to reduce a company's dependency on humans by replacing them with slop machines, while dramatically increasing the company's dependency on a mere handful of other companies, makes no sense, not even from a capitalist business perspective.
Unless ... of course ... you needed some fancy buzzwords to feed to eager investors. Ah, well, there we have the answer.
--
#StopTheSlop #LLMs #business
  • 0 Votes
    8 Posts
    0 Views
gotofritz@hachyderm.io
@petersuber I applaud their courage; sadly, I fear those are 684 candidates for the next round of layoffs.
  • 0 Votes
    1 Posts
    15 Views
timokissel@mastodon.world
If we get nothing out of #AI other than #LLMs making software less Swiss-cheese-like, I'm going to consider that a big win.
#cybersecurity
https://red.anthropic.com/2026/zero-days/