  • 0 Votes
    1 Posts
    8 Views
    arint@arint.info
    RT @TheAhmadOsman: GPT 5.5 (and a batch of random model names) has been added to Codex CLI. More at Arint.info #AI #CodexCLI #GPT5 #LLM #Software #arint_info https://x.com/TheAhmadOsman/status/2046802876924666297#m
  • 0 Votes
    1 Posts
    0 Views
    arint@arint.info
    RT @HotAisle: Kimi K2.6 + DFlash: 508 tok/s on 8x H100. More at Arint.info #inference #LLM #LLMServing #throughput #transformers #arint_info https://x.com/HotAisle/status/2046620289984057634#m
  • 0 Votes
    1 Posts
    0 Views
    arint@arint.info
    RT @TheGeorgePu: Anthropic just removed Claude Code from the Pro plan. More at Arint.info #AItools #Anthropic #ClaudeCode #LLM #SoftwareEngineering #arint_info https://x.com/TheGeorgePu/status/2046705634331025855#m
  • 0 Votes
    1 Posts
    1 Views
    arint@arint.info
    RT @poezhao0605: A Xiaomi executive who previously worked at DeepSeek laid out the math before Anthropic even responded. Their post went viral in Chinese tech circles; there has been almost no coverage in the English-speaking world. Their core point: competition in AI coding is shifting from cost per token to tasks completed per token. Cheaper tokens don't help if agents waste millions of them through poor context management. Poe Zhao (@poezhao0605): Claude Code has been removed from the $20/month Pro plan; the Max plan ($100/month or higher) is now required. This follows the OpenClaw crackdown two weeks ago. Same logic: flat-rate subscriptions can no longer absorb agent workloads that burn 10 to 100 times more tokens per task than chat models. I published a deep dive on exactly this dynamic last week. Alibaba's coding plan sells out daily at 9:30 a.m. The economics no longer add up. hellochinatech.com/p/china-a… — https://nitter.net/poezhao0605/status/2046747127309836329#m More at Arint.info #AI #Anthropic #Claude #coding #LLM #tech #arint_info https://x.com/poezhao0605/status/2046774802149707798#m
  • RT @rot13maxi: ok.

    Uncategorized automation llm wiki arintinfo
    1
    0 Votes
    1 Posts
    0 Views
    arint@arint.info
    RT @rot13maxi: ok. hermes agent profiles are cool. I have a dedicated librarian profile for my wiki. It uses its own model and has its own skills. My other agents (hermes, coding agents, etc.) can query it. I've focused it entirely on wiki management. @NousResearch did a great job with profiles, so now I can just use librarian query — https://nitter.net/rot13maxi/status/2046651664556294373#m More at Arint.info #AI #Automation #LLM #Wiki #arint_info https://x.com/rot13maxi/status/2046684246618607941#m
  • 0 Votes
    1 Posts
    1 Views
    arint@arint.info
    RT @Srini_Pa: LLMs fail in enterprise settings because they cannot learn. What is an annoyance for consumers is a risk for enterprises. More at Arint.info #EnterpriseAI #KünstlicheIntelligenz #LLM #Softwareentwicklung #TechTrends #arint_info https://x.com/Srini_Pa/status/2046574546376135025#m
  • 0 Votes
    1 Posts
    0 Views
    arint@arint.info
    RT @Kimi_Moonshot: We are open-sourcing FlashKDA — our CUTLASS-based, high-performance implementation of Kimi Delta Attention kernels. It achieves a 1.72x to 2.22x prefill speedup over the flash-linear-attention baseline on H20 GPUs and serves as a drop-in backend for flash-linear-attention. More at Arint.info #AttentionMechanism #DeepLearning #GPUoptimization #LLM #OpenSource #arint_info https://x.com/Kimi_Moonshot/status/2046607915424034839#m
  • 0 Votes
    1 Posts
    0 Views
    arint@arint.info
    RT @ns123abc: BREAKING: Google DeepMind has assembled a dedicated team as Anthropic gains the upper hand in coding tasks. More at Arint.info #AI #Anthropic #ArtificialIntelligence #GoogleDeepMind #LLM #SoftwareEngineering #arint_info https://x.com/ns123abc/status/2046241790110445930#m
  • 0 Votes
    1 Posts
    1 Views
    mainframed767@infosec.exchange
    Why didn't they use mythos to protect mythos? #mythos #anthropic #llm #ai
  • Another chapter of AI slop ruining everything:

    Uncategorized unifi llm
    10
    0 Votes
    10 Posts
    1 Views
    fedops@fosstodon.org
    @azsiaz fully on-brand for that outfit. @sheogorath
  • 🎥 Video

    Uncategorized dfir llm forensics privacy
    1
    0 Votes
    1 Posts
    1 Views
    hasamba@infosec.exchange
    Opening: The episode from 13Cubed presents a practitioner-focused conversation about integrating modern LLMs into digital forensics and incident response workflows. The speaker provides concrete examples from recent investigations and offers practical cautions about data handling and model choice.

    Technical details: The video documents at least two concrete use cases: (1) using a public LLM (Claude) to guide the analyst through steps to access and query an unfamiliar database format by mounting a disk image and attaching the database in a WSL2 Ubuntu 24.04 environment, and (2) asking a public model to generate a Bash script that parsed unstructured strings and produced a deduplicated, sorted CSV output (avoiding manual grep/sed/awk/cut work). The creator emphasizes that no case-specific investigation details were shared with the public model during these interactions.

    Analysis: The examples illustrate how LLMs can accelerate triage and parsing tasks in DFIR — from transportable guidance on unfamiliar file formats to rapid generation of parsing scripts for data transformation. The practical value lies in time savings for routine extraction and formatting steps and in lowering the friction of working with obscure artifacts.

    Limitations and risks: The content highlights two primary limitations: data privacy (risk of exposing investigation details to public models) and model reliability (potential for incorrect or incomplete guidance). The speaker explicitly recommends treating outputs from public models as aids that must be validated, not authoritative results. Local models are introduced as an alternative, but with trade-offs around capability, accuracy, and operational complexity.

    Detection and operational considerations: While the video is not a threat report, it implies defensive considerations: log activities when integrating model-based tooling into processing pipelines, validate generated parsing outputs against known-good data, and maintain provenance for transformed evidence. Conceptually, model-assisted parsing should be accompanied by reproducible workflows and human verification steps.

    Practical integration points: Candidate uses include assisted analysis of unknown formats, automated script generation for data extraction and deduplication, summarization of large unstructured artifacts, and incremental triage to prioritize analyst time. The presenter also touches on 'vibe coding' (rapid, iterative coding with model help) and career advice for new practitioners adapting to AI-augmented workflows.

    Conclusion: The episode offers measured, example-driven guidance: LLMs can materially speed DFIR tasks, but they create privacy and verification obligations. Outputs require skeptical validation and cautious operational integration. #AI #DFIR #LLM #forensics #privacy

    Source: https://www.youtube.com/watch?v=wKn-9sKBqX8
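    To illustrate the second use case: the episode's script was in Bash, and neither the input format nor the extraction pattern is given, so what follows is only a minimal Python sketch of the same extract/deduplicate/sort-to-CSV workflow, with a hypothetical URL regex standing in for whatever the artifact actually required. Per the episode's own caution, output like this should be validated against known-good data before it touches evidence.

        import csv
        import re
        import sys

        # Hypothetical pattern: pull URLs out of raw `strings` output.
        # The episode does not specify the real pattern; adjust per artifact.
        PATTERN = re.compile(r"https?://[^\s\"']+")

        def extract_unique(lines):
            """Collect unique pattern matches from unstructured input lines."""
            found = set()
            for line in lines:
                found.update(PATTERN.findall(line))
            return sorted(found)

        if __name__ == "__main__":
            in_path, out_path = sys.argv[1], sys.argv[2]
            with open(in_path, errors="replace") as src:
                values = extract_unique(src)
            with open(out_path, "w", newline="") as dst:
                writer = csv.writer(dst)
                writer.writerow(["value"])  # one-column CSV, deduplicated and sorted
                writer.writerows([v] for v in values)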
  • Our team is moving from Azure DevOps to #Jira.

    Uncategorized jira agile llm
    4
    0 Votes
    4 Posts
    19 Views
    grumpasaurus@infosec.exchange
    @airwhale A man walks into a bar, orders a drink, then takes out from his pocket a tiny man and a tiny piano. He puts them on the counter, and the tiny man begins to play the piano beautifully.

    The bartender, obviously impressed, asks the man, "Wow, where did you find him?"

    "I wished on this magic lamp," says the man, taking a lamp out of his other pocket.

    "Wow! Can I try?"

    "Sure, go ahead."

    So the bartender concentrates and rubs the lamp. All of a sudden, the bar is absolutely filled with ducks.

    "What's going on?" the bartender shouts. "I asked for a million *bucks*!"

    "Do you really think I wished for a ten-inch pianist?"

    *Note: you have to pronounce pianist like "PEE-enist"

    https://www.reddit.com/r/Jokes/s/6FYahg0I8P
  • 0 Votes
    1 Posts
    5 Views
    xenodium@indieweb.social
    chatgpt-shell was my most popular #emacs package. Here comes agent-shell #llm #agent #ai #agentic #opencode #gemini #claude #foss #oss #macos #linux
  • 0 Votes
    4 Posts
    2 Views
    juergen_hubert@mementomori.social
    @skysong Oh, I have a lengthy list of books I plan to publish, too. But the wiki is an excellent marketing tool for my work.
  • 0 Votes
    1 Posts
    10 Views
    arint@arint.info
    RT @Ali_TongyiLab: 1/4 Qwen3.6-Max-Preview: Smarter, sharper, still evolving. More at Arint.info #Coding #KünstlicheIntelligenz #LLM #Qwen3 #TechNews #arint_info https://x.com/Ali_TongyiLab/status/2046227346727014842#m
  • 0 Votes
    1 Posts
    1 Views
    dendrobatus_azureus@mastodon.bsd.cafe
    Interesting read on the way LLM bots retrieve pages from a website. The explanations are clear, precise, and surgical. https://surfacedby.com/blog/nginx-logs-ai-traffic-vs-referral-traffic #LLM #AI #slop #nginx #traffic #programming #referral #traffic #networking #robots #txt #claude #chatgpt #bing #meta #metaAI
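    The linked post works from real nginx access logs; purely as a hedged illustration (not the article's own method), here is a minimal Python sketch that buckets combined-format log lines into AI-bot, referral, and direct traffic by user agent. The bot substrings and the access.log path are assumptions, and real crawler names vary.

        import re
        from collections import Counter

        # Matches the request/status/bytes/referrer/agent tail of an
        # nginx combined-format log line.
        LOG_RE = re.compile(
            r'"[A-Z]+ [^"]*" \d{3} [\d-]+ "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
        )

        # Assumed user-agent substrings for common LLM crawlers/fetchers;
        # the linked article's list may differ.
        AI_AGENTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "CCBot",
                     "Bytespider", "meta-externalagent")

        def classify(line):
            """Label one log line as ai-bot, referral, direct/other, or unparsed."""
            m = LOG_RE.search(line)
            if not m:
                return "unparsed"
            if any(bot in m.group("agent") for bot in AI_AGENTS):
                return "ai-bot"
            if m.group("referrer") not in ("", "-"):
                return "referral"
            return "direct/other"

        with open("access.log", errors="replace") as log:  # assumed path
            print(Counter(classify(line) for line in log))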
  • BBC: AI chatbots could be making you stupider

    Uncategorized llm
    2
    0 Votes
    2 Posts
    0 Views
    ai6yr@m.ai6yr.org
    "... Those who used their own minds had a brain that was "on fire", showing widespread activity across many parts of the brain, she says. The search engine-only group still showed strong activity in the visual parts of the brain, but the ChatGPT group showed notably less brain activity – it was reduced by up to 55%...." #ai #llm #intelligence
  • 0 Votes
    1 Posts
    0 Views
    juergen_hubert@mementomori.social
    RE: https://sunkencastles.com/2026/04/20/1101/

    I have been asked quite often whether I would consider using #LLM for my translations of German folk tales. I have strong opinions on this, and here they are.

    #folklore #translation
  • 0 Votes
    1 Posts
    1 Views
    juergen_hubert@sunkencastles.com
    Why I refuse to use Machine Translation

    In the last few years, there has been a lot of talk about how artificial intelligence (actually: commercial chatbots and LLMs) will be transforming our way of working – how it will make some jobs more efficient, and others obsolete. There are also concerns that such systems do not live up to the hype – though this has not stopped CEOs and their consultants from pushing them into the workplace, in the hopes of drastically reducing their work force and labor costs, even though they cannot substitute for their workers’ process knowledge.

    I translate old German folk tales into English, and translation work is already heavily automated these days due to the sheer amount of material that needs to be translated. Thus, it is unsurprising that many people have asked me whether I use machine translation for my work – usually with the assumption that this would save me time.

    In this essay, I am going to tell you why I won’t use AI systems for my translation work. I could talk about the ethical concerns – how the work of others is used to train LLM systems without compensation while charging for their output, or how they consume massive amounts of electricity and other resources while our planet and its ecosystems are already on the precipice, or how they are used to build up the mother of all investment bubbles.

    I could also add some personal grievances. For instance, in my day job as a bid manager, I also have to price server systems for our customers, and when I recently noticed that a simple 16 GB DDR5 RAM module had a purchase price of €1,600, I realized that something is going very wrong indeed. Furthermore, anonymous bot networks are constantly scraping my websites for LLM training data, forcing me to upgrade my website hosting plan twice last fall to keep outages at a tolerable level.

    But since others have elaborated on the ethical concerns in much more detail than I ever could, I won’t be talking about these further. Instead, I will be discussing the practical reasons why machine translation does not fit into my working processes when translating German folk tales.

    Reading the Fraktur Typeset

    The first challenge for machine translation is parsing the source material. For copyright reasons, I exclusively use public domain works – German folk tale collections which were largely published in the 19th century. And the vast majority of these works were not printed with the modern Antiqua letters, but the old German Fraktur typeset. Here is a reasonably “clean” example of a story I have translated (the source page is here).

    Usually, texts that are converted into a new language by machine translation are already in a machine-readable format – but these old digital scans are not. Thus, before I could use machine translations for these texts, I would need to convert them into a machine-readable format. While OCR (“Optical Character Recognition”) tools exist that can handle Fraktur typesets, the output would require additional effort for proofreading, especially since the input data is highly variable in its quality.

    Thus, in contrast to the original premise, machine translation would actually increase my workload even before I got to the actual translation step.

    Translating Old Words and Phrases

    LLM systems are largely trained on the most commonly available modern texts (such as Reddit posts). 19th century German folk tales are not “modern texts”. They are rife with old words and phrases that were only used in some small geographical area and are no longer in modern use. Would a standard machine translation system (i.e., one trained on Reddit) come up with a decent translation for “Bindelbaum” – to pick just one example that stuck in my mind? Especially considering that the old texts that could provide some context were not in a machine-readable format, and thus of limited use for training the LLMs?

    Perhaps they could, and perhaps they couldn’t. However, “maybe this is an accurate translation” is not good enough for my purposes, and indeed, it is not sufficient for any professional translator. If I provide a translation for certain old words and phrases, I need to be as sure as possible that this translation is accurate – and if I am uncertain, I need to explain that to my readers as well.

    Thus, I would have to double-check every machine-translated text against my own research – which, again, would not save me any time. And if I am doing all the research anyway, I might as well skip the machine translation and do it all by myself in the first place.

    Providing Context

    But truth be told, the actual translation is the easiest part of my work. German folk tales were told in a specific time and a specific cultural context. The original audience for these tales (mostly 19th century German peasants) were deeply familiar with this context.

    A modern audience will usually not be familiar with this context. Many aspects of these folk tales are hard to grasp even for modern Germans – so what chance does an international audience have?

    This is why one of my most important tasks as a translator is to explain this context. This is why my books have many hundreds of footnotes, and explanatory commentary following each tale. While I am not primarily writing my books as scientific treatises, I have spent enough years in academia that I have firm views on providing inaccurate information. Sure, mistakes can and will happen. But allowing errors to proliferate in my manuscripts because I was outsourcing the most critical aspects of my research to LLM systems would be a gross violation of ethical standards (not that this seems to stop a lot of LLM users…).

    So I will do my research the proper way. And with each paragraph I translate, I contemplate its hidden meanings and context, and how to convey it to my readers. But if I don’t do the first step of the work myself – that is, translating and thinking about every single sentence – then I have already lost my first opportunity to truly understand the story.

    Preserving Unique Voices

    German folk tales were told by tens of thousands of people, each of whom had their own unique way of telling their stories. And later on, they were collected by hundreds of folklore researchers, each of whom had their own unique editorial approach. That adds up to a lot of unique voices.

    However, LLMs are well-known to generate texts that trend towards the average. They have been trained on vast archives of human-written texts, and their task is to create texts that are “most likely” to fit the prompt – the common denominator, if you will. Worse, it will be the most common denominator of Reddit users and the like. The only LLM system that might even come close to capturing the unique voices of the original texts would be one that has been trained exclusively on their translations – including my translations.

    While I want people to be entertained by my translations, these tales are also part of my country’s cultural heritage. Not even trying to capture the unique voices of these long-ago storytellers and instead replacing them with the generic output of LLMs feels hugely disrespectful. They deserve better, and my audience deserves better as well.

    #LLM #MachineTranslation #Translation
  • 0 Votes
    1 Posts
    0 Views
    juergen_hubert@mementomori.social
    RE: https://mastodon.social/@gwynnion/116421925941002587

    Even before #LLM arrived, there were plenty of hucksters selling a "publish lots of books in order to farm lots of passive income $$$!" scheme. Back then, the height of the scam was hiring ghostwriters in developing countries who would write your "werewolf romance" novels or whatever. These days, it's done by LLMs.