  • 0 Votes
    1 Posts
    3 Views
danie10@squeet.me
“Subscription costs have a way of feeling invisible. You might have cloud storage here, an AI tool there, a transcription service you barely use anymore. All of this can add up to something substantial. But if you own a mid-range GPU, there’s a good chance you’re paying for things your hardware could handle for free. I have an NVIDIA RTX 3060 (currently around $250), and last year it saved me $819.96 by allowing me to cut or downgrade four subscriptions. It wasn’t by doing anything exotic, but by running free, open-source tools locally that do the same job.”

I started to realise this myself just this week, while experimenting with AI tools and AI image generation. I made a mistake a few months ago when I upgraded my video card from a 6 GB VRAM card to a 12 GB VRAM card. I did it because a game really wanted about 8 GB of VRAM and I reckoned 12 GB would give it a bit of headroom; DaVinci Resolve Studio also wanted 8 GB of VRAM for its new AI functions.

Yes, I know the prices get expensive as you go higher up the range, but I was thinking in a gaming mode, not about what else I could use that card for. Thinking now with this other mindset, I realise I should have pushed higher on my new card.

Still, that said, you can work efficiently with a 12 GB card, or even a bit smaller, if you don’t run too many GPU-intensive apps together, and you can get away with smaller, more efficient AI models too.

See howtogeek.com/ways-my-old-nvid…

#Blog, #GPU, #opensource, #subscriptions, #technology
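Whether a local model fits in a given card's VRAM comes down to simple arithmetic: weight count times bytes per weight, plus some runtime overhead. A rough sketch of that estimate, where `model_vram_gb`, the 1.5 GB overhead figure, and the example model sizes are all illustrative assumptions rather than measured values:

```python
# Rough sketch: estimate whether a quantized model fits in a VRAM budget.
# All numbers here are illustrative assumptions, not measurements.

def model_vram_gb(params_billions: float, bits_per_weight: int,
                  overhead_gb: float = 1.5) -> float:
    """Approximate VRAM need: weights plus a flat overhead for
    the runtime and KV cache (assumed, not benchmarked)."""
    weights_gb = params_billions * bits_per_weight / 8  # 1e9 params * bits / 8 = GB
    return weights_gb + overhead_gb

# A 7B model at 4-bit quantization against a 12 GB card:
need = model_vram_gb(7, 4)
print(f"{need:.1f} GB needed, fits on 12 GB: {need <= 12}")
```

By this estimate a 4-bit 7B model needs roughly 5 GB, which is why smaller quantized models remain comfortable even on 8–12 GB cards, while full-precision or much larger models quickly exceed them.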
  • 0 Votes
    1 Posts
    0 Views
cpponsea@vmst.io
ACCU on Sea 2026 SESSION ANNOUNCEMENT: Bridging CPUs and GPUs with std::execution - Using Senders/Receivers as a Frame Graph, by Al-Afiq Yeong
https://accuonsea.uk/2026/sessions/bridging-cpus-and-gpus-with-stdexecution-using-senders-receivers-as-a-frame-graph/
Register now at https://accuonsea.uk/tickets/
#cpu #gpu #cpp #coding
  • 0 Votes
    1 Posts
    0 Views
    T
Build fixes for the #FreeBSD #ports graphics/drm-61-kmod and graphics/drm-66-kmod have landed. This was the show-stopper.

Now I've submitted a patch to upgrade the #NVIDIA #GPU #driver set to 595.71.05 as Bug 295058 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=295058 and opened the corresponding review D56851. https://reviews.freebsd.org/D56851

This seems to be a bugfix release. https://www.nvidia.com/en-us/drivers/details/267226/

Info about the Linux counterpart is here: https://www.nvidia.com/en-us/drivers/details/267223/
  • 0 Votes
    1 Posts
    0 Views
    T
Version 595.71.05 of the #NVIDIA #GPU driver sets was released on Apr 28, 2026. https://www.nvidia.com/en-us/drivers/details/267226/

But the patch to upgrade the #FreeBSD #ports is #pending until a temporary workaround, or a fixed version, for the build issues of graphics/drm-61-kmod and graphics/drm-66-kmod, reported as Bug 294870 and Bug 294875, is committed. https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=294870 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=294875

Upstream seems to be working on "real fixes", but may need some more time to finish. So I've already attached a workaround patch for both Bug 294870 and Bug 294875 at Bug 294870 (not tested on any branch other than stable/15, though).
  • 0 Votes
    1 Posts
    3 Views
arint@arint.info
RT @outsource_: My 4090 went from 26 to 154 tokens per second on Qwen 3.6 27B 🤯 more at Arint.info #AI #GPU #LLM #MachineLearning #Performance #Qwen #arint_info https://x.com/outsource_/status/2047558951303028855#m
  • 0 Votes
    1 Posts
    6 Views
0mega@sk.zehnvorne.social
I've put together a simple Debian package for the dmemcg-booster. For now it ships without postinst and prerm triggers, so you have to enable the systemd services dmemcg-booster-system and dmemcg-booster-user manually:

sudo systemctl enable dmemcg-booster-system --now && systemctl enable --user dmemcg-booster-user --now

The package is available in my Debian repo, or as a download in the releases of the Git repo. #linux #gaming #hardware #gpu #drivers #amd #dmemcg
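The manual step exists only because the package lacks maintainer scripts. A hypothetical postinst fragment for a future package version might look like the sketch below; the script body is an assumption about how this could be wired up, not the package's actual contents. Note that the per-user service cannot sensibly be enabled from postinst (which runs as root), so that step would stay manual either way.

```shell
#!/bin/sh
# Hypothetical postinst sketch for a dmemcg-booster package (illustrative only).
# Enables and starts the system-level service on initial configuration.
set -e

if [ "$1" = "configure" ]; then
    # Guard with a presence check so the script also works in chroots
    # and containers where systemd is not running.
    if [ -d /run/systemd/system ]; then
        systemctl enable dmemcg-booster-system --now || true
    fi
fi
```

In practice debhelper can generate equivalent snippets automatically (dh_installsystemd), which is the usual way Debian packages handle this rather than hand-written maintainer scripts.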
  • 0 Votes
    1 Posts
    1 Views
arint@arint.info
RT @ZenMagnets: Minimax m2.7 nvfp4 runs at ~130 tok/s single-stream on 2x RTX 6k with sglang. Up to ~1500 tok/s with 64 concurrent fresh contexts. Huge performance drop-off at larger contexts. But much faster than my m2.5 vLLM setup from two months ago (read: 2 AI years), and I'm impressed by how much SGLang has caught up on high-concurrency performance, which used to be a vLLM specialty. Using the lukealonso/MiniMax-M2.7-NVFP4 config.

Image alt text: Zen Magnets (@ZenMagnets) BIG EXCITEMENT: First Minimax m2.5 NVFP4 quant on Hugging Face. 83 tok/s single-stream vLLM on two RTX 6000s. Or about twice as fast as a Mac 512GB system that costs half as much. Except the Mac can't also do 1000+ tok/s across 32+ concurrent connections. Power-limited to 550W per GPU for this test. The lukealonso/MiniMax-M2.5-NVFP4 vLLM recipe I used is in the image alt text: https://nitter.net/ZenMagnets/status/2022562893091475626#m

more at Arint.info #AI #GPU #LLM #MachineLearning #NVIDIA #SGLang #arint_info https://x.com/ZenMagnets/status/2044281284885958780#m