  • solomonneas@infosec.exchange
    Ollama: Native MLX Backend for Apple Silicon
    Ollama now runs natively on Apple MLX. On an M5 Max with Qwen3.5-35B-A3B: 1851 tok/s prefill, 134 tok/s decode. Also adds NVFP4 quantization for production parity with NVIDIA inference, and improved KV cache reuse for agentic workloads.
    solomonneas.dev/intel
    #Ollama #LLM #AppleSilicon #DevTools
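The prefill/decode figures above are the same quantities Ollama reports through its API: a non-streaming /api/generate response includes prompt_eval_count/prompt_eval_duration (prompt/prefill phase) and eval_count/eval_duration (generated tokens/decode phase), with durations in nanoseconds. A minimal sketch, assuming that documented response shape, for turning them into tok/s:

```python
# Compute prefill/decode throughput (tok/s) from an Ollama /api/generate
# response dict. prompt_eval_* covers the prompt (prefill) phase, eval_*
# the generated tokens (decode); *_duration fields are in nanoseconds.
def throughput(resp: dict) -> tuple[float, float]:
    prefill = resp["prompt_eval_count"] / (resp["prompt_eval_duration"] / 1e9)
    decode = resp["eval_count"] / (resp["eval_duration"] / 1e9)
    return prefill, decode
```

Handy for checking whether your own machine reproduces quoted benchmark numbers, whatever backend Ollama is using.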
  • Ollama v0.19.0-rc1 dropped.

    Uncategorized · ollama · localai · devtools · aiinfra
    solomonneas@infosec.exchange
    Ollama v0.19.0-rc1 dropped.
    New warning when local server context is below 64K tokens. If you run Ollama for agent workflows, this prerelease will surface misconfigured deployments that were silently truncating on longer tasks. Also includes VS Code path handling fixes and hides the Cline integration.
    Test in non-production before upgrading anything OpenClaw-adjacent.
    Source: https://github.com/ollama/ollama/releases/tag/v0.19.0-rc1
    Full intel feed: solomonneas.dev/intel
    #Ollama #LocalAI #DevTools #AIInfra
    solomonneas@infosec.exchange
    Ollama v0.19.0-rc1 dropped with a useful new warning: if your local server context is below 64K tokens, it flags it now instead of silently truncating. Also changes VS Code path handling and removes the Cline integration from the UI.
    Worth testing in a non-prod environment before upgrading anything OpenClaw-adjacent.
    Release notes: https://github.com/ollama/ollama/releases/tag/v0.19.0-rc1
    solomonneas.dev/intel
    #Ollama #DevTooling #LocalAI #OpenSource
    It gets even better. The #katze (cat) has completely dismantled the #model. At least there seems to be some hit rate that I get an answer at all, but as for how accurate it is... Check the alt text of the image. For anyone who really needs it: the image shows a gray tabby cat with black markings and green eyes, with some green plants in the background.
    hasamba@infosec.exchange
    Tool: meetscribe — Local meeting capture, diarization and summaries

    meetscribe is a locally-run meeting capture and transcription tool that records dual-channel audio (user mic and remote system audio) at the OS level and produces diarized transcripts, time-aligned text, AI-generated summaries, and a polished PDF export. The project chains several open components into an end-to-end offline workflow for meetings.

    Architecture and core components
    • Audio capture: captures mic and remote audio as separate channels via PipeWire or PulseAudio, with ffmpeg handling recording and file creation.
    • ASR and alignment: uses WhisperX for batched inference with the openai/whisper-large-v3-turbo model and performs word-level timestamp alignment using wav2vec2 alignment methods.
    • Speaker diarization: uses pyannote-audio to assign speech segments to speakers; the dual-channel signal enables automatic YOU/REMOTE labeling.
    • Local LLM summaries: integrates with local LLM runtimes (Ollama) to extract key topics, action items, decisions, and follow-ups without sending data to cloud services.
    • Outputs and UX: produces multiple export formats (.txt, .srt, .json, .summary.md, and a professionally formatted PDF containing the summary plus full transcript) and exposes both a small GTK3 always-on widget for recording control and a command-line interface for scripted workflows.

    Operational details and requirements
    • Platform: Linux with PipeWire or PulseAudio. Designed to work with any meeting app that plays audio through the system (Zoom, Meet, Teams, Slack, Discord, etc.).
    • Models and tokens: diarization requires a HuggingFace access token for pyannote-audio; ASR relies on WhisperX with model artifacts. Local LLM summarization is optional and requires a local LLM runtime and model.
    • Hardware: GPU acceleration is supported and recommended (NVIDIA CUDA, 8GB+ VRAM suggested) for faster inference; CPU mode is available but slower.

    Capabilities and limitations
    • Capabilities: reliable dual-channel capture, word-level timestamps, speaker diarization with automatic YOU/REMOTE labels, offline LLM summaries, organized per-session folders, and multi-format exports including a professional PDF.
    • Limitations: Linux-centric; diarization depends on a HuggingFace access token; LLM summaries require a local LLM runtime and model artifacts. Performance and latency depend on local hardware.

    #meetscribe #WhisperX #pyannote_audio #Ollama #PipeWire
    Source: https://github.com/pretyflaco/meetscribe
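As an illustration of the time-aligned exports, here is a sketch of writing diarized segments out as .srt with YOU/REMOTE labels; the segment schema and the [SPEAKER] prefix are assumptions for illustration, not necessarily meetscribe's actual exporter:

```python
def _ts(seconds: float) -> str:
    # SRT timestamp format: HH:MM:SS,mmm
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(segments) -> str:
    # segments: [{"start": float, "end": float, "speaker": "YOU"|"REMOTE", "text": str}]
    blocks = []
    for i, seg in enumerate(segments, 1):
        blocks.append(
            f"{i}\n{_ts(seg['start'])} --> {_ts(seg['end'])}\n"
            f"[{seg['speaker']}] {seg['text']}"
        )
    return "\n\n".join(blocks) + "\n"
```

Because the diarized segments already carry word-level timestamps from the alignment step, the same structure can feed the .txt, .json, and PDF exports without re-running ASR.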