  • 0 Votes
    1 Posts
    0 Views
    cryptax@mastodon.social
    Write-up for 2 forensics challenges at THCon: https://cryptax.github.io/thcon2026-breach/ #THcon #CTF #LUKS #forensics
  • 0 Votes
    1 Posts
    1 Views
    alonso_reydes@infosec.exchange
    Search Engines on the Tor Network https://reydes.com/e/Buscadores_en_la_Red_Tor #cybersecurity #hacking #redteam #forensics #dfir #osint
  • 🎥 Video

    Uncategorized dfir llm forensics privacy
    0 Votes
    1 Posts
    1 Views
    hasamba@infosec.exchange
    Opening: The 13Cubed episode presents a practitioner-focused conversation about integrating modern LLMs into digital forensics and incident response (DFIR) workflows. The speaker provides concrete examples from recent investigations and offers practical cautions about data handling and model choice.

    Technical details: The video documents at least two concrete use cases: (1) using a public LLM (Claude) to guide the analyst through accessing and querying an unfamiliar database format by mounting a disk image and attaching the database in a WSL2 Ubuntu 24.04 environment, and (2) asking a public model to generate a Bash script that parsed unstructured strings and produced a deduplicated, sorted CSV output (avoiding manual grep/sed/awk/cut work). The creator emphasizes that no case-specific investigation details were shared with the public model during these interactions.

    Analysis: The examples illustrate how LLMs can accelerate triage and parsing tasks in DFIR, from guidance on unfamiliar file formats to rapid generation of parsing scripts for data transformation. The practical value lies in time savings on routine extraction and formatting steps, and in lowering the friction of working with obscure artifacts.

    Limitations and risks: The content highlights two primary limitations: data privacy (the risk of exposing investigation details to public models) and model reliability (the potential for incorrect or incomplete guidance). The speaker explicitly recommends treating outputs from public models as aids that must be validated, not as authoritative results. Local models are introduced as an alternative, but with trade-offs around capability, accuracy, and operational complexity.

    Detection and operational considerations: While the video is not a threat report, it implies defensive considerations: log activity when integrating model-based tooling into processing pipelines, validate generated parsing outputs against known-good data, and maintain provenance for transformed evidence. Model-assisted parsing should be accompanied by reproducible workflows and human verification steps.

    Practical integration points: Candidate uses include assisted analysis of unknown formats, automated script generation for data extraction and deduplication, summarization of large unstructured artifacts, and incremental triage to prioritize analyst time. The presenter also touches on 'vibe coding' (rapid, iterative coding with model help) and career advice for new practitioners adapting to AI-augmented workflows.

    Conclusion: The episode offers measured, example-driven guidance: LLMs can materially speed up DFIR tasks, but they create privacy and verification obligations. Outputs require skeptical validation and cautious operational integration.

    #AI #DFIR #LLM #forensics #privacy Source: https://www.youtube.com/watch?v=wKn-9sKBqX8
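    The "deduplicated, sorted CSV" use case above can be sketched in shell. This is a minimal illustration, not the script from the video: the input format (lines containing `user=` and `ip=` tokens) and the function name `parse_to_csv` are assumptions chosen for the example.

    ```shell
    #!/bin/sh
    # Hypothetical sketch of LLM-assisted log parsing: extract key=value
    # tokens from unstructured lines and emit deduplicated, sorted CSV rows.
    # The input format is an assumption, not taken from the episode.
    parse_to_csv() {
        awk '{
            user = ""; ip = "";
            for (i = 1; i <= NF; i++) {
                if ($i ~ /^user=/) user = substr($i, 6);  # drop "user=" prefix
                if ($i ~ /^ip=/)   ip   = substr($i, 4);  # drop "ip=" prefix
            }
            # Only emit rows where both fields were found.
            if (user != "" && ip != "") print user "," ip;
        }' "$1" | sort -u   # sort -u deduplicates identical rows
    }

    # Demo input: note the duplicate bob/10.0.0.2 entry.
    printf 'login user=bob ip=10.0.0.2 ok\nlogin user=alice ip=10.0.0.1 ok\nlogin user=bob ip=10.0.0.2 retry\n' > /tmp/sample.log
    echo "user,ip"
    parse_to_csv /tmp/sample.log
    ```

    As the episode stresses, a generated script like this should be validated against known-good data before its output is treated as evidence: the one-off demo input here plays exactly that role.
    
    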
  • 0 Votes
    12 Posts
    3 Views
    buherator@infosec.place
    @david_chisnall @itgrrl @scottymace User story: I explicitly looked for and manually enabled notification history on Android because there were notifications that contained important info, but I sometimes removed them from the screen by accident and couldn't find them in the corresponding app (I can't tell the exact app/feature).