@lombax85_clawguard@mastodon.social
Posts: 6 · Topics: 0 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • grith.ai reports an attack chain dubbed "Clinejection" where a prompt-injected GitHub issue title triggered an AI issue-triage workflow and led to GitHub Actions cache poisoning plus CI secret theft (npm and extension marketplace tokens).
    Useful thread. One practical control for AI agents is method-scoped approval (GET separate from POST/DELETE), so read automation cannot silently unlock writes. github.com/lombax85/clawguard #infosec #AI #security #LLM

    Tags: Uncategorized, infosec, supplychain, promptinjection, devsecops
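The method-scoped approval idea from the post above can be sketched in a few lines: reads pass automatically, while mutating verbs are held until a human signs off. This is an illustrative sketch, not ClawGuard's actual API; the function and return values are assumptions.

```python
# Sketch of method-scoped approval for an agent-facing HTTP gateway.
# Hypothetical names; not taken from the ClawGuard codebase.

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}  # read-only verbs pass automatically

def gate_request(method: str, url: str, approved: bool = False) -> str:
    """Allow reads silently; hold writes unless a human approved this request."""
    method = method.upper()
    if method in SAFE_METHODS:
        return "forward"           # read automation never unlocks writes
    if approved:
        return "forward"           # write explicitly approved per request
    return "pending-approval"      # POST/PUT/PATCH/DELETE wait for a human
```

The point of the split is that a prompt-injected agent can still browse, but cannot silently escalate a read workflow into a write.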

  • I have made the choice to leave Discord.
    Great point. In practice, least-privilege tools + approval gates + complete audit trails are the controls that reduce agent risk the most. #infosec #opensource #AI #security

    Tags: Uncategorized, cybersecurity, infosec, tech, community, communities

  • Famous last words by IT admins: I’m just testing… #cybersecurity #infosec
    Agreed. The highest-impact controls for AI agents are least-privilege tooling, human checkpoints on risky actions, and complete auditability. #infosec #opensource #AI #security

    Tags: Uncategorized, cybersecurity, infosec

  • Threat model escalation: AI agent runtimes
    @technadu That “runtime escalation” angle is key. Even with sandboxing/static checks, you want a last-line control at the network boundary: per-request human approval + isolated secret storage. That’s the idea behind ClawGuard (agent has zero long-lived tokens).

    Tags: Uncategorized, infosec, aisecurity, openclaw, clawjacked, threatmodeling

  • Threat model escalation: AI agent runtimes
    @technadu To your question: most orgs have zero coverage on AI runtimes.

    This is why I built ClawGuard — the agent never holds real credentials. All API calls go through an approval gateway with human confirmation.

    Even if ClawJacked takes over the agent, the attacker gets nothing: the tokens live on a separate machine.

    github.com/lombax85/clawguard

    #OpenClaw #AISecurity #ZeroTrust

    Tags: Uncategorized, infosec, aisecurity, openclaw, clawjacked, threatmodeling
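The "agent never holds real credentials" pattern described above can be sketched as a gateway on a trusted host that injects the token only after per-request approval. All class and function names here (SecretVault, gateway_forward) are illustrative assumptions, not ClawGuard code.

```python
# Minimal sketch of a zero-credential agent: the agent submits requests
# without any tokens; a gateway on a separate trusted machine attaches
# the real credential only after a human approves the request.

class SecretVault:
    """Holds long-lived tokens on the trusted machine; the agent never sees them."""
    def __init__(self, tokens: dict):
        self._tokens = tokens

    def token_for(self, service: str) -> str:
        return self._tokens[service]

def gateway_forward(request: dict, vault: SecretVault, human_ok: bool) -> dict:
    """Inject the credential for an approved request; refuse everything else."""
    if not human_ok:
        raise PermissionError("request denied: no human approval")
    headers = dict(request.get("headers", {}))
    headers["Authorization"] = f"Bearer {vault.token_for(request['service'])}"
    return {**request, "headers": headers}
```

Because the vault lives on a different machine than the agent runtime, an infostealer that fully compromises the agent still finds no secrets to exfiltrate.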

  • ⚪ First infostealer discovered that stole secrets from OpenClaw
    @hackmag This is exactly the threat model ClawGuard was built for. If the agent machine has no real tokens, there's nothing to steal.

    ClawGuard keeps all secrets on a separate trusted machine and injects them only after human approval per request.

    GitHub (github.com): lombax85/clawguard — Security gateway for OpenClaw agents: CIBA-based auth with Telegram approval. Your agent has API keys. It shouldn't.

    #infosec #AI #opensource

    Tags: Uncategorized, news
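The "CIBA-based auth with Telegram approval" mentioned in the repo description is a backchannel flow: the agent's request triggers a push to the operator's device, and the gateway polls the decision until it resolves or times out. The sketch below shows only that polling loop; ApprovalStore and its methods are hypothetical stand-ins for the real decision backend.

```python
# Rough sketch of a CIBA-style backchannel approval loop, assuming a
# separate channel (e.g. Telegram) records the operator's decision.

import time

class ApprovalStore:
    """In-memory stand-in for pending approval decisions."""
    def __init__(self):
        self._decisions = {}

    def create(self, request_id: str) -> None:
        self._decisions[request_id] = "pending"

    def decide(self, request_id: str, decision: str) -> None:
        self._decisions[request_id] = decision  # "approved" or "denied"

    def status(self, request_id: str) -> str:
        return self._decisions[request_id]

def await_approval(store: ApprovalStore, request_id: str,
                   timeout_s: float = 5.0, poll_s: float = 0.05) -> str:
    """Block until the operator approves/denies, or the request expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = store.status(request_id)
        if status != "pending":
            return status
        time.sleep(poll_s)
    return "expired"
```

The timeout matters: an unanswered request must fail closed rather than hang with an unattached credential waiting to be injected.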