Skip to content
  • 0 Votes
    1 Posts
    1 Views
    jerraddahlager@infosec.exchangeJ
    I deployed Microsoft Entra Prompt Shield end-to-end and tested it against real jailbreak payloads across supported AI traffic, including ChatGPT and Gemini in my lab.Prompt Shield inspects AI traffic at the network layer using TLS inspection and conversation schemes, allowing adversarial prompts to be blocked before they reach the model while clean traffic passes through transparently.Instead of building defenses into every application independently, you can apply one policy across multiple AI services. That’s a meaningful step toward giving security teams better visibility into AI usage.I published the full deployment, testing, and results in my blog below:https://nineliveszerotrust.com/blog/prompt-shield-network-ai-gateway/#AISecurity #PromptInjection #ZeroTrust #MicrosoftEntra #CloudSecurity
  • 0 Votes
    1 Posts
    1 Views
    hasamba@infosec.exchangeH
    ---------------- AI Pentesting Roadmap — LLM Security and Offensive Testing===================OverviewThis roadmap provides a structured learning path for practitioners aiming to assess and attack AI/ML systems, with a focus on LLMs and related pipelines. It organizes topics into progressive phases: foundations in ML and APIs, core AI security concepts, prompt injection and LLM-specific attacks, hands-on labs, advanced exploitation techniques, and real-world research/bug bounty work.Phased StructurePhase 1 (Foundations) covers machine learning fundamentals and LLM internals, including model architectures and tokenization concepts. Phase 2 (AI/ML Security Concepts) anchors the curriculum on standards and frameworks such as OWASP LLM Top 10, MITRE ATLAS, and NIST AI risk guidance. Phase 3 focuses on prompt injection and LLM adversarial vectors, describing attack surfaces like context manipulation, instruction-following bypasses, and RAG pipeline poisoning. Phase 4 emphasizes hands-on practice through CTFs, sandboxed labs, and safe testing methodologies. Phase 5 explores advanced exploitation: model poisoning, data poisoning, backdoor techniques, and chaining vulnerabilities across API/authentication layers. Phase 6 targets real-world research, disclosure workflows, and bug bounty engagement.Technical CoverageThe roadmap lists practical tooling and repositories for experiment design and testing concepts without prescribing deployment steps. It calls out necessary foundations—Python programming, HTTP/API mechanics, and web security basics (XSS, SSRF, SQLi) to support end-to-end attack scenarios against AI systems. Notable conceptual risks include RAG poisoning, adversarial ML perturbations, prompt injection, and leakage through augmented memory or external tool integrations.Limitations & ConsiderationsThe guide is educational and emphasizes conceptual descriptions of capabilities and use cases rather than operational recipes. It highlights standards and references rather than prescriptive mitigations. Practical exploration should respect ethical boundaries and responsible disclosure norms. OWASP #MITRE_ATLAS #RAG #prompt_injection #adversarialML Source: https://github.com/anmolksachan/AI-ML-Free-Resources-for-Security-and-Prompt-Injection