@aimaz @mweiss @adamshostack yep. It reaches its limits with more complex issues or specific technical stuff (for example, it usually gets very specific Keycloak configuration strategies wrong and hallucinates features that don't exist). I actually have a personal benchmark question for LLMs whose answer requires some specific knowledge of how TLS works; it works quite well as a differentiator for me: https://infosec.exchange/@hacksilon/116076554555995053

So: general brainstorming = good; the more specific it gets, the likelier it is to lead you astray.