@saraislet: "I had a prompt that worked perfectly on Monday. It generated clean, well-structured code for an API endpoint. I used the same prompt on Tuesday for a similar endpoint. The output was structurally different, used a different error-handling pattern, and introduced a dependency I didn't ask for.

Why? No reason. Or rather, no reason I can access. There's no stack trace for 'the model decided to go a different direction today.' There's no log that says 'temperature sampling chose path B instead of path A.' It just... happened differently."

This is why it's so difficult to work with AI. Why does it introduce dependencies I didn't ask for? Why doesn't it behave deterministically? And what scares me more: why aren't others more worried about this?
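The "temperature sampling chose path B instead of path A" line is the heart of it: each output token is drawn from a probability distribution, not picked deterministically. A minimal sketch of that mechanism, with made-up logits standing in for a model's real output (not any actual model or API):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Softmax over temperature-scaled logits, then draw one token index."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw a token proportionally to its probability.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits for three candidate continuations ("paths").
logits = [2.0, 1.8, 0.5]

# Two runs with different RNG states can diverge on the same prompt,
# and nothing records *why* one path was taken over another.
rng_a, rng_b = random.Random(1), random.Random(2)
run_a = [sample_token(logits, temperature=0.8, rng=rng_a) for _ in range(5)]
run_b = [sample_token(logits, temperature=0.8, rng=rng_b) for _ in range(5)]

# Greedy decoding (the temperature -> 0 limit) is deterministic:
# it always takes the argmax token.
greedy = max(range(len(logits)), key=logits.__getitem__)
```

This is why "same prompt, different output" has no stack trace: the divergence lives in the sampler's RNG state, not in any inspectable decision. Pinning a seed (or using greedy decoding) restores determinism at the sampling layer, though serving-side factors can still vary.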