Three levels of AI in software development 🧠
After my recent posts about vibecoding and devibecoding, I want to zoom out a bit. I think there are three levels of using AI in software development, and they are really about risk.
🟢 Level 1: passive AI usage. Autocomplete, code review, planning, answering coding questions, writing documentation. You stay in full control; AI just saves you time. Almost zero risk, immediate productivity gains.
🟡 Level 2: vibecoding non-production code. Tests, internal tools, CI/CD scripts, prototypes. This is the sweet spot most teams underestimate. The upside is high but the blast radius is small: if a generated test is wrong, it fails; if an internal tool has quirks, nobody outside your team notices. A great place to learn what AI can and can't do.
🔴 Level 3: vibecoding production code. This is where it gets real. By my definition from the earlier post, vibecoded code is code nobody on your team has fully understood. Shipping that to production is a conscious risk decision.
The key insight: these aren't steps you walk through sequentially. It's a risk assessment. Levels 1 and 2 are almost always worth it. Level 3 depends on your situation: a startup that needs an MVP in three months has a different equation than an enterprise with compliance requirements.
And when level 3 code needs to grow up? That's where devibecoding comes in: turning code nobody fully grasps into code your team truly owns.
Where does your team sit on this spectrum right now? 
#SoftwareDevelopment #AI #Vibecoding #Devibecoding #CodeQuality #DevLife #RiskManagement
Devibecoding is the process of taking code you don't fully grasp, whether AI-generated or not, and systematically working through it until you truly own it. Understanding it, restructuring it, making it maintainable.
Someone in the replies to my last post described exactly this: putting in the effort to understand and reformat AI output until it becomes their code. That's devibecoding in practice.
And here is my take: this will become its own discipline. With its own tools, its own best practices, maybe its own specialists. Think tools that don't just lint but explain. That visualize where your understanding gaps are. Possibly even AI helping you understand AI code, ironic but inevitable.
Everyone talks about vibecoding, but most definitions focus on how the code was created. I think that misses the point.
Generating cryptographically secure random values in C and C++: what are your options?
libsodium is the easiest and most recommended choice. One function call, cross-platform, and built specifically for cryptography:
OpenSSL / LibreSSL is the classic option. RAND_bytes() does the job and is available almost everywhere. Worth using if you already have OpenSSL as a dependency; otherwise libsodium is cleaner.
If you want no external dependency at all, go directly to the OS:
What about std::random_device in C++? It looks convenient, but the standard does not guarantee cryptographic security. On some platforms it falls back to a deterministic, predictable sequence. Fine for games or simulations, not for security-critical code.

Three knobs you control:
Without it you risk:
Who needs it most:
Your server shouldn't mourn connections that are already gone.
Interesting read on heise.de: "From Output to Outcome" argues that dev teams should stop measuring success by features shipped and start asking: what actually changed for the user?
But here's where it gets tricky for B2B software:
spdlog: logging for C++ that actually gets out of your way
Header-only: zero build ceremony
The C10K Problem: the challenge that changed the internet
The old model was simple but deadly: