marcelschmall@infosec.exchange (@marcelschmall@infosec.exchange)

Posts


  • Three levels of AI in software development 🧠

    After my recent posts about vibecoding and devibecoding I want to zoom out a bit. I think there are three levels of using AI in software development – and they are really about risk.

    🟢 Level 1: passive AI usage. Autocomplete, code review, planning, answering coding questions, writing documentation. You stay in full control; AI just saves you time. Almost zero risk, immediate productivity gains.

    🟡 Level 2: vibecoding non-production code. Tests, internal tools, CI/CD scripts, prototypes. This is the sweet spot most teams underestimate. The upside is high but the blast radius is small – if a generated test is wrong it fails; if an internal tool has quirks, nobody outside your team notices. A great place to learn what AI can and can't do.

    🔴 Level 3: vibecoding production code. This is where it gets real. By my definition from the earlier post, vibecoded code is code nobody on your team has fully understood. Shipping that to production is a conscious risk decision.

    ⚡ The key insight: these aren't steps you walk through sequentially. It's a risk assessment. Levels 1 and 2 are almost always worth it. Level 3 depends on your situation – a startup that needs an MVP in three months has a different equation than an enterprise with compliance requirements.

    🔧 And when level 3 code needs to grow up? That's where devibecoding comes in – turning code nobody fully grasps into code your team truly owns.

    Where does your team sit on this spectrum right now? 🔍

    #SoftwareDevelopment #AI #Vibecoding #Devibecoding #CodeQuality #DevLife #RiskManagement


  • #introduction #newhere

    @Ken5280 Welcome!


  • πŸ€– Everyone talks about vibecoding but most definitions focus on how the code was created.

    @Blf_tpe Totally agree – the intent is real and the community should meet it with openness, not defensiveness. The people coming in through vibecoding are potential long-term contributors and supporters. Your mom donating to Debian is living proof of that.

    The YouTube tutorial idea is exactly right. The technical barrier to contributing is lower than ever thanks to AI. But nobody teaches you how to write a good commit message, how to read contributing guidelines, or when NOT to open a PR. That cultural onboarding is the real gap.

    Maybe that's actually a community project worth vibecoding – an interactive guide for first-time open source contributors.


  • πŸ€– Everyone talks about vibecoding but most definitions focus on how the code was created.

    @Blf_tpe Great point – AI as a gateway to open source is a real upside I hadn't considered enough. More people discovering and appreciating the ecosystem is genuinely valuable.

    But there's a flip side: if a lot of newcomers start opening issues or PRs on established projects without fully understanding the codebase, that can overwhelm maintainers who are already stretched thin. The intent is good but the burden is real.

    Maybe the better path for beginners is to start their own small open source project with vibecoded code. Put it out there, get feedback, learn from the community. That way you build understanding without adding noise to existing projects.

    Places like Mastodon could actually be great for that – share your project, ask for feedback, learn in public. Devibecoding as a community effort rather than a solo struggle.


  • Devibecoding 🔧

    @radicalabacus Love the archaeology metaphor. And I think you nailed the core difference – legacy code has a story, vibecode doesn't. Digging through an old codebase you can always ask "why did they do this?" and find an answer. With vibecode that question leads nowhere.

    Which makes me wonder: is devibecoding even the right response in every case? Maybe your instinct is the pragmatic answer – treat vibecode as a disposable draft. Use it to understand the problem space, extract the spec, then write it properly from scratch.

    That might actually be the most efficient form of devibecoding – not saving the code but saving the knowledge.


  • Hi all, I am an IT professional from the US.

    @BuuBuu Welcome!


  • Devibecoding 🔧

    In my last post I defined vibecoded code as code nobody on your team has fully understood. But what happens when you take that code and make it yours?

    I think we need a term for this: devibecoding.

    💻 Devibecoding is the process of taking code you don't fully grasp – whether AI-generated or not – and systematically working through it until you truly own it. Understanding it, restructuring it, making it maintainable.

    🧠 This is not just code review. It's a mix of reverse engineering, refactoring, and deep comprehension – without being able to ask the original author about their intent, because there was no human author.

    💬 Someone in the replies to my last post described exactly this: putting in the effort to understand and reformat AI output until it becomes their code. That's devibecoding in practice.

    🚀 And here is my take: this will become its own discipline, with its own tools, its own best practices, maybe its own specialists. Think tools that don't just lint but explain. That visualize where your understanding gaps are. Possibly even AI helping you understand AI code – ironic but inevitable.

    ⚡ The more vibecode exists in the world, the bigger the need for people who can devibecode it.

    What do you think – is this already part of your workflow? And what tooling would help you most? 🔍

    #SoftwareDevelopment #AI #Vibecoding #Devibecoding #CodeQuality #DevLife #Refactoring


  • 🤖 Everyone talks about vibecoding but most definitions focus on how the code was created.

    @Blf_tpe This is a great real-world example! Your point 2 is exactly the key – the moment you put in the effort to understand and reformat, it stops being vibecode. You are turning AI output into YOUR code.

    And point 1 is interesting – a few hundred lines seems to be the natural ceiling where vibecoding starts to break down. Beyond that, debugging (point 3) becomes the real cost.

    Sounds like you are already moving from vibecoding to AI-assisted coding – and that's a huge difference in terms of risk.


  • 🤖 Everyone talks about vibecoding but most definitions focus on how the code was created.

    🤖 Everyone talks about vibecoding but most definitions focus on how the code was created. I think that misses the point.

    My take: vibecoded code is code that nobody on your team has fully understood. It doesn't matter if an AI wrote it, a junior dev copied it from Stack Overflow, or a senior dev hacked it together at 2am. If nobody has truly reviewed and comprehended it – it's vibecode.

    That distinction matters because it shifts the conversation from "did you use AI?" to "do you actually know what this does?" 🔍

    This also means: code that an AI generated but you thoroughly reviewed and understood is NOT vibecode. The tool doesn't define the category – your level of understanding does.

    Why does this matter? Because it changes the risk assessment entirely. Using AI to write code you then deeply review is just a productivity tool. Shipping code you don't fully grasp is a conscious risk decision – sometimes justified, sometimes not.

    Do you agree with this definition? Or would you draw the line somewhere else?

    #SoftwareDevelopment #AI #Vibecoding #CodeQuality #DevLife


  • 🎲 Generating cryptographically secure random values in C and C++ – what are your options?

    After writing about how secure random links work, a few people asked about the underlying libraries. So here is a quick overview.

    🔒 libsodium is the easiest and most recommended choice. One function call, cross-platform, and built specifically for cryptography:

    randombytes_buf(buffer, size);

    That is really all there is to it. libsodium picks the best available entropy source on the OS automatically.

    🔑 OpenSSL / LibreSSL is the classic option. RAND_bytes() does the job and is available almost everywhere. Worth using if you already have OpenSSL as a dependency – otherwise libsodium is cleaner.

    🖥️ If you want no external dependency at all, go directly to the OS:

    Linux: getrandom() – available since kernel 3.17
    macOS / BSD: arc4random_buf() – even simpler, no error handling needed

    Both are solid choices for system-level code.

    ⚠️ What about std::random_device in C++? It looks convenient but the standard does not guarantee cryptographic security. On some platforms it falls back to a predictable seed. Fine for games or simulations – not for security-critical code.

    So for anything security-related: libsodium or the OS primitives directly. std::random_device is a trap if you care about real randomness.

    What do you use in your projects for secure randomness? Still rolling your own or already on libsodium? 🤔

    #CPlusPlus #C #Security #Cryptography #libsodium #Infosec #SystemsProgramming


  • 🫀 SO_KEEPALIVE – how your server detects dead connections before the client knows

    🫀 SO_KEEPALIVE – how your server detects dead connections before the client knows
    A client connects to your server. Then their laptop lid closes. WiFi drops. The router reboots.
    The TCP connection is dead – but your server has no idea. It just sits there. Holding a socket. Waiting forever. 👻

    This is called a half-open connection – one of TCP's most silent failure modes.

    🔧 The fix – one line:
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &flag, sizeof(flag));

    The kernel now sends small probe packets on idle connections. No response after a few tries? Connection gets cleaned up automatically.

    ⏱️ Three knobs you control:
    → tcp_keepalive_time – idle time before the first probe (default: 2h 😱)
    → tcp_keepalive_intvl – time between probes (default: 75s)
    → tcp_keepalive_probes – failures before giving up (default: 9)
    The defaults are hilariously conservative. For a real server you want minutes, not hours.

    💀 Without it you risk:
    → File descriptor leaks
    → Thread pool exhaustion
    → Memory piling up for connections that died hours ago

    🎯 Who needs it most:
    → WebSockets & long-lived connections
    → Servers behind NAT – routers silently drop idle mappings
    → Any server where clients disappear without sending FIN

    🐧 Your server shouldn't mourn connections that are already gone.

    #Linux #Networking #SystemsProgramming #ServerDevelopment



  • 📖 Interesting read on heise.de: "From Output to Outcome" argues that dev teams should stop measuring success by features shipped and start asking: what actually *changed* for the user?

    The idea is simple but powerful – a feature is just output. Outcome is when a customer actually solves a problem faster, makes fewer errors, or needs less support. Developers are encouraged to ask "why are we building this?" before writing a single line of code. 🤔

    🏢 But here's where it gets tricky for B2B software:

    ⚡ Your actual users and your paying customers are different people. The CFO signs the contract, the clerk uses the software daily – their definitions of "value" rarely align.

    ⚡ You often have no direct telemetry. On-premise deployments, strict data policies, and months-long update cycles mean you may never see how a feature is actually used.

    ⚡ Feedback is heavily filtered. It travels through support tickets, account managers, and customer success teams before it reaches the dev team – losing signal at every step.

    ⚡ Outcomes are slow. In B2B, the real proof shows up at contract renewal time – sometimes a year later.

    So the question is: **how do you build an outcome-oriented culture when the outcome is invisible to you?** 🔍

    Is opt-in telemetry the answer? Closer collaboration with customer success? Structured user interviews? Or something else entirely?

    #SoftwareDevelopment #ProductManagement #B2BSoftware #AgileB2B

    https://www.heise.de/hintergrund/Von-Output-zu-Outcome-Entwickler-als-Produktgestalter-11204293.html?seite=all


  • πŸ“ spdlog β€” Logging for C++ that actually gets out of your way

    When you're writing a Linux server in C++ close to the hardware, the last thing you want is a logging library that slows you down – at build time, at runtime, or when reading the code.

    That's exactly where spdlog shines. ✨

    📦 Header-only – zero build ceremony
    Drop the headers into your project, #include "spdlog/spdlog.h" and you're done. No linking against extra libraries, no CMake gymnastics, no ABI headaches. For embedded systems or minimal server builds this alone is worth a lot.
    And because spdlog supports C++11 and up, it runs happily on older GCC toolchains – exactly the kind you find in Linux BSP environments and long-lived server codebases.

    ⚡ Logging should be fast AND readable
    spdlog uses the {fmt} library under the hood, which means format strings can be checked at compile time while the actual formatting stays fast at runtime. Instead of stringing together cout-style streams, you write:

    spdlog::info("Connection from {} on port {}", client_ip, port);

    Clean, readable, and significantly faster than std::cout-style streaming at runtime.

    🪣 The sink system – one logger, many destinations
    The real power comes from sinks. A sink is simply a destination for log output – and you can attach as many as you want to a single logger.

    → stdout_sink for live debugging in the terminal

    → rotating_file_sink to write to disk with automatic file rotation

    → syslog_sink to feed into syslog – and from there the system journal

    In a hardware-near server this means you can log to syslog for production monitoring AND to a rotating file for post-mortem debugging – with a single log call in your code. No duplication, no extra logic.

    🎯 Log levels keep noise under control
    spdlog supports the classic hierarchy – trace, debug, info, warn, error, critical. You set the minimum level per sink, so your rotating file might catch everything from debug upwards while syslog only sees warn and above. Perfect for production servers where log volume matters.

    🐧 For C++ server development on Linux, spdlog hits a rare sweet spot: trivial to integrate, fast enough for hot paths, and flexible enough for real production setups.

    #Cpp #Linux #SystemsProgramming #ServerDevelopment


  • 🌐 The C10K Problem – the challenge that changed the internet

    1999. The web is booming. Servers are struggling. Dan Kegel asks one simple question:

    "Why can't a web server handle 10,000 simultaneous connections?"

    Not a bandwidth problem. Not a CPU problem. A design problem.

    βš™οΈ The old model was simple but deadly:
    β†’ 1 connection = 1 thread
    β†’ 10.000 connections = 10.000 threads
    β†’ 80GB RAM just for thread stacks
    β†’ Kernel spends more time scheduling than actually working

    💀 The server wasn't busy doing work. It was busy managing the chaos.

    🔧 C10K forced a complete rethink:
    → 1 thread handling thousands of connections
    → Non-blocking sockets
    → Event loops instead of thread pools

    ⚡ The ripple effects were massive:
    → epoll landed in Linux in 2002
    → nginx was born with this model in mind
    → Node.js made it mainstream
    → Redis, HAProxy – all children of C10K

    🐧 One blog post in 1999 rewired how the entire industry thinks about network servers. We're still building on those ideas today.

    #Linux #Networking #SystemsProgramming #WebDev


  • The prefork server model gets dismissed as "old school".

    The prefork server model gets dismissed as "old school". I think that's wrong – especially on Linux.
    With SO_REUSEPORT, the kernel distributes incoming connections across multiple pre-forked worker processes natively. No thread contention. No shared memory complexity. Each worker is an isolated process – a crash stays contained.

    What you get:
    – True process isolation per connection
    – Kernel-level load balancing, no userspace overhead
    – Predictable memory footprint
    – Simpler security boundaries between workers

    In a world obsessed with async event loops, we forget that prefork scales surprisingly well for workloads with high per-connection compute and where isolation actually matters – think security-sensitive services.
    SO_REUSEPORT didn't just fix the thundering herd problem. It quietly gave prefork a second life.
    More on this soon.
    #linux #infosec #networking #serversecurity #prefork
