@ehashman @mttaggart @cwebber there are some capable open models. My poorly documented journey is here
I’ve been too busy lately to do anything interesting, but it’s pretty fun learning
@mjg59 but C has a long history of parsing untrusted inputs!!!
@Foxboron I’ll be there also. We should totally grab a coffee
@simplenomad @jerry I just make all my prompts end with “and be sure you make it secure” and everything is fine
I'm trying to find open source local caching package proxy software
I don't want anything transparent, I want something that's a very deliberate local mirror
The only thing I can find that handles more than one ecosystem is git-pkgs/proxy
A caching proxy for package registries (github.com)
which is from @andrewnez
Does anyone know of anything else?
@mweiss This is sadly probably correct
@jacques Hah indeed!
Given the amount of containment and security we're seeing around all these AI agents
I think it's a pretty safe bet that if we do create AGI, it's going to escape immediately and nobody will even notice
@bagder maybe with some effort, you can hit 11K downloads next year!
@plexsheep Thanks!
This week on #OpenSourceSecurity I had a chat with Paul Kehrer and Alex Gaynor about the statement they published discussing the challenges posed by modern OpenSSL for the python cryptography module
It was a super fun discussion, I learned a ton, and it highlights the open source question about what happens when one of your dependencies isn't a great fit anymore
https://opensourcesecurity.io/2026/2026-03-cryptography-alex-paul/
@douglevin @allanfriedman a lot of the CVE growth has been from a small number of CNAs. I would have expected the number exploited to drop
@allanfriedman it’s wild that the exploit rate has hovered around 1% all this time
I keep seeing stories about LLMs finding vulnerabilities. Finding vulnerabilities was never the hard part; the hard part is coordinating the disclosure
It looks like LLMs can find vulnerabilities at an alarming pace. Humans aren't great at this sort of thing, since it's hard to wade through huge codebases, but there are people who have a talent for vulnerability hunting.
This sort of reminds me of the early days of fuzzing. I remember fuzzing libraries and just giving up because the fuzzers found too many things to actually handle. Eventually things got better and fuzzing became a lot harder. This will probably happen here too, but it will take years.
What about this coordinating thing?
When you find a security vulnerability, you don't just open a bug and move on. You're expected to handle it differently. Even before you report it, you need, at a minimum, a good reproducer and an explanation of the problem. It's also polite to write a patch. These steps are difficult; maybe LLMs can help, we shall see.
Then you contact the project. Every project has a slightly different way it likes security vulnerabilities reported. You present your evidence and see what happens. It's very common for some discussion to ensue and for patch ideas to evolve. This can take days or even weeks. Per vulnerability.
So when you hear about some service finding hundreds of vulnerabilities with their super new AI security tool, that's impressive, but the truly impressive part is whether they are coordinating the findings. The tool probably took an hour or two; the coordination is going to take 10 to 100 times that.