As sentient meat, however illusory our identities are, we craft our identities by making value judgments. Everybody judges, all the time. Now if you don’t agree with that, you’re wrong.
mnl@hachyderm.io
Posts
-
-
At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed.
@lcamtuf I heavily curate my feed / intake. There’s only so much high-quality content I can take in per day anyway.
-
first impressions of the Lego smart brick, before I do any actual tearing down: wow, I forgot how good they are at working with plastic.
@whitequark Lego plastics are something else…
-
There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit
@jenniferplusplus True, and I hope that's not what I'm doing when I say "there's something to this and you need to pay attention to the impact of LLMs on security", even if I think Anthropic is run by dangerous clowns (like, you have Mythos, and also your other stuff is maybe the most broken software I've ever used).
-
There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit
@jenniferplusplus This is maybe more what I'm reacting to: don't dismiss this stuff too quickly and bathe yourself in false comfort. If you work on software, there's a reasonable chance these things can do a significant chunk of your job better than you. That they can't necessarily do all of it, or can only do it at an extravagant cost in resources, doesn't change that. I also don't want to sound contrarian; I know I might be a bit too autistic in my communication style (and I'm just as frustrated, anxious, and exhausted as the rest of us).
-
There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit
@jenniferplusplus A threat to my livelihood as a programmer? To the industry? I agree. But it is not an empty threat (meaning, I'm pretty sure this is real and that they are not just putting up such a disclosure announcement for hype).
-
There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit
@jenniferplusplus I don't think I made a hypothetical? I don't disagree with the rest, but I wouldn't call this announcement bullshit.
I don't think saying that LLMs have gotten frighteningly good at finding vulnerabilities (not hypothetical) is adopting the capitalist framing. In fact, as someone who supports open source and the right to privacy, I think it needs to be taken pretty seriously, since we can assume these tools are in the hands of governments.
There's a fair number of people (and yes, "AI companies") combining more traditional approaches to vulnerability finding with small models with known externalities to do similar work. One example I could find (I'm not a security person), written as a direct reaction to the Mythos announcement: https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier
-
There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit
@jenniferplusplus While I agree with the "AI companies are mostly full of shit" part, this is the first announcement of this kind that I am taking semi-seriously.
Here's what's been happening over the last couple of months, and this is with _current_ models. There are step functions at play, and I think the step from "at least some skill needed to wield an LLM to find security issues" to "everybody with $200 can exploit every OS/browser out there" should be considered very carefully.
Nicholas Carlini saying he found more bugs in two weeks with Mythos than in his entire career is not something I can dismiss.
Or Daniel Stenberg, certainly someone with actual authority and experience compared to me, showing the current situation:
daniel:// stenberg:// (@bagder@mastodon.social)
I ran a quick git log grep just now. Over the last ~6 months or so, we have fixed over 200 bugs in #curl found with "AI tools".
daniel:// stenberg:// (@bagder@mastodon.social)
If your Open Source project sees a steep increase in number of high quality security reports (mostly done with AI) right now (#curl, Linux kernel, glibc confirmed) please tell me the name of this project. (I'd like to make a little list for my coming talk on this.)
-
What's the competitive edge if software is free?
@mempko Awesome, I'm working on something similar: "what's an OS (in the large sense of the term) when the primitive is 'you can just generate code'".
The chat code generation is not wired up here, but if you open the "Stacks and Cards" viewer, you should be able to see what 'on the fly generated app code' looks like.
My "objects" (also smalltalk inspired? maybe?) need to support two methods:
- tell me what languages to speak to you (English, JS, Elixir, Go, some DSL, some JSON format, anything really...) and how
- eval(someLanguage)
The primitives I'm playing with are all in JS, because I can easily sandbox it. I've laid it aside for a few weeks now but will turn it into something I want to use every day soon.
Curious to hear where you're going next.
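To make the two-method protocol above concrete, here is a minimal sketch in JS. Everything is hypothetical (the class and method names, the state shape); the "sandbox" via `new Function` is deliberately naive and stands in for whatever real isolation the project uses.

```javascript
// Illustrative sketch of the two-method object protocol: an object
// advertises the languages it speaks, and evaluates code in them.
class ScriptableObject {
  constructor(state = {}) {
    this.state = state;
  }

  // Method 1: which languages this object understands, and how to use them.
  languages() {
    return { js: "a function body receiving (state) and returning a value" };
  }

  // Method 2: eval(someLanguage) — run code in one advertised language.
  evaluate(lang, code) {
    if (!(lang in this.languages())) {
      throw new Error(`unsupported language: ${lang}`);
    }
    // Naive JS "sandbox" via Function: fine for a sketch, not real isolation.
    const fn = new Function("state", code);
    return fn(this.state);
  }
}

// Usage: on-the-fly generated app code talking to an object.
const counter = new ScriptableObject({ count: 0 });
counter.evaluate("js", "state.count += 1; return state.count;"); // → 1
```

The point of the sketch is that the object, not the host, decides which languages are legal inputs, so an English-only or DSL-only object fits the same two-method shape.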