Been thinking about this: https://bsky.app/profile/jay.bsky.team/post/3micpg7z2h22g
-
@mttaggart But Attie is built on top of Claude, so how is it a push for open models?
@cwebber Also that! Apparently though components of it use smaller, purposed models that are not Claude? I don't know what the thinking is here, other than an unshakeable belief that generative code must be the way to do things now, and all other reasoning walks backwards from that starting position.
-
I think game developers especially (and I do game dev sometimes, sometimes even for work) tend to perceive code and art as interrelated and intertwined things. I find it unlikely that they can be easily separated.
I suppose some may see form vs function, but I personally see form *as* function.
@cwebber This is a bit of an obscure reference, but among gamedevs who use C++, you'll find that many went all-in on C++ features, inheritance, and encapsulation earlier in their careers. Then, as they were forced to maintain code that spanned many, many files and many layers of indirection, regret sank in. Many pulled a full about-face, especially those who had worked in C, adopting a more C-like style in C++ (or C only), as it greatly improved maintainability. The happy medium for most seasoned developers lands somewhere in the middle.
I personally see a lot of these "ALL IN ON AI" leads as just as misguided as we were when going "ALL IN ON C++". The consequences are something you can only truly appreciate once you're forced to maintain something so large that navigating it no longer makes sense.
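A minimal sketch of the two styles the post contrasts. All names here (`Entity`, `MovingEntity`, `Mover`) are invented for illustration, not taken from any real codebase: the "all-in" version hides state behind inheritance and virtual dispatch, while the C-like version keeps data and behavior plainly visible at the call site.

```cpp
#include <cassert>

// The "all-in on C++" style: behavior behind an abstract base class,
// state behind encapsulation, dispatch through a vtable.
struct Entity {
    virtual ~Entity() = default;
    virtual void update(float dt) = 0;
    float x() const { return x_; }
protected:
    float x_ = 0.0f;
};

struct MovingEntity : Entity {
    explicit MovingEntity(float speed) : speed_(speed) {}
    void update(float dt) override { x_ += speed_ * dt; }
private:
    float speed_;
};

// The C-like style: plain data plus a free function. Nothing to chase
// through a hierarchy; the state change is visible where it happens.
struct Mover {
    float x;
    float speed;
};

void mover_update(Mover& m, float dt) { m.x += m.speed * dt; }
```

Both versions compute the same thing; the difference only starts to matter once behavior is spread across many files and layers of overrides.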
-
@cwebber Paul's reply here has been living in my head.
Paul Frazee (@pfrazee.com)
As I said elsewhere I am basically radicalized about this. We fight now for personal computing and personal agency or we lose another decade to closed clouds. We push now for an open internet and open models. Nobody is going to hand it to us because they're nice.
Bluesky Social (bsky.app)
I fundamentally can't understand this position. Pinning all your hopes for free and open computing on "open models," a thing that doesn't meaningfully exist, is so confusing to me.
But this does appear to be dogma for them.
@mttaggart @cwebber We don't even *have* any open models as far as I'm aware, and short of someone sitting down with Project Gutenberg and maybe a copy of Wikipedia, I can't see any way we'll get one for English text, and I'm pretty sure there's *no* properly licensed corpus of code for any programming language to do even minimal training there.
Every model I'm aware of is based on theft. (I'd love to be wrong, but that doesn't seem likely alas)
-
Been thinking about this: https://bsky.app/profile/jay.bsky.team/post/3micpg7z2h22g
> we also dislike AI slop. this is why we’re using AI to generate code, not content.
It's a philosophical distinction but one I feel like I don't get. Maybe it's because I like livecoding, etc, and see code itself as a form of art. Is AI code *not* slop in a way that feed content is?
And will vibecoded apps with Attie be likely to insert AIgen content?
@cwebber art and code are the same thing, they're the creative output of humans
and even if they were different, we still should be rejecting genai for all purposes, to stand in solidarity with affected people (and many other social, ethical, political and environmental reasons)
-
@mttaggart I asked the question here https://bsky.app/profile/did:plc:dyyvywontyeuaegemczcushz/post/3miei3zqook2a
-
@wordshaper @cwebber Exactly! That's my understanding as well. At bare minimum, the pirated indie books corpus is in almost all training datasets.
Now it's happened before that the utility of a thing is so great that courts will handwave copyright law (e.g. YouTube). But in this case, the precedent has not been established—and even once it is, legality and ethicality are two different things. I expect many (most here on Masto) will forever be uncomfortable with the original sin of large language models.
-
@mttaggart @cwebber I was really interested in genAI trained only on public domain content, but even the ones that apparently do are like:
-well we use this dataset that was made from flickr public domain and creative commons content
+okay but creative commons can (and almost always does) require attribution or limit derivative works to specific conditions; can you ensure you don't include images with those requirements?
-lol no we did the model, not the dataset
-
@cwebber It sounds to me like they're trying to say the distinction is that if you generate code, a human can validate the result (and/or make changes as necessary) because it's less abstract. But they also seem to be pushing the idea that it's intended to be used by people who don't know how to code, which means those people would not be able to validate and fix issues without going through the LLM again.
With direct art/content generation you'd likely get a different result each time; with feed code generation, once the code is made, that's what the feed uses, and the abstraction through the LLM is gone.
I don't think I personally agree with this logic; the LLM's "opinion" during that code generation stage is still pretty relevant to the overall output.
-
@cwebber One major development in tech industry marketing over the last decade or so is that they've learned how to deploy EEE strategies against the language of critique. The function of posts like that is to embrace a term of criticism (slop), extend the informal rules for how it's applied (to content, but not code), and reinforce their preferred usage to effectively extinguish from popular use any prior understandings (any quickly generated output that threatens to overwhelm human output).
-
@cwebber Microsoft putting a security exploit into Notepad should have been the end of “vibe coding”
-
@cwebber @mttaggart Seems to me that they could have made a Good Enough version using one of the "open" models. That they decided not to suggests to me that it's less about performance, and more about the sort of externalities that Big Data companies never really discuss in the open.
-
@cwebber They're just disingenuous, there's no good faith engagement to be had. They're careerist coders who think coding is just a silly mmorpg you play to get real money, and they're now successfully botting in this game to get money with less effort and completely ruin it for the rest of us, actual engineers.
-
@cwebber To me, code is a "material" through which art can be produced. Code is to the program as paint is to portrait. The material can be used for things that some might not consider art, but the potential for human expression, for representing and wrestling with the human experience is as possible with code as it is with paint.
-
Folks seem to think AI is bad for something they actually understand, but great for something they barely understand.
It might be coincidence that there's so much overlap between AI boosters and the Dunning-Krugerrand (crypto) crowd, but probably not?
-
I think there's a material difference — with a big caveat. AI slop "content" is (nominally at least) meant for direct human engagement: reading, watching, listening. Code is a means to an end — the end user sees the app or web UI or whatever, not the code directly.
But the caveat is: well, except, for developers working with a team. And _especially_ in open source. There, code _is_ communication, human-to-human communication. (Which, of course, is why LLMs can generate code _at all_.)
-
@mttaggart @cwebber the existence of "open" models is really just an excuse to use proprietary models /now/: The open weight models will always be "almost good enough" so you can keep using the stuff the big boys are using.
-
@cwebber > … see code itself as a form of art.
When Knuth started his magnum opus about code, he very deliberately chose the title to be “The Art of Computer Programming”.
-
@tante @mttaggart @cwebber And that's still ignoring how the "open" models are trained to begin with.
-
@ainmosni @mttaggart @cwebber yeah. You know my position. Actually open LLMs do not exist outside of a few lab settings and they don't perform well
-
@cwebber I think this is one of those things where in my open source work a significant fraction of the code I write is art, while in my corporate day job there's a fraction of it that's craft and artistry and a fraction that's basically mechanical
The code I wrote a couple of weeks ago to iterate a table, join on a different table, and backfill the first table with the data? That's not art. It's that intermediate ground between boilerplate and "actual" code; it's toil. All the more so because it was temporary.
And in corporate work you end up with so much that falls into these categories; so much that's boring gluing stuff together, and the library teams that are supposed to reduce the amount of boilerplate in that are often underfunded or don't exist.
When we're building stuff for ourselves, even as part of a research project like Spritely, it can be very different. Heck, because you're an engineering-driven organisation, I'm sure it's very different.
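As a sketch of the kind of toil described above: the table and field names (`UserRow`, `email`) are invented for illustration, and a real version would likely be SQL against a database rather than in-memory structs, but the shape of the job is the same: walk one table, join against a second on a shared key, backfill a missing column.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical "first table": rows where the email column may be missing.
struct UserRow {
    int id;
    std::string email;  // empty string = needs backfill
};

// Hypothetical "second table", keyed for lookup: id -> email.
using EmailTable = std::unordered_map<int, std::string>;

// Iterate the first table, join on id, and backfill any empty emails.
void backfill_emails(std::vector<UserRow>& users, const EmailTable& emails) {
    for (auto& row : users) {
        if (!row.email.empty()) continue;   // already populated, leave as-is
        auto it = emails.find(row.id);      // the "join" on the shared key
        if (it != emails.end()) row.email = it->second;  // the backfill
    }
}
```

Mechanical, necessary, and forgettable: exactly the boilerplate-adjacent middle ground the post is describing.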