@gloriouscow@oldbytes.space do low quality then. old youtube styles may see a comeback if anything overproduced becomes conflated with slop
asie@mk.asie.pl
Posts
-
I have a lot of ideas that I think would make great #retrocomputing videos for YouTube, except I certainly do not underestimate the amount of time, effort and investment in equipment that goes into making videos of even moderate production quality. -
I've met yet another example of English being ableist af, and I wonder: why?
@nina_kali_nina@tech.lgbt I think Polish might even be more casually ableist than English. Assuming I'm reading your post right.
-
I generally prefer the MIT license for my personal projects.
The impartial observer might just suggest that this is the point where I realize they are all Bad People or such.
I don't think this observer would be impartial. I think it takes a very specific, if not exactly unpopular, mindset to decide LLMs are Bad People technology but almost everything that came before them is not. I have spent my time being wary of social media, for example, instituting a personal boycott of Meta in particular, though I acknowledge that too is somewhat hypocritical of me.
There is even a lot of frank hypocrisy in the anti-AI crowd - a lot of people with dozens of terabytes of pirated movies and books on their NAS are suddenly outraged about companies not respecting copyright.
I don't think that's hypocrisy, however, but a difference in values. There exist "information wants to be free" pirates, and there exist "fuck the corporations" pirates. The former are going to be enthusiastic about LLM research, the latter are going to be apprehensive. -
@gloriouscow@oldbytes.space
I don't think observing a difference in values is cynical. If you value productivity more than digital sovereignty or ecology, or if you don't hold a positive view of copyright, or if you hold a positive view of modern-day corporate capitalism, why wouldn't you use these tools?
The most cynical thing I believe about generative AI users is that the feedback loop of using LLMs often enables a narcissistic-leaning tendency to treat that loop as a first resort over other humans. It was particularly apparent to me in the case of the music generation tool Suno AI, where people were hard-pressed to name other users of the tool who inspire them, or even other AI-generated music they listen to! I don't think that's a good change.
And, of course, I am worried about the backlash against AI-generated works turning against humans whose work isn't polished enough to avoid being accused of using LLM tools. I mean, this has already been happening. -
@gloriouscow@oldbytes.space
(And I continue to question how good these tools have become in a general sense. I've seen a community member try, I believe, Gemini-2.5-Flash, to summarize its own scraped Discord posts (in particular, overseas travel advice). It, uh, it didn't go well. Though we did laugh a lot, between the conversations about consent it provoked.) -
@gloriouscow@oldbytes.space I think a key reason LLMs do better with programming than with other fields is that code is much more hopelessly repetitive than we like to admit to ourselves. To borrow your example, how many Mandelbrot renderers have been written on GitHub? And that's a niche example - think of things people write for a living: CRUD services, REST APIs, login pages, parsing libraries, wrappers...
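To make the repetitiveness concrete: the core of virtually every Mandelbrot renderer is the same half-dozen escape-time lines. A minimal sketch in Python (the function names and the ASCII viewport here are my own, not from any particular repository):

```python
# The escape-time loop that appears, near-verbatim, in countless repos.
def mandelbrot_iterations(c: complex, max_iter: int = 100) -> int:
    """Return the iteration at which |z| escapes 2, or max_iter if it never does."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

def render_ascii(width: int = 40, height: int = 20) -> str:
    """Render the set over the usual viewport (re: -2..1, im: -1.2..1.2) as ASCII."""
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            c = complex(-2.0 + 3.0 * x / width, -1.2 + 2.4 * y / height)
            row.append('#' if mandelbrot_iterations(c) == 100 else '.')
        rows.append(''.join(row))
    return '\n'.join(rows)
```

Nearly all of the variation between the thousands of copies in the wild sits in the output layer (ASCII, PNG, shader), not in the escape-time loop itself - exactly the kind of shared boilerplate a statistical model thrives on.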
I agree, and have said for a while now, that it is a disservice to frame the opposition to the LLM boom in terms of anything other than (a) opposition to Big Tech's view of the world and (b) a kind of labor dispute. Copyright laws can be changed; power efficiency can improve; slop can be made less sloppy by making the number of weight-monkeys approach infinity - under the condition that the music doesn't stop first - which I think is what companies like OpenAI and Anthropic are banking on.
Personally, my key issue is the idea of what I call "digital sovereignty". I do not want to be beholden to a cloud subscription to do the most basic elements of my job or my passion, because I have seen where that road takes us: enshittification, rising prices, customer-hostile changes, even geopolitical problems. Notably, this doesn't apply to so-called "open weight" models - but the "good ones" are both still behind SOTA and unviable for all but the largest polycules, not to mention the RAM/SSD pricing upheaval.
I am also concerned about the copyright angle, deskilling, AI psychosis, cultural impact, et cetera - but for more practical reasons. I also still believe LLMs are an evolutionary dead end for artificial intelligence, even if they have gotten considerably further than I anticipated.
In addition, I've seen many groups concede that while they are not interested in AI generated art or music (Adam Neely's video on Suno AI raises a lot of good points about that), they don't mind, say, AI generated code. This personally makes me a little sad, but I understand that for most people art is an end, but code is merely a means to an end.
But I don't believe the technology itself, as in the mathematical equations or the idea of generating tokens using LLMs in response to inputs, is inherently evil. I really like viznut's essay on that matter: http://viznut.fi/texts-en/machine_learning_rant.html - but I've also seen LLM efforts which try to avoid, say, the mass copyright infringement problem, and while their results certainly look more impressive than I anticipated, they also aren't really commercially viable, so to speak.
Final note - a lot of people trying LLM-based technology compare it to a slot machine, in that the quality of the result you get is highly unpredictable. I think, outside of niche tech circles, some don't realize that so many things have already become akin to gambling. Sports, mobile games, software bugs, cloud services, apparently the news, etc. - in that lens, ChatGPT becomes just another unreliable tool, not something uniquely unreliable. -
i did not realize hp elitebooks, too, have a keyboard lottery! this is how I got to compare a Primax and LiteOn board for the EliteBook 845 G8
... and honestly, unlike the ThinkPad folks, I'm not sure I can tell much of a difference. The most notable one is that the Primax board sounds more muted and its key weight is slightly more uneven, but I'm not sure I could tell them apart in a blind test. They certainly don't feel anywhere near as "mushy" as some Reddit reports claim for the equivalent ThinkPad lottery - though the different sound profile certainly gives that impression.
wild