Writing this up again so I can pin it: AI is literally a fascist project.
-
... compute resources have the edge. And that is not you or me.
In short, AI is a system which a) aims to replace human labour, and b) shifts the means of production into the hands of the few.
This would be "fine" if nobody used it. What matters for this to succeed is that everyone depends on it. At that point, "means of production" becomes the digital equivalent of a "natural resource".
Marx matters, folk.
You can still argue that this makes AI a weapon of capitalism or tyranny, but...
... not outright fascism.
Technically, that's kind of true. But it's also missing an important part of the picture. As the infamous Chad C. Mulligan wrote, "COINCIDENCE: You weren't paying attention to the other half of what was going on."
First, note how Hitler's extermination camps were inspired by Henry Ford's assembly line. Capitalism and fascism always had a close relationship, and it's not really possible to separate the two. It's no coincidence that the Jews of the time were also...
-
... associated with the Bolsheviks, in order to justify the application of means for dealing with one supposed threat to the other.
But more importantly, Peter Thiel is a literal fascist, a strong promoter of and heavy investor in AI. The ties are there, right here, right now, and who benefits from an AI takeover - and it's not just Thiel, but all of his Epstein ilk - is abundantly clear.
It's also well documented. This isn't some vague conspiracy shit. They're saying the quiet part out loud.
-
In short, *as a system* rather than a technology, AI is without any doubt a deeply fascist project. It is a weapon aimed straight at the world population at large.
The caveats - that the tech itself can be seen as neutral, and definitely has good applications - remain unaffected by this.
The survival of our democracies - or sufficiently democratic systems around the world - is the thing that concerns me, though. (Also the environment, but arguably less so overall.)
-
I'm still on the fence about it. It is fascinating technology, and it doesn't inherently have to be used to replace people; I've always said that strong AI (now AGI) is a pointless goal because we have plenty of people; we should use AI for things humans are bad at. However, capitalism is of course looking to use it to replace people.
But apart from that, the cost, and the origin of the training data, I see other risks in its use: that we become too dependent on it, that we outsource our actual thinking to it and become dumber as a result. I know the same has been claimed about previous technologies, like books, but man, I can just feel myself getting dumber when I use it incorrectly at work.
There are better ways to use it, like as a tool to access info and learn more effectively, but we already know that many people will use it to outsource their thinking, and may be pressured explicitly or implicitly by their employer to do so. And if you do that, you're allowing yourself to be replaced by the AI.
@mcv Please read the entire thread. I am going into this.
-
@jens I'm strongly in the "yes, but..." camp here. You're right about the current hype cycle, the funding, and how it is used to affect people around the world at large.
I probably end up pedantic because of my technical perspective on it. I think there are even good uses for LLMs (text related work), but it's not anything like the chatbots, agents, and general code generators of today...
For the general population, AI means those things today, and in that I agree.
Is this reasonable, in your view, or no?
@nielsa I think you need to read the entire thread

-
@jens also, hallucinating assistive technology is a really bad thing, especially if it is deemed "good enough" by abled people, and deployed instead of actually reliable assistive technology, because it is cheaper.
For example, the availability of image description software is used to justify no longer describing images. That is a step up from "helpfully" running image description software on your own site and not verifying the result (because it is obvious that no description exists), but still a lot worse than actually providing good descriptions that put the image into the context of the site, and highlight important points.
@GyrosGeier I actually find it difficult to write good image descriptions. The ones I write zero in on the point I want to make, but often omit details. In a way, that's a writing faux pas. In creative writing you learn "show, don't tell", and I do the opposite.
This isn't a counter-argument (nor an argument). All I want to do is acknowledge how hard it is to do well with assistance of this kind.
-
... things are done, so spending on individual people or groups of people is significantly less effective than spending on the population at large.
The result is that democracies and service oriented economies go hand in hand, and support each other rather than work in opposition.
Marx would not have used the words "service economy", but would have said "labour". Both are synonyms for "people".
Now cryptocurrencies and AI have one thing in common, other than using insane amounts of resources.
There's an aside here that I have sometimes found worth pointing out: "replacing people" doesn't necessarily mean firing people.
It may simply mean lowering their "worth" in salary negotiations, because you can use the threat of replacement with AI.
Sometimes chains of logic are as simple as "A because B", and sometimes there are several intermediary steps.
You can go a step further: even if YOUR job is not threatened by AI takeover, if the average salary drops (locally), you're also affected.
-
@condret Your mental model is not my mental model.
In my mental model, hypercapitalists - billionaire oligarchs - have no more need for extra capital. They'll pursue it, but it has absolutely lost meaning other than as a number. This is also what the very few insider views we get suggest: those people care only that their number is bigger than the other person's, not about money as such.
So any model that reduces this to a capitalist need to extract more capital is, IMHO, wrong. 1/n
-
@condret What the involvement of e.g. Thiel, Musk, Zuck and Bezos in politics instead demonstrates is that those people care about power.
You don't need to amass capital to have power. That's where the game is currently at, sure. But real power is enslavement.
Slaves either do not buy products, or they buy products you tell them to buy, with the money you give them, carefully adjusted so that they will never have enough to break out of enslavement.
This is the game.
And what better... 2/n
-
@condret ... way to play it than to make your future slaves dependent on something you control entirely? Make them dependent not only for their livelihood, but for their information - their education?
I don't think mere capitalist logic applies here at all.
/3
-
@GyrosGeier I actually find it difficult to write good image descriptions. The ones I write zero in on the point I want to make, but often omit details. In a way, that's a writing faux pas. In creative writing you learn "show, don't tell", and I do the opposite.
This isn't a counter-argument (nor an argument). All I want to do is acknowledge how hard it is to do well with assistance of this kind.
@jens that is a good description though: the details aren't important, but the point is. If you can't show because the recipient is vision impaired, then you need to tell.
My point is that while AI has its uses in assistive technologies, it is also inherently limited, so it's not a good direction in which to take assistive technology research.
-
... scarcity, in which - by whichever proof scheme - those who participate early in the system benefit at the expense of those who come later (aka pyramid schemes). The proof algorithm guarantees scarcity; it's the whole point of blockchain vs. any other distributed system that there is a chokehold on resource creation somewhere.
AI is doing much the same thing, but it doesn't advertise this artificial scarcity as part of the solution. Instead, it simply guarantees that those who already own the most...
@jens The way the global stock market works is an interesting progenitor for cryptocurrencies, too. It used to be traded mostly based on earnings paid for holding the stock, but has in recent decades transitioned into being traded speculatively, which makes each stock into its own little proto-ponzi scheme.
-
@nielsa I think you need to read the entire thread

@jens I read the thread, it's a good thread.
I guess I'm just delineating the caveat of what kind of LLM can be neutral technology. Which *is* a minor footnote in what is currently happening.
Thanks for writing this up

-
@jens The way the global stock market works is an interesting progenitor for cryptocurrencies, too. It used to be traded mostly based on earnings paid for holding the stock, but has in recent decades transitioned into being traded speculatively, which makes each stock into its own little proto-ponzi scheme.
@nielsa Oh, yes.
My understanding of financial products isn't exactly complete, but my take is that they all fall into two categories.
I mean, buying stock is a bet on future earnings. You can lose that bet, so one category is to aggregate things in such a way that - hopefully - losses in one are offset by gains in another.
The other category is a layer of indirection, i.e. bets on something other people are betting on.
All of this is multi-layered to the point where you can't know what...
-
@nielsa ... you're betting on, which makes ponzi schemes and insider trading so much more effective, as the costs are externalized to the average shareholder.
And people think this is serious business.
The only thing that seems serious about it is that it seriously affects us.
-
@jens I read the thread, it's a good thread.
I guess I'm just delineating the caveat of what kind of LLM can be neutral technology. Which *is* a minor footnote in what is currently happening.
Thanks for writing this up

@nielsa And frankly, as a neutral tech or tool, I do find the whole thing interesting!
It's just... pretty much like fusion is interesting. I would love for us to have cheap, safe "desktop" fusion.
It's just always been 20 years away, and inextricably tied up with dirty fission, so how can one *practically* support one and not the other?
The cost-benefit analysis suggests to me that the cost of getting this wrong is so much higher than the cost of missing out on good stuff, though.
-
@jens Absolutely agree on all of that.
I have a few ideas I think could make good, ethical use of generalized LLMs, but only assuming no side benefits to the people largely driving their development and to some extent that the LLM itself is produced ethically... and that leaves a very narrow space and thus a significant startup cost...
-
Writing this up again so I can pin it: AI is literally a fascist project. Friends don't let friends use it.
Before I go into this, there are two types of responses that I have taken seriously so far.
One I'll call HashTagNotAllAI, which yields the obligatory "sure", but has the same smell. I'll leave it at that.
The other is that an anti-AI stance also throws some assistive technology under the bus, making such a stance intrinsically ableist. The easy thing to do is to refer...
@jens AI is too confusing of a term, especially when talking about assistance. e.g., can text to speech or voice recognition technology be called AI? It certainly doesn't require a rainforest-destroying LLM level of technology; it's been around for at least 35 years.
I don't stay abreast of all the assistive technology, but is there any that really requires LLMs at massive scale?
-
@lwriemen As has been mentioned in a sub-thread, there e.g. exist things that analyze an image and provide textual descriptions.
In the broader sense, translation is an assistive tech for non-native speakers of any language.
-
Yes. AI is a far older and broader field than just the current LLM hype. Speech recognition, handwriting recognition, chess playing, various types of expert systems, route-finding, etc.
But LLMs and other modern genAI do feel different to a lot of people. And they use a lot more data and resources.