Writing this up again so I can pin it: AI is literally a fascist project.
-
@GyrosGeier I actually find it difficult to write good image descriptions. The ones I write zero in on the point I want to make, but often omit details. In a way, that's a writing faux pas. In creative writing you learn "show, don't tell", and I do the opposite.
This isn't a counter-argument (nor an argument). All I want to do is acknowledge how hard it is to do well with assistance of this kind.
@jens that is a good description though: the details aren't important, but the point is. If you can't show because the recipient is vision impaired, then you need to tell.
My point is that while AI has its uses in assistive technologies, it is also inherently limited, so it's not a good direction to take research in assistive technologies in.
-
... scarcity, in which - by whichever proof scheme - those who participate early in the system benefit off those who come later (aka pyramid schemes). The proof algorithm guarantees scarcity; it's the whole point of blockchain vs. any other distributed system that there is a chokehold on resource creation somewhere.
AI is doing much the same thing, but it doesn't advertise this artificial scarcity as part of the solution. Instead, it simply guarantees that those who already own the most...
@jens The way the global stock market works is an interesting progenitor for cryptocurrencies, too. It used to be traded mostly based on earnings paid for holding the stock, but has in recent decades transitioned into being traded speculatively, which makes each stock into its own little proto-ponzi scheme.
-
@nielsa I think you need to read the entire thread

@jens I read the thread, it's a good thread.
I guess I'm just delineating the caveat of what kind of LLM can be neutral technology. Which *is* a minor footnote in what is currently happening.
Thanks for writing this up

-
@nielsa Oh, yes.
My understanding of financial products isn't exactly complete, but my take is that they all fall into two categories.
I mean, buying stock is a bet on future earnings. You can lose that bet, so one category is to aggregate things in such a way that - hopefully - losses in one are offset by gains in the other.
The other category is a layer of indirection, i.e. bets on something other people are betting on.
All of this multi-layered to the point where you can't know what...
-
@nielsa ... you're betting on, which makes ponzi schemes and insider trading so much more effective, as the costs are externalized to the average shareholder.
And people think this is serious business.
The only thing that seems serious about it is that it seriously affects us.
-
@nielsa And frankly, as a neutral tech or tool, I do find the whole thing interesting!
It's just... pretty much like fusion is interesting. I would love for us to have cheap, safe "desktop" fusion.
It's just always been 20 years away, and inextricably tied up with dirty fission, so how can one *practically* support one and not the other?
The cost-benefit analysis suggests to me that the cost of getting this wrong is so much higher than the cost of missing out on good stuff, though.
-
@jens Absolutely agree on all of that.
I have a few ideas I think could make good, ethical use of generalized LLMs, but only assuming no side benefits accrue to the people largely driving their development, and to some extent that the LLM itself is produced ethically... and that leaves a very narrow space and thus a significant startup cost...
-
Writing this up again so I can pin it: AI is literally a fascist project. Friends don't let friends use it.
Before I go into this, there are two types of responses to this that I have taken seriously so far.
One I'll call HashTagNotAllAI, which yields the obligatory "sure", but has the same smell. I'll leave it at that.
The other is that an anti-AI stance also throws some assistive technology under the bus, making such a stance intrinsically ableist. The easy thing to do is to refer...
@jens AI is too confusing of a term, especially when talking about assistance. e.g., can text-to-speech or voice recognition technology be called AI? It certainly doesn't need a rainforest-destroying, LLM-scale level of technology; it's been around for at least 35 years.
I don't stay abreast of all the assistive technology, but is there any that really requires LLMs at massive scale?
-
@lwriemen As has been mentioned in a sub-thread, there e.g. exist things that analyze an image and provide textual descriptions.
In the broader sense, translation is an assistive tech for non-native speakers of any language.
-
Yes. AI is a far older and broader field than just the current LLM hype. Speech recognition, handwriting recognition, chess playing, various types of expert systems, route-finding, etc.
But LLMs and other modern genAI do feel different to a lot of people. And they use a lot more data and resources.
-