excluding all the egregious moral hazards of "AI", i fail to see how the current growth is sustainable.
-
the silent majority HATES AI, from both an aesthetic and a political position. i cannot help but wonder whether developers' and C-suites' obsession with LLMs is enough to buoy the gargantuan expenditures, even with modest increases in per-token prices. i've never availed myself of the cushy Silicon Valley equity benefits, so i don't have that hugbox insulating me from class consciousness.
i am not alone.
-
those outside the SV bubble often wish to emulate those within, save for the disadvantaged who never had the chance to taste that forbidden fruit, so of course they'll want to partake.
i cannot help but see a calamity of technical debt on the horizon. in AI's "best case", the US regime will deem it "too big to fail" and nationalise LLM infrastructure for surveillance, suppression and warmongering. class resentment will likely expand to tech workers.
-
the fruit is a poison, reaped by the masses and sown by hegemons. why do people willingly hand over their autonomy to a few massive providers of LLM infrastructure? with so much at stake, why diminish oneself and contribute to the calamity? a machine offering "yes, and" by design is not a trustworthy copilot. it is a first officer who learned the wrong lesson: never to question the captain, even when that captain is not acting at full capacity.
LLM developers, why do this to yourselves?
-
@atax1a why do they wholly align their priorities with those of managers? i wish they understood just how much this weakens us all.
-
@xan lack of class consciousness, egocentric individualism, (sepulchral whisper) white privilege.
-