excluding all the egregious moral hazards of "AI", i fail to see how the current growth is sustainable.

Uncategorized · 5 posts · 2 posters
#1 · xan@xantronix.social wrote:
excluding all the egregious moral hazards of "AI", i fail to see how the current growth is sustainable. the silent majority HATE AI, from both an aesthetic and a political position. i cannot help but wonder whether developers' and C-suites' obsession with LLMs is enough to buoy the gargantuan expenditures, even with modest increases in per-token prices. i've never availed myself of the cushy Silicon Valley equity benefits, so i don't have that hugbox insulating me from class consciousness.

    i am not alone.


#2 · xan@xantronix.social wrote:
those outside the SV bubble often wish to emulate those within, save for the disadvantaged who never had the chance to taste that forbidden fruit, so of course they'll want to partake.

i cannot help but see a calamity of technical debt on the horizon. in AI's "best case", the US regime will deem it "too big to fail" and nationalise LLM infrastructure for surveillance, suppression and warmongering. class resentment will likely expand to tech workers.


#3 · xan@xantronix.social wrote:

the fruit is a poison, sown by hegemons and reaped by the masses. why do people willingly hand over their autonomy to a few massive providers of LLM infrastructure? with so much at stake, why diminish oneself and contribute to the calamity? a machine offering "yes, and" by design is not a trustworthy copilot. it is a first officer who learned the wrong lesson: never question the captain, even when that captain is not acting at full capacity.

        LLM developers, why do this to yourselves?

#4 · xan@xantronix.social wrote:

          @atax1a why do they wholly align their priorities with those of managers? i wish they understood just how much this weakens us all


#5 · atax1a@infosec.exchange wrote:

@xan lack of class consciousness, egocentric individualism, (sepulchral whisper) white privilege.

relay@relay.infosec.exchange shared this topic