My stance on #LLM :

Uncategorized · llm · 19 Posts, 11 Posters
juergen_hubert@mementomori.social

My stance on #LLM :

1. There _might_ be some useful use cases for this technology that could be worth exploring.

2. However, it is glaringly obvious that, as of now, their main purpose is to power the mother of all investment bubbles.

3. Which leads us to the present trillion-dollar business case of "we must build energy- and water-wasting data centers everywhere so that we can scrape every single website a thousand times a month for new training data!"

4. Thus, there is currently pretty much no ethical way of using LLMs.

5. Any ethical exploration of LLM use cases will thus have to wait until the bubble has burst, the investors have moved on to the next scam, and we can sort through the rubble to check what is left.

nazokiyoubinbou@urusai.social · #3

@juergen_hubert I have to say that every use case I can imagine for them would be perfectly 100% OK on smaller, local-only models.

Actually, I would go so far as to say that even most of the use cases they're pretending they can do could be done locally. They don't really need to be 300+ GB models. That's just a way to avoid acquiring properly clean data (e.g. just steal everything and cram it all together rather than paying people to actually create quality data sets).

So yeah, basically every use case does away with the need for the current system anyway.

As you said, the current system exists only to support the bubble.

My hope is that when this is all over, communities will make community-collated data sets that are clean and lighter, and never push them as "general AI", which they are not.

sablebadger@dice.camp · #4

@juergen_hubert There was a moment when this tech could have benefited millions of people, but tech-bros are not in that business; they are in the business of making themselves stupidly rich, with no morals.

I worked at an AI company until recently, and have some in-depth, hands-on experience with AI. It has some legitimate, and very interesting, uses. But techbro greed has ruined it for everyone, and the damage it's doing makes it really difficult to justify it at all.

renwillis@mstdn.social · #5

@juergen_hubert 100%! Science, accessibility, cobbling together disparate data: definitely some good use cases. But absolutely, we’ll need to wait for the crash, as everyone is chasing hyper-capitalism first.

ukeleleeric@mstdn.social · #6

@juergen_hubert Anyone with this common-sense stance is worth a follow. Thank you.

gatesvp@mstdn.ca · #7

@juergen_hubert

But if I run the LLM on a computer I own and I use it for tasks I find useful, does the argument still hold?

Is it unethical to use an open model today in the same way I use LibreOffice or Plex or Linux?

ced@mapstodon.space · #8

@gatesvp how was your open model trained?

@juergen_hubert

ced@mapstodon.space · #9

@juergen_hubert

💯 this. Although I wonder if we’ll be able to train a new frontier model ever again once the bubble has burst (and frankly, I don’t care if we can’t).

gatesvp@mstdn.ca · #10

@ced @juergen_hubert Sounds like you're suggesting that there is a specific model training regimen that you would consider to be ethical?

What does that look like?

juergen_hubert@mementomori.social · #11

@gatesvp @ced

One that's trained specifically on public domain/Creative Commons Zero source material.

As opposed to, say, the entirety of the World Wide Web.

flying_saucers@mastodon.social · #12

@nazokiyoubinbou @juergen_hubert Yeah, it seems to me like they’re too preoccupied with chasing ever more marginal performance improvements to consider how to make the models more efficient (bar that context-compression thing recently). Evil tongues would claim it’s a competitive advantage for those with deeper pockets.

juergen_hubert@mementomori.social · #13

@flying_saucers @nazokiyoubinbou

Also, the current approach places a disproportionate burden on those who maintain websites, as these get constantly scraped for new content and thus see drastically increased page loads.

Speaking as the maintainer of such a website. 😡

flying_saucers@mastodon.social · #14

@juergen_hubert @nazokiyoubinbou wouldn’t it be nice if people respected robots.txt 🙃
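For context, a minimal robots.txt that asks the common AI-training crawlers to stay away. The user-agent tokens below are the ones published by the respective operators, but compliance is entirely voluntary on the crawler's part:

```text
# Disallow known AI-training crawlers site-wide
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /
```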

juergen_hubert@mementomori.social · #15

@flying_saucers @nazokiyoubinbou

Instead, we get anonymous botnets with a changing roster of IP addresses.

Seriously: within the span of six hours, my wiki once received 3,800 requests for "Special:RecentChanges". That is not something most readers will do.

flying_saucers@mastodon.social · #16

@juergen_hubert that’s messed up. Is there a way to rate limit just that page in particular?

danbrotherston@types.pl · #17

                                @juergen_hubert "The investors have moved to the next scam".

                                Well, by this logic, there is no ethical existence in our current society.

clayote@peoplemaking.games · #18

@juergen_hubert @gatesvp @ced Which has been done! You have to put up with the model's capabilities being a couple of years behind the state of the art, but then, if the state of the art can only be achieved by labor theft, maybe it shouldn't be the state of the art. https://www.engadget.com/ai/it-turns-out-you-can-train-ai-models-without-copyrighted-material-174016619.html

juergen_hubert@mementomori.social · #19

@flying_saucers

Still trying to figure that out.
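One possible approach, sketched here for a site fronted by nginx: throttle requests to that one special page with the limit_req module. The zone name, rate, and URL pattern below are illustrative assumptions to adapt to the wiki's actual URL scheme (many MediaWiki installs also serve the page via /index.php?title=Special:RecentChanges, which would need its own rule):

```nginx
# In the http {} block: a per-client-IP budget for the RecentChanges page only.
limit_req_zone $binary_remote_addr zone=recentchanges:10m rate=6r/m;

server {
    # ... existing MediaWiki configuration ...

    location ~ ^/wiki/Special:RecentChanges {
        # Allow a small burst, then return errors to over-eager clients.
        limit_req zone=recentchanges burst=5 nodelay;
        # Hand the request to the usual MediaWiki handler, e.g.:
        # try_files $uri /index.php?$query_string;
    }
}
```

Ordinary readers stay well under six requests per minute for that page, so only scrapers should ever hit the limit.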
