It's demotivating to think that:

Uncategorized · 26 Posts · 17 Posters
cwebber@social.coop wrote:

    In a sense, the decision is somewhat made for us in that we're developing next-generation stuff that LLMs don't know how to auto-code at @spritely. We are working on core infrastructure that needs to be carefully thought about and written. LLMs introduce a lot of errors and aren't good at doing this kind of work on their own.

    And the goal was always that our work is there to be lifted from, to spread outward, the way people have drawn from the well of the MIT / Stanford CS research labs for decades, but for decentralized networking today.

    But doing it now, in this way, in this environment, it's just really depressing and demotivating.

dvshkn@social.treehouse.systems wrote (#17):

    @cwebber It's difficult to not think of Anathem. Communities of theorists living an ascetic life away from the rest of society.


mcc@mastodon.social wrote (#18):

    @cwebber @spritely I mean the problem as I see it is: The people who primarily benefit from the work aren't paying for it, and there's no way to get them to contribute back ("licenses" no longer exist). So the art can only be extended by individual humans expending their savings or going into personal debt. (In theory basic research could additionally be funded by corporations, but since people who care about the art exist as a resource to be exploited, there is no reason for them to do so.)

jorgecandeias@mastodon.social wrote:

    @cwebber @spritely We need you guys.

    The thing that scares me the most is that in 10 years' time there'll be no new people able to code new stuff, to innovate.

    And *that* is the main reason why we absolutely need you guys. Regardless of how demotivating it may seem right now.

gemelen@mammut.moe wrote (#19):

    @jorgecandeias @cwebber @spritely

    It's not demotivation that comes first, but rather the simple survival of those who are out of money, out of funding, for choosing to do things that last and that bridge to the future.


mcc@mastodon.social wrote (#20):

    @cwebber @spritely This is similar to the problem I have making video games: Some portion of my audience will pirate my work. Technically that doesn't harm me, *but* if *everyone* pirates the game then I don't get any money and I don't get to keep making games. I decide I don't care because not everyone pirates games and *some* of the people playing the game will pay for it. LLMs, for code, set up the possibility that the entire audience will be pirating the work. Which is wild, since my code is MIT.


rysiek@mstdn.social wrote (#21):

    @cwebber @spritely

    techbros gonna techbro, sigh

cwebber@social.coop wrote:

    It's demotivating to think that:

    - LLMs aren't good at producing original / novel work
    - You still need experts to advance that stuff
    - It will always be slower to move without using LLMs
    - Once an innovation is done, though, it can always be scooped up by LLM users
    - "Bro why are you doing all this manually, I just vibe coded that in a weekend"

    Will it always be this way? It's depressing in the meantime, at least.

gnuxie@social.applied-langua.ge wrote (#22):
    @cwebber yeah, but programming was always about solving problems anyway. If we take what you say about LLMs here as the reality of how they're used and how they work, then what's unravelled is that for most of the last 20 years these guys were just solving problems other people had already solved, over and over.
gnuxie@social.applied-langua.ge wrote (#23):
    @cwebber and if that is true then that isn't good either.
relay@relay.an.exchange shared this topic.

mcc@mastodon.social wrote (#24):

    @cwebber @spritely This said, I want to give you the flipside to the process you're describing: I am currently creating a small programming language which exists for no purpose except for me to make games for the Game Boy and NES. When I look at my language, I think: *An LLM user could not use this language, because there is not a sufficient corpus to generate code from¹*. And this sparks joy in me.

    ¹ And a significant portion of the corpus is testcases designed to fail.


viss@mastodon.social wrote (#25):

    @cwebber @spritely once the honeymoon period is over, and the folks who keep getting rm'ed complain louder and more often than the success stories gush, the scale will tip.

    people have realised cloud was way riskier and more expensive and have started bringing stuff in house again; the same will happen with LLMs.

    it'll just take a critical mass, like anything else.

    and the LLM horror stories are piling up


andrewt@mathstodon.xyz wrote (#26):

    @cwebber LLM users are the same people who walk through modern art galleries saying "my kid could do that"
