the thing i’m struggling with is AI has crossed a threshold where it’s actually useful for work, gasp, but the discourse has been so poisoned by over-hype and fascism it’s hard to talk about

Uncategorized · 26 Posts · 6 Posters
  • yoasif@mastodon.social

    @phillmv Aaron at least had an argument that the works he was pirating were based on foundational research funded by the public (owing their existence to it) - he wanted to return them to the public.

    What is happening with OpenAI/Anthropic is deeply different - they are taking from people and companies who contributed to the commons (and who wanted it to remain there), and are selling it back to the monied interests.

    Sort of a reverse Robin Hood - stealing from the poor to give to the rich.

    phillmv@hachyderm.io #21

    @yoasif yeah i agree - i just think the solution is to do what Aaron was trying to do, not to go back to the status quo

    • phillmv@hachyderm.io

      @yoasif LLMs are actually quite good at disassembling existing software and translating it into new languages.

      as of today this still requires a lot of human effort but i feel confident that before LLM innovation peters out we’ll be able to clone most things that expose an API

      yoasif@mastodon.social #22

      @phillmv But not really: https://blog.katanaquant.com/p/your-llm-doesnt-write-correct-code

      The LLM reproduces code it has copied into its corpus; it is not producing new works based on language semantics.

      Monkey see, monkey do.

      • yoasif@mastodon.social #23

        @phillmv How is propping up the LLM companies doing what Aaron was trying to do?

        Aaron was Robin Hood.

        The LLM companies are the opposite.

        • phillmv@hachyderm.io #24

          @yoasif this article is complaining about a vibe-coded rust port; i don’t think you can vibe code a port of a project as complex as sqlite just yet.

          my claim is more that porting sqlite to rust has gone from a 2-year project to a 3-month project.

          • yoasif@mastodon.social #25

            @phillmv When the code is in the corpus, the LLM generates plausible code.

            That doesn't mean it is good, or that you can protect it in any way.

            If you are saying that people will be able to describe an app and produce something plausible if the code exists in the corpus... perhaps.

            That assumes that people are interested in feeding the models for free - LLMs copy, so if it isn't already a solved problem, you are still going to need to use your brain.

            • phillmv@hachyderm.io

              @yoasif i’m happy to engage on the harms.

              broadly speaking i think the harms currently outweigh the benefits; as of today, if i could wish the technology away, i think i would. as it is, we need to regulate it more.

              that said, does how other people use the tool impact the morality of how i use it? i don’t know. i’m not sending people spam.

              i don’t really believe in intellectual property so we can skip “theft”.

              this mostly leaves us with environmental concerns and social upheaval.

              as a programmer it feels hypocritical to moralize about automation being inherently bad; automating tasks has been my whole career.

              environment is probably the strongest angle, but that’s downstream of not having clean energy. if you could build it all on wind and solar power then it’d be OK

              configures@mindly.social #26

              @phillmv @yoasif it's not just the energy. AI data centers are stealing water from communities that badly need it. It's a water hog. I can imagine cooling that doesn't use water, but that's not the reality right now.
