"You go to sleep, the agent does the science and when you wake up, you have the results."

Uncategorized · 13 Posts · 10 Posters
  • tante@tldr.nettime.org

    "You go to sleep, the agent does the science and when you wake up, you have the results."

    That's not how any of this works. You're not "doing research with help"; you have something generated that looks like a research report, without the research happening. How many different ways can "AI" bros find to express "I AM TOTALLY MISSING THE POINT"?

  • david@openbiblio.social #4

    @tante who wrote that? It's silly.
  • mmby@mastodon.social #5

    @tante did the correlation machine find some causation?
  • liw@toot.liw.fi #6

    @tante 25 years ago I saw the number 12765 inside the cap of a soda bottle. Part of a game. Seems like as good an answer as any.
  • n_dimension@infosec.exchange #7

    @tante

    The latest models are largely written by other models.

    The goal of #OpenAi is not to develop #AGI...
    ...but to develop an AI AI-researcher that can then develop AGI.

    Most frontier models are at about 40% on Humanity's Last Exam; they sat at about 3% when it launched. They can defo do zero-shot knowledge.

    TL;DR: AI can do research.
  • tante@tldr.nettime.org #8

    @n_dimension Models are not "written", they are "trained". That is a very different thing. And sure, frontier models are good at standardized tests. It's basically open-book testing.
  • korenchkin@chaos.social #9

    @tante why don't they just have an agent come up with a way of missing the point? Such a missed opportunity.
  • tante@tldr.nettime.org #10

    @korenchkin they probably are
  • vanecx@mastodon.pirateparty.be #11

    @tante but if AI says it has done it, then it must have done it, right?

    Link preview — Paco Hope (@paco@infosec.exchange), Infosec Exchange (attached: 1 image):
    "One of the ways that LLM-authored code improves productivity is by merely SAYING it does things. It's way faster than the whole time-consuming process of actually doing things. This is real code someone sent to me for review."
  • tanavit@toot.aquilenet.fr #12

    @n_dimension

    As you know, research leads to patents, and patents lead to profit.

    So, if LLMs can produce valuable research, why don't the AI companies keep their tools for themselves instead of generously offering them to their competitors?

    @tante
  • n_dimension@infosec.exchange #13

    @tante

    Written, not trained.
    #vibecoding
    https://www.hyperdimensional.co/p/on-recursive-self-improvement-part

    Humanity's Last Exam is not open book; it has been expressly designed to exclude AI-available datasets and centres on specific domain knowledge. Check it out.
  • relay@relay.infosec.exchange shared this topic