Lulz.

Uncategorized · 10 Posts · 2 Posters
  • bontchev@infosec.exchange
    #1

    Lulz.

    So, I asked Claude.ai to convert a Bash script for Linux to a functionally equivalent PowerShell script for Windows. It did pretty well but there were some funsies.

    In the Bash script, I send an ANSI escape sequence to the terminal to clear the screen, because this is the most portable way of doing it. Claude faithfully replicated that, in a separate function at that.

    I pointed out to it that in PowerShell, there's a perfectly good Clear-Host command that does just that. "Good point," it said, "let me fix that."

    I looked at the "fix" and - there was nothing to clear the screen at all. No escape sequence, no Clear-Host, not its alias "clear" - nothing.

    So, I pointed out to it that the "fix" was worse than the original. It basically said "Oops", although in more words. Turns out, when "fixing" it, it had first replaced the command that outputs the escape sequence inside the clear-screen function with Clear-Host, then also deleted the whole function as no longer needed (since a single built-in command does its job) - but forgot to replace the function call with that command. 🤣
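    For context, here is a minimal sketch of the portable clear-screen idiom being discussed. This is a reconstruction, not the actual code from the script; the function name is assumed:

```shell
# Hypothetical reconstruction of the kind of clear-screen helper described
# above (the real function in test.sh may differ).
clear_screen() {
    # ESC[2J erases the visible screen; ESC[H moves the cursor to the home
    # position. Both are standard ANSI/ECMA-48 control sequences.
    printf '\033[2J\033[H'
}

clear_screen
```

    In PowerShell the whole helper collapses into the single built-in Clear-Host (alias clear) - which is exactly the call that went missing in the "fix".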


      aakl@infosec.exchange
      #2

      @bontchev How long did this take, from beginning to end?


        bontchev@infosec.exchange
        #3

        @AAKL Oh, it generated the first PowerShell script almost instantly. The whole process (including fixing the bug) took just a few minutes; most of the time was spent by me typing. Despite all its idiosyncrasies and occasional bugs, Claude is a very useful tool. Saved me a lot of time, especially given that I don't know PowerShell very well and would have wasted a lot of time learning how to do various things.


          aakl@infosec.exchange
          #4

          @bontchev A saving grace, for the comedy of errors that you ran into.


            bontchev@infosec.exchange
            #5

            @AAKL Oh, yes. *Never* blindly trust any of its output and *always* examine every single line of code it generates. But it is still very useful and saves a lot of time.


              aakl@infosec.exchange
              #6

              @bontchev Here's my question: if you were to repeat the exact same thing all over again, would the agent make the same mistakes all over again? And is there someone on the receiving end who reviews this interaction and prompts a correction for similar future interactions?


                bontchev@infosec.exchange
                #7

                @AAKL I don't know. On the one hand, LLMs are somewhat stochastic, so there might be differences. On the other hand, they are trained on particular pieces of text, so at least the responses to common tasks should be the same. There might also be other factors - e.g., if the LLM remembers its previous interactions with me.


                  bontchev@infosec.exchange
                  #8

                  @AAKL You might want to test it yourself. Here's a link to our conversation, so that you can see the prompting:

                  https://claude.ai/share/b60d2cb7-0c19-4452-a65f-c87b45825911

                  Here's the original Bash script:

                  https://gitlab.com/bontchev/ipphoney/-/raw/develop/unittests/test.sh?ref_type=heads

                  (You might want to download it and upload it to Claude, like I did; I think Claude had mentioned to me in the past that the environment in which it experiments with code has no network access.)

                  And here is the final PowerShell script it generated:

                  https://gitlab.com/bontchev/ipphoney/-/raw/pypi/ipphoney/data/unittests/test.ps1?ref_type=heads


                    aakl@infosec.exchange
                    #9

                    @bontchev I'll pass, thanks. But this certainly puts the superintelligence thingy in doubt, at least based on what we have now.


                      bontchev@infosec.exchange
                      #10

                      @AAKL LOL, LLMs are anything but "super". They aren't even intelligent. In fact, I suspect that even if we manage to construct real (self-aware, reasoning) AI one day, it won't be an LLM.

                      Still, they are a useful tool; more useful than googling. Whether their usefulness is worth the cost is an entirely different matter.
