
Getting good at using generative AI means being effective at working with a tool that makes things up 10%-50% of the time.

Uncategorized · 12 Posts · 9 Posters
carnage4life@mas.to
#1

Getting good at using generative AI means being effective at working with a tool that makes things up 10%-50% of the time.

    Many smart people struggle with this because they either
    1. Get frustrated with a non-deterministic tool whose output they can’t trust.
    2. Decide to blindly trust it because “Claude/ChatGPT said so”

    Both are failure patterns and quite common.

carnage4life@mas.to
#2

      Being good at using LLMs includes

      1. Being able to provide context and craft prompts that limit the risk of hallucinations AND
      2. Having processes and frameworks to vet the quality of the output from the LLM instead of blindly trusting it.
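Point 2 can be made concrete with cheap automated checks that run before anyone trusts the output. A minimal sketch, where `fake_llm` stands in for a real model call and the grounding heuristic is an illustrative assumption, not a real API:

```python
import re

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a plausible but unverified answer.
    return "The capital of Australia is Canberra."

def vet_output(answer: str, source_text: str) -> bool:
    """Cheap automated checks applied before a human ever trusts the answer."""
    # 1. Grounding check: every proper noun in the answer should appear in the source.
    nouns = re.findall(r"\b[A-Z][a-z]+\b", answer)
    grounded = all(n in source_text for n in nouns if n != "The")
    # 2. Shape check: the answer is a non-trivial declarative sentence.
    well_formed = answer.strip().endswith(".") and len(answer.split()) >= 4
    return grounded and well_formed

source = "Canberra is the capital city of Australia."
answer = fake_llm("What is the capital of Australia?")
print(vet_output(answer, source))  # -> True
```

Real vetting pipelines layer many such checks (schema validation, citation lookup, unit tests on generated code); the point is that acceptance is gated by a process, not by trust.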

zenkat@sfba.social
#3

@carnage4life I find #2 is skipped a lot. Modern software engineering is all about measuring and eval'ing the quality of your outputs. We should be doing the same with our agents.
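The eval discipline described here can be sketched in a few lines. Everything below (`EVAL_SET`, the exact-match metric, the stub model) is illustrative, not a real framework:

```python
def exact_match(pred: str, gold: str) -> bool:
    # Simplest possible metric; real evals use rubrics, graders, or test suites.
    return pred.strip().lower() == gold.strip().lower()

# Tiny hand-labeled eval set; in practice this grows with every failure you catch.
EVAL_SET = [
    {"prompt": "Capital of France?", "gold": "Paris"},
    {"prompt": "2 + 2?", "gold": "4"},
]

def run_eval(model) -> float:
    """Score any callable (prompt -> answer) against the labeled set."""
    hits = sum(exact_match(model(ex["prompt"]), ex["gold"]) for ex in EVAL_SET)
    return hits / len(EVAL_SET)

# Stand-in model that gets one of two answers right.
stub = lambda p: {"Capital of France?": "Paris"}.get(p, "unsure")
print(run_eval(stub))  # -> 0.5
```

A score like this, tracked over time, replaces "it seemed fine when I tried it" with a number you can regress against.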

jayalane@mastodon.online
#4

@carnage4life 1 is the ability to consciously shape your language to match that of the community whose info you seek, and 2 is the ability to use logic to build determinism. So the mythical analytical person who intimately understands human language communities 🙂

noodlemaz@mstdn.games
#5

            @carnage4life underneath, an assumption that there is a way to use it 'well' and that this is desirable.
            And a willingness to dismiss all ethical concerns in doing so.

itgrrl@infosec.exchange
#6

              @carnage4life an alternate framing:

              Getting good at using generative AI means using a tool that produces incorrect output 10% - 50% of the time. Such tools used to be rejected as not fit-for-purpose / not production-ready.

              Many smart people struggle with this because they either

1. Get frustrated with being required to use a tool that’s not fit-for-purpose and having to expend time & energy fixing its incorrect outputs.

2. Decide to say “fuck it” and use it anyway because “management said so” and they have no genuine agency to stop or derail the train.

              Both would have been considered reasonable positions only a few years ago and quite common.

carnage4life@mas.to
#7

                @itgrrl As you point out, they aren’t reasonable positions today. 😊

performat@mastodon.social
#8

@carnage4life genuinely curious:

can 2 be delegated to LLMs, even partly, or does it necessarily require human involvement?

damageboy@hachyderm.io
#9

@carnage4life I think you are discounting LLM users who are also in the habit of distrusting human quality of work. I think for some people it's easier/in their bones to set up LLMs in a verifiable harness/loop, which dramatically reduces the 10-50% to 1-5%...

                    But yeah, most people aren't like that, and this generally tracks with poor critical thinking in humans

fivetonsflax@tilde.zone
#10

                      @carnage4life @davidnjoku #1 is a success, not a failure. It’s the same reason I unfollow LLM apologists — I don’t like tools who make things up

carnage4life@mas.to
#11

                        @damageboy This is an uncomfortable truth that isn’t discussed much.

                        I’m definitely in this bucket.

klausfiend@mstdn.ca
#12

                          @carnage4life I think you missed 3., having SMEs who can identify and fix hallucinations and errors because for them AI is an accelerator, not a replacement.
