Getting good at using generative AI means being effective at working with a tool that makes things up 10% - 50% of the time.
-
Getting good at using generative AI means being effective at working with a tool that makes things up 10% - 50% of the time.
Many smart people struggle with this because they either
1. Get frustrated with a non-deterministic tool whose output they can’t trust.
2. Decide to blindly trust it because “Claude/ChatGPT said so.”
Both are failure patterns and quite common.
-
Getting good at using generative AI means being effective at working with a tool that makes things up 10% - 50% of the time.
Many smart people struggle with this because they either
1. Get frustrated with a non-deterministic tool whose output they can’t trust.
2. Decide to blindly trust it because “Claude/ChatGPT said so.”
Both are failure patterns and quite common.
Being good at using LLMs includes
1. Being able to provide context and craft prompts that limit the risk of hallucinations AND
2. Having processes and frameworks to vet the quality of the output from the LLM instead of blindly trusting it.
-
Being good at using LLMs includes
1. Being able to provide context and craft prompts that limit the risk of hallucinations AND
2. Having processes and frameworks to vet the quality of the output from the LLM instead of blindly trusting it.
@carnage4life I find #2 is skipped a lot. Modern software engineering is all about measuring and eval'ing the quality of your outputs. We should be doing the same with our agents.
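The eval'ing idea above can be sketched as a tiny harness that scores model outputs against known-good answers. This is an illustrative sketch, not a real API: `generate` is a stubbed stand-in for an LLM call, and all names are hypothetical.

```python
# Hypothetical sketch of an eval harness for LLM outputs.
# `generate` is a stub standing in for a real model call so the
# example runs on its own.

def generate(prompt: str) -> str:
    # Stub LLM: real code would call a model API here.
    canned = {"Capital of France?": "Paris", "2 + 2?": "4"}
    return canned.get(prompt, "I don't know")

def run_evals(cases):
    """Score each (prompt, expected) pair and return the pass rate."""
    passed = 0
    for prompt, expected in cases:
        output = generate(prompt)
        # Simple containment check; real evals might use exact match,
        # a rubric, or a grader model.
        if expected.lower() in output.lower():
            passed += 1
    return passed / len(cases)

cases = [("Capital of France?", "Paris"), ("2 + 2?", "4")]
accuracy = run_evals(cases)
print(f"accuracy: {accuracy:.0%}")
```

The point is less the scoring function than the habit: a fixed case set run on every prompt or model change turns "it seems fine" into a number you can track.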
-
Being good at using LLMs includes
1. Being able to provide context and craft prompts that limit the risk of hallucinations AND
2. Having processes and frameworks to vet the quality of the output from the LLM instead of blindly trusting it.
@carnage4life #1 is the ability to consciously shape your language usage to match that of the community whose info you seek, and #2 is the ability to use logic to build determinism. So the mythical analytical person who intimately understands human language communities.

-
Getting good at using generative AI means being effective at working with a tool that makes things up 10% - 50% of the time.
Many smart people struggle with this because they either
1. Get frustrated with a non-deterministic tool whose output they can’t trust.
2. Decide to blindly trust it because “Claude/ChatGPT said so.”
Both are failure patterns and quite common.
@carnage4life Underneath, an assumption that there is a way to use it 'well' and that this is desirable.
And a willingness to dismiss all ethical concerns in doing so. -
Getting good at using generative AI means being effective at working with a tool that makes things up 10% - 50% of the time.
Many smart people struggle with this because they either
1. Get frustrated with a non-deterministic tool whose output they can’t trust.
2. Decide to blindly trust it because “Claude/ChatGPT said so.”
Both are failure patterns and quite common.
@carnage4life an alternate framing:
Getting good at using generative AI means using a tool that produces incorrect output 10% - 50% of the time. Such tools used to be rejected as not fit-for-purpose / not production-ready.
Many smart people struggle with this because they either
1 Get frustrated with being required to use a tool that’s not fit-for-purpose and having to expend time & energy fixing its incorrect outputs.
2 Decide to say “fuck it” and use it anyway because “management said so” and they have no genuine agency to stop or derail the train.
Both would have been considered reasonable positions only a few years ago and quite common.
-
@carnage4life an alternate framing:
Getting good at using generative AI means using a tool that produces incorrect output 10% - 50% of the time. Such tools used to be rejected as not fit-for-purpose / not production-ready.
Many smart people struggle with this because they either
1 Get frustrated with being required to use a tool that’s not fit-for-purpose and having to expend time & energy fixing its incorrect outputs.
2 Decide to say “fuck it” and use it anyway because “management said so” and they have no genuine agency to stop or derail the train.
Both would have been considered reasonable positions only a few years ago and quite common.
@itgrrl As you point out, they aren’t reasonable positions today.

-
Being good at using LLMs includes
1. Being able to provide context and craft prompts that limit the risk of hallucinations AND
2. Having processes and frameworks to vet the quality of the output from the LLM instead of blindly trusting it.
@carnage4life Genuinely curious: can #2 be delegated, even partly, to LLMs, or does it necessarily require human involvement?
-
Getting good at using generative AI means being effective at working with a tool that makes things up 10% - 50% of the time.
Many smart people struggle with this because they either
1. Get frustrated with a non-deterministic tool whose output they can’t trust.
2. Decide to blindly trust it because “Claude/ChatGPT said so.”
Both are failure patterns and quite common.
@carnage4life I think you are discounting LLM users who are also in the habit of distrusting the quality of human work. I think for some people it's easier/in their bones to set up LLMs in a verifiable harness/loop, which dramatically reduces the 10-50% -> 1-5%...
But yeah, most people aren't like that, and this generally tracks with poor critical thinking in humans
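The "verifiable harness/loop" mentioned above can be sketched as: generate, run a deterministic check on the output, and retry on failure. This is a hypothetical illustration; `generate` is a stub that simulates a model producing a bad answer before a good one, and the verifier here is just a JSON-shape check.

```python
import json

def generate(prompt: str, attempt: int) -> str:
    # Stub LLM: simulates a malformed first response, then a valid one.
    return "not json" if attempt == 0 else '{"answer": 4}'

def verify(output: str) -> bool:
    """Deterministic check: output must be valid JSON with an 'answer' key."""
    try:
        return "answer" in json.loads(output)
    except json.JSONDecodeError:
        return False

def generate_verified(prompt: str, max_attempts: int = 3):
    """Retry generation until the verifier accepts the output."""
    for attempt in range(max_attempts):
        output = generate(prompt, attempt)
        if verify(output):
            return output
    return None  # give up after max_attempts

result = generate_verified("Return the answer to 2 + 2 as JSON")
```

In practice the verifier might be a schema validator, a test suite, or a compiler; the key design choice is that the check is deterministic, so the loop only passes outputs a human-written rule accepts.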
-
Getting good at using generative AI means being effective at working with a tool that makes things up 10% - 50% of the time.
Many smart people struggle with this because they either
1. Get frustrated with a non-deterministic tool whose output they can’t trust.
2. Decide to blindly trust it because “Claude/ChatGPT said so.”
Both are failure patterns and quite common.
@carnage4life @davidnjoku #1 is a success, not a failure. It’s the same reason I unfollow LLM apologists — I don’t like tools that make things up.
-
@carnage4life I think you are discounting LLM users who are also in the habit of distrusting the quality of human work. I think for some people it's easier/in their bones to set up LLMs in a verifiable harness/loop, which dramatically reduces the 10-50% -> 1-5%...
But yeah, most people aren't like that, and this generally tracks with poor critical thinking in humans
@damageboy This is an uncomfortable truth that isn’t discussed much.
I’m definitely in this bucket.
-
Being good at using LLMs includes
1. Being able to provide context and craft prompts that limit the risk of hallucinations AND
2. Having processes and frameworks to vet the quality of the output from the LLM instead of blindly trusting it.
@carnage4life I think you missed #3: having SMEs who can identify and fix hallucinations and errors, because for them AI is an accelerator, not a replacement.