Is it OK to use AI to analyze code and documents for errors?
-
@evan No. AI can't do the analysis (technically/philosophically). It can provide input, and this will probably be acceptable to me once the hype is over.
-
@evan I'm not a big fan of AI in general (the possibilities it opens are scary, and we're already seeing where that's going), but I think analyzing documents -- if, and only if, it isn't your only check -- is one of very few use-cases where it's more helpful than damaging.
-
@evan ...but only code authored by other AI
-
@evan I voted yes because an additional check is always useful, whether it's by an AI or a human. LLMs can make mistakes and overlook errors, but so do humans.
-
@evan there's NO ethical use of LLMs. Absolute statement.
-
@evan Done w this question. Fully set it down when I realized AI is 'just' the latest religious war among coders. (Perhaps second only to tabs-vs-spaces.)
Now I'm consciously trying to participate in AI conversations (if I participate at all) in ways that break the "Is AI good or bad?" framing, rather than reifying the fight.
Firefox did this in code: big button to turn AI off, little buttons to turn on LLM-powered features, starting w on-device translation which ~everyone understands & wants.
-
@evan The definitions of “OK” and “AI” are pretty darn load bearing in this question.
I’m “No, but”… it hinges on my exact assumption that AI means large private LLMs, and OK means “an ethical thing to do”.
(Not a complaint, I know you can’t exhaustively define every word in existence)
-
@evan Yes, but don't just rely on what the AI says; actually make sure it's correct.
-
@evan yes, but don't trust it to find all the errors. It's also better if a human reviews the error reports, rather than just plugging the error list output into another pipeline.
-
@evan no, because ai is a societal and environmental disaster, but this is not the worst use it could be put to
-
@evan I feel like this is one thing AI is well suited to do.
-
@evan
By #ai you probably mean #llm? Personally I'm not very fond of them; some people, however, don't seem to be able to function without them anymore.
The definition of what an error is can be very wide or very narrow. To assess 'correctness' can entail several things.
Was the correct syntax, spelling or grammar used? Does the logic contain any obvious mistakes? The topic of ethics is very tricky as an LLM is unable to do any actual reasoning. The output can look convincing, but is it really?
-
@evan I went with "no, but", as there is hardly any ethically trained AI. But if your employer forces you, you are not a bad person when you comply. Also, using it for review is the less evil use of AI, where it might help improve or inspire quality rather than add to the slop, although I fear it might make your critical muscle lazy.
-
@evan We know large language models can't exist in their current form without using copyrighted data.
How are you ensuring your model doesn't contain copyrighted data?
And if you're going to use a model from one of the big tech providers, there's going to be the issue of complicity with what they're doing.
-
@evan
By #ai you probably mean #llm? Personally I'm not very fond of them; some people, however, don't seem to be able to function without them anymore.
The definition of what an error is can be very wide or very narrow. To assess 'correctness' can entail several things.
Was the correct syntax, spelling or grammar used? Does the logic contain any obvious mistakes? The topic of ethics is very tricky as an LLM is unable to do any actual reasoning. The output can look convincing, but is it really?
Look, Evan is a professional communicator. Director, board member, researcher... Using the right words is key to each of his jobs.
Why would you assume that he meant a word he specifically didn't say?
-
