I realise on the fediverse this is maybe asking for a flaming, but yesterday out of sheer curiosity I tried Claude for a simpleish coding task that I'd been putting off (largely inspired by @hausfath 's latest on #theclimatebrink). The performance of Claude was seriously impressive. I am convinced the AI cycle is more than hype (and have been for a while); the chatbots have been a huge attention hogger, misleadingly so, while the serious work has been done elsewhere. (We are developing ML tools to supplement parts of our climate model workflows.)
Now I'm wondering if there is any serious EU competition to Anthropic? Mistral's Codestral perhaps?
Because this kind of performance changes everything and we can't afford to lag behind...
#AIcoding #ML
Edit: here is the climate brink post I mentioned
The AI-Augmented Scientist
The promise and pitfalls of using AI tools to boost my capabilities as a scientist
(www.theclimatebrink.com)
-
@benjamingeer As far as I understand it, the task of @Ruth_Mottram was different from the two examples:
- not trying to learn a skill
- not something that’s complex to program, just a time sink (if I understand it correctly)
And there is something in the text by @hausfath that I’ve also seen from others: a management role, detached from development.
Like many scientists who do their data evaluation in Excel or SAS GUIs (social sciences), and often don’t understand why it works.
@UlrikeHahn @ArneBab It's true that scientists use calculators even though many of them probably don't really know how calculators work. But if you bought a calculator that sometimes said 2 + 2 = 5, you'd return it and get a refund. LLMs are like that.
LLMs can certainly generate a lot of code very fast. But is it good code, or a mass of spaghetti? Will you be able to maintain it, considering that you don't know how it works? When it turns out to have bugs, will you be able to fix them?
-
@ArneBab @benjamingeer @hausfath @UlrikeHahn yes, that's exactly the kind of task I think the ML models work well on. A lot of science is actually quite boring and repetitive but needs careful monitoring. If a tool can do part of that, then why not? I think Zeke is correct that the human mind needs to come up with the creativity and the experiments, as well as with the careful analysis to understand the results.
@Ruth_Mottram The main risk I see with that is that it can quickly limit creativity.
I experimented with ChatGPT for writing (but didn’t make the results public, except for an experiment explicitly done to evaluate its effects -- worrying¹), and I found that it is good at providing a start, but repetitive: when I started with it, it limited my imagination -- kind of like the effect of advertisements. So it’s a bad start.
¹ https://www.draketo.de/software/ai-translation-evaluated#completely-changed
@benjamingeer @hausfath @UlrikeHahn
-
@UlrikeHahn Is slop productivity? LLMs are good at producing fake course work, fake scientific papers, fake political debates, etc., which can look plausible and often pass for the real thing if you don't look too closely. @Ruth_Mottram @hausfath
@benjamingeer @Ruth_Mottram @hausfath I take “productivity” to refer to the efficiency of production of a good or service.
Readily available AI systems now can (and do) produce essay answers to which I would have to assign a passing grade (and actually an increasingly good grade) given our marking criteria, and they can do that in seconds. It’s a huge problem for higher education.
How is that not a “productivity gain”?
I find the conflation of questions about what these systems can actually do (an empirical question!) with questions of desirability deeply counter-productive.
-
@Ruth_Mottram The risk is in the "careful monitoring" part: https://mastodon.online/@pseudonym/116135917950981989 @ArneBab @hausfath @UlrikeHahn
-
@benjamingeer @hausfath @UlrikeHahn @Ruth_Mottram One experiment I did was to turn a text I wrote years ago into a scientific paper in economics.
It took two hours and reached a quality at which I (a physicist, not an economist) could not have distinguished it from a real paper.
AI makes the form easy to reproduce, so we can no longer trust the form of scientific writing to be a hint that people actually have a scientific education.
And that is a huge risk.
@benjamingeer @hausfath @UlrikeHahn
-
@UlrikeHahn The real productivity that you're asking your students for is their own thinking and learning, right? LLMs aren't producing that, they're producing fake evidence for it, by parroting sentences that were written by people who had actually done the thinking and learning. The problem for higher education is now to figure out how to measure thinking and learning in other ways. @Ruth_Mottram @hausfath
-
@benjamingeer @Ruth_Mottram @hausfath sometimes replies here leave me speechless…
-
@benjamingeer scientific code is usually a mass of spaghetti.
I once made a colleague’s data cleanup program at least 100x faster just by processing the data in one pass, instead of reopening the file and seeking to the last position for every single line.
You need to know your starting point to check whether something actually brings benefits.
That said: if that had been a 10k-line AI code monster, I couldn’t have fixed it in the 30 minutes I had.
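A minimal sketch of the two access patterns contrasted above (the cleanup step and file layout are hypothetical, not the colleague's actual code):

# Hypothetical illustration of the speedup described above: the slow variant
# reopens the file and skips forward for every record (quadratic reads),
# the fast variant streams the file once (linear reads).

def clean(line: str) -> str:
    """Placeholder cleanup: trim whitespace and collapse repeated separators."""
    return " ".join(line.split())

def cleanup_slow(path: str, n_lines: int) -> list[str]:
    out = []
    for i in range(n_lines):
        with open(path) as f:              # reopened for every single line
            for _ in range(i):             # "seek" back to the last position
                next(f)
            out.append(clean(next(f)))     # O(n^2) line reads overall
    return out

def cleanup_fast(path: str) -> list[str]:
    with open(path) as f:                  # opened once, read once
        return [clean(line) for line in f]  # O(n) line reads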
-
@benjamingeer But, just to make it clear: that code, which was 100x slower than it could have been, was still correct.
It was slow, but it did very complex tasks correctly.
@Ruth_Mottram @hausfath @UlrikeHahn
-
@benjamingeer @hausfath @UlrikeHahn @Ruth_Mottram though my main gripe with us as a human society is that we’re spending more than 400 billion dollars a year to build error-prone general pattern recognition and reproduction, while finding maybe 100 problems where it brings big benefits -- problems that would each require less than 10 million dollars to solve.
Why don’t we have solutions for those tasks already?
Why is matplotlib mostly written by some folks in their spare time while it has tons of value?
@benjamingeer @hausfath @UlrikeHahn
-
@UlrikeHahn What is the "good" that you want your students to produce? The thing that has real value? Is it essays or learning? Perhaps students are using LLMs to write essays because they mistakenly believe that the essay is an end in itself, rather than a means to an end. As somebody said, sometimes it makes sense to have someone cook your meal for you, but it never makes sense to have someone eat your meal for you. @Ruth_Mottram @hausfath
-
Do people actually read the code Claude runs and how it differs from what Claude gives as an output?
-
@benjamingeer @Ruth_Mottram @hausfath Benjamin, maybe just reread the previous post of yours and ask yourself “what in this post am I saying that could possibly be new to the person I am addressing?”…and then see where that leads you
-
@UlrikeHahn It would surprise me if anything I said was new to you. What surprised me was that you described the production of counterfeit goods as productivity. @Ruth_Mottram @hausfath
-
@benjamingeer @Ruth_Mottram @hausfath maybe that should be a clue that you are somehow missing the intended point?
-
@UlrikeHahn The original question was whether LLM coding assistants would make scientists more productive. It sounded like you were arguing that they would, since LLMs are not just hype, as evidenced by their efficiency in producing fake course work, etc. Were you being ironic? @Ruth_Mottram @hausfath
-
@benjamingeer @hausfath @UlrikeHahn @Ruth_Mottram when you use AI to transform your content from one form to another, parts of the content usually associated with the target form creep into your content.
This can be as bad as turning "agriculture that needs less antibiotics, because animals stay healthier" into "agriculture without antibiotics" (so sick animals suffer needlessly).
Because AI does not differentiate between content and form.
@benjamingeer @hausfath @UlrikeHahn
-
@benjamingeer @Ruth_Mottram @hausfath I will leave that to you to puzzle out and now stop bombarding Ruth’s thread….
-
On the pure software side: 10 years ago, playing with the first-gen Raspberry Pi camera, I realized its relatively exotic video interface could be leveraged to do motion detection with extremely low CPU usage.
Those interfaces have since changed and the same approach no longer works. So a few months ago I decided to try an experiment: could OpenCode make a new version, compatible with the latest hardware and interfaces? 1/2
@Ruth_Mottram @hausfath
-
The planning stage worked like magic. It generated a plan that detailed why the old code doesn't work, listed all the new solutions, and outlined a plan for the conversion.
It all fell apart moving to implementation, though. Spinning in circles, it ended up producing a completely unworkable semblance of code that didn't even have a hope of working.
What looked excitingly plausible for a forward port turned out to be a dead end. 2/2
@Ruth_Mottram @hausfath
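For context on the original low-CPU trick: on the first-generation camera stack, the GPU's H.264 encoder could emit per-macroblock motion vectors as a side channel, so motion detection needed almost no CPU. A minimal sketch of that approach, assuming the legacy picamera library (the class name and thresholds are illustrative, not the poster's actual code):

import numpy as np
from picamera import PiCamera
from picamera.array import PiMotionAnalysis

class MotionDetector(PiMotionAnalysis):
    def analyse(self, a):
        # 'a' is a record array of per-macroblock motion vectors with
        # fields 'x', 'y' and 'sad', filled in by the H.264 encoder.
        magnitude = np.sqrt(a['x'].astype(np.float64) ** 2 +
                            a['y'].astype(np.float64) ** 2)
        # Call it motion if enough macroblocks moved by a noticeable amount
        # (both thresholds are illustrative).
        if (magnitude > 10).sum() > 20:
            print("motion detected")

with PiCamera(resolution=(640, 480), framerate=30) as camera:
    detector = MotionDetector(camera)
    # Encode video to nowhere; we only consume the motion-vector side channel.
    camera.start_recording('/dev/null', format='h264', motion_output=detector)
    camera.wait_recording(30)
    camera.stop_recording()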