BREAKING!
-
@hopeless @tomgag Not if the code is crap. (Aside from the numerous people who will eschew it for ethical reasons or out of the assumption that it is crap.) And one dude trying to QA code that he wrote with prompts, presumably because he doesn't know how to write the code in the first place, seems like a recipe for failure. (Heck, projects that are just one dude who *does* know what he's doing are chancy because they're a single point of failure, which the non-TM team is not.)
-
Depends what you mean by 'compatible'... in the sense of AI holding copyright in its output like a human, it seems it's not going to play out like that.
In the sense of a human owning the copyright on AI output, sure. A photographer definitely owns the copyright on images his camera took, and he essentially just pointed it in a direction and pressed a button. I don't think copyright + AI coding assists is going to be any special problem.
-
If you are looking after the code and forcing each incremental drop to stay maintainable, AI code can be fine, even in Apr 2026.
-
@scopecreeppress @hopeless @tomgag
I was thinking that not disclosing the AI was withholding pertinent information.
-
@bjb @hopeless @tomgag I think it's definitely withholding pertinent information from potential users/customers, though I don't think that's a legal issue. (Maybe a license violation, in some cases?) If there were an attempt to file copyright on AI-drafted material, then that would be a problem. But the only test case I'm aware of--a comic with a human-written script + AI art--had an easily defined (and disclosed) divide between human work and slop.
-
There are a lot of claims, but they don't rise to the level of a coherent argument in most cases. What exactly is 'unethical' about a machine reading liberally licensed FOSS that allows it? Most of GitHub is liberally licensed FOSS.
As I said, whoever pours more effort into their fork "wins". We'll have to see how it actually turns out, but with that in mind, the non-"MIINE" guys seem to have made a tactical error eschewing AI: the upstart bro will increasingly out-compete them via AI.
... case in point is scopecreeppress, who sprayed his opinion here as if it especially mattered and then blocked me presumably for simply politely disagreeing with him.
Instead of arguing for and defending (or, heaven forbid, modifying) his point so that maybe we all learn something, his claims are vomited out and left like a bad smell; then he's gone, forbidding any reply, guaranteeing himself the last, useless word, and turning off the pain receptors.
-
@hopeless I don't know who blocked you, but from my point of view your contribution to the conversation was misdirected. You are trying to make a pro-AI point, which one can agree with or not, but the story was about a dev who used AI to contribute code without disclosing it. And that's only the minor side of the story; from my point of view it was much more about the trademark registration thing. Regardless of one's stance on AI, the fact stands that a sizeable share of folks don't like it, so it is IMHO a bit disrespectful not to disclose its use in a community project.
-
I did not read about any ongoing legal action over the trademark. Instead, both sides mentioned in the story are going to fight it out by each taking the project their own way.
I explained already that whoever can invest more effort typically ends up driving the project, and that the guy who uses AI will be able to invest more effort than the larger team that excludes AI.
If you have a different take, fine; it should be easy to show what's wrong with my points.
-
@scopecreeppress @hopeless to be fair, Meshtastic is a toy compared to Meshcore, at least according to my tests.