i'm at a loss for words after reading a paper about reformatting code using an ML model that reports a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
@whitequark I guess if your code is extruded as a homogeneous paste and probably didn't work to begin with, one doesn't care as much...?
-
@ireneista @GeoffWozniak based on a discussion with someone who has worked on this problem before, we want to try building a diffusion model that captures the whitespace between code tokens and can then inject it into a given parse tree, which appears to be a fairly efficient and unproblematic way to do this
@whitequark @ireneista @GeoffWozniak ~~ah, so python indentation~~
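(editor's sketch of the split being proposed above, assuming Python as the example language: the lexer token stream is exactly the part a whitespace-only model must leave untouched, so two formattings of the same code compare equal once layout tokens are dropped. `token_strings` is a hypothetical helper, not any tool's real API.)

```python
import io
import tokenize

# token kinds that carry only layout, not program content
LAYOUT = {tokenize.NL, tokenize.NEWLINE, tokenize.INDENT,
          tokenize.DEDENT, tokenize.ENDMARKER, tokenize.COMMENT}

def token_strings(source: str) -> list[str]:
    """Lexer tokens of `source` with all whitespace/layout stripped out."""
    toks = tokenize.generate_tokens(io.StringIO(source).readline)
    return [t.string for t in toks if t.type not in LAYOUT]

a = "x=[1,2,3]\n"
b = "x = [ 1, 2,\n      3 ]\n"
print(token_strings(a) == token_strings(b))  # True: same tokens, different whitespace
```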
-
@whitequark @deborahh @danlyke ie, the sort of thing a linter does?
@nxskok @whitequark @deborahh @danlyke to be fair, according to the paper, replacing for loops with while loops and vice versa, and the like, was also a goal
-
@whitequark @danlyke so … by "reformatted" I assume you mean aesthetically tidied up, with no change in functionality required?
If I got that right: wtf?
@deborahh @whitequark @danlyke
No.
"there is no existing work that performs full stylization on an arbitrary piece of code. The most common methods are rule-based linters, formatters, which are limited to a few pre-defined style rules"
-
@deborahh @whitequark @danlyke
No.
"there is no existing work that performs full stylization on an arbitrary piece of code. The most common methods are rule-based linters, formatters, which are limited to a few pre-defined style rules"
@mrkeen @deborahh @danlyke I do think that stretching the definition of what "code style" could reasonably refer to until it fits the shape of the research product is part of the problem here. (Consider that the introduction explicitly refers to the gotofail bug as something the research is supposed to help with, whereas it is plainly evident that it would only make that problem worse.)
-
i'm at a loss for words after reading a paper about reformatting code using an ML model that reports a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
@whitequark I'm slightly embarrassed that this is coming from Germany.
-
@whitequark So let me get this straight: IEEE thinks you should count it as a win if rewriting your code by vibing it preserves behavior with odds less than 15 percentage points better than a literal coin flip?
edited for clarity and to fix a typo
@disorderlyf @whitequark IEEE and ACM don't do the research, nor do they tell you how to do things; they are publishers that own the journals and conferences where researchers publish their work
-
@theeclecticdyslexic @lu_leipzig yeah if a formatter requires me to do things I don't want I simply quit using the formatter (and sometimes the codebase)
@whitequark @theeclecticdyslexic @lu_leipzig You are absolutely right. So for JS/TS we're using eslint only. It is much less strict about things but gets the job done. Line length is one of my pet peeves: I simply cannot and don't want a strict length limit, because sometimes a line is longer than the rest. For reasons. I don't use formatters either, for that reason. Works well for me.
-
@disorderlyf @whitequark IEEE and ACM don't do the research, nor do they tell you how to do things; they are publishers that own the journals and conferences where researchers publish their work
@urixturing @disorderlyf yeah. there are other issues with their models but this isn't one of them
-
i'm at a loss for words after reading a paper about reformatting code using an ML model that reports a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
@whitequark well, the paper speaks of *code style*, which is more than just formatting; but also, shouldn't we welcome negative results in science?
-
@whitequark well, the paper speaks of *code style*, which is more than just formatting; but also, shouldn't we welcome negative results in science?
@mc I feel like if the negative result is obvious given the hypothesis, it has a lot less value
-
i'm at a loss for words after reading a paper about reformatting code using an ML model that reports a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
@whitequark I do think that asking for 100.0% equivalency is something that's both necessary to ask of something you'd want to put in a CI _and_ unreasonable to ask of something that tries to solve this problem
having accidentally gone through this specific kind of exercise a few times in the last couple weeks — turning the kotlin code intellij would generate from java into kotlin code I'd be happy to put my name on — I usually reach maybe 98% compatibility, then settle for that because I identify the remaining 2% of behaviours as "hard to replicate in the new shape of the code," "minor enough not to matter" and "not desirable, actually"
once you're happy to aim somewhere south of 100.0% I guess it's interesting to figure out how close you can get — and then yeah, this approach only gets you to 64%, which is only good as a milestone for future efforts to compare against
maybe all this ends up being good for is dropping comments on PRs (and, if you recognize me, we both know how we feel about that)
-
@whitequark I do think that asking for 100.0% equivalency is something that's both necessary to ask of something you'd want to put in a CI _and_ unreasonable to ask of something that tries to solve this problem
having accidentally gone through this specific kind of exercise a few times in the last couple weeks — turning the kotlin code intellij would generate from java into kotlin code I'd be happy to put my name on — I usually reach maybe 98% compatibility, then settle for that because I identify the remaining 2% of behaviours as "hard to replicate in the new shape of the code," "minor enough not to matter" and "not desirable, actually"
once you're happy to aim somewhere south of 100.0% I guess it's interesting to figure out how close you can get — and then yeah, this approach only gets you to 64%, which is only good as a milestone for future efforts to compare against
maybe all this ends up being good for is dropping comments on PRs (and, if you recognize me, we both know how we feel about that)
@sop but I'm not doing language translation, input and output are in the same language and should have essentially identical (machine-checkably equivalent) ASTs
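(editor's sketch of what "machine-checkably equivalent ASTs" can mean, assuming Python as the example language; `same_ast` is a hypothetical helper. Comments and formatting are invisible to the parser, so only genuine code changes show up.)

```python
import ast

def same_ast(src_a: str, src_b: str) -> bool:
    """True iff both sources parse to identical ASTs, i.e. they can
    differ only in formatting, whitespace, and comments."""
    return ast.dump(ast.parse(src_a)) == ast.dump(ast.parse(src_b))

original    = "def f(x):\n    return x+1\n"
reformatted = "def f(x):\n    return (\n        x + 1\n    )\n"
print(same_ast(original, reformatted))  # True: only layout changed
print(same_ast(original, "def f(x):\n    return x+2\n"))  # False: behavior changed
```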
-
@whitequark I do think that asking for 100.0% equivalency is something that's both necessary to ask of something you'd want to put in a CI _and_ unreasonable to ask of something that tries to solve this problem
having accidentally gone through this specific kind of exercise a few times in the last couple weeks — turning the kotlin code intellij would generate from java into kotlin code I'd be happy to put my name on — I usually reach maybe 98% compatibility, then settle for that because I identify the remaining 2% of behaviours as "hard to replicate in the new shape of the code," "minor enough not to matter" and "not desirable, actually"
once you're happy to aim somewhere south of 100.0% I guess it's interesting to figure out how close you can get — and then yeah, this approach only gets you to 64%, which is only good as a milestone for future efforts to compare against
maybe all this ends up being good for is dropping comments on PRs (and, if you recognize me, we both know how we feel about that)
@whitequark (reads https://social.treehouse.systems/@whitequark/116283070331505039) oh no did I just explain something you thought obvious back to you
-
@whitequark (reads https://social.treehouse.systems/@whitequark/116283070331505039) oh no did I just explain something you thought obvious back to you
@sop i guess? basically, you can set up a system around an ML model in two ways: one where the model only gets to alter (lexer) whitespace, and one where the model gets to alter arbitrary (lexer) tokens
the paper goes for #2
i am collaborating on a project that does #1, which gives 100.0% (with the caveat above) by design — because a formatting tool that sometimes breaks code is a net negative
-
@whitequark And this is how research money is lit on fire, I guess. Why else conduct research into ML for a task that has had obvious, deterministic, efficient and well-tested solutions for decades?
@lu_leipzig @whitequark i would honestly be more interested in a deterministic but very configurable formatter, plus an ml model that writes a config for you from sample code; you just make minor adjustments to it. generally, all code styles fit in just a few hundred switches
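(editor's sketch: the "write a config from sample code" half doesn't even need ML for the easy switches; a plain frequency count recovers something like the indent width. `infer_indent_width` is a hypothetical helper, not any real formatter's API, and the fallback default of 4 is an arbitrary assumption.)

```python
from collections import Counter

def infer_indent_width(source: str) -> int:
    """Guess the indent unit by measuring how much indentation grows
    between consecutive non-blank lines and taking the most common step."""
    indents = [len(line) - len(line.lstrip(" "))
               for line in source.splitlines() if line.strip()]
    steps = [b - a for a, b in zip(indents, indents[1:]) if b > a]
    return Counter(steps).most_common(1)[0][0] if steps else 4  # default: 4

sample = "def f():\n  if True:\n    return 1\n"
print(infer_indent_width(sample))  # 2
```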
-
@lu_leipzig @whitequark i would honestly be more interested in a deterministic but very configurable formatter, plus an ml model that writes a config for you from sample code; you just make minor adjustments to it. generally, all code styles fit in just a few hundred switches
@SRAZKVT @lu_leipzig this would be ~easy to do but convincing people to implement and maintain "a few hundred switches" has been incredibly difficult; my motivation is exactly that rustfmt maintainers have been consistently unwilling to entertain that
-
@SRAZKVT @lu_leipzig this would be ~easy to do but convincing people to implement and maintain "a few hundred switches" has been incredibly difficult; my motivation is exactly that rustfmt maintainers have been consistently unwilling to entertain that
@SRAZKVT @lu_leipzig if every language i cared about (at this point: mainly rust, python, and c++) had highly configurable formatters i would not care to spend as much effort as i'm planning to on ml research
-
@SRAZKVT @lu_leipzig if every language i cared about (at this point: mainly rust, python, and c++) had highly configurable formatters i would not care to spend as much effort as i'm planning to on ml research
@whitequark @lu_leipzig most tooling devs today seem to believe in one-size-fits-all with no configurability, which is kind of sad
also i think the problem of "but what if every codebase isn't formatted exactly the same" is way overblown; once you start reading the code it really doesn't take long to adapt to a new style, barely a few minutes in my experience
-
@whitequark @lu_leipzig most tooling devs today seem to believe in one-size-fits-all with no configurability, which is kind of sad
also i think the problem of "but what if every codebase isn't formatted exactly the same" is way overblown; once you start reading the code it really doesn't take long to adapt to a new style, barely a few minutes in my experience
@SRAZKVT @lu_leipzig there is a more real problem of "some people bounce off contributing if you ask them to fix style"