i'm at a loss for words after reading a paper about reformatting code using an ML model that has a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
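For context, a minimal sketch of how an A_c-style "behaves the same" rate could be measured by differential testing. Everything here is invented for illustration (the pairs, the probe inputs, the function name `f`); it is not the paper's actual setup.

```python
# Hypothetical sketch: estimate an A_c-style metric by differential testing.
# Each pair holds an original snippet and its "reformatted" counterpart; both
# define a function `f`. A_c is the fraction of pairs whose outputs agree on
# all probe inputs.

PAIRS = [
    # (original, reformatted) -- the second pair is deliberately broken
    ("def f(x):\n    return x + 1", "def f(x): return x + 1"),
    ("def f(x):\n    return x * 2", "def f(x): return x * 3"),
]

PROBES = [0, 1, -5, 100]

def behaves_same(src_a: str, src_b: str) -> bool:
    """Run both snippets and compare f's outputs on every probe input."""
    ns_a, ns_b = {}, {}
    exec(src_a, ns_a)
    exec(src_b, ns_b)
    return all(ns_a["f"](x) == ns_b["f"](x) for x in PROBES)

def a_c(pairs) -> float:
    return sum(behaves_same(a, b) for a, b in pairs) / len(pairs)

print(a_c(PAIRS))  # 0.5: one of the two pairs preserved behavior
```

The unsettling part is that any value below 1.0 means the "reformatter" sometimes changes what the code does.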
-
i'm at a loss for words after reading a paper about reformatting code using an ML model that has a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
@whitequark what the what.
-
i'm at a loss for words after reading a paper about reformatting code using an ML model that has a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
@whitequark Just like, use one of the tools that already exists? It'll be:
- Fast
- Cheap
- Efficient
- Accurate
I don't understand any of this "industry" outside of being a massive destructive boondoggle.
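And "accurate" is cheap to prove for a rule-based tool: compare parse trees before and after, so layout-only changes are verifiably behavior-preserving. A toy sketch of that check (the reformatting rules here are stand-ins I made up, not any real tool's):

```python
# Sketch of why deterministic tools get A_c = 100% for free: after applying
# layout-only rules (strip trailing spaces, collapse runs of blank lines),
# verify the parsed AST is unchanged.
import ast

def toy_reformat(src: str) -> str:
    lines = [ln.rstrip() for ln in src.splitlines()]
    out, blank = [], False
    for ln in lines:
        if ln == "":
            if not blank:
                out.append(ln)
            blank = True
        else:
            out.append(ln)
            blank = False
    return "\n".join(out) + "\n"

def preserves_semantics(before: str, after: str) -> bool:
    """Layout-only changes leave the parsed AST identical."""
    return ast.dump(ast.parse(before)) == ast.dump(ast.parse(after))

src = "def f(x):   \n    return x + 1\n\n\n\nprint(f(1))\n"
out = toy_reformat(src)
print(preserves_semantics(src, out))  # True: only whitespace changed
```

Real formatters can (and some do) run exactly this kind of equivalence check on every file they touch, which is why "64.2% ideal" reads so strangely.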
-
i'm at a loss for words after reading a paper about reformatting code using an ML model that has a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
Why is AI as a consumer product being rushed out the door so fast? It's obviously not ready for prime time. It's unreliable, inaccurate, and fragile.
It's like a car being sent out to car dealerships with only 3 wheels with hasty promises of a future 4th wheel.
Possibly the goal isn't a car with 4 wheels but a plan for something else, similar to a Waymo power outage gridlock
Reliance on AI is a national security risk vulnerable to high fuel prices
1/
-
i'm at a loss for words after reading a paper about reformatting code using an ML model that has a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
@whitequark "92 boosts, 115 favourites" damn
I swear to god sometimes Mastodon is just "old person yells at thing".
Researchers spend tons of time and money trying to solve Sudoku in polynomial time not because Sudoku is such an important problem to humanity, but because it's an NP-hard problem, and you can thus reduce all other NP-complete problems to Sudoku and solve them all in polynomial time if you can solve Sudoku in polynomial time.
The research challenge is disentangling content from style in a learned embedding space; it's a classic representation learning problem that's genuinely hard: 1) Two functions that do the same thing should have identical content embeddings but different style embeddings, 2) Style must generalise to unseen code patterns, not just pattern-match known rules, 3) It's unsupervised, so there are no labeled (code_A, same_code_in_style_B) training pairs.
Code formatting is actually a very good medium to test this hypothesis, because you have an infinite latent space of code that does the exact same thing but is stylistically different.
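A toy illustration of that content/style split, with snippet names I invented: two functions with identical behavior ("content") but different surface form ("style"), checked by running both and by comparing their token streams.

```python
# Two snippets that behave identically but are stylistically different:
# the kind of pair a disentangled representation should map to the same
# content embedding and different style embeddings.
import io
import tokenize

snake = (
    "def total(values):\n"
    "    result = 0\n"
    "    for v in values:\n"
    "        result += v\n"
    "    return result\n"
)
terse = "def total(vs): return sum(vs)\n"

def tokens(src):
    """Non-whitespace token strings, a crude proxy for surface style."""
    return [t.string for t in tokenize.generate_tokens(io.StringIO(src).readline)
            if t.string.strip()]

ns1, ns2 = {}, {}
exec(snake, ns1)
exec(terse, ns2)

same_behavior = all(ns1["total"](xs) == ns2["total"](xs)
                    for xs in ([], [1, 2, 3], [10]))
same_style = tokens(snake) == tokens(terse)
print(same_behavior, same_style)  # True False
```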
-
i'm at a loss for words after reading a paper about reformatting code using an ML model that has a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
@whitequark feck that
-
@whitequark @deborahh @danlyke ie, the sort of thing a linter does?
-
@whitequark "92 boosts, 115 favourites" damn
I swear to god sometimes Mastodon is just "old person yells at thing".
Researchers spend tons of time and money trying to solve Sudoku in polynomial time not because Sudoku is such an important problem to humanity, but because it's an NP-hard problem, and you can thus reduce all other NP-complete problems to Sudoku and solve them all in polynomial time if you can solve Sudoku in polynomial time.
The research challenge is disentangling content from style in a learned embedding space; it's a classic representation learning problem that's genuinely hard: 1) Two functions that do the same thing should have identical content embeddings but different style embeddings, 2) Style must generalise to unseen code patterns, not just pattern-match known rules, 3) It's unsupervised, so there are no labeled (code_A, same_code_in_style_B) training pairs.
Code formatting is actually a very good medium to test this hypothesis, because you have an infinite latent space of code that does the exact same thing but is stylistically different.
@budududuroiu the reason I was reading the paper is because I'm working on the same problem and I think the encoding presented in the paper makes no sense at all to use
-
@budududuroiu the reason I was reading the paper is because I'm working on the same problem and I think the encoding presented in the paper makes no sense at all to use
@whitequark to use for what? It's research, it's not meant to create something for industry use. Academia already suffers from the "File-drawer problem". I also did research on using GANs for Outlier Detection, when most of the time Outlier Detection is a classification problem, not a learned representation problem.
-
@whitequark to use for what? It's research, it's not meant to create something for industry use. Academia already suffers from the "File-drawer problem". I also did research on using GANs for Outlier Detection, when most of the time Outlier Detection is a classification problem, not a learned representation problem.
@budududuroiu yes yes i know you're here because you look at trends and start arguments, now move on to something else and stop wasting my time
-
@budududuroiu yes yes i know you're here because you look at trends and start arguments, now move on to something else and stop wasting my time
@whitequark lmao, have fun "clowning" on stuff you don't understand
-
@whitequark lmao, have fun "clowning" on stuff you don't understand
@budududuroiu go take a short walk off a long pier
-
@porglezomp you'll love Fig. 6
@whitequark "If" right next to "if"
-
i'm at a loss for words after reading a paper about reformatting code using an ML model that has a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
@whitequark if I have understood you correctly, they're saying 64% functional is a satisfactory result?
-
@whitequark if I have understood you correctly, they're saying 64% functional is a satisfactory result?
@FibroJedi that's my read of it yeah
-
i'm at a loss for words after reading a paper about reformatting code using an ML model that has a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
@whitequark Whenever I hear about these benchmarks I can't help but wonder how people can say these things with a straight face.
-
@FibroJedi that's my read of it yeah
@whitequark Maybe they'd like their phone and car 64% functional as a real world test
Some of those logic misses/switches are disturbing. I don't know how it's allowable.
If the code works 100%, and "reformatting" it reduces that % then it's wrong by definition.
-
@whitequark @porglezomp I'm spitting out my drink at j++ → j--. Holy shit.
@xgranade @whitequark @porglezomp
I think reversing the `j` for loop is actually wanted by them? It's labelled "ground truth", and it is a potentially valid optimisation
-
i'm at a loss for words after reading a paper about reformatting code using an ML model that has a measured statistical quantity A_c which says how often the reformatted code behaves the same as the original
the "ideal" (their choice of words) case is 64.2%
@whitequark But... why? Why not just use a linter?
-
@whitequark because "the thing we're promoting is incredibly dangerous, and not in fun ways" is not really the thing anyone wants to be cited for
@ireneista @whitequark Now, show me the numbers on the effort to make a rule-based style file compared to this. Because I'm sure that A_c is 100.0 in that case.

-
@whitequark But... why? Why not just use a linter?
@DaKangaroo see edit