@whitequark which one is the latter?
-
@whitequark good thing university taught rawdogging riscv assembly

-
@whitequark i am fundamentally against making things hard to reimplement, especially compilers, because i want to put as little friction as possible in the way of people porting software written in a language to their platform.
making reimplementations impossible isn't how you protect users, it's how you lock them to a specific technical instance, and to the whims of whoever decides the direction it goes. allowing reimplementations lets users at least partially put their trust into another party instead, giving them much more power
non-standard extensions in compilers are not enough of a cost to not do it imo, since it is up to projects to be responsible about which extensions to use or not (see the sketch below)
PS: also, fuck ISO -- requiring payment for access to documentation is the opposite of empowerment, and we can have standards without ISO anyway
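(a minimal sketch of what "responsible extension use" can look like, using rust's nightly-only features as a stand-in for non-standard compiler extensions; the `nightly` cargo feature name and the simd fast path are illustrative assumptions, not something from the thread:)

```rust
// lib.rs -- the unstable fast path is opt-in via a cargo feature
// (in Cargo.toml: [features] nightly = []), so the crate still builds
// on any conforming implementation without it.
#![cfg_attr(feature = "nightly", feature(portable_simd))]

#[cfg(feature = "nightly")]
pub fn sum(xs: &[f32]) -> f32 {
    // extension path: uses the unstable portable_simd api
    use std::simd::{f32x4, num::SimdFloat};
    let mut chunks = xs.chunks_exact(4);
    let mut acc = f32x4::splat(0.0);
    for c in chunks.by_ref() {
        acc += f32x4::from_slice(c);
    }
    acc.reduce_sum() + chunks.remainder().iter().sum::<f32>()
}

#[cfg(not(feature = "nightly"))]
pub fn sum(xs: &[f32]) -> f32 {
    // baseline path: plain stable rust, works on every implementation
    xs.iter().sum()
}
```

a compiler that only implements the baseline still builds the crate; the extension only ever buys speed.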
-
@SRAZKVT now, i don't think compilers should be intentionally hard to reimplement. i just don't think that "ease of reimplementation" is a valuable target to pursue on its own and it has a somewhat negative effect on the language overall; whether this negative effect will become a serious problem in practice basically depends on how homogeneous your culture is, i think
-
@whitequark it is up to the maintainer to decide which extensions they require; if a downstream user's compiler doesn't support them, then they can either add them to their compiler, patch the codebase to not require them, or go for something else instead
-
@SRAZKVT now, i don't think compilers should be intentionally hard to reimplement. i just don't think that "ease of reimplementation" is a valuable target to pursue on its own and it has a somewhat negative effect on the language overall; whether this negative effect will become a serious problem in practice basically depends on how homogeneous your culture is, i think
@SRAZKVT or to put it in much more primitive terms: if you fork the language then have the decency to change the name, too
-
@whitequark it is up to the maintainer to decide which extensions they require; if a downstream user's compiler doesn't support them, then they can either add them to their compiler, patch the codebase to not require them, or go for something else instead
@SRAZKVT the practical outcome of all three cases is make-work
-
@SRAZKVT now, i don't think compilers should be intentionally hard to reimplement. i just don't think that "ease of reimplementation" is a valuable target to pursue on its own and it has a somewhat negative effect on the language overall; whether this negative effect will become a serious problem in practice basically depends on how homogeneous your culture is, i think
@whitequark obviously, it isn't absolute, but if you have the option as a language designer between adding just syntax sugar around already-existing features and adding a whole new component, then the former should be prioritised
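(to illustrate the kind of sugar being described, with an example of my own rather than one from the thread: rust's `?` operator is, to a first approximation, sugar over a `match` that every implementation already supports, so it adds almost no new reimplementation surface:)

```rust
use std::num::ParseIntError;

// `?` is (approximately -- the real desugaring goes through the Try
// trait) sugar over a match on Result, a feature the language already has.
fn parse_pair(a: &str, b: &str) -> Result<(i32, i32), ParseIntError> {
    let x = a.parse::<i32>()?;
    // the line above behaves like this explicit expansion:
    let y = match b.parse::<i32>() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    Ok((x, y))
}

fn main() {
    assert_eq!(parse_pair("1", "2"), Ok((1, 2)));
}
```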
-
@SRAZKVT the practical outcome of all three cases is make-work
@whitequark yes, making software work on a system it wasn't designed for is make-work, it would be regardless
having more options on how to tackle it makes it less bad though
-
@whitequark yes, making software work on a system it wasn't designed for is make-work, it would be regardless
having more options on how to tackle it makes it less bad though
@SRAZKVT there's several things implicit here that i don't really like:
- placing the burden of making it work on the end user and/or maintainer (ocaml sidesteps this nicely by providing a baseline bytecode interpreter that's mostly fast enough; no language extensions are involved at any point)
- biasing the language towards the endless scope-creep of implementations that gave us c, instead of going "no, if you want this to run on an 8-bit AVR, get a different language, this one isn't fit for the use case" (which would leave everyone involved happier in those cases)
-
@SRAZKVT there's several things implicit here that i don't really like:
- placing the burden of making it work on the end user and/or maintainer (ocaml sidesteps this nicely by providing a baseline bytecode interpreter that's mostly fast enough; no language extensions are involved at any point)
- biasing the language towards the endless scope-creep of implementations that gave us c, instead of going "no, if you want this to run on an 8-bit AVR, get a different language, this one isn't fit for the use case" (which would leave everyone involved happier in those cases)
@whitequark yes, the language should have a baseline that is expected to be implemented everywhere, that's the language without extensions
widely implemented extensions should eventually be included in the baseline to improve compatibility
and for the second, yeah, but if you are on an 8-bit avr, you likely don't need a kernel with system utilities written by someone else who has no knowledge of your system; you'll likely need something completely custom anyway
-
@whitequark yes, the language should have a baseline that is expected to be implemented everywhere, that's the language without extensions
widely implemented extensions should eventually be included in the baseline to improve compatibility
and for the second, yeah, but if you are on an 8-bit avr, you likely don't need a kernel with system utilities written by someone else who has no knowledge of your system; you'll likely need something completely custom anyway
@SRAZKVT we are talking past each other. ocaml's situation that i'm mentioning is "if you are on certain platforms, then if you want your code faster, you're out of luck", in contrast to an approach where "if you are on certain platforms, you have to use certain extensions to make things faster". i think that while both have merit, the former is severely underutilized. not every platform needs to be supported equally. this is not the same "baseline" as a "core without extensions", in that nobody except for the compiler maintainer and the people using that platform has to spend effort on a platform they never use.
for the latter part, rust has an 8-bit avr port that i've always found fairly senseless. it isn't a very nice thing to do to others to take a language where programmers could previously assume that a machine word is at least 32 bits and extend it to an 8-bit microcontroller series that violates that assumption. i've always thought it should've just been left out of scope entirely
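(a sketch of how that word-size assumption could at least be made explicit instead of silently violated; this is my illustration, not something the avr port does: a const assertion that refuses to compile on targets like avr, where `usize` is 16 bits:)

```rust
// fails at compile time on targets where usize is narrower than 32 bits
// (e.g. avr), instead of misbehaving at run time.
const _: () = assert!(
    usize::BITS >= 32,
    "this code assumes a machine word of at least 32 bits"
);

fn main() {
    // code relying on the assumption, e.g. sizes past 64 KiB:
    let big: usize = 1 << 20;
    println!("{big}");
}
```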
-
@whitequark @SRAZKVT
> i don't think bootstrapping and having a stable abi are an essential component of a healthy ecosystem. in particular not having a robust interoperability story can motivate people to reimplement a lot of existing software, hopefully while taking lessons learned to heart
rust doesn't have a stable abi across rust <-> rust modules/crates, which does the opposite of what you say -- all it does is make rust-rust dynamic linking impossible, so people have to drop to the system abi for it, and it invalidates any sort of build cache whenever you update the compiler
-
@whitequark @SRAZKVT
> i don't think bootstrapping and having a stable abi are an essential component of a healthy ecosystem. in particular not having a robust interoperability story can motivate people to reimplement a lot of existing software, hopefully while taking lessons learned to heart
rust doesn't have a stable abi across rust <-> rust modules/crates, which does the opposite of what you say -- all it does is make rust-rust dynamic linking impossible, so people have to drop to the system abi for it, and it invalidates any sort of build cache whenever you update the compiler
@navi @SRAZKVT i know how rust works. any sort of friction at module boundaries creates a dual effect: first, it disincentivizes people from maintaining mixed codebases (we'd see a lot more mixed rust/c++ codebases if you could directly use polymorphic rust methods from c++, for example); second, it lets you avoid freezing the internals of your runtime on an implementation that more likely than not has significant flaws (c++'s itanium abi dynamic_cast, for example), or at least it reduces how quickly that happens. these two things let you focus on addressing just your own mistakes, instead of adding everyone else's mistakes into the mix
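(what "dropping to the system abi" looks like concretely, as a minimal sketch with hypothetical names: the only boundary rustc promises to keep stable is the C one, so a rust dynamic library exposes `#[repr(C)]` data and `extern "C"` functions, and anything generic or polymorphic stays stuck on one side -- exactly the friction described above:)

```rust
// built as a dynamic library with crate-type = "cdylib" in Cargo.toml;
// another rust (or c++) binary links against it through the C abi only.
#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

#[no_mangle]
pub extern "C" fn point_norm(p: Point) -> f64 {
    // nothing generic or trait-object-shaped can cross this boundary;
    // rust-native types and polymorphic apis stay on this side of it
    (p.x * p.x + p.y * p.y).sqrt()
}
```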
-
@navi @SRAZKVT i know how rust works. any sort of friction at module boundaries creates a dual effect: first, it disincentivizes people from maintaining mixed codebases (we'd see a lot more mixed rust/c++ codebases if you could directly use polymorphic rust methods from c++, for example); second, it lets you avoid freezing the internals of your runtime on an implementation that more likely than not has significant flaws (c++'s itanium abi dynamic_cast, for example), or at least it reduces how quickly that happens. these two things let you focus on addressing just your own mistakes, instead of adding everyone else's mistakes into the mix
@whitequark @SRAZKVT
a stable abi does not need to be exported to other languages
it'd even be ideal to have rustc have an abi for rlibs and say "do not use this from somewhere else, we will not help you" -- and that would solve so many packaging pains with rust
a systems programming language without a stable abi is pure hell
for application programming, maybe; not for systems
-
@whitequark @SRAZKVT
a stable abi does not need to be exported to other languages
it'd even be ideal to have rustc have an abi for rlibs and say "do not use this from somewhere else, we will not help you" -- and that would solve so many packaging pains with rust
a systems programming language without a stable abi is pure hell
for application programming, maybe; not for systems
@navi @SRAZKVT there is nothing unique about systems programming that requires a stable ABI. there are many things about old Linux distributions that are built around the assumption of having one, but that's a separate thing, and if we are to have a discussion of this at all, that's the one i want to have, not a proxy for it
-
@navi @SRAZKVT there is nothing unique about systems programming that requires a stable ABI. there are many things about old Linux distributions that are built around the assumption of having one, but that's a separate thing, and if we are to have a discussion of this at all, that's the one i want to have, not a proxy for it
@whitequark @SRAZKVT
the unique thing is the kind of software that is written in them
and as someone who has suffered packaging rust and tools in similar languages, that's a discussion i can have if desired, yes -- mostly involving dynamic linking, but even with static linking, not being able to package prebuilds also creates issues (not even considering the pain that lockfiles are)
-
@whitequark @SRAZKVT
the unique thing is the kind of software that is written in them
and as someone who has suffered packaging rust and tools in similar languages, that's a discussion i can have if desired, yes -- mostly involving dynamic linking, but even with static linking, not being able to package prebuilds also creates issues (not even considering the pain that lockfiles are)
@navi @SRAZKVT my position on distributions boils down to "it is pretty weird and otherwise unprecedented that we've normalized that once you release software, some other group of people (who don't really understand how it works) is going to build and publish it, giving you little to no say in the matter, but leaving you responsible for support in the end". insofar as this is true, i think the value distributions provide to me as a developer, and also as a user, is neutral to negative. Debian is the worst at this, but i think the entire model should be replaced
-
@navi @SRAZKVT my position on distributions boils down to "it is pretty weird and otherwise unprecedented that we've normalized that once you release software, some other group of people (who don't really understand how it works) is going to build and publish it, giving you little to no say in the matter, but leaving you responsible for support in the end". insofar as this is true, i think the value distributions provide to me as a developer, and also as a user, is neutral to negative. Debian is the worst at this, but i think the entire model should be replaced
@whitequark @SRAZKVT
and i think that staggering software distribution is a benefit for the user, as a ton of developers never consider setups that differ from their own even in the slightest -- as an example, nix and gentoo packagers so often send dozens and dozens of patches upstream fixing build systems that had baked-in expectations
i've personally sent patches out fixing autotools issues with cross-building a handful of packages from portage
sure, there are distros whose people make no effort to learn about the software they package, nor to fix issues, but most if not all packagers i've ever talked to are not like that at all, and that includes packagers for gentoo, nix, guix, alpine, void, and a few debian ones (though i am *well* aware of many issues debian in general has with packaging)
decent distros have their own bug tracker; on gentoo the majority of bugs go there before going upstream (if the problem turns out not to be with the downstream packaging) -- it does help when the package has some branding build-time flags where we can replace e.g. the upstream issue tracker url with our bug tracker, which makes it easier to direct users there first
staggered releases are to the benefit of users: if users had gotten the newest xz as soon as the developer pushed it, instead of having it land on a testing branch first, how many more people would have been affected on day 1?
in particular, gentoo also held back the shadow package from hitting stable for a while because new versions had a ton of refactoring of security-sensitive code, so the packager wanted to be sure it was all okay before pushing it to everyone (though if one wants, they can select per-package ~$arch to enable testing packages on said $arch)
--
and redistribution being seen as weird is odd to me: the nature of foss is collaborative and communitarian, and it's not unique to foss either; we're pretty okay with libraries redistributing books
-
@whitequark @SRAZKVT
and i think that staggering software distribution is a benefit for the user, as a ton of developers never consider setups that differ from their own even in the slightest -- as an example, nix and gentoo packagers so often send dozens and dozens of patches upstream fixing build systems that had baked-in expectations
i've personally sent patches out fixing autotools issues with cross-building a handful of packages from portage
sure, there are distros whose people make no effort to learn about the software they package, nor to fix issues, but most if not all packagers i've ever talked to are not like that at all, and that includes packagers for gentoo, nix, guix, alpine, void, and a few debian ones (though i am *well* aware of many issues debian in general has with packaging)
decent distros have their own bug tracker; on gentoo the majority of bugs go there before going upstream (if the problem turns out not to be with the downstream packaging) -- it does help when the package has some branding build-time flags where we can replace e.g. the upstream issue tracker url with our bug tracker, which makes it easier to direct users there first
staggered releases are to the benefit of users: if users had gotten the newest xz as soon as the developer pushed it, instead of having it land on a testing branch first, how many more people would have been affected on day 1?
in particular, gentoo also held back the shadow package from hitting stable for a while because new versions had a ton of refactoring of security-sensitive code, so the packager wanted to be sure it was all okay before pushing it to everyone (though if one wants, they can select per-package ~$arch to enable testing packages on said $arch)
--
and redistribution being seen as weird is odd to me: the nature of foss is collaborative and communitarian, and it's not unique to foss either; we're pretty okay with libraries redistributing books
@whitequark @SRAZKVT i don't like software patches beyond "fix major $bug that didn't land upstream yet" either! it's not a coincidence that most of the distros i mentioned ship vanilla software