as a toolmaker, there's an inherent tradeoff that I encountered years ago when I just started working at ChipFlow; what I was asked was essentially to develop Amaranth further as a way to de-skill the hardware design (RTL) field. I agreed because I don't really value the skill of knowing every one of the five hundred different ways in which SystemVerilog is out to fuck you over; I think we'd be better off with tooling that doesn't require you to spend years developing this skill, and that would be a lot more friendly to new RTL developers, and people for whom RTL isn't the primary area of work.
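To make the footgun point concrete: SystemVerilog will, for example, silently truncate a wide value assigned to a narrow signal. Catching that at elaboration time is the kind of thing an embedded-DSL approach makes easy. The sketch below is a hypothetical toy illustrating the idea, not Amaranth's actual API:

```python
class Signal:
    """Toy fixed-width value. A hypothetical illustration, not Amaranth's API."""

    def __init__(self, width, value=0):
        self.width = width
        self.value = value & ((1 << width) - 1)

    def assign(self, other):
        # SystemVerilog silently truncates on a width mismatch; a tool built
        # as a Python eDSL can refuse up front instead.
        if other.width > self.width:
            raise ValueError(
                f"cannot assign {other.width}-bit value to {self.width}-bit signal")
        self.value = other.value

narrow = Signal(4)
wide = Signal(8, value=0xAB)
try:
    narrow.assign(wide)   # SystemVerilog would quietly keep only the low 0xB
except ValueError as err:
    print(err)            # prints: cannot assign 8-bit value to 4-bit signal
```

The point isn't this particular check; it's that a host language with real error reporting lets the tool refuse the five hundredth trap before it reaches silicon.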
I also knew that ChipFlow was on the lookout for opportunities to shoehorn AI somewhere into the process. (at first this was limited to "test case generation"—a frankly ill-conceived idea but one I could hold my nose at and accept—nowadays they've laid off everyone and gone all-Claude.) however, it was clear pretty early on that making hardware development more accessible to new people inherently means making it more accessible to new wielders of the wrong machine. benefiting everyone (who isn't a committed SystemVerilog developer) means benefiting everyone, right?
you can trace this trend in adjacent communities as well. Rust and TypeScript have rich type systems that generally help you write correct code—or bullshit your way towards something that looks more or less correct. I'm pretty sure it's a part of the reason Microsoft spent so much money on TypeScript.
so today I find myself between a rock and a hard place: every incremental improvement in tooling that I build that makes the field more accessible to new people also means there's less of a barrier to people who just want to extract value from it, squeezing it like Juicero (quite poorly but with an aggressively insulting amount of money behind it). so what do I do now?..
@whitequark As a fellow toolmaker, I feel you!
I know Microsoft has created a couple of drivers using my tools. Let's say it's 3 drivers and it saved a junior San Fran engineer 1 day of work each. Then by my rough estimation they'd have saved about $1500.
In the mean time, I have seen none of that value in return.
Idk what to think of it. I made the tool for people like me and I want them to have it for free. But yeah, then MS also gets it for free unless I do weird license things.
-
I used to do time at Google. Passed the interviews, and, in between engineering, got the training to administer them, and took a bunch of interviews from new applicants before I left.
A running theme in their interviewing criteria, at least back then — it's been a while — was that they looked for an applicant's ability to shift between levels of abstraction.
In a recruitment context, this tends to be conceptualised as a matter of skill and knowledge, but it's actually also a matter of design, to a significant degree. When more effort is put into plugging abstraction leakage, fewer people have practical "everyday" reasons for moving across those tightly plugged boundaries, or get the experience of doing it, and, well, both de-skilling and baroquisation can set in as a result.
Maybe putting effort into well-designed abstraction leakages, rather than trying to abolish them, would be a useful and pro-social subthread in the work against enshittification. I'm also going to argue that literate programming is a useful tool for managing and understanding (some kinds of) well-designed abstraction leakages.
@riley @xgranade I think designing around a high-skill-specialization expectation has historically been harmful in this industry; consider how the expectation of needing to know C (a language notoriously lacking in guardrails and good tooling) to do systems programming has both directly contributed to the pervasive gatekeeping and also created a barrier to entry for people not willing to dedicate their life to navigating the social and technical aspects of it. it's pretty difficult for me to see how this could be turned around to be prosocial
-
@diondokter I don't really mind that particular bit because my goal with OSS/OSHW is less "creating value" (that's on the agenda but it's more incidental) and more "terraforming", changing the rules by which the world works. I think this is a more interesting mindset to approach OSxx with because a lot of the systems we've been building in the last two decades are of such a high quality that no commercial entity would possibly purchase them (since it's not justifiable to build something like that for a business that would run just fine with a much shittier version of the same thing).
yes, under a different economic system, you could have (maybe?) captured some of that value. but under our current one, if Microsoft had to pay you $1500 they would've probably not used your tools at all (because the overhead of figuring out how to get you that money multiplies it severalfold and takes up valuable time of administrative and legal staff). my overall feeling about it, personally, is just "shrug"; I build tools for different reasons
-
@whitequark I think the processes of value-extraction under capitalism have - structural limitations? - which mean tools like Amaranth are unlikely to be used as part of a destructive and alienating hype bubble.
namely, for tools to contribute to a hype bubble it's not enough that a given end is easier than before; it has to be _the easiest_ way to achieve hype at any given moment. RTL design is never the fastest way to a consumer demo; so Amaranth isn't going to be implicated?
-
@coral oh, without overtly violating my NDA, I'll just say that flashy demos were absolutely involved
-
@whitequark I'm thinking something like "An abstraction shall not leak without a good reason to", and considering it an important principle of good design that it is the end engineer (or end user) who gets to ultimately override what reasons are good enough, should upstream reasonings turn out to be problematic. Things like "Thou shalt not hamper logic probe access" would then inherently follow.
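The principle can be shown in miniature: hide the internals by default, but leave one deliberate, documented escape hatch so the end engineer can override upstream's reasoning when their own reasons outrank it. A hypothetical Python sketch — names and register values invented for illustration:

```python
class TemperatureSensor:
    """Abstraction over a raw device register; it leaks deliberately, not by accident."""

    def __init__(self, raw_register=0x1A90):
        self._raw = raw_register

    @property
    def celsius(self):
        # The blessed interface: a scaled, calibrated reading.
        return self._raw / 256.0 - 10.0

    def raw_register(self):
        """The deliberate escape hatch -- the software analogue of leaving
        logic-probe access on a board: the unscaled value, for whoever has
        a good enough reason to look underneath the abstraction."""
        return self._raw

sensor = TemperatureSensor()
print(round(sensor.celsius, 4))    # the abstraction: 16.5625
print(hex(sensor.raw_register()))  # the escape hatch: 0x1a90
```

The design choice is that the leak has a name and a docstring rather than requiring the engineer to pry the lid off.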
-
This is very close to where I parted ways with the FSF. There's always a tension between enabling people to create the desirable thing and enabling people to make the undesirable. Their view is that it should be very hard to make the undesirable thing, and slightly easier to make the desirable thing. My view is that you should make it so easy to make the desirable thing that people always have a choice and then, once the desirable thing exists, you can apply other pressures to get rid of the undesirable thing.
I don't think deskilling is the right framing for a lot of these things, it's about where you focus cognitive load. There's a line from the Stantec ZEBRA's manual (1956) that says that the 150-instruction limit is not a real problem because no one could possibly write a working program that complex. Small children write programs more complex than that now. That's not a loss to the world, the fact that you don't have to think about certain things means you can think about other things, such as good algorithm and data structure design.
There was research 20ish years ago that compared C and Java programs and found that the Java programs tended to be more efficient for the same amount of developer effort, because Java programmers would spend more time refining data structure and algorithmic choices and improve entire complexity classes, whereas C programmers spent the time tracking down annoying bug classes that are impossible in Java and doing micro-optimisations. Of course, under time pressure, Java developers will simply ship the first thing that works and move on to new features rather than doing that optimisation. C programmers would take longer to get to the MVP level, and their poorly optimised code was often faster than poorly optimised Java.
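The "improve entire complexity classes" kind of refinement is the familiar move below — sketched in Python for brevity (the study in question compared C and Java), with invented function names:

```python
def dedup_first_draft(items):
    """The first thing that works: O(n^2), a linear scan per element."""
    out = []
    for x in items:
        if x not in out:       # rescans the whole output list every time
            out.append(x)
    return out

def dedup_refined(items):
    """Same observable behaviour after refining the data structure: O(n)."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:      # constant-time hash lookup instead
            seen.add(x)
            out.append(x)
    return out

data = [3, 1, 3, 2, 1]
assert dedup_first_draft(data) == dedup_refined(data) == [3, 1, 2]
```

A garbage collector and memory-safe collections free up exactly the attention this second pass requires.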
I see LLMs as very different because they don't provide consistent abstractions. A programmer in a high-level language has a set of well-defined constraints on how their language is lowered to the target hardware and can reason about things, while allowing their run-time environment to make choices within those constraints. Vibe coding does not do this, it delegates thinking to a machine, which then generates code that is not working within a well-defined specification. This really is deskilling because it's not giving you a more abstract reasoning framework, it's removing your ability to reason.
Letting people accomplish more with less effort, in an environment where their requirements are finite, ends up shifting power to individuals, because it reduces the value of economies of scale.
@david_chisnall@infosec.exchange
To be honest I think you are misrepresenting the #FSF's ethical position on the matter, which is perfectly aligned with your own: hence the freedom of use for any purpose, which is a strong requirement of any #FreeSoftware license.
@whitequark@treehouse.systems
-
@whitequark Or I might be misunderstanding your argument. Would you like to elaborate on it?
-
@giacomo @david_chisnall I think you'll find that using search to insert yourself uninvited into conversations with people you don't know is a poor way to promote your cause, whatever that is.
-
@whitequark Yes, all abstractions leak.
But sometimes, people like to pretend, and/or make laws about pretending, that some don't, or mustn't, or "it's impossible to cross this abstraction boundary, so anybody who does it must be harshly punished" kind of thing. Likewise, some design cultures[1] like to build elaborate wrappers for hiding abstraction leakages, because of the simplistic notion that such leaks are bad design.
[1] Particularly the "enterprise software" school of thought, in what I've seen. But the idea can also be seen outside big corporate environments.
-
@whitequark Yeah agreed. The fact that MS has used my tool didn't cost me anything either.
But like I said, I've been building it to help people like me and I think it's succeeding at that. And it generally makes me happy seeing people use it successfully.
> such a high quality that no commercial entity would possibly purchase them
lol yeah, seems paradoxical, but very likely true
-
> lol yeah, seems paradoxical, but very likely true
I didn't come up with that; it's a rephrasing of a very good post on the topic I've read and subsequently neglected to bookmark
-
-
I think you're misunderstanding my point. The FSF decided to promote the creation of Free Software (a goal I agree with) by creating complex licenses.
Developing software that reuses other software requires understanding that software's license. The FSF's licenses are sufficiently complex that I have had multiple conversations with lawyers (including some with the FSF's lawyers) where they have not been able to tell me whether a specific use case is permitted. This places a burden on anyone developing Free Software using FSF-approved licenses, because there are a bunch of use cases that the FSF would regard as ethical, but where their licenses do not clearly permit the use.
It places a larger burden on people doing things that the FSF disapproves of. They have to come up with exciting loopholes. Unfortunately, it turns out that this isn't that hard and once you've found a loophole you can keep using it. The FSF responds with even more complex licenses.
EDIT: To be clear, the FSF and I have very similar goals. I just think that their strategy is completely counterproductive. Complex legal documents empower people who can afford expensive lawyers. We're increasingly seeing companies using AGPLv3 to control nominally-Free Software ecosystems.
-
@whitequark You might be overgeneralising; I only did Google for a year, and was a sort of an infrastructure-consultant-for-statistics-support before that. We had numerous Big Data clients (in a time when twenty PCs was a Big Cluster), and, well, data warehouse systems tend to be places that need to talk to a lot of operative software (and make sense of the data that comes from there, but we had other people who tended to specialise on the data-shape-and-quality kind of problems).
-
@whitequark Right. So, perhaps, different contexts.
-
@whitequark Ah, I'd missed the platform config demo. They are trying!
Re the degradation of the "open-guild" position of the RTL designer; I also don't know how to feel. Removing footguns is marginally deskilling, but it's such a social good that I can't object to it.
Other industries have unions to smooth the social consequences of changing labor values. Govs and LLMs are both tools that seek to bypass that; it's not unreasonable to form guild-like closed systems of practice in response.
-
@coral I fully expect the formation of closed-guild systems (and am already part of some of them); but I also don't know what the future holds, and this is definitely a potent way of sawing off the branch on which one sits, so I only admit those as a last resort (and a hedge against the failure of other responses), not as the first response