as a toolmaker, there's an inherent tradeoff that I encountered years ago when I just started working at ChipFlow; what I was asked to do was essentially to develop Amaranth further as a way to de-skill the hardware design (RTL) field. I agreed because I don't really value the skill of knowing every one of the five hundred different ways in which SystemVerilog is out to fuck you over; I think we'd be better off with tooling that doesn't require you to spend years developing this skill, and that would be a lot friendlier to new RTL developers and to people for whom RTL isn't their primary area of work.
I also knew that ChipFlow was on the lookout for opportunities to shoehorn AI somewhere into the process. (at first this was limited to "test case generation"—a frankly ill-conceived idea, but one I could hold my nose at and accept—nowadays they've laid off everyone and gone all-Claude.) however, it was clear pretty early on that making hardware development more accessible to new people inherently means making it more accessible to new wielders of the wrong machine. benefiting everyone (who isn't a committed SystemVerilog developer) means benefiting everyone, right?
you can trace this trend in adjacent communities as well. Rust and TypeScript have rich type systems that generally help you write correct code—or bullshit your way towards something that looks more or less correct. I'm pretty sure it's a part of the reason Microsoft spent so much money on TypeScript.
so today I find myself between a rock and a hard place: every incremental improvement in tooling that I build that makes the field more accessible to new people also means there's less of a barrier to people who just want to extract value from it, squeezing it like Juicero (quite poorly but with an aggressively insulting amount of money behind it). so what do I do now?..
(as a matter of fact, if you ask an LLM to write you some Amaranth, it usually just stops doing that after a few dozen lines and switches to SystemVerilog, so I don't think I have personally contributed to this; the above is more me questioning the structural factors we live with today)
-
@whitequark I've been staring down the same conundrum for a long time. I think the only answer is that we have to build solidarity, rebellion, and moral progress in the social, interpersonal domain rather than hoping technical improvements will be emancipatory forces
it's one reason I've been focusing more on education - a lot of the reason right-wing propaganda works is simply because people are ignorant of patterns that play out over history, and they take motivated liars at their word
-
@migratory to me, going out of my way to spend years working on making a (technological, in this case) field more accessible is very much in the interpersonal domain: it is motivated by social factors (empathy towards users of technology that causes misery and alienation) and aimed at changing social factors (participation in a field)
-
This is very close to where I parted ways with the FSF. There's always a tension between enabling people to create the desirable thing and enabling people to make the undesirable. Their view is that it should be very hard to make the undesirable thing, and slightly easier to make the desirable thing. My view is that you should make it so easy to make the desirable thing that people always have a choice and then, once the desirable thing exists, you can apply other pressures to get rid of the undesirable thing.
I don't think deskilling is the right framing for a lot of these things; it's about where you focus cognitive load. There's a line from the Stantec ZEBRA's manual (1956) that says that the 150-instruction limit is not a real problem because no one could possibly write a working program that complex. Small children write programs more complex than that now. That's not a loss to the world: the fact that you don't have to think about certain things means you can think about other things, such as good algorithm and data structure design.
There was research 20ish years ago comparing C and Java programs; it found that the Java programs tended to be more efficient for the same amount of developer effort, because Java programmers would spend more time refining data structure and algorithmic choices and improving entire complexity classes, whereas C programmers spent the time tracking down annoying bug classes that are impossible in Java and doing microoptimisations. Of course, under time pressure, Java developers will simply ship the first thing that works and move on to new features rather than doing that optimisation. C programmers would take longer to get to the MVP level, and their poorly optimised code was often faster than poorly optimised Java.
I see LLMs as very different because they don't provide consistent abstractions. A programmer in a high-level language has a set of well-defined constraints on how their language is lowered to the target hardware and can reason about things, while allowing their run-time environment to make choices within those constraints. Vibe coding does not do this: it delegates thinking to a machine, which then generates code that does not work within a well-defined specification. This really is deskilling, because it's not giving you a more abstract reasoning framework, it's removing your ability to reason.
Letting people accomplish more with less effort, in an environment where their requirements are finite, ends up shifting power to individuals, because it reduces the value of economies of scale.
-
@whitequark "Democratization" is a friendly-sounding euphemism for "commoditization", often in its most negative possible form.
-
@david_chisnall this is an interesting view, I'll have to think about it.
-
@whitequark I feel like the rise of LLMs is mostly just an acceleration of the trend that already existed.
There's long been a tension within FLOSS software between the fact that you're making technology more accessible and the fact that you're helping out greedy capitalists who just want to squeeze every ounce of profit out.
This new trend of LLMs has absolutely accelerated that, and I can definitely see why it causes people to pause and wonder what it is that they're doing and whether they want to support that.
But I still think it's good to provide better, accessible tooling for new people.
Shitty people are going to squeeze every ounce of profit and control out of everything no matter what you do. But in the meantime, Amaranth does make it possible for people who want to actually engage with their own brains to learn about hardware design, and I think that's still valuable, whether or not there are awful people out there exploiting it for personal profit.
-
@unlambda It's a bit different if you are (or were, in my case) on their payroll!
-
@riley sort of? baroque processes are reactionary in nature: they help the incumbent keep its position. if you like the incumbent this is useful. if you don't like the incumbent, like @xgranade didn't like the AI-fication of Calibre, then you get to spend months of your life fixing the plumbing that would otherwise wash out the foundations.
I used to do time at Google. Passed the interviews and, in between engineering, got the training to administer them, and interviewed a bunch of new applicants before I left.
A running theme in their interviewing criteria, at least back then — it's been a while — was, they looked for an applicant's ability to shift between levels of abstraction.
In a recruitment context, this tends to be conceptualised as a matter of skill and knowledge, but it's actually also a matter of design, to a significant degree. When more effort is put into plugging abstraction leakage, fewer people have practical "everyday" reasons for moving across those tightly plugged boundaries or get the experience of doing it, and, well, both de-skilling and baroquisation can set in as a result.
Maybe putting effort into well-designed abstraction leakages, rather than trying to abolish them, would be a useful and pro-social subthread in the work against enshittification. I'm also going to argue that literate programming is a useful tool for managing and understanding (some kinds of) well-designed abstraction leakages.
-
@whitequark Another perspective on this sort of thing is how many basic MUD and IF languages offer tools for linking up rooms to their immediate neighbours, but more realistic world-building would often require also offering some faraway view of the next room over, say, the forest that you can see over the river or a valley, and many early data models of these kinds of languages really weren't very good at making such set-ups convenient. Even though the pattern being modelled is common in real life, it does not come up often enough in Thinking About Thinking kinds of discussions (and object-oriented programming classes), and so people designing (meta-)systems often tend to ignore it.
-
@whitequark As a fellow toolmaker, I feel you!
I know Microsoft has created a couple of drivers using my tools. Let's say it's 3 drivers and it saved a junior San Fran engineer 1 day of work each. Then by my rough estimation they'd have saved about $1500.
In the meantime, I have seen none of that value in return.
Idk what to think of it. I made the tool for people like me and I want them to have it for free. But yeah, then MS also gets it for free unless I do weird license things.
-
@riley @xgranade I think designing around a high-skill-specialization expectation has historically been harmful in this industry; consider how the expectation of needing to know C (a language notoriously lacking in guardrails and good tooling) to do systems programming has both directly contributed to the pervasive gatekeeping and also created a barrier to entry for people not willing to dedicate their life to navigating the social and technical aspects of it. it's pretty difficult for me to see how this could be turned around to be prosocial
-
@diondokter I don't really mind that particular bit because my goal with OSS/OSHW is less "creating value" (that's on the agenda but it's more incidental) and more "terraforming", changing the rules by which the world works. I think this is a more interesting mindset to approach OSxx with because a lot of the systems we've been building in the last two decades are of such a high quality that no commercial entity would possibly purchase them (since it's not justifiable to build something like that for a business that would run just fine with a much shittier version of the same thing).
yes, under a different economic system, you could have (maybe?) captured some of that value. but under our current one, if Microsoft had to pay you $1500 they would've probably not used your tools at all (because the overhead of figuring out how to get you that money multiplies it severalfold and takes up valuable time of administrative and legal staff). my overall feeling about it, personally, is just "shrug"; I build tools for different reasons
-
@whitequark I think the processes of value-extraction under capitalism have - structural limitations? - which mean tools like Amaranth are unlikely to be used as part of a destructive and alienating hype bubble.
namely, for a tool to contribute to a hype bubble, it's required not only that a given end be easier than before, but that it be _the easiest_ way to achieve hype at any given moment; RTL design is never the fastest way to a consumer demo; so Amaranth isn't going to be implicated?
-
@coral oh, without overtly violating my NDA, I'll just say that flashy demos were absolutely involved
-
@whitequark I'm thinking something like "An abstraction shall not leak without a good reason to", and considering it an important principle of good design that it is the end engineer (or end user) who gets to ultimately override what reasons are good enough, should upstream reasonings turn out to be problematic. Things like "Thou shalt not hamper logic probe access" would then inherently follow.
-
@david_chisnall@infosec.exchange
To be honest I think you are misrepresenting the #FSF's ethical position on the matter, which is perfectly aligned with your own: hence the freedom of use for any purpose, which is a strong requirement for any #FreeSoftware license.
@whitequark@treehouse.systems
-
@whitequark Or I might be misunderstanding your argument. Would you like to elaborate on it?
-
@giacomo @david_chisnall I think you'll find that using search to insert yourself uninvited into conversations with people you don't know is a poor way to promote your cause, whatever that is.