@harrysintonen That showed up as ‘disabled’ for me when I went to the settings page. I think it’s been there for a while, I vaguely remember turning it off a while ago.
david_chisnall@infosec.exchange
Posts
-
#Microsoft sent an email to everyone saying they're listening to people now and they will definitely not be pushing AI into everything anymore.
I won’t be verifying that I’m over 18 on iOS 26.4, and as of yet, I haven’t seen any downside to not doing so.
@SecurityWriter I didn't on Xbox and so far it's been a net positive. They have a load of features I want to turn off that keep getting re-enabled, and I need to find them in both global and per-game settings. Don't do age verification and they're now globally disabled. Win. I hope the iOS feature works the same way.
-
I wonder if part of the reason I’m unimpressed with LLMs is that I can generate plausible nonsense at line rate without mechanical assistance.
@kunev This depends entirely on the efficiency of the pizza oven. When measuring the sustainability of such systems (and you must consider the entire system rather than components in isolation), it’s important to remember that pizzas are fungible.
-
I wonder if part of the reason I’m unimpressed with LLMs is that I can generate plausible nonsense at line rate without mechanical assistance.
-
Heads-up for published authors:
I filed mine a few weeks ago and got a thing in the post yesterday reminding me to file, so now I’ve no idea if they lost it or what. Any ideas how to check?
-
Dear Europe: Germany has shown the way forward, by making the Open Document Format (ODF) mandatory within its sovereign digital infrastructure.
@mkljczk @jonxion @libreoffice
It's been many years since I actually read the specifications, but I was not convinced that ODF was particularly good in this regard when I did.
OOXML had a bunch of things like the infamous 'typeset like Word 97' entry, but they were clearly marked in OOXML as for legacy compatibility (like emoji in Unicode, until the Unicode Consortium went silly). It also has a bunch of things like assuming everyone knows how the Windows GDI drawing model works. It is an objectively terrible standard.
ODF and OOXML were both rushed through standardisation too quickly and both were bad specifications.
ODF was much shorter than OOXML, partly because a lot of things were underspecified. People implementing it just did what OpenOffice did, using OpenOffice as the reference implementation because that was the only way to know what you needed to do.
It is uncontroversial to say that OOXML is terrible. But it is a logical fallacy to say ‘X is bad, Y is not X, therefore Y is good’.
-
My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing.
As a system architect, this is also what I do. The thing is, I absolutely depend on the people who do the implementation having good judgement. They need to fill in the gaps (if there were no gaps, I would have an implementation already) but also tell me if there are real problems with some of the ideas. This is why the first thing I do with a design is have it reviewed by people who will implement it. If they tell me ‘actually, this thing you forgot to consider is where our critical path is’ then that often leads to a complete redesign, or at least to significant change. The LLM will just produce something. With an ‘agentic’ loop and some automated testing, it will produce something that passes my tests. But it won’t tell me I’m solving the wrong problem.
I don’t have a problem with constrained nondeterminism in general. There are loads of places where this is fine. The place I used machine learning in my PhD was in prefetching. Get it right and everything is faster. Get it wrong and you haven’t lost much. This kind of asymmetry is great for ML-based probabilistic approaches: the benefit of a correct answer massively outweighs the cost of an incorrect one. The other place it works well is if you have a way of immediately validating the output. I supervised a student using some machine-learning techniques to find better orderings of passes for LLVM. They were tuning for code size (in a student project, this was easier than performance, which requires more testing). You run the old and new versions, one is smaller. That gives you an immediate signal and so using non-deterministic state-space exploration is great. You (probably) won’t get the optimal solution but you will get a good one, for far less effort than trying to reason about the behaviour of the interactions between dozens of transforms.
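The pass-ordering search described above can be sketched as a simple hill climb. Everything here is invented for illustration: `code_size` stands in for whatever callback would actually run the compiler with a given ordering and measure the resulting binary.

```python
import random

def search_orderings(passes, code_size, iterations=200, seed=0):
    """Random hill climbing over pass orderings.

    `passes` is a list of pass names; `code_size` maps an ordering
    (a tuple) to the size of the resulting binary. In a real setup,
    `code_size` would invoke the compiler and measure the output.
    """
    rng = random.Random(seed)
    best = tuple(passes)
    best_size = code_size(best)
    for _ in range(iterations):
        candidate = list(best)
        # Swap two positions: a small mutation of the current best.
        i, j = rng.sample(range(len(candidate)), 2)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        candidate = tuple(candidate)
        size = code_size(candidate)
        # Immediate, cheap validation: the smaller binary wins.
        if size < best_size:
            best, best_size = candidate, size
    return best, best_size
```

The point is the validation step: because each candidate gives an immediate, trustworthy signal, the nondeterminism in the search is harmless.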
It’s not clear to me that LLMs for programming have either of these properties.
-
I’ve read a bunch of posts in the last few weeks that say ‘Moore’s Law is over’, not as their key point but as an axiom from which they make further claims.
Easy is relative. When you double the number of transistors available for a design like the 8086, there are a load of things you can do with them that will have an immediate impact on performance for most workloads. The same doubling for a modern CPU will need to be mostly spent on clever structures for trying to keep execution units busy. Doubling the number of execution units would have almost no impact on performance.
-
I’ve read a bunch of posts in the last few weeks that say ‘Moore’s Law is over’, not as their key point but as an axiom from which they make further claims. The problem is: this isn’t really true. A bunch of things have changed since Moore’s paper, but the law still roughly holds.
Moore’s law claims that the number of transistors that you can put on a chip (implicitly, for a fixed cost: you could always put more transistors in a chip by paying more) doubles roughly every 18 months. This isn’t quite true anymore, but it was never precisely true and it remains a good rule of thumb. But a load of related things have changed.
First, a load of the free lunches were eaten. Moore’s paper was written in 1965. Even 20 years later, mainstream processors had limited arithmetic. The early RISC chips didn’t do (integer) divide (sometimes not even multiply) in hardware because you could do these with a short sequence of add and shift operations in a loop (some CISC chips had instructions for these but implemented them in microcode). Once transistor costs dropped below a certain point, of course you would do them in hardware. Until the mid ‘90s, most consumer CPUs didn’t have floating-point hardware. They had to emulate floating-point arithmetic in software. Again, with more transistors, adding these things is a no-brainer: they make things faster because they are providing hardware for things that people were already doing.
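The software divide mentioned above is just a shift-and-subtract loop; a sketch of the unsigned case (in Python for readability, though the real routines were a handful of machine instructions):

```python
def soft_divide(dividend, divisor, bits=32):
    """Unsigned shift-and-subtract (restoring) division, the kind of
    routine early RISC toolchains emitted instead of a hardware divide
    instruction. Returns (quotient, remainder)."""
    if divisor == 0:
        raise ZeroDivisionError
    quotient = 0
    remainder = 0
    for i in range(bits - 1, -1, -1):
        # Shift the next dividend bit into the running remainder.
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        if remainder >= divisor:
            remainder -= divisor
            quotient |= 1 << i
    return quotient, remainder
```

One iteration per result bit, using only shifts, compares, and subtracts: cheap in code size, slow in cycles, which is exactly the trade-off that extra transistors eventually made unnecessary.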
This started to end in the late ‘90s. Superscalar out-of-order designs existed because just running a sequence of instructions faster was no longer something you got for free. Doubling the performance of something like an 8086 was easy. It wasn’t even able to execute one instruction per cycle, and a lot of things were multi-instruction sequences that could become single instructions if you had more transistors. Once you get above one instruction per cycle with hardware integer multiply and divide and hardware floating point, doubling is much harder.
Next, around 2007, Dennard Scaling ended. Prior to this, smaller feature sizes meant lower power per transistor, which meant that you got faster clocks in the same power budget. The 100 MHz Pentium shipped in 1994. The 1 GHz Pentium 3 in 2000. Three years after that, Intel shipped a 3.2 GHz Pentium 4, which was incredibly power hungry in comparison. Since then, we haven’t really seen an increase in clock speed.
Finally, and most important from a market perspective, demand slowed. The first computers I used were fun but you ran into hardware limitations all of the time. There was a period in the late ‘90s and early 2000s when every new generation of CPU meant you could do new things. These were things you already had requirements for, but the previous generation just wasn’t fast enough to manage. But the things people use computers for today are not that different from the things they did in 2010. Moore’s Law outpaced the growth in requirements. And the doubling in transistor count is predicated on having money from selling enough things in the previous generation. The profits from the 7 nm process funded 4 nm, which funds 2 nm, and so on.
The cost of developing new processes has also gone up, and this requires more sales (or higher margins) to fund. We’ve had that, but mostly driven by bubbles causing people to buy very expensive GPUs and similar. The rise of smartphones was a boon because it drove a load of demand: billions of smartphones now exist and they have a shorter lifespan than desktops and laptops.
Somewhere, I have an issue of BYTE magazine about the new one micron process. It confidently predicted we’d hit physical limits within a decade. That was over 30 years ago. We will eventually hit physical limits, but I suspect that we’ll hit limits of demand being sufficient to pay for new scaling first.
The slowing demand is, I believe, a big part of the reason hyperscalers push AI: they are desperate for a workload that requires the cloud. Businesses’ compute requirements are growing maybe 20% year on year (for successful, growing companies). Moore’s Law is increasing the supply per dollar by 100% every 18 months. A few iterations of that and outsourcing compute stops making sense, unless you can convince them that they have some new requirements that massively increase their demand.
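The arithmetic in that last paragraph is easy to check, using the post's rough rules of thumb (20% annual demand growth, compute per dollar doubling every 18 months) rather than measured data:

```python
def supply_demand_ratio(year, demand_growth=0.20, doubling_months=18):
    """How much spare compute a fixed budget buys, relative to demand.

    Both rates are the rough figures from the text: demand grows 20%
    per year, compute per dollar doubles every `doubling_months`.
    A ratio above 1 means supply has outrun demand.
    """
    demand = (1 + demand_growth) ** year           # relative to year 0
    supply = 2 ** (year * 12 / doubling_months)    # compute per dollar
    return supply / demand
```

After three years the ratio is already over 2, and after six it is over 5: the gap compounds quickly, which is the point about outsourced compute ceasing to make sense.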
-
I don’t object to ‘if you don’t like it, fork it’ as a response as long as you have structured the project to make it easy for people to maintain downstream forks. Indeed, I consider the existence of downstream forks to be a sign of health in an open-source ecosystem. This means:
- External interfaces to the rest of your ecosystem need to be 100% stable and to be added slowly. You must have feature-discovery mechanisms that make it easy for things to work with old versions of your project.
- Internal code churns infrequently. Pulling in changes from upstream and reviewing them should be easy.
- Internal structure is well documented and modular.
This leads to small, loosely coupled projects that can be ‘done’ (or, at least, in ‘maintenance mode’, where they get occasional bug fixes but meet their requirements and don’t need to change).
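A minimal sketch of the feature-discovery idea from the first point. The names here (`Host`, `supports`, `gpu-blit`) are invented for illustration, not from any real project:

```python
# A host advertises its capabilities as an explicit, queryable set.
class Host:
    def __init__(self, features):
        self.features = frozenset(features)

    def supports(self, feature):
        return feature in self.features

def render(host, image):
    """A downstream consumer (or fork) keys its behaviour off the
    advertised feature set rather than the host's version number, so
    it keeps working against both old and new hosts."""
    if host.supports("gpu-blit"):
        return ("gpu", image)
    return ("software", image)  # graceful fallback on older hosts
```

The stability promise is on the discovery mechanism itself: as long as `supports` keeps working, new features can be added without breaking anything downstream.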
A lot of projects were like that 20-30 years ago. Reaching the ‘maintenance mode’ state was a badge of honour: you had achieved your goals and no one else needed to reinvent the wheel. New things could be built as external projects. The last few decades have seen a push towards massive, too-big-to-fork projects with complex external interfaces that the rest of the ecosystem needs to integrate with, leading to tight coupling.
-
If your answer to anyone who doesn’t like something that an open source project is doing is “then fork it yourself”, you’re a piece of shit.
This goes doubly for a lot of big projects that actively adopt engineering practices that make maintaining a downstream fork difficult.
-
We’ll be talking more about the progress on the CHERIoT port of Rust at CHERI Blossoms next week, but here’s a teaser:
@jmorris @bsdphk @cheri_alliance Yup. Microsoft also maintains builds for the Arty A7.
-
The root problem with a lot of Fediverse moderation is a problem that is well known in the reputation-system literature:
If the cost of creating a new identity is zero then a reputation system cannot usefully express a lower reputation than that of a new user.
A malicious actor can always create an account on a different instance, or spin up a new instance on a throw-away domain. The cost is negligible. This means that any attempt to find bad users and moderate them is doomed from the start. Unless detecting a bad user is instant, there is always a gap between a new fresh identity existing in the system and it being marked as such.
A system that expects to actually work at scale has to operate in the opposite direction: assume new users are malicious and provide a reputation system for allowing them to build trust. Unfortunately, this is in almost direct opposition to the desire to make the onboarding experience frictionless.
A model where new users are restricted from the things that make harassment easy (sending DMs, posting in other users’ threads) until they have established a reputation (other people in good standing have boosted their posts or followed them) might work.
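That model can be sketched as a small gate. All the names and thresholds here are invented for illustration; a real system would need to weigh endorsements by the endorser's own standing, handle revocation, and so on:

```python
GOOD_STANDING = 3  # endorsements needed before an account is trusted

class Account:
    """New accounts start untrusted and cannot use the features that
    make harassment easy until enough trusted users vouch for them."""

    def __init__(self, founder=False):
        self._founder = founder        # bootstrap: someone must start trusted
        self.endorsements = set()      # trusted accounts who vouched for us

    def endorse(self, other):
        # Only accounts already in good standing can confer reputation,
        # so throwaway identities cannot vouch for each other.
        if self.trusted():
            other.endorsements.add(id(self))

    def trusted(self):
        return self._founder or len(self.endorsements) >= GOOD_STANDING

    def may_send_dm(self):
        return self.trusted()
```

The key property is the one from the quoted principle: a fresh identity starts at the floor of the reputation scale, so creating new accounts gains an attacker nothing.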
-
We’ll be talking more about the progress on the CHERIoT port of Rust at CHERI Blossoms next week, but here’s a teaser:
I guess you’d need an application core for that? Our first chips are microcontrollers and we’ll be entering mass production for those later in the year. You can run the cores on fairly cheap FPGAs today.
Codasip’s X730 core is an application-class core roughly comparable to a Cortex-A55. It is available to license today and they have an EU project to build chips for supercomputers based on it.
There are also some Morello systems still available via the @cheri_alliance.
-
We’ll be talking more about the progress on the CHERIoT port of Rust at CHERI Blossoms next week, but here’s a teaser:
The embedded-graphics crate rendering an image on Sonata. This is currently using a (memory-safe) C function to draw pixels (that can go away with a little more work), but the current compiler is able to build this crate and run it in a CHERIoT compartment.
#Rust #CHERI #CHERIoT #CHERIBlossoms

-
@radhitya How are you monitoring memory usage?
-
trying to figure out if i suck at programming or if realtek has byte-reversed their own OUI in addition to bit-reversing it as the spec needs
Weird, that implies there’s some hiring overlap between IBM’s legal and DV teams.
-
trying to figure out if i suck at programming or if realtek has byte-reversed their own OUI in addition to bit-reversing it as the spec needs
@whitequark Their DV team is made of vampires and they were hanging upside down when they got to this bit?
-
trying to figure out if i suck at programming or if realtek has byte-reversed their own OUI in addition to bit-reversing it as the spec needs
A complete guess, but:
A lot of networking equipment used big-endian MIPS until recently. Big-endian avoided a load of byte swapping for packet headers (this is effectively free on more complex cores) and MIPS basically gave away the R4K core when they were low on cash (unlimited-use licenses). It may be that Realtek did it deliberately to make it easier to read on big-endian MIPS, but I wouldn’t be at all surprised if they did testing on big-endian MIPS and forgot that they needed to byte swap, so it passed the tests and then they shipped it.
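The two transformations are easy to confuse; a quick sketch using Realtek's 00:E0:4C OUI shows how different the results are:

```python
def bit_reverse_byte(b):
    """Reverse the bit order within one byte (MSB <-> LSB), as needed
    when converting between canonical and bit-reversed MAC formats."""
    out = 0
    for _ in range(8):
        out = (out << 1) | (b & 1)
        b >>= 1
    return out

def bit_reverse_oui(oui):
    # Per-byte bit reversal: what the spec asks for.
    return bytes(bit_reverse_byte(b) for b in oui)

def byte_and_bit_reverse_oui(oui):
    # The suspected extra step: the byte order swapped as well, which
    # is exactly what you'd get from an endianness mix-up.
    return bytes(bit_reverse_byte(b) for b in reversed(oui))

REALTEK_OUI = bytes([0x00, 0xE0, 0x4C])
```

Bit-reversing 00:E0:4C gives 00:07:32, while byte-and-bit-reversing gives 32:07:00, so the two bugs are easy to tell apart on the wire.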
-
A few of the things I've learned in the run up to taping out our first chip that working with FPGAs had not prepared me for (fortunately, the folks driving the tape out had done this before and were not surprised):
Fuzzing is great, but it needs to be usefully tied to coverage, and that's tricky. In a simple case of fuzzing a CPU, you can fairly trivially generate every 32-bit instruction and feed them through an RTL simulator, but that will mostly test the same things. You really want to test things like different pipelines with different timing with dependent instructions.
Defining a coverage model that you can feed into a fuzzing tool and get useful output is tricky.
That said, being able to just throw compute at the problem is a great way of increasing confidence.
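A toy version of tying fuzzing to a coverage model. The field layout is RISC-V-style but deliberately simplified, and the coverage key (major opcode plus whether the instruction depends on the previous one's result) is invented for illustration; a real model would cover pipeline and timing interactions too:

```python
import random

def coverage_key(insn, prev_insn):
    """Bucket an instruction pair by major opcode and by whether this
    instruction reads the previous one's destination register."""
    opcode = insn & 0x7F                  # major opcode field (bits 0-6)
    rd_prev = (prev_insn >> 7) & 0x1F     # previous destination register
    rs1 = (insn >> 15) & 0x1F             # this instruction's first source
    dependent = rd_prev == rs1 and rd_prev != 0
    return (opcode, dependent)

def fuzz_corpus(n_candidates=10000, seed=1):
    """Generate random 32-bit instructions, but only keep a pair when
    it hits a coverage bucket we haven't seen before. Most random
    candidates are discarded as redundant."""
    rng = random.Random(seed)
    seen, corpus = set(), []
    prev = 0
    for _ in range(n_candidates):
        insn = rng.getrandbits(32)
        key = coverage_key(insn, prev)
        if key not in seen:               # new coverage bucket: keep it
            seen.add(key)
            corpus.append((prev, insn))
        prev = insn
    return corpus
```

Even this crude key collapses ten thousand random candidates into at most a few hundred interesting ones, which is the sense in which coverage, not raw volume, is what makes thrown compute useful.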