All in all, modern CPUs are beasts of tremendous complexity and bugs have become inevitable. I wish the industry would spend more resources addressing them, improving design and testing before CPUs ship to users, but alas most of the tech sector seems keener on playing with unreliable statistical toys than on ensuring that the hardware users pay good money for works correctly. 31/31
-
-
Bonus end-of-thread post: when you encounter these bugs try to cut the hardware designers some slack. They work on increasingly complex stuff, with increasingly pressing deadlines, and under upper management that rarely understands what they're doing. Put the blame for these bugs where it's due: on executives who haven't allocated enough time, people and resources to make a quality product.
-
@gabrielesvelto This is one of those cases where I wish I had a Mastodon client that let me like the whole thread.
-
@gabrielesvelto I went to a lecture in the early 1990s by Tim Leonard, the formal methods guy at DEC. His story was that DEC had as-built simulators for every CPU they designed, and they had correct-per-the-spec simulators for these CPUs.
At night, after the engineers went home, their workstations would fire up tools that generated random sequences of instructions, throw those sequences at both simulators, and compare the results. This took *lots* of machines, but, as Tim joked, Equipment was DEC's middle name.
And they'd find bugs - typically with longer sequences, and with weird corner cases of exceptions and interrupts - but real bugs in real products they'd already shipped.
But here was the banger: sure, they'd fix those bugs. But there were still more bugs to find, and it took longer and longer to find them.
Leonard's empirical conclusion was that there is no "last bug" to be found and fixed in real hardware. There's always one more bug out there, and it'll take you longer and longer (and cost more and more) to find it.
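To make the approach concrete, here is a minimal sketch of that kind of differential testing, in Python. It assumes two hypothetical simulator objects (`spec_sim` and `asbuilt_sim`, standing in for the correct-per-the-spec and as-built simulators) that each expose a `run()` method; none of this is DEC's actual tooling, just the shape of the idea.

```python
import random

# Toy differential tester in the spirit of the approach described above.
# `spec_sim` and `asbuilt_sim` are hypothetical stand-ins for a
# correct-per-the-spec simulator and an as-built simulator.

OPCODES = ["add", "sub", "mul", "load", "store", "branch"]

def random_program(length):
    """Generate a random instruction sequence."""
    return [(random.choice(OPCODES),
             random.randrange(8),      # destination register
             random.randrange(8),      # source register
             random.randrange(256))    # immediate operand
            for _ in range(length)]

def run_differential(spec_sim, asbuilt_sim, iterations=10_000):
    """Feed the same random programs to both simulators and compare
    the architectural state they end up in."""
    for i in range(iterations):
        program = random_program(random.randrange(1, 200))
        if spec_sim.run(program) != asbuilt_sim.run(program):
            print(f"divergence on program {i}:")
            print(program)
            return program  # keep the failing sequence for triage
    return None
```

As Leonard's story suggests, most of the interesting divergences would come from long sequences and from exception/interrupt corner cases, which a sketch like this only finds by brute statistical force.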
-
@gabrielesvelto Thank you for this detailed and specific explanation. Chris Hobbs discusses the relative unreliability of popular modern CPUs in "Embedded Software Development for Safety-Critical Systems", but not to this depth.
I don't do embedded work but I do safety-related software QA. Our process has three types of tests: acceptance tests, which determine fitness for use; installation tests, which ensure the system is in proper working order; and in-service tests, which are something of a mystery. There's no real guidance on what an in-service test is or how it differs from an installation test; they are typically run when the operating system is updated or there are similar changes to support software. Given the issue of CPU degradation, I wonder if it makes sense to periodically run in-service tests or otherwise detect CPU degradation (that's probably something that should be owned by the infrastructure people rather than the application people).
I've mainly thought of CPU failures as design or manufacturing defects, not in terms of "wear", so this has me questioning the assumptions our testing is based on.
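For what it's worth, a toy sketch of the kind of periodic in-service check hinted at above could look like this: rerun a fixed, deterministic workload and compare the result against a golden value recorded when the machine was known healthy. The workload and digest here are made up for illustration; real degradation detection needs far more targeted stress patterns than this.

```python
import hashlib
import struct

def workload():
    """Deterministic mix of integer and floating-point work whose
    result should never change on a healthy CPU."""
    digest = hashlib.sha256()
    acc = 1.0
    for i in range(1, 200_000):
        acc = (acc * 1.000001 + i) % 1e9
        digest.update(struct.pack("<Qd", (i * 2654435761) % 2**64, acc))
    return digest.hexdigest()

def in_service_check(golden_digest):
    """Compare today's result against the digest recorded at
    installation time; a mismatch warrants investigation."""
    return workload() == golden_digest
```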
-
@gabrielesvelto Super interesting; thanks for writing this up!
-
@gabrielesvelto great read ty!
-
@gabrielesvelto Fascinating thread, especially the degradation over time inherent to modern processors. That came up recently in an interesting viral video about a world where we forget how to make new CPUs.
Bit of an aside, but I assume this affects other architectures too? The thread mentioned Intel and AMD, but I assume Arm and RISC-V are similarly prone to these sorts of problems?
-
@gabrielesvelto that's the deep nerdy stuff I love about IT! Thanks a ton for sharing this!
-
@gabrielesvelto nitpick: the propagation velocity of a *signal* in a circuit is not affected by the voltage magnitude; that is a function of the (innate) dielectric constant of the material.
however, a higher core voltage does mean that a rising edge tends to reach the gate threshold voltage of a transistor more quickly, which reduces the time it takes for each asynchronous logic element's output to reach a well-defined state after a change in input, thus propagating logic *state* more quickly.
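To put a toy number on that, here is a first-order RC charging model (a gross simplification of real transistor behaviour; the time constant and threshold below are arbitrary illustrative values): the time for a rising edge to cross a fixed threshold shrinks as the supply voltage grows.

```python
import math

def time_to_threshold(vdd, vth, rc):
    """Time for an RC-charged node, driven towards `vdd`, to cross `vth`.

    First-order model: v(t) = vdd * (1 - exp(-t / rc)),
    solved for v(t) = vth. A crude stand-in for a real gate.
    """
    return rc * math.log(vdd / (vdd - vth))

RC = 1e-12   # 1 ps time constant, arbitrary
VTH = 0.45   # threshold voltage in volts, also arbitrary

for vdd in (0.9, 1.0, 1.1, 1.2):
    t = time_to_threshold(vdd, VTH, RC)
    print(f"Vdd = {vdd:.1f} V -> threshold crossed after {t * 1e12:.2f} ps")
```

In this toy model going from 0.9 V to 1.2 V cuts the crossing time from about 0.69 ps to about 0.47 ps, which is the effect being described: higher core voltage, earlier threshold crossing, shorter effective propagation delay through the logic.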
-
@gabrielesvelto (what you said is absolutely correct regarding "signals" in the HDL sense of the word, it just gets a bit muddled when we're simultaneously talking about the analogue behaviours of the actual electrical signals, hence the clarification ^^)
-
@gabrielesvelto This was a phenomenal write-up, thank you!
-
@gabrielesvelto fantastic thread thank you

-
@gabrielesvelto Nice thread!
You seem to imply that bugs have become considerably more frequent, largely due to the increased complexity. Right?
To me it's not obvious that the larger number of known issues isn't to a large degree due to much better visibility (we didn't have anywhere close to today's automatic crash collection systems in the past) and due to the vastly increased number of CPUs... Do you have any gut feeling about that?
-
@gsuberland thanks, I was playing a bit fast and loose with the terminology. As I was writing these toots I reminded myself that entire books have been written just to model transistor behavior and propagation delay, and my very crude wording would probably give their authors a heart attack.
-
@AndresFreundTec I've been in charge of Firefox stability for ten years now and some of my early work to detect hardware issues dates back to then. In pre-2020 years we would get 2-3 bugs per year, usually across different CPUs. Now we get dozens, it's really on another level.
-
@AndresFreundTec admittedly we get a lot more after a new microarchitecture launches, and then they go down as microcode updates get rolled out. If Microsoft hadn't started shipping microcode updates with their OS updates we'd be swamped.
-
@gabrielesvelto
There’s also meta-stability. If a value is snapshotted halfway through changing, the output may occasionally be neither one nor zero but some ‘half’ value. Depending on the circuits using that result, it may be interpreted as either 1 or 0 — and maybe different parts of the circuit will use different interpretations. Such intermediate states are only meta-stable, and will flip to a firm 1 or 0 at some indeterminate time later, possibly propagating the problem.
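A crude way to picture that is a toy Monte Carlo sketch (nothing like a real flip-flop model; the threshold, band width and settle behaviour are invented): a sample that lands inside an indeterminate band around the threshold can be read differently by different consumers, and only settles to a firm 0 or 1 after a random extra delay.

```python
import random

VTH = 0.5          # logic threshold (arbitrary units)
META_BAND = 0.05   # band around the threshold where the latch goes metastable

def sample(voltage):
    """Return 0, 1, or 'metastable' depending on the sampled voltage."""
    if abs(voltage - VTH) < META_BAND:
        return "metastable"
    return 1 if voltage > VTH else 0

def resolve(state):
    """A metastable node eventually snaps to 0 or 1 after a random delay."""
    if state != "metastable":
        return state, 0.0
    settle_delay = random.expovariate(1.0)  # arbitrary time units
    return random.choice((0, 1)), settle_delay

# A signal caught mid-transition: two consumers of the same metastable
# node can end up with different interpretations.
caught = sample(0.49)
consumer_a, _ = resolve(caught)
consumer_b, _ = resolve(caught)
print(caught, consumer_a, consumer_b)
```
-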
@KimSJ ah yes, very good point. It's been a while since my days in hardware land and I had forgotten about it.

-
@dubiousblur glad you liked it!