I have been fighting x86 since 1992-93, but lost the first battle: The IT department of the Tech. College where I was teaching had two alternatives for a set of 30 new machines, to be used for Unix software and also for one of my courses, Computer Architecture (with assembler coding). The choice was made in a democratic manner: The educational staff of the department, including me, came out in favor of an M68030-based system. The department head was in favor of the x86-based solution. When he saw that the majority went against his preference, he announced: I can't be the head of a department that works against me. I quit! Find another department head! So the next day, we repeated the democratic vote, and this time the majority was in favor of the department head's preference, and he didn't quit. I had to teach Introduction to Computer Architecture on the messiest architecture around.
(Btw, Denmark got into the EU by a similar democratic vote: They had a referendum that gave a 'no' to joining the EU. The Danish authorities told the people that the answer was wrong, and gave them another chance to give the right answer. The second time, The People understood what was expected of them, and Denmark joined the EU. Hooray for democratic processes! At least as long as they give The Right Answer.)
M68K didn't survive in the big markets. If it had, the RISC wave would have been mostly superfluous. So let's cross our fingers that the ARM architecture will be strong enough to fight off x86/x64.
Although ARM started as a 'clean' RISC, it certainly isn't any more today! The very first 'Thumb' instruction set laid the ground for irregular instruction coding, the need for an intermediate decoding level, and reduced regularity of the instruction set. That has grown 'worser and worser' with every new architecture revision; it is today very far from the RISC ideal of instruction word bits directly activating the various logic circuits. They have had to introduce caching and pipelining and lookahead and speculative execution and out-of-order execution and what-have-you of hardware speedup techniques. The instruction set has grown and grown and grown and ... certainly not always in an orderly, well-designed manner. AArch64 hasn't had as many years as x86/64 to grow cancer, but the old saying that 'any sufficiently high-versioned standard is indistinguishable from a can of worms' is beginning to bite ARM as well.
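To make that 'RISC ideal' concrete: in a regular fixed-width encoding, the operand fields sit at fixed bit positions, so extracting them is little more than wiring. A minimal C sketch, decoding the register fields of one AArch64 ADD (shifted register) instruction; the shift and immediate fields are ignored for brevity:

```c
/* Minimal sketch of the fixed-field idea: register numbers sit at
 * fixed bit positions, so "decode" is just masking and shifting.
 * 0x8B020020 is the real encoding of "add x0, x1, x2". */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t insn = 0x8B020020;          /* add x0, x1, x2 */
    unsigned rd = insn & 0x1F;           /* bits 4:0   -> destination   */
    unsigned rn = (insn >> 5) & 0x1F;    /* bits 9:5   -> first source  */
    unsigned rm = (insn >> 16) & 0x1F;   /* bits 20:16 -> second source */
    printf("add x%u, x%u, x%u\n", rd, rn, rm);
    return 0;
}
```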
Note that the discussion you are referring to is more than three years old. The thread is almost devoid of references to the Aarch64 alternatives that were available even then, but has plenty of references to the M1 of 2020. It is tempting to suspect that a fair share of the commenters were not fully aware of the more recent (even then) updates to the architecture.
If you go for the details, 'the ARM Cortex architecture is largely continuous from their little M0 real-time chips all the way up to their multicore A line' does hold true for a sizable core of the architecture, but not for the Thumb instruction sets. A number of the 'ordinary' instructions didn't make it to the 64-bit architecture. Compatibility at the binary level is significantly less than at the assembler source code level; some of the top Aarch64 models have completely dropped support for Aarch32. Vector instructions are now in the second version of their second generation.
Yet: I do like the general ARM architecture. I have come to love the register-based philosophy, with less reliance on the stack. I have seen how the system architecture for 'peripherals' integral to the CPU is great for extending the CPU in a SoC. I am really hoping that traditional PC manufacturers will soon come up with a broader range of ARM-based machines, covering even the more 'classical' kind of desktop machines in large cabinets, allowing for expansion with peripherals, memory, etc. in ways that you can't match with a portable or tablet.
'The fact that ARM doesn't manufacture is also a huge win' - it is, but don't overestimate it. ARM provides a CPU core for anyone else to extend with their own (on-chip) peripherals, several architectural features are optional, and every manufacturer will pack the chip to his preferences. So you will rarely, if ever, see a 'plugin-compatible' chip from an alternate vendor. If you have to switch to another chip manufacturer, be prepared for another pin layout; maybe your old chip had some useful peripherals that are missing from the new one (and if the new one has some similar peripheral, it is almost certainly managed differently), and some instruction codes may be invalid because that option was left out of your new replacement chip.
A common core is of course a great win. But the sales talk is often a lot rosier than reality, especially if you are making use of optional functions and on-chip peripherals.
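To illustrate what 'managed differently' can mean in practice, here is a hypothetical C sketch. The register names, addresses and bit positions below are invented for illustration, not taken from any real vendor's datasheet, but the shape of the problem is real: the same CPU core, yet the same one-character transmit routine needs a different memory map, a different status bit, and a different init dance per vendor.

```c
/* Hypothetical sketch: the "same" UART on two vendors' ARM SoCs.
 * All names, addresses and bit layouts here are invented. */
#include <stdint.h>

#define REG32(addr) (*(volatile uint32_t *)(addr))

#if defined(VENDOR_A)
/* Vendor A: data and status adjacent; TX-ready is bit 5. */
#define UART_DATA    REG32(0x40001000u)
#define UART_STATUS  REG32(0x40001004u)
#define TX_READY     (1u << 5)
#elif defined(VENDOR_B)
/* Vendor B: different base, different register order, TX-ready is
 * bit 0, and the peripheral clock must be gated on first. */
#define UART_CLK_EN  REG32(0x50020000u)
#define UART_STATUS  REG32(0x50021000u)
#define UART_DATA    REG32(0x50021008u)
#define TX_READY     (1u << 0)
#else
#error "define VENDOR_A or VENDOR_B"
#endif

void uart_putc(char c)
{
    while (!(UART_STATUS & TX_READY))
        ;                       /* spin until transmitter is free */
    UART_DATA = (uint8_t)c;     /* same core, different peripheral */
}
```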
|
Good info/background.
Thanks
|
ARM is interesting, and certainly the Raspberry Pi situation has made ARM almost a household name. I'm watching RISC-V with interest. As a royalty-free instruction set, it might have legs. On the other hand, one of the big RISC-V development companies, SiFive (SiFive - Leading the RISC-V Revolution[^]), just laid off 20% of its workforce. So maybe RISC-V is not quite the industry darling some make it out to be.
I'm curious if anyone has any experience with RISC-V, and if so, is it a thumbs-up or just meh?
Keep Calm and Carry On
|
I've only tinkered slightly with some of the RISC-V based ESP32s. Nothing special about them to me.
Sure, the instruction set is open, but they aren't entrenched. Inertia is everything in this arena, so for better or worse, I think ARM is the future, at least in the near to mid term. I don't think RISC-V will get the traction necessary to unseat it, particularly when you have everyone from Qualcomm to NXP manufacturing ARM chips.
I think RISC-V will find its niche in IoT more than anything, with companies like Espressif using it to spin off cheap MCUs, but I'd be surprised to start finding it in things like high-end phones.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
RISC-V is suffering from the same problem as a lot of 'open' projects: It is open to anyone to make their own additions, extensions, modifications, all in different directions. And people do. It may work out if there is a strong core under central management. I am not sure that the core is strong enough, and the central management tight enough.
From the outset, the architecture looks like it tries to be everything to everybody: Address spaces of 32, 64 or 128 bits. Big endian and little endian. Lots of what is basic functionality in a modern x86/64 comes as extensions that may or may not be there, and anyone can make their own proprietary extensions. The CPU is fundamentally 32 bits, but then come the 64-bit extensions. An opening for an alternate 16-bit compressed instruction format (similar to ARM Thumb).
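To make the 'may or may not be there' point concrete, RISC-V does let software ask: the machine-mode misa CSR encodes the base width and has one bit per implemented single-letter extension. A minimal C sketch, assuming machine mode and a hosted printf (say, a bare-metal test harness); note that implementations are even allowed to hard-wire misa to zero:

```c
/* Sketch: probing a RISC-V hart's extensions via the misa CSR.
 * Reading misa traps outside machine mode, and a core may legally
 * report 0 ("not implemented"). */
#include <stdio.h>

int main(void)
{
    unsigned long misa;
    asm volatile("csrr %0, misa" : "=r"(misa));   /* M-mode only */

    if (misa == 0) {
        printf("misa not implemented on this hart\n");
        return 0;
    }
    /* Bits 0..25 map to extension letters 'A'..'Z'
       (e.g. I = base integer, M = mul/div, C = compressed). */
    printf("extensions: ");
    for (int i = 0; i < 26; i++)
        if (misa & (1UL << i))
            putchar('A' + i);
    putchar('\n');
    return 0;
}
```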
I suspect that the flexibility and openness will create such a "rich" (another possible word is "messy") world of options and extensions that it will lack the focus to become a mainstream success in general markets, where you are dependent on a lot of manufacturers offering identical facilities, to run identical programs in identical ways.
My guess is that it has a greater future in fixed code applications, like embedded/IoT, where the core functionality is more significant than the extensions and compatibility with other software is almost irrelevant. Also, for embedded/IoT solutions, the architecture license fee makes up a larger fraction of the unit cost, compared to e.g. a desktop computer, giving RISC-V a competitive advantage.
I'm happy with RISC-V entering my micro devices, but I strongly doubt that my next desktop machine will have a RISC-V CPU.
|
I agree that x86 is old and limited, but I'll believe that ARM is taking over once I begin to see ARM PC devices for sale on NewEgg.
The difficult we do right away...
...the impossible takes slightly longer.
|
I mean, they don't sell Apples at Newegg AFAIK, but given that Apple has two ARM based offerings now, it's only a matter of time before other manufacturers follow suit.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
> Quote: I mean, they don't sell Apples at Newegg AFAIK, but given that Apple has two ARM based offerings now, it's only a matter of time before other manufacturers follow suit.
There's a big difference between Apple computers and Dell/HP/etc computers: Apple owns the entire vertical, the others don't. This is why the others can't follow suit.
Briefly, Dell aren't fabbing their own processors, Apple are. Why would Dell, et al, switch to ARM and lose the benefit of economies of scale from using X86_64? Sure, they offer ARM[1], but that's an expensive product for them to produce.
Apple owning the vertical means that it is neither cheaper nor more expensive for them to offer ARM over X86_64: it's exactly the same! Dell doesn't own their vertical - they assemble existing finished components into a finished product - for them moving to a new chip is going to be hella expensive.
It's not about technology, it's about business, and Apple is in the business of providing products at premium price points. The other companies are not, so you can't expect the same level of vertical ownership from them.
With all that being said, low-powered laptops and desktops would certainly be welcome, as long as the price point is in line with the product offering.
It makes no difference to the end-user (even us embedded devs) whether the chip is based on X86, X86_64, ARM, MIPS, Sparc or m68k[2] - you're gonna do roughly the same work, with roughly the same constraints, using roughly the same devtools, to produce roughly the same products.
The people it matters to are hardware designers, specifically the Verilog/VHDL engineers who are designing those chips and peripherals, but I don't think they care either.
[1] Well, they used to. I don't know about now.
[2] I've programmed for all of those at some point or the other. Even the z80 processor (Zilog?) when I was but a young lad.
|
They don't need to fab their own processors. All it takes is one company (like say Ampere) to come in and fill the vacuum. Oh, capitalism.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
Sorry, but I don't understand your intent in posting this link? It doesn't bench ARM processors at all.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
Not my field of expertise, and the site isn't that easy to use.
But I did (finally) figure out how to search, and I found an 'ARM ARMv8 2016 MHz (8 cores)' listed. There are others.
Geekbench Search - Geekbench[^]
So that is an ARM device, right? (I really have no idea if this is what you are referring to or not.)
The following claims to discuss the internals:
https://www.geekbench.com/doc/geekbench6-benchmark-internals.pdf
It says it supports ARM on page 8.
|
Yeah it is. I just couldn't find it.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
The first ARM entry in the top list is in 17th place:
System: iPhone 14 Pro, Apple A16 Bionic 3460 MHz (6 cores)
Uploaded: Oct 24, 2023
Platform: iOS
Single-Core Score: 3732
Multi-Core Score: 10547
On Mac Benchmarks - Geekbench, the top one is:
Mac Studio (2023), Apple M2 Max @ 3.7 GHz (12 CPU cores, 30 GPU cores), 2803 points
Top x86 (without obvious extreme overclocking): ASRock Z690 AQUA OC, Intel Core i9-13900K 3000 MHz (8 cores), uploaded Aug 25, 2023, Platform Windows, Single-Core Score 4220
|
Don't know how I missed that. Thanks!
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
Intel is indeed removing some of the really old legacy stuff from their next generation of processors. They're keeping the 32-bit subsystem for older applications. What they are removing is the "real mode" startup and all the support for shifting into and out of real mode. 32-bit applications will run in a virtualized x86 environment.
|
I don't know... I keep waiting for someone to come up with a GPU-based OS based on Ampere or something like that. It can't be more than a couple years out.
|
No thanks, I see no real advantage for me personally. I'm an infrequent laptop user, so any power benefits are irrelevant. I've retired from the biz, and most of my computer time is spent on games, a little video editing and the occasional photo editing session. I'd rather not be throwing away the hundreds of games I've acquired over the years (yes, I DO replay a lot of the old ones). So I'll be sticking with my AMD processors for quite a while.
I've used ARM processors on a number of products at the last place I worked, and they seem to be nice processors. They were adequate for what I needed to do, just some relatively simple audio processing for some 911 equipment.
|
> Quote: There's no getting around that x86 is showing its age architecturally. Even discounting all the ancient backward compatibility, like "real mode", it's getting awkward.
The Intel x86 CPUs will never be able to match the performance of the Apple M-series CPUs because of a design problem with the instruction set. When x86 was designed, parallelism was not an issue because instructions were decoded serially. Today, CPUs gain most of their performance from parallel decode.
The ARM instruction set is mostly fixed-width while x86 is variable-width. You can't decode instructions efficiently in parallel when you can't easily determine where one instruction ends and the next begins.
For example, x86 gets a sequence of bytes and first needs to decode the bytes to figure out how to group them together into instructions. ARM can skip this step because the instructions are simpler and fixed-width, so it can easily issue groups of bytes to parallel decoders.
Here is an analogy: Imagine someone trying to direct people into separate lines at an airport security screening. They need to make sure that families go together in the same line. If each family can have 1-4 people, then they need to ask each person which family they belong to (i.e. x86). If there is a requirement that all families have 4 people, then they can all move through to the x-rays without being asked.
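A toy C sketch of that boundary problem (the 'encoding' below is invented, not real x86 or ARM): with a fixed width, the offset of instruction k is known immediately; with variable widths, it depends on decoding every instruction before it.

```c
/* Toy sketch: contrast splitting a byte stream into instructions
 * when widths are fixed vs. variable. The encoding is invented. */
#include <stdio.h>
#include <stddef.h>

/* Fixed width: instruction k starts at k*4, so every decoder can
 * locate its own instruction independently and in parallel. */
static size_t fixed_start(size_t k) { return k * 4; }

/* Variable width: each instruction's length depends on its own first
 * byte, so finding instruction k means walking 0..k-1 serially. */
static size_t variable_start(const unsigned char *code, size_t k)
{
    size_t off = 0;
    for (size_t i = 0; i < k; i++)
        off += 1 + (code[off] & 0x03);  /* toy rule: low 2 bits = extra bytes */
    return off;
}

int main(void)
{
    unsigned char code[] = {0x02, 0xAA, 0xBB, 0x00, 0x01, 0xCC,
                            0x03, 0x11, 0x22, 0x33};
    for (size_t k = 0; k < 4; k++)
        printf("insn %zu: fixed @ %zu, variable @ %zu\n",
               k, fixed_start(k), variable_start(code, k));
    return 0;
}
```

As far as I know, real x86 front ends mitigate this with predecode/length-marker bits cached alongside the instruction bytes, but the serial dependency is baked into the encoding itself.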
|
ARM just designs and licenses the CPU core. Everyone implementing the actual chips puts a lot of stuff around it, like GPU/video and peripheral I/O (USB, I2C, networking (wired/wireless)), and a lot of that stuff is NOT compatible across implementations.
And that is even more true for those "little devices" that you mentioned than it is for anything desktop/server related...
|
Ralf Quint wrote: ARM just designs and licenses the CPU core.
I actually mentioned that in my OP, and said it was a win.
As far as the peripherals, that doesn't matter as much.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
honey the codewitch wrote: Microsoft is doing similar with their operating system
Are you suggesting this is something new?
The following says this happened in 2017?
"The platform started out bringing Windows 10 to Arm-powered laptops and tablets all the way back in 2017"
Windows on Arm — Everything you need to know about low-power PCs[^]
honey the codewitch wrote: There's no getting around that x86 is showing its age architecturally
Perhaps. But is this a comment related to your business domain rather than the overall computing market?
I suspect choosing components for any system depends on a number of factors both technological and non-technological.
|
ARM is sooo last year. I hear RISC-V is the new ARM.
|
There are a few bands doing this. I think Postmodern Jukebox is the most popular.