|
One of the things I like about Live Mail is that Send/Receive is by default bound to F5 - with Outlook it was sodding F9!
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
I prefer CTRL-F5 as it satisfies the freak in me.
|
|
|
|
|
Not as refreshing as a vacation would be. But that's probably just a dream.
|
|
|
|
|
Finding a hundred different benchmarks comparing the performance of various smartphones is easy. It is a lot harder to find comparisons with traditional desktop CPUs, or for the GPU: desktop graphic cards of various classes.
Obviously: A smartphone processor cannot consume 50-100 W of power (or more, for extreme desktop/gaming PCs), so you can't expect the performance to be at the same level. Yet, it is well known that the ARM cores give a lot of performance per watt, usually better than the X86/X64 family.
That aside: In absolute performance, if you port a "heavy" classical desktop application to a smartphone app, maintaining the same algorithms etc, and run the smartphone at max performance without worrying about battery life, how would it compare to a modern desktop CPU and graphics card?
Since I am mostly curious about the CPU/GPU performance, I assume that the desktop PC for comparison has a flash disk like the smartphone, no power saving features reducing performance etc.
I guess that the results would depend a lot on the kind of task, e.g. whether it is CPU-bound, GPU-bound, or I/O-bound, how well the code can utilize multiple cores, etc. So I am not expecting a single numeric factor for the relative performance. I am looking for benchmarks showing the performance factors of various classes of workloads, on PCs and on smartphones. Where can I find that?
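To make it concrete what kind of test I have in mind: something like the minimal sketch below, where the exact same source is compiled natively for both machines (say, gcc on desktop Linux and the Android NDK for a Snapdragon) and only the wall-clock time is compared. The naive matrix multiply and the matrix size are just my assumptions for illustration, not taken from any published benchmark suite.

```c
/* Minimal cross-platform CPU benchmark sketch (assumption: the same source
 * is compiled natively for both the desktop and the phone, e.g. with gcc
 * on Linux and with the Android NDK for ARM). */
#include <stdio.h>
#include <time.h>

#define N 512  /* matrix size: chosen arbitrarily for illustration */

static double a[N][N], b[N][N], c[N][N];

int main(void)
{
    /* Fill the inputs with something non-trivial. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = (double)(i + j);
            b[i][j] = (double)(i - j);
        }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* Naive O(N^3) matrix multiply: purely CPU/FPU bound, no I/O. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;

    printf("checksum %.3f, elapsed %.3f s\n", c[N/2][N/2], secs);
    return 0;
}
```

That only covers one CPU-bound workload class, of course, but the same "identical source, compare wall-clock time" idea extends to the other classes I listed.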
|
|
|
|
|
Back in the early days of the ARM processor, the company used to show it off to potential customers by demonstrating it performing the same task as a Pentium processor.
The only difference was that the ARM was powered from the waste heat emitted by the Intel device ...
It's a RISC chip (in theory, have a look at the instruction set some day and you may start to doubt that) and they are generally faster and more efficient than the traditional CISC devices fitted to desktops.
But ... you are comparing apples and oranges to a large extent: the OS running on the chip makes a HUGE difference to perceived performance (compare a Linux setup to a Windows 10 one on similar hardware and you'll see what I mean) and smartphone OSes are generally tightly coupled to the hardware they are running on, unlike desktop devices which have to cope with a huge variety of hardware environments. And GPUs are different too - smartphones don't have or need the kind of processing a modern PC graphics card will have (heck, the latest Nvidia devices have 46 cores, and 8 GB of RAM!).
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
OriginalGriff wrote: It's a RISC chip (in theory, have a look at the instruction set some day and you may start to doubt that) and they are generally faster and more efficient than the traditional CISC devices fitted to desktops. It's a common misconception that it is the number of instructions that is being reduced. It's actually the number of addressing modes, and the number of variants of each instruction that use these addressing modes, that are reduced.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
For all practical purposes, that is the situation today. The CISC addressing modes had grown into a huge mess as the various processor families evolved (I will not mention x86 in particular).
If you go back in history, RISC was Reduced INSTRUCTION SET computers, not Reduced ADDRESSING MODES computers. RISC was more than a reduced number of instructions: There was a reduction in the number of instruction formats: All instructions being x bits wide, all having the operand spec in the same bits etc. The regularity was just as important as the count. It led to far more direct hardware decoding of the instruction / operand codes into signal lines within the CPU, avoiding (possibly multiple layers of) microcode decoding, making faster instruction execution possible.
Now, ARM is certainly not as regular as the classical RISC chips (or for that matter, 68K). And when microprocessors started adopting pipelining, speculative execution etc, and interrupt handling became more sophisticated, the ideal of direct decoding from instruction code bits to internal signals began breaking down. You don't see very many references to RISC architectures today, because very few chips follow the RISC principles of the 1980s and 1990s.
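To illustrate what that regularity buys you: with a fixed-width 32-bit instruction word where the opcode and register fields always sit in the same bit positions, decoding is just a handful of shifts and masks (in hardware, essentially just wires), with no need to first work out how long the instruction is. The field layout below is invented for the example - it is not the encoding of any real RISC chip.

```c
/* Sketch of decoding a hypothetical fixed-width RISC instruction word.
 * Field layout (invented for illustration, not a real ISA):
 *   bits 26..31 opcode, 21..25 rd, 16..20 rs1, 11..15 rs2, 0..10 immediate */
#include <stdint.h>
#include <stdio.h>

struct decoded {
    unsigned opcode, rd, rs1, rs2, imm;
};

static struct decoded decode(uint32_t word)
{
    struct decoded d;
    d.opcode = (word >> 26) & 0x3F;   /* 6 bits  */
    d.rd     = (word >> 21) & 0x1F;   /* 5 bits  */
    d.rs1    = (word >> 16) & 0x1F;   /* 5 bits  */
    d.rs2    = (word >> 11) & 0x1F;   /* 5 bits  */
    d.imm    =  word        & 0x7FF;  /* 11 bits */
    return d;
}

int main(void)
{
    struct decoded d = decode(0x04A30007u); /* arbitrary example word */
    printf("opcode=%u rd=%u rs1=%u rs2=%u imm=%u\n",
           d.opcode, d.rd, d.rs1, d.rs2, d.imm);
    return 0;
}
```

On a classical x86, by contrast, the decoder first has to determine the instruction length from prefixes and the ModRM/SIB bytes before it even knows where the next instruction begins.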
|
|
|
|
|
Member 7989122 wrote: If you go back in history, RISC was Reduced INSTRUCTION SET computers, not Reduced ADDRESSING MODES computers. RISC was more than a reduced number of instructions: There was a reduction in the number of instruction formats: All instructions being x bits wide, all having the operand spec in the same bits etc. The regularity was just as important as the count. It led to far more direct hardware decoding of the instruction / operand codes into signal lines within the CPU, avoiding (possibly multiple layers of) microcode decoding, making faster instruction execution possible. I see that more in microcontrollers, where instruction memory need not be organized in bytes and any number of bits could be used as the instruction word size. This way the processors only had to fetch one instruction word of n bits instead of several bytes and could execute most instructions in a single cycle.
Did you ever see an 8-bit RISC CPU (as opposed to microcontroller)? I still like to program on the granddaddy of all RISC (and CMOS) CPUs and I can assure you that the addressing modes are extremely reduced there. Some people went so far as to call it one of the earliest and most radical RISC implementations ever. I think this spartan design was not due to a radical design philosophy. It was probably the low number of gates available on the die, because it was also an early CMOS design and CMOS gates need transistor pairs.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
I can hardly think of even a 16-bit RISC CPU! You could, at least to a certain degree, say that the RISCs cleared the way for 32-bit microprocessors: Due to their lower architectural complexity (including, as you point out, addressing modes), it was possible to fit a 32-bit CPU on a single chip, given the technology of the 1980s.
There were microprocessors labeled as CISC which had far more regular, simpler addressing modes than the x86: When I see what modern RISCs have come to, I repeat once more: the M68K was as close to a RISC as a CISC could possibly get. If you consider it somewhat RISCy: The first models had external 16-bit buses (or even 8-bit, for the 68008), but the internal architecture was 32-bit from day 1.
|
|
|
|
|
I loved the 68K back in the day. It was an actual pleasure to write programs for it in assembly language.
|
|
|
|
|
That, and all Intel family chips, including those from AMD, have been RISC internally since the Pentium processor (and possibly the 486 and 386 as well). The first stage of these chips is to convert the x86/x64 CISC instruction set into a series of RISC instructions.
|
|
|
|
|
I know, but externally they behave like CISC processors. When it comes down to writing code in assembly or even machine code, you quickly will learn to value a RISC processor. Intel processors are a pain to write assembly code for by now, even if they are RISC processors somewhere deep in their black hearts.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
OriginalGriff wrote: the OS running on the chip makes a HUGE difference to perceived performance (compare a Linux setup to a Windows 10 one on similar hardware and you'll see what I mean) So assume that the desktop PC runs Linux when it e.g. compiles a million lines of code, converts from one video format to another, generates an animation movie from a script, or ... Android is Linux based, so even though a number of adaptations to smartphone hardware have been made, this shouldn't affect pure CPU / GPU performance that much.
Obviously there are lots of different Intel/AMD desktop chips and GPU chips, and there are lots of Snapdragon models. That should not make it impossible to say that an Intel-so-and-so at X GHz running FFmpeg will convert MPEG2 to H.264 x times faster than a Snapdragon-so-and-so at Y GHz, also running FFmpeg! (FFmpeg is available for ARM, I guess that also includes Snapdragon.)
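As a sketch of what I mean, the comparison could be as simple as timing the identical FFmpeg command on both machines; the small C wrapper below just measures wall-clock time around the call. It assumes FFmpeg is available on both machines, and the input/output file names and the choice of libx264 as encoder are placeholders of mine, not part of any established benchmark.

```c
/* Sketch: time the identical FFmpeg transcode on both machines.
 * Assumes ffmpeg is available on both; "input.mpg"/"output.mp4" are placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const char *cmd =
        "ffmpeg -y -loglevel error -i input.mpg -c:v libx264 output.mp4";

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    int rc = system(cmd);   /* run the exact same command line on both machines */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;

    if (rc != 0) {
        fprintf(stderr, "ffmpeg did not run cleanly (rc=%d)\n", rc);
        return 1;
    }
    printf("transcode wall-clock time: %.1f s\n", secs);
    return 0;
}
```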
Similarly obvious: Smartphone CPUs/GPUs are specialized to the assumed needs. But a Snapdragon is Turing complete, so in principle it can do anything that a desktop PC can do. The reason why I ask for relative performance under various workloads is to learn what kind of tasks fit into the smartphones' intended application area (that is, good performance) and which are outside it (that is, poor performance, compared to the desktop PC).
I have several friends who went from desktop PCs to portable PCs to smartphones - they haven't owned a desktop machine for eight years, nor a portable for three. They do all their tasks on their phone, even video editing. (Don't ask for my comments on the result of that video editing, though...) Portable PCs have essentially been almost as closed, fully controlled hardware environments as the smartphones (today, you connect the same crowd of USB devices to smartphones as you do to portables), yet we can still compare their performance with desktop PCs. Smartphones gradually take over a far more varied set of tasks, software-wise becoming more and more similar to PCs. Today, the fruit salad is a mix of apples and oranges.
Five years ago we could evade the question of relative performance by pointing out differences in tasks and environment. Today, getting to know hard performance factors is highly relevant. If there aren't any available, it is about time that someone started producing them.
|
|
|
|
|
Member 7989122 wrote: I have several friends who went from desktop PCs to portable PCs to smartphones - they haven't owned a desktop machine for eight years, nor a portable for three. They do all their tasks on their phone, even video editing. (Don't ask for my comments on the result of that video editing, though...) This is actually a key item in my mind regarding the difference between the two platforms.
The last phone I had was an HTC One M7. It took pretty good pictures. HTC decided to up its game and made the software better for it. OK, it now took better pictures. Then worse, and now they are nearly worthless. The improved software still works fine; however, the processing power required generated too much heat within the camera sensor, and the sensor now takes all pictures in a wonderful shade of purple. HTC did become aware of the problem and was replacing the module at no cost. The phone was already falling apart and needed to be replaced - which it was.
So while the phone was fully capable of taking and processing high quality pictures, it was self-destructive in nature because the heat generated exceeded the cooling capacity.
So what is the manufacturer to do? Lower the quality of the resulting image or throttle the processing?
The answer they came up with, at least in my eyes, was to make the phones bigger so they had higher cooling capacity.
Director of Transmogrification Services
Shinobi of Query Language
Master of Yoda Conditional
|
|
|
|
|
Member 7989122 wrote:
Obviously: A smartphone processor cannot consume 50-100 W of power (or more, for extreme desktop/gaming PCs), so you can't expect the performance to be at the same level. Yet, it is well known that the ARM cores give a lot of performance per watt, usually better than the X86/X64 family. Why? A processor does not do any physical work. More than 99% of the power is simply converted to heat, which is not what we want and which we even have to get rid of.
A processor's power requirements depend on the number of transistors and the clock frequency. Leakage is at least one order of magnitude lower, so we can safely ignore it. If I can optimize the processor's hardware implementation and reduce the number of transistors or lower the clock frequency, I might get the same performance for less power. It's not as simple as judging the performance by looking at the power consumption.
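As a back-of-the-envelope illustration: the usual first-order model for dynamic power is P = alpha * C * V^2 * f (activity factor, switched capacitance, supply voltage, clock frequency). The little sketch below plugs in two invented operating points to show how a lower voltage and frequency can buy a large power reduction for a comparatively modest loss of clock speed; none of the numbers describe a real chip.

```c
/* First-order dynamic power estimate: P = alpha * C * V^2 * f.
 * All operating points below are invented for illustration, not real chips. */
#include <stdio.h>

static double dyn_power(double alpha, double cap_f, double volts, double freq_hz)
{
    return alpha * cap_f * volts * volts * freq_hz;
}

int main(void)
{
    /* Hypothetical "desktop-like" point: large switched capacitance, 1.2 V, 4 GHz. */
    double p_desktop = dyn_power(0.25, 70e-9, 1.2, 4.0e9);
    /* Hypothetical "phone-like" point: smaller core, 0.8 V, 2.5 GHz. */
    double p_phone   = dyn_power(0.25, 10e-9, 0.8, 2.5e9);

    printf("desktop-like: %6.1f W\n", p_desktop);
    printf("phone-like:   %6.1f W\n", p_phone);
    printf("power ratio %.0fx for a clock ratio of only %.1fx\n",
           p_desktop / p_phone, 4.0e9 / 2.5e9);
    return 0;
}
```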
Comparing two processors with very different architectures is very hard. Benchmarks are notoriously misleading. The manufacturers of course like to use benchmarks that favor their architecture. Then there is the problem that many benchmarks represent abstract scenarios that have little bearing on real applications. How many applications have you seen that need as many floating point operations per second as possible? Or the other way around: What 'real' application would be a fair test of any possible processor?
A RISC processor (like the ARM) generally needs fewer transistors, and the reduced instruction set tends to need fewer clock pulses per instruction than a CISC processor. So it's a fast processor, even if you have to run it at a lower clock frequency, right?
Maybe it is, maybe it's not. It may execute more instructions per second, even at a lower clock frequency. On the other hand it may also need more instructions to do the same thing as a CISC processor. Anyway, a fair test would reveal both strengths and weaknesses of the two processors, so it's your turn to tell me what such a fair test looks like and how we weigh all results into a final figure that tells us that processor A has X percent of the performance of processor B. Any time, any place, any circumstances.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
"cannot consume 50-100W power" is because it would drain the batteries within minutes. Also, it would have a cooling problem. So I maintain that a smartphone CPU cannot comsume 50-100 W.
That is part of the explanation why you cannot expect similar performance. But that's what I am after: How much real, absolute performance is sacrificed by lowering the power dissipation?
Within one machine class, say desktop PCs, lots of independent groups (tech magazines etc) have developed and run benchmark tests, independent of manufacturers' wishes and outside their control. Some of the benchmarks have been highly synthetic, testing specific hardware features, but there are also a lot of suites that are modelled to resemble actual workloads. This holds for smartphones, too: You will find dozens of tests ranking the performance of different smartphone models, even across CPU architectures and OSes (Android vs. iPhone). Benchmarking looks at end result performance, without being concerned about the number of instructions executed, memory technology and what have you.
There is no reason why you shouldn't be able to compare, say, video compression speed on a smartphone and on a desktop PC. Or generating an animation. Or compiling a million lines. Face recognition. Automatic translation. ... Lots of tasks can be done on both machine families, without any concern for RISC/CISC, CPU frequency, memory technology etc: You measure how long it takes to complete the task, and that is it.
The factor will certainly be different for different tasks. The right answer is then NOT that "We cannot come up with one single value - it wouldn't be correct under all circumstances - so therefore we can't give you any figure at all" ... The right answer is to give a set of figures: When circumstances are so-and-so, the factor is X. When they are such-and-such, the factor is Y.
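A sketch of what I mean by "a set of figures": measure the wall-clock time of each workload class on both machines and report one factor per class instead of a single overall number. The workload names in the snippet below are placeholders, and the time fields are left at zero to be filled in with your own measurements - nothing here is measured data.

```c
/* Sketch: per-workload performance factors instead of one overall number.
 * Workload names are placeholders; the times are left at zero so you can
 * fill in your own wall-clock measurements - nothing here is measured data. */
#include <stdio.h>

struct result {
    const char *workload;
    double desktop_s;   /* wall-clock seconds measured on the desktop PC */
    double phone_s;     /* wall-clock seconds measured on the smartphone */
};

int main(void)
{
    struct result r[] = {
        { "video transcode",    0.0, 0.0 },
        { "million-line build", 0.0, 0.0 },
        { "animation render",   0.0, 0.0 },
    };

    for (size_t i = 0; i < sizeof r / sizeof r[0]; i++) {
        if (r[i].desktop_s > 0.0 && r[i].phone_s > 0.0)
            printf("%-20s phone/desktop factor: %.1fx\n",
                   r[i].workload, r[i].phone_s / r[i].desktop_s);
        else
            printf("%-20s (no measurements yet)\n", r[i].workload);
    }
    return 0;
}
```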
It seems as if everybody and his grandma is very reluctant to make real comparisons between desktop PC performance and smartphone performance. It seems like the effort spent on finding reasons for not doing it is a lot higher than the effort that would be required to actually do it.
|
|
|
|
|
BTW: besides the processors being different animals, the memories are different beasts too; that also makes a huge impact on performance, even before you look at I/O.
|
|
|
|
|
OK, but that is exactly what I am after:
How large an impact do e.g. different memory technologies make on performance? Snapdragon/ARM is in no way fundamentally different from x86/64; there is (or at least was) a Windows for ARM. And there is software for emulating the smartphone environment on a desktop PC for running apps (that also requires emulation of the ARM instruction set, so it can't be used in a benchmark test to compare CPUs, though!).
If benchmark comparisons between different hardware, solving the same tasks, were meaningless whenever the hardware differs, then 99.9% of all benchmark comparisons would be meaningless. You do the comparison to learn the effects of different hardware.
|
|
|
|
|
Here is a comparison from earlier this year of an ARM processor in a server setup, dual CPU, >60 cores etc, apples to apples, kind of.
https://www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality
There are some benchmarks where the ARM CPUs are faster, especially in multi-threaded situations, or comparable to Intel and cheaper.
But overall, the power versus performance picture is not hugely different from what you see at the lower end.
|
|
|
|
|
ARM 11 ~= 486 I seem to recall was the rough equivalent.
|
|
|
|
|
They are powerful enough for the task they are doing in the context of a smartphone.
Newer CPU generations will only bring marginal gains in performance, but will hopefully offer better power usage and heat management.
I'd rather be phishing!
|
|
|
|
|
Such as photo and video editing ... Yes I have friends editing video on smartphones. Or they do video and audio recoding.
Everybody today does image and sound analysis - face recognition, voice command processing. The quality of the result is very much limited by the processing power available. A CPU with ten times the performance on tasks like these could do more reliable recognition, with fewer errors in interpreting speech or gestures or whatever. You get the quality that your smartphone's CPU is capable of giving you, say "It's good enough for me", and so you don't care to know whether your desktop PC has two or ten times the performance of your smartphone.
Maybe, in a couple of years, you will have higher expectations. Not too long ago I dug up some old family videos, asking myself: But... I thought these were digital video recordings! This is VHS quality, isn't it? - I have been working with a fairly high quality HD camera for 8-10 years. I had to dig up the old DV tapes from the basement, comparing the original tapes to my hard disk copy. The quality was identical. What I considered razor sharp, superb resolution in 1995 was no better than VHS when I judged it 20+ years later. (Well, it was: VHS was far worse than I remembered.)
Smartphone capabilities (and our expectations) will develop the same way. Once we get access to more processing power, we will expect better results.
I suspect that people's reluctance to compare smartphone to desktop PCs is a fear of discovering how far ahead desktop CPUs are in performance. If you've recently spent USD 1500 on a top-of-the-line smartphone that you are really proud of, then you don't want to be told that it has only a fraction of the power of that USD 600 desktop PC that is nothing to be proud of. You want to compare that new expensive flagship to inferior models, not to anything with greatly superior performance. The reason why we don't have the figures I am asking for is that we don't want them!
|
|
|
|
|
Member 7989122 wrote:
I suspect that people's reluctance to compare smartphone to desktop PCs is a fear of discovering how far ahead desktop CPUs are in performance. If you've recently spent USD 1500 on a top-of-the-line smartphone that you are really proud of, then you don't want to be told that it has only a fraction of the power of that USD 600 desktop PC that is nothing to be proud of. You want to compare that new expensive flagship to inferior models, not to anything with greatly superior performance. The reason why we don't have the figures I am asking for is that we don't want them!
(for the whole quote) So what; I don't care, and most people don't care either; and if they did, it would be like "Hey look what my phone can do that your 5 kilos desktop PC cannot do!"
Desktop and smartphones are different, they have different usage; they have different requirements.
Don't compare tomatoes to oranges.
I work in an engineering domain, we use very high end desktop computers, we need the computing power; but we're probably the 0.001% (or even less) of the people that really need that kind of performance on a desktop PC.
If I was asked to develop a mobile version of our software, the requirements would be completely different.
I'd rather be phishing!
|
|
|
|
|
A few days ago I heard another "the phone in your pocket has many times more processing power than all the computers NASA used, combined, for the round trip of landing a man on the moon."
That's great, except all that NASA computing was much more focused on a few jobs. I (now and then) remind myself not to take for granted what amazing processing power my phone has, but NASA (as far as I know) was not dealing with malware detection, stupid user inputs, rogue code, wasted code, bad code, bloated code and such.
To answer your question at a basic level: for today, it has enough processing power.
|
|
|
|
|
|
|
|
|
|