|
Thanks for the feedback, folks. Sean asked me to reply on this thread so here's the link and a little more info about the clip:
Back in 1983, when the IBM PC was relatively new and the Mac had not yet been introduced, I was involved with a users group in New York called NYPC. In conjunction with the local Apple group, we produced an all-day event with microcomputer-related seminars, exhibits, and vendors. We also had guest speakers including Steve Wozniak of Apple and Bob Frankston of VisiCalc fame.
I produced a video "covering" the event which was shown on public access cable TV. It was seen by probably no more than 4 people. The show included interviews with Wozniak and Frankston.
Here's a 5-minute clip of the Wozniak interview. It's preceded by a brief chat between Woz and Frankston on the merits of the BASIC programming language. I hope you enjoy this little trip down memory lane:
https://vimeo.com/788473906/f9281f58a4?share=copy
|
|
|
|
|
|
Pretty cool. It starts extremely well, but IMHO they lose some... punch... when the song goes happy-happy
"If we don't change direction, we'll end up where we're going"
|
|
|
|
|
|
|
AFAICT, all cores/threads (12/24) on my system get used when doing parallel tasks. And things seem to scale as expected going from 8 to 12, so my experience seems to refute those claims. Additionally, I would have thought that those who are using AMD ThreadRippers with 32/64 cores would have noticed that they're not getting the expected boost from the huge core count.
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
|
|
|
|
I have an AMD® Ryzen 5 2600X six-core processor (12 threads) in mine, and I notice that all my cores get used too. Running Ubuntu 22.04.3 LTS, all seems good.
|
|
|
|
|
Yeah, but see Daniel Pfeffer's reply down-thread. The actual issue has to do with time-slice calculations for the scheduler, not the use of CPU cores/threads. If performance could be better with finer-grained calculations for more than 8 cores, it's probably pretty subtle. Like all things, there's probably a point of diminishing returns, and maybe somewhere around 8 cores, scheduling characteristics don't make much difference overall. No doubt someone like the guys over at Phoronix will do some benchmarking with patched kernels and report. Then we'll know what, if anything, we've been missing.
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
|
|
|
|
I read the original article.
The issue is not that only 8 cores are used, but that the time slice does not scale properly with the number of cores. The more cores there are, the more inherent parallelism, so each core has to do less switching to simulate multitasking.
The Linux kernel is supposed to use a core-count-dependent algorithm to calculate the time-slice size, but the number of cores used in that calculation is capped at 8.
IMO, this is deliberate. When you have more than 8 cores, increasing the time slice size gives no real benefit.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Why would anyone need more than 8 cores?
As the aircraft designer said, "Simplicate and add lightness".
PartsBin an Electronics Part Organizer - Release Version 1.3.0 JaxCoder.com
Latest Article: SimpleWizardUpdate
|
|
|
|
|
Eight cores oughta be enough for anybody.
|
|
|
|
|
Anything more is just pretentious!
As the aircraft designer said, "Simplicate and add lightness".
PartsBin an Electronics Part Organizer - Release Version 1.3.0 JaxCoder.com
Latest Article: SimpleWizardUpdate
|
|
|
|
|
Ah! But how many threads? I seem to recall a CPU (maybe MIPS?) that supported 3 threads per core, and there are tales of IBM Power supporting 4 or 8 TPC, and I think Sun SPARC had chips that supported 8 TPC. Imagine a Beowulf cluster of those! Oops, sorry, wrong forum
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
|
|
|
|
(PIEBALD looks up Itanium specs...)
|
|
|
|
|
I think how many threads per core are supported is determined by the O/S, not the CPU.
|
|
|
|
|
I'm thinking in terms of "Hyperthreading", or "Virtual Cores", which is definitely hardware, not software.
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
|
|
|
|
PIEBALDconsult wrote: Eight cores oughta be enough for anybody.
Consider
(a) the number of cores in an AMD Threadripper CPU
(b) the fact that you can use AMD CPUs as space heaters
You might want to make use of more cores during those cold winter nights...
|
|
|
|
|
Uh huh, here in freaking Phoenix.
|
|
|
|
|
I'm not without sympathy.
The one time I had an AMD CPU for my primary machine...I remember shutting it down at times because it was just getting too damned hot in summer, despite the AC unit keeping the rest of the house reasonably cool. Nothing wrong with the CPU or heatsink, as I was repeatedly told this was "to be expected" with that particular generation (I forget which exactly).
I have no need for a machine that has to be turned off due to the amount of heat it throws off. I've never owned another system with an AMD CPU.
|
|
|
|
|
dandy72 wrote: because it was just getting too damned hot in summer, despite the AC unit keeping the rest of the house reasonably cool.
That sounds a bit scary.
So the CPU was getting so hot that it was warming up the room (not just the computer) that you were in, to such an extent that you turned it off to get cooler?
Sounds more like a fireplace or an oven than the computer.
|
|
|
|
|
The prototypes of the DEC Alpha required a 3 phase power supply. Before the release, they managed to trim down the power requirements so that a plain single-phase PS could handle it.
While I was teaching at a tech. college, we got an Alpha machine. Good thing was that it could provide hot lunches for the students
(I'm kidding. It was a common joke among the students, though.)
|
|
|
|
|
jschell wrote: So the CPU was getting so hot that it was warming up the room (not just computer) that you were in to such an extent that you turned it off to get cooler?
Have you never worked in an environment with, say, 40 servers gathered in a server room?
A few years ago, we were moving to another wing of the building. The move was delayed by a couple of months because the machine room needed so much AC that there wasn't enough electric power for it without a significant rewiring, with more circuits of higher rating.
A home environment is different, of course. But look at these gaming machines: they have power supplies of 1200 W, 1500 W, ... The three huge screens come on top of that. And the 6-channel × 50 W sound system. It all ends up as heat, similar to a 2000-2500 W electric heater. If you don't need it to keep your house warm (and it certainly is not a very efficient way to heat your house!), you need an AC which can dispose of 2000-2500 W of heat - and the AC unit will cost you another 500 W or so (assuming a COP of 4 to 5; they don't go very much higher).
If you live in a house built by current insulation standards (as far as I have read, Canada and Norway standards are comparable), then an extra 3000 W of heat may be sufficient to keep your entire house warm even in mid-winter.
|
|
|
|
|
The post said "primary" so I presumed one.
trønderen wrote: They have power supplies of 1200 W, 1500 W,
Not really my thing these days, but I believe that if you drive the power supply to its maximum on the computer and the other devices, you are going to end up having hardware issues quite quickly.
|
|
|
|
|
jschell wrote: So the CPU was getting so hot that it was warming up the room (not just computer) that you were in to such an extent that you turned it off to get cooler?
Sounds more like a fireplace or an oven than the computer.
You sound surprised. Heat management is absolutely a consideration for any IT manager who's responsible for putting together a number of PCs in a room. It'll get hot - very hot. Data centers spend a fortune on AC. But this one PC was a special case.
As mentioned, that particular generation of AMD CPUs (Athlons? It was over 10 years ago) was known to be generating a lot of heat; it's not like the thermal paste needed to be replaced or the heat sink reseated as is often the solution nowadays. It just generated that much heat, by design, so even with proper cooling/ventilation, the heat had to go somewhere. So, it heated the room.
Nowadays, it seems like high-end GPUs have taken that role...
|
|
|
|
|