|
Firmware has other considerations. I'm talking PCs primarily, user machines.
If those resources are queued up and preallocated, they are that much *more* ready to use than if you suddenly need gigs of RAM that aren't waiting in the wings. This is precisely why modern apps and frameworks (like .NET) do it.
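To sketch what I mean by preallocation (my own C++ illustration, not .NET's actual pool code), the idea is to pay for the buffers once, up front, so handing one out later costs nothing:
```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Minimal sketch of a preallocated buffer pool: all of the big
// allocations happen once, up front, so a later acquire() is a
// pointer move instead of a trip to the allocator mid-task.
class buffer_pool {
    std::size_t bytes_each_;
    std::vector<std::vector<std::byte>> free_; // buffers waiting in the wings
public:
    buffer_pool(std::size_t count, std::size_t bytes_each)
        : bytes_each_(bytes_each) {
        for (std::size_t i = 0; i < count; ++i)
            free_.emplace_back(bytes_each);    // pay the allocation cost now
    }
    std::vector<std::byte> acquire() {
        if (free_.empty())                     // pool exhausted: allocate late
            return std::vector<std::byte>(bytes_each_);
        auto buf = std::move(free_.back());
        free_.pop_back();
        return buf;                            // no allocation on this path
    }
    void release(std::vector<std::byte>&& buf) {
        free_.push_back(std::move(buf));       // recycle instead of freeing
    }
};
```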
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Quote: I'm talking PCs primarily, user machines.
In this hypothetical ideal world where everything is at 100% utilisation on a user's PC, anything the user does (like moving the mouse 2mm to the left) will have to wait for the utilisation to drop before that action can be completed.
Even in this hypothetical scenario, it still seems like a bad idea to have everything at 100% utilisation: users don't want 15 s of latency each time they move the mouse.
(In the real world, of course, it's worse - CPUs and cores scale their power draw with their load, so pushing the load to 100% makes them draw more power. In the real world, it makes sense to have as little CPU utilisation as possible, and to leave as much RAM as possible free for unpredictable overhead.)
|
|
|
|
|
To be clear, I did not say the CPU should *stay* at 100%. I said that when it's performing work, it should use all of it.
And yes, realistically you want about 10% off the top for the scheduler to work effectively, if I'm being technical.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
honey the codewitch wrote: I like to see my CPU work hard when it works at all.
In the space I work in, which is different from yours, I like it when the CPU load is less than 50%. That gives me a buffer for when the new feature I added starts, for some reason, chewing up that additional headroom.
And for a database I want to see it at even less than that - a similar reason, but I expect more surprises from the db than from the application. It gets really scary when the database is running at a sustained utilization of 80%.
|
|
|
|
|
I probably should have been clearer that I am primarily talking about traditionally user-facing machines like desktops and laptops here, rather than servers and embedded.
Utilization is important in those arenas too, but both how you achieve it, and where you want it are going to be dramatically different.
I sure hope that when I'm searching a distributed partitioned view in SQL Server, all the logical "spindles" it's partitioned across are speeding right along together. I also expect a database server to be less CPU-heavy and more storage-heavy, meaning your utilization metric will primarily be storage and I/O. That's how you know your queries are being properly parallelized, for example.
These are different considerations, to be sure, even if utilization sits at the center of all of them.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
At one time unused physical RAM in Windows machines was used for disk cache, thereby keeping RAM utilization at 100% for all intents and purposes.
Software Zen: delete this;
|
|
|
|
|
That's actually a good idea, in theory. I wonder why they stopped allocating all of it.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
At one time they did use all of it, minus a fraction kept handy as a reserve. In today's world, with SSDs and much faster 'disk' interfaces, I don't know if this is still valuable or not.
The fact that the unallocated RAM was used for disk cache wasn't visible to the user or to applications.
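For the curious, that cache is still visible if you ask for it. Here's a minimal C++ sketch using the documented Win32 GetPerformanceInfo call, which reports the system cache size alongside available RAM (the interpretation in the comments is mine):
```cpp
#include <windows.h>
#include <psapi.h>  // GetPerformanceInfo; link with psapi.lib
#include <cstdio>

int main() {
    PERFORMANCE_INFORMATION pi{};
    pi.cb = sizeof(pi);
    if (!GetPerformanceInfo(&pi, sizeof(pi)))
        return 1;
    // These counts are in pages; convert to MiB for display.
    const double mib = pi.PageSize / (1024.0 * 1024.0);
    std::printf("physical total:     %.0f MiB\n", pi.PhysicalTotal * mib);
    std::printf("physical available: %.0f MiB\n", pi.PhysicalAvailable * mib);
    std::printf("system cache:       %.0f MiB\n", pi.SystemCache * mib);
    return 0;
}
```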
Software Zen: delete this;
|
|
|
|
|
I don't even necessarily mean for disk cache, just for something.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
I think they just changed it so it doesn't appear that way anymore.
It still predictively loads things into RAM, but the presentation is different, so it doesn't appear that the RAM is in use.
I think they changed that because people were like "WTF MSFT WHY USE ALL MY RAM?!"
It takes almost nothing for the OS to chuck what it predicted and use that memory for whatever is actually needed instead, if it guessed wrong.
|
|
|
|
|
That's generally correct. My system shows 5 GB available RAM right now; however, the majority of that should be freed pages which point to disk files (including application code) so that if the file (application) is (re)opened, it doesn't need to be read from disk. A small portion (128MB) is zeroed pages, just enough that when an application asks for a blank memory page it can be delivered instantly without waiting to zero it.
Windows also has a mechanism for pre-loading pages it expects to need shortly (mostly used during boot which is more predictable) and .NET has similar mechanisms for pre-loading code before it's needed (although it typically requires running optimization tooling to build the pre-loading list, which major apps like VS do but many don't).
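For what it's worth, Windows 8 and later also expose this per process: a program can ask the OS to fault a mapped range in ahead of first use via PrefetchVirtualMemory. A pared-down sketch (the cleanup is minimal, and the hint is purely advisory):
```cpp
#include <windows.h>  // PrefetchVirtualMemory requires Windows 8 or later

// Map a file read-only and hint the OS to fault its pages in now, so
// the first real touch later doesn't stall on disk.
bool prefetch_file(const wchar_t* path) {
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING, 0, nullptr);
    if (file == INVALID_HANDLE_VALUE) return false;
    LARGE_INTEGER size{};
    if (!GetFileSizeEx(file, &size) || size.QuadPart == 0) {
        CloseHandle(file);
        return false;
    }
    HANDLE map = CreateFileMappingW(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    void* view = map ? MapViewOfFile(map, FILE_MAP_READ, 0, 0, 0) : nullptr;
    BOOL ok = FALSE;
    if (view) {
        WIN32_MEMORY_RANGE_ENTRY range{ view, (SIZE_T)size.QuadPart };
        // Ask the memory manager to bring the pages in ahead of use.
        ok = PrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0);
        // ... use the view, then tear down ...
        UnmapViewOfFile(view);
    }
    if (map) CloseHandle(map);
    CloseHandle(file);
    return ok != FALSE;
}
```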
There are a number of other apps that have 'fast load' setups to pre-load the memory their application uses; however, I often find them annoying, as they may pre-load their application even though I have no intention of using it that day...
|
|
|
|
|
I have the same relationship with my car.
When I am not driving it, its total utilization is falling, so I keep it running as much as possible.
I pick up four friends to go with me on trips, to utilize the seat capacity as much as possible.
To utilize the engine to the maximum extent we have to go out on the highway (otherwise we would break the speed limit all the time).
This part of the year, I am happy about the utilization factor of the headlights; I keep them on at all times to raise utilization.
Also, with lots of rain, the windshield wipers are another component that can contribute to the total utilization.
Obviously, the car stereo is active all the time, to make sure it is utilized to the fullest extent possible.
Making the maximum possible use of everything you have at your disposal is essential for a good life. Keep your fridge and freezer filled up to utilize their capacity. If you have spare beds in your home, invite someone to sleep in them. Keep all your electric lights at maximum utilization. Maybe even your SO!
|
|
|
|
|
I think you may be having a laugh at me, I'm not sure. But either way, I agree with parts of what you wrote.
trønderen wrote: If you have spare beds in your home, invite someone to sleep in them.
I used to do this when I was younger and could get away with it. I was a homeless teenager, so when I was in my twenties and living in Seattle among a sea of homeless young adults, I'd let them crash where I lived. I lost some stuff to theft, and a little peace to some drama, but I'm still glad I did it, because once or twice, when I needed a place to stay, I met someone who did the same for others.
Imagine a world where people with more than they need were very open to sharing their excess with others.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
honey the codewitch wrote: Assuming you're going to be using your machine in the near future, your I/O may be sitting idle, but ideally it would be preloading things you were planning to use, so it could launch faster.
That's all fine and dandy when it knows what it is I'm going to be using, but making the wrong guess means someone expended resources to do that work for nothing, leaving less memory for other things that could've been cached.
It's really all a balancing act; every OS has its own guidelines explaining what each app should do or avoid in order to be a good citizen. Then it's up to the OS to juggle it all and try to make the correct guesses.
Bottom line, I'm with you: if you have the resources, by all means use them. But the key, as already mentioned, is that you have to be smart about it; you can't act as if you're the only one around, 'cuz everybody else is trying to do the same...
|
|
|
|
|
Of course. And by "ideally" I do indeed mean a hypothetical scenario with ideal conditions, illustrative of a point rather than an attempt to reflect reality.
That point is that if you can get your I/O to do useful work when it's not doing anything else, that is typically a net win.
Even if you can't, as long as you win more times than you lose, it's still a win - like card counting in blackjack, if you do it right, you'll come out ahead.
But again, these situations are only intended as illustrative hypotheticals, and broadly articulated ones at that. I didn't want to get lost in the weeds.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Sure. Cache hit is a thing, and so is cache miss. It doesn't mean we shouldn't try to cache anything at all. Just that the algorithm used to decide what to cache vs what to let go of is very much something that's still in development. I'm not aware of any magic bullet.
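Agreed, there's no magic bullet; the workhorse heuristic is still plain LRU - keep what was touched recently, evict what went longest without a hit. A minimal, self-contained C++ sketch (not any particular OS's or library's implementation):
```cpp
#include <cstddef>
#include <list>
#include <optional>
#include <string>
#include <unordered_map>

// Tiny LRU cache: every hit moves the entry to the front of the list,
// and when full we evict from the back (the least recently used item).
class lru_cache {
    std::size_t capacity_;
    std::list<std::pair<std::string, std::string>> items_; // front = hottest
    std::unordered_map<std::string, decltype(items_)::iterator> index_;
public:
    explicit lru_cache(std::size_t capacity) : capacity_(capacity) {}
    std::optional<std::string> get(const std::string& key) {
        auto it = index_.find(key);
        if (it == index_.end()) return std::nullopt;       // cache miss
        items_.splice(items_.begin(), items_, it->second); // refresh recency
        return it->second->second;                         // cache hit
    }
    void put(const std::string& key, std::string value) {
        if (auto it = index_.find(key); it != index_.end()) {
            it->second->second = std::move(value);         // update in place
            items_.splice(items_.begin(), items_, it->second);
            return;
        }
        if (items_.size() == capacity_) {                  // evict the coldest
            index_.erase(items_.back().first);
            items_.pop_back();
        }
        items_.emplace_front(key, std::move(value));
        index_[key] = items_.begin();
    }
};
```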
|
|
|
|
|
There isn't really. It's all highly situational.
For example, I do dithering and automatic color matching in my graphics library so that I can load a full-color JPG onto, for example, a 7-color e-paper display. It will match any red it gets with the nearest red the e-paper can support, and then, if possible, dither it with another color to get it closer.
It takes time, so I cache the color matching and dithering results in a hash table as I load the page. The hit rate is extremely high; it's very rare that a pixel of a particular color appears only once. That's close to ideal. The cache is discarded all at once when the frame is rendered, so in that case the lifetime is also easy to determine.
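To make that concrete - this is only a sketch of the idea, not the actual gfx code - the shape of it is a nearest-palette scan whose answers get memoized per 24-bit source color for the duration of a frame (the palette values here are hypothetical):
```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical 7-color e-paper palette, packed as 0xRRGGBB.
static const uint32_t palette[7] = {
    0x000000, 0xFFFFFF, 0x00FF00, 0x0000FF, 0xFF0000, 0xFFFF00, 0xFF8000
};

// Nearest match by squared RGB distance - the O(N) "slow path".
static int nearest_index(uint32_t rgb) {
    int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
    int best = 0;
    long best_d = -1;
    for (int i = 0; i < 7; ++i) {
        int pr = (palette[i] >> 16) & 0xFF, pg = (palette[i] >> 8) & 0xFF,
            pb = palette[i] & 0xFF;
        long d = (long)(r - pr) * (r - pr) + (long)(g - pg) * (g - pg) +
                 (long)(b - pb) * (b - pb);
        if (best_d < 0 || d < best_d) { best_d = d; best = i; }
    }
    return best;
}

// Per-frame memoization: most source colors repeat many times, so the
// hash lookup is the common case and the scan is the rare one.
struct frame_matcher {
    std::unordered_map<uint32_t, int> cache; // keyed by 24-bit RGB
    int match(uint32_t rgb) {
        auto it = cache.find(rgb);
        if (it != cache.end()) return it->second;   // cache hit
        return cache[rgb] = nearest_index(rgb);     // miss: compute, memoize
    }
    void end_frame() { cache.clear(); } // discard all at once per frame
};
```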
Naturally, for a web site things look much different, and the considerations change. Your cache-hit algorithm probably won't be as ideal as in my example, simply because so few real workloads closely match a general algorithm's design.
At the end of the day though, you don't need a silver bullet to make it worthwhile, luckily for us - you just need to win more than you lose, once all the chips are counted.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
honey the codewitch wrote: At the end of the day though, you don't need a silver bullet to make it worthwhile, luckily for us - you just need to win more than you lose, once all the chips are counted.
This. So much this.
|
|
|
|
|
honey the codewitch wrote: When your CPU core(s) aren't performing tasks, they are idle hands.
When your RAM is not allocated, it's doing no useful work. (Still drawing power though!)
While your I/O was idle, it could have been preloading something for you.
Sounds like the wife complaining about her hubby.
|
|
|
|
|
Anything above 80-85% utilization will quickly start thrashing that particular resource. Up to that point you're spot on.
|
|
|
|
|
I wouldn't say *anything*, but I do hear you.
Certainly thrashing is a concern with something like virtual memory, but I'm not even necessarily talking about vmem here. With the memory example, my point was simply about a hypothetical ideal.
It takes the same amount of power to run 32 GB of allocated memory as it does 32 GB of unallocated memory, so if you're not using that memory for something, it is, in effect, being wasted. In the standard case this would be an OS responsibility, and if an OS wanted to approach that ideal, it might use something like an internal ramdisk to preload commonly used apps and data. May as well - it's not being used for anything else, and if you run low, you just start dumping the ramdisk, and only once it's gone do you go to vmem.
Something like that. It's just one idea; there are a million ways to use RAM.
I/O (to storage) is really where your thrashing occurs, and historically it was literal thrashing, due to the moving parts involved, even though that's often no longer the case.
But again, the idea is that in an ideal "typical" situation, an OS would manage that, run any preloads at idle time, and make them lower priority than anything else.
In effect, as long as everything you're doing on top of idling is basically "disposable", thrashing won't be much of a concern.
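The "lower priority than anything else" part is directly expressible on Windows: a thread can put itself into background mode, which deprioritizes both its CPU and its I/O. A small sketch (the preload itself is just illustrative):
```cpp
#include <windows.h>
#include <fstream>
#include <iterator>
#include <vector>

// Worker that reads a file while in background mode, so its CPU and
// I/O are scheduled below everything else. The path is illustrative;
// launch with CreateThread(nullptr, 0, preload_worker, path, 0, nullptr).
DWORD WINAPI preload_worker(LPVOID param) {
    const char* path = static_cast<const char*>(param);
    // Lowers both CPU and I/O priority for the current thread (Vista+).
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);
    std::ifstream in(path, std::ios::binary);
    std::vector<char> warm((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
    // The buffer is discarded, but the file's pages stay warm in the
    // system cache, which is the point of the preload.
    return static_cast<DWORD>(warm.size());
}
```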
The CPU is a bit of an animal, in that you'll need about 10% of it to run the scheduler effectively, and without that, everything else falls apart. So yeah, with a CPU it's more like 80-90% utilization, although 100% is acceptable for bursts.
In any case, I worded my post carefully: the CPU should be utilized when it has something to do. It's not that I'd necessarily want to "find" things for it to do the way I would with RAM. It's that when it does need to do something, it expands like a lil puffer fish and throws all of its threading power at the task - again, in the ideal scenario.
The reason for the discrepancy versus RAM is power. RAM draws the same power regardless, while a CPU's power varies with its task, so it should be allowed to idle when that makes sense.
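To illustrate the puffer-fish behavior generically (my sketch, not tied to any particular workload): spin up one worker per hardware thread only while there's a task, then let everything go idle again:
```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Burst a data-parallel task across every hardware thread, then join.
// The CPU spikes toward 100% only for the duration of the work and
// drops back to idle afterward; no busywork is invented to keep it "busy".
template <typename Fn>
void burst_for_each(std::size_t n, Fn fn) {
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 1;  // the call may report 0 if unknown
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([=] {
            for (std::size_t i = w; i < n; i += workers) // strided split
                fn(i);
        });
    }
    for (auto& t : pool) t.join();  // burst over; the cores idle again
}
```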
I hope this clears things up rather than making it worse.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Maybe I'm misinformed, or maybe DDR5 does something previous RAM doesn't to save power. Neither would surprise me.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
I am pretty sure all RAM requires more energy to read/write than to just sit powered up, similar to an SSD or NVMe drive.
|
|
|
|
|
My understanding is that DRAM needs constant, periodic refreshing to maintain its data:
Memory refresh - Wikipedia[^]
So it's not the act of reading or writing that draws the power, the way it is on an NVMe. It works kind of like an LCD does, in that the charge is sent to the panel over and over with whatever the data is at that point; with DRAM, each row typically gets refreshed on the order of every 64 ms.
In effect, the writes are always happening regardless, at least to my understanding.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|