I used to have connections, but why don't we just improve the plan? We need a big ship that seems unsinkable, with security doors that help only if the water does not get over a certain mark and trap many people inside. Too few lifeboats would also help. Has anyone rebuilt the Titanic yet? Let's treat all those guys to the maiden voyage.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
A CEO, a lawyer, and a priest, friends since college, are out on a boat deep-sea fishing. A squall rolls in and the boat capsizes, leaving the three men treading water. Fins start circling around them, hungry sharks.
The next day a Coast Guard cutter is performing search-and-rescue when they find the lawyer, the lone survivor, floating amongst the circling sharks. A young seaman asks the captain, "Sir? How did he survive? Why didn't the sharks eat him like the other two?" The captain replies, "Professional courtesy."
There was another question (in General Programming) regarding RAM extension from 8 to 16 GByte. I was tempted to add a comment: Well, if you can't make it work, then relax: You need no more than 8 GByte!
I didn't make the comment. This user might belong to that tiny little fraction of users actually running applications with huge working sets. Every now and then, when meeting someone in person (not on the net) claiming that they "must" have 16 GByte, or their PC will be as slow as a turtle, I ask them: Show me! Reboot your PC, start the programs you usually have active, and let's see what Resource Monitor says is 'In Use'! ... OK, open some large files, maybe the data segments will grow huge ...
Most of those I challenge have to dig up every piece of software they've got to come even close to the 8 GByte mark. And we all know that lots of the 'In Use' pages contain initialization code/data, or rarely used functions/data: If they were paged out, maybe a 'print' operation would be delayed by 30 or 40 milliseconds, but who would notice?
The 'real' working set is usually significantly lower than even the 'In use' size.
To prove my point, I wish I had a utility that could cause Windows to flush all its pageable memory and clear the cache/'Standby', so that all pages would have to be brought in from backing store. And 30 seconds after the cleanup operation, all active pages in memory would be those actively referenced the last 30 seconds, giving a reasonable idea about the real working set size. Your real RAM requirements.
Will Windows allow such RAM cleanup? (In other words: Is there an API for requesting such operations?)
If the answer is 'yes', has anyone created any utility to do it?
Preferably, the utility should be a service running cleanup at regular, configurable intervals, and it should be able to log 'In Use', 'Modified' and cache/'Standby' sizes, both before and after the cleanup. A switch for flushing modified pages as part of the cleanup would be a bonus. I take for granted that Administrator privileges are required to run it; that is OK.
If such a utility is available, I suspect that lots of people would be in for a surprise.
(Note: I am not talking about huge computer centers with true experts keeping the hardware configuration tuned at all times, but about hobbyists and small scale developers - and also developers in larger companies insisting that they need 16 or 32 GByte on their frontend PCs to edit files using vi, for compilation on the company build server park.)
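To make it concrete, here is a minimal sketch of the kind of cleanup I have in mind, using only documented calls. Note the assumptions: EmptyWorkingSet (psapi) only trims per-process working sets; actually purging the cache/'Standby' list is what tools like Sysinternals RAMMap do, through an undocumented interface, so that part is left out here, and the file name is just my own suggestion.

    // cleanup_sketch.cpp - trim every process' working set, then log RAM use
    // before and 30 seconds after. Needs Administrator rights to reach other
    // users' processes. Build with MSVC:  cl cleanup_sketch.cpp
    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>
    #pragma comment(lib, "psapi.lib")

    static void LogMemory(const char* label)
    {
        MEMORYSTATUSEX ms = {};
        ms.dwLength = sizeof(ms);
        PERFORMANCE_INFORMATION pi = {};
        if (GlobalMemoryStatusEx(&ms) && GetPerformanceInfo(&pi, sizeof(pi)))
            printf("%-10s load %lu%%, available %llu MiB, system cache %llu MiB\n",
                   label,
                   (unsigned long)ms.dwMemoryLoad,
                   (unsigned long long)(ms.ullAvailPhys / (1024 * 1024)),
                   (unsigned long long)pi.SystemCache * pi.PageSize / (1024 * 1024));
    }

    int main()
    {
        LogMemory("Before:");

        DWORD pids[4096], bytes = 0;
        if (!EnumProcesses(pids, sizeof(pids), &bytes)) return 1;

        for (DWORD i = 0; i < bytes / sizeof(DWORD); ++i)
        {
            HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_SET_QUOTA,
                                   FALSE, pids[i]);
            if (h)
            {
                EmptyWorkingSet(h);   // evict the process' pages from its working set
                CloseHandle(h);
            }
        }

        Sleep(30 * 1000);             // give actively used pages time to fault back in
        LogMemory("After 30s:");
        return 0;
    }

Run by hand, the 'In Use' figure in Resource Monitor 30 seconds after the trim should give a much more honest picture of the real working set than the steady-state number.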
I admit I only read half of your post, but I'd like to point out that if the page file is enabled, then the RAM use will *never* (OK, rarely) go above approx 50 to 60%. (Not scientific. Those are just my observations.)
If you really want to see how much RAM you require for your applications, temporarily disable the page file and then do the test.
The difficult we do right away...
...the impossible takes slightly longer.
Also, some systems -- such as SQL Server -- can be configured to carve out a huge chunk of RAM for their exclusive use.
For a backend server, that may be a good idea. A similar decision may be made for disk storage: A single huge contiguous file is allocated, and the DBMS manages the space its own way, rather than, say, creating individual files for each table (relation).
Then again: If you configure your DBMS to carve out a huge chunk of RAM for its exclusive use, then you know what you are doing. (Or possibly: You don't know what you are doing ... ) I'd expect that from highly qualified DBMS managers at a computing center. If the default configuration of a DBMS for home or small office use reserves several gigabytes of RAM for itself, I might question that decision.
(Note: I do not question a qualified DBMS manager who allocates 75% of the RAM to the DBMS of the computing center!)
Sure, I can disable the page file, but that won't let me distinguish between RAM pages actually accessed and those occupying physical RAM only because no one has required them to be purged to free up space. Maybe such a page was last referenced ten minutes ago, and never will be referenced again in this program run.
I see no logical reason why enabling the page file would keep usage from going "above approx 50 to 60%" if there really is a need for more RAM. (To be frank: Some of those who claim that they "must" have 16 GByte RAM, but are unable to give a demo showing more than 8 GByte in use, might be likely to make claims in that direction: The reason they don't go above 8 GByte is that Windows won't let them, but rather flushes pages to the paging file. If that were true, what would be the benefit of spending money on the upper 8 GByte of RAM?)
That looks a lot like what I was looking for. Thanks!
I do not immediately see any way to run it as a service or scripted, but at least I can crank it by hand. (I've already had a few surprises, of the kind 'Why the elephant is that file occupying RAM space??')
For what it's worth, Windows uses unallocated RAM for disk cache. For most folks, that's probably where they see a speed improvement with more RAM.
I've observed this with the servers we use for software builds at work. Multiple processors or higher core counts reduce build time somewhat, mainly during C++ compiles and linking. Adding RAM reduces build times substantially throughout. I'll admit we get the most bang for our buck from that combined with hardware RAID. Our fastest build machine has 64 GB of RAM plus twenty-four 1 TB hard drives configured as a RAID-5 array. Our largest product, which required nearly 100 minutes to build on the previous server, now builds in 12 minutes.
That could serve as an explanation, if file I/O really were a bottleneck. But if you keep an eye on the file I/O load, for most users it is very low.
Usually, a cache shows its strength when the same piece of data is referenced repeatedly, as with the CPU cache between the CPU and RAM: Lots of code references the same instructions and variables lots of times. For file I/O, repeated accesses to the same disk pages are far less common: Data is read into a buffer in the application, as a variable, and (maybe) repeatedly accessed there. The page is retrieved once from disk.
The disk cache has some effect if you read a huge file sequentially, but request only one page at a time: If the (logically) subsequent pages are located in the same extent, i.e. on the subsequent physical disk addresses, the OS may choose to read multiple pages in a single read operation, buffering them until your program asks for the data.
This is what the RAM buffer on the disk unit does, without any assistance from the OS. Most of this fruit has already been picked. Sure: The OS may have a far larger buffer, but most files are small (compared to 16 GByte RAM), and prefetching beyond the end of the file doesn't make sense. Most huge files contain real time video. When played, prefetching won't increase video playback speed, and after the video has been played back, there is no use keeping it buffered in RAM.
As I pointed out in my first post: A server park running multiple jobs may be able to fully utilize a large memory. What I am talking about is the "Mine is larger than yours!" arguing from people who insist that their single-user desktop PC simply must have 16 GByte to edit those files that will be built on that backend build cluster. If this was limited to boys in early teenage years comparing their gaming machines, I would simply give it a laugh. Fact is that everybody and his grandma "knows" that the machine simply demands 16 GByte RAM. Computer center operators may be right, but very few others. (Note that I say "very few", I do not say "no" - but I am dead serious about "very few".)
Do you think it's possible the reason file I/O is very low is because I/O is being satisfied from cache a large amount of the time?
It seems like Windows would have separate performance counters for those two cases.
I could see performance improvements from a large cache even for typical consumer apps like web browsers: video, images, large data chunks, and so on. The files enter the cache when downloaded, and then are read from cache during the short-term period of use.
I'm not arguing here, by the way; it's just interesting to ponder.
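For what it's worth, the two cases can be watched programmatically. A small sketch, assuming the standard Memory object counters are good enough proxies: 'Page Faults/sec' counts all faults (most of which are resolved in RAM), while 'Page Reads/sec' counts the ones that actually went to disk, so the gap between the two is roughly what the cache and standby list are absorbing.

    // fault_counters.cpp - sample two Memory counters once per second for a minute.
    // Build with MSVC:  cl fault_counters.cpp
    #include <windows.h>
    #include <pdh.h>
    #include <cstdio>
    #pragma comment(lib, "pdh.lib")

    int main()
    {
        PDH_HQUERY query;
        PDH_HCOUNTER faults, reads;
        if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS) return 1;
        PdhAddEnglishCounterW(query, L"\\Memory\\Page Faults/sec", 0, &faults);
        PdhAddEnglishCounterW(query, L"\\Memory\\Page Reads/sec", 0, &reads);

        PdhCollectQueryData(query);            // rate counters need two samples
        for (int i = 0; i < 60; ++i)
        {
            Sleep(1000);
            PdhCollectQueryData(query);
            PDH_FMT_COUNTERVALUE f, r;
            PdhGetFormattedCounterValue(faults, PDH_FMT_DOUBLE, NULL, &f);
            PdhGetFormattedCounterValue(reads,  PDH_FMT_DOUBLE, NULL, &r);
            printf("faults/s %8.0f   disk reads/s %8.0f\n",
                   f.doubleValue, r.doubleValue);
        }
        PdhCloseQuery(query);
        return 0;
    }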
Do you think it's possible the reason file I/O is very low is because I/O is being satisfied from cache a large amount of the time?
Accessing disk in large chunks is certainly more efficient than doing it in small chunks, whoever does it (the application, OS, disc driver or disc firmware). As long as you access a contiguous sequence of disk sectors, the time cost and disk load is almost constant, independent of data volume. Before disks with RAM caches were common, before the OS did much buffering, your high performance application might read 64 KiByte at a time and gain a lot of speed. (And for that purpose, keeping your FAT disk defragmented was essential!)
DOS did no buffering worth mentioning; the upper 384 KiByte of the 1 MiByte address space was reserved for BIOS and adapters, leaving 640 KiByte to DOS and the application (a well-known fellow allegedly considered that it "ought to be enough for anyone"...).
The law of diminishing returns soon comes into play. Reading beyond the end of the file fragment is a waste (your application or OS won't do it; the disc cache has no awareness of fragment limits). If you have flagged your NTFS file as encrypted or compressed, it is processed in 64 KiByte chunks anyhow. At least some RAID solutions do striping in 64 KiByte chunks. Quite a few files - by number, not by total volume - are less than 64 KiByte in size, or not very much more (in particular in software development environments). The performance benefit of reading in chunks of up to 64 KiByte may be significant, but for overall system performance, it drops off rapidly beyond that.
Today, RAM is so cheap that we uncritically buffer, whether beneficial or not. The benefit of OS prefetching (i.e. transferring large chunks) has diminished a lot the last few years, due to a couple of other fairly recent (on a historical scale) developments:
Nowadays, most system disks (and almost all new ones), and an increasing share of data disks, are solid state - still slower than RAM, but the factor is more like one to ten, rather than one to ten thousand. If you turn off all buffering and always read a single page at a time, the slowdown on a flash disk would hardly be noticeable in application performance; speed would be almost the same as before.
Second: Most new magnetic discs have on-disk RAM buffers, reading an entire track (or a significant portion of it) into their own RAM, whether asked to or not. On the next single-page request from the OS, data goes from one RAM buffer (in the disc) to another RAM buffer (in OS managed memory), at a speed usually limited only by the disk interface.
Certainly: If your application makes 16 single-page (4 KiByte) disc accesses rather than a single 64 KiByte access, management work done by the CPU is higher. If the OS doesn't find the pages in its own buffer, it may have to make up to 16 separate disc accesses. This takes some CPU capacity as well. Yet you never see the CPU load rocket when you access the disk. CPU load is insignificant.
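If anyone wants numbers for their own disk, here is a quick-and-dirty timing sketch. The file name "testdata.bin" is only a placeholder for any large local file; FILE_FLAG_NO_BUFFERING bypasses the OS cache so the comparison measures the disk path itself, and the aligned VirtualAlloc buffer satisfies its alignment requirement.

    // chunk_timing.cpp - read the same file in 4 KiB and 64 KiB chunks,
    // bypassing the OS cache, and report the elapsed time for each.
    #include <windows.h>
    #include <cstdio>

    static double ReadWholeFile(const char* path, DWORD chunk)
    {
        HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING,
                               FILE_FLAG_NO_BUFFERING,   // skip the OS cache
                               NULL);
        if (h == INVALID_HANDLE_VALUE) return -1.0;       // file not found etc.

        // VirtualAlloc returns a buffer aligned far beyond sector size.
        void* buf = VirtualAlloc(NULL, chunk, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);

        DWORD got = 0;
        while (ReadFile(h, buf, chunk, &got, NULL) && got > 0)
            ;                                             // just read, discard the data

        QueryPerformanceCounter(&t1);
        VirtualFree(buf, 0, MEM_RELEASE);
        CloseHandle(h);
        return double(t1.QuadPart - t0.QuadPart) / freq.QuadPart;
    }

    int main()
    {
        printf("4 KiB chunks : %.3f s\n", ReadWholeFile("testdata.bin", 4 * 1024));
        printf("64 KiB chunks: %.3f s\n", ReadWholeFile("testdata.bin", 64 * 1024));
        return 0;
    }

On a drive with an on-board cache, a second run over a smallish file may show hardly any difference between the two chunk sizes, which is exactly the point above.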
Opening or creating a file may require quite a few disc accesses, for accessing / updating its directory, reading or allocating an MFT entry, updating the allocation bit map (create, write), ... These are file system structures that the OS repeatedly accesses, and can benefit from caching. But they are OS owned data, not user data.
I could see performance improvements from a large cache even for typical consumer apps like web browsers: video, images, large data chunks, and so on.
Tuning web caching may be quite different from tuning disc caching, but they do share some characteristics. For video and large data chunks: How often do you watch that same video again, while it is still in memory? I'd say: Not very often. You download some huge software - say, a new OS image. How often do you repeat that download before the first one is out of the cache? Web caching saves a lot of tiny little transfers, such as logos or icons used on every page presented by a web site. But first: The cache is maintained in the file system, by the browser - not in RAM by the OS.
Second: HTTP allows an expiry time for a chunk of data (such as a logo), but many web sites are lazy at setting this properly, so web browsers commonly make a request anyway, asking if the logo has been updated recently. If it hasn't, there is no need to transfer those two hundred bytes again. Maybe it took a couple thousand bytes to save the transfer of two hundred...
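(The revalidation exchange looks roughly like this; the host, date and ETag value are of course made up:)

    GET /img/logo.png HTTP/1.1
    Host: www.example.com
    If-Modified-Since: Tue, 01 Mar 2016 08:00:00 GMT
    If-None-Match: "5d41e2"

    HTTP/1.1 304 Not Modified
    ETag: "5d41e2"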
Images are somewhere in between (and they are usually cached by the browser): I have tried timing web newspaper front pages with a couple dozen photos, before and after a complete cleaning of the browser cache. The first access was measurably slower, when all the logos and icons had to be retransferred. On the second access to the front page, the speed was the same as before the cleanup. So the caching obviously gave some speedup. Not much, but measurable.
Yet, the trend today is exactly the opposite: Lots of sites presenting scores of images in a huge display deliberately do not fetch all of them, but only those currently visible in the browser window. Do not waste resources on retrieving anything that might not be needed after all!
This may be justifiable, considering the speed of an Internet connection vs. the speed of a flash disk transfer. Also, the probability of a disk file user accessing the entire file may be higher than that of a web user wanting to see the complete display of all pictures. And the difference between an application (the web browser) managing a cache in disk files, vs. the OS managing a cache in RAM, is quite significant. So several considerations regarding caching cannot be directly transferred from the one area to the other.
Back in the days when I used Windows for development, I saw very good improvements from using a large amount of memory - so the last Windows machine I built had 8 GB for each core (64 GB in total)... It gave me the ability to run virtual machines for each server (SQL, IIS, app server and so on) with maximum efficiency... Memory usage did go up to 85-95% at full load, but the machine remained very fast.
It is probably true that I could have done some memory optimization to recycle unused memory and save some, but in those days memory was far too cheap to be worth the bother...
About two years after I built that monster I moved to Linux and things changed dramatically. Even in the first phase, when I still ran some Windows in a VM, I could drop half of the memory; but today, with no traces of Windows left, I rarely go over 4 GB (!!!), while still doing mostly the same things...
But! At work I have Windows with 16 GB, and while developing I do hit memory problems occasionally... It seems that some software just eats up memory without a second thought... The main offenders are VS (which can eat several GB of memory when left open overnight) and node.js (used to compile Angular on the fly)...
So if there is a problem with memory, it is that some of us have come to see memory as an endless resource that need not be taken care of... And this is something you see from the OS level all the way up to the end product... It is very simple to write a few lines of code that will use any memory they can get, as the toy below shows.
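A deliberately silly toy to illustrate the point - nothing from a real product, just a loop that grabs and touches memory until the OS refuses:

    // gobble.cpp - grab and touch memory until allocation fails.
    // Do not run this on a machine doing real work; with a page file
    // enabled it will happily push everything else out.
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <vector>

    int main()
    {
        std::vector<char*> blocks;
        const size_t chunk = 64 * 1024 * 1024;   // 64 MiB at a time
        for (;;)
        {
            char* p = static_cast<char*>(malloc(chunk));
            if (!p) break;
            memset(p, 0xAB, chunk);              // touch every page so it really becomes resident
            blocks.push_back(p);
            printf("holding %zu MiB\r", blocks.size() * 64);
        }
        printf("\nallocation failed after %zu MiB\n", blocks.size() * 64);
        return 0;
    }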
"The only place where Success comes before Work is in the dictionary." Vidal Sassoon, 1928 - 2012
PC performance is a great market for placebo and snake oil. Lots of myths, lots of lack of understanding. And very few actual measurements to verify any claims.
One of my friends insists that he must vacuum clean his PC regularly, to keep it from slowing down. He also has a theory to explain it: Dust accumulates on the fans, making them less effective, so the CPU lowers the clock speed to avoid overheating. I once challenged him to do some real speed measurements before and after the spring clean: Of course there was no measurable difference. Seeing the results, he claimed that this time, the fans were far less dusty than they used to be; he couldn't explain why. But that must be The Explanation for the measurements not supporting his subjective feeling of higher PC performance after vacuuming.
History has seen numerous similar reports. The first lengthy quarrel I remember is from back in the days of the 286 / 386sx, when floating point arithmetic was delivered as a separate chip (287 / 387): This guy had upgraded his PC with a 387 and insisted stubbornly that boot-up was now much faster. Guys from MS shook their heads: No. The boot process, and Windows in general, made no use of floating point whatsoever, so adding a 387 could in no way affect the speed of any Windows code! (Maybe Windows today uses FP - at that time, it didn't.)
When hearing statements like "I have a feeling that the PC is now a lot faster", I usually just nod and make no further comment. If there is an indisputable speedup, I would like to investigate the entire machine, both before and after the upgrade/modification: You may claim some explanation that turns out not to be the real reason. Say, if your VM software actually manages to utilize all eight cores fully, while the setup of the non-virtualized machine for all practical purposes runs everything on a single core, then 64 GByte of RAM may not be the real reason for the speedup. Maybe the speed would be the same with 32 GByte of shared RAM (not statically distributed among the cores).
But again: As I said in my first posting, computing center experts will know how to monitor and balance resources. I am not talking about those users. If you have 8 VMs on your PC, each running some heavy server, then you are halfway to a computing center. You are far beyond that single-user desktop PC and some non-computer-professional (or one who essentially knows application programming, not system tuning) who has been told that his PC will perform so much better if he doubles the RAM. There are a hundred times as many users in that group as there are users who know how to use system monitoring and tuning tools.
Finally: From your description, it looks like you used to run 8 VMs, each requiring 8 GByte. Today you run the same 8 tasks, with an average of half a GByte each, and you have the same performance. Did you ever consider that maybe the Windows performance might have been the same with far less RAM? Did you ever reduce the amount of RAM to, say, half as much, and watch whether paging went through the ceiling? Most PCs have an LED that flashes when a physical disk transfer is made. If it flashes only every now and then, you do not have excessive paging. (When you start a new application, there is of course disc activity to get the code and data segments into memory!)