|
Bring up the properties dialog for C:\Windows and you will see the count of files and folders steadily increasing, which leads me to believe that internally it is also using the FindFiles set of APIs.
I suppose if I run a bare-bones loop which only increments a counter, it should only take a second or two.
Waldermort
|
|
|
|
|
What do you mean by "bare-bones loop"?
Just doing FindFirst and FindNext to determine the end of the list and doing nothing more than increase a counter? Or is it something special? It's the first time I've heard of it.
Greetings.
--------
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
|
|
|
|
|
Nelek wrote: What do you mean by "bare-bones loop"?
Just doing FindFirst and FindNext to determine the end of the list and doing nothing more than increase a counter?
That's exactly what it means.
Waldermort
|
|
|
|
|
Well, I think that it is not possible to get the file/folder count directly.
But what is possible is the used space.
You can use that for showing the progress.
Every time you handle a file, subtract the file size from the total size.
The progress will not be smooth, but at least the user can get some idea of how long the process will take.
Hope this will work. Good Luck.
|
|
|
|
|
This theory won't be 100% reliable. There are some objects that consume disk space but are not accessible via APIs like FindFirstFile().
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
Hello,
you may count the number of files (with the APIs you mention) and put the file paths into a collection, and then run a loop that performs the tasks and shows the progress.
If the number of files is large and you don't want to use a collection, do the process twice: in the first pass, count the number of files, and in the second, perform the tasks.
Hope this helps.
Bekir.
|
|
|
|
|
The problem is that if the 1st pass takes 10 minutes, the user's PC will look like it's hung!
---
Yours Truly, The One and Only!
devmentor.org
Design, Code, Test, Debug
|
|
|
|
|
Here's a way-out-in-left-field idea. Start a secondary thread that does nothing but count files. After it has been running for a few seconds, commence processing the files. After each file is processed, update the progress indicator. This will fluctuate at first, but will eventually smooth out once the secondary thread has counted all of the files. Make sense?
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
It seems like that would be the only visually pleasing method of doing this.
Through testing I have found that a bare-bones file count takes between 5 and 10 seconds (on my C: drive, 22GB of used space, bloated to the depths of hell with nothing but Windows Vis-I'm-gonna-make-you-spend-money-ta log files).
I could also implement the used-space count, which as you stated is a lot higher than what is accessible. This way the progress would have a meaningful start, and when the second thread has finished, the actual file count could be used for completion. I will have to test this, but I really doubt there would be any noticeable effects, especially on volumes with a high file count.
Waldermort
|
|
|
|
|
WalderMort wrote: I could also implement the used-space count, which as you stated is a lot higher than what is accessible.
I did a test on my machine and found that I could only account for 25.3GB of the 28.2GB that is used. That's 2.9GB that something is consuming. The net result was the progress indicator finished at 90%.
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
When you first format an NTFS volume, some 12% of it is pre-allocated for the MFT (which the FindFile APIs can't see). Try running your test again, but manually open "C:\$MFT" and "C:\$MFTMirr" and account for their sizes.
The other NTFS-specific files are small enough to forget about.
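For reference, the MFT's size can also be read programmatically with FSCTL_GET_NTFS_VOLUME_DATA on a volume handle, instead of opening C:\$MFT by hand; a minimal sketch, which requires administrative rights to open the volume:

```cpp
#include <windows.h>
#include <winioctl.h>
#include <cstdio>

int main()
{
    // Open the raw volume, not a file on it.
    HANDLE hVol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
    if (hVol == INVALID_HANDLE_VALUE)
        return 1;

    NTFS_VOLUME_DATA_BUFFER nvd;
    DWORD bytes = 0;
    if (DeviceIoControl(hVol, FSCTL_GET_NTFS_VOLUME_DATA, NULL, 0,
                        &nvd, sizeof(nvd), &bytes, NULL))
    {
        std::printf("MFT valid data: %lld bytes\n",
                    (long long)nvd.MftValidDataLength.QuadPart);
        std::printf("Bytes per cluster: %lu\n",
                    (unsigned long)nvd.BytesPerCluster);
    }
    CloseHandle(hVol);
    return 0;
}
```

The same structure also reports the MFT zone boundaries, which a defragger needs to stay out of.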
Waldermort
|
|
|
|
|
WalderMort wrote: Try running your test again, but manually open "C:\$MFT" and "C:\$MFTMirr" and account for their sizes
c:\$mft = 125.5MB
c:\$mftmirr = 4KB
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
Well, that accounts for some 5%...
That's got me thinking now: what does Windows need all that space for? Just out of curiosity, how does your test work?
Waldermort
|
|
|
|
|
WalderMort wrote: Just out of curiosity, how does your test work?
Nothing special. I just used a CFileFind object to round up the sizes of all the files on the C: drive.
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
Ahh, did you take into account that file size on disk is usually higher than what CFileFind reports? Also, many of the smaller files are stored inside the MFT rather than wasting cluster space.
Through tests, I have found that some 30% of my C: volume contains files that CFileFind cannot find.
I think the only way to find out what those files are is to read the MFT directly and compare it to the results of CFileFind.
Waldermort
|
|
|
|
|
WalderMort wrote: did you take into account that file size on disk is usually higher than what CFileFind reports?
Yes, I accounted for slack space.
WalderMort wrote: Also many of the smaller files are stored inside the MFT rather than waste cluster space.
I thought of that, but I've got files that are a few bytes in length, and they are found by CFileFind .
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
DavidCrow wrote: I thought of that, but I've got files that are a few bytes in length, and they are found by CFileFind.
CFileFind will return the size regardless of which part of the disk it is stored on. A more realistic method would be to pass each file to DeviceIoControl and count the cluster/fragment usage. Zero fragments indicates that it is stored in the MFT.
Also, I have just accounted for a further 4GB by giving myself access rights to "System Volume Information". If you know of a way to do this programmatically, I would be glad to hear it.
My next step is to call CreateFile for every file/folder found; if it fails, I will attempt to adjust the access rights. This is where CFileFind fails the most: it gives no indication of when it skipped a file/directory simply because it had no rights.
Waldermort
|
|
|
|
|
WalderMort wrote: ...giving myself access rights to the "System Volume Information".
How?
WalderMort wrote: My next step is to call CreateFile for every file/folder found, if it fails then I should attempt to adjust the access rights.
Going back to your "...for each file I then perform a few tasks." comment, do you need to perform said tasks on those files for which you have insufficient access?
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
DavidCrow wrote: How?
A simple command prompt[^]
But this method is not temporary; the change is permanent.
DavidCrow wrote: comment, do you need to perform said tasks on those files for which you have insufficient access?
I'm creating a defragger, so I would prefer to have access to as many files as possible, especially those on the Windows install volume.
Waldermort
|
|
|
|
|
WalderMort wrote: I'm creating a defragger,
There might be something here you can use.
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
Thanks, that may come in handy. At a glance it works similarly to my method, but I have also built in a DiskMap viewer.
Waldermort
|
|
|
|
|
WalderMort wrote: Also, I have just accounted for a further 4GB by giving myself access rights to the "System Volume Information".
Same here. It now seems that my numbers have gone the other way. GetDiskFreeSpaceEx() reports 28GB used, but I am counting 30.3GB used by files, a difference of 2.3GB. Hmmm...
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
Haha, sounds like your loop could do with a little refining. Try the DeviceIoControl approach I mentioned earlier; you should get some more realistic values.
Waldermort
|
|
|
|
|
WalderMort wrote: Try the DeviceIoControl approach I mentioned earlier...
Which control code did you have in mind?
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
You would need to use FSCTL_GET_RETRIEVAL_POINTERS. The trouble with this operation is that you don't know how large the output buffer should be. I wrapped the call up in a class with the buffer as a static member; each time the buffer is too small, I double its size and keep calling until successful.
The extent count indicates how many fragments the file has; 0 indicates that it is resident in the MFT. Each extent returned will then tell you how many clusters that fragment has. By adding these up and multiplying by your drive's 'bytes per cluster', you will find out exactly how much space a particular file/folder is taking.
Also be aware of NTFS compressed runs, which are indicated by a starting LCN of -1. I still haven't managed to get my head around what this actually means.
Waldermort
|
|
|
|