That looks wrong to me unless there's only one buffer and the entire recording is done in that one buffer. But then it doesn't make sense because it calls waveInAddBuffer()...
I think it should be
if (WHDR_DONE != (WHDR_DONE & pHdr->dwFlags))
I feel that since the callback receives the WIM_DATA message, checking the header for WHDR_DONE is redundant.
If you look at the WIM_DATA documentation, it states: "The message can be sent when the buffer is full or after the waveInReset function is called." So when waveInReset is called, the buffer will probably be empty and the WHDR_DONE flag will still be set. That's what your check above is tripping over... try switching it to !=.
I finally managed to start, record and stop an audio file.
When I step through the process, the first two "buffers" have no data recorded.
As you suggested, I just return the empty buffer to the queue.
As far as I can tell there is nothing missing in the file and no waveInReset was performed.
But it is really noisy - background noise. I'll play with it to see how to reduce the noise.
If order is important you have to use a "permutation"; otherwise you use a "combination". You have to select 4 characters out of the 36 characters, so 36C4 is what you need: 36!/(32! * 4!), if my memory serves.
If a character may not be repeated, it is a little more involved. Since order matters here, this is a permutation count:

# strings = numCharsToChooseFrom! / (numCharsToChooseFrom - lengthOfGeneratedString)!

4-character string: 37 * 36 * 35 * 34 = 1,585,080
5-character string: 37 * 36 * 35 * 34 * 33 = 52,307,640
Oftentimes, when disk I/O needs to be sped up, there are gains to be made in the way the data is accessed - but not so much (if any) in the function used to get that data.
For example, if you have (say) a million pieces of data stored in a file, it's a better (faster) option to call ReadFile just once and read in all 1,000,000 pieces than to call the function 1,000,000 times to read one piece at a time.
Provided you're running on an OS with protected memory (i.e. not DOS), no - not practically. You could attempt to write some low-level code, but I'd suspect that the folks who wrote that part of Windows are more capable of making it fast than you are.
Are you able to elaborate on the conditions in which you're using ReadFile, and the reasons you've come to the conclusion it's slow? It's very likely that algorithmic improvements can be made.
Usually when reading from a file, one reads in small chunks, like one byte at a time.
If you use a wrapper around ReadFile such as fstream, it will perform read-ahead caching by reading large chunks from the file into memory. When you then read small chunks from the fstream, you are actually reading directly from its internal cache.