Hi All,

I am developing a file download utility using MFC. The utility uses the CHttpFile class to download a file in chunks of 5096 bytes. Sometimes the download fails because of a network outage. After connectivity is restored I want the utility to resume the download from where it was interrupted. The utility knows how many bytes it downloaded previously and calls the Seek() method to reposition the pointer before resuming. The issue is that, after calling Seek(), the next call to Read() takes a very long time to return. The delay is significant when the previously downloaded portion is around 20 MB, and it grows roughly in proportion to that size. Any idea how to resolve this?

NOTE: Upon resuming the download, I open a new internet session. Could this be the culprit?

Thanks in advance...
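
[Editor's note: a minimal sketch of the kind of chunked read loop described in the question, using CInternetSession::OpenURL. The URL, file name, and flags are placeholders, not taken from the post; error handling (CInternetException) is omitted for brevity.]

#include <afxinet.h>

void DownloadInChunks()
{
    const UINT CHUNK = 5096;                 // chunk size quoted in the question
    BYTE buffer[CHUNK];

    CInternetSession session(_T("FileDownloadUtility"));
    CFile outFile(_T("download.bin"), CFile::modeCreate | CFile::modeWrite);

    // OpenURL returns a CHttpFile* for http:// URLs.
    CHttpFile* pHttpFile = (CHttpFile*)session.OpenURL(
        _T("http://example.com/file.bin"), 1,
        INTERNET_FLAG_TRANSFER_BINARY | INTERNET_FLAG_RELOAD);

    // The question's resume path calls pHttpFile->Seek(N, CFile::begin) here
    // before re-entering the loop below.
    UINT bytesRead;
    while ((bytesRead = pHttpFile->Read(buffer, CHUNK)) > 0)
    {
        outFile.Write(buffer, bytesRead);
    }

    pHttpFile->Close();
    delete pHttpFile;
}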
Comments
JackDingler 20-Sep-12 11:12am    
It sounds like the problem is likely with the web server.

Run netmon or a similar utility to track down where the lag is occurring.
Sergey Alexandrovich Kryukov 20-Sep-12 12:07pm    
This is more likely than other reasons. Good advice.
--SA
pasztorpisti 20-Sep-12 12:04pm    
I have no experience with CHttpFile, but I've done file downloads over HTTP with my own socket code. I did the job by specifying a Range header in my HTTP request: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.2
Not every HTTP server supports the Range header, so your GET request might return the whole file even if you specified one, but you can check the status code: the request returns 206 if the Range header was honored, and 200 otherwise along with the full content of the file.
Just guessing, but one possible problem is that the server doesn't support the Range header and your CHttpFile object simulates the seek by downloading the whole file.
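
[Editor's note: a hedged sketch of the same Range-header idea expressed with the MFC WinInet classes rather than raw sockets. The server name, object path, and offset handling are placeholders, not from the post.]

#include <afxinet.h>

void ResumeWithRangeHeader(ULONGLONG resumeOffset)
{
    CInternetSession session(_T("FileDownloadUtility"));
    CHttpConnection* pConn = session.GetHttpConnection(_T("example.com"));
    CHttpFile* pFile = pConn->OpenRequest(CHttpConnection::HTTP_VERB_GET,
                                          _T("/file.bin"));

    // Ask the server for everything from byte resumeOffset onwards,
    // instead of seeking locally after the response starts.
    CString rangeHeader;
    rangeHeader.Format(_T("Range: bytes=%I64u-\r\n"), resumeOffset);
    pFile->AddRequestHeaders(rangeHeader);
    pFile->SendRequest();

    DWORD status = 0;
    pFile->QueryInfoStatusCode(status);
    if (status == 206)
    {
        // 206 Partial Content: the Range header was honored and Read()
        // now delivers data starting at resumeOffset; no Seek() needed.
    }
    else if (status == 200)
    {
        // The server ignored the Range header and is sending the whole
        // file; either restart from zero or skip resumeOffset bytes.
    }

    pFile->Close();
    delete pFile;
    delete pConn;
}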
Sunil P V 21-Sep-12 1:14am    
Well, sorry if I wasn't clear. These are my conditions:
- The file is hosted on a web server.
- The server doesn't have any logic to control the download parameters.
- The download client connects to this URL (the hosted file) and reads 5096 bytes of it every second.
- If the network connection is lost, the client utility saves the total bytes read to the registry.
- Once the network connection is back, the client resumes downloading from the point where it lost the connection. The logic I have implemented for resuming is:
1) Get from the registry the number of bytes (say N) read previously.
2) Call CHttpFile::Seek() to seek to position N in the source file.
3) Keep calling CHttpFile::Read() to continue the download from position N onwards.

From point 3, CHttpFile::Read() takes a long time to read 5096 bytes. If N is around 20 MB, CHttpFile::Read() takes even longer, around 10 minutes.

This is the issue. How can this be resolved? Any pointers?
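
[Editor's note: as an illustration of the bookkeeping in steps 1-3, a small sketch of saving and restoring the byte count with ATL's CRegKey. The registry path and value name are assumptions, not from the post. On resume, the N returned by LoadProgress() is also what would go into the Range header sketched earlier, rather than into Seek().]

#include <atlbase.h>   // CRegKey

// Hypothetical key and value names.
static const LPCTSTR kProgressKey = _T("Software\\FileDownloadUtility\\Progress");

// Called when the connection drops: persist the number of bytes read so far.
bool SaveProgress(ULONGLONG bytesRead)
{
    CRegKey key;
    if (key.Create(HKEY_CURRENT_USER, kProgressKey) != ERROR_SUCCESS)
        return false;
    return key.SetQWORDValue(_T("BytesRead"), bytesRead) == ERROR_SUCCESS;
}

// Called on resume (step 1): read N back; returns 0 when nothing was saved.
ULONGLONG LoadProgress()
{
    CRegKey key;
    ULONGLONG bytesRead = 0;
    if (key.Open(HKEY_CURRENT_USER, kProgressKey, KEY_READ) == ERROR_SUCCESS)
        key.QueryQWORDValue(_T("BytesRead"), bytesRead);
    return bytesRead;
}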
Sunil P V 21-Sep-12 1:16am    
Also, to mention, the file to be downloaded is around 50 MB.
