|
Allocate the back buffer (memory DC and associated bitmap) in the WM_CREATE handler, and resize it in the WM_SIZE handler. When I need to take resizing into consideration, I never shrink the bitmap. That gives an extra speed boost, but is perhaps not optimal in terms of memory. Classic tradeoff.
Don't fire invalidations inside your WM_ERASEBKGND handler. Just return TRUE if your aim is to avoid redrawing the background.
Shutter wrote:
2) When using a custom/ownerdraw for a listbox/listview/treeview, would it be best to use one memory DC for each item as it is being updated, or would it be faster to use a memdc for the entire window and redraw that?
It depends a lot on what you are drawing. If you are drawing simple text and/or an icon, then no memory dc is needed. If it's flickering, it's probably the result of your Invalidate() inside the WM_ERASEBKGND handler.
Shutter wrote:
Is there a trick that I'm not seeing?
The best trick I know of is to draw everything in a bitmap, and then blit it to the screen on WM_PAINT.
Generally I do:
* allocate a bitmap used for drawing, and I do it once (may be resized if the control/window is to be resized)
* all operations that alter the appearance of the window draw to the back buffer; then I invalidate the corresponding window rectangles where the changes occurred
* in the WM_PAINT handler, I just blit the bitmap to screen
To make it as fast as possible, make sure you only blit the portions that need to be repainted.
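The whole scheme can be sketched as a window-procedure fragment. This is a minimal sketch, assuming hdcMem is a memory DC with the back-buffer bitmap already selected into it (names are illustrative; creation in WM_CREATE and error handling are omitted):

```cpp
// WM_PAINT: just blit the prepared back buffer to the screen.
case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hWnd, &ps);
    // Blit only the invalid rectangle, not the whole client area.
    BitBlt(hdc,
           ps.rcPaint.left, ps.rcPaint.top,
           ps.rcPaint.right - ps.rcPaint.left,
           ps.rcPaint.bottom - ps.rcPaint.top,
           hdcMem,
           ps.rcPaint.left, ps.rcPaint.top,
           SRCCOPY);
    EndPaint(hWnd, &ps);
    return 0;
}
case WM_ERASEBKGND:
    return TRUE; // the blit covers the background; skip erasing to avoid flicker
```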
Good music: In my rosary[^]
|
|
|
|
|
Thank you both; that had been bothering me for a while.
|
|
|
|
|
Having problems in my app working with large amounts of data.
I store a load of data using a std::vector of paired doubles.
The app works fine with small quantities of data, but crawls when loading large amounts and displays "out of memory" when dealing with my larger data sets (around 3,456,000,000 doubles).
I tried speeding things up by telling the vector the intended size at the start. That gave a marked speed improvement in the load routine, but it's still painfully slow and it still crashes with larger sets.
What's the best way of handling large amounts of data like this?
--
The Obliterator
|
|
|
|
|
Obliterator wrote:
3,456,000,000 doubles
...is almost 26GB of data.
Obliterator wrote:
Whats the best way of handling large amounts of data like this?
how about a database?
Cleek | Image Toolkits | Thumbnail maker
|
|
|
|
|
<reality check> Hold on a minute! That can't be right!!!
Sorry about that, I copied the wrong value in!!
It's actually around 29,000,000 data points (and possibly, worst case, as many as 115,000,000).
--
The Obliterator
|
|
|
|
|
For large datasets, you absolutely need a large amount of memory.
Check your system to see if you are running short of memory (the system will swap memory out to disk).
Maximilien Lincourt
Your Head A Splode - Strong Bad
|
|
|
|
|
I've got 1GB of memory. I'm sure that's not the problem.
I'm processing what is essentially around 100MB of data. Even if it's swapped out, it shouldn't be as slow as it is!!!
I'm wondering if I need to drop std::vector and just use large arrays allocated with malloc().
--
The Obliterator
|
|
|
|
|
I thought a double was 8 bytes. With 29 million xy points, that's over 400 meg I think.
Anyways, another idea is to define an XYPoint class to encapsulate your pair of doubles, and then store pointers to XYPoint in your vector instead. When you close a dataset, don't delete all of the points - you could return some of them to a free pool (up to 50 meg say) so that the point objects could be reused when loading the next dataset.
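That free-pool idea could look something like the following. This is a minimal sketch under stated assumptions: XYPoint, XYPointPool, and the 2-element cap in the usage are all illustrative names and sizes, not anything from the original post:

```cpp
#include <cstddef>
#include <vector>

struct XYPoint { double x, y; };

// A simple free pool: instead of deleting points when a dataset closes,
// park them (up to a cap) so the next dataset can reuse the allocations.
class XYPointPool {
public:
    explicit XYPointPool(std::size_t maxFree) : maxFree_(maxFree) {}
    ~XYPointPool() { for (std::size_t i = 0; i < free_.size(); ++i) delete free_[i]; }

    XYPoint* acquire(double x, double y) {
        XYPoint* p;
        if (!free_.empty()) {          // reuse a parked object
            p = free_.back();
            free_.pop_back();
        } else {
            p = new XYPoint;           // pool empty: allocate fresh
        }
        p->x = x; p->y = y;
        return p;
    }

    void release(XYPoint* p) {         // call this instead of delete
        if (free_.size() < maxFree_)
            free_.push_back(p);        // park for the next dataset
        else
            delete p;                  // pool is full: really free it
    }

private:
    std::vector<XYPoint*> free_;
    std::size_t maxFree_;
};
```

The win is that closing and reopening datasets stops hammering the heap with delete/new cycles; the pointers themselves still cost extra memory per point, of course.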
|
|
|
|
|
Interesting, I gave this method a try.
I wrapped my doubles in a class, allocated them using 'new', and stored pointers to the objects.
It had a marked improvement in that it processes more of the data, but it still falls over - just further along the dataset.
Thanks for the suggestion though.
--
The Obliterator
|
|
|
|
|
My guess is you are running out of virtual memory still.
Other members' suggestions, such as using an STL list or memory-mapped files, also sound worth looking into.
But I would also consider whether your application really needs to work with the entire dataset in memory all at once. For example, is it possible to allocate a fixed cache of say 1 million points, and read in a million points, process them, write them back out, etc. The general idea is to see if your requirements allow you to load/unload just a portion of the dataset on demand, rather than all at once up front.
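The load/process/write-back loop could be sketched like this. It is a minimal example, not the poster's actual code: the doubling "calculation", the raw-binary file layout, and the chunk size are all illustrative stand-ins:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Process a binary file of doubles in fixed-size chunks instead of
// loading the whole dataset into memory at once.
bool process_in_chunks(const char* inPath, const char* outPath,
                       std::size_t chunkSize)
{
    std::FILE* in  = std::fopen(inPath, "rb");
    std::FILE* out = std::fopen(outPath, "wb");
    if (!in || !out) {
        if (in)  std::fclose(in);
        if (out) std::fclose(out);
        return false;
    }

    std::vector<double> buf(chunkSize);   // fixed cache, allocated once
    std::size_t n;
    while ((n = std::fread(buf.data(), sizeof(double), chunkSize, in)) > 0) {
        for (std::size_t i = 0; i < n; ++i)
            buf[i] *= 2.0;                // stand-in for the real calculation
        std::fwrite(buf.data(), sizeof(double), n, out);
    }
    std::fclose(in);
    std::fclose(out);
    return true;
}
```

Memory use is now bounded by chunkSize regardless of how large the dataset grows.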
|
|
|
|
|
Hello,
I suppose that the data is stored in a file. Maybe you want to use Memory Mapped Files to map certain portions of the file into your process address space. This will not only save load time, but will also save you RAM.
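Sketching the idea with the POSIX API, since it is the most compact to show; on Windows (the platform in this thread) the equivalents are CreateFile, CreateFileMapping, and MapViewOfFile. The file layout (raw doubles) and the function name are illustrative assumptions:

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map a binary file of doubles into the address space and read one value.
// No explicit load: the OS pages in only the parts actually touched.
double read_mapped_double(const char* path, size_t index)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return 0.0;

    struct stat st;
    fstat(fd, &st);

    void* base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                        // the mapping keeps the file accessible
    if (base == MAP_FAILED) return 0.0;

    double value = ((const double*)base)[index];  // page faults in on demand
    munmap(base, (size_t)st.st_size);
    return value;
}
```

In a real viewer you would keep the mapping alive for the session and index into it directly, rather than mapping and unmapping per access.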
Behind every great black man...
... is the police. - Conspiracy brother
Blog[^]
|
|
|
|
|
This could be an excellent solution.
I'd forgotten about the existence of such things!
I'll look into this further.
I have a feeling, though, that it will result in a major rewrite of this module, given its current design!
Thanks
--
The Obliterator
|
|
|
|
|
Obliterator wrote:
Having problems in my app working with large amounts of data.
Whenever you start dealing with large memory systems you need to start thinking about how you access your data, how many times you access your data etc.
I deal with megs to gigs of data on occasion. A flat list of items is not always the most efficient use of memory, and certainly not the fastest: it has no knowledge of the contents, no optimized structure for accessing them. A vector is a dynamically allocating container, so if you store iterators into the data and then over-run your reserve() capacity, all the iterators become invalid and the software can easily crash. So be very careful with your algorithms.
So the first thing I would do is verify you set it up right. Check the size() and capacity() as a diagnostic. If you notice the capacity is still increasing as you run the software, you didn't reserve enough items to hold the data, or your algorithm is using more data than you think (which amounts to the same as not reserving enough).
This could lead you to an iterator that is jumping outside your reserved size. I actually prefer a crash state in debugging because I can find what was happening at the crash and work backwards to find why. It is sometimes long and tedious, but it is at least straightforward.
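That diagnostic can be made concrete by watching the buffer address across push_back() calls. A minimal sketch (the reserved/pushed counts are illustrative):

```cpp
#include <cstddef>
#include <vector>

// Returns true if pushing `pushed` elements after reserve(reserved)
// caused a reallocation, detected by comparing the buffer address.
// Any reallocation invalidates every iterator and pointer into the vector.
bool reallocated(std::size_t reserved, std::size_t pushed)
{
    std::vector<double> v;
    v.reserve(reserved);
    const double* before = v.data();
    for (std::size_t i = 0; i < pushed; ++i)
        v.push_back(0.0);
    return v.data() != before;
}
```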
Obliterator wrote:
Whats the best way of handling large amounts of data like this?
Like I said at the start, this is dataset dependent. Only you know the contents of the raw data inside the vector and how it is intended to be used. There are bintrees, quadtrees, octrees - as many data structures as there are stars in the sky. But not all are suited to each type of data. Octrees work well for spatially oriented data that fit in a 3-dimensional construct. If your data exceeds your core memory, you get into the larger problem of handling paging of out-of-core datasets -- and that is an art of its own.
So before you change your container, first make sure you understand the cause of the crash: what were your size() and capacity() at the crash? Was the capacity() still the same as what you reserved?
Then, if you truly feel a vector is not sufficient for your needs, which may be possible, you have to get into the guts of your data and choose something that is more efficient for access. I've done multi-dimensional lists for storage that gave my professor headaches; of course, at 5 dimensions you are probably trading understanding for access speed. Which is why you always start with the contents of the data. The right tool for the right job works for software as well as the woodshop.
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
I'll do some investigations with size() and capacity(), though I suspect the problem is the way I'm allocating the objects with new, which is causing me to run out of heap memory.
With regards to the data, think of it as a simple 2d graph. I don't need anything fancy. I need to be able to produce calculations based on each point in turn, to rapidly access sections of points within the list and to be able to move both forwards and backwards through the list.
At the time of design, I never considered I would be dealing with so much data. Hence the little thought I put into the design! I'm sure there's a lesson in there for me somewhere
--
The Obliterator
|
|
|
|
|
Obliterator wrote:
I need to be able to produce calculations based on each point in turn, to rapidly access sections of points within the list and to be able to move both forwards and backwards through the list.
Well, I may be reading too much into that... but that sounds like a list container rather than a vector. It may be splitting hairs, but a vector treats the whole data-set as a single item: you can reference the whole thing rapidly, with elements cross-referencing each other -- basically a far more powerful array data-type. Memory is allocated as a whole rather than in pieces, so you must have a large contiguous section of memory. This is why re-allocation of vectors is more difficult. If you have allocated 100,000 items in memory, but access the 100,001st item, the vector container automatically allocates, say, 150,000 items at a new location in memory, copies the previous 100,000 values into the new location, frees the old 100,000-item block (which fragments contiguous memory), and only THEN allows you to access the 100,001st item in whatever operation you had intended. But suddenly you are working with a new location for all items in the vector.
If all you are worried about is going through the data backwards and forwards, or seeking a new location and going backwards and forwards from there, then you can use a list container. The primary difference is that a list container allocates each item independently and links it to those around it. You can move backwards and forwards rapidly; you can "search" for a jump place not so rapidly. It depends on how often you jump to a new location and start calculations again. A strong advantage is that allocation does not fragment memory through constant reallocation and freeing of heap memory. The disadvantage is that you cannot jump to a random item within a reasonable time frame.
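The reallocation-moves-everything point is easy to demonstrate. A small sketch (the capacity of 4 is illustrative; reserve() typically allocates exactly what you ask for, though the standard only guarantees "at least"):

```cpp
#include <vector>

// Exceeding capacity() moves every element to new storage, so any address
// (or iterator) taken before the growth no longer points at the data.
bool element_moved_on_growth()
{
    std::vector<double> v;
    v.reserve(4);
    for (int i = 0; i < 4; ++i)
        v.push_back((double)i);
    const double* first_before = &v[0];  // address of the first element
    v.push_back(4.0);                    // 5th element: forces a reallocation
    return &v[0] != first_before;        // the whole buffer has moved
}
```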
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
Jeffry J. Brickley wrote:
a vector is considering the whole data-set as a single item
Very interesting. I certainly was not aware of that!
I have no need for it to be stored in one contiguous block, providing I can iterate through it and (upon user actions) jump to sections of it (say 1/3rd into the list, for example).
It looks like std::vector is probably not ideal for my needs.
Maybe I should be looking at std::list?
--
The Obliterator
|
|
|
|
|
Obliterator wrote:
It looks like std::vector is probably not ideal for my needs.
Maybe I should be looking at std::list?
There are pros and cons to every container. A list offers faster dynamic extension, a minimal performance hit moving forwards and backwards through your data, and great insertion times, but a larger hit on search/seek/jump type operations. If your data access is random and you need constant rapid access to your data, a vector is the best container: because its memory is uniform and linear in structure, jumping around is VERY fast. If your data access is linear movement, forward or backward, a list makes more sense (not always, but usually). So the question is how often the user will move through the data. If you switch to a list container, make sure you remove any size() diagnostics; your performance will drop significantly using size(), since it basically walks the entire list counting items.
Summary of Vector Benefits

Vectors are somewhat easier to use than regular arrays. At the very least, they get around having to be resized constantly using new and delete. Furthermore, their immense flexibility - support for any datatype and automatic resizing when adding elements - and the other helpful included functions give them clear advantages over arrays.

Another argument for using vectors is that they help avoid memory leaks - you don't have to remember to free vectors, or worry about how to handle freeing a vector in the case of an exception. This simplifies program flow and helps you write tighter code. Finally, if you use the at() function to access the vector, you get bounds checking at the cost of a slight performance penalty.

List Summary

The Good
* Lists provide fast insertions (in amortized constant time) at the expense of lookups
* Lists support bidirectional iterators, but not random access iterators
* Iterators on lists tend to handle the removal and insertion of surrounding elements well

The Gotchas
* Lists are slow to search, and using the size function can take O(n) time
* Searching for an element in a list requires O(n) time because it lacks support for random access
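The list tradeoffs above can be sketched in a few lines: moving an iterator step by step is cheap, but any "jump" is a linear walk. This is a minimal example with an illustrative function name and offsets:

```cpp
#include <cstddef>
#include <iterator>
#include <list>

// Bidirectional movement is O(1) per step on a list; jumping to an
// arbitrary position is an O(n) walk (std::advance), the list gotcha.
double peek_before(const std::list<double>& pts, std::size_t jumpTo)
{
    std::list<double>::const_iterator it = pts.begin();
    std::advance(it, jumpTo);  // O(n) "seek" to the jump target
    ++it;                      // O(1) step forward
    --it;                      // O(1) step back
    --it;                      // ...and back once more
    return *it;                // the element just before the jump target
}
```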
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
It's the reallocations that do it. Please see the vector::reserve() method. Constant push_back()ing will yield a lot of reallocations, which is painfully slow.
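The difference is easy to measure by counting how often the buffer moves. A small sketch (the element count is illustrative; the first allocation of an empty vector is counted as a "move" too):

```cpp
#include <cstddef>
#include <vector>

// Count how many times the buffer moves while push_back()ing n elements,
// with and without an up-front reserve().
std::size_t count_reallocations(std::size_t n, bool useReserve)
{
    std::vector<double> v;
    if (useReserve)
        v.reserve(n);               // one allocation up front
    std::size_t moves = 0;
    const double* last = v.data();
    for (std::size_t i = 0; i < n; ++i) {
        v.push_back(0.0);
        if (v.data() != last) {     // buffer address changed: reallocation
            ++moves;
            last = v.data();
        }
    }
    return moves;
}
```

With geometric growth the count is only logarithmic in n, but each reallocation copies every element already stored, which is what hurts.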
Good music: In my rosary[^]
|
|
|
|
|
Thanks, I'm already using reserve(), which definitely helps speed up the loading process.
But I still have to push_back() each entry into the vector, unless there is an alternative?
--
The Obliterator
|
|
|
|
|
Obliterator wrote:
But I still have to push_back() each entry in the array, unless there is an alternative?
Actually, reserve() only raises capacity(), not size(), so operator[] is still only valid up to size() - 1. If you want to assign by index, use resize() instead of reserve().
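A sketch of the resize() route, which makes indexed assignment valid (the size and fill values are illustrative):

```cpp
#include <cstddef>
#include <vector>

// resize(n) creates n elements up front (value-initialized to 0.0 for
// double), after which operator[] assignment is valid for indices 0..n-1.
// reserve(n) alone would NOT allow this: it changes capacity, not size.
std::vector<double> fill_by_index(std::size_t n)
{
    std::vector<double> v;
    v.resize(n);                 // size() == n, single allocation
    for (std::size_t i = 0; i < n; ++i)
        v[i] = (double)i;        // no push_back, no reallocation
    return v;
}
```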
Good music: In my rosary[^]
|
|
|
|
|
Hi, in order to use Enable3dControlsStatic(), would it be defined in the project settings as a preprocessor definition ("_AFXSTATIC"), or somewhere else? Thanks!
|
|
|
|
|
MFC's AppWizard should have added the call for you automatically. Have you checked the app's InitInstance() method? It typically looks like:
#ifdef _AFXDLL
Enable3dControls();
#else
Enable3dControlsStatic();
#endif
"One must learn from the bite of the fire to leave it alone." - Native American Proverb
|
|
|
|
|
|
Michael Dunn wrote:
The CTL3D functions haven't been relevant since 1995 and are of no use today.
How so? I would say the two 3D functions are relevant for all VC++ v6 applications. With MFC v5, I understand it's built in.
"One must learn from the bite of the fire to leave it alone." - Native American Proverb
|
|
|
|
|
CTL3D is for giving Windows 3.1 apps a 3-D look. (Only buttons look 3-D in Win3.1) All OSes from 95 on and NT 4 on have the 3-D look natively.
--Mike--
Visual C++ MVP
LINKS~! Ericahist | 1ClickPicGrabber | NEW~! CP SearchBar v3.0 | C++ Forum FAQ
Magnae clunes mihi placent, nec possum de hac re mentiri.
|
|
|
|
|