Hi there,
The aim is to prepare for an old-fashioned Windows directory walk with a view to calling RemoveDirectory, aided by the FindFirstFile/FindNextFile family and SetCurrentDirectory.

The variable-length array in question, called folderTreeArray, keeps track of this.

The first vector of the array represents the tree level, the second the folders (possibly not all) found in that level, the third the char folder name array.

C++
wchar_t *(folderTreeArray)[1000][MAX_PATH - 3];
Globals: BOOL topofTree and int treeLevel


The recursive function called RecurseRemovePath (currPathW, folderTreeArray) is outlined as follows:

if (topofTree == TRUE)
{
treeLevel -= 1;
Go down one level and delete directory just visited (the nth directory in folderTreeArray). Set it to {0} and move on to the next directory
}
else
{
treeLevel +=1;
Populate a new tree level in folderTreeArray, i.e. realloc the first dimension of the array by one.
}
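
In plain Win32 terms, and ignoring the folderTreeArray/SetCurrentDirectory bookkeeping and the long-path issue for a moment, the shape I have in mind is roughly this (just a sketch, names and details illustrative):

C++
#include <windows.h>
#include <wchar.h>
#include <stdio.h>

// Sketch only: walk a directory with FindFirstFileW/FindNextFileW, recurse into
// subfolders, then RemoveDirectoryW once the folder is empty. Files are deleted
// too, since RemoveDirectory fails on a non-empty folder.
static void RecurseRemovePath(const wchar_t *dir)
{
    wchar_t pattern[MAX_PATH];
    WIN32_FIND_DATAW fd;
    HANDLE h;

    swprintf(pattern, MAX_PATH, L"%ls\\*", dir);
    h = FindFirstFileW(pattern, &fd);
    if (h == INVALID_HANDLE_VALUE)
        return;

    do
    {
        if (wcscmp(fd.cFileName, L".") == 0 || wcscmp(fd.cFileName, L"..") == 0)
            continue;

        wchar_t child[MAX_PATH];
        swprintf(child, MAX_PATH, L"%ls\\%ls", dir, fd.cFileName);

        if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
            RecurseRemovePath(child);   // descend first, remove on the way back
        else
            DeleteFileW(child);
    } while (FindNextFileW(h, &fd));

    FindClose(h);
    RemoveDirectoryW(dir);              // the directory should be empty by now
}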

The problem is in the treatment of realloc. We'd prefer only the third vector to stay fixed.
Something like

C++
wchar_t *(temp)[1000][MAX_PATH - 3]= realloc((folderTreeArray)[1000][MAX_PATH - 3], treeLevel * sizeof (wchar_t));

doesn't work. The [1000] also looks inefficient. Any suggestions?
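
Perhaps the closest well-formed version would keep only the last two dimensions fixed and grow just the level count, something like the sketch below (assuming the innermost dimension is meant to hold the characters of the name itself rather than pointers); I'm not sure it's the right direction, though:

C++
#include <windows.h>   // MAX_PATH, BOOL
#include <stdlib.h>    // realloc

// Sketch only: folderTreeArray as a pointer to "levels", each level being a
// fixed [1000][MAX_PATH - 3] block of characters; realloc grows the level count.
wchar_t (*folderTreeArray)[1000][MAX_PATH - 3] = NULL;
int treeLevel = 0;

static BOOL GrowTreeByOneLevel(void)
{
    wchar_t (*temp)[1000][MAX_PATH - 3] =
        (wchar_t (*)[1000][MAX_PATH - 3])realloc(
            folderTreeArray, (treeLevel + 1) * sizeof *folderTreeArray);
    if (temp == NULL)
        return FALSE;          // the old pointer is still valid on failure
    folderTreeArray = temp;
    treeLevel += 1;
    return TRUE;
}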
Thanks
Comments
E.F. Nijboer 16-Nov-15 11:26am    
Why not use a linked list?
http://randu.org/tutorials/c/ads.php
Laurie Stearn 16-Nov-15 22:28pm    
Yes thanks, that's an idea. I read your tutorial, great. It's probably a little beyond the scope of my abilities to directly apply that here. Do you recommend any code samples that delve a little deeper?

1 solution

It's possible that you just need to develop a more complicated data structure supporting the semantics of an "array list". Say you have too many elements to justify the use of a linked list with a separate node per element (and one or two pointers in each one), and the speed of addressing an element is a more critical factor than the cost of element insertion, which rarely or never happens. This is a very usual situation. Then you can develop one of the known hybrid models. For example, you can have a linked list of elements, each representing a fairly big chunk of your array. In your array structure, you will also need to support two different lengths: allocated and actually used. In the general case, your last chunk will be only partially used. You can create some initialization for your structure which accepts a custom chunk size, which you can vary in the actual object depending on the usage criteria.

Say, in typical use, the number of linked-list elements may not exceed 30-100, which is not a big burden on your indexing algorithm: you will have a time complexity of O(N) for reaching the required chunk, but inside the chunk it will be a direct pointer calculation, with a time complexity of O(1). When your data grows, you just add another linked-list element, filling up the last chunk first and then continuing into the new one. Overall, this will give you a reasonable trade-off between the flexibility of a growing data size and the performance of element addressing.
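
As a rough illustration of that idea only (the chunk size and names are arbitrary, not a finished implementation):

C++
#include <stdlib.h>
#include <wchar.h>

#define CHUNK_SIZE 256                  // tune to the expected usage

// A linked list of fixed-size chunks; each chunk keeps its allocated capacity
// (CHUNK_SIZE) and the number of slots actually used.
typedef struct Chunk
{
    wchar_t *names[CHUNK_SIZE];
    size_t   used;
    struct Chunk *next;
} Chunk;

typedef struct
{
    Chunk *head, *tail;
    size_t count;                       // total elements across all chunks
} ChunkedList;

// Append is O(1): fill the tail chunk; add a new chunk only when it is full.
static int Append(ChunkedList *list, wchar_t *name)
{
    if (list->tail == NULL || list->tail->used == CHUNK_SIZE)
    {
        Chunk *c = (Chunk *)calloc(1, sizeof(Chunk));
        if (c == NULL) return 0;
        if (list->tail) list->tail->next = c; else list->head = c;
        list->tail = c;
    }
    list->tail->names[list->tail->used++] = name;
    list->count++;
    return 1;
}

// Indexing walks the chunk list (O(N / CHUNK_SIZE)), then does a direct
// O(1) lookup inside the chunk.
static wchar_t *At(const ChunkedList *list, size_t index)
{
    const Chunk *c = list->head;
    while (c != NULL && index >= c->used)
    {
        index -= c->used;
        c = c->next;
    }
    return c != NULL ? c->names[index] : NULL;
}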

This is only one possible solution; I wanted to give you just the idea of a hybrid data structure. In all cases, I would avoid realloc. The problem is not just its performance, but the fact that its code is external to your application, so the consequences are hard to predict; that includes unpredictable performance and an unpredictable level of process memory fragmentation. When you go with fixed-size chunks, it's much more likely that the runtime system will avoid considerable memory fragmentation.

The problem of heap optimization is a separate big topic, but here is some food for thought: a number of software development technologies manage memory through a runtime library with an internal sub-allocator. That is, the implementation does not allocate heap memory directly through the OS API allocation/deallocation functions for every object, but does so in larger chunks, leaving the allocation of smaller objects to that sub-allocator.
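
Just to illustrate what a sub-allocator means, here is a toy sketch (a bump-pointer pool; real implementations are much more involved):

C++
#include <stdlib.h>
#include <stddef.h>

// One large block is taken from the CRT/OS heap; small objects are then carved
// out of it with a simple bump pointer, and everything is released at once.
// This avoids many tiny heap allocations.
typedef struct
{
    unsigned char *block;
    size_t size;
    size_t offset;                      // next free byte
} Pool;

static int PoolInit(Pool *p, size_t size)
{
    p->block  = (unsigned char *)malloc(size);
    p->size   = size;
    p->offset = 0;
    return p->block != NULL;
}

static void *PoolAlloc(Pool *p, size_t bytes)
{
    bytes = (bytes + 7) & ~(size_t)7;   // keep 8-byte alignment
    if (p->offset + bytes > p->size)
        return NULL;                    // this simple pool cannot grow
    void *ptr = p->block + p->offset;
    p->offset += bytes;
    return ptr;
}

static void PoolRelease(Pool *p)        // frees everything in one call
{
    free(p->block);
    p->block = NULL;
    p->size = p->offset = 0;
}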

—SA
 
Comments
Laurie Stearn 16-Nov-15 23:05pm    
Thanks for taking the time out to reply, Sergey. I guess I should have mentioned this is part of an exercise to write a program that can create a long path:

http://stackoverflow.com/questions/31858373/program-to-create-and-move-a-pathname-with-more-than-260-characters-in-windows
so I'll be wanting to touch the limits. Without being too bullish, something like 1000 * 1000 * Max_Path won't topple the stack?
That's an interesting point on fragmentation: doesn't the free() function work as expected? I always assumed Windows employed routines to defrag its own heap on a regular basis. Found this

https://bitsum.com/virtual_memory_facts_and_memory_optimizer_scams.php interesting and illustrates your point rather well.
Sergey Alexandrovich Kryukov 16-Nov-15 23:49pm    
No, free() works as expected even with fragmentation. If you don't know how fragmentation happens, I don't want to explain it right now; you can find good explanations in many places. That is a bit of a different topic.

The well-known construct I described can actually cover the usage you want. It's a matter of what would be the most reasonable trade-off between different factors.

Now, why have you mentioned the stack? More or less memory-demanding data should rather go to the heap. Things like dynamic allocation and releasing of memory really belong to the heap. The stack is used very differently: you simply remove the stack frame when you are done. But that works only for short-lived memory which is taken and discarded (there is no analog of release) with each stack frame...

As to the last link you referenced... It's interesting enough, but... You should not mix up the concepts of RAM and heap. Frankly, I don't know where the myths the author tries to dismiss actually exist. Sometimes people invent an imaginary opponent and argue against that imaginary person... Overall, the article is not very consistent or logical; rather, it is a set of a few claims which I would take with some critical thinking. :-)

—SA
Laurie Stearn 17-Nov-15 5:42am    
Ah yes, I see your point now. I can't believe this was written in 1993, but there you go, it still applies:
https://msdn.microsoft.com/en-us/library/ms810627.aspx
Nice diagrams here
http://www2.hawaii.edu/~walbritt/ics212/examples/HeapStack.htm

This stuff is not for the faint hearted. Will have to learn about Linked Lists then, but I fear it's going to be more than five minutes!
http://www.learn-c.org/en/Linked_lists
Sergey Alexandrovich Kryukov 17-Nov-15 9:49am    
Right. Also, you can write in C++ where the standard libraries do it all for you ;-)
—SA
