|
Why not write your own? After all, a ListView for the files, a drop-down ComboBox for the folders, and some navigation buttons are all that make up the bare bones of it.
The graveyards are filled with indispensable men.
|
|
|
|
|
|
"Time is money" if it wasent for time i would have written my own!
As far as I know, C# does not allow OpenFileDialog to be inherited, which takes away the ability to customize it. Or is there some other way?
|
|
|
|
|
Roll your own, but base it on the Win32 dialog. You'll need to do a bunch of interop and API calls.
Fluid[^] will have an easily-customizable open file dialog built from scratch, but it won't be available for at least a few months, so it likely won't help you here.
"Blessed are the peacemakers, for they shall be called sons of God." - Jesus
"You must be the change you wish to see in the world." - Mahatma Gandhi
|
|
|
|
|
Roll your own, but base it on the Win32 dialog. You'll need to do a bunch of interop and API calls.
can you give me a clue how can i do that?
any references?
|
|
|
|
|
You'll want to read up on the details of the Win32 Common Dialog APIs.
I learned a lot about customizing common dialogs from the Common Dialogs Section[^] of VBAccelerator[^]. Yes, I know, it's VB - but, it still gives you some valuable information and source code.
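To give a flavour of the interop involved, here is roughly what the C# declarations for the classic GetOpenFileName API look like. This is a sketch written from memory - the managed field names are paraphrased, Windows 2000 adds three further fields at the end (pvReserved, dwReserved, FlagsEx), and you should verify the layout against the OPENFILENAME structure in the Platform SDK (commdlg.h) before relying on it:

```csharp
using System;
using System.Runtime.InteropServices;

// Managed mirror of the Win32 OPENFILENAME structure.
// Field order follows commdlg.h; names are paraphrased.
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
public class OpenFileName
{
    public int structSize;       // lStructSize: set to Marshal.SizeOf(this)
    public IntPtr hwndOwner;
    public IntPtr hInstance;
    public string filter;        // filter pairs, e.g. "Text\0*.txt\0\0"
    public string customFilter;
    public int maxCustFilter;
    public int filterIndex;
    public string file;          // receives the chosen path
    public int maxFile;
    public string fileTitle;
    public int maxFileTitle;
    public string initialDir;
    public string title;
    public int flags;            // OFN_EXPLORER, OFN_ENABLEHOOK, etc.
    public short fileOffset;
    public short fileExtension;
    public string defExt;
    public IntPtr custData;
    public IntPtr hook;          // lpfnHook: where the customization happens
    public string templateName;  // custom dialog template resource
}

public class NativeDialogs
{
    // The classic Win32 open file dialog, from comdlg32.dll.
    [DllImport("comdlg32.dll", CharSet = CharSet.Auto)]
    public static extern bool GetOpenFileName([In, Out] OpenFileName ofn);
}
```

Customization then happens through the hook callback and a dialog template resource; the VBAccelerator articles linked above walk through the same mechanism from the VB side.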
"Blessed are the peacemakers, for they shall be called sons of God." - Jesus
"You must be the change you wish to see in the world." - Mahatma Gandhi
|
|
|
|
|
Hi all,
My application isn't cleaning up when it's supposed to! I'm creating lots of byte arrays to store images etc. before indexing them in a resource manager, but once the data is in the system it won't clean up, despite the importer no longer referencing it!
I want to try to avoid unsafe code, although being a C++ developer at heart it is very tempting to just screw this garbage collection stuff and do it myself with good old-fashioned new & delete!
Is there a sure-fire C# "safe" way of doing a garbage collection on a byte array, forcing it to free memory on the spot?
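For context, there is no per-object free in safe C#; the closest approach is to null every reference and then ask the collector to run. A minimal sketch, with the 10 MB buffer standing in for an image:

```csharp
using System;

class GcDemo
{
    static void Main()
    {
        byte[] buffer = new byte[10 * 1024 * 1024];  // stand-in for an image
        Console.WriteLine("allocated:     " + GC.GetTotalMemory(false));

        buffer = null;                  // drop the only strong reference

        GC.Collect();                   // request a full collection
        GC.WaitForPendingFinalizers();  // let any finalizers run
        GC.Collect();                   // sweep objects the finalizers freed

        // true = block until the collection has actually completed
        Console.WriteLine("after collect: " + GC.GetTotalMemory(true));
    }
}
```

If the "after collect" figure doesn't drop by roughly the size of the buffer, something is still holding a reference to it.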
Cheers,
Paul
/**********************************
Paul Evans, Dorset, UK.
Personal Homepage "EnjoySoftware" @
http://www.enjoysoftware.co.uk/
**********************************/
|
|
|
|
|
|
Yes, I have. I do System.GC.Collect() and memory usage does not change. I'm 99% positive that all references to the array are null.
/**********************************
Paul Evans, Dorset, UK.
Personal Homepage "EnjoySoftware" @
http://www.enjoysoftware.co.uk/
**********************************/
|
|
|
|
|
Nick,
Although the documentation says that this forces garbage collection, if you drill down one level to here[^], it says, "Use this method to attempt to reclaim all memory that is inaccessible. However, the Collect method does not guarantee that all inaccessible memory is reclaimed." This leaves me wondering if the GC operates like the one in Java, where the garbage collector runs on a low-priority thread, and will go back to sleep if the thread scheduler figures that there's other, more pressing work to do. I'd bet that it does.
I'm not sure about the specifics of this, though. I've always tried to avoid calling System.GC, as the documentation says that it will force a suspension of all active threads in the process. My first question for Paul would probably be, "Are you really sure that there's no dangling reference to the arrays?". There's no offense meant by this-- everyone misses small things once in a while.
Paul could try waiting a little bit in an attempt to coax the GC into action, I guess. This is an acceptable troubleshooting step. Also, I've seen some extreme GC slowness (as you'd expect) in testing scenarios involving paging, and it seemed to run less often.
Regards,
Jeff Varszegi
|
|
|
|
|
Thanks for your input, Jeff. I seem to remember reading somewhere (I think in Jeffrey Richter's Applied .NET Programming) that the GC process actually requires two full passes; however, I will have to re-check my references.
-Nick Parker
DeveloperNotes.com
|
|
|
|
|
Hey, that looks like a great book! I just ordered it. 70 reviews on Amazon and it's got almost five stars, almost unheard of for a tech book. Thanks for the reference.
Jeff Varszegi
|
|
|
|
|
Well, your interest in my dangly bits is quite flattering I suppose
Although less flattering is that I've yet to find any sign of my dangly bits.
There is a record-type class (you know, XmlDocument m_xmlMetadata, string m_sSrcFilepath, etc.) with the byte array in it. I even have a special "cleanup" function that ensures the internal array is nullified just before the record reference itself is set to null.
The array is read-accessible via a getter property though - so that's my current line of investigation.
I did try invoking it every few records - nothing. Invoking it at the end of each record being used also caused nothing to happen.
To give you an idea of the volume of data being bandied around by this import application - it generated 1.5 gig of XML data files and imported 250 images - at the price of memory usage steadily climbing past 450 meg.
Obviously it's doing some garbage collection, but not on the byte arrays, I suspect, because that's a similar size to the source images overall.
I do hope I'm not hitting some kind of .NET scalability issue!
What really ticks me off is that after it's done all the importing and cleaned up all the objects (so the engine is nullified, etc.), it STILL doesn't free up memory after a call to GC.
/**********************************
Paul Evans, Dorset, UK.
Personal Homepage "EnjoySoftware" @
http://www.enjoysoftware.co.uk/
**********************************/
|
|
|
|
|
A couple of comments:
1) The .NET image classes are a thin layer on top of unmanaged GDI+. You should definitely call Dispose() on them to free that data.
2) Look at the performance counters for .NET to see how much memory you are really using. Task manager is not a good indication of how much is really being used.
3) In general, you should try to avoid calling GC.Collect(). You may, however, want to call it and call GC.WaitForPendingFinalizers() for testing purposes. That will ensure that all the memory is reclaimed before you look at memory usage.
You might also want to try running the allocation profiler - it will give you a better idea of what your application is doing with memory.
http://www.gotdotnet.com/community/usersamples/Default.aspx?query=allocation%20profiler
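On point 1, the idiomatic way to guarantee the GDI+ handle is released is a using block - a sketch, assuming the images come in from files before being converted to byte arrays:

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

class ImageImport
{
    // Load an image file into a raw byte array, releasing the
    // unmanaged GDI+ handle as soon as the conversion is done.
    public static byte[] LoadImageBytes(string path)
    {
        using (Image img = Image.FromFile(path))
        using (MemoryStream ms = new MemoryStream())
        {
            img.Save(ms, ImageFormat.Png);
            return ms.ToArray();
        }   // Dispose() runs here, even if Save() throws
    }
}
```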
|
|
|
|
|
1) I can't use the image classes, because the resource manager needs the data in raw byte array form - it also has to handle PDFs, sounds, etc.
2) I'm looking into the performance counters now. Thank you (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpgenref/html/gngrfperformancecounters.asp & http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpgenref/html/gngrfmemoryperformancecounters.asp in case anyone is interested).
However, even if these counters say I'm only using 10 meg, if .NET has actually allocated 500 meg and isn't freeing it up in real Task Manager terms, then it's still too much of a resource hog.
3) Again, thanks for another avenue of investigation.
I'll post in this section of the thread if I discover anything from the things you've suggested. Links are always helpful, thanks!
/**********************************
Paul Evans, Dorset, UK.
Personal Homepage "EnjoySoftware" @
http://www.enjoysoftware.co.uk/
**********************************/
|
|
|
|
|
Hey, did you try using the WeakReference class to track the destruction of the objects in question? This might help you determine for sure whether there are lingering references somewhere. HTH.
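To make that concrete: a WeakReference doesn't keep its target alive, so keeping one per "cleaned up" record and checking IsAlive after a forced collection makes any lingering strong reference visible. A sketch with hypothetical names (TrackRelease, CountLingering):

```csharp
using System;
using System.Collections;

class LeakCheck
{
    static ArrayList trackers = new ArrayList();

    // Call this at the moment a record is supposedly released.
    public static void TrackRelease(object record)
    {
        trackers.Add(new WeakReference(record));
    }

    // Force a full collection, then count records still reachable.
    public static int CountLingering()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        int alive = 0;
        foreach (WeakReference w in trackers)
            if (w.IsAlive) alive++;   // something still references this one
        return alive;
    }

    static void Main()
    {
        byte[] released = new byte[1024];
        TrackRelease(released);
        released = null;              // properly dropped

        byte[] leaked = new byte[1024];
        TrackRelease(leaked);         // still rooted by the local below

        Console.WriteLine(CountLingering() + " record(s) still referenced");
        GC.KeepAlive(leaked);
    }
}
```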
Thank you.
Jeff Varszegi
|
|
|
|
|
Thank you, I'll try again with that approach tomorrow. Anything is worth a shot, huh?
I'm going to leave the office before anything else happens to keep me here. Thanks everyone - I'll keep checking this thread tomorrow.
/**********************************
Paul Evans, Dorset, UK.
Personal Homepage "EnjoySoftware" @
http://www.enjoysoftware.co.uk/
**********************************/
|
|
|
|
|
A few months ago we had an issue similar to the one you describe. After lots of emails back and forth between me and Microsoft, I spoke to one of the framework development team, who advised us of the following (which has been working perfectly for us since):
Add this code to your constructor or to Main:
GC.Collect(GC.MaxGeneration);
GC.WaitForPendingFinalizers();
GC.Collect(GC.MaxGeneration);
Our situation was slightly different in that our application worked with hundreds of different datasets that would then just sit in the gen2 heap until the memory was needed, rather than clearing them out when they fell out of use.
post.mode = signature;
SELECT everything FROM everywhere WHERE something = something_else;
> 1 Row Returned
> 42
|
|
|
|
|
Cheers, I'll add that to my list of to-dos.
I've found (although I feel dirty for doing it) that passing around references to byte arrays - instead of doing deep copies - even after they should have fallen out of scope or been nullified by the class that created them (and would be unavailable in a C++ program) cut down the memory usage a lot, and of course sped things up.
In C++ terms I would have my ass whooped - I just hope all this C# isn't going to make me a sloppy C++ developer!!!
I will add this to my repository of useful snippets, thank you!
/**********************************
Paul Evans, Dorset, UK.
Personal Homepage "EnjoySoftware" @
http://www.enjoysoftware.co.uk/
**********************************/
|
|
|
|
|
I'm writing an app that downloads and processes documents from a web site. Since the document types (i.e., ContentType from the HTTP header) are unknown until they are downloaded, I have created a parent object called WebDocument. After downloading and determining the type, I would like to cast this generic parent object to a specific child type such as HtmlDocument, MsWordDocument, PdfDocument, etc.
How can I dynamically cast like this? It would be nice if I could somehow use the string returned from the HTTP ContentType to do this. This dynamic casting becomes even more of a concern since I would like to allow other developers to create plugins to handle other types of documents.
|
|
|
|
|
If WebDocument is your parent and MsWordDocument etc. are specialised children, then it makes OO sense to do the cast.
Windows usually just associates a program with an extension and that's that - you can ask the shell to open a document for you by executing it.
I suppose the question is how much information you need to store about the different kinds of documents, and whether it could be more simply stored as a string attribute (e.g. the MIME type received) rather than in lots of specialised classes.
Those sorts of things are design considerations though, something only you can decide!
/**********************************
Paul Evans, Dorset, UK.
Personal Homepage "EnjoySoftware" @
http://www.enjoysoftware.co.uk/
**********************************/
|
|
|
|
|
I don't want to execute the files, I want to process them. Unfortunately you can't rely on file extensions when dealing with the web: a .asp, .pl, or .php page can return any content type (not just an HTML document). This is why it's so important that I use dynamic casting.
Also, I need to do more than just store information about the document. I need to process each type of document differently (hence the specialized child classes). So, for example, if it's an HTML document, I want to run an HTML parser on it or check the validity of its links. If it's a GIF, I may want to process its formatting or read its internal comments. However, it's not really the specific processing I have a question about - I need to know how to cast a parent object to an inherited child object without knowing the specific type at design time.
|
|
|
|
|
I did something like this on a project. My solution was a variation of the GoF Bridge design pattern. The idea is to separate the abstraction (the base WebDocument class) from its implementation (child classes of WebDocument).
1. Make WebDocument an abstract class, so it can't be instantiated. Add a method that child classes must override that will be called to process themselves (i.e. Process). Create a static factory method on WebDocument to instantiate the appropriate handler class. You'll need to pass some information into the method so the class can decide which to create (the HTTP header, etc.).
public abstract class WebDocument
{
    protected HttpRequest Request;
    public static WebDocument CreateInstance(HttpRequest r)
    {
        switch (r.ContentType)
        {
            case "text/html":
                return new HtmlDocument(r);
            ...
            default:
                return new UnhandledDocument(r);
        }
    }
    public abstract void Process();
}
2. Create a class for each document type you need to handle (PDF, Word, etc.) that inherits from WebDocument. Create something like UnhandledDocument to process documents that you don't currently support.
public class HtmlDocument : WebDocument
{
public HtmlDocument(HttpRequest r)
{
this.Request = r;
}
public override void Process()
{
// do something with this.Request
}
}
3. Write client code something like this:
HttpRequest req = HttpContext.Current.Request;
WebDocument d = WebDocument.CreateInstance(req);
d.Process();
Can you see how the abstraction (WebDocument) is separated from an implementation (HtmlDocument)? Supporting new document types is as easy as creating the implementation class and adding it into CreateInstance, and will affect no other code. The client doesn't know or need to know the instance type. All it is responsible for is getting an instance of WebDocument to process a request.
Hope this helps. It certainly helped me!
|
|
|
|
|
Thank you CBoland!
This is very helpful. I didn't want to resort to the switch or if-then-else statements to pick a type (which is why I was asking about dynamic casting), however, it looks like very clean code, and I may end up doing it this way.
I'll probably modify the static "CreateInstance()" method to check for plugins which implement the new IWebDocument interface. This way I (or other developers on the project) can easily distribute updates for old versions, while a new version can simply add another line to the switch statement.
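For the plugin idea, the usual reflection-based approach is to scan an assembly for types implementing the interface and instantiate them with Activator. A sketch, where IWebDocument, its CanHandle method, and HtmlPlugin are all illustrative assumptions, not part of CBoland's code:

```csharp
using System;
using System.Reflection;

// Hypothetical plugin contract - the real IWebDocument is whatever
// interface you publish to plugin authors.
public interface IWebDocument
{
    bool CanHandle(string contentType);
    void Process();
}

// Example plugin living in some plugin assembly.
public class HtmlPlugin : IWebDocument
{
    public bool CanHandle(string contentType) { return contentType == "text/html"; }
    public void Process() { Console.WriteLine("processing html"); }
}

public class PluginFactory
{
    // Scan an assembly for IWebDocument implementations and return
    // the first one that claims the given content type.
    public static IWebDocument Create(Assembly asm, string contentType)
    {
        foreach (Type t in asm.GetTypes())
        {
            if (!t.IsClass || t.IsAbstract) continue;
            if (typeof(IWebDocument).IsAssignableFrom(t))
            {
                IWebDocument doc = (IWebDocument)Activator.CreateInstance(t);
                if (doc.CanHandle(contentType)) return doc;
            }
        }
        return null; // fall back to an UnhandledDocument in real code
    }
}
```

Plugin assemblies dropped into a folder could be picked up with Assembly.LoadFrom and passed through the same scan, so old clients never need recompiling.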
|
|
|
|
|
I see what you're saying. OK, if you're going that route, may I suggest that the generic type which catches everything outside the known document types store the data in a byte array from the stream? That way at least it can reproduce the data exactly as it arrived by replaying the stream, while still implementing the interface as per whatever spec you give it.
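That fallback can be as simple as buffering the response stream into a byte array so it can be replayed later. A sketch, where the class and method names are mine, not from the thread:

```csharp
using System.IO;

// Catch-all document type: keeps the raw bytes so the original
// stream can be reproduced even though we don't know the format.
public class UnhandledDocument
{
    private byte[] raw;

    public UnhandledDocument(Stream source)
    {
        using (MemoryStream ms = new MemoryStream())
        {
            byte[] chunk = new byte[8192];
            int read;
            while ((read = source.Read(chunk, 0, chunk.Length)) > 0)
                ms.Write(chunk, 0, read);
            raw = ms.ToArray();
        }
    }

    // Hand back a fresh read-only stream over the buffered bytes.
    public Stream Replay()
    {
        return new MemoryStream(raw, false);
    }
}
```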
|
|
|
|