Actually, if you attribute your class with the SerializableAttribute, the default serialization will serialize the public and private fields of your class. There is an interface, ISerializable, that lets you, the developer, control what gets serialized and what doesn't. This goes along - like the original poster mentioned - with a constructor that has the signature (SerializationInfo, StreamingContext). This constructor can have any access modifier, but it is commonly protected so that subclasses can pass serialization info to their base class.
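To illustrate the pattern, here's a rough sketch (the class and field names are my own, not from any real API):

```csharp
using System;
using System.Runtime.Serialization;

// Hypothetical example class; the [Serializable] attribute is still
// required even though the class implements ISerializable.
[Serializable]
public class Person : ISerializable
{
    private string name;
    private int age;

    public Person(string name, int age)
    {
        this.name = name;
        this.age = age;
    }

    // The serialization constructor. Protected so that derived classes
    // can chain to it with base(info, context).
    protected Person(SerializationInfo info, StreamingContext context)
    {
        name = info.GetString("name");
        age = info.GetInt32("age");
    }

    // Called by the formatter to gather the values to serialize.
    public virtual void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("name", name);
        info.AddValue("age", age);
    }
}
```

A subclass would add its own values to the SerializationInfo in an override of GetObjectData and call base.GetObjectData(info, context) first.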
There are also other serialization interfaces that can be useful, such as IDeserializationCallback, which lets you perform any layout or hook-ups after deserialization is complete, and ISerializationSurrogate, which - together with an IFormatter implementation - lets you serialize Types that are not themselves serializable. You can also use a SerializationBinder with the formatter to deserialize one Type as another by overriding SerializationBinder.BindToType. This is especially handy when you have to upgrade serialized files that contain older versions of your Types.
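For the binder, something like this works (the type names here are hypothetical, just to show the shape of it):

```csharp
using System;
using System.Runtime.Serialization;

// Hypothetical current version of a type that was renamed at some point.
[Serializable]
public class Document { }

// Maps a Type name saved by an older build onto its current equivalent
// during deserialization.
public class VersionUpgradeBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        // "MyApp.OldDocument" is a made-up legacy name for illustration.
        if (typeName == "MyApp.OldDocument")
            return typeof(Document);

        // Fall back to the default resolution.
        return Type.GetType(string.Format("{0}, {1}", typeName, assemblyName));
    }
}
```

You attach it before deserializing, e.g. formatter.Binder = new VersionUpgradeBinder(); and the formatter consults it for every Type it encounters in the stream.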
No, you only have to implement ISerializable if you wish to control what gets serialized. Just attributing the class with the SerializableAttribute, as I mentioned before, serializes all private and public fields. See the ISerializable documentation for both more detail and an example.
If you'll read the documentation for SerializableAttribute, you'll see that the serialization infrastructure serializes private and public fields by default if a class is attributed (the class itself - the attribute isn't inherited, so attributing a base class doesn't make a derivative class serializable). This is done via reflection, yes. Note that your class must still be attributed with the SerializableAttribute, even if it implements ISerializable, to be serializable. Implementing the interface just gives you explicit control over serialization of your Type.
ok, i guess i see how this is done-- pls correct me if i'm wrong: on invoking Formatter.Serialize, Formatter uses reflection to check whether ISerializable is implemented in the class of the object. if so it calls the interface. If not it uses reflection to serialize all the fields, checking for the NonSerialized as it goes.
is that what happens? and if this is, how is that done efficiently enough to not take forever to serialize each two-bit-- i mean two-byte-- object in the graph? does BinaryFormatter perform some sort of run time analysis or "compilation" like regex?
Gosh, it would be awful pleas'n, to reason out the reason, for things I can't explain.
Then perhaps I'd deserve ya, and be even worthy of ya..
if I only had a brain!
You should read Serializing Objects[^] in the .NET Framework. I think it'll help you understand many of these concepts.
As far as performance, it is true that reflecting all those fields and Types takes a while, but it is necessary when crossing contexts (via Remoting) or serializing to streams for any other means. The process of serialization is actually pretty complex. If you're interested in the details, you should use ildasm.exe (if you know IL) or a good decompiler like .NET Reflector[^] to see how much of that is done.
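The dispatch you described can be sketched roughly like this - this is not the actual BinaryFormatter source, just the shape of the decision it makes, using the real FormatterServices helpers:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;
using System.Runtime.Serialization;

// Made-up sample type: one serializable field, one excluded field.
[Serializable]
public class Sample
{
    public int X = 1;
    [NonSerialized] public int Temp = 2;
}

public static class FormatterSketch
{
    // Returns a description of what a formatter would gather for one object.
    public static string[] Describe(object graphRoot)
    {
        Type type = graphRoot.GetType();

        if (graphRoot is ISerializable)
        {
            // The formatter builds a SerializationInfo and lets the
            // object decide what goes into it.
            return new string[] { type.Name + ": calls ISerializable.GetObjectData" };
        }

        // Otherwise it reflects the fields. GetSerializableMembers
        // already filters out [NonSerialized] fields.
        MemberInfo[] members = FormatterServices.GetSerializableMembers(type);
        object[] values = FormatterServices.GetObjectData(graphRoot, members);

        List<string> lines = new List<string>();
        for (int i = 0; i < members.Length; i++)
            lines.Add(type.Name + "." + members[i].Name + " = " + values[i]);
        return lines.ToArray();
    }
}
```

Each value that comes back is itself an object that has to go through the same process, which is why the whole graph gets walked recursively.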
The application I designed uses A LOT of Remoting in the Internet-deployed edition and it functions pretty fast - not much slower than the LAN version (and most people don't even notice). As far as I can tell, nothing is cached from serialization to serialization either, because everything changes from call to call.
Why not learn IL? You can never know too much. Besides, decompilers don't always do such a great job so you'll have to look at IL at some point to see what's really going on. At least knowing what the instructions do is important, which is also documented in the .NET Framework SDK.
All compilers targeting the CLR using pure managed code produce Intermediate Language, or IL. So yes, it is IL. In fact, the Managed C++ compiler is the only compiler publicly available from Microsoft that can produce native code in an assembly (known as mixed mode).
But using a loop for serialization would be a poor idea. Recursion is most likely at work, since every Type that is added to the SerializationInfo has to be serialized in turn, and then its Types, and so on and so forth.
By decorating your class with the Serializable attribute you are just telling other classes that you pass it to that it is "auto-serializable", by which it is meant that it contains no volatile references (e.g., a SqlConnection object) or references to other classes that are not serializable. This way a remoting object (for example) knows that it does not need any special instructions to serialize/deserialize.
I was working on a POP3 mail application. I'm able to implement the POP3 protocol and get the mails, attachments, etc. But I'm unable to decode some of the mails encoded with the quoted-printable Content-Transfer-Encoding.
Please help ASAP.
Right, and there don't appear to be any examples you can use. There is one commercial ASP.NET emailing package that uses it, but they didn't give any details.
Hence the RFC link - it isn't hard to do, so you should be able to figure it out. Mostly, it's just replacing certain characters with their hexadecimal equivalents prefixed by an equal sign. Use a StringBuilder for optimal performance.
Replace the characters as necessary and break the lines into 76-character sequences, taking into account whether tabs or spaces must follow on the next line.
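For the decoding direction the original poster asked about, a minimal sketch looks like this (it handles the =XX escapes and soft line breaks from RFC 2045 section 6.7; a real decoder also has to honor the declared charset when converting bytes to text):

```csharp
using System;
using System.Text;

public static class QuotedPrintable
{
    // Decodes a quoted-printable string: "=XX" becomes the character
    // with hex code XX, and "=\r\n" (a soft line break) is removed.
    public static string Decode(string input)
    {
        StringBuilder output = new StringBuilder(input.Length);
        for (int i = 0; i < input.Length; i++)
        {
            char c = input[i];
            if (c != '=')
            {
                output.Append(c);
            }
            else if (i + 2 < input.Length && input[i + 1] == '\r' && input[i + 2] == '\n')
            {
                i += 2; // soft line break: drop the "=\r\n" entirely
            }
            else if (i + 2 < input.Length)
            {
                // "=XX": parse the two hex digits after the equal sign.
                output.Append((char)Convert.ToInt32(input.Substring(i + 1, 2), 16));
                i += 2;
            }
        }
        return output.ToString();
    }
}
```

For example, QuotedPrintable.Decode("=41=42C") gives "ABC", and a line ending in "=" followed by CRLF is joined with the next line.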
If I host the SingleCall SAOs in IIS, I can use HttpContext.Current to get the current HttpContext in the calling method, so I can get some useful information. But if I host the remote objects in a console application for debugging, HttpContext.Current is null. Can I get some object like HttpContext in the calling method when I do not host the objects in IIS? I'd rather not use CallContext across the Internet.
Define "client ID". We use a lot of remoting in our application and deal with lots of stuff like that. I just need to understand your circumstances. Is it unique to a machine (perhaps an IP address)? A user identity? A client application version or client name (like a user agent for browsers)?
Well, there's always the CallContext, which you said you didn't want to use but this is a good purpose for it. What's wrong with it? It's just another part of the remoting infrastructure. Both creating your own RealProxy and adding a channel sink could possibly do this, but it's a lot more work and not easily accessible to the SAO itself, since creating your own proxy is more for modifying the params, returns, etc. of a message, and channel sinks are for manipulating the message typically without modifying the content (like compression, encryption, routing, etc.).
The other methods I mentioned will be much harder to use. Frankly, the serialized ILogicalThreadAffinative implementation doesn't have to take much room if you just encapsulate basic Types like a String or a Guid. The channel sinks could take even more bandwidth than a CallContext if you're not mindful of the implementation. Besides, setting and getting objects from a CallContext is a heck of a lot easier than dealing with channel sinks. Once you read about them (again, that ".NET Remoting" book is good and gets into that), I'm sure you'll agree!
Oh, and if you use CallContext.FreeNamedDataSlot in the server method with the name you used in the CallContext.SetData call on the client, there shouldn't be anything serialized and sent back in the return message. This could save you bandwidth easily.
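Putting that together, it comes down to something like this (the ClientId wrapper and the "clientId" slot name are my own, just for illustration):

```csharp
using System;
using System.Runtime.Remoting.Messaging;

// A small serializable wrapper; implementing ILogicalThreadAffinative
// is what makes the CallContext flow it across the remoting call.
[Serializable]
public class ClientId : ILogicalThreadAffinative
{
    public readonly Guid Value;
    public ClientId(Guid value) { Value = value; }
}

public static class CallContextExample
{
    // Client side: set the data before invoking the SAO.
    public static void BeforeRemoteCall(Guid id)
    {
        CallContext.SetData("clientId", new ClientId(id));
        // remoteObject.DoWork(); // hypothetical SAO call
    }

    // Server side: read it inside the remote method, then free the slot
    // so nothing gets serialized back in the return message.
    public static Guid InsideServerMethod()
    {
        ClientId id = (ClientId)CallContext.GetData("clientId");
        CallContext.FreeNamedDataSlot("clientId");
        return id.Value;
    }
}
```

Keeping the payload to a Guid or a short String like this keeps the per-call overhead tiny, and the FreeNamedDataSlot call is what saves you the return-trip bandwidth.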