What's wrong with System.Threading.Timer? The TimerCallback is executed on a separate thread, which is one of your requirements. For even more accuracy (according to the .NET Framework SDK), you can use a server-based timer like System.Timers.Timer.
If you want to use a native function, you have to P/Invoke it and worry about marshaling parameters (depending on the native function called). The very act of marshaling could degrade accuracy over time, whereas System.Threading.Timer is managed internally by the CLR. The other two call CreateWaitableTimer. So, basically, either P/Invoke the methods yourself, use the timers that do, or use System.Threading.Timer, which should be about as accurate as you can get with the class library since it's managed internally by the CLR.
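As a concrete illustration of the managed approach, here is a minimal System.Threading.Timer sketch; the interval and the callback body are arbitrary placeholders:

```csharp
using System;
using System.Threading;

class TimerDemo
{
    // The callback runs on a ThreadPool thread, separate from Main.
    static void Tick(object state)
    {
        Console.WriteLine("Tick at {0:HH:mm:ss.fff}", DateTime.Now);
    }

    static void Main()
    {
        // Start after 0 ms, then fire every 100 ms.
        Timer timer = new Timer(new TimerCallback(Tick), null, 0, 100);

        Thread.Sleep(1000);  // let the timer fire roughly ten times
        timer.Dispose();
    }
}
```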
-----BEGIN GEEK CODE BLOCK-----
Version: 3.21
GCS/G/MU d- s: a- C++++ UL@ P++(+++) L+(--) E--- W+++ N++ o+ K? w++++ O- M(+) V? PS-- PE Y++ PGP++ t++@ 5 X+++ R+@ tv+ b(-)>b++ DI++++ D+ G e++>+++ h---* r+++ y+++
-----END GEEK CODE BLOCK-----
Firstly, thank you for your reply (I have never used a message board before). Secondly, to bug you some more... I don't know if I was told the wrong thing, but I was told that using operating system timers wasn't a good idea for automation purposes. Do you have any thoughts on this?
When I say automation... my application will control a couple of robots.
If I were to use the timers you mentioned above, is there a way of finding out the resolution of their ticks? Also, which of the above timers should I use? And you say this is as accurate as I'm going to get?
As far as the resolution of the three timers in the .NET base class library goes: System.Windows.Forms.Timer uses the native SetTimer API, which takes time in milliseconds. System.Timers.Timer uses a WaitableTimer (an internal class), which calls the native SetWaitableTimer. That native function uses ticks - 100-nanosecond intervals. Unfortunately, the System.Timers.Timer.Interval property takes time in milliseconds, thus decreasing the resolution. System.Threading.Timer is mostly managed internally, so we can't know for sure how it works, but the documentation states that its interval is also specified in milliseconds.
So, the resolution of the timers in the .NET base class library is 1 millisecond. If you want to wrap CreateWaitableTimer, SetWaitableTimer, and CancelWaitableTimer in order to use ticks for better resolution, you can. The difference in time may make up for any time spent marshaling parameters. Of course, as long as you stick with the intrinsic types (int, long, double, byte, etc.), the SDK states that no time is spent marshaling since it is unnecessary.
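For reference, such a wrapper might start out like the sketch below. The P/Invoke declarations mirror the Platform SDK prototypes, but treat the exact signatures as assumptions to verify against the SDK:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical wrapper around the Win32 waitable-timer APIs.
class WaitableTimerDemo
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateWaitableTimer(IntPtr lpTimerAttributes,
        bool bManualReset, string lpTimerName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetWaitableTimer(IntPtr hTimer, ref long pDueTime,
        int lPeriod, IntPtr pfnCompletionRoutine,
        IntPtr lpArgToCompletionRoutine, bool fResume);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CancelWaitableTimer(IntPtr hTimer);

    [DllImport("kernel32.dll")]
    static extern int WaitForSingleObject(IntPtr hHandle, int dwMilliseconds);

    static void Main()
    {
        IntPtr timer = CreateWaitableTimer(IntPtr.Zero, true, null);

        // A negative due time means a relative interval, in 100 ns ticks.
        long dueTime = -5000000;  // 500 ms from now
        SetWaitableTimer(timer, ref dueTime, 0, IntPtr.Zero, IntPtr.Zero, false);

        WaitForSingleObject(timer, -1);  // block until the timer signals
        Console.WriteLine("Timer fired.");
        CancelWaitableTimer(timer);
    }
}
```

Note that the due time is the only place ticks appear; this is where the extra resolution over the 1 ms Interval properties comes from.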
As far as using timers besides those provided by the OS, I'm not really sure what else you'd use besides a timer card. The resolution of a few of those cards isn't any better (some are even measured in seconds!), but you might get more accurate results taking only the hardware into account. Since each will most likely come with a C SDK, you might find yourself worrying about marshaling again.
Since you mentioned in your last post that these are to control robots, I recommend you find some resources online about programming robots to see how others accomplish this. The only experience I've had is reading a little about programmable robots with their own controller boards (a former coworker of mine did the work, but I was passively curious at best).
Dear Heath Stewart,
Thank you ... your reply has given me a lot to go on!
Thanks
Maria
I read this on code project....
"Not long time ago, I was programming an application for Windows that every certain time had to execute a task; in the development of the project everything went with normality but arrived the day to prove the program in the definitive machines where it had to remain working, and we observed that once in a while timers stopped working. We did thousand verifications always reaching the same result, sometimes, timers provided by the Framework don't tick, no events fired. It was then when we began to look for a solution to replace these timers and the solution that seemed better to me was the one to replace them by timers of the API of Windows. We looked on the Internet in case somebody had had the same idea that I had, without success, and therefore put hands to the work."
the address for this article is http://www.codeproject.com/csharp/FTwin32Timers.asp?target=timers
Does anyone have any comments? In my application I really cannot afford for a timer not to fire.
maria
I have commented on this before. The timers in .NET use the Windows APIs! Perhaps there's a bug in how they encapsulated the API calls, or perhaps the author's elapsed handler ran on a thread the base .NET implementation didn't account for. Either way, they still rely on the Windows APIs, as most things in .NET do.
Use FTwin32Timers if you want. At most, maybe it has better exception handling. If you write your program well enough, you shouldn't experience any problems, since both most likely call the same APIs. Otherwise - as I mentioned before - look into a hardware-based solution, but you'll be forced to encapsulate its APIs as well.
Microsoft MVP, Visual C#
My Articles
Hi
I'm doing a project for uni and I'm trying to find help implementing a control similar to the Inbox list in Outlook 2003.
I'm coding in C# and so far I can't find anything about it. I've tried using the ListView and DataGrid controls, making some modifications by inheriting from them in my class, but no luck.
I want a list where I can add an item with parameters for the Name, Date, and Subject mainly, but I also want to be able to control the icons and the flags (not so much the flags). Having a main item showing the day, as Outlook does, with my e-mail items as sub-items would be nice too.
Does anybody know something that might help me?
Please let me know.
Thank you.....
Nick
A ListView control is what you want to use, but you should look at the List-View Common Control documentation in the Platform SDK. The .NET ListView control wraps the List-View common control - just as most other controls in System.Windows.Forms wrap their respective common controls. There are several articles about overriding ListView functionality here on CodeProject, and you can Google for more. Previous Windows programming experience will be helpful, because you'll have to P/Invoke several native functions (like SendMessage) and know how to send messages and handle notification messages by overriding WndProc in your derived ListView control. You'll also have to be familiar with marshaling - which isn't difficult - so you can declare your P/Invoke methods and structs appropriately.
Most of the articles on this site cover this; see, for example, the following search: http://www.codeproject.com/info/search.asp?cats=3&cats=5&searchkw=ListView
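A derived control along these lines might start out like the following sketch. The message constant is taken from CommCtrl.h; everything else is an assumed skeleton to build on, not a working Outlook-style list:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

// Minimal skeleton for extending ListView by talking to the
// underlying List-View common control directly.
public class OutlookStyleListView : ListView
{
    // Values from CommCtrl.h in the Platform SDK.
    const int LVM_FIRST = 0x1000;
    const int LVM_SETBKCOLOR = LVM_FIRST + 1;

    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, int msg,
        IntPtr wParam, IntPtr lParam);

    protected override void WndProc(ref Message m)
    {
        // Inspect or reroute List-View notification messages here
        // before the base control handles them.
        base.WndProc(ref m);
    }

    public void SetBackgroundColor(int colorRef)
    {
        // Send a raw message to the wrapped common control.
        SendMessage(Handle, LVM_SETBKCOLOR, IntPtr.Zero, (IntPtr)colorRef);
    }
}
```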
Thank you for your reply, Heath Stewart. I'm not familiar with P/Invoke or marshaling, but I'll give it a go and see how far I can take it.
Thanks again.
Nick...
Hi!
How can it be that a comparison of two double values with the Equals method returns true, although GetHashCode returns a different value for each?
Curiously, the hash code of the second value differed only in its sign!
I would think that if two values are equal by bitwise comparison, their hash codes should be equal too, shouldn't they?
Thanks for your help!
Matej
That is strange. Could you post some code that demonstrates the problem?
I use the unit-testing framework NUnit.
My code looks like the following:
-------------------------------------------
MyClass f = new MyClass();
MyClass _f = new MyClass();
// ... some manipulation of "f" and "_f" ...
f.Matrix = _f.Matrix;
Assert.AreEqual(_f.Matrix, f.Matrix); // OK
Assert.AreEqual(_f.B, f.B); // OK
Assert.AreEqual(_f.B.GetHashCode(), f.B.GetHashCode()); // FAILS! _f's hash code is e.g. "123123123", f's is "-123123123"
------------------------------------
Implementation of "MyClass":
------------------------------------
public class MyClass
{
    private Frame _matrix; // "Frame" is a kind of matrix class
    private double _b;

    // ...

    public Frame Matrix
    {
        get { return _matrix; }
        set
        {
            _matrix = value;
            syncWithMatrix(); // calculates "_b" from "_matrix"
        }
    }

    public double B
    {
        get { return _b; }
        set
        {
            _b = value;
            syncMatrix(); // calculates "_matrix" from "_b"
        }
    }
}
--------------------------------
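One guess worth testing, given that the hash codes differ only in sign: if syncWithMatrix() ever computes negative zero for _b, then the two values compare equal while, on .NET Framework 1.x, their hash codes differ in exactly the sign bit. A snippet like this would confirm or rule that out:

```csharp
using System;

class NegativeZeroDemo
{
    static void Main()
    {
        double plus = 0.0;
        double minus = -0.0;   // same value, different sign bit

        Console.WriteLine(plus.Equals(minus));   // True: the values are equal
        Console.WriteLine(plus.GetHashCode());
        Console.WriteLine(minus.GetHashCode());  // differed in sign on .NET 1.x
    }
}
```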
Bye,
Matej
I'm trying to download a page from a website. You're required to log in to access this page. I have a username/password, and when I use Internet Explorer to log in, I make sure the "Keep me Signed In" checkbox is checked. So when I access any page again from Internet Explorer, everything's fine.
However, when I try using WebClient to download this HTML page, it returns the login page instead.
What can I do?
Sammy
"A good friend, is like a good book: the inside is better than the cover..."
See the HttpWebRequest.Credentials property documentation, which also includes an example. Note that .NET does not currently use the same password-caching APIs that Internet Explorer and other native clients use. For more information about these, see WNetCachePassword and the information about DPAPI (supported on Windows XP and newer) on MSDN. For an example of using DPAPI - which Internet Explorer uses on supported platforms - with .NET, see the article "Using Credential Management in Windows XP and Windows Server 2003". You'll still have to assign an ICredentials implementation (like the provided System.Net.NetworkCredential class) to the HttpWebRequest.Credentials property, though.
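A minimal sketch of the credentials approach; the URL and the account details are placeholders:

```csharp
using System;
using System.IO;
using System.Net;

class DownloadWithCredentials
{
    static void Main()
    {
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://example.com/protected/page.html");

        // NetworkCredential implements ICredentials; this answers a 401
        // challenge (Basic, Digest, NTLM, ...), not a forms-based login.
        request.Credentials = new NetworkCredential("username", "password");

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```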
Never used Heath's method, but I have used what I put in the other thread:
old thread
I, for one, do not think the problem was that the band was down. I think that the problem may have been that there was a Stonehenge monument on the stage that was in danger of being crushed by a dwarf.
-David St. Hubbins
His question was about passing credentials (a username and password). Depending on the authentication type, cookies won't help: this is the case where the HTTP daemon sends a 401 status code with a WWW-Authenticate header, prompting the client to authenticate using the scheme named in that header. The client must pass credentials. This won't work with Forms Authentication in .NET, however, because there you are redirected to a login page; in that case a cookie will work, but it won't solve his problem. The server is asking for credentials, not redirecting the user.
The HttpWebRequest.Credentials property must be used to pass credentials when the server requires authentication in the way I mentioned above.
Based on how he phrased it in the other thread, and the fact that he's clicking a checkbox called "keep me signed in" (which doesn't show up on the authentication dialog box - only "remember my password" does), I think he's talking about a site that saves your login session with cookies.
I could be wrong though.
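If it is indeed a cookie-tracked session, HttpWebRequest (rather than WebClient, which exposes no cookie support) can carry the session cookie in a CookieContainer. The URLs and the login details below are placeholders:

```csharp
using System;
using System.IO;
using System.Net;

class CookieSessionDemo
{
    static void Main()
    {
        // One container shared by all requests in the "session".
        CookieContainer cookies = new CookieContainer();

        // First request: log in (details of POSTing the form omitted);
        // any Set-Cookie headers from the server land in the container.
        HttpWebRequest login =
            (HttpWebRequest)WebRequest.Create("http://example.com/login");
        login.CookieContainer = cookies;
        using (login.GetResponse()) { }

        // Later requests reuse the same container, so the session
        // cookie is sent back automatically.
        HttpWebRequest page =
            (HttpWebRequest)WebRequest.Create("http://example.com/protected/page.html");
        page.CookieContainer = cookies;

        using (WebResponse response = page.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```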
Hi,
I am implementing an application that uses remoting, and it is raising some doubts about best practices, scalability, network round trips, and things like that. I would like to know the "better" way of doing what's needed.
(I am afraid of shooting myself in the foot!)
Roughly, I have the following assemblies (the scenario for my questions):
--------------------------
Model.dll - Contains the classes that model tables as classes and rows as collections - deployed on the client and server.
IRules.dll - Interfaces for the rules that will be activated on the server (which access a DAL, and so on) - deployed on the client and server.
Rules.dll - Implementation of the interfaces defined in IRules - deployed on the server.
--------------------------
Considering these components, the classes would be something like:
[Serializable()]
public class ModelCustomer
{
    public string Code;
    public string Name;
}

public interface IRulesCustomer
{
    bool Insert(Model.ModelCustomer customer);
}

public class RulesCustomer : MarshalByRefObject, IRules.IRulesCustomer
{
    public virtual bool Insert(Model.ModelCustomer customer)
    {
        // ...
    }
}
--------------------------
On the server, my configuration file is something like:
<configuration>
<system.runtime.remoting>
<application>
<channels>
<channel ref="http">
<serverProviders>
<formatter ref="binary" typeFilterLevel="Full"/>
</serverProviders>
</channel>
</channels>
<service>
<wellknown
mode="SingleCall" objectUri="RulesCustomer.rem"
type="Rules.RulesCustomer, Rules" />
<activated type="Model.ModelCustomer, Model" />
</service>
</application>
</system.runtime.remoting>
</configuration>
--------------------------
On the client, it looks like:
<configuration>
<system.runtime.remoting>
<application>
<channels>
<channel ref="http">
<clientProviders>
<formatter ref="binary" />
</clientProviders>
<serverProviders>
<formatter ref="binary" typeFilterLevel="Full"/>
</serverProviders>
</channel>
</channels>
<client>
<wellknown type="IRules.IRulesCustomer, IRules"
url="http://server:80/App/RulesCustomer.rem"
/>
</client>
<client url="http://server:80/App">
<activated type="Model.ModelCustomer, Model" />
</client>
</application>
</system.runtime.remoting>
</configuration>
--------------------------
Finally, here is what happens: I activate the model object on the "client" (it receives its state from the UI/user process) and the rules object on the "server". After passing the UI values into properties of the model object, I call the Insert method of the rules object, passing the model object as a parameter.
My doubts can be summarized like this:
1 - Does activating the model object on the client harm the scalability of the application?
2 - Does activating the model object on the client force more round trips to the server than necessary (or advisable)?
3 - Is returning a collection (CollectionBase) from one of the methods of the rules class viable?
4 - Should I activate everything on the server?
Thank you in advance and excuse me for the VERY long question,
Marcelo Palladino
Brazil
Marcelo
The first thing you should be aware of: by making your object SingleCall, the object will be created and destroyed each time it is called. If you expect a high volume of calls, you might consider making it a Singleton and creating a custom lease time.
For scalability it is VERY important that you always consider your deployment in terms of what are called 'chunky' calls. If you have two processes like these:
1. create HTTP channel
2. call Activator.GetObject
3. call to init
4. call to get first info
5. call to get second piece
6. call to get third piece
7. display information
-VS-
1. create HTTP channel
2. call Activator.GetObject
3. call to get large object -- server object does 3-6 of prior process
4. display information
you will find the second process to SCREAM while the first process will crawl like a snail.
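The difference between the two processes can be sketched as two interface designs; CustomerInfo and the member names here are made-up placeholders:

```csharp
using System;

// Chatty: three network round trips just to display one customer.
public interface ICustomerServiceChatty
{
    string GetName(int id);
    string GetAddress(int id);
    string GetPhone(int id);
}

// A serializable data holder returned by value in one round trip.
[Serializable]
public class CustomerInfo
{
    public string Name;
    public string Address;
    public string Phone;
}

// Chunky: one round trip; the server does the three lookups itself.
public interface ICustomerServiceChunky
{
    CustomerInfo GetCustomer(int id);
}
```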
Secondly, your use of the binary formatter is good. The SOAP formatter is horrible. However, if it is at all possible to take the server off HTTP and use a pure TCP channel, you will get even faster throughput and better results.
There are several other considerations as well, including determining whether remoting is truly the best implementation for your solution. You can check out the FAQ here.
_____________________________________________
Of all the senses I could possibly lose, It is most often the one called 'common' that gets lost.
By the way... there is one problem with using a BinaryClientFormatterSink in your application: any fault that occurs on the server side during initialization returns an error saying the remoting version is wrong: expected 1.0 and received (an ugly number).
This is because the exception is sent out as text while the client is expecting the version number in return. To get around this, I had to incorporate the ability to put the application into a debug state. So my code looks like this at init time:
try
{
    if (DebugState == false)
    {
        channel = new HttpChannel(null,
            new BinaryClientFormatterSinkProvider(),
            new BinaryServerFormatterSinkProvider());
    }
    else
    {
        channel = new HttpChannel(null, null, null);
        runningInDebugMode = true;
        System.Diagnostics.Debug.WriteLine("TCA Navigator has been requested to run in debug mode.", "Remote Communication Manager");
    }

    channel.Properties["proxyName"] = null;
    channel.Properties["useDefaultCredentials"] = "true";
    ChannelServices.RegisterChannel(channel);
}
...
Also: setting the channel properties is required to keep the code from trying to locate a proxy server and use it to reach the site. This assumes your application is internal (though you really would not want to do remoting in the open on the Internet).
Michael
Hi Michael,
Before anything else, thank you for the answers. They helped a lot! Regarding the problems described in my previous post, your answers led me to do the following:
1 - My model objects no longer inherit from MarshalByRefObject (only the rules objects inherit now).
2 - My model objects are now marked [Serializable()] and implement the ISerializable interface.
3 - I am no longer using CAO. The model object is created on the client and passed by value through methods in my rules classes.
On using "chunky interfaces": excellent tip, thank you very much! This solves the round-trip problem.
On using "Singleton" instead of "SingleCall": hmm... I am still forming an opinion, but Ingo himself (in the link you indicated) says that SingleCall makes it easier to build a scalable application. But I understood your point, which leans more toward performance and resource usage. On the other hand, if I am in an environment with load balancing...
Now another question: did I understand correctly about implementing ISerializable in my model objects? This way they are passed by value, aren't they? (like a DataSet, for example)
A great hug and thank you very much again,
Marcelo Palladino
Brazil
The only item I would say you should adjust is the ISerializable implementation, because with it you have to write your own serialization/deserialization code yourself!
Just add the [Serializable] attribute. The process of 'going remote' goes through several sink providers, including the binary serializer, so implementing the ISerializable interface just creates redundancy.
Now, once you get everything set to run with SAO instead of CAO, all you need to do is the following:
1) Get your app finished and working so your server objects are stabilized.
2) Set up a stress test if possible.
3) Compare the impacts on your app of Singleton versus SingleCall.
IMHO, this ends up being based on application process and design rather than a cookie-cutter decision. For my app, I found that having the objects there was great; the Singletons eventually get flushed when no activity occurs, but remain when activity is high.
For me, I was initializing collections of objects remotely and passing single instances back to the caller - a great setup for a Singleton. If I did not need the populated collections, I could probably get away with SingleCall.
Enjoy!
Michael
Hi again Michael,
I don't mean to abuse your patience, but since I already am... I am not sure I understood what you meant by the sentence below:
theRealCondor wrote:
The only item that I would say you should make an adjustment to is the ISerializable implementation. The reason I say so is that you have to implement your own serializer/deserializer!!!
What I thought about doing is something like this:
[Serializable()]
public class MyModelObject : ISerializable
{
    private int a;
    private string b;

    public MyModelObject()
    {
    }

    public MyModelObject(int a, string b) : base()
    {
        this.a = a;
        this.b = b;
    }

    protected MyModelObject(SerializationInfo info, StreamingContext context)
    {
        this.A = info.GetInt32("A");
        this.B = info.GetString("B");
    }

    public int A
    {
        get { return this.a; }
        set { this.a = value; }
    }

    public string B
    {
        get { return this.b; }
        set { this.b = value; }
    }

    void ISerializable.GetObjectData(SerializationInfo info,
        StreamingContext context)
    {
        info.AddValue("A", this.a);
        info.AddValue("B", this.b);
    }
}
From what I could understand, I should not do this?! Using only [Serializable()] didn't work. Can you shed some (more) light?
Greetings,
Marcelo Palladino
Brazil
Pallidino said:
Michael responds:
Here is what I did that worked and is very simple:
using System.Runtime.Remoting;
using System.Collections;

[Serializable]
public class WidgetCollection : CollectionBase
{
    public WidgetItem this[int index]
    {
        get { return (WidgetItem)this.List[index]; }
        set { this.List[index] = value; }
    }

    public WidgetItem this[string key]
    {
        get
        {
            // look the item up by key here
            ...
        }
    }

    public void Add(WidgetItem item)
    {
        List.Add(item);
    }
}

[Serializable]
public class WidgetItem
{
    Sprocket internalSprocket;
    Spoke internalSpoke;
    DooHickey internalDooHickey;
    ...
}
This construct is used by both the client and server objects.
The server creates the WidgetCollection and populates it with WidgetItems.
Both the WidgetCollection and the WidgetItem are marked [Serializable].
So your server object could look like this, assuming you have a server object instance named Server that handles populating the WidgetCollection, and you have already created the interface that defines your WidgetManager for client use:
public class WidgetManager : MarshalByRefObject, IWidgetManager
{
    public WidgetItem GetWidget(string key)
    {
        WidgetCollection itemList = Server.PopulateWidgets();
        return itemList[key];
    }
}
Now... if you do this, run your remote objects, and get an error that says {some object} cannot be serialized, then {some object} has not been marked [Serializable].
In my example, WidgetItem uses the unique object types Sprocket, Spoke, and DooHickey. I must make certain that all three type definitions are marked [Serializable]. Keep iterating through your object tree until you finally succeed without the 'cannot be serialized' error. So I'm required to add:
[Serializable]
public class Sprocket
{
    // ...existing definition
}

[Serializable]
public class Spoke
{
    // ...existing definition
}

[Serializable]
public class DooHickey
{
    // ...existing definition
}
The advantage of this approach:
You can construct your client and use the remote object as a local object while developing and testing, without any serialization occurring.
Once you are ready to make the move, you change the local object to a MarshalByRefObject, mark the objects being returned [Serializable], and your client code is almost identical, with the exception of how the objects are initialized.
We established the standard that if objects are local but MIGHT become remote, their initialization is isolated in a method. That way all remote objects are isolated, and it's easy to change from new Object() to Activator.GetObject() with the beginning of the method altered to first initialize the channel.
Voila - remoting at its simplest.
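That standard might be sketched as a factory method like this; IWidgetManager, WidgetManager, and the URL are the placeholder names from this thread:

```csharp
using System;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;

// Isolates object creation so local and remote use differ in one place.
class WidgetManagerFactory
{
    static bool useRemote = true;        // flip for local development
    static bool channelRegistered = false;

    public static IWidgetManager Create()
    {
        if (!useRemote)
        {
            // Local testing: no channel, no serialization involved.
            return new WidgetManager();
        }

        // Going remote: register the channel once, then proxy the SAO.
        if (!channelRegistered)
        {
            ChannelServices.RegisterChannel(new HttpChannel());
            channelRegistered = true;
        }

        return (IWidgetManager)Activator.GetObject(
            typeof(IWidgetManager),
            "http://server:80/App/WidgetManager.rem");
    }
}
```

Callers only ever see IWidgetManager, so switching between local and remote never touches client code.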
Marcelo,
Putting this another way:
"With the exception of earlier TCP/IP RPC implementations, in which you even had to worry about little-endian/big-endian conversions, all current remoting frameworks support the automatic encoding of simple data types into the chosen transfer format.
The problem starts when you want to pass a copy of an object from server to client. Java RMI and EJB support these requirements, but COM+, for example, did not.
The commonly used serializable objects within COM+ were PropertyBags and ADO Recordsets - but there was no easy way of passing large object structures around.
In .NET Remoting the encoding/decoding of objects is natively supported. You just need to mark such objects with the [Serializable] attribute - OR - implement the interface ISerializable, and the rest will be taken care of by the framework. This even allows you to pass your objects cross-platform via XML.
The serialization mechanism marshals simple data types and subobjects (which have to be serializable or exist as remote objects), and even ensures that circular references (which could result in endless loops when not discovered) don't do any harm."
<sub> -- Ingo Rammer, Advanced .NET Remoting </sub>