Create another class called Permissions or something similar. In that class you would have boolean properties corresponding to each permission you need. Then in your role class you would have a Permissions property, which would return a permissions object with the appropriate permissions set. Each user would then be assigned a role, which carries those permissions. Then in your application you could have a CurrentUser property, whose role carries the permissions, and you could enable buttons/menu items accordingly. Here is a simple example:
public partial class Form1 : Form
{
    User currentUser;

    public Form1()
    {
        InitializeComponent();
        // currentUser would be assigned at login, before this line runs
        btnAddNewDoc.Enabled = currentUser.CurrentUserRole.Permissions.CanAddNewDoc;
    }
}

class User
{
    public int UserID { get; set; }
    public string UserName { get; set; }
    public UserRole CurrentUserRole { get; set; }
}

class UserRole
{
    public string RoleName { get; set; }
    public RolePermissions Permissions { get; set; }
}

class RolePermissions
{
    public bool CanAddNewDoc { get; set; }
    public bool CanDeleteDoc { get; set; }
}
When I was a coder, we worked on algorithms. Today, we memorize APIs for countless libraries — those libraries have the algorithms - Eric Allman
Oh wow, thanks a lot. I will keep this in mind. The concept is quite simple really and I can easily create my own classes based on this idea. Thanks a lot, Wayne.
djj55: Nice but may have a permission problem
Pete O'Hanlon: He has my permission to run it.
Glad to help.
Good answer.
As an extension to this, you can then data bind the Enabled property to User.CurrentUserRole.CanXxx (at least in WPF/Silverlight, I think that works in WinForms too).
(By the way, I'd just call that property User.Role. I like brevity.)
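For the binding to stay in sync when a permission changes at run time, the data-source object needs to raise change notifications. Here is a minimal sketch (my own code, not from the post above) reusing the RolePermissions name from the earlier example, using the pre-C#-6 event-raising style:

```csharp
using System.ComponentModel;

// Sketch only: a change-notifying permissions class, so a WinForms/WPF
// binding can re-read CanAddNewDoc whenever it changes.
class RolePermissions : INotifyPropertyChanged
{
    bool canAddNewDoc;

    public event PropertyChangedEventHandler PropertyChanged;

    public bool CanAddNewDoc
    {
        get { return canAddNewDoc; }
        set
        {
            canAddNewDoc = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("CanAddNewDoc"));
        }
    }
}
```

In WinForms the binding itself would then be something like `btnAddNewDoc.DataBindings.Add("Enabled", permissions, "CanAddNewDoc")`, where `permissions` is an instance of the class above.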
Thanks. In WPF an easier way is to use this in your RelayCommand, something like this:
public RelayCommand DeleteFileCommand
{
    get
    {
        return new RelayCommand(() => DeleteFile(), () => CanDeleteFile());
    }
}

void DeleteFile()
{
}

bool CanDeleteFile()
{
    return currentUser.Role.Permissions.CanDeleteFile;
}
and then your button is automagically disabled
Matt U. wrote: The only items I need to hide will be menu and toolstrip items, in which case the layout will change accordingly to place items in their new position.
Addressing this part of your design only: have you considered having a container-control placeholder on your client UI, and three separate UserControls, one for each Role, which are then inserted into the placeholder based on Role? That saves you layout calculations and makes the Role UIs easy to modify and maintain.
A good question, and Wayne's answers are great !
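The placeholder idea can be sketched with a small factory that picks the role-specific control. The class names below are invented stand-ins (in a real WinForms app each would derive from UserControl and be docked into the placeholder panel):

```csharp
// Sketch: one menu/toolstrip control per role, chosen once at login.
// Plain stand-in classes here; in WinForms each would be a UserControl.
abstract class RoleMenu { }
class AdminMenu : RoleMenu { }
class EditorMenu : RoleMenu { }
class ViewerMenu : RoleMenu { }

static class RoleMenuFactory
{
    public static RoleMenu Create(string roleName)
    {
        switch (roleName)
        {
            case "Admin":  return new AdminMenu();
            case "Editor": return new EditorMenu();
            default:       return new ViewerMenu(); // least-privileged fallback
        }
    }
}
```

At login you would clear the placeholder panel and dock the created control into it, instead of recalculating menu layout per permission.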
"Use the word 'cybernetics,' Norbert, because nobody knows what it means. This will always put you at an advantage in arguments." Claude Shannon (Information Theory scientist): letter to Norbert Wiener of M.I.T., circa 1940
I was answering a question here on CP with a recursive solution[^], and I realized I had never really thought about the internal memory use of variables ... which contain constant values at run-time ... in the parameter list of the recursive call in C#.
Now I do remember (dimly) from computer-science daze of yore that recursion uses stacks, and so forth. But, on a practical level in today's .NET C#, is there any real benefit to declaring constant-value variables outside the recursive call vs. having them in the parameter list? Example:
private void SetControlsEnabledProperty(bool isEnabled, bool isRecursive, string typeToCheck, Control.ControlCollection theControls)
{
    foreach (Control theControl in theControls)
    {
        if (theControl.GetType().Name == typeToCheck)
        {
            theControl.Enabled = isEnabled;
        }
        else if (isRecursive && theControl.HasChildren)
        {
            SetControlsEnabledProperty(isEnabled, isRecursive, typeToCheck, theControl.Controls);
        }
    }
}
In this code there are three variables whose constant values are set by the calling code: can I assume the compiler recognizes this and does some kind of optimization, or is it possible that there may be some real benefit to defining those constant-value variables outside the recursive call? As in
private void EnableAllTextBoxes()
{
    bool isEnabled = true;
    bool isRecursive = true;
    string typeToCheck = "TextBox";
    SetControlsEnabledProperty(isEnabled, isRecursive, typeToCheck, this.Controls);
}
thanks, Bill
modified 11-Oct-11 2:04am.
More method parameters means more bytes and more cycles; there isn't much the compiler can do about that. It has to obey the signature and semantics of the method: there could be other callers using non-constant values.
Hi, I'm going to ask a friend of mine who is fluent in IL ... I'm not ... to look under-the-hood for me and tell me if there is any optimization, though I doubt there is. But I have underestimated the C# compiler before: I did not realize, until I asked here on this forum, that the switch/case statement, given an integer case-selector in a continuous range, generates a totally efficient IL jump table.
thanks, Bill
I don't think I've given it any thought either -- until now.
I suppose the parameters would get copied each time, unless the compiler is smart enough.
BillWoodruff wrote: if (theControl.GetType().Name == typeToCheck)
TypeTransmogrifier[^]
Hi, My guess is that it would be very difficult for a compiler to analyze code for any possible change of a variable's value in a recursive call ... a huge possibility matrix comes to mind ... but then I'm not a compiler-writer, and I'm not fluent enough in IL to look under-the-hood and see what's going on.
I have studied ... in the sense of reading other people's explanations ... how the C# compiler optimizes switch/case statements, and the range of implementations/optimizations is quite fascinating.
I read your TypeTransmogrifier article with interest; I am curious if now, a few years later, you'd use the same techniques and code.
best, Bill
BillWoodruff wrote: use the same techniques and code.
Oh yes, but I don't need it very often.
I'm fairly sure that it does cost those parameters on the stack each time. That's going to be about 4 bytes per parameter, I guess, if they're reference types, so it is a fairly minor cost; compared to the time and resources of doing a recursive call in the first place, it's not really worth worrying about.
I'm not sure if the C# compiler optimises tail recursion (i.e. calling yourself on the last functional line of a code path), which can essentially be turned into a 'goto 1;' with some parameter reassignments and avoid the cost of a new function frame entirely.
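To illustrate what tail-call elimination does, here is a hand-converted example (illustrative code of mine, not from the thread; note that the control-walking method in the question is not a tail call, since its recursive call sits inside a foreach):

```csharp
static class TailDemo
{
    // Tail recursive: the recursive call is the last action of the method,
    // so it can be rewritten as a jump back to the top.
    public static int SumDownRecursive(int n, int acc)
    {
        if (n == 0) return acc;
        return SumDownRecursive(n - 1, acc + n); // tail call
    }

    // The same logic after elimination: parameter reassignments plus a loop,
    // so no new stack frame is created per step.
    public static int SumDownLoop(int n, int acc)
    {
        while (n != 0)
        {
            acc += n; // the 'parameter reassignments'
            n -= 1;   // ...then jump to the top instead of calling
        }
        return acc;
    }
}
```

Both forms compute the same result; the loop form just cannot overflow the stack for large n.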
For the type checking, directly comparing types (instead of the name) is actually better optimized in many cases (though not all). Do you have access to the actual type to check against?
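A quick sketch of the difference, using a plain object rather than a control (for the code in the question it would be `theControl.GetType() == typeof(TextBox)`):

```csharp
object value = "hello";

// Comparing by name: a string comparison, and a typo only fails at run time.
bool byName = value.GetType().Name == "String";

// Comparing Type objects directly: checked at compile time, and runtime Type
// instances are shared, so this is effectively a reference comparison.
bool byType = value.GetType() == typeof(string);

// 'is' also matches derived types, which may or may not be what you want.
bool byIs = value is string;
```

All three are true here; they differ in cost and in how they treat subclasses.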
harold aptroot wrote: Do you have access to the actual type to check against? Yes! Thanks for reminding me, Harold; the last update of that code I did ... maybe a year ago ... used direct comparison. I'll update both my answer and the post here on this forum.
My original goal was a method that was even more generalized, taking as its arguments the name (string) of the property to be changed and the new value for the property, but I came to the conclusion this could not be done without Reflection, so I stopped there.
Hello there.
I am currently writing a server and client application. I have code that can handle it when client users close their connection; however, detecting when the connection has closed due to an error is a completely different story. What is the best way to do this?
I had a look at the forum and found similar questions. One of these proposes that I use the Linger and Time Out properties of TcpClient to determine whether the connection is still open. What I read from MSDN about these properties is that they determine the time that write/read operations may wait to complete. To me this does not seem like a good way to test the connection, because shortening these times might close connections when in actual fact the remote connection hasn't disconnected but is merely very busy. Am I wrong? Is my understanding of lingering and time outs correct?
Another solution proposes that I use polling to determine whether the connections are open. This is actually what I am currently doing. My server has a thread that every once in a while sends a small piece of test data to every connection. I then try to catch the IOException that is thrown when I send data to a connection that closed at the client side. This seems to work for the most part; however, it does not throw immediately when I send the test data, it seems to throw randomly on the 2nd or 3rd set of test data. Only after it has thrown does the server's TcpClient.Connected property change to false. How can I get the connection to immediately throw the exception? Could there be a better way to fix this and to see if the connection has closed?
Any Help is appreciated
Thanks!
KOM UIT DAAAAA!!!
To understand why this is a problem, look at how a TCP connection works. It is kept alive by, essentially, ping and acknowledgement packets, and a connection is 'dropped' if no acknowledgement packets are received. (It can also be actively closed, which is different, and which closes the Socket object and gives you a 0 byte read on the input stream.) That means that if a connection is lost, the time taken for the OS to notice is dependent on how long it's prepared to wait and how often it pings the connection.
I think, if you aren't sending any data, the connection is never checked. That's why sending data into the connection provokes the exception – it makes the OS look at the connection and say 'oh, nothing has been received for a minute, it must be closed'. Sometimes it won't fail because the overall timeout hasn't expired and it hasn't done a 'ping test' so your first data packet is what triggers the 'waiting for acknowledgement' and then the connection is marked as dropped after the ack timeout (a few seconds, I think).
These issues all come about because TCP is essentially emulating a connected, stream transfer subsystem on top of a packet-based, fire and forget one (IP), and working out when a receiver has gone away in that situation is not always immediately possible.
In my sockets library I catch the exceptions (SocketException, and also ObjectDisposedException which can happen if .Net closes and disposes of the socket without telling you about it) and treat that as a disconnection.
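A minimal sketch of that pattern (the helper name and shape are mine, not from the library described):

```csharp
using System;
using System.Net.Sockets;

static class SocketProbe
{
    // Treat a failed send as a disconnection. A SocketException means the OS
    // reported a reset/abort; an ObjectDisposedException means the socket was
    // already torn down locally.
    public static bool TrySend(Socket socket, byte[] data)
    {
        try
        {
            socket.Send(data);
            return true; // send accepted (it may still only have reached the
                         // local buffer, not the remote end)
        }
        catch (SocketException)
        {
            return false;
        }
        catch (ObjectDisposedException)
        {
            return false;
        }
    }
}
```

Note the comment in the success path: a send that 'works' only proves the OS accepted the bytes, which is exactly why the exception often arrives on the second or third probe.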
Hi thanks for the reply.
You say your library catches SocketException and ObjectDisposedException and treats them as an indication that the connection has closed.
When do these exceptions get thrown? When you send data?
Say for example I am connected to a remote computer. If its connection closes abruptly, under which conditions will these exceptions get thrown on my machine? Would .NET eventually close/dispose my connection if the connection was closed on the remote machine? I was not aware that it worked that way; maybe I misunderstood what you meant.
If the remote client actively closes the connection, you will get a 0 byte result from Read, most of the time. This should be your normal closure condition.
If the connection is lost, either because the network drops out or because the remote client dies or otherwise closes the connection without doing so actively, you will, in general, not be notified. Yes, typically exceptions are thrown when you try to write to the socket. I think if you wait long enough the OS will time out an idle connection, but that timeout is quite long (minutes, at least), and in some cases the timeout can be infinite i.e. you would never be told that the connection died.
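The zero-byte-read check can be sketched like this (helper name is mine):

```csharp
using System.Net.Sockets;

static class SocketRead
{
    // A blocking Receive that returns 0 bytes means the remote side closed
    // the connection gracefully (TCP FIN). If data arrives instead, this
    // returns false and the received bytes are in the buffer.
    public static bool PeerClosed(Socket socket, byte[] buffer)
    {
        return socket.Receive(buffer) == 0;
    }
}
```

In a real read loop you would of course process any bytes received and treat only the 0 result as the normal closure condition.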
Thanks for the reply. It seems to me then that the way I am currently doing it is probably the best way to detect a connection closed due to errors. I would then just have to accept that I need to wait for the connection to time out.
I will add a check on the result of a read, to see if the remote client actively closed. Thanks for the help.
I had a look at my initial post. I did not clearly state the nature of the problem. The problem is that I am having trouble detecting when the remote connection closes on its side of the connection. It seems like your answer is about detecting when the connection closed on my side, or it seems that way.
If there was any confusion, I apologize, since I was not clearer about the nature of the problem in the initial post...
Thanks for the replies though.
Yes this will be VERY useful! Thanks!
Good catch. For a dedicated server machine that could be handy.
TCP the protocol (not TcpClient) for the most part does not do what you are asking.
And although Keep-Alive is part of the protocol, in general it is not going to be useful. Generally it won't even work unless you control the entire network infrastructure for all the components. And working doesn't mean it is useful.
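For completeness, this is what turning on the OS-level TCP keep-alive looks like in .NET (whether it helps is exactly the point being argued here; the option only enables OS probes, whose default intervals are long):

```csharp
using System.Net.Sockets;

static class KeepAlive
{
    // Enable OS-level TCP keep-alive probes on a socket. The probe timing is
    // controlled by the OS (historically measured in hours on Windows), so
    // this alone rarely gives prompt disconnect detection.
    public static void Enable(Socket socket)
    {
        socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
    }
}
```

This is the per-socket switch only; tuning the probe interval is an OS-configuration matter, as the rest of the post explains.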
If a connection pool is in use, then the properties of the connection pool dictate how idle connections are handled. If a connection pool is not in use, then idle connections will not be closed (excluding a close from the client).
A pool will close sockets that are in an error state or closed, but that only happens after it is detected. Excluding a pool being configured to send a keep-alive (probably not the same as the TCP Keep-Alive), it relies on the user's functionality to detect such errors. Pool keep-alives can be considered to exist only in database pools, as determining other types of keep-alives depends on the client.
This has nothing to do with the language/API either. But briefly looking at the docs for TcpClient, I see nothing that suggests a pool is in use.
In general the only way to tell if a socket is still good is to send something and wait for it to indicate it went. This is not 100% reliable, but is generally sufficient, especially if a reply is expected. Again, this is how TCP works.
Additionally, and this impacts the point of Keep-Alive (the TCP protocol): why do you think this matters? What exactly are you gaining?
For example, let's say you have a data center with two 24x7 servers that talk to each other.
From that, the following scenarios exist.
- 99% of the time there is no problem.
- 0.99% of the time there is a socket closure caused by a scheduled bounce of one of the servers.
- 0.01% of the time the socket fails because of a 'network' failure which could include someone kicking out the power cord of one of the servers.
Now the last case can occur at ANY time. So the fact that a socket was 'alive' a minute ago doesn't help you when you actually use it, because it can fail when you use it. So the only way to deal with this is either your code that USES the socket is written to deal with retries, or you accept some infrequent processing errors (and other business processes deal with them).
Note that the second case can occur at any time as well.
Conversely, the other 99.99% of the time you are now sending keep-alives (not necessarily TCP) around the system to no purpose. Why no purpose? Because the system is up, so they don't do anything.
So you have useless traffic which doesn't keep you from writing code to deal with downtime anyway.
(An additional consideration: what happens if the server on the other end gets the message but never does anything with it?)
Grimes wrote: How can I get the connection to immediately throw the exception?
Presumably you mean for your test case, but in general the answer is you can't. The only way that happens is if the originating computer already knows that the other end is down, which only happens if there was a previous error or it got a close from the server. Otherwise the originator MUST wait a non-trivial amount of time for a response. That is how TCP works.
If and ONLY if your computers are in a high-quality data center, you can adjust the OS (OS, not app) configuration values to significantly reduce how long it waits. But on Windows it cannot be reduced below 30 seconds.
Again, this ONLY applies to servers WITHIN a data center. It is completely inappropriate for the internet and is unlikely to be appropriate for a business LAN.
Grimes wrote: Could there be a better way to fix this and to see if the connection has closed?
Closed represents a happy path based on the target server actually closing the socket. Other failure scenarios do not have anything to do with closing the socket.
As an example, if a router goes bad, the path between the two servers can become unresolvable. Both servers are still up and think their sockets are good, but one end never gets messages and the other end fails when it sends.