Cheers Mark. Led Mike has come up with the idea of a server in the middle to gather the data and reduce the network load on the monitors. Is this what you were getting at with your idea of "pulled in one request for all monitored machines"?
Mark Churchill wrote: broadcast
I'm interested in this idea. Is it possible for all the monitors to simply broadcast their state, and for each dashboard to listen, without increasing the bandwidth? I've not really thought about this kind of communication. How would I go about doing it? Is it something that WCF supports?
Simon
---
Sorry, I assumed that the "monitor app" was some sort of aggregation server anyway.
I was working on the assumption that the monitor app would be pinging/receiving some sort of data from the monitored machines.
Then several dashboard apps would pull data from the monitor app. By "pulled in one request" I mean that the dashboard could ask the server for an entire snapshot of state in one request, and then (if bandwidth was going to be an issue) after that ask for a difference. I was suggesting this over dashboards subscribing/having data pushed from the monitor server.
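The snapshot-then-diff idea can be sketched in a few lines. This is only an illustration (Python used for brevity, though the thread is about .NET; all names and values here are invented):

```python
def diff_state(old, new):
    """Return only the entries whose values changed between two snapshots."""
    return {k: v for k, v in new.items() if old.get(k) != v}

# First request: the dashboard pulls a full snapshot from the server.
snapshot = {"machine1.temp": 71, "machine1.speed": 1200, "machine2.temp": 68}

# Later requests: the server sends only what changed since the snapshot.
current = {"machine1.temp": 73, "machine1.speed": 1200, "machine2.temp": 68}
delta = diff_state(snapshot, current)
```

Here `delta` contains only the changed temperature reading, so follow-up requests stay small even when the full state is large.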
I wasn't too concerned about how the monitored things updated the monitor app. I could see that being done in a variety of ways, such as the monitor app pinging them, or servers pushing info onto message queues.
I don't think WCF specifically supports broadcast (I might be wrong here, I didn't check) - but I think MSMQ does. If you are on a LAN and your router supports it I guess you could save some bandwidth with UDP broadcast, but I think we'd be in overkill territory there :P
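For what it's worth, raw UDP broadcast is only a few lines of socket code in most environments. A rough sketch (Python for brevity; the address and port are made up, and a real deployment would send to the subnet's broadcast address, e.g. 192.168.1.255):

```python
import socket

def make_broadcast_sender():
    """Socket a monitor could use to broadcast its state."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # SO_BROADCAST must be enabled before sending to a broadcast address.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    return sock

def make_listener(port):
    """Socket a dashboard could use to receive broadcast state."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))  # listen on all interfaces
    return sock

# A monitor would then periodically do something like:
#   sender.sendto(state_bytes, ("192.168.1.255", 9999))
# and each dashboard would loop on:
#   data, addr = listener.recvfrom(4096)
```

As noted above, this only saves bandwidth if the network hardware cooperates, and broadcast traffic does not cross routers.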
But I agree with Mike, I think the "monitor app"/aggregation server is a given (for a start it gives you centralised logging/stats if you need them). It also lets your dashboard clients request only the data they are interested in. TBH it could probably just dump your info into a SQL database and have the dashboards request it out, or even use dependencies...
---
Mark Churchill wrote: Sorry, I assumed that the "monitor app" was some sort of aggregation server anyway [Wink]
I was working on the assumption that the monitor app would be pinging/receiving some sort of data from the monitored machines.
Ahh, maybe I explained it badly. Each monitor app runs on a standalone PC that is connected to one machine. It receives several inputs from the machine, such as production speed, temperature, etc. The connection from the PC to the machine is via a serial port link to the machine's PLC. (The monitor app actually serves two purposes: the first is to display the monitored data visually, but it also allows some control of the machine via a touch-screen interface.)
Mark Churchill wrote: I dont think WCF specifically supports broadcast (I might be wrong here, I didnt check) - but i think MSMQ does. If you are on a LAN and your router supports it I guess you could save some bandwidth with UDP broadcast, but I think we'd be in overkill territory there [Poke tongue]
Yeah, although I like the idea of a broadcast-based system, I'm thinking now it's slightly over the top; the server option seems the best.
Mark Churchill wrote: But I agree with Mike, I think the "monitor app"/ aggregation server is a given (for a start it gives you centralised logging / stats if you need them). It also lets your dashboard clients request only the data they are interested in. TBH it could probably just dump your info in a SQL database and have the dashboards request them out, or even use dependancies...
Yeah, there are plenty of advantages to a server approach. It would also let me easily build in a security layer that grants access only to the appropriate people, without the monitor apps having to be aware of all the privileges; all they need to know is to give the data up to the server.
Thanks guys, you've helped me loads. I can get cracking on a decent design now.
Simon
---
Simon Stevens wrote: But surely there's a better way.
Simon Stevens wrote: Is there really no way I can reduce the network traffic?
Perhaps you don't have each dashboard connect directly to a monitor. Instead, you write a server application to do this and host it on its own machine (not a monitor machine). The server app communicates directly with the monitors, and the dashboard apps communicate with the server. This might introduce a small amount of latency (I don't know if that is important to you) as the server machine becomes a bottleneck, but it would reduce network connections and the load on the monitor machines, since they only communicate with a single remote process.
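The shape of that server is simple: monitors push their latest reading in, and dashboards pull many machines' state out in one request. A minimal in-memory sketch (Python for illustration; a real version would sit behind WCF, HTTP, or similar, and would need locking for concurrent access):

```python
class AggregationServer:
    """Hub between monitors and dashboards (illustrative only)."""

    def __init__(self):
        self.latest = {}  # machine id -> most recent reading

    def report(self, machine_id, reading):
        """Called by a monitor whenever its state changes."""
        self.latest[machine_id] = reading

    def snapshot(self, machine_ids=None):
        """Called by a dashboard: one request covers many machines."""
        ids = self.latest.keys() if machine_ids is None else machine_ids
        return {m: self.latest[m] for m in ids if m in self.latest}
```

Each monitor makes exactly one connection (to the server), and each dashboard gets everything it needs in a single round trip, which is where the traffic reduction comes from.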
led mike
---
Genius, and yet so simple. Why didn't I think of that? It solves all my issues. A bit of latency is fine, so a simple server box in the middle can aggregate all the data from the monitors and provide it easily to the dashboard apps without a gazillion messages flying around.
Thanks Led Mike. One big fat 5 is heading your way.
Simon
---
Frankly, I think you have some misconceptions about what constitutes heavy network traffic. Admittedly I don't know how much data your messages will contain, but the number of messages as such doesn't seem to me to be much of an issue.
If I am wrong, however... you could use some form of multicast/broadcast technique so that a monitored PC doesn't have to send the same packet to each of its monitors. I don't know exactly how it is done, but it certainly is possible. Apart from that, you can do simpler things like scale back the poll frequency a bit and optimize the communication - use compact bit arrays instead of boolean flags, for example, and remoting over a low-latency protocol like UDP rather than some bloated XML web service. Remoting uses binary serialization, so it's very quick as well as very compact; XML is neither (though it has its uses, of course).
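The "compact bit arrays plus binary serialization" point can be made concrete. Here is a sketch of packing a handful of boolean flags and two readings into a fixed 12-byte message (Python's `struct` standing in for .NET binary serialization; the field layout is invented):

```python
import struct

def pack_flags(flags):
    """Pack a sequence of booleans into one integer, bit i = flags[i]."""
    value = 0
    for i, flag in enumerate(flags):
        if flag:
            value |= 1 << i
    return value

def pack_message(flags, temperature, speed):
    # One 32-bit flag word plus two 32-bit ints: 12 bytes total,
    # versus far more if every flag travelled as its own XML element.
    return struct.pack("<Iii", pack_flags(flags), temperature, speed)

msg = pack_message([True, False, True], 72, 1200)
```

Up to 32 flags ride along in the same four bytes, which is the "bit array instead of boolean flags" saving.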
---
dojohansen wrote: Frankly I think you have some misconceptions about what constitutes heavy network traffic. Admittedly I don't know how much data your messages will contain but the number of messages as such doesn't seem to me as much of an issue.
I did wonder if this might be the case. My messages really should be rather small. We're talking maybe a few dozen status bit flags, a few 32-bit integer values, and a few string values (only 10 or so chars long each). And it could be optimised further by only transmitting changes between each message.
It just doesn't seem very scalable; each new addition to the system adds a whole load of extra messages.
You say that the level of messages I've described is fine. Out of interest, at what point would you say I should start being concerned about the level of traffic?
Simon
---
I don't know if it is fine or not; I'm just going by some very crude speculation. A second is a very long time in computing terms.
If I launch Task Manager and visit the Networking tab, then select columns to show the number of unicasts and non-unicasts per interval, and convert this to per second (at "normal" update speed Task Manager appears to use an interval close to 2 seconds), I see that the activity varies a bit but is between 50 and 100 "casts" per second when idle. My network utilization varies between 0.01% and 0.03%.
Another crude thought: when loading a web page, the browser requests the document and then issues separate requests for all the linked resources, such as JS files, style sheets, and above all images. It seems to me this load must be many thousand times greater than what you're trying to do, yet I am sure that far more than one PC in a thousand on our corporate network is browsing the web at any given time.
I don't know if there even is a general answer to your question (since network capacity might vary rather a lot in the world and you've said little about the network on which this will operate), and if there is one I don't have the knowledge to provide it. So I cannot say at what level you should be concerned. But very simple and crude common sense observations seem to me to indicate that it would be a non-issue.
I may be wrong of course, but until someone presents me with some better basis than just my own arrogant speculation I *think* adding a few bytes per second to the network traffic of each PC translates into adding less than 1% overhead to the network overall. Check your own stats, I'm sure you'll have at least a kilobyte per second in network traffic when your PC is idle!
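This crude estimate is easy to reproduce with the figures Simon gave earlier (a few dozen flags, a few 32-bit ints, a few ~10-char strings); the monitor count and link speed below are assumed purely for illustration:

```python
# Rough size of one status message from Simon's description.
flag_bytes   = 48 // 8      # "a few dozen" bit flags, packed into bytes
int_bytes    = 4 * 4        # four 32-bit integers
string_bytes = 4 * 10       # four ~10-character strings
payload = flag_bytes + int_bytes + string_bytes  # 62 bytes

# Assume 50 monitors, each reporting once per second.
bytes_per_second = payload * 50                   # 3100 B/s
link_bits_per_second = 100_000_000                # 100 Mbit LAN
utilisation = bytes_per_second * 8 / link_bits_per_second * 100
```

Under these assumptions the whole system uses roughly 0.025% of a 100 Mbit link, which supports the point that message count alone is unlikely to be the problem.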
---
1- When refactoring code, is it right to start by revising the architecture, or should you first test the code with the old architecture and only start improving the architecture once you are sure it is working properly?
2- Is it right to change an architecture that is working properly just because you have found better ways?
---
fateme_developer wrote: or test the code with old architecture
"or test"? What does that mean? How can you have an existing architecture that has NOT been tested?
fateme_developer wrote: 2- is that right to change an architecture which is working properly because you find better ways?
There is no single answer to that question. All the project variables must be considered.
led mike
---
led mike wrote: All the project variables must be considered.
Would you expand on that? What are the important variables, and how do they affect the final decision?
---
fateme_developer wrote: Would you expand on that? What are the important variables, and how do they affect the final decision?
There are far too many to list, but for example, say you developed an Xbox game. Once you've shipped it, there is likely very little benefit in refactoring the code, since you won't be modifying and extending the game. Even in that example, though, there still might be a reason to refactor: if in developing the game you also developed a graphics engine that you intend to use in future games, then it would be a good decision to refactor the engine but not the game-level code.
led mike
---
The best way to approach any refactoring exercise really should be to make sure that you have unit tests in place (with as much code coverage as possible) and that those unit tests all pass. This allows you to refactor and objectively verify that the refactoring hasn't broken anything. That being said, this isn't always possible depending on the situation, but as long as you are careful and targeted in the refactoring you can minimize the potential risks.
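As a tiny illustration of that safety net, the tests below pin down the observable behaviour of a function before it is refactored (Python's built-in unittest standing in for whatever framework the project uses; the function and cases are invented):

```python
import unittest

def parse_version(s):
    """Code about to be refactored; the tests capture its current behaviour."""
    return tuple(int(part) for part in s.split("."))

class VersionTests(unittest.TestCase):
    # Tests describe behaviour, not implementation, so they should
    # still pass after the internals are rewritten.
    def test_three_components(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_single_component(self):
        self.assertEqual(parse_version("7"), (7,))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(VersionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

After the refactoring, rerunning the same suite gives an objective check that nothing observable changed.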
As for changing the architecture because you find better ways to do something, you have to make a decision if the risks of changing the architecture at that point in time outweigh the benefits.
Scott Dorman Microsoft® MVP - Visual C# | MCPD
President - Tampa Bay IASA
Hey, hey, hey. Don't be mean. We don't have to be mean because, remember, no matter where you go, there you are. - Buckaroo Banzai
---
Scott Dorman wrote: The best way to approach any refactoring excercise really should be to make sure that you have unit tests in place
I agree you should have unit tests, but not for the purpose of refactoring. Sure they are invaluable to the effort but unit tests should be in place for many reasons having nothing to do with refactoring and the reasons come into play long before you have anything to refactor.
led mike
---
led mike wrote: you should have unit tests, but not for the purpose of refactoring
Absolutely. Ideally, unit tests should be in place long before any thoughts of refactoring like this occur. Since that isn't always the case, the next best thing is to recommend that they be in place before the refactoring.
Scott Dorman
---
Yes. It's hard to imagine someone thinking about design and architecture and refactoring who hasn't yet thought about the fact that they should have unit tests. As hard as it might be to imagine that someone, no doubt they exist.
led mike
---
It's not so strange. There are many cases where constructing unit tests would in itself represent a huge development effort, and I for one disagree that it is necessarily a good idea to do so when about to embark on a refactoring.
The project I work on is a case in point. It is a web application and it isn't easy to unit test, among other reasons because the intrinsic ASP.NET objects (request, response, session state, application state, HttpServerUtility) are used indiscriminately everywhere, so to unit test some business class you need to mock these - which you can't, because they are all sealed classes. Even if you could, it would be quite a bit of extra work. And if your refactoring involves removing all these objects from your business logic, you don't get any long-term benefit from doing all that work - once your tests are developed, your refactoring then breaks them!
And that leads me to the more general observation: refactoring isn't (usually) just about changing implementation details; it usually involves interface changes, decoupling of objects, and changes to construction logic - perhaps introducing a factory somewhere. Most of these changes are of a nature that makes them very likely to break any tests. So why do all the development work to make a bunch of tests that you will immediately go about invalidating? In my view, it is far better to refactor and write tests for your *new* code as you go. In the end, if you had bugs that are still there after the refactoring, there is no reason to think the refactoring itself would make them harder to fix afterwards, and if your refactoring introduced bugs, that is no different from if you had made the unit tests for the "before refactoring" version of the application.
So I don't get it. What would be the great benefit of developing tests to know that some code you're about to change works, when those tests cannot be reused to test the code after the changes?
---
dojohansen wrote: There are many cases where constructing unit tests would in itself represent a huge development effort
Therefore your conclusion is to not do it? That's going to make reading the rest of your post difficult, but I will try.
dojohansen wrote: because intrinsic asp.net objects (request, response, session state, application state, httpserverutility) are used indiscriminately anywhere
Bad, bad design, period.
dojohansen wrote: so to make a unit test for some business class you need to make mocks of these
No you don't - well, apparently you do, but I don't, because I don't tightly couple business objects to the ASP.NET intrinsics, which are presentation-layer objects. See Separation of Concerns, Isolation, and object-oriented design principles and best practices.
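The decoupling being pointed at here is usually achieved by putting a thin abstraction between the business logic and the framework object, so tests can substitute a fake. A sketch (Python for brevity; in C# the same shape would be an interface plus a wrapper around the sealed HttpSessionState, and all names here are invented):

```python
class SessionStore:
    """Abstraction the business code depends on, not the framework object."""
    def get(self, key):
        raise NotImplementedError
    def set(self, key, value):
        raise NotImplementedError

class FakeSession(SessionStore):
    """Trivial in-memory stand-in used by unit tests."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

def remember_last_order(session, order_id):
    """Business logic, now testable without a web server running."""
    session.set("last_order", order_id)
    return session.get("last_order")
```

In production an adapter implementing the same abstraction would wrap the real session object; the business logic never touches the sealed class directly.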
dojohansen wrote: So I don't get it.
Clearly.
dojohansen wrote: What would be the great benefit of developing tests to know that some code you're about to change works, when those tests cannot be reused to test the code after the changes?
Once again, it's about how you design your unit tests - it's about the cases. The cases should be reusable in the new unit test code you write for the refactored version, so that you have the same test coverage of the new code as you did of the old code.
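One way to read "reuse the cases" is to keep the test data separate from the implementation, so the same table of inputs and expected outputs drives the tests for both the old and the refactored code (illustrative sketch; the function is invented):

```python
# The cases outlive any particular implementation.
CASES = [("abc", 3), ("", 0), ("hello world", 11)]

def length_before(s):
    """Old implementation, about to be refactored."""
    count = 0
    for _ in s:
        count += 1
    return count

def length_after(s):
    """Refactored implementation."""
    return len(s)

# The same coverage applies to both versions.
for impl in (length_before, length_after):
    for text, expected in CASES:
        assert impl(text) == expected
```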
Final note: Avoiding work is NOT a valid excuse for not testing. If it were, software in general would be a lot worse quality than it already is.
led mike
---
I wrote a sincere and, I would say, well-considered opinion about a subject relevant to some people, and certainly to the original question posted. Your reply can be summed up as you trying to tell me "you're an idiot". I am sure the person who asked the question is able to judge the reasons given for and against any recommendation, and I suggest you stick to explaining why he should develop unit tests before refactoring and let readers draw their own conclusions.
Your venomous, arrogant reply was quite provocative. If it were not for the fact that your arrogance is - incredibly - superseded by your stupidity, I'd take it as a bitter pill and just try to learn from it. As it is, I'll take issue with your "argument".
We are not discussing the quality of the application code in the project I work on. It should be blindingly obvious to all of us why using, say, session state in a business object is just plain wrong. I was debating whether or not, *generally* speaking, it is smart to develop unit tests BEFORE or AFTER a refactoring of code. Unfortunately not one word of your input related to this subject, so I can only say I am unused to "debating" at this level and not quite certain how I can best help you see the light. I will try though, adding some emphasis especially for you:
It is my OPINION that it is not ALWAYS or AUTOMATICALLY best to develop unit tests PRIOR TO REFACTORING CODE. If developing the tests represents a SIGNIFICANT WORKLOAD that would be UNDONE by the subsequent refactoring, it is my opinion that it is A WASTE OF TIME to do it IN THIS ORDER.
Furthermore, I claim that in the real world, MOST (this is not quite the same as all) refactoring would be of such a nature as to be very likely to break any unit tests developed. Hence, if about to refactor a project for which no unit tests exist, it MAY be a GOOD IDEA to CONSIDER whether one should refactor FIRST and THEN develop the unit tests.
If you don't have anything more intelligent to say than "avoiding work is NOT a valid excuse" when I haven't even remotely suggested anything of the kind then PLEASE, just spare us. You are neither funny nor enlightening. Working hard is not sufficient in software development; you have to use your brain too. So do it.
---
dojohansen wrote: I was debating whether or not *generally* speaking it is smart to develop unit tests BEFORE or AFTER a refactoring of code. Unfortunately not one word you input related to this subject
Perhaps it's because you replied to my post that said:
led mike wrote: Yes. It's hard to imagine someone is thinking about design and architecture and refactoring but they haven't yet thought about the fact they should have unit tests. As hard as it might be to imagine that someone, no doubt they exist.
What I was discussing in that post as well as my last one to you was the importance of Unit Tests during INITIAL DEVELOPMENT. The original post used terms like "old architecture" which I took to mean in production.
dojohansen wrote: It is my OPINION that it is not ALWAYS or AUTOMATICALLY best to develop unit tests PRIOR TO REFACTORING CODE
It's my opinion that it's insane to be in production without unit tests. And no, this is not some new belief based on TDD; I learned to use unit tests long before I ever heard the term Test Driven Development, and long before any frameworks existed, AFAIK. Therefore I was never discussing the merits of developing unit tests based on a need to refactor post-production, since the unit tests would already exist prior to production. My comments were only to the point of developing unit tests as a standard practice of software development, period.
So you can get as upset as you want with what I said, it doesn't change any of that.
led mike
---
Somebody's been really hammering you with 1 votes on this board Mike. I've corrected the balance on some of them, and will continue to do so as time allows.
---
Pete O'Hanlon wrote: Somebody's been really hammering you with 1 votes on this board
Yeah, probably the same guy that was giving you the 2's. He explained that was an accident though. Apparently he didn't like my response to his argumentative reply in this thread about the importance of Unit Tests. Like I give a crap what he thinks.
led mike
---
Ahhh. Yup - I was reading this thread through with interest, especially because it highlights the differences of opinion about whether or not UT is important. It's a shame that people miss the point that good up-front UT is a great way to force yourself to think more about the architecture of the system, and to look at putting good practices into place.
---
Pete O'Hanlon wrote: It's a shame that people miss the point
Sure, it's one in a long list of points people miss in this field regarding good practices, witnessed here every single day - "here" having multiple meanings:
1) Code Project
2) My peers and *gulp* managers or where I work
3) Me myself and I
Well, maybe I don't miss them so much as I don't know them. I remember when I started out at university we worked with Unix and DOS. This was pre-Windows, and PCs were starting to move out, so DOS became the hot bed. My internship, which became a job (long story, or not), was DOS POS - yes, Point of Sale and Piece of Shite. Anyway, back then my vision of the future was that I would be one of these guys who knew almost everything about PC development. Then came (for me) OO, Windows, the Internet, etc., and I soon realized I would only ever scratch the surface - or not, depending on your point of view. Most days it feels like not.
Anyway, people who don't have a clue and don't even try - those who ignore people like Cunningham, Fowler, Beck, Booch (and there are just so many) as though they know better than they do - I find annoying at best; at worst they seem disgraceful. For my part, I just hope I can even understand those authors' points; the idea that I could judge them is not something I would even consider.
led mike
---
led mike wrote: I soon realized I would ever only scratch the surface
Yup - the more I learn, the more it becomes apparent that there's more to learn. For me, the forums are a great place to learn from people I respect. Even if I don't often ask questions on the forums, it's amazing how much I've picked up from the better answers.