|
Quote: Yup, sounds like TCP/IP, but that does not answer whether the robot will be pushing/reading from a socket, or is expecting a webpage. You'd need to ask the team-member.
The robot will keep sending requests to the server (handshaking with it), so most of the time the server application will be in listening mode. Only when a certain command requires the robot to take action will the server application send a command via the socket (the robot has its own API).
Okay, now I am quite clear about what I should do, but one more thing. The server application also needs to return a webpage to the client, so do I need to separate the server application? What I mean is: one server application to serve the robot, and another one for the client. My client needs to monitor all the robot information in real time (updated every 1 to 3 seconds).
Thanks again
|
|
|
|
|
That will depend on how your teammate has implemented it on his side. You should ask whether he/she can actually communicate using webpages (does the robot have a server you can request pages from?)
A socket would be simpler, as you'd open it and wait for text to arrive
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Quote: A socket would be simpler, as you'd open it and wait for text to arrive
Okay, I think I will continue with the WebSocket solution, which is able to communicate with ROS.
This is my last question: shall I develop the WebSocket server as a Windows service, or just host it in IIS?
Thanks in advance
|
|
|
|
|
Host it in IIS.
Windows Services are meant for applications that do NOT require interaction with the user, and are usually started before a user logs on. It would introduce complexity without any extra benefits.
If you want to create a WinForm UI, then create a WinForm app. The choice would be between a webapp and a WinForm app, not a Windows service. Given your experience, I'd recommend the webapp.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Eddy, a thousand thanks for your suggestions and tips.
Another question:
if WebSocket is compared with WCF, which technology is more suitable for communication with a machine/robot?
|
|
|
|
|
You're welcome.
I haven't used WCF yet, so I can't comment there. I'd go for a prototype using the socket classes, probably TcpClient - though WCF is supposed to be a bit more friendly and flexible.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Hi,
Since a robot is a real-time system, I wouldn't want to mix it directly into a web server.
I would make an application in C# or Java with a dedicated socket for communication with the robot. This application can continuously communicate with the robot - reading states and sending commands.
When that is in place, you can use a database or the file system to pass data to and from a web server. This way the application is in control of when to read a command from a client, when to send the command to the robot, and when to sample the states of the robot.
In a typical web server everything is request-based. A request is initiated by the client, and your code is basically a method call that has to return as fast as possible. It might be easy at the start, but it is a poor way to monitor a robot. As an example, it becomes a difficult task to track the robot, document where it was, and then deduce why it did what it did.
I would let a standalone application dump the robot's state to the file system periodically and let it check the file system for the next command to send. In this way - if it is made flexible enough - you can have a folder with commands that takes your robot for a spin and produces a folder with its monitored states. If it fails, you can redo the experiment after changing whatever is needed on the robot, and you will have all the data stored in the file system for your report.
Afterwards you can make a web page that reads/writes the file system.
Kind Regards,
Keld Ølykke
|
|
|
|
|
WebSocket is an extension of the HTTP protocol. Therefore, your machine (robot) needs to understand the WebSocket (and HTTP) protocols if you plan to use WebSockets. You also need IIS to run your web application.
Plain TCP sockets require less overhead but *may* require the robot to open a listening port (your web application would then send its updates to this port). Whether you need an extra listening port on the robot depends on your robot's implementation.
You can use a web app to serve the client UI and plain sockets to communicate with your robot. Additionally, you can use WebSockets to push monitoring data to the client UI. However, you need a recent browser for WebSockets to work.
|
|
|
|
|
Hi,
instead of using built-in types like int, float or bool directly, I came to the idea of typedef'ing/wrapping all the built-in types to reflect what they actually represent in the code (in languages that support this feature).
e.g. (C++)
typedef int Age;
typedef int Number;
Age ageOfAPerson;
Number donatedMoney;
...
Reasons for doing so:
* having full control over what the variable really represents.
scenarios:
- an ID is a *number* at first and changes to contain letters later
- the system (connected to a database) was not designed for so many users, and now we're running out of IDs (getting close to max int)
* throughout the code it's rather easy to see which variables share the same functional meaning.
* range changes can be made rather easily: just change the typedef.
* deciding that a former "just-variable" should become a class with a function or some range checking can be implemented rather easily.
* avoiding careless casts (removing temptation): it's easy to cast an "integer" to an "unsigned integer", but it's rather hard to cast a "second" to a "user-id", even if both are an "integer" behind the scenes.
On the cons side, of course, we have a massive overhead (at first).
What do you think about that?
I would be pleased to hear some experiences or opinions.
Regards
|
|
|
|
|
There is no reason not to, but if you don't see the typedef, it is easy to think the type is a class rather than a basic type. I would definitely not do this in a project that will be worked on by other developers. Maintaining other people's code is not easy at the best of times.
|
|
|
|
|
Hi,
Quote: There is no reason not to, but if you miss the typedef it is easy to think that this type is a class rather than a basic type.
I think it shouldn't matter what it really is. That would basically be the reason to do it: masking the types. Only then would it be no problem to make it a class later.
|
|
|
|
|
It's a good choice for APIs or libraries that should be given to other developers, because if you keep the interface the same you can change implementation easily should the need arise.
Another good reason to use this pattern is when you have ever-shifting technologies: if your project has to target many platforms, you can hide some code in the type class to handle different word sizes, signed/unsigned mismatches and so on.
In any other case I would be reluctant to use it, because it adds massive overhead to simple operations - it would be fine in the frontier classes, but not in the inner worker functions.
Not only that, but debugging may rapidly become complicated, passing through constructors and methods of all sorts for simple operations. The code size also grows, and that too can be a performance and deployment problem.
That said, I admit my view is biased towards intensive high-performance semi-embedded systems; for business applications and network middleware, a more diligent and safe approach is probably better - but I wouldn't know, having no real experience in that field.
My 2 cents.
Geek code v 3.12
GCS d--- s-/++ a- C++++ U+++ P- L- E-- W++ N++ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t++ 5? X R++ tv-- b+ DI+++ D++ G e++>+++ h--- r++>+++ y+++*
Weapons extension: ma- k++ F+2 X
|
|
|
|
|
den2k88 wrote: In any other case I would be reluctant to use it because it is a massive overhead over simple operations - it would be good to use in the frontier classes but not in the inner worker functions.
Not only that, but debugging may rapidly become complicated, passing through constructors and methods of all sorts for simple operations. Also the size of the code grows and that too can be a performance and deploy problem.
Huh? In C++.
A typedef for an int is erased in the binary. Excluding perhaps meta data it has no impact on how the code runs.
Maybe you are thinking of replacing every type with a class rather than just simple type?
Although I question that as well. If a class only has a single data member and that data member is an int then the most efficient compiler optimization would be to allocate storage for just the int. Thus all forms of method passing would have exactly the same cost as the int itself.
As far as code growth goes, at least in standard desktop applications the actual code size (binary) is almost never a problem; it is the runtime memory usage that can become a problem. If you have a C++ application that compiles down to a 1 GB binary, then good for you, but that certainly isn't the standard app. Conversely, many smaller apps can easily use 1 GB of memory for data while running.
But maybe you meant something else that I was thinking of?
|
|
|
|
|
jschell wrote: Maybe you are thinking of replacing every type with a class rather than just simple type?
Precisely - otherwise it makes little sense, except to allow further modifications to the API, like time_t, which can expand to either _time32_t or _time64_t (I have no code at hand, so I may be making some naming error, but I trust it is intelligible).
As I was saying, this is useful if you provide APIs and want to keep interface changes to a bare minimum; internally it makes no sense.
jschell wrote: If a class only has a single data member and that data member is an int then the most efficient compiler optimization would be to allocate storage for just the int.
Never rely on compiler optimizations unless you can force them, understand them completely and enable/disable them at will for every single block of code inside the code itself - and then prepare to fight hard when you upgrade the compiler to a new version (yes, we're still using VS6 for that reason, and it's all out of support, like VS2008 - our "future" - will be one day).
Using classes with only one data member in the inner workings of a component is overkill at best and makes debugging much more convoluted. Classes IMHO should be used to express composite concepts where the individual data members may change while the external interface stays the same. One class per type is really messy.
Of course, none of the above are destructive problems; these are only the reasons for which I, with my small luggage of experience and my viewpoint, wouldn't use this design pattern unless it proves to be the most effective way in some context.
Geek code v 3.12
GCS d--- s-/++ a- C++++ U+++ P- L- E-- W++ N++ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t++ 5? X R++ tv-- b+ DI+++ D++ G e++>+++ h--- r++>+++ y+++*
Weapons extension: ma- k++ F+2 X
|
|
|
|
|
den2k88 wrote: Never rely on compiler optimizations unless you can force them, understand them completely and enable/disable them at will for every single block of code inside the code itself
Perhaps but the original poster, at least as I read it, wasn't suggesting using a class for every type. That was your suggestion.
And it isn't an optimization to allow the class to take the same number of bytes as the data type if in fact
1. The data type fits the word boundary
2. The compiler is not adding virtual support to every single class.
den2k88 wrote: Using classes with only one data member in the inner working of a component is overkill at best, makes debugging much more convoluted
Again - that was your interpretation, not mine, of what the OP said. Certainly the example code posted did not suggest that.
den2k88 wrote: wouldn't use this design pattern unless it is prove to be the most effective way in some context
Thus my first response with the specifics about that.
|
|
|
|
|
Ricky Rick wrote: * having full control over what type the variable really represents.
scenarios:
- an ID is a *number* at first and changes to have letters in it later
- the System (connected with a database) was not designed for having so many users and now we're running out of IDs (getting closer to max-integer)
Nope. You certainly can't use "int ID" in a million lines of code and then expect to do nothing when you change it to "string ID".
Ricky Rick wrote: * throughout all code it's rather easy to see which variables represent the same functional meaning.
Throughout all of the code the data probably will not have the same meaning. For example as a database designer I might need to store a time stamp in a database using two columns (seconds and nanos), the business logic uses a timestamp (single value) and the UI uses a formatted text value localized for the user.
Ricky Rick wrote: * range-changes can be made rather easy: just change the typedef
Normal sequential numerics in a database do not extend for the full binary range of something like an integer. Only positive values need apply.
And I do NOT want to find out I have run out of ids in the database when the entire application fails because the id is too big. So I need a range check before that. And although a medical database might allow a birthday of Jan 1 1901, it shouldn't allow one for Jan 1 1001.
Ricky Rick wrote: * avoid uncareful castings(anti-temptating): it's easy to cast a "integer" to an "unsigned integer". But it's rather hard to cast a "second" to a "user-id", even if both is an "integer" behind the scenes.
It's been a few years since I worked in C++, but I'm rather positive that I can do exactly that. Matter of fact, it is easier to do it that way than the right way. And assignment still works, so why would anyone be attempting to cast (per your example)?
Ricky Rick wrote: On the cons-side of course, we have a massive overhead (at first)
Presumably you mean managing the types rather than anything to do with performance.
The real problem here is that you are going to end up with a non-trivial number of types that exist for one instance.
Ricky Rick wrote: what do you think about that?
Code to what you know and not all possibilities.
It makes it easier for you and makes it vastly easier for someone else that needs to maintain your code after you are gone.
So, for example, if you did in fact have a type of value that was used in many places and which you knew (based on existing or future requirements) that would need to change then putting an actual type, like a class, in place would be a good idea.
Otherwise don't.
|
|
|
|
|
Hi,
jschell wrote: Nope. You certainly can't use "int ID" in a million lines of code and then expect to do nothing when you change it to "string ID".
That's true. But if you *have to* change it anyway, I'd say it's better this way, as you can't forget to change any of the places. You see all the consequences at once, instead of forgetting something that somehow makes its way into production code.
jschell wrote:
Throughout all of the code the data probably will not have the same meaning.
This sentence - taken as it is - would point to a serious design problem that comes crashing down in the long run.
But with your example it gets clearer what you mean.
jschell wrote:
For example as a database designer I might need to store a time stamp in a database using two columns (seconds and nanos), the business logic uses a timestamp (single value) and the UI uses a formatted text value localized for the user.
In that case I'd have three different types (as you have in the code at the moment) - or one class, that supports every representation.
jschell wrote:
Normal sequential numerics in a database do not extend for the full binary range of something like an integer.
jschell wrote:
So I need a range check before that.
I guess you mean a range check in the DB, right?
I'd say it's rather dangerous to rely on the database's ranges and functionality.
jschell wrote:
And I do NOT want to find out I have run out of ids in the database when the entire application fails because the id is too big
But what if you do?
Then you need to search the whole application for every occurrence in every function (even the generic ones, which might not be well named) and change/adjust the behaviour.
(You need to do that in both versions, but - as above - the compiler will help you in one of them, as it becomes a compile-time error, not a runtime one.)
jschell wrote: it shouldn't allow one for Jan 1 1001. So what to do if Jesus walks in because he has risen? "Sorry sir, you must be born after 1970 to get medical care"
jschell wrote:
Been a few years since I worked in C++ but rather positive than I can do exactly that. Matter of fact it is easier to do it that way than the right way. And assignment still works so why would anyone be attempting to cast (per your example)?
Sorry, I didn't really get this one.
You mean nobody would try to cast a second to a user-id, but many might cast an int to a uint?
If I sum everything up, I'd say it's not a bad idea, but one needs to take care where to apply it. So not *every* type, but only very important or technical types.
(@technical: is the English language lacking a word for the difference between technical things (like hardware, programming, a "thread") and real-world stuff (like a "Train" class, which is not a technical but rather a ??? type)?)
jschell wrote: You certainly can't use "int ID" in a million lines of code and then expect to do nothing when you change it to "string ID". This was, btw., the reason for my idea/question, because that happened to me - although with only some 100,000+ lines.
|
|
|
|
|
In the team I am currently working with, we have to deal with software that was initially conceived as a database-driven programme. As a result, there is a set of databases, each with more or less 300 stored procedures!
We cannot accept writing back-end stored procedures any more, as they do not contain any object-oriented thinking, and in the end we have procedural code like in the good old days of C. We have to deal with lots of bugs, and it is not clear whether they come from the back end (MSSQL) or the front end (Windows app in C#).
Are there any other solutions to avoid so many SPs and still have professional code? I thought of replacing SELECT SPs with views, but I am interested in what other options exist.
Thank you.
|
|
|
|
|
Why only backend stored procs? I would run away from any frontend stored procs first.
No object is so beautiful that, under certain conditions, it will not look ugly. - Oscar Wilde
|
|
|
|
|
nstk wrote: having each more or less 300 stored procedures! ..it can hold lots more.
nstk wrote: as they do not contain any object oriented thinking And why is that a problem?
nstk wrote: in the end we have a procedural code like in the days of old good C. C and TSQL are different beasts.
nstk wrote: We have to deal with lots of bugs, which is not sure whether they come from
backend (MSSQL) or the frontend (Windows App in C#) Simple: if it is raised after executing a query, it is in the database. With sprocs it is even easier to verify, as one only has to check the parameters.
A good way would be to have error handling in the sproc: roll back any changes if there's an error, and raise your own custom error that gets handled and logged by the C# code. There are examples on this site on how to do so.
nstk wrote: Are there any other solutions to avoid so much SPs and still have a professional
code? Yes, the alternative is to use inline SQL. That'll be even harder to debug.
nstk wrote: I thought of exchanging SELECT SPs through Views A view is nothing more than a SELECT statement. You can create 300 views, but they would not be very object-oriented. You also cannot pass a parameter to a view as you can with a sproc. Finally, you could execute the SELECT directly from code, without any views or sprocs - but that would be rather dirty.
SProcs are the professional way.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
I can't see what problem you're trying to solve, other than arbitrarily re-factoring your application's architecture.
Do you have a specific issue with these back-end stored procedures? From what you have described here, I can't see any problem. Database servers can easily handle hundreds of stored procedures. There are also huge benefits to using stored procedures, such as performance and load balancing.
If you are experiencing specific problems then please describe what they are.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
I'm trying to figure out how to use System.Threading.Tasks.Task asynchronicity in conjunction with waiting on Win32 HANDLE signaling.
Summary:
I'm interfacing with an unmanaged DLL which communicates with a data collection device. ([DllImport("xxx.dll", CallingConvention = CallingConvention.Cdecl, ...] on each unsafe method.)
One of the methods will acquire N data records asynchronously into a user supplied buffer. Specifically, it "spawns the data read on a background thread", and returns a HANDLE to wait on for completion.
There is an additional method which will provide a different HANDLE that can be waited on for notification that some of the data is ready (every M (M <= N) data records).
The UI should display the latest data at EITHER the interval or completion notifications.
Further, all of the data is captured in a BlockingCollection so the captured data can be written to files asynchronously.
For an initial feasibility effort I have 3 BackgroundWorker instances for these.
The intent was learning how the interfacing with the device works, so no design!
This is coding by accretion == throw away!
The data collection worker calls a method in a class that wraps the DLL. The wrapper method calls the above DLL methods and gets the two HANDLEs. They are associated with instances of AutoResetEvent, which are waited on with WaitHandle.WaitAny(). The wrapper method loops until completion, passing each data record to a delegate that was provided.
The delegate enqueues the data record in the BlockingCollection and, conditionally, updates a pointer to the next data to display (using Interlocked.Exchange()) and Set()s a ManualResetEvent to indicate that there is something to display.
The live display worker is just a simple loop that Reset()s the ManualResetEvent (set above), calls WaitOne() on it, and - again using Interlocked.Exchange() - gets the pointer to the data to display and displays it.
The file save worker is a foreach over the BlockingCollection.GetConsumingEnumerable() , uses the ReportProgress() to have a progress bar updated and writes each data record to its file.
This will be redesigned for the production use.
This all seems way too complicated.
BackgroundWorker doesn't seem like the correct strategy.
It seems that using System.Threading.Tasks.Task ought to be a cleaner solution.
(I haven't needed to use System.Threading.Tasks.Task on any project before, so no real experience... I have read the blog entries by Stephen Cleary.)
Any suggestions on restructuring away from "quick-n-dirty" to "clean and maintainable"?
Or pointers to how to think with System.Threading.Tasks.Task ?
Thanks.
A positive attitude may not solve every problem, but it will annoy enough people to be worth the effort.
|
|
|
|
|
Not sure how much this will help, but you can convert a WaitHandle to a Task with an extension method[^]:
public static class WaitHandleExtensions
{
    // Callback invoked by the thread pool when the handle is signalled
    // (or when the wait times out).
    private static void WaitHandleCallback(object state, bool timedOut)
    {
        var taskCompletionSource = (TaskCompletionSource<bool>)state;
        if (timedOut)
        {
            taskCompletionSource.TrySetCanceled();
        }
        else
        {
            taskCompletionSource.TrySetResult(true);
        }
    }

    public static Task WaitOneAsync(this WaitHandle waitHandle)
    {
        if (waitHandle == null) throw new ArgumentNullException("waitHandle");

        var taskCompletionSource = new TaskCompletionSource<bool>();

        // Have the thread pool wait on the handle; the last argument (true)
        // means the callback executes only once.
        var registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(waitHandle,
            WaitHandleCallback, taskCompletionSource, Timeout.Infinite, true);

        // Unregister the wait once the task completes, to release the registration.
        var result = taskCompletionSource.Task;
        result.ContinueWith(_ => registeredWaitHandle.Unregister(null), TaskContinuationOptions.ExecuteSynchronously);
        return result;
    }
}
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
I like BackgroundWorker. I’ve created a “generic worker” (class) into which I can inject callbacks (work; progress; completion) that makes kicking them off easy.
Anyway, in your case, I would use a worker to poll the “data collection device”. I would use two ConcurrentQueues: when the “collection” worker has some record(s), it pushes them onto the queues (one queue for a “display” worker and one for a “file” worker) and checks whether the “display” and “file” workers are running; if not, the “collection” worker starts them. (The “slave” workers should be started from the “progress reporting” event of the “collection” worker if they are to update the UI, assuming the “collection” worker was started from the UI thread.)
The “display” and “file” workers pop records off their respective queues and do their thing until the queues are exhausted, then exit - and get reincarnated when the “collection” worker retrieves more data.
|
|
|
|
|
I am looking for someone with a lot of experience with Windows Remote Desktop Connection to answer a basic question or two...
I have a Wireshark capture (one side only) which shows 880 MB sent from the client running the Remote Desktop Connection to the host, but I have no capture of the packet volume coming from the host to the client IP address. The capture spans a period of 32 hours. I am being asked to determine the amount of data that might have been sent from the host to the client. I know this is a real out-there question, which is why I'm looking for someone with real Remote Desktop Connection experience.
Given that the volume of traffic from the client to the host is 880 MB over 32 hours, is there any way at all of estimating how much might have been sent from the host to the client?
Please be kind, this is important and I am presenting all of the information I have.
Thank you!
|
|
|
|
|