|
den2k88 wrote: Never rely on compiler optimizations unless you can force them, understand them completely, and enable/disable them at will for every single block of code inside the code itself
Perhaps but the original poster, at least as I read it, wasn't suggesting using a class for every type. That was your suggestion.
And it isn't an optimization to allow the class to take the same number of bytes as the data type, provided that in fact
1. The data type fits the word boundary, and
2. The compiler is not adding virtual support (a vtable pointer) to every single class.
den2k88 wrote: Using classes with only one data member in the inner working of a component is overkill at best, makes debugging much more convoluted
Again - that was your interpretation, not mine, of what the OP said. Certainly the example code posted did not suggest that.
den2k88 wrote: wouldn't use this design pattern unless it is proven to be the most effective way in some context
Thus my first response with the specifics about that.
|
|
|
|
|
Ricky Rick wrote: * having full control over what type the variable really represents.
scenarios:
- an ID is a *number* at first and changes to have letters in it later
- the System (connected with a database) was not designed for having so many users and now we're running out of IDs (getting closer to max-integer)
Nope. You certainly can't use "int ID" in a million lines of code and then expect to do nothing if you change it to "string ID".
Ricky Rick wrote: * throughout all code it's rather easy to see which variables represent the same functional meaning.
Throughout all of the code the data probably will not have the same meaning. For example as a database designer I might need to store a time stamp in a database using two columns (seconds and nanos), the business logic uses a timestamp (single value) and the UI uses a formatted text value localized for the user.
Ricky Rick wrote: * range-changes can be made rather easy: just change the typedef
Normal sequential numerics in a database do not extend for the full binary range of something like an integer. Only positive values need apply.
And I do NOT want to find out I have run out of ids in the database when the entire application fails because the id is too big. So I need a range check before that. And although a medical database might allow a birthday of Jan 1 1901, it shouldn't allow one for Jan 1 1001.
Ricky Rick wrote: * avoid careless casts (it removes the temptation): it's easy to cast an "integer" to an "unsigned integer", but rather hard to cast a "second" to a "user-id", even if both are an "integer" behind the scenes.
Been a few years since I worked in C++, but I'm rather positive that I can do exactly that. Matter of fact, it is easier to do it that way than the right way. And assignment still works, so why would anyone be attempting to cast (per your example)?
Ricky Rick wrote: On the cons-side of course, we have a massive overhead (at first)
Presumably you mean managing the types rather than anything to do with performance.
The real problem here is that you are going to end up with a non-trivial number of types that exist for one instance.
Ricky Rick wrote: what do you think about that?
Code to what you know and not all possibilities.
It makes it easier for you and makes it vastly easier for someone else that needs to maintain your code after you are gone.
So, for example, if you did in fact have a type of value that was used in many places and which you knew (based on existing or future requirements) that would need to change then putting an actual type, like a class, in place would be a good idea.
Otherwise don't.
|
|
|
|
|
Hi,
jschell wrote: Nope. You certainly can't use "int ID" in a million lines of code and then expect to do nothing if you change it to "string ID".
that's true. But if you *have to* change it anyway, I'd say it's better this way, as you don't forget to change any of the places. And then you can see all the consequences at once, instead of forgetting something that somehow makes its way into production code.
jschell wrote:
Throughout all of the code the data probably will not have the same meaning.
This sentence - taken as it is - would point to a serious design problem that blows up in the long run.
But with your example it becomes clearer what you mean:
jschell wrote:
For example as a database designer I might need to store a time stamp in a database using two columns (seconds and nanos), the business logic uses a timestamp (single value) and the UI uses a formatted text value localized for the user.
In that case I'd have three different types (as you have in the code at the moment) - or one class, that supports every representation.
jschell wrote:
Normal sequential numerics in a database do not extend for the full binary range of something like an integer.
jschell wrote:
So I need a range check before that.
I guess you mean a range check in the DB, right?
I'd say that it's rather dangerous to rely on the database's ranges and functionality.
jschell wrote:
And I do NOT want to find out I have run out of ids in the database when the entire application fails because the id is too big
but what if you do?
Then you need to search the whole application for every appearance in every function (even the generic ones, which might not be named well) and change/adjust the behaviour.
(You need to do this in both versions, but - as above - the compiler will help you in one of them, as it becomes a static error, not a runtime one.)
jschell wrote: it shouldn't allow one for Jan 1 1001. So what do you do if Jesus walks in because he's reawoken? "Sorry sir, you must be born after 1970 to get medical care."
jschell wrote:
Been a few years since I worked in C++ but rather positive than I can do exactly that. Matter of fact it is easier to do it that way than the right way. And assignment still works so why would anyone be attempting to cast (per your example)?
Sorry, I didn't really get this one.
You mean nobody would try to cast a second to a user-id, but many might cast an int to a uint?
If I sum everything up right, I'd say that it's not a bad idea, but one needs to take care where to apply it. So not *every* type, but only very important or technical types.
(@technical: is the English language lacking a word for the difference between technical stuff (like hardware, programming, a "thread") and real-world stuff (like a "train" class, which is not a technical but rather a ??? type)?)
jschell wrote: You certainly can't use "int ID" in a million lines of code and then expect to do nothing if you change it to "string ID". This was, btw., the reason for my idea/question, because that happened to me - although with only some 100,000+ lines.
|
|
|
|
|
In the team I am currently working with, we have to deal with software which was initially conceived as a database-driven programme. As a result, there is a set of databases, each having more or less 300 stored procedures!
We can no longer accept writing backend stored procedures, as they do not contain any object-oriented thinking, and in the end we have procedural code like in the good old days of C. We have to deal with lots of bugs, and it is not clear whether they come from the backend (MSSQL) or the frontend (Windows App in C#).
Are there any other solutions to avoid so many SPs and still have professional code? I thought of exchanging SELECT SPs for views, but I am interested in what other ways exist.
Thank you.
|
|
|
|
|
Why only backend stored procs? I would run away from any frontend stored procs first.
No object is so beautiful that, under certain conditions, it will not look ugly. - Oscar Wilde
|
|
|
|
|
nstk wrote: having each more or less 300 stored procedures! ...it can hold lots more.
nstk wrote: as they do not contain any object oriented thinking And why is that a problem?
nstk wrote: in the end we have a procedural code like in the days of old good C. C and TSQL are different beasts.
nstk wrote: We have to deal with lots of bugs, and it is not clear whether they come from the backend (MSSQL) or the frontend (Windows App in C#) Simple: if it is raised after executing a query, it is in the database. With sprocs it is even easier to verify, as one only has to check the parameters.
A good way would be to have error-handling in the sproc, and rollback any changes if there's an error, and raising your own custom error that gets handled and logged by the C# code. There's examples on this site on how to do so.
nstk wrote: Are there any other solutions to avoid so many SPs and still have professional code? Yes, the alternative is to use inline SQL. That'll be even harder to debug.
nstk wrote: I thought of exchanging SELECT SPs through Views A view is nothing more than a SELECT statement. You can create 300 views, but they would not be very object-oriented. You also cannot pass a parameter to a view like one could with a sproc. Finally you could execute the select directly from code, without any views or sprocs - but that would be rather dirty.
SProcs are the professional way.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
I can't see what problem you're trying to solve, other than to arbitrarily re-factor your application's architecture.
Do you have a specific issue with these backend stored procedures? from what you have described here, I can't see any problem. Database servers can easily handle hundreds of stored procedures. There are also huge benefits to using stored procedures such as performance and load balancing.
If you are experiencing specific problems then please describe what they are.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
I'm trying to figure out how to use System.Threading.Tasks.Task asynchronicity in conjunction with waiting for Win32 HANDLE triggering.
Summary:
I'm interfacing with an unmanaged DLL which communicates with a data collection device. ([DllImport("xxx.dll", CallingConvention = CallingConvention.Cdecl, ...] on each unsafe method.)
One of the methods will acquire N data records asynchronously into a user supplied buffer. Specifically, it "spawns the data read on a background thread", and returns a HANDLE to wait on for completion.
There is an additional method which will provide a different HANDLE that can be waited on for notification that some of the data is ready (every M (M <= N) data records).
The UI should display the latest data at EITHER the interval or completion notifications.
Further, all of the data is captured in a BlockingCollection so the captured data can be written to files asynchronously.
For an initial feasibility effort I have 3 BackgroundWorker instances for these.
The intent was learning how the interfacing with the device works, so no design!
This is coding by accretion == throw away!
The data collection worker calls a method in a class that wraps the DLL. The wrapper method calls the above DLL methods and gets the two HANDLEs. They are associated with instances of AutoResetEvent, which are waited on with WaitHandle.WaitAny(). The wrapper method loops until completion, passing each data record to a delegate that was provided.
The delegate enqueues the data record in the BlockingCollection and, conditionally, updates a pointer to the next data to display (using Interlocked.Exchange()) and Set()s a ManualResetEvent to indicate that there is something to display.
The live display worker is just a simple loop that Reset()s the ManualResetEvent (set above), calls WaitOne() on it, and, again using Interlocked.Exchange(), gets the pointer to the data to display and displays it.
The file save worker is a foreach over BlockingCollection.GetConsumingEnumerable(); it uses ReportProgress() to have a progress bar updated and writes each data record to its file.
This will be redesigned for the production use.
This all seems way too complicated.
BackgroundWorker doesn't seem like the correct strategy.
It seems that using System.Threading.Tasks.Task ought to be a cleaner solution.
(I haven't needed to use System.Threading.Tasks.Task on any project before, so no real experience... I have read the blog entries by Stephen Cleary.)
Any suggestions on restructuring away from "quick-n-dirty" to "clean and maintainable"?
Or pointers to how to think with System.Threading.Tasks.Task ?
Thanks.
A positive attitude may not solve every problem, but it will annoy enough people to be worth the effort.
|
|
|
|
|
Not sure how much this will help, but you can convert a WaitHandle to a Task with an extension method[^]:
public static class WaitHandleExtensions
{
    private static void WaitHandleCallback(object state, bool timedOut)
    {
        var taskCompletionSource = (TaskCompletionSource<bool>)state;
        if (timedOut)
        {
            taskCompletionSource.TrySetCanceled();
        }
        else
        {
            taskCompletionSource.TrySetResult(true);
        }
    }

    public static Task WaitOneAsync(this WaitHandle waitHandle)
    {
        if (waitHandle == null) throw new ArgumentNullException("waitHandle");

        var taskCompletionSource = new TaskCompletionSource<bool>();
        var registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(waitHandle,
            WaitHandleCallback, taskCompletionSource, Timeout.Infinite, true);

        var result = taskCompletionSource.Task;
        result.ContinueWith(_ => registeredWaitHandle.Unregister(null),
            TaskContinuationOptions.ExecuteSynchronously);
        return result;
    }
}
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
I like BackgroundWorker. I’ve created a “generic worker” (class) into which I can inject callbacks (work; progress; completion) that makes kicking them off easy.
Anyway, in your case, I would use a worker to poll the "data collection device". I would use two ConcurrentQueues: when the "collection worker" has some record(s), it pushes them onto the queues (one queue for a "display worker" and one for a "file worker") and checks whether the "display" and "file" workers are running; if not, the "collection" worker starts them. (The "slave" workers should be started from the "progress reporting" event of the "collection" worker if they are to update the UI, assuming the "collection" worker was started from the UI thread.)
The "display" and "file" workers would pop records off their respective queues and do their thing until their queues were exhausted, then exit, and get reincarnated when the "collection worker" retrieved more data.
|
|
|
|
|
I am looking for someone with a lot of experience with Windows Remote Desktop Connection to answer a basic question or two...
I have a WireShark capture (only one side) which shows 880 MBytes sent from the client running the Remote Desktop Connection to the host, but I have no WireShark capture of the packet volume coming from the host to the client IP address. The capture spans a period of 32 hours. I am being asked to determine the amount of data that might have been sent from the host to the client... I know this is a real out-there question, which is why I'm looking for someone with real Remote Desktop Connection experience.
Given that the volume of traffic from the client to the host is 880 MBytes over 32 hours, is there any way at all of guessing how much might have been sent from the host to the client?
Please be kind, this is important and I am presenting all of the information I have.
Thank you!
|
|
|
|
|
Please do not cross post.
There are only 10 types of people in the world, those who understand binary and those who don't.
|
|
|
|
|
Sadly, due to the optimisations and features in place in RDC, there's no way to know how much data will be sent from the host to the client. For instance, remote desktop is pretty clever at determining the dirty area of a screen that needs updating. This means that the "bitmap" that is transmitted back from the host can vary in size.
|
|
|
|
|
I see a lot of jobs for 'architects', but this role doesn't fit in with my beliefs. A lot of modern practices, e.g. Scrum, encourage self-organizing teams rather than having a leader. I believe teams can work better without an architect.
It seems that 'architect' or 'lead' would be the next step in my career, but I don't think this is the right title for what I want to do. I think all/most developers should do some architecture design, and I think a 'god' role would be too much for one person.
What do people think? What should an architect do?
|
|
|
|
|
Member 4487083 wrote: What should an architect do?
Use design patterns (if you believe the Gang of Four)!
|
|
|
|
|
To design the project, like an architect designs a building, etc.
|
|
|
|
|
Member 4487083 wrote: What should an architect do? Whatever he or she is told; most likely design a system, which the programmers then turn into code. Much the same as a building architect.
|
|
|
|
|
Richard MacCutchan wrote: design a system, which the programmers then turn into code
What are senior developers for then? Their knowledge would be wasted if they're just doing what they are told. It's the developers that will be using the architecture so I think it makes sense for all developers to have some input.
|
|
|
|
|
Well it all depends on the company and the management. There are no hard and fast rules.
|
|
|
|
|
Member 4487083 wrote: I think a 'god' role would be too much for one person. What is considered an 'architect' may vary from company to company, but it does not seem to be a god role. And yes, as a seasoned developer I have had to build quite a few applications without the help of an architect.
Ever seen anyone work on a whiteboard, throwing patterns around like they are lego-bricks, explaining to a group of developers the implications of each approach? Half an hour further, the blocks on the whiteboard are work-items, we got to work.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Eddy Vluggen wrote: Ever seen anyone work on a whiteboard, throwing patterns around like they are lego-bricks, explaining to a group of developers the implications of each approach? Half an hour further, the blocks on the whiteboard are work-items, we got to work.
No, I haven't seen it. I've never seen an architect capable of that, although I know a few developers who would be. An architect is ultimately just another developer with a different title. In my current team, I think there are a number of good developers who would have good input on what kind of design/patterns to use. On their own, I don't think any single developer on my team would come up with the best solution. Some of the developers on my team are far better than any architect I have seen.
|
|
|
|
|
Member 4487083 wrote: I've never seen an architect capable of that Then why does he hold the title?
Member 4487083 wrote: although I know a few developers who would be. The architects at our company are developers. How could one talk about software-architecture if one has no idea how it is built? Being a damned good developer is a prerequisite.
Member 4487083 wrote: I think there are a number of good developers who would have good input on what kind of design/patterns to use If you have four and they disagree, things get interesting. Put the architects you meet in a room with that kind of devs and shout that the flame-wars have begun.
Member 4487083 wrote: Some of the developers on my team are far better than any architect I have seen That may be, but it sounds like you want to generalize your experience to every architect.
Not all dogs bite.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
As has been pointed out, Architect is a different animal depending on your perspective. An architect in a major corporate may be the guy who puts together the core design for a system and never codes (though he almost certainly did code at some time). In a smaller corporate he may be what you consider a Senior Developer.
I call myself a developer architect because I refuse to let go of the coding aspect and am not interested in just putting together solutions for someone else to code.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
If you think of a software architect in terms of a building architect then you're on the right road.
An architect designs the overall system, including interfaces to internal / external systems, the flow of data, the key systems / users etc. They employ top down design and map out the key processes.
If you think of the person who designs the overall system topology in an enterprise SOA system, then that's the architect.
An architect is NOT the lead developer, they are NOT the project manager and they are NOT the scrum-master.
|
|
|
|
|
Hi,
I would like some suggestions on architecture design.
Here are my requirements
1) I have clients spread over multiple locations
2) There are groups of clients connected to a common server
3) Each server will gather gigabytes of data every day
4) There is a main server which is connected to every other local server
5) This server will have all the data which the local servers have; the local servers will send their data to the main server
6) I will be doing analytics on the main server as well as the local servers
My main questions are
1) Since this is big data, which database should I consider, SQL or NoSQL?
2) Where can Hadoop fit here?
3) Is there a framework which helps me send data from the local to the global servers?
|
|
|
|
|