|
That's a good example. Paul knows his stuff.
|
|
|
|
|
Hi,
I have a class as under:
public class ExampleClass : BaseClass
{
    public int ExampleMethod(int _toValidate)
    {
        ExampleValidator(_toValidate);
        return 1;
    }
}
The ExampleValidator method is contained in BaseClass and is used to validate the parameter _toValidate. The ExampleValidator check is very important, and every method in ExampleClass should contain it. I want to create a unit test for ExampleClass that verifies every method does contain the ExampleValidator call. How can I do that?
Can reflection help me?
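For example, something along these lines is what I'm imagining. Plain reflection can enumerate the methods but cannot see inside their bodies, so this sketch scans the raw IL for a call to the validator. It is illustrative only: a naive byte scan like this can misread multi-byte operands, so a real test should use a proper IL reader.

using System;
using System.Reflection;

// Rough sketch: for each public method declared on ExampleClass, scan its
// raw IL for a call/callvirt whose metadata token resolves to
// BaseClass.ExampleValidator.
static void CheckValidatorCalls()
{
    foreach (MethodInfo method in typeof(ExampleClass).GetMethods(
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly))
    {
        byte[] il = method.GetMethodBody().GetILAsByteArray();
        bool found = false;
        for (int i = 0; i < il.Length - 4; i++)
        {
            if (il[i] != 0x28 && il[i] != 0x6F) continue; // call / callvirt
            int token = BitConverter.ToInt32(il, i + 1);
            try
            {
                MethodBase target = method.Module.ResolveMethod(token);
                if (target.Name == "ExampleValidator" &&
                    target.DeclaringType == typeof(BaseClass))
                {
                    found = true;
                    break;
                }
            }
            catch (ArgumentException) { } // token landed mid-operand; skip
        }
        Console.WriteLine("{0}: {1}", method.Name,
            found ? "calls ExampleValidator" : "MISSING ExampleValidator");
    }
}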
Regards,
ap.
|
|
|
|
|
I'd like to learn more about the optimum color depth for .NET Windows Forms over Citrix connections. I'd prefer 256 colors (8-bit) for performance, but some would like to use skins with fancy colors if need be. Any ideas, or pointers to where I can learn more?
the confused are confused beyond confusion
|
|
|
|
|
Has anyone out there actually used inversion of control containers *within* compiler implementations yet? I know that by design, compilers need to be very fast, but I've always been curious about how IoC/DI could affect the construction of a programming language--hot-swappable syntaxes, anyone?
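Purely as a sketch of what I mean (all types hypothetical): a front end assembled by constructor injection, so a container could wire in a different lexer/parser pair without the pipeline changing.

using System.Collections.Generic;

// Speculative sketch -- every type here is hypothetical. Because the
// pipeline depends only on interfaces, an IoC container could swap in
// any ILexer/IParser pair: the "hot-swappable syntax" idea.
public interface ILexer { IEnumerable<string> Tokenize(string source); }
public interface IParser { object Parse(IEnumerable<string> tokens); }

public class CompilerFrontEnd
{
    private readonly ILexer lexer;
    private readonly IParser parser;

    public CompilerFrontEnd(ILexer lexer, IParser parser)
    {
        this.lexer = lexer;
        this.parser = parser;
    }

    public object Compile(string source)
    {
        return parser.Parse(lexer.Tokenize(source));
    }
}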
|
|
|
|
|
I've never come across any, but it does sound an interesting concept.
|
|
|
|
|
Pete O'Hanlon wrote: it does sound an interesting concept
Yes, it does sound very interesting.
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
"Not only do you continue to babble nonsense, you can't even correctly remember the nonsense you babbled just minutes ago." - Rob Graham
|
|
|
|
|
Hi all,
In many situations I've encountered applications that are built using multiple tiers, but user identification is limited to only a few of those tiers (the opposite of how I've usually done it). For example, in a 3-tier approach the UI knows the user and calls to the business logic are authenticated, for example using Windows authentication, but commonly the database has no knowledge of the actual user executing the original request.
What I find disturbing in this scenario is that data protection in the database should (in my opinion) be as critical as in any other layer, or even more so. Also, knowing the actual user in the database opens up many possibilities that cannot easily be achieved in the middle tier.
The question is: if you could share your opinion, which approach is better, and especially why? Should the actual user be propagated all the way down to the database, or only between the client and the middle tier?
Just to clarify, the question is not how to implement either scenario (that's already figured out), but what the pros and cons are in each case.
Cheers,
Mika
|
|
|
|
|
Having the actual user at the database level may be nice, but it also presents a management complication: the database must now be aware of every user that could access the application. Usually, this level of restriction isn't practical.
Scott Dorman Microsoft® MVP - Visual C# | MCPD
President - Tampa Bay IASA
[ Blog][ Articles][ Forum Guidelines] Hey, hey, hey. Don't be mean. We don't have to be mean because, remember, no matter where you go, there you are. - Buckaroo Banzai
|
|
|
|
|
Good point, thanks!
What if account management is centralized (with little effort)? For example, in a Windows environment with SQL Server this could be done using AD (this applies to several other databases as well), or even using logic in the middle tier (in heterogeneous environments). Would you still consider this an overhead?
The need to optimize rises from a bad design
|
|
|
|
|
If you could centralize it to specific AD groups and assign those groups privileges in SQL Server, it would probably work without too much overhead. This, of course, restricts you to running only in environments that have AD implemented.
Scott Dorman Microsoft® MVP - Visual C# | MCPD
President - Tampa Bay IASA
[ Blog][ Articles][ Forum Guidelines] Hey, hey, hey. Don't be mean. We don't have to be mean because, remember, no matter where you go, there you are. - Buckaroo Banzai
|
|
|
|
|
That's true and not always acceptable.
Thanks for your answers!
Mika
The need to optimize rises from a bad design
|
|
|
|
|
In the traditional 3-tier setup, the only thing that "should" be performing data operations is your business logic. Permissions may not necessarily make sense at the data level.
As an example, a "loan manager" may have permission to "approve loans". The loan approval process may update a loan record, an audit table, salesman performance/manager performance/sales funnel tables, etc.
Granted, there will always be "update contact details".
I generally find that granting permissions based on roles/interactions causes less friction in implementation, especially when dealing with workflow and interception. Quite a few times I've seen teams struggling with a "well, it goes into the approval state, so you set the user to read-only, set the group to "managers", and make it read-write...".
|
|
|
|
|
Thanks for the answer!
I just realized that my question is missing some relevant information.
In no situation will I grant permissions directly to users. Sometimes I've used an implementation where the connection from the middle tier to the database doesn't use user info at all (as is traditionally done), but sometimes I use user information at all levels. In those cases the typical building blocks may include:
- user identification at client (AD based or not)
- secure identification and authorization to middle tier based on client user
- connection to database for the business logic, but identified by user info (in other words BL connects to the database on behalf of the user)
- database roles granted to the user
- database roles granted to the application (BL) etc...
Object privileges in the database are always granted to roles.
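To illustrate the third element above, here is a hedged sketch of how the BL might connect on behalf of the user in a Windows/SQL Server setup (the server and database names are placeholders, and obtaining the caller's identity is assumed to happen elsewhere):

using System.Data.SqlClient;
using System.Security.Principal;

// Sketch only: impersonate the caller's Windows identity so the
// Integrated Security connection is authenticated as the real user,
// letting database roles granted to that user take effect.
public static void QueryAsCaller(WindowsIdentity caller)
{
    using (WindowsImpersonationContext ctx = caller.Impersonate())
    using (SqlConnection conn = new SqlConnection(
        "Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI"))
    {
        conn.Open(); // opened under the impersonated identity
        // ... execute commands on behalf of the user
    }
}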
After that description, do you feel that the overall idea is getting better or worse?
Mika
The need to optimize rises from a bad design
|
|
|
|
|
As soon as you start connecting to the database as a different user (assuming basic .NET/SQL Server/ADO.NET), you will break connection pooling, which can be a performance hit.
In a moderately complex application, if you look at CRUD operations at the table level, you might find that just about every user has full access to most of the critical tables anyway :/ So you don't really gain much at that level.
Some people would say to do everything via stored procedures and grant permissions on those...
I'd tend to put the security at the business logic level. You can even use the declarative security attributes on methods called by the presentation layer if you like. :P
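For example, with the declarative PrincipalPermission attribute from System.Security.Permissions (the role name here is hypothetical):

using System.Security.Permissions;

public class LoanService
{
    // The runtime demands role membership before the method body runs.
    // Startup code must first set the principal policy, e.g.:
    //   AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);
    [PrincipalPermission(SecurityAction.Demand, Role = @"MYDOMAIN\LoanManagers")]
    public void ApproveLoan(int loanId)
    {
        // ... business logic reachable only by members of the role
    }
}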
|
|
|
|
|
Thanks, I got your point.
These were excellent things to consider, especially since in some cases it's beneficial (or even critical) to identify the user in the db, and the side effects can be managed.
Side note: actually I don't break connection pooling, and it's still fundamental in the application. Only the 'level' of pooling changes, but the pool itself must remain (or else there will be lots of angry users).
Thanks again,
Mika
The need to optimize rises from a bad design
|
|
|
|
|
This is a tough one... propagating user definitions from the UI down to the database level often makes database administration a nightmare. We use application-defined "users" at the database level to reduce this issue, and copy UI user names into database records to maintain an audit trail (and we don't let most users anywhere NEAR a direct connection to the database server and its schemas).
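A minimal sketch of that audit-trail approach (table and column names are hypothetical): the connection runs as an application account, but the UI user's name travels with the data.

using System.Data.SqlClient;

// The DB login is the application's; the real user is recorded per row.
// The connection is assumed to be open already.
public static void UpdateContact(SqlConnection conn, int contactId,
                                 string phone, string uiUserName)
{
    using (SqlCommand cmd = new SqlCommand(
        "UPDATE Contacts SET Phone = @phone, ModifiedBy = @user, " +
        "ModifiedOn = GETDATE() WHERE ContactId = @id", conn))
    {
        cmd.Parameters.AddWithValue("@phone", phone);
        cmd.Parameters.AddWithValue("@user", uiUserName); // UI user, not DB login
        cmd.Parameters.AddWithValue("@id", contactId);
        cmd.ExecuteNonQuery();
    }
}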
|
|
|
|
|
Actually, the db administration doesn't seem to be problematic, since the whole process of creating a user is part of the application logic. But that's still a good point.
Thanks,
Mika
The need to optimize rises from a bad design
|
|
|
|
|
I am exploring an idea I would like to research further and am wondering if anyone else has any thoughts.
For high-performance web applications, the database and data access code are often heavily optimised, using near and far caches, distributed caches, optimised queries, optimised data access code and so on. The database is usually separate from the other TP systems the organisation runs its business on; if not, it could (and probably should) be.
It occurred to me that the reason we use far distributed caches is mainly to avoid
a) Relational/object conversion overhead
b) Disk access overhead
I had been doing some reading about new generations of universal memory (NRAM, MRAM, FeRAM), some of which are commercially available and will eventually replace SRAM, DRAM, SDRAM, Flash, etc. as a universal type of non-volatile, high-speed memory. I wondered how this would affect the architecture of high-performance systems. It then occurred to me that in the meantime we could use the now-cheaper solid-state drives (SSDs) that combine standard DRAM with an internal UPS and backup devices.
I thought: in this case, the need for the far cache would be reduced or removed completely. In fact, if the distributed cache could handle transactions and concurrency, and if the web app was geared less to set-based operations and more to CRUD operations, then this 'advanced cache', on non-volatile high-speed memory, would also serve as the primary data store.
In short, an object database mature enough to deal with transactions, clustering and a few other things, deployed on machines with DRAM-based SSDs, could remove the need for a far cache, an RDBMS, data access logic, etc. completely, and boost performance.
There do exist object DBs that permit SQL-based relational queries, though I would imagine that many of those queries are for transmitting data to other systems and could therefore be done through the web app's own API.
So, in summary, what I'd like to explore is this architecture for high-availability, large-scale systems:
a) Web Farm running web apps with near caches
b) Distributed Object-Relational database on DRAM based SSD as backend
c) Separate API for datapumps to RDBMS for queries, integration etc
What do you think? What problems do you foresee? Would you imagine large performance gains?
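For reference, the near cache in (a) is essentially cache-aside; a minimal sketch follows, where the backing-store interface is hypothetical and stands in for the far cache or object database:

using System.Collections.Generic;

// Cache-aside near cache. Eviction, expiry, and the transactional
// concerns raised above are deliberately omitted.
public interface IBackingStore<TKey, TValue>
{
    TValue Load(TKey key);
}

public class NearCache<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> local =
        new Dictionary<TKey, TValue>();
    private readonly IBackingStore<TKey, TValue> store;

    public NearCache(IBackingStore<TKey, TValue> store)
    {
        this.store = store;
    }

    public TValue Get(TKey key)
    {
        TValue value;
        if (!local.TryGetValue(key, out value)) // near-cache miss?
        {
            value = store.Load(key);            // fall through to far store
            local[key] = value;
        }
        return value;
    }
}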
|
|
|
|
|
Which of the following is a better design? Why or Why not?
Private Sub CheckCheckBoxes()
    CheckBox1.Checked = True
    CheckBox2.Checked = True
    CheckBox3.Checked = True
End Sub

Private Sub UncheckCheckBoxes()
    CheckBox1.Checked = False
    CheckBox2.Checked = False
    CheckBox3.Checked = False
End Sub

Or

Private Sub CheckUncheckCheckBoxes(ByVal checkValue As Boolean)
    CheckBox1.Checked = checkValue
    CheckBox2.Checked = checkValue
    CheckBox3.Checked = checkValue
End Sub

Or

Private Sub CheckUncheckCheckBoxes()
    Dim checkValue As Boolean = GetCheckValue() ' Contains the logic for whether to check or not
    CheckBox1.Checked = checkValue
    CheckBox2.Checked = checkValue
    CheckBox3.Checked = checkValue
End Sub
|
|
|
|
|
They are all bad designs. What happens if you add a new checkbox? Have a read around on patterns, and see why this set of implementations is bad. BTW - those are really bad names for your checkboxes.
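One language-neutral improvement (sketched here in C#; the idea is identical in VB) is to treat the checkboxes as a collection, so adding a fourth checkbox needs no code change:

using System.Linq;
using System.Windows.Forms;

public partial class ExampleForm : Form
{
    // Sets every checkbox on the form in one pass; a checkbox added
    // later in the designer is picked up automatically. (Does not
    // recurse into child containers -- sketch only.)
    private void SetAllCheckBoxes(bool isChecked)
    {
        foreach (CheckBox box in Controls.OfType<CheckBox>())
        {
            box.Checked = isChecked;
        }
    }
}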
|
|
|
|
|
Not a good design at all. You may want to stick to real names for the checkboxes, as Pete has said. That would make it easier to read and to know what each checkbox belongs to.
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
"Not only do you continue to babble nonsense, you can't even correctly remember the nonsense you babbled just minutes ago." - Rob Graham
|
|
|
|
|
My question is not about naming controls or variables; that is why I did not give them very meaningful names. Of course, I would give them meaningful names in real projects.
If none of them is a good design, then can you recommend a good one? I do know patterns and I use them all the time; however, I am not sure what patterns have to do with writing good subroutines. Can you please clarify or recommend some reading?
|
|
|
|
|
I posted this [^] on another forum but got no answer. I'm referencing it here in case you want some background on why I need to do this.
In short, I want to detect analog beacon radar pulses that have already been converted by an A/D board to values between 1 and 256 (the Y component, or amplitude levels). The data is sampled at a 2 ns rate, or 500 MHz, which comprises the X component. Typically, from what I've been told, pulse leading-edge detection and time-stamping of the LE are done in hardware. I'm not planning on posting all of the requirements, but suffice it to say that I will meet them all if I can design the foundation that this topic addresses.
I'm approaching this in two ways based mostly on my empirical observations of the raw data.
Using a plot program I can plot the continuous waveform and see where hardware or software would have problems separating the pulses. Pulses must meet a minimum width and various pulse-to-pulse and other timing tolerances. Of course, individual pulses are really part of a pulse train for a particular type of message, and can be correlated by amplitude and bit position once the pulses have been extracted as pulse records with attributes for the LE, TE, pulse width, plateau (amplitude), overlap, etc.
1) Pulses of different signal levels or amplitudes can be intermixed, interleaved, or overlapped, sometimes producing wide pulses where two or more pulses of different amplitudes are joined and the trailing edge (TE) of the first pulse may not be detectable, though it is recoverable by extrapolating any downward slope. Pulses that do not meet minimum width constraints, even occasional ones of significant amplitude, are most likely noise and can be discarded or not stored. Noise can also interfere with pulse width, sometimes making a TE initially indiscernible or undetectable unless a small TE can be detected first.
2) I am basing my code design on how I, a human, would interpret the individual pulses from empirical observation. Consequently, both I and the program process the data from left to right, i.e. in the direction of increasing X, where X equals time.
Before I get too far: my initial question concerned using a state machine as part of the design to perform the pulse processing. Since then, I have already started coding some of this in the manner described in #2 above. When separating individual pulses from combined pulses, and when separating pulses whose TEs do not return to zero or the noise floor (changes in the direction of slope, downward or upward), I am running into some slight problems.
I've decided that a sliding-window history (implemented as a circular queue) of my last 5 slope directions could be used to construct a scoring algorithm to determine how confident I am about resolving the waveform into individual pulses. For example, if I am following a pulse's TE down to about the 3 dB point below its plateau and I get, say, 3 consecutive hits in a change of direction from a downward slope to a rising slope, then depending on the pulse's width up to that point I could be looking at a noise spike or a second pulse. Suffice it to say, I now just assume that if I have reached the -3 dB point (~70% of the plateau level) and the pulse meets the minimum width tolerance, I can declare it a pulse and save its characteristics.
I want to assign a higher score based on a high degree of confidence that the change in slope direction is not momentary but sustained. I could also use a second scoring algorithm to assign higher confidence to pulses that return closer to the noise floor, perhaps all the way to the floor or to the -6 dB point.
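A minimal sketch of that sliding window and score (the capacity and thresholds are illustrative only):

using System.Collections.Generic;

// Five-entry history of slope directions. The score is the length of the
// run of consecutive rising samples ending at the newest entry, so a
// sustained reversal scores higher than a one-sample blip.
public enum Slope { Falling, Flat, Rising }

public class SlopeHistory
{
    private const int Capacity = 5;
    private readonly Queue<Slope> window = new Queue<Slope>();

    public void Add(Slope s)
    {
        window.Enqueue(s);
        if (window.Count > Capacity)
        {
            window.Dequeue(); // drop the oldest direction
        }
    }

    public int RisingScore()
    {
        int score = 0;
        foreach (Slope s in window) // oldest to newest
        {
            score = (s == Slope.Rising) ? score + 1 : 0;
        }
        return score; // e.g. treat >= 3 as a genuine direction change
    }
}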
Now, getting back to program design: IMHO there is no easy way to code this. Several possibilities exist once the LE and plateau have been acquired: a normal TE could follow, resulting in a clean pulse; a noise spike could occur temporarily as part of the TE, widening the pulse slightly; a noise spike could be of significant enough amplitude or width to be interpreted as a secondary pulse adjacent to or following the pulse being processed; or an actual second pulse could occur, intermixed with noise, such that the first pulse's TE never returns to the noise floor, or the noise floor is high compared to the pulse train's amplitude. Consequently, the LE of the second pulse may only be observable from, say, the -3 dB point below the second pulse's plateau.
After thinking about these possibilities, and given that neither you nor the algorithm processing the continuous waveform really knows which condition you have until after it occurs, this does not allow for calling functions in a sequential or logical manner; I have found that most or all possibilities must be handled in a single function that looks for all of them. Needless to say, I have made some progress and would appreciate any suggestions on design or coding.
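Given that, one hedged way to keep the "don't know until after it occurs" cases in one place is a per-sample state machine rather than sequential function calls. A skeleton follows (states and transitions illustrative only; Slope is the type from the sketch above):

// Skeleton only: each incoming sample drives one transition, so the
// ambiguous cases (spike vs. second pulse) live in a single switch
// instead of being spread across sequential calls.
public enum PulseState
{
    NoiseFloor,     // waiting for a leading edge
    LeadingEdge,    // rising toward a plateau
    Plateau,        // tracking amplitude
    TrailingEdge,   // falling; may turn out to be a spike or a new LE
    SuspectedSecond // slope reversed before the floor was reached
}

public class PulseTracker
{
    private PulseState state = PulseState.NoiseFloor;

    public void Process(int amplitude, Slope direction)
    {
        switch (state)
        {
            case PulseState.TrailingEdge:
                // if the rising score (see the sketch above) passes its
                // threshold, branch to SuspectedSecond; otherwise keep falling
                break;
            // ... remaining transitions elided
        }
    }
}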
|
|
|
|
|
I have a form which is used for data entry; the data is saved by pressing a button. I load numerous controls in the form's Load event. When the user saves, I just hide the form so I do not have to load all the data again the next time I show it. This also means I have to clear all the controls before the form is shown again.
I am using control events to change properties of my classes. For example, in a TextBox's TextChanged event I might have something like employee.Name = TextBox1.Text, where employee is an object. However, when I hide the form I have to clear the text box, which raises the TextChanged event. I can have a boolean variable (something like cleaningUp) set to true while the form is being hidden; if it is true, the TextChanged handler exits immediately. Alternatively, I can remove the handler from the text box before I clear it and then rewire it. This is a lot of work with a form which has many controls.
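A minimal sketch of the guard-flag version I mean (control and class names as in my example; employee and TextBox1 are assumed to come from the designer and surrounding code):

using System;
using System.Windows.Forms;

public partial class EntryForm : Form
{
    private bool cleaningUp; // true while the form resets itself

    private void TextBox1_TextChanged(object sender, EventArgs e)
    {
        if (cleaningUp) return;        // ignore programmatic clearing
        employee.Name = TextBox1.Text; // normal user edit
    }

    private void ResetForm()
    {
        cleaningUp = true;
        try
        {
            TextBox1.Clear();
            // ... clear the remaining controls
        }
        finally
        {
            cleaningUp = false; // re-enable updates even if clearing throws
        }
    }
}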
What am I doing wrong? Is there a better way of accomplishing my task?
|
|
|
|
|
CodingYoshi wrote: This is a lot of work with a form which has many controls.
Yes it is. Littering windows with individual controls for each individual data point is what I call the Bug Splat User Interface technique. It's not user friendly and it's not coding friendly. Prefer to use controls like PropertyGrid for data entry. They are both user friendly and coding friendly.
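For example (names hypothetical), a single binding replaces all the per-control wiring, and clearing is one assignment:

propertyGrid1.SelectedObject = employee; // grid builds editors for each public property
propertyGrid1.SelectedObject = null;     // "clear the form" in one line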
led mike
|
|
|
|
|