|
|
Please don't cross post.
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
At my place of employment we use access cards as the system for entering the building. We have a utility that shows who is in the building and who is out. This utility gets the in and out lists of employees from a SQL database.
My supervisor wants a new feature added, and I have been given the task of doing the design work for it. I have been mulling it over for a few hours, considering my options. I was wondering what the CPians think of this idea, or if you guys have a better one.
The new feature would allow you to select an employee from the "Out of the building" list and set a notification on that person. Then, when they enter the building, you will receive an e-mail saying "Joe Shmoe entered the building at 5:00 PM." We want this added to our enterprise namespace so it can be a reusable object in any software we design.
So, my idea is as follows:
I would create a service running on one of the servers in the building. A user will look at the In&Out utility, select an employee, and say "Hey, I want to be notified when they come in." The service will accept the notification request and store it in a database. The service will constantly monitor the In&Out database, and when the condition for one of the notification requests has been met, it will send an e-mail to the requestor. Then the service will remove the notification request from the list.
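A minimal sketch of the matching pass such a service could run on each poll. The data stores here are plain dicts and lists for illustration only; a real version would read the In&Out table and the pending-request table from SQL and send mail via SMTP:

```python
# Sketch of the notification-matching pass the service would run on each poll.
# pending_requests and in_building are hypothetical in-memory stand-ins for
# the database tables; send_email stands in for the real SMTP call.

def process_notifications(pending_requests, in_building, send_email):
    """Fire and remove any requests whose target employee is now inside."""
    still_pending = []
    for req in pending_requests:
        employee = req["employee"]
        if employee in in_building:
            time_in = in_building[employee]
            send_email(req["requestor"],
                       f"{employee} entered the building at {time_in}.")
        else:
            still_pending.append(req)   # condition not met yet; keep it
    return still_pending

sent = []
pending = [{"employee": "Joe Shmoe", "requestor": "boss@example.com"},
           {"employee": "Jane Doe",  "requestor": "me@example.com"}]
inside = {"Joe Shmoe": "5:00 PM"}       # current "In" list from the database

pending = process_notifications(pending, inside,
                                lambda to, msg: sent.append((to, msg)))
```

Matched requests are dropped from the pending list after the e-mail is sent, which mirrors the "remove the request once fired" step above.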
I have been looking at This Article[^] and I believe a lot of it is applicable here. What do you guys think?
I get all the news I need from the weather report - Paul Simon (from "The Only Living Boy in New York")
|
|
|
|
|
Well, another way to accomplish this would be to use a notification system like, say, Notification Services in SQL Server 2005. Take a look at it; it's really very, very good.
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
Leaving the implementation to one side, using the card system to establish whether an employee is in or out of the building may not be 100% accurate: an employee holds the door open for another, some override of the lock is performed, an employee is using a temporary card because they misplaced their assigned one, etc. It might be worth making your supervisor aware of this.
As for the implementation, if you're using SQL Server 2005, I believe you can use a SqlDependency to check for data changes; it might be worth investigating.
|
|
|
|
|
Leaving aside implementation-specific details, and assuming you have access to the DB servers keeping this information:
Whenever you get a request to monitor someone, you can create a trigger in the database. The trigger should fire when the employee uses his access card to enter the building, and the trigger's code can do the necessary processing.
In an event-based programming model, you should have events for whenever someone enters or leaves the building (OnEnter, OnExit).
When you get a request to monitor someone, you can store it in the DB, and when the OnEnter event fires, you can check against the DB to see whether this person is on the notification list.
Of course, you have to put in appropriate code to remove the trigger once the condition has been met.
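The event-based variant could be sketched like this. The handler name, the shape of the watch list, and the `send_email` callback are all made up for illustration; the real versions would live against your DB and mail infrastructure:

```python
# Sketch of the event-driven variant: an OnEnter handler that checks the
# notification list and removes the entry once the condition has been met.
# notify_list maps employee name -> list of requestor addresses (hypothetical).

def on_enter(employee, time_in, notify_list, send_email):
    """Called whenever a card swipe records an entry event."""
    # pop() both fetches and removes the watch entries in one step,
    # which is the "remove the trigger once fired" behavior described above.
    for requestor in notify_list.pop(employee, []):
        send_email(requestor, f"{employee} entered the building at {time_in}.")

sent = []
watches = {"Joe Shmoe": ["boss@example.com"]}
on_enter("Joe Shmoe", "5:00 PM", watches, lambda to, msg: sent.append((to, msg)))
```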
Hope this was helpful
- Vivek
|
|
|
|
|
Have you seen Mac OS forms and dialogs? Most of them don't have any Save button. I was wondering when exactly (and in response to which event) they store data to the hard drive.
I wanted to implement similar functionality in an MFC dialog. I have a grid, and clicking on each row of the grid loads the respective data into the controls below it. After the user changes data in the controls (textboxes, combos, etc.), the data should be saved automatically.
Which messages would you process to achieve this goal? (I thought of KillFocus, but there are some issues.)
-- modified at 12:07 Friday 21st September, 2007
// "Life is very short and is very fragile also." Yanni while (I'm_alive) { cout<<"I love programming."; }
|
|
|
|
|
Hamed Mosavi wrote: but there are some issues
Like what?
|
|
|
|
|
Saving data on each KillFocus causes a lot of hard disk activity. Putting the save on the KillFocus of the last control is not practical because the UI might change later, or the control might be optional and the user might never set focus to it, so there would be no KillFocus at all.
Anyway, after thinking about it a little more today, I decided to put in that Save button. As another post here suggested, such a feature is not such a nice idea anyway.
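One common way to soften the disk-engagement problem is a dirty flag: save only when something actually changed, triggered by focus changes and by closing the dialog. This is a language-agnostic sketch (class and method names invented for illustration; in MFC the hooks would be your change notifications, `WM_KILLFOCUS` handling, and `OnOK`/`OnClose`):

```python
# Sketch of a dirty-flag autosave: most kill-focus events cost nothing,
# and closing the dialog is the guaranteed last chance to save.

class AutoSaver:
    def __init__(self, save_fn):
        self.save_fn = save_fn
        self.dirty = False

    def on_change(self):          # user edited a control
        self.dirty = True

    def on_focus_change(self):    # e.g. kill-focus on any control
        self.flush()

    def on_close(self):           # dialog closing: final flush
        self.flush()

    def flush(self):
        if self.dirty:            # skip the disk hit if nothing changed
            self.save_fn()
            self.dirty = False

saves = []
a = AutoSaver(lambda: saves.append("saved"))
a.on_focus_change()   # no edits yet: nothing written
a.on_change()
a.on_focus_change()   # one real save
a.on_focus_change()   # clean again: nothing written
```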
Thanks.
// "Life is very short and is very fragile also." Yanni while (I'm_alive) { cout<<"I love programming."; }
|
|
|
|
|
Hamed Mosavi wrote: Have you seen Mac OS forms and dialogs? Most of them don't have any Save button.
They do. It's in the menu bar. Applications on Mac OS X usually do not have toolbars like they do on Windows.
When people want to save, they press Cmd-S (the "Apple key" plus S) or go to the menu and select the Save item.
Also note that almost all applications on Mac OS X (there are some exceptions) have a menu bar, and it's good practice to have the basic menus available (File, Edit, ...).
Now, on Windows, if you have a dialog-based application, you will need a UI mechanism to let the user save the data: either a Save button, a menu bar, or a toolbar.
IMO, it's not a good idea to have an autosave that replaces the Save (and Save As) commands.
|
|
|
|
|
Maximilien wrote: They do. It's in the menu bar.
Sorry; once, out of curiosity, I downloaded the Tiger version of Mac OS onto a VMware HDD. I think it was illegal, so I deleted it. I didn't have enough time to investigate all parts of the OS; I couldn't find that menu item and don't remember where it was exactly. But you seem to have good experience with it, so you're probably right. I'm sorry for such a stupid mistake.
Maximilien wrote: IMO, it's not a good idea to have an autosave that replaces the Save (and Save As) commands.
You're right, and I already added it earlier today, shortly after I posted the comment.
Anyway, thank you so much for your help.
// "Life is very short and is very fragile also." Yanni while (I'm_alive) { cout<<"I love programming."; }
|
|
|
|
|
I'm creating a new rule for the IT department: every newly deployed system will have an associated deployment diagram.
On my first attempt, using Visio 2007, I didn't find a diagram model that fit well.
Do you have any tips on tools that could be used? I prefer Microsoft tools, but others can also be considered.
Thanks for any tips!
|
|
|
|
|
You could use Rational Rose[^].
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
Try downloading some free Visio templates -- make sure they are UML v2.0.
Rational Rose sucks, and it's way too expensive for most teams to justify the cost.
If you use Eclipse there is the UMLet plugin, and you can run it standalone as well... you have to get used to its odd user interface, but if you have the true personality of a software engineer you can figure it out... it also outputs your model diagrams in many file formats... it's free too.
I personally like Artisan RT Studio, but again it's tough to justify the cost of these tools.
Visio is virtually free if you already have the Windows Office stuff...
Check out the OMG SysML stuff too... it's interesting, and offers other diagrams as well.
kind regards,
David
|
|
|
|
|
Hi all,
Our project has two teams: Application Production Support and the Development team. It's a reasonably big project (around 50 people). Can anyone suggest a release model and/or how configuration management should work for the project? If you could explain how it works in any of your projects, maybe we can take a hint from there.
Thanks in advance.
|
|
|
|
|
Virat Soni wrote: If you could explain how it works in any of your projects, maybe we can take a hint from there.
Just a crazy idea but maybe you could use Google to find things like this[^]
|
|
|
|
|
led mike wrote: Just a crazy idea but maybe you could use Google
Oh you crazy so and so. It's zany ideas like this that make people chuckle. You're mad you are.
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
Use Subversion for your CM tool; you can use it from the command line, via TortoiseSVN (the Windows GUI version), or via Subclipse (the Eclipse plugin).
Use Trac to track progress, tasks, bug fixes, user docs, etc.: http://trac.edgewall.org
Set up Subversion and Trac on an Apache web server and you should have no problem.
Make sure you identify two people to handle all the CM work and appoint them as your official "build-meisters"; set up nightly cron jobs for builds, etc.
Have two branches, a delivery branch and an integration branch, and make sure the developers create their own branches when they work on code.
Set periodic dates for builds, do code reviews, and make sure your developers unit test their stuff before checking their code into the integration branch. Test everyone's stuff together using the integration branch... make corrections, retest, and repeat until you have something good... then merge it into the delivery branch.
Reset the integration branch from that point on the delivery branch... and repeat...
Make sure the developers continually grab the latest from the integration branch if their stuff doesn't make the deadline or the cut... they need to always play with the current stuff.
kind regards,
David
|
|
|
|
|
Hi all,
I am working on a pharmacy-related project, in which I am now working on the Low Level Design document. I want to know clearly what comes under the section below in the LLD:
Design Alternatives:
A brief description of the design alternatives considered for this module should be stated, along with the reasons for selecting a particular design from the alternatives.
hari.k
|
|
|
|
|
This is just asking what you considered in your design, and why you chose the particular design you did. It's normally in there to show that you did consider alternatives and that the design isn't just something you threw together.
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
I need some ideas or pointers on storing several hundred gigabytes of data. Currently, the transferred files vary in size from 1 KB up to 600 MB.
What I would like to do is break the files down into small (8 KB?) blocks and index them with a hash, then create a chain in a database so the files can be reconstructed. I like this because it will allow duplicated blocks of data to be identified and will reduce the size of the storage. Bad idea? How might a directory structure look to accomplish this so it isn't impossible to enumerate the files?
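The block-and-chain idea can be sketched compactly. Here a dict stands in for whatever block store you'd use (filesystem directories keyed by hash prefix, or a table); the point is that identical blocks hash to the same key and are stored once:

```python
# Sketch of content-addressed block storage with deduplication.
# block_store is an in-memory dict standing in for the real store
# (e.g. files named by hash, fanned out into subdirectories by hash prefix).

import hashlib

BLOCK_SIZE = 8 * 1024  # the 8 KB block size suggested above

def store_file(data, block_store):
    """Split data into blocks, store each under its SHA-256 digest,
    and return the chain of digests needed to rebuild the file."""
    chain = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)  # duplicate blocks stored once
        chain.append(digest)
    return chain

def read_file(chain, block_store):
    """Reconstruct a file by walking its chain of block digests."""
    return b"".join(block_store[d] for d in chain)

store = {}
payload = b"A" * BLOCK_SIZE * 3 + b"tail"   # three identical blocks + remainder
chain = store_file(payload, store)          # 4 logical blocks, 2 unique
```

For the directory-structure question, a common answer is to fan out by hash prefix (e.g. `ab/cd/abcd1234...`) so no single directory holds millions of entries.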
I thought about storing the file blocks in a database as BLOBs, but I think that would put too much strain on the database's resources. SQL Server 2008 will have some nice features to accomplish this, but it will be a year or two before we get there.
Any ideas? Thanks in advance.
|
|
|
|
|
What's wrong with the filesystem for storing your data, with a separate index if necessary? We probably need more information on what type of data you have and how you intend to index it.
When you say "I have a few hundred gigs of files and I need to store them somehow," the first thing that comes to mind is NTFS.
|
|
|
|
|
Thanks for the response, Mark.
The system I am working on is a pub/sub system that distributes files to multiple subscribers. It is an in-house system that transfers manufacturing data: applications, documents, collected data, test results, etc. I currently have it set up so the data is uploaded to a file server in the sky and the subscribers then download it; nothing too complicated.
It is currently architected so that the subscribers connect through a load-balanced web farm to download small chunks of 64 KB until the transfer is complete. Each chunk is a new HTTP request, which is causing heavy I/O between the web servers and the file server, as each request opens the file, reads up to the requested chunk, and then returns the data. It would be ideal to just stream the file over the same connection until the transfer is complete and resume after network hiccups. The problem is that some of the third-party sites use older proxy servers which won't allow that. In addition, there is a firewall (beyond my control) which limits how long a connection is allowed to stay open.
My new plan is to store the files in smaller chunks so downloading is more efficient: the server wouldn't have to read through a large file up to the position of the chunk being downloaded. I could also leverage this to reduce duplication of the data stored on the server. Unfortunately, with this design I would then have to open a database connection to identify where the blocks of data are stored and how to piece them together, and I would probably end up in a worse scenario with the I/O to the database and the work of calculating what to return. This is where I thought storing the data in the database itself might be better: I would already have a connection and could query for and return the exact requested chunk. I am torn because of the cost of storing that much data in a database.
I am just curious whether there are any other ideas. I probably should not worry about how the data is stored, and should work on reducing the number of HTTP requests.
Thanks
|
|
|
|
|
rcardare wrote: I probably should not worry about how the data is stored, and should work on reducing the number of HTTP requests.
Yes, it sounds to me like your current solution is not using chunked encoding, or is not using it correctly.
|
|
|
|
|
Hi,
It seems like the issue of some legacy clients not handling large downloads could be solved by having the clients request the data in chunks. That doesn't mean the data needs to be stored in chunks, though.
I'd solve the issues with how the clients retrieve the files first. Then you can see how appropriate your storage mechanism is. (I'd suggest NTFS, with additional indexing in SQL Server, would work fine.)
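One standard way to request data in chunks while keeping files whole on disk is HTTP Range requests (the `Range: bytes=start-end` header), which the server and most proxies handle with a cheap seek rather than a read-through. A small sketch of the client-side range computation (the helper and chunk size are illustrative, matching the 64 KB chunks mentioned above):

```python
# Sketch of client-side chunking via HTTP Range requests: the client asks for
# fixed-size byte ranges of a single stored file, so the server can keep files
# whole on disk and simply seek to the requested offset.

CHUNK = 64 * 1024  # 64 KB, matching the chunk size used in the current system

def byte_ranges(total_size, chunk=CHUNK):
    """Yield 'bytes=start-end' Range header values covering the whole file.
    Range offsets are inclusive, hence the end - 1."""
    for start in range(0, total_size, chunk):
        end = min(start + chunk, total_size) - 1
        yield f"bytes={start}-{end}"

ranges = list(byte_ranges(150 * 1024))  # ranges for a 150 KB file
```

Each value would go in the `Range` header of a separate request; a client can also resume after a hiccup by re-requesting only the ranges it is missing.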
|
|
|
|