Basically, any transaction will lock the table against other updates, so the only way is to use small transactions. I would be very reluctant to implement anything that involves a 30 minute transaction anyway - consider the impact of a failure after 29 minutes. The transaction would have to roll back completely before any further updates could be done, and that rollback would probably take close to another 30 minutes.
Bite the bullet and do it right - it will pay off in the long run
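For illustration only, a batched approach in T-SQL might look something like this - the table and column names (dbo.MainTable, Processed, SomeValue) are placeholders, not from your schema:

    DECLARE @rows int;
    SET @rows = 1;
    WHILE @rows > 0
    BEGIN
        BEGIN TRANSACTION;
            -- update a small slice, then commit so other sessions can get in between batches
            UPDATE TOP (1000) dbo.MainTable
            SET SomeValue = SomeValue + 1, Processed = 1
            WHERE Processed = 0;
            SET @rows = @@ROWCOUNT;
        COMMIT TRANSACTION;
    END

The trade-off is that a failure part-way through leaves the earlier batches committed, so each batch has to be a meaningful unit of work in its own right.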
Bob Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
It's hard to say without knowing more about your data structure, but if you have a tree of objects to update then each parent item needs to be a transaction, so that it all works or all fails and rolls back. Think of it in terms of atomic transactions, i.e. each transaction is a single unit of work, no matter how many tables are impacted.
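As a rough sketch (the table and parameter names are made up, not from your schema), one atomic unit of work that happens to touch two tables would look like this:

    BEGIN TRY
        BEGIN TRANSACTION;
            -- both statements succeed together or fail together
            UPDATE dbo.MainTable    SET SomeValue = @NewValue WHERE NodeId = @ParentId;
            UPDATE dbo.TreeRelation SET ModifiedDate = GETDATE() WHERE ParentId = @ParentId;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        -- surface the error so the caller knows this unit of work was rolled back
        RAISERROR ('Update of parent item failed; changes were rolled back.', 16, 1);
    END CATCH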
Bob Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
Basically my test data contains a single parent with children, grandchildren, great-grandchildren (down to the 6th level), totalling 1200 nodes.
Two tables are affected: the main table where the data is stored, and another table which just contains the parent/child id relationships. Only the main table is updated.
The previous code generates the tree, traverses and updates each node using a stored procedure.
I have now moved the tree generation and traversal into C# code and kept a simple stored procedure to update each node.
Then I enclosed the entire thing in a single transaction (using the SqlTransaction class).
Any comments about this?
My concern is that, since it is still a single transaction, I may experience the same problem we currently have: while the data is being updated, other users who are updating data in the same SQL table are stuck until this finishes.
You may already be doing this or something more sophisticated... Depending on how you are loading the data, and whether you can check constraints/triggers without database access, you may be able to drop the constraints and triggers temporarily and reapply them afterwards. This may be faster, and it also lets you recover from certain errors, since you would log only those rows that you could not load.
Create an indicator somehow that could be interrogated prior to making a change to the table. If the "bulk update" indicator is true then maybe you could warn the user that the database is currently being updated and that they should try their update at a later time. Or, if you want to get sophisticated, maybe you could create a pending-transaction table where you could temporarily store their transaction until the "bulk update" is finished and apply it when done.
Basically, you would be creating a Queue for updates on your table.
Look at sp_GetAppLock as a possible tool for creating your "bulk update" indicator.
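A rough sketch of that idea (the resource name, timeout and comments are just examples):

    DECLARE @result int;
    EXEC @result = sp_getapplock
         @Resource    = 'BulkUpdateOfMainTable',  -- arbitrary name acting as the indicator
         @LockMode    = 'Exclusive',
         @LockOwner   = 'Session',
         @LockTimeout = 0;                        -- fail immediately if someone else holds it
    IF @result < 0
        PRINT 'Bulk update already in progress - warn the user or queue the change.';
    ELSE
    BEGIN
        -- ... perform the normal update here ...
        EXEC sp_releaseapplock @Resource = 'BulkUpdateOfMainTable', @LockOwner = 'Session';
    END

The bulk update itself would acquire the same lock at the start and release it at the end, so every session is checking the one indicator.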
If you perform the updates using the (RowLock) hint it may restrict the locking, though a half-hour transaction will still end up locking everything in sight until it is done.
The only other thing would be to run your selects using the (NoLock) hint, with the corresponding risk that you will read some updated data that will later get rolled back and so will never have existed.
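For reference, the hints look like this (the table and column names are placeholders):

    -- update taking row-level locks where possible (SQL Server can still escalate to a table lock)
    UPDATE dbo.MainTable WITH (ROWLOCK)
    SET SomeValue = @NewValue
    WHERE NodeId = @NodeId;

    -- read without taking shared locks - a dirty read that may include uncommitted data
    SELECT NodeId, SomeValue
    FROM dbo.MainTable WITH (NOLOCK)
    WHERE ParentId = @ParentId;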
What is more important, stopping the locking, or getting guaranteed data? Your choice.
Having said that, unless I have guaranteed single-user access to a database, such as in an overnight job, there is no way I would allow any transaction to lock records for more than a few milliseconds. In my experience, if the application hangs for half an hour waiting for database access, the end users will buy a new system.
I converted the large stored procedure into smaller ones, by placing the tree generation and traversal business logic in C# code and just using stored procedures for simple checks and updates. The entire operation, then, is enclosed in a single transaction.
The good news is that the 30-minute operation was optimized to 15 minutes.
The bad news is that the new C# code made the web server so busy that other users couldn't use the application (even though multi-threading is handled automatically by IIS).
What I plan to do is to implement multi-threading manually for this function. Do you think it will solve my problem?
Any other suggestions?
Alter procedure CheckForValidUser
    @Username varchar(50), @Password varchar(50)   -- parameter types assumed; adjust to suit your schema
as
select ltrim(rtrim(name_first + ' ' + isnull(name_last,''))) as name, rolename, p.pid, date_exit
from hms_person p, HMS_personell pl
where pl.username = @Username and password = @Password and pl.pid = p.pid;
I'm passing two parameters, but the query displays three values. I want to get a message to display when date_exit is not null - is it possible to display it as a popup?
I would again say, you should not let that null date reach your DB. Check it beforehand. For other exceptions, you can do something like this:
You can use try/catch in your stored procedure. Then, if it fails, you should send the exception back to your data access layer. That layer or the business layer should in turn change the message to a user-friendly one. Also, before doing this, make sure you log it somewhere for future reference.
Once your message has reached UI layer, use MessageBox or Alert as applicable.
In order to understand how TRY/CATCH works in SQL, use Google and you will find a lot of help.
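A minimal sketch of the pattern (the error-log table and the statement inside the TRY block are hypothetical):

    BEGIN TRY
        -- the statement(s) that might fail go here
        UPDATE hms_person SET date_exit = @DateExit WHERE pid = @Pid;
    END TRY
    BEGIN CATCH
        -- log it somewhere for future reference, then re-raise for the data access layer
        INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage, LoggedAt)
        VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), GETDATE());
        RAISERROR ('The update failed - see ErrorLog for details.', 16, 1);
    END CATCH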
50-50-90 rule: Anytime I have a 50-50 chance of getting something right, there's a 90% probability I'll get it wrong...!!
If this is required for debugging purposes, you can use the PRINT statement; if you want to notify your application users, you need to pass an appropriate value back to the client and show the popup in the client application (such as a WinForms form or an ASP.NET page).
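One way to pass that value back, assuming the CheckForValidUser procedure shown earlier, is to return an extra flag column that the client tests before deciding whether to show the popup (the column name has_exited is just an example):

    select ltrim(rtrim(name_first + ' ' + isnull(name_last,''))) as name, rolename, p.pid, date_exit,
           case when date_exit is not null then 1 else 0 end as has_exited
    from hms_person p, HMS_personell pl
    where pl.username = @Username and password = @Password and pl.pid = p.pid;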
I developed an application in VB.NET 2005 using SQL Server 2005 Express. So far I have installed it on single PCs and it is working fine.
But now I want the application to be accessed from client PCs while updating the SQL database on a centralized server PC.
e.g. MASTER_PC will have the SQL database, and the main application will run on PC1, PC2 and PC3, so that users can access the database from their respective PCs.
In short, I just need a central location where the SQL database can be stored, so that all employees can run the application and modify the database from their own machines via the internet/LAN (or some other connection).
Please suggest me how to go ahead with this:
1) As I have 4 PCs in my office, do I need to create a LAN?
2) Do I need to do any major modifications in my application?
Any links/tutorials/references will be highly appreciated.
For your connection string, it's easiest to troubleshoot a client if you use a DSN connection as opposed to an IP address. I am not sure whether you need to create a LAN, since I have always set these up where a LAN was already in place.
If you've already networked your PCs, then the main step is to modify the system DSN on each PC. This can be done by modifying registry settings or through odbcad32.exe. I am assuming that you will want to refer to your database by name. With Oracle you might also create a TNSNames file, but this is not necessary for SQL Server. Clients can access the database when they log in with VPN or are on your network at the office.
Assuming you have that in place, major modifications to your application should not be necessary.
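For what it's worth, the connection strings on the clients would end up looking something like the following (server, instance, database and DSN names here are only examples):

    DSN-based (ODBC), where the DSN is created on each client via odbcad32.exe:
        DSN=MyAppDsn;Uid=appuser;Pwd=secret;
    Direct (SqlClient), pointing at the named SQL Express instance on the server PC:
        Data Source=MASTER_PC\SQLEXPRESS;Initial Catalog=MyAppDb;Integrated Security=True;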
Hi guys, I'm relatively new to databases and have been playing with SQL Server 2008 Express for some time. I want to build a database on stock prices and various other stock-related indicators across time. So basically it's going to have three dimensions: name of the stock, stock attributes, and time. The user will be able to query for certain stock's performance on a particular attribute across a designated time period.
Normally this would be a perfect job for OLAP (a multidimensional database). However, due to financial constraints and other reasons, I am not considering OLAP at the moment. With my limited knowledge of databases, I think I could achieve the above by using:
1) Relational database: if I need to track 1000 stocks, I will construct 1000 tables, with time and stock attributes on each stock table.
2) XML: I am relatively new to XML too, but it seems that with some clever XPath/XQuery coding, dumping all the data into one huge XML database is not a bad idea (or is it?), as long as data can be effectively retrieved.
In terms of speed/performance, maintenance convenience, and server-memory efficiency, which of the above two is a better choice? Or do I really need to migrate to using OLAP?
Sorry if this is a simple question, I am a total newbie regarding databases.
I wouldn't store similar information on different stocks in different tables; just the one table, with a field added to identify the stock, would be fine. It can provide more functionality with less code, as you can now easily search/list across many stocks and never need to enumerate the tables.
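For example, a single table along these lines (names and types here are only illustrative) would cover all of the stocks, one row per stock/date/attribute:

    CREATE TABLE dbo.StockAttribute
    (
        StockSymbol    varchar(20)    NOT NULL,  -- identifies the stock instead of one table per stock
        TradeDate      date           NOT NULL,
        AttributeName  varchar(50)    NOT NULL,  -- e.g. 'ClosePrice', 'Volume', 'PE'
        AttributeValue decimal(18, 4) NULL,
        CONSTRAINT PK_StockAttribute PRIMARY KEY (StockSymbol, TradeDate, AttributeName)
    );

A wide table with one column per attribute would work just as well if the set of attributes is fixed; the point is simply one table keyed by stock rather than one table per stock.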
If this is true, assuming that the table will contain 1000 stocks with 50 attributes over a 3-year period, this table will need to contain 1000 x 50 x 260 (business days) x 3 = 39,000,000 stock prices. Is this a little bit too large for one table? (If it is, as an alternative perhaps I could break the 1000 stocks down into several smaller tables based on the market in which they are traded - so one table per market. What do you think?)
Yes, I am. I haven't done this with millions of records - my DB apps aren't that large - but yes, that is what I would do. I avoid tables with identical structure; I only use one Persons table, one Vehicles table, etc.