|
Thanks Mycroft.
The final query has 8 subs and does the job. It was initially a little slow until I placed an index on the TransDate field.
I don't speak Idiot - please talk slowly and clearly
'This space for rent'
Driven to the arms of Heineken by the wife
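For reference, a minimal sketch of the kind of index described above, assuming the MaterialTransaction table and TransDate column from the query later in the thread:

```sql
-- Sketch: a nonclustered index to speed up filtering/grouping on TransDate.
-- Assumes the MaterialTransaction table shown in the query below.
CREATE NONCLUSTERED INDEX IX_MaterialTransaction_TransDate
    ON MaterialTransaction (TransDate);
```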
|
|
|
|
|
SELECT mt.TransDate,
       SUM(CASE WHEN pc.Name IN ('Hardware', 'Custom')
                THEN mt.Qty
                ELSE 0 END) AS 'Hardware',
       SUM(CASE WHEN pc.Name LIKE 'DS %'
                THEN mt.Qty
                ELSE 0 END) AS 'DataStrip'
FROM MaterialTransaction mt
JOIN Production pr
  ON mt.Prodn_ID = pr.Prodn_ID
JOIN WorkOrder wo
  ON pr.WO_ID = wo.WO_ID
JOIN Product p
  ON wo.Product_ID = p.ID
JOIN ProductCategory pc
  ON p.ProdCat_ID = pc.ID
WHERE mt.TransDate BETWEEN '2009-11-24' AND '2009-11-25'
GROUP BY mt.TransDate
|
|
|
|
|
Cheers - Nice
I am testing for speed against the other query.
|
|
|
|
|
Would anyone happen to know of any public web pages or documents, by respected authorities in the field, that would definitively convince a product owner that creating tens of thousands of databases on a single SQL Server is not a reasonable thing to do? In addition, having Joe-blow users creating and deleting databases is not a good idea; these should be administrator-only tasks.
The product in question is an update of a previous product where there was a single ‘database’ per profile. Only in the old version, the ‘database’ was just a single file with no constraint checking.
|
|
|
|
|
I ran across this type of system in the 90s; the app was built on System V or some such crap. The owner wanted exactly the same thing written in Access or SQL Server, same functionality and everything. When I flatly refused to even discuss the solution and offered to walk away from the contract, he saw the light.
Seriously, if you cannot move him off this idiotic design, quit and move on; your stress levels will appreciate it.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
|
Thanks for the link to the great article, i.j.russell; I'm still reading and learning.
Ah, I see, my description of the question wasn't very good. I should expand a bit on what I mean by 'profile'. A better description would be an item, where thousands or tens of thousands of these items are associated with a single tenant.
The new software being developed copied a lot of code from the old version, where an item (a single file) was represented as a database using an API provided by an antiquated database product.
The new software is trying to use this same scheme in SQL Server, one database per item. Since there can be tens of thousands of items, there will be tens of thousands of databases. I can hear the murderous laughter of database administrators everywhere.
I think this scheme breaks a number of well-established guidelines but I’d like to give the product owner ammunition, concrete definitive documentation, to force the developers to change the product to something more reasonable. (Oh, and I’m one of the developers – just showing up late in the development cycle of the product.) The product owner is not very technical and so may have been convinced for some reason or other that this was an okay solution.
|
|
|
|
|
The other option to consider is going the NoSQL route and using something like MongoDB, CouchDB, or even cloud-based storage like Microsoft Azure Table Storage or Amazon SimpleDB.
|
|
|
|
|
Good options, though I'm pretty sure they are going to use SQL Server no matter what. The customers of the product are typically Microsoft shops so they really want SQL Server.
I just can't imagine how the developers convinced the product owner that creating tens of thousands of databases would be okay.
|
|
|
|
|
|
Exactly, 'technically possible' but a horrendous idea. I'm guessing this is how development 'convinced' the product owner.
Any good web pages to convince them that this is a ridiculous idea?
|
|
|
|
|
I can't find any sites that would help, but that in itself is helpful, because it means that the concept is either uber-cutting-edge or plain stupid. Since SQL Server has been around for well over a decade, I am going for the latter.
From a technical perspective, I can think of a few issues;
1. Your hardware requirements are going to be well in excess of what your application data should need. Your data should be easily handled by a single SQL Server Express database.
2. Connection pooling is going to be interesting as each database will have a different connection string. Performance will be adversely affected as new connections are created.
3. Whilst you can carry out DDL in a stored procedure to create a database, table(s) and indexes, this will take a few seconds each time.
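Point 3 can be sketched concretely. The following is a hypothetical per-item provisioning procedure of the kind this thread argues against; all names are invented for illustration:

```sql
-- Hypothetical sketch of per-item database provisioning (the anti-pattern).
-- Every call pays the full cost of CREATE DATABASE (file allocation, metadata)
-- plus the DDL that follows, typically several seconds per item.
CREATE PROCEDURE dbo.CreateItemDatabase
    @ItemName SYSNAME
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX);

    SET @sql = N'CREATE DATABASE ' + QUOTENAME(@ItemName);
    EXEC (@sql);

    SET @sql = N'CREATE TABLE ' + QUOTENAME(@ItemName)
             + N'.dbo.ItemData (ID INT IDENTITY PRIMARY KEY, Payload NVARCHAR(MAX));';
    EXEC (@sql);
END
```

Multiply that by tens of thousands of items, and add that every one of those databases then needs its own connection string, backup schedule, and schema-upgrade run.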
|
|
|
|
|
I'd go with i.j's plain stupid scenario; this is so far outside sensible that it would not be entertained by any reasonable designer.
Consider a code-base change to the database: maintenance will be an absolute nightmare, and the cost of maintaining an app is much higher than the cost of developing it.
|
|
|
|
|
The key to your argument should be about cost; each cost can then have a technical answer to the question. For example: annual maintenance cost is $500k, as we need to employ a team of DBAs to manage thousands of databases.
Give your suggested solution with a cost, and give his solution with its associated costs. You can always ham it up a bit to ensure that your solution looks better.
This will get his interest far more than technical discussions on the merits or otherwise of solutions.
|
|
|
|
|
I have a database model that has an ISBN as the primary key.
The ISBN is input by the user.
It is the primary key for book, and book is linked to book_by_authors on the ISBN number,
and it is also linked to bookcategory by the ISBN, and so forth.
How in the world could I update the ISBN easily? I keep getting constraint issues with the foreign keys. Should I create a method that stores the existing values, deletes the record, and then adds the record back with the corrected data? Just wondering what should be done in this case. Thanks.
|
|
|
|
|
Perhaps you'd be better off using a surrogate key rather than the ISBN if you want to allow users to edit them (you'd still have to check for collisions if they can manually alter an ISBN).
Tychotics: take us back to the moon
"Life, for ever dying to be born afresh, for ever young and eager, will presently stand upon this earth as upon a footstool, and stretch out its realm amidst the stars."
H. G. Wells
|
|
|
|
|
Well, you're screwed: a primary key that is dependent on your user input. BBrrrttt, WRONG.
This is one of the most basic errors in database design, like using a phone number or email address or ANY other user data as a PK/FK. Keys should be stupid; they should have absolutely NO function other than maintaining your data structure.
Solution: create another field, BookID int with IDENTITY, on your book table. Everywhere you use the ISBN, go there, add the new field, and populate it based on the existing structures (ISBN). Now change your FKs to the BookID fields, then remove the FKs for the ISBN. NOW you can edit your ISBN.
It is a lot of work, but you have to do it to correct the design error. Never put off fixing these errors; they get into the corners of your database and breed, and shortly you are totally f***ed.
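The steps above can be sketched in T-SQL. Table, column, and constraint names here are assumptions based on the post (shown for one child table; repeat for bookcategory):

```sql
-- Sketch of the migration described above; all names are assumed.
-- 1. Add a surrogate key to book and make it referenceable.
ALTER TABLE book ADD BookID INT IDENTITY(1,1) NOT NULL;
ALTER TABLE book ADD CONSTRAINT UQ_book_BookID UNIQUE (BookID);

-- 2. Add BookID to the child table and populate it via the existing ISBN link.
ALTER TABLE book_by_authors ADD BookID INT NULL;
UPDATE ba
SET    ba.BookID = b.BookID
FROM   book_by_authors ba
JOIN   book b ON b.isbn = ba.isbn;

-- 3. Swap the foreign keys: drop the ISBN-based one, add the BookID-based one.
ALTER TABLE book_by_authors DROP CONSTRAINT FK_book_by_authors_isbn; -- assumed name
ALTER TABLE book_by_authors ALTER COLUMN BookID INT NOT NULL;
ALTER TABLE book_by_authors ADD CONSTRAINT FK_book_by_authors_BookID
    FOREIGN KEY (BookID) REFERENCES book (BookID);
```

Once no FK references the ISBN, it becomes an ordinary editable attribute (ideally still covered by a unique constraint).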
|
|
|
|
|
Hello everyone!
I wrote a stored procedure and executed it (in SQL Server 2005).
The SP gets some data as parameters and, after some complicated transactions, deletes some rows of the tables, but it fails with the following error:
"Msg 2754, Level 16, State 1, Procedure sp_InsertHistoryTbl, Line 30
Error severity levels greater than 18 can only be specified by members of the sysadmin role, using the WITH LOG option."
1) How can I get the error message as an admin?
2) Does anyone know about this failure?
3) Is there a list of error numbers (for example, a table with error numbers, error messages, and perhaps their causes)?
Please let me know anything about these.
Thanks!
|
|
|
|
|
So did you do some research, or just come straight to CP? I found these responses[^], and it looks like you have a locking issue; some research may provide you with a solution!
|
|
|
|
|
Thanks for your attention.
Yes, you're right, maybe it's better to Google first!
|
|
|
|
|
Did you find a solution?
|
|
|
|
|
Yes, I did.
I had written a RAISERROR with severity level 27!!!!
Shame on me!!!!
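For the record, that is exactly what error 2754 is about: a user-defined RAISERROR must stay at severity 18 or below unless the caller is a sysadmin and uses WITH LOG. A sketch (the message text and variable are invented):

```sql
-- Severity 11-16 marks an ordinary, user-correctable error; any login may raise it.
DECLARE @RowId INT;
SET @RowId = 42;
RAISERROR ('Could not delete history row %d.', 16, 1, @RowId);

-- Severity 19+ requires sysadmin membership AND the WITH LOG option,
-- which is what Msg 2754 was complaining about:
-- RAISERROR ('Fatal condition.', 20, 1) WITH LOG;
```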
|
|
|
|
|
Hello,
I need to write a script that creates a table variable with a column for every instance of a row in a table. The data in the table it pulls from may change from time to time, so I'm looking for a dynamic approach.
Example table:
ColumnA  ColumnB
1        Hello
2        World
Then my table variable declaration should look like:
DECLARE @TableVariable TABLE
(
Hello varchar(20),
World varchar(20)
)
Can anyone think of a performance-friendly solution to this?
Thanks,
Kevin
|
|
|
|
|
I think you should dynamically create a SQL string that creates the table.
On the other hand, why would you want to create a column for each row? Keep in mind that SQL Server limits a table to 1,024 columns.
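A hedged sketch of that dynamic-string approach, assuming the source table from the question is called ExampleTable. Note that a table variable cannot be created dynamically at all, and a #local temp table created inside EXEC vanishes when the dynamic batch ends, so a ##global temp table stands in here:

```sql
-- Sketch only: builds one varchar(20) column per row of the source table.
-- ExampleTable / ColumnB are assumed names.
DECLARE @cols NVARCHAR(MAX);
SELECT @cols = STUFF(
    (SELECT N',' + QUOTENAME(ColumnB) + N' varchar(20)'
     FROM ExampleTable
     FOR XML PATH('')), 1, 1, N'');

-- ##global temp table, because a #local one created inside EXEC
-- disappears when the dynamic batch ends.
EXEC (N'CREATE TABLE ##DynamicCols (' + @cols + N')');
```

Alternatively, keep the entire workflow (create, fill, query) inside one dynamic batch so a #local temp table suffices.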
|
|
|
|
|
Kevin Leeds wrote: Can anyone think of a performance-friendly solution to this?
Yes.. I think.
I encountered a similar issue this week and solved it by using a pivot table. There is an excellent article on here that gives a workable example.
Pivot two or more columns in SQL Server 2005[^]
Hope this helps...
Tim
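Applied to the example table from the question, the pivot approach would look something like this (a sketch, assuming the source table is called ExampleTable):

```sql
-- Sketch: turns the ColumnB values into column headings, per the example above.
SELECT [Hello], [World]
FROM   (SELECT ColumnA, ColumnB FROM ExampleTable) AS src
PIVOT  (MAX(ColumnA) FOR ColumnB IN ([Hello], [World])) AS p;
```

The column list here is static; to handle values that change over time, it would need to be built with dynamic SQL, as suggested earlier in the thread.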
|
|
|
|