|
jujiro wrote: "on delete cascade"
No, I use that very seldom and only when the data is transient anyway.
|
|
|
|
|
jujiro wrote: On Delete Cascade: BAD (VERY BAD actually)
I use DELETE and DELETE CASCADE where the model requires me to. I find it good practice to limit the number of RI keys I'll define on a table, so that I don't get into a mess where, all for the sake of normalization, someone has defined a row with a couple of dozen RI keys on it. To me that's just asking for a coding nightmare.
Chris Meech
I am Canadian. [heard in a local bar]
In theory there is no difference between theory and practice. In practice there is. [Yogi Berra]
posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]
|
|
|
|
|
jujiro wrote: Deletion of a row at a top level table can become a project very quickly due the child tables dependencies, which can run several levels deep.
"Cascading delete". Most modern databases do.
Bastard Programmer from Hell
|
|
|
|
|
Yeah, and most experienced devs DON'T use it. I do not recall ever using cascade delete. I do recall chastising a junior dev for trashing the database using cascading delete. "Oops" just does not cut it!
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Mycroft Holmes wrote: Yeah and most experienced devs DON'T use it.
They'd rather write queries to check whether there are child rows, and delete them manually. I understand that people don't use what bites them - about the same brilliant idea as keeping every field a VARCHAR.
It's standard functionality of the database, and it's less error-prone than coding it manually.
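For reference, it's a single clause on the foreign key. A minimal sketch (Orders/OrderLines are made-up names for illustration, not anything from this thread):
-- Illustrative only: child rows are removed together with their parent.
CREATE TABLE Orders
(
    order_no INT PRIMARY KEY
);
CREATE TABLE OrderLines
(
    line_no  INT PRIMARY KEY,
    order_no INT NOT NULL
        REFERENCES Orders (order_no) ON DELETE CASCADE  -- the cascade is declared here
);
-- One statement, one transaction: the children go when the parent goes.
DELETE FROM Orders WHERE order_no = 42;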
I bet you're glad that I'm not allowed near your systems?
Bastard Programmer from Hell
|
|
|
|
|
Eddy Vluggen wrote: I understand that people don't use what bites them
You may well be right there; the idea of cascading deletes just makes my toes curl.
Eddy Vluggen wrote: I bet you're glad that I'm not allowed near your systems
You are most certainly right there; I could see endless discussions where each tries to justify their OPOV - fruitlessly.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
If an article is too much then a tip/trick requires less commitment.
If you can't keep track of your data structure during development then there is something lacking in your methods. Referential integrity via FKs usually meets most of the requirements!
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
>> If an article is too much then a tip/trick requires less commitment.
I don't understand what you mean by that.
Anyway, the reason why I developed the script is to see how deep the rabbit hole is. Yes, I can use any reverse engineering ERD tool to see that (Visio is my favorite.) If you have been around as long as I have and have worked on projects where no one really knew the entire system/database, you have to rely on quick and dirty tools like my script to find your way. Visio may or may not be there at your disposal.
I really do not understand why people are making smart ass comments about how they do things. Col. Jessep (A few good men) said, "I have neither the time nor the inclination to explain myself to a man who rises and sleeps under the blanket of the very freedom I provide, then questions the manner in which I provide it! I'd rather you just said thank you and went on your way."
|
|
|
|
|
jujiro wrote: you have to rely on quick and dirty tools like my script to find your way.
This shows there is something wrong with the environment; if you are going in and having to build cascading deletes with no reference to the data structure, then it is wrong. Not that we have not all had to do it, but in a reasonable application you should not have to.
I did not even look at your script/solution; my point was that the way you are approaching it is wrong. This thread will disappear into the pits fairly rapidly. Writing an article will make your contribution more useful, but even a tip/trick will have a more lasting usefulness than a link in a forum post.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
My point in the message to you was that people started writing comments without even clicking on the link. If you click on the link you get some more information - maybe not a whole lot of explanation, but information it is.
Now, about my approach: there could be a myriad of reasons why you would want to know the relational linkages. Maybe you were doing a data cleanup, which meant deleting the parent rows and keeping the child rows by setting the foreign key column to null, or maybe even cascading the delete, if that is what was needed as part of the cleanup process.
So, unless you know the context, you can't emphatically say "wrongful approach."
Do you get my point?
|
|
|
|
|
jujiro wrote: So, unless you know the context, you can say an emphatic "wrongful approach."
No, no, no - not the approach you took with the code, I never even looked at it; it's the wrong way to PRESENT it here. There are always requirements for hacks and workarounds. I was not dumping on your code, just the way you presented it!
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
There are two stored procedures on the same DB server:
create procedure usp_A
as
declare @ShiftName as datetime
select @ShiftName=getdate()
select * from LineConfig where Cur_PlanQty_0=0
GO
create procedure usp_B
as
select * from LineConfig where Cur_PlanQty_0=1
GO
I use the following SQL to get the union of the record sets:
select * from openrowset('sqloledb','trusted_connection=yes','abc.dbo.usp_A')
UNION ALL
select * from openrowset('sqloledb','trusted_connection=yes','abc.dbo.usp_B')
and get the following error message:
Could not execute query against OLE DB provider 'sqloledb'. The provider could not support a required property. The provider indicates that conflicts occurred with other properties or requirements.
OLE/DB provider returned message: Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.
If I remove the local variable assignment statement from usp_A:
select @ShiftName=getdate()
the error message disappears.
If I add a new statement such as:
declare @ShiftValue as varchar(10)
the same error message appears again.
|
|
|
|
|
Don't UNION on an OpenRowSet. Import each into a local (temp) table, and union from there. That way you'll also see which of the two procedures is failing.
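Something along these lines, as a sketch; the OpenRowSet syntax is copied from your post, and SELECT ... INTO lets the temp tables pick up whatever columns the procedures return:
-- Import each result set separately; if one fails, you know which procedure is the culprit.
SELECT * INTO #a
FROM OPENROWSET('sqloledb', 'trusted_connection=yes', 'abc.dbo.usp_A');
SELECT * INTO #b
FROM OPENROWSET('sqloledb', 'trusted_connection=yes', 'abc.dbo.usp_B');
-- Union locally, where both sides have known metadata.
SELECT * FROM #a
UNION ALL
SELECT * FROM #b;
DROP TABLE #a;
DROP TABLE #b;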
Bastard Programmer from Hell
|
|
|
|
|
In the end I had to insert the records into a temp table to resolve the issue.
But I have a lot of user stored procedures that compute record sets. Do I have to create a lot of temp tables? Is it a bug or a misunderstanding on my part?
|
|
|
|
|
Zhenjie Fu wrote: Do I have to create a lot of temp tables?
Nope, only for the two tables that you're importing with the OpenRowSet-command.
Zhenjie Fu wrote: Is it a bug or a misunderstanding on my part?
It's an error in one of the imports. Hence the suggestion to select those two first into a temporary table, and then to union on those.
Bastard Programmer from Hell
|
|
|
|
|
I am trying to retrieve the product, price, and date_despatched for the last three orders (date_despatched) for each product.
There are two tables involved:
opheadm (Sales Order Header)
date_despatched
order_no
opdetm (Sales Order Detail)
order_no
product
net_price
Desired Result
product_a 2.999 2012/03/23
product_a 2.600 2012/02/01
product_a 3.812 2012/01/23
product_b 1.456 2012/04/01
product_b 4.786 2012/02/13
product_b 2.563 2012/10/10
...
I have seen two approaches, but can't get either to work (since in this case there needs to be a join between the header and detail tables).
a) OVER PARTITION BY (below is nowhere near correct - just to show example)
SELECT product_code, price, date_entered
FROM (SELECT ROW_NUMBER() OVER (PARTITION BY product_code ORDER BY date_entered DESC) AS OrderedDate,* FROM opdetm ) AS ordlist
WHERE ordlist.OrderedDate<=3;
b) SELECT.. WHERE IN (SELECT TOP n...)
SELECT E.scheme_opdetm.product, scheme_opdetm.net_price, opheadm.date_despatched
FROM scheme_opdetm, scheme_opheadm AS E
WHERE E.date_despatched IN (SELECT TOP 3 E2.date_despatched from scheme_opdetm, scheme_opheadm AS E2 WHERE E.product = E2.product Order by E2.date_despatched DESC)
ORDER BY E.product, net_price DESC;
Questions:
a) Which approach is best? Why?
b) Please could someone assist with the query I need, and if possible give a short explanation of how to construct this type of query and what the different parts do. I'd like to try to understand what I'm doing as opposed to just copying someone's SQL.
Thanks in advance
|
|
|
|
|
Take a look at this:
CREATE TABLE Opheadm (date_despatched date, order_num int)
create table opdetm (order_num int, product VARCHAR(100), net_price money)
INSERT INTO Opheadm values
('3/23/2012', 4),
('2/1/2012', 3),
('1/23/2012', 2),
('1/25/2011', 1)
INSERT INTO opdetm values
(4, 'prod_a', 2.99),
(3, 'prod_a', 2.60),
(2, 'prod_a', 3.80),
(1, 'prod_a', 1.20)
SELECT top 3 a.product, a.net_price,
ROW_NUMBER() Over(order by b.date_despatched DESC) AS RowNum
from opdetm a
inner join Opheadm b
on a.order_num = b.order_num
|
|
|
|
|
Hi djj55
I tested the query and it did not return what I was expecting.
It only returned three rows.
I need the last three orders for every stock code.
Maybe I need to
SELECT DISTINCT (product) from opdetm or something like that?
And then join to opheadm to get the last three orders?
There is another table, stockm (stock master), which has product as a primary key, so I could use that as well, except I would also need to select only records from stockm where the warehouse = '06'.
Currently I am pulling all stockm records where warehouse = '06' into another datatable in my application with a separate query. I then want to loop through the result set of this query and populate the table from stockm with the prices, dates and quantities of the last three sales orders, as well as compare the standard cost of each item with the last sales order and calculate the difference between the last sales order and the standard cost.
The end result is a grid showing ALL stock items in warehouse 06, with the following columns:
warehouse, product, physical_qty, standard_material, Error, SO1Price, SO1Date, SO1Qty, SO2Price, SO2Date, SO2Qty, SO3Price, SO3Date, SO3Qty
If an item in the stock master has no sales, it must be listed, but all the SO fields must be blank.
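Roughly, I imagine the shape of the query would be something like this sketch (order_qty is my guess at the quantity column on opdetm, and I'm not sure which way round the Error difference should go):
;WITH LastThree AS
(
    SELECT d.product,
           d.net_price,
           h.date_despatched,
           d.order_qty,  -- assumed quantity column
           ROW_NUMBER() OVER (PARTITION BY d.product ORDER BY h.date_despatched DESC) AS rn
    FROM opdetm d
    INNER JOIN opheadm h ON h.order_no = d.order_no
)
SELECT s.warehouse, s.product, s.physical_qty, s.standard_material,
       s.standard_material - so1.net_price AS Error,  -- difference vs. the last sales order
       so1.net_price AS SO1Price, so1.date_despatched AS SO1Date, so1.order_qty AS SO1Qty,
       so2.net_price AS SO2Price, so2.date_despatched AS SO2Date, so2.order_qty AS SO2Qty,
       so3.net_price AS SO3Price, so3.date_despatched AS SO3Date, so3.order_qty AS SO3Qty
FROM stockm s
LEFT JOIN LastThree so1 ON so1.product = s.product AND so1.rn = 1
LEFT JOIN LastThree so2 ON so2.product = s.product AND so2.rn = 2
LEFT JOIN LastThree so3 ON so3.product = s.product AND so3.rn = 3
WHERE s.warehouse = '06'
ORDER BY s.product
Items with no sales still show up because of the LEFT JOINs, just with blank SO columns.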
|
|
|
|
|
Sorry, being at work I do not have a lot of time.
I believe what is needed is a sub query so you can then use a WHERE clause to get only three of each. If you are using 2008 you might try WITH.
This has not been tested but something like:
SELECT product, net_price
FROM (
SELECT a.product, a.net_price,
ROW_NUMBER() OVER (PARTITION BY a.product ORDER BY b.date_despatched DESC) AS RowNum
from opdetm a
inner join Opheadm b
on a.order_num = b.order_num ) AS c
WHERE RowNum < 4
|
|
|
|
|
Also at work, but on my way home, will test when I get home - and yes, using 2008
|
|
|
|
|
I have a problem with an SSIS solution: I have many flat-file data sources ("Text Files") that I want to load into a SQL Server database as the destination, but the flat files are overwritten every day with new data and I only want to take the new data. How can I take only the new data?
|
|
|
|
|
How do you determine which data is new?
I would likely import the data to a table that is designed for that purpose and then execute a separate SQL statement to move only the new data to the actual destination. The import area would get cleared out before each import.
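As a sketch of that "move only the new data" step - table and column names here (Staging_Attendance, Attendance, empID, punch_time) are placeholders for whatever your attendance tables actually use:
-- Clear the staging area before each SSIS import.
TRUNCATE TABLE Staging_Attendance;
-- (The SSIS flat-file source loads Staging_Attendance here.)
-- Then copy only the rows that the destination doesn't already have.
INSERT INTO Attendance (empID, punch_time)
SELECT s.empID, s.punch_time
FROM Staging_Attendance AS s
WHERE NOT EXISTS
(
    SELECT 1
    FROM Attendance AS a
    WHERE a.empID = s.empID
      AND a.punch_time = s.punch_time
);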
|
|
|
|
|
The source data is text files that contain attendance data (empID, datetime, etc.), and every day each text file comes with both new and old attendance data. I am using an SSIS solution.
|
|
|
|
|
|
How is that easy, even though every day the text files come with both new and old data?
|
|
|
|
|