|
In some threads below, people have explained how to search for duplicate records in a table. Check those out. Once you have the records, just find the ones with the older timestamp and delete them.
I believe it should be a mere customization of the same query.
50-50-90 rule: Anytime I have a 50-50 chance of getting something right, there's a 90% probability I'll get it wrong...!!
|
I know the logic for finding duplicate rows, but I need to delete them. Below are the table and my logic for getting the duplicate records with the old timestamp. I need to delete these records. Can you help with a small modification to my code?
ProductID BugDate Bug Timestamp
111 2009-04-12 1 2009-12-08 12:18:07.453
111 2009-04-14 2 2009-12-08 12:18:07.453
111 2009-04-19 1 2009-12-08 12:18:07.453
111 2009-04-12 1 2009-12-08 12:29:07.453
111 2009-04-14 3 2009-12-08 12:29:07.453
111 2009-04-19 2 2009-12-08 12:29:07.453
SELECT [Product ID],BugDate,Bug,[Timestamp]
FROM BugsDB_DefectVolume, (
SELECT [Product ID] AS B, MIN([Timestamp]) AS S
FROM BugsDB_DefectVolume
GROUP BY [Product ID]) X
WHERE [Timestamp] = X.S and [Product ID]='111'
|
Try this.
First I will give a sample demo, and then I will modify your query.
Sample demo
declare @t table(bugdate varchar(10),timestamps datetime)
insert into @t
select '4/14/2009','12/8/2009 12:54.611' union all
select '4/19/2009', '12/8/2009 12:54.611' union all
select '4/26/2009', '12/8/2009 12:54.611' union all
select '4/14/2009','12/8/2009 12:58.623' union all
select '4/19/2009','12/8/2009 12:58.623' union all
select '4/26/2009','12/8/2009 12:58.623'
Query:
delete from @t where timestamps in(
select MIN(timestamps) from @t
group by bugdate
having (COUNT(timestamps)>1))
select * from @t
Output:
bugdate timestamps
4/14/2009 2009-12-08 12:58:00.623
4/19/2009 2009-12-08 12:58:00.623
4/26/2009 2009-12-08 12:58:00.623
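For reference, the same demo can be reproduced outside SQL Server; here is a sketch in Python with SQLite (ISO-format dates are used so string comparison orders the timestamps correctly; the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE t (bugdate TEXT, ts TEXT);
INSERT INTO t VALUES
  ('2009-04-14', '2009-12-08 12:54:00.611'),
  ('2009-04-19', '2009-12-08 12:54:00.611'),
  ('2009-04-26', '2009-12-08 12:54:00.611'),
  ('2009-04-14', '2009-12-08 12:58:00.623'),
  ('2009-04-19', '2009-12-08 12:58:00.623'),
  ('2009-04-26', '2009-12-08 12:58:00.623');

-- for every bugdate that appears more than once, delete its oldest timestamp
DELETE FROM t WHERE ts IN (
    SELECT MIN(ts) FROM t GROUP BY bugdate HAVING COUNT(*) > 1
);
""")
rows = cur.execute("SELECT bugdate, ts FROM t ORDER BY bugdate").fetchall()
# -> three rows remain, all with ts '2009-12-08 12:58:00.623'
```

Note that the `HAVING COUNT(*) > 1` guard is what makes this safe to run repeatedly: once a bugdate has only one row left, its group no longer qualifies, so nothing further is deleted.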
Coming to your code:
Try this
delete from BugsDB_DefectVolume where [Timestamp] in(
select [Timestamp] from (
SELECT [Product ID],BugDate,Bug,[Timestamp]
FROM BugsDB_DefectVolume, (
SELECT [Product ID] AS B, MIN([Timestamp]) AS S
FROM BugsDB_DefectVolume
GROUP BY [Product ID]) X
WHERE [Timestamp] = X.S and [Product ID]='111' ) Y)
Hope this helps
Niladri Biswas
|
But I have one problem: if there are duplicate records, it works fine the first time, but when we execute it a second time, all records are deleted from the table. I think we need to put a check somewhere in the logic.
|
I guess you can do one thing.
Put in a check that the timestamp field is not equal to getdate(),
like (pseudocode):
select
case when (timestamp = getdate())
then call the delete operation
end as checking
Hope this might help you
Niladri Biswas
|
That will not work. But the example and query you sent in the previous reply are working fine even when we execute the query many times.
delete from @t where timestamps in(select MIN(timestamps) from @t group by bugdate having (COUNT(timestamps)>1))
Since you used a HAVING clause, I think we need to put the same thing in my query. I tried but I am getting syntax errors... Can you think about it once more?
|
Got the solution. The query below works:
DELETE T1
FROM BugsDB_DefectVolume T1, BugsDB_DefectVolume T2
WHERE T1.[Product ID] = T2.[Product ID]
and T1.BugDate = T2.BugDate and T1.[Timestamp] < T2.[Timestamp]
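A sketch of this self-join delete in Python with SQLite (SQLite lacks the multi-table `DELETE T1 FROM T1, T2` syntax, so an equivalent correlated EXISTS form is used; the `ts` column stands in for the Timestamp column, and names here are illustrative). It keeps, for each ProductID/BugDate pair, only the row with the newest timestamp:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE BugsDB_DefectVolume (ProductID INTEGER, BugDate TEXT, Bug INTEGER, ts TEXT);
INSERT INTO BugsDB_DefectVolume VALUES
  (111, '2009-04-12', 1, '2009-12-08 12:18:07.453'),
  (111, '2009-04-14', 2, '2009-12-08 12:18:07.453'),
  (111, '2009-04-19', 1, '2009-12-08 12:18:07.453'),
  (111, '2009-04-12', 1, '2009-12-08 12:29:07.453'),
  (111, '2009-04-14', 3, '2009-12-08 12:29:07.453'),
  (111, '2009-04-19', 2, '2009-12-08 12:29:07.453');

-- delete any row for which a newer row with the same ProductID and BugDate exists
DELETE FROM BugsDB_DefectVolume
WHERE EXISTS (
    SELECT 1 FROM BugsDB_DefectVolume t2
    WHERE t2.ProductID = BugsDB_DefectVolume.ProductID
      AND t2.BugDate   = BugsDB_DefectVolume.BugDate
      AND t2.ts        > BugsDB_DefectVolume.ts
);
""")
rows = cur.execute(
    "SELECT BugDate, ts FROM BugsDB_DefectVolume ORDER BY BugDate").fetchall()
# -> three rows remain, all with ts '2009-12-08 12:29:07.453'
```

Running the DELETE a second time removes nothing, since each surviving row has no newer sibling, which addresses the re-execution problem raised earlier in the thread.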
|
Hi all,
I have two tables, A(a1,a2,a3) and B(b1,b2,b3), and a period of time Delta t [t1,t2].
When the time is between t1 and t2, if a row is inserted into table A, field b1 of table B should be updated, e.g. b1 = b1 + 1.
Moreover, I also have a web application that can assign values to t1 and t2 through an aspx page.
Could you tell me how to do that?
Thank you so much.
|
If you are using SQL Server, you could set up a trigger on table A, so that when a row is inserted it checks whether the time is between t1 and t2 and, if so, updates table B.
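A minimal sketch of this idea, assuming a helper table (here called Period) holds the t1/t2 values that the aspx page writes; the table and trigger names are placeholders. This demo uses Python with SQLite to be self-contained; SQL Server's trigger syntax differs (`CREATE TRIGGER ... ON A AFTER INSERT`, with `GETDATE()` for the current time), but the logic is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE A (a1 INTEGER, a2 TEXT, a3 TEXT);
CREATE TABLE B (b1 INTEGER, b2 TEXT, b3 TEXT);
CREATE TABLE Period (t1 TEXT, t2 TEXT);  -- t1/t2, written by the web page
INSERT INTO B VALUES (0, '-', '-');
INSERT INTO Period VALUES ('2000-01-01 00:00:00', '2099-12-31 23:59:59');

-- after each insert into A, bump B.b1 only if 'now' falls inside [t1, t2]
CREATE TRIGGER trg_count_insert AFTER INSERT ON A
BEGIN
    UPDATE B SET b1 = b1 + 1
    WHERE datetime('now') BETWEEN (SELECT t1 FROM Period)
                              AND (SELECT t2 FROM Period);
END;
""")
cur.execute("INSERT INTO A VALUES (1, 'x', 'y')")           # inside the window
b1_inside = cur.execute("SELECT b1 FROM B").fetchone()[0]   # -> 1

# move the window into the past: further inserts no longer count
cur.execute("UPDATE Period SET t1 = '1990-01-01 00:00:00', t2 = '1990-12-31 23:59:59'")
cur.execute("INSERT INTO A VALUES (2, 'x', 'y')")
b1_outside = cur.execute("SELECT b1 FROM B").fetchone()[0]  # still 1
```

The aspx page would then simply UPDATE the Period row to set t1 and t2; the trigger reads the current values on every insert.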
|
Could you give me an example? I do not know how to assign values to t1 and t2.
Thank you!
|
On a SQLServer 2000 server, the default database for the user accounts was changed. This appears to have occurred for ALL user accounts, both network login names and SQLServer named accounts.
Any idea how this can happen?
Our DBA resigned Thanksgiving week, so there is no help onsite.
Thanks in advance,
Tim
|
There are multiple ways to change the default database for user accounts. You will need to retrieve a backup to restore the old default databases for all users. You may need to talk to the previous DBA about this issue; it is time-consuming to track down the cause by yourself. Did the previous DBA leave on good terms?
|
Use this statement to verify what you think happened ...
select name,default_database_name,modify_date from sys.sql_logins
If, indeed, the changes were made, you will see when they happened, and someone here might be able to walk you through fixing it.
You may want to start by changing your administrative passwords since your DBA left. Make sure to tighten up any remote access accounts, etc.
|
David Mujica wrote: Use this statement to verify what you think happened ...
Users could access the system last Thursday; on Friday, they couldn't. The default database was changed to a database that is used for sending e-mail messages, so, they neither need nor have access to it.
The accounts (there are only a handful) have been manually updated.
David Mujica wrote: select name,default_database_name,modify_date from sys.sql_logins
Since this is a SQLServer 2000 node, the listed table didn't work, but it gave me enough information to move forward. I used the following:
select l.name, xdate1 as creation_Date, xdate2 as modify_date, l.dbid,
d.name as defaultdb
from sysxlogins l inner join
sysdatabases d on l.dbid = d.dbid
And that just opened up more questions...
One of the accounts was created in 2007, updated a minute later, and not touched after that, but it has a default database set to a database that didn't exist at the time.
As far as access rights, when someone leaves the company, their password is changed and remote access removed.
Tim
|
Hi all,
If I have this Customer table:
ID | CustomerName | PhoneNum | City | Street
19 | jon | 555 | NK | st.5
21 | Mark | 236 | TK | KA.st
37 | jon | 555 | NK | st.5
45 | AJ | 800 | LA | WS.17
Note that the customer named "jon" was added to this table twice, with different IDs.
In my real database, I have thousands of cases like this.
How can I write a query that returns the duplicate records with their IDs, like:
ID | CustomerName | PhoneNum | City | Street
19 | jon | 555 | NK | st.5
37 | jon | 555 | NK | st.5
modified on Monday, December 7, 2009 4:18 AM
|
Do some basic research[^]
This is a common problem that has been solved over and over again. The first 2 items look interesting.
Never underestimate the power of human stupidity
RAH
|
Dear Mycroft,
I searched before and after posting my question, but I did not find a solution. Could you help me, please?
As in my question, my problem is that I have two duplicate records with different IDs. When I use MAX(ID) or TOP, one of the IDs is ignored, because the IDs are different, and I want to display the records as they are without ignoring any IDs. How can I do that?
|
You asked the same thing a few posts down. You got a reply and were able to do this as well. Now, why are you posting this again?
50-50-90 rule: Anytime I have a 50-50 chance of getting something right, there's a 90% probability I'll get it wrong...!!
|
Dear d@nish,
Here I have a different case, because I have two duplicate records with different IDs, and when I use MAX() or TOP(), one of the IDs is ignored, which causes a problem when I link to other tables.
So I need to display both records with their IDs.
Could you please help me do that?
Thanks
|
If you read Shameel's reply to your earlier post, you should be able to figure it out. Anyway, a general query would be:
select [colNames]
from tableName
group by [colNames]
having count(*) > 1
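This GROUP BY / HAVING pattern can be demonstrated end to end; here is a small sketch in Python with SQLite, using the Customer table from the question above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Customer (ID INTEGER, CustomerName TEXT, PhoneNum TEXT, City TEXT, Street TEXT);
INSERT INTO Customer VALUES
  (19, 'jon',  '555', 'NK', 'st.5'),
  (21, 'Mark', '236', 'TK', 'KA.st'),
  (37, 'jon',  '555', 'NK', 'st.5'),
  (45, 'AJ',   '800', 'LA', 'WS.17');
""")
# group on every column except ID; any group seen more than once is a duplicate set
dupes = cur.execute("""
    SELECT CustomerName, PhoneNum, City, Street, COUNT(*) AS n
    FROM Customer
    GROUP BY CustomerName, PhoneNum, City, Street
    HAVING COUNT(*) > 1
""").fetchall()
# -> [('jon', '555', 'NK', 'st.5', 2)]
```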
50-50-90 rule: Anytime I have a 50-50 chance of getting something right, there's a 90% probability I'll get it wrong...!!
|
You are right, and I know that! But
if I write this:
select [ID, colNames, Phone, ...] from tableName group by [colNames] having count(*) > 1
I will have 0 duplicate records!! Why? Because the IDs of the duplicated records are different!!
And if I use all the columns without the ID, I will get the duplicated records without their IDs.
Please see the example in my question; I don't want to ignore any IDs.
Could you please help me find a way?
Thanks.
|
As Luc mentioned, you don't need to group by ID. You will get the duplicate records then.
50-50-90 rule: Anytime I have a 50-50 chance of getting something right, there's a 90% probability I'll get it wrong...!!
|
Dear d@nish,
See my reply to Luc.
Thanks
|
All you need is a variation on SQL code shown by others. Try this:
SELECT ID, otherfields FROM table
GROUP BY otherfields
HAVING COUNT(*) > 1
i.e. don't use MAX(ID) or TOP(ID)
and include all fields ("otherfields") that are relevant for identity checking
|
Hi Luc, you are right.
I wrote what you said, but I got the error "You tried to execute a query that does not include the specified expression 'ID' as part of an aggregate function".
Thanks
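For completeness: the error occurs because ID is selected but neither grouped nor aggregated. One common fix is to find the duplicate groups first and then join them back to the table to recover every ID in each group. A sketch in Python with SQLite, using the sample Customer data from the question (the exact column names are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Customer (ID INTEGER, CustomerName TEXT, PhoneNum TEXT, City TEXT, Street TEXT);
INSERT INTO Customer VALUES
  (19, 'jon',  '555', 'NK', 'st.5'),
  (21, 'Mark', '236', 'TK', 'KA.st'),
  (37, 'jon',  '555', 'NK', 'st.5'),
  (45, 'AJ',   '800', 'LA', 'WS.17');
""")
# find the duplicate groups first, then join back to pick up every ID in each group
rows = cur.execute("""
    SELECT c.ID, c.CustomerName, c.PhoneNum, c.City, c.Street
    FROM Customer c
    JOIN (SELECT CustomerName, PhoneNum, City, Street
          FROM Customer
          GROUP BY CustomerName, PhoneNum, City, Street
          HAVING COUNT(*) > 1) d
      ON  c.CustomerName = d.CustomerName
      AND c.PhoneNum     = d.PhoneNum
      AND c.City         = d.City
      AND c.Street       = d.Street
    ORDER BY c.ID
""").fetchall()
# -> [(19, 'jon', '555', 'NK', 'st.5'), (37, 'jon', '555', 'NK', 'st.5')]
```

This returns both duplicate rows with their IDs intact, which is what the question asks for, without using MAX(ID) or TOP.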
|