|
Thanks David. I was able to figure it out.
|
|
|
|
|
Say I have this TSQL table
create table [Table] (   -- "Table" is a reserved word, so it needs brackets
TableId int identity(1, 1) not null,
value nvarchar(max) null,
id0 int null,
id1 int null,
id2 int null
)
and say I make a query like this (getting the largest id2 for a given id0 and id1):
declare @arg0 int = 2
declare @arg1 int = 3
select top 1 id2
from [Table]
where id0 = @arg0 and id1 = @arg1
order by id2 desc
Which would be the better index: Index1 or Index2?
create index Index1 on [Table] (id2) include (id0, id1)
create index Index2 on [Table] (id0, id1, id2)
modified 5-Feb-20 20:52pm.
|
|
|
|
|
Index3. I didn't actually try it; this is just based on the documentation.
Super Lloyd wrote: create index index2 on Table (id0, id1, id2) Creates a composite index, based on those three columns, sorted first by id0, then within that group by id1, and within that group by id2.
Super Lloyd wrote: create index Index1 on Table (id2) include (id0, id1) Creates an index on id2, but adds the (values of) columns id0 and id1 to the index for quick access.
Super Lloyd wrote: What would be the better index Index1 , or Index2 ? You mean "faster"? Do you mean faster reads, or faster writes? From what I see, they'd both need to write three fields when writing, since the index needs to be updated. They'd both supply the three fields from the index (without accessing the table) when reading. The bigger difference is in the first field of the index: id2 in one, id0 in the other. Both indexes seem to cover the query, but the docs point to making a composite of all three fields;
Redesign nonclustered indexes with a large index key size so that only columns used for searching and lookups are key columns. Make all other columns that cover the query into nonkey columns. In this way, you will have all columns needed to cover the query, but the index key itself is small and efficient. Which field comes first in that list, is also on MSDN;
List the columns to be included in the composite index, in sort-priority order Soo.. option 2? Maybe option 3, with (id0, id1, id2); you are first looking up on the first two fields, then ordering on id2. That'd be my guess, given the example query and the docs. You could of course create a table and actually test that; SSMS would give you an execution plan with timings.
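A sketch of such a test, using the table and query from the question (Table is bracketed because it is a reserved word; the seek-vs-scan comment is my expectation, not a measured result):

```sql
-- Show I/O and timing statistics for the query under each candidate index.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DECLARE @arg0 int = 2;
DECLARE @arg1 int = 3;

SELECT TOP (1) id2
FROM [Table]
WHERE id0 = @arg0 AND id1 = @arg1
ORDER BY id2 DESC;
-- In SSMS, enable "Include Actual Execution Plan" (Ctrl+M) to see
-- whether the optimizer performs an index seek or has to scan.
```

Create each index in turn, run the query, and compare the logical reads reported in the Messages tab.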
Why is there an identity field? Wouldn't id0 and id1 simply be your primary key? That's what you are using to locate a unique value in the set; if you're not actively using the identity field, then inserts would benefit from removing it. If you make id0 and id1 your primary key, then the table will have a clustered index on those fields, meaning the table is physically sorted on those fields. For large tables, that would actually be preferred;
The preferred way to build indexes on large tables is to start with the clustered index and then build any nonclustered indexes So the docs are hinting that (for large sets) a clustered index is preferred and automatically created when defining a PK; but then the ids can't be NULL . If those ids represent categories, you may even want to go for a filtered index.
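A sketch of that alternative design, assuming id0 and id1 really do identify a unique row (an assumption; the question doesn't say so) and that the identity column can be dropped:

```sql
-- Clustered compound primary key; the key columns must be NOT NULL.
create table [Table] (
    id0 int not null,
    id1 int not null,
    id2 int null,
    value nvarchar(max) null,
    constraint PK_Table primary key clustered (id0, id1)
)
```

The table itself is then physically ordered by (id0, id1), so lookups on those two columns need no separate index.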
I hope for you that someone posts an answer that simply says the first or the second, without all these details
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Mm.. you seem to recommend Index2 so I am confused by your "Index3 " on the first line...
basically this is a table that aggregates a summary from multiple data sources. id0 and id1 identify the data source and are small numbers (between 1 and 9 each), and id2 is the original record id in the foreign data source.
I am interested in speeding up read queries...
But.. mmm... I realize this particular query is not too important.. and querying by id2 will be used elsewhere... mmmmm.....
Thanks for your feedback and MSDN quotes though!
|
|
|
|
|
Super Lloyd wrote: Mm.. you seem to recommend Index2 so I am confused by your "Index3 " on the first line... Same as Index2, but with the fields in a different order.
Super Lloyd wrote: basically this is a table that aggregate a summary from multiple data sources. id0 and id1 identify the datasource and are small number (between 1 to 9 each) and id2 is the original record id in the foreign datasource. That makes those three fields a compound primary key.
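If so, that key could be declared directly. A sketch, assuming the three columns are made NOT NULL first (the question says they are currently nullable):

```sql
-- The key columns of a primary key may not be nullable.
alter table [Table] alter column id0 int not null;
alter table [Table] alter column id1 int not null;
alter table [Table] alter column id2 int not null;

-- Compound primary key over data source (id0, id1) and foreign record id (id2).
alter table [Table] add constraint PK_Table primary key (id0, id1, id2);
```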
Super Lloyd wrote: Thanks for your feedback and MSDN quotes though! My pleasure, and it seems they know the most about SQL Server
|
|
|
|
|
Index2 is the faster one, because Index1 will not be used at all, except maybe for a full scan.
You might want to create it as:
create index Index2 on [Table] (id0, id1, id2 DESC)
But the difference is academic at best. SQL Server can scan in the reverse direction, after all.
|
|
|
|
|
For that query, definitely index 2.
Think of it like searching a massive address book. You're trying to find the highest house number for anyone called "John Smith".
- Index 1 sorts the addresses by house number, and includes the first and last name. You have to start at the end and scan backwards until you find an entry for "John Smith".
- Index 2 sorts the addresses by first name, then last name, then house number. You can jump straight to the end of the "John Smith" records, and see the highest house number.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
The quote below is from An Introduction to Database Systems, 8th edition, C J Date.
Is it possible for the system to crash while transferring updates from the main-memory buffer to the physical database?
If so, since transactions are the unit of recovery, should the updates that were already written to the database be undone?
"It is quite possible, for instance, that the system might crash after the COMMIT has been honored but before the updates have been physically written to the database--they might still be waiting in a main-memory buffer and so lost at the time of the crash. Even if that happens, the system's restart procedure will still record those updates in the database: it is able to discover the values to be written by examining the relevant records in the log."
modified 2-Feb-20 13:48pm.
|
|
|
|
|
Not sure if I understand the question correctly, but are you looking for roll-forward? With roll-forward, when the database is recovered, all the logs are reapplied, and at the end everything that is not committed is rolled back. This guarantees a consistent state even after a crash.
If the recovery is interrupted because of a new crash, the recovery is simply started from the beginning next time.
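In SQL Server terms, the same redo-then-undo mechanism can be driven by hand when restoring from backups. A sketch (database name and file paths are placeholders):

```sql
-- Restore the full backup, then roll forward through the log backups.
RESTORE DATABASE MyDb FROM DISK = N'C:\Backups\MyDb_full.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'C:\Backups\MyDb_log1.trn' WITH NORECOVERY;

-- WITH RECOVERY redoes committed work from the log and rolls back
-- anything uncommitted, leaving the database in a consistent state.
RESTORE LOG MyDb FROM DISK = N'C:\Backups\MyDb_log2.trn' WITH RECOVERY;
```

Crash recovery at instance startup does the same thing automatically, using the active transaction log.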
|
|
|
|
|
Many thanks!
Sorry can you please explain this:
If the system crashes after the COMMIT has been honored, but only part of the updates have been physically written to the database while other parts are still waiting in a main-memory buffer, then what will happen? How does the system know that only part of the data was written?
|
|
|
|
|
It's explained in the text you quoted:
"... the system's restart procedure will still record those updates in the database: it is able to discover the values to be written by examining the relevant records in the log."
|
|
|
|
|
The quote below is from *An Introduction to Database Systems, 8th edition, C J Date*.
Is it possible for the system to crash while transferring the updates from the main-memory buffer to the physical database?
"It is quite possible, for instance, that the system might crash after the COMMIT has been honored but before the updates have been physically written to the database--they might still be waiting in a main-memory buffer and so lost at the time of the crash. Even if that happens, the system's restart procedure will still record those updates in the database: it is able to discover the values to be written by examining the relevant records in the log."
modified 2-Feb-20 13:48pm.
|
|
|
|
|
In theory, yes. I would expect modern databases to wait for verification of the write before returning from the commit, which implies reading to verify after the data has been written.
|
|
|
|
|
I want to copy the structure of all the tables of one database from one server to another server by using a stored procedure (due to automation). There should be no indexes, no triggers, no foreign keys, etc.
|
|
|
|
|
Nice requirement. You have permission to proceed
"Five fruits and vegetables a day? What a joke!
Personally, after the third watermelon, I'm full."
|
|
|
|
|
Easiest way to copy the structure is to do a backup, initiated from the sproc. You can restore that on whatever server you want.
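A minimal sketch of a backup kicked off from T-SQL (database name and path are placeholders). Note that a backup copies everything, including the indexes, triggers, and foreign keys you wanted to leave out, so those would have to be dropped after the restore:

```sql
-- COPY_ONLY leaves the regular backup chain untouched;
-- INIT overwrites any existing backup in the file.
BACKUP DATABASE SourceDb
    TO DISK = N'C:\Backups\SourceDb.bak'
    WITH COPY_ONLY, INIT;

-- On the target server:
-- RESTORE DATABASE SourceDb FROM DISK = N'C:\Backups\SourceDb.bak';
```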
|
|
|
|
|
For example, to transfer the sum of $100 from account 123 to account 456, we want the total number of dollars in accounts 123 and 456 taken together not to change.
Why would it be unreasonable to declare an integrity constraint to that effect?
|
|
|
|
|
That is what transactions are for: wrap the transfer in a transaction, and if there is an error, roll back the transaction. An after-process integrity check is not unreasonable, but the correct way is with a transaction.
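A sketch of that, with assumed table and column names (Accounts, AccountId, Balance are not from the question):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- Both sides of the transfer succeed or fail together.
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 123;
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 456;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Any error undoes both updates, so the total is never left inconsistent.
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;
END CATCH;
```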
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
|
|
|
|
|
But the book says it's unreasonable.
|
|
|
|
|
The book should say it is incorrect or redundant and should recommend using a transaction. No book is 100% correct all the time.
|
|
|
|
|
Tara11 wrote: Why it would be unreasonable to declare an integrity constraint to that effect?
Mainly because integrity constraints operate on a per-row basis. Transactions, on the other hand, ensure that the operation is completed as a whole, even when multiple commands are involved.
But neither of these guarantees that the total sum is unaffected when money is transferred. If this were a single-table operation, a trigger could be utilized to do that check.
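A sketch of such a trigger, with assumed names (Accounts, Balance): the inserted and deleted pseudo-tables hold the affected rows after and before the update, so their totals can be compared.

```sql
CREATE TRIGGER trg_Accounts_TotalUnchanged
ON Accounts
AFTER UPDATE
AS
BEGIN
    -- Reject any update that changes the combined balance of the affected rows.
    IF (SELECT ISNULL(SUM(Balance), 0) FROM inserted) <>
       (SELECT ISNULL(SUM(Balance), 0) FROM deleted)
    BEGIN
        ROLLBACK TRANSACTION;
        THROW 50000, N'Transfer must not change the total balance.', 1;
    END
END;
```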
modified 2-Feb-20 15:32pm.
|
|
|
|
|
Hi
I have to migrate a database from Oracle to MS-SQL. The database migration went well, but I have a query that gives me trouble. I tried to convert it using this online tool - http://www.sqlines.com/online - but it is not working, since the Oracle table alias syntax seems to be a problem for MS-SQL.
Is there any way to use a similar syntax in MS-SQL?
The original Oracle SQL:
UPDATE folk.individ c
SET (c.stat,
c.statusdto,
c.pnrgaeld,
c.koen,
c.fornvnmrk,
c.mellemnvnmrk,
c.efternvnmrk,
c.nvndto,
c.fornvn_solo,
c.fornvn,
c.mellemnvn,
c.efternvn,
c.adr_fornvn,
c.adr_efternvn,
stilling,
z_stilling,
c.z_fornvn,
c.z_mellemnvn,
c.z_efternvn,
c.z_adr_fornvn,
c.z_adr_efternvn,
c.z_fornvn_solo,
c.haenstart_umrk_navne,
c.stillingdto,
c.mynkod_ctnavn,
c.myntxt_ctnavn,
c.indrap,
c.adrnvndto,
c.soegnvn,
c.soegnvndto
) =
(SELECT l.status,
TO_DATE(DECODE(SUBSTR(l.statushaenstart,7,2),'00',NULL,
SUBSTR(l.statushaenstart,1,8)),'YYYYMMDD'),
l.pnrgaeld,
l.koen,
a.fornvn_mrk,
a.melnvn_mrk,
a.efternvn_mrk,
TO_DATE(DECODE(SUBSTR(a.nvnhaenstart,7,2),'00',NULL,
SUBSTR(a.nvnhaenstart,1,8)),'YYYYMMDD'),
Upper(a.fornvn), -- c.fornvn_solo
substr(decode(a.melnvn, null, upper(a.fornvn), upper(a.fornvn)|| ' ' || upper(a.melnvn) ),1,50), -- skal være som før: fornavn indh. også mlnavne
upper(a.melnvn),
upper(a.efternvn),
upper(LTRIM(SUBSTR(a.adrnvn, INSTR(a.adrnvn,',')+1))), --adr_fornvn
upper(SUBSTR(a.adrnvn,1,INSTR(a.adrnvn,',') - 1)), --adr_efternavn
upper(l.stilling),
l.stilling,
substr(decode(a.melnvn, null, a.fornvn, a.fornvn || ' ' || a.melnvn),1,50), -- z_fornvn, fornavn indh. også mlnavne
a.melnvn, -- z_mellemnvn
a.efternvn,
LTRIM(SUBSTR(a.adrnvn, INSTR(a.adrnvn,',')+1)), --z_adr_fornvn
SUBSTR(a.adrnvn,1,INSTR(a.adrnvn,',') - 1), -- z_adr_eftermnvn
a.fornvn, -- z_fornvn_solo
a.haenstart_umrk_navne, -- anvendelse ukendt
null, --stillingdto udgået
null, --mynkod_ctnavn udgået
null, --myntxt_ctnavn udgået
null, --indrap udgået
null, -- adrnvndto udgået
null, --soegnvn udgået
null --soegnvndto udgået
FROM folk.lperson2010 l, folk.laktnvnopl2010 a
WHERE l.pnr = c.pnr
AND l.pnr = a.pnr
AND l.lseq BETWEEN 1 AND 6000 )
where c.pnr in (select l.pnr FROM folk.lperson2010 l
where l.lseq BETWEEN 1 AND 6000 );
The code after conversion parsing it through http://www.sqlines.com/online:
UPDATE folk.individ c
SET (c.stat,
c.statusdto,
c.pnrgaeld,
c.koen,
c.fornvnmrk,
c.mellemnvnmrk,
c.efternvnmrk,
c.nvndto,
c.fornvn_solo,
c.fornvn,
c.mellemnvn,
c.efternvn,
c.adr_fornvn,
c.adr_efternvn,
stilling,
z_stilling,
c.z_fornvn,
c.z_mellemnvn,
c.z_efternvn,
c.z_adr_fornvn,
c.z_adr_efternvn,
c.z_fornvn_solo,
c.haenstart_umrk_navne,
c.stillingdto,
c.mynkod_ctnavn,
c.myntxt_ctnavn,
c.indrap,
c.adrnvndto,
c.soegnvn,
c.soegnvndto
) =
(SELECT l.status,
CONVERT(DATETIME, CASE SUBSTRING(l.statushaenstart,7,2) WHEN '00' THEN NULL
ELSE SUBSTRING(l.statushaenstart,1,8) END,'YYYYMMDD'),
l.pnrgaeld,
l.koen,
a.fornvn_mrk,
a.melnvn_mrk,
a.efternvn_mrk,
CONVERT(DATETIME, CASE SUBSTRING(a.nvnhaenstart,7,2) WHEN '00' THEN NULL
ELSE SUBSTRING(a.nvnhaenstart,1,8) END,'YYYYMMDD'),
Upper(a.fornvn), -- c.fornvn_solo
substring(case when a.melnvn is null then upper(a.fornvn) isnull( else upper(a.fornvn), '')+ ' ' + isnull(upper(a.melnvn), '') end,1,50), -- skal være som før: fornavn indh. også mlnavne
upper(a.melnvn),
upper(a.efternvn),
upper(LTRIM(SUBSTRING(a.adrnvn, CHARINDEX(',', a.adrnvn)+1, LEN(a.adrnvn)))), --adr_fornvn
upper(SUBSTRING(a.adrnvn,1,CHARINDEX(',', a.adrnvn) - 1)), --adr_efternavn
upper(l.stilling),
l.stilling,
substring(case when a.melnvn is null then a.fornvn isnull( else a.fornvn, '') + ' ' + isnull(a.melnvn, '') end,1,50), -- z_fornvn, fornavn indh. også mlnavne
a.melnvn, -- z_mellemnvn
a.efternvn,
LTRIM(SUBSTRING(a.adrnvn, CHARINDEX(',', a.adrnvn)+1, LEN(a.adrnvn))), --z_adr_fornvn
SUBSTRING(a.adrnvn,1,CHARINDEX(',', a.adrnvn) - 1), -- z_adr_eftermnvn
a.fornvn, -- z_fornvn_solo
a.haenstart_umrk_navne, -- anvendelse ukendt
null, --stillingdto udgået
null, --mynkod_ctnavn udgået
null, --myntxt_ctnavn udgået
null, --indrap udgået
null, -- adrnvndto udgået
null, --soegnvn udgået
null --soegnvndto udgået
FROM folk.lperson2010 l, folk.laktnvnopl2010 a
WHERE l.pnr = c.pnr
AND l.pnr = a.pnr
AND l.lseq BETWEEN 1 AND 6000 )
where c.pnr in (select l.pnr FROM folk.lperson2010 l
where l.lseq BETWEEN 1 AND 6000 );
|
|
|
|
|
The tool doesn't seem to have done anything!
At a guess, you're looking for something like:
UPDATE
c
SET
stat = l.status,
statusdto = CASE SUBSTRING(l.statushaenstart, 7, 2)
WHEN '00' THEN NULL
ELSE CONVERT(datetime, SUBSTRING(l.statushaenstart, 1, 8))
END,
pnrgaeld = l.pnrgaeld,
koen = l.koen,
fornvnmrk = a.fornvn_mrk,
mellemnvnmrk = a.melnvn_mrk,
efternvnmrk = a.efternvn_mrk,
nvndto = CASE SUBSTRING(a.nvnhaenstart, 7, 2)
WHEN '00' THEN NULL
ELSE CONVERT(datetime, SUBSTRING(a.nvnhaenstart, 1, 8))
END,
fornvn_solo = UPPER(a.fornvn),
fornvn = SUBSTRING(CASE
WHEN a.melnvn Is Null THEN UPPER(a.fornvn)
ELSE UPPER(a.fornvn) + ' ' + UPPER(a.melnvn)
END, 1, 50),
mellemnvn = UPPER(a.melnvn),
efternvn = UPPER(a.efternvn),
adr_fornvn = UPPER(LTRIM(SUBSTRING(a.adrnvn, CHARINDEX(',', a.adrnvn) + 1, LEN(a.adrnvn)))),
adr_efternvn = UPPER(SUBSTRING(a.adrnvn, 1, CHARINDEX(',', a.adrnvn) - 1)),
stilling = UPPER(l.stilling),
z_stilling = l.stilling,
z_fornvn = SUBSTRING(CASE
WHEN a.melnvn Is Null THEN a.fornvn
ELSE a.fornvn + ' ' + a.melnvn
END, 1, 50),
z_mellemnvn = a.melnvn,
z_efternvn = a.efternvn,
z_adr_fornvn = LTRIM(SUBSTRING(a.adrnvn, CHARINDEX(',', a.adrnvn) + 1, LEN(a.adrnvn))),
z_adr_efternvn = SUBSTRING(a.adrnvn, 1, CHARINDEX(',', a.adrnvn) - 1),
z_fornvn_solo = a.fornvn,
haenstart_umrk_navne = a.haenstart_umrk_navne,
stillingdto = Null,
mynkod_ctnavn = Null,
myntxt_ctnavn = Null,
indrap = Null,
adrnvndto = Null,
soegnvn = Null,
soegnvndto = Null
FROM
folk.individ As c
INNER JOIN folk.lperson2010 As l ON l.pnr = c.pnr
INNER JOIN folk.laktnvnopl2010 a ON a.pnr = c.pnr
WHERE
l.lseq BETWEEN 1 AND 6000
;
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Thanks a lot. I did not realize that Oracle SQL and MS-SQL are that different from each other
|
|
|
|
|
|