|
|
I'm certainly ok with their decision. My primary goal is that the customer be happy and make the best decision for themselves. But I also think it would have been a fun project.
|
|
|
|
|
Hi,
I have two tables and two queries like below:
select childname, wgt from #Hy order by wgt desc
select childname, ror from #Hy1 order by ror desc
I need the result like below:
#Hy.childname #Hy.wgt #Hy1.childname #Hy1.ror
That is, the two result sets appended side by side without altering either one:
the first pair of columns should stay sorted by wgt desc and the second pair by ror desc.
|
|
|
|
|
Do you mean that if the first query returns 10 rows and the second 20 rows, you will have only one result set, 30 rows?
If that's the case, you could use UNION ALL (plain UNION would remove duplicate rows and so alter the result sets). To keep the ordering so that the rows from the first result set come first, followed by the rows from the second, you could have something like:
select 1 as src, childname, wgt
from #Hy
union all
select 2, childname, ror
from #Hy1
order by 1, 3 desc
|
|
|
|
|
Hi,
That will just combine the values into a single pair of columns. I need them side by side, like
#Hy1.childname #Hy1.ror #Hy.childname #Hy.wgt, without altering the sorting.
|
|
|
|
|
AFAIK that won't be possible with a single query, and it will be problematic if the result sets have different numbers of rows. For example, if the first result set is 10 rows and the second one is 20 rows, the only way that comes to mind is something like:
- insert the first sorted result set (10 rows) into a temporary table
- add two columns to the temporary table
- update the existing 10 rows with the first 10 rows of the second sorted result set
- insert the remaining 10 rows of the second sorted result set into columns 3 and 4.
However, this will lead to a situation where you have 20 rows but columns 1 and 2 are null for the last 10 rows. It makes me wonder what the use for this kind of result is.
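The steps above could be sketched in T-SQL roughly like this (a sketch only: the #Result and #Second table names, the column types, and the ROW_NUMBER-based matching are my own assumptions, not taken from the original queries):

```sql
-- 1) Number and store the first sorted result set. The two extra
--    columns for the second result set are created up front so a
--    separate ALTER TABLE batch is not needed. The CASTs force the
--    columns to be nullable; varchar(100)/float are guessed types.
SELECT ROW_NUMBER() OVER (ORDER BY wgt DESC) AS rn,
       CAST(childname AS varchar(100)) AS childname,
       CAST(wgt AS float) AS wgt,
       CAST(NULL AS varchar(100)) AS childname2,
       CAST(NULL AS float) AS ror
INTO #Result
FROM #Hy;

-- 2) Number the second sorted result set the same way.
SELECT ROW_NUMBER() OVER (ORDER BY ror DESC) AS rn,
       childname, ror
INTO #Second
FROM #Hy1;

-- 3) Fill the existing rows with the matching rows of the second set.
UPDATE r
SET r.childname2 = s.childname,
    r.ror        = s.ror
FROM #Result r
JOIN #Second s ON s.rn = r.rn;

-- 4) Append the rows of the second set that had no matching row.
INSERT INTO #Result (rn, childname2, ror)
SELECT s.rn, s.childname, s.ror
FROM #Second s
WHERE NOT EXISTS (SELECT 1 FROM #Result r WHERE r.rn = s.rn);

-- Columns 1 and 2 will be null for the appended rows, as noted above.
SELECT childname, wgt, childname2, ror
FROM #Result
ORDER BY rn;
```

The row number acts as the "position" in each sorted result set, which is what lines the two sets up side by side.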
|
|
|
|
|
Hello Experts,
I created a stored procedure in Oracle 10g. I get values back, but it only handles a single value, not multiple values, when I call the procedure from my C# application, and I need multiple values in a single parameter. When I call the proc from C#, the parameter arrives as one quoted string, '035,034,023,022,021,015,013,011,010,009,008,007', so the query doesn't match anything. What I need is for it to be treated as the list 035,034,023,022,021,015,013,011,010,009,008,007 or as '035','034','023','022','021','015','013','011','010','009','008','007'; then it would work for me. I have tried it so many ways with no luck. If anyone can help, I'd be thankful. Below is my stored procedure:
CREATE OR REPLACE PROCEDURE SC_SP_SelectTranscribe(
    in_appname       in varchar2,
    in_subappname    in varchar2,
    in_taskname      in varchar2,
    in_functionname  in varchar2,
    in_variablename  in varchar2,
    in_valuename     in varchar2,
    in_variablename2 in varchar2,
    in_valuename2    in varchar2,
    p_cursor         OUT SYS_REFCURSOR)
is
    ln_end  number;
    lv_smv  varchar2(100);
    lv_temp varchar2(100);
    sql_str varchar2(8000);
begin
    ln_end := 0;
    lv_smv := in_valuename;
    ln_end := instr(lv_smv, ',', 1);
    if ln_end = 0 then
        -- This condition becomes true when there is no comma in the value
        lv_temp := lv_smv;
    else
        --lv_temp := replace(lv_smv,'%',chr(39));
        lv_temp := substr(lv_smv, 1, length(lv_smv) - 1);
    end if;
    open p_cursor FOR
        SELECT distinct RecordID
        FROM sc_tbl_apprepository
        WHERE (
            (sc_tbl_apprepository.taskname = in_taskname)
            AND (sc_tbl_apprepository.FUNCTIONNAME = in_functionname)
            AND sc_tbl_apprepository.RecordID in
                (SELECT sc_tbl_apprepository.RecordID
                 FROM sc_tbl_apprepository
                 WHERE (sc_tbl_apprepository.taskname = in_taskname)
                   AND (sc_tbl_apprepository.FUNCTIONNAME = in_functionname)
                   AND (sc_tbl_apprepository.fieldname = in_variablename)
                   -- in_valuename arrives as e.g. '035,034,023,022,021,015,013,011,010,009,008,007'
                   AND (sc_tbl_apprepository.fieldvalue in (substr(lv_smv, 1, length(lv_smv) - 1)))))
        AND sc_tbl_apprepository.RecordID in
            (SELECT sc_tbl_apprepository.RecordID
             FROM sc_tbl_apprepository
             WHERE (sc_tbl_apprepository.taskname = in_taskname)
               AND (sc_tbl_apprepository.FUNCTIONNAME = in_functionname)
               AND (sc_tbl_apprepository.fieldname = in_variablename2)
               AND (sc_tbl_apprepository.fieldvalue = in_valuename2));
end;
|
|
|
|
|
Which client are you using? The Oracle provider that ships with .Net Framework or Oracle's own ODP.NET?
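Regardless of the client, a common Oracle-side workaround for a comma-separated list arriving in one bind variable is to drop the IN and instead check whether the column value occurs inside the delimited string. A sketch of the idea only, reusing the names from the procedure above (not tested against the real table):

```sql
-- in_valuename arrives as one string, e.g. '035,034,023,...'.
-- Wrapping both the list and the column value in commas ensures
-- that '03' does not accidentally match inside '035'.
SELECT DISTINCT RecordID
FROM sc_tbl_apprepository
WHERE fieldname = in_variablename
  AND instr(',' || in_valuename || ',',
            ',' || fieldvalue || ',') > 0;
```

Note that this predicate cannot use an index on fieldvalue; for large tables, passing a proper collection parameter from C# or using a split function would be the cleaner fix.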
|
|
|
|
|
Hi
We are planning to install SQL Server 2008 on a server that already has SQL Server 2005; the operating system is Windows Server 2008. Are there any problems with running both database engines on the same server? We will use a completely different instance name for SQL Server 2008 than for SQL Server 2005.
Has anyone encountered any problems with this?
Thanks
|
|
|
|
|
I'm running both without any problems. Just make sure to use different ports, and remember that only one can be the default instance.
|
|
|
|
|
I've got a table containing around 1,500,000 rows. This table has a column called "Culture" whose value could be either 0 or 1. Right now, all the values are 1.
The following query executes very fast (less than a second):
SELECT TOP 10 * FROM News
WHERE Culture = 1
The following query executes very fast as well:
SELECT TOP 10 * FROM News
WHERE Culture = 0
When I put a variable in where clause, it still executes very fast (less than a sec):
DECLARE @c TINYINT
SET @c = 1
SELECT TOP 10 * FROM News
WHERE Culture = @c
But the following query takes several minutes to complete:
DECLARE @c TINYINT
SET @c = 0
SELECT TOP 10 * FROM News
WHERE Culture = @c
There are two things to mention:
1- There is a nonclustered index which contains Culture field.
2- The same thing happens if I execute the four above-mentioned queries wrapped in a stored procedure.
I'm really confused; I don't know what the problem is...
|
|
|
|
|
I had a very similar issue recently myself. Not exactly sure why, but it was suggested that SQL Server can't fully optimise queries which contain variables, since their values could change as the query executes.
Anyway, the general conclusion and advice I drew was to avoid using variables and to join onto other tables where possible. Have you tried inserting @c into a single-row temporary table and joining on that? I admit it's rather a grubby solution, but then again it is SQL Server and sometimes you have to hold its hand.
Regards,
Rob Philpott.
|
|
|
|
|
Hi Rob,
Thanks for the suggested solution. I'll test it as soon as I get access to the database tomorrow morning.
|
|
|
|
|
Rob Philpott wrote: Have you tried inserting @c into a single line temporary table and joining on that?
Dear Rob,
I used the following query but it didn't make any difference:
DECLARE @Start INT, @Count INT
SET @Start = 10
SET @Count = 5
DECLARE @c TINYINT
SET @c = 0;
WITH paging AS (
SELECT id, title, [description], ROW_NUMBER() OVER (ORDER BY Date DESC) rownum
FROM news
JOIN (SELECT @C AS Cult) AS CC ON news.Culture = CC.Cult
--WHERE Culture = @c
)
SELECT * FROM paging WHERE rownum BETWEEN @Start AND (@Start + @Count - 1)
modified on Wednesday, January 28, 2009 3:03 AM
|
|
|
|
|
Right, might be barking up the wrong tree then. How about using a proper temporary table, such as:
declare @c tinyint
set @c = 0
create table #c (Culture tinyint)
insert into #c select @c
select id, title, [description] from news join #c on news.Culture = #c.Culture
drop table #c
Regards,
Rob Philpott.
|
|
|
|
|
Have you compared the execution plans? That should shed some light on how SQL Server is processing them differently and what it's doing that makes the slow one take longer.
Hope in one hand and poop in the other; see which fills up first. Hope and change were good slogans, now show us more than words.
|
|
|
|
|
Hi Ben,
I couldn't see the execution plan for the long-running query since it hadn't finished after 25 minutes and I couldn't wait any longer.
|
|
|
|
|
It looks like Mika has helped you very well, and hopefully you're on your way to resolving the problem. I should've been more explicit in my message that I was referring to the Estimated Execution Plan rather than the Actual Execution Plan. For the Estimated execution plan, the query is not actually run, it's just parsed and analyzed. It's handy to see how SQL Server plans on executing the query, which may be different than the execution plan that it actually uses. However, I've found it to be a very good resource in situations like this where something is long-running and the actual execution plan is hard to come by.
|
|
|
|
|
The problem is that the data is unevenly distributed.
Another part of the problem is that the optimizer doesn't see the values for bind variables as it does when they are literals.
A few questions:
- Are the row counts significantly different for Culture 1 versus Culture 0?
- How many different values for Culture do you have?
The first thing you should do is run UPDATE STATISTICS News. After doing that, could you let us know whether it helped, along with the answers to those questions?
Mika
|
|
|
|
|
Hi Mika,
As I said before, all the Culture values are 1 at this time, but in the future other values like 0, 2 and 3 will be added.
Unfortunately I don't have access to the database right now (because I'm at home), but tomorrow morning I'll run UPDATE STATISTICS [TableName] to see if it helps. I forgot to say that I inserted these 1.5 million rows into the local database with a data generator application in about 5 minutes, and this might be the cause of the problem (maybe the statistics couldn't be updated correctly).
Thanks
|
|
|
|
|
Ok. Also make sure that you have auto create statistics and auto update statistics on. Even when those database options are on, there's still a need to update statistics regularly, since the auto-update algorithm isn't very precise (which can lead to falsely biased histograms).
One thing about your query: you specified TOP 10, but TOP 10 in what order? You didn't have any ORDER BY in the statement. Was that intentional? Without an ORDER BY, it just returns an arbitrary 10 rows matching your criteria (in practice most likely the same rows every time).
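The statistics options mentioned above can be checked and set with something like this (a sketch; 'MyDb' stands in for your actual database name):

```sql
-- See whether auto create / auto update statistics are on:
SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases
WHERE name = 'MyDb';

-- Turn them on if they are not:
ALTER DATABASE MyDb SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS ON;
```

Even with both on, a manual UPDATE STATISTICS after a bulk load (like the 1.5 million generated rows here) is still worthwhile.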
|
|
|
|
|
No, it was not intentional. The actual query uses pagination (ROW_NUMBER()) and the rows are ordered by the Date column; this was just a simplified sample query.
|
|
|
|
|
That's what I suspected. And now we're actually talking about a whole different story.
Now if you modify the query and use a date column, for example something like:
DECLARE @c TINYINT
SET @c = 0
SELECT TOP 10 *
FROM News
WHERE Culture = @c
ORDER BY AdditionDate
you need to add a new index to your table on the columns Culture, AdditionDate. With that modification to your query, the filter factor will be dramatically different, so the index will be efficient.
There's a simple rule of thumb (correct in most cases): if the filter factor for an index is less than about 5%, using the index is efficient; above that it's not. So in your first example, if you had 4 different cultures and they were evenly distributed, the filter factor would be 25%. In that case using the index on (Culture) would be a very poor choice by the optimizer, but if it does use the index in that scenario (as I suspect it did), you will suffer from poor performance. In such cases actually removing the index can make performance better (unless the index is used in other scenarios). The reason some of your tests performed better is that the data is unevenly distributed.
So what I suggest is that you first modify the query to its final version and begin optimization after that (if still needed). What I said about UPDATE STATISTICS still applies.
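The composite index described above could be created with something like this (a sketch; AdditionDate stands in for whatever the date column is actually called, and the index name is made up):

```sql
-- Supports WHERE Culture = @c ORDER BY AdditionDate:
-- the equality column first, then the ordering column.
CREATE NONCLUSTERED INDEX IX_News_Culture_AdditionDate
ON News (Culture, AdditionDate);
```

With this index the optimizer can seek on Culture and read the rows already ordered by the date, so a TOP 10 can stop after ten rows instead of scanning and sorting.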
|
|
|
|
|
Dear Mika,
Here is the most similar query to the actual working one in our database:
DECLARE @Start INT, @Count INT
SET @Start = 10
SET @Count = 5
DECLARE @c TINYINT
SET @c = 1;
WITH paging AS (
SELECT id, title, description, ROW_NUMBER() OVER (ORDER BY Date DESC) rownum
FROM News
WHERE Culture = @c
)
SELECT * FROM paging WHERE rownum BETWEEN @Start AND (@Start + @Count - 1)
The above query runs very fast, but when I set @c to 0 it becomes a long-running query. From the execution plan it can be seen that only the Date index is used while the Culture index is not. I have uploaded an image of the execution plan of the above query here[^] for you to have a look at.
I also ran "UPDATE STATISTICS News" which made no difference.
Thanks for any help
modified on Wednesday, January 28, 2009 2:49 AM
|
|
|
|
|
How many rows in the table have Culture 0 and how many have Culture 1? Also, is the estimated execution plan the same in both cases?
|
|
|
|
|