|
Hi, thank you for your suggestion.
This line:
You can use Sql Server to prepare the dataset (and yes, it can operate on the previous record), and Excel to visualize that data.
------------------------------------
Consider this scenario:
1. I use SQL to get some data and fill it into Excel; call it dataset1.
2. In Excel, I do some computation and get dataset2.
3. I need to join dataset2 with a table in the database to generate dataset3.
How would I do that? Create a new table in the database, load dataset2 into it, run a join statement, and then export dataset3 back to Excel? No, that's far too much trouble.
So I need a tool that works like SQL+Excel.
|
|
|
|
|
bestbird7788 wrote: How would I do that?
Create a linked server to both Excel files, and do a SELECT with a UNION into your Sql Server.
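Roughly like this; a minimal sketch, assuming the workbook lives at C:\data\dataset2.xlsx with the data on Sheet1 (path, sheet, and table names are hypothetical, and the ACE OLE DB provider must be installed):
---------------------------------------------------------
-- Register the workbook as a linked server:
EXEC sp_addlinkedserver
    @server     = 'XLDATA',
    @srvproduct = 'Excel',
    @provider   = 'Microsoft.ACE.OLEDB.12.0',
    @datasrc    = 'C:\data\dataset2.xlsx',
    @provstr    = 'Excel 12.0';

-- Join the sheet (dataset2) against a regular table to produce dataset3:
SELECT x.productID, x.computedValue, t.someColumn
FROM XLDATA...[Sheet1$] AS x
JOIN dbo.someTable AS t ON t.productID = x.productID;
---------------------------------------------------------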
bestbird7788 wrote: No, that's far too much trouble.
So I need a tool that works like SQL+Excel.
Good luck
Bastard Programmer from Hell
|
|
|
|
|
bestbird7788 wrote: What would you great expert recommend?
That you adjust your mind set.
You seem to think there is something that is easy yet solves everything. That isn't possible.
The reason programming languages exist is that complex (i.e., many-step) problems can only be solved with complex implementations.
So what you can do is one of the following:
1. Find an existing application (like excel) and limit your problems to ones that work in that.
2. Use a programming language and solve anything.
3. Use a mix of 1 and 2 based on the needs of the problem.
Obviously there are disadvantages to each, but that does cover the possibilities.
|
|
|
|
|
Thank you for your advice.
I know you are right, there is no easy way to solve my problem.
I think SQL is easier than programming languages like VB and C++ for common bulk data computation, so I wondered whether there is a tool that is easier than SQL for multi-step computation on large data sets. That's why I posted here, hoping to hear from someone who knows more than I do.
DMTL may be a good choice; I will research it.
|
|
|
|
|
If money is no issue, you can always get Qlikview[^].
|
|
|
|
|
Thank you,
I will check it.
I have 3 questions about QlikView, and I would appreciate some tips.
1. Can QlikView solve the problem above: compute the products whose annual sales values are among the top 100 in every year?
MSSQL data structure (sales table fields): productID, time, value
The SQL solution is as below:
---------------------------------------------------------
WITH sales1 AS (
    SELECT productID, YEAR(time) AS year, SUM(value) AS value1
    FROM sales
    GROUP BY productID, YEAR(time)
)
SELECT productID
FROM (
    SELECT productID
    FROM (
        SELECT productID, RANK() OVER (PARTITION BY year ORDER BY value1 DESC) AS rankorder
        FROM sales1
    ) T1
    WHERE rankorder <= 100
) T2
GROUP BY productID
HAVING COUNT(*) = (SELECT COUNT(DISTINCT year) FROM sales1)
2. Does QlikView have stepwise ability? What I mean is: step 1, filter the data; step 2, group the data from step 1; step 3, filter again on the result of step 2; and, most importantly, step 4, sort the result of step 1 as a sub-step. It's like what Excel does.
3. Does QlikView support computation across multiple datasets without SQL, like join statements in SQL?
Thank you for your reply.
|
|
|
|
|
Hi, this is Chandra. I have an issue with a SQL stored procedure.
Please find the query below.
declare @Measure varchar(50)
declare @Time varchar(50)
declare @Manufacturer varchar(50)
declare @CategoryID int
SET @Measure ='Value'
SET @Time ='4 Weeks'
SET @Manufacturer ='Jti Gallaher Ltd'
SET @CategoryID = 377;
Declare @SELECTtQuery1 varchar(1000)
Declare @FromQuery1 varchar(1000)
Declare @GROUPBYQuery1 varchar(1000)
Declare @SQLQuery AS NVarchar(4000)
Declare @ParamDefinition AS NVarchar(2000)
SET @FromQuery1 = 'FROM [dbo].[iView_Data_TCG_Aggregated_Product]
WHERE PeriodName = '''+@Time+'''
AND Manufacturer = '''+@Manufacturer+''' AND CategoryId = '''+ @CategoryID+
print @FromQuery1
When I execute this query I get a conversion error. Please find the error below.
----------------------
Conversion failed when converting the varchar value ' FROM [dbo].[iView_Data_TCG_Aggregated_Product]
WHERE PeriodName = '4 Weeks'
AND Manufacturer = 'Test1' AND CategoryId = '' to data type int.
----------------------------------------------
I am not getting the CategoryId value when I execute; it gives an error.
Please suggest what to do.
Thanks
Chandrakanth
|
|
|
|
|
A few things:
- Since you're concatenating things together, @CategoryID will have to be converted to a string. Otherwise, because int has higher data-type precedence than varchar, SQL Server tries to treat the + as addition and convert the string portion to an int.
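You can see the same failure with a one-line test (a quick illustration, not from the original post):
---------------------------------------------------------
SELECT 'abc' + 1;                        -- fails: cannot convert 'abc' to int
SELECT 'abc' + CAST(1 AS varchar(10));   -- works: returns 'abc1'
---------------------------------------------------------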
Try changing the SET to this:
SET @FromQuery1 = 'FROM [dbo].[iView_Data_TCG_Aggregated_Product]
WHERE PeriodName = '''+@Time+'''
AND Manufacturer = '''+@Manufacturer+''' AND CategoryId = '''+ cast(@CategoryID as varchar(10))
- There was a dangling plus sign at the end, which I removed from what you posted just to get it to execute.
- If this is going to be a valid query, you're missing the SELECT clause with the columns to select.
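Since you already declare @SQLQuery and @ParamDefinition, you could also skip the concatenation entirely and pass the values as parameters to sp_executesql. A minimal sketch (SELECT * stands in for whatever column list you actually need):
---------------------------------------------------------
DECLARE @SQLQuery nvarchar(4000);
SET @SQLQuery = N'SELECT *
    FROM [dbo].[iView_Data_TCG_Aggregated_Product]
    WHERE PeriodName = @pTime
      AND Manufacturer = @pManufacturer
      AND CategoryId = @pCategoryID';

-- The parameters keep their types; no CAST or quote-doubling needed:
EXEC sp_executesql @SQLQuery,
    N'@pTime varchar(50), @pManufacturer varchar(50), @pCategoryID int',
    @pTime = @Time, @pManufacturer = @Manufacturer, @pCategoryID = @CategoryID;
---------------------------------------------------------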
Scott
|
|
|
|
|
Hi Scott, thanks a lot. I got it; it is working fine on my end.
Thanks
Chandra
|
|
|
|
|
My boss just told me that I could have a table in the database that receives about 3x10^9 records a year (3 000 000 000). I could split this table into roughly four equal parts, but that still leaves a lot. The user can choose to view just a day, a year, or even the entire history (e.g. the last 2 years, which still means 1 500 000 000 records).
I generally held to a rule of keeping a production database at a maximum of around 100 000 to 1 000 000 records total and putting the rest in history, but that doesn't seem to be an option here, and I'm very worried about the performance.
Has anyone got any experience with a database of this size, and how to solve the issue of getting the data quickly?
(Additional info: the data are datapoints that need to be plotted on a graph for a certain time period. Performance is one of the top priorities.)
I see two major problems:
1. Keeping the table size reasonable: how do I split the data into different tables easily?
2. Getting the data to the user (e.g. create an averaged graph on the server, because a graph with more than, say, 1000-2000 points isn't useful on screen anyway).
Many thanks in advance for any advice.
(PS: The only thing I could find on Google is splitting into smaller tables, but that doesn't immediately solve the problem of sending the data to the user.)
V.
|
|
|
|
|
Look into partitioning, probably by month.
Your users will NEVER look at 100k rows, let alone 2 years' worth. They may wish to analyse that volume, but they will not "view" it!
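In SQL Server, monthly partitioning means a partition function plus a partition scheme; a minimal sketch with hypothetical names (boundaries must be extended as months are added, and note that table partitioning is an Enterprise-edition feature in older SQL Server versions):
---------------------------------------------------------
CREATE PARTITION FUNCTION pfMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-02-01', '2012-03-01');

CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);

-- Rows are routed to a partition by the datetime column:
CREATE TABLE dbo.DataPoints (
    pointTime datetime NOT NULL,
    value     float    NOT NULL
) ON psMonthly (pointTime);
---------------------------------------------------------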
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Actually, what I think will happen is that the "view" on large amounts will be averaged: for example, one point per month or week.
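That averaging can be done server-side with a plain GROUP BY; a minimal sketch producing one point per week (table and column names are hypothetical):
---------------------------------------------------------
DECLARE @from datetime = '2011-01-01', @to datetime = '2012-01-01';

-- DATEDIFF/DATEADD truncate each timestamp to the start of its week:
SELECT DATEADD(week, DATEDIFF(week, 0, pointTime), 0) AS weekStart,
       AVG(value) AS avgValue
FROM dbo.DataPoints
WHERE pointTime >= @from AND pointTime < @to
GROUP BY DATEADD(week, DATEDIFF(week, 0, pointTime), 0)
ORDER BY weekStart;
---------------------------------------------------------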
I do realize I'll need to talk to the boss.
V.
|
|
|
|
|
Using SQL Server to deal with large volumes of data takes some serious planning. If you need to get into this area, I would engage a professional; while I consider myself a good SQL developer, I would not undertake such a project without professional support.
We had such a requirement and ended up using Oracle. It is faster and more robust, but you HAVE to have a full-time DBA.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Mycroft Holmes wrote: Look into partitioning, probably by month.
Good call.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
nils illegitimus carborundum
me, me, me
|
|
|
|
|
V. wrote: how to solve the issue of getting the data quickly?
Sql Server can handle millions of rows, but it won't perform at all if you put a single scalar user-defined function in the T-SQL. As long as you're not doing a complex query, performance shouldn't be too much of a problem.
It might pay to create a test-environment with random data to identify bottlenecks as early as possible. With large amounts of data, I'd suggest large amounts of RAM (so that Sql Server can keep the large indexes in memory).
Bastard Programmer from Hell
|
|
|
|
|
V. wrote: The user can choose to view just a day, a year, or even the entire history (e.g. the last 2 years, which still means 1 500 000 000 records).
Nonsense.
Simple math will demonstrate that a user (person) will NEVER look at that many records at once.
They will either look at a summary, or a summary with drill-downs, or they will be looking for a very, very small subset of that.
V. wrote: Has anyone got any experience with a database of this size, and how to solve the issue of getting the data quickly?
Get real requirements.
V. wrote: The data are datapoints that need to be plotted on a graph for a certain time period.
Excellent example. Graph 1.5 billion data points (from your first requirement) - so exactly how many pixels are on your screen? Again, simple math will demonstrate that you can't view that many data points on a graph. So either there will be a much smaller time period or a summary of the entire period.
If a summary, one solution is to build summary tables. So, for example, at the end of every day you create summary data for that day. Then a graph that displays whole days uses the summary table rather than the raw data.
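As a rough illustration of that idea (all table and column names are hypothetical, and scheduling the nightly job is left out):
---------------------------------------------------------
CREATE TABLE dbo.DailySummary (
    summaryDate date  NOT NULL PRIMARY KEY,
    avgValue    float NOT NULL,
    minValue    float NOT NULL,
    maxValue    float NOT NULL
);

-- Run once per day (e.g. from a SQL Agent job) to roll up yesterday's raw points:
INSERT INTO dbo.DailySummary (summaryDate, avgValue, minValue, maxValue)
SELECT CAST(pointTime AS date), AVG(value), MIN(value), MAX(value)
FROM dbo.DataPoints
WHERE pointTime >= CAST(DATEADD(day, -1, GETDATE()) AS date)
  AND pointTime <  CAST(GETDATE() AS date)
GROUP BY CAST(pointTime AS date);
---------------------------------------------------------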
V. wrote: 1. Keeping the table size reasonable: how do I split the data into different tables easily?
You start with real usage patterns. For example, what percentage of the time does a user want to look at data for the last week, month, or year? Or hour, day, week? Or by collection point per week? Etc., etc.
Second, you get realistic estimates and anticipated growth. Is 3 billion the average or the maximum? Will it be 3 billion next year, or 30 billion? Keep in mind it must be realistic, not pie in the sky.
Third, how long must you keep it? 1 year, 10 years, 100 years?
And if no one seems willing to discuss reality, then you might want to look into price tags for a really huge SAN system and submit a request to buy it, because you will need it for testing - and the production system will need it to run.
|
|
|
|
|
jschell wrote: Excellent example. Graph 1.5 billion data points (from your first requirement) - so exactly how many pixels are on your screen?
In the meantime we already got there, and I told my boss we need to downsample the information.
V.
|
|
|
|
|
When designing the ERD, let's consider the following case:
We have two entities, (management_board) and (person). A person can't be a member of more than one management board at once, but he can be a member of management_board_1 in 2011 and then a member of management_board_2 in 2012. So is the relation between (management_board) and (person) one-to-many or many-to-many?
|
|
|
|
|
If he can be in only 1 management board at a time, then it is one-to-many.
If you need a history of membership, then it requires a different design: either many-to-many or an archive record, as in the sketch below.
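A minimal sketch of the many-to-many history variant (all names are hypothetical; the "one board at a time" rule becomes a no-overlapping-date-ranges constraint that a trigger or the application has to enforce):
---------------------------------------------------------
CREATE TABLE membership (
    person_id  int  NOT NULL,   -- references person
    board_id   int  NOT NULL,   -- references management_board
    start_date date NOT NULL,
    end_date   date NULL,       -- NULL = current membership
    PRIMARY KEY (person_id, board_id, start_date)
);
---------------------------------------------------------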
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hi All,
I am a newbie to databases and want to get all table names and then the columns of the respective tables using the following lines.
DECLARE CURSOR TEMPCURSOR is Select table_name from all_tables where owner='FSS_ADMIN' order by TABLE_NAME;
begin
open TEMPCURSOR;
for P in TEMPCURSOR loop
select COLUMN_NAME from user_tab_columns where table_name= p.table_name;
end loop;
end;
But while executing the above lines, the following error occurs:
Error starting at line 1 in command:
DECLARE CURSOR TEMPCURSOR is Select table_name from all_tables where owner='FSS_ADMIN' order by TABLE_NAME;
begin
open TEMPCURSOR;
for P in TEMPCURSOR loop
select COLUMN_NAME from user_tab_columns where table_name= p.table_name;
end loop;
end;
Error report:
ORA-06550: line 5, column 5:
PLS-00428: an INTO clause is expected in this SELECT statement
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
*Action:
I googled to understand the issue but could not figure it out, so I'm asking you all for help; I will be very thankful for your support.
With regards
R e h a n
|
|
|
|
|
Your syntax appears to be incorrect. Read this[^] to see how to construct a cursor.
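For reference, a minimal corrected sketch: a cursor FOR loop opens and closes the cursor by itself (so the explicit OPEN has to go), and a bare SELECT inside PL/SQL needs an INTO clause - or, as here, its own FOR loop. all_tab_columns is used instead of user_tab_columns so the owner filter matches:
---------------------------------------------------------
SET SERVEROUTPUT ON;
DECLARE
    CURSOR tempcursor IS
        SELECT table_name FROM all_tables
        WHERE owner = 'FSS_ADMIN' ORDER BY table_name;
BEGIN
    FOR p IN tempcursor LOOP
        FOR c IN (SELECT column_name FROM all_tab_columns
                  WHERE owner = 'FSS_ADMIN' AND table_name = p.table_name) LOOP
            dbms_output.put_line(p.table_name || '.' || c.column_name);
        END LOOP;
    END LOOP;
END;
/
---------------------------------------------------------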
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
nils illegitimus carborundum
me, me, me
|
|
|
|
|
Hi,
How can I get the full record that is deleted/updated via exec sp_trace_setevent @TraceId?
I mean:
I have a table with these columns:
Id int, Name nvarchar(max), Family nvarchar(max)
with the values:
1,jojoba,alin
2,babk,babyi
...
Now, when I update like this:
UPDATE tblName
SET Name='okki', Family='koki' WHERE Id=2
I want this in my trace:
2,babk,babyi Deleted
2,okki,koki Inserted
|
|
|
|
|
Still working on that audit?
Stop wasting time. If your customer really needs the old values of the record, then you'd be best off doing only inserts: somebody wants to update a record? Mark the current one as outdated (datetime) and insert a new one, as in the sketch below.
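A minimal sketch of that pattern, using the table from your example (this assumes Id is not the sole primary key, since several versions of a row will share it):
---------------------------------------------------------
ALTER TABLE tblName ADD OutdatedOn datetime NULL;  -- NULL = current version

-- Instead of updating row Id=2 in place:
BEGIN TRANSACTION;
UPDATE tblName SET OutdatedOn = GETDATE()
WHERE Id = 2 AND OutdatedOn IS NULL;               -- retire the current version
INSERT INTO tblName (Id, Name, Family)
VALUES (2, 'okki', 'koki');                        -- insert the new version
COMMIT;
---------------------------------------------------------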
Yes, that'll make your database huge, but logging every action would result in even more data, since you'd not only be storing the new values (ignoring the old) but also the command.
You best rethink your requirements here.
Bastard Programmer from Hell
|
|
|
|
|
Thanks for your attention!
But that's the end of the story.
Is it possible to get that info via sp_trace_setevent?
I know that it is in the LDF file.
Please give me some help so I can close the question.
|
|
|
|
|
jojoba2011 wrote: Is it possible to get that info via
Not as far as I can see.
Text value dependent on the event class captured in the trace. However, if you are tracing a parameterized query, the variables will not be displayed with data values in the TextData column.
jojoba2011 wrote: I know that it is in the LDF file
How did you know?
Bastard Programmer from Hell
|
|
|
|