|
Hello experts,
I'm trying to design a framework for generating reports by storing the metadata in an XML file.
The following are the data sources (for both SQL Server and Oracle providers).
1) Tables/Views
2) Stored Procedures
3) OLAP Cubes
I'll write a wrapper to sync changes to these Tables/Views/SPs/Cubes so that the XML is up to date.
Tables/Views
============
The user will be able to store a subset of the columns and filter criteria (one or many), and should be able to run the same report at a later stage.
Stored Procedures
=================
Whenever the user wants to run the stored procedure, he/she will supply the parameters and get the output.
OLAP Cubes
==========
Predetermined cubes and/or cubes with specific members as parameters.
I'm thinking of the following approach, using the strategy pattern for retrieving data:
1) IReportData: Gets the report data in a dataset/cellset
2) IReportParameters: Gets report parameters (for SPs?)
3) IReportFilters: Report filters / criteria / criteria set
4) IReportSQL: The final executable SQL
5) IReportFormat: Final rendering methodology
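A rough sketch of how those strategies might compose (Python used purely for illustration; the class names, methods, and the table/column values below are placeholders of mine, not part of your actual design):

```python
from abc import ABC, abstractmethod

class IReportData(ABC):
    """Strategy for retrieving the report data (dataset/cellset)."""
    @abstractmethod
    def get_data(self, sql, parameters):
        ...

class IReportSQL(ABC):
    """Strategy for building the final executable SQL."""
    @abstractmethod
    def build_sql(self, filters):
        ...

class TableReportSQL(IReportSQL):
    """Builds a SELECT over a table/view from stored metadata
    (the column subset and filters the user chose to save)."""
    def __init__(self, table, columns):
        self.table = table
        self.columns = columns

    def build_sql(self, filters):
        where = " AND ".join(filters) if filters else "1=1"
        return f"SELECT {', '.join(self.columns)} FROM {self.table} WHERE {where}"

class Report:
    """Composes the strategies; each source type (table/view, SP, cube)
    plugs in its own IReportSQL / IReportData implementations."""
    def __init__(self, sql_strategy, data_strategy):
        self.sql_strategy = sql_strategy
        self.data_strategy = data_strategy

    def run(self, filters, parameters=None):
        sql = self.sql_strategy.build_sql(filters)
        return self.data_strategy.get_data(sql, parameters or {})
```

The point of the pattern here is that a saved XML report definition only has to name which strategy implementations to instantiate, plus their metadata.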
What do you guys think about this? Are there any design pattern(s) you would recommend?
Thanks,
RK.
You can never try. You either do it or you don't.
modified on Saturday, September 5, 2009 3:02 PM
|
|
|
|
|
You might want to take this to the design forum for a wider response.
Ever since I first heard of XML, way back in the 90s, I figured you should be able to use XSLT to format a report and then chuck an xml data file at it for generic type reports. Never followed up on it as I loathe XML.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Thank you! I posted it in the design and architecture forum. I did not know it was there.
You can never try. You either do it or you don't.
|
|
|
|
|
Hi everyone.
For my current employer I've developed a few systems and deployed them. The deployment of the databases themselves has always been fairly easy. I use migratordotnet to update the schema as I continually develop these systems. That bit works great for me and I've never had any problems.
The problem I am starting to have, though, is that the number of releases has come down to maybe 3 or 4 a year, and the schema structures between releases are usually vastly different. This gives me a real headache when I need to migrate the data from a previous version to the latest schema. At the moment I'm writing custom migrations for each release; however, the database now has over 60 tables, so these migrations are getting very long and more and more bugs are starting to appear.
So my question is: how does everyone else tackle this problem? Are there industry-standard approaches that I'm not aware of? Are there standard tools that I'm not using that would help with this?
Any help / links to docs would be great!
Phil
|
|
|
|
|
The basic problem with your predicament is that even if you could find a third-party tool to do the migrations for you, you would still always have to do the mapping and any conversions yourself, which I suppose is the main part of the work anyway.
I think your best bet is either to start designing your database preemptively, or to employ an add-and-recycle strategy when writing applications (i.e. only add new tables and/or columns as necessary, and as much as possible recycle existing tables and columns by renaming and/or converting them). This way you can just leave the existing databases be, and maybe clean them up a little every once in a while when you have time.
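To illustrate why add-only changes are so much cheaper: a new nullable column needs no data migration at all, since existing rows just report NULL until the application fills the value in. (A quick sketch using sqlite3 as a stand-in for your real database; the table and column names are made up.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (name) VALUES ('Alice')")

# Add-only change: existing rows are untouched; the new column is simply
# NULL for them, so no migration script has to rewrite old data.
conn.execute("ALTER TABLE customer ADD COLUMN email TEXT")

row = conn.execute("SELECT name, email FROM customer").fetchone()
print(row)  # ('Alice', None)
```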
Admittedly, I may not have understood the full depth of your plight.
My advice is free, and you may get what you paid for.
|
|
|
|
|
No, I think you've hit the nail on the head. I've asked a few friends who work in different businesses what they do, and they say the same: basically I either have to write a migration or only ever add to the schema (which I hadn't thought of as an approach, so thanks for that).
Oh well, I'll struggle on with these queries then.
Thanks
Phil
|
|
|
|
|
This may or may not be of use to you, but for example our SAP system uses a control database.
In other words, you could make one database that just contains information about the schemas of your other application databases. This can help you keep better track of the different columns, formats, etc. that you have; it can help you make development charts and upgrade/migration plans; you can use it to run database checks (to see whether an application database has the correct schema, and such); and you can use it to build and store migration queries (or templates) there.
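As a rough illustration of the control-database idea (sqlite3 here; the registry layout and all names are just assumptions of mine): the control database stores the expected schema, and a check compares it against what an application database actually has.

```python
import sqlite3

# Control database: one row per expected (table, column) pair.
control = sqlite3.connect(":memory:")
control.execute("CREATE TABLE expected_schema (tbl TEXT, col TEXT)")
control.executemany(
    "INSERT INTO expected_schema VALUES (?, ?)",
    [("customer", "id"), ("customer", "name"), ("customer", "email")],
)

# An application database that has fallen behind (no email column yet).
app_db = sqlite3.connect(":memory:")
app_db.execute("CREATE TABLE customer (id INTEGER, name TEXT)")

def missing_columns(control, app_db):
    """Columns the control database expects but the application database lacks."""
    missing = []
    for tbl, col in control.execute("SELECT tbl, col FROM expected_schema"):
        # PRAGMA table_info rows are (cid, name, type, notnull, dflt, pk).
        actual = {r[1] for r in app_db.execute(f"PRAGMA table_info({tbl})")}
        if col not in actual:
            missing.append((tbl, col))
    return missing

print(missing_columns(control, app_db))  # [('customer', 'email')]
```

The same registry could then drive generation of the ALTER statements needed to bring the application database up to date.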
Another nice advantage is, if you ever have to build multiple language GUIs (english and spanish for example), you can use it to store common translations for columns and controls.
Good luck with it.
My advice is free, and you may get what you paid for.
|
|
|
|
|
Phillip Donegan wrote: maybe 3 or 4 a year
I would think that if you are deploying a major schema change (vastly different) every 3-4 months, then either you have an extremely dynamic business model or somebody got the design wrong, and continues to get it wrong.
I usually work on single-project contracts and have almost always used the migrate method, a PITA, but then the business does not expect to repeat it, ever. However, I am currently in a position where I am using the incrementing schema, as the changes are fairly minor over a long period. The problem I find is that the business does not necessarily keep up with the changes; be prepared for "why is this info not being updated anymore?" type questions.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
You know how it is: a request for a change or enhancement comes in that requires some new tables. Do this 5 or 6 times and you spot a better way of doing it, so you decide to change the schema; then your old data is screwed, so you have to write a monster migration!!
|
|
|
|
|
I now find I have quite a few "attribute" tables in my schemas. Customer now wants a 2nd mobile? Add a phone attribute linked to the customer table.
I have one app that is ALL attributes; it tends to get nasty when trying to retrieve data, with a monstrous number of joins. Which goes to prove you can go too far down that path.
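To show the join explosion concretely: with an attribute table, every extra attribute you want back as a column costs another self-joined copy of that table (sqlite3, with made-up table and attribute names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE customer_attr (customer_id INTEGER, attr TEXT, value TEXT);
INSERT INTO customer VALUES (1, 'Alice');
INSERT INTO customer_attr VALUES (1, 'mobile',  '555-0100');
INSERT INTO customer_attr VALUES (1, 'mobile2', '555-0199');
""")

# One LEFT JOIN per attribute pulled out as a column; ten attributes
# would mean ten joins, which is where this design starts to hurt.
row = conn.execute("""
    SELECT c.name, a1.value AS mobile, a2.value AS mobile2
    FROM customer c
    LEFT JOIN customer_attr a1 ON a1.customer_id = c.id AND a1.attr = 'mobile'
    LEFT JOIN customer_attr a2 ON a2.customer_id = c.id AND a2.attr = 'mobile2'
""").fetchone()
print(row)  # ('Alice', '555-0100', '555-0199')
```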
|
|
|
|
|
Hello,
I wrote a query (a combination of 2 queries using UNION):
query1
UNION
query2
I want the result of query2 (there is only one row in query2) to be placed in the last row of the results of the combined query.
I mean the results of query1 should appear in the first rows and the result of query2 in the last row.
How can I do it?
|
|
|
|
|
Example
select * from table1
union
select * from table2
I Love T-SQL
"Don't torture yourself,let the life to do it for you."
If my post helps you kindly save my time by voting my post.
www.aktualiteti.com
|
|
|
|
|
OK,
I know that.
But the problem is that the results come out mixed.
I want the results of the second query to be last (in the last rows of the combined result table).
|
|
|
|
|
Remember that SQL unions are based on set theory, so if value A comes from table 1 and value A also comes from table 2, you would only get one value A in your result set. By introducing the sort indicator that "Ashfield" suggests, you would get 2 values of A: you would get 1,A and 2,A.
I'm not sure if this would impact your algorithm, but I just wanted to make you aware of it.
Cheers,
David
|
|
|
|
|
The easiest way I know of is to introduce a sorting column:
select '1' as seq, * from table1
union
select '2' as seq, * from table2
order by seq
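A quick way to see the seq approach work (sqlite3 standing in for SQL Server; table names and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (val TEXT);
CREATE TABLE table2 (val TEXT);
INSERT INTO table1 VALUES ('b'), ('a');
INSERT INTO table2 VALUES ('total');
""")

# The seq column forces table2's row to sort after every table1 row,
# no matter how the engine would otherwise order the union's output.
rows = conn.execute("""
    SELECT '1' AS seq, val FROM table1
    UNION
    SELECT '2' AS seq, val FROM table2
    ORDER BY seq
""").fetchall()
print(rows)  # table2's row ('2', 'total') comes last
```

Note that within the same seq value the row order is still up to the engine; add more columns to the ORDER BY if the table1 rows themselves need a fixed order.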
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
|
|
|
|
|
Good Day All
I have SQL 2000 and SQL 2008 on one machine. At first there was only SQL 2000, then I installed SQL 2008. Now we have an ilog application written in C++; I'm not sure what that developer uses to connect to SQL Server. The application can connect to SQL 2000 but not to SQL 2008.
I had a thought that the default instance still points to SQL 2000.
I don't know what is wrong.
Thanks
Vuyiswa Maseko,
Few companies that installed computers to reduce the employment of clerks have realized their expectations.... They now need more and more expensive clerks even though they call them "Developers" or "Programmers."
C#/VB.NET/ASP.NET/SQL7/2000/2005/2008
http://www.vuyiswamaseko.com
vuyiswa@its.co.za
http://www.itsabacus.co.za/itsabacus/
|
|
|
|
|
Yes... if you have already installed SQL Server 2000, the default instance will be held by it. You can only install 2008 as a named instance (unless you configure setup to uninstall the 2000 default instance).
I think the application points to the default instance. Can you check the connection string of the application to trace how it is connecting?
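For example, a client reaching the 2008 side would need the instance name in its connection string (the instance name `SQL2008` below is only a guess; the actual name is shown in SQL Server Configuration Manager):

```
Server=MYMACHINE;Database=MyDb;Trusted_Connection=True;          -- default instance (the 2000 install)
Server=MYMACHINE\SQL2008;Database=MyDb;Trusted_Connection=True;  -- named instance (the 2008 install)
```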
|
|
|
|
|
Thank you. The problem was not on the SQL side. We have a C++ app, and the developer is developing it in his own way; I think the connection string is hard-coded, and that is a bad idea. But the problem is now resolved: I had to check the registry to fix it.
Thanks
Vuyiswa Maseko,
Few companies that installed computers to reduce the employment of clerks have realized their expectations.... They now need more and more expensive clerks even though they call them "Developers" or "Programmers."
C#/VB.NET/ASP.NET/SQL7/2000/2005/2008
http://www.vuyiswamaseko.com
vuyiswa@its.co.za
http://www.itsabacus.co.za/itsabacus/
|
|
|
|
|
All,
I have been reading and trying different ways to get this to work, but the gist is that I am trying to create a temp table that selects columns from multiple tables.
I can't see how a join would work, and I have tried passing the columns from Table2 as a variable... no luck.
The goal is to create one big table I can export off to a file.
I am using SQL 2008 Ent.
Here is what I have:
BEGIN
If NOT Exists(Select * from tempdb..sysobjects Where id = object_id('tempdb.dbo.#temp'))
SELECT firstname,lastname, INTO #TEMP FROM Table1
Select email, username into #temp from Table2
END
Regards,
Hulicat
|
|
|
|
|
Do you get any error message? Can you explain better your problem?
I Love T-SQL
"Don't torture yourself,let the life to do it for you."
If my post helps you kindly save my time by voting my post.
www.aktualiteti.com
|
|
|
|
|
Thanks, I think I got it... I was trying to get one result set from multiple tables.
I needed to use a cross join...
Regards
Regards,
Hulicat
|
|
|
|
|
;with cte as(select t1.*, t2.* from table1 t1 cross join table2 t2)
select * from cte
OR
;with cte as(select t1.*, t2.* from table1 t1 ,table2 t2)
select * from cte
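To see what the cross join gives you in one result set (sqlite3 as a stand-in; the table layouts match the ones from the question, data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (firstname TEXT, lastname TEXT);
CREATE TABLE table2 (email TEXT, username TEXT);
INSERT INTO table1 VALUES ('Ann', 'Lee');
INSERT INTO table2 VALUES ('ann@example.com', 'ann');
""")

# CROSS JOIN pairs every row of table1 with every row of table2, so the
# combined columns come back as a single result set ready for export.
rows = conn.execute("""
    SELECT t1.firstname, t1.lastname, t2.email, t2.username
    FROM table1 t1 CROSS JOIN table2 t2
""").fetchall()
print(rows)  # [('Ann', 'Lee', 'ann@example.com', 'ann')]
```

Bear in mind a cross join produces m x n rows, which is only what you want when one of the tables has a single row or the tables are genuinely unrelated; if the rows correspond by a key, an INNER JOIN on that key is usually the right tool.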
Niladri Biswas
|
|
|
|
|
Ok, if anyone knows anything about the subject please help.
Here's the setup: I have written a DLL using VS 2005 that reads data from an Excel file. It works fine when I reference the DLL in my Windows app, but when I create the assembly/function combo in SQL I get an OleDb "Unspecified error". The weird thing is that it was working on Monday; then when I came back in on Tuesday the server gave me weird errors, and now conn.Open throws an exception. I cannot figure it out. (Thanks in advance for any help!)
Here is the error when I try to call the Function in SQL
Declare @a int
set @a = (Select dbo.ExcelTester())
A .NET Framework error occurred during execution of user-defined routine or aggregate "ExcelTester":
System.Data.OleDb.OleDbException: Unspecified error
System.Data.OleDb.OleDbException:
at System.Data.OleDb.OleDbConnectionInternal..ctor(OleDbConnectionString constr, OleDbConnection connection)
at System.Data.OleDb.OleDbConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
at System.Data.OleDb.OleDbConnection.Open()
at ChargeCardCom.ChargeCardCom.ExcelTester()
Here are the errors that happened on the server Tuesday:
AppDomain 122 (KWCDB_WBU.dbo[runtime].121) is marked for unload due to common language runtime (CLR) or security data definition language (DDL) operations.
and
AppDomain 130 (KWCDB_WBU.dbo[runtime].129) is marked for unload due to memory pressure.
and
AppDomain KWCDB_WBU.dbo[ddl].131 was unloaded by escalation policy to ensure the consistency of your application. Out of memory happened while accessing a critical resource.
|
|
|
|
|
|
Thank you for your reply, Abhishek. I will post some code shortly (I have to log in to work, as I'm at home now). Basically, ExcelTester (at this point) just creates an OleDb connection to an Excel file and attempts to open it. This is not my first CLR function integration project, and I had the program working almost perfectly by Monday afternoon. Then I came in on Tuesday, started cleaning up some comments, rebuilt the project, and once again attempted to create the assemblies. It was at this time that I began receiving the errors listed in my post, and basically I spent all week trying to make it work again. It would probably be much easier to figure out if OleDb were not returning "Unspecified error" when I catch the exception. The whole thing is weird, and I'm sure it is something simple that I am just overlooking.
Basically, the function just returns a string "Success" if it works and "Failure" if it doesn't (or an integer 0 if it works and -1 if not; I've tried many variations). It is breaking on the conn.Open, and I feel pretty certain the connection string is okay. If I remove the .Open call, the function works perfectly and I'm able to create the assembly and function and call it with no problem.
|
|
|
|