Hi
I need to know how to connect to an Oracle (11g) database from the Log4net.config file (using appenders), as I need to write the logs to the Oracle (11g) database whenever a user logs on to the web application.
Note: I am successfully able to write the logs to text files, but not to the Oracle (11g) database.
Any help is greatly appreciated.
Regards
Bala
|
Try this[^].
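In case the link goes stale: the usual approach is log4net's AdoNetAppender. A sketch of what the appender block might look like for Oracle via ODP.NET follows; the connection string, provider assembly, and the APP_LOG table with its columns are all assumptions you would adapt:

<appender name="OracleAppender" type="log4net.Appender.AdoNetAppender">
  <!-- bufferSize 1 writes each event immediately; useful while testing -->
  <bufferSize value="1" />
  <!-- assumes ODP.NET is installed; adjust the provider type to whatever Oracle client you use -->
  <connectionType value="Oracle.DataAccess.Client.OracleConnection, Oracle.DataAccess" />
  <connectionString value="Data Source=MYDB;User ID=myuser;Password=mypassword" />
  <commandText value="INSERT INTO APP_LOG (LOG_DATE, LOG_LEVEL, LOGGER, MESSAGE) VALUES (:log_date, :log_level, :logger, :message)" />
  <parameter>
    <parameterName value=":log_date" />
    <dbType value="DateTime" />
    <layout type="log4net.Layout.RawTimeStampLayout" />
  </parameter>
  <parameter>
    <parameterName value=":log_level" />
    <dbType value="String" />
    <size value="50" />
    <layout type="log4net.Layout.PatternLayout" value="%level" />
  </parameter>
  <parameter>
    <parameterName value=":logger" />
    <dbType value="String" />
    <size value="255" />
    <layout type="log4net.Layout.PatternLayout" value="%logger" />
  </parameter>
  <parameter>
    <parameterName value=":message" />
    <dbType value="String" />
    <size value="4000" />
    <layout type="log4net.Layout.PatternLayout" value="%message" />
  </parameter>
</appender>
<root>
  <level value="INFO" />
  <appender-ref ref="OracleAppender" />
</root>

Remember to reference the appender from the root (or a specific logger) as shown; a missing appender-ref is an easy thing to overlook when the text file works but the database stays empty.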
Bastard Programmer from Hell
|
As part of my degree I recently submitted an interim paper, and one of the feedback comments from the tutor was along the lines of "consider asking the opinions of others on the database; it is not cheating as long as you properly reference your source". Fair point, I thought, so here I am.
In my submission I had written about the two options I had considered for the database model and why I elected to choose the one I did. So the question arises: what other options are there, or what would your preferred approach be?
Basically, the project is looking at migrating an existing file-based process across to a web solution, and part of the system relates to the daily reporting of end-of-period figures and status.
At the end of each daily 24-hour reporting period, a whole bunch of items are recorded. These are recorded for more than one site, and not every site records the same metrics, but the majority of them are the same.
As it stands at present there are approximately 30 parameters recorded, some numerical, some textual.
The first option I considered was a row per site per period, with each column representing a metric (I will just call them param, but in reality they have meaningful names).
<id><site><period-end-date><param-n><param-n+1><param-n+2>............etc.
The second option was splitting the KPIs into groups, having a table per group, with one master record in another table identifying the site and period.
Master Table
<id><site><period-end-date><KPI-group-A-ID><KPI-Group-B-ID><KPI-Group-C-ID>
KPI Group A Table
<id><param-n><param-n+1><param-n+2>............etc.
KPI Group B Table
<id><param-n><param-n+1><param-n+2>............etc.
KPI Group C Table
<id><param-n><param-n+1><param-n+2>............etc.
I originally coded the models using the code-first approach (I'm using EF and MVC3) with option 2, as I thought it would be easier to manage and allow for easier scaling, and it appears to work well.
The part I am adding at the moment is a losses breakdown for the period, so each site might have multiple entries in a loss table linked to the Master Record in a one-to-many relationship.
So the question is, particularly to any DBAs or architects: what suggestions would you have on how best to build the database to meet these needs, allowing for potentially more KPI parameters or groups to be added in the future?
Any thoughts greatly appreciated.
|
I'd vary on option two; have a master record with the period and all the columns that the sites have in common (since they're dependent on the key), and keep the optional columns in a separate table with a 1-1 relation.
It'd mean having to create a new table when a new column definition arises, which may or may not be a problem. It also sucks when creating reports if you have to take into account tables that "might" exist.
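As a rough T-SQL sketch of what I mean (every table and column name here is made up):

CREATE TABLE PeriodRecord (
    Id            INT IDENTITY PRIMARY KEY,
    SiteId        INT      NOT NULL,
    PeriodEndDate DATETIME NOT NULL,
    CommonParam1  DECIMAL(18,4) NULL,
    CommonParam2  NVARCHAR(100) NULL
    -- ...the rest of the columns every site records
);

CREATE TABLE PeriodRecordOptional (
    PeriodRecordId INT NOT NULL PRIMARY KEY
        REFERENCES PeriodRecord (Id),  -- shared key gives the 1-1 relation
    OptionalParam1 DECIMAL(18,4) NULL
    -- ...columns that only some sites record
);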
Bastard Programmer from Hell
|
Thanks for your input, Eddy.
The only issue I have is with separating out the common columns; this would break the grouping of columns, fragmenting a cluster of points across different tables.
Then there is the issue of a point that started as a non-common point becoming a common point due to modifications on the plant. You would then have broken the rule of keeping common points together, and moving the column and its data across to the common table could break all the underlying code.
Likewise, going the other way, you may have a point that starts common but, due to modifications on the plant, is no longer common.
I have been working on this on and off over the last week, and so far it still feels more logical to group categories of points together in one table and keep the master site record entry separate.
I have since also added another table which records a breakdown of each individual site's losses (an entry for each root-cause event) for each reporting period (one-to-many), and by introducing this additional table and relationship, it makes even more sense to keep any KPI metrics out of the master table.
Cheers,
|
DaveAuld wrote: So the question is, particularly to any DBAs or architects: what suggestions would you have on how best to build the database to meet these needs, allowing for potentially more KPI parameters or groups to be added in the future?
Your notation is not familiar to me; however, it doesn't seem to encapsulate what I would do.
The problem, as I read your description, is the following.
1. You have a number of sites.
2. You collect named 'metrics' from each site.
3. The collection of metrics for a specific site does not match that of other sites.
4. And, as a guess, you have not considered that over time the set of metrics from a single site might not stay the same either.
Given that, I would probably have the following table structure:
1. "Site" with "Site Id" and other information specific only to the site.
2. "Metric Description" with "Metric Desc Id" and perhaps "Value Type"
3. "Metric" which has "Site Id", "Collected Timestamp", "Metric Desc Id" and "Metric Value"
Each site results in an entry in 1.
Each metric results in an entry in 3.
Table 2 is probably managed manually.
The "Value Type" allows one to identify what the "Metric Value" represents. For example it could be a timestamp, count, time span, float, integer, etc.
The above is a bit vague because the specifics of the actual system are needed to refine it further.
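For illustration only, a rough T-SQL sketch of those three tables; all names and types are assumptions to be refined against the real system:

CREATE TABLE Site (
    SiteId   INT PRIMARY KEY,
    SiteName NVARCHAR(100) NOT NULL
    -- ...other information specific to the site
);

CREATE TABLE MetricDescription (
    MetricDescId INT PRIMARY KEY,
    MetricName   NVARCHAR(100) NOT NULL,
    ValueType    NVARCHAR(20)  NOT NULL  -- e.g. 'integer', 'float', 'timestamp'
);

CREATE TABLE Metric (
    SiteId             INT      NOT NULL REFERENCES Site (SiteId),
    CollectedTimestamp DATETIME NOT NULL,
    MetricDescId       INT      NOT NULL REFERENCES MetricDescription (MetricDescId),
    MetricValue        NVARCHAR(400) NOT NULL,  -- stored as text; ValueType says how to interpret it
    PRIMARY KEY (SiteId, CollectedTimestamp, MetricDescId)
);

Adding a new metric to a site then only needs a row in MetricDescription, not a schema change.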
|
Thanks for your valued input. Further to the comments made to Eddy above:
What you have shared is probably the approach I would use for a data-collection repository, where any given point could be recorded at any given time.
All the points I am recording will be manually entered at the same time, all with the same 'end of period' timestamp.
I am using some validation logic on the post-back to check which site the data is being recorded for, and to set any 'unused' point to zero or an empty string (depending on the data type).
Also, the views are tailored to only display the relevant collection points to the user based on site.
You have given me some more food for thought... thanks.
|
I want it at runtime using C#.
|
carm_ella wrote:
I want it at runtime using C#.
You want mayo with that, hon? Should I wrap it, or are you gonna eat the code right away?
Bastard Programmer from Hell
|
Thank you! I want the code, sir.
However, can you give me some tutorials on how to create a SQL database in Visual Studio 2010 using C#, with backup and restore? I know MySQL well from using one of its server packages, and I am currently studying the SQL tooling built into VS2010.
|
First, the answer to your question: BACKUP[^] and RESTORE[^].
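For orientation, the basic shape of those two commands; the database name and file path are placeholders:

BACKUP DATABASE MyDatabase
TO DISK = 'C:\Backups\MyDatabase.bak'
WITH INIT;  -- overwrite any existing backup sets in the file

RESTORE DATABASE MyDatabase
FROM DISK = 'C:\Backups\MyDatabase.bak'
WITH REPLACE;  -- overwrite the existing database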
carm_ella wrote: I want the code, sir.
You have been given links to the manual. I suggest you read it and write the code.
Bastard Programmer from Hell
|
Hi.
I had a database that was working very well in SQL Server 2005. I upgraded to SQL Server 2008 R2. After upgrading, I wanted to drop one of my tables,
but I could not. It gives this error:
No catalog entry found for partition ID 281474980577280 in database 5. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption
I don't know what to do.
|
Just as a wild guess, but why not run DBCC CHECKDB, as the error message itself suggests?
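Something along these lines; 'YourDatabase' is a placeholder, and since your error message mentions database 5, DB_NAME will tell you which one that is:

SELECT DB_NAME(5);  -- which database is id 5?

DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;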
|
I did, but it gave me the same error again.
|
The easiest option would be to open the database in SQL Server 2005 again, from your last working backup.
If this is not an option for you, you might want to read this[^] article. Warning! Anything you do here is at your own risk; if you don't have backups to go back to, you could be well and truly stuffed.
|
When I generate the script from SQL 2008 for SQL 2005, it finishes without any error.
But when I execute the script in SQL 2005, I get this error:
Msg 139, Level 15, State 1, Procedure Language_Update, Line 0
Cannot assign a default value to a local variable.
Msg 137, Level 15, State 2, Procedure Language_Update, Line 9
Must declare the scalar variable "@param".
This is the relevant part of the script:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[Language_Update]
(@LanguageId SmallInt , @Language NVarChar(50))
AS
BEGIN
declare @param nvarchar(1000)= N'update [Common].[Language] set '
if @Language is not NULL
BEGIN
set @param = @param + '[Language]=''' + @Language + ''','
End
set @param= substring(@param,0,len(@param))
set @param = @param + ' Where LanguageId=''' + cast( @LanguageId as nvarchar(5)) + ''' '
exec sp_executesql @param
END
GO
Note that the script is about 77,000 lines long and I have about 200 errors of this type.
Please help!
|
When you declare the variable @param, set the default value after the declaration. Make this replacement:
Old part of query
declare @param nvarchar(1000)= N'update [Common].[Language] set '
New part of query
declare @param nvarchar(1000)
set @param = N'update [Common].[Language] set '
I Love T-SQL
"VB.NET is developed with C#.NET"
If my post helps you kindly save my time by voting my post.
|
Thanks for the help!
But there are about 200 to 500 of these items! How can I do that?
And why doesn't SQL generate correct scripts?
|
How/where are you generating the script from SQL Server?
Try doing a search and replace of ') = N' with '): Set @Update = N'.
Never underestimate the power of human stupidity
RAH
|
jojoba2011 wrote: And why doesn't SQL generate correct scripts?
It did. You generated them on SQL 2008, so it uses all the language features available in 2008. If you wanted to be compatible with SQL 2005, you'd generate them from there.
Put your database in compatibility mode (Google knows how) and try scripting again.
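For reference, on 2008 this is one way to set it back; the database name is a placeholder:

ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 90;  -- 90 = SQL Server 2005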
Bastard Programmer from Hell
|
Thanks!
But this is not a good trick, and it is no use here because I have different parameters; @param was just an example.
|
Hi
I posted a similar question to this here[^]; however, after looking at the data in more detail, the query has to change a lot. Also, in the previous post I could not get it to work.
The SELECT statement below is almost what I need, except that it returns all the rows for each product, and all I need is the MAX date_despatched for EACH product.
SELECT scheme_opdetm.warehouse, scheme_opdetm.product, scheme_opdetm.net_price, scheme_opdetm.despatched_qty, scheme_opheadm.date_despatched
FROM scheme_opdetm INNER JOIN scheme_opheadm ON scheme_opdetm.order_no = scheme_opheadm.order_no
WHERE (((scheme_opheadm.status)="8") AND ((scheme_opdetm.warehouse)="06") AND ((scheme_opdetm.net_price)>0) AND ((scheme_opdetm.despatched_qty)>0))
ORDER BY scheme_opdetm.product, scheme_opheadm.date_despatched DESC;
Basically, the Sales Order Header (scheme_opheadm) contains the Date Despatched, Sales Order, and Status (which must be 8 for despatched orders).
The Sales Order Detail (scheme_opdetm) has the Warehouse, Product, and Net Price. There are also comments and other lines in the Detail table that I need to ignore, hence the filters: Net Price > 0, Qty Despatched > 0, Warehouse = 06 (finished goods).
How can I get only the details of the LAST despatch for EACH item?
|
Create a sub-query that uses ROW_NUMBER(), partitioned on the product field in scheme_opdetm, selecting data from your transaction table and ordered by the date field. Join that to your main query and filter it on row_number = 1, as sketched below.
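A sketch of the idea in T-SQL, using the table and column names from your post (ROW_NUMBER() needs SQL Server 2005 or later):

SELECT warehouse, product, net_price, despatched_qty, date_despatched
FROM (
    SELECT d.warehouse, d.product, d.net_price, d.despatched_qty,
           h.date_despatched,
           ROW_NUMBER() OVER (PARTITION BY d.product
                              ORDER BY h.date_despatched DESC) AS rn
    FROM scheme_opdetm d
    INNER JOIN scheme_opheadm h ON d.order_no = h.order_no
    WHERE h.status = '8'
      AND d.warehouse = '06'
      AND d.net_price > 0
      AND d.despatched_qty > 0
) AS latest
WHERE rn = 1;  -- rn = 1 is the most recent despatch for each product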
Never underestimate the power of human stupidity
RAH