|
Hi,
I designed a Windows application that needs to pull data from a SQL Server database.
It works just fine!
But when I install the application on another computer I get problems: I need to attach the database in SQL Server for it to work.
So, I want a "connection string" that pulls data from the database without any further "attachments"!
Can anyone help me?
(I usually use *.udl files to see the connection string.)
Thanks
modified on Wednesday, July 2, 2008 9:29 PM
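For reference, a typical SQL Server connection string for a database that is already attached to the server looks like this (the server, database, and instance names here are placeholders, not taken from the thread):

```
Data Source=MYSERVER\SQLEXPRESS;Initial Catalog=MyDatabase;Integrated Security=True;
```

If instead you ship the .mdf file alongside the application, the AttachDbFilename keyword is what makes SQL Server attach it on the fly, e.g. `AttachDbFilename=|DataDirectory|\MyDatabase.mdf`; whether that approach fits depends on the SQL Server edition installed on the client.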
|
|
|
|
|
Please try ODBC connectivity. Do you know anything about that?
You should first define an ODBC connection in your project. After that, establish the ODBC connection on your client system; once you install your exe on that system, it will definitely work.
|
|
|
|
|
ODBC will require that you establish an ODBC connection from the client - therefore you need to manage the client as well as your app.
I suggest this site
clickety[^]
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Any idea how to create an SID for the Oracle Instant Client?
Many thanks
All generalizations are wrong, including this one!
(\ /)
(O.o)
(><)
|
|
|
|
|
|
|
Hi,
I am dealing with a database that has billions of rows. To improve performance I introduced threads (a C# thread pool), and also implemented locks to avoid conflicts. Unfortunately, that did not improve performance.
I tried the same thing without locks, and got a 25% improvement.
Can anyone suggest how to achieve this performance without conflicts?
My small attempt...
|
|
|
|
|
Is the database normalized? Have you checked queries with query analyzer for bottlenecks? Those would be the first things I'd look at.
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
The database is normalized, and we rebuild the indexes every weekend.
I will check the queries to find bottlenecks.
Do you have any idea what happened with my threading?
My small attempt...
|
|
|
|
|
sujithkumarsl wrote: Also implemented locks to avoid conflict. But unfortunately that didn't improved the performance.
How did you do it? Please post sample code so we can check what happened with your threading.
|
|
|
|
|
Hi,
Check this:
WaitHandle[] resetEvents = new WaitHandle[GeCommonConfig.ThreadCount];
for (int index = 0; index < GeCommonConfig.ThreadCount; index++)
{
    resetEvents[index] = new AutoResetEvent(false);
    ThreadDetails objThreaddetails = new ThreadDetails((AutoResetEvent)resetEvents[index], index.ToString());
    ThreadPool.QueueUserWorkItem(new WaitCallback(StartProcessing), objThreaddetails);
}
WaitHandle.WaitAll(resetEvents);
Here StartProcessing is the method in which 80 data tables are being updated; some of them have billions of rows.
protected static bool ExecuteQuery(string strQuery, DataSet dataSet, Object lockObject)
{
    try
    {
        lock (lockObject)
        {
            return ExecuteQueryEx(strQuery, dataSet);
        }
    }
    catch
    {
    }
    return false;
}
I have a LockType class in which 5 different lock objects are defined. Depending on which tables a query uses, the corresponding lock is passed to the ExecuteQuery method.
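The several-locks-instead-of-one idea described above (often called lock striping) can be sketched outside C#. This Python illustration is hypothetical, not the poster's code; it just shows why a few narrower locks let unrelated queries run concurrently where one global lock would serialize everything:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# One lock per table "stripe": queries touching different stripes can
# run concurrently; only same-stripe queries serialize on each other.
NUM_STRIPES = 5
stripe_locks = [threading.Lock() for _ in range(NUM_STRIPES)]
results = []
results_lock = threading.Lock()

def execute_query(table_id: int, value: int) -> None:
    # Pick the lock for this table's stripe, not a single global lock.
    with stripe_locks[table_id % NUM_STRIPES]:
        processed = value * 2  # stand-in for the real database work
    with results_lock:
        results.append((table_id, processed))

with ThreadPoolExecutor(max_workers=8) as pool:
    for i in range(20):
        pool.submit(execute_query, i, i)

print(len(results))  # → 20
```

Queries that map to different stripes never wait on each other; only same-stripe work serializes, which is the intended benefit over a single shared lock.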
My small attempt...
|
|
|
|
|
sujithkumarsl wrote: WaitHandle.WaitAll(resetEvents);
This blocks the calling thread until all the threads you queued have finished executing. If it blocks, I don't see the benefit of using threading here. Your design is confusing. Can you post a sample of the "StartProcessing" method? Is this a standalone application or a web application?
|
|
|
|
|
WaitHandle.WaitAll(resetEvents); just makes sure that all the threads have completed; the main thread will wait there, no problem.
Here StartProcessing does all the database operations (it runs in different threads).
It's a Windows application.
StartProcessing()
{
    // heavy database operations happen here (insert, delete, select, update)
    ExecuteQuery();
}
My small attempt...
|
|
|
|
|
Are your clustered indexes being used?
Having the correct clustered indexes usually helps a lot, but if they are built wrong for the tables, then you are going to have overhead (long query times, more space being taken up).
If you are using SQL Server, then the "Estimated Execution Plan" in SQL Server Management Studio can help a lot with seeing the steps of the queries and which operations take the longest to complete.
|
|
|
|
|
A couple more ideas:
Partitioning your data.
Striping your tables across multiple disks.
And of course index tuning.
Then check your queries and the join/where clauses.
Hire a DBA to do some performance tuning (if tables have "billions" of rows, you need one).
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hi, I have tested the application on a better machine (32 GB RAM, 16 processors) with 8 threads, and I finally got a 355% improvement.
Anyway, I am planning to use a small database for all operations and, at the end of the day, deploy the values to the main database. What do you think?
My small attempt...
|
|
|
|
|
I need help creating a trigger for insert and update. My table looks like this:
tblCandidates
canID smallint PK identity
ID int FK
eleID smallint FK
What I need to do is this: during an insert or update, if there are two records with the same ID and eleID, the operation should be aborted and some string returned as output (e.g. "operation impossible").
Thank you
|
|
|
|
|
Why not include eleID in the primary key?
Giorgi Dalakishvili
#region signature
my articles
#endregion
|
|
|
|
|
I don't think a trigger works like that. As I never use triggers, this is conjecture:
An insert/update causes the trigger to execute; the trigger is a separate stored procedure and cannot affect the outcome of whatever caused the insert/update.
However:
You really should put this check in the initial method, otherwise you are programming by error, i.e. try something, and if there is an error then it was wrong, so do something else.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
The easiest way to do this is by creating a unique non-clustered index (it could be clustered, but since there is a PK on the table, a clustered index should already have been created by default on the PK columns):
Create unique nonclustered index IX_Id_eleID on <table_name>
(
    Id,
    eleId
)
When duplicate rows are inserted, it will give the following error:
Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object 'dbo.cTest' with unique index 'IX_Id_eleID'.
Just put the name of the table in place of <table_name>, and name the index however you want.
|
|
|
|
|
Scott
While this works and is probably the correct solution, it is still "programming by error", and you have an additional constraint on the table to support sloppy development.
I would do the check BEFORE attempting to insert/update the record; this may entail an additional index, so it may nullify the constraint argument.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hi, I am using a stored procedure to pass a number of values, using two parameters: one is the value and the other is the id, used to update a table.
My stored procedure looks like the code below. The procedure is created, but when I execute it, it displays this error:
Conversion failed when converting the varchar value '1,2,7,8,12' to data type int.
My code is:
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
ALTER procedure [dbo].[sp_txtSplit_Summa1](@sp_Visible varchar(1000), @sp_CRID Varchar(100), @sp_Delimiter char(1)=',')
as
begin
    set nocount on
    declare @Item varchar(1000)
    declare @ItemID varchar(100)
    while (charIndex(@sp_Delimiter,@sp_Visible,0)<>0 and charIndex(@sp_CRID,@sp_Visible,0)<>0)
    begin
        select
            @Item= rtrim(ltrim(substring(@sp_Visible,1,CharIndex(@sp_Delimiter,@sp_Visible,0)-1))),
            @ItemID=rtrim(ltrim(substring(@sp_CRID,1,CharIndex(@sp_Delimiter,@sp_CRID,0)-1))),
            @sp_Visible=rtrim(ltrim(substring(@sp_visible,charindex(@sp_delimiter,@sp_Visible,0)+1,len(@sp_Visible)))),
            @sp_CRID=rtrim(ltrim(substring(@sp_visible,charindex(@sp_delimiter,@sp_CRID,0)+1,len(@sp_CRID))))
        if len(@Item)>0
            select (@item)
        select (@ItemID)
        update summa1 set age= (select @Item) where sno= (select @ItemID)
    end
    if len(@sp_Visible)>0
        update summa1 set age= (select @sp_Visible) where sno= (select @sp_CRID)
    select (@item)
    select (@ItemID)
    select * from summa1
    return
end
--CAST(CAST(@myval AS varbinary(20)) AS decimal(10,5))
exec sp_txtSplit_Summa1 '21,22,333,444,555','1,2,7,8,12',','
with regards,
bretto
|
|
|
|
|
The problem is in the second update, if I'm reading this correctly: @sp_CRID is a comma-delimited list of integers which must be treated as a string, and I'm guessing the sno column is an int.
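What the procedure is trying to do — walk two parallel comma-delimited lists and pair each value with its id — can be sketched outside T-SQL. This Python illustration is hypothetical, not part of the thread's code, but it shows why each id must be converted to an integer one at a time rather than comparing the whole list '1,2,7,8,12' against an int column:

```python
# Pair each value with its id from two parallel comma-delimited lists.
def split_pairs(values: str, ids: str, delimiter: str = ","):
    vals = [v.strip() for v in values.split(delimiter)]
    keys = [k.strip() for k in ids.split(delimiter)]
    # Convert each id individually; passing the whole delimited string
    # where an int is expected is what raises the conversion error.
    return [(int(k), v) for k, v in zip(keys, vals)]

print(split_pairs("21,22,333,444,555", "1,2,7,8,12"))
# → [(1, '21'), (2, '22'), (7, '333'), (8, '444'), (12, '555')]
```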
Please remember to rate helpful or unhelpful answers, it lets us and people reading the forums know if our answers are any good.
|
|
|
|
|
Friends, I have written a stored procedure like the one below, but it's not working. Can anyone help me solve this, please?
create procedure sp_victim_status_by_casenumber
SELECT District, Thana, CaseNumber, COUNT(IdVictim) AS Male, 0 AS Victim
into #tmp4
FROM tblVictim
WHERE Gender = 'Male'
GROUP BY District, Thana, CaseNumber
insert into #tmp4
SELECT District, Thana, CaseNumber, 0 AS Victim, COUNT(IdVictim) AS Male
FROM tblVictim
GROUP BY District, Thana, CaseNumber
SELECT District, Thana, CaseNumber, COUNT(IdVictim) AS Female
into #tmp4
FROM tblVictim
WHERE Gender = 'Female'
GROUP BY District, Thana, CaseNumber
insert into #tmp4
SELECT District, Thana, CaseNumber,0 AS Victim, COUNT(IdVictim) AS Female
FROM tblVictim
GROUP BY District, Thana, CaseNumber
select District, Thana, CaseNumber,sum (Victim) as Victim, sum( Male) as Male, sum(Female) as Female
from #tmp4
group by District, Thana, CaseNumber
Any help would be really appreciated.
|
|
|
|
|
SELECT INTO only works on tables that don't exist when the command is run. You can use SELECT INTO #tmp4 once, but it won't work a second time because #tmp4 will already exist.
Also, the columns in your INSERT INTO ... SELECT statements aren't in the same order as in the SELECT INTO. Since the SELECT INTO creates the table with the column order specified, you need to make sure the column order in your inserts matches, or you'll get weird data. The better solution is to list out the columns of the table you're inserting into, i.e. INSERT INTO #tmp4 (col1, col2, col3, ...) SELECT col1, col2, col3, ...
That way the order in the table won't matter, as long as each column list matches. I don't know what you are trying to accomplish, but you may be able to use self joins, or NULLIF, to figure it out without a temp table. Hope this helps.
Please remember to rate helpful or unhelpful answers, it lets us and people reading the forums know if our answers are any good.
|
|
|
|
|