|
Hi all, apologies for my English.
I have a problem:
I want to select rows from my Oracle table named "Table1", and I want to do it with an OracleDataAdapter.
It's OK when I use OleDbDataAdapter:
............
command.CommandText = "select name from Table1 where num = ?";
Ole.OleDbDataAdapter ada = new Ole.OleDbDataAdapter();
ada.SelectCommand = command;
command.Parameters.Add("@num", Ole.OleDbType.Numeric, 3);
command.Parameters[0].Value = 1;
............
Can you show me the same code using an OracleDataAdapter, with parameters on the Oracle command?
I could not find the usage of Oracle parameters in MSDN. Also, can you point me to an Internet resource on using Oracle from C#?
Thanks.
Gerder
|
|
|
|
|
Have you tried using the SqlDataAdapter? Just substitute "Sql" everywhere you have "OleDb".
|
|
|
|
|
Hi there,
I am having difficulty inserting a record into the database. I came to realize that it may be because my column name contains brackets. The insert works fine with other fields of the same data type, but it stops at the bracketed field name. How can I get around this? I can't change the column name to exclude the space because it's a pre-defined format. I tried bracketing the field name within my code so it's the same as in the design view, but it still doesn't work. Can someone help me please? Here's my code below:
----------------------------------------------------------------------------------------------------
COleSafeArray vaFieldList;
vaFieldList.CreateOneDim(VT_VARIANT, 4);
COleSafeArray vaValueArray;
vaValueArray.CreateOneDim(VT_VARIANT, 4);
long lArrayIndex[1];
lArrayIndex[0] = 0;
vaFieldList.PutElement(lArrayIndex, &(_variant_t("NAME")));
lArrayIndex[0] = 1;
vaFieldList.PutElement(lArrayIndex, &(_variant_t("ADDRESS")));
lArrayIndex[0] = 2;
vaFieldList.PutElement(lArrayIndex, &(_variant_t("EMAIL")));
lArrayIndex[0] = 3;
vaFieldList.PutElement(lArrayIndex, &(_variant_t("[NUM CARS]"))); //will not insert this field
lArrayIndex[0] = 0;
vaValueArray.PutElement(lArrayIndex, &(_variant_t(m_name)));
lArrayIndex[0] = 1;
vaValueArray.PutElement(lArrayIndex, &(_variant_t(m_address)));
lArrayIndex[0] = 2;
vaValueArray.PutElement(lArrayIndex, &(_variant_t(m_email)));
lArrayIndex[0] = 3;
vaValueArray.PutElement(lArrayIndex, &(_variant_t(m_num_cars)));
pRecordset->AddNew(vaFieldList, vaValueArray);
pRecordset->Close();
Thanks!
|
|
|
|
|
If the field name on the Sql Server is actually [NUM CARS] then you would use:
[[Num Cars]]]
If the field name is NUM CARS and you are using the brackets to deal with the space - I don't know what the problem is.
If this does solve your problem, then any time you run into the issue again, use QUOTENAME() in Query Analyzer to find the solution:
SELECT QUOTENAME('[NUM CARS]')
|
|
|
|
|
Hi to all!
Can anyone tell me the maximum number of input and output parameters supported by a stored procedure in SQL Server 2000?
Another problem I face is that I need to convert a SQL Server 7 database to a SQL Server 2000 database.
Thanks for your valuable help
Mahesh
|
|
|
|
|
|
Thank You Very Much...
You mean the 2100-parameter limit is the combined total of input, output, and return parameters?
|
|
|
|
|
Yes, the total number.
Tom Wright
tawright915@yahoo.com
|
|
|
|
|
Hi !
I have an application which uses an Access database to store its data. I use an OleDbDataAdapter to update it, and everything works fine.
Now, I want to perform some tests to see what happens in case of errors. The first thing I've tried is what happens when there is not enough space on disk. As I have a VarBinary field in a table, I put a large amount of data into it.
This is where the problem appears. I add a single DataRow to the table and then call OleDbDataAdapter.Update() on it, and no exception is thrown. I know there is not enough space, but it tells me the row was updated (although it was not). If I add two rows to the table, it throws the exception on the second row, and it still tells me the first one was updated...
Any ideas ? Thanks !
|
|
|
|
|
Access plays some games with storage allocation for VarBinary fields. It likely has an 'unused' VarBinary row left over from some previous update that is already allocated but not in use. Compact the database before you run the test; the result will likely be an exception. Access will try to 'recycle' blob storage, so the size of the database file is not the only determining factor in whether or not it can store the update.
Anger is the most impotent of passions. It effects nothing it goes about, and hurts the one who is possessed by it more than the one against whom it is directed.
Carl Sandburg
|
|
|
|
|
Thanks for replying !
I'll try to compact my database
|
|
|
|
|
hi,
I've a little DB design problem. We are designing an app which should support multiple data storage technologies (SQL Server, Oracle, and XML).
Because of XML and SQL Server replication, we decided to use a GUID column as the PK column, but there are some problems.
If we use a GUID:
@ the old version of this app used an auto-increment int as PK, so there's no way to import old databases.
@ Oracle doesn't have a GUID datatype, so we would need to put it into a different datatype like CHAR or RAW, but I think that would result in bad performance.
If we don't use a GUID:
@ SQL Server replication is not that easy anymore.
@ how do I get a unique id in an XML document?
Does anybody have some hints?
Is there another way to get a unique key in an XML doc except GUIDs?
thanks in advance
|
|
|
|
|
The only way I can think of is to create an empty copy of the old database and append the GUID fields to all of the tables. Import the old database, then update all of the GUID fields according to their corresponding PKs to keep the referential integrity. It would take some time to create the script to update all of the GUIDs properly, but it is doable. Then you could import the GUID'd db into the final target db. As far as GUIDs in Oracle as CHAR columns: performance would most probably suffer, but if the columns are indexed properly it shouldn't be that bad.
Michael
I firmly believe that any man's finest hour, the greatest fulfillment of all that he holds dear, is that moment when he has worked his heart out in a good cause and lies exhausted on the field of battle - victorious.
Vince Lombardi (1913-1970)
|
|
|
|
|
This was also the solution I was thinking of, but it seems like a lot of work. Thanks for your reply.
|
|
|
|
|
I have an application that lets users search based on degree of separation, so for instance, a user can search for age, hobbies, etc. and limit the search to just users who are 1 degree separate, 2 degrees separate, or 3 degrees separate.
However, as it stands now, searches are taking a very long time because, according to the programmer, the degree of separation is calculated dynamically at search time; with a system of about 50,000 users this takes well over one minute to execute, causing timeouts in the browser.
Is there a better way to go about doing this type of search? Perhaps using a scheduler to perform some calculations beforehand so the searches can use it? Any feedback will be greatly appreciated.
|
|
|
|
|
There are almost always ways to increase search speed. I usually attempt to trade insert time for select time when queries become too long and other methods (indexes, temp tables) have failed. In your case, when you add a new user you can precalculate and store the criteria needed by future SELECT statements. This usually has only a minor effect on the INSERT, since it touches one row in the database, whereas the SELECT touches 50,000 rows.
I guess I would need a better description of what 1, 2, and 3 degrees separate means to take this any farther.
|
|
|
|
|
My understanding of "degrees of separation" is:
Jack knows John - 1 degree of separation
John knows Mary - 1 degree of separation
Jack is 2 degrees of separation from Mary.
Mary knows Alice
John is 2 degrees of separation from Alice
Jack is 3 degrees of separation from Alice.
and so on. Someone once calculated that each person is a maximum of 6 degrees of separation from anyone else (I don't know how true that is)
Cada uno es artifice de su ventura
WDevs.com - Open Source Code Hosting, Blogs, FTP, Mail and Forums
|
|
|
|
|
I have a trigger on MS SQL which calls an extended stored procedure using exec master..xp_sample, where xp_sample is the exported function in the DLL. The trigger is written to fire after an INSERT event, using either FOR or AFTER. In the DLL, a new connection to the DB is made and a query is run to return an XML string. The query is:
SELECT * FROM Employees FOR XML AUTO
the query is being executed using the function:
retCode = SQLExecDirect(hstmt1, (UCHAR*)"SELECT * FROM Employees FOR XML AUTO", SQL_NTS);
The problem is that when I do an INSERT from Query Analyzer, the trigger is invoked and calls the DLL function xp_sample, but SQLExecDirect never returns. It looks like it goes into some kind of deadlock. Can anyone help, please?
|
|
|
|
|
I'm guessing that your trigger is defined on the Employees table. Since you're creating a new connection, I think the query in your extended stored procedure is deadlocking on the locks held by the INSERT.
Triggers run in the context of the statements that cause them to execute, and significantly, as part of the implicit transaction implied by those statements. An INSERT causes the newly inserted row to be locked with an Exclusive lock. The statement in the XP takes shared locks. When it gets to the newly inserted row, it will block.
The Extended Stored Procedures documentation discusses bound connections, where you can make a new connection part of the original transaction. This might solve your problem.
What are you actually trying to achieve? An XP is really the last resort; you should try to use T-SQL if you can.
Stability. What an interesting concept. -- Chris Maunder
|
|
|
|
|
Hi,
Thanks for the reply, I am looking into it. Basically, whenever I do an INSERT into a table, I need to query the table after the INSERT and send the data via sockets to some client. For the sockets implementation, the best I could think of was an extended stored procedure: I would execute a query and then send the data using sockets, and that code would go into the DLL making up the ESP. Could there be a better option? I am not very comfortable with SQL and would like more insight into it. Thanks again, regards,
Samvibes
|
|
|
|
|
OK, I can't think of a way to do this in T-SQL! But do you have to do it in the database server? Notification Services looks like it might be able to do what you need.
Developing extended procedures is pretty hard to get right, from what I hear.
Stability. What an interesting concept. -- Chris Maunder
|
|
|
|
|
How can I insert binary data into an image-type field in a SQL database with ADO in a VC++ project?
Thanks
|
|
|
|
|
Hi all,
Is it possible to write a SQL join between two tables from two different databases? Is this a common operation for business applications? If not, is it bad practice to split the data into different databases? My case is like this: I have a central database recording the meta information of all objects from different applications, plus a separate database for each application storing the details of its objects. When I search for objects by some criteria, I need a join SQL over the two databases (the central database and the application detail database).
Thanks
|
|
|
|
|
Hi,
are the two DBs on a single server?
If they are on a single server, you can do the join directly; otherwise you need another mechanism (e.g. a linked server).
bye
timcyspet
|
|
|
|
|
hi
you can do the join using four-part names, as in:
SELECT ...
FROM Server1.dbname1.owner1.table1 tb1
INNER JOIN Server2.dbname2.owner2.table2 tb2 ON tb1.column = tb2.column ...
|
|
|
|