|
Well, I don't know about the way you did it, but I use a function like this to save changes, and it works fine for me. Maybe you can adapt it to use on the server and client side:
private System.Data.DataSet deletedRows;
private System.Data.DataSet updatedRows;
private System.Data.DataSet insertedRows;

private bool save()
{
    deletedRows = null;
    updatedRows = null;
    insertedRows = null;
    try
    {
        // Push each class of change separately: deletes first,
        // then updates, then inserts. Note that each subset is
        // passed to Update(), not the whole dataset.
        deletedRows = myDataSet1.GetChanges(System.Data.DataRowState.Deleted);
        if (deletedRows != null)
            mysqlAdapter.Update(deletedRows, "Table1");

        updatedRows = myDataSet1.GetChanges(System.Data.DataRowState.Modified);
        if (updatedRows != null)
            mysqlAdapter.Update(updatedRows, "Table1");

        insertedRows = myDataSet1.GetChanges(System.Data.DataRowState.Added);
        if (insertedRows != null)
            mysqlAdapter.Update(insertedRows, "Table1");

        // Mark the original dataset clean now that everything is saved.
        myDataSet1.AcceptChanges();
        return true;
    }
    catch (Exception myE)
    {
        MessageBox.Show("Problem!! " + myE.ToString());
        return false;
    }
}
I call this function whenever I want to push the changes in a disconnected dataset back to the database.
Mazy
"And the carpet needs a haircut, and the spotlight looks like a prison break
And the telephone's out of cigarettes, and the balcony is on the make
And the piano has been drinking, the piano has been drinking...not me...not me-Tom Waits
|
|
|
|
|
Just a thought:
maybe it has something to do with the fact that the client dataset rows are different from the database rows. It only occurs after I delete something. Maybe it has something to do with the auto-increment field in my datatable? If I delete the last row in the client dataset, send it to the server and update the database, what happens to the autonumber? On the database, if the last row was number 61, then 61 is gone and the next number would be 62. But is this also the case on the client side? If not, maybe this is what's causing the concurrency violation.
The idea would be to make sure that the client dataset rows are equal to the database rows; the only question is, how do I do that?
Ludwig
|
|
|
|
|
From my experience with databases, concurrency violations usually occur when you violate relationship rules. For example, if Table A is a composite table with a relationship to Table B, and a row in Table B is deleted, a concurrency error occurs because the values in Table A have just become invalid.
Also, from what I know, CommandBuilder doesn't work with datasets that contain multiple tables.
Does that help?
Notorious SMC
The difference between the almost-right word & the right word is a really large matter - it's the difference between the lightning bug and the Lightning
Mark Twain
Get your facts first, and then you can distort them as much as you please
Mark Twain
|
|
|
|
|
Well, I found the reason.
When a new row is inserted into my dataset on the client side, the autonumber column for that row receives a value equal to the previous row's number + 1, for example 10.
The dataset is then sent to the server, where it is written back to the database. However, the autonumber of the new row in the database is not always the previous autonumber + 1, for example if you previously deleted the last row in the database. In this example, it will be 11.
At this point, the client-side dataset and the database rows are no longer consistent.
Solution: after the database has been updated, the updated dataset has to be sent back to the client. But first, the autonumber column of the new row has to be adjusted to the real database autonumber (by using the RowUpdated event handler).
On the client side, the new row in the original client dataset is first purged; then the dataset is merged with the new dataset coming from the server, and then AcceptChanges() is called.
After this, the client dataset matches the database rows, and the concurrency violation won't occur.
It took a day's work to find this one
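For anyone hitting the same thing, here is a minimal sketch of that RowUpdated fix. It assumes SQL Server with System.Data.SqlClient; the adapter name mySqlAdapter and the identity column name "ID" are made up for illustration:

```csharp
// Illustrative names throughout; assumes SQL Server / System.Data.SqlClient.
mySqlAdapter.RowUpdated += new SqlRowUpdatedEventHandler(OnRowUpdated);

private static void OnRowUpdated(object sender, SqlRowUpdatedEventArgs e)
{
    // Only inserted rows get a server-assigned autonumber.
    if (e.StatementType == StatementType.Insert
        && e.Status == UpdateStatus.Continue)
    {
        // Ask the server which identity value it actually assigned, and
        // overwrite the client-side guess so dataset and database agree.
        SqlCommand getIdentity =
            new SqlCommand("SELECT @@IDENTITY", e.Command.Connection);
        e.Row["ID"] = Convert.ToInt32(getIdentity.ExecuteScalar());
        e.Row.AcceptChanges();
    }
}
```

The merged dataset sent back to the client then carries the real database autonumber, so the subsequent merge and AcceptChanges() leave both sides consistent.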
|
|
|
|
|
lustuyck wrote:
It took a day's work to find this one
Well, you know what they say, better a day than a week
Glad you found the problem!
Notorious SMC
The difference between the almost-right word & the right word is a really large matter - it's the difference between the lightning bug and the Lightning
Mark Twain
Get your facts first, and then you can distort them as much as you please
Mark Twain
|
|
|
|
|
Hello, I have an extremely large table and I want to show it in a DataGrid. How can I read this table from the DB in parts (Oracle database, C# client)? I'd rather not have to issue a separate query for each chunk (select the first N records, then the next N records, and so on).
|
|
|
|
|
|
What is Visual C++ .NET's primary role these days? I wanted to do some ADO database work with it, but I'm at a loss. It seems that there is no built-in support or wizards for ADO.
I checked out Amazon, but the database books there are mainly for VB only. Please help.
|
|
|
|
|
Visual C++ .NET allows you to use the .NET Framework from an application compiled with the /clr flag. If I remember correctly, compiling with /clr outputs everything as IL, but anything unmanaged isn't garbage collected (you're responsible for calling delete after you use new, etc.).
Effectively, if you create an MFC project you should be able to add the /clr flag so that it can use managed code, and then you'll be able to use the ADO.NET classes in the System.Data namespace (for example, System.Data.SqlClient lets you connect to SQL Server 7+).
The fortunate thing about .NET is that everything's done the same way in any language, so any ADO.NET book will be of use. Since VC++ .NET is pretty weird when it comes to mixing managed and unmanaged code, you may want to look at the new edition of Programming Visual C++ (Scott Wingo, on MS Press), although I've also heard that Visual C++ .NET Step by Step is good.
Short of that, you'll want to look at any of the ADO.NET books, and I definitely remember somebody recommending one in either the lounge or the database message boards.
I don't think there are any built-in wizards, but with ADO.NET it's pretty easy to put together (no more COM ), so I wouldn't be too concerned about having to build it yourself.
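For what it's worth, the basic ADO.NET pattern looks the same from any .NET language. A minimal C# sketch (the connection string and the Northwind sample table are just placeholders for your own):

```csharp
// Placeholder connection string and query; .NET Framework 1.x style.
using System.Data;
using System.Data.SqlClient;

class Demo
{
    static void Main()
    {
        SqlConnection conn = new SqlConnection(
            "Server=(local);Database=Northwind;Integrated Security=SSPI");
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM Customers", conn);

        DataSet ds = new DataSet();
        adapter.Fill(ds, "Customers");   // Fill opens/closes the connection

        foreach (DataRow row in ds.Tables["Customers"].Rows)
            System.Console.WriteLine(row["CompanyName"]);
    }
}
```

The equivalent /clr managed C++ is the same set of calls with :: and ^/* syntax differences, which is why any ADO.NET book carries across.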
--
Paul
"If you can keep your head when all around you have lost theirs, then you probably haven't understood the seriousness of the situation."
- David Brent, from "The Office"
MS Messenger: paul@oobaloo.co.uk
Sonork: 100.22446
|
|
|
|
|
Unless you have a really really good reason for using C++, make the move to C#. Then ADO.NET becomes a real breeze.
Tatham Oddie (VB.NET/C#/ASP.NET/VB6/ASP/JavaScript)
tatham@e-oddie.com
+61 414 275 989
|
|
|
|
|
Is there an easy way to select only records from the last X number of days (given that the table has a column which indicates the date/time it was added)?
|
|
|
|
|
select * from mytable where datediff(d, mydatecol, getdate()) < 20
where:
mytable is the table,
mydatecol is the date column in the table, and
20 is the number of days you want records for.
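One caveat (assuming SQL Server, since no engine was named above): wrapping the column in DATEDIFF means any index on mydatecol can't be used, because the function has to run on every row. An equivalent, index-friendly form of the same query moves the date arithmetic to the constant side:

```sql
-- Same assumed table/column names as the query above; SQL Server syntax.
SELECT *
FROM   mytable
WHERE  mydatecol > DATEADD(d, -20, GETDATE())
```

Both return rows from the last 20 days, but this version lets the optimizer seek on the date column instead of scanning the whole table.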
|
|
|
|
|
I have been researching ways to automatically create a SQL Server DSN on the local machine for an application, and to include it in the client's installation so the user doesn't have to set up the DSN themselves. I have found a couple of examples that insert the information into the registry directly, but I do not want to do it that way. I found a vague article on the DBEngine.RegisterDatabase function, but am unable to find details. Has anyone done this in the past?
|
|
|
|
|
Any reason why you don't want to go with the registry?
Microsoft even recommends it:
HOWTO: Programmatically Create a DSN for SQL Server with VB
http://support.microsoft.com/default.aspx?scid=KB;en-us;q184608
|
|
|
|
|
check out the documentation on SQLConfigDataSource function from odbccp32.dll...
Alexandre Kojevnikov
MCP (SQL2K)
Leuven, Belgium
|
|
|
|
|
Here's some code I have been using:
// Create ODBC Datasource ////////////////////////////////////////////////
// The attribute list passed to SQLConfigDataSource must be a sequence of
// "keyword=value" strings, each terminated by '\0', with an extra '\0'
// at the very end. Note that sprintf stops at the first '\0' in its
// format string, so the list has to be assembled piece by piece rather
// than in one call.
char szConfig[256] = "";
char* p = szConfig;
p += sprintf(p, "DSN=MyDSN") + 1;                  // +1 keeps each '\0'
p += sprintf(p, "Description=My Description") + 1;
p += sprintf(p, "DBQ=%s", m_sDatabasePath) + 1;
*p = '\0';                                         // double null-terminate
SQLConfigDataSource(NULL, ODBC_ADD_DSN,
    "Microsoft Access Driver (*.mdb)",
    szConfig);
//////////////////////////////////////////////////////////////////////////
You might need to modify it slightly for SQL Server. I just run this in InitInstance; there is no need to check whether the DSN exists, since it will simply be overwritten if it does.
|
|
|
|
|
I've seen many different implementations of database manipulation using ADO.NET. Some wrap all the DB code in a class; some create and use the ADO.NET objects on the fly and do all the manipulation directly. So, which method do you guys think is best: wrapping all the DB code (or part of it) inside a class, or using the ADO.NET objects directly? Which option are you using?
Mauricio Ritter - Brazil
Sonorking now: 100.13560 MRitter
"Th@ langwagje is screwed! It has if's but no end if's!! Stupid php cant even do butuns on forms! VISHAUL BASICS ARE THE FUTSHURE!" - Simon Walton
|
|
|
|
|
I tend towards wrapping everything, especially if it is code I'm going to be using a lot. I hate re-inventing the wheel, so if I can wrap something and save some typing time, then I will.
On our Oracle system, we have a live and test database - by wrapping the Oracle connection object, I can switch between them with the flick of a parameter. Very useful for release and debug testing.
I tend to create wrappers around Command objects that relate to the stored procedures in the database. It helps for code reuse across multiple projects.
Michael
Fat bottomed girls
You make the rockin' world go round -- Queen
|
|
|
|
|
I've wrapped up some DB access code into a class and have just used that since.
At least you're going from around 10 lines of code down to maybe 2 with your class.
Also, you don't have to worry about closing connections, etc., if you're using a tried-and-tested component.
Cheers,
Simon
"The day I swan around in expensive suits is the day I hope someone puts a bullet in my head.", Chris Carter.
my svg article
|
|
|
|
|
|
|
I run several websites where I have wrapped all of the data access into classes.
I have a base class which is defined as MustInherit. I can then create, say, a SQL or an XML class which inherits from this.
Then within my code I have a variable, defined assembly-wide, to hold the class.
The advantage of this is that if, say, I can no longer afford my SQL hosting in the future and wish to move to XML, I create a new class DataXML which inherits from Data. Then in my web application I call:
DataCenter = New DataXML("../data.xml")
instead of:
DataCenter = New DataSQL("data.play47.net")
After that my whole application is using XML instead of SQL.
The other advantage I find is that if I want to add a business rules layer down the track, I can simply wrap it around the data project and there are no changes to the actual web application.
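In C#, the same swappable-data-layer pattern would look something like this (class names DataSQL and DataXML come from the post above; the GetArticles member is a hypothetical example method):

```csharp
// Hypothetical sketch of the abstract base / swappable backend pattern.
public abstract class Data              // MustInherit in VB.NET
{
    public abstract DataSet GetArticles();
}

public class DataSQL : Data
{
    private string server;
    public DataSQL(string server) { this.server = server; }

    public override DataSet GetArticles()
    {
        // ...query the SQL Server at 'server' and fill a DataSet...
        return new DataSet();
    }
}

public class DataXML : Data
{
    private string path;
    public DataXML(string path) { this.path = path; }

    public override DataSet GetArticles()
    {
        DataSet ds = new DataSet();
        ds.ReadXml(path);   // same interface, different backing store
        return ds;
    }
}

// The application only ever sees the base type:
// Data DataCenter = new DataXML("../data.xml");
```

Because the rest of the application holds only a Data reference, swapping storage is a one-line change at the construction site.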
Tatham Oddie (VB.NET/C#/ASP.NET/VB6/ASP/JavaScript)
tatham@e-oddie.com
+61 414 275 989
|
|
|
|
|
Hello,
I have this scenario:
Process 1: open connection, start infinite loop, [open recordset, search recordset, close recordset]
Process 2: open connection, open recordset, add record, close recordset, close connection.
Both processes are working on the same .MDB file and table.
Process 1 is started and keeps searching (till I press a key).
Process 2 is then started, and a new record is added.
After process 2 closes the recordset, it takes process 1 a few seconds before it finds the newly added item.
1) Did you understand my scenario? If not, please ask me for more details.
2) Is there a faster way for process 1 to find out about the new record?
Thanks,
Jeremy
Jeremy Pullicino
Professional C++ Developer
Done any hacking lately?
|
|
|
|
|
OK, first question: what platform, Access or SQL Server?
I've seen this happen on Access and never solved it. I don't think Access is as aware of commit status as a real SQL server is.
If this is on SQL Server, are you explicitly committing after each insert? Also, you may be running into page caching. I know there is a way to turn it off (it's NOCACHE in Oracle), but I'm not sure what it is in SQL Server.
Mark Conger
Sonork:100.28396
|
|
|
|
|