|
By the way, I forgot to say that I looked at DataSet in Reflector, and though I didn't dig very deep it seems pretty clear that you're correct and that site is wrong: XML, it seems, is indeed not the internal representation. I just wanted to excuse myself by showing that the misconception exists elsewhere. I do not, however, wish to continue spreading it!
|
And a final note (I hope - this is fun, but it's getting a bit long and I fear I might start annoying people by now, even if they don't have to read my scribblings).
Wikipedia, that sometimes-authoritative source of information about anything (it always makes me think of THHGTTG), has an article on ADO.NET. It says this about datasets:
A DataSet is populated from a database by a DataAdapter whose Connection and Command properties have been set. However, a DataSet can save its contents to XML (optionally with an XSD schema), or populate itself from XML, making it exceptionally useful for web services, distributed computing, and occasionally-connected applications.
|
dojohansen wrote: Personally I think data adapters are useful and the disconnected data model can be enough for many things.
Don't forget memory intensive, and a real no-no when it comes to interoperability. Speaking as somebody who has spent a lot of time writing code that communicates with Java-based systems, I can tell you that DataSets etc. are just plain evil.
"WPF has many lovers. It's a veritable porn star!" - Josh Smith As Braveheart once said, "You can take our freedom but you'll never take our Hobnobs!" - Martin Hughes.
My blog | My articles | MoXAML PowerToys | Onyx
|
I think you mean "speaking as someone who considers himself quite the expert".
Pete O'Hanlon wrote: Don't forget memory intensive, and a real no-no when it comes to interoperability.
I believe you mean "scalability". If I am mistaken, I invite you to explain to us all how something being "memory intensive" (again, it's not very clear what you mean by this) affects whether or not it is interoperable. I for one have no idea. Exposed in a web service, the dataset becomes an XML stream that the client may process as it comes in over the wire, save to a file and then process as a stream, or load into memory in full and then process in a random-access manner. I cannot imagine how the "memory intensity" of datasets could possibly affect their interoperability.
If you mean "takes up a large amount of memory" then datasets are "intensive" in the sense that you can't easily work with them as streams. But again, a drawback that represents a *potential* issue in *some* usage scenarios is presented in a totally dogmatic fashion as if it was a disqualifying feature of the technology. And again, it's not like entity objects solve this problem either. If you need to keep a dataset in memory there's some reason why that does not magically disappear just because you choose to represent the data in a different way. It may *help* with a more compact representation, but only a totally different approach like streaming would really *solve* such an issue.
So again, my challenge to you: if DataSets are so universally bad, describe a solution that is universally better. I assure you it will not be difficult to do as you did and just point out some potential problems that may exist in your solution and pretend that these automatically make it useless, even though the simple truth is that *ANY* solution has drawbacks and advantages compared to any other.
|
dojohansen wrote: I think you mean "speaking as someone who considers himself quite the expert".
Nope. And don't try to put words into my mouth. I'm speaking as somebody who writes software that has to interoperate with systems running on Java, or other non-.NET systems, where DataSets just add complications and overhead.
dojohansen wrote: I believe you mean "scalability".
You're right. That's what comes of thinking three sentences ahead of what I'm typing.
Also - where did I say that DataSets are universally bad? At no stage did I state that - I did state that they were a no-no when it came to interoperable systems, and this is based on hard learned lessons, with paying clients. Please stop reading more into posts than were ever intended.
If I needed to use a DataSet internally, I would. I can't think of any instance where I've needed to off the top of my head, but that wouldn't stop me. In most cases, I prefer the flexibility of plain old business entities, especially as we use them in conjunction with change notification and validation.
"WPF has many lovers. It's a veritable porn star!" - Josh Smith As Braveheart once said, "You can take our freedom but you'll never take our Hobnobs!" - Martin Hughes.
My blog | My articles | MoXAML PowerToys | Onyx
|
Option 2 always. I can't think of a good enough reason for option 1.
|
Option 2 is the way to go. No need to tie up system resources when it is not being used.
|
Yep, number two. But what that means depends on what the application does.
|
Hi,
The answer is that in ADO.NET you should always close the connection when it isn't in use. The connection classes manage the underlying TCP connection for you, so although you are "logically" closing the connection you are not in fact incurring the cost of tearing down and re-establishing the database connection each time.
In practice this makes the most difference in server apps, where connection pooling is of great use. Whenever your code closes a connection it in fact simply releases that connection to the pool, and the next time a connection is constructed with the exact same connection string the pre-existing pooled connection is returned. (Something happens behind the scenes to reset the connection state as if it were freshly established - see sp_reset_connection in SQL Server; I'm not sure exactly how this is implemented by the other providers.)
But even in a desktop app where each client has a dedicated connection to the database and no pooling takes place it's considered good practice to open and close the connection. I think it is; it makes error handling a little easier. You still have to catch exceptions and perhaps log and present errors, but if the user wishes to retry an operation there's no additional logic to check the state of the connection or find out if it's necessary to open it first, because you simply *always* open it where you need it and close it when you're done with it.
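As a minimal sketch of that open-late/close-early pattern (assuming SQL Server and a "mainDB" connection string in the app's config file; the table name and method are illustrative, not from the post above):

```csharp
using System.Configuration;
using System.Data.SqlClient;

static int CountOrders()
{
    string cs = ConfigurationManager.ConnectionStrings["mainDB"].ConnectionString;

    // Open as late as possible, close as early as possible; with pooling
    // enabled, Dispose() just returns the physical connection to the pool.
    using (var conn = new SqlConnection(cs))
    using (var cmd = new SqlCommand("select count(*) from [orders]", conn))
    {
        conn.Open();
        return (int)cmd.ExecuteScalar();
    }
}
```

If the user retries after an error, code written this way needs no extra state checks: the connection is always opened where it is needed and disposed when done.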
I personally use a simple connection wrapper class to centralize the code implementing the patterns I wish to use. You may not bother doing this if you use code generation for most of your data access code, but if you hand-code this stuff it makes a huge difference - much less code, far fewer errors, and much easier debugging. And if you just put this class in a separate library and never put anything app-specific in it you'll start building reusable code that has applications everywhere.
For example, with my Connection class you can do very common tasks like these very easily:
Connection c = Connection.FromConfig("mainDB");
int count = c.ExecuteScalar<int>("select count(*) from [table] where [col] < @p0", value);
using (c.BeginTransaction())
{
    c.ExecuteProc("update_stock");
    c.ExecuteNonQuery("truncate table [pending_orders]");
    someOtherObject.DoSomething(c);
    c.CommitTransaction();
}
string[] popular = c.GetColumn<string>("select top 10 p.name from product p join sales s on s.productID = p.id order by s.itemsSold desc");
These are just a few examples. The feature shown with the generic ExecuteScalar<T> method is used basically everywhere strings are used to specify command texts. It's inspired by the string formatting feature, as in string.Format("{0} loves {1}", "Dag", "Jennifer"). Just as "{0}" there refers to the first argument in the args array, we adopted the naming convention "@p<index>", so that the first argument is the value for a parameter named "@p0", the second for one named "@p1", and so on. It's simple, efficient, and encourages use of parameterized queries where lazy programmers might otherwise simply concatenate strings (which of course is very dangerous, since it opens the door to SQL injection attacks).
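For illustration, the "@p<index>" convention could be implemented with a small helper along these lines (a sketch only; the poster's actual Connection class is not shown):

```csharp
using System.Data.SqlClient;

static SqlCommand BuildCommand(SqlConnection conn, string sql, params object[] args)
{
    var cmd = new SqlCommand(sql, conn);
    // Positional naming: args[0] binds to @p0, args[1] to @p1, and so on.
    for (int i = 0; i < args.Length; i++)
        cmd.Parameters.AddWithValue("@p" + i, args[i]);
    return cmd;
}
```

An ExecuteScalar<T> wrapper would then build the command this way, execute it, and cast the result to T.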
I really think I should prepare a CodeProject article to present this logic. It's quite simple but I feel a lot of people might find it useful.
|
dojohansen wrote: The connection classes manage the underlying TCP connection for you
Assuming you are connecting to the SQL Server with a TCP/IP connection.
|
True. It's hard to comment on anything here with all the gotchas hiding in the bushes.
|
Always opening and closing the connection, without a helper object, can become really problematic.
I started work on a project that relied on this technique, and here is the problem:
Method A - opens a connection to do its queries and then calls method B.
Method B - opens a connection to do its queries and then calls method C.
Method C - can you see the pattern?
I created some "helper" objects for this. For example, the application considers one database to be the Default database. So, I use:
using (var connection = new ThreadConnection())
{
}
This ThreadConnection is a wrapper class that:
1 - Checks whether a connection for the current thread exists and uses it, or opens a new connection if none is open.
2 - At Dispose(), closes the connection if it created it; if it didn't create the connection, it does nothing.
This solution looks very similar, as methods A, B and C will all create a ThreadConnection object, but only the outermost method (A in this example) will create and dispose the connection.
But if B is called from another method and no connection has been created yet, then B will create and close it. Much better than having methods with overloads so they either create the connection or use an existing one (that was the "original solution" in the project I work on now), and absolutely better than opening hundreds of connections as each method opens its own.
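A sketch of how such a ThreadConnection wrapper might look (the names and the connection-string handling are guesses, not the actual code from that project):

```csharp
using System;
using System.Data.SqlClient;

sealed class ThreadConnection : IDisposable
{
    // One ambient connection per thread.
    [ThreadStatic] private static SqlConnection current;

    private readonly bool owner; // true only for the outermost scope

    public ThreadConnection(string connectionString)
    {
        if (current == null)
        {
            current = new SqlConnection(connectionString);
            current.Open();
            owner = true; // this instance opened it, so it must close it
        }
    }

    public SqlConnection Connection { get { return current; } }

    public void Dispose()
    {
        if (owner)
        {
            current.Dispose(); // inner scopes (B, C) leave it alone
            current = null;
        }
    }
}
```

Methods A, B and C each wrap their work in `using (var tc = new ThreadConnection(cs))`, but only the first one on the thread actually opens and later closes the physical connection.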
|
It's a very good and important question for beginners. It is not wise to hold the connection open for the whole application's lifetime: it ties up memory and server resources and costs performance. Closing and opening database connections may be a bit more work, but it dramatically enhances performance and is best practice as well.
Either you love IT or leave IT...
|
When we log in on a site, after login it shows a welcome page. If I copy and paste the welcome page URL into another browser, the page is redirected to the login page. How does this happen? What coding is done in the web.config file? Please help me out...
|
deepak baldia wrote: when we log in on a site, after login it shows a welcome page. if i copy and paste the welcome page url in another browser, the page is redirected to the login page. how does it happen
Your authentication ticket is stored in a cookie. If you open up another browser it doesn't have the cookie and it redirects you to the login page so you can generate an authentication ticket.
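For reference, that behaviour is typically configured in web.config with ASP.NET forms authentication, roughly like this (a sketch, not your actual configuration; the login page name is illustrative):

```xml
<system.web>
  <authentication mode="Forms">
    <!-- Requests without a valid authentication cookie are redirected to loginUrl -->
    <forms loginUrl="Login.aspx" timeout="30" />
  </authentication>
  <authorization>
    <!-- "?" denies all anonymous (unauthenticated) users -->
    <deny users="?" />
  </authorization>
</system.web>
```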
|
i have a question;
how can i insert an image into a sql server database? and how can i configure it to show the information in a datagrid?? thinks
..Med
|
mohamedmrc wrote: i have a question ;
how i can insert an image to a sql server data base?
The butler is out, but you could try Google[^] or the examples[^] on this site in the meantime.
I are troll
|
mohamedmrc wrote: thinks
Indeed, give that a try.
|
led mike wrote: mohamedmrc wrote:
thinks
Indeed, give that a try.
It'll never catch on.
"WPF has many lovers. It's a veritable porn star!" - Josh Smith As Braveheart once said, "You can take our freedom but you'll never take our Hobnobs!" - Martin Hughes.
My blog | My articles | MoXAML PowerToys | Onyx
|
If I have a quantity of unmanaged code (in C, designed for another system), and most of it could be safely terminated at any time but it makes callbacks into managed code which could not be so terminated, would it be safe to do something like:
volatile int statflag;

if (Exchange(&statflag, 1) == 0)
{
    call_managed_code();
    if (Exchange(&statflag, 0) != 1)
        terminate();
}
else
    terminate();

' In the other thread...
If Threading.Interlocked.Exchange(statflag, 2) = 0 Then
    TheThread.Terminate
End If
If the portion of the code which is in C never does anything that could not be safely stopped asynchronously while statflag is one, would there be any danger in calling Terminate() upon such a thread? Obviously any resources that had been allocated by that thread would have to be freed elsewhere, but in this particular application I'm expecting the unmanaged code to use only resources that are given to it by other code (which could then take care of any necessary de-allocation).
In the particular application, there is a risk that the C portion of the code could hang without any callbacks to the managed portion, so I'd like to be able to use something more "potent" than Thread.Interrupt; is Thread.Terminate safe if used only upon a thread which is known not to hold any locks or be manipulating any other dangerous constructs?
|
supercat9 wrote: would there be any danger in calling Terminate() upon such a thread?
Hi again, welcome to CodeProject. Let me just warn you that many of the regular members here will tend to flame on people who are obviously not reading the documentation.
TerminateThread[^]
TerminateThread is a dangerous function that should only be used in the most extreme cases.
Terminating threads is never considered Best Practice. On the other hand if you have no other choice then it really doesn't matter what dangers might exist. Obviously, if you do have another choice then go with the other choice.
|
Hi again, welcome to CodeProject. Let me just warn you that many of the regular members here will tend to flame on people who are obviously not reading the documentation.
It states that it is a dangerous function that should only be used in extreme cases. I can certainly appreciate that many bad things can happen if it is done on managed code or code which uses the Windows API. My situation is a bit different from the usual one, though, and I was wondering whether anyone had done anything similar.
I'm writing code for a microcontroller with a few kbytes of RAM and a few dozen kbytes of code. For testing purposes, I would like to be able to run the code on a Windows machine, replacing the physical I/O routines with calls to wrapper functions which would call back into managed code that would simulate the I/O in question. If every call to a wrapper function checks for a "terminate, please" flag, then any loop which calls any of the wrapper functions will exit once the flag in question is set, but there's no guarantee of when that will actually take place. If something goes wrong in the target system code, such callbacks might never take place.
Since the target application shouldn't get stuck in such a loop, it probably wouldn't be the end of the world to require that someone use ctrl-alt-del and kill the non-responsive application, but that would seem more dangerous than having the application terminate the stuck thread, especially since the application might be doing things on other threads that should not be blindly terminated.
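The cooperative part of that scheme (every wrapper checks a "terminate, please" flag) might look roughly like this on the managed side; the names are invented for illustration:

```csharp
using System;

static class Simulator
{
    // Set from the UI thread to request a cooperative exit.
    static volatile bool stopRequested;

    // Every I/O wrapper the simulated firmware calls goes through here,
    // so any loop that performs I/O unwinds once the flag is set.
    static void CheckForStop()
    {
        if (stopRequested)
            throw new OperationCanceledException("simulator stop requested");
    }
}
```

Only if the C code loops without ever calling a wrapper does the harder question of forcibly terminating the thread arise.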
|
supercat9 wrote: For testing purposes
For testing I would think as long as you can accomplish the testing goals you wouldn't care about the dangers. I imagine the author talking about dangers is addressing production software environments.
|
I imagine the author talking about dangers is addressing production software environments.
The system might get used in something between a testing and production environment; the actual hardware device includes the ability to initiate TCP connections to a server, and it's possible that customers might use the "simulator" to simulate having a number of devices connecting to the server at once. By the time the code gets to the customer it shouldn't get into bad situations, but in case something bad happens (e.g. the server comes back with a response that would confuse the device) it would be nicer to allow a controlled shutdown of the thread than to force the user to kill the application.
I guess my main question should perhaps have been better phrased as, "The documentation says to avoid using TerminateThread; is there any accepted style for wrapping it in those circumstances where its use may be reasonably safe and appropriate?" I guess if there's no way to recover the stack in older versions of Windows that would be bad, but it could still be better than having a stuck thread gobble up all the CPU time it can get.
On a related note, under what circumstances is an "isBackground" thread killed off blindly when an application terminates? If a thread is doing a "WriteAllText" to create and write a file, is the function guaranteed to either fail altogether or succeed completely in case of application termination, or should it be surrounded by saving the isBackground property of the current thread, setting it to False, performing the function, and restoring the old value of isBackground?
In any case, thanks for responding. -- John
|