First of all, why would you need to display a hidden window in a service, anyway? It would seem to serve no purpose.
If you do need to, check the "Allow service to interact with desktop" option in the Services snap-in if the service runs as the SYSTEM account (Local System), or specify a user with login rights to the local machine. The form would only be available in the context of that user, however.
My C# app talks to a SQL database to get information back. The app is basically a GUI representation of the data in the database. When I modify the data in the client C# app, I would like to not only send the changes back to the database, but also to all other clients connected to the SQL database (all clients will be running this same C# app).
Now, I already have the C# app reading, displaying, and sending the data back to the SQL database. What I need to do now is send this modified data to all the other connected clients. My thought is that I will probably need a separate application to do this, possibly using remoting.
Can some of the experts here tell me if I'm on the right track? Does this seem like a very big undertaking? Is remoting the best way to do this or would you suggest something else such as a webservice or an additional SQL table for holding the modified data?
Any suggestions, words of warning, comments, any feedback whatsoever is welcome and appreciated.
I could, but that's not an ideal situation: the total amount of data is several gigabytes' worth. If I only change a single row in the database, why refresh all the data? The client would have to search through several gigs of data just to update a single row. Are there other solutions that would be more efficient?
Since your original question didn't mention anything about gigabytes of data being involved, maybe you would like to clue us in on any other details you're holding back. IMO, if you have gigabytes of data streaming to client apps, you have way too much information. What is your network load when all of these apps start up at once?
Yes, you're right, I should've mentioned that. The database itself has gigabytes' worth of data, but only some of it is shared between multiple clients; a single client won't be loading the entire database's data.
If you use a DataSet to store this information - which works well in a .NET Remoting application because it serializes very nicely - you can use DataSet.GetChanges to return another DataSet with only the rows that changed. Make sure you don't call DataSet.AcceptChanges first, though, otherwise the change tracking is cleared and GetChanges returns nothing.
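A minimal sketch of that pattern (the table and column names here are made up for illustration):

```csharp
using System;
using System.Data;

class GetChangesDemo
{
    static void Main()
    {
        // Build a small table; names are hypothetical.
        DataSet ds = new DataSet("Inventory");
        DataTable table = ds.Tables.Add("Items");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add(1, "Widget");
        table.Rows.Add(2, "Gadget");

        // Pretend this data came from the database unmodified.
        ds.AcceptChanges();

        // A client edits one row.
        table.Rows[0]["Name"] = "Improved Widget";

        // GetChanges returns a DataSet holding only the modified rows,
        // which is all you'd need to ship to the other clients.
        DataSet delta = ds.GetChanges(DataRowState.Modified);
        Console.WriteLine(delta.Tables["Items"].Rows.Count); // 1

        // Calling AcceptChanges first clears the change tracking,
        // after which GetChanges returns null.
        ds.AcceptChanges();
        Console.WriteLine(ds.GetChanges() == null); // True
    }
}
```

The delta DataSet serializes just like the full one, so it can be passed over a remoting channel as-is.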
Currently, I'm storing the information locally in custom collections rather than datasets, but I could make that change quite easily.
So, I could use a dataset for the storage, no problem.
I guess the real question I'm asking is: what's the best way to actually synchronize the data between the clients? The ideal situation I'm looking for is that each client gets notified when data has been changed, and is told what that data is. How that is done, I don't know - my initial thought was that I could build a server app that all the clients connect to. Each client would send data modifications to the server app; the server app would write each modification to the database, then notify all connected clients of it. What do you think, is this feasible? Is there a simpler way?
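The hub-and-spoke design described above can be pictured as a small contract between clients and the server. This is only a sketch with hypothetical names (RowChange, SyncServer); in the real app, subscribers would be connected client proxies rather than in-process delegates:

```csharp
using System;
using System.Collections.Generic;

// A description of one modification, small enough to broadcast.
public class RowChange
{
    public string Table;
    public int RowId;
    public object NewValue;
}

public class SyncServer
{
    // Each callback stands in for a connected client.
    private readonly List<Action<RowChange>> subscribers =
        new List<Action<RowChange>>();

    public void Subscribe(Action<RowChange> onChange)
    {
        subscribers.Add(onChange);
    }

    public void SubmitChange(RowChange change)
    {
        // 1. Write the change to the database (omitted here).
        // 2. Notify every connected client of exactly what changed.
        foreach (Action<RowChange> subscriber in subscribers)
            subscriber(change);
    }
}
```

With this shape, clientA calls SubmitChange, and clientB (having subscribed) learns that, say, Row #31 in Table 123 changed without re-reading anything else.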
As Mazdak mentioned, .NET Remoting is good for this, so long as you host it in a service that allows two-way communication. For instance, if you host it in IIS it is automatically exposed as a Web Service, but because of the request/response nature of HTTP, the server can't call back to the clients. If you use a TcpChannel with a BinaryFormatter, you'll get great performance, too (as far as remote calls go).
Just implement an event on your remoting object. When data is updated via the remoting object, you fire the event. Clients that handle that event will be notified of the changes, which you can pass as event arguments, similar to how most events (especially for controls) work in the .NET base class library.
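A sketch of that event pattern, assuming a hypothetical DataService remoted via a TcpChannel (the channel registration and the client-side sink needed for cross-process events are omitted; only the event mechanics are shown):

```csharp
using System;

// Carries a description of what changed to the handlers.
public class DataChangedEventArgs : EventArgs
{
    public readonly string Description;
    public DataChangedEventArgs(string description)
    {
        Description = description;
    }
}

// Deriving from MarshalByRefObject is what lets Remoting
// pass this object across the channel by reference.
public class DataService : MarshalByRefObject
{
    public event EventHandler<DataChangedEventArgs> DataChanged;

    public void UpdateRow(string table, int rowId, object newValue)
    {
        // ...write the change to the database here (omitted)...

        // Then tell everyone listening exactly what changed.
        EventHandler<DataChangedEventArgs> handler = DataChanged;
        if (handler != null)
            handler(this, new DataChangedEventArgs(
                string.Format("Row {0} in {1} changed", rowId, table)));
    }
}

class Client
{
    static void Main()
    {
        DataService service = new DataService();
        service.DataChanged += (sender, e) => Console.WriteLine(e.Description);
        service.UpdateRow("Table123", 31, "new value");
        // prints "Row 31 in Table123 changed"
    }
}
```

Note that for the event to fire across process boundaries with Remoting, the client's handler target must itself be a MarshalByRefObject reachable over a two-way channel.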
And allow me to ask one more question: my company has researched some 3rd-party database synchronization and data replication tools to do the job of data syncing. I've been telling my boss we should write the data synchronization and replication logic ourselves, but he's leaning more toward a 3rd-party tool. Do you have any suggestions for data replication tools? Do you have any thoughts/warnings on using such non-.NET data replication tools vs. rolling our own data replication scheme?
Thanks for any information you give Heath, always informative. You are like an invaluable consultant in these forums...except that you work for free.
If you're writing an application in .NET, it will typically be easier to do everything in .NET. When you start introducing other technologies such as COM components or making native function calls, you incur performance penalties because of marshaling.
If there are 3rd-party tools in .NET, you might consider that route, though. So long as they come from reputable companies that you know to release good products and have good quality assurance, you won't have to worry too much about maintaining that code (just make sure that there are no bugs related to your software, which you should report to them). They also can provide support, though the unfortunate current trend in modern commercial software is that you have to buy the product and support is an additional charge! Just depends on the vendor, I guess.
There are many ways, depending on the situation and on what your data is. For example, if your clients must always have up-to-date information because the data changes regularly, you don't need a disconnected DataSet; you can use other classes so your client stays connected to the database and always has the latest information, such as the SqlDataReader class. But if you really want to stick with DataSet, you can simply add a function which re-queries and refills the tables in the dataset, and either put it in a timer or hook it to a button so users can click it when they want updated data. As for remoting or a web service, I personally don't think you need them just to get the latest data, because you can simply add a stored procedure to your database which handles this task. If your modified data is the kind that only needs approval, you can simply add a bit field to your table, set it to false for non-approved rows, and query on it when you need to.
Hope that it helps you.
Mazdak wrote: you can use other classes so your client stays connected to the database and always has the latest information
My current setup is that the client application is always connected to the SQL database. The trick, though, is getting clientA to know when clientB has made a data modification, and the exact modification that was made (i.e. instead of querying through several gigs' worth of data, it'd be nice if clientA knew that clientB updated Row #31 in Table 123).
Mazdak wrote: But if you really want to stick with DataSet, you can simply add a function which re-queries and refills the tables in the dataset, and either put it in a timer or hook it to a button so users can click it when they want updated data.
I can't do this in my current setup. The data itself needs to be updated in real time, and filling data tables is not feasible when there are several gigs worth of it.
So maybe remoting could be good. I don't know about the performance with huge amounts of data, but you can create a server which receives this information and notifies all connected clients, like the way you broadcast messages in a chat server application.
It's just an unsigned integer, so you can replace ALG_ID in your P/Invoked method with int or uint (unsigned integers are not CLS-compliant, but using a signed integer means you have to take the sign into account, or just use hex notation, 0x...). For the values, you can either use an enum (which, by default, inherits from int) and cast to an int when you use it, or define a bunch of constants in your class (or do neither and have to remember all the values yourself!). See http://msdn.microsoft.com/library/en-us/security/security/alg_id.asp[^] for more information about the values.
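For instance, a sketch of the enum approach with a few well-known ALG_ID values from wincrypt.h (the CryptCreateHash P/Invoke is shown only for illustration and is Windows-only):

```csharp
using System;
using System.Runtime.InteropServices;

// ALG_ID is just an unsigned 32-bit integer, so an enum maps cleanly.
enum AlgId : uint
{
    CALG_MD5  = 0x00008003,
    CALG_SHA1 = 0x00008004,
    CALG_RC4  = 0x00006801,
    CALG_DES  = 0x00006601
}

static class NativeMethods
{
    // Example of a P/Invoked function taking an ALG_ID as a uint.
    [DllImport("advapi32.dll", SetLastError = true)]
    public static extern bool CryptCreateHash(
        IntPtr hProv, uint algId, IntPtr hKey, uint dwFlags, out IntPtr phHash);
}

class AlgIdDemo
{
    static void Main()
    {
        // Cast the enum to uint at the call site.
        uint md5 = (uint)AlgId.CALG_MD5;
        Console.WriteLine(md5 == 0x8003); // True
    }
}
```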
I have this huge solution containing over 20 projects (mixed VB and C#). I always develop in "Debug" mode and deploy in "Release", as I am sure all of you do.
Here's my problem:
Let's say I deploy my application in Release. When I come back to make some changes, I go back in Debug mode. When I'm done with my changes, for example only in my UI project, I return to Release, I expect to recompile only this UI project and distribute only this file (.exe in this case).
The problem is that VS.NET won't let me do this. If I try to compile the UI only, I get all sorts of errors, as if the other assemblies were not compiled, and I need to recompile ALL projects in order for the UI to build. Take note that all my assemblies are signed (if that matters).
What am I doing wrong? I'd like to distribute only the files that have changed (and the assemblies that reference them since my files are signed). It is really a pain in the neck to distribute all the DLLs all the time. Please help me save some bandwidth
It sounds like you have references to the other projects in your main project, in which case it is working as expected. You can use the Configuration Manager to select which projects to build.
I think you didn't understand the question... I am referencing other projects but in Config Manager, they are not marked for build. However, I absolutely need to rebuild them if I switch between release and debug.
Switching between build configurations seems to remove some temporary build files, which flags the projects as having changed; even the very act of switching may do so. If you use project references in your projects (always a good idea, since it keeps them in sync), building one project that references other projects that have changed will make sure those are compiled, too. It also helps to simply right-click on a single project and select Build, as opposed to using the Build menu in the main menu, which builds the entire solution.
Also, are you using AssemblyVersionAttribute with an asterisk (*) in it? If you are and these projects aren't rebuilt properly, the assembly that depends on them won't link against the right versions of its dependencies, which is important in .NET (since Type and Assembly names are contingent on the Version). In large applications - especially those with lots of dependencies - it is better to manage the version numbers yourself, otherwise you can easily fall into this problem. Without control of the version numbers, you're at the mercy of the compiler, and updating test directories gets to be a REAL pain.
Trust me, maintaining your own versions can be a real blessing! I'm both the software architect and the build master so it's one thing I have to worry about. It didn't take me long to make policy that we do it ourselves. Besides, look at all your commercial .NET libraries and find a good one that uses automated versioning!
With our app, it was even more crucial. For testing, we all drop assemblies in a repository for various reasons (like only some people having licenses to third-party libraries, though we use very few). We do use assembly binding redirection in our .config file, because our app is deployed as a touchless-deployment smart client that we can easily update, but it got to be a complete hassle to keep synced when many different people were dropping locally-tested assemblies into the testing repository!
If it gives you any ideas, here's how we do ours: the major and minor versions are incremented as in most applications. The build number is the same as what .NET generates when you use an asterisk in that position: the number of days since Jan. 1, 2000 (e.g., 1475 for today). .NET uses the number of seconds since midnight of the current day divided by two for the revision number, but we simply increment it for bugfixes to an existing, already-deployed assembly. Feature enhancements are what usually constitute a minor or major version number increment. The build number we typically use just to track the date, relative to the aforementioned epoch, when the assembly was built. That's what .NET's automatic versioning seems to do anyway.
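The two automatic components described above can be computed in a couple of lines; this is only a sketch of the scheme (the class and method names are made up):

```csharp
using System;

class VersionScheme
{
    // The epoch .NET's automatic versioning counts from.
    static readonly DateTime Epoch = new DateTime(2000, 1, 1);

    // Build number: days since Jan. 1, 2000.
    public static int BuildNumber(DateTime date)
    {
        return (date.Date - Epoch).Days;
    }

    // .NET's automatic revision: seconds since midnight, divided by two.
    public static int AutoRevision(DateTime now)
    {
        return (int)(now.TimeOfDay.TotalSeconds / 2);
    }

    static void Main()
    {
        // 1475 days after the epoch lands on January 15, 2004,
        // matching the "1475 for today" figure above.
        Console.WriteLine(BuildNumber(new DateTime(2004, 1, 15))); // 1475
    }
}
```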
Now, what I'm still wondering about is how Microsoft manages theirs. Take a look at the .NET 1.0 assemblies: 1.0.3300.0 (1.0.3300.228 for a file version - not assembly version - in SP2). For .NET 1.1 assemblies, they use 1.0.5000.0. Why that's not 1.1.5000.0, and what that build number represents, I don't know. Also note that they don't change their assembly versions for service packs. You could also consider that approach, using the AssemblyFileVersionAttribute for bug fixes instead of the AssemblyVersionAttribute.