|
In it? Man, I'm up to my neck in it, and it's great
Granted, I tend to stray from proper MVVM now and then, but I gotta say, WPF enables some nice-looking and stable interfaces... Never going back to WinForms
|
|
|
|
|
So I'm assuming that the VM is bound to the view automatically?
If it's not broken, fix it until it is
|
|
|
|
|
Ideally, you don't want to change the DataContext property from code. The usual method is to set the entire window to a ViewModel, and each section can have its DataContext bound to a property on that ViewModel.
MainWindow --> DataContext = MainWindowViewModel
ContentFrame --> DataContext bound to ContentModel property (on MainWindowViewModel)
HelpFrame --> DataContext bound to HelpModel property (on MainWindowViewModel)
That way, when you change the content, all you do is change, in this example, the ContentModel property. Since the GUI control is hooked up with databinding, it'll automatically grab the new data, and switch to the proper DataTemplate.
So in short, you want to set the uppermost DataContext property once on window load, then operate exclusively through ViewModel properties.
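A minimal XAML sketch of that setup (the window, ViewModel, and property names here are hypothetical, just mirroring the example above):

```xml
<!-- MainWindow.xaml: hypothetical sketch of the hierarchy described above -->
<Window x:Class="MyApp.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition />
            <RowDefinition />
        </Grid.RowDefinitions>
        <!-- Each section's DataContext binds to a property on the
             window-level MainWindowViewModel. -->
        <Frame Grid.Row="0" x:Name="ContentFrame"
               DataContext="{Binding ContentModel}" />
        <Frame Grid.Row="1" x:Name="HelpFrame"
               DataContext="{Binding HelpModel}" />
    </Grid>
</Window>
```

Swapping what ContentFrame shows then means assigning a new object to ContentModel (with the usual INotifyPropertyChanged notification) rather than touching the control itself.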
|
|
|
|
|
I understand that, but Pete pointed me to this[^], and I'm wondering where or how in this design is the data context set?
So, if the content area is defined like this
<ContentPresenter x:Name="MainContent"
Grid.Row="2"
Grid.Column="0"
Content="{Binding MainContent}" />
In the MainWindowViewModel I would then set the MainContent property to the MainContentViewModel. How does the MainContentViewModel.DataContext get set? It won't know about the MainWindowViewModel.
Unless I'm missing something?
If it's not broken, fix it until it is
|
|
|
|
|
DataContext is inherited, so when the window itself is created, set its DataContext to a root ViewModel class. Then, when you use the code you just posted, it would bind to the "MainContent" property of that ViewModel, which would be your MainContentViewModel (or whatever you decide to call it).
The initial DataContext property for the window is set in code... After that, everything is data-bound.
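That one-time hookup is typically a single line in the window's constructor; a sketch (class names hypothetical):

```csharp
// MainWindow.xaml.cs -- hypothetical sketch
public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        // The only DataContext assignment made in code; every nested
        // control inherits or binds from here on.
        DataContext = new MainWindowViewModel();
    }
}
```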
|
|
|
|
|
Got it! Thanks
If it's not broken, fix it until it is
|
|
|
|
|
No problem, man. Glad we could help
|
|
|
|
|
|
My office has gone 100% thin client [VMware], the exception being us developers. I'm not well versed in the official descriptive terms, but it's the type of environment where each user has an image stored on a server, initially identical, with a fixed amount of storage available to them.
One problem is that a number of my applications, when moved to the TC, do not function correctly. This has various causes: different versions of Windows applications that break my references, or a local dedicated impact printer that required a local driver.
I'm requesting that I be given an instance that is non-volatile (i.e., one I can customize by installing compilers, etc., that won't be destroyed on the next refresh). Then I was asked the following:
- Are there specialized compilers and/or techniques that should be used when coding for this type of TC environment?
- Related to this is the question of the feasibility/sense of a single copy (rather than clones in each workspace) being a possible development target?
In light of the above questions, I'm hoping for some answers, best practices, and references.
Thanks, in advance
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
modified 19-Jun-12 12:38pm.
|
|
|
|
|
May I ask what they define as a thin client? AFAIK, a tablet PC would qualify as a "thin" client - Windows, OTOH, is a "rich" client. No, you don't need a special compiler to run software in a VM that's running Windows.
W∴ Balboos wrote: Related to this is the question of the feasibility/sense of a single copy (rather than clones in each workspace) being a possible development target?
A single copy means that you only have to maintain a single point. Having clones means that you'll also need to update the clones.
Bastard Programmer from Hell
|
|
|
|
|
I thought the reference to VMWare in the first line delineated the type of thin client.
Your particular division doesn't seem to fit: the users have small boxes on their desktops which connect them to our network; they run a local instance of Win-XP (downloaded from their space on the server), along with various applications that run under Win-XP. So, running individual Windows instances in VMs, resident upon a remote server: rich or thin? If a tablet PC runs stand-alone, how could that qualify as a thin client?
Perhaps better phrased: I know that the applications developed for Win-XP and Visual Studio will work in the TC environment: my argument to management is that I should be developing in the environment in which the apps will run in order to be sure they actually do run (an environment that won't be wiped out every time the user instances get a general reset). One of the managers then wondered if there is, perhaps, specialization to working within this TC environment (ergo, special tools) rather than considering that, virtual though it may be, it may be treated as if programming for a standard desktop.
A single copy vs. individual copies for each user: maintenance is not an issue, as they can update them all at a stroke in either case. There could conceptually be a difference in application builds, libraries, etc., for the single-copy version, as, at the least, it must take care of any number of active threads.
As my understanding of the server world is limited, perhaps I missed something in your response.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
|
|
|
|
W∴ Balboos wrote: So, running individual Windows in VMs, resident upon a remote server: rich or thin? If a tablet PC runs stand-alone, how could that qualify as a thin client?
Let's skip the argument that I caused, it's not going to help much.
W∴ Balboos wrote: I know that the applications developed for Win-XP and Visual Studio will work in the TC environment: my argument to management is that I should be developing in the environment in which the apps will run in order to be sure they actually do run (an environment that won't be wiped out every time the user instances get a general reset). One of the managers then wondered if there is, perhaps, specialization to working within this TC environment (ergo, special tools) rather than considering that, virtual though it may be, it may be treated as if programming for a standard desktop.
Such a tool would need to "save" any changes made to the system; otherwise it wouldn't help much. I don't know of a tool that does that; cloning is usually an all-or-nothing deal, and the only facility I have seen used Citrix to "deploy" the app to the clones.
FWIW; Windows NT supports symbolic links[^], and you could have some directories in the clone "point" to writable places.
Bastard Programmer from Hell
|
|
|
|
|
The arguments were mostly out of my ignorance about a field newly sprung upon me - learning the correct descriptions will prevent me from confusing the issues certain to appear in the future.
The all-or-nothing thing is where the (main) problem lies - even if I load the compiler and back up my source, the registry will be trashed (from my point of view) every time they refresh from their image.
Following your link, I looked into the symbolic link idea. It seems feasible - recreate them whenever the system is refreshed, pointing to a non-volatile work area - but it still seems that the registry, now lacking all references to the applications in their non-volatile home, would leave me with an unworkable system.
Still - thanks. It's somewhere to begin. Really, they need to not refresh the developer VMs.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
|
|
|
|
Some time ago I used a tool to create and compare registry snapshots. You could use such a tool to identify which changes are made to the registry, save a description of those changes in a database on a remote server, and import them into the local registry when your app starts.
That would create a lot of extra work, but that way you wouldn't have to worry about registry settings from third-party components.
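As a sketch of the replay half, assuming the snapshot diff has already been reduced to a list of (key, name, value) entries, the stock Microsoft.Win32 registry API would be enough (key paths and names here are made up):

```csharp
using Microsoft.Win32;

// Hypothetical sketch: re-apply saved registry settings when the app
// starts, after a refresh has wiped the local registry.
static void RestoreSetting(string subKeyPath, string valueName, object value)
{
    // CreateSubKey opens the key, creating it if the refresh removed it.
    using (RegistryKey key = Registry.CurrentUser.CreateSubKey(subKeyPath))
    {
        key.SetValue(valueName, value);
    }
}

// e.g. RestoreSetting(@"Software\MyCompany\MyApp", "InstallDir", @"D:\Work\MyApp");
```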
W∴ Balboos wrote: Really, they need to not refresh the developer VM's
The argument would be that it keeps down the cost of keeping your system up. I'd have a hard time if I couldn't install a supporting tool on demand.
Bastard Programmer from Hell
|
|
|
|
|
I considered that - basically, why not take it further and simply image my virtual machine and restore it altogether?
In your version, I think I'd end up swapping out registry changes that were made for changes made to all the systems: even if mine still ran, I'd be diverging from the standard configuration.*
If I image the drive, everything will work - but now my system has again diverged from the standard configuration.*
Your way, with a registry reconciliation, or via the imaging - both would work for the short haul.
A server-tending friend said that at his place (a large international bank) they set up the developers' VMs the same as the users', give them full admin privilege on their area, and then they're on their own. That would work as well as the other two options and save some headaches. All they need to do is tell us when they make upgrades, install SPs, add new applications, etc. I believe the term is cooperation.
* standard configuration implying the same version, SP, etc., that the users have.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
|
|
|
|
W∴ Balboos wrote: give them full admin privilege on their area,
That sounds ideal
Bastard Programmer from Hell
|
|
|
|
|
W∴ Balboos wrote: One problem is that a number of my applications, when moved to the TC, do not function correctly. This has various causes - different version of Windows applications which break my references or a local dedicated impact printer that required a local driver. I'm requesting that I be given an instance that is non-volatile
I don't see anything in the above that really has anything to do with the VM.
You have environment X.
Your target system(s) have environment Y.
You are creating applications that run in X and do not run in Y.
So for the above the following options exist.
1. You must have access to systems that match each different Y. And your project schedule must include time for FULLY testing on each.
2. Refuse to support the app on anything that does not have a list (A, B, C) of installed features.
3. Add a large amount of time to every project to allow you to run from box to box to figure out why it isn't working. Be assured that this will take more time than 1.
You can ease some of this by recognizing that your app must have a certain feature and testing for that feature before using it. If the feature is not there, present an error saying exactly that. That should occur at app start-up; without the feature, the app either refuses to start or disables some functionality.
If you choose option 1 then you MUST have access to systems that match each environment. It doesn't matter how that is physically managed.
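The feature check described above can be as simple as probing for the dependency before the app commits to using it; a sketch for the COM case (the ProgID here is just an example):

```csharp
using System;
using System.Windows.Forms;

// Hypothetical sketch: detect a required COM feature at start-up and
// fail with a clear message instead of a broken reference later.
static bool SpellCheckerAvailable()
{
    // GetTypeFromProgID returns null when the ProgID is not registered.
    return Type.GetTypeFromProgID("Word.Application") != null;
}

// At app start-up:
// if (!SpellCheckerAvailable())
// {
//     MessageBox.Show("Microsoft Word is required for spell checking.");
//     return; // refuse to start, or disable the feature instead
// }
```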
|
|
|
|
|
jschell wrote: 1. You must have access to systems that match each different Y. And your project schedule must include time for FULLY testing on each. This is how I do business - and software with long-term stability is the result. I would like to maintain that reputation.
The somewhat obtuse question, whether special compilers exist, was pushed on me by management. It's not really that bad an idea to check whether there's a specialized method for coding/building for the VMs. My hope was, and you confirm, that there is not. Asking is part of due diligence.
jschell wrote: If you choose option 1 then you MUST have access to systems that match each environment. It doesn't matter how that is physically managed. Which is in perfect agreement with what I keep telling them. They'll break down eventually, but the wasted time is frustrating.
Thanks
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
|
|
|
|
First of all, differentiate between hardware and software requirements.
As a developer, you need your development environment including all its dependencies (third party libraries) installed on the virtual machine. Then your references won't be broken.
Often some folders are redirected to network shares - the user's home directory need not be in C:\Documents and Settings\Username... If you hardcoded such paths, it's a good idea to correct that!
The other point is hardware redirection. VMWare view can do a lot of hardware redirection. But of course, when hardware is redirected from the ThinClient to the virtual machine, you need a driver installed on the Virtual Machine.
But you lucky guy have Windows XP ThinClients - this allows for installing drivers on the client, and even sharing a printer which is connected to the client. With Linux clients, that's practically impossible.
|
|
|
|
|
Pretty much, your description of what I should do is what I want to be doing: I long ago understood the "it works on my machine" experience.
My broken items were broken not via hard-coded paths, but by adding references to Windows objects via Visual Studio. The object has either moved, had its name changed, or moved with the new version of MS Office (in this case, the spell-checker broke). This is precisely why I'm insisting on a development environment in the same VMs as the users. Local printers, reinstalled at the same workstation, failed to operate: apparently the driver needs to be installed on the VM client that is physically connected to the (Okidata) printer. The server folks don't want to do this - so the printer only prints ASCII streams: all but useless.
A shame, too, as you noted that our XP ThinClients allow this. That, of course, is a second side of the problem with the administrator(s) of our server farm. Eventually, he'll give in (at least for a usable environment) - but only after he's extracted a pound of flesh wherever he can.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
|
|
|
|
Hello,
I have a problem with an exercise in Rational Rose. Here are the questions:
1) Using the wizard to create a class diagram (relational mode as stereotype), build the class diagram for the problem, limiting it to the Customer and Order tables. Use the generated code to create the corresponding Oracle 9i database schema.
2) Implement the object database diagram.
I created the relational schema and generated the script, but I don't know how to implement the object database diagram.
Do you have any ideas?
Thank you in advance.
|
|
|
|
|
I'm building a retail application where the software will be installed locally on the client. There will be a local copy of the database, and the software will do CRUD operations on it.
Then, each day the software will send inserts/updates/deletes to the server. On the server the database is identical to the local DB.
The reason for this design is that if the user cannot connect to the server, the software has to still be able to run. The customer cannot be stopped from using the software in their store because they cannot connect to the server.
To facilitate this I'm going to use composite primary keys. The first part of the key is the CustomerId, and the second is the Id of that table. So, for an Order row the key would be CustomerId + OrderId. This way, when the data is pushed to the server I will have unique keys across all customers.
The question is this... Assume a new customer downloads and installs the software. How do I get the CustomerId? I'm guessing that during setup I would connect to the server and get the next available CustomerId, and to do that I would have to store the CustomerId to the server's table right then & there. But again, what if they can't connect?
Anyone have a better idea? Any problems with this design?
Thanks
If it's not broken, fix it until it is
|
|
|
|
|
Why not use a GUID for the id field? It's better because it doesn't rely on knowledge of server values.
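For comparison, a GUID key needs no server round-trip at all; a sketch:

```csharp
using System;

// Hypothetical sketch: each row gets a globally unique id at insert
// time, so offline clients never collide when their rows are pushed
// to the server.
Guid orderId = Guid.NewGuid();
```

Size-wise, a GUID is 16 bytes per key versus 8 for a composite of two 32-bit ints, so the per-row overhead is modest.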
|
|
|
|
|
I thought of that, but then I figured I didn't want to be transmitting a lot of GUIDs to the server. Seems like a lot of data each trip.
If it's not broken, fix it until it is
|
|
|
|
|
As much as I loathe the blind use of GUIDs, they don't consume "that much". And no, you needn't select the GUID with each record that you're fetching; it'd only become part of the filter statement.
Bastard Programmer from Hell
|
|
|
|
|