|
My office has gone 100% thin client (VMware), the exception being us developers. I'm not well versed in the official descriptive terms used, but it is the type of environment where each user has an image stored on a server, initially identical, with a fixed amount of storage space available to them.
One problem is that a number of my applications do not function correctly when moved to the TC. This has various causes - different versions of the Windows applications, which break my references, or a local dedicated impact printer that required a local driver.
I'm requesting that I be given an instance that is non-volatile (i.e., one I can customize by installing compilers, etc., that won't be destroyed on the next refresh). I was then asked the following:
- Are there specialized compilers and/or techniques that should be used when coding for this type of TC environment?
- Related to this is the question of the feasibility/sense of a single copy (rather than clones in each workspace) as a possible development target.
In light of the above questions, I'm hoping for some answers, best practices, and references.
Thanks in advance.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
modified 19-Jun-12 12:38pm.
|
May I ask what they define as a thin client? AFAIK, a tablet PC would qualify as a "thin" client; Windows, OTOH, is a "rich" client. No, you don't need a special compiler to run software in a VM that's running Windows.
W∴ Balboos wrote: Related to this is the question of the feasibility/sense of a single copy (rather than clones in each workspace) being a possible development target?
A single copy means that you only have to maintain a single point. Having clones means that you'll also need to update the clones.
Bastard Programmer from Hell
|
I thought the reference to VMWare in the first line delineated the type of thin client.
Your particular definition doesn't seem to fit: the users have small boxes on their desktops which connect them to our network; they run a local instance of Win-XP (downloaded from their space on the server), along with various applications that run under Win-XP. So, running individual Windows instances in VMs, resident upon a remote server: rich or thin? If a tablet PC runs stand-alone, how could that qualify as a thin client?
Perhaps better phrased: I know that the applications developed for Win-XP and Visual Studio will work in the TC environment. My argument to management is that I should be developing in the environment in which the apps will run, in order to be sure they actually do run (an environment that won't be wiped out every time the user instances get a general reset). One of the managers then wondered if there is, perhaps, some specialization to working within this TC environment (ergo, special tools), rather than considering that, virtual though it may be, it may be treated as if programming for a standard desktop.
A single copy vs. individual copies for each user: maintenance is not an issue, as they can update them all at a stroke in either case. There could conceptually be a difference in application builds, libraries, etc., for the single-copy version as, at the least, it must take care of any number of active threads.
As my understanding of the server world is limited, perhaps I have missed something in your response.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
W∴ Balboos wrote: So, running individual Windows in VMs, resident upon a remote server: rich or thin? If a tablet PC runs stand-alone, how could that qualify as a thin client?
Let's skip the argument that I caused, it's not going to help much.
W∴ Balboos wrote: I know that the applications developed for Win-XP and Visual Studio will work in the TC environment: my argument to management is that I should be developing in the environment in which the apps will run in order to be sure they actually do run (an environment that won't be wiped out every time the user instances get a general reset). One of the managers then wondered if there is, perhaps, specialization to working within this TC environment (ergo, special tools) rather than considering that, virtual though it may be, it may be treated as if programming for a standard desktop.
Such a tool would need to "save" any changes made to the system; otherwise it wouldn't help much. I don't know of a tool that does that; cloning is usually an all-or-nothing deal, and the only setup I have seen used Citrix to "deploy" the app to the clones.
FWIW; Windows NT supports symbolic links[^], and you could have some directories in the clone "point" to writable places.
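To make the redirection idea concrete, here is a minimal sketch using Python for portability; on Windows the same thing is done with mklink /D, and the directory names below are invented for illustration:

```python
import os
import tempfile

# Sketch: point a directory inside the (re-imaged) clone at a writable,
# non-volatile location. Directory names are made up for illustration.
persistent = tempfile.mkdtemp(prefix="persistent_")  # survives a refresh (assumed)
clone_root = tempfile.mkdtemp(prefix="clone_")       # wiped on every refresh

link = os.path.join(clone_root, "Projects")
os.symlink(persistent, link, target_is_directory=True)

# Anything written through the link actually lands in the persistent area.
with open(os.path.join(link, "notes.txt"), "w") as f:
    f.write("kept across refreshes")

print(os.path.exists(os.path.join(persistent, "notes.txt")))  # True
```

The links themselves live on the volatile side, so they would have to be recreated after each refresh, e.g. from a login script.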
|
The arguments were mostly out of my ignorance about a field newly sprung upon me - learning the correct descriptions will prevent me from confusing the issues certain to appear in the future.
The all-or-nothing thing is where the (main) problem lies - even if I load the compiler and back up my source, the registry will be trashed (from my point of view) every time they refresh from their image.
Following your link, I looked up the symbolic link idea. It would seem to be feasible, recreate them whenever the system's refreshed, pointing to a non-volatile work area - but it still seems that the registry, now lacking all references to the applications in their non-volatile home, would leave me with an unworkable system.
Still - thanks. It's somewhere to begin. Really, they need to not refresh the developer VMs.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
Some time ago I used a tool to create and compare registry snapshots. You could use such a tool to identify which changes are made to the registry, save a description of those changes in a database on a remote server, and import them into the local registry when your app starts.
That would create a lot of extra work, but that way you wouldn't have to worry about registry-settings from third-party components.
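Modeling a snapshot as a simple mapping of value paths to data, the compare step might look like this (a sketch only; the registry paths are invented for illustration):

```python
def diff_snapshots(before, after):
    """Compare two registry snapshots (path -> value) and describe the
    changes needed to turn `before` into `after`."""
    changes = {"added": {}, "modified": {}, "removed": []}
    for path, value in after.items():
        if path not in before:
            changes["added"][path] = value
        elif before[path] != value:
            changes["modified"][path] = value
    changes["removed"] = [p for p in before if p not in after]
    return changes

# Hypothetical snapshots taken before and after installing a component:
before = {r"HKLM\Software\Acme\Version": "1.0"}
after = {r"HKLM\Software\Acme\Version": "2.0",
         r"HKLM\Software\Acme\Spell\Path": r"C:\Tools\spell.dll"}

print(diff_snapshots(before, after)["added"])
```

The resulting change description is what you would store on the remote server and replay into the local registry at app start-up.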
W∴ Balboos wrote: Really, they need to not refresh the developer VM's
The argument would be that it keeps down the cost of keeping your system up. I'd have a hard time if I couldn't install a supporting tool on demand.
|
I considered that - basically, why not take it all the way and simply image my virtual machine and restore it, altogether?
In your version, I think I'd end up swapping registry changes made for me against changes made to all the systems: even if mine still ran, I'd be diverging from the standard configuration.*
If I image the drive, everything will work - but now my system has again diverged from the standard configuration.*
Your way, with a registry reconciliation, or via the imaging: both would work for the short haul.
A server-tending friend said that at his place (a large international bank) they set up the developers' VMs the same as the users', give them full admin privilege on their area, and then they're on their own. That would work as well as the other two options, and save some headaches. All they need to do is tell us when they make upgrades, install SPs, add new applications, etc. I believe the term is cooperation.
* "Standard configuration" implying the same version, SP, etc., that the users have.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
W∴ Balboos wrote: give them full admin privilege on their area,
That sounds ideal
|
W∴ Balboos wrote: One problem is that a number of my applications, when moved to the TC, do not function correctly. This has various causes - different version of Windows applications which break my references or a local dedicated impact printer that required a local driver. I'm requesting that I be given an instance that is non-volatile
I don't see anything in the above that really has anything to do with the VM.
You have environment X.
Your target system(s) have environment Y.
You are creating applications that run in X and do not run in Y.
So for the above the following options exist.
1. You must have access to systems that match each different Y. And your project schedule must include time for FULLY testing on each.
2. Refuse to support the app on anything that does not have a list (A, B, C) of installed features.
3. Add a large amount of time to every project to allow you to run from box to box to figure out why it isn't working. Be assured that this will take more time than 1.
You can ease some of this by recognizing that your app must have a certain feature and then testing for that feature before using it. If the feature is not there, present an error saying exactly that. That should occur at app start-up; without the feature, the app either refuses to start or disables some functionality.
If you choose option 1 then you MUST have access to systems that match each environment. It doesn't matter how that is physically managed.
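The start-up check suggested above might be sketched like this (the feature names and probes are hypothetical stand-ins for whatever environment Y actually requires):

```python
import importlib.util
import sys

def check_features(required):
    """Probe each feature the app depends on and return the names of
    anything missing, so the start-up error can say exactly what's wrong."""
    return [name for name, probe in required.items() if not probe()]

# Hypothetical probes for target environment Y:
required = {
    "python 3": lambda: sys.version_info[0] >= 3,
    "sqlite3 module": lambda: importlib.util.find_spec("sqlite3") is not None,
}

missing = check_features(required)
if missing:
    # Refuse to start and name the gap exactly, rather than failing later.
    raise SystemExit("Cannot start, missing: " + ", ".join(missing))
```

The same pattern applies whatever the feature is - a COM object, a DLL, a printer driver - as long as each one has a cheap probe.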
|
jschell wrote: 1. You must have access to systems that match each different Y. And your project schedule must include time for FULLY testing on each. This is how I do business - and software with long-term stability is the result. I would like to maintain that reputation.
The somewhat obtuse question - do special compilers exist? - was pushed on me by management. It's not really that bad an idea to check whether there's a specialized method for coding/building for the VMs. My hope was, and you confirm, that there is not. Asking is part of due diligence.
jschell wrote: If you choose option 1 then you MUST have access to systems that match each environment. It doesn't matter how that is physically managed. Which is in perfect agreement with what I keep telling them. They'll break down, eventually, but the wasted time is frustrating.
Thanks
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
First of all, differentiate between hardware and software requirements.
As a developer, you need your development environment including all its dependencies (third party libraries) installed on the virtual machine. Then your references won't be broken.
Often some folders are redirected to network shares - the user's home directory need not be in C:\Documents and Settings\Username... If you hardcoded such paths, it's a good idea to correct that!
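For example (Python used as illustration; the app-relative path is hypothetical), resolving the profile location at run time instead of hardcoding it:

```python
import os

# Ask the OS where the profile lives rather than assuming
# C:\Documents and Settings\Username - redirected folders then just work.
home = os.path.expanduser("~")
settings = os.path.join(home, "MyApp", "settings.ini")  # hypothetical app file

print(settings.startswith(home))  # True
```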
The other point is hardware redirection. VMware View can do a lot of hardware redirection. But of course, when hardware is redirected from the thin client to the virtual machine, you need a driver installed on the virtual machine.
But you, lucky guy, have Windows XP thin clients - this allows for installing drivers on the client, and even sharing a printer which is connected to the client. With Linux clients, that's practically impossible.
|
Pretty much, your description of what I should do is what I want to be doing: I long ago understood the "it works on my machine" experience.
My broken items were broken not via hard-coded paths but rather by references to Windows objects added via Visual Studio. The object has either moved, had its name changed, or moved with the new version of MS Office (in this case, the spell-checker was broken). This is precisely why I'm insisting on a development environment in the same VMs as the users. Local printers, reinstalled at the same workstation, failed to operate: apparently the driver needs to be installed on the VM client that is physically connected to the (Okidata) printer. The server folks don't want to do this - so the printer only prints ASCII streams: all but useless.
A shame, too, as you noted that our XP thin clients allow this. That, of course, is the second side of the problem: the administrator(s) of our server farm. Eventually, he'll give in (at least for a usable environment) - but only after he's extracted a pound of flesh wherever he can.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein | "As far as we know, our computer has never had an undetected error." - Weisert | "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010 |
|
Hello,
I have a problem in an exercise with Rational Rose. Here are the questions:
1) Using the wizard to create a class diagram (relational mode as stereotype), set up the class diagram of the exercise, limiting it to the customer and order tables. Implement the code obtained to create the schema of the corresponding Oracle 9i database.
2) Implement the object database diagram.
I created the relational schema and generated the script, but I don't know how to implement the object database diagram.
Do you have any idea?
Thank you in advance.
|
I'm building a retail application where the software will be installed locally on the client. There will be a local copy of the database, and the software will do CRUD operations on it.
Then, each day the software will send inserts/updates/deletes to the server. On the server the database is identical to the local DB.
The reason for this design is that if the user cannot connect to the server, the software has to still be able to run. The customer cannot be stopped from using the software in their store because they cannot connect to the server.
To facilitate this I'm going to use composite primary keys. The first part of the key is the CustomerId, and the second is the Id of that table. So, for an Order row the key would be CustomerId + OrderId. This way, when the data is pushed to the server I will have unique keys across all customers.
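A tiny sketch of that key scheme (the id values are invented for illustration):

```python
def composite_key(customer_id, order_id):
    """The composite primary key: CustomerId plus the table's own id."""
    return (customer_id, order_id)

# Two stores can both hand out local OrderId 1 without colliding once
# their rows reach the server, because the customer part differs:
rows = {composite_key(17, 1), composite_key(42, 1), composite_key(17, 2)}
print(len(rows))  # 3 distinct keys
</antml_dummy>```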
The question is this... Assume a new customer downloads and installs the software. How do I get the CustomerId? I'm guessing that during setup I would connect to the server and get the next available CustomerId, and to do that I would have to store the CustomerId to the server's table right then & there. But again, what if they can't connect?
Anyone have a better idea? Any problems with this design?
Thanks
If it's not broken, fix it until it is
|
Why not use a GUID for the id field? It's better because it doesn't rely on knowledge of server values.
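In Python terms (the idea is the same with .NET's Guid.NewGuid()), the client can mint ids with no server round trip at all:

```python
import uuid

# Version-4 GUIDs are generated from local randomness, so a disconnected
# client can create rows without ever asking the server for the next id.
order_id = uuid.uuid4()
another = uuid.uuid4()

print(order_id != another)
```

The trade-off is 16 bytes per key and poor readability when debugging, but no coordination with the server is ever needed.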
|
I thought of that, but then I figured I didn't want to be transmitting a lot of GUIDs to the server. Seems like a lot of data each trip.
|
As much as I loathe the blind use of GUIDs, they don't consume "that much". And no, you needn't select the GUID with each record that you're fetching; it'd only become part of the filter statement.
|
Kevin Marois wrote: , but then I figured I didn't want to be transmitting alot of GUID's to the server.
However, each GUID represents a row, and the row will have data. That is more significant.
And what do you think a "lot" is exactly?
|
Kevin Marois wrote: But again, what if they can't connect?
Have them call you, you select the next identity from the server, and have them enter it. There's really no alternative to a non-existent connection.
|
One way to deal with IDs in disconnected systems is to have the server pre-allocate a number of them and distribute them in advance to the clients (give each of them a chunk); the clients would then hold "empty records" they can fill up at their own pace.
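A minimal sketch of that chunk scheme (block sizes and id values are invented):

```python
class IdAllocator:
    """Client-side end of the scheme: the server grants a contiguous block
    of ids up front, and the client consumes them while disconnected."""

    def __init__(self, block_start, block_size):
        self.next_id = block_start
        self.end = block_start + block_size

    def allocate(self):
        if self.next_id >= self.end:
            # Time to request a fresh block (requires a connection).
            raise RuntimeError("id block exhausted")
        value = self.next_id
        self.next_id += 1
        return value

# Suppose the server granted this client ids 1000-1099:
alloc = IdAllocator(1000, 100)
print(alloc.allocate(), alloc.allocate())  # 1000 1001
```

Requesting a new block well before the old one runs out keeps the client from ever blocking on the server.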
BTW: Whatever your solution will be, you must consider that a customer could interact with more than one retail point, so sooner or later you may need a way to coalesce some records.
|
Bloody hell, we built this system in the 90s and it was a PITA then! We used GUIDs, and if they cannot connect they cannot download - have them register when they download; if they do not install, then delete them at some later time. There is nothing wrong with the ID/ID structure; it is just more work to maintain, and you are saddled with a composite primary key.
This is where I got my loathing of GUIDs: they really are irritating to debug with. We also segregated the data so the user only replicated their own information locally, not the entire database!
Never underestimate the power of human stupidity
RAH
|
I got to thinking about my design some more....
My application will be installed in a retail environment. My initial design calls for the software to work even if there's a loss of internet connection. So I thought of installing a local copy of SQL for my app to work with. Each night the app's update service would then transmit to the server any data that hasn't already been sent.
But then I got to thinking: it's possible for there to be multiple PCs in any given store. What if there's no network? I could offer to install a basic peer-to-peer network, make the first PC the server, and then all the other PCs could hit the DB there.
I went into this thinking that I can't assume an internet connection. Do you think this is unreasonable in today's world? Should I build the app so that all data resides on the server and require them to have an internet connection? It would solve both the internet and the network issues.
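For what it's worth, the offline-first design described above usually comes down to an "outbox": every local write also queues a change record, and the nightly service drains the queue whenever a connection exists. A sketch with an in-memory SQLite database (table and column names are invented):

```python
import sqlite3

# Outbox sketch: each local write also records a pending change; the
# nightly update service sends rows where sent = 0 and marks them sent.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer_id INT, order_id INT, total REAL)")
db.execute("CREATE TABLE outbox (op TEXT, payload TEXT, sent INT DEFAULT 0)")

def insert_order(customer_id, order_id, total):
    db.execute("INSERT INTO orders VALUES (?, ?, ?)",
               (customer_id, order_id, total))
    db.execute("INSERT INTO outbox (op, payload) VALUES (?, ?)",
               ("insert", f"{customer_id},{order_id},{total}"))

def pending():
    return db.execute("SELECT op, payload FROM outbox WHERE sent = 0").fetchall()

insert_order(17, 1, 9.99)
print(len(pending()))  # 1 change waiting for the next sync
```

The same pattern works whether the local store serves one PC or acts as the "server" for a small in-store network.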
Your thoughts?
|
It depends on your market - small outlets, I would think. However, even small outlets often have multiple POS. A large, multi-shop company will require the data to be consolidated, but the POS operation is decidedly local.
I would build a local environment (WPF on the desktop) and make sure you cater for a replication scenario where the local server can replicate its data to HO. Personally I see absolutely no requirement for the internet in such an environment.
|
Ya, it's a tough design decision. I'll sleep on it over the weekend.
Thanks
|
Hi, my question is:
if I have 3 basic colors (each made of RGB):
color1: R:150, B:0, G:255
color2: R:255, B:150, G:0
color3: R:0, B:255, G:150
each of them can be mixed using the formula (applied to each channel):
new_color = floor(X*0.9) + floor(Y*0.1)
X and Y can be a basic color or a new color already created by using the formula.
For example, if I want to mix color1 as main with color3:
new_color(R,B,G) = (floor(0.9*150)+floor(0.1*0), floor(0.9*0)+floor(0.1*255), floor(0.9*255)+floor(0.1*150)) = (135, 25, 244).
I need to find a way to mix the colors in order to get a desired color, for example: R:187 B:135 G:201.
So far I wrote a "brute force" program which goes over all the combinations of basic colors (running for 7 days now...);
I hope there is a smarter way to solve the problem.
Thanks.
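Since each mix depends only on its two input colors, the reachable colors form a graph you can search breadth-first with a visited set, instead of re-deriving the same colors over and over. A sketch (the state cap is an arbitrary safeguard, not part of the problem):

```python
from math import floor

BASICS = [(150, 0, 255), (255, 150, 0), (0, 255, 150)]  # (R, B, G) as above

def mix(x, y):
    """The mixing formula, applied per channel: 90% of main X, 10% of Y."""
    return tuple(floor(0.9 * a) + floor(0.1 * b) for a, b in zip(x, y))

def find_recipe(target, max_states=50000):
    """BFS over mixes: each color is derived at most once, and the search
    stops as soon as the target shows up. Returns a parent map (color ->
    the (main, secondary) pair that produced it) for rebuilding the recipe,
    or None if the target wasn't reached within max_states colors."""
    parent = {c: None for c in BASICS}
    if target in parent:
        return parent
    frontier = list(BASICS)
    while frontier and len(parent) < max_states:
        nxt = []
        for x in frontier:
            for y in list(parent):
                for c, pair in ((mix(x, y), (x, y)), (mix(y, x), (y, x))):
                    if c == target:
                        parent[c] = pair
                        return parent
                    if c not in parent:
                        parent[c] = pair
                        nxt.append(c)
        frontier = nxt
    return None

print(mix((150, 0, 255), (0, 255, 150)))  # (135, 25, 244), as in the example
```

Note that not every (R, B, G) triple is necessarily reachable under this formula, so an exact-match search can legitimately fail; searching for the nearest reachable color is a natural refinement.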
Thanks.
|