|
As Windows users developing mainly in Winforms we were disappointed, as Docker turns out to be a Linux thing.
We don't use it (yet)
|
|
|
|
|
There is a Windows version. I'm not saying it's great, but there is one.
This space for rent
|
|
|
|
|
Are you sure it's not based on a Linux virtual machine?
|
|
|
|
|
Don't use the term "virtual machine" when close to Docker people, unless you are eager to listen to a 45 minute intense talk about how Docker is NOT, I repeat: NOT virtualization!
Virtualization is evil, Docker is good! And Docker isn't even "lightweight" virtualization. It is useless trying to discuss definitions of "virtualization" with Docker guys, or trying to compare the Docker way of providing isolation with a hypothetical minimal VM providing exactly those functions that your application needs while still being a VM (for the purpose of learning the details of what is so evil about virtualization). It is no use. The answer is given: VMs are evil, by definition.
On the more serious side:
Yes, the Docker demon is managed by a Linux kernel even in the Windows implementation.
This is not a Linux virtual machine. On Windows 10, the Docker demon runs inside a Hyper-V VM (so it requires a 64 bit CPU with Extended Page Tables). On Server 2016 the implementation is somewhat different, and does not use Hyper-V.
You can run Linux docker images in a Windows implementation; the Linux kernel functions are executed by the same kernel that runs the demon. You can obviously also run Windows docker images on Windows, but currently, the demon is in either Linux or Windows mode; it cannot run both flavors side by side. (I have seen rumours that this is being worked on, and will be possible in a future release.) The Linux implementation cannot run Windows images.
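If you want to check which of the two modes your daemon is currently in, the CLI can tell you (a small sketch; it assumes the docker CLI is installed and a daemon is running):

```shell
# Prints "linux" or "windows", depending on which flavour the daemon
# is currently running in.
docker info --format '{{.OSType}}'

# Shows the client and server (daemon) platforms, which may differ,
# e.g. a Windows client talking to a Linux-mode daemon.
docker version
```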
Docker is essentially suited for backend services: Until you start doing fancy tricks, a container's only interface to the world outside the Docker demon is one or more TCP ports, or for persistent data: Mapping (parts of) an external file system as a Docker volume.
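In practice that interface boils down to a couple of flags on `docker run` (a sketch; the image name and paths here are made up for illustration):

```shell
# Publish container port 80 on host port 8080, and map a host
# directory into the container as a volume for persistent data.
# "myservice" is a hypothetical image name; /srv/mydata is a
# hypothetical host directory.
docker run -d -p 8080:80 -v /srv/mydata:/data myservice
```

Everything else the container does stays behind that TCP port and that mounted directory.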
There are two main alternatives for providing some sort of user interface: Either the container runs a web server, or you hook up an SSH console to it. In principle, I guess you could run e.g. an X11 client in a Docker container to give it a GUI interface; I doubt that anyone has seriously done anything like that.
I guess that Docker is as suitable for web servers running on a Windows host as for web servers running on a Linux host. But applications running a Windows GUI of any kind cannot be adapted to Docker. Nor can any application that requires user interaction for installation; installation must be purely command-line based, with all parameters supplied either on the command line or in a setup/ini file.
When used for what it is good at, Docker is OK. If you stretch it to do "everything", making it a complete replacement for traditional software design, installation and running, you should be prepared for some pain, in particular in environments where users prefer a highly functional GUI (like in high-quality native Windows applications).
|
|
|
|
|
Good info, thank you.
|
|
|
|
|
Thanks, that clears things up a lot.
I knew Docker is not a virtual machine, but did not know what else to call it; maybe "containerization platform" would fit the bill?
|
|
|
|
|
AFAIK, Docker for WindoZe is mostly meant for development purposes and is not yet recommended for production(*) (at least last time I checked).
(*) Not that WindoZe itself is recommended for production either ...
|
|
|
|
|
|
I like how docker is a 'demon' and not daemon
|
|
|
|
|
honest: that was a genuine typo, it was not intended
|
|
|
|
|
|
|
|
Don't they have Windows support now?
|
|
|
|
|
If anyone ever managed to run a Winforms application on Docker, I would like to know!
|
|
|
|
|
If by "Windows application" you mean one with a native Windows GUI: No, that is not possible.
Windows applications without any GUI, communicating through either a web or SSH interface, are certainly possible - the kind used for backend / web services.
The bottom layer of a Windows Docker image, the one containing all the OS functionality that the upper layers have access to, is a "Windows nanokernel" of, believe it or not, almost a gigabyte. (One might wonder what size a megakernel would be!) My guess is that the services offered by the nanokernel (which does not include any GUI functions!) really could be done in a fraction of the size, but the various modules are so deeply intertwingled that shaving off all the stuff that really does nothing for the API would require man-years of effort. Since this layer is shared between all running containers, and code that is never used is never paged in from disk, they probably figure "A GB of disk space is nothing, so shaving it further down isn't worth the cost". Sure, I am just guessing, but to me that looks like a reasonable explanation for that GB-sized bottom layer.
|
|
|
|
|
Why would you want to run a UI app in Docker?
|
|
|
|
|
Mainly for testing purposes, so our tester has a ready-to-run Windows testing environment that can be produced by our Continuous Integration pipeline.
|
|
|
|
|
|
If you can do it as a web application, with an HTML-based GUI: Yes.
If you want a native Windows GUI: No.
|
|
|
|
|
You know, I just spent 10 minutes looking at the Docker site and I have no clue at all what it is and what it should do.
"Docker is the company driving the container movement and the only container platform provider to address every application across the hybrid cloud. Today’s businesses are under pressure to digitally transform but are constrained by existing applications and infrastructure while rationalizing an increasingly diverse portfolio of clouds, datacenters and application architectures. Docker enables true independence between applications and infrastructure and developers and IT ops to unlock their potential and creates a model for better collaboration and innovation."
There is so much technobabble in there, I'm not certain who's the target audience.
I'd rather be phishing!
|
|
|
|
|
Maximilien wrote: There is so much technobabble in there, I'm not certain who's the target audience.
The marketing department. And your next deliverable will use it.
Signature ready for installation. Please Reboot now.
|
|
|
|
|
Maximilien wrote: There is so much technobabble in there
You have to get past that nonsense - it put me off too.
Best way to explain it is as being like a lightweight VM, but instead of storing an image you store how to create the image. The image definition is like an onion - you build it up layer by layer. E.g. we start with a plain Linux box, install say node (one layer), global npm packages (second layer), webpack (another layer), until the box can be used for what we want. The layers are all cached when built and stored so you can pull them later - really the "lengthy" build process only needs to happen once. Then you can run the image, mounting parts of the host file system (in our case the source to build) and then the scripts to run, say a webpack build. The images are "universal", so running inside the container gets consistent behaviour whether the host is Windows or Linux.
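The onion idea is easiest to see in an actual Dockerfile. A minimal sketch of the build box described above (the node version and package choices here are just examples, not our real setup):

```dockerfile
# Each instruction below produces one cached layer.

# First layer: a plain Linux box with node preinstalled
FROM node:8

# Second layer: global npm packages, here just webpack
RUN npm install -g webpack

# The host source tree gets mounted here at run time
WORKDIR /src

# Default command: run the webpack build against the mounted source
CMD ["webpack"]
```

You'd build it once with `docker build -t buildbox .` and then run it against the checked-out source with something like `docker run -v "$(pwd)":/src buildbox` - the cached layers mean only the first build is slow.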
We're using it to migrate our build process; we've managed to get it so our build-agent infrastructure is just a Linux box with Docker installed, and we bypass the corporate stuff about getting our infra bods to install/upgrade dependencies. Also the dockerfiles (image definitions) are all just text scripts, so you can even source-control them.
|
|
|
|
|
Wow, I know what it is and even I'm confused
|
|
|
|
|
That reads exactly like Microsoft's technobabble: I've given up even trying to find out which version of whatever Windows Server is called this minute is appropriate for any given type of business, as even the technical pre-sales guff never actually tells you why anyone might want to use the system, or what for. As an MS partner, I have campaigned quite strongly against this kind of useless guff, but to no avail.
In the end, I just go with whatever the 'standard' version is called and add on any bits I might need (e.g. Exchange, SQL Server, etc.).
I look after a charity that is currently using the very short-lived 2011 version of Small Business Server (which is not supported by VMware, BTW!). Anyone care to guess what the off-the-shelf replacement for that is? The charity is contemplating moving everything to Google instead, because there isn't a direct equivalent it seems, and the licensing makes something that matches their requirements horrendously expensive...
Maybe I'm just a grumpy old git, but Docker et al. appear to me to implement a lightweight virtualisation platform - so far so good, but I have yet to find any business case for using it rather than a solid hypervisor like vSphere etc. Everywhere I've deployed virtualisation so far, the key thing has been to reduce hardware dependence and maintenance whilst allowing easier backup and fail-over in the event of a hardware issue. I cannot see that Docker offers any advantages over the other well-established hypervisors in such cases.
And don't start quoting performance issues at me - it's been a long time since that was a serious consideration for smaller deployments! 8)
|
|
|
|