|
Marc Clifton wrote: Microsoft cares about my future I just threw up in my mouth so hard I dislodged a crown.
Software Zen: delete this;
|
I watched a bit of the beginning and the chap presenting did look like he was about to cry a few times.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
"Mm a chai latte", said a Michael Matt. What's his formula? (12)
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
MATHEMATICAL
(anag * 2)
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
YAUT! Back to cooking - coquilles st jacques tonight. Mash and silverbeet.
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
Sounds good.
I'll be "enjoying" a salad. Herself is dieting.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
And the answer is also an anagram of the other anagrams - good clue
"I didn't mention the bats - he'd see them soon enough" - Hunter S Thompson - RIP
|
... that is, according to one of our senior IT infrastructure guys!
Our "ApplicationHost.config" file got wiped a few days back - and (before we found out about inetpub\history) we asked the infrastructure team to restore it from back-up. "It's a dev system. We don't back up Dev systems," was the reply. When we challenged this, their team lead responded: "Dev systems shouldn't be backed up."
Wha? First of all, it always used to be backed up - so when did that change, without us knowing? Secondly, the Dev system is the most volatile and the most likely to get b*ggered by a developer. Surely that alone justifies a back-up?
I am pretty much gob-smacked by this. Is this just me?
|
I guess the upside should be that your team have complete control of the machine, right?
So you could include a backup step in your deployment scripts/tools/procedures/whatever, right!?
|
Super Lloyd wrote: I guess the upside should be that your team have complete control of the machine, right? We have a large IT department, broken up into many 'silos'. Developers are just developers and have no say on infrastructure!
Super Lloyd wrote: So you could include backup in your deployments scripts/tool/procedure/whatever, right!? Had we known this was the 'policy', yes.
|
IT is also being squeezed for manpower, so their typical response is to deal only with systems set up to their standards and maintained by themselves. You can't really blame them if their manpower for dealing with exceptions has been taken away by a bean-counter who doesn't understand the consequences. I would expect the bean-counter got a bonus for saving that manpower in IT. Sure, you are going to waste a lot more manpower - but that waste will look like you not being productive, so it's clearly not the bean-counter's fault. So in short, he made the right choice as seen from the top.
If you work for a software development company it is typically a bit easier to get dev systems included (you can argue they are essential to the core business - i.e. server down, production halted). If software development is just a side-line in the business, then it is going to be hard and you should probably do your own backups. IT might be able to provide a file share you can dump the files on, and they will then back those up. Alternatively, create scripts that set up the servers - and keep those in source control - which is hopefully backed up...
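The file-share suggestion can be sketched as a small script the dev team runs themselves - say, from a scheduled task or the tail end of a deployment. This is only a sketch; the file list and share path below are hypothetical, so substitute whatever your dev box can't easily recreate and wherever IT actually runs backups:

```python
import shutil
from pathlib import Path

# Hypothetical examples - adjust to your own environment.
FILES_TO_BACK_UP = [
    r"C:\Windows\System32\inetsrv\config\applicationHost.config",
]
BACKUP_SHARE = r"\\fileserver\devbackups\web01"

def back_up(files, dest_dir):
    """Copy each existing file into dest_dir; return the copies made.
    Missing files are skipped silently, since a dev box's contents vary."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in map(Path, files):
        if f.is_file():
            copied.append(Path(shutil.copy2(f, dest / f.name)))
    return copied

if __name__ == "__main__":
    for c in back_up(FILES_TO_BACK_UP, BACKUP_SHARE):
        print("backed up:", c)
```

Nothing clever, but it means the fragile bits live on a share that is inside IT's backup policy without IT having to do anything new.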
|
Yes, I can blame them. They should be automating these backups so they don't take manpower to do.
|
Automating also takes manpower. Taking away manpower saves money. Bean-counters get bonus for saving money. Bean-counters do not get blame for missing backups. It is a VERY easy decision to make for the bean-counters.
The problem is too much thinking in boxes and managing by them. This is typically NOT done by the people on the floor in any department; it is pushed down from above by setting stupid goals and budgets. So no, I do not blame IT in a case like this - I blame upper management for turning everything into a spreadsheet where anything they don't understand is simply deleted.
|
I agree 100%, having seen the damage done to IT by Bean Counters.
I once worked in an organization where the Bean Counters refused to replace a faulty air-conditioner in our server room. Their argument was it would cost $100,000 to do, and no amount of logic or reason could convince them to release the money.
Here is the kicker: after the inevitable outage in which some of our servers literally cooked themselves - which cost the company $250,000 in lost hardware and productivity (and that was just one afternoon) - the Bean Counters still refused to release the money to fix the air-conditioning, because we'd just lost $250,000 and they didn't want to add another $100,000 on top.
The really ironic thing was that this was in an IT company.
Perhaps the weirdest thing was that, at a party about six months later, I was talking to a Bean Counter (not from our company) about what happened, and he maintained that the way our Bean Counters had approached things was correct - yet his justifications had no grounding in the real world.
|
5teveH wrote: Our "ApplicationHost.config" file got wiped a few days back Why wasn't it under source control?
5teveH wrote: Dev systems shouldn't be backed up.
Arguable - Git is there for a reason. You don't back up dev systems because they are not stable: any snapshot in time won't stay accurate or valid for long, or may not be valid at all (i.e. a broken build, or a system broken by ongoing changes). Git is there with branches and commit messages; when you get to a milestone, or at least a stable version, you just push it to a master / release / whatyoucallit branch with a meaningful commit message and voila, there's your backup and the diff history in a single package.
GCS d--(d-) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
den2k88 wrote: Why wasn't it under source control?
den2k88 wrote: GIT is there for a reason.
I was thinking the exact same thing.
Noobs.
|
For our build agents we have an Azure DevOps repository with corresponding pipeline (running on a hosted agent) that can spin up the agent from scratch - including creating the VM in Azure. Besides being great for history and backup, it also discourages people from "just making some undocumented changes" as they know it will be wiped soon.
The first time I saw this was over 20 years ago now - they had to physically install Windows and Source Safe (yes, that old), get one repo out of SS and run a script in it. Then wait half a day. At least there is progress: no Source Safe, and my agent spins up from scratch in a couple of hours. That will do until we get all our builds into docker images.
We do have one VM deployment target that has not yet been set up this way (used to test on-prem installations) - but looking forward to getting that under control as well.
|
den2k88 wrote: Why wasn't it under source control? Because it contained secrets like connection strings and passwords and machine specific information such as file paths that you generally don't want in source control?
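One common way to square that circle (a sketch only - the placeholder names here are illustrative, not anyone's real config) is to commit a template of the config with the secrets stripped out, and fill it in at deploy time from the environment or a vault:

```python
import string

# Illustrative template - the committed file holds placeholders, never secrets.
TEMPLATE = "connectionString=Server=$DB_HOST;Password=$DB_PASSWORD;"

def render(template, values):
    """Fill $PLACEHOLDERS from a mapping. Raises KeyError if one is missing,
    so a deployment with a forgotten secret fails loudly rather than silently."""
    return string.Template(template).substitute(values)
```

Only the template lives in source control; the actual values travel by some other channel, so losing the dev box costs you the secrets at worst, not the structure.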
|
den2k88 wrote: Why wasn't it under source control? Yes, everything we change is under Source Control, but our developers don't make direct changes to ApplicationHost.config. That's done by our deployment tool - which clearly has a bug!
But that wasn't my main concern. It just highlighted the 'no back-up policy'. What if we had a complete fail of our Dev system?
|
5teveH wrote: What if we had a complete fail of our Dev system? You broke it, you pay it
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
A dev system is much more than just the source files that are copied to it.
A source control system is not the same as a backup system.
|
And you'd trust your company to keep a backup of it? Image it, archive it, and upload it to the backed-up file server, and you're done.
GCS d--(d-) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
YMMV, but in a lot of organizations it's IT's job to back everything up. Some companies go out of their way to ensure their developers don't spend their time doing menial IT tasks.
Personally - at home - I don't do any backup from within an OS. I have everything running in VMs, and simply copy the disk image files onto other drives, either across my LAN or onto external drives over USB. This has served me well for over a decade, and even restoring to another host was rather trivial. Not recommended for all scenarios, however (e.g. I'm not including VM metadata).
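The copy-the-image-file approach amounts to very little code. A minimal sketch, assuming the VM is shut down so the image is consistent, and with hypothetical image and backup paths:

```python
import shutil
from datetime import datetime
from pathlib import Path

def copy_image(image_path, backup_dir, keep=3):
    """Copy a VM disk image to backup_dir with a timestamp suffix, then
    prune all but the newest `keep` copies of that image."""
    image = Path(image_path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{image.stem}-{stamp}{image.suffix}"
    shutil.copy2(image, dest)
    # Timestamped names sort lexically, so newest-first is a name sort.
    copies = sorted(dest_dir.glob(f"{image.stem}-*{image.suffix}"),
                    key=lambda p: p.name, reverse=True)
    for old in copies[keep:]:
        old.unlink()
    return dest

# e.g. copy_image(r"D:\VMs\dev.vhdx", r"\\nas\vm-backups")
```

Point it at a second drive or a NAS share and you get the "occasional full copy" this poster describes, with a bit of rotation thrown in.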
[Edit]
Another random thought:
Depending on what you do with them, test systems could reasonably be excluded from backup. I very often find myself putting together test VMs that I consider throwaway systems. But I'd call those QA machines, not dev machines. A dev machine is very often a lot more complex to set up juuuuust right.
modified 22-Sep-21 12:06pm.
|
Exactly.
All my dev is done in VMs, which I can snapshot before a major change/update in case it borks. These VMs are replicated automatically between two machines (a desktop and a dev laptop) and are also - though only occasionally - physically backed up to another server, just in case! This is all in addition to source control.
On more than one occasion I have had to either copy back one of the clones, or at least fire one up to recover something completely destroyed by an OS update etc. (This applies to Linux and MS stuff!)
It is hard to envisage any setup, no matter how small or large, where backing up dev machines at least occasionally (when known to be in good working order, say after initial setup and config) is not a good idea.
|
Git is limited by the devs who use it. If they keep origin backed up and push their branches pretty regularly, it'll be fine; but even then, all the work between pushes is in danger without a backup of the individual repositories on the dev server.
|