IT is also being squeezed for manpower, so their typical response is to deal only with systems set up to their standards and maintained by themselves. You can't really blame them if their manpower for dealing with exceptions has been taken away by a bean-counter who doesn't understand the consequences. I would expect the bean-counter got a bonus for saving that manpower in IT. Sure, you are going to waste a lot more manpower - but that waste will look like you not being productive, so it's clearly not the bean-counter's fault. So in short, he made the right choice as seen from the top.
If you work for a software development company it is typically a bit easier to get dev systems included (you can argue they are essential to the core business - i.e. server down, production halted). If software development is just a "side-kick" in the business, then it is going to be hard and you should probably do your own backups. IT might be able to provide a file share you can dump the files on, and they will then back those up. Alternatively, create scripts that set up the servers and keep those in source control - which is hopefully backed up...
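For the file-share option, a small scheduled copy job is usually enough - something like this rough Python sketch, where the share and source paths are made-up placeholders to be pointed at whatever IT actually backs up:

import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical locations - replace with the real dev-server folders
SOURCES = [Path(r"C:\DevServer\Config"), Path(r"C:\DevServer\Scripts")]
# A share that IT already backs up
SHARE = Path(r"\\fileserver\dev-dump")

def dump_to_share() -> None:
    # Each run lands in its own time-stamped folder on the share,
    # so the nightly IT backup picks up a dated copy.
    target = SHARE / datetime.now().strftime("%Y-%m-%d_%H%M")
    for src in SOURCES:
        if src.exists():
            shutil.copytree(src, target / src.name)

if __name__ == "__main__":
    dump_to_share()   # run from Task Scheduler or cron

Scheduled nightly, that puts the files somewhere IT is already responsible for, without asking them to treat the dev box as a special case.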
Automating also takes manpower. Taking away manpower saves money. Bean-counters get bonus for saving money. Bean-counters do not get blame for missing backups. It is a VERY easy decision to make for the bean-counters.
The problem is too much thinking in boxes and managing by them. This is typically NOT done by the people on the floor in any department. It is pushed down from above by setting stupid goals and budgets. So no, I do not blame IT in a case like this, I blame upper management for turning everything into a spreadsheet where anything they don't understand is simply deleted.
I agree 100%, having seen the damage done to IT by Bean Counters.
I once worked in an organization where the Bean Counters refused to replace a faulty air-conditioner in our server room. Their argument was it would cost $100,000 to do, and no amount of logic or reason could convince them to release the money.
Here is the kicker: after the inevitable outage, in which some of our servers literally cooked themselves and cost the company $250,000 in lost hardware and productivity (and that was just one afternoon), the Bean Counters still refused to release money to fix the air-conditioning, because we'd just lost $250,000 and they didn't want to add another $100,000 on top.
The really ironic thing was that this was an IT company.
Perhaps the weirdest thing was that I was at a party about 6 months later, talking to a Bean Counter (not from our company) about what happened, and he argued that the way our Bean Counters had approached things was correct - and yet his justifications showed no concept of the real world.
Our "ApplicationHost.config" file got wiped a few days back
Why wasn't it under source control?
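Even when a file like that lives outside the project tree, a tiny scheduled script can copy it into a Git working copy and commit it. Here is a minimal sketch, assuming the default IIS config location and a hypothetical local repo used purely as config history - not necessarily the best approach, just a cheap one:

import shutil
import subprocess
from pathlib import Path

# Default IIS location of applicationHost.config (assumes a standard install)
CONFIG = Path(r"C:\Windows\System32\inetsrv\config\applicationHost.config")
# Hypothetical local Git repo used only to keep config history
REPO = Path(r"C:\ConfigHistory")

def snapshot_config() -> None:
    shutil.copy2(CONFIG, REPO / CONFIG.name)
    subprocess.run(["git", "add", CONFIG.name], cwd=REPO, check=True)
    # Only commit if the staged copy actually differs from the last one
    staged = subprocess.run(["git", "diff", "--cached", "--quiet"], cwd=REPO)
    if staged.returncode != 0:
        subprocess.run(["git", "commit", "-m", "Snapshot applicationHost.config"],
                       cwd=REPO, check=True)

if __name__ == "__main__":
    snapshot_config()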
Dev systems shouldn't be backed up.
Arguable - Git is there for a reason. You don't back up dev systems because they are not stable; any snapshot in time won't be accurate or valid for long, or it may not be valid at all (i.e. a broken build, or a system broken by ongoing changes). Git is there with branches and commit messages; when you get to a milestone, or at least a stable version, you just push it to a master / release / what-you-call-it branch with a meaningful commit message and voilà, there's your backup and the diff history in a single package.
GCS d--(d-) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
For our build agents we have an Azure DevOps repository with corresponding pipeline (running on a hosted agent) that can spin up the agent from scratch - including creating the VM in Azure. Besides being great for history and backup, it also discourages people from "just making some undocumented changes" as they know it will be wiped soon.
The first time I saw this was over 20 years ago now - they had to physically install Windows and SourceSafe (yes, that old), get one repo out of SS and run a script in it. Then wait half a day. At least there is progress: no SourceSafe, and my agent spins up from scratch in a couple of hours. That will do until we get all our builds into Docker images.
We do have one VM deployment target that has not yet been set up this way (used to test on-prem installations) - but looking forward to getting that under control as well.
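The core of that pipeline idea is not much more than scripted VM creation followed by agent configuration. A rough illustration in Python driving the Azure CLI - the resource group, VM name and image below are invented placeholders, and a real pipeline would do considerably more (install build tools, register the Azure DevOps agent, and so on):

import subprocess

# Invented names - substitute your own resource group, VM name and image
RESOURCE_GROUP = "rg-build-agents"
VM_NAME = "build-agent-01"
IMAGE = "Ubuntu2204"

def run(args: list[str]) -> None:
    print(">", " ".join(args))
    subprocess.run(args, check=True)

def create_agent_vm() -> None:
    # Create (or recreate) the build agent VM from scratch
    run(["az", "vm", "create",
         "--resource-group", RESOURCE_GROUP,
         "--name", VM_NAME,
         "--image", IMAGE,
         "--generate-ssh-keys"])
    # Then run a setup script on the fresh VM, e.g. to install tooling
    # and register it as an agent ("setup-agent.sh" is hypothetical)
    run(["az", "vm", "run-command", "invoke",
         "--resource-group", RESOURCE_GROUP,
         "--name", VM_NAME,
         "--command-id", "RunShellScript",
         "--scripts", "@setup-agent.sh"])

if __name__ == "__main__":
    create_agent_vm()

Because the whole thing is re-runnable, any undocumented change on the agent has a very short life expectancy.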
If something has a solution... why do we have to worry about it? If it has no solution... what reason is there to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
YMMV, but in a lot of organizations it's IT's job to back everything up. Some companies go out of their way to ensure their developers don't waste their time doing menial IT tasks.
Personally - at home - I don't do any backup from within an OS. I have everything running in VMs, and simply copy the disk image files onto other drives, either across my LAN or onto external drives on USB. It has served me well for over a decade, and even restoring to another host was fairly trivial. Not recommended for all scenarios, however (e.g., I'm not including VM metadata).
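That kind of whole-image copy is easy to script, too. A minimal sketch with made-up paths - it assumes the VMs are shut down (or snapshotted) while it runs, so the image files are consistent:

import shutil
from datetime import date
from pathlib import Path

# Made-up locations - point these at the real VM image folder and backup drive
VM_IMAGES = Path("/var/lib/libvirt/images")   # or a VirtualBox/Hyper-V folder
BACKUP_DRIVE = Path("/mnt/usb-backup")

def copy_images() -> None:
    target = BACKUP_DRIVE / f"vm-images-{date.today().isoformat()}"
    target.mkdir(parents=True, exist_ok=True)
    for image in VM_IMAGES.glob("*.qcow2"):
        # Straight file copy of each disk image; the VM should not be
        # running (or should be snapshotted) while this happens.
        shutil.copy2(image, target / image.name)

if __name__ == "__main__":
    copy_images()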
Another random thought:
Depending on what you do with them, test systems might well not qualify for backing up. I very often find myself putting together test VMs that I would consider throwaway systems. But I'd call those QA machines, not dev machines. A dev machine is very often a lot more complex to set up juuuuust right.
All my dev is done in VMs, which I can snapshot before a major change/update in case it borks. These VMs are replicated automatically between two machines (a desktop and a dev laptop) and are also, though only occasionally, physically backed up to another server just in case! This is all in addition to source control.
On more than one occasion I have had to either copy back one of the clones or at least fire one up to recover something completely destroyed by an OS update etc. (This applies to Linux and MS stuff!)
It is hard to envisage any setup, no matter how small or large, where backing up dev machines at least occasionally (when they are known to be in good working order, say after initial setup and config) is not a good idea.
Git is limited by the devs who use it. If they keep origin backed up and push their branches pretty regularly, it'll be fine, but even then all that work between pushes is in danger w/o backup of the individual repositories on the dev server.
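One cheap way to close that gap is to bundle every local repository to a backed-up location on a schedule, so even unpushed branches survive a dead disk. A sketch of the idea with placeholder paths (git bundle is a standard Git command; how often and where you run this is up to you):

import subprocess
from datetime import datetime
from pathlib import Path

# Placeholders - where the local working copies live,
# and a location that actually gets backed up
REPO_ROOT = Path.home() / "source"
BACKUP_DIR = Path("/mnt/backup/git-bundles")

def bundle_repos() -> None:
    stamp = datetime.now().strftime("%Y%m%d")
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    for repo in REPO_ROOT.iterdir():
        if not (repo / ".git").exists():
            continue
        bundle = BACKUP_DIR / f"{repo.name}-{stamp}.bundle"
        # "git bundle create --all" packs every ref (including unpushed
        # branches) into one file that can be cloned or fetched from later.
        subprocess.run(["git", "bundle", "create", str(bundle), "--all"],
                       cwd=repo, check=True)

if __name__ == "__main__":
    bundle_repos()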
Arguable - Git is there for a reason. You don't back up dev systems because they are not stable; any snapshot in time won't be accurate or valid for long, or it may not be valid at all (i.e. a broken build, or a system broken by ongoing changes).
This argument leaves out the time it takes to rebuild a dev PC, which isn't trivial. My employer has several standard disk images, but none contain all the tools a particular developer needs. It also doesn't cover the loss of work committed to the local Git repo but never pushed to the main instance.
Dev systems need more frequent backups than "ordinary" systems, because it's too easy to lose stuff through a tiny mistake. And dev systems tend to change more than "normal user" data does as well.
Not telling you they stopped is ... incompetence.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
The entire team works remotely. We have no formal office space. We use Microsoft Teams to communicate and meet.
-- We develop locally on our dev laptops, running synchronized local databases using DbUp (see below).
-- Visual Studio 2019, etc.
-- We use DevOps and Git. We use DevOps continuous integration with build and release pipelines.
-- We use AWS for DEV, QA, UAT, and PROD (applications, sites, SQL Server dbs, etc.).
-- Everything that is deployed to an environment (DEV, QA, UAT, PROD) is in source control (Git), including all configuration files (app, web, appsettings.json, etc.).
We never back up anything except our databases in DEV, QA and Prod. All code is in source control, obviously.
We have 14 developers.
We have 2 engineers that manage AWS and all build and release pipelines.
We have 4+ QA testers.
We have 5+ Business Analysts.
We have one Release Manager who is also our Scrum master, if you want to call it that.