R. Giskard Reventlov wrote: "It is 121."
Not even close. It's 1001.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
We don't have any life-critical data on our little home network, but I keep full backups of our data on air-gapped drives. I also back up some important data to DVDs, which can't be overwritten by malware once they're burned.
I cannot help but wonder: hospitals and other medical institutions hold very critical data. How can they not keep regularly updated backups on safe media, out of reach of ransomware? It just seems extremely negligent to me.
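For concreteness, a minimal sketch of that kind of air-gapped copy (the paths are made up): it mirrors a folder onto the offline drive and writes a SHA-256 manifest alongside it, so the copy can be verified later without trusting the machine it came from.

import hashlib
import shutil
from pathlib import Path

SOURCE = Path("/home/data")            # hypothetical source folder
TARGET = Path("/mnt/airgap/backup")    # hypothetical mount point of the offline drive

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_with_manifest() -> None:
    manifest_lines = []
    for src in sorted(SOURCE.rglob("*")):
        if not src.is_file():
            continue
        rel = src.relative_to(SOURCE)
        dst = TARGET / rel
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)                      # copy contents and timestamps
        manifest_lines.append(f"{sha256(dst)}  {rel}")
    # The manifest travels with the backup, so the drive can be checked offline.
    (TARGET / "MANIFEST.sha256").write_text("\n".join(manifest_lines) + "\n")

if __name__ == "__main__":
    backup_with_manifest()

The manifest lines use the "hash, two spaces, path" layout that sha256sum -c understands, so the drive can be re-checked later from any machine.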
Get me coffee and no one gets hurt!
Cornelius Henning wrote: "Hospitals and other medical institutions" You have to remember that not all of the people who work at these places are dedicated medical professionals. There are many low-paid admin staff who sit at computers all day, perhaps surfing for things unconnected with your heart attack or my haemorrhoids.
I believe it is the responsibility of IT professionals to keep safe data backups in case they are attacked. Assume it WILL happen and plan accordingly. What is the alternative? What we have in the UK and other countries today?
Get me coffee and no one gets hurt!
In an ideal world ... but unfortunately we live in the real world, where things get forgotten or done wrong. Just look at some of the stuff in QA every day.
Quote: "we live in the real world where things get forgotten" Sadly, yes!
Get me coffee and no one gets hurt!
I believe they generally do (exceptions exist, of course), but before they can restore a backup they need to make sure all computers are clean. That could take quite a while for an understaffed IT department.
Let me explain it this way. I work in the medical industry. We have 100,000 or so workstations where I work; I can't even count how many servers. It's in the thousands. Backing those up to DVDs would require more DVDs than have ever been made on Earth, and the manpower to do the backups, likewise.
Now, we do backups. We have million-dollar robotic backup libraries, spread across 3 cities in 2 states. It is a huge task. There are dozens of staff who do nothing but manage this. We have continuity-of-operations manuals and training on a regular basis to make sure everything stays "on top". Still, it's not enough.
Your backup of a desktop computer is comparing apples to oranges. I manage a VAST amount of data, and that's just in my tiny little world. Petabytes.
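Back-of-the-envelope, assuming standard 4.7 GB single-layer discs: a single petabyte is roughly 1,000,000 GB / 4.7 GB ≈ 213,000 DVDs, so tens of petabytes run into millions of discs - before counting the hands needed to burn, label, and catalogue them.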
So what is your strategy in case you are attacked?
Get me coffee and no one gets hurt!
Actually that's not my job, so I'm not the one to ask. I'm just an engineer. Also, the term "attacked" is a broad one. Think about it: there are many ways we can lose service. One of the worst I remember was when a water main burst and flooded a prime data center. Even that didn't take anything down for long.
I don't think anyone is going to answer that question directly, because it would violate security principles anyway.
I will say this: no one slept much this weekend.
The point I was trying to make is that the people responsible for your data do take this VERY seriously, at least the ones I know do. But it's a very complex problem, and it's expensive. Everyone is doing the best they can with limited resources.
Don't start sleeping yet - Europol pointed out that the real fun will be Monday, when all those "turned off for the weekend" computers are booted up...
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
It's cool Griff. We are spending the week dead, for tax reasons.
Just don't press that weird black button that is labelled in black on a black background.
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
Well that's strange. A sign popped up and it said "please do not press that button again".
That is strange! Normally a small black light lights up black to let you know you've done it.
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
Except the virus stopped spreading when a random domain name was registered - when that domain is reachable, the virus assumes it is being run in an analysis sandbox and stops. They are keeping the domain up.
Assuming the hackers don't start a DoS attack against it (my fear).
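In other words, as described above: the malware tries to reach a hard-coded domain, and if it gets an answer it assumes it is in a sandbox and goes dormant. A rough Python sketch of that logic only - the real thing is native Windows code, and the domain below is a placeholder, not the actual kill-switch name:

import urllib.request

# Placeholder only -- NOT the real kill-switch name hard-coded in the malware.
KILL_SWITCH_URL = "http://example-killswitch-domain.test/"

def should_keep_running() -> bool:
    """If the kill-switch domain answers, assume an analysis sandbox and stop."""
    try:
        urllib.request.urlopen(KILL_SWITCH_URL, timeout=5)
        return False   # got a response -> assume sandbox, go dormant
    except OSError:
        return True    # lookup/connection failed -> the destructive path continues

if __name__ == "__main__":
    print("keep running:", should_keep_running())

Which is why registering the domain acted as a global off switch, and why keeping it answering matters.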
A DDoS shouldn't do it, in theory, since (apparently) it's the IP it looks for - which comes back from the DNS lookup rather than from the domain itself.
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
Thanks for shedding some light on the scale of the problem in individual organizations. However, even in large distributed systems there must be a daily reconciliation and backup of local servers, so I'm assuming that an organization with proper backup policies in place should only be risking a day or two of data at any time.
Peter Wasser
"The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts." - Bertrand Russell
Basildane wrote: "Your backup of a desktop computer is comparing apples to oranges. I manage a VAST amount of data, and that's just in my tiny little world. Petabytes." Thanks for the view from the other side.
Makes my backup scheme seem trivial. We have three servers that back up to each other nightly. One of the servers has an external hard drive that gets everything as well. We have a remote server that receives backups of our source-control databases. Weekly I back up the source-control databases to DVDs, plus to a thumb drive that goes home with me. And yes, I regularly check the backups to make sure they contain the data I think they should contain.
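For what it's worth, a small sketch of that kind of sanity check, with made-up paths and names: confirm that each expected item is present in last night's backup, is non-empty, and is fresh enough to actually be last night's copy.

import time
from pathlib import Path

BACKUP_ROOT = Path("/backups/nightly")                          # hypothetical backup location
EXPECTED = ["source_control.bak", "projects", "build_scripts"]  # hypothetical contents
MAX_AGE_HOURS = 30                                              # a nightly copy should be younger than this

def check_backup() -> bool:
    ok = True
    now = time.time()
    for name in EXPECTED:
        item = BACKUP_ROOT / name
        if not item.exists():
            print(f"MISSING: {item}")
            ok = False
            continue
        if item.is_dir():
            size = sum(f.stat().st_size for f in item.rglob("*") if f.is_file())
        else:
            size = item.stat().st_size
        age_hours = (now - item.stat().st_mtime) / 3600
        if size == 0 or age_hours > MAX_AGE_HOURS:
            print(f"SUSPECT: {item} (size={size} bytes, age={age_hours:.1f} h)")
            ok = False
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if check_backup() else 1)

It only proves the files are there and recent, not that they restore cleanly - that still needs the occasional trial restore.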
Software Zen: delete this;
Actually, it's worse than that. We are just talking about backing up raw data. Recovering from a disaster would require a colossal effort, not just restoring data: re-configuring servers/clusters, database schemas, firewall configurations, all the myriad server customizations and service-account settings that make a particular service operational, plus DNS, VLANs, and all the networking configuration.
If I had to restore my project from a complete loss, I can't even imagine it. It would probably take a month with my whole team working on it non-stop - and that's with a full data backup.
It's hard to imagine.
If I lost our primary source control server, I could have us back up and working in less than an hour. If I lost all three servers it would take a day to build a box(*) and get everything installed. Anything more serious than that would suggest building damage (fire, tornado, etc.) that would mean far more significant problems.
(*) All three of my current servers are recycled server-class industrial PCs from our products. I have a pile of these machines in my lab, called the Island of Misfit Toys, all of them functional. If I had the time, I'd love to create a distributed build system. Our current build process takes 30-90 minutes, depending upon the product and which server is running the build. With a distributed process, I could probably get that down to under 10 minutes.
Software Zen: delete this;
This one system runs on servers spread across 3 cities (for technical reasons). Just this month we finished moving 5 racks of data processing from the 3rd floor of this building to a new datacenter on the 2nd floor - a move that took 2 YEARS of planning and was completed with no loss of service.
When I worked at Dow Chemical I was introduced, as an intern, to the Disaster Recovery Plan they had.
Their backups go offsite.
They rent a virtual offsite location year-round.
Quarterly they test their restore process, and TIME IT (a rough sketch of what that timing can look like follows below). When you are dealing with this much data, and tens of thousands of shipments coming into various ports throughout the world, this gets serious.
They keep their documentation updated on when people have to be on planes to fly to one of the few restore centers, and they have fallback plans for emergency leasing of jets, and for people driving!!!
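The timing bit mentioned above can be as blunt as this kind of wrapper (the restore command and the four-hour target are made-up placeholders):

import subprocess
import time

RESTORE_CMD = ["/usr/local/bin/run-restore-drill.sh"]  # hypothetical drill script
TARGET_SECONDS = 4 * 3600                               # made-up recovery-time target

def timed_restore_drill() -> None:
    start = time.monotonic()
    result = subprocess.run(RESTORE_CMD)
    elapsed = time.monotonic() - start
    passed = result.returncode == 0 and elapsed <= TARGET_SECONDS
    print(f"{'PASS' if passed else 'FAIL'}: restore took {elapsed / 60:.1f} min "
          f"(target {TARGET_SECONDS / 60:.0f} min, exit code {result.returncode})")

if __name__ == "__main__":
    timed_restore_drill()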
My first question, after realizing that MANY companies pay this same company for these services, access to their mainframes, etc., was: "What happens if many companies get hit at the same time?"
The answer was "The risks of that are LOW, but they can handle up to 3 companies at once." Which is incredibly rare. (And the lesson of the last 25 years... UNTIL IT ISN'T.)
This outbreak brought back those two memories.
Having worked for companies that CANNOT REASONABLY complete a "backup" in 24 hrs, think of your exposure.
Just hope it never spreads through Bitcoin.
DVDs and CDs have a limited shelf life, unfortunately.
I think the news reports were not completely accurate. Big systems were attacked, but backups could be restored, which took a day or two. That is pretty much what I would expect, especially on a weekend.
Peter Wasser
"The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts." - Bertrand Russell