|
Yes, because the word comes from the fruit.
|
|
|
|
|
|
That color would be called orange.
|
|
|
|
|
I've got an orange tree in the garden. It's more green and brown than orange.
veni bibi saltavi
|
|
|
|
|
Yes, we would call them #ffe033.
|
|
|
|
|
Orange isn't called "orange" because of the colour. The colour was named after the fruit, not the other way around.
|
|
|
|
|
Wasn't the colour named after the fruit*?
I'm pretty sure that "orange" is a fairly new colour name - what we call orange used to be just a shade of red.
* Or we may have taken it from the Dutch royals ("oranje"), who used it as their colour, but who wants to admit that?
I wanna be a eunuchs developer! Pass me a bread knife!
|
|
|
|
|
The real existential question you should ponder:
Why is a carrot more orange than an orange?
Ravings en masse^
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|
Yes - the colour gets its name from the fruit, and not the other way round. It came into English from Arabic via Spanish - naranja mutated into "norange" in English, and "a norange" became "an orange".
Therefore the fruit would still be an orange, but the colour would be different.
=========================================================
I'm an optoholic - my glass is always half full of vodka.
=========================================================
|
|
|
|
|
This will probably be long. But I'm gonna try to keep it brief nonetheless.
I have a system I essentially use as little more than a NAS, providing other systems on my LAN access to files via a share. Whenever I buy new hard drives (e.g. when I've outgrown the storage capacity of what I currently have), I buy them in sets of 3:
a) One in the "live" system, powered on 24/7
b) One in an external USB3 enclosure, kept offline and physically disconnected unless I'm actively doing a backup and
c) One in a second external USB3 enclosure, that I keep off-site and swap with drive (b) above every month.
I'm a big believer in keeping things simple, so I just use robocopy in a batch file to sync changed files from the live disk to the backup drive. I only back up data files (nothing from the OS itself or executable files from running programs), so attempts to copy files while they may be in use and all that sort of crap is no concern. This has worked great for me for years.
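For the curious, the batch file really is nothing fancier than a one-way mirror along these lines (the paths here are made up, and the switches are just one reasonable set, not gospel):

    @echo off
    rem Sketch of the one-way sync described above; D:\data and F:\data are
    rem placeholder paths. /MIR mirrors the tree (including deletions), /FFT
    rem tolerates coarse timestamps on the USB enclosure, /R and /W keep
    rem retries short, and the log records what each run actually copied.
    robocopy D:\data F:\data /MIR /FFT /R:2 /W:5 /LOG:F:\backup.log
    rem Robocopy exit codes of 8 or above mean something failed to copy.
    if %ERRORLEVEL% GEQ 8 echo Some files failed to copy - check F:\backup.log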
A little while ago I found out I have a file that somehow got itself corrupted. The file is physically "ok", in the sense that it can be read and copied without any sort of file system error--it's the software that last wrote to it that managed to write bad data somehow and made it unusable--the content can no longer be parsed, and in fact the size went from a few hundred MBs down to a few dozen KBs. Clearly, something's gone horribly, horribly wrong and this file is simply no longer usable. There's nothing in it to recover--no need to even try.
Here's the real problem: I rather rarely need to use the file (which doesn't negate its importance), so the corruption happened a few months ago. During that time, I had plenty of time to back up that corrupt file over the last backup set, as well as the one from the disk I keep off-site. It's only when I happened to need the file that I realized the file was corrupt...along with the last two backups.
People always say you need to back up, but also verify everything can be restored. That's great and all, as it proves the content can be read back when you need it and the file system integrity has not been compromised. That does nothing, however, for the integrity of the content of a file. No backup/verify strategy can help address this - the bits in the file from the backup match the source and it can be read back. That's not the problem that "backup but verify" addresses.
My backup drives aren't large enough to keep complete copies of multiple backup sets "just in case" I need to go back to an older version. And I'm not going to keep buying additional hard drives every few months just so I have a few "snapshots in time". Again, how far back do I need to go?
What's the best strategy here? I guess what I really need is versioning applied in some fashion (and Volume Shadow Copies kind of exist for that purpose), but because MS has so far put so little effort into integrating this into the OS in a way that's actually usable, trying to use it goes against my attempt to keep my backup mechanism simple. I've had someone suggest running Git locally (ha!), but where do I draw the line as to what files to include? Essentially I'd want to include the entire drive - what would be the point of keeping multiple versions of only a subset of the files?
So again...what's a good, simple, and cost-effective way to address this?
|
|
|
|
|
Windows Backup.
The workstation version AFAIK only addresses your libraries, but the server version will image everything, even MS Exchange, back to a bare-metal drive like nothing bad ever happened. Continuous historical file-by-file restore too.
For complete bare-metal restores of Windows workstations I like Acronis True Image to USB drives.
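If you do end up on the server flavour, the same engine can also be driven from a scheduled batch job via wbadmin; a rough example (the target drive letter is invented):

    rem Hypothetical one-off image of the OS and all critical volumes to an
    rem attached E: drive, using the Windows Server Backup command line.
    wbadmin start backup -backupTarget:E: -allCritical -quiet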
|
|
|
|
|
Has Windows Backup evolved enough to let you browse the file system as it existed at a given point in time, and then let you restore the one file (or subset of files) you want?
That's really the problem with all backup systems I've encountered (and gave up on decades ago and haven't looked at since), where they produce a single huge binary blob you can't manipulate yourself but have to restore as an "all-or-nothing" deal...
|
|
|
|
|
How To Extract Individual Files From a Windows 7 System Image Backup[^]
#SupportHeForShe
Government can give you nothing but what it takes from somebody else. A government big enough to give you everything you want is big enough to take everything you've got, including your freedom.-Ezra Taft Benson
You must accept 1 of 2 basic premises: Either we are alone in the universe or we are not alone. Either way, the implications are staggering!-Wernher von Braun
|
|
|
|
|
Oh, cool. I didn't realize Windows Backup was saving stuff in VHD files nowadays. I'm very much familiar with the process of mounting them "manually".
Looks like I have some thinking to do, to figure out how I want to use that to my advantage here...
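(For anyone else following along: by mounting "manually" I just mean something like the diskpart script below - the VHD path is obviously made up.)

    rem Sketch: attach a backup VHD read-only so it can be browsed in Explorer.
    > attach-vhd.txt (
      echo select vdisk file="E:\WindowsImageBackup\MyPC\backup.vhd"
      echo attach vdisk readonly
    )
    diskpart /s attach-vhd.txt
    rem ...copy out whatever you need, then swap "attach vdisk readonly" for
    rem "detach vdisk" in the script and run diskpart /s again to unmount.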
|
|
|
|
|
Yes, we do it all the time when our customers delete that one file. Or a whole folder.
Acronis can do that too: pull single files and/or folders from image archives.
|
|
|
|
|
Windows Backup "needs" to reformat your backup drive with a format that won't display in Explorer any more - just in the backup/restore program and in Disk Management. Merely visually alarming. Otherwise it works a treat.
|
|
|
|
|
That's a little disconcerting. I'm ok with Windows Backup creating VHD files like any other; I handle those all the time. Either what you're saying runs contrary to this, or I'm confused.
|
|
|
|
|
My suggestion: use the system you already have, but add differential or incremental backups. In other words, do a full backup weekly or monthly and differential/incremental backups daily. I believe Windows Backup does this. A differential/incremental backup will back up only what has changed and will allow you to restore to a certain point in time. A Google search on "Versioning Backup" or "Differential Backup" will provide a wealth of info.
The difference between incremental and differential backups is explained here: Incremental vs differential backup – what is the difference?[^]
You could also use online backup services like Carbonite and CrashPlan that will back up a file any time it is changed and keep those versions around for a month or more.
In the end, though, none of this will necessarily save you from software that corrupts your files, unless you test critical data periodically or keep backups going back to the beginning of time.
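If you'd rather stay with plain robocopy batch files than adopt a backup product, one crude way to get incrementals is the archive attribute; a rough sketch (paths and dates are invented):

    rem Monthly full: copy everything, then clear the archive bit on the source
    rem so later runs only see files modified since this full.
    robocopy D:\data "F:\2016-12\full" /E /R:2 /W:5
    attrib -A "D:\data\*" /S

    rem Daily incremental: /M copies only files with the archive bit set and
    rem then resets it, so each run captures just the changes since the last.
    robocopy D:\data "F:\2016-12\incr-03" /E /M /R:2 /W:5

    rem A point-in-time restore is the full folder plus each incremental
    rem folder up to the date you care about, copied back in order.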
#SupportHeForShe
Government can give you nothing but what it takes from somebody else. A government big enough to give you everything you want is big enough to take everything you've got, including your freedom.-Ezra Taft Benson
You must accept 1 of 2 basic premises: Either we are alone in the universe or we are not alone. Either way, the implications are staggering!-Wernher von Braun
|
|
|
|
|
I do something similar, but 3x weekly: I run robocopy with /MAXAGE:8 to a directory like /evenmonths/2016/Dec/Week51 on a Samba server. Monthly gets the last 2 years and yearly gets it all. It's a small data share, only about 100 GB, so it's all manageable; I can keep it for the whole year and then shelve it. Weekly backups are about 200 MB. Monthly, 4 servers (VMs) are backed up via Veeam along with everything needed to recreate all the servers, then taken off-site. Veeam will also restore individual files/folders. I run the free version with a script and back up 6 VMs twice a month. Like you, everything is also copied to external drives. I use RDX drives that are ejected by the VBS script after the copy.
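In batch form the weekly pass is roughly this (Z: is my mapped share, and the week folder is hard-coded here just to show the shape):

    rem Weekly run: only files modified in the last 8 days land in a
    rem week-stamped folder on the backup share.
    set DEST=Z:\evenmonths\2016\Dec\Week51
    if not exist "%DEST%" md "%DEST%"
    robocopy D:\share "%DEST%" /E /MAXAGE:8 /R:2 /W:5 /LOG:"%DEST%\robocopy.log"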
We issue new tin foil hats monthly.
|
|
|
|
|
If I understood your post correctly, you don't have a backup issue; it's an application issue. If your application writes crap, you'll back up, ahem, no offense, crap:
the file is physically "ok", in the sense that it can be read and copied without any sort of file system error--it's the software that last wrote to it that managed to write bad data somehow and made it unusable--the content can no longer be parsed, and in fact the size went from a few hundred MBs down to a few dozen KBs. Clearly, something's gone horribly, horribly wrong and this file is simply no longer usable. There's nothing in it to recover--no need to even try.
If I'm correct, I have some ideas, comment back.
cg
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
|
|
|
|
|
That's correct. Sure, the application isn't blameless in this case, but still, it proved that my backup strategy is somewhat lackluster.
|
|
|
|
|
Well, I guess my point is that backups just back up; they have no idea what they are backing up. Many of the applications I've developed over the years require writing files and moving them around to other machines. File movement can be done over FTP or via USB (embedded environment). The surest way to make your life miserable is to trust the file(s).
It's just amazing how frequently these files get corrupted due to an FTP burp, variations in USB drives, evil users, etc. As a developer, you just don't think the corruption will happen - we tend to know not to remove the USB drive early, not to edit the files, ....
So, all of our files contain a CRC value. Files copied to USB are then read back and validated. It might be you need to do something like this.
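You can get a similar check without baking a CRC into the file format by hashing before and after the copy; a throwaway batch sketch (certutil ships with Windows, and the paths are just examples):

    @echo off
    setlocal
    rem Hash the source, copy it, hash the copy, compare the two.
    for /f "skip=1 delims=" %%H in ('certutil -hashfile "D:\data\big.dat" SHA1') do (
        if not defined SRC set "SRC=%%H"
    )
    copy /b "D:\data\big.dat" "E:\usb\big.dat" >nul
    for /f "skip=1 delims=" %%H in ('certutil -hashfile "E:\usb\big.dat" SHA1') do (
        if not defined DST set "DST=%%H"
    )
    if "%SRC%"=="%DST%" (echo Copy verified.) else (echo HASH MISMATCH - do not trust the copy.)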
Funny story - I once opened up a support ticket asking "How do I know my backups are good?" This was interpreted by IT as "Why are my backups not good?" and it sent them into a tizzy. Too much caffeine that day I guess.
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
|
|
|
|
|
charlieg wrote: So, all of our files contain a CRC value. Files copied to USB are then read back and validated. It might be you need to do something like this.
Not sure if this is what you're getting at, but I never claimed I wrote the app that corrupted the data.
And frankly I know exactly how it happened, and it was probably my own fault (although the app could have been doing more sanity checks).
The data file was sitting on the NAS, which is a Windows 7 machine handling little more than file shares. The app that was using the file was running on another machine on the network. I had to reboot the NAS machine after installing updates, and I allowed it to, forgetting that this one file happened to be open on another box. I remember intentionally avoiding saving while the machine was rebooting, but after it was back up and I had confirmed with Explorer that I could again access shares, I hit Save, as I had changes that needed to be saved. While the application didn't complain - and I guess I was shortsighted enough to think everything was OK - that was the last time I saw any of the data in that file. At the very least I should have saved to another file and immediately verified its integrity by closing/reloading it. I guess despite what others say about me, I'm still too much of an optimist...
|
|
|
|
|
"too much of an optimist"
I resemble that remark. I understand what happened now. Falls under, "I should have been more careful."
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
|
|
|
|
|
Am I a good developer? This question bugs me at times. On one hand, I am always able to provide a solution to business problems using what I know, or by learning something new to solve the problem. On the other hand, I don't know all the latest and greatest frameworks and tools out there.
I have been doing development for more than 7 years now. In those 7 years I have worked with many different tools, technologies and libraries to solve business problems. .NET is my main technology stack, but I have also done development in Java, SQL, integration tools, Salesforce, PLCs and a few other technologies. One thing that has been consistent across all those projects is that the business generally doesn't care about the underlying technology stack as long as it serves their requirements. As a developer you make a conscious decision about what to offer your client as a solution. You cannot ask a small shop to invest an arm and a leg in some product just because you are comfortable working with it, and on the opposite side there are big organizations which can afford to buy or build products using the latest and greatest technologies. As a developer you understand that the latest is not always the best. Why would you implement a solution using some new JavaScript framework or other latest fad that takes more time and effort and eventually becomes a maintenance nightmare? Just because Google and Facebook use Python, must I use it too? As a developer you are required to know not only programming languages but also different server platforms, deployment tools, source control tools, CI/CD platforms, testing frameworks, etc. A developer's job is a demanding one.
In today's market there is an ever-growing expectation for developers to be proficient in whatever technology is hot at the time, and there are quite a few at any given point. Not only do you need to know that technology inside out, you are expected to know every other tool, library and framework built around it. It makes you question whether you are really a good developer if you don't know the latest technologies or tools. You don't know those technologies because, up until now, you haven't had a need for them in whatever application you are building. Just because AngularJS sounds cool and everyone is using it, must I use it too? One thing I have felt is that if you don't use today's latest fad, you are perceived as old school and at times will not even get a chance to be considered for a new role.
Do you ever wonder whether you are a good developer or not?
Zen and the art of software maintenance : rm -rf *
Maths is like love : a simple idea but it can get complicated.
|
|
|
|
|