|
Not a bad idea, but how do you keep circular dependencies from cropping up in this case? Not all of them are caught at compile time, so you run the risk of introducing them into your application with a "just get it to compile" mindset.
|
|
|
|
|
You could try the technique used in the Git project.
Firstly, they have two top-level `.h` files that are always included as the first lines in any other file that needs includes. This provides a level of commonality across the project, and a consistent inclusion order.
Secondly, and slightly more importantly for you, all the included <file>.h files have a preamble/suffix of:
#ifndef <FILE>_H
#define <FILE>_H
... stuff ...
#endif /* <FILE>_H */
Thus stuff is included only once, and a hierarchy of usage is created.
Add an #else /* warnings */ branch for extra feedback, to local taste.
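For instance, a sketch of what that might look like (mymodule.h is a made-up name here, and note that #warning is a GCC/Clang extension, not standard C):
/* mymodule.h */
#ifndef MYMODULE_H
#define MYMODULE_H

int my_function(int x);   /* ... declarations ... */

#else
/* Reached only when the header is pulled in a second time in the
   same translation unit - harmless with guards, but the message
   can help trace inclusion order. */
#warning "mymodule.h included more than once"
#endif /* MYMODULE_H */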
|
|
|
|
|
Instead of inserting includes until there are no errors, have you tried deleting them until there's an error and then re-inserting the last one? When you can't delete any without causing an error, you've finished.
|
|
|
|
|
When I want to debug things related to header inclusion, I typically run the compiler in preprocessor-only mode and scan the output.
cl /P
... or ...
gcc -E
Have you tried this? The preprocessed output sometimes shows interesting things and might be good input for a tool chain.
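For example, to dump the preprocessed output to a file and skim the inclusion trail (main.c is just a stand-in name):
gcc -E main.c -o main.i
grep -n '^#' main.i
The linemarkers that grep picks out show exactly which headers were pulled in, and from where.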
-- Matt
|
|
|
|
|
Because I don't want to selectively choose what should be encrypted vs. what I shouldn't bother encrypting, my backup drives are encrypted in their entirety with TrueCrypt. I've stuck with this for over a decade.
TrueCrypt development ceased years and years ago, and (it seems to me) the most popular replacement (branched off of TrueCrypt) is now VeraCrypt.
It's been long enough; I really should be moving my backup drives from TC to VC.
Has anyone here done that? What was your approach?
I have two backup drives - identical to each other. The way I see myself doing it is:
a) Format DriveA with VC
b) Mount DriveB with TC
c) Copy the content of DriveB to DriveA
d) Dismount everything
Once I'm confident all the data's been transferred, repeat the process in the opposite direction - that is,
a) Format DriveB with VC
b) Mount DriveA with VC
c) Copy the content of DriveA to DriveB
d) Dismount everything
Or just use something like CloneZilla to clone DriveA back to DriveB.
At that point, TC is completely out of the picture.
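The bulk copy itself would be something like this (just a sketch; E: and F: are placeholder letters for the mounted TC source and VC target volumes):
robocopy E:\ F:\ /MIR /R:1 /W:1 /LOG:C:\tc-to-vc.log
/MIR mirrors the whole tree, /R and /W keep it from stalling forever on a bad file, and the log gives me something to review before I trust the copy.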
The reason I want to copy backup drive to backup drive, rather than directly backing up my live drive to the first backup drive, is that in order to do a full backup, I need to shut down some things (primarily VMs), and I don't want to leave my system with those VMs down for the entire time it's going to take to back up the whole thing.
After the backups are on VC, I'll take the time to run my backup script, which only re-synchronizes modified files (and typically takes only a few minutes), so that should minimize the downtime while my backup gets re-synched.
Would you do it differently?
|
|
|
|
|
Just curiosity: doesn't having your backups encrypted increase the risk of not being able to use them for recovery?
I'd opt for physical security (lock them up off-site, for example) instead.
Software Zen: delete this;
|
|
|
|
|
Only if the software is buggy and can't read back what it wrote itself. TC/VC are so mature at this point, I really don't worry about that.
If you're thinking about disk failures...then (a) that's why I do my backups in pairs and (b) an unreadable sector is simply unreadable, whether it's encrypted or not. And things like SpinRite don't care whether a sector is encrypted or not - these tools only concern themselves with trying to recover raw bits. They don't even know whether they're reading a FAT partition, NTFS, ReFS, ZFS, whatever.
|
|
|
|
|
I use SyncBack (free). It allows a number of different options, so you can copy one way or do a sync. It also allows you to do a simulated run and log what will change.
Just curious: why have you encrypted the whole disk? Doesn't that make access a whole lot slower?
// TODO: Insert something here Top ten reasons why I'm lazy
1.
|
|
|
|
|
yacCarsten wrote: I use SyncBack (free).
I'm not sure how this helps here TBH. I'm talking about migrating from one encryption system (TrueCrypt) to its successor (VeraCrypt).
yacCarsten wrote: why have you encrypted the whole disk?
As per the top of my post...I don't want to take the time to selectively decide what needs to be encrypted (eg, my banking info), vs what's fine to remain unencrypted (eg, setup programs, which just happen to exist on the same disk). Just encrypt the whole thing and be done with it.
yacCarsten wrote: Doesn't that make access a whole lot slower?
Possibly. Could I actually measure it, especially nowadays? Maybe if I was transferring the entire content of the drive - which only happens in extremely rare circumstances like this one, which is pretty much a one-time operation. Besides, they're backup drives - they're only being used when I'm updating my backups or when I find out I need to recover a few very specific files (which has happened maybe...3 times since I've started maintaining backups decades ago?)
|
|
|
|
|
Every morning I open Code Project and start with the news.
Every morning I find a new application or framework or both.
My question is: how many of those are actually used by developers other than the people who created them?
I believe that some of those created last year are still in use today, but not many.
Really! I don't intend to criticize those who developed them; however, the learning curves have got to be tremendous.
Am I a hopeless Luddite?
What do you think?
|
|
|
|
|
|
That there are too many frameworks? Or that he is a hopeless Luddite?
You could always embrace the power of "and"!
If you can't laugh at yourself - ask me and I will do it for you.
|
|
|
|
|
The same as him. 
|
|
|
|
|
A framework really helps if it's intended for your domain and has a low surface-to-volume ratio. Without one, the outcome is superfluous diversity, which makes it hard for software to interoperate without writing glue that would otherwise be unnecessary.
Ideally, a framework should be developed internally so that it can evolve to suit the needs of your applications. But if an external framework is a good fit, and if it's responsive to its users, it's worth considering.
The worst outcome is a team without a framework. It can happen because management thinks everyone should be developing features or because no developer has enough domain experience to develop a framework.
|
|
|
|
|
When I read about software code having a "low surface to volume ratio", I know I have stumbled into Pseuds Corner.
|
|
|
|
|
What does that even mean: "low surface to volume ratio"???
Steve Naidamast
Sr. Software Engineer
Black Falcon Software, Inc.
blackfalconsoftware@outlook.com
|
|
|
|
|
Good question! It's just techno-babble as far as I can make out.
|
|
|
|
|
I understand that the lowest surface area to volume ratio is possessed by a ball, so maybe such code is balls?
|
|
|
|
|
Absolutely.
The first thing we do, after defining the problem/solution space and what the project IS NOT,
is to look at what type of framework is needed.
Both for web and thick clients, etc.
How will we all talk to the DB?
How will we wrap/protect the database (a lot of system views)?
The framework gives us handrails for adding functionality. It allows us to prototype quicker
and get user feedback quicker.
The flexibility of using views includes the ability to add columns in real time and have them show up in various grids/pages...
And I would NEVER want to use the same framework for every project. Ever! LOL
|
|
|
|
|
Ditto
PartsBin, an Electronics Part Organizer - an updated version available!
JaxCoder.com
|
|
|
|
|
We're both hopeless Luddites when it comes to the framework du jour. The fundamental problem with frameworks is that as soon as your requirements go outside the framework, you start fighting the framework. Since no two projects are the same, this means that frameworks invariably cause more technical debt that has to be dealt with down the road.
|
|
|
|
|
That can be true, but there will usually be more technical debt without a framework.
|
|
|
|
|
I consider trying to shoehorn a framework into your needs to be the largest technical debt you can incur.
|
|
|
|
|
Yes, agreed.
Each framework solves one, maybe two, valuable problems. You know it's going to suck when the people making the call ignore all that and just pick the popular new shiny thing so people will like them more.
Sure guys, let's do a single-page app in a jazzy .ts framework... only to chew it up and spit it out as thousands of different pages served as static files with no binding whatsoever. Great job guys, you sure captured the design philosophy of that framework! Odd thing that all our frontend devs are now running away, though. Very odd, that.
|
|
|
|
|
I've worked on a varied bunch of projects using the Django web framework, and the only times I've felt like I'm fighting the framework are when a lesser programmer has written idiotic code because they thought they knew better than the framework in the first place, and I'm the chump stuck maintaining it.
There are good frameworks that work for a wide variety of things, but you must drink the kool-aid. You have to do it their way through and through, and only then do you get the benefits of a framework.
|
|
|
|