|
I had a great idea this morning upon waking up. I don't know how to figure out which #include's are needed to compile any particular source, other than tedious, error-prone inspection followed by numerous compilations, #include'ing files one by one to resolve the latest error. The great idea was to automate the process: compile every possible combination — [1...40]-element subsets of the 40 #include's in my project — and settle on the combination with the smallest number needed for an error-free compile. This could easily be done via awk to automate the insertion of the #include statements. However, a calculation of the number of possible combinations of 40 files over every possible subset size resulted in "inf" appearing on my monitor, so I guess it won't work, at least not on my pig of a machine. I guess I will have to wait for entangled bits.
|
|
|
|
|
This was the original reason I started to develop my C++ static analysis tool. All you need to do is switch to C++11 or enhance it to support C++20!
There is also Include What You Use. I haven't looked into it, but it might do what you need.
EDIT: Pruning #include directives to the minimal set that compiles isn't a good idea, because it fails to account for headers that are only pulled in transitively but should nevertheless be included directly.
|
|
|
|
|
Thanks for informing me of the tools. I would greatly like to develop a C++20 version, but I would need to be a 10x programmer to add another project to my current efforts. As I am more of a 0.1x programmer (I am always surprised how little I accomplish most days — unless, of course, I blame my pig of a machine, which I am beginning to lean toward, as I often find myself drumming my fingers instead of typing with them), it will have to wait, though perhaps I could dabble at it from time to time. I will look into IWYU. Thanks. I looked into checkheaders but found it reported many incorrect "... not needed" messages, unless of course I utilized it improperly, which is always a possibility. As for transitive #includes, I assume that refers to nested #includes, which I do not do. All my #includes are only in cpp files. - Best
|
|
|
|
|
By transitive, I mean if A includes B, and B includes C, then A sees C transitively, so A will compile even when it should also include C. For example, if C defines a base class, B derives from it, and A then derives from B, there is no need for A to include C. But in other cases (A using a free function, a bare typedef, or an enum from C), A should include C.
modified 1-Nov-22 17:21pm.
|
|
|
|
|
Thank you for the clarification. The last statement confuses me. Should it not be "A should include C", as A is using an identifier defined in C?
|
|
|
|
|
Quite right. I'll fix it.
|
|
|
|
|
It's been a while since I've used C++ (and thus #include statements), as I've been doing C# exclusively for about 15 years now. But if I find myself in a situation where I have to add a 'using' statement without understanding why I need it, then I consider that a problem I need to solve (understand why it's needed), rather than adding it, shrugging it off, and concluding "whatever works"...
|
|
|
|
|
|
I am surprised a commercial product does not exist to debug #includes, making it effortless. It seems to me it would be simple enough for a programmer knowledgeable in compiler writing — to wit: merely scan the code, identify all the identifiers and their signatures where needed, and voila, presto, bingo. How can such a product not sell like hot cakes? Apparently there is something I do not understand.
|
|
|
|
|
I feel like such a tool would belong in the weird and the wonderful.
I hate to say this, but if your includes are so heavily dependent on ordering, you are almost certainly due for a restructure of your code.
For example, it might be better to do the includes as more of a tree in terms of what includes what than you currently have it.
There are a number of ways to deal with it but it all has to do with structure.
Edit: I'm not saying this is certainly the issue in your case. It just smells from here. My spidey sense is tingling.
To err is human. Fortune favors the monsters.
|
|
|
|
|
By tree I assume you mean nested/transitive, if that's the right term. Doesn't that leave me with the same problem? That is, each file, whether cpp or h, still needs certain #include's, and in a certain order. As you clearly are a better programmer than I, there must be something I do not understand. As for relying on the order of #include's, I gave up attempting to make each independent of any other after scratching my head raw.
|
|
|
|
|
It leaves the same problem, but it creates potentially more organization.
The better alternative is to reduce the number of cross header dependencies, or restructure the dependencies into common headers included by each of the downstream headers.
This of course isn't possible if you don't "own" the code in those headers, and in any case, it's probably a lot of work to restructure it as above.
So I'm not saying your tool doesn't have merit. I'm just saying if you need it, you might want to take a second look at how things are structured.
To err is human. Fortune favors the monsters.
|
|
|
|
|
"my spider sense..." lol. well put
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
technically it was "spidey" - comes from the old spiderman comics.
To err is human. Fortune favors the monsters.
|
|
|
|
|
I'm sure there are tools to do this. Maybe something like CPPDepend and its Dependency Graph?
That seems to be a commercial tool, I've not used it, so I can't comment on whether it actually works or not, but it seems like it might give you what you're looking for. Maybe search results for "C++ include dependency graph" or similar might lead you to what you're looking for.
Keep Calm and Carry On
|
|
|
|
|
This bloody #include hell is a huge reason why I'm looking forward to widespread standard module support. Until then, I combine all the external includes into one header and include that.
|
|
|
|
|
Does the order of the includes matter?
If not, you could include all of them to check the project compiles, then remove them one by one, adding each back if removing it breaks the compile.
40! is about 8.16e+47 according to Google — and even ignoring order, 40 headers give 2^40 − 1, roughly 1.1 trillion, subsets. Either way, exhaustive search would take a while to work through.
|
|
|
|
|
Not a bad idea, but how do you keep circular dependencies from cropping up in this case? Not all of them are caught at compile time, so you run the risk of introducing them into your application with a just-get-it-to-compile mindset.
|
|
|
|
|
You could try the technique used in the Git project.
Firstly, they have two top-level `.h` files that are always included as the first lines in any other file that will need includes. This provides a level of commonality across the project, and a consistent inclusion order.
Secondly, and slightly more important for you, is that all the included `<file>.h` files have a preamble/suffix of:
#ifndef <FILE>_H
#define <FILE>_H
... stuff ...
#endif /* <FILE>_H */
Thus stuff is included only once and a hierarchy of usage is created.
Add an #else /* warnings */ branch for extra feedback, to local taste.
|
|
|
|
|
Instead of inserting until there are no errors, have you tried deleting until there's an error and then re-inserting it? When you can't delete any without causing an error, you've finished.
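That delete-and-retest loop needs at most one trial compile per header — about 40 compiles rather than 2^40 subsets. A sketch of the idea, where `compiles()` is a stub standing in for a real compiler invocation and the header names are invented:

```cpp
// Greedy include pruning: try dropping each include once; keep it only if
// the build breaks without it.
#include <algorithm>
#include <cstddef>
#include <set>
#include <string>
#include <vector>

using IncludeList = std::vector<std::string>;

// Hypothetical stand-in for "does the .cpp compile with these includes?"
// A real version would rewrite the file and invoke the compiler.
bool compiles(const IncludeList& includes) {
    static const std::set<std::string> needed = {"b.h", "d.h"};  // demo assumption
    return std::all_of(needed.begin(), needed.end(), [&](const std::string& h) {
        return std::find(includes.begin(), includes.end(), h) != includes.end();
    });
}

IncludeList minimize(IncludeList includes) {
    for (std::size_t i = 0; i < includes.size();) {
        IncludeList trial = includes;
        trial.erase(trial.begin() + i);
        if (compiles(trial))
            includes = trial;  // not needed: drop it, retry same position
        else
            ++i;               // needed: keep it, move on
    }
    return includes;
}
```

With this stub, `minimize({"a.h", "b.h", "c.h", "d.h"})` returns `{"b.h", "d.h"}`. One caveat, echoing the transitive-include discussion above: the surviving set only proves the file compiles, not that every header it uses is included directly.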
|
|
|
|
|
When I want to debug things related to header inclusion, I typically run the compiler in preprocessor-only mode and scan the output.
cl /P
... or ...
gcc -E
You've tried this? The preprocessed output sometimes shows some interesting things and might be good input for a tool chain.
-- Matt
|
|
|
|
|
So because I don't want to selectively choose what should be encrypted vs what I shouldn't bother encrypting, my entire backup drives have been encrypted with TrueCrypt. I've stuck with this for over a decade.
TrueCrypt development ceased many years ago, and (it seems to me) the most popular replacement (branched off of TrueCrypt) is now VeraCrypt.
It's been long enough, I really should be moving my backup drives from TC to VC.
Has anyone here done that? What was your approach?
I have two backup drives - identical to each other. The way I see myself doing it is:
a) Format DriveA with VC
b) Mount DriveB with TC
c) Copy the content of DriveB to DriveA
d) Dismount everything
Once I'm confident all the data's been transferred, repeat the process in the opposite direction - that is,
a) Format DriveB with VC
b) Mount DriveA with VC
c) Copy the content of DriveA to DriveB
d) Dismount everything
Or just use something like CloneZilla to clone DriveA back to DriveB.
At that point, TC is completely out of the picture.
The reason I want to copy backup drive to backup drive, rather than directly backing up my live drive to the first backup drive, is that in order to do a full backup, I need to dismount some files (primarily VMs), and I don't want to leave my system with those VMs dismounted for the entire time it's gonna take to backup the whole thing.
After the backups are on VC, then I'll take the time to run my backup script, which only re-synchronizes modified files (which typically only takes a few minutes) - so that should minimize the down time while my backup gets re-synched.
Would you do it differently?
|
|
|
|
|
Just curiosity: Doesn't having your backups encrypted increase the risk of not being able to use them for recovery?
I'd opt for physical security (lock them up off-site, for example) instead.
Software Zen: delete this;
|
|
|
|
|
Only if the software is buggy and can't read back what it wrote itself. TC/VC are so mature at this point, I really don't worry about that.
If you're thinking about disk failures...then (a) that's why I do my backups in pairs and (b) an unreadable sector is simply unreadable, whether it's encrypted or not. And things like SpinRite don't care whether a sector is encrypted or not - these tools only concern themselves with trying to recover raw bits. They don't even know whether they're reading a FAT partition, NTFS, ReFS, ZFS, whatever.
|
|
|
|
|
I use SyncBack (free). It allows a number of different options, so that you can copy one way or do a sync. It also lets you do a simulated run and logs what would change.
Just curious, why have you encrypted the whole disk? Doesn't that make access a whole lot slower?
// TODO: Insert something here Top ten reasons why I'm lazy
1.
|
|
|
|