Thanks for the reply.
Somehow I did not get my point across, and the main question is getting muddled by details I am not really concerned about.
The bottom line - which I actually just wanted someone to confirm - is that casting a pointer from integer to double is a stupid idea.
You can use the wide-char versions of atof() or strtod() (the standard wcstod(), or _wtof() on Windows). Leading whitespace is skipped automatically, but if the string does not otherwise begin with the numeric value, you must locate the value yourself and pass a pointer to its offset within the string.
Signed integers seem to be a minefield of undefined behaviour lurking around every corner. You can't even add them without potentially blowing up the Death Star, unless you test for potential overflow first.
Should they be avoided? How should this be dealt with? How bad is it to pretend it's not a problem?
But there's a well-known (and infamous) GCC optimization: if a signed integer would overflow on some code path, the compiler deduces that the code path must be dead. In practice this often means that overflow tests performed after the overflow has already occurred (say you calculate something, then discard the result if the calculation overflowed) are simply deleted. Your program looks correct - after all, you tested for overflow, right? - but isn't.
Lots and lots of things in C++ (and C for that matter) can lead to undefined behavior if preconditions are not met. Signed integer arithmetic is just one of many. If you're programming in this language, you should be used to dealing with narrow contracts.
So, no, they shouldn't be avoided. Deal with them depending on the situation; in many cases an assert will suffice. Pretending it's not a problem is fatal.
(and no, gcc isn't the only compiler that assumes that naive signed overflow checks are always false)