one collecting 10 bits of data into a 16-bit integer array
This is common for transferring along byte boundaries. If you transmit 10 bits, how many bytes are you moving across a network? ...yeah, whole numbers of bytes are desirable for network operations. Pretty much all receivers/digitizers do this.
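As a concrete sketch (the right-justified layout is an assumption; your digitizer's manual has the real bit layout), a 10-bit sample sitting in a 16-bit word just needs a mask to recover:

```cpp
#include <cstdint>

// Assumed layout: the 10-bit sample is right-justified in a 16-bit word,
// with the top 6 bits as unused padding.
uint16_t extract_sample(uint16_t raw_word)
{
    return raw_word & 0x03FF; // keep the low 10 bits, drop the padding
}
```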
If I cast or build a new variable as double, it will take a 16-bit integer and
process it as a double; then what will happen to the original array?
When casting, nothing happens to your original data; a copy is made. Although, as Richard already mentioned, you can't simply cast a whole array, since your element spacing will be different (16-bit vs. 64-bit array elements).
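To make the spacing problem concrete, here's a minimal sketch (the names are made up for illustration) contrasting the pointer cast with the element-wise copy:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

void convert(const int16_t* adc, std::size_t n)
{
    // WRONG: a pointer cast reinterprets the raw bits. Each double the
    // compiler reads is 64 bits wide, i.e. it swallows four of your
    // 16-bit elements at once and treats them as one IEEE-754 value.
    // const double* d = reinterpret_cast<const double*>(adc);

    // RIGHT: element-wise conversion into a new buffer; the original
    // integer array is left untouched.
    std::vector<double> samples(adc, adc + n); // each int16_t converted
}
```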
I am still learning about FFT, but as far as I can tell the first FFT function
puts new data (normal pointer behavior) into the original (integer)
array as what? Integer?
Output of most FFT routines is a float or double (depending on whether it's "single" or "double" precision math).
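For instance, with FFTW's double-precision interface (just one common library; others differ in names but not in shape), the transform reads a double buffer and writes complex doubles; your integer array is only the source of a copy:

```cpp
#include <cstdint>
#include <vector>
#include <fftw3.h>

// Sketch assuming FFTW: real-to-complex transform of ADC data.
void fft_adc(const int16_t* adc, int n)
{
    std::vector<double> in(adc, adc + n); // int16_t -> double copy

    fftw_complex* out = static_cast<fftw_complex*>(
        fftw_malloc(sizeof(fftw_complex) * (n / 2 + 1)));

    fftw_plan plan = fftw_plan_dft_r2c_1d(n, in.data(), out, FFTW_ESTIMATE);
    fftw_execute(plan);
    // out[k][0] and out[k][1] hold the real/imaginary parts of bin k

    fftw_destroy_plan(plan);
    fftw_free(out);
}
```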
I think the only way is to collect the ADC data into a double array
from the get-go.
Not going to happen. Analog-to-digital converters (and most hardware, for that matter) deal with integers (that's why you'll commonly hear the term fixed-point math).
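So the usual pattern is to convert (and scale) on the way out of the integer domain; a hedged sketch, with a made-up 3.3 V reference standing in for whatever your datasheet actually says:

```cpp
#include <cstdint>

// Hypothetical scaling for a 10-bit ADC: the reference voltage is an
// assumption; the conversion to double happens here, not in the hardware.
double counts_to_volts(uint16_t raw)
{
    const double vref = 3.3;               // assumed reference voltage
    return (raw & 0x03FF) * vref / 1023.0; // 10-bit full scale = 1023
}
```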
Thanks for the reply.
Somehow I did not get the message across, and the main question is getting muddled by stuff I am not so concerned about.
The bottom line - which I actually just wanted someone to confirm - is that casting pointers from integer to double is a stupid idea.
You can use the wide-char versions of atof() or strtod(). If the string does not begin with the numeric value (leading spaces are ignored), you must parse the string and pass a pointer to the value's offset.
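A minimal sketch using std::wcstod() (the standard wide-char strtod(); the string contents here are invented for illustration):

```cpp
#include <cwchar>
#include <cstdio>

int main()
{
    // Leading whitespace is skipped automatically, but a non-numeric
    // prefix is not, so we pass a pointer at the value's offset.
    const wchar_t* text  = L"value: 3.14";
    const wchar_t* start = text + 7;     // skip the "value: " prefix
    wchar_t* end = nullptr;

    double v = std::wcstod(start, &end); // end points just past the number
    std::printf("%f\n", v);              // prints 3.140000
}
```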
Signed integers seem to be a minefield of undefined behaviour lurking around every corner. You can't even add them without potentially blowing up the Death Star, unless you test for potential overflow first.
Should they be avoided? How should this be dealt with? How bad is it to pretend it's not a problem?
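For reference, the pre-test looks something like this (a sketch; checked_add is a made-up name): check against the limits before the addition, so the overflow never actually happens:

```cpp
#include <climits>
#include <stdexcept>

// Safe signed addition: test *before* the operation, because the
// overflow itself is what triggers undefined behaviour.
int checked_add(int a, int b)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        throw std::overflow_error("int addition would overflow");
    return a + b;
}
```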
But there's a well-known (and infamous) optimization GCC does: if on some code path a signed integer would overflow, it deduces that the code path must be dead. In practice this often means that overflow tests done after the overflow has already occurred (say you calculate something, then discard the result if the calculation overflowed) are deleted, so your program looks correct (after all, you tested for overflows, right?) but isn't.
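The pattern being described looks roughly like this (an illustrative sketch; whether the branch is actually deleted depends on compiler and flags):

```cpp
#include <climits>

// Broken post-hoc check: by the time the comparison runs, the signed
// overflow has already happened, so the compiler may assume it didn't
// and fold the test to false, deleting the branch.
int add_then_check(int a, int b)
{
    int sum = a + b;        // UB if this overflows
    if (b > 0 && sum < a)   // may be treated as always false
        return INT_MAX;     // "saturate" branch that can silently vanish
    return sum;
}
```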
Lots and lots of things in C++ (and C for that matter) can lead to undefined behavior if preconditions are not met. Signed integer arithmetic is just one of many. If you're programming in this language, you should be used to dealing with narrow contracts.
So, no, they shouldn't be avoided. Deal with them depending on the situation; in many cases an assert will suffice. Pretending it's not a problem is fatal.
(and no, gcc isn't the only compiler that assumes that naive signed overflow checks are always false)
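Two common ways to handle it, sketched here (the function names are made up; __builtin_add_overflow is a real GCC/Clang intrinsic):

```cpp
#include <cassert>
#include <climits>

// Option 1: make the narrow contract explicit. The caller promises no
// overflow; the assert documents and debug-checks that promise.
int add_with_contract(int a, int b)
{
    assert(!((b > 0 && a > INT_MAX - b) ||
             (b < 0 && a < INT_MIN - b)) && "caller must not overflow");
    return a + b;
}

// Option 2: a GCC/Clang intrinsic that detects overflow without UB.
bool try_add(int a, int b, int* out)
{
    return !__builtin_add_overflow(a, b, out); // true on success
}
```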