int integerValue = 233;
Originally the integerValue's valid data is 10 bits rather than the full 16.
It is stored in two 8-bit registers.
Only 10 bits are defined, so I mask the value with 0x3FF; that may be overkill, but it is safe.
double doubleValue = integerValue; // the integer value is converted to a double
// and stored in the new variable
I understand the above; I used the wrong term when I called it a conversion.
My basic question is:
what is the difference between these?
// the value passed in is already a double, or if not it will be converted
// the integer value will be converted into a double. But you do not need a cast.
function( (double) integerValue,....
However, if you are passing a pointer to an array of integers then you cannot use anything like the above. You must pass it as an integer pointer, and the called function will then do the conversion. Something like:
double function(int* myArrayOfIntegers, int count) {
    double result = 0.0;
    for (int i = 0; i < count; ++i) {
        double temp = myArrayOfIntegers[i]; // get next integer and convert to double
        // do some calculations
    }
    return result;
}
// create an array to hold all the values
int* theIntegers = new int[theCount];
// fill the array from the source values
// send the array to be processed
double theAnswer = function(theIntegers, theCount);
This is getting really interesting.
Here is some stuff to think about.
It all started with the compiler complaining about an integer pointer being passed to a function expecting a double pointer.
Second - I put together two sources: one collects 10 bits of data into a 16-bit integer array and does no processing, just collecting.
The second source builds/emulates a sine wave in an array of doubles, which then gets processed by an FFT. So I replaced the emulated (double) data with the real (integer) collected data.
My thinking is: if I cast or build a new variable as a double, it will take a 16-bit integer and process it as a double; then what will happen to the original array?
I am still learning about FFT, but as far as I can tell the first FFT function puts new data (normal pointer behavior) into the original (integer) array as what? Integer? I don't think so. (And how is the pointer advanced? If as a double, will it skip the next ADC data?)
I think the only way is to collect the ADC data into a double array from the get-go.
And that was the OP question.
Thanks for all your help.
Not really, it's pretty basic stuff. As I said before, you cannot send a pointer to an array of integers to a function that expects a pointer to an array of doubles. The types are totally different so your program would just be processing garbage. I showed you in my previous message how to pass the array of integers to the function that needs the values as doubles. That is all there is to it, the compiler will generate the correct code to convert each integer to a double as you process them. The resulting double values can then be used in your FFT calculations.
one collecting 10 bits of data into 16 bits integer array
This is common for transfers along byte boundaries. If you transmit 10 bits, how many bytes are you moving across a network? ...yeah, whole numbers of bytes are desirable for network operations. Pretty much all receivers/digitizers do this.
if I cast or build a new variable as double it will take an 16 bit integer and
process it as double, than what will happen to the original array ?
When casting, nothing happens to your original data; a copy is made. Although, as Richard already mentioned, you can't simply cast a whole array, since your element spacing will be different (16-bit vs 64-bit array elements).
I am still learning about FFT , but as far as I can tell the first FFT function
puts new data (normal pointer behavior) into the original (integer )
array as what ? Integer?
Output of most FFT routines is a float or double (depending on whether it's "single" or "double" precision math).
I think the only way is to collect the ADC data into double array
from get go.
Not going to happen. Analog-to-digital converters (and most hardware, for that matter) deal with integers (that's why you'll commonly hear the term fixed-point math).
Thanks for reply.
Somehow I did not get the message across, and the main question is getting muddled by stuff I am not so concerned about.
The bottom line - which I actually just wanted someone to confirm - is that casting pointers from integer to double is a stupid idea.
You can use the wide-char versions of atof() or strtod(). If the string does not begin with the numeric value (spaces are ignored), you must parse the string and pass a pointer to the value's offset.
Signed integers seem to be a minefield of undefined behaviour lurking around every corner. You can't even add them without potentially blowing up the Death Star, unless you test for potential overflow first.
Should they be avoided? How should this be dealt with? How bad is it to pretend it's not a problem?
But there's a well-known (and infamous) optimization that GCC does where if on some code path a signed integer would overflow, it deduces that therefore that code path must be dead. In practice this often means that overflow tests that are done after the overflow has already occurred (such as maybe you calculate something, then don't use the result if the calculation overflowed) are deleted, so your program looks correct (after all, you tested for overflows, right?) but isn't.
Lots and lots of things in C++ (and C for that matter) can lead to undefined behavior if preconditions are not met. Signed integer arithmetic is just one of many. If you're programming in this language, you should be used to dealing with narrow contracts.
So, no, they shouldn't be avoided. Deal with them depending on the situation, in many cases an assert will suffice. Pretending it's not a problem is fatal.
(and no, gcc isn't the only compiler that assumes that naive signed overflow checks are always false)