|
I understand that, but given the exact same data points, shouldn't the cubic interpolation of those points always be the same? I think the problem is the additional restriction that the second derivative be continuous between each pair of cubic segments. Without such a restriction, I could compute a cubic interpolation between each pair of points using slopes determined from the neighboring data points (i.e., each piece of the interpolating method would be defined ONLY by the two points involved and the two points surrounding them). However, such an approach would not have a continuous second derivative. To accomplish that goal, I think the algorithm propagates the second derivatives through to match at each endpoint, instead of the first derivatives, which results in slightly different solutions when provided a consecutive subset of the original data points.
Hopefully this clarifies the problem (I am not fitting a single polynomial to two different data sets; I am applying a 3rd-degree polynomial between each pair of consecutive data points).
Sounds like somebody's got a case of the Mondays
-Jeff
|
No. You are confusing the task of determining the coefficients of a one-dimensional cubic polynomial from 4 data points with the task of interpolating an arbitrary number of 3D points with a piecewise polynomial cubic spline curve.
The latter creates one new spline segment for each additional point beyond the first. So if you have 4 points, you get three segments, not one!
|
The only way to satisfy your requirement is a polygon: by your own requirement, a spline over N points should equal the joining of the N-1 splines you get when creating splines for each pair of consecutive points in the sequence. Since, with only two points, the resulting spline curve is always a line segment, the full spline will be a polygon.
The moment you introduce continuity conditions for the first or higher derivative, the spline needs to look ahead to a point or points beyond its scope, so it can determine what its derivatives must evaluate to at its end points.
|
For anyone else looking at this, I figured out the answer. A cubic spline over N+1 points solves for the 4 coefficients of each of the N cubic segments. This implies that you need exactly 4N equations to compute a unique spline over the data points (since you have 4N unknowns).
A cubic spline uses the following set of 4N-2 equations to compute the 4N cubic coefficients, where segment n spans [x_n, x_{n+1}] for n = 0 .. N-1 (the other two equations are boundary conditions, usually the natural conditions f''_0(x_0) = f''_{N-1}(x_N) = 0):
1) f_n(x_n) = y_n (N equations)
2) f_n(x_{n+1}) = y_{n+1} (N equations)
3) f'_n(x_{n+1}) = f'_{n+1}(x_{n+1}) (N-1 equations)
4) f''_n(x_{n+1}) = f''_{n+1}(x_{n+1}) (N-1 equations)
As you can see from the above equations, the spline is not forced to meet a specified slope at each point; it only guarantees that two consecutive segments have a continuous first (and second) derivative at their shared point. Because of this, I had to use cubic interpolation between each pair of points using the equation given as Equation 1 on the Wikipedia Cubic Spline[^] page. This results in a piecewise interpolation that has a continuous first derivative, but not a continuous second derivative. In essence, this changes equalities 3 and 4 from above to be:
3) f'_n(x_n) = k_n (N equations)
4) f'_n(x_{n+1}) = k_{n+1} (N equations)
Not a spline, but it consistently fits any consecutive subset of a dataset with exactly the same cubic polynomial interpolants. I use the data points adjacent to those defining a segment's endpoints to estimate the slopes at the segment endpoints.
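For illustration, here is a minimal sketch of that scheme in code (my code, not from the post): piecewise cubic Hermite segments whose endpoint slopes come from central differences over the neighboring points (one-sided at the boundaries). Since each segment depends only on its two endpoints and their immediate neighbors, any consecutive subset of the data reproduces the same interior segments; the result is C1 but not C2.

#include <vector>
#include <cstddef>

// Central-difference slope estimate at point i, one-sided at the ends.
static double slopeAt(const std::vector<double>& x,
                      const std::vector<double>& y, std::size_t i)
{
    std::size_t lo = (i == 0) ? i : i - 1;
    std::size_t hi = (i + 1 == x.size()) ? i : i + 1;
    return (y[hi] - y[lo]) / (x[hi] - x[lo]);
}

// Evaluate the cubic Hermite segment on [x[i], x[i+1]] at xq.
double hermiteSegment(const std::vector<double>& x,
                      const std::vector<double>& y,
                      std::size_t i, double xq)
{
    double h  = x[i + 1] - x[i];
    double t  = (xq - x[i]) / h;     // normalized parameter in [0, 1]
    double k0 = slopeAt(x, y, i);    // slope at left endpoint
    double k1 = slopeAt(x, y, i + 1);
    double t2 = t * t, t3 = t2 * t;
    // Standard cubic Hermite basis functions
    return (2*t3 - 3*t2 + 1) * y[i]
         + (t3 - 2*t2 + t)   * h * k0
         + (-2*t3 + 3*t2)    * y[i + 1]
         + (t3 - t2)         * h * k1;
}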
Sounds like somebody's got a case of the Mondays
-Jeff
|
I am looking for a statistical method or algorithm to analyse change over time and to determine whether a value is outside of a range and is a false positive or an outlier.
The context is that I have webcams which monitor movement.
The movement returned is as a percentage of change between frames.
Occasionally a cloud covers the sun or a shadow falls across the webcam image, which causes a spike in the percentage of change reported.
A normal reading may fluctuate within a range of about 10% over 0.5 seconds.
A sudden change in lighting might jump the percentage by, say, 60%, and I would consider that a false positive.
So what I am looking for is some way of analysing the change over time and catching the false positives rather than reporting them as movement.
Any pointers or links much appreciated.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
I'm not sure if this is what you are after, but there was an article here on CP that I read and tried out successfully, which tracked movement of objects captured with a webcam using something called a Haar Cascade. I'm sorry I can't remember the name of the article, but you could try a search of the CP articles with suitable search terms.
I'm on a mobile device right now and can only say that using CP's search facilities from it is, hmm, let's put it kindly, suboptimal.
Cheers!
"I had the right to remain silent, but I didn't have the ability!"
Ron White, Comedian
|
Thanks - I will look into that.
I also decided to gather some of the data showing the movement level detected over time.
I did this to see if I could determine any pattern that was present when a spike in light took place.
What I noticed, and I know this is obvious, was that a light spike showed as a sudden increase over a short period of time.
I learnt the oft-learnt lesson: look at the data first rather than basing a plan of action on a hypothesis.
So my current plan is to look at the differential over time and spot any spikes in this differential.
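For instance, a minimal sketch of that idea in code (the class name and threshold are mine, purely illustrative): flag a sample as a lighting spike when the jump from the previous sample is larger than genuine movement produces.

#include <cmath>

// Sketch: treat a sample as a false positive when the change from the
// previous sample exceeds what real movement can produce.
class SpikeFilter
{
public:
    // maxJump: largest plausible change between consecutive samples for
    // real movement (e.g. ~10% per 0.5 s sample, from the figures above).
    explicit SpikeFilter(double maxJump) : maxJump_(maxJump) {}

    // Returns true if this sample looks like a lighting spike.
    bool isSpike(double percent)
    {
        bool spike = hasPrev_ && std::fabs(percent - prev_) > maxJump_;
        prev_ = percent;
        hasPrev_ = true;
        return spike;
    }

private:
    double maxJump_;
    double prev_ = 0.0;
    bool hasPrev_ = false;
};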
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
Hello,
I want to find a fast algorithm for computing 1/d, where d is a double (although it can be converted to an integer). Of the many algorithms (SRT, Goldschmidt, Newton-Raphson, ...), which is the best for this case? I'm writing my program in the C language.
Thanks in advance.
|
It depends. On the quality of the implementation, the number of bits that you care about, the platform, and even the compiler. There's no simple answer. You'll just have to try them - unless you tell more about the intended target etc.
Some more options might be:
- Avoiding the division altogether. Calculate with fractions.
- Using hardware-supplied division, if it exists. On some processors it's not half bad.
- Using hardware-supplied approximate reciprocal, optionally with refinement step. x86 has had RCPSS[^] for a while now, it's a lot faster than actually dividing, even if you do a refinement step. 3DNow! actually had better support for fast reciprocals but it's AMD specific and obsolete.
- Using crazy IEEE-format-specific tricks such as this[^]
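To illustrate the RCPSS option, a small sketch (assumes an x86 target with SSE; the function name is mine): the hardware estimate gives roughly 12 bits of precision, and one Newton-Raphson step roughly doubles that.

#include <xmmintrin.h>  // SSE intrinsics

// Approximate 1/d: hardware estimate plus one refinement step.
float fastReciprocal(float d)
{
    float x = _mm_cvtss_f32(_mm_rcp_ss(_mm_set_ss(d)));  // ~12-bit estimate
    return x * (2.0f - d * x);  // one Newton-Raphson refinement step
}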
|
I'm writing a rather large program, and within it there are some divisions. The program must be implemented on an FPGA, and on the FPGA there is no division, so I must write a fast algorithm for all the divisions. I don't have any access to the hardware and must implement it at the software level.
I have read some algorithms from here, but I don't know which one to use.
|
So, you can't add a hardware divider to the FPGA then? Or fast reciprocal support?
Anyway it depends. Does it have fast multiplication? If not, well, that's a problem, you could only implement the slow methods then.
If you have fast multiplication and IEEE floats, you can use the weird trick I linked to in my previous post with a couple of refinement steps. That's really just Newton-Raphson division with a simpler calculation for the initial approximation (but afaik it still only takes 3 refinements for single-precision floats, just like the regular initial approximation). Fast reciprocal support works that way too - give a fast initial approximation (handling the exponent right and getting the significant bits from a lookup table; if you get 12 significant bits that way, you only need one refinement step for single precision, and 13 are enough to reach double precision in 2 steps) and optionally have instructions that help implement the refinement step (like AMD's PFRCPIT1 and PFRCPIT2), for example to calculate Y = (1 - D*X) and to calculate X + X * Y.
Even without those tricks Newton–Raphson division is still not bad, with the linear approximation it takes only 4 refinements for double-precision floats, but it also takes some annoying exponent adjustments to get in the right range first (in hardware that wouldn't be half as annoying).
Goldschmidt division is, afaik, roughly equivalent in performance and might have a slightly less complex implementation. It's really the same sort of deal - trickery with the exponent to get in the right range, the "2 - something" estimation trick (which is rearranged in Newton-Raphson division, but it's really the same thing), and doing the refinement step until all the bits are right. It just looks a little different.
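To make the comparison concrete, here is a rough sketch of the two refinement loops in plain C++ floats (my code, illustrative only, not FPGA-ready; both assume the input has already been scaled into the right range):

// Newton-Raphson: x converges to 1/d; the error squares each step, so
// each iteration roughly doubles the number of correct bits.
float newtonReciprocal(float d, float x)  // x: initial estimate of 1/d
{
    for (int i = 0; i < 3; ++i)
        x = x * (2.0f - d * x);
    return x;
}

// Goldschmidt: multiply numerator and denominator by the same (2 - d)
// factors; d converges to 1, so q converges to n/d. Assumes d has been
// scaled into (0, 2) by exponent trickery beforehand.
float goldschmidtDivide(float n, float d)
{
    float q = n;
    for (int i = 0; i < 5; ++i)
    {
        float f = 2.0f - d;
        q *= f;
        d *= f;
    }
    return q;
}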
|
Thanks, harold, for your complete and good reply.
I cannot add hardware to the FPGA (the program would become complicated) and don't have any reciprocal support. I wrote some algorithms as follows (from the pages you sent me and from others):
#include <iostream>
#include <cmath>    // pow, fabs
#include <cstdlib>  // rand
#include <ctime>    // clock, clock_t
using namespace std;

void test1(){
    const long MAX_Iter = 10000000;
    int    *ar1   = new int   [MAX_Iter];
    double *R     = new double[MAX_Iter];
    double *R2    = new double[MAX_Iter];
    double *ERROR = new double[MAX_Iter];
    double max_error = pow(10, -100.0);
    for(int i = 0; i < MAX_Iter; i++){
        ar1[i] = rand() + 2 + clock();
    }
    cout<<"===================================BESME Allah============================="<<endl<<endl;
    cout<<"==========================regular algorithm 1/integer============================="<<endl;
    clock_t Start = clock();
    for(int i = 0; i < MAX_Iter; i++){
        R[i] = 1.0 / ar1[i];  // reference result: ordinary division
    }
    cout<<"  "<<clock() - Start<<endl;
    cout<<"==========================your suggestion algorithm (ASM):1/integer============================="<<endl;
    Start = clock();
    float inv;
    for(int i = 0; i < MAX_Iter; i++){
        // Reinterpret the float's bits as an integer and subtract from
        // the magic constant to get an initial estimate of 1/x...
        float x = ar1[i];
        unsigned int *y = (unsigned int*)&x;
        *y = 0x7EF39252 - *y;
        inv = x;
        // ...then apply three Newton-Raphson refinement steps
        for(int j = 0; j < 3; j++){
            inv = inv * (2 - inv * ar1[i]);
        }
        R2[i] = inv;
    }
    cout<<"  "<<clock() - Start<<endl;
    max_error = pow(10, -100.0);
    for(int i = 0; i < MAX_Iter; i++){
        ERROR[i] = fabs(R[i] - R2[i]);
        if(max_error < ERROR[i])
            max_error = ERROR[i];
    }
    cout<<"  maximum error="<<max_error<<endl;
    cout<<"========================newton raphson division algorithm time=================="<<endl;
    Start = clock();
    int iter = 15;
    for(int i = 0; i < MAX_Iter; i++){
        // Start from 2^-31 and scale the estimate up to match the
        // magnitude of the input (find its highest set bit)
        int j = 32;
        double x = 0.0000000004656612873077392578125;  // 2^-31
        while(!((ar1[i] << (32 - j)) & 2147483648U)){
            j--;
            x *= 2;
        }
        for(int j = 0; j < iter; j++){
            x = x * (2.0 - ar1[i] * x);  // Newton-Raphson refinement
        }
        R2[i] = x;
    }
    cout<<"  "<<clock() - Start<<endl;
    max_error = pow(10, -100.0);
    for(int i = 0; i < MAX_Iter; i++){
        ERROR[i] = fabs(R[i] - R2[i]);
        if(max_error < ERROR[i])
            max_error = ERROR[i];
    }
    cout<<"  maximum error="<<max_error<<endl;
    cout<<"===========================goldschmidt division algorithm=========================="<<endl;
    #define BASE 31
    typedef unsigned int uint32;
    typedef unsigned long long int uint64;
    Start = clock();
    for(int i = 0; i < MAX_Iter; i++){
        // Normalize the input so its most significant bit is set
        int shl = 0;
        uint32 nfp = ar1[i];
        while((nfp & 0x80000000) == 0){ nfp <<= 1; shl++; }
        // Fixed-point Goldschmidt iteration: q converges to 2^64/nfp
        uint64 q = 0x100000000ULL;
        uint64 e = 0x100000000ULL - (uint64)nfp;
        for(int j = 0; j < 5; j++){
            q += (q * e) >> (uint64)32;
            e  = (e * e) >> (uint64)32;
        }
        R2[i] = (uint32)(q >> (64 - 2*BASE - shl));
    }
    cout<<"  "<<clock() - Start<<endl;
    max_error = pow(10, -100.0);
    for(int i = 0; i < MAX_Iter; i++){
        ERROR[i] = fabs(R[i] - R2[i]);
        if(max_error < ERROR[i])
            max_error = ERROR[i];
    }
    cout<<"  maximum error="<<max_error<<endl;
    delete[] ar1; delete[] R; delete[] R2; delete[] ERROR;
}
It gives me these answers:
===================================BESME Allah=============================

==========================regular algorithm 1/integer=============================
  101
==========================your suggestion algorithm (ASM):1/integer=============================
  280
  maximum error=1.15959e-010
========================newton raphson division algorithm time==================
  2146
  maximum error=4.12998e-006
===========================goldschmidt division algorithm==========================
  2343
  maximum error=0.00328947
Press any key to continue . . .
As you can see, the best algorithm so far is the ASM algorithm (my name for it), but I have a problem with it: I can't use any pointers!
I wrote the same algorithm without the pointer, but it goes wrong (and has no logic by which it could give the correct answer):
void test3(){
    float x;
    float inv;
    cout<<endl;
    float main_x;
    while(cin>>x){
        main_x = x;
        // Reinterpret the BITS of x as an integer (type punning)
        unsigned int *y = (unsigned int*)&x;
        *y = 0x7EF39252 - *y;
        inv = x;
        for(int j = 0; j < 3; j++){
            inv = inv * (2 - inv * main_x);  // Newton-Raphson refinement
        }
        cout<<"1/x="<<(1.0/main_x)<<" our inv="<<inv<<" error="<<(1.0/main_x)-inv<<endl;
    }
}
void test4(){
    float x;
    float inv;
    cout<<endl;
    float main_x;
    while(cin>>x){
        main_x = x;
        // This converts the VALUE of x to an integer, which is not
        // the same operation as reinterpreting its bits
        x = (0x7EF39252 - (unsigned int)x);
        inv = x;
        cout<<"1/x="<<(1.0/main_x)<<" our inv="<<inv<<" error="<<(1.0/main_x)-inv<<endl;
    }
}
test3 works correctly but test4 does not! Why? How does it work? Can I use a trick to use this procedure without using any pointers? (I can use only for, if, while, and some simple commands; a pointer implementation on the FPGA will be hard.)
Thanks in advance;
modified 2-Jul-13 8:50am.
|
"test4" is doing a value-converting cast, while the algorithm is meant to be used with a reinterpret_cast: take the bit-pattern of the float, and use it as though it was an integer. Depending on the available commands, points may not be necessary to implement that.
Perhaps if you tell me how this FPGA will work I can work out some way to implement that algorithm on it.
I'm not sure that Goldschmidt division is right - well actually the implementation is probably right and I've seen it before, but what it's doing (calculating the fixedpoint inverse of a fixedpoint input, I think?) doesn't match its usage there. That would also explain its super high error.
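For what it's worth, a union is one way to express that reinterpretation without pointer dereferences (a sketch; whether your C-to-FPGA flow accepts unions is an assumption on my part):

// Reinterpret the float's bit pattern without explicit pointers.
// (Strictly, reading the inactive member is implementation-defined in
// C++, but it is valid C and widely supported.)
union FloatBits { float f; unsigned int u; };

float reciprocalEstimate(float x)
{
    union FloatBits fb;
    fb.f = x;                    // store the float, read its bits back
    fb.u = 0x7EF39252u - fb.u;   // the magic-constant initial estimate
    return fb.f;                 // then refine with Newton-Raphson steps
}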
|
Actually, I don't know any more about its implementation; my project manager tells me what is wanted and I must produce the desired algorithm. Also, I haven't studied anything about FPGAs and hardware implementations.
What I do know is that he wants to speed up his program by writing it in C, for example a 6x6 matrix inversion program in which there are some 1/d operations where d is a float (we can convert it to an integer by multiplying it by 2^n). He wrote his code in LabVIEW and wants to convert it to C; then it will be implemented on an FPGA.
|
OK, now I don't understand. So you're making code that will be implemented on the FPGA itself, in the fabric, not as a program that runs on a processor you're building out of the FPGA? Then why not use the special tricks that work well in hardware?
|
I was using code similar to the following to calculate the sum from offset (where offset >= 0) to length:
int offset = 2;
int length = 364;
float sum = 0.0f;
for(int i = offset; i <= length; i++)
{
    sum += (float) i;
}
However, this seemed a naïve approach, so I worked out (from the formula for the sum from 0 to a number) that I could use the following:
where
f=offset
n=length
sum = ((1+n) * (n/2.0)) - ((1+f) * (f/2.0) - f)
which seems to give the correct answer.
Out of curiosity what algorithm would normally be used for this kind of thing?
|
Your link to Wolfram Alpha is pointless.
Where is the answer you found?
|
Wolfram confirmed that the formula in my question above is the one to use.
To work out the formula, start with the formula for the sum of i from i = 0 to n, which is:
(n/2.0) * (n+1)
Then subtract the sum up to the offset using the same formula:
((n/2.0) * (n+1)) - ((f/2.0) * (f+1))
And finally, since the term i = f itself should be included in the sum, add the offset back (subtract it from the part being removed):
((n/2.0) * (n+1)) - (((f/2.0) * (f+1)) - f)
So the final formula is:
((n/2.0) * (n+1)) - (((f/2.0) * (f+1)) - f)
Incidentally, the same approach applies for i^2, i^3, i^4, etc.; only the formula changes.
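In code, the closed form looks like this (a small sketch; the function name is mine). For offset 2 and length 364 it returns 66429, matching the loop in the original post:

// Sum of i for i = f .. n inclusive, via Gauss's formula used twice.
long long rangeSum(long long f, long long n)
{
    return n * (n + 1) / 2 - (f * (f + 1) / 2 - f);
}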
modified 18-Jun-13 19:22pm.
|
But does it have a name, this algorithm?
|
Yes, Partial sums or Finite series.
|
C'mon, that's just the formula found by Gauss for calculating the sum from 1 to n. OK, in your case you have to subtract the sum from 1 to the offset from that sum (and make sure that you include/exclude the offset). But that's a simple thing.
|
Oh. I was thinking it looked more like one of Fermat's theorems.
|
All the way at the bottom of wiki:interpolation_search[^] there's this text:
Each iteration of the above code requires between five and six comparisons (the extra is due to the repetitions needed to distinguish the three states of < > and = via binary comparisons in the absence of a three-way comparison) plus some messy arithmetic, while the binary search algorithm can be written with one comparison per iteration and uses only trivial integer arithmetic. It would thereby search an array of a million elements with no more than twenty comparisons (involving accesses to slow memory where the array elements are stored); to beat that the interpolation search as written above would be allowed no more than three iterations. (emphasis mine)
Is that even true? If so, why?
Because that makes exactly no sense to me. It appears to imply that 1) accessing the same array element twice means reloading it, or 2) reloading the same element takes as much time as loading it the first time.
Neither of which bears any resemblance to reality.
So, what's going on here? Is wikipedia just wrong? Am I missing something?
|
It seems like the interpolation search does more work per iteration, so you could do more binary search iterations in the same time.
The extra work is done in the calculation of mid:
mid = low + ((toFind - sortedArray[low]) * (high - low)) /
            (sortedArray[high] - sortedArray[low]);
while the binary search is leaner: http://en.wikipedia.org/wiki/Binary_search[^]
It looks like the calculation of mid requires on the order of ten extra instructions per iteration, and one of these is a division, which is relatively slow. Using the same array element a second time won't take twice the time, because the value will be in the cache after the first access.
One factor that may slow down the binary search is the recursive call at each iteration, although it could be written iteratively.
So the Wikipedia claim could be correct.
But the number of iterations needed for the interpolation search is dependent on how uniform the distribution of numbers is. For a near-uniform distribution, you'd only need one iteration of interpolation search.
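For comparison, here is a sketch of the lean loop being referred to (my code, not taken from the Wikipedia page): an iterative binary search that performs a single comparison per iteration by deferring the equality test to the end.

#include <cstddef>

// Returns the index of toFind in the sorted array a of length n, or -1.
// One comparison per iteration; equality is checked once, after the loop.
int binarySearch(const int* a, std::size_t n, int toFind)
{
    std::size_t low = 0, high = n;  // search the half-open range [low, high)
    while (low < high)
    {
        std::size_t mid = low + (high - low) / 2;  // overflow-safe midpoint
        if (a[mid] < toFind)
            low = mid + 1;
        else
            high = mid;
    }
    return (low < n && a[low] == toFind) ? (int)low : -1;
}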
|