Code from "Learning OpenCV" provides all the matrix information needed to calculate the 3D position of a point captured by both cameras. I was planning to use cvUndistortPoints on both points, calculate the disparity, and then feed one point's coordinates plus the disparity to cvPerspectiveTransform to obtain the 3D position.
I'm bumping into a problem while trying cvUndistortPoints: despite all parameters being OK (or so I hope), the points returned are NaN or QNaN.
I'm creating the matrix of points (well, one point only, as that's all I'm interested in) like this:
typedef struct elem_ {
    float f1;
    float f2;
} elem;

float buf[2];  /* cvMat() only fills in a header, so it needs a data pointer (it defaults to NULL) */
CvMat myMat = cvMat(1, 1, CV_32FC2, buf);
CV_MAT_ELEM(myMat, elem, 0, 0).f1 = 100.0f;
CV_MAT_ELEM(myMat, elem, 0, 0).f2 = 120.0f;
So I hope no error here.
then
cvUndistortPoints(&myMat, &myMat, &_M1, &_D1, &_R1, &_M1);
float x = CV_MAT_ELEM(myMat, elem, 0, 0).f1;
All matrices are from the original example, which works fine; I just made them member variables so I could access them in my method. They were calculated as in here:
http://www.codeproject.com/Questions/75461/OpenCV-how-to-use-remapping-parameters-disparity-t.aspx
I did ask on the OpenCV forum, but I never got any replies.
Any ideas?