merged all the latest changes from 2.4 to trunk

Vadim Pisarevsky
2012-04-13 21:50:59 +00:00
parent 020f9a6047
commit 2fd1e2ea57
416 changed files with 12852 additions and 6070 deletions


@@ -33,7 +33,7 @@ where:
* :math:`(u, v)` are the coordinates of the projection point in pixels
* :math:`A` is a camera matrix, or a matrix of intrinsic parameters
* :math:`(cx, cy)` is a principal point that is usually at the image center
* :math:`fx, fy` are the focal lengths expressed in pixel-related units
* :math:`fx, fy` are the focal lengths expressed in pixel units.
Thus, if an image from the camera is
@@ -158,7 +158,7 @@ Finds the camera intrinsic and extrinsic parameters from several views of a cali
:param term_crit: same as ``criteria``.
The function estimates the intrinsic camera
parameters and extrinsic parameters for each of the views. The algorithm is based on [Zhang2000] and [BoughuetMCT]. The coordinates of 3D object points and their corresponding 2D projections
parameters and extrinsic parameters for each of the views. The algorithm is based on [Zhang2000]_ and [BouguetMCT]_. The coordinates of 3D object points and their corresponding 2D projections
in each view must be specified. That may be achieved by using an
object with a known geometry and easily detectable feature points.
Such an object is called a calibration rig or calibration pattern,
@@ -616,7 +616,7 @@ Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
:param flags: Method for solving a PnP problem (see :ocv:func:`solvePnP` ).
The function estimates an object pose given a set of object points, their corresponding image projections, as well as the camera matrix and the distortion coefficients. This function finds such a pose that minimizes reprojection error, that is, the sum of squared distances between the observed projections ``imagePoints`` and the projected (using
:ocv:func:`projectPoints` ) ``objectPoints``. The use of RANSAC makes the function resistant to outliers.
:ocv:func:`projectPoints` ) ``objectPoints``. The use of RANSAC makes the function resistant to outliers. The function is parallelized with the TBB library.
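A minimal sketch of a typical call (the intrinsics, the zero distortion vector, and the RANSAC thresholds below are illustrative placeholders)::

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    // Recover rvec/tvec from 3D-2D correspondences while rejecting outliers.
    void estimatePoseRansac(const std::vector<cv::Point3f>& objectPoints,
                            const std::vector<cv::Point2f>& imagePoints,
                            cv::Mat& rvec, cv::Mat& tvec)
    {
        // Placeholder intrinsics: fx = fy = 800, principal point at (320, 240).
        cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
            800,   0, 320,
              0, 800, 240,
              0,   0,   1);
        cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F); // assume no distortion

        std::vector<int> inliers;
        cv::solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                           rvec, tvec,
                           false, // useExtrinsicGuess
                           100,   // iterationsCount
                           8.0,   // reprojectionError, in pixels
                           100,   // minInliersCount
                           inliers);
        // inliers now lists the correspondences consistent with the returned pose.
    }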
@@ -1127,8 +1127,7 @@ Computes disparity using the BM algorithm for a rectified stereo pair.
:param state: The pre-initialized ``CvStereoBMState`` structure in the case of the old API.
The method executes the BM algorithm on a rectified stereo pair. See the ``stereo_match.cpp`` OpenCV sample on how to prepare images and call the method. Note that the method is not constant, thus you should not use the same ``StereoBM`` instance from within different threads simultaneously.
The method executes the BM algorithm on a rectified stereo pair. See the ``stereo_match.cpp`` OpenCV sample on how to prepare images and call the method. Note that the method is not constant, thus you should not use the same ``StereoBM`` instance from within different threads simultaneously. The function is parallelized with the TBB library.
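A minimal sketch of a typical call, assuming an already rectified grayscale pair on disk and illustrative block-matching parameters (see ``stereo_match.cpp`` for the full workflow)::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/calib3d/calib3d.hpp>

    int main()
    {
        // The input pair is assumed to be rectified and row-aligned already.
        cv::Mat left  = cv::imread("left_rectified.png",  0);  // load as grayscale
        cv::Mat right = cv::imread("right_rectified.png", 0);

        // 64 disparity levels and a 21x21 SAD window -- illustrative values only.
        cv::StereoBM bm(cv::StereoBM::BASIC_PRESET, 64, 21);

        cv::Mat disp, disp8;
        bm(left, right, disp, CV_16S);                    // fixed-point disparity, scaled by 16
        disp.convertTo(disp8, CV_8U, 255.0 / (16 * 64));  // rescale for visualization
        cv::imwrite("disparity.png", disp8);
        return 0;
    }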
@@ -1165,13 +1164,13 @@ Class for computing stereo correspondence using the semi-global block matching a
...
};
The class implements the modified H. Hirschmuller algorithm HH08 that differs from the original one as follows:
The class implements the modified H. Hirschmuller algorithm [HH08]_ that differs from the original one as follows:
* By default, the algorithm is single-pass, which means that you consider only 5 directions instead of 8. Set ``fullDP=true`` to run the full variant of the algorithm but beware that it may consume a lot of memory.
* The algorithm matches blocks, not individual pixels. Though, setting ``SADWindowSize=1`` reduces the blocks to single pixels.
* Mutual information cost function is not implemented. Instead, a simpler Birchfield-Tomasi sub-pixel metric from BT96 is used. Though, the color images are supported as well.
* Mutual information cost function is not implemented. Instead, a simpler Birchfield-Tomasi sub-pixel metric from [BT98]_ is used. Though, the color images are supported as well.
* Some pre- and post- processing steps from K. Konolige algorithm :ocv:funcx:`StereoBM::operator()` are included, for example: pre-filtering (``CV_STEREO_BM_XSOBEL`` type) and post-filtering (uniqueness check, quadratic interpolation and speckle filtering).
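A minimal usage sketch for the class (the file names, disparity range, and penalty values are illustrative placeholders)::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/calib3d/calib3d.hpp>

    int main()
    {
        // A rectified pair; color input is accepted as well.
        cv::Mat left  = cv::imread("left_rectified.png");
        cv::Mat right = cv::imread("right_rectified.png");

        // minDisparity = 0, 64 disparity levels, 3x3 blocks -- illustrative values.
        cv::StereoSGBM sgbm(0, 64, 3);
        sgbm.P1     = 8  * left.channels() * 3 * 3;  // penalty for +/-1 disparity changes
        sgbm.P2     = 32 * left.channels() * 3 * 3;  // penalty for larger disparity jumps
        sgbm.fullDP = false;                         // single-pass variant (5 directions)

        cv::Mat disp, disp8;
        sgbm(left, right, disp);                     // CV_16S, disparity values scaled by 16
        disp.convertTo(disp8, CV_8U, 255.0 / (16 * 64));
        cv::imwrite("disparity.png", disp8);
        return 0;
    }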
@@ -1556,13 +1555,6 @@ The function computes the rectification transformations without knowing intrinsi
While the algorithm does not need to know the intrinsic parameters of the cameras, it heavily depends on the epipolar geometry. Therefore, if the camera lenses have a significant distortion, it would be better to correct it before computing the fundamental matrix and calling this function. For example, distortion coefficients can be estimated for each head of stereo camera separately by using :ocv:func:`calibrateCamera` . Then, the images can be corrected using :ocv:func:`undistort` , or just the point coordinates can be corrected with :ocv:func:`undistortPoints` .
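A minimal sketch of that point-based workflow, assuming matched pixel coordinates and per-camera intrinsics/distortion estimated beforehand (all names and thresholds below are illustrative)::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    // points1/points2 are matched pixel coordinates in the two views;
    // the intrinsics and distortion come from a prior calibrateCamera run.
    void rectifyUncalibrated(const std::vector<cv::Point2f>& points1,
                             const std::vector<cv::Point2f>& points2,
                             const cv::Mat& cameraMatrix1, const cv::Mat& distCoeffs1,
                             const cv::Mat& cameraMatrix2, const cv::Mat& distCoeffs2,
                             cv::Size imageSize, cv::Mat& H1, cv::Mat& H2)
    {
        // Undo lens distortion first; passing P = cameraMatrix keeps the
        // corrected coordinates in pixels rather than normalized units.
        std::vector<cv::Point2f> u1, u2;
        cv::undistortPoints(points1, u1, cameraMatrix1, distCoeffs1,
                            cv::noArray(), cameraMatrix1);
        cv::undistortPoints(points2, u2, cameraMatrix2, distCoeffs2,
                            cv::noArray(), cameraMatrix2);

        // Estimate the epipolar geometry, then the rectifying homographies.
        cv::Mat F = cv::findFundamentalMat(u1, u2, cv::FM_RANSAC, 3.0, 0.99);
        cv::stereoRectifyUncalibrated(u1, u2, F, imageSize, H1, H2, 5.0);
    }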
.. [BouguetMCT] J.Y.Bouguet. MATLAB calibration tool. http://www.vision.caltech.edu/bouguetj/calib_doc/
.. [Hartley99] Hartley, R.I., Theory and Practice of Projective Rectification. IJCV 35 2, pp 115-127 (1999)
.. [Zhang2000] Z. Zhang. A Flexible New Technique for Camera Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000.
triangulatePoints
-----------------
@@ -1589,3 +1581,14 @@ The function reconstructs 3-dimensional points (in homogeneous coordinates) by u
.. seealso::
:ocv:func:`reprojectImageTo3D`
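A minimal sketch of triangulation followed by conversion out of homogeneous coordinates (the projection matrices and the 2xN point arrays are assumed to come from earlier calibration and matching steps)::

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>

    // projMatr1/projMatr2 are the 3x4 projection matrices of the two cameras;
    // projPoints1/projPoints2 are 2xN arrays of matching image points.
    cv::Mat triangulate(const cv::Mat& projMatr1, const cv::Mat& projMatr2,
                        const cv::Mat& projPoints1, const cv::Mat& projPoints2)
    {
        cv::Mat points4D;                           // 4xN homogeneous coordinates
        cv::triangulatePoints(projMatr1, projMatr2,
                              projPoints1, projPoints2, points4D);

        cv::Mat points3D;                           // Nx1, 3 channels: Euclidean (x, y, z)
        cv::convertPointsFromHomogeneous(points4D.t(), points3D);
        return points3D;
    }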
.. [BT98] Birchfield, S. and Tomasi, C. A pixel dissimilarity measure that is insensitive to image sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1998.
.. [BouguetMCT] J.Y.Bouguet. MATLAB calibration tool. http://www.vision.caltech.edu/bouguetj/calib_doc/
.. [Hartley99] Hartley, R.I., Theory and Practice of Projective Rectification. IJCV 35 2, pp 115-127 (1999)
.. [HH08] Hirschmuller, H. Stereo Processing by Semiglobal Matching and Mutual Information, PAMI(30), No. 2, February 2008, pp. 328-341.
.. [Zhang2000] Z. Zhang. A Flexible New Technique for Camera Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000.


@@ -29,10 +29,18 @@ epnp::epnp(const cv::Mat& cameraMatrix, const cv::Mat& opoints, const cv::Mat& i
alphas.resize(4 * number_of_correspondences);
pcs.resize(3 * number_of_correspondences);
max_nr = 0;
A1 = NULL;
A2 = NULL;
}
epnp::~epnp()
{
if (A1)
delete[] A1;
if (A2)
delete[] A2;
}
void epnp::choose_control_points(void)
@@ -513,9 +521,6 @@ void epnp::gauss_newton(const CvMat * L_6x10, const CvMat * Rho,
void epnp::qr_solve(CvMat * A, CvMat * b, CvMat * X)
{
static int max_nr = 0;
static double * A1, * A2;
const int nr = A->rows;
const int nc = A->cols;


@@ -14,24 +14,24 @@ class epnp {
void compute_pose(cv::Mat& R, cv::Mat& t);
private:
template <typename T>
void init_camera_parameters(const cv::Mat& cameraMatrix)
{
uc = cameraMatrix.at<T> (0, 2);
vc = cameraMatrix.at<T> (1, 2);
fu = cameraMatrix.at<T> (0, 0);
fv = cameraMatrix.at<T> (1, 1);
void init_camera_parameters(const cv::Mat& cameraMatrix)
{
uc = cameraMatrix.at<T> (0, 2);
vc = cameraMatrix.at<T> (1, 2);
fu = cameraMatrix.at<T> (0, 0);
fv = cameraMatrix.at<T> (1, 1);
}
template <typename OpointType, typename IpointType>
void init_points(const cv::Mat& opoints, const cv::Mat& ipoints)
{
for(int i = 0; i < number_of_correspondences; i++)
{
for(int i = 0; i < number_of_correspondences; i++)
{
pws[3 * i ] = opoints.at<OpointType>(0,i).x;
pws[3 * i + 1] = opoints.at<OpointType>(0,i).y;
pws[3 * i + 2] = opoints.at<OpointType>(0,i).z;
us[2 * i ] = ipoints.at<IpointType>(0,i).x*fu + uc;
us[2 * i + 1] = ipoints.at<IpointType>(0,i).y*fv + vc;
us[2 * i + 1] = ipoints.at<IpointType>(0,i).y*fv + vc;
}
}
double reprojection_error(const double R[3][3], const double t[3]);
@@ -74,6 +74,8 @@ class epnp {
double cws[4][3], ccs[4][3];
double cws_determinant;
int max_nr;
double * A1, * A2;
};
#endif


@@ -333,6 +333,7 @@ void CV_CameraCalibrationTest::run( int start_from )
goodTransVects = 0;
goodRotMatrs = 0;
int progress = 0;
int values_read = -1;
sprintf( filepath, "%scameracalibration/", ts->get_data_path().c_str() );
sprintf( filename, "%sdatafiles.txt", filepath );
@@ -344,11 +345,13 @@ void CV_CameraCalibrationTest::run( int start_from )
goto _exit_;
}
fscanf(datafile,"%d",&numTests);
values_read = fscanf(datafile,"%d",&numTests);
CV_Assert(values_read == 1);
for( currTest = start_from; currTest < numTests; currTest++ )
{
fscanf(datafile,"%s",i_dat_file);
values_read = fscanf(datafile,"%s",i_dat_file);
CV_Assert(values_read == 1);
sprintf(filename, "%s%s", filepath, i_dat_file);
file = fopen(filename,"r");
@@ -366,7 +369,8 @@ void CV_CameraCalibrationTest::run( int start_from )
continue; // if there is more than one test, just skip the test
}
fscanf(file,"%d %d\n",&(imageSize.width),&(imageSize.height));
values_read = fscanf(file,"%d %d\n",&(imageSize.width),&(imageSize.height));
CV_Assert(values_read == 2);
if( imageSize.width <= 0 || imageSize.height <= 0 )
{
ts->printf( cvtest::TS::LOG, "Image size in test file is incorrect\n" );
@@ -375,7 +379,8 @@ void CV_CameraCalibrationTest::run( int start_from )
}
/* Read etalon size */
fscanf(file,"%d %d\n",&(etalonSize.width),&(etalonSize.height));
values_read = fscanf(file,"%d %d\n",&(etalonSize.width),&(etalonSize.height));
CV_Assert(values_read == 2);
if( etalonSize.width <= 0 || etalonSize.height <= 0 )
{
ts->printf( cvtest::TS::LOG, "Pattern size in test file is incorrect\n" );
@@ -386,7 +391,8 @@ void CV_CameraCalibrationTest::run( int start_from )
numPoints = etalonSize.width * etalonSize.height;
/* Read number of images */
fscanf(file,"%d\n",&numImages);
values_read = fscanf(file,"%d\n",&numImages);
CV_Assert(values_read == 1);
if( numImages <=0 )
{
ts->printf( cvtest::TS::LOG, "Number of images in test file is incorrect\n");
@@ -427,7 +433,8 @@ void CV_CameraCalibrationTest::run( int start_from )
for( currPoint = 0; currPoint < numPoints; currPoint++ )
{
double x,y,z;
fscanf(file,"%lf %lf %lf\n",&x,&y,&z);
values_read = fscanf(file,"%lf %lf %lf\n",&x,&y,&z);
CV_Assert(values_read == 3);
(objectPoints+i)->x = x;
(objectPoints+i)->y = y;
@@ -443,7 +450,8 @@ void CV_CameraCalibrationTest::run( int start_from )
for( currPoint = 0; currPoint < numPoints; currPoint++ )
{
double x,y;
fscanf(file,"%lf %lf\n",&x,&y);
values_read = fscanf(file,"%lf %lf\n",&x,&y);
CV_Assert(values_read == 2);
(imagePoints+i)->x = x;
(imagePoints+i)->y = y;
@@ -455,32 +463,40 @@ void CV_CameraCalibrationTest::run( int start_from )
/* Focal lengths */
double goodFcx,goodFcy;
fscanf(file,"%lf %lf",&goodFcx,&goodFcy);
values_read = fscanf(file,"%lf %lf",&goodFcx,&goodFcy);
CV_Assert(values_read == 2);
/* Principal points */
double goodCx,goodCy;
fscanf(file,"%lf %lf",&goodCx,&goodCy);
values_read = fscanf(file,"%lf %lf",&goodCx,&goodCy);
CV_Assert(values_read == 2);
/* Read distortion */
fscanf(file,"%lf",goodDistortion+0);
fscanf(file,"%lf",goodDistortion+1);
fscanf(file,"%lf",goodDistortion+2);
fscanf(file,"%lf",goodDistortion+3);
values_read = fscanf(file,"%lf",goodDistortion+0); CV_Assert(values_read == 1);
values_read = fscanf(file,"%lf",goodDistortion+1); CV_Assert(values_read == 1);
values_read = fscanf(file,"%lf",goodDistortion+2); CV_Assert(values_read == 1);
values_read = fscanf(file,"%lf",goodDistortion+3); CV_Assert(values_read == 1);
/* Read good Rot matrixes */
for( currImage = 0; currImage < numImages; currImage++ )
{
for( i = 0; i < 3; i++ )
for( j = 0; j < 3; j++ )
fscanf(file, "%lf", goodRotMatrs + currImage * 9 + j * 3 + i);
{
values_read = fscanf(file, "%lf", goodRotMatrs + currImage * 9 + j * 3 + i);
CV_Assert(values_read == 1);
}
}
/* Read good Trans vectors */
for( currImage = 0; currImage < numImages; currImage++ )
{
for( i = 0; i < 3; i++ )
fscanf(file, "%lf", goodTransVects + currImage * 3 + i);
{
values_read = fscanf(file, "%lf", goodTransVects + currImage * 3 + i);
CV_Assert(values_read == 1);
}
}
calibFlags = 0
@@ -1508,7 +1524,7 @@ void CV_StereoCalibrationTest::run( int )
const float minDisparity = 0.1f;
const float maxDisparity = 600.0f;
const int pointsCount = 500;
const float requiredAccuracy = 1e-3;
const float requiredAccuracy = 1e-3f;
RNG& rng = ts->get_rng();
Mat projectedPoints_1(2, pointsCount, CV_32FC1);
@@ -1544,7 +1560,7 @@ void CV_StereoCalibrationTest::run( int )
}
//check correctMatches
const float constraintAccuracy = 1e-5;
const float constraintAccuracy = 1e-5f;
Mat newPoints1, newPoints2;
Mat points1 = projectedPoints_1.t();
points1 = points1.reshape(2, 1);
@@ -1767,7 +1783,7 @@ void CV_StereoCalibrationTest_C::correct( const Mat& F,
CvMat _F = F, _points1 = points1, _points2 = points2;
newPoints1.create(1, points1.cols, points1.type());
newPoints2.create(1, points2.cols, points2.type());
CvMat _newPoints1 = newPoints1, _newPoints2 = _newPoints2;
CvMat _newPoints1 = newPoints1, _newPoints2 = newPoints2;
cvCorrectMatches(&_F, &_points1, &_points2, &_newPoints1, &_newPoints2);
}