Merge pull request #3042 from edgarriba:master

Vadim Pisarevsky 2014-08-20 13:18:37 +00:00
commit b2cd954f24
39 changed files with 36481 additions and 278 deletions


@ -325,6 +325,7 @@ extlinks = {
'imgproc_geometric' : ('http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#%s', None ),
'miscellaneous_transformations' : ('http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#%s', None),
'user_interface' : ('http://docs.opencv.org/modules/highgui/doc/user_interface.html#%s', None),
'video' : ('http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html#%s', None),
# 'opencv_group' : ('http://answers.opencv.org/%s', None),
'opencv_qa' : ('http://answers.opencv.org/%s', None),
@ -402,6 +403,7 @@ extlinks = {
'contour_area' : ('http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=contourarea#contourarea%s', None),
'arc_length' : ('http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=arclength#arclength%s', None),
'point_polygon_test' : ('http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=pointpolygontest#pointpolygontest%s', None),
'feature_detection_and_description' : ('http://docs.opencv.org/modules/features2d/doc/feature_detection_and_description.html#%s', None),
'feature_detector' : ( 'http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html?highlight=featuredetector#FeatureDetector%s', None),
'feature_detector_detect' : ('http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html?highlight=detect#featuredetector-detect%s', None ),
'surf_feature_detector' : ('http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html?highlight=surffeaturedetector#surffeaturedetector%s', None ),

Binary file not shown (new image, 31 KiB).

Binary file not shown (new image, 106 KiB).


@ -0,0 +1,750 @@
.. _realTimePoseEstimation:
Real Time pose estimation of a textured object
**********************************************
Nowadays, augmented reality is one of the top research topics in the computer vision and robotics fields. The most elemental problem in augmented reality is the estimation of the camera pose with respect to an object: in computer vision, in order to do some 3D rendering later on, or, in robotics, to obtain the object pose in order to grasp it and do some manipulation. However, this is not a trivial problem to solve, since the most common issue in image processing is the computational cost of applying many algorithms or mathematical operations to solve a problem which is basic and immediate for humans.
Goal
====
This tutorial explains how to build a real-time application to estimate the camera pose in order to track a textured object with six degrees of freedom, given a 2D image and its 3D textured model.
The application has the following parts:
.. container:: enumeratevisibleitemswithsquare
+ Read 3D textured object model and object mesh.
+ Take input from Camera or Video.
+ Extract ORB features and descriptors from the scene.
+ Match scene descriptors with model descriptors using Flann matcher.
+ Pose estimation using PnP + Ransac.
+ Linear Kalman Filter for bad poses rejection.
Theory
======
In computer vision, estimating the camera pose from *n* 3D-to-2D point correspondences is a fundamental and well understood problem. The most general version of the problem requires estimating the six degrees of freedom of the pose and five calibration parameters: focal length, principal point, aspect ratio and skew. It can be solved with a minimum of 6 correspondences, using the well known Direct Linear Transform (DLT) algorithm. There are, though, several simplifications to the problem which translate into an extensive list of different algorithms that improve the accuracy of the DLT.
The most common simplification is to assume known calibration parameters, which leads to the so-called Perspective-*n*-Point problem:
.. image:: images/pnp.jpg
:alt: Perspective-n-Point problem scheme
:align: center
**Problem Formulation:** Given a set of correspondences between 3D points :math:`p_i` expressed in a world reference frame, and their 2D projections :math:`u_i` onto the image, we seek to retrieve the pose (:math:`R` and :math:`t`) of the camera w.r.t. the world and the focal length :math:`f`.
OpenCV provides four different approaches to solve the Perspective-*n*-Point problem which return :math:`R` and :math:`t`. Then, using the following formula it's possible to project 3D points into the image plane:
.. math::
s\ \left [ \begin{matrix} u \\ v \\ 1 \end{matrix} \right ] = \left [ \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right ] \left [ \begin{matrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{matrix} \right ] \left [ \begin{matrix} X \\ Y \\ Z\\ 1 \end{matrix} \right ]
The complete documentation on how to work with these equations can be found in :calib3d:`Camera Calibration and 3D Reconstruction <>`.
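As a quick illustration of this formula (not part of the tutorial code), ``cv::projectPoints`` performs exactly this projection given the intrinsics and a pose; the intrinsic values below are made up for the example:
.. code-block:: cpp
// Sketch: project one 3D world point into the image using K and the pose (rvec, tvec)
std::vector<cv::Point3f> object_points(1, cv::Point3f(1.f, 2.f, 10.f));
cv::Mat camera_matrix = (cv::Mat_<double>(3,3) << 800, 0, 320, 0, 800, 240, 0, 0, 1); // illustrative intrinsics
cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64FC1); // rotation as a Rodrigues vector
cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64FC1); // translation vector
std::vector<cv::Point2f> image_points;
cv::projectPoints(object_points, rvec, tvec, camera_matrix, cv::noArray(), image_points); // no distortion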
Source code
===========
You can find the source code of this tutorial in the :file:`samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/` folder of the OpenCV source library.
The tutorial consists of two main programs:
1. **Model registration**
This application is intended for users who do not have a 3D textured model of the object to be detected. You can use this program to create your own textured 3D model. This program only works for planar objects; if you want to model an object with a complex shape you should use more sophisticated software to create it.
The application needs an input image of the object to be registered and its 3D mesh. We also have to provide the intrinsic parameters of the camera with which the input image was taken. All the files need to be specified using either an absolute path or a path relative to your application's working directory. If no files are specified, the program will try to open the provided default parameters.
The application starts up by extracting the ORB features and descriptors from the input image, and then uses the mesh along with the `Möller–Trumbore intersection algorithm <http://en.wikipedia.org/wiki/M%C3%B6ller%E2%80%93Trumbore_intersection_algorithm/>`_ to compute the 3D coordinates of the found features. Finally, the 3D points and the descriptors are stored in different lists in a YAML-format file, in which each row is a different point. The technical background on how to store the files can be found in the :ref:`fileInputOutputXMLYAML` tutorial.
.. image:: images/registration.png
:alt: Model registration
:align: center
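For reference, the registration program can store these lists with ``cv::FileStorage``; a minimal sketch follows (the variable names here are illustrative, only the keys ``points_3d`` and ``descriptors`` match the file read back later):
.. code-block:: cpp
// Sketch: write the registered 3D points and their ORB descriptors to a YAML file
cv::Mat points3d_mat; // Nx3, CV_32F: one registered 3D point per row (assumed already filled)
cv::Mat descriptors; // NxD, CV_8U: the ORB descriptor of each point (assumed already filled)
cv::FileStorage storage("cookies_ORB.yml", cv::FileStorage::WRITE);
storage << "points_3d" << points3d_mat;
storage << "descriptors" << descriptors;
storage.release();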
2. **Model detection**
The aim of this application is to estimate the object pose in real time, given its 3D textured model.
The application starts up by loading the 3D textured model in YAML file format, with the same structure explained in the model registration program. From the scene, the ORB features and descriptors are detected and extracted. Then, the :flann_based_matcher:`FlannBasedMatcher<>` with :flann:`LshIndexParams <flann-index-t-index>` is used to match the scene descriptors against the model descriptors. Using the found matches along with the :calib3d:`solvePnPRansac <solvepnpransac>` function, the :math:`R` and :math:`t` of the camera are computed. Finally, a :video:`KalmanFilter<kalmanfilter>` is applied in order to reject bad poses.
If you compiled OpenCV with the samples, you can find the application in :file:`opencv/build/bin/cpp-tutorial-pnp_detection`. You can then run it and change some parameters:
.. code-block:: cpp
This program shows how to detect an object given its 3D textured model. You can choose to use a recorded video or the webcam.
Usage:
./cpp-tutorial-pnp_detection -help
Keys:
'esc' - to quit.
--------------------------------------------------------------------------
Usage: cpp-tutorial-pnp_detection [params]
-c, --confidence (value:0.95)
RANSAC confidence
-e, --error (value:2.0)
RANSAC reprojection error
-f, --fast (value:true)
use of robust fast match
-h, --help (value:true)
print this message
--in, --inliers (value:30)
minimum inliers for Kalman update
--it, --iterations (value:500)
RANSAC maximum iterations count
-k, --keypoints (value:2000)
number of keypoints to detect
--mesh
path to ply mesh
--method, --pnp (value:0)
PnP method: (0) ITERATIVE - (1) EPNP - (2) P3P - (3) DLS
--model
path to yml model
-r, --ratio (value:0.7)
threshold for ratio test
-v, --video
path to recorded video
For example, you can run the application with a different PnP method:
.. code-block:: cpp
./cpp-tutorial-pnp_detection --method=2
Explanation
===========
Here the code of the real-time application is explained in detail:
1. **Read 3D textured object model and object mesh.**
In order to load the textured model I implemented the *class* **Model**, which has the function *load()* that opens a YAML file and takes the stored 3D points with their corresponding descriptors. You can find an example of a 3D textured model in :file:`samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/Data/cookies_ORB.yml`.
.. code-block:: cpp
/** Load a YAML file using OpenCV **/
void Model::load(const std::string path)
{
cv::Mat points3d_mat;
cv::FileStorage storage(path, cv::FileStorage::READ);
storage["points_3d"] >> points3d_mat;
storage["descriptors"] >> descriptors_;
points3d_mat.copyTo(list_points3d_in_);
storage.release();
}
In the main program the model is loaded as follows:
.. code-block:: cpp
Model model; // instantiate Model object
model.load(yml_read_path); // load a 3D textured object model
In order to read the model mesh I implemented a *class* **Mesh**, which has a function *load()* that opens a :math:`*`.ply file and stores the 3D points of the object as well as the composing triangles. You can find an example of a model mesh in :file:`samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/Data/box.ply`.
.. code-block:: cpp
/** Load a CSV with *.ply format **/
void Mesh::load(const std::string path)
{
// Create the reader
CsvReader csvReader(path);
// Clear previous data
list_vertex_.clear();
list_triangles_.clear();
// Read from .ply file
csvReader.readPLY(list_vertex_, list_triangles_);
// Update mesh attributes
num_vertexs_ = list_vertex_.size();
num_triangles_ = list_triangles_.size();
}
In the main program the mesh is loaded as follows:
.. code-block:: cpp
Mesh mesh; // instantiate Mesh object
mesh.load(ply_read_path); // load an object mesh
You can also load a different model and mesh:
.. code-block:: cpp
./cpp-tutorial-pnp_detection --mesh=/absolute_path_to_your_mesh.ply --model=/absolute_path_to_your_model.yml
2. **Take input from Camera or Video**
In order to detect we need to capture video. This is done by loading a recorded video, passing the absolute path of its location on your machine. To test the application you can find a recorded video in :file:`samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/Data/box.mp4`.
.. code-block:: cpp
cv::VideoCapture cap; // instantiate VideoCapture
cap.open(video_read_path); // open a recorded video
if(!cap.isOpened()) // check if we succeeded
{
std::cout << "Could not open the camera device" << std::endl;
return -1;
}
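Since this step is titled *Take input from Camera or Video*, a minimal sketch of falling back to a webcam when no video path is given (the ``video_read_path`` string and device index 0 are assumptions for illustration) could be:
.. code-block:: cpp
// Sketch: open the default webcam instead of a recorded video
if(video_read_path.empty()) cap.open(0); // hypothetical fallback to device 0
else cap.open(video_read_path); // otherwise open the recorded video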
Then the algorithm is run frame by frame:
.. code-block:: cpp
cv::Mat frame, frame_vis;
while(cap.read(frame) && cv::waitKey(30) != 27) // capture frame until ESC is pressed
{
frame_vis = frame.clone(); // refresh visualisation frame
// MAIN ALGORITHM
}
You can also load a different recorded video:
.. code-block:: cpp
./cpp-tutorial-pnp_detection --video=/absolute_path_to_your_video.mp4
3. **Extract ORB features and descriptors from the scene**
The next step is to detect the scene features and extract their descriptors. For this task I implemented a *class* **RobustMatcher**, which has functions for keypoint detection and feature extraction. You can find it in :file:`samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/RobustMatcher.cpp`. In your *RobustMatcher* object you can use any of the 2D feature detectors of OpenCV. In this case I used :feature_detection_and_description:`ORB<orb>` features because they are based on :feature_detection_and_description:`FAST<fast>` to detect the keypoints and :descriptor_extractor:`BRIEF<briefdescriptorextractor>` to extract the descriptors, which means that they are fast and robust to rotations. You can find more detailed information about *ORB* in the documentation.
The following code shows how to instantiate and set the feature detector and the descriptor extractor:
.. code-block:: cpp
RobustMatcher rmatcher; // instantiate RobustMatcher
cv::FeatureDetector * detector = new cv::OrbFeatureDetector(numKeyPoints); // instantiate ORB feature detector
cv::DescriptorExtractor * extractor = new cv::OrbDescriptorExtractor(); // instantiate ORB descriptor extractor
rmatcher.setFeatureDetector(detector); // set feature detector
rmatcher.setDescriptorExtractor(extractor); // set descriptor extractor
The features and descriptors will be computed by the *RobustMatcher* inside the matching function.
4. **Match scene descriptors with model descriptors using Flann matcher**
This is the first step in our detection algorithm. The main idea is to match the scene descriptors with our model descriptors in order to know the 3D coordinates of the found features in the current scene.
Firstly, we have to set which matcher we want to use. In this case the :flann_based_matcher:`FlannBasedMatcher<>` matcher is used, which in terms of computational cost becomes faster than the :brute_force_matcher:`BruteForceMatcher<bfmatcher>` matcher as the trained collection of features grows. For the FlannBased matcher, the index created is *Multi-Probe LSH: Efficient Indexing for High-Dimensional Similarity Search*, because *ORB* descriptors are binary.
You can tune the *LSH* and search parameters to improve the matching efficiency:
.. code-block:: cpp
cv::Ptr<cv::flann::IndexParams> indexParams = cv::makePtr<cv::flann::LshIndexParams>(6, 12, 1); // instantiate LSH index parameters
cv::Ptr<cv::flann::SearchParams> searchParams = cv::makePtr<cv::flann::SearchParams>(50); // instantiate flann search parameters
cv::DescriptorMatcher * matcher = new cv::FlannBasedMatcher(indexParams, searchParams); // instantiate FlannBased matcher
rmatcher.setDescriptorMatcher(matcher); // set matcher
Secondly, we have to call the matcher by using the *robustMatch()* or *fastRobustMatch()* function. The difference between these two functions is their computational cost. The first method is slower but more robust at filtering good matches because it uses two ratio tests and a symmetry test. In contrast, the second method is faster but less robust because it only applies a single ratio test to the matches.
The following code gets the model 3D points and their descriptors and then calls the matcher in the main program:
.. code-block:: cpp
// Get the MODEL INFO
std::vector<cv::Point3f> list_points3d_model = model.get_points3d(); // list with model 3D coordinates
cv::Mat descriptors_model = model.get_descriptors(); // list with descriptors of each 3D coordinate
.. code-block:: cpp
// -- Step 1: Robust matching between model descriptors and scene descriptors
std::vector<cv::DMatch> good_matches; // to obtain the model 3D points in the scene
std::vector<cv::KeyPoint> keypoints_scene; // to obtain the 2D points of the scene
if(fast_match)
{
rmatcher.fastRobustMatch(frame, good_matches, keypoints_scene, descriptors_model);
}
else
{
rmatcher.robustMatch(frame, good_matches, keypoints_scene, descriptors_model);
}
The following code corresponds to the *robustMatch()* function, which belongs to the *RobustMatcher* class. This function uses the given image to detect the keypoints and extract the descriptors, and then matches the extracted descriptors against the given model descriptors (and vice versa) using *two nearest neighbours*. Then, a ratio test is applied to the matches of both directions in order to remove those matches whose distance ratio between the first and second best match is larger than a given threshold. Finally, a symmetry test is applied in order to remove non-symmetrical matches.
.. code-block:: cpp
void RobustMatcher::robustMatch( const cv::Mat& frame, std::vector<cv::DMatch>& good_matches,
std::vector<cv::KeyPoint>& keypoints_frame,
const std::vector<cv::KeyPoint>& keypoints_model, const cv::Mat& descriptors_model )
{
// 1a. Detection of the ORB features
this->computeKeyPoints(frame, keypoints_frame);
// 1b. Extraction of the ORB descriptors
cv::Mat descriptors_frame;
this->computeDescriptors(frame, keypoints_frame, descriptors_frame);
// 2. Match the two image descriptors
std::vector<std::vector<cv::DMatch> > matches12, matches21;
// 2a. From image 1 to image 2
matcher_->knnMatch(descriptors_frame, descriptors_model, matches12, 2); // return 2 nearest neighbours
// 2b. From image 2 to image 1
matcher_->knnMatch(descriptors_model, descriptors_frame, matches21, 2); // return 2 nearest neighbours
// 3. Remove matches for which NN ratio is > than threshold
// clean image 1 -> image 2 matches
int removed1 = ratioTest(matches12);
// clean image 2 -> image 1 matches
int removed2 = ratioTest(matches21);
// 4. Remove non-symmetrical matches
symmetryTest(matches12, matches21, good_matches);
}
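The *ratioTest()* and *symmetryTest()* helpers called above are not listed in this tutorial; a simplified sketch of what they do (the actual implementations live in the *RobustMatcher* sources) could look like:
.. code-block:: cpp
// Sketch: discard matches whose best/second-best distance ratio is above ratio_
int RobustMatcher::ratioTest(std::vector<std::vector<cv::DMatch> > &matches)
{
int removed = 0;
for (size_t i = 0; i < matches.size(); ++i)
{
if (matches[i].size() > 1 &&
matches[i][0].distance / matches[i][1].distance <= ratio_)
continue; // good match: keep it
matches[i].clear(); // otherwise discard it
removed++;
}
return removed;
}
// Sketch: keep only matches that agree in both matching directions
void RobustMatcher::symmetryTest(const std::vector<std::vector<cv::DMatch> > &matches1,
const std::vector<std::vector<cv::DMatch> > &matches2,
std::vector<cv::DMatch> &symMatches)
{
for (size_t i = 0; i < matches1.size(); ++i)
{
if (matches1[i].empty()) continue;
for (size_t j = 0; j < matches2.size(); ++j)
{
if (matches2[j].empty()) continue;
if (matches1[i][0].queryIdx == matches2[j][0].trainIdx &&
matches2[j][0].queryIdx == matches1[i][0].trainIdx)
{
symMatches.push_back(matches1[i][0]); // symmetrical match accepted
break;
}
}
}
}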
After the matches filtering, we have to extract the 2D and 3D correspondences from the found scene keypoints and our 3D model using the obtained *DMatches* vector. For more information about :basicstructures:`DMatch <dmatch>` check the documentation.
.. code-block:: cpp
// -- Step 2: Find out the 2D/3D correspondences
std::vector<cv::Point3f> list_points3d_model_match; // container for the model 3D coordinates found in the scene
std::vector<cv::Point2f> list_points2d_scene_match; // container for the model 2D coordinates found in the scene
for(unsigned int match_index = 0; match_index < good_matches.size(); ++match_index)
{
cv::Point3f point3d_model = list_points3d_model[ good_matches[match_index].trainIdx ]; // 3D point from model
cv::Point2f point2d_scene = keypoints_scene[ good_matches[match_index].queryIdx ].pt; // 2D point from the scene
list_points3d_model_match.push_back(point3d_model); // add 3D point
list_points2d_scene_match.push_back(point2d_scene); // add 2D point
}
You can also change the ratio test threshold, the number of keypoints to detect, and whether or not to use the robust matcher:
.. code-block:: cpp
./cpp-tutorial-pnp_detection --ratio=0.8 --keypoints=1000 --fast=false
5. **Pose estimation using PnP + Ransac**
Once we have the 2D and 3D correspondences, we have to apply a PnP algorithm in order to estimate the camera pose. The reason why we have to use :calib3d:`solvePnPRansac <solvepnpransac>` instead of :calib3d:`solvePnP <solvepnp>` is that after the matching not all the found correspondences are correct: as like as not, there are false correspondences, also called *outliers*. `Random Sample Consensus <http://en.wikipedia.org/wiki/RANSAC>`_, or *Ransac*, is a non-deterministic iterative method which estimates the parameters of a mathematical model from observed data, producing an increasingly accurate result as the number of iterations increases. After applying *Ransac* all the *outliers* are eliminated, and the camera pose is then estimated with a certain probability of obtaining a good solution.
For the camera pose estimation I have implemented a *class* **PnPProblem**. This *class* has 4 attributes: a given calibration matrix, the rotation matrix, the translation matrix and the rotation-translation matrix. The intrinsic calibration parameters of the camera which you are using to estimate the pose are necessary. In order to obtain the parameters you can check the :ref:`CameraCalibrationSquareChessBoardTutorial` and :ref:`cameraCalibrationOpenCV` tutorials.
The following code shows how to declare the *PnPProblem class* in the main program:
.. code-block:: cpp
// Intrinsic camera parameters: UVC WEBCAM
double f = 55; // focal length in mm
double sx = 22.3, sy = 14.9; // sensor size
double width = 640, height = 480; // image size
double params_WEBCAM[] = { width*f/sx, // fx
height*f/sy, // fy
width/2, // cx
height/2}; // cy
PnPProblem pnp_detection(params_WEBCAM); // instantiate PnPProblem class
The following code shows how the *PnPProblem class* initialises its attributes:
.. code-block:: cpp
// Custom constructor given the intrinsic camera parameters
PnPProblem::PnPProblem(const double params[])
{
_A_matrix = cv::Mat::zeros(3, 3, CV_64FC1); // intrinsic camera parameters
_A_matrix.at<double>(0, 0) = params[0]; // [ fx 0 cx ]
_A_matrix.at<double>(1, 1) = params[1]; // [ 0 fy cy ]
_A_matrix.at<double>(0, 2) = params[2]; // [ 0 0 1 ]
_A_matrix.at<double>(1, 2) = params[3];
_A_matrix.at<double>(2, 2) = 1;
_R_matrix = cv::Mat::zeros(3, 3, CV_64FC1); // rotation matrix
_t_matrix = cv::Mat::zeros(3, 1, CV_64FC1); // translation matrix
_P_matrix = cv::Mat::zeros(3, 4, CV_64FC1); // rotation-translation matrix
}
OpenCV provides four PnP methods: ITERATIVE, EPNP, P3P and DLS. Depending on the application type, the estimation method will differ. If we want to make a real-time application, the more suitable methods are EPNP and P3P, since they are faster than ITERATIVE and DLS at finding an optimal solution. However, EPNP and P3P are not especially robust with planar surfaces, and sometimes the pose estimation seems to have a mirror effect. Therefore, in this tutorial the ITERATIVE method is used, because the object to be detected has planar surfaces.
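For reference, with the enum names introduced in this change, the flag passed later to *estimatePoseRANSAC()* could be chosen as follows (an illustrative sketch; the sample actually parses the value from the ``--method`` option):
.. code-block:: cpp
// Sketch: select the PnP method flag (the tutorial uses the iterative method)
int pnpMethod = cv::SOLVEPNP_ITERATIVE;
// Alternatives: cv::SOLVEPNP_EPNP, cv::SOLVEPNP_P3P or cv::SOLVEPNP_DLS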
The OpenCV Ransac implementation requires you to provide three parameters: the maximum number of iterations until the algorithm stops, the maximum allowed distance between the observed and computed point projections for a point to be considered an inlier, and the confidence of obtaining a good result. You can tune these parameters in order to improve your algorithm's performance. Increasing the number of iterations gives a more accurate solution, but takes more time to find it. Increasing the reprojection error reduces the computation time, but your solution will be less accurate. Decreasing the confidence makes your algorithm faster, but the obtained solution will again be less accurate.
The following parameters work for this application:
.. code-block:: cpp
// RANSAC parameters
int iterationsCount = 500; // number of Ransac iterations.
float reprojectionError = 2.0; // maximum allowed distance to consider it an inlier.
float confidence = 0.95; // ransac successful confidence.
The following code corresponds to the *estimatePoseRANSAC()* function which belongs to the *PnPProblem class*. This function estimates the rotation and translation matrix given a set of 2D/3D correspondences, the desired PnP method to use, the output inliers container and the Ransac parameters:
.. code-block:: cpp
// Estimate the pose given a list of 2D/3D correspondences with RANSAC and the method to use
void PnPProblem::estimatePoseRANSAC( const std::vector<cv::Point3f> &list_points3d, // list with model 3D coordinates
const std::vector<cv::Point2f> &list_points2d, // list with scene 2D coordinates
int flags, cv::Mat &inliers, int iterationsCount, // PnP method; inliers container
float reprojectionError, float confidence ) // Ransac parameters
{
cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64FC1); // vector of distortion coefficients
cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64FC1); // output rotation vector
cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64FC1); // output translation vector
bool useExtrinsicGuess = false; // if true the function uses the provided rvec and tvec values as
// initial approximations of the rotation and translation vectors
cv::solvePnPRansac( list_points3d, list_points2d, _A_matrix, distCoeffs, rvec, tvec,
useExtrinsicGuess, iterationsCount, reprojectionError, confidence,
inliers, flags );
Rodrigues(rvec,_R_matrix); // converts Rotation Vector to Matrix
_t_matrix = tvec; // set translation matrix
this->set_P_matrix(_R_matrix, _t_matrix); // set rotation-translation matrix
}
The following code contains the 3rd and 4th steps of the main algorithm: first, calling the above function, and second, taking the output inliers vector from Ransac in order to get the 2D scene points for drawing purposes. As seen in the code, we must be sure to apply Ransac only if we have matches; otherwise, the function :calib3d:`solvePnPRansac <solvepnpransac>` crashes due to an OpenCV *bug*.
.. code-block:: cpp
if(good_matches.size() > 0) // No matches, then RANSAC crashes
{
// -- Step 3: Estimate the pose using RANSAC approach
pnp_detection.estimatePoseRANSAC( list_points3d_model_match, list_points2d_scene_match,
pnpMethod, inliers_idx, iterationsCount, reprojectionError, confidence );
// -- Step 4: Catch the inliers keypoints to draw
for(int inliers_index = 0; inliers_index < inliers_idx.rows; ++inliers_index)
{
int n = inliers_idx.at<int>(inliers_index); // i-inlier
cv::Point2f point2d = list_points2d_scene_match[n]; // i-inlier point 2D
list_points2d_inliers.push_back(point2d); // add i-inlier to list
}
Finally, once the camera pose has been estimated, we can use :math:`R` and :math:`t` in order to compute the 2D projection onto the image of a given 3D point expressed in a world reference frame, using the formula shown in the *Theory* section.
The following code corresponds to the *backproject3DPoint()* function, which belongs to the *PnPProblem class*. The function backprojects a given 3D point expressed in a world reference frame onto the 2D image:
.. code-block:: cpp
// Backproject a 3D point to 2D using the estimated pose parameters
cv::Point2f PnPProblem::backproject3DPoint(const cv::Point3f &point3d)
{
// 3D point vector [x y z 1]'
cv::Mat point3d_vec = cv::Mat(4, 1, CV_64FC1);
point3d_vec.at<double>(0) = point3d.x;
point3d_vec.at<double>(1) = point3d.y;
point3d_vec.at<double>(2) = point3d.z;
point3d_vec.at<double>(3) = 1;
// 2D point vector [u v 1]'
cv::Mat point2d_vec = cv::Mat(3, 1, CV_64FC1);
point2d_vec = _A_matrix * _P_matrix * point3d_vec;
// Normalization of [u v]'
cv::Point2f point2d;
point2d.x = point2d_vec.at<double>(0) / point2d_vec.at<double>(2);
point2d.y = point2d_vec.at<double>(1) / point2d_vec.at<double>(2);
return point2d;
}
The above function is used to compute all the 3D points of the object *Mesh* in order to show the pose of the object.
You can also change the RANSAC parameters and the PnP method:
.. code-block:: cpp
./cpp-tutorial-pnp_detection --error=0.25 --confidence=0.90 --iterations=250 --method=3
6. **Linear Kalman Filter for bad poses rejection**
It is common in the computer vision and robotics fields that after applying detection or tracking techniques, bad results are obtained due to sensor errors. In order to avoid these bad detections, this tutorial explains how to implement a Linear Kalman Filter. The Kalman Filter is applied after a given number of inliers has been detected.
You can find more information about the `Kalman Filter <http://en.wikipedia.org/wiki/Kalman_filter>`_ online. In this tutorial the OpenCV implementation of the :video:`Kalman Filter <kalmanfilter>` is used, based on `Linear Kalman Filter for position and orientation tracking <http://campar.in.tum.de/Chair/KalmanFilter>`_, to set the dynamics and measurement models.
Firstly, we have to define our state vector, which has 18 states: the positional data (x,y,z) with its first and second derivatives (velocity and acceleration), plus the rotation in the form of three euler angles (roll, pitch, yaw) together with their first and second derivatives (angular velocity and acceleration):
.. math::
X = (x,y,z,\dot x,\dot y,\dot z,\ddot x,\ddot y,\ddot z,\psi,\theta,\phi,\dot \psi,\dot \theta,\dot \phi,\ddot \psi,\ddot \theta,\ddot \phi)^T
Secondly, we have to define the number of measurements, which is 6: from :math:`R` and :math:`t` we can extract :math:`(x,y,z)` and :math:`(\psi,\theta,\phi)`. In addition, we have to define the number of control actions applied to the system, which in this case is *zero*. Finally, we have to define the differential time between measurements, which in this case is :math:`1/T`, where *T* is the frame rate of the video.
.. code-block:: cpp
cv::KalmanFilter KF; // instantiate Kalman Filter
int nStates = 18; // the number of states
int nMeasurements = 6; // the number of measured states
int nInputs = 0; // the number of action control
double dt = 0.125; // time between measurements (1/FPS)
initKalmanFilter(KF, nStates, nMeasurements, nInputs, dt); // init function
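The *measurements* vector that is filled in the update step below is assumed to be declared and zero-initialised beforehand, for example:
.. code-block:: cpp
cv::Mat measurements(nMeasurements, 1, CV_64F); // [x y z roll pitch yaw]'
measurements.setTo(cv::Scalar(0));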
The following code corresponds to the *Kalman Filter* initialisation. Firstly, the process noise, the measurement noise and the error covariance matrix are set. Secondly, the transition matrix, which is the dynamic model, is set, and finally the measurement matrix, which is the measurement model.
You can tune the process and measurement noise to improve the *Kalman Filter* performance. Reducing the measurement noise makes the filter converge faster, but also makes the algorithm more sensitive to bad measurements.
.. code-block:: cpp
void initKalmanFilter(cv::KalmanFilter &KF, int nStates, int nMeasurements, int nInputs, double dt)
{
KF.init(nStates, nMeasurements, nInputs, CV_64F); // init Kalman Filter
cv::setIdentity(KF.processNoiseCov, cv::Scalar::all(1e-5)); // set process noise
cv::setIdentity(KF.measurementNoiseCov, cv::Scalar::all(1e-4)); // set measurement noise
cv::setIdentity(KF.errorCovPost, cv::Scalar::all(1)); // error covariance
/** DYNAMIC MODEL **/
// [1 0 0 dt 0 0 dt2 0 0 0 0 0 0 0 0 0 0 0]
// [0 1 0 0 dt 0 0 dt2 0 0 0 0 0 0 0 0 0 0]
// [0 0 1 0 0 dt 0 0 dt2 0 0 0 0 0 0 0 0 0]
// [0 0 0 1 0 0 dt 0 0 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 1 0 0 dt 0 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 1 0 0 dt 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0 dt2 0 0]
// [0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0 dt2 0]
// [0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0 dt2]
// [0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0]
// [0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0]
// [0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt]
// [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0]
// [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0]
// [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
// position
KF.transitionMatrix.at<double>(0,3) = dt;
KF.transitionMatrix.at<double>(1,4) = dt;
KF.transitionMatrix.at<double>(2,5) = dt;
KF.transitionMatrix.at<double>(3,6) = dt;
KF.transitionMatrix.at<double>(4,7) = dt;
KF.transitionMatrix.at<double>(5,8) = dt;
KF.transitionMatrix.at<double>(0,6) = 0.5*pow(dt,2);
KF.transitionMatrix.at<double>(1,7) = 0.5*pow(dt,2);
KF.transitionMatrix.at<double>(2,8) = 0.5*pow(dt,2);
// orientation
KF.transitionMatrix.at<double>(9,12) = dt;
KF.transitionMatrix.at<double>(10,13) = dt;
KF.transitionMatrix.at<double>(11,14) = dt;
KF.transitionMatrix.at<double>(12,15) = dt;
KF.transitionMatrix.at<double>(13,16) = dt;
KF.transitionMatrix.at<double>(14,17) = dt;
KF.transitionMatrix.at<double>(9,15) = 0.5*pow(dt,2);
KF.transitionMatrix.at<double>(10,16) = 0.5*pow(dt,2);
KF.transitionMatrix.at<double>(11,17) = 0.5*pow(dt,2);
/** MEASUREMENT MODEL **/
// [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
// [0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
// [0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0]
KF.measurementMatrix.at<double>(0,0) = 1; // x
KF.measurementMatrix.at<double>(1,1) = 1; // y
KF.measurementMatrix.at<double>(2,2) = 1; // z
KF.measurementMatrix.at<double>(3,9) = 1; // roll
KF.measurementMatrix.at<double>(4,10) = 1; // pitch
KF.measurementMatrix.at<double>(5,11) = 1; // yaw
}
The following code contains the 5th step of the main algorithm. When the number of inliers obtained after *Ransac* is over the threshold, the measurements matrix is filled and the *Kalman Filter* is updated:
.. code-block:: cpp
// -- Step 5: Kalman Filter
// GOOD MEASUREMENT
if( inliers_idx.rows >= minInliersKalman )
{
// Get the measured translation
cv::Mat translation_measured(3, 1, CV_64F);
translation_measured = pnp_detection.get_t_matrix();
// Get the measured rotation
cv::Mat rotation_measured(3, 3, CV_64F);
rotation_measured = pnp_detection.get_R_matrix();
// fill the measurements vector
fillMeasurements(measurements, translation_measured, rotation_measured);
}
// Instantiate estimated translation and rotation
cv::Mat translation_estimated(3, 1, CV_64F);
cv::Mat rotation_estimated(3, 3, CV_64F);
// update the Kalman filter with good measurements
updateKalmanFilter( KF, measurements,
translation_estimated, rotation_estimated);
The following code corresponds to the *fillMeasurements()* function, which converts the measured `Rotation Matrix to Euler angles <http://euclideanspace.com/maths/geometry/rotations/conversions/matrixToEuler/index.htm>`_ and fills the measurements matrix along with the measured translation vector:
.. code-block:: cpp
void fillMeasurements( cv::Mat &measurements,
const cv::Mat &translation_measured, const cv::Mat &rotation_measured)
{
// Convert rotation matrix to euler angles
cv::Mat measured_eulers(3, 1, CV_64F);
measured_eulers = rot2euler(rotation_measured);
// Set measurement to predict
measurements.at<double>(0) = translation_measured.at<double>(0); // x
measurements.at<double>(1) = translation_measured.at<double>(1); // y
measurements.at<double>(2) = translation_measured.at<double>(2); // z
measurements.at<double>(3) = measured_eulers.at<double>(0); // roll
measurements.at<double>(4) = measured_eulers.at<double>(1); // pitch
measurements.at<double>(5) = measured_eulers.at<double>(2); // yaw
}
The following code corresponds to the *updateKalmanFilter()* function, which updates the Kalman Filter and sets the estimated rotation matrix and translation vector. The estimated rotation matrix comes from converting the estimated `Euler angles to a Rotation Matrix <http://euclideanspace.com/maths/geometry/rotations/conversions/eulerToMatrix/index.htm>`_.
.. code-block:: cpp
void updateKalmanFilter( cv::KalmanFilter &KF, cv::Mat &measurement,
cv::Mat &translation_estimated, cv::Mat &rotation_estimated )
{
// First predict, to update the internal statePre variable
cv::Mat prediction = KF.predict();
// The "correct" phase that is going to use the predicted value and our measurement
cv::Mat estimated = KF.correct(measurement);
// Estimated translation
translation_estimated.at<double>(0) = estimated.at<double>(0);
translation_estimated.at<double>(1) = estimated.at<double>(1);
translation_estimated.at<double>(2) = estimated.at<double>(2);
// Estimated euler angles
cv::Mat eulers_estimated(3, 1, CV_64F);
eulers_estimated.at<double>(0) = estimated.at<double>(9);
eulers_estimated.at<double>(1) = estimated.at<double>(10);
eulers_estimated.at<double>(2) = estimated.at<double>(11);
// Convert estimated euler angles to rotation matrix
rotation_estimated = euler2rot(eulers_estimated);
}
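Both functions rely on the *rot2euler()* and *euler2rot()* conversion helpers, which are not listed in this tutorial. A minimal sketch, assuming a Z-Y-X (yaw-pitch-roll) convention kept consistent between the two functions, could be:
.. code-block:: cpp
// Sketch: rotation matrix (R = Rz(yaw)*Ry(pitch)*Rx(roll)) to euler angles [roll pitch yaw]'
cv::Mat rot2euler(const cv::Mat &R)
{
cv::Mat euler(3, 1, CV_64F);
euler.at<double>(0) = atan2(R.at<double>(2,1), R.at<double>(2,2)); // roll
euler.at<double>(1) = asin(-R.at<double>(2,0)); // pitch
euler.at<double>(2) = atan2(R.at<double>(1,0), R.at<double>(0,0)); // yaw
return euler;
}
// Sketch: euler angles [roll pitch yaw]' back to a rotation matrix
cv::Mat euler2rot(const cv::Mat &euler)
{
double roll = euler.at<double>(0), pitch = euler.at<double>(1), yaw = euler.at<double>(2);
cv::Mat Rx = (cv::Mat_<double>(3,3) << 1, 0, 0, 0, cos(roll), -sin(roll), 0, sin(roll), cos(roll));
cv::Mat Ry = (cv::Mat_<double>(3,3) << cos(pitch), 0, sin(pitch), 0, 1, 0, -sin(pitch), 0, cos(pitch));
cv::Mat Rz = (cv::Mat_<double>(3,3) << cos(yaw), -sin(yaw), 0, sin(yaw), cos(yaw), 0, 0, 0, 1);
return Rz * Ry * Rx; // Z-Y-X composition
}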
The 6th step is to set the estimated rotation-translation matrix:
.. code-block:: cpp
// -- Step 6: Set estimated projection matrix
pnp_detection_est.set_P_matrix(rotation_estimated, translation_estimated);
The last and optional step is to draw the found pose. To do this I implemented a function to draw all the mesh 3D points and an extra reference axis:
.. code-block:: cpp
// -- Step X: Draw pose
drawObjectMesh(frame_vis, &mesh, &pnp_detection, green); // draw current pose
drawObjectMesh(frame_vis, &mesh, &pnp_detection_est, yellow); // draw estimated pose
double l = 5;
std::vector<cv::Point2f> pose_points2d;
pose_points2d.push_back(pnp_detection_est.backproject3DPoint(cv::Point3f(0,0,0))); // axis center
pose_points2d.push_back(pnp_detection_est.backproject3DPoint(cv::Point3f(l,0,0))); // axis x
pose_points2d.push_back(pnp_detection_est.backproject3DPoint(cv::Point3f(0,l,0))); // axis y
pose_points2d.push_back(pnp_detection_est.backproject3DPoint(cv::Point3f(0,0,l))); // axis z
draw3DCoordinateAxes(frame_vis, pose_points2d); // draw axes
You can also modify the minimum number of inliers required to update the Kalman Filter:
.. code-block:: cpp
./cpp-tutorial-pnp_detection --inliers=20
Results
=======
The following videos show the results of real-time pose estimation using the detection algorithm explained above with the following parameters:
.. code-block:: cpp
// Robust Matcher parameters
int numKeyPoints = 2000; // number of detected keypoints
float ratio = 0.70f; // ratio test
bool fast_match = true; // fastRobustMatch() or robustMatch()
// RANSAC parameters
int iterationsCount = 500; // number of Ransac iterations.
float reprojectionError = 2.0; // maximum allowed distance to consider it an inlier.
float confidence = 0.95; // ransac successful confidence.
// Kalman Filter parameters
int minInliersKalman = 30; // Kalman threshold updating
You can watch the real-time pose estimation on `YouTube here <https://www.youtube.com/watch?v=msFFuHsiUns>`_.
.. raw:: html
<div align="center">
<iframe title="Pose estimation for the Google Summer of Code 2014 using OpenCV libraries." width="560" height="349" src="http://www.youtube.com/embed/msFFuHsiUns?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
<div align="center">
<iframe title="Pose estimation for the Google Summer of Code 2014 using OpenCV libraries." width="560" height="349" src="http://www.youtube.com/embed/_E4vhmBJYLU?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>

Binary file not shown (new image, 83 KiB).


@ -45,6 +45,25 @@ Although we got most of our images in a 2D format they do come from a 3D world.
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|PoseEstimation| **Title:** :ref:`realTimePoseEstimation`
*Compatibility:* > OpenCV 2.0
*Author:* Edgar Riba
Real time pose estimation of a textured object using ORB features, FlannBased matcher, PnP approach plus Ransac and Linear Kalman Filter to reject possible bad poses.
===================== ==============================================
.. |PoseEstimation| image:: images/real_time_pose_estimation.jpg
:height: 90pt
:width: 90pt
.. raw:: latex
\pagebreak
@ -54,3 +73,4 @@ Although we got most of our images in a 2D format they do come from a 3D world.
../camera_calibration_square_chess/camera_calibration_square_chess
../camera_calibration/camera_calibration
../real_time_pose/real_time_pose


@ -562,7 +562,7 @@ solvePnP
------------
Finds an object pose from 3D-2D point correspondences.
.. ocv:function:: bool solvePnP( InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec, OutputArray tvec, bool useExtrinsicGuess=false, int flags=ITERATIVE )
.. ocv:function:: bool solvePnP( InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec, OutputArray tvec, bool useExtrinsicGuess=false, int flags=SOLVEPNP_ITERATIVE )
.. ocv:pyfunction:: cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags]]]]) -> retval, rvec, tvec
@ -584,9 +584,10 @@ Finds an object pose from 3D-2D point correspondences.
:param flags: Method for solving a PnP problem:
* **ITERATIVE** Iterative method is based on Levenberg-Marquardt optimization. In this case the function finds such a pose that minimizes reprojection error, that is the sum of squared distances between the observed projections ``imagePoints`` and the projected (using :ocv:func:`projectPoints` ) ``objectPoints`` .
* **P3P** Method is based on the paper of X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang "Complete Solution Classification for the Perspective-Three-Point Problem". In this case the function requires exactly four object and image points.
* **EPNP** Method has been introduced by F.Moreno-Noguer, V.Lepetit and P.Fua in the paper "EPnP: Efficient Perspective-n-Point Camera Pose Estimation".
* **SOLVEPNP_ITERATIVE** Iterative method is based on Levenberg-Marquardt optimization. In this case the function finds such a pose that minimizes reprojection error, that is the sum of squared distances between the observed projections ``imagePoints`` and the projected (using :ocv:func:`projectPoints` ) ``objectPoints`` .
* **SOLVEPNP_P3P** Method is based on the paper of X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang "Complete Solution Classification for the Perspective-Three-Point Problem". In this case the function requires exactly four object and image points.
* **SOLVEPNP_EPNP** Method has been introduced by F.Moreno-Noguer, V.Lepetit and P.Fua in the paper "EPnP: Efficient Perspective-n-Point Camera Pose Estimation".
* **SOLVEPNP_DLS** Method is based on the paper of Joel A. Hesch and Stergios I. Roumeliotis. "A Direct Least-Squares (DLS) Method for PnP".
The function estimates the object pose given a set of object points, their corresponding image projections, as well as the camera matrix and the distortion coefficients.
@ -598,7 +599,7 @@ solvePnPRansac
------------------
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
.. ocv:function:: void solvePnPRansac( InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec, OutputArray tvec, bool useExtrinsicGuess=false, int iterationsCount = 100, float reprojectionError = 8.0, int minInliersCount = 100, OutputArray inliers = noArray(), int flags = ITERATIVE )
.. ocv:function:: bool solvePnPRansac( InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec, OutputArray tvec, bool useExtrinsicGuess=false, int iterationsCount = 100, float reprojectionError = 8.0, double confidence = 0.99, OutputArray inliers = noArray(), int flags = SOLVEPNP_ITERATIVE )
.. ocv:pyfunction:: cv2.solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, iterationsCount[, reprojectionError[, minInliersCount[, inliers[, flags]]]]]]]]) -> rvec, tvec, inliers
@ -620,15 +621,18 @@ Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
:param reprojectionError: Inlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier.
:param minInliersCount: Number of inliers. If the algorithm at some stage finds more inliers than ``minInliersCount`` , it finishes.
:param confidence: The probability that the algorithm produces a useful result.
:param inliers: Output vector that contains indices of inliers in ``objectPoints`` and ``imagePoints`` .
:param flags: Method for solving a PnP problem (see :ocv:func:`solvePnP` ).
The function estimates an object pose given a set of object points, their corresponding image projections, as well as the camera matrix and the distortion coefficients. This function finds such a pose that minimizes reprojection error, that is, the sum of squared distances between the observed projections ``imagePoints`` and the projected (using
:ocv:func:`projectPoints` ) ``objectPoints``. The use of RANSAC makes the function resistant to outliers. The function is parallelized with the TBB library.
:ocv:func:`projectPoints` ) ``objectPoints``. The use of RANSAC makes the function resistant to outliers.
.. note::
* An example of how to use solvePNPRansac for object detection can be found at opencv_source_code/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/
findFundamentalMat


@ -56,10 +56,11 @@ enum { LMEDS = 4, //!< least-median algorithm
RANSAC = 8 //!< RANSAC algorithm
};
enum { ITERATIVE = 0,
EPNP = 1, // F.Moreno-Noguer, V.Lepetit and P.Fua "EPnP: Efficient Perspective-n-Point Camera Pose Estimation"
P3P = 2 // X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang; "Complete Solution Classification for the Perspective-Three-Point Problem"
};
enum { SOLVEPNP_ITERATIVE = 0,
SOLVEPNP_EPNP = 1, // F.Moreno-Noguer, V.Lepetit and P.Fua "EPnP: Efficient Perspective-n-Point Camera Pose Estimation"
SOLVEPNP_P3P = 2, // X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang; "Complete Solution Classification for the Perspective-Three-Point Problem"
SOLVEPNP_DLS = 3 // Joel A. Hesch and Stergios I. Roumeliotis. "A Direct Least-Squares (DLS) Method for PnP"
};
enum { CALIB_CB_ADAPTIVE_THRESH = 1,
CALIB_CB_NORMALIZE_IMAGE = 2,
@ -152,15 +153,15 @@ CV_EXPORTS_W void projectPoints( InputArray objectPoints,
CV_EXPORTS_W bool solvePnP( InputArray objectPoints, InputArray imagePoints,
InputArray cameraMatrix, InputArray distCoeffs,
OutputArray rvec, OutputArray tvec,
bool useExtrinsicGuess = false, int flags = ITERATIVE );
bool useExtrinsicGuess = false, int flags = SOLVEPNP_ITERATIVE );
//! computes the camera pose from a few 3D points and the corresponding projections. The outliers are possible.
CV_EXPORTS_W void solvePnPRansac( InputArray objectPoints, InputArray imagePoints,
CV_EXPORTS_W bool solvePnPRansac( InputArray objectPoints, InputArray imagePoints,
InputArray cameraMatrix, InputArray distCoeffs,
OutputArray rvec, OutputArray tvec,
bool useExtrinsicGuess = false, int iterationsCount = 100,
float reprojectionError = 8.0, int minInliersCount = 100,
OutputArray inliers = noArray(), int flags = ITERATIVE );
float reprojectionError = 8.0, double confidence = 0.99,
OutputArray inliers = noArray(), int flags = SOLVEPNP_ITERATIVE );
//! initializes camera matrix from a few 3D points and the corresponding projections.
CV_EXPORTS_W Mat initCameraMatrix2D( InputArrayOfArrays objectPoints,


@ -91,7 +91,8 @@ enum
{
CV_ITERATIVE = 0,
CV_EPNP = 1, // F.Moreno-Noguer, V.Lepetit and P.Fua "EPnP: Efficient Perspective-n-Point Camera Pose Estimation"
CV_P3P = 2 // X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang; "Complete Solution Classification for the Perspective-Three-Point Problem"
CV_P3P = 2, // X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang; "Complete Solution Classification for the Perspective-Three-Point Problem"
CV_DLS = 3 // Joel A. Hesch and Stergios I. Roumeliotis. "A Direct Least-Squares (DLS) Method for PnP"
};
CVAPI(int) cvFindFundamentalMat( const CvMat* points1, const CvMat* points2,


@ -10,7 +10,7 @@ using namespace perf;
using std::tr1::make_tuple;
using std::tr1::get;
CV_ENUM(pnpAlgo, ITERATIVE, EPNP /*, P3P*/)
CV_ENUM(pnpAlgo, SOLVEPNP_ITERATIVE, SOLVEPNP_EPNP, SOLVEPNP_P3P, SOLVEPNP_DLS)
typedef std::tr1::tuple<int, pnpAlgo> PointsNum_Algo_t;
typedef perf::TestBaseWithParam<PointsNum_Algo_t> PointsNum_Algo;
@ -19,8 +19,8 @@ typedef perf::TestBaseWithParam<int> PointsNum;
PERF_TEST_P(PointsNum_Algo, solvePnP,
testing::Combine(
testing::Values(/*4,*/ 3*9, 7*13), //TODO: find why results on 4 points are too unstable
testing::Values((int)ITERATIVE, (int)EPNP)
testing::Values(4, 3*9, 7*13), //TODO: find why results on 4 points are too unstable
testing::Values((int)SOLVEPNP_ITERATIVE, (int)SOLVEPNP_EPNP)
)
)
{
@ -51,6 +51,7 @@ PERF_TEST_P(PointsNum_Algo, solvePnP,
add(points2d, noise, points2d);
declare.in(points3d, points2d);
declare.time(100);
TEST_CYCLE_N(1000)
{
@ -58,12 +59,18 @@ PERF_TEST_P(PointsNum_Algo, solvePnP,
}
SANITY_CHECK(rvec, 1e-6);
SANITY_CHECK(tvec, 1e-3);
SANITY_CHECK(tvec, 1e-6);
}
PERF_TEST(PointsNum_Algo, solveP3P)
PERF_TEST_P(PointsNum_Algo, solvePnPSmallPoints,
testing::Combine(
testing::Values(4), //TODO: find why results on 4 points are too unstable
testing::Values((int)SOLVEPNP_P3P, (int)SOLVEPNP_DLS)
)
)
{
int pointsNum = 4;
int pointsNum = get<0>(GetParam());
pnpAlgo algo = get<1>(GetParam());
vector<Point2f> points2d(pointsNum);
vector<Point3f> points3d(pointsNum);
@ -93,11 +100,11 @@ PERF_TEST(PointsNum_Algo, solveP3P)
TEST_CYCLE_N(1000)
{
solvePnP(points3d, points2d, intrinsics, distortion, rvec, tvec, false, P3P);
solvePnP(points3d, points2d, intrinsics, distortion, rvec, tvec, false, algo);
}
SANITY_CHECK(rvec, 1e-6);
SANITY_CHECK(tvec, 1e-6);
SANITY_CHECK(rvec, 1e-4);
SANITY_CHECK(tvec, 1e-2);
}
PERF_TEST_P(PointsNum, DISABLED_SolvePnPRansac, testing::Values(4, 3*9, 7*13))
@ -117,8 +124,10 @@ PERF_TEST_P(PointsNum, DISABLED_SolvePnPRansac, testing::Values(4, 3*9, 7*13))
Mat dist_coef(1, 8, CV_32F, cv::Scalar::all(0));
vector<cv::Point2f> image_vec;
Mat rvec_gold(1, 3, CV_32FC1);
randu(rvec_gold, 0, 1);
Mat tvec_gold(1, 3, CV_32FC1);
randu(tvec_gold, 0, 1);
projectPoints(object, rvec_gold, tvec_gold, camera_mat, dist_coef, image_vec);

modules/calib3d/src/dls.cpp (new file, 648 lines)

@ -0,0 +1,648 @@
#include "precomp.hpp"
#include "dls.h"
#include <iostream>
#ifdef HAVE_EIGEN
# if defined __GNUC__ && defined __APPLE__
# pragma GCC diagnostic ignored "-Wshadow"
# endif
# include <Eigen/Core>
# include <Eigen/Eigenvalues>
# include "opencv2/core/eigen.hpp"
#endif
using namespace std;
dls::dls(const cv::Mat& opoints, const cv::Mat& ipoints)
{
N = std::max(opoints.checkVector(3, CV_32F), opoints.checkVector(3, CV_64F));
p = cv::Mat(3, N, CV_64F);
z = cv::Mat(3, N, CV_64F);
mn = cv::Mat::zeros(3, 1, CV_64F);
cost__ = 9999;
f1coeff.resize(21);
f2coeff.resize(21);
f3coeff.resize(21);
if (opoints.depth() == ipoints.depth())
{
if (opoints.depth() == CV_32F)
init_points<cv::Point3f, cv::Point2f>(opoints, ipoints);
else
init_points<cv::Point3d, cv::Point2d>(opoints, ipoints);
}
else if (opoints.depth() == CV_32F)
init_points<cv::Point3f, cv::Point2d>(opoints, ipoints);
else
init_points<cv::Point3d, cv::Point2f>(opoints, ipoints);
}
dls::~dls()
{
// TODO Auto-generated destructor stub
}
bool dls::compute_pose(cv::Mat& R, cv::Mat& t)
{
std::vector<cv::Mat> R_;
R_.push_back(rotx(CV_PI/2));
R_.push_back(roty(CV_PI/2));
R_.push_back(rotz(CV_PI/2));
// version that calls dls 3 times, to avoid Cayley singularity
for (int i = 0; i < 3; ++i)
{
// Make a random rotation
cv::Mat pp = R_[i] * ( p - cv::repeat(mn, 1, p.cols) );
// clear for new data
C_est_.clear();
t_est_.clear();
cost_.clear();
this->run_kernel(pp); // run dls_pnp()
// find global minimum
for (unsigned int j = 0; j < cost_.size(); ++j)
{
if( cost_[j] < cost__ )
{
t_est__ = t_est_[j] - C_est_[j] * R_[i] * mn;
C_est__ = C_est_[j] * R_[i];
cost__ = cost_[j];
}
}
}
if(C_est__.cols > 0 && C_est__.rows > 0)
{
C_est__.copyTo(R);
t_est__.copyTo(t);
return true;
}
return false;
}
void dls::run_kernel(const cv::Mat& pp)
{
cv::Mat Mtilde(27, 27, CV_64F);
cv::Mat D = cv::Mat::zeros(9, 9, CV_64F);
build_coeff_matrix(pp, Mtilde, D);
cv::Mat eigenval_r, eigenval_i, eigenvec_r, eigenvec_i;
compute_eigenvec(Mtilde, eigenval_r, eigenval_i, eigenvec_r, eigenvec_i);
/*
* Now check the solutions
*/
// extract the optimal solutions from the eigen decomposition of the
// Multiplication matrix
cv::Mat sols = cv::Mat::zeros(3, 27, CV_64F);
std::vector<double> cost;
int count = 0;
for (int k = 0; k < 27; ++k)
{
// V(:,k) = V(:,k)/V(1,k);
cv::Mat V_kA = eigenvec_r.col(k); // 27x1
cv::Mat V_kB = cv::Mat(1, 1, z.depth(), V_kA.at<double>(0)); // 1x1
cv::Mat V_k; cv::solve(V_kB.t(), V_kA.t(), V_k); // A/B = B'\A'
cv::Mat( V_k.t()).copyTo( eigenvec_r.col(k) );
//if (imag(V(2,k)) == 0)
#ifdef HAVE_EIGEN
const double epsilon = 1e-4;
if( eigenval_i.at<double>(k,0) >= -epsilon && eigenval_i.at<double>(k,0) <= epsilon )
#endif
{
double stmp[3];
stmp[0] = eigenvec_r.at<double>(9, k);
stmp[1] = eigenvec_r.at<double>(3, k);
stmp[2] = eigenvec_r.at<double>(1, k);
cv::Mat H = Hessian(stmp);
cv::Mat eigenvalues, eigenvectors;
cv::eigen(H, eigenvalues, eigenvectors);
if(positive_eigenvalues(&eigenvalues))
{
// sols(:,i) = stmp;
cv::Mat stmp_mat(3, 1, CV_64F, &stmp);
stmp_mat.copyTo( sols.col(count) );
cv::Mat Cbar = cayley2rotbar(stmp_mat);
cv::Mat Cbarvec = Cbar.reshape(1,1).t();
// cost(i) = CbarVec' * D * CbarVec;
cv::Mat cost_mat = Cbarvec.t() * D * Cbarvec;
cost.push_back( cost_mat.at<double>(0) );
count++;
}
}
}
// extract solutions
sols = sols.clone().colRange(0, count);
std::vector<cv::Mat> C_est, t_est;
for (int j = 0; j < sols.cols; ++j)
{
// recover the optimal orientation
// C_est(:,:,j) = 1/(1 + sols(:,j)' * sols(:,j)) * cayley2rotbar(sols(:,j));
cv::Mat sols_j = sols.col(j);
double sols_mult = 1./(1.+cv::Mat( sols_j.t() * sols_j ).at<double>(0));
cv::Mat C_est_j = cayley2rotbar(sols_j).mul(sols_mult);
C_est.push_back( C_est_j );
cv::Mat A2 = cv::Mat::zeros(3, 3, CV_64F);
cv::Mat b2 = cv::Mat::zeros(3, 1, CV_64F);
for (int i = 0; i < N; ++i)
{
cv::Mat eye = cv::Mat::eye(3, 3, CV_64F);
cv::Mat z_mul = z.col(i)*z.col(i).t();
A2 += eye - z_mul;
b2 += (z_mul - eye) * C_est_j * pp.col(i);
}
// recover the optimal translation
cv::Mat X2; cv::solve(A2, b2, X2); // A\B
t_est.push_back(X2);
}
// check that the points are infront of the center of perspectivity
for (int k = 0; k < sols.cols; ++k)
{
cv::Mat cam_points = C_est[k] * pp + cv::repeat(t_est[k], 1, pp.cols);
cv::Mat cam_points_k = cam_points.row(2);
if(is_empty(&cam_points_k))
{
cv::Mat C_valid = C_est[k], t_valid = t_est[k];
double cost_valid = cost[k];
C_est_.push_back(C_valid);
t_est_.push_back(t_valid);
cost_.push_back(cost_valid);
}
}
}
void dls::build_coeff_matrix(const cv::Mat& pp, cv::Mat& Mtilde, cv::Mat& D)
{
cv::Mat eye = cv::Mat::eye(3, 3, CV_64F);
// build coeff matrix
// An intermediate matrix, the inverse of what is called "H" in the paper
// (see eq. 25)
cv::Mat H = cv::Mat::zeros(3, 3, CV_64F);
cv::Mat A = cv::Mat::zeros(3, 9, CV_64F);
cv::Mat pp_i(3, 1, CV_64F);
cv::Mat z_i(3, 1, CV_64F);
for (int i = 0; i < N; ++i)
{
z.col(i).copyTo(z_i);
A += ( z_i*z_i.t() - eye ) * LeftMultVec(pp.col(i));
}
H = eye.mul(N) - z * z.t();
// A\B
cv::solve(H, A, A, cv::DECOMP_NORMAL);
H.release();
cv::Mat ppi_A(3, 1, CV_64F);
for (int i = 0; i < N; ++i)
{
z.col(i).copyTo(z_i);
ppi_A = LeftMultVec(pp.col(i)) + A;
D += ppi_A.t() * ( eye - z_i*z_i.t() ) * ppi_A;
}
A.release();
// fill the coefficients
fill_coeff(&D);
// generate random samples
std::vector<double> u(5);
cv::randn(u, 0, 200);
cv::Mat M2 = cayley_LS_M(f1coeff, f2coeff, f3coeff, u);
cv::Mat M2_1 = M2(cv::Range(0,27), cv::Range(0,27));
cv::Mat M2_2 = M2(cv::Range(0,27), cv::Range(27,120));
cv::Mat M2_3 = M2(cv::Range(27,120), cv::Range(27,120));
cv::Mat M2_4 = M2(cv::Range(27,120), cv::Range(0,27));
M2.release();
// A/B = B'\A'
cv::Mat M2_5; cv::solve(M2_3.t(), M2_2.t(), M2_5);
M2_2.release(); M2_3.release();
// construct the multiplication matrix via schur compliment of the Macaulay
// matrix
Mtilde = M2_1 - M2_5.t()*M2_4;
}
void dls::compute_eigenvec(const cv::Mat& Mtilde, cv::Mat& eigenval_real, cv::Mat& eigenval_imag,
cv::Mat& eigenvec_real, cv::Mat& eigenvec_imag)
{
#ifdef HAVE_EIGEN
Eigen::MatrixXd Mtilde_eig, zeros_eig;
cv::cv2eigen(Mtilde, Mtilde_eig);
cv::cv2eigen(cv::Mat::zeros(27, 27, CV_64F), zeros_eig);
Eigen::MatrixXcd Mtilde_eig_cmplx(27, 27);
Mtilde_eig_cmplx.real() = Mtilde_eig;
Mtilde_eig_cmplx.imag() = zeros_eig;
Eigen::ComplexEigenSolver<Eigen::MatrixXcd> ces;
ces.compute(Mtilde_eig_cmplx);
Eigen::MatrixXd eigval_real = ces.eigenvalues().real();
Eigen::MatrixXd eigval_imag = ces.eigenvalues().imag();
Eigen::MatrixXd eigvec_real = ces.eigenvectors().real();
Eigen::MatrixXd eigvec_imag = ces.eigenvectors().imag();
cv::eigen2cv(eigval_real, eigenval_real);
cv::eigen2cv(eigval_imag, eigenval_imag);
cv::eigen2cv(eigvec_real, eigenvec_real);
cv::eigen2cv(eigvec_imag, eigenvec_imag);
#else
EigenvalueDecomposition es(Mtilde);
eigenval_real = es.eigenvalues();
eigenvec_real = es.eigenvectors();
eigenval_imag = eigenvec_imag = cv::Mat();
#endif
}
void dls::fill_coeff(const cv::Mat * D_mat)
{
// TODO: shift D and coefficients one position to left
double D[10][10]; // put D_mat into array
for (int i = 0; i < D_mat->rows; ++i)
{
const double* Di = D_mat->ptr<double>(i);
for (int j = 0; j < D_mat->cols; ++j)
{
D[i+1][j+1] = Di[j];
}
}
// F1 COEFFICIENT
f1coeff[1] = 2*D[1][6] - 2*D[1][8] + 2*D[5][6] - 2*D[5][8] + 2*D[6][1] + 2*D[6][5] + 2*D[6][9] - 2*D[8][1] - 2*D[8][5] - 2*D[8][9] + 2*D[9][6] - 2*D[9][8]; // constant term
f1coeff[2] = 6*D[1][2] + 6*D[1][4] + 6*D[2][1] - 6*D[2][5] - 6*D[2][9] + 6*D[4][1] - 6*D[4][5] - 6*D[4][9] - 6*D[5][2] - 6*D[5][4] - 6*D[9][2] - 6*D[9][4]; // s1^2 * s2
f1coeff[3] = 4*D[1][7] - 4*D[1][3] + 8*D[2][6] - 8*D[2][8] - 4*D[3][1] + 4*D[3][5] + 4*D[3][9] + 8*D[4][6] - 8*D[4][8] + 4*D[5][3] - 4*D[5][7] + 8*D[6][2] + 8*D[6][4] + 4*D[7][1] - 4*D[7][5] - 4*D[7][9] - 8*D[8][2] - 8*D[8][4] + 4*D[9][3] - 4*D[9][7]; // s1 * s2
f1coeff[4] = 4*D[1][2] - 4*D[1][4] + 4*D[2][1] - 4*D[2][5] - 4*D[2][9] + 8*D[3][6] - 8*D[3][8] - 4*D[4][1] + 4*D[4][5] + 4*D[4][9] - 4*D[5][2] + 4*D[5][4] + 8*D[6][3] + 8*D[6][7] + 8*D[7][6] - 8*D[7][8] - 8*D[8][3] - 8*D[8][7] - 4*D[9][2] + 4*D[9][4]; //s1 * s3
f1coeff[5] = 8*D[2][2] - 8*D[3][3] - 8*D[4][4] + 8*D[6][6] + 8*D[7][7] - 8*D[8][8]; // s2 * s3
f1coeff[6] = 4*D[2][6] - 2*D[1][7] - 2*D[1][3] + 4*D[2][8] - 2*D[3][1] + 2*D[3][5] - 2*D[3][9] + 4*D[4][6] + 4*D[4][8] + 2*D[5][3] + 2*D[5][7] + 4*D[6][2] + 4*D[6][4] - 2*D[7][1] + 2*D[7][5] - 2*D[7][9] + 4*D[8][2] + 4*D[8][4] - 2*D[9][3] - 2*D[9][7]; // s2^2 * s3
f1coeff[7] = 2*D[2][5] - 2*D[1][4] - 2*D[2][1] - 2*D[1][2] - 2*D[2][9] - 2*D[4][1] + 2*D[4][5] - 2*D[4][9] + 2*D[5][2] + 2*D[5][4] - 2*D[9][2] - 2*D[9][4]; //s2^3
f1coeff[8] = 4*D[1][9] - 4*D[1][1] + 8*D[3][3] + 8*D[3][7] + 4*D[5][5] + 8*D[7][3] + 8*D[7][7] + 4*D[9][1] - 4*D[9][9]; // s1 * s3^2
f1coeff[9] = 4*D[1][1] - 4*D[5][5] - 4*D[5][9] + 8*D[6][6] - 8*D[6][8] - 8*D[8][6] + 8*D[8][8] - 4*D[9][5] - 4*D[9][9]; // s1
f1coeff[10] = 2*D[1][3] + 2*D[1][7] + 4*D[2][6] - 4*D[2][8] + 2*D[3][1] + 2*D[3][5] + 2*D[3][9] - 4*D[4][6] + 4*D[4][8] + 2*D[5][3] + 2*D[5][7] + 4*D[6][2] - 4*D[6][4] + 2*D[7][1] + 2*D[7][5] + 2*D[7][9] - 4*D[8][2] + 4*D[8][4] + 2*D[9][3] + 2*D[9][7]; // s3
f1coeff[11] = 2*D[1][2] + 2*D[1][4] + 2*D[2][1] + 2*D[2][5] + 2*D[2][9] - 4*D[3][6] + 4*D[3][8] + 2*D[4][1] + 2*D[4][5] + 2*D[4][9] + 2*D[5][2] + 2*D[5][4] - 4*D[6][3] + 4*D[6][7] + 4*D[7][6] - 4*D[7][8] + 4*D[8][3] - 4*D[8][7] + 2*D[9][2] + 2*D[9][4]; // s2
f1coeff[12] = 2*D[2][9] - 2*D[1][4] - 2*D[2][1] - 2*D[2][5] - 2*D[1][2] + 4*D[3][6] + 4*D[3][8] - 2*D[4][1] - 2*D[4][5] + 2*D[4][9] - 2*D[5][2] - 2*D[5][4] + 4*D[6][3] + 4*D[6][7] + 4*D[7][6] + 4*D[7][8] + 4*D[8][3] + 4*D[8][7] + 2*D[9][2] + 2*D[9][4]; // s2 * s3^2
f1coeff[13] = 6*D[1][6] - 6*D[1][8] - 6*D[5][6] + 6*D[5][8] + 6*D[6][1] - 6*D[6][5] - 6*D[6][9] - 6*D[8][1] + 6*D[8][5] + 6*D[8][9] - 6*D[9][6] + 6*D[9][8]; // s1^2
f1coeff[14] = 2*D[1][8] - 2*D[1][6] + 4*D[2][3] + 4*D[2][7] + 4*D[3][2] - 4*D[3][4] - 4*D[4][3] - 4*D[4][7] - 2*D[5][6] + 2*D[5][8] - 2*D[6][1] - 2*D[6][5] + 2*D[6][9] + 4*D[7][2] - 4*D[7][4] + 2*D[8][1] + 2*D[8][5] - 2*D[8][9] + 2*D[9][6] - 2*D[9][8]; // s3^2
f1coeff[15] = 2*D[1][8] - 2*D[1][6] - 4*D[2][3] + 4*D[2][7] - 4*D[3][2] - 4*D[3][4] - 4*D[4][3] + 4*D[4][7] + 2*D[5][6] - 2*D[5][8] - 2*D[6][1] + 2*D[6][5] - 2*D[6][9] + 4*D[7][2] + 4*D[7][4] + 2*D[8][1] - 2*D[8][5] + 2*D[8][9] - 2*D[9][6] + 2*D[9][8]; // s2^2
f1coeff[16] = 2*D[3][9] - 2*D[1][7] - 2*D[3][1] - 2*D[3][5] - 2*D[1][3] - 2*D[5][3] - 2*D[5][7] - 2*D[7][1] - 2*D[7][5] + 2*D[7][9] + 2*D[9][3] + 2*D[9][7]; // s3^3
f1coeff[17] = 4*D[1][6] + 4*D[1][8] + 8*D[2][3] + 8*D[2][7] + 8*D[3][2] + 8*D[3][4] + 8*D[4][3] + 8*D[4][7] - 4*D[5][6] - 4*D[5][8] + 4*D[6][1] - 4*D[6][5] - 4*D[6][9] + 8*D[7][2] + 8*D[7][4] + 4*D[8][1] - 4*D[8][5] - 4*D[8][9] - 4*D[9][6] - 4*D[9][8]; // s1 * s2 * s3
f1coeff[18] = 4*D[1][5] - 4*D[1][1] + 8*D[2][2] + 8*D[2][4] + 8*D[4][2] + 8*D[4][4] + 4*D[5][1] - 4*D[5][5] + 4*D[9][9]; // s1 * s2^2
f1coeff[19] = 6*D[1][3] + 6*D[1][7] + 6*D[3][1] - 6*D[3][5] - 6*D[3][9] - 6*D[5][3] - 6*D[5][7] + 6*D[7][1] - 6*D[7][5] - 6*D[7][9] - 6*D[9][3] - 6*D[9][7]; // s1^2 * s3
f1coeff[20] = 4*D[1][1] - 4*D[1][5] - 4*D[1][9] - 4*D[5][1] + 4*D[5][5] + 4*D[5][9] - 4*D[9][1] + 4*D[9][5] + 4*D[9][9]; // s1^3
// F2 COEFFICIENT
f2coeff[1] = - 2*D[1][3] + 2*D[1][7] - 2*D[3][1] - 2*D[3][5] - 2*D[3][9] - 2*D[5][3] + 2*D[5][7] + 2*D[7][1] + 2*D[7][5] + 2*D[7][9] - 2*D[9][3] + 2*D[9][7]; // constant term
f2coeff[2] = 4*D[1][5] - 4*D[1][1] + 8*D[2][2] + 8*D[2][4] + 8*D[4][2] + 8*D[4][4] + 4*D[5][1] - 4*D[5][5] + 4*D[9][9]; // s1^2 * s2
f2coeff[3] = 4*D[1][8] - 4*D[1][6] - 8*D[2][3] + 8*D[2][7] - 8*D[3][2] - 8*D[3][4] - 8*D[4][3] + 8*D[4][7] + 4*D[5][6] - 4*D[5][8] - 4*D[6][1] + 4*D[6][5] - 4*D[6][9] + 8*D[7][2] + 8*D[7][4] + 4*D[8][1] - 4*D[8][5] + 4*D[8][9] - 4*D[9][6] + 4*D[9][8]; // s1 * s2
f2coeff[4] = 8*D[2][2] - 8*D[3][3] - 8*D[4][4] + 8*D[6][6] + 8*D[7][7] - 8*D[8][8]; // s1 * s3
f2coeff[5] = 4*D[1][4] - 4*D[1][2] - 4*D[2][1] + 4*D[2][5] - 4*D[2][9] - 8*D[3][6] - 8*D[3][8] + 4*D[4][1] - 4*D[4][5] + 4*D[4][9] + 4*D[5][2] - 4*D[5][4] - 8*D[6][3] + 8*D[6][7] + 8*D[7][6] + 8*D[7][8] - 8*D[8][3] + 8*D[8][7] - 4*D[9][2] + 4*D[9][4]; // s2 * s3
f2coeff[6] = 6*D[5][6] - 6*D[1][8] - 6*D[1][6] + 6*D[5][8] - 6*D[6][1] + 6*D[6][5] - 6*D[6][9] - 6*D[8][1] + 6*D[8][5] - 6*D[8][9] - 6*D[9][6] - 6*D[9][8]; // s2^2 * s3
f2coeff[7] = 4*D[1][1] - 4*D[1][5] + 4*D[1][9] - 4*D[5][1] + 4*D[5][5] - 4*D[5][9] + 4*D[9][1] - 4*D[9][5] + 4*D[9][9]; // s2^3
f2coeff[8] = 2*D[2][9] - 2*D[1][4] - 2*D[2][1] - 2*D[2][5] - 2*D[1][2] + 4*D[3][6] + 4*D[3][8] - 2*D[4][1] - 2*D[4][5] + 2*D[4][9] - 2*D[5][2] - 2*D[5][4] + 4*D[6][3] + 4*D[6][7] + 4*D[7][6] + 4*D[7][8] + 4*D[8][3] + 4*D[8][7] + 2*D[9][2] + 2*D[9][4]; // s1 * s3^2
f2coeff[9] = 2*D[1][2] + 2*D[1][4] + 2*D[2][1] + 2*D[2][5] + 2*D[2][9] - 4*D[3][6] + 4*D[3][8] + 2*D[4][1] + 2*D[4][5] + 2*D[4][9] + 2*D[5][2] + 2*D[5][4] - 4*D[6][3] + 4*D[6][7] + 4*D[7][6] - 4*D[7][8] + 4*D[8][3] - 4*D[8][7] + 2*D[9][2] + 2*D[9][4]; // s1
f2coeff[10] = 2*D[1][6] + 2*D[1][8] - 4*D[2][3] + 4*D[2][7] - 4*D[3][2] + 4*D[3][4] + 4*D[4][3] - 4*D[4][7] + 2*D[5][6] + 2*D[5][8] + 2*D[6][1] + 2*D[6][5] + 2*D[6][9] + 4*D[7][2] - 4*D[7][4] + 2*D[8][1] + 2*D[8][5] + 2*D[8][9] + 2*D[9][6] + 2*D[9][8]; // s3
f2coeff[11] = 8*D[3][3] - 4*D[1][9] - 4*D[1][1] - 8*D[3][7] + 4*D[5][5] - 8*D[7][3] + 8*D[7][7] - 4*D[9][1] - 4*D[9][9]; // s2
f2coeff[12] = 4*D[1][1] - 4*D[5][5] + 4*D[5][9] + 8*D[6][6] + 8*D[6][8] + 8*D[8][6] + 8*D[8][8] + 4*D[9][5] - 4*D[9][9]; // s2 * s3^2
f2coeff[13] = 2*D[1][7] - 2*D[1][3] + 4*D[2][6] - 4*D[2][8] - 2*D[3][1] + 2*D[3][5] + 2*D[3][9] + 4*D[4][6] - 4*D[4][8] + 2*D[5][3] - 2*D[5][7] + 4*D[6][2] + 4*D[6][4] + 2*D[7][1] - 2*D[7][5] - 2*D[7][9] - 4*D[8][2] - 4*D[8][4] + 2*D[9][3] - 2*D[9][7]; // s1^2
f2coeff[14] = 2*D[1][3] - 2*D[1][7] + 4*D[2][6] + 4*D[2][8] + 2*D[3][1] + 2*D[3][5] - 2*D[3][9] - 4*D[4][6] - 4*D[4][8] + 2*D[5][3] - 2*D[5][7] + 4*D[6][2] - 4*D[6][4] - 2*D[7][1] - 2*D[7][5] + 2*D[7][9] + 4*D[8][2] - 4*D[8][4] - 2*D[9][3] + 2*D[9][7]; // s3^2
f2coeff[15] = 6*D[1][3] - 6*D[1][7] + 6*D[3][1] - 6*D[3][5] + 6*D[3][9] - 6*D[5][3] + 6*D[5][7] - 6*D[7][1] + 6*D[7][5] - 6*D[7][9] + 6*D[9][3] - 6*D[9][7]; // s2^2
f2coeff[16] = 2*D[6][9] - 2*D[1][8] - 2*D[5][6] - 2*D[5][8] - 2*D[6][1] - 2*D[6][5] - 2*D[1][6] - 2*D[8][1] - 2*D[8][5] + 2*D[8][9] + 2*D[9][6] + 2*D[9][8]; // s3^3
f2coeff[17] = 8*D[2][6] - 4*D[1][7] - 4*D[1][3] + 8*D[2][8] - 4*D[3][1] + 4*D[3][5] - 4*D[3][9] + 8*D[4][6] + 8*D[4][8] + 4*D[5][3] + 4*D[5][7] + 8*D[6][2] + 8*D[6][4] - 4*D[7][1] + 4*D[7][5] - 4*D[7][9] + 8*D[8][2] + 8*D[8][4] - 4*D[9][3] - 4*D[9][7]; // s1 * s2 * s3
f2coeff[18] = 6*D[2][5] - 6*D[1][4] - 6*D[2][1] - 6*D[1][2] - 6*D[2][9] - 6*D[4][1] + 6*D[4][5] - 6*D[4][9] + 6*D[5][2] + 6*D[5][4] - 6*D[9][2] - 6*D[9][4]; // s1 * s2^2
f2coeff[19] = 2*D[1][6] + 2*D[1][8] + 4*D[2][3] + 4*D[2][7] + 4*D[3][2] + 4*D[3][4] + 4*D[4][3] + 4*D[4][7] - 2*D[5][6] - 2*D[5][8] + 2*D[6][1] - 2*D[6][5] - 2*D[6][9] + 4*D[7][2] + 4*D[7][4] + 2*D[8][1] - 2*D[8][5] - 2*D[8][9] - 2*D[9][6] - 2*D[9][8]; // s1^2 * s3
f2coeff[20] = 2*D[1][2] + 2*D[1][4] + 2*D[2][1] - 2*D[2][5] - 2*D[2][9] + 2*D[4][1] - 2*D[4][5] - 2*D[4][9] - 2*D[5][2] - 2*D[5][4] - 2*D[9][2] - 2*D[9][4]; // s1^3
// F3 COEFFICIENT
f3coeff[1] = 2*D[1][2] - 2*D[1][4] + 2*D[2][1] + 2*D[2][5] + 2*D[2][9] - 2*D[4][1] - 2*D[4][5] - 2*D[4][9] + 2*D[5][2] - 2*D[5][4] + 2*D[9][2] - 2*D[9][4]; // constant term
f3coeff[2] = 2*D[1][6] + 2*D[1][8] + 4*D[2][3] + 4*D[2][7] + 4*D[3][2] + 4*D[3][4] + 4*D[4][3] + 4*D[4][7] - 2*D[5][6] - 2*D[5][8] + 2*D[6][1] - 2*D[6][5] - 2*D[6][9] + 4*D[7][2] + 4*D[7][4] + 2*D[8][1] - 2*D[8][5] - 2*D[8][9] - 2*D[9][6] - 2*D[9][8]; // s1^2 * s2
f3coeff[3] = 8*D[2][2] - 8*D[3][3] - 8*D[4][4] + 8*D[6][6] + 8*D[7][7] - 8*D[8][8]; // s1 * s2
f3coeff[4] = 4*D[1][8] - 4*D[1][6] + 8*D[2][3] + 8*D[2][7] + 8*D[3][2] - 8*D[3][4] - 8*D[4][3] - 8*D[4][7] - 4*D[5][6] + 4*D[5][8] - 4*D[6][1] - 4*D[6][5] + 4*D[6][9] + 8*D[7][2] - 8*D[7][4] + 4*D[8][1] + 4*D[8][5] - 4*D[8][9] + 4*D[9][6] - 4*D[9][8]; // s1 * s3
f3coeff[5] = 4*D[1][3] - 4*D[1][7] + 8*D[2][6] + 8*D[2][8] + 4*D[3][1] + 4*D[3][5] - 4*D[3][9] - 8*D[4][6] - 8*D[4][8] + 4*D[5][3] - 4*D[5][7] + 8*D[6][2] - 8*D[6][4] - 4*D[7][1] - 4*D[7][5] + 4*D[7][9] + 8*D[8][2] - 8*D[8][4] - 4*D[9][3] + 4*D[9][7]; // s2 * s3
f3coeff[6] = 4*D[1][1] - 4*D[5][5] + 4*D[5][9] + 8*D[6][6] + 8*D[6][8] + 8*D[8][6] + 8*D[8][8] + 4*D[9][5] - 4*D[9][9]; // s2^2 * s3
f3coeff[7] = 2*D[5][6] - 2*D[1][8] - 2*D[1][6] + 2*D[5][8] - 2*D[6][1] + 2*D[6][5] - 2*D[6][9] - 2*D[8][1] + 2*D[8][5] - 2*D[8][9] - 2*D[9][6] - 2*D[9][8]; // s2^3
f3coeff[8] = 6*D[3][9] - 6*D[1][7] - 6*D[3][1] - 6*D[3][5] - 6*D[1][3] - 6*D[5][3] - 6*D[5][7] - 6*D[7][1] - 6*D[7][5] + 6*D[7][9] + 6*D[9][3] + 6*D[9][7]; // s1 * s3^2
f3coeff[9] = 2*D[1][3] + 2*D[1][7] + 4*D[2][6] - 4*D[2][8] + 2*D[3][1] + 2*D[3][5] + 2*D[3][9] - 4*D[4][6] + 4*D[4][8] + 2*D[5][3] + 2*D[5][7] + 4*D[6][2] - 4*D[6][4] + 2*D[7][1] + 2*D[7][5] + 2*D[7][9] - 4*D[8][2] + 4*D[8][4] + 2*D[9][3] + 2*D[9][7]; // s1
f3coeff[10] = 8*D[2][2] - 4*D[1][5] - 4*D[1][1] - 8*D[2][4] - 8*D[4][2] + 8*D[4][4] - 4*D[5][1] - 4*D[5][5] + 4*D[9][9]; // s3
f3coeff[11] = 2*D[1][6] + 2*D[1][8] - 4*D[2][3] + 4*D[2][7] - 4*D[3][2] + 4*D[3][4] + 4*D[4][3] - 4*D[4][7] + 2*D[5][6] + 2*D[5][8] + 2*D[6][1] + 2*D[6][5] + 2*D[6][9] + 4*D[7][2] - 4*D[7][4] + 2*D[8][1] + 2*D[8][5] + 2*D[8][9] + 2*D[9][6] + 2*D[9][8]; // s2
f3coeff[12] = 6*D[6][9] - 6*D[1][8] - 6*D[5][6] - 6*D[5][8] - 6*D[6][1] - 6*D[6][5] - 6*D[1][6] - 6*D[8][1] - 6*D[8][5] + 6*D[8][9] + 6*D[9][6] + 6*D[9][8]; // s2 * s3^2
f3coeff[13] = 2*D[1][2] - 2*D[1][4] + 2*D[2][1] - 2*D[2][5] - 2*D[2][9] + 4*D[3][6] - 4*D[3][8] - 2*D[4][1] + 2*D[4][5] + 2*D[4][9] - 2*D[5][2] + 2*D[5][4] + 4*D[6][3] + 4*D[6][7] + 4*D[7][6] - 4*D[7][8] - 4*D[8][3] - 4*D[8][7] - 2*D[9][2] + 2*D[9][4]; // s1^2
f3coeff[14] = 6*D[1][4] - 6*D[1][2] - 6*D[2][1] - 6*D[2][5] + 6*D[2][9] + 6*D[4][1] + 6*D[4][5] - 6*D[4][9] - 6*D[5][2] + 6*D[5][4] + 6*D[9][2] - 6*D[9][4]; // s3^2
f3coeff[15] = 2*D[1][4] - 2*D[1][2] - 2*D[2][1] + 2*D[2][5] - 2*D[2][9] - 4*D[3][6] - 4*D[3][8] + 2*D[4][1] - 2*D[4][5] + 2*D[4][9] + 2*D[5][2] - 2*D[5][4] - 4*D[6][3] + 4*D[6][7] + 4*D[7][6] + 4*D[7][8] - 4*D[8][3] + 4*D[8][7] - 2*D[9][2] + 2*D[9][4]; // s2^2
f3coeff[16] = 4*D[1][1] + 4*D[1][5] - 4*D[1][9] + 4*D[5][1] + 4*D[5][5] - 4*D[5][9] - 4*D[9][1] - 4*D[9][5] + 4*D[9][9]; // s3^3
f3coeff[17] = 4*D[2][9] - 4*D[1][4] - 4*D[2][1] - 4*D[2][5] - 4*D[1][2] + 8*D[3][6] + 8*D[3][8] - 4*D[4][1] - 4*D[4][5] + 4*D[4][9] - 4*D[5][2] - 4*D[5][4] + 8*D[6][3] + 8*D[6][7] + 8*D[7][6] + 8*D[7][8] + 8*D[8][3] + 8*D[8][7] + 4*D[9][2] + 4*D[9][4]; // s1 * s2 * s3
f3coeff[18] = 4*D[2][6] - 2*D[1][7] - 2*D[1][3] + 4*D[2][8] - 2*D[3][1] + 2*D[3][5] - 2*D[3][9] + 4*D[4][6] + 4*D[4][8] + 2*D[5][3] + 2*D[5][7] + 4*D[6][2] + 4*D[6][4] - 2*D[7][1] + 2*D[7][5] - 2*D[7][9] + 4*D[8][2] + 4*D[8][4] - 2*D[9][3] - 2*D[9][7]; // s1 * s2^2
f3coeff[19] = 4*D[1][9] - 4*D[1][1] + 8*D[3][3] + 8*D[3][7] + 4*D[5][5] + 8*D[7][3] + 8*D[7][7] + 4*D[9][1] - 4*D[9][9]; // s1^2 * s3
f3coeff[20] = 2*D[1][3] + 2*D[1][7] + 2*D[3][1] - 2*D[3][5] - 2*D[3][9] - 2*D[5][3] - 2*D[5][7] + 2*D[7][1] - 2*D[7][5] - 2*D[7][9] - 2*D[9][3] - 2*D[9][7]; // s1^3
}
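// Build the 3x9 matrix L(v) that repeats the 3-vector v along each block row,
// so that L(v) * vec(R) = R * v when the 3x3 matrix R is vectorized row by row.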
cv::Mat dls::LeftMultVec(const cv::Mat& v)
{
cv::Mat mat_ = cv::Mat::zeros(3, 9, CV_64F);
for (int i = 0; i < 3; ++i)
{
mat_.at<double>(i, 3*i + 0) = v.at<double>(0);
mat_.at<double>(i, 3*i + 1) = v.at<double>(1);
mat_.at<double>(i, 3*i + 2) = v.at<double>(2);
}
return mat_;
}
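// Build the 120x120 Macaulay matrix of the polynomial system from the three
// coefficient vectors (a = f1coeff, b = f2coeff, c = f3coeff) and the random
// samples u; the long list of assignments below fills its nonzero entries
// one by one.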
cv::Mat dls::cayley_LS_M(const std::vector<double>& a, const std::vector<double>& b, const std::vector<double>& c, const std::vector<double>& u)
{
// TODO: input matrix pointer
// TODO: shift coefficients one position to the left
cv::Mat M = cv::Mat::zeros(120, 120, CV_64F);
M.at<double>(0,0)=u[1]; M.at<double>(0,35)=a[1]; M.at<double>(0,83)=b[1]; M.at<double>(0,118)=c[1];
M.at<double>(1,0)=u[4]; M.at<double>(1,1)=u[1]; M.at<double>(1,34)=a[1]; M.at<double>(1,35)=a[10]; M.at<double>(1,54)=b[1]; M.at<double>(1,83)=b[10]; M.at<double>(1,99)=c[1]; M.at<double>(1,118)=c[10];
M.at<double>(2,1)=u[4]; M.at<double>(2,2)=u[1]; M.at<double>(2,34)=a[10]; M.at<double>(2,35)=a[14]; M.at<double>(2,51)=a[1]; M.at<double>(2,54)=b[10]; M.at<double>(2,65)=b[1]; M.at<double>(2,83)=b[14]; M.at<double>(2,89)=c[1]; M.at<double>(2,99)=c[10]; M.at<double>(2,118)=c[14];
M.at<double>(3,0)=u[3]; M.at<double>(3,3)=u[1]; M.at<double>(3,35)=a[11]; M.at<double>(3,49)=a[1]; M.at<double>(3,76)=b[1]; M.at<double>(3,83)=b[11]; M.at<double>(3,118)=c[11]; M.at<double>(3,119)=c[1];
M.at<double>(4,1)=u[3]; M.at<double>(4,3)=u[4]; M.at<double>(4,4)=u[1]; M.at<double>(4,34)=a[11]; M.at<double>(4,35)=a[5]; M.at<double>(4,43)=a[1]; M.at<double>(4,49)=a[10]; M.at<double>(4,54)=b[11]; M.at<double>(4,71)=b[1]; M.at<double>(4,76)=b[10]; M.at<double>(4,83)=b[5]; M.at<double>(4,99)=c[11]; M.at<double>(4,100)=c[1]; M.at<double>(4,118)=c[5]; M.at<double>(4,119)=c[10];
M.at<double>(5,2)=u[3]; M.at<double>(5,4)=u[4]; M.at<double>(5,5)=u[1]; M.at<double>(5,34)=a[5]; M.at<double>(5,35)=a[12]; M.at<double>(5,41)=a[1]; M.at<double>(5,43)=a[10]; M.at<double>(5,49)=a[14]; M.at<double>(5,51)=a[11]; M.at<double>(5,54)=b[5]; M.at<double>(5,62)=b[1]; M.at<double>(5,65)=b[11]; M.at<double>(5,71)=b[10]; M.at<double>(5,76)=b[14]; M.at<double>(5,83)=b[12]; M.at<double>(5,89)=c[11]; M.at<double>(5,99)=c[5]; M.at<double>(5,100)=c[10]; M.at<double>(5,111)=c[1]; M.at<double>(5,118)=c[12]; M.at<double>(5,119)=c[14];
M.at<double>(6,3)=u[3]; M.at<double>(6,6)=u[1]; M.at<double>(6,30)=a[1]; M.at<double>(6,35)=a[15]; M.at<double>(6,49)=a[11]; M.at<double>(6,75)=b[1]; M.at<double>(6,76)=b[11]; M.at<double>(6,83)=b[15]; M.at<double>(6,107)=c[1]; M.at<double>(6,118)=c[15]; M.at<double>(6,119)=c[11];
M.at<double>(7,4)=u[3]; M.at<double>(7,6)=u[4]; M.at<double>(7,7)=u[1]; M.at<double>(7,30)=a[10]; M.at<double>(7,34)=a[15]; M.at<double>(7,35)=a[6]; M.at<double>(7,43)=a[11]; M.at<double>(7,45)=a[1]; M.at<double>(7,49)=a[5]; M.at<double>(7,54)=b[15]; M.at<double>(7,63)=b[1]; M.at<double>(7,71)=b[11]; M.at<double>(7,75)=b[10]; M.at<double>(7,76)=b[5]; M.at<double>(7,83)=b[6]; M.at<double>(7,99)=c[15]; M.at<double>(7,100)=c[11]; M.at<double>(7,107)=c[10]; M.at<double>(7,112)=c[1]; M.at<double>(7,118)=c[6]; M.at<double>(7,119)=c[5];
M.at<double>(8,5)=u[3]; M.at<double>(8,7)=u[4]; M.at<double>(8,8)=u[1]; M.at<double>(8,30)=a[14]; M.at<double>(8,34)=a[6]; M.at<double>(8,41)=a[11]; M.at<double>(8,43)=a[5]; M.at<double>(8,45)=a[10]; M.at<double>(8,46)=a[1]; M.at<double>(8,49)=a[12]; M.at<double>(8,51)=a[15]; M.at<double>(8,54)=b[6]; M.at<double>(8,62)=b[11]; M.at<double>(8,63)=b[10]; M.at<double>(8,65)=b[15]; M.at<double>(8,66)=b[1]; M.at<double>(8,71)=b[5]; M.at<double>(8,75)=b[14]; M.at<double>(8,76)=b[12]; M.at<double>(8,89)=c[15]; M.at<double>(8,99)=c[6]; M.at<double>(8,100)=c[5]; M.at<double>(8,102)=c[1]; M.at<double>(8,107)=c[14]; M.at<double>(8,111)=c[11]; M.at<double>(8,112)=c[10]; M.at<double>(8,119)=c[12];
M.at<double>(9,0)=u[2]; M.at<double>(9,9)=u[1]; M.at<double>(9,35)=a[9]; M.at<double>(9,36)=a[1]; M.at<double>(9,83)=b[9]; M.at<double>(9,84)=b[1]; M.at<double>(9,88)=c[1]; M.at<double>(9,118)=c[9];
M.at<double>(10,1)=u[2]; M.at<double>(10,9)=u[4]; M.at<double>(10,10)=u[1]; M.at<double>(10,33)=a[1]; M.at<double>(10,34)=a[9]; M.at<double>(10,35)=a[4]; M.at<double>(10,36)=a[10]; M.at<double>(10,54)=b[9]; M.at<double>(10,59)=b[1]; M.at<double>(10,83)=b[4]; M.at<double>(10,84)=b[10]; M.at<double>(10,88)=c[10]; M.at<double>(10,99)=c[9]; M.at<double>(10,117)=c[1]; M.at<double>(10,118)=c[4];
M.at<double>(11,2)=u[2]; M.at<double>(11,10)=u[4]; M.at<double>(11,11)=u[1]; M.at<double>(11,28)=a[1]; M.at<double>(11,33)=a[10]; M.at<double>(11,34)=a[4]; M.at<double>(11,35)=a[8]; M.at<double>(11,36)=a[14]; M.at<double>(11,51)=a[9]; M.at<double>(11,54)=b[4]; M.at<double>(11,57)=b[1]; M.at<double>(11,59)=b[10]; M.at<double>(11,65)=b[9]; M.at<double>(11,83)=b[8]; M.at<double>(11,84)=b[14]; M.at<double>(11,88)=c[14]; M.at<double>(11,89)=c[9]; M.at<double>(11,99)=c[4]; M.at<double>(11,114)=c[1]; M.at<double>(11,117)=c[10]; M.at<double>(11,118)=c[8];
M.at<double>(12,3)=u[2]; M.at<double>(12,9)=u[3]; M.at<double>(12,12)=u[1]; M.at<double>(12,35)=a[3]; M.at<double>(12,36)=a[11]; M.at<double>(12,39)=a[1]; M.at<double>(12,49)=a[9]; M.at<double>(12,76)=b[9]; M.at<double>(12,79)=b[1]; M.at<double>(12,83)=b[3]; M.at<double>(12,84)=b[11]; M.at<double>(12,88)=c[11]; M.at<double>(12,96)=c[1]; M.at<double>(12,118)=c[3]; M.at<double>(12,119)=c[9];
M.at<double>(13,4)=u[2]; M.at<double>(13,10)=u[3]; M.at<double>(13,12)=u[4]; M.at<double>(13,13)=u[1]; M.at<double>(13,33)=a[11]; M.at<double>(13,34)=a[3]; M.at<double>(13,35)=a[17]; M.at<double>(13,36)=a[5]; M.at<double>(13,39)=a[10]; M.at<double>(13,43)=a[9]; M.at<double>(13,47)=a[1]; M.at<double>(13,49)=a[4]; M.at<double>(13,54)=b[3]; M.at<double>(13,59)=b[11]; M.at<double>(13,60)=b[1]; M.at<double>(13,71)=b[9]; M.at<double>(13,76)=b[4]; M.at<double>(13,79)=b[10]; M.at<double>(13,83)=b[17]; M.at<double>(13,84)=b[5]; M.at<double>(13,88)=c[5]; M.at<double>(13,90)=c[1]; M.at<double>(13,96)=c[10]; M.at<double>(13,99)=c[3]; M.at<double>(13,100)=c[9]; M.at<double>(13,117)=c[11]; M.at<double>(13,118)=c[17]; M.at<double>(13,119)=c[4];
M.at<double>(14,5)=u[2]; M.at<double>(14,11)=u[3]; M.at<double>(14,13)=u[4]; M.at<double>(14,14)=u[1]; M.at<double>(14,28)=a[11]; M.at<double>(14,33)=a[5]; M.at<double>(14,34)=a[17]; M.at<double>(14,36)=a[12]; M.at<double>(14,39)=a[14]; M.at<double>(14,41)=a[9]; M.at<double>(14,42)=a[1]; M.at<double>(14,43)=a[4]; M.at<double>(14,47)=a[10]; M.at<double>(14,49)=a[8]; M.at<double>(14,51)=a[3]; M.at<double>(14,54)=b[17]; M.at<double>(14,56)=b[1]; M.at<double>(14,57)=b[11]; M.at<double>(14,59)=b[5]; M.at<double>(14,60)=b[10]; M.at<double>(14,62)=b[9]; M.at<double>(14,65)=b[3]; M.at<double>(14,71)=b[4]; M.at<double>(14,76)=b[8]; M.at<double>(14,79)=b[14]; M.at<double>(14,84)=b[12]; M.at<double>(14,88)=c[12]; M.at<double>(14,89)=c[3]; M.at<double>(14,90)=c[10]; M.at<double>(14,96)=c[14]; M.at<double>(14,99)=c[17]; M.at<double>(14,100)=c[4]; M.at<double>(14,106)=c[1]; M.at<double>(14,111)=c[9]; M.at<double>(14,114)=c[11]; M.at<double>(14,117)=c[5]; M.at<double>(14,119)=c[8];
M.at<double>(15,6)=u[2]; M.at<double>(15,12)=u[3]; M.at<double>(15,15)=u[1]; M.at<double>(15,29)=a[1]; M.at<double>(15,30)=a[9]; M.at<double>(15,35)=a[18]; M.at<double>(15,36)=a[15]; M.at<double>(15,39)=a[11]; M.at<double>(15,49)=a[3]; M.at<double>(15,74)=b[1]; M.at<double>(15,75)=b[9]; M.at<double>(15,76)=b[3]; M.at<double>(15,79)=b[11]; M.at<double>(15,83)=b[18]; M.at<double>(15,84)=b[15]; M.at<double>(15,88)=c[15]; M.at<double>(15,94)=c[1]; M.at<double>(15,96)=c[11]; M.at<double>(15,107)=c[9]; M.at<double>(15,118)=c[18]; M.at<double>(15,119)=c[3];
M.at<double>(16,7)=u[2]; M.at<double>(16,13)=u[3]; M.at<double>(16,15)=u[4]; M.at<double>(16,16)=u[1]; M.at<double>(16,29)=a[10]; M.at<double>(16,30)=a[4]; M.at<double>(16,33)=a[15]; M.at<double>(16,34)=a[18]; M.at<double>(16,36)=a[6]; M.at<double>(16,39)=a[5]; M.at<double>(16,43)=a[3]; M.at<double>(16,44)=a[1]; M.at<double>(16,45)=a[9]; M.at<double>(16,47)=a[11]; M.at<double>(16,49)=a[17]; M.at<double>(16,54)=b[18]; M.at<double>(16,59)=b[15]; M.at<double>(16,60)=b[11]; M.at<double>(16,63)=b[9]; M.at<double>(16,68)=b[1]; M.at<double>(16,71)=b[3]; M.at<double>(16,74)=b[10]; M.at<double>(16,75)=b[4]; M.at<double>(16,76)=b[17]; M.at<double>(16,79)=b[5]; M.at<double>(16,84)=b[6]; M.at<double>(16,88)=c[6]; M.at<double>(16,90)=c[11]; M.at<double>(16,94)=c[10]; M.at<double>(16,96)=c[5]; M.at<double>(16,97)=c[1]; M.at<double>(16,99)=c[18]; M.at<double>(16,100)=c[3]; M.at<double>(16,107)=c[4]; M.at<double>(16,112)=c[9]; M.at<double>(16,117)=c[15]; M.at<double>(16,119)=c[17];
M.at<double>(17,8)=u[2]; M.at<double>(17,14)=u[3]; M.at<double>(17,16)=u[4]; M.at<double>(17,17)=u[1]; M.at<double>(17,28)=a[15]; M.at<double>(17,29)=a[14]; M.at<double>(17,30)=a[8]; M.at<double>(17,33)=a[6]; M.at<double>(17,39)=a[12]; M.at<double>(17,41)=a[3]; M.at<double>(17,42)=a[11]; M.at<double>(17,43)=a[17]; M.at<double>(17,44)=a[10]; M.at<double>(17,45)=a[4]; M.at<double>(17,46)=a[9]; M.at<double>(17,47)=a[5]; M.at<double>(17,51)=a[18]; M.at<double>(17,56)=b[11]; M.at<double>(17,57)=b[15]; M.at<double>(17,59)=b[6]; M.at<double>(17,60)=b[5]; M.at<double>(17,62)=b[3]; M.at<double>(17,63)=b[4]; M.at<double>(17,65)=b[18]; M.at<double>(17,66)=b[9]; M.at<double>(17,68)=b[10]; M.at<double>(17,71)=b[17]; M.at<double>(17,74)=b[14]; M.at<double>(17,75)=b[8]; M.at<double>(17,79)=b[12]; M.at<double>(17,89)=c[18]; M.at<double>(17,90)=c[5]; M.at<double>(17,94)=c[14]; M.at<double>(17,96)=c[12]; M.at<double>(17,97)=c[10]; M.at<double>(17,100)=c[17]; M.at<double>(17,102)=c[9]; M.at<double>(17,106)=c[11]; M.at<double>(17,107)=c[8]; M.at<double>(17,111)=c[3]; M.at<double>(17,112)=c[4]; M.at<double>(17,114)=c[15]; M.at<double>(17,117)=c[6];
M.at<double>(18,9)=u[2]; M.at<double>(18,18)=u[1]; M.at<double>(18,35)=a[13]; M.at<double>(18,36)=a[9]; M.at<double>(18,53)=a[1]; M.at<double>(18,82)=b[1]; M.at<double>(18,83)=b[13]; M.at<double>(18,84)=b[9]; M.at<double>(18,87)=c[1]; M.at<double>(18,88)=c[9]; M.at<double>(18,118)=c[13];
M.at<double>(19,10)=u[2]; M.at<double>(19,18)=u[4]; M.at<double>(19,19)=u[1]; M.at<double>(19,32)=a[1]; M.at<double>(19,33)=a[9]; M.at<double>(19,34)=a[13]; M.at<double>(19,35)=a[19]; M.at<double>(19,36)=a[4]; M.at<double>(19,53)=a[10]; M.at<double>(19,54)=b[13]; M.at<double>(19,59)=b[9]; M.at<double>(19,61)=b[1]; M.at<double>(19,82)=b[10]; M.at<double>(19,83)=b[19]; M.at<double>(19,84)=b[4]; M.at<double>(19,87)=c[10]; M.at<double>(19,88)=c[4]; M.at<double>(19,99)=c[13]; M.at<double>(19,116)=c[1]; M.at<double>(19,117)=c[9]; M.at<double>(19,118)=c[19];
M.at<double>(20,11)=u[2]; M.at<double>(20,19)=u[4]; M.at<double>(20,20)=u[1]; M.at<double>(20,27)=a[1]; M.at<double>(20,28)=a[9]; M.at<double>(20,32)=a[10]; M.at<double>(20,33)=a[4]; M.at<double>(20,34)=a[19]; M.at<double>(20,36)=a[8]; M.at<double>(20,51)=a[13]; M.at<double>(20,53)=a[14]; M.at<double>(20,54)=b[19]; M.at<double>(20,55)=b[1]; M.at<double>(20,57)=b[9]; M.at<double>(20,59)=b[4]; M.at<double>(20,61)=b[10]; M.at<double>(20,65)=b[13]; M.at<double>(20,82)=b[14]; M.at<double>(20,84)=b[8]; M.at<double>(20,87)=c[14]; M.at<double>(20,88)=c[8]; M.at<double>(20,89)=c[13]; M.at<double>(20,99)=c[19]; M.at<double>(20,113)=c[1]; M.at<double>(20,114)=c[9]; M.at<double>(20,116)=c[10]; M.at<double>(20,117)=c[4];
M.at<double>(21,12)=u[2]; M.at<double>(21,18)=u[3]; M.at<double>(21,21)=u[1]; M.at<double>(21,35)=a[2]; M.at<double>(21,36)=a[3]; M.at<double>(21,38)=a[1]; M.at<double>(21,39)=a[9]; M.at<double>(21,49)=a[13]; M.at<double>(21,53)=a[11]; M.at<double>(21,76)=b[13]; M.at<double>(21,78)=b[1]; M.at<double>(21,79)=b[9]; M.at<double>(21,82)=b[11]; M.at<double>(21,83)=b[2]; M.at<double>(21,84)=b[3]; M.at<double>(21,87)=c[11]; M.at<double>(21,88)=c[3]; M.at<double>(21,92)=c[1]; M.at<double>(21,96)=c[9]; M.at<double>(21,118)=c[2]; M.at<double>(21,119)=c[13];
M.at<double>(22,13)=u[2]; M.at<double>(22,19)=u[3]; M.at<double>(22,21)=u[4]; M.at<double>(22,22)=u[1]; M.at<double>(22,32)=a[11]; M.at<double>(22,33)=a[3]; M.at<double>(22,34)=a[2]; M.at<double>(22,36)=a[17]; M.at<double>(22,38)=a[10]; M.at<double>(22,39)=a[4]; M.at<double>(22,40)=a[1]; M.at<double>(22,43)=a[13]; M.at<double>(22,47)=a[9]; M.at<double>(22,49)=a[19]; M.at<double>(22,53)=a[5]; M.at<double>(22,54)=b[2]; M.at<double>(22,59)=b[3]; M.at<double>(22,60)=b[9]; M.at<double>(22,61)=b[11]; M.at<double>(22,71)=b[13]; M.at<double>(22,72)=b[1]; M.at<double>(22,76)=b[19]; M.at<double>(22,78)=b[10]; M.at<double>(22,79)=b[4]; M.at<double>(22,82)=b[5]; M.at<double>(22,84)=b[17]; M.at<double>(22,87)=c[5]; M.at<double>(22,88)=c[17]; M.at<double>(22,90)=c[9]; M.at<double>(22,92)=c[10]; M.at<double>(22,95)=c[1]; M.at<double>(22,96)=c[4]; M.at<double>(22,99)=c[2]; M.at<double>(22,100)=c[13]; M.at<double>(22,116)=c[11]; M.at<double>(22,117)=c[3]; M.at<double>(22,119)=c[19];
M.at<double>(23,14)=u[2]; M.at<double>(23,20)=u[3]; M.at<double>(23,22)=u[4]; M.at<double>(23,23)=u[1]; M.at<double>(23,27)=a[11]; M.at<double>(23,28)=a[3]; M.at<double>(23,32)=a[5]; M.at<double>(23,33)=a[17]; M.at<double>(23,38)=a[14]; M.at<double>(23,39)=a[8]; M.at<double>(23,40)=a[10]; M.at<double>(23,41)=a[13]; M.at<double>(23,42)=a[9]; M.at<double>(23,43)=a[19]; M.at<double>(23,47)=a[4]; M.at<double>(23,51)=a[2]; M.at<double>(23,53)=a[12]; M.at<double>(23,55)=b[11]; M.at<double>(23,56)=b[9]; M.at<double>(23,57)=b[3]; M.at<double>(23,59)=b[17]; M.at<double>(23,60)=b[4]; M.at<double>(23,61)=b[5]; M.at<double>(23,62)=b[13]; M.at<double>(23,65)=b[2]; M.at<double>(23,71)=b[19]; M.at<double>(23,72)=b[10]; M.at<double>(23,78)=b[14]; M.at<double>(23,79)=b[8]; M.at<double>(23,82)=b[12]; M.at<double>(23,87)=c[12]; M.at<double>(23,89)=c[2]; M.at<double>(23,90)=c[4]; M.at<double>(23,92)=c[14]; M.at<double>(23,95)=c[10]; M.at<double>(23,96)=c[8]; M.at<double>(23,100)=c[19]; M.at<double>(23,106)=c[9]; M.at<double>(23,111)=c[13]; M.at<double>(23,113)=c[11]; M.at<double>(23,114)=c[3]; M.at<double>(23,116)=c[5]; M.at<double>(23,117)=c[17];
M.at<double>(24,15)=u[2]; M.at<double>(24,21)=u[3]; M.at<double>(24,24)=u[1]; M.at<double>(24,29)=a[9]; M.at<double>(24,30)=a[13]; M.at<double>(24,36)=a[18]; M.at<double>(24,38)=a[11]; M.at<double>(24,39)=a[3]; M.at<double>(24,49)=a[2]; M.at<double>(24,52)=a[1]; M.at<double>(24,53)=a[15]; M.at<double>(24,73)=b[1]; M.at<double>(24,74)=b[9]; M.at<double>(24,75)=b[13]; M.at<double>(24,76)=b[2]; M.at<double>(24,78)=b[11]; M.at<double>(24,79)=b[3]; M.at<double>(24,82)=b[15]; M.at<double>(24,84)=b[18]; M.at<double>(24,87)=c[15]; M.at<double>(24,88)=c[18]; M.at<double>(24,92)=c[11]; M.at<double>(24,93)=c[1]; M.at<double>(24,94)=c[9]; M.at<double>(24,96)=c[3]; M.at<double>(24,107)=c[13]; M.at<double>(24,119)=c[2];
M.at<double>(25,16)=u[2]; M.at<double>(25,22)=u[3]; M.at<double>(25,24)=u[4]; M.at<double>(25,25)=u[1]; M.at<double>(25,29)=a[4]; M.at<double>(25,30)=a[19]; M.at<double>(25,32)=a[15]; M.at<double>(25,33)=a[18]; M.at<double>(25,38)=a[5]; M.at<double>(25,39)=a[17]; M.at<double>(25,40)=a[11]; M.at<double>(25,43)=a[2]; M.at<double>(25,44)=a[9]; M.at<double>(25,45)=a[13]; M.at<double>(25,47)=a[3]; M.at<double>(25,52)=a[10]; M.at<double>(25,53)=a[6]; M.at<double>(25,59)=b[18]; M.at<double>(25,60)=b[3]; M.at<double>(25,61)=b[15]; M.at<double>(25,63)=b[13]; M.at<double>(25,68)=b[9]; M.at<double>(25,71)=b[2]; M.at<double>(25,72)=b[11]; M.at<double>(25,73)=b[10]; M.at<double>(25,74)=b[4]; M.at<double>(25,75)=b[19]; M.at<double>(25,78)=b[5]; M.at<double>(25,79)=b[17]; M.at<double>(25,82)=b[6]; M.at<double>(25,87)=c[6]; M.at<double>(25,90)=c[3]; M.at<double>(25,92)=c[5]; M.at<double>(25,93)=c[10]; M.at<double>(25,94)=c[4]; M.at<double>(25,95)=c[11]; M.at<double>(25,96)=c[17]; M.at<double>(25,97)=c[9]; M.at<double>(25,100)=c[2]; M.at<double>(25,107)=c[19]; M.at<double>(25,112)=c[13]; M.at<double>(25,116)=c[15]; M.at<double>(25,117)=c[18];
M.at<double>(26,17)=u[2]; M.at<double>(26,23)=u[3]; M.at<double>(26,25)=u[4]; M.at<double>(26,26)=u[1]; M.at<double>(26,27)=a[15]; M.at<double>(26,28)=a[18]; M.at<double>(26,29)=a[8]; M.at<double>(26,32)=a[6]; M.at<double>(26,38)=a[12]; M.at<double>(26,40)=a[5]; M.at<double>(26,41)=a[2]; M.at<double>(26,42)=a[3]; M.at<double>(26,44)=a[4]; M.at<double>(26,45)=a[19]; M.at<double>(26,46)=a[13]; M.at<double>(26,47)=a[17]; M.at<double>(26,52)=a[14]; M.at<double>(26,55)=b[15]; M.at<double>(26,56)=b[3]; M.at<double>(26,57)=b[18]; M.at<double>(26,60)=b[17]; M.at<double>(26,61)=b[6]; M.at<double>(26,62)=b[2]; M.at<double>(26,63)=b[19]; M.at<double>(26,66)=b[13]; M.at<double>(26,68)=b[4]; M.at<double>(26,72)=b[5]; M.at<double>(26,73)=b[14]; M.at<double>(26,74)=b[8]; M.at<double>(26,78)=b[12]; M.at<double>(26,90)=c[17]; M.at<double>(26,92)=c[12]; M.at<double>(26,93)=c[14]; M.at<double>(26,94)=c[8]; M.at<double>(26,95)=c[5]; M.at<double>(26,97)=c[4]; M.at<double>(26,102)=c[13]; M.at<double>(26,106)=c[3]; M.at<double>(26,111)=c[2]; M.at<double>(26,112)=c[19]; M.at<double>(26,113)=c[15]; M.at<double>(26,114)=c[18]; M.at<double>(26,116)=c[6];
M.at<double>(27,15)=u[3]; M.at<double>(27,29)=a[11]; M.at<double>(27,30)=a[3]; M.at<double>(27,36)=a[7]; M.at<double>(27,39)=a[15]; M.at<double>(27,49)=a[18]; M.at<double>(27,69)=b[9]; M.at<double>(27,70)=b[1]; M.at<double>(27,74)=b[11]; M.at<double>(27,75)=b[3]; M.at<double>(27,76)=b[18]; M.at<double>(27,79)=b[15]; M.at<double>(27,84)=b[7]; M.at<double>(27,88)=c[7]; M.at<double>(27,91)=c[1]; M.at<double>(27,94)=c[11]; M.at<double>(27,96)=c[15]; M.at<double>(27,107)=c[3]; M.at<double>(27,110)=c[9]; M.at<double>(27,119)=c[18];
M.at<double>(28,6)=u[3]; M.at<double>(28,30)=a[11]; M.at<double>(28,35)=a[7]; M.at<double>(28,49)=a[15]; M.at<double>(28,69)=b[1]; M.at<double>(28,75)=b[11]; M.at<double>(28,76)=b[15]; M.at<double>(28,83)=b[7]; M.at<double>(28,107)=c[11]; M.at<double>(28,110)=c[1]; M.at<double>(28,118)=c[7]; M.at<double>(28,119)=c[15];
M.at<double>(29,24)=u[3]; M.at<double>(29,29)=a[3]; M.at<double>(29,30)=a[2]; M.at<double>(29,38)=a[15]; M.at<double>(29,39)=a[18]; M.at<double>(29,52)=a[11]; M.at<double>(29,53)=a[7]; M.at<double>(29,69)=b[13]; M.at<double>(29,70)=b[9]; M.at<double>(29,73)=b[11]; M.at<double>(29,74)=b[3]; M.at<double>(29,75)=b[2]; M.at<double>(29,78)=b[15]; M.at<double>(29,79)=b[18]; M.at<double>(29,82)=b[7]; M.at<double>(29,87)=c[7]; M.at<double>(29,91)=c[9]; M.at<double>(29,92)=c[15]; M.at<double>(29,93)=c[11]; M.at<double>(29,94)=c[3]; M.at<double>(29,96)=c[18]; M.at<double>(29,107)=c[2]; M.at<double>(29,110)=c[13];
M.at<double>(30,37)=a[18]; M.at<double>(30,48)=a[7]; M.at<double>(30,52)=a[2]; M.at<double>(30,70)=b[20]; M.at<double>(30,73)=b[2]; M.at<double>(30,77)=b[18]; M.at<double>(30,81)=b[7]; M.at<double>(30,85)=c[7]; M.at<double>(30,91)=c[20]; M.at<double>(30,93)=c[2]; M.at<double>(30,98)=c[18];
M.at<double>(31,29)=a[2]; M.at<double>(31,37)=a[15]; M.at<double>(31,38)=a[18]; M.at<double>(31,50)=a[7]; M.at<double>(31,52)=a[3]; M.at<double>(31,69)=b[20]; M.at<double>(31,70)=b[13]; M.at<double>(31,73)=b[3]; M.at<double>(31,74)=b[2]; M.at<double>(31,77)=b[15]; M.at<double>(31,78)=b[18]; M.at<double>(31,80)=b[7]; M.at<double>(31,86)=c[7]; M.at<double>(31,91)=c[13]; M.at<double>(31,92)=c[18]; M.at<double>(31,93)=c[3]; M.at<double>(31,94)=c[2]; M.at<double>(31,98)=c[15]; M.at<double>(31,110)=c[20];
M.at<double>(32,48)=a[9]; M.at<double>(32,50)=a[13]; M.at<double>(32,53)=a[20]; M.at<double>(32,80)=b[13]; M.at<double>(32,81)=b[9]; M.at<double>(32,82)=b[20]; M.at<double>(32,85)=c[9]; M.at<double>(32,86)=c[13]; M.at<double>(32,87)=c[20];
M.at<double>(33,29)=a[15]; M.at<double>(33,30)=a[18]; M.at<double>(33,39)=a[7]; M.at<double>(33,64)=b[9]; M.at<double>(33,69)=b[3]; M.at<double>(33,70)=b[11]; M.at<double>(33,74)=b[15]; M.at<double>(33,75)=b[18]; M.at<double>(33,79)=b[7]; M.at<double>(33,91)=c[11]; M.at<double>(33,94)=c[15]; M.at<double>(33,96)=c[7]; M.at<double>(33,103)=c[9]; M.at<double>(33,107)=c[18]; M.at<double>(33,110)=c[3];
M.at<double>(34,29)=a[18]; M.at<double>(34,38)=a[7]; M.at<double>(34,52)=a[15]; M.at<double>(34,64)=b[13]; M.at<double>(34,69)=b[2]; M.at<double>(34,70)=b[3]; M.at<double>(34,73)=b[15]; M.at<double>(34,74)=b[18]; M.at<double>(34,78)=b[7]; M.at<double>(34,91)=c[3]; M.at<double>(34,92)=c[7]; M.at<double>(34,93)=c[15]; M.at<double>(34,94)=c[18]; M.at<double>(34,103)=c[13]; M.at<double>(34,110)=c[2];
M.at<double>(35,37)=a[7]; M.at<double>(35,52)=a[18]; M.at<double>(35,64)=b[20]; M.at<double>(35,70)=b[2]; M.at<double>(35,73)=b[18]; M.at<double>(35,77)=b[7]; M.at<double>(35,91)=c[2]; M.at<double>(35,93)=c[18]; M.at<double>(35,98)=c[7]; M.at<double>(35,103)=c[20];
M.at<double>(36,5)=u[4]; M.at<double>(36,34)=a[12]; M.at<double>(36,41)=a[10]; M.at<double>(36,43)=a[14]; M.at<double>(36,49)=a[16]; M.at<double>(36,51)=a[5]; M.at<double>(36,54)=b[12]; M.at<double>(36,62)=b[10]; M.at<double>(36,65)=b[5]; M.at<double>(36,71)=b[14]; M.at<double>(36,76)=b[16]; M.at<double>(36,89)=c[5]; M.at<double>(36,99)=c[12]; M.at<double>(36,100)=c[14]; M.at<double>(36,101)=c[1]; M.at<double>(36,109)=c[11]; M.at<double>(36,111)=c[10]; M.at<double>(36,119)=c[16];
M.at<double>(37,2)=u[4]; M.at<double>(37,34)=a[14]; M.at<double>(37,35)=a[16]; M.at<double>(37,51)=a[10]; M.at<double>(37,54)=b[14]; M.at<double>(37,65)=b[10]; M.at<double>(37,83)=b[16]; M.at<double>(37,89)=c[10]; M.at<double>(37,99)=c[14]; M.at<double>(37,109)=c[1]; M.at<double>(37,118)=c[16];
M.at<double>(38,30)=a[15]; M.at<double>(38,49)=a[7]; M.at<double>(38,64)=b[1]; M.at<double>(38,69)=b[11]; M.at<double>(38,75)=b[15]; M.at<double>(38,76)=b[7]; M.at<double>(38,103)=c[1]; M.at<double>(38,107)=c[15]; M.at<double>(38,110)=c[11]; M.at<double>(38,119)=c[7];
M.at<double>(39,28)=a[14]; M.at<double>(39,33)=a[16]; M.at<double>(39,51)=a[8]; M.at<double>(39,57)=b[14]; M.at<double>(39,59)=b[16]; M.at<double>(39,65)=b[8]; M.at<double>(39,89)=c[8]; M.at<double>(39,105)=c[9]; M.at<double>(39,108)=c[10]; M.at<double>(39,109)=c[4]; M.at<double>(39,114)=c[14]; M.at<double>(39,117)=c[16];
M.at<double>(40,27)=a[14]; M.at<double>(40,28)=a[8]; M.at<double>(40,32)=a[16]; M.at<double>(40,55)=b[14]; M.at<double>(40,57)=b[8]; M.at<double>(40,61)=b[16]; M.at<double>(40,105)=c[13]; M.at<double>(40,108)=c[4]; M.at<double>(40,109)=c[19]; M.at<double>(40,113)=c[14]; M.at<double>(40,114)=c[8]; M.at<double>(40,116)=c[16];
M.at<double>(41,30)=a[7]; M.at<double>(41,64)=b[11]; M.at<double>(41,69)=b[15]; M.at<double>(41,75)=b[7]; M.at<double>(41,103)=c[11]; M.at<double>(41,107)=c[7]; M.at<double>(41,110)=c[15];
M.at<double>(42,27)=a[8]; M.at<double>(42,31)=a[16]; M.at<double>(42,55)=b[8]; M.at<double>(42,58)=b[16]; M.at<double>(42,105)=c[20]; M.at<double>(42,108)=c[19]; M.at<double>(42,113)=c[8]; M.at<double>(42,115)=c[16];
M.at<double>(43,29)=a[7]; M.at<double>(43,64)=b[3]; M.at<double>(43,69)=b[18]; M.at<double>(43,70)=b[15]; M.at<double>(43,74)=b[7]; M.at<double>(43,91)=c[15]; M.at<double>(43,94)=c[7]; M.at<double>(43,103)=c[3]; M.at<double>(43,110)=c[18];
M.at<double>(44,28)=a[16]; M.at<double>(44,57)=b[16]; M.at<double>(44,105)=c[4]; M.at<double>(44,108)=c[14]; M.at<double>(44,109)=c[8]; M.at<double>(44,114)=c[16];
M.at<double>(45,27)=a[16]; M.at<double>(45,55)=b[16]; M.at<double>(45,105)=c[19]; M.at<double>(45,108)=c[8]; M.at<double>(45,113)=c[16];
M.at<double>(46,52)=a[7]; M.at<double>(46,64)=b[2]; M.at<double>(46,70)=b[18]; M.at<double>(46,73)=b[7]; M.at<double>(46,91)=c[18]; M.at<double>(46,93)=c[7]; M.at<double>(46,103)=c[2];
M.at<double>(47,40)=a[7]; M.at<double>(47,44)=a[18]; M.at<double>(47,52)=a[6]; M.at<double>(47,64)=b[19]; M.at<double>(47,67)=b[2]; M.at<double>(47,68)=b[18]; M.at<double>(47,70)=b[17]; M.at<double>(47,72)=b[7]; M.at<double>(47,73)=b[6]; M.at<double>(47,91)=c[17]; M.at<double>(47,93)=c[6]; M.at<double>(47,95)=c[7]; M.at<double>(47,97)=c[18]; M.at<double>(47,103)=c[19]; M.at<double>(47,104)=c[2];
M.at<double>(48,30)=a[6]; M.at<double>(48,43)=a[7]; M.at<double>(48,45)=a[15]; M.at<double>(48,63)=b[15]; M.at<double>(48,64)=b[10]; M.at<double>(48,67)=b[11]; M.at<double>(48,69)=b[5]; M.at<double>(48,71)=b[7]; M.at<double>(48,75)=b[6]; M.at<double>(48,100)=c[7]; M.at<double>(48,103)=c[10]; M.at<double>(48,104)=c[11]; M.at<double>(48,107)=c[6]; M.at<double>(48,110)=c[5]; M.at<double>(48,112)=c[15];
M.at<double>(49,41)=a[12]; M.at<double>(49,45)=a[16]; M.at<double>(49,46)=a[14]; M.at<double>(49,62)=b[12]; M.at<double>(49,63)=b[16]; M.at<double>(49,66)=b[14]; M.at<double>(49,101)=c[5]; M.at<double>(49,102)=c[14]; M.at<double>(49,105)=c[15]; M.at<double>(49,109)=c[6]; M.at<double>(49,111)=c[12]; M.at<double>(49,112)=c[16];
M.at<double>(50,41)=a[16]; M.at<double>(50,62)=b[16]; M.at<double>(50,101)=c[14]; M.at<double>(50,105)=c[5]; M.at<double>(50,109)=c[12]; M.at<double>(50,111)=c[16];
M.at<double>(51,64)=b[18]; M.at<double>(51,70)=b[7]; M.at<double>(51,91)=c[7]; M.at<double>(51,103)=c[18];
M.at<double>(52,41)=a[6]; M.at<double>(52,45)=a[12]; M.at<double>(52,46)=a[5]; M.at<double>(52,62)=b[6]; M.at<double>(52,63)=b[12]; M.at<double>(52,66)=b[5]; M.at<double>(52,67)=b[14]; M.at<double>(52,69)=b[16]; M.at<double>(52,101)=c[15]; M.at<double>(52,102)=c[5]; M.at<double>(52,104)=c[14]; M.at<double>(52,109)=c[7]; M.at<double>(52,110)=c[16]; M.at<double>(52,111)=c[6]; M.at<double>(52,112)=c[12];
M.at<double>(53,64)=b[15]; M.at<double>(53,69)=b[7]; M.at<double>(53,103)=c[15]; M.at<double>(53,110)=c[7];
M.at<double>(54,105)=c[14]; M.at<double>(54,109)=c[16];
M.at<double>(55,44)=a[7]; M.at<double>(55,64)=b[17]; M.at<double>(55,67)=b[18]; M.at<double>(55,68)=b[7]; M.at<double>(55,70)=b[6]; M.at<double>(55,91)=c[6]; M.at<double>(55,97)=c[7]; M.at<double>(55,103)=c[17]; M.at<double>(55,104)=c[18];
M.at<double>(56,105)=c[8]; M.at<double>(56,108)=c[16];
M.at<double>(57,64)=b[6]; M.at<double>(57,67)=b[7]; M.at<double>(57,103)=c[6]; M.at<double>(57,104)=c[7];
M.at<double>(58,46)=a[7]; M.at<double>(58,64)=b[12]; M.at<double>(58,66)=b[7]; M.at<double>(58,67)=b[6]; M.at<double>(58,102)=c[7]; M.at<double>(58,103)=c[12]; M.at<double>(58,104)=c[6];
M.at<double>(59,8)=u[4]; M.at<double>(59,30)=a[16]; M.at<double>(59,41)=a[5]; M.at<double>(59,43)=a[12]; M.at<double>(59,45)=a[14]; M.at<double>(59,46)=a[10]; M.at<double>(59,51)=a[6]; M.at<double>(59,62)=b[5]; M.at<double>(59,63)=b[14]; M.at<double>(59,65)=b[6]; M.at<double>(59,66)=b[10]; M.at<double>(59,71)=b[12]; M.at<double>(59,75)=b[16]; M.at<double>(59,89)=c[6]; M.at<double>(59,100)=c[12]; M.at<double>(59,101)=c[11]; M.at<double>(59,102)=c[10]; M.at<double>(59,107)=c[16]; M.at<double>(59,109)=c[15]; M.at<double>(59,111)=c[5]; M.at<double>(59,112)=c[14];
M.at<double>(60,8)=u[3]; M.at<double>(60,30)=a[12]; M.at<double>(60,41)=a[15]; M.at<double>(60,43)=a[6]; M.at<double>(60,45)=a[5]; M.at<double>(60,46)=a[11]; M.at<double>(60,51)=a[7]; M.at<double>(60,62)=b[15]; M.at<double>(60,63)=b[5]; M.at<double>(60,65)=b[7]; M.at<double>(60,66)=b[11]; M.at<double>(60,67)=b[10]; M.at<double>(60,69)=b[14]; M.at<double>(60,71)=b[6]; M.at<double>(60,75)=b[12]; M.at<double>(60,89)=c[7]; M.at<double>(60,100)=c[6]; M.at<double>(60,102)=c[11]; M.at<double>(60,104)=c[10]; M.at<double>(60,107)=c[12]; M.at<double>(60,110)=c[14]; M.at<double>(60,111)=c[15]; M.at<double>(60,112)=c[5];
M.at<double>(61,42)=a[16]; M.at<double>(61,56)=b[16]; M.at<double>(61,101)=c[8]; M.at<double>(61,105)=c[17]; M.at<double>(61,106)=c[16]; M.at<double>(61,108)=c[12];
M.at<double>(62,64)=b[7]; M.at<double>(62,103)=c[7];
M.at<double>(63,105)=c[16];
M.at<double>(64,46)=a[12]; M.at<double>(64,66)=b[12]; M.at<double>(64,67)=b[16]; M.at<double>(64,101)=c[6]; M.at<double>(64,102)=c[12]; M.at<double>(64,104)=c[16]; M.at<double>(64,105)=c[7];
M.at<double>(65,46)=a[6]; M.at<double>(65,64)=b[16]; M.at<double>(65,66)=b[6]; M.at<double>(65,67)=b[12]; M.at<double>(65,101)=c[7]; M.at<double>(65,102)=c[6]; M.at<double>(65,103)=c[16]; M.at<double>(65,104)=c[12];
M.at<double>(66,46)=a[16]; M.at<double>(66,66)=b[16]; M.at<double>(66,101)=c[12]; M.at<double>(66,102)=c[16]; M.at<double>(66,105)=c[6];
M.at<double>(67,101)=c[16]; M.at<double>(67,105)=c[12];
M.at<double>(68,41)=a[14]; M.at<double>(68,43)=a[16]; M.at<double>(68,51)=a[12]; M.at<double>(68,62)=b[14]; M.at<double>(68,65)=b[12]; M.at<double>(68,71)=b[16]; M.at<double>(68,89)=c[12]; M.at<double>(68,100)=c[16]; M.at<double>(68,101)=c[10]; M.at<double>(68,105)=c[11]; M.at<double>(68,109)=c[5]; M.at<double>(68,111)=c[14];
M.at<double>(69,37)=a[2]; M.at<double>(69,48)=a[18]; M.at<double>(69,52)=a[20]; M.at<double>(69,73)=b[20]; M.at<double>(69,77)=b[2]; M.at<double>(69,81)=b[18]; M.at<double>(69,85)=c[18]; M.at<double>(69,93)=c[20]; M.at<double>(69,98)=c[2];
M.at<double>(70,20)=u[2]; M.at<double>(70,27)=a[9]; M.at<double>(70,28)=a[13]; M.at<double>(70,31)=a[10]; M.at<double>(70,32)=a[4]; M.at<double>(70,33)=a[19]; M.at<double>(70,50)=a[14]; M.at<double>(70,51)=a[20]; M.at<double>(70,53)=a[8]; M.at<double>(70,55)=b[9]; M.at<double>(70,57)=b[13]; M.at<double>(70,58)=b[10]; M.at<double>(70,59)=b[19]; M.at<double>(70,61)=b[4]; M.at<double>(70,65)=b[20]; M.at<double>(70,80)=b[14]; M.at<double>(70,82)=b[8]; M.at<double>(70,86)=c[14]; M.at<double>(70,87)=c[8]; M.at<double>(70,89)=c[20]; M.at<double>(70,113)=c[9]; M.at<double>(70,114)=c[13]; M.at<double>(70,115)=c[10]; M.at<double>(70,116)=c[4]; M.at<double>(70,117)=c[19];
M.at<double>(71,45)=a[7]; M.at<double>(71,63)=b[7]; M.at<double>(71,64)=b[5]; M.at<double>(71,67)=b[15]; M.at<double>(71,69)=b[6]; M.at<double>(71,103)=c[5]; M.at<double>(71,104)=c[15]; M.at<double>(71,110)=c[6]; M.at<double>(71,112)=c[7];
M.at<double>(72,41)=a[7]; M.at<double>(72,45)=a[6]; M.at<double>(72,46)=a[15]; M.at<double>(72,62)=b[7]; M.at<double>(72,63)=b[6]; M.at<double>(72,64)=b[14]; M.at<double>(72,66)=b[15]; M.at<double>(72,67)=b[5]; M.at<double>(72,69)=b[12]; M.at<double>(72,102)=c[15]; M.at<double>(72,103)=c[14]; M.at<double>(72,104)=c[5]; M.at<double>(72,110)=c[12]; M.at<double>(72,111)=c[7]; M.at<double>(72,112)=c[6];
M.at<double>(73,48)=a[13]; M.at<double>(73,50)=a[20]; M.at<double>(73,80)=b[20]; M.at<double>(73,81)=b[13]; M.at<double>(73,85)=c[13]; M.at<double>(73,86)=c[20];
M.at<double>(74,25)=u[3]; M.at<double>(74,29)=a[17]; M.at<double>(74,32)=a[7]; M.at<double>(74,38)=a[6]; M.at<double>(74,40)=a[15]; M.at<double>(74,44)=a[3]; M.at<double>(74,45)=a[2]; M.at<double>(74,47)=a[18]; M.at<double>(74,52)=a[5]; M.at<double>(74,60)=b[18]; M.at<double>(74,61)=b[7]; M.at<double>(74,63)=b[2]; M.at<double>(74,67)=b[13]; M.at<double>(74,68)=b[3]; M.at<double>(74,69)=b[19]; M.at<double>(74,70)=b[4]; M.at<double>(74,72)=b[15]; M.at<double>(74,73)=b[5]; M.at<double>(74,74)=b[17]; M.at<double>(74,78)=b[6]; M.at<double>(74,90)=c[18]; M.at<double>(74,91)=c[4]; M.at<double>(74,92)=c[6]; M.at<double>(74,93)=c[5]; M.at<double>(74,94)=c[17]; M.at<double>(74,95)=c[15]; M.at<double>(74,97)=c[3]; M.at<double>(74,104)=c[13]; M.at<double>(74,110)=c[19]; M.at<double>(74,112)=c[2]; M.at<double>(74,116)=c[7];
M.at<double>(75,21)=u[2]; M.at<double>(75,36)=a[2]; M.at<double>(75,37)=a[1]; M.at<double>(75,38)=a[9]; M.at<double>(75,39)=a[13]; M.at<double>(75,49)=a[20]; M.at<double>(75,50)=a[11]; M.at<double>(75,53)=a[3]; M.at<double>(75,76)=b[20]; M.at<double>(75,77)=b[1]; M.at<double>(75,78)=b[9]; M.at<double>(75,79)=b[13]; M.at<double>(75,80)=b[11]; M.at<double>(75,82)=b[3]; M.at<double>(75,84)=b[2]; M.at<double>(75,86)=c[11]; M.at<double>(75,87)=c[3]; M.at<double>(75,88)=c[2]; M.at<double>(75,92)=c[9]; M.at<double>(75,96)=c[13]; M.at<double>(75,98)=c[1]; M.at<double>(75,119)=c[20];
M.at<double>(76,48)=a[20]; M.at<double>(76,81)=b[20]; M.at<double>(76,85)=c[20];
M.at<double>(77,34)=a[16]; M.at<double>(77,51)=a[14]; M.at<double>(77,54)=b[16]; M.at<double>(77,65)=b[14]; M.at<double>(77,89)=c[14]; M.at<double>(77,99)=c[16]; M.at<double>(77,105)=c[1]; M.at<double>(77,109)=c[10];
M.at<double>(78,27)=a[17]; M.at<double>(78,31)=a[12]; M.at<double>(78,37)=a[16]; M.at<double>(78,40)=a[8]; M.at<double>(78,42)=a[19]; M.at<double>(78,55)=b[17]; M.at<double>(78,56)=b[19]; M.at<double>(78,58)=b[12]; M.at<double>(78,72)=b[8]; M.at<double>(78,77)=b[16]; M.at<double>(78,95)=c[8]; M.at<double>(78,98)=c[16]; M.at<double>(78,101)=c[20]; M.at<double>(78,106)=c[19]; M.at<double>(78,108)=c[2]; M.at<double>(78,113)=c[17]; M.at<double>(78,115)=c[12];
M.at<double>(79,42)=a[12]; M.at<double>(79,44)=a[16]; M.at<double>(79,46)=a[8]; M.at<double>(79,56)=b[12]; M.at<double>(79,66)=b[8]; M.at<double>(79,68)=b[16]; M.at<double>(79,97)=c[16]; M.at<double>(79,101)=c[17]; M.at<double>(79,102)=c[8]; M.at<double>(79,105)=c[18]; M.at<double>(79,106)=c[12]; M.at<double>(79,108)=c[6];
M.at<double>(80,14)=u[4]; M.at<double>(80,28)=a[5]; M.at<double>(80,33)=a[12]; M.at<double>(80,39)=a[16]; M.at<double>(80,41)=a[4]; M.at<double>(80,42)=a[10]; M.at<double>(80,43)=a[8]; M.at<double>(80,47)=a[14]; M.at<double>(80,51)=a[17]; M.at<double>(80,56)=b[10]; M.at<double>(80,57)=b[5]; M.at<double>(80,59)=b[12]; M.at<double>(80,60)=b[14]; M.at<double>(80,62)=b[4]; M.at<double>(80,65)=b[17]; M.at<double>(80,71)=b[8]; M.at<double>(80,79)=b[16]; M.at<double>(80,89)=c[17]; M.at<double>(80,90)=c[14]; M.at<double>(80,96)=c[16]; M.at<double>(80,100)=c[8]; M.at<double>(80,101)=c[9]; M.at<double>(80,106)=c[10]; M.at<double>(80,108)=c[11]; M.at<double>(80,109)=c[3]; M.at<double>(80,111)=c[4]; M.at<double>(80,114)=c[5]; M.at<double>(80,117)=c[12];
M.at<double>(81,31)=a[3]; M.at<double>(81,32)=a[2]; M.at<double>(81,37)=a[4]; M.at<double>(81,38)=a[19]; M.at<double>(81,40)=a[13]; M.at<double>(81,47)=a[20]; M.at<double>(81,48)=a[5]; M.at<double>(81,50)=a[17]; M.at<double>(81,58)=b[3]; M.at<double>(81,60)=b[20]; M.at<double>(81,61)=b[2]; M.at<double>(81,72)=b[13]; M.at<double>(81,77)=b[4]; M.at<double>(81,78)=b[19]; M.at<double>(81,80)=b[17]; M.at<double>(81,81)=b[5]; M.at<double>(81,85)=c[5]; M.at<double>(81,86)=c[17]; M.at<double>(81,90)=c[20]; M.at<double>(81,92)=c[19]; M.at<double>(81,95)=c[13]; M.at<double>(81,98)=c[4]; M.at<double>(81,115)=c[3]; M.at<double>(81,116)=c[2];
M.at<double>(82,29)=a[6]; M.at<double>(82,44)=a[15]; M.at<double>(82,45)=a[18]; M.at<double>(82,47)=a[7]; M.at<double>(82,60)=b[7]; M.at<double>(82,63)=b[18]; M.at<double>(82,64)=b[4]; M.at<double>(82,67)=b[3]; M.at<double>(82,68)=b[15]; M.at<double>(82,69)=b[17]; M.at<double>(82,70)=b[5]; M.at<double>(82,74)=b[6]; M.at<double>(82,90)=c[7]; M.at<double>(82,91)=c[5]; M.at<double>(82,94)=c[6]; M.at<double>(82,97)=c[15]; M.at<double>(82,103)=c[4]; M.at<double>(82,104)=c[3]; M.at<double>(82,110)=c[17]; M.at<double>(82,112)=c[18];
M.at<double>(83,26)=u[2]; M.at<double>(83,27)=a[18]; M.at<double>(83,31)=a[6]; M.at<double>(83,37)=a[12]; M.at<double>(83,40)=a[17]; M.at<double>(83,42)=a[2]; M.at<double>(83,44)=a[19]; M.at<double>(83,46)=a[20]; M.at<double>(83,52)=a[8]; M.at<double>(83,55)=b[18]; M.at<double>(83,56)=b[2]; M.at<double>(83,58)=b[6]; M.at<double>(83,66)=b[20]; M.at<double>(83,68)=b[19]; M.at<double>(83,72)=b[17]; M.at<double>(83,73)=b[8]; M.at<double>(83,77)=b[12]; M.at<double>(83,93)=c[8]; M.at<double>(83,95)=c[17]; M.at<double>(83,97)=c[19]; M.at<double>(83,98)=c[12]; M.at<double>(83,102)=c[20]; M.at<double>(83,106)=c[2]; M.at<double>(83,113)=c[18]; M.at<double>(83,115)=c[6];
M.at<double>(84,16)=u[3]; M.at<double>(84,29)=a[5]; M.at<double>(84,30)=a[17]; M.at<double>(84,33)=a[7]; M.at<double>(84,39)=a[6]; M.at<double>(84,43)=a[18]; M.at<double>(84,44)=a[11]; M.at<double>(84,45)=a[3]; M.at<double>(84,47)=a[15]; M.at<double>(84,59)=b[7]; M.at<double>(84,60)=b[15]; M.at<double>(84,63)=b[3]; M.at<double>(84,67)=b[9]; M.at<double>(84,68)=b[11]; M.at<double>(84,69)=b[4]; M.at<double>(84,70)=b[10]; M.at<double>(84,71)=b[18]; M.at<double>(84,74)=b[5]; M.at<double>(84,75)=b[17]; M.at<double>(84,79)=b[6]; M.at<double>(84,90)=c[15]; M.at<double>(84,91)=c[10]; M.at<double>(84,94)=c[5]; M.at<double>(84,96)=c[6]; M.at<double>(84,97)=c[11]; M.at<double>(84,100)=c[18]; M.at<double>(84,104)=c[9]; M.at<double>(84,107)=c[17]; M.at<double>(84,110)=c[4]; M.at<double>(84,112)=c[3]; M.at<double>(84,117)=c[7];
M.at<double>(85,25)=u[2]; M.at<double>(85,29)=a[19]; M.at<double>(85,31)=a[15]; M.at<double>(85,32)=a[18]; M.at<double>(85,37)=a[5]; M.at<double>(85,38)=a[17]; M.at<double>(85,40)=a[3]; M.at<double>(85,44)=a[13]; M.at<double>(85,45)=a[20]; M.at<double>(85,47)=a[2]; M.at<double>(85,50)=a[6]; M.at<double>(85,52)=a[4]; M.at<double>(85,58)=b[15]; M.at<double>(85,60)=b[2]; M.at<double>(85,61)=b[18]; M.at<double>(85,63)=b[20]; M.at<double>(85,68)=b[13]; M.at<double>(85,72)=b[3]; M.at<double>(85,73)=b[4]; M.at<double>(85,74)=b[19]; M.at<double>(85,77)=b[5]; M.at<double>(85,78)=b[17]; M.at<double>(85,80)=b[6]; M.at<double>(85,86)=c[6]; M.at<double>(85,90)=c[2]; M.at<double>(85,92)=c[17]; M.at<double>(85,93)=c[4]; M.at<double>(85,94)=c[19]; M.at<double>(85,95)=c[3]; M.at<double>(85,97)=c[13]; M.at<double>(85,98)=c[5]; M.at<double>(85,112)=c[20]; M.at<double>(85,115)=c[15]; M.at<double>(85,116)=c[18];
M.at<double>(86,31)=a[18]; M.at<double>(86,37)=a[17]; M.at<double>(86,40)=a[2]; M.at<double>(86,44)=a[20]; M.at<double>(86,48)=a[6]; M.at<double>(86,52)=a[19]; M.at<double>(86,58)=b[18]; M.at<double>(86,68)=b[20]; M.at<double>(86,72)=b[2]; M.at<double>(86,73)=b[19]; M.at<double>(86,77)=b[17]; M.at<double>(86,81)=b[6]; M.at<double>(86,85)=c[6]; M.at<double>(86,93)=c[19]; M.at<double>(86,95)=c[2]; M.at<double>(86,97)=c[20]; M.at<double>(86,98)=c[17]; M.at<double>(86,115)=c[18];
M.at<double>(87,22)=u[2]; M.at<double>(87,31)=a[11]; M.at<double>(87,32)=a[3]; M.at<double>(87,33)=a[2]; M.at<double>(87,37)=a[10]; M.at<double>(87,38)=a[4]; M.at<double>(87,39)=a[19]; M.at<double>(87,40)=a[9]; M.at<double>(87,43)=a[20]; M.at<double>(87,47)=a[13]; M.at<double>(87,50)=a[5]; M.at<double>(87,53)=a[17]; M.at<double>(87,58)=b[11]; M.at<double>(87,59)=b[2]; M.at<double>(87,60)=b[13]; M.at<double>(87,61)=b[3]; M.at<double>(87,71)=b[20]; M.at<double>(87,72)=b[9]; M.at<double>(87,77)=b[10]; M.at<double>(87,78)=b[4]; M.at<double>(87,79)=b[19]; M.at<double>(87,80)=b[5]; M.at<double>(87,82)=b[17]; M.at<double>(87,86)=c[5]; M.at<double>(87,87)=c[17]; M.at<double>(87,90)=c[13]; M.at<double>(87,92)=c[4]; M.at<double>(87,95)=c[9]; M.at<double>(87,96)=c[19]; M.at<double>(87,98)=c[10]; M.at<double>(87,100)=c[20]; M.at<double>(87,115)=c[11]; M.at<double>(87,116)=c[3]; M.at<double>(87,117)=c[2];
M.at<double>(88,27)=a[2]; M.at<double>(88,31)=a[17]; M.at<double>(88,37)=a[8]; M.at<double>(88,40)=a[19]; M.at<double>(88,42)=a[20]; M.at<double>(88,48)=a[12]; M.at<double>(88,55)=b[2]; M.at<double>(88,56)=b[20]; M.at<double>(88,58)=b[17]; M.at<double>(88,72)=b[19]; M.at<double>(88,77)=b[8]; M.at<double>(88,81)=b[12]; M.at<double>(88,85)=c[12]; M.at<double>(88,95)=c[19]; M.at<double>(88,98)=c[8]; M.at<double>(88,106)=c[20]; M.at<double>(88,113)=c[2]; M.at<double>(88,115)=c[17];
M.at<double>(89,31)=a[7]; M.at<double>(89,37)=a[6]; M.at<double>(89,40)=a[18]; M.at<double>(89,44)=a[2]; M.at<double>(89,52)=a[17]; M.at<double>(89,58)=b[7]; M.at<double>(89,67)=b[20]; M.at<double>(89,68)=b[2]; M.at<double>(89,70)=b[19]; M.at<double>(89,72)=b[18]; M.at<double>(89,73)=b[17]; M.at<double>(89,77)=b[6]; M.at<double>(89,91)=c[19]; M.at<double>(89,93)=c[17]; M.at<double>(89,95)=c[18]; M.at<double>(89,97)=c[2]; M.at<double>(89,98)=c[6]; M.at<double>(89,104)=c[20]; M.at<double>(89,115)=c[7];
M.at<double>(90,27)=a[12]; M.at<double>(90,40)=a[16]; M.at<double>(90,42)=a[8]; M.at<double>(90,55)=b[12]; M.at<double>(90,56)=b[8]; M.at<double>(90,72)=b[16]; M.at<double>(90,95)=c[16]; M.at<double>(90,101)=c[19]; M.at<double>(90,105)=c[2]; M.at<double>(90,106)=c[8]; M.at<double>(90,108)=c[17]; M.at<double>(90,113)=c[12];
M.at<double>(91,23)=u[2]; M.at<double>(91,27)=a[3]; M.at<double>(91,28)=a[2]; M.at<double>(91,31)=a[5]; M.at<double>(91,32)=a[17]; M.at<double>(91,37)=a[14]; M.at<double>(91,38)=a[8]; M.at<double>(91,40)=a[4]; M.at<double>(91,41)=a[20]; M.at<double>(91,42)=a[13]; M.at<double>(91,47)=a[19]; M.at<double>(91,50)=a[12]; M.at<double>(91,55)=b[3]; M.at<double>(91,56)=b[13]; M.at<double>(91,57)=b[2]; M.at<double>(91,58)=b[5]; M.at<double>(91,60)=b[19]; M.at<double>(91,61)=b[17]; M.at<double>(91,62)=b[20]; M.at<double>(91,72)=b[4]; M.at<double>(91,77)=b[14]; M.at<double>(91,78)=b[8]; M.at<double>(91,80)=b[12]; M.at<double>(91,86)=c[12]; M.at<double>(91,90)=c[19]; M.at<double>(91,92)=c[8]; M.at<double>(91,95)=c[4]; M.at<double>(91,98)=c[14]; M.at<double>(91,106)=c[13]; M.at<double>(91,111)=c[20]; M.at<double>(91,113)=c[3]; M.at<double>(91,114)=c[2]; M.at<double>(91,115)=c[5]; M.at<double>(91,116)=c[17];
M.at<double>(92,17)=u[4]; M.at<double>(92,28)=a[6]; M.at<double>(92,29)=a[16]; M.at<double>(92,41)=a[17]; M.at<double>(92,42)=a[5]; M.at<double>(92,44)=a[14]; M.at<double>(92,45)=a[8]; M.at<double>(92,46)=a[4]; M.at<double>(92,47)=a[12]; M.at<double>(92,56)=b[5]; M.at<double>(92,57)=b[6]; M.at<double>(92,60)=b[12]; M.at<double>(92,62)=b[17]; M.at<double>(92,63)=b[8]; M.at<double>(92,66)=b[4]; M.at<double>(92,68)=b[14]; M.at<double>(92,74)=b[16]; M.at<double>(92,90)=c[12]; M.at<double>(92,94)=c[16]; M.at<double>(92,97)=c[14]; M.at<double>(92,101)=c[3]; M.at<double>(92,102)=c[4]; M.at<double>(92,106)=c[5]; M.at<double>(92,108)=c[15]; M.at<double>(92,109)=c[18]; M.at<double>(92,111)=c[17]; M.at<double>(92,112)=c[8]; M.at<double>(92,114)=c[6];
M.at<double>(93,17)=u[3]; M.at<double>(93,28)=a[7]; M.at<double>(93,29)=a[12]; M.at<double>(93,41)=a[18]; M.at<double>(93,42)=a[15]; M.at<double>(93,44)=a[5]; M.at<double>(93,45)=a[17]; M.at<double>(93,46)=a[3]; M.at<double>(93,47)=a[6]; M.at<double>(93,56)=b[15]; M.at<double>(93,57)=b[7]; M.at<double>(93,60)=b[6]; M.at<double>(93,62)=b[18]; M.at<double>(93,63)=b[17]; M.at<double>(93,66)=b[3]; M.at<double>(93,67)=b[4]; M.at<double>(93,68)=b[5]; M.at<double>(93,69)=b[8]; M.at<double>(93,70)=b[14]; M.at<double>(93,74)=b[12]; M.at<double>(93,90)=c[6]; M.at<double>(93,91)=c[14]; M.at<double>(93,94)=c[12]; M.at<double>(93,97)=c[5]; M.at<double>(93,102)=c[3]; M.at<double>(93,104)=c[4]; M.at<double>(93,106)=c[15]; M.at<double>(93,110)=c[8]; M.at<double>(93,111)=c[18]; M.at<double>(93,112)=c[17]; M.at<double>(93,114)=c[7];
M.at<double>(94,31)=a[2]; M.at<double>(94,37)=a[19]; M.at<double>(94,40)=a[20]; M.at<double>(94,48)=a[17]; M.at<double>(94,58)=b[2]; M.at<double>(94,72)=b[20]; M.at<double>(94,77)=b[19]; M.at<double>(94,81)=b[17]; M.at<double>(94,85)=c[17]; M.at<double>(94,95)=c[20]; M.at<double>(94,98)=c[19]; M.at<double>(94,115)=c[2];
M.at<double>(95,26)=u[4]; M.at<double>(95,27)=a[6]; M.at<double>(95,40)=a[12]; M.at<double>(95,42)=a[17]; M.at<double>(95,44)=a[8]; M.at<double>(95,46)=a[19]; M.at<double>(95,52)=a[16]; M.at<double>(95,55)=b[6]; M.at<double>(95,56)=b[17]; M.at<double>(95,66)=b[19]; M.at<double>(95,68)=b[8]; M.at<double>(95,72)=b[12]; M.at<double>(95,73)=b[16]; M.at<double>(95,93)=c[16]; M.at<double>(95,95)=c[12]; M.at<double>(95,97)=c[8]; M.at<double>(95,101)=c[2]; M.at<double>(95,102)=c[19]; M.at<double>(95,106)=c[17]; M.at<double>(95,108)=c[18]; M.at<double>(95,113)=c[6];
M.at<double>(96,23)=u[4]; M.at<double>(96,27)=a[5]; M.at<double>(96,28)=a[17]; M.at<double>(96,32)=a[12]; M.at<double>(96,38)=a[16]; M.at<double>(96,40)=a[14]; M.at<double>(96,41)=a[19]; M.at<double>(96,42)=a[4]; M.at<double>(96,47)=a[8]; M.at<double>(96,55)=b[5]; M.at<double>(96,56)=b[4]; M.at<double>(96,57)=b[17]; M.at<double>(96,60)=b[8]; M.at<double>(96,61)=b[12]; M.at<double>(96,62)=b[19]; M.at<double>(96,72)=b[14]; M.at<double>(96,78)=b[16]; M.at<double>(96,90)=c[8]; M.at<double>(96,92)=c[16]; M.at<double>(96,95)=c[14]; M.at<double>(96,101)=c[13]; M.at<double>(96,106)=c[4]; M.at<double>(96,108)=c[3]; M.at<double>(96,109)=c[2]; M.at<double>(96,111)=c[19]; M.at<double>(96,113)=c[5]; M.at<double>(96,114)=c[17]; M.at<double>(96,116)=c[12];
M.at<double>(97,42)=a[6]; M.at<double>(97,44)=a[12]; M.at<double>(97,46)=a[17]; M.at<double>(97,56)=b[6]; M.at<double>(97,66)=b[17]; M.at<double>(97,67)=b[8]; M.at<double>(97,68)=b[12]; M.at<double>(97,70)=b[16]; M.at<double>(97,91)=c[16]; M.at<double>(97,97)=c[12]; M.at<double>(97,101)=c[18]; M.at<double>(97,102)=c[17]; M.at<double>(97,104)=c[8]; M.at<double>(97,106)=c[6]; M.at<double>(97,108)=c[7];
M.at<double>(98,28)=a[12]; M.at<double>(98,41)=a[8]; M.at<double>(98,42)=a[14]; M.at<double>(98,47)=a[16]; M.at<double>(98,56)=b[14]; M.at<double>(98,57)=b[12]; M.at<double>(98,60)=b[16]; M.at<double>(98,62)=b[8]; M.at<double>(98,90)=c[16]; M.at<double>(98,101)=c[4]; M.at<double>(98,105)=c[3]; M.at<double>(98,106)=c[14]; M.at<double>(98,108)=c[5]; M.at<double>(98,109)=c[17]; M.at<double>(98,111)=c[8]; M.at<double>(98,114)=c[12];
M.at<double>(99,42)=a[7]; M.at<double>(99,44)=a[6]; M.at<double>(99,46)=a[18]; M.at<double>(99,56)=b[7]; M.at<double>(99,64)=b[8]; M.at<double>(99,66)=b[18]; M.at<double>(99,67)=b[17]; M.at<double>(99,68)=b[6]; M.at<double>(99,70)=b[12]; M.at<double>(99,91)=c[12]; M.at<double>(99,97)=c[6]; M.at<double>(99,102)=c[18]; M.at<double>(99,103)=c[8]; M.at<double>(99,104)=c[17]; M.at<double>(99,106)=c[7];
M.at<double>(100,51)=a[16]; M.at<double>(100,65)=b[16]; M.at<double>(100,89)=c[16]; M.at<double>(100,105)=c[10]; M.at<double>(100,109)=c[14];
M.at<double>(101,37)=a[9]; M.at<double>(101,38)=a[13]; M.at<double>(101,39)=a[20]; M.at<double>(101,48)=a[11]; M.at<double>(101,50)=a[3]; M.at<double>(101,53)=a[2]; M.at<double>(101,77)=b[9]; M.at<double>(101,78)=b[13]; M.at<double>(101,79)=b[20]; M.at<double>(101,80)=b[3]; M.at<double>(101,81)=b[11]; M.at<double>(101,82)=b[2]; M.at<double>(101,85)=c[11]; M.at<double>(101,86)=c[3]; M.at<double>(101,87)=c[2]; M.at<double>(101,92)=c[13]; M.at<double>(101,96)=c[20]; M.at<double>(101,98)=c[9];
M.at<double>(102,37)=a[13]; M.at<double>(102,38)=a[20]; M.at<double>(102,48)=a[3]; M.at<double>(102,50)=a[2]; M.at<double>(102,77)=b[13]; M.at<double>(102,78)=b[20]; M.at<double>(102,80)=b[2]; M.at<double>(102,81)=b[3]; M.at<double>(102,85)=c[3]; M.at<double>(102,86)=c[2]; M.at<double>(102,92)=c[20]; M.at<double>(102,98)=c[13];
M.at<double>(103,37)=a[20]; M.at<double>(103,48)=a[2]; M.at<double>(103,77)=b[20]; M.at<double>(103,81)=b[2]; M.at<double>(103,85)=c[2]; M.at<double>(103,98)=c[20];
M.at<double>(104,11)=u[4]; M.at<double>(104,28)=a[10]; M.at<double>(104,33)=a[14]; M.at<double>(104,34)=a[8]; M.at<double>(104,36)=a[16]; M.at<double>(104,51)=a[4]; M.at<double>(104,54)=b[8]; M.at<double>(104,57)=b[10]; M.at<double>(104,59)=b[14]; M.at<double>(104,65)=b[4]; M.at<double>(104,84)=b[16]; M.at<double>(104,88)=c[16]; M.at<double>(104,89)=c[4]; M.at<double>(104,99)=c[8]; M.at<double>(104,108)=c[1]; M.at<double>(104,109)=c[9]; M.at<double>(104,114)=c[10]; M.at<double>(104,117)=c[14];
M.at<double>(105,20)=u[4]; M.at<double>(105,27)=a[10]; M.at<double>(105,28)=a[4]; M.at<double>(105,32)=a[14]; M.at<double>(105,33)=a[8]; M.at<double>(105,51)=a[19]; M.at<double>(105,53)=a[16]; M.at<double>(105,55)=b[10]; M.at<double>(105,57)=b[4]; M.at<double>(105,59)=b[8]; M.at<double>(105,61)=b[14]; M.at<double>(105,65)=b[19]; M.at<double>(105,82)=b[16]; M.at<double>(105,87)=c[16]; M.at<double>(105,89)=c[19]; M.at<double>(105,108)=c[9]; M.at<double>(105,109)=c[13]; M.at<double>(105,113)=c[10]; M.at<double>(105,114)=c[4]; M.at<double>(105,116)=c[14]; M.at<double>(105,117)=c[8];
M.at<double>(106,27)=a[4]; M.at<double>(106,28)=a[19]; M.at<double>(106,31)=a[14]; M.at<double>(106,32)=a[8]; M.at<double>(106,50)=a[16]; M.at<double>(106,55)=b[4]; M.at<double>(106,57)=b[19]; M.at<double>(106,58)=b[14]; M.at<double>(106,61)=b[8]; M.at<double>(106,80)=b[16]; M.at<double>(106,86)=c[16]; M.at<double>(106,108)=c[13]; M.at<double>(106,109)=c[20]; M.at<double>(106,113)=c[4]; M.at<double>(106,114)=c[19]; M.at<double>(106,115)=c[14]; M.at<double>(106,116)=c[8];
M.at<double>(107,27)=a[19]; M.at<double>(107,31)=a[8]; M.at<double>(107,48)=a[16]; M.at<double>(107,55)=b[19]; M.at<double>(107,58)=b[8]; M.at<double>(107,81)=b[16]; M.at<double>(107,85)=c[16]; M.at<double>(107,108)=c[20]; M.at<double>(107,113)=c[19]; M.at<double>(107,115)=c[8];
M.at<double>(108,36)=a[20]; M.at<double>(108,48)=a[1]; M.at<double>(108,50)=a[9]; M.at<double>(108,53)=a[13]; M.at<double>(108,80)=b[9]; M.at<double>(108,81)=b[1]; M.at<double>(108,82)=b[13]; M.at<double>(108,84)=b[20]; M.at<double>(108,85)=c[1]; M.at<double>(108,86)=c[9]; M.at<double>(108,87)=c[13]; M.at<double>(108,88)=c[20];
M.at<double>(109,26)=u[3]; M.at<double>(109,27)=a[7]; M.at<double>(109,40)=a[6]; M.at<double>(109,42)=a[18]; M.at<double>(109,44)=a[17]; M.at<double>(109,46)=a[2]; M.at<double>(109,52)=a[12]; M.at<double>(109,55)=b[7]; M.at<double>(109,56)=b[18]; M.at<double>(109,66)=b[2]; M.at<double>(109,67)=b[19]; M.at<double>(109,68)=b[17]; M.at<double>(109,70)=b[8]; M.at<double>(109,72)=b[6]; M.at<double>(109,73)=b[12]; M.at<double>(109,91)=c[8]; M.at<double>(109,93)=c[12]; M.at<double>(109,95)=c[6]; M.at<double>(109,97)=c[17]; M.at<double>(109,102)=c[2]; M.at<double>(109,104)=c[19]; M.at<double>(109,106)=c[18]; M.at<double>(109,113)=c[7];
M.at<double>(110,7)=u[3]; M.at<double>(110,30)=a[5]; M.at<double>(110,34)=a[7]; M.at<double>(110,43)=a[15]; M.at<double>(110,45)=a[11]; M.at<double>(110,49)=a[6]; M.at<double>(110,54)=b[7]; M.at<double>(110,63)=b[11]; M.at<double>(110,67)=b[1]; M.at<double>(110,69)=b[10]; M.at<double>(110,71)=b[15]; M.at<double>(110,75)=b[5]; M.at<double>(110,76)=b[6]; M.at<double>(110,99)=c[7]; M.at<double>(110,100)=c[15]; M.at<double>(110,104)=c[1]; M.at<double>(110,107)=c[5]; M.at<double>(110,110)=c[10]; M.at<double>(110,112)=c[11]; M.at<double>(110,119)=c[6];
M.at<double>(111,18)=u[2]; M.at<double>(111,35)=a[20]; M.at<double>(111,36)=a[13]; M.at<double>(111,50)=a[1]; M.at<double>(111,53)=a[9]; M.at<double>(111,80)=b[1]; M.at<double>(111,82)=b[9]; M.at<double>(111,83)=b[20]; M.at<double>(111,84)=b[13]; M.at<double>(111,86)=c[1]; M.at<double>(111,87)=c[9]; M.at<double>(111,88)=c[13]; M.at<double>(111,118)=c[20];
M.at<double>(112,19)=u[2]; M.at<double>(112,31)=a[1]; M.at<double>(112,32)=a[9]; M.at<double>(112,33)=a[13]; M.at<double>(112,34)=a[20]; M.at<double>(112,36)=a[19]; M.at<double>(112,50)=a[10]; M.at<double>(112,53)=a[4]; M.at<double>(112,54)=b[20]; M.at<double>(112,58)=b[1]; M.at<double>(112,59)=b[13]; M.at<double>(112,61)=b[9]; M.at<double>(112,80)=b[10]; M.at<double>(112,82)=b[4]; M.at<double>(112,84)=b[19]; M.at<double>(112,86)=c[10]; M.at<double>(112,87)=c[4]; M.at<double>(112,88)=c[19]; M.at<double>(112,99)=c[20]; M.at<double>(112,115)=c[1]; M.at<double>(112,116)=c[9]; M.at<double>(112,117)=c[13];
M.at<double>(113,31)=a[9]; M.at<double>(113,32)=a[13]; M.at<double>(113,33)=a[20]; M.at<double>(113,48)=a[10]; M.at<double>(113,50)=a[4]; M.at<double>(113,53)=a[19]; M.at<double>(113,58)=b[9]; M.at<double>(113,59)=b[20]; M.at<double>(113,61)=b[13]; M.at<double>(113,80)=b[4]; M.at<double>(113,81)=b[10]; M.at<double>(113,82)=b[19]; M.at<double>(113,85)=c[10]; M.at<double>(113,86)=c[4]; M.at<double>(113,87)=c[19]; M.at<double>(113,115)=c[9]; M.at<double>(113,116)=c[13]; M.at<double>(113,117)=c[20];
M.at<double>(114,31)=a[13]; M.at<double>(114,32)=a[20]; M.at<double>(114,48)=a[4]; M.at<double>(114,50)=a[19]; M.at<double>(114,58)=b[13]; M.at<double>(114,61)=b[20]; M.at<double>(114,80)=b[19]; M.at<double>(114,81)=b[4]; M.at<double>(114,85)=c[4]; M.at<double>(114,86)=c[19]; M.at<double>(114,115)=c[13]; M.at<double>(114,116)=c[20];
M.at<double>(115,31)=a[20]; M.at<double>(115,48)=a[19]; M.at<double>(115,58)=b[20]; M.at<double>(115,81)=b[19]; M.at<double>(115,85)=c[19]; M.at<double>(115,115)=c[20];
M.at<double>(116,24)=u[2]; M.at<double>(116,29)=a[13]; M.at<double>(116,30)=a[20]; M.at<double>(116,37)=a[11]; M.at<double>(116,38)=a[3]; M.at<double>(116,39)=a[2]; M.at<double>(116,50)=a[15]; M.at<double>(116,52)=a[9]; M.at<double>(116,53)=a[18]; M.at<double>(116,73)=b[9]; M.at<double>(116,74)=b[13]; M.at<double>(116,75)=b[20]; M.at<double>(116,77)=b[11]; M.at<double>(116,78)=b[3]; M.at<double>(116,79)=b[2]; M.at<double>(116,80)=b[15]; M.at<double>(116,82)=b[18]; M.at<double>(116,86)=c[15]; M.at<double>(116,87)=c[18]; M.at<double>(116,92)=c[3]; M.at<double>(116,93)=c[9]; M.at<double>(116,94)=c[13]; M.at<double>(116,96)=c[2]; M.at<double>(116,98)=c[11]; M.at<double>(116,107)=c[20];
M.at<double>(117,29)=a[20]; M.at<double>(117,37)=a[3]; M.at<double>(117,38)=a[2]; M.at<double>(117,48)=a[15]; M.at<double>(117,50)=a[18]; M.at<double>(117,52)=a[13]; M.at<double>(117,73)=b[13]; M.at<double>(117,74)=b[20]; M.at<double>(117,77)=b[3]; M.at<double>(117,78)=b[2]; M.at<double>(117,80)=b[18]; M.at<double>(117,81)=b[15]; M.at<double>(117,85)=c[15]; M.at<double>(117,86)=c[18]; M.at<double>(117,92)=c[2]; M.at<double>(117,93)=c[13]; M.at<double>(117,94)=c[20]; M.at<double>(117,98)=c[3];
M.at<double>(118,27)=a[13]; M.at<double>(118,28)=a[20]; M.at<double>(118,31)=a[4]; M.at<double>(118,32)=a[19]; M.at<double>(118,48)=a[14]; M.at<double>(118,50)=a[8]; M.at<double>(118,55)=b[13]; M.at<double>(118,57)=b[20]; M.at<double>(118,58)=b[4]; M.at<double>(118,61)=b[19]; M.at<double>(118,80)=b[8]; M.at<double>(118,81)=b[14]; M.at<double>(118,85)=c[14]; M.at<double>(118,86)=c[8]; M.at<double>(118,113)=c[13]; M.at<double>(118,114)=c[20]; M.at<double>(118,115)=c[4]; M.at<double>(118,116)=c[19];
M.at<double>(119,27)=a[20]; M.at<double>(119,31)=a[19]; M.at<double>(119,48)=a[8]; M.at<double>(119,55)=b[20]; M.at<double>(119,58)=b[19]; M.at<double>(119,81)=b[8]; M.at<double>(119,85)=c[8]; M.at<double>(119,113)=c[20]; M.at<double>(119,115)=c[19];
return M.t();
}
cv::Mat dls::Hessian(const double s[])
{
// the vector of monomials is
// m = [ const ; s1^2 * s2 ; s1 * s2 ; s1 * s3 ; s2 * s3 ; s2^2 * s3 ; s2^3 ; ...
// s1 * s3^2 ; s1 ; s3 ; s2 ; s2 * s3^2 ; s1^2 ; s3^2 ; s2^2 ; s3^3 ; ...
// s1 * s2 * s3 ; s1 * s2^2 ; s1^2 * s3 ; s1^3]
//
// deriv of m w.r.t. s1
//Hs1 = [0 ; 2 * s(1) * s(2) ; s(2) ; s(3) ; 0 ; 0 ; 0 ; ...
// s(3)^2 ; 1 ; 0 ; 0 ; 0 ; 2 * s(1) ; 0 ; 0 ; 0 ; ...
// s(2) * s(3) ; s(2)^2 ; 2*s(1)*s(3); 3 * s(1)^2];
double Hs1[20];
Hs1[0]=0; Hs1[1]=2*s[0]*s[1]; Hs1[2]=s[1]; Hs1[3]=s[2]; Hs1[4]=0; Hs1[5]=0; Hs1[6]=0;
Hs1[7]=s[2]*s[2]; Hs1[8]=1; Hs1[9]=0; Hs1[10]=0; Hs1[11]=0; Hs1[12]=2*s[0]; Hs1[13]=0;
Hs1[14]=0; Hs1[15]=0; Hs1[16]=s[1]*s[2]; Hs1[17]=s[1]*s[1]; Hs1[18]=2*s[0]*s[2]; Hs1[19]=3*s[0]*s[0];
// deriv of m w.r.t. s2
//Hs2 = [0 ; s(1)^2 ; s(1) ; 0 ; s(3) ; 2 * s(2) * s(3) ; 3 * s(2)^2 ; ...
// 0 ; 0 ; 0 ; 1 ; s(3)^2 ; 0 ; 0 ; 2 * s(2) ; 0 ; ...
// s(1) * s(3) ; s(1) * 2 * s(2) ; 0 ; 0];
double Hs2[20];
Hs2[0]=0; Hs2[1]=s[0]*s[0]; Hs2[2]=s[0]; Hs2[3]=0; Hs2[4]=s[2]; Hs2[5]=2*s[1]*s[2]; Hs2[6]=3*s[1]*s[1];
Hs2[7]=0; Hs2[8]=0; Hs2[9]=0; Hs2[10]=1; Hs2[11]=s[2]*s[2]; Hs2[12]=0; Hs2[13]=0;
Hs2[14]=2*s[1]; Hs2[15]=0; Hs2[16]=s[0]*s[2]; Hs2[17]=2*s[0]*s[1]; Hs2[18]=0; Hs2[19]=0;
// deriv of m w.r.t. s3
//Hs3 = [0 ; 0 ; 0 ; s(1) ; s(2) ; s(2)^2 ; 0 ; ...
// s(1) * 2 * s(3) ; 0 ; 1 ; 0 ; s(2) * 2 * s(3) ; 0 ; 2 * s(3) ; 0 ; 3 * s(3)^2 ; ...
// s(1) * s(2) ; 0 ; s(1)^2 ; 0];
double Hs3[20];
Hs3[0]=0; Hs3[1]=0; Hs3[2]=0; Hs3[3]=s[0]; Hs3[4]=s[1]; Hs3[5]=s[1]*s[1]; Hs3[6]=0;
Hs3[7]=2*s[0]*s[2]; Hs3[8]=0; Hs3[9]=1; Hs3[10]=0; Hs3[11]=2*s[1]*s[2]; Hs3[12]=0; Hs3[13]=2*s[2];
Hs3[14]=0; Hs3[15]=3*s[2]*s[2]; Hs3[16]=s[0]*s[1]; Hs3[17]=0; Hs3[18]=s[0]*s[0]; Hs3[19]=0;
// fill Hessian matrix
cv::Mat H(3, 3, CV_64F);
H.at<double>(0,0) = cv::Mat(cv::Mat(f1coeff).rowRange(1,21).t()*cv::Mat(20, 1, CV_64F, &Hs1)).at<double>(0,0);
H.at<double>(0,1) = cv::Mat(cv::Mat(f1coeff).rowRange(1,21).t()*cv::Mat(20, 1, CV_64F, &Hs2)).at<double>(0,0);
H.at<double>(0,2) = cv::Mat(cv::Mat(f1coeff).rowRange(1,21).t()*cv::Mat(20, 1, CV_64F, &Hs3)).at<double>(0,0);
H.at<double>(1,0) = cv::Mat(cv::Mat(f2coeff).rowRange(1,21).t()*cv::Mat(20, 1, CV_64F, &Hs1)).at<double>(0,0);
H.at<double>(1,1) = cv::Mat(cv::Mat(f2coeff).rowRange(1,21).t()*cv::Mat(20, 1, CV_64F, &Hs2)).at<double>(0,0);
H.at<double>(1,2) = cv::Mat(cv::Mat(f2coeff).rowRange(1,21).t()*cv::Mat(20, 1, CV_64F, &Hs3)).at<double>(0,0);
H.at<double>(2,0) = cv::Mat(cv::Mat(f3coeff).rowRange(1,21).t()*cv::Mat(20, 1, CV_64F, &Hs1)).at<double>(0,0);
H.at<double>(2,1) = cv::Mat(cv::Mat(f3coeff).rowRange(1,21).t()*cv::Mat(20, 1, CV_64F, &Hs2)).at<double>(0,0);
H.at<double>(2,2) = cv::Mat(cv::Mat(f3coeff).rowRange(1,21).t()*cv::Mat(20, 1, CV_64F, &Hs3)).at<double>(0,0);
return H;
}
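// Illustrative sketch (not part of the patch; the helper name is hypothetical):
// evaluating the 20-monomial vector m(s) in the order documented above makes it
// explicit that Hs1, Hs2 and Hs3 are its partial derivatives w.r.t. s1, s2, s3.
static void eval_monomials_sketch(const double s[3], double m[20])
{
    const double s1 = s[0], s2 = s[1], s3 = s[2];
    m[0]=1;         m[1]=s1*s1*s2;  m[2]=s1*s2;     m[3]=s1*s3;
    m[4]=s2*s3;     m[5]=s2*s2*s3;  m[6]=s2*s2*s2;  m[7]=s1*s3*s3;
    m[8]=s1;        m[9]=s3;        m[10]=s2;       m[11]=s2*s3*s3;
    m[12]=s1*s1;    m[13]=s3*s3;    m[14]=s2*s2;    m[15]=s3*s3*s3;
    m[16]=s1*s2*s3; m[17]=s1*s2*s2; m[18]=s1*s1*s3; m[19]=s1*s1*s1;
}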
cv::Mat dls::cayley2rotbar(const cv::Mat& s)
{
double s_mul1 = cv::Mat(s.t()*s).at<double>(0,0);
cv::Mat s_mul2 = s*s.t();
cv::Mat eye = cv::Mat::eye(3, 3, CV_64F);
return cv::Mat( eye.mul(1.-s_mul1) + skewsymm(&s).mul(2.) + s_mul2.mul(2.) ).t();
}
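// Related sketch (illustration only, assuming the standard Cayley parametrisation):
// cayley2rotbar() above returns the transposed, unnormalised numerator of the
// Cayley transform; dividing by 1 + s's yields the corresponding proper rotation.
static cv::Mat cayley2rot_sketch(const cv::Mat& s) // s: 3x1, CV_64F
{
    double s2 = cv::Mat(s.t()*s).at<double>(0,0);
    cv::Mat skew = (cv::Mat_<double>(3,3) <<               0, -s.at<double>(2),  s.at<double>(1),
                                             s.at<double>(2),                0, -s.at<double>(0),
                                            -s.at<double>(1),  s.at<double>(0),                0);
    cv::Mat eye = cv::Mat::eye(3, 3, CV_64F);
    return cv::Mat( (eye*(1.-s2) + skew*2. + s*s.t()*2.) / (1.+s2) );
}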
cv::Mat dls::skewsymm(const cv::Mat * X1)
{
cv::MatConstIterator_<double> it = X1->begin<double>();
return (cv::Mat_<double>(3,3) << 0, -*(it+2), *(it+1),
*(it+2), 0, -*(it+0),
-*(it+1), *(it+0), 0);
}
cv::Mat dls::rotx(const double t)
{
// rotx: rotation about x-axis
double ct = cos(t);
double st = sin(t);
return (cv::Mat_<double>(3,3) << 1, 0, 0, 0, ct, -st, 0, st, ct);
}
cv::Mat dls::roty(const double t)
{
// roty: rotation about y-axis
double ct = cos(t);
double st = sin(t);
return (cv::Mat_<double>(3,3) << ct, 0, st, 0, 1, 0, -st, 0, ct);
}
cv::Mat dls::rotz(const double t)
{
// rotz: rotation about z-axis
double ct = cos(t);
double st = sin(t);
return (cv::Mat_<double>(3,3) << ct, -st, 0, st, ct, 0, 0, 0, 1);
}
cv::Mat dls::mean(const cv::Mat& M)
{
cv::Mat m = cv::Mat::zeros(3, 1, CV_64F);
for (int i = 0; i < M.cols; ++i) m += M.col(i);
return m.mul(1./(double)M.cols);
}
bool dls::is_empty(const cv::Mat * M)
{
cv::MatConstIterator_<double> it = M->begin<double>(), it_end = M->end<double>();
for(; it != it_end; ++it)
{
if(*it < 0) return false;
}
return true;
}
bool dls::positive_eigenvalues(const cv::Mat * eigenvalues)
{
cv::MatConstIterator_<double> it = eigenvalues->begin<double>();
return *(it) > 0 && *(it+1) > 0 && *(it+2) > 0;
}

773
modules/calib3d/src/dls.h Normal file

@ -0,0 +1,773 @@
#ifndef DLS_H_
#define DLS_H_
#include "precomp.hpp"
#include <iostream>
using namespace std;
using namespace cv;
class dls
{
public:
dls(const cv::Mat& opoints, const cv::Mat& ipoints);
~dls();
bool compute_pose(cv::Mat& R, cv::Mat& t);
private:
// initialisation
template <typename OpointType, typename IpointType>
void init_points(const cv::Mat& opoints, const cv::Mat& ipoints)
{
for(int i = 0; i < N; i++)
{
p.at<double>(0,i) = opoints.at<OpointType>(0,i).x;
p.at<double>(1,i) = opoints.at<OpointType>(0,i).y;
p.at<double>(2,i) = opoints.at<OpointType>(0,i).z;
// compute mean of object points
mn.at<double>(0) += p.at<double>(0,i);
mn.at<double>(1) += p.at<double>(1,i);
mn.at<double>(2) += p.at<double>(2,i);
// make z into unit vectors from normalized pixel coords
double sr = std::pow(ipoints.at<IpointType>(0,i).x, 2) +
std::pow(ipoints.at<IpointType>(0,i).y, 2) + (double)1;
sr = std::sqrt(sr);
z.at<double>(0,i) = ipoints.at<IpointType>(0,i).x / sr;
z.at<double>(1,i) = ipoints.at<IpointType>(0,i).y / sr;
z.at<double>(2,i) = (double)1 / sr;
}
mn.at<double>(0) /= (double)N;
mn.at<double>(1) /= (double)N;
mn.at<double>(2) /= (double)N;
}
// main algorithm
cv::Mat LeftMultVec(const cv::Mat& v);
void run_kernel(const cv::Mat& pp);
void build_coeff_matrix(const cv::Mat& pp, cv::Mat& Mtilde, cv::Mat& D);
void compute_eigenvec(const cv::Mat& Mtilde, cv::Mat& eigenval_real, cv::Mat& eigenval_imag,
cv::Mat& eigenvec_real, cv::Mat& eigenvec_imag);
void fill_coeff(const cv::Mat * D);
// useful functions
cv::Mat cayley_LS_M(const std::vector<double>& a, const std::vector<double>& b,
const std::vector<double>& c, const std::vector<double>& u);
cv::Mat Hessian(const double s[]);
cv::Mat cayley2rotbar(const cv::Mat& s);
cv::Mat skewsymm(const cv::Mat * X1);
// extra functions
cv::Mat rotx(const double t);
cv::Mat roty(const double t);
cv::Mat rotz(const double t);
cv::Mat mean(const cv::Mat& M);
bool is_empty(const cv::Mat * v);
bool positive_eigenvalues(const cv::Mat * eigenvalues);
cv::Mat p, z, mn; // object-image points
int N; // number of input points
std::vector<double> f1coeff, f2coeff, f3coeff, cost_; // polynomial coefficients used to build the coefficient matrix and the Hessian, plus the cost-function coefficients
std::vector<cv::Mat> C_est_, t_est_; // optimal candidates
cv::Mat C_est__, t_est__; // optimal found solution
double cost__; // optimal found solution
};
class EigenvalueDecomposition {
private:
// Holds the data dimension.
int n;
// Stores real/imag part of a complex division.
double cdivr, cdivi;
// Pointer to internal memory.
double *d, *e, *ort;
double **V, **H;
// Holds the computed eigenvalues.
Mat _eigenvalues;
// Holds the computed eigenvectors.
Mat _eigenvectors;
// Allocates memory.
template<typename _Tp>
_Tp *alloc_1d(int m) {
return new _Tp[m];
}
// Allocates memory.
template<typename _Tp>
_Tp *alloc_1d(int m, _Tp val) {
_Tp *arr = alloc_1d<_Tp> (m);
for (int i = 0; i < m; i++)
arr[i] = val;
return arr;
}
// Allocates memory.
template<typename _Tp>
_Tp **alloc_2d(int m, int _n) {
_Tp **arr = new _Tp*[m];
for (int i = 0; i < m; i++)
arr[i] = new _Tp[_n];
return arr;
}
// Allocates memory.
template<typename _Tp>
_Tp **alloc_2d(int m, int _n, _Tp val) {
_Tp **arr = alloc_2d<_Tp> (m, _n);
for (int i = 0; i < m; i++) {
for (int j = 0; j < _n; j++) {
arr[i][j] = val;
}
}
return arr;
}
void cdiv(double xr, double xi, double yr, double yi) {
double r, dv;
if (std::abs(yr) > std::abs(yi)) {
r = yi / yr;
dv = yr + r * yi;
cdivr = (xr + r * xi) / dv;
cdivi = (xi - r * xr) / dv;
} else {
r = yr / yi;
dv = yi + r * yr;
cdivr = (r * xr + xi) / dv;
cdivi = (r * xi - xr) / dv;
}
}
// Nonsymmetric reduction from Hessenberg to real Schur form.
void hqr2() {
// This is derived from the Algol procedure hqr2,
// by Martin and Wilkinson, Handbook for Auto. Comp.,
// Vol.ii-Linear Algebra, and the corresponding
// Fortran subroutine in EISPACK.
// Initialize
int nn = this->n;
int n1 = nn - 1;
int low = 0;
int high = nn - 1;
double eps = std::pow(2.0, -52.0);
double exshift = 0.0;
double p = 0, q = 0, r = 0, s = 0, z = 0, t, w, x, y;
// Store roots isolated by balanc and compute matrix norm
double norm = 0.0;
for (int i = 0; i < nn; i++) {
if (i < low || i > high) {
d[i] = H[i][i];
e[i] = 0.0;
}
for (int j = std::max(i - 1, 0); j < nn; j++) {
norm = norm + std::abs(H[i][j]);
}
}
// Outer loop over eigenvalue index
int iter = 0;
while (n1 >= low) {
// Look for single small sub-diagonal element
int l = n1;
while (l > low) {
s = std::abs(H[l - 1][l - 1]) + std::abs(H[l][l]);
if (s == 0.0) {
s = norm;
}
if (std::abs(H[l][l - 1]) < eps * s) {
break;
}
l--;
}
// Check for convergence
// One root found
if (l == n1) {
H[n1][n1] = H[n1][n1] + exshift;
d[n1] = H[n1][n1];
e[n1] = 0.0;
n1--;
iter = 0;
// Two roots found
} else if (l == n1 - 1) {
w = H[n1][n1 - 1] * H[n1 - 1][n1];
p = (H[n1 - 1][n1 - 1] - H[n1][n1]) / 2.0;
q = p * p + w;
z = std::sqrt(std::abs(q));
H[n1][n1] = H[n1][n1] + exshift;
H[n1 - 1][n1 - 1] = H[n1 - 1][n1 - 1] + exshift;
x = H[n1][n1];
// Real pair
if (q >= 0) {
if (p >= 0) {
z = p + z;
} else {
z = p - z;
}
d[n1 - 1] = x + z;
d[n1] = d[n1 - 1];
if (z != 0.0) {
d[n1] = x - w / z;
}
e[n1 - 1] = 0.0;
e[n1] = 0.0;
x = H[n1][n1 - 1];
s = std::abs(x) + std::abs(z);
p = x / s;
q = z / s;
r = std::sqrt(p * p + q * q);
p = p / r;
q = q / r;
// Row modification
for (int j = n1 - 1; j < nn; j++) {
z = H[n1 - 1][j];
H[n1 - 1][j] = q * z + p * H[n1][j];
H[n1][j] = q * H[n1][j] - p * z;
}
// Column modification
for (int i = 0; i <= n1; i++) {
z = H[i][n1 - 1];
H[i][n1 - 1] = q * z + p * H[i][n1];
H[i][n1] = q * H[i][n1] - p * z;
}
// Accumulate transformations
for (int i = low; i <= high; i++) {
z = V[i][n1 - 1];
V[i][n1 - 1] = q * z + p * V[i][n1];
V[i][n1] = q * V[i][n1] - p * z;
}
// Complex pair
} else {
d[n1 - 1] = x + p;
d[n1] = x + p;
e[n1 - 1] = z;
e[n1] = -z;
}
n1 = n1 - 2;
iter = 0;
// No convergence yet
} else {
// Form shift
x = H[n1][n1];
y = 0.0;
w = 0.0;
if (l < n1) {
y = H[n1 - 1][n1 - 1];
w = H[n1][n1 - 1] * H[n1 - 1][n1];
}
// Wilkinson's original ad hoc shift
if (iter == 10) {
exshift += x;
for (int i = low; i <= n1; i++) {
H[i][i] -= x;
}
s = std::abs(H[n1][n1 - 1]) + std::abs(H[n1 - 1][n1 - 2]);
x = y = 0.75 * s;
w = -0.4375 * s * s;
}
// MATLAB's new ad hoc shift
if (iter == 30) {
s = (y - x) / 2.0;
s = s * s + w;
if (s > 0) {
s = std::sqrt(s);
if (y < x) {
s = -s;
}
s = x - w / ((y - x) / 2.0 + s);
for (int i = low; i <= n1; i++) {
H[i][i] -= s;
}
exshift += s;
x = y = w = 0.964;
}
}
iter = iter + 1; // (Could check iteration count here.)
// Look for two consecutive small sub-diagonal elements
int m = n1 - 2;
while (m >= l) {
z = H[m][m];
r = x - z;
s = y - z;
p = (r * s - w) / H[m + 1][m] + H[m][m + 1];
q = H[m + 1][m + 1] - z - r - s;
r = H[m + 2][m + 1];
s = std::abs(p) + std::abs(q) + std::abs(r);
p = p / s;
q = q / s;
r = r / s;
if (m == l) {
break;
}
if (std::abs(H[m][m - 1]) * (std::abs(q) + std::abs(r)) < eps * (std::abs(p)
* (std::abs(H[m - 1][m - 1]) + std::abs(z) + std::abs(
H[m + 1][m + 1])))) {
break;
}
m--;
}
for (int i = m + 2; i <= n1; i++) {
H[i][i - 2] = 0.0;
if (i > m + 2) {
H[i][i - 3] = 0.0;
}
}
// Double QR step involving rows l:n and columns m:n
for (int k = m; k <= n1 - 1; k++) {
bool notlast = (k != n1 - 1);
if (k != m) {
p = H[k][k - 1];
q = H[k + 1][k - 1];
r = (notlast ? H[k + 2][k - 1] : 0.0);
x = std::abs(p) + std::abs(q) + std::abs(r);
if (x != 0.0) {
p = p / x;
q = q / x;
r = r / x;
}
}
if (x == 0.0) {
break;
}
s = std::sqrt(p * p + q * q + r * r);
if (p < 0) {
s = -s;
}
if (s != 0) {
if (k != m) {
H[k][k - 1] = -s * x;
} else if (l != m) {
H[k][k - 1] = -H[k][k - 1];
}
p = p + s;
x = p / s;
y = q / s;
z = r / s;
q = q / p;
r = r / p;
// Row modification
for (int j = k; j < nn; j++) {
p = H[k][j] + q * H[k + 1][j];
if (notlast) {
p = p + r * H[k + 2][j];
H[k + 2][j] = H[k + 2][j] - p * z;
}
H[k][j] = H[k][j] - p * x;
H[k + 1][j] = H[k + 1][j] - p * y;
}
// Column modification
for (int i = 0; i <= std::min(n1, k + 3); i++) {
p = x * H[i][k] + y * H[i][k + 1];
if (notlast) {
p = p + z * H[i][k + 2];
H[i][k + 2] = H[i][k + 2] - p * r;
}
H[i][k] = H[i][k] - p;
H[i][k + 1] = H[i][k + 1] - p * q;
}
// Accumulate transformations
for (int i = low; i <= high; i++) {
p = x * V[i][k] + y * V[i][k + 1];
if (notlast) {
p = p + z * V[i][k + 2];
V[i][k + 2] = V[i][k + 2] - p * r;
}
V[i][k] = V[i][k] - p;
V[i][k + 1] = V[i][k + 1] - p * q;
}
} // (s != 0)
} // k loop
} // check convergence
} // while (n1 >= low)
// Backsubstitute to find vectors of upper triangular form
if (norm == 0.0) {
return;
}
for (n1 = nn - 1; n1 >= 0; n1--) {
p = d[n1];
q = e[n1];
// Real vector
if (q == 0) {
int l = n1;
H[n1][n1] = 1.0;
for (int i = n1 - 1; i >= 0; i--) {
w = H[i][i] - p;
r = 0.0;
for (int j = l; j <= n1; j++) {
r = r + H[i][j] * H[j][n1];
}
if (e[i] < 0.0) {
z = w;
s = r;
} else {
l = i;
if (e[i] == 0.0) {
if (w != 0.0) {
H[i][n1] = -r / w;
} else {
H[i][n1] = -r / (eps * norm);
}
// Solve real equations
} else {
x = H[i][i + 1];
y = H[i + 1][i];
q = (d[i] - p) * (d[i] - p) + e[i] * e[i];
t = (x * s - z * r) / q;
H[i][n1] = t;
if (std::abs(x) > std::abs(z)) {
H[i + 1][n1] = (-r - w * t) / x;
} else {
H[i + 1][n1] = (-s - y * t) / z;
}
}
// Overflow control
t = std::abs(H[i][n1]);
if ((eps * t) * t > 1) {
for (int j = i; j <= n1; j++) {
H[j][n1] = H[j][n1] / t;
}
}
}
}
// Complex vector
} else if (q < 0) {
int l = n1 - 1;
// Last vector component imaginary so matrix is triangular
if (std::abs(H[n1][n1 - 1]) > std::abs(H[n1 - 1][n1])) {
H[n1 - 1][n1 - 1] = q / H[n1][n1 - 1];
H[n1 - 1][n1] = -(H[n1][n1] - p) / H[n1][n1 - 1];
} else {
cdiv(0.0, -H[n1 - 1][n1], H[n1 - 1][n1 - 1] - p, q);
H[n1 - 1][n1 - 1] = cdivr;
H[n1 - 1][n1] = cdivi;
}
H[n1][n1 - 1] = 0.0;
H[n1][n1] = 1.0;
for (int i = n1 - 2; i >= 0; i--) {
double ra, sa, vr, vi;
ra = 0.0;
sa = 0.0;
for (int j = l; j <= n1; j++) {
ra = ra + H[i][j] * H[j][n1 - 1];
sa = sa + H[i][j] * H[j][n1];
}
w = H[i][i] - p;
if (e[i] < 0.0) {
z = w;
r = ra;
s = sa;
} else {
l = i;
if (e[i] == 0) {
cdiv(-ra, -sa, w, q);
H[i][n1 - 1] = cdivr;
H[i][n1] = cdivi;
} else {
// Solve complex equations
x = H[i][i + 1];
y = H[i + 1][i];
vr = (d[i] - p) * (d[i] - p) + e[i] * e[i] - q * q;
vi = (d[i] - p) * 2.0 * q;
if (vr == 0.0 && vi == 0.0) {
vr = eps * norm * (std::abs(w) + std::abs(q) + std::abs(x)
+ std::abs(y) + std::abs(z));
}
cdiv(x * r - z * ra + q * sa,
x * s - z * sa - q * ra, vr, vi);
H[i][n1 - 1] = cdivr;
H[i][n1] = cdivi;
if (std::abs(x) > (std::abs(z) + std::abs(q))) {
H[i + 1][n1 - 1] = (-ra - w * H[i][n1 - 1] + q
* H[i][n1]) / x;
H[i + 1][n1] = (-sa - w * H[i][n1] - q * H[i][n1
- 1]) / x;
} else {
cdiv(-r - y * H[i][n1 - 1], -s - y * H[i][n1], z,
q);
H[i + 1][n1 - 1] = cdivr;
H[i + 1][n1] = cdivi;
}
}
// Overflow control
t = std::max(std::abs(H[i][n1 - 1]), std::abs(H[i][n1]));
if ((eps * t) * t > 1) {
for (int j = i; j <= n1; j++) {
H[j][n1 - 1] = H[j][n1 - 1] / t;
H[j][n1] = H[j][n1] / t;
}
}
}
}
}
}
// Vectors of isolated roots
for (int i = 0; i < nn; i++) {
if (i < low || i > high) {
for (int j = i; j < nn; j++) {
V[i][j] = H[i][j];
}
}
}
// Back transformation to get eigenvectors of original matrix
for (int j = nn - 1; j >= low; j--) {
for (int i = low; i <= high; i++) {
z = 0.0;
for (int k = low; k <= std::min(j, high); k++) {
z = z + V[i][k] * H[k][j];
}
V[i][j] = z;
}
}
}
// Nonsymmetric reduction to Hessenberg form.
void orthes() {
// This is derived from the Algol procedures orthes and ortran,
// by Martin and Wilkinson, Handbook for Auto. Comp.,
// Vol.ii-Linear Algebra, and the corresponding
// Fortran subroutines in EISPACK.
int low = 0;
int high = n - 1;
for (int m = low + 1; m <= high - 1; m++) {
// Scale column.
double scale = 0.0;
for (int i = m; i <= high; i++) {
scale = scale + std::abs(H[i][m - 1]);
}
if (scale != 0.0) {
// Compute Householder transformation.
double h = 0.0;
for (int i = high; i >= m; i--) {
ort[i] = H[i][m - 1] / scale;
h += ort[i] * ort[i];
}
double g = std::sqrt(h);
if (ort[m] > 0) {
g = -g;
}
h = h - ort[m] * g;
ort[m] = ort[m] - g;
// Apply Householder similarity transformation
// H = (I-u*u'/h)*H*(I-u*u'/h)
for (int j = m; j < n; j++) {
double f = 0.0;
for (int i = high; i >= m; i--) {
f += ort[i] * H[i][j];
}
f = f / h;
for (int i = m; i <= high; i++) {
H[i][j] -= f * ort[i];
}
}
for (int i = 0; i <= high; i++) {
double f = 0.0;
for (int j = high; j >= m; j--) {
f += ort[j] * H[i][j];
}
f = f / h;
for (int j = m; j <= high; j++) {
H[i][j] -= f * ort[j];
}
}
ort[m] = scale * ort[m];
H[m][m - 1] = scale * g;
}
}
// Accumulate transformations (Algol's ortran).
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
V[i][j] = (i == j ? 1.0 : 0.0);
}
}
for (int m = high - 1; m >= low + 1; m--) {
if (H[m][m - 1] != 0.0) {
for (int i = m + 1; i <= high; i++) {
ort[i] = H[i][m - 1];
}
for (int j = m; j <= high; j++) {
double g = 0.0;
for (int i = m; i <= high; i++) {
g += ort[i] * V[i][j];
}
// Double division avoids possible underflow
g = (g / ort[m]) / H[m][m - 1];
for (int i = m; i <= high; i++) {
V[i][j] += g * ort[i];
}
}
}
}
}
// Releases all internal working memory.
void release() {
// releases the working data
delete[] d;
delete[] e;
delete[] ort;
for (int i = 0; i < n; i++) {
delete[] H[i];
delete[] V[i];
}
delete[] H;
delete[] V;
}
// Computes the Eigenvalue Decomposition for a matrix given in H.
void compute() {
// Allocate memory for the working data.
V = alloc_2d<double> (n, n, 0.0);
d = alloc_1d<double> (n);
e = alloc_1d<double> (n);
ort = alloc_1d<double> (n);
// Reduce to Hessenberg form.
orthes();
// Reduce Hessenberg to real Schur form.
hqr2();
// Copy eigenvalues to OpenCV Matrix.
_eigenvalues.create(1, n, CV_64FC1);
for (int i = 0; i < n; i++) {
_eigenvalues.at<double> (0, i) = d[i];
}
// Copy eigenvectors to OpenCV Matrix.
_eigenvectors.create(n, n, CV_64FC1);
for (int i = 0; i < n; i++)
for (int j = 0; j < n; j++)
_eigenvectors.at<double> (i, j) = V[i][j];
// Deallocate the memory by releasing all internal working data.
release();
}
public:
EigenvalueDecomposition()
: n(0) { }
// Initializes & computes the Eigenvalue Decomposition for a general matrix
// given in src. This function is a port of the EigenvalueSolver in JAMA,
// which has been released to public domain by The MathWorks and the
// National Institute of Standards and Technology (NIST).
EigenvalueDecomposition(InputArray src) {
compute(src);
}
// This function computes the Eigenvalue Decomposition for a general matrix
// given in src. This function is a port of the EigenvalueSolver in JAMA,
// which has been released to public domain by The MathWorks and the
// National Institute of Standards and Technology (NIST).
void compute(InputArray src)
{
/*if(isSymmetric(src)) {
// Fall back to OpenCV for a symmetric matrix!
cv::eigen(src, _eigenvalues, _eigenvectors);
} else {*/
Mat tmp;
// Convert the given input matrix to double. Is there any way to
// prevent allocating the temporary memory? Only used for copying
// into working memory and deallocated after.
src.getMat().convertTo(tmp, CV_64FC1);
// Get dimension of the matrix.
this->n = tmp.cols;
// Allocate the matrix data to work on.
this->H = alloc_2d<double> (n, n);
// Now safely copy the data.
for (int i = 0; i < tmp.rows; i++) {
for (int j = 0; j < tmp.cols; j++) {
this->H[i][j] = tmp.at<double>(i, j);
}
}
// Deallocates the temporary matrix before computing.
tmp.release();
// Performs the eigenvalue decomposition of H.
compute();
// }
}
~EigenvalueDecomposition() {}
// Returns the eigenvalues of the Eigenvalue Decomposition.
Mat eigenvalues() { return _eigenvalues; }
// Returns the eigenvectors of the Eigenvalue Decomposition.
Mat eigenvectors() { return _eigenvectors; }
};
#endif // DLS_H_
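For orientation, the embedded solver is exercised like this (illustration only; the header is internal to the calib3d module and the matrix values are placeholders):
cv::Mat A = (cv::Mat_<double>(3,3) <<  0, 1, 0,
                                      -1, 0, 0,
                                       0, 0, 2);   // any square, possibly non-symmetric matrix
EigenvalueDecomposition eig(A);
cv::Mat evals = eig.eigenvalues();    // 1 x n, CV_64FC1 (real parts)
cv::Mat evecs = eig.eigenvectors();   // n x n, CV_64FC1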


@ -391,6 +391,7 @@ CV_INIT_ALGORITHM(LMeDSPointSetRegistrator, "PointSetRegistrator.LMeDS",
obj.info()->addParam(obj, "confidence", obj.confidence);
obj.info()->addParam(obj, "maxIters", obj.maxIters))
Ptr<PointSetRegistrator> createRANSACPointSetRegistrator(const Ptr<PointSetRegistrator::Callback>& _cb,
int _modelPoints, double _threshold,
double _confidence, int _maxIters)
@ -409,6 +410,7 @@ Ptr<PointSetRegistrator> createLMeDSPointSetRegistrator(const Ptr<PointSetRegist
new LMeDSPointSetRegistrator(_cb, _modelPoints, _confidence, _maxIters));
}
class Affine3DEstimatorCallback : public PointSetRegistrator::Callback
{
public:


@ -41,6 +41,7 @@
//M*/
#include "precomp.hpp"
#include "dls.h"
#include "epnp.h"
#include "p3p.h"
#include "opencv2/calib3d/calib3d_c.h"
@ -59,7 +60,7 @@ bool cv::solvePnP( InputArray _opoints, InputArray _ipoints,
_tvec.create(3, 1, CV_64F);
Mat cameraMatrix = _cameraMatrix.getMat(), distCoeffs = _distCoeffs.getMat();
if (flags == EPNP)
if (flags == SOLVEPNP_EPNP)
{
cv::Mat undistortedPoints;
cv::undistortPoints(ipoints, undistortedPoints, cameraMatrix, distCoeffs);
@ -70,7 +71,7 @@ bool cv::solvePnP( InputArray _opoints, InputArray _ipoints,
cv::Rodrigues(R, rvec);
return true;
}
else if (flags == P3P)
else if (flags == SOLVEPNP_P3P)
{
CV_Assert( npoints == 4);
cv::Mat undistortedPoints;
@ -83,7 +84,7 @@ bool cv::solvePnP( InputArray _opoints, InputArray _ipoints,
cv::Rodrigues(R, rvec);
return result;
}
else if (flags == ITERATIVE)
else if (flags == SOLVEPNP_ITERATIVE)
{
CvMat c_objectPoints = opoints, c_imagePoints = ipoints;
CvMat c_cameraMatrix = cameraMatrix, c_distCoeffs = distCoeffs;
@ -93,222 +94,97 @@ bool cv::solvePnP( InputArray _opoints, InputArray _ipoints,
&c_rvec, &c_tvec, useExtrinsicGuess );
return true;
}
else if (flags == SOLVEPNP_DLS)
{
cv::Mat undistortedPoints;
cv::undistortPoints(ipoints, undistortedPoints, cameraMatrix, distCoeffs);
dls PnP(opoints, undistortedPoints);
cv::Mat R, rvec = _rvec.getMat(), tvec = _tvec.getMat();
bool result = PnP.compute_pose(R, tvec);
if (result)
cv::Rodrigues(R, rvec);
return result;
}
else
CV_Error(CV_StsBadArg, "The flags argument must be one of CV_ITERATIVE, CV_P3P or CV_EPNP");
CV_Error(CV_StsBadArg, "The flags argument must be one of SOLVEPNP_ITERATIVE, SOLVEPNP_P3P, SOLVEPNP_EPNP or SOLVEPNP_DLS");
return false;
}
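On the caller's side the new method is selected like any of the existing flags; a minimal sketch, assuming hypothetical intrinsics and 2D-3D correspondences filled in elsewhere:
std::vector<cv::Point3f> objectPoints;   // 3D model points (filled elsewhere)
std::vector<cv::Point2f> imagePoints;    // their 2D projections (filled elsewhere)
cv::Mat K = (cv::Mat_<double>(3,3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);  // hypothetical fx, fy, cx, cy
cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64FC1);
cv::Mat rvec, tvec;
bool ok = cv::solvePnP(objectPoints, imagePoints, K, distCoeffs, rvec, tvec, false, cv::SOLVEPNP_DLS);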
namespace cv
class PnPRansacCallback : public PointSetRegistrator::Callback
{
namespace pnpransac
public:
PnPRansacCallback(Mat _cameraMatrix=Mat(3,3,CV_64F), Mat _distCoeffs=Mat(4,1,CV_64F), int _flags=cv::SOLVEPNP_ITERATIVE,
bool _useExtrinsicGuess=false, Mat _rvec=Mat(), Mat _tvec=Mat() )
: cameraMatrix(_cameraMatrix), distCoeffs(_distCoeffs), flags(_flags), useExtrinsicGuess(_useExtrinsicGuess),
rvec(_rvec), tvec(_tvec) {}
/* Pre: True */
/* Post: compute _model with given points and return number of found models */
int runKernel( InputArray _m1, InputArray _m2, OutputArray _model ) const
{
const int MIN_POINTS_COUNT = 4;
static void project3dPoints(const Mat& points, const Mat& rvec, const Mat& tvec, Mat& modif_points)
{
modif_points.create(1, points.cols, CV_32FC3);
Mat R(3, 3, CV_64FC1);
Rodrigues(rvec, R);
Mat transformation(3, 4, CV_64F);
Mat r = transformation.colRange(0, 3);
R.copyTo(r);
Mat t = transformation.colRange(3, 4);
tvec.copyTo(t);
transform(points, modif_points, transformation);
}
struct CameraParameters
{
void init(Mat _intrinsics, Mat _distCoeffs)
{
_intrinsics.copyTo(intrinsics);
_distCoeffs.copyTo(distortion);
}
Mat intrinsics;
Mat distortion;
};
struct Parameters
{
int iterationsCount;
float reprojectionError;
int minInliersCount;
bool useExtrinsicGuess;
int flags;
CameraParameters camera;
};
template <typename OpointType, typename IpointType>
static void pnpTask(const std::vector<char>& pointsMask, const Mat& objectPoints, const Mat& imagePoints,
const Parameters& params, std::vector<int>& inliers, Mat& rvec, Mat& tvec,
const Mat& rvecInit, const Mat& tvecInit, Mutex& resultsMutex)
{
Mat modelObjectPoints(1, MIN_POINTS_COUNT, CV_MAKETYPE(DataDepth<OpointType>::value, 3));
Mat modelImagePoints(1, MIN_POINTS_COUNT, CV_MAKETYPE(DataDepth<IpointType>::value, 2));
for (int i = 0, colIndex = 0; i < (int)pointsMask.size(); i++)
{
if (pointsMask[i])
{
Mat colModelImagePoints = modelImagePoints(Rect(colIndex, 0, 1, 1));
imagePoints.col(i).copyTo(colModelImagePoints);
Mat colModelObjectPoints = modelObjectPoints(Rect(colIndex, 0, 1, 1));
objectPoints.col(i).copyTo(colModelObjectPoints);
colIndex = colIndex+1;
}
}
//filter out duplicate 3D points, which make solvePnP hang
double eps = 1e-10;
int num_same_points = 0;
for (int i = 0; i < MIN_POINTS_COUNT; i++)
for (int j = i + 1; j < MIN_POINTS_COUNT; j++)
{
if (norm(modelObjectPoints.at<Vec<OpointType,3> >(0, i) - modelObjectPoints.at<Vec<OpointType,3> >(0, j)) < eps)
num_same_points++;
}
if (num_same_points > 0)
return;
Mat localRvec, localTvec;
rvecInit.copyTo(localRvec);
tvecInit.copyTo(localTvec);
solvePnP(modelObjectPoints, modelImagePoints, params.camera.intrinsics, params.camera.distortion, localRvec, localTvec,
params.useExtrinsicGuess, params.flags);
Mat opoints = _m1.getMat(), ipoints = _m2.getMat();
std::vector<Point_<OpointType> > projected_points;
projected_points.resize(objectPoints.cols);
projectPoints(objectPoints, localRvec, localTvec, params.camera.intrinsics, params.camera.distortion, projected_points);
bool correspondence = cv::solvePnP( _m1, _m2, cameraMatrix, distCoeffs,
rvec, tvec, useExtrinsicGuess, flags );
Mat rotatedPoints;
project3dPoints(objectPoints, localRvec, localTvec, rotatedPoints);
Mat _local_model;
cv::hconcat(rvec, tvec, _local_model);
_local_model.copyTo(_model);
std::vector<int> localInliers;
for (int i = 0; i < objectPoints.cols; i++)
{
//Although p is a 2D point it needs the same type as the object points to enable the norm calculation
Point_<OpointType> p((OpointType)imagePoints.at<Vec<IpointType,2> >(0, i)[0],
(OpointType)imagePoints.at<Vec<IpointType,2> >(0, i)[1]);
if ((norm(p - projected_points[i]) < params.reprojectionError)
&& (rotatedPoints.at<Vec<OpointType,3> >(0, i)[2] > 0)) //hack
{
localInliers.push_back(i);
}
}
return correspondence;
}
if (localInliers.size() > inliers.size())
{
resultsMutex.lock();
/* Pre: True */
/* Post: fill _err with projection errors */
void computeError( InputArray _m1, InputArray _m2, InputArray _model, OutputArray _err ) const
{
inliers.clear();
inliers.resize(localInliers.size());
memcpy(&inliers[0], &localInliers[0], sizeof(int) * localInliers.size());
localRvec.copyTo(rvec);
localTvec.copyTo(tvec);
Mat opoints = _m1.getMat(), ipoints = _m2.getMat(), model = _model.getMat();
resultsMutex.unlock();
}
}
int i, count = opoints.cols;
Mat _rvec = model.col(0);
Mat _tvec = model.col(1);
static void pnpTask(const std::vector<char>& pointsMask, const Mat& objectPoints, const Mat& imagePoints,
const Parameters& params, std::vector<int>& inliers, Mat& rvec, Mat& tvec,
const Mat& rvecInit, const Mat& tvecInit, Mutex& resultsMutex)
{
CV_Assert(objectPoints.depth() == CV_64F || objectPoints.depth() == CV_32F);
CV_Assert(imagePoints.depth() == CV_64F || imagePoints.depth() == CV_32F);
const bool objectDoublePrecision = objectPoints.depth() == CV_64F;
const bool imageDoublePrecision = imagePoints.depth() == CV_64F;
if(objectDoublePrecision)
{
if(imageDoublePrecision)
pnpTask<double, double>(pointsMask, objectPoints, imagePoints, params, inliers, rvec, tvec, rvecInit, tvecInit, resultsMutex);
else
pnpTask<double, float>(pointsMask, objectPoints, imagePoints, params, inliers, rvec, tvec, rvecInit, tvecInit, resultsMutex);
}
else
{
if(imageDoublePrecision)
pnpTask<float, double>(pointsMask, objectPoints, imagePoints, params, inliers, rvec, tvec, rvecInit, tvecInit, resultsMutex);
else
pnpTask<float, float>(pointsMask, objectPoints, imagePoints, params, inliers, rvec, tvec, rvecInit, tvecInit, resultsMutex);
}
}
class PnPSolver
{
public:
void operator()( const BlockedRange& r ) const
{
std::vector<char> pointsMask(objectPoints.cols, 0);
memset(&pointsMask[0], 1, MIN_POINTS_COUNT );
for( int i=r.begin(); i!=r.end(); ++i )
{
generateVar(pointsMask);
pnpTask(pointsMask, objectPoints, imagePoints, parameters,
inliers, rvec, tvec, initRvec, initTvec, syncMutex);
if ((int)inliers.size() >= parameters.minInliersCount)
{
#ifdef HAVE_TBB
tbb::task::self().cancel_group_execution();
#else
break;
#endif
}
}
}
PnPSolver(const Mat& _objectPoints, const Mat& _imagePoints, const Parameters& _parameters,
Mat& _rvec, Mat& _tvec, std::vector<int>& _inliers):
objectPoints(_objectPoints), imagePoints(_imagePoints), parameters(_parameters),
rvec(_rvec), tvec(_tvec), inliers(_inliers)
{
rvec.copyTo(initRvec);
tvec.copyTo(initTvec);
Mat projpoints(count, 2, CV_32FC1);
cv::projectPoints(opoints, _rvec, _tvec, cameraMatrix, distCoeffs, projpoints);
generator.state = theRNG().state; //to control it somehow...
}
private:
PnPSolver& operator=(const PnPSolver&);
const Point2f* ipoints_ptr = ipoints.ptr<Point2f>();
const Point2f* projpoints_ptr = projpoints.ptr<Point2f>();
const Mat& objectPoints;
const Mat& imagePoints;
const Parameters& parameters;
Mat &rvec, &tvec;
std::vector<int>& inliers;
Mat initRvec, initTvec;
_err.create(count, 1, CV_32FC1);
float* err = _err.getMat().ptr<float>();
static RNG generator;
static Mutex syncMutex;
void generateVar(std::vector<char>& mask) const
{
int size = (int)mask.size();
for (int i = 0; i < size; i++)
{
int i1 = generator.uniform(0, size);
int i2 = generator.uniform(0, size);
char curr = mask[i1];
mask[i1] = mask[i2];
mask[i2] = curr;
}
}
};
Mutex PnPSolver::syncMutex;
RNG PnPSolver::generator;
for ( i = 0; i < count; ++i)
err[i] = (float)cv::norm( ipoints_ptr[i] - projpoints_ptr[i] );
}
}
void cv::solvePnPRansac(InputArray _opoints, InputArray _ipoints,
Mat cameraMatrix;
Mat distCoeffs;
int flags;
bool useExtrinsicGuess;
Mat rvec;
Mat tvec;
};
bool cv::solvePnPRansac(InputArray _opoints, InputArray _ipoints,
InputArray _cameraMatrix, InputArray _distCoeffs,
OutputArray _rvec, OutputArray _tvec, bool useExtrinsicGuess,
int iterationsCount, float reprojectionError, int minInliersCount,
int iterationsCount, float reprojectionError, double confidence,
OutputArray _inliers, int flags)
{
Mat opoints = _opoints.getMat(), ipoints = _ipoints.getMat();
Mat cameraMatrix = _cameraMatrix.getMat(), distCoeffs = _distCoeffs.getMat();
int npoints = std::max(opoints.checkVector(3, CV_32F), opoints.checkVector(3, CV_64F));
CV_Assert( npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) );
CV_Assert(opoints.isContinuous());
CV_Assert(opoints.depth() == CV_32F || opoints.depth() == CV_64F);
@ -319,60 +195,57 @@ void cv::solvePnPRansac(InputArray _opoints, InputArray _ipoints,
_rvec.create(3, 1, CV_64FC1);
_tvec.create(3, 1, CV_64FC1);
Mat rvec = _rvec.getMat();
Mat tvec = _tvec.getMat();
Mat objectPoints = opoints.reshape(3, 1), imagePoints = ipoints.reshape(2, 1);
Mat rvec = useExtrinsicGuess ? _rvec.getMat() : Mat(3, 1, CV_64FC1);
Mat tvec = useExtrinsicGuess ? _tvec.getMat() : Mat(3, 1, CV_64FC1);
Mat cameraMatrix = _cameraMatrix.getMat(), distCoeffs = _distCoeffs.getMat();
if (minInliersCount <= 0)
minInliersCount = objectPoints.cols;
cv::pnpransac::Parameters params;
params.iterationsCount = iterationsCount;
params.minInliersCount = minInliersCount;
params.reprojectionError = reprojectionError;
params.useExtrinsicGuess = useExtrinsicGuess;
params.camera.init(cameraMatrix, distCoeffs);
params.flags = flags;
Ptr<PointSetRegistrator::Callback> cb; // pointer to callback
cb = makePtr<PnPRansacCallback>( cameraMatrix, distCoeffs, flags, useExtrinsicGuess, rvec, tvec);
std::vector<int> localInliers;
Mat localRvec, localTvec;
rvec.copyTo(localRvec);
tvec.copyTo(localTvec);
int model_points = 4; // minimum of number of model points
if( flags == cv::SOLVEPNP_ITERATIVE ) model_points = 6;
else if( flags == cv::SOLVEPNP_EPNP ) model_points = 5;
if (objectPoints.cols >= pnpransac::MIN_POINTS_COUNT)
double param1 = reprojectionError; // reprojection error
double param2 = confidence; // confidence
int param3 = iterationsCount; // number maximum iterations
cv::Mat _local_model(3, 2, CV_64FC1);
cv::Mat _mask_local_inliers(1, opoints.rows, CV_8UC1);
// call Ransac
int result = createRANSACPointSetRegistrator(cb, model_points, param1, param2, param3)->run(opoints, ipoints, _local_model, _mask_local_inliers);
if( result <= 0 || _local_model.rows <= 0)
{
parallel_for(BlockedRange(0,iterationsCount), cv::pnpransac::PnPSolver(objectPoints, imagePoints, params,
localRvec, localTvec, localInliers));
}
_rvec.assign(rvec); // output rotation vector
_tvec.assign(tvec); // output translation vector
if (localInliers.size() >= (size_t)pnpransac::MIN_POINTS_COUNT)
{
if (flags != P3P)
{
int i, pointsCount = (int)localInliers.size();
Mat inlierObjectPoints(1, pointsCount, CV_MAKE_TYPE(opoints.depth(), 3)), inlierImagePoints(1, pointsCount, CV_MAKE_TYPE(ipoints.depth(), 2));
for (i = 0; i < pointsCount; i++)
{
int index = localInliers[i];
Mat colInlierImagePoints = inlierImagePoints(Rect(i, 0, 1, 1));
imagePoints.col(index).copyTo(colInlierImagePoints);
Mat colInlierObjectPoints = inlierObjectPoints(Rect(i, 0, 1, 1));
objectPoints.col(index).copyTo(colInlierObjectPoints);
}
solvePnP(inlierObjectPoints, inlierImagePoints, params.camera.intrinsics, params.camera.distortion, localRvec, localTvec, true, flags);
}
localRvec.copyTo(rvec);
localTvec.copyTo(tvec);
if (_inliers.needed())
Mat(localInliers).copyTo(_inliers);
if( _inliers.needed() )
_inliers.release();
return false;
}
else
{
tvec.setTo(Scalar(0));
Mat R = Mat::eye(3, 3, CV_64F);
Rodrigues(R, rvec);
if( _inliers.needed() )
_inliers.release();
_rvec.assign(_local_model.col(0)); // output rotation vector
_tvec.assign(_local_model.col(1)); // output translation vector
}
return;
if(_inliers.needed())
{
Mat _local_inliers;
int count = 0;
for (int i = 0; i < _mask_local_inliers.rows; ++i)
{
if((int)_mask_local_inliers.at<uchar>(i) == 1) // inliers mask
{
_local_inliers.push_back(count); // output inliers vector
count++;
}
}
_local_inliers.copyTo(_inliers);
}
return true;
}
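The RANSAC wrapper now reports success as a bool and takes a confidence value in place of the old minInliersCount argument; a minimal sketch of the updated call (values are placeholders, following the parameter order shown above):
std::vector<cv::Point3f> objectPoints;   // filled elsewhere
std::vector<cv::Point2f> imagePoints;    // filled elsewhere
cv::Mat K, distCoeffs;                   // intrinsics and distortion (filled elsewhere)
cv::Mat rvec, tvec;
std::vector<int> inliers;
bool found = cv::solvePnPRansac(objectPoints, imagePoints, K, distCoeffs, rvec, tvec,
                                false,    // useExtrinsicGuess
                                500,      // iterationsCount
                                2.0f,     // reprojectionError (pixels)
                                0.99,     // confidence
                                inliers, cv::SOLVEPNP_EPNP);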


@ -54,9 +54,10 @@ class CV_solvePnPRansac_Test : public cvtest::BaseTest
public:
CV_solvePnPRansac_Test()
{
eps[ITERATIVE] = 1.0e-2;
eps[EPNP] = 1.0e-2;
eps[P3P] = 1.0e-2;
eps[SOLVEPNP_ITERATIVE] = 1.0e-2;
eps[SOLVEPNP_EPNP] = 1.0e-2;
eps[SOLVEPNP_P3P] = 1.0e-2;
eps[SOLVEPNP_DLS] = 1.0e-2;
totalTestsCount = 10;
}
~CV_solvePnPRansac_Test() {}
@ -135,7 +136,7 @@ protected:
}
solvePnPRansac(points, projectedPoints, intrinsics, distCoeffs, rvec, tvec,
false, 500, 0.5, -1, inliers, method);
false, 500, 0.5, 0.99, inliers, method);
bool isTestSuccess = inliers.size() >= points.size()*0.95;
@ -153,13 +154,12 @@ protected:
{
ts->set_failed_test_info(cvtest::TS::OK);
vector<Point3f> points;
vector<Point3f> points, points_dls;
const int pointsCount = 500;
points.resize(pointsCount);
generate3DPointCloud(points);
const int methodsCount = 3;
const int methodsCount = 4;
RNG rng = ts->get_rng();
@ -184,7 +184,7 @@ protected:
}
}
}
double eps[3];
double eps[4];
int totalTestsCount;
};
@ -193,9 +193,10 @@ class CV_solvePnP_Test : public CV_solvePnPRansac_Test
public:
CV_solvePnP_Test()
{
eps[ITERATIVE] = 1.0e-6;
eps[EPNP] = 1.0e-6;
eps[P3P] = 1.0e-4;
eps[SOLVEPNP_ITERATIVE] = 1.0e-6;
eps[SOLVEPNP_EPNP] = 1.0e-6;
eps[SOLVEPNP_P3P] = 1.0e-4;
eps[SOLVEPNP_DLS] = 1.0e-4;
totalTestsCount = 1000;
}
@ -218,6 +219,10 @@ protected:
{
opoints = std::vector<Point3f>(points.begin(), points.begin()+4);
}
else if(method == 3)
{
opoints = std::vector<Point3f>(points.begin(), points.begin()+50);
}
else
opoints = points;
@ -239,7 +244,7 @@ protected:
}
};
TEST(DISABLED_Calib3d_SolvePnPRansac, accuracy) { CV_solvePnPRansac_Test test; test.safe_run(); }
TEST(Calib3d_SolvePnPRansac, accuracy) { CV_solvePnPRansac_Test test; test.safe_run(); }
TEST(Calib3d_SolvePnP, accuracy) { CV_solvePnP_Test test; test.safe_run(); }


@ -99,9 +99,13 @@ if(BUILD_EXAMPLES AND OCV_DEPENDENCIES_FOUND)
endif()
foreach(sample_filename ${cpp_samples})
get_filename_component(sample ${sample_filename} NAME_WE)
OPENCV_DEFINE_CPP_EXAMPLE(${sample} ${sample_filename})
if(NOT "${sample_filename}" MATCHES "real_time_pose_estimation/")
get_filename_component(sample ${sample_filename} NAME_WE)
OPENCV_DEFINE_CPP_EXAMPLE(${sample} ${sample_filename})
endif()
endforeach()
include("tutorial_code/calib3d/real_time_pose_estimation/CMakeLists.txt")
endif()
if(INSTALL_C_EXAMPLES AND NOT WIN32)


@ -0,0 +1,19 @@
set(sample_dir ${CMAKE_CURRENT_SOURCE_DIR}/tutorial_code/calib3d/real_time_pose_estimation/src/)
set(target cpp-tutorial-)
set(sample_pnplib
${sample_dir}CsvReader.cpp
${sample_dir}CsvWriter.cpp
${sample_dir}ModelRegistration.cpp
${sample_dir}Mesh.cpp
${sample_dir}Model.cpp
${sample_dir}PnPProblem.cpp
${sample_dir}Utils.cpp
${sample_dir}RobustMatcher.cpp
)
add_executable( ${target}pnp_registration ${sample_dir}main_registration.cpp ${sample_pnplib} )
add_executable( ${target}pnp_detection ${sample_dir}main_detection.cpp ${sample_pnplib} )
ocv_target_link_libraries( ${target}pnp_registration ${OPENCV_LINKER_LIBS} ${OPENCV_CPP_SAMPLES_REQUIRED_DEPS} )
ocv_target_link_libraries( ${target}pnp_detection ${OPENCV_LINKER_LIBS} ${OPENCV_CPP_SAMPLES_REQUIRED_DEPS} )


@ -0,0 +1,31 @@
ply
format ascii 1.0
comment made by anonymous
comment this file is a cube
element vertex 8
property float32 x
property float32 y
property float32 z
element face 12
property list uint8 int32 vertex_index
end_header
0 0 0
0 25.8 0
18.9 0 0
18.9 25.8 0
0 0 7.5
0 25.8 7.5
18.9 0 7.5
18.9 25.8 7.5
3 5 1 0
3 5 4 0
3 4 0 2
3 4 6 2
3 7 5 4
3 7 6 4
3 3 2 1
3 1 2 0
3 5 7 1
3 7 1 3
3 7 6 3
3 6 3 2

File diff suppressed because it is too large

Binary file not shown. (new image file, 102 KiB)


@ -0,0 +1,79 @@
#include "CsvReader.h"
/** The default constructor of the CSV reader Class */
CsvReader::CsvReader(const string &path, const char &separator){
_file.open(path.c_str(), ifstream::in);
_separator = separator;
}
/* Read a plain text file with .ply format */
void CsvReader::readPLY(vector<Point3f> &list_vertex, vector<vector<int> > &list_triangles)
{
std::string line, tmp_str, n;
int num_vertex = 0, num_triangles = 0;
int count = 0;
bool end_header = false;
bool end_vertex = false;
// Read the whole *.ply file
while (getline(_file, line)) {
stringstream liness(line);
// read header
if(!end_header)
{
getline(liness, tmp_str, _separator);
if( tmp_str == "element" )
{
getline(liness, tmp_str, _separator);
getline(liness, n);
if(tmp_str == "vertex") num_vertex = StringToInt(n);
if(tmp_str == "face") num_triangles = StringToInt(n);
}
if(tmp_str == "end_header") end_header = true;
}
// read file content
else if(end_header)
{
// read vertex and add into 'list_vertex'
if(!end_vertex && count < num_vertex)
{
string x, y, z;
getline(liness, x, _separator);
getline(liness, y, _separator);
getline(liness, z);
cv::Point3f tmp_p;
tmp_p.x = (float)StringToInt(x);
tmp_p.y = (float)StringToInt(y);
tmp_p.z = (float)StringToInt(z);
list_vertex.push_back(tmp_p);
count++;
if(count == num_vertex)
{
count = 0;
end_vertex = !end_vertex;
}
}
// read faces and add into 'list_triangles'
else if(end_vertex && count < num_triangles)
{
string num_pts_per_face, id0, id1, id2;
getline(liness, num_pts_per_face, _separator);
getline(liness, id0, _separator);
getline(liness, id1, _separator);
getline(liness, id2);
std::vector<int> tmp_triangle(3);
tmp_triangle[0] = StringToInt(id0);
tmp_triangle[1] = StringToInt(id1);
tmp_triangle[2] = StringToInt(id2);
list_triangles.push_back(tmp_triangle);
count++;
}
}
}
}


@ -0,0 +1,40 @@
#ifndef CSVREADER_H
#define CSVREADER_H
#include <iostream>
#include <fstream>
#include <opencv2/core/core.hpp>
#include "Utils.h"
using namespace std;
using namespace cv;
class CsvReader {
public:
/**
* The default constructor of the CSV reader Class.
* The default separator is ' ' (empty space)
*
* @param path - The path of the file to read
* @param separator - The separator character between words per line
* @return
*/
CsvReader(const string &path, const char &separator = ' ');
/**
* Read a plain text file with .ply format
*
* @param list_vertex - The container of the vertices list of the mesh
* @param list_triangle - The container of the triangles list of the mesh
* @return
*/
void readPLY(vector<Point3f> &list_vertex, vector<vector<int> > &list_triangles);
private:
/** The current stream file for the reader */
ifstream _file;
/** The separator character between words for each line */
char _separator;
};
#endif
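Combined with the cube mesh listed earlier, the reader is used as follows (the path is a placeholder):
CsvReader reader("Data/box.ply");        // hypothetical path to the .ply shown above
std::vector<cv::Point3f> vertices;
std::vector<std::vector<int> > triangles;
reader.readPLY(vertices, triangles);     // yields 8 vertices and 12 triangles for that cube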


@ -0,0 +1,48 @@
#include "CsvWriter.h"
CsvWriter::CsvWriter(const string &path, const string &separator){
_file.open(path.c_str(), ofstream::out);
_isFirstTerm = true;
_separator = separator;
}
CsvWriter::~CsvWriter() {
_file.flush();
_file.close();
}
void CsvWriter::writeXYZ(const vector<Point3f> &list_points3d)
{
string x, y, z;
for(unsigned int i = 0; i < list_points3d.size(); ++i)
{
x = FloatToString(list_points3d[i].x);
y = FloatToString(list_points3d[i].y);
z = FloatToString(list_points3d[i].z);
_file << x << _separator << y << _separator << z << std::endl;
}
}
void CsvWriter::writeUVXYZ(const vector<Point3f> &list_points3d, const vector<Point2f> &list_points2d, const Mat &descriptors)
{
string u, v, x, y, z, descriptor_str;
for(unsigned int i = 0; i < list_points3d.size(); ++i)
{
u = FloatToString(list_points2d[i].x);
v = FloatToString(list_points2d[i].y);
x = FloatToString(list_points3d[i].x);
y = FloatToString(list_points3d[i].y);
z = FloatToString(list_points3d[i].z);
_file << u << _separator << v << _separator << x << _separator << y << _separator << z;
for(int j = 0; j < 32; ++j)
{
descriptor_str = FloatToString(descriptors.at<float>(i,j));
_file << _separator << descriptor_str;
}
_file << std::endl;
}
}


@ -0,0 +1,25 @@
#ifndef CSVWRITER_H
#define CSVWRITER_H
#include <iostream>
#include <fstream>
#include <opencv2/core/core.hpp>
#include "Utils.h"
using namespace std;
using namespace cv;
class CsvWriter {
public:
CsvWriter(const string &path, const string &separator = " ");
~CsvWriter();
void writeXYZ(const vector<Point3f> &list_points3d);
void writeUVXYZ(const vector<Point3f> &list_points3d, const vector<Point2f> &list_points2d, const Mat &descriptors);
private:
ofstream _file;
string _separator;
bool _isFirstTerm;
};
#endif


@ -0,0 +1,82 @@
/*
* Mesh.cpp
*
* Created on: Apr 9, 2014
* Author: edgar
*/
#include "Mesh.h"
#include "CsvReader.h"
// --------------------------------------------------- //
// TRIANGLE CLASS //
// --------------------------------------------------- //
/** The custom constructor of the Triangle Class */
Triangle::Triangle(int id, cv::Point3f V0, cv::Point3f V1, cv::Point3f V2)
{
id_ = id; v0_ = V0; v1_ = V1; v2_ = V2;
}
/** The default destructor of the Class */
Triangle::~Triangle()
{
// TODO Auto-generated destructor stub
}
// --------------------------------------------------- //
// RAY CLASS //
// --------------------------------------------------- //
/** The custom constructor of the Ray Class */
Ray::Ray(cv::Point3f P0, cv::Point3f P1) {
p0_ = P0; p1_ = P1;
}
/** The default destructor of the Class */
Ray::~Ray()
{
// TODO Auto-generated destructor stub
}
// --------------------------------------------------- //
// OBJECT MESH CLASS //
// --------------------------------------------------- //
/** The default constructor of the ObjectMesh Class */
Mesh::Mesh() : list_vertex_(0) , list_triangles_(0)
{
id_ = 0;
num_vertexs_ = 0;
num_triangles_ = 0;
}
/** The default destructor of the ObjectMesh Class */
Mesh::~Mesh()
{
// TODO Auto-generated destructor stub
}
/** Load a mesh from a plain-text *.ply file **/
void Mesh::load(const std::string path)
{
// Create the reader
CsvReader csvReader(path);
// Clear previous data
list_vertex_.clear();
list_triangles_.clear();
// Read from .ply file
csvReader.readPLY(list_vertex_, list_triangles_);
// Update mesh attributes
num_vertexs_ = (int)list_vertex_.size();
num_triangles_ = (int)list_triangles_.size();
}


@ -0,0 +1,86 @@
/*
* Mesh.h
*
* Created on: Apr 9, 2014
* Author: edgar
*/
#ifndef MESH_H_
#define MESH_H_
#include <iostream>
#include <opencv2/core/core.hpp>
// --------------------------------------------------- //
// TRIANGLE CLASS //
// --------------------------------------------------- //
class Triangle {
public:
explicit Triangle(int id, cv::Point3f V0, cv::Point3f V1, cv::Point3f V2);
virtual ~Triangle();
cv::Point3f getV0() const { return v0_; }
cv::Point3f getV1() const { return v1_; }
cv::Point3f getV2() const { return v2_; }
private:
/** The identifier number of the triangle */
int id_;
/** The three vertices that defines the triangle */
cv::Point3f v0_, v1_, v2_;
};
// --------------------------------------------------- //
// RAY CLASS //
// --------------------------------------------------- //
class Ray {
public:
explicit Ray(cv::Point3f P0, cv::Point3f P1);
virtual ~Ray();
cv::Point3f getP0() { return p0_; }
cv::Point3f getP1() { return p1_; }
private:
/** The two points that defines the ray */
cv::Point3f p0_, p1_;
};
// --------------------------------------------------- //
// OBJECT MESH CLASS //
// --------------------------------------------------- //
class Mesh
{
public:
Mesh();
virtual ~Mesh();
std::vector<std::vector<int> > getTrianglesList() const { return list_triangles_; }
cv::Point3f getVertex(int pos) const { return list_vertex_[pos]; }
int getNumVertices() const { return num_vertexs_; }
void load(const std::string path_file);
private:
/** The identification number of the mesh */
int id_;
/** The current number of vertices in the mesh */
int num_vertexs_;
/** The current number of triangles in the mesh */
int num_triangles_;
/* The list of vertices of the mesh */
std::vector<cv::Point3f> list_vertex_;
/* The list of triangles of the mesh */
std::vector<std::vector<int> > list_triangles_;
};
#endif /* OBJECTMESH_H_ */
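The Mesh class wraps the same reader; a short usage sketch (the path is again a placeholder):
Mesh mesh;
mesh.load("Data/box.ply");                                            // hypothetical path
int num_vertices = mesh.getNumVertices();                             // 8 for the cube above
std::vector<std::vector<int> > triangles = mesh.getTrianglesList();   // 12 triangles
cv::Point3f v0 = mesh.getVertex(0);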


@ -0,0 +1,73 @@
/*
* Model.cpp
*
* Created on: Apr 9, 2014
* Author: edgar
*/
#include "Model.h"
#include "CsvWriter.h"
Model::Model() : list_points2d_in_(0), list_points2d_out_(0), list_points3d_in_(0)
{
n_correspondences_ = 0;
}
Model::~Model()
{
// TODO Auto-generated destructor stub
}
void Model::add_correspondence(const cv::Point2f &point2d, const cv::Point3f &point3d)
{
list_points2d_in_.push_back(point2d);
list_points3d_in_.push_back(point3d);
n_correspondences_++;
}
void Model::add_outlier(const cv::Point2f &point2d)
{
list_points2d_out_.push_back(point2d);
}
void Model::add_descriptor(const cv::Mat &descriptor)
{
descriptors_.push_back(descriptor);
}
void Model::add_keypoint(const cv::KeyPoint &kp)
{
list_keypoints_.push_back(kp);
}
/** Save the model data (3D points, 2D points, keypoints and descriptors) to a YAML file using cv::FileStorage */
void Model::save(const std::string path)
{
cv::Mat points3dmatrix = cv::Mat(list_points3d_in_);
cv::Mat points2dmatrix = cv::Mat(list_points2d_in_);
//cv::Mat keyPointmatrix = cv::Mat(list_keypoints_);
cv::FileStorage storage(path, cv::FileStorage::WRITE);
storage << "points_3d" << points3dmatrix;
storage << "points_2d" << points2dmatrix;
storage << "keypoints" << list_keypoints_;
storage << "descriptors" << descriptors_;
storage.release();
}
/** Load a YAML file using OpenCV functions **/
void Model::load(const std::string path)
{
cv::Mat points3d_mat;
cv::FileStorage storage(path, cv::FileStorage::READ);
storage["points_3d"] >> points3d_mat;
storage["descriptors"] >> descriptors_;
points3d_mat.copyTo(list_points3d_in_);
storage.release();
}


@ -0,0 +1,54 @@
/*
* Model.h
*
* Created on: Apr 9, 2014
* Author: edgar
*/
#ifndef MODEL_H_
#define MODEL_H_
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
class Model
{
public:
Model();
virtual ~Model();
std::vector<cv::Point2f> get_points2d_in() const { return list_points2d_in_; }
std::vector<cv::Point2f> get_points2d_out() const { return list_points2d_out_; }
std::vector<cv::Point3f> get_points3d() const { return list_points3d_in_; }
std::vector<cv::KeyPoint> get_keypoints() const { return list_keypoints_; }
cv::Mat get_descriptors() const { return descriptors_; }
int get_numDescriptors() const { return descriptors_.rows; }
void add_correspondence(const cv::Point2f &point2d, const cv::Point3f &point3d);
void add_outlier(const cv::Point2f &point2d);
void add_descriptor(const cv::Mat &descriptor);
void add_keypoint(const cv::KeyPoint &kp);
void save(const std::string path);
void load(const std::string path);
private:
/** The current number of correspondences */
int n_correspondences_;
/** The list of keypoints detected on the model surface */
std::vector<cv::KeyPoint> list_keypoints_;
/** The list of 2D points on the model surface */
std::vector<cv::Point2f> list_points2d_in_;
/** The list of 2D points outside the model surface */
std::vector<cv::Point2f> list_points2d_out_;
/** The list of 3D points on the model surface */
std::vector<cv::Point3f> list_points3d_in_;
/** The list of 2D points descriptors */
cv::Mat descriptors_;
};
#endif /* OBJECTMODEL_H_ */
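During registration the Model accumulates 2D-3D correspondences and their descriptors and is serialised with cv::FileStorage; a sketch of the round trip (file name and values are placeholders):
Model model;
model.add_correspondence(cv::Point2f(100.f, 50.f), cv::Point3f(0.f, 0.f, 0.f));
model.save("model.yml");      // writes points_3d, points_2d, keypoints and descriptors
Model loaded;
loaded.load("model.yml");     // restores points_3d and descriptors
std::vector<cv::Point3f> points3d = loaded.get_points3d();
cv::Mat descriptors = loaded.get_descriptors();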


@ -0,0 +1,35 @@
/*
* ModelRegistration.cpp
*
* Created on: Apr 18, 2014
* Author: edgar
*/
#include "ModelRegistration.h"
ModelRegistration::ModelRegistration()
{
n_registrations_ = 0;
max_registrations_ = 0;
}
ModelRegistration::~ModelRegistration()
{
// TODO Auto-generated destructor stub
}
void ModelRegistration::registerPoint(const cv::Point2f &point2d, const cv::Point3f &point3d)
{
// add correspondence at the end of the vector
list_points2d_.push_back(point2d);
list_points3d_.push_back(point3d);
n_registrations_++;
}
void ModelRegistration::reset()
{
n_registrations_ = 0;
max_registrations_ = 0;
list_points2d_.clear();
list_points3d_.clear();
}

View File

@ -0,0 +1,43 @@
/*
* ModelRegistration.h
*
* Created on: Apr 18, 2014
* Author: edgar
*/
#ifndef MODELREGISTRATION_H_
#define MODELREGISTRATION_H_
#include <iostream>
#include <opencv2/core/core.hpp>
class ModelRegistration
{
public:
ModelRegistration();
virtual ~ModelRegistration();
void setNumMax(int n) { max_registrations_ = n; }
std::vector<cv::Point2f> get_points2d() const { return list_points2d_; }
std::vector<cv::Point3f> get_points3d() const { return list_points3d_; }
int getNumMax() const { return max_registrations_; }
int getNumRegist() const { return n_registrations_; }
bool is_registrable() const { return (n_registrations_ < max_registrations_); }
void registerPoint(const cv::Point2f &point2d, const cv::Point3f &point3d);
void reset();
private:
/** The current number of registered points */
int n_registrations_;
/** The total number of points to register */
int max_registrations_;
/** The list of 2D points to register the model */
std::vector<cv::Point2f> list_points2d_;
/** The list of 3D points to register the model */
std::vector<cv::Point3f> list_points3d_;
};
#endif /* MODELREGISTRATION_H_ */

View File

@ -0,0 +1,318 @@
/*
* PnPProblem.cpp
*
* Created on: Mar 28, 2014
* Author: Edgar Riba
*/
#include <iostream>
#include <sstream>
#include "PnPProblem.h"
#include "Mesh.h"
#include <opencv2/calib3d/calib3d.hpp>
/* Functions headers */
cv::Point3f CROSS(cv::Point3f v1, cv::Point3f v2);
double DOT(cv::Point3f v1, cv::Point3f v2);
cv::Point3f SUB(cv::Point3f v1, cv::Point3f v2);
cv::Point3f get_nearest_3D_point(std::vector<cv::Point3f> &points_list, cv::Point3f origin);
/* Functions for Möller-Trumbore intersection algorithm */
cv::Point3f CROSS(cv::Point3f v1, cv::Point3f v2)
{
cv::Point3f tmp_p;
tmp_p.x = v1.y*v2.z - v1.z*v2.y;
tmp_p.y = v1.z*v2.x - v1.x*v2.z;
tmp_p.z = v1.x*v2.y - v1.y*v2.x;
return tmp_p;
}
double DOT(cv::Point3f v1, cv::Point3f v2)
{
return v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;
}
cv::Point3f SUB(cv::Point3f v1, cv::Point3f v2)
{
cv::Point3f tmp_p;
tmp_p.x = v1.x - v2.x;
tmp_p.y = v1.y - v2.y;
tmp_p.z = v1.z - v2.z;
return tmp_p;
}
/* End functions for Möller-Trumbore intersection algorithm */
// Function to get the nearest 3D point to the Ray origin
cv::Point3f get_nearest_3D_point(std::vector<cv::Point3f> &points_list, cv::Point3f origin)
{
cv::Point3f p1 = points_list[0];
cv::Point3f p2 = points_list[1];
double d1 = std::sqrt( std::pow(p1.x-origin.x, 2) + std::pow(p1.y-origin.y, 2) + std::pow(p1.z-origin.z, 2) );
double d2 = std::sqrt( std::pow(p2.x-origin.x, 2) + std::pow(p2.y-origin.y, 2) + std::pow(p2.z-origin.z, 2) );
if(d1 < d2)
{
return p1;
}
else
{
return p2;
}
}
// Custom constructor given the intrinsic camera parameters
PnPProblem::PnPProblem(const double params[])
{
_A_matrix = cv::Mat::zeros(3, 3, CV_64FC1); // intrinsic camera parameters
_A_matrix.at<double>(0, 0) = params[0]; // [ fx 0 cx ]
_A_matrix.at<double>(1, 1) = params[1]; // [ 0 fy cy ]
_A_matrix.at<double>(0, 2) = params[2]; // [ 0 0 1 ]
_A_matrix.at<double>(1, 2) = params[3];
_A_matrix.at<double>(2, 2) = 1;
_R_matrix = cv::Mat::zeros(3, 3, CV_64FC1); // rotation matrix
_t_matrix = cv::Mat::zeros(3, 1, CV_64FC1); // translation matrix
_P_matrix = cv::Mat::zeros(3, 4, CV_64FC1); // rotation-translation matrix
}
PnPProblem::~PnPProblem()
{
// TODO Auto-generated destructor stub
}
void PnPProblem::set_P_matrix( const cv::Mat &R_matrix, const cv::Mat &t_matrix)
{
// Rotation-Translation Matrix Definition
_P_matrix.at<double>(0,0) = R_matrix.at<double>(0,0);
_P_matrix.at<double>(0,1) = R_matrix.at<double>(0,1);
_P_matrix.at<double>(0,2) = R_matrix.at<double>(0,2);
_P_matrix.at<double>(1,0) = R_matrix.at<double>(1,0);
_P_matrix.at<double>(1,1) = R_matrix.at<double>(1,1);
_P_matrix.at<double>(1,2) = R_matrix.at<double>(1,2);
_P_matrix.at<double>(2,0) = R_matrix.at<double>(2,0);
_P_matrix.at<double>(2,1) = R_matrix.at<double>(2,1);
_P_matrix.at<double>(2,2) = R_matrix.at<double>(2,2);
_P_matrix.at<double>(0,3) = t_matrix.at<double>(0);
_P_matrix.at<double>(1,3) = t_matrix.at<double>(1);
_P_matrix.at<double>(2,3) = t_matrix.at<double>(2);
}
// Estimate the pose given a list of 2D/3D correspondences and the method to use
bool PnPProblem::estimatePose( const std::vector<cv::Point3f> &list_points3d,
const std::vector<cv::Point2f> &list_points2d,
int flags)
{
cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64FC1);
cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64FC1);
cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64FC1);
bool useExtrinsicGuess = false;
// Pose estimation
bool correspondence = cv::solvePnP( list_points3d, list_points2d, _A_matrix, distCoeffs, rvec, tvec,
useExtrinsicGuess, flags);
// Transforms Rotation Vector to Matrix
Rodrigues(rvec,_R_matrix);
_t_matrix = tvec;
// Set projection matrix
this->set_P_matrix(_R_matrix, _t_matrix);
return correspondence;
}
// Estimate the pose given a list of 2D/3D correspondences with RANSAC and the method to use
void PnPProblem::estimatePoseRANSAC( const std::vector<cv::Point3f> &list_points3d, // list with model 3D coordinates
const std::vector<cv::Point2f> &list_points2d, // list with scene 2D coordinates
int flags, cv::Mat &inliers, int iterationsCount, // PnP method; inliers container
float reprojectionError, double confidence ) // Ransac parameters
{
cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64FC1); // vector of distortion coefficients
cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64FC1); // output rotation vector
cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64FC1); // output translation vector
bool useExtrinsicGuess = false; // if true the function uses the provided rvec and tvec values as
// initial approximations of the rotation and translation vectors
cv::solvePnPRansac( list_points3d, list_points2d, _A_matrix, distCoeffs, rvec, tvec,
useExtrinsicGuess, iterationsCount, reprojectionError, confidence,
inliers, flags );
Rodrigues(rvec,_R_matrix); // converts Rotation Vector to Matrix
_t_matrix = tvec; // set translation matrix
this->set_P_matrix(_R_matrix, _t_matrix); // set rotation-translation matrix
}
// Given the mesh, backproject the 3D points to 2D to verify the pose estimation
std::vector<cv::Point2f> PnPProblem::verify_points(Mesh *mesh)
{
std::vector<cv::Point2f> verified_points_2d;
for( int i = 0; i < mesh->getNumVertices(); i++)
{
cv::Point3f point3d = mesh->getVertex(i);
cv::Point2f point2d = this->backproject3DPoint(point3d);
verified_points_2d.push_back(point2d);
}
return verified_points_2d;
}
// Backproject a 3D point to 2D using the estimated pose parameters
cv::Point2f PnPProblem::backproject3DPoint(const cv::Point3f &point3d)
{
// 3D point vector [x y z 1]'
cv::Mat point3d_vec = cv::Mat(4, 1, CV_64FC1);
point3d_vec.at<double>(0) = point3d.x;
point3d_vec.at<double>(1) = point3d.y;
point3d_vec.at<double>(2) = point3d.z;
point3d_vec.at<double>(3) = 1;
// 2D point vector [u v 1]'
cv::Mat point2d_vec = cv::Mat(3, 1, CV_64FC1);
point2d_vec = _A_matrix * _P_matrix * point3d_vec;
// Normalization of [u v]'
cv::Point2f point2d;
point2d.x = (float)(point2d_vec.at<double>(0) / point2d_vec.at<double>(2));
point2d.y = (float)(point2d_vec.at<double>(1) / point2d_vec.at<double>(2));
return point2d;
}
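/* The projection implemented above follows the standard pinhole model
 * (a brief reminder, not extra functionality):
 *
 *   s * [u v 1]' = A * [R|t] * [X Y Z 1]'
 *
 * where A is _A_matrix, [R|t] is _P_matrix, and s is the depth value
 * point2d_vec.at<double>(2) used for the normalization of u and v.
 */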
// Back-project a 2D point to 3D and return whether it lies on the object surface
bool PnPProblem::backproject2DPoint(const Mesh *mesh, const cv::Point2f &point2d, cv::Point3f &point3d)
{
// Triangles list of the object mesh
std::vector<std::vector<int> > triangles_list = mesh->getTrianglesList();
double lambda = 8;
double u = point2d.x;
double v = point2d.y;
// Point in vector form
cv::Mat point2d_vec = cv::Mat::ones(3, 1, CV_64F); // 3x1
point2d_vec.at<double>(0) = u * lambda;
point2d_vec.at<double>(1) = v * lambda;
point2d_vec.at<double>(2) = lambda;
// Point in camera coordinates
cv::Mat X_c = _A_matrix.inv() * point2d_vec ; // 3x1
// Point in world coordinates
cv::Mat X_w = _R_matrix.inv() * ( X_c - _t_matrix ); // 3x1
// Center of projection
cv::Mat C_op = cv::Mat(_R_matrix.inv()).mul(-1) * _t_matrix; // 3x1
// Ray direction vector
cv::Mat ray = X_w - C_op; // 3x1
ray = ray / cv::norm(ray); // 3x1
// Set up Ray
Ray R((cv::Point3f)C_op, (cv::Point3f)ray);
// A vector to store the intersections found
std::vector<cv::Point3f> intersections_list;
// Loop for all the triangles and check the intersection
for (unsigned int i = 0; i < triangles_list.size(); i++)
{
cv::Point3f V0 = mesh->getVertex(triangles_list[i][0]);
cv::Point3f V1 = mesh->getVertex(triangles_list[i][1]);
cv::Point3f V2 = mesh->getVertex(triangles_list[i][2]);
Triangle T(i, V0, V1, V2);
double out;
if(this->intersect_MollerTrumbore(R, T, &out))
{
cv::Point3f tmp_pt = R.getP0() + out*R.getP1(); // P = O + t*D
intersections_list.push_back(tmp_pt);
}
}
// If there are intersections, find the nearest one
if (!intersections_list.empty())
{
point3d = get_nearest_3D_point(intersections_list, R.getP0());
return true;
}
else
{
return false;
}
}
// Möller-Trumbore intersection algorithm
bool PnPProblem::intersect_MollerTrumbore(Ray &ray, Triangle &triangle, double *out)
{
const double EPSILON = 0.000001;
cv::Point3f e1, e2;
cv::Point3f P, Q, T;
double det, inv_det, u, v;
double t;
cv::Point3f V1 = triangle.getV0(); // Triangle vertices
cv::Point3f V2 = triangle.getV1();
cv::Point3f V3 = triangle.getV2();
cv::Point3f O = ray.getP0(); // Ray origin
cv::Point3f D = ray.getP1(); // Ray direction
//Find vectors for two edges sharing V1
e1 = SUB(V2, V1);
e2 = SUB(V3, V1);
// Begin calculation determinant - also used to calculate U parameter
P = CROSS(D, e2);
// If determinant is near zero, ray lies in plane of triangle
det = DOT(e1, P);
//NOT CULLING
if(det > -EPSILON && det < EPSILON) return false;
inv_det = 1.f / det;
//calculate distance from V1 to ray origin
T = SUB(O, V1);
//Calculate u parameter and test bound
u = DOT(T, P) * inv_det;
//The intersection lies outside of the triangle
if(u < 0.f || u > 1.f) return false;
//Prepare to test v parameter
Q = CROSS(T, e1);
//Calculate V parameter and test bound
v = DOT(D, Q) * inv_det;
//The intersection lies outside of the triangle
if(v < 0.f || u + v > 1.f) return false;
t = DOT(e2, Q) * inv_det;
if(t > EPSILON) { //ray intersection
*out = t;
return true;
}
// No hit, no win
return false;
}
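/* Möller-Trumbore in a nutshell (explanatory note): the routine above solves
 *
 *   O + t*D = (1-u-v)*V1 + u*V2 + v*V3
 *
 * for the ray parameter t and the barycentric coordinates (u, v). The hit is
 * accepted only if u >= 0, v >= 0, u + v <= 1 (inside the triangle) and
 * t > EPSILON (intersection in front of the ray origin).
 */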

View File

@ -0,0 +1,58 @@
/*
* PnPProblem.h
*
* Created on: Mar 28, 2014
* Author: Edgar Riba
*/
#ifndef PNPPROBLEM_H_
#define PNPPROBLEM_H_
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "Mesh.h"
#include "ModelRegistration.h"
class PnPProblem
{
public:
explicit PnPProblem(const double param[]); // custom constructor
virtual ~PnPProblem();
bool backproject2DPoint(const Mesh *mesh, const cv::Point2f &point2d, cv::Point3f &point3d);
bool intersect_MollerTrumbore(Ray &R, Triangle &T, double *out);
std::vector<cv::Point2f> verify_points(Mesh *mesh);
cv::Point2f backproject3DPoint(const cv::Point3f &point3d);
bool estimatePose(const std::vector<cv::Point3f> &list_points3d, const std::vector<cv::Point2f> &list_points2d, int flags);
void estimatePoseRANSAC( const std::vector<cv::Point3f> &list_points3d, const std::vector<cv::Point2f> &list_points2d,
int flags, cv::Mat &inliers,
int iterationsCount, float reprojectionError, double confidence );
cv::Mat get_A_matrix() const { return _A_matrix; }
cv::Mat get_R_matrix() const { return _R_matrix; }
cv::Mat get_t_matrix() const { return _t_matrix; }
cv::Mat get_P_matrix() const { return _P_matrix; }
void set_P_matrix( const cv::Mat &R_matrix, const cv::Mat &t_matrix);
private:
/** The calibration matrix */
cv::Mat _A_matrix;
/** The computed rotation matrix */
cv::Mat _R_matrix;
/** The computed translation matrix */
cv::Mat _t_matrix;
/** The computed projection matrix */
cv::Mat _P_matrix;
};
// Functions for Möller-Trumbore intersection algorithm
cv::Point3f CROSS(cv::Point3f v1, cv::Point3f v2);
double DOT(cv::Point3f v1, cv::Point3f v2);
cv::Point3f SUB(cv::Point3f v1, cv::Point3f v2);
#endif /* PNPPROBLEM_H_ */

View File

@ -0,0 +1,152 @@
/*
* RobustMatcher.cpp
*
* Created on: Jun 4, 2014
* Author: eriba
*/
#include "RobustMatcher.h"
#include <time.h>
#include <opencv2/features2d/features2d.hpp>
RobustMatcher::~RobustMatcher()
{
// TODO Auto-generated destructor stub
}
void RobustMatcher::computeKeyPoints( const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints)
{
detector_->detect(image, keypoints);
}
void RobustMatcher::computeDescriptors( const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints, cv::Mat& descriptors)
{
extractor_->compute(image, keypoints, descriptors);
}
int RobustMatcher::ratioTest(std::vector<std::vector<cv::DMatch> > &matches)
{
int removed = 0;
// for all matches
for ( std::vector<std::vector<cv::DMatch> >::iterator
matchIterator= matches.begin(); matchIterator!= matches.end(); ++matchIterator)
{
// if 2 nearest neighbours have been identified
if (matchIterator->size() > 1)
{
// check distance ratio
if ((*matchIterator)[0].distance / (*matchIterator)[1].distance > ratio_)
{
matchIterator->clear(); // remove match
removed++;
}
}
else
{ // does not have 2 neighbours
matchIterator->clear(); // remove match
removed++;
}
}
return removed;
}
void RobustMatcher::symmetryTest( const std::vector<std::vector<cv::DMatch> >& matches1,
const std::vector<std::vector<cv::DMatch> >& matches2,
std::vector<cv::DMatch>& symMatches )
{
// for all matches image 1 -> image 2
for (std::vector<std::vector<cv::DMatch> >::const_iterator
matchIterator1 = matches1.begin(); matchIterator1 != matches1.end(); ++matchIterator1)
{
// ignore deleted matches
if (matchIterator1->empty() || matchIterator1->size() < 2)
continue;
// for all matches image 2 -> image 1
for (std::vector<std::vector<cv::DMatch> >::const_iterator
matchIterator2 = matches2.begin(); matchIterator2 != matches2.end(); ++matchIterator2)
{
// ignore deleted matches
if (matchIterator2->empty() || matchIterator2->size() < 2)
continue;
// Match symmetry test
if ((*matchIterator1)[0].queryIdx ==
(*matchIterator2)[0].trainIdx &&
(*matchIterator2)[0].queryIdx ==
(*matchIterator1)[0].trainIdx)
{
// add symmetrical match
symMatches.push_back(
cv::DMatch((*matchIterator1)[0].queryIdx,
(*matchIterator1)[0].trainIdx,
(*matchIterator1)[0].distance));
break; // next match in image 1 -> image 2
}
}
}
}
void RobustMatcher::robustMatch( const cv::Mat& frame, std::vector<cv::DMatch>& good_matches,
std::vector<cv::KeyPoint>& keypoints_frame, const cv::Mat& descriptors_model )
{
// 1a. Detection of the ORB features
this->computeKeyPoints(frame, keypoints_frame);
// 1b. Extraction of the ORB descriptors
cv::Mat descriptors_frame;
this->computeDescriptors(frame, keypoints_frame, descriptors_frame);
// 2. Match the two image descriptors
std::vector<std::vector<cv::DMatch> > matches12, matches21;
// 2a. From image 1 to image 2
matcher_->knnMatch(descriptors_frame, descriptors_model, matches12, 2); // return 2 nearest neighbours
// 2b. From image 2 to image 1
matcher_->knnMatch(descriptors_model, descriptors_frame, matches21, 2); // return 2 nearest neighbours
// 3. Remove matches for which NN ratio is > than threshold
// clean image 1 -> image 2 matches
ratioTest(matches12);
// clean image 2 -> image 1 matches
ratioTest(matches21);
// 4. Remove non-symmetrical matches
symmetryTest(matches12, matches21, good_matches);
}
void RobustMatcher::fastRobustMatch( const cv::Mat& frame, std::vector<cv::DMatch>& good_matches,
std::vector<cv::KeyPoint>& keypoints_frame,
const cv::Mat& descriptors_model )
{
good_matches.clear();
// 1a. Detection of the ORB features
this->computeKeyPoints(frame, keypoints_frame);
// 1b. Extraction of the ORB descriptors
cv::Mat descriptors_frame;
this->computeDescriptors(frame, keypoints_frame, descriptors_frame);
// 2. Match the two image descriptors
std::vector<std::vector<cv::DMatch> > matches;
matcher_->knnMatch(descriptors_frame, descriptors_model, matches, 2);
// 3. Remove matches for which NN ratio is > than threshold
ratioTest(matches);
// 4. Fill good matches container
for ( std::vector<std::vector<cv::DMatch> >::iterator
matchIterator= matches.begin(); matchIterator!= matches.end(); ++matchIterator)
{
if (!matchIterator->empty()) good_matches.push_back((*matchIterator)[0]);
}
}
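/* Minimal usage sketch (frame and descriptors_model are hypothetical variables):
 *
 *   RobustMatcher rmatcher;                 // ORB + brute-force Hamming by default
 *   std::vector<cv::DMatch> good_matches;
 *   std::vector<cv::KeyPoint> keypoints_frame;
 *   rmatcher.robustMatch(frame, good_matches, keypoints_frame, descriptors_model);
 *
 * robustMatch() applies the ratio test in both matching directions plus the
 * symmetry test; fastRobustMatch() trades robustness for speed by matching only
 * frame -> model and applying the ratio test once.
 */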

View File

@ -0,0 +1,81 @@
/*
* RobustMatcher.h
*
* Created on: Jun 4, 2014
* Author: eriba
*/
#ifndef ROBUSTMATCHER_H_
#define ROBUSTMATCHER_H_
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
class RobustMatcher {
public:
RobustMatcher() : ratio_(0.8f)
{
// ORB is the default feature
detector_ = new cv::OrbFeatureDetector();
extractor_ = new cv::OrbDescriptorExtractor();
// Brute-force matcher with Hamming norm is the default matcher
matcher_ = new cv::BFMatcher(cv::NORM_HAMMING, false);
}
virtual ~RobustMatcher();
// Set the feature detector
void setFeatureDetector(cv::FeatureDetector * detect) { detector_ = detect; }
// Set the descriptor extractor
void setDescriptorExtractor(cv::DescriptorExtractor * desc) { extractor_ = desc; }
// Set the matcher
void setDescriptorMatcher(cv::DescriptorMatcher * match) { matcher_ = match; }
// Compute the keypoints of an image
void computeKeyPoints( const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints);
// Compute the descriptors of an image given its keypoints
void computeDescriptors( const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints, cv::Mat& descriptors);
// Set ratio parameter for the ratio test
void setRatio( float rat) { ratio_ = rat; }
// Clear matches for which NN ratio is > than threshold
// return the number of removed points
// (corresponding entries being cleared,
// i.e. size will be 0)
int ratioTest(std::vector<std::vector<cv::DMatch> > &matches);
// Insert symmetrical matches in symMatches vector
void symmetryTest( const std::vector<std::vector<cv::DMatch> >& matches1,
const std::vector<std::vector<cv::DMatch> >& matches2,
std::vector<cv::DMatch>& symMatches );
// Match feature points using ratio and symmetry test
void robustMatch( const cv::Mat& frame, std::vector<cv::DMatch>& good_matches,
std::vector<cv::KeyPoint>& keypoints_frame,
const cv::Mat& descriptors_model );
// Match feature points using ratio test
void fastRobustMatch( const cv::Mat& frame, std::vector<cv::DMatch>& good_matches,
std::vector<cv::KeyPoint>& keypoints_frame,
const cv::Mat& descriptors_model );
private:
// pointer to the feature point detector object
cv::FeatureDetector * detector_;
// pointer to the feature descriptor extractor object
cv::DescriptorExtractor * extractor_;
// pointer to the matcher object
cv::DescriptorMatcher * matcher_;
// max ratio between 1st and 2nd NN
float ratio_;
};
#endif /* ROBUSTMATCHER_H_ */

View File

@ -0,0 +1,292 @@
/*
* Utils.cpp
*
* Created on: Mar 28, 2014
* Author: Edgar Riba
*/
#include <iostream>
#include "PnPProblem.h"
#include "ModelRegistration.h"
#include "Utils.h"
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
// For text
int fontFace = cv::FONT_ITALIC;
double fontScale = 0.75;
int thickness_font = 2;
// For circles
int lineType = 8;
int radius = 4;
double thickness_circ = -1;
// Draw a text with the question point
void drawQuestion(cv::Mat image, cv::Point3f point, cv::Scalar color)
{
std::string x = IntToString((int)point.x);
std::string y = IntToString((int)point.y);
std::string z = IntToString((int)point.z);
std::string text = " Where is point (" + x + "," + y + "," + z + ") ?";
cv::putText(image, text, cv::Point(25,50), fontFace, fontScale, color, thickness_font, 8);
}
// Draw a line of text in the upper left corner
void drawText(cv::Mat image, std::string text, cv::Scalar color)
{
cv::putText(image, text, cv::Point(25,50), fontFace, fontScale, color, thickness_font, 8);
}
// Draw a second line of text below the first one
void drawText2(cv::Mat image, std::string text, cv::Scalar color)
{
cv::putText(image, text, cv::Point(25,75), fontFace, fontScale, color, thickness_font, 8);
}
// Draw a text with the frame rate
void drawFPS(cv::Mat image, double fps, cv::Scalar color)
{
std::string fps_str = IntToString((int)fps);
std::string text = fps_str + " FPS";
cv::putText(image, text, cv::Point(500,50), fontFace, fontScale, color, thickness_font, 8);
}
// Draw a text with the detection confidence
void drawConfidence(cv::Mat image, double confidence, cv::Scalar color)
{
std::string conf_str = IntToString((int)confidence);
std::string text = conf_str + " %";
cv::putText(image, text, cv::Point(500,75), fontFace, fontScale, color, thickness_font, 8);
}
// Draw a text with the number of entered points
void drawCounter(cv::Mat image, int n, int n_max, cv::Scalar color)
{
std::string n_str = IntToString(n);
std::string n_max_str = IntToString(n_max);
std::string text = n_str + " of " + n_max_str + " points";
cv::putText(image, text, cv::Point(500,50), fontFace, fontScale, color, thickness_font, 8);
}
// Draw the points and the coordinates
void drawPoints(cv::Mat image, std::vector<cv::Point2f> &list_points_2d, std::vector<cv::Point3f> &list_points_3d, cv::Scalar color)
{
for (unsigned int i = 0; i < list_points_2d.size(); ++i)
{
cv::Point2f point_2d = list_points_2d[i];
cv::Point3f point_3d = list_points_3d[i];
// Draw Selected points
cv::circle(image, point_2d, radius, color, -1, lineType );
std::string idx = IntToString(i+1);
std::string x = IntToString((int)point_3d.x);
std::string y = IntToString((int)point_3d.y);
std::string z = IntToString((int)point_3d.z);
std::string text = "P" + idx + " (" + x + "," + y + "," + z +")";
point_2d.x = point_2d.x + 10;
point_2d.y = point_2d.y - 10;
cv::putText(image, text, point_2d, fontFace, fontScale*0.5, color, thickness_font, 8);
}
}
// Draw only the 2D points
void draw2DPoints(cv::Mat image, std::vector<cv::Point2f> &list_points, cv::Scalar color)
{
for( size_t i = 0; i < list_points.size(); i++)
{
cv::Point2f point_2d = list_points[i];
// Draw Selected points
cv::circle(image, point_2d, radius, color, -1, lineType );
}
}
// Draw an arrow into the image
void drawArrow(cv::Mat image, cv::Point2i p, cv::Point2i q, cv::Scalar color, int arrowMagnitude, int thickness, int line_type, int shift)
{
// Draw the principal line
cv::line(image, p, q, color, thickness, line_type, shift);
const double PI = CV_PI;
//compute the angle alpha
double angle = atan2((double)p.y-q.y, (double)p.x-q.x);
//compute the coordinates of the first segment
p.x = (int) ( q.x + arrowMagnitude * cos(angle + PI/4));
p.y = (int) ( q.y + arrowMagnitude * sin(angle + PI/4));
//Draw the first segment
cv::line(image, p, q, color, thickness, line_type, shift);
//compute the coordinates of the second segment
p.x = (int) ( q.x + arrowMagnitude * cos(angle - PI/4));
p.y = (int) ( q.y + arrowMagnitude * sin(angle - PI/4));
//Draw the second segment
cv::line(image, p, q, color, thickness, line_type, shift);
}
// Draw the 3D coordinate axes
void draw3DCoordinateAxes(cv::Mat image, const std::vector<cv::Point2f> &list_points2d)
{
cv::Scalar red(0, 0, 255);
cv::Scalar green(0,255,0);
cv::Scalar blue(255,0,0);
cv::Scalar black(0,0,0);
cv::Point2i origin = list_points2d[0];
cv::Point2i pointX = list_points2d[1];
cv::Point2i pointY = list_points2d[2];
cv::Point2i pointZ = list_points2d[3];
drawArrow(image, origin, pointX, red, 9, 2);
drawArrow(image, origin, pointY, blue, 9, 2);
drawArrow(image, origin, pointZ, green, 9, 2);
cv::circle(image, origin, radius/2, black, -1, lineType );
}
// Draw the object mesh
void drawObjectMesh(cv::Mat image, const Mesh *mesh, PnPProblem *pnpProblem, cv::Scalar color)
{
std::vector<std::vector<int> > list_triangles = mesh->getTrianglesList();
for( size_t i = 0; i < list_triangles.size(); i++)
{
std::vector<int> tmp_triangle = list_triangles.at(i);
cv::Point3f point_3d_0 = mesh->getVertex(tmp_triangle[0]);
cv::Point3f point_3d_1 = mesh->getVertex(tmp_triangle[1]);
cv::Point3f point_3d_2 = mesh->getVertex(tmp_triangle[2]);
cv::Point2f point_2d_0 = pnpProblem->backproject3DPoint(point_3d_0);
cv::Point2f point_2d_1 = pnpProblem->backproject3DPoint(point_3d_1);
cv::Point2f point_2d_2 = pnpProblem->backproject3DPoint(point_3d_2);
cv::line(image, point_2d_0, point_2d_1, color, 1);
cv::line(image, point_2d_1, point_2d_2, color, 1);
cv::line(image, point_2d_2, point_2d_0, color, 1);
}
}
// Computes the norm of the translation error
double get_translation_error(const cv::Mat &t_true, const cv::Mat &t)
{
return cv::norm( t_true - t );
}
// Computes the norm of the rotation error
double get_rotation_error(const cv::Mat &R_true, const cv::Mat &R)
{
cv::Mat error_vec, error_mat;
error_mat = R_true * cv::Mat(R.inv()).mul(-1);
cv::Rodrigues(error_mat, error_vec);
return cv::norm(error_vec);
}
// Converts a given Rotation Matrix to Euler angles
cv::Mat rot2euler(const cv::Mat & rotationMatrix)
{
cv::Mat euler(3,1,CV_64F);
double m00 = rotationMatrix.at<double>(0,0);
double m02 = rotationMatrix.at<double>(0,2);
double m10 = rotationMatrix.at<double>(1,0);
double m11 = rotationMatrix.at<double>(1,1);
double m12 = rotationMatrix.at<double>(1,2);
double m20 = rotationMatrix.at<double>(2,0);
double m22 = rotationMatrix.at<double>(2,2);
double x, y, z;
// Assuming the angles are in radians.
if (m10 > 0.998) { // singularity at north pole
x = 0;
y = CV_PI/2;
z = atan2(m02,m22);
}
else if (m10 < -0.998) { // singularity at south pole
x = 0;
y = -CV_PI/2;
z = atan2(m02,m22);
}
else
{
x = atan2(-m12,m11);
y = asin(m10);
z = atan2(-m20,m00);
}
euler.at<double>(0) = x;
euler.at<double>(1) = y;
euler.at<double>(2) = z;
return euler;
}
// Converts given Euler angles to a Rotation Matrix
cv::Mat euler2rot(const cv::Mat & euler)
{
cv::Mat rotationMatrix(3,3,CV_64F);
double x = euler.at<double>(0);
double y = euler.at<double>(1);
double z = euler.at<double>(2);
// Assuming the angles are in radians.
double ch = cos(z);
double sh = sin(z);
double ca = cos(y);
double sa = sin(y);
double cb = cos(x);
double sb = sin(x);
double m00, m01, m02, m10, m11, m12, m20, m21, m22;
m00 = ch * ca;
m01 = sh*sb - ch*sa*cb;
m02 = ch*sa*sb + sh*cb;
m10 = sa;
m11 = ca*cb;
m12 = -ca*sb;
m20 = -sh*ca;
m21 = sh*sa*cb + ch*sb;
m22 = -sh*sa*sb + ch*cb;
rotationMatrix.at<double>(0,0) = m00;
rotationMatrix.at<double>(0,1) = m01;
rotationMatrix.at<double>(0,2) = m02;
rotationMatrix.at<double>(1,0) = m10;
rotationMatrix.at<double>(1,1) = m11;
rotationMatrix.at<double>(1,2) = m12;
rotationMatrix.at<double>(2,0) = m20;
rotationMatrix.at<double>(2,1) = m21;
rotationMatrix.at<double>(2,2) = m22;
return rotationMatrix;
}
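/* Note: rot2euler() and euler2rot() above use the same fixed-axis convention,
 * so away from the +/-90 degree singularity a round trip should recover the
 * input (a quick sanity-check sketch; R_measured is a hypothetical rotation):
 *
 *   cv::Mat R = euler2rot(rot2euler(R_measured));
 *   // cv::norm(R - R_measured) is expected to be close to zero
 */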
// Converts a given string to an integer
int StringToInt ( const std::string &Text )
{
std::istringstream ss(Text);
int result;
return ss >> result ? result : 0;
}
// Converts a given float to a string
std::string FloatToString ( float Number )
{
std::ostringstream ss;
ss << Number;
return ss.str();
}
// Converts a given integer to a string
std::string IntToString ( int Number )
{
std::ostringstream ss;
ss << Number;
return ss.str();
}

View File

@ -0,0 +1,69 @@
/*
* Utils.h
*
* Created on: Mar 28, 2014
* Author: Edgar Riba
*/
#ifndef UTILS_H_
#define UTILS_H_
#include <iostream>
#include "PnPProblem.h"
// Draw a text with the question point
void drawQuestion(cv::Mat image, cv::Point3f point, cv::Scalar color);
// Draw a line of text in the upper left corner
void drawText(cv::Mat image, std::string text, cv::Scalar color);
// Draw a second line of text below the first one
void drawText2(cv::Mat image, std::string text, cv::Scalar color);
// Draw a text with the frame rate
void drawFPS(cv::Mat image, double fps, cv::Scalar color);
// Draw a text with the detection confidence
void drawConfidence(cv::Mat image, double confidence, cv::Scalar color);
// Draw a text with the number of entered points
void drawCounter(cv::Mat image, int n, int n_max, cv::Scalar color);
// Draw the points and the coordinates
void drawPoints(cv::Mat image, std::vector<cv::Point2f> &list_points_2d, std::vector<cv::Point3f> &list_points_3d, cv::Scalar color);
// Draw only the 2D points
void draw2DPoints(cv::Mat image, std::vector<cv::Point2f> &list_points, cv::Scalar color);
// Draw an arrow into the image
void drawArrow(cv::Mat image, cv::Point2i p, cv::Point2i q, cv::Scalar color, int arrowMagnitude = 9, int thickness=1, int line_type=8, int shift=0);
// Draw the 3D coordinate axes
void draw3DCoordinateAxes(cv::Mat image, const std::vector<cv::Point2f> &list_points2d);
// Draw the object mesh
void drawObjectMesh(cv::Mat image, const Mesh *mesh, PnPProblem *pnpProblem, cv::Scalar color);
// Computes the norm of the translation error
double get_translation_error(const cv::Mat &t_true, const cv::Mat &t);
// Computes the norm of the rotation error
double get_rotation_error(const cv::Mat &R_true, const cv::Mat &R);
// Converts a given Rotation Matrix to Euler angles
cv::Mat rot2euler(const cv::Mat & rotationMatrix);
// Converts given Euler angles to a Rotation Matrix
cv::Mat euler2rot(const cv::Mat & euler);
// Converts a given string to an integer
int StringToInt ( const std::string &Text );
// Converts a given float to a string
std::string FloatToString ( float Number );
// Converts a given integer to a string
std::string IntToString ( int Number );
#endif /* UTILS_H_ */

View File

@ -0,0 +1,468 @@
// C++
#include <iostream>
#include <time.h>
// OpenCV
#include <opencv2/core/core.hpp>
#include <opencv2/core/utility.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/video/tracking.hpp>
// PnP Tutorial
#include "Mesh.h"
#include "Model.h"
#include "PnPProblem.h"
#include "RobustMatcher.h"
#include "ModelRegistration.h"
#include "Utils.h"
/** GLOBAL VARIABLES **/
std::string tutorial_path = "../../samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/"; // path to tutorial
std::string video_read_path = tutorial_path + "Data/box.mp4"; // recorded video
std::string yml_read_path = tutorial_path + "Data/cookies_ORB.yml"; // 3dpts + descriptors
std::string ply_read_path = tutorial_path + "Data/box.ply"; // mesh
// Intrinsic camera parameters: UVC WEBCAM
double f = 55; // focal length in mm
double sx = 22.3, sy = 14.9; // sensor size
double width = 640, height = 480; // image size
double params_WEBCAM[] = { width*f/sx, // fx
height*f/sy, // fy
width/2, // cx
height/2}; // cy
// Some basic colors
cv::Scalar red(0, 0, 255);
cv::Scalar green(0,255,0);
cv::Scalar blue(255,0,0);
cv::Scalar yellow(0,255,255);
// Robust Matcher parameters
int numKeyPoints = 2000; // number of detected keypoints
float ratio = 0.70f; // ratio test
bool fast_match = true; // fastRobustMatch() or robustMatch()
// RANSAC parameters
int iterationsCount = 500; // number of Ransac iterations.
float reprojectionError = 2.0; // maximum allowed distance to consider it an inlier.
double confidence = 0.95; // ransac successful confidence.
// Kalman Filter parameters
int minInliersKalman = 30; // Kalman threshold updating
// PnP parameters
int pnpMethod = cv::SOLVEPNP_ITERATIVE;
/** Functions headers **/
void help();
void initKalmanFilter( cv::KalmanFilter &KF, int nStates, int nMeasurements, int nInputs, double dt);
void updateKalmanFilter( cv::KalmanFilter &KF, cv::Mat &measurements,
cv::Mat &translation_estimated, cv::Mat &rotation_estimated );
void fillMeasurements( cv::Mat &measurements,
const cv::Mat &translation_measured, const cv::Mat &rotation_measured);
/** Main program **/
int main(int argc, char *argv[])
{
help();
const cv::String keys =
"{help h | | print this message }"
"{video v | | path to recorded video }"
"{model | | path to yml model }"
"{mesh | | path to ply mesh }"
"{keypoints k |2000 | number of keypoints to detect }"
"{ratio r |0.7 | threshold for ratio test }"
"{iterations it |500 | RANSAC maximum iterations count }"
"{error e |2.0 | RANSAC reprojection errror }"
"{confidence c |0.95 | RANSAC confidence }"
"{inliers in |30 | minimum inliers for Kalman update }"
"{method pnp |0 | PnP method: (0) ITERATIVE - (1) EPNP - (2) P3P - (3) DLS}"
"{fast f |true | use of robust fast match }"
;
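// Example invocation (paths and values are illustrative):
//   ./cpp-tutorial-pnp_detection -video=Data/box.mp4 -model=Data/cookies_ORB.yml -mesh=Data/box.ply -keypoints=2000 -method=0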
cv::CommandLineParser parser(argc, argv, keys);
if (parser.has("help"))
{
parser.printMessage();
return 0;
}
else
{
video_read_path = parser.get<std::string>("video").size() > 0 ? parser.get<std::string>("video") : video_read_path;
yml_read_path = parser.get<std::string>("model").size() > 0 ? parser.get<std::string>("model") : yml_read_path;
ply_read_path = parser.get<std::string>("mesh").size() > 0 ? parser.get<std::string>("mesh") : ply_read_path;
numKeyPoints = parser.has("keypoints") ? parser.get<int>("keypoints") : numKeyPoints;
ratio = parser.has("ratio") ? parser.get<float>("ratio") : ratio;
fast_match = parser.has("fast") ? parser.get<bool>("fast") : fast_match;
iterationsCount = parser.has("iterations") ? parser.get<int>("iterations") : iterationsCount;
reprojectionError = parser.has("error") ? parser.get<float>("error") : reprojectionError;
confidence = parser.has("confidence") ? parser.get<float>("confidence") : confidence;
minInliersKalman = parser.has("inliers") ? parser.get<int>("inliers") : minInliersKalman;
pnpMethod = parser.has("method") ? parser.get<int>("method") : pnpMethod;
}
PnPProblem pnp_detection(params_WEBCAM);
PnPProblem pnp_detection_est(params_WEBCAM);
Model model; // instantiate Model object
model.load(yml_read_path); // load a 3D textured object model
Mesh mesh; // instantiate Mesh object
mesh.load(ply_read_path); // load an object mesh
RobustMatcher rmatcher; // instantiate RobustMatcher
cv::FeatureDetector * detector = new cv::OrbFeatureDetector(numKeyPoints); // instantiate ORB feature detector
cv::DescriptorExtractor * extractor = new cv::OrbDescriptorExtractor(); // instantiate ORB descriptor extractor
rmatcher.setFeatureDetector(detector); // set feature detector
rmatcher.setDescriptorExtractor(extractor); // set descriptor extractor
cv::Ptr<cv::flann::IndexParams> indexParams = cv::makePtr<cv::flann::LshIndexParams>(6, 12, 1); // instantiate LSH index parameters
cv::Ptr<cv::flann::SearchParams> searchParams = cv::makePtr<cv::flann::SearchParams>(50); // instantiate flann search parameters
cv::DescriptorMatcher * matcher = new cv::FlannBasedMatcher(indexParams, searchParams); // instantiate FlannBased matcher
rmatcher.setDescriptorMatcher(matcher); // set matcher
rmatcher.setRatio(ratio); // set ratio test parameter
cv::KalmanFilter KF; // instantiate Kalman Filter
int nStates = 18; // the number of states
int nMeasurements = 6; // the number of measured states
int nInputs = 0; // the number of control actions
double dt = 0.125; // time between measurements (1/FPS)
initKalmanFilter(KF, nStates, nMeasurements, nInputs, dt); // init function
cv::Mat measurements(nMeasurements, 1, CV_64F); measurements.setTo(cv::Scalar(0));
bool good_measurement = false;
// Get the MODEL INFO
std::vector<cv::Point3f> list_points3d_model = model.get_points3d(); // list with model 3D coordinates
cv::Mat descriptors_model = model.get_descriptors(); // list with descriptors of each 3D coordinate
// Create & Open Window
cv::namedWindow("REAL TIME DEMO", cv::WINDOW_KEEPRATIO);
cv::VideoCapture cap; // instantiate VideoCapture
cap.open(video_read_path); // open a recorded video
if(!cap.isOpened()) // check if we succeeded
{
std::cout << "Could not open the camera device" << std::endl;
return -1;
}
// start and end times
time_t start, end;
// fps calculated using number of frames / seconds
// floating point seconds elapsed since start
double fps, sec;
// frame counter
int counter = 0;
// start the clock
time(&start);
cv::Mat frame, frame_vis;
while(cap.read(frame) && cv::waitKey(30) != 27) // capture frame until ESC is pressed
{
frame_vis = frame.clone(); // refresh visualisation frame
// -- Step 1: Robust matching between model descriptors and scene descriptors
std::vector<cv::DMatch> good_matches; // to obtain the 3D points of the model
std::vector<cv::KeyPoint> keypoints_scene; // to obtain the 2D points of the scene
if(fast_match)
{
rmatcher.fastRobustMatch(frame, good_matches, keypoints_scene, descriptors_model);
}
else
{
rmatcher.robustMatch(frame, good_matches, keypoints_scene, descriptors_model);
}
// -- Step 2: Find out the 2D/3D correspondences
std::vector<cv::Point3f> list_points3d_model_match; // container for the model 3D coordinates found in the scene
std::vector<cv::Point2f> list_points2d_scene_match; // container for the model 2D coordinates found in the scene
for(unsigned int match_index = 0; match_index < good_matches.size(); ++match_index)
{
cv::Point3f point3d_model = list_points3d_model[ good_matches[match_index].trainIdx ]; // 3D point from model
cv::Point2f point2d_scene = keypoints_scene[ good_matches[match_index].queryIdx ].pt; // 2D point from the scene
list_points3d_model_match.push_back(point3d_model); // add 3D point
list_points2d_scene_match.push_back(point2d_scene); // add 2D point
}
// Draw outliers
draw2DPoints(frame_vis, list_points2d_scene_match, red);
cv::Mat inliers_idx;
std::vector<cv::Point2f> list_points2d_inliers;
if(good_matches.size() > 0) // if there are no matches, RANSAC would crash
{
// -- Step 3: Estimate the pose using RANSAC approach
pnp_detection.estimatePoseRANSAC( list_points3d_model_match, list_points2d_scene_match,
pnpMethod, inliers_idx,
iterationsCount, reprojectionError, confidence );
// -- Step 4: Catch the inliers keypoints to draw
for(int inliers_index = 0; inliers_index < inliers_idx.rows; ++inliers_index)
{
int n = inliers_idx.at<int>(inliers_index); // i-inlier
cv::Point2f point2d = list_points2d_scene_match[n]; // i-inlier point 2D
list_points2d_inliers.push_back(point2d); // add i-inlier to list
}
// Draw inliers points 2D
draw2DPoints(frame_vis, list_points2d_inliers, blue);
// -- Step 5: Kalman Filter
good_measurement = false;
// GOOD MEASUREMENT
if( inliers_idx.rows >= minInliersKalman )
{
// Get the measured translation
cv::Mat translation_measured(3, 1, CV_64F);
translation_measured = pnp_detection.get_t_matrix();
// Get the measured rotation
cv::Mat rotation_measured(3, 3, CV_64F);
rotation_measured = pnp_detection.get_R_matrix();
// fill the measurements vector
fillMeasurements(measurements, translation_measured, rotation_measured);
good_measurement = true;
}
// Instantiate estimated translation and rotation
cv::Mat translation_estimated(3, 1, CV_64F);
cv::Mat rotation_estimated(3, 3, CV_64F);
// update the Kalman filter with good measurements
updateKalmanFilter( KF, measurements,
translation_estimated, rotation_estimated);
// -- Step 6: Set estimated projection matrix
pnp_detection_est.set_P_matrix(rotation_estimated, translation_estimated);
}
// -- Step X: Draw pose
if(good_measurement)
{
drawObjectMesh(frame_vis, &mesh, &pnp_detection, green); // draw current pose
}
else
{
drawObjectMesh(frame_vis, &mesh, &pnp_detection_est, yellow); // draw estimated pose
}
float l = 5;
std::vector<cv::Point2f> pose_points2d;
pose_points2d.push_back(pnp_detection_est.backproject3DPoint(cv::Point3f(0,0,0))); // axis center
pose_points2d.push_back(pnp_detection_est.backproject3DPoint(cv::Point3f(l,0,0))); // axis x
pose_points2d.push_back(pnp_detection_est.backproject3DPoint(cv::Point3f(0,l,0))); // axis y
pose_points2d.push_back(pnp_detection_est.backproject3DPoint(cv::Point3f(0,0,l))); // axis z
draw3DCoordinateAxes(frame_vis, pose_points2d); // draw axes
// FRAME RATE
// see how much time has elapsed
time(&end);
// calculate current FPS
++counter;
sec = difftime (end, start);
fps = counter / sec;
drawFPS(frame_vis, fps, yellow); // frame rate
double detection_ratio = good_matches.empty() ? 0.0 : ((double)inliers_idx.rows/(double)good_matches.size())*100;
drawConfidence(frame_vis, detection_ratio, yellow);
// -- Step X: Draw some debugging text
// Draw some debug text
int inliers_int = inliers_idx.rows;
int outliers_int = (int)good_matches.size() - inliers_int;
std::string inliers_str = IntToString(inliers_int);
std::string outliers_str = IntToString(outliers_int);
std::string n = IntToString((int)good_matches.size());
std::string text = "Found " + inliers_str + " of " + n + " matches";
std::string text2 = "Inliers: " + inliers_str + " - Outliers: " + outliers_str;
drawText(frame_vis, text, green);
drawText2(frame_vis, text2, red);
cv::imshow("REAL TIME DEMO", frame_vis);
}
// Close and Destroy Window
cv::destroyWindow("REAL TIME DEMO");
std::cout << "GOODBYE ..." << std::endl;
}
/**********************************************************************************************************/
void help()
{
std::cout
<< "--------------------------------------------------------------------------" << std::endl
<< "This program shows how to detect an object given its 3D textured model. You can choose to "
<< "use a recorded video or the webcam." << std::endl
<< "Usage:" << std::endl
<< "./cpp-tutorial-pnp_detection -help" << std::endl
<< "Keys:" << std::endl
<< "'esc' - to quit." << std::endl
<< "--------------------------------------------------------------------------" << std::endl
<< std::endl;
}
/**********************************************************************************************************/
void initKalmanFilter(cv::KalmanFilter &KF, int nStates, int nMeasurements, int nInputs, double dt)
{
KF.init(nStates, nMeasurements, nInputs, CV_64F); // init Kalman Filter
cv::setIdentity(KF.processNoiseCov, cv::Scalar::all(1e-5)); // set process noise
cv::setIdentity(KF.measurementNoiseCov, cv::Scalar::all(1e-2)); // set measurement noise
cv::setIdentity(KF.errorCovPost, cv::Scalar::all(1)); // error covariance
/** DYNAMIC MODEL **/
// [1 0 0 dt 0 0 dt2 0 0 0 0 0 0 0 0 0 0 0]
// [0 1 0 0 dt 0 0 dt2 0 0 0 0 0 0 0 0 0 0]
// [0 0 1 0 0 dt 0 0 dt2 0 0 0 0 0 0 0 0 0]
// [0 0 0 1 0 0 dt 0 0 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 1 0 0 dt 0 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 1 0 0 dt 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0 dt2 0 0]
// [0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0 dt2 0]
// [0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0 dt2]
// [0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0]
// [0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0]
// [0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt]
// [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0]
// [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0]
// [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
// position
KF.transitionMatrix.at<double>(0,3) = dt;
KF.transitionMatrix.at<double>(1,4) = dt;
KF.transitionMatrix.at<double>(2,5) = dt;
KF.transitionMatrix.at<double>(3,6) = dt;
KF.transitionMatrix.at<double>(4,7) = dt;
KF.transitionMatrix.at<double>(5,8) = dt;
KF.transitionMatrix.at<double>(0,6) = 0.5*pow(dt,2);
KF.transitionMatrix.at<double>(1,7) = 0.5*pow(dt,2);
KF.transitionMatrix.at<double>(2,8) = 0.5*pow(dt,2);
// orientation
KF.transitionMatrix.at<double>(9,12) = dt;
KF.transitionMatrix.at<double>(10,13) = dt;
KF.transitionMatrix.at<double>(11,14) = dt;
KF.transitionMatrix.at<double>(12,15) = dt;
KF.transitionMatrix.at<double>(13,16) = dt;
KF.transitionMatrix.at<double>(14,17) = dt;
KF.transitionMatrix.at<double>(9,15) = 0.5*pow(dt,2);
KF.transitionMatrix.at<double>(10,16) = 0.5*pow(dt,2);
KF.transitionMatrix.at<double>(11,17) = 0.5*pow(dt,2);
/** MEASUREMENT MODEL **/
// [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
// [0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
// [0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0]
// [0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0]
KF.measurementMatrix.at<double>(0,0) = 1; // x
KF.measurementMatrix.at<double>(1,1) = 1; // y
KF.measurementMatrix.at<double>(2,2) = 1; // z
KF.measurementMatrix.at<double>(3,9) = 1; // roll
KF.measurementMatrix.at<double>(4,10) = 1; // pitch
KF.measurementMatrix.at<double>(5,11) = 1; // yaw
}
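/* Explanatory note: the transition matrix above encodes a constant-acceleration
 * model for both the position (states 0-8) and the Euler orientation
 * (states 9-17), i.e. per axis the dt and 0.5*dt^2 entries implement:
 *
 *   p_k = p_{k-1} + v_{k-1}*dt + 0.5*a_{k-1}*dt^2
 *   v_k = v_{k-1} + a_{k-1}*dt
 *   a_k = a_{k-1}
 */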
/**********************************************************************************************************/
void updateKalmanFilter( cv::KalmanFilter &KF, cv::Mat &measurement,
cv::Mat &translation_estimated, cv::Mat &rotation_estimated )
{
// First predict, to update the internal statePre variable
cv::Mat prediction = KF.predict();
// The "correct" phase that is going to use the predicted value and our measurement
cv::Mat estimated = KF.correct(measurement);
// Estimated translation
translation_estimated.at<double>(0) = estimated.at<double>(0);
translation_estimated.at<double>(1) = estimated.at<double>(1);
translation_estimated.at<double>(2) = estimated.at<double>(2);
// Estimated euler angles
cv::Mat eulers_estimated(3, 1, CV_64F);
eulers_estimated.at<double>(0) = estimated.at<double>(9);
eulers_estimated.at<double>(1) = estimated.at<double>(10);
eulers_estimated.at<double>(2) = estimated.at<double>(11);
// Convert estimated Euler angles to rotation matrix
rotation_estimated = euler2rot(eulers_estimated);
}
/**********************************************************************************************************/
void fillMeasurements( cv::Mat &measurements,
const cv::Mat &translation_measured, const cv::Mat &rotation_measured)
{
// Convert rotation matrix to euler angles
cv::Mat measured_eulers(3, 1, CV_64F);
measured_eulers = rot2euler(rotation_measured);
// Set measurement to predict
measurements.at<double>(0) = translation_measured.at<double>(0); // x
measurements.at<double>(1) = translation_measured.at<double>(1); // y
measurements.at<double>(2) = translation_measured.at<double>(2); // z
measurements.at<double>(3) = measured_eulers.at<double>(0); // roll
measurements.at<double>(4) = measured_eulers.at<double>(1); // pitch
measurements.at<double>(5) = measured_eulers.at<double>(2); // yaw
}

View File

@ -0,0 +1,265 @@
// C++
#include <iostream>
// OpenCV
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
// PnP Tutorial
#include "Mesh.h"
#include "Model.h"
#include "PnPProblem.h"
#include "RobustMatcher.h"
#include "ModelRegistration.h"
#include "Utils.h"
/** GLOBAL VARIABLES **/
std::string tutorial_path = "../../samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/"; // path to tutorial
std::string img_path = tutorial_path + "Data/resized_IMG_3875.JPG"; // image to register
std::string ply_read_path = tutorial_path + "Data/box.ply"; // object mesh
std::string write_path = tutorial_path + "Data/cookies_ORB.yml"; // output file
// Boolean to know if the registration is done
bool end_registration = false;
// Intrinsic parameters of the camera used to take the registration image
double f = 45; // focal length in mm
double sx = 22.3, sy = 14.9;
double width = 2592, height = 1944;
double params_CANON[] = { width*f/sx, // fx
height*f/sy, // fy
width/2, // cx
height/2}; // cy
// Setup the points to register in the image
// In the order of the *.ply file and starting at 1
int n = 8;
int pts[] = {1, 2, 3, 4, 5, 6, 7, 8}; // 3 -> 4
// Some basic colors
cv::Scalar red(0, 0, 255);
cv::Scalar green(0,255,0);
cv::Scalar blue(255,0,0);
cv::Scalar yellow(0,255,255);
/*
* CREATE MODEL REGISTRATION OBJECT
* CREATE OBJECT MESH
* CREATE OBJECT MODEL
* CREATE PNP OBJECT
*/
ModelRegistration registration;
Model model;
Mesh mesh;
PnPProblem pnp_registration(params_CANON);
/** Functions headers **/
void help();
// Mouse events for model registration
static void onMouseModelRegistration( int event, int x, int y, int, void* )
{
if ( event == cv::EVENT_LBUTTONUP )
{
int n_regist = registration.getNumRegist();
int n_vertex = pts[n_regist];
cv::Point2f point_2d = cv::Point2f((float)x,(float)y);
cv::Point3f point_3d = mesh.getVertex(n_vertex-1);
bool is_registrable = registration.is_registrable();
if (is_registrable)
{
registration.registerPoint(point_2d, point_3d);
if( registration.getNumRegist() == registration.getNumMax() ) end_registration = true;
}
}
}
/** Main program **/
int main()
{
help();
// load a mesh given the *.ply file path
mesh.load(ply_read_path);
// set parameters
int numKeyPoints = 10000;
//Instantiate robust matcher: detector, extractor, matcher
RobustMatcher rmatcher;
cv::FeatureDetector * detector = new cv::OrbFeatureDetector(numKeyPoints);
rmatcher.setFeatureDetector(detector);
/** GROUND TRUTH OF THE FIRST IMAGE **/
// Create & Open Window
cv::namedWindow("MODEL REGISTRATION", cv::WINDOW_KEEPRATIO);
// Set up the mouse events
cv::setMouseCallback("MODEL REGISTRATION", onMouseModelRegistration, 0 );
// Open the image to register
cv::Mat img_in = cv::imread(img_path, cv::IMREAD_COLOR);
cv::Mat img_vis = img_in.clone();
if (!img_in.data) {
std::cout << "Could not open or find the image" << std::endl;
return -1;
}
// Set the number of points to register
int num_registrations = n;
registration.setNumMax(num_registrations);
std::cout << "Click the box corners ..." << std::endl;
std::cout << "Waiting ..." << std::endl;
// Loop until all the points are registered
while ( cv::waitKey(30) < 0 )
{
// Refresh debug image
img_vis = img_in.clone();
// Current registered points
std::vector<cv::Point2f> list_points2d = registration.get_points2d();
std::vector<cv::Point3f> list_points3d = registration.get_points3d();
// Draw current registered points
drawPoints(img_vis, list_points2d, list_points3d, red);
// If the registration is not finished, draw which 3D point we have to register.
// If the registration is finished, break the loop.
if (!end_registration)
{
// Draw debug text
int n_regist = registration.getNumRegist();
int n_vertex = pts[n_regist];
cv::Point3f current_point3d = mesh.getVertex(n_vertex-1);
drawQuestion(img_vis, current_point3d, green);
drawCounter(img_vis, registration.getNumRegist(), registration.getNumMax(), red);
}
else
{
// Draw debug text
drawText(img_vis, "END REGISTRATION", green);
drawCounter(img_vis, registration.getNumRegist(), registration.getNumMax(), green);
break;
}
// Show the image
cv::imshow("MODEL REGISTRATION", img_vis);
}
/** COMPUTE CAMERA POSE **/
std::cout << "COMPUTING POSE ..." << std::endl;
// The list of registered points
std::vector<cv::Point2f> list_points2d = registration.get_points2d();
std::vector<cv::Point3f> list_points3d = registration.get_points3d();
// Estimate pose given the registered points
bool is_correspondence = pnp_registration.estimatePose(list_points3d, list_points2d, cv::SOLVEPNP_ITERATIVE);
if ( is_correspondence )
{
std::cout << "Correspondence found" << std::endl;
// Compute all the 2D points of the mesh to verify the algorithm and draw it
std::vector<cv::Point2f> list_points2d_mesh = pnp_registration.verify_points(&mesh);
draw2DPoints(img_vis, list_points2d_mesh, green);
} else {
std::cout << "Correspondence not found" << std::endl << std::endl;
}
// Show the image
cv::imshow("MODEL REGISTRATION", img_vis);
// Show image until ESC pressed
cv::waitKey(0);
/** COMPUTE 3D of the image Keypoints **/
// Containers for keypoints and descriptors of the model
std::vector<cv::KeyPoint> keypoints_model;
cv::Mat descriptors;
// Compute keypoints and descriptors
rmatcher.computeKeyPoints(img_in, keypoints_model);
rmatcher.computeDescriptors(img_in, keypoints_model, descriptors);
// Check if keypoints are on the surface of the registration image and add to the model
for (unsigned int i = 0; i < keypoints_model.size(); ++i) {
cv::Point2f point2d(keypoints_model[i].pt);
cv::Point3f point3d;
bool on_surface = pnp_registration.backproject2DPoint(&mesh, point2d, point3d);
if (on_surface)
{
model.add_correspondence(point2d, point3d);
model.add_descriptor(descriptors.row(i));
model.add_keypoint(keypoints_model[i]);
}
else
{
model.add_outlier(point2d);
}
}
// save the model into a *.yaml file
model.save(write_path);
// Out image
img_vis = img_in.clone();
// The list of the points2d of the model
std::vector<cv::Point2f> list_points_in = model.get_points2d_in();
std::vector<cv::Point2f> list_points_out = model.get_points2d_out();
// Draw some debug text
std::string num = IntToString((int)list_points_in.size());
std::string text = "There are " + num + " inliers";
drawText(img_vis, text, green);
// Draw some debug text
num = IntToString((int)list_points_out.size());
text = "There are " + num + " outliers";
drawText2(img_vis, text, red);
// Draw the object mesh
drawObjectMesh(img_vis, &mesh, &pnp_registration, blue);
// Draw found keypoints depending on whether they lie on the surface or not
draw2DPoints(img_vis, list_points_in, green);
draw2DPoints(img_vis, list_points_out, red);
// Show the image
cv::imshow("MODEL REGISTRATION", img_vis);
// Wait until ESC pressed
cv::waitKey(0);
// Close and Destroy Window
cv::destroyWindow("MODEL REGISTRATION");
std::cout << "GOODBYE" << std::endl;
}
/**********************************************************************************************************/
void help()
{
std::cout
<< "--------------------------------------------------------------------------" << std::endl
<< "This program shows how to create your 3D textured model. " << std::endl
<< "Usage:" << std::endl
<< "./cpp-tutorial-pnp_registration" << std::endl
<< "--------------------------------------------------------------------------" << std::endl
<< std::endl;
}