Removed Sphinx documentation files

This commit is contained in:
Maksim Shabunin
2014-12-24 18:37:57 +03:00
parent 61991a3330
commit d01bedbc61
338 changed files with 0 additions and 73040 deletions


@@ -1,160 +0,0 @@
.. _akazeMatching:
AKAZE local features matching
******************************
Introduction
------------------
In this tutorial we will learn how to use [AKAZE]_ local features to detect and match keypoints on two images.
We will find keypoints on a pair of images related by a given homography matrix,
match them and count the number of inliers (i.e. matches that fit the given homography).
You can find an expanded version of this example here: https://github.com/pablofdezalc/test_kaze_akaze_opencv
.. [AKAZE] Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. Pablo F. Alcantarilla, Jesús Nuevo and Adrien Bartoli. In British Machine Vision Conference (BMVC), Bristol, UK, September 2013.
Data
------------------
We are going to use images 1 and 3 from the *Graffiti* sequence of the Oxford dataset.
.. image:: images/graf.png
:height: 200pt
:width: 320pt
:alt: Graffiti
:align: center
The homography is given by a 3 by 3 matrix:
.. code-block:: none
7.6285898e-01 -2.9922929e-01 2.2567123e+02
3.3443473e-01 1.0143901e+00 -7.6999973e+01
3.4663091e-04 -1.4364524e-05 1.0000000e+00
You can find the images (*graf1.png*, *graf3.png*) and homography (*H1to3p.xml*) in *opencv/samples/cpp*.
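For reference, this matrix :math:`H` maps a point :math:`(x, y)` in the first image to the corresponding point :math:`(x', y')` in the third image up to a scale factor :math:`s`; this is exactly the relation used later in the inlier check:
.. math::
   s \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}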
Source Code
===========
.. literalinclude:: ../../../../samples/cpp/tutorial_code/features2D/AKAZE_match.cpp
:language: cpp
:linenos:
:tab-width: 4
Explanation
===========
#. **Load images and homography**
.. code-block:: cpp
Mat img1 = imread("graf1.png", IMREAD_GRAYSCALE);
Mat img2 = imread("graf3.png", IMREAD_GRAYSCALE);
Mat homography;
FileStorage fs("H1to3p.xml", FileStorage::READ);
fs.getFirstTopLevelNode() >> homography;
We are loading grayscale images here. The homography is stored in the xml file created with *FileStorage*.
#. **Detect keypoints and compute descriptors using AKAZE**
.. code-block:: cpp
vector<KeyPoint> kpts1, kpts2;
Mat desc1, desc2;
AKAZE akaze;
akaze(img1, noArray(), kpts1, desc1);
akaze(img2, noArray(), kpts2, desc2);
We create an AKAZE object and use its *operator()* functionality. Since we don't need the *mask* parameter, *noArray()* is used.
#. **Use brute-force matcher to find 2-nn matches**
.. code-block:: cpp
BFMatcher matcher(NORM_HAMMING);
vector< vector<DMatch> > nn_matches;
matcher.knnMatch(desc1, desc2, nn_matches, 2);
We use the Hamming distance, because AKAZE uses a binary descriptor by default.
#. **Use 2-nn matches to find correct keypoint matches**
.. code-block:: cpp
for(size_t i = 0; i < nn_matches.size(); i++) {
DMatch first = nn_matches[i][0];
float dist1 = nn_matches[i][0].distance;
float dist2 = nn_matches[i][1].distance;
if(dist1 < nn_match_ratio * dist2) {
matched1.push_back(kpts1[first.queryIdx]);
matched2.push_back(kpts2[first.trainIdx]);
}
}
A match is kept only if the distance to the closest neighbour is less than *nn_match_ratio* times the distance to the second closest one (the ratio test).
#. **Check if our matches fit in the homography model**
.. code-block:: cpp
for(int i = 0; i < matched1.size(); i++) {
Mat col = Mat::ones(3, 1, CV_64F);
col.at<double>(0) = matched1[i].pt.x;
col.at<double>(1) = matched1[i].pt.y;
col = homography * col;
col /= col.at<double>(2);
float dist = sqrt( pow(col.at<double>(0) - matched2[i].pt.x, 2) +
pow(col.at<double>(1) - matched2[i].pt.y, 2));
if(dist < inlier_threshold) {
int new_i = inliers1.size();
inliers1.push_back(matched1[i]);
inliers2.push_back(matched2[i]);
good_matches.push_back(DMatch(new_i, new_i, 0));
}
}
If the distance from the first keypoint's projection to the second keypoint is less than the threshold, then it fits the homography model.
We create a new set of matches for the inliers, because it is required by the drawing function.
#. **Output results**
.. code-block:: cpp
Mat res;
drawMatches(img1, inliers1, img2, inliers2, good_matches, res);
imwrite("res.png", res);
...
Here we save the resulting image and print some statistics.
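A minimal sketch of how these statistics could be printed (variable names follow the snippets above, assuming ``using namespace std;``; the exact output code of the sample is omitted from the listing here):
.. code-block:: cpp
double inlier_ratio = inliers1.size() * 1.0 / matched1.size();
cout << "A-KAZE Matching Results" << endl;
cout << "Keypoints 1:  \t" << kpts1.size()    << endl;
cout << "Keypoints 2:  \t" << kpts2.size()    << endl;
cout << "Matches:      \t" << matched1.size() << endl;
cout << "Inliers:      \t" << inliers1.size() << endl;
cout << "Inlier Ratio: \t" << inlier_ratio    << endl;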
Results
=======
Found matches
--------------
.. image:: images/res.png
:height: 200pt
:width: 320pt
:alt: Matches
:align: center
A-KAZE Matching Results
--------------------------
.. code-block:: none
Keypoints 1 2943
Keypoints 2 3511
Matches 447
Inliers 308
Inlier Ratio 0.689038
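Note that the reported inlier ratio is simply the number of inliers divided by the number of matches: :math:`308 / 447 \approx 0.689`.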


@@ -1,155 +0,0 @@
.. _akazeTracking:
AKAZE and ORB planar tracking
******************************
Introduction
------------------
In this tutorial we will compare *AKAZE* and *ORB* local features
by using them to find matches between video frames and track object movements.
The algorithm is as follows (a compact code sketch of the whole loop is given after this list):
* Detect and describe keypoints on the first frame, manually set object boundaries
* For every next frame:
#. Detect and describe keypoints
#. Match them using bruteforce matcher
#. Estimate homography transformation using RANSAC
#. Filter inliers from all the matches
#. Apply homography transformation to the bounding box to find the object
#. Draw bounding box and inliers, compute inlier ratio as evaluation metric
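A compact, self-contained sketch of this loop is shown below. It assumes ``AKAZE::create()`` is available, reads the input video path from the command line, and hard-codes illustrative values for the ratio test and the RANSAC threshold:
.. code-block:: cpp
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    if (argc < 2) return -1;
    VideoCapture cap(argv[1]);
    Mat first_frame, frame;
    if (!cap.read(first_frame)) return -1;
    // Keypoints and descriptors of the first frame
    Ptr<Feature2D> detector = AKAZE::create();
    vector<KeyPoint> first_kp; Mat first_desc;
    detector->detectAndCompute(first_frame, noArray(), first_kp, first_desc);
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
    const double nn_match_ratio = 0.8;  // illustrative value
    const double ransac_thresh  = 2.5;  // illustrative value, in pixels
    while (cap.read(frame)) {
        // 1. Detect and describe keypoints on the current frame
        vector<KeyPoint> kp; Mat desc;
        detector->detectAndCompute(frame, noArray(), kp, desc);
        // 2. Match against the first frame (2-nn + ratio test)
        vector< vector<DMatch> > nn;
        matcher->knnMatch(first_desc, desc, nn, 2);
        vector<Point2f> p1, p2;
        for (size_t i = 0; i < nn.size(); i++)
            if (nn[i].size() >= 2 && nn[i][0].distance < nn_match_ratio * nn[i][1].distance) {
                p1.push_back(first_kp[nn[i][0].queryIdx].pt);
                p2.push_back(kp[nn[i][0].trainIdx].pt);
            }
        // 3.-4. Estimate the homography with RANSAC; inlier_mask marks the inliers
        if (p1.size() < 4) continue;
        Mat inlier_mask;
        Mat H = findHomography(p1, p2, RANSAC, ransac_thresh, inlier_mask);
        if (H.empty()) continue;
        // 5.-6. Project the object bounding box (set manually on the first frame)
        //       into the current frame with perspectiveTransform(object_bb, new_bb, H),
        //       then draw it and the inliers, and compute the inlier ratio.
    }
    return 0;
}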
.. image:: images/frame.png
:height: 480pt
:width: 640pt
:alt: Result frame example
:align: center
Data
===========
To do the tracking we need a video and object position on the first frame.
You can download our example video and data from `here <https://docs.google.com/file/d/0B72G7D4snftJandBb0taLVJHMFk>`_.
To run the code you have to specify input and output video path and object bounding box.
.. code-block:: none
./planar_tracking blais.mp4 result.avi blais_bb.xml.gz
Source Code
===========
.. literalinclude:: ../../../../samples/cpp/tutorial_code/features2D/AKAZE_tracking/planar_tracking.cpp
:language: cpp
:linenos:
:tab-width: 4
Explanation
===========
Tracker class
--------------
This class implements the algorithm described above
using the given feature detector and descriptor matcher.
* **Setting up the first frame**
.. code-block:: cpp
void Tracker::setFirstFrame(const Mat frame, vector<Point2f> bb, string title, Stats& stats)
{
first_frame = frame.clone();
(*detector)(first_frame, noArray(), first_kp, first_desc);
stats.keypoints = (int)first_kp.size();
drawBoundingBox(first_frame, bb);
putText(first_frame, title, Point(0, 60), FONT_HERSHEY_PLAIN, 5, Scalar::all(0), 4);
object_bb = bb;
}
We compute and store the keypoints and descriptors from the first frame and prepare it for output.
We need to save the number of detected keypoints to make sure both detectors locate roughly the same number of them.
* **Processing frames**
#. Locate keypoints and compute descriptors
.. code-block:: cpp
(*detector)(frame, noArray(), kp, desc);
To find matches between frames we have to locate the keypoints first.
In this tutorial the detectors are set up to find about 1000 keypoints on each frame (see the setup sketch after this list).
#. Use 2-nn matcher to find correspondences
.. code-block:: cpp
matcher->knnMatch(first_desc, desc, matches, 2);
for(unsigned i = 0; i < matches.size(); i++) {
if(matches[i][0].distance < nn_match_ratio * matches[i][1].distance) {
matched1.push_back(first_kp[matches[i][0].queryIdx]);
matched2.push_back( kp[matches[i][0].trainIdx]);
}
}
A match is kept only if the distance to the closest neighbour is less than *nn_match_ratio* times the distance to the second closest one.
2. Use *RANSAC* to estimate homography transformation
.. code-block:: cpp
homography = findHomography(Points(matched1), Points(matched2),
RANSAC, ransac_thresh, inlier_mask);
If there are at least 4 matches we can use random sample consensus to estimate image transformation.
3. Save the inliers
.. code-block:: cpp
for(unsigned i = 0; i < matched1.size(); i++) {
if(inlier_mask.at<uchar>(i)) {
int new_i = static_cast<int>(inliers1.size());
inliers1.push_back(matched1[i]);
inliers2.push_back(matched2[i]);
inlier_matches.push_back(DMatch(new_i, new_i, 0));
}
}
Since *findHomography* computes the inliers we only have to save the chosen points and matches.
4. Project object bounding box
.. code-block:: cpp
perspectiveTransform(object_bb, new_bb, homography);
If there is a reasonable number of inliers we can use the estimated transformation to locate the object.
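As mentioned above, both detectors are tuned to return roughly 1000 keypoints per frame. A minimal sketch of such a setup; the AKAZE detection threshold here is an assumed, illustrative value (it normally has to be adjusted per sequence, and the exact setter may differ between OpenCV 3 revisions), while ORB accepts the keypoint budget directly:
.. code-block:: cpp
Ptr<AKAZE> akaze_detector = AKAZE::create();
akaze_detector->setThreshold(3e-4);        // assumed value; a lower threshold yields more keypoints
Ptr<ORB> orb_detector = ORB::create(1000); // target number of keypoints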
Results
=======
You can watch the resulting `video on youtube <http://www.youtube.com/watch?v=LWY-w8AGGhE>`_.
*AKAZE* statistics:
.. code-block:: none
Matches 626
Inliers 410
Inlier ratio 0.58
Keypoints 1117
*ORB* statistics:
.. code-block:: none
Matches 504
Inliers 319
Inlier ratio 0.56
Keypoints 1112


@@ -1,63 +0,0 @@
.. _detectionOfPlanarObjects:
Detection of planar objects
***************************
.. highlight:: cpp
The goal of this tutorial is to learn how to use *features2d* and *calib3d* modules for detecting known planar objects in scenes.
*Test data*: use images in your data folder, for instance, ``box.png`` and ``box_in_scene.png``.
#.
Create a new console project. Read two input images. ::
Mat img1 = imread(argv[1], IMREAD_GRAYSCALE);
Mat img2 = imread(argv[2], IMREAD_GRAYSCALE);
#.
Detect keypoints in both images and compute descriptors for each of the keypoints. ::
// detecting keypoints
Ptr<Feature2D> surf = SURF::create();
vector<KeyPoint> keypoints1;
Mat descriptors1;
surf->detectAndCompute(img1, Mat(), keypoints1, descriptors1);
... // do the same for the second image
#.
Now, find the closest matches between descriptors from the first image to the second: ::
// matching descriptors
BFMatcher matcher(NORM_L2);
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
#.
Visualize the results: ::
// drawing the results
namedWindow("matches", 1);
Mat img_matches;
drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
imshow("matches", img_matches);
waitKey(0);
#.
Find the homography transformation between two sets of points: ::
vector<Point2f> points1, points2;
// fill the arrays with the points
....
Mat H = findHomography(Mat(points1), Mat(points2), RANSAC, ransacReprojThreshold);
#.
Create a set of inlier matches and draw them. Use the perspectiveTransform function to map points with the homography: ::
Mat points1Projected;
perspectiveTransform(Mat(points1), points1Projected, H);
#.
Use ``drawMatches`` for drawing the inliers; a sketch combining the last steps is given below.
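A minimal sketch of these last steps, under the assumption that ``matches``, ``keypoints1``, ``keypoints2`` and ``H`` come from the steps above and that a reprojection tolerance of 3 pixels is acceptable: ::
// fill the point arrays from the matches
vector<Point2f> points1, points2;
for( size_t i = 0; i < matches.size(); i++ )
{
    points1.push_back( keypoints1[ matches[i].queryIdx ].pt );
    points2.push_back( keypoints2[ matches[i].trainIdx ].pt );
}
// project the points from the first image with the homography
Mat points1Projected;
perspectiveTransform( Mat(points1), points1Projected, H );
// keep a match if the projected point lands close to its counterpart
vector<char> inlier_mask( matches.size(), 0 );
for( size_t i = 0; i < points2.size(); i++ )
{
    Point2f diff = points1Projected.at<Point2f>( (int)i ) - points2[i];
    if( sqrt( diff.x * diff.x + diff.y * diff.y ) < 3 )   // assumed tolerance, in pixels
        inlier_mask[i] = 1;
}
Mat img_inliers;
drawMatches( img1, keypoints1, img2, keypoints2, matches, img_inliers,
             Scalar::all(-1), Scalar::all(-1), inlier_mask );
imshow( "inliers", img_inliers );
waitKey(0);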


@@ -1,97 +0,0 @@
.. _feature_description:
Feature Description
*******************
Goal
=====
In this tutorial you will learn how to:
.. container:: enumeratevisibleitemswithsquare
* Use the :descriptor_extractor:`DescriptorExtractor<>` interface in order to find the feature vector correspondent to the keypoints. Specifically:
* Use :surf_descriptor_extractor:`SurfDescriptorExtractor<>` and its function :descriptor_extractor:`compute<>` to perform the required calculations.
* Use a :brute_force_matcher:`BFMatcher<>` to match the feature vectors
* Use the function :draw_matches:`drawMatches<>` to draw the detected matches.
Theory
======
Code
====
This tutorial's code is shown below.
.. code-block:: cpp
#include <stdio.h>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/xfeatures2d.hpp"
using namespace cv;
using namespace cv::xfeatures2d;
void readme();
/* @function main */
int main( int argc, char** argv )
{
if( argc != 3 )
{ readme(); return -1; }
Mat img_1 = imread( argv[1], IMREAD_GRAYSCALE );
Mat img_2 = imread( argv[2], IMREAD_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ return -1; }
//-- Step 1: Detect the keypoints using SURF Detector, compute the descriptors
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
Mat descriptors_1, descriptors_2;
detector->detectAndCompute( img_1, noArray(), keypoints_1, descriptors_1 );
detector->detectAndCompute( img_2, noArray(), keypoints_2, descriptors_2 );
//-- Step 2: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_L2);
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
//-- Draw matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );
//-- Show detected matches
imshow("Matches", img_matches );
waitKey(0);
return 0;
}
/* @function readme */
void readme()
{ std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }
Explanation
============
Result
======
#. Here is the result after applying the BruteForce matcher between the two original images:
.. image:: images/Feature_Description_BruteForce_Result.jpg
:align: center
:height: 200pt


@@ -1,98 +0,0 @@
.. _feature_detection:
Feature Detection
******************
Goal
=====
In this tutorial you will learn how to:
.. container:: enumeratevisibleitemswithsquare
* Use the :feature_detector:`FeatureDetector<>` interface in order to find interest points. Specifically:
* Use the :surf_feature_detector:`SurfFeatureDetector<>` and its function :feature_detector_detect:`detect<>` to perform the detection process
* Use the function :draw_keypoints:`drawKeypoints<>` to draw the detected keypoints
Theory
======
Code
====
This tutorial's code is shown below.
.. code-block:: cpp
#include <stdio.h>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/xfeatures2d.hpp"
#include "opencv2/highgui.hpp"
using namespace cv;
using namespace cv::xfeatures2d;
void readme();
/* @function main */
int main( int argc, char** argv )
{
if( argc != 3 )
{ readme(); return -1; }
Mat img_1 = imread( argv[1], IMREAD_GRAYSCALE );
Mat img_2 = imread( argv[2], IMREAD_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ std::cout<< " --(!) Error reading images " << std::endl; return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector->detect( img_1, keypoints_1 );
detector->detect( img_2, keypoints_2 );
//-- Draw keypoints
Mat img_keypoints_1; Mat img_keypoints_2;
drawKeypoints( img_1, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
drawKeypoints( img_2, keypoints_2, img_keypoints_2, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
//-- Show detected (drawn) keypoints
imshow("Keypoints 1", img_keypoints_1 );
imshow("Keypoints 2", img_keypoints_2 );
waitKey(0);
return 0;
}
/* @function readme */
void readme()
{ std::cout << " Usage: ./SURF_detector <img1> <img2>" << std::endl; }
Explanation
============
Result
======
#. Here is the result of the feature detection applied to the first image:
.. image:: images/Feature_Detection_Result_a.jpg
:align: center
:height: 125pt
#. And here is the result for the second image:
.. image:: images/Feature_Detection_Result_b.jpg
:align: center
:height: 200pt


@@ -1,149 +0,0 @@
.. _feature_flann_matcher:
Feature Matching with FLANN
****************************
Goal
=====
In this tutorial you will learn how to:
.. container:: enumeratevisibleitemswithsquare
* Use the :flann_based_matcher:`FlannBasedMatcher<>` interface in order to perform a quick and efficient matching by using the :flann:`FLANN<>` ( *Fast Approximate Nearest Neighbor Search Library* )
Theory
======
Code
====
This tutorial's code is shown below.
.. code-block:: cpp
/*
* @file SURF_FlannMatcher
* @brief SURF detector + descriptor + FLANN Matcher
* @author A. Huaman
*/
#include <stdio.h>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/xfeatures2d.hpp"
using namespace std;
using namespace cv;
using namespace cv::xfeatures2d;
void readme();
/*
* @function main
* @brief Main function
*/
int main( int argc, char** argv )
{
if( argc != 3 )
{ readme(); return -1; }
Mat img_1 = imread( argv[1], IMREAD_GRAYSCALE );
Mat img_2 = imread( argv[2], IMREAD_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ std::cout<< " --(!) Error reading images " << std::endl; return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector->detect( img_1, keypoints_1 );
detector->detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
Ptr<SURF> extractor = SURF::create();
Mat descriptors_1, descriptors_2;
extractor->compute( img_1, keypoints_1, descriptors_1 );
extractor->compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_1.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist,
//-- or a small arbitrary value ( 0.02 ) in the event that min_dist is very
//-- small)
//-- PS.- radiusMatch can also be used here.
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{ if( matches[i].distance <= max(2*min_dist, 0.02) )
{ good_matches.push_back( matches[i]); }
}
//-- Draw only "good" matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Show detected matches
imshow( "Good Matches", img_matches );
for( int i = 0; i < (int)good_matches.size(); i++ )
{ printf( "-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d \n", i, good_matches[i].queryIdx, good_matches[i].trainIdx ); }
waitKey(0);
return 0;
}
/*
* @function readme
*/
void readme()
{ std::cout << " Usage: ./SURF_FlannMatcher <img1> <img2>" << std::endl; }
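As the comment in the code notes, ``radiusMatch`` could be used instead of filtering the matches by distance afterwards. A minimal sketch; the 0.1 radius is an assumed value that depends on the descriptor and on the image content:
.. code-block:: cpp
std::vector< std::vector<DMatch> > radius_matches;
matcher.radiusMatch( descriptors_1, descriptors_2, radius_matches, 0.1f );
std::vector< DMatch > good_matches_radius;
for( size_t i = 0; i < radius_matches.size(); i++ )
  for( size_t j = 0; j < radius_matches[i].size(); j++ )
    good_matches_radius.push_back( radius_matches[i][j] );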
Explanation
============
Result
======
#. Here is the result of the feature detection applied to the first image:
.. image:: images/Featur_FlannMatcher_Result.jpg
:align: center
:height: 250pt
#. Additionally, we get the filtered good matches printed to the console:
.. image:: images/Feature_FlannMatcher_Keypoints_Result.jpg
:align: center
:height: 250pt


@@ -1,149 +0,0 @@
.. _feature_homography:
Features2D + Homography to find a known object
**********************************************
Goal
=====
In this tutorial you will learn how to:
.. container:: enumeratevisibleitemswithsquare
* Use the function :find_homography:`findHomography<>` to find the transform between matched keypoints.
* Use the function :perspective_transform:`perspectiveTransform<>` to map the points.
Theory
======
Code
====
This tutorial's code is shown below.
.. code-block:: cpp
#include <stdio.h>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/xfeatures2d.hpp"
using namespace cv;
using namespace cv::xfeatures2d;
void readme();
/* @function main */
int main( int argc, char** argv )
{
if( argc != 3 )
{ readme(); return -1; }
Mat img_object = imread( argv[1], IMREAD_GRAYSCALE );
Mat img_scene = imread( argv[2], IMREAD_GRAYSCALE );
if( !img_object.data || !img_scene.data )
{ std::cout<< " --(!) Error reading images " << std::endl; return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
std::vector<KeyPoint> keypoints_object, keypoints_scene;
detector->detect( img_object, keypoints_object );
detector->detect( img_scene, keypoints_scene );
//-- Step 2: Calculate descriptors (feature vectors)
Ptr<SURF> extractor = SURF::create();
Mat descriptors_object, descriptors_scene;
extractor->compute( img_object, keypoints_object, descriptors_object );
extractor->compute( img_scene, keypoints_scene, descriptors_scene );
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{ if( matches[i].distance < 3*min_dist )
{ good_matches.push_back( matches[i]); }
}
Mat img_matches;
drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Localize the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, RANSAC );
//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = Point2f( 0, 0 ); obj_corners[1] = Point2f( img_object.cols, 0 );
obj_corners[2] = Point2f( img_object.cols, img_object.rows ); obj_corners[3] = Point2f( 0, img_object.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
//-- Show detected matches
imshow( "Good Matches & Object detection", img_matches );
waitKey(0);
return 0;
}
/* @function readme */
void readme()
{ std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }
Explanation
============
Result
======
#. And here is the result for the detected object (highlighted in green)
.. image:: images/Feature_Homography_Result.jpg
:align: center
:height: 200pt


@@ -1,239 +0,0 @@
.. _Table-Of-Content-Feature2D:
*feature2d* module. 2D Features framework
-----------------------------------------------------------
Learn about how to use the feature points detectors, descriptors and matching framework found inside OpenCV.
.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|Harris| **Title:** :ref:`harris_detector`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
Why is it a good idea to track corners? We learn to use the Harris method to detect corners.
===================== ==============================================
.. |Harris| image:: images/trackingmotion/Harris_Detector_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|ShiTomasi| **Title:** :ref:`good_features_to_track`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
Where we use an improved method to detect corners more accurately.
===================== ==============================================
.. |ShiTomasi| image:: images/trackingmotion/Shi_Tomasi_Detector_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|GenericCorner| **Title:** :ref:`generic_corner_detector`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
Here you will learn how to use OpenCV functions to make your personalized corner detector!
===================== ==============================================
.. |GenericCorner| image:: images/trackingmotion/Generic_Corner_Detector_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|Subpixel| **Title:** :ref:`corner_subpixeles`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
Is pixel resolution enough? Here we learn a simple method to improve our accuracy.
===================== ==============================================
.. |Subpixel| image:: images/trackingmotion/Corner_Subpixeles_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|FeatureDetect| **Title:** :ref:`feature_detection`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
In this tutorial, you will use *features2d* to detect interest points.
===================== ==============================================
.. |FeatureDetect| image:: images/Feature_Detection_Tutorial_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|FeatureDescript| **Title:** :ref:`feature_description`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
In this tutorial, you will use *features2d* to calculate feature vectors.
===================== ==============================================
.. |FeatureDescript| image:: images/Feature_Description_Tutorial_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|FeatureFlann| **Title:** :ref:`feature_flann_matcher`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
In this tutorial, you will use the FLANN library to make a fast matching.
===================== ==============================================
.. |FeatureFlann| image:: images/Feature_Flann_Matcher_Tutorial_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|FeatureHomo| **Title:** :ref:`feature_homography`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
In this tutorial, you will use *features2d* and *calib3d* to detect an object in a scene.
===================== ==============================================
.. |FeatureHomo| image:: images/Feature_Homography_Tutorial_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|DetectPlanar| **Title:** :ref:`detectionOfPlanarObjects`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_VictorE|
You will use *features2d* and *calib3d* modules for detecting known planar objects in scenes.
===================== ==============================================
.. |DetectPlanar| image:: images/detection_of_planar_objects.png
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|AkazeMatch| **Title:** :ref:`akazeMatching`
*Compatibility:* > OpenCV 3.0
*Author:* Fedor Morozov
Using *AKAZE* local features to find correspondence between two images.
===================== ==============================================
.. |AkazeMatch| image:: images/AKAZE_Match_Tutorial_Cover.png
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|AkazeTracking| **Title:** :ref:`akazeTracking`
*Compatibility:* > OpenCV 3.0
*Author:* Fedor Morozov
Using *AKAZE* and *ORB* for planar object tracking.
===================== ==============================================
.. |AkazeTracking| image:: images/AKAZE_Tracking_Tutorial_Cover.png
:height: 90pt
:width: 90pt
.. raw:: latex
\pagebreak
.. toctree::
:hidden:
../feature_description/feature_description
../trackingmotion/harris_detector/harris_detector
../feature_flann_matcher/feature_flann_matcher
../feature_homography/feature_homography
../trackingmotion/good_features_to_track/good_features_to_track
../trackingmotion/generic_corner_detector/generic_corner_detector
../trackingmotion/corner_subpixeles/corner_subpixeles
../feature_detection/feature_detection
../detection_of_planar_objects/detection_of_planar_objects
../akaze_matching/akaze_matching
../akaze_tracking/akaze_tracking


@@ -1,137 +0,0 @@
.. _corner_subpixeles:
Detecting corner locations in subpixels
****************************************
Goal
=====
In this tutorial you will learn how to:
.. container:: enumeratevisibleitemswithsquare
* Use the OpenCV function :corner_sub_pix:`cornerSubPix <>` to find more exact corner positions (more exact than integer pixels).
Theory
======
Code
====
This tutorial's code is shown below. You can also download it from `here <https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/TrackingMotion/cornerSubPix_Demo.cpp>`_
.. code-block:: cpp
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
/// Global variables
Mat src, src_gray;
int maxCorners = 10;
int maxTrackbar = 25;
RNG rng(12345);
char* source_window = "Image";
/// Function header
void goodFeaturesToTrack_Demo( int, void* );
/* @function main */
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
cvtColor( src, src_gray, COLOR_BGR2GRAY );
/// Create Window
namedWindow( source_window, WINDOW_AUTOSIZE );
/// Create Trackbar to set the number of corners
createTrackbar( "Max corners:", source_window, &maxCorners, maxTrackbar, goodFeaturesToTrack_Demo);
imshow( source_window, src );
goodFeaturesToTrack_Demo( 0, 0 );
waitKey(0);
return(0);
}
/*
* @function goodFeaturesToTrack_Demo.cpp
* @brief Apply Shi-Tomasi corner detector
*/
void goodFeaturesToTrack_Demo( int, void* )
{
if( maxCorners < 1 ) { maxCorners = 1; }
/// Parameters for Shi-Tomasi algorithm
vector<Point2f> corners;
double qualityLevel = 0.01;
double minDistance = 10;
int blockSize = 3;
bool useHarrisDetector = false;
double k = 0.04;
/// Copy the source image
Mat copy;
copy = src.clone();
/// Apply corner detection
goodFeaturesToTrack( src_gray,
corners,
maxCorners,
qualityLevel,
minDistance,
Mat(),
blockSize,
useHarrisDetector,
k );
/// Draw corners detected
cout<<"** Number of corners detected: "<<corners.size()<<endl;
int r = 4;
for( int i = 0; i < corners.size(); i++ )
{ circle( copy, corners[i], r, Scalar(rng.uniform(0,255), rng.uniform(0,255),
rng.uniform(0,255)), -1, 8, 0 ); }
/// Show what you got
namedWindow( source_window, WINDOW_AUTOSIZE );
imshow( source_window, copy );
/// Set the needed parameters to find the refined corners
Size winSize = Size( 5, 5 );
Size zeroZone = Size( -1, -1 );
TermCriteria criteria = TermCriteria( TermCriteria::EPS + TermCriteria::MAX_ITER, 40, 0.001 );
/// Calculate the refined corner locations
cornerSubPix( src_gray, corners, winSize, zeroZone, criteria );
/// Write them down
for( int i = 0; i < corners.size(); i++ )
{ cout<<" -- Refined Corner ["<<i<<"] ("<<corners[i].x<<","<<corners[i].y<<")"<<endl; }
}
Explanation
============
Result
======
.. image:: images/Corner_Subpixeles_Original_Image.jpg
:align: center
Here is the result:
.. image:: images/Corner_Subpixeles_Result.jpg
:align: center


@@ -1,39 +0,0 @@
.. _generic_corner_detector:
Creating your own corner detector
*********************************
Goal
=====
In this tutorial you will learn how to:
.. container:: enumeratevisibleitemswithsquare
* Use the OpenCV function :corner_eigenvals_and_vecs:`cornerEigenValsAndVecs <>` to find the eigenvalues and eigenvectors to determine if a pixel is a corner.
* Use the OpenCV function :corner_min_eigenval:`cornerMinEigenVal <>` to find the minimum eigenvalues for corner detection.
* Implement your own version of the Harris detector as well as the Shi-Tomasi detector, by using the two functions above.
Theory
======
Code
====
This tutorial's code is shown below. You can also download it from `here <https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/TrackingMotion/cornerDetector_Demo.cpp>`_
.. literalinclude:: ../../../../../samples/cpp/tutorial_code/TrackingMotion/cornerDetector_Demo.cpp
:language: cpp
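For orientation, the core of the sample boils down to calls like the following. This is a simplified sketch, not the full cornerDetector_Demo.cpp, and it assumes a grayscale input ``src_gray``:
.. code-block:: cpp
// minimum eigenvalue of the derivative covariation matrix, per pixel (Shi-Tomasi measure)
Mat myShiTomasi_dst;
cornerMinEigenVal( src_gray, myShiTomasi_dst, 3, 3 );
// both eigenvalues and eigenvectors, per pixel: (lambda_1, lambda_2, x1, y1, x2, y2)
Mat myHarris_dst;
cornerEigenValsAndVecs( src_gray, myHarris_dst, 3, 3 );
// a Harris-like response can then be formed as lambda_1*lambda_2 - k*(lambda_1 + lambda_2)^2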
Explanation
============
Result
======
.. image:: images/My_Harris_corner_detector_Result.jpg
:align: center
.. image:: images/My_Shi_Tomasi_corner_detector_Result.jpg
:align: center


@@ -1,120 +0,0 @@
.. _good_features_to_track:
Shi-Tomasi corner detector
**************************
Goal
=====
In this tutorial you will learn how to:
.. container:: enumeratevisibleitemswithsquare
* Use the function :good_features_to_track:`goodFeaturesToTrack <>` to detect corners using the Shi-Tomasi method.
Theory
======
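In brief: like the Harris detector (see :ref:`harris_detector`), the Shi-Tomasi method looks at the two eigenvalues :math:`\lambda_1, \lambda_2` of the local gradient matrix, but scores a window directly with the smaller one:
.. math::
   R = \min(\lambda_1, \lambda_2)
A window is accepted as a corner if :math:`R` exceeds a threshold; in *goodFeaturesToTrack* the *qualityLevel* parameter sets this threshold as a fraction of the best corner measure found in the image.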
Code
====
This tutorial's code is shown below. You can also download it from `here <https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/TrackingMotion/goodFeaturesToTrack_Demo.cpp>`_
.. code-block:: cpp
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
/// Global variables
Mat src, src_gray;
int maxCorners = 23;
int maxTrackbar = 100;
RNG rng(12345);
char* source_window = "Image";
/// Function header
void goodFeaturesToTrack_Demo( int, void* );
/*
* @function main
*/
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
cvtColor( src, src_gray, COLOR_BGR2GRAY );
/// Create Window
namedWindow( source_window, WINDOW_AUTOSIZE );
/// Create Trackbar to set the number of corners
createTrackbar( "Max corners:", source_window, &maxCorners, maxTrackbar, goodFeaturesToTrack_Demo );
imshow( source_window, src );
goodFeaturesToTrack_Demo( 0, 0 );
waitKey(0);
return(0);
}
/*
* @function goodFeaturesToTrack_Demo.cpp
* @brief Apply Shi-Tomasi corner detector
*/
void goodFeaturesToTrack_Demo( int, void* )
{
if( maxCorners < 1 ) { maxCorners = 1; }
/// Parameters for Shi-Tomasi algorithm
vector<Point2f> corners;
double qualityLevel = 0.01;
double minDistance = 10;
int blockSize = 3;
bool useHarrisDetector = false;
double k = 0.04;
/// Copy the source image
Mat copy;
copy = src.clone();
/// Apply corner detection
goodFeaturesToTrack( src_gray,
corners,
maxCorners,
qualityLevel,
minDistance,
Mat(),
blockSize,
useHarrisDetector,
k );
/// Draw corners detected
cout<<"** Number of corners detected: "<<corners.size()<<endl;
int r = 4;
for( int i = 0; i < corners.size(); i++ )
{ circle( copy, corners[i], r, Scalar(rng.uniform(0,255), rng.uniform(0,255),
rng.uniform(0,255)), -1, 8, 0 ); }
/// Show what you got
namedWindow( source_window, WINDOW_AUTOSIZE );
imshow( source_window, copy );
}
Explanation
============
Result
======
.. image:: images/Feature_Detection_Result_a.jpg
:align: center


@@ -1,245 +0,0 @@
.. _harris_detector:
Harris corner detector
**********************
Goal
=====
In this tutorial you will learn:
.. container:: enumeratevisibleitemswithsquare
* What features are and why they are important
* Use the function :corner_harris:`cornerHarris <>` to detect corners using the Harris-Stephens method.
Theory
======
What is a feature?
-------------------
.. container:: enumeratevisibleitemswithsquare
* In computer vision, usually we need to find matching points between different frames of an environment. Why? If we know how two images relate to each other, we can use *both* images to extract information from them.
* When we say **matching points** we are referring, in a general sense, to *characteristics* in the scene that we can recognize easily. We call these characteristics **features**.
* **So, what characteristics should a feature have?**
* It must be *uniquely recognizable*
Types of Image Features
------------------------
To mention a few:
.. container:: enumeratevisibleitemswithsquare
* Edges
* **Corners** (also known as interest points)
* Blobs (also known as regions of interest )
In this tutorial we will study the *corner* features, specifically.
Why is a corner so special?
----------------------------
.. container:: enumeratevisibleitemswithsquare
* Because, since it is the intersection of two edges, it represents a point at which the directions of these two edges *change*. Hence, the gradient of the image (in both directions) has a high variation, which can be used to detect it.
How does it work?
-----------------
.. container:: enumeratevisibleitemswithsquare
* Let's look for corners. Since corners represent a variation in the gradient of the image, we will look for this "variation".
* Consider a grayscale image :math:`I`. We are going to sweep a window :math:`w(x,y)` (with displacements :math:`u` in the x direction and :math:`v` in the y direction) over :math:`I` and calculate the variation of intensity.
.. math::
E(u,v) = \sum _{x,y} w(x,y)[ I(x+u,y+v) - I(x,y)]^{2}
where:
* :math:`w(x,y)` is the window at position :math:`(x,y)`
* :math:`I(x,y)` is the intensity at :math:`(x,y)`
* :math:`I(x+u,y+v)` is the intensity at the moved window :math:`(x+u,y+v)`
* Since we are looking for windows with corners, we are looking for windows with a large variation in intensity. Hence, we have to maximize the equation above, specifically the term:
.. math::
\sum _{x,y}[ I(x+u,y+v) - I(x,y)]^{2}
* Using *Taylor expansion*:
.. math::
E(u,v) \approx \sum _{x,y}[ I(x,y) + u I_{x} + vI_{y} - I(x,y)]^{2}
* Expanding the equation and cancelling properly:
.. math::
E(u,v) \approx \sum _{x,y} u^{2}I_{x}^{2} + 2uvI_{x}I_{y} + v^{2}I_{y}^{2}
* Which can be expressed in a matrix form as:
.. math::
E(u,v) \approx \begin{bmatrix}
u & v
\end{bmatrix}
\left (
\displaystyle \sum_{x,y}
w(x,y)
\begin{bmatrix}
I_x^{2} & I_{x}I_{y} \\
I_xI_{y} & I_{y}^{2}
\end{bmatrix}
\right )
\begin{bmatrix}
u \\
v
\end{bmatrix}
* Let's denote:
.. math::
M = \displaystyle \sum_{x,y}
w(x,y)
\begin{bmatrix}
I_x^{2} & I_{x}I_{y} \\
I_xI_{y} & I_{y}^{2}
\end{bmatrix}
* So, our equation now is:
.. math::
E(u,v) \approx \begin{bmatrix}
u & v
\end{bmatrix}
M
\begin{bmatrix}
u \\
v
\end{bmatrix}
* A score is calculated for each window, to determine if it can possibly contain a corner:
.. math::
R = det(M) - k(trace(M))^{2}
where:
* det(M) = :math:`\lambda_{1}\lambda_{2}`
* trace(M) = :math:`\lambda_{1}+\lambda_{2}`
A window with a score :math:`R` greater than a certain value is considered a "corner". The constant :math:`k` is an empirically chosen parameter, typically in the range 0.04-0.06 (the code below uses 0.04). Intuitively, when both eigenvalues are small the region is flat and :math:`|R|` is small; when one is much larger than the other the window contains an edge and :math:`R` is negative; when both are large the window contains a corner and :math:`R` is large and positive.
Code
====
This tutorial code's is shown lines below. You can also download it from `here <https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/TrackingMotion/cornerHarris_Demo.cpp>`_
.. code-block:: cpp
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
/// Global variables
Mat src, src_gray;
int thresh = 200;
int max_thresh = 255;
char* source_window = "Source image";
char* corners_window = "Corners detected";
/// Function header
void cornerHarris_demo( int, void* );
/* @function main */
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
cvtColor( src, src_gray, COLOR_BGR2GRAY );
/// Create a window and a trackbar
namedWindow( source_window, WINDOW_AUTOSIZE );
createTrackbar( "Threshold: ", source_window, &thresh, max_thresh, cornerHarris_demo );
imshow( source_window, src );
cornerHarris_demo( 0, 0 );
waitKey(0);
return(0);
}
/* @function cornerHarris_demo */
void cornerHarris_demo( int, void* )
{
Mat dst, dst_norm, dst_norm_scaled;
dst = Mat::zeros( src.size(), CV_32FC1 );
/// Detector parameters
int blockSize = 2;
int apertureSize = 3;
double k = 0.04;
/// Detecting corners
cornerHarris( src_gray, dst, blockSize, apertureSize, k, BORDER_DEFAULT );
/// Normalizing
normalize( dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );
convertScaleAbs( dst_norm, dst_norm_scaled );
/// Drawing a circle around corners
for( int j = 0; j < dst_norm.rows ; j++ )
{ for( int i = 0; i < dst_norm.cols; i++ )
{
if( (int) dst_norm.at<float>(j,i) > thresh )
{
circle( dst_norm_scaled, Point( i, j ), 5, Scalar(0), 2, 8, 0 );
}
}
}
/// Showing the result
namedWindow( corners_window, WINDOW_AUTOSIZE );
imshow( corners_window, dst_norm_scaled );
}
Explanation
============
Result
======
The original image:
.. image:: images/Harris_Detector_Original_Image.jpg
:align: center
The detected corners are surrounded by a small black circle.
.. image:: images/Harris_Detector_Result.jpg
:align: center