Doxygen tutorials: basic structure

Maksim Shabunin
2014-11-27 15:39:05 +03:00
parent 220f671655
commit 8375182e34
99 changed files with 17805 additions and 0 deletions


@@ -0,0 +1,133 @@
AKAZE local features matching {#tutorial_akaze_matching}
=============================
Introduction
------------
In this tutorial we will learn how to use *AKAZE* local features to detect and match keypoints on
two images.
We will find keypoints on a pair of images related by a given homography matrix, match them, and
count the number of inliers (i.e. matches that fit the given homography).
You can find an expanded version of this example here:
<https://github.com/pablofdezalc/test_kaze_akaze_opencv>
Data
----
We are going to use images 1 and 3 from the *Graffiti* sequence of the Oxford dataset.
![image](images/graf.png)
The homography is given by a 3-by-3 matrix:
@code{.none}
7.6285898e-01 -2.9922929e-01 2.2567123e+02
3.3443473e-01 1.0143901e+00 -7.6999973e+01
3.4663091e-04 -1.4364524e-05 1.0000000e+00
@endcode
You can find the images (*graf1.png*, *graf3.png*) and homography (*H1to3p.xml*) in
*opencv/samples/cpp*.
### Source Code
@includelineno cpp/tutorial_code/features2D/AKAZE_match.cpp
### Explanation
1. **Load images and homography**
@code{.cpp}
Mat img1 = imread("graf1.png", IMREAD_GRAYSCALE);
Mat img2 = imread("graf3.png", IMREAD_GRAYSCALE);
Mat homography;
FileStorage fs("H1to3p.xml", FileStorage::READ);
fs.getFirstTopLevelNode() >> homography;
@endcode
We load grayscale images here. The homography is stored in an XML file created with *FileStorage*.
1. **Detect keypoints and compute descriptors using AKAZE**
@code{.cpp}
vector<KeyPoint> kpts1, kpts2;
Mat desc1, desc2;
AKAZE akaze;
akaze(img1, noArray(), kpts1, desc1);
akaze(img2, noArray(), kpts2, desc2);
@endcode
We create an AKAZE object and use its *operator()* functionality. Since we don't need the *mask*
parameter, *noArray()* is used.
1. **Use brute-force matcher to find 2-nn matches**
@code{.cpp}
BFMatcher matcher(NORM_HAMMING);
vector< vector<DMatch> > nn_matches;
matcher.knnMatch(desc1, desc2, nn_matches, 2);
@endcode
We use Hamming distance because AKAZE uses a binary descriptor by default.
1. **Use 2-nn matches to find correct keypoint matches**
@code{.cpp}
for(size_t i = 0; i < nn_matches.size(); i++) {
DMatch first = nn_matches[i][0];
float dist1 = nn_matches[i][0].distance;
float dist2 = nn_matches[i][1].distance;
if(dist1 < nn_match_ratio * dist2) {
matched1.push_back(kpts1[first.queryIdx]);
matched2.push_back(kpts2[first.trainIdx]);
}
}
@endcode
A match is kept only if the distance to the closest match is less than *nn_match_ratio* times the
distance to the second-closest one (the ratio test).
1. **Check if our matches fit in the homography model**
@code{.cpp}
for(size_t i = 0; i < matched1.size(); i++) {
Mat col = Mat::ones(3, 1, CV_64F);
col.at<double>(0) = matched1[i].pt.x;
col.at<double>(1) = matched1[i].pt.y;
col = homography * col;
col /= col.at<double>(2);
float dist = sqrt( pow(col.at<double>(0) - matched2[i].pt.x, 2) +
pow(col.at<double>(1) - matched2[i].pt.y, 2));
if(dist < inlier_threshold) {
int new_i = static_cast<int>(inliers1.size());
inliers1.push_back(matched1[i]);
inliers2.push_back(matched2[i]);
good_matches.push_back(DMatch(new_i, new_i, 0));
}
}
@endcode
If the distance from the first keypoint's projection to the second keypoint is less than the
threshold, then it fits the homography model.
We create a new set of matches for the inliers, because it is required by the drawing function.
1. **Output results**
@code{.cpp}
Mat res;
drawMatches(img1, inliers1, img2, inliers2, good_matches, res);
imwrite("res.png", res);
...
@endcode
Here we save the resulting image and print some statistics.
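The statistics printing is elided above; a minimal sketch of what it could look like, using only
the variables already shown (the exact formatting in the sample may differ):
@code{.cpp}
// Print keypoint/match/inlier counts and the inlier ratio
double inlier_ratio = inliers1.size() * 1.0 / matched1.size();
cout << "A-KAZE Matching Results" << endl;
cout << "Keypoints 1: \t" << kpts1.size() << endl;
cout << "Keypoints 2: \t" << kpts2.size() << endl;
cout << "Matches:     \t" << matched1.size() << endl;
cout << "Inliers:     \t" << inliers1.size() << endl;
cout << "Inlier Ratio:\t" << inlier_ratio << endl;
@endcode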
### Results
Found matches
-------------
![image](images/res.png)
A-KAZE Matching Results
-----------------------
@code{.none}
Keypoints 1: 2943
Keypoints 2: 3511
Matches: 447
Inliers: 308
Inlier Ratio: 0.689038
@endcode


@@ -0,0 +1,138 @@
AKAZE and ORB planar tracking {#tutorial_akaze_tracking}
=============================
Introduction
------------
In this tutorial we will compare *AKAZE* and *ORB* local features, using them to find matches between
video frames and track object movements.
The algorithm is as follows:
- Detect and describe keypoints on the first frame, manually set object boundaries
- For every next frame:
1. Detect and describe keypoints
2. Match them using bruteforce matcher
3. Estimate homography transformation using RANSAC
4. Filter inliers from all the matches
5. Apply homography transformation to the bounding box to find the object
6. Draw bounding box and inliers, compute inlier ratio as evaluation metric
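In code, the per-frame part of this loop can be sketched as follows (an illustrative outline only:
*video_in* is an assumed *VideoCapture*, and *Points* is the sample's helper that converts
keypoints to *Point2f*; the *Tracker* class below implements the real version):
@code{.cpp}
// Hedged outline of steps 1-6 above
while(video_in.read(frame)) {
    (*detector)(frame, noArray(), kp, desc);                         // detect and describe
    matcher->knnMatch(first_desc, desc, matches, 2);                 // brute-force 2-nn matching
    // ... ratio test fills matched1/matched2 (see "Processing frames" below) ...
    homography = findHomography(Points(matched1), Points(matched2),
                                RANSAC, ransac_thresh, inlier_mask); // RANSAC estimation
    // keep only the matches flagged in inlier_mask
    perspectiveTransform(object_bb, new_bb, homography);             // project the bounding box
    // draw new_bb and the inliers, update the inlier-ratio statistic
}
@endcode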
![image](images/frame.png)
### Data
To do the tracking we need a video and the object position on the first frame.
You can download our example video and data from
[here](https://docs.google.com/file/d/0B72G7D4snftJandBb0taLVJHMFk).
To run the code you have to specify the input and output video paths and the object bounding box file.
@code{.none}
./planar_tracking blais.mp4 result.avi blais_bb.xml.gz
@endcode
### Source Code
@includelineno cpp/tutorial_code/features2D/AKAZE_tracking/planar_tracking.cpp
### Explanation
Tracker class
-------------
This class implements the algorithm described above using the given feature detector and
descriptor matcher.
- **Setting up the first frame**
@code{.cpp}
void Tracker::setFirstFrame(const Mat frame, vector<Point2f> bb, string title, Stats& stats)
{
first_frame = frame.clone();
(*detector)(first_frame, noArray(), first_kp, first_desc);
stats.keypoints = (int)first_kp.size();
drawBoundingBox(first_frame, bb);
putText(first_frame, title, Point(0, 60), FONT_HERSHEY_PLAIN, 5, Scalar::all(0), 4);
object_bb = bb;
}
@endcode
We compute and store keypoints and descriptors from the first frame and prepare it for the
output.
We also save the number of detected keypoints to make sure both detectors locate roughly the
same number of them.
- **Processing frames**
1. Locate keypoints and compute descriptors
@code{.cpp}
(*detector)(frame, noArray(), kp, desc);
@endcode
To find matches between frames we have to locate the keypoints first.
In this tutorial detectors are set up to find about 1000 keypoints on each frame.
1. Use 2-nn matcher to find correspondences
@code{.cpp}
matcher->knnMatch(first_desc, desc, matches, 2);
for(unsigned i = 0; i < matches.size(); i++) {
if(matches[i][0].distance < nn_match_ratio * matches[i][1].distance) {
matched1.push_back(first_kp[matches[i][0].queryIdx]);
matched2.push_back( kp[matches[i][0].trainIdx]);
}
}
@endcode
A pair is kept as a match only if the distance to the closest match is less than
*nn_match_ratio* times the distance to the second-closest one (the ratio test).
2. Use *RANSAC* to estimate homography transformation
@code{.cpp}
homography = findHomography(Points(matched1), Points(matched2),
RANSAC, ransac_thresh, inlier_mask);
@endcode
If there are at least 4 matches, we can use random sample consensus (RANSAC) to estimate the
image transformation (a sketch of this guard follows the list).
3. Save the inliers
@code{.cpp}
for(unsigned i = 0; i < matched1.size(); i++) {
if(inlier_mask.at<uchar>(i)) {
int new_i = static_cast<int>(inliers1.size());
inliers1.push_back(matched1[i]);
inliers2.push_back(matched2[i]);
inlier_matches.push_back(DMatch(new_i, new_i, 0));
}
}
@endcode
Since *findHomography* computes the inliers, we only have to save the chosen points and
matches.
4. Project object bounding box
@code{.cpp}
perspectiveTransform(object_bb, new_bb, homography);
@endcode
If there is a reasonable number of inliers, we can use the estimated transformation to locate
the object.
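A hedged sketch of how the two guards above (at least 4 matches before *findHomography*, a usable
homography before projection) might be combined; the failure handling is an assumption, not the
sample's exact code:
@code{.cpp}
// Illustrative guards around estimation and projection
if(matched1.size() >= 4) {
    homography = findHomography(Points(matched1), Points(matched2),
                                RANSAC, ransac_thresh, inlier_mask);
}
if(matched1.size() < 4 || homography.empty()) {
    // Too few matches or a failed estimation: skip drawing for this frame
    new_bb.clear();
} else {
    perspectiveTransform(object_bb, new_bb, homography);
}
@endcode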
### Results
You can watch the resulting [video on YouTube](http://www.youtube.com/watch?v=LWY-w8AGGhE).
*AKAZE* statistics:
@code{.none}
Matches 626
Inliers 410
Inlier ratio 0.58
Keypoints 1117
@endcode
*ORB* statistics:
@code{.none}
Matches 504
Inliers 319
Inlier ratio 0.56
Keypoints 1112
@endcode


@@ -0,0 +1,54 @@
Detection of planar objects {#tutorial_detection_of_planar_objects}
===========================
The goal of this tutorial is to learn how to use *features2d* and *calib3d* modules for detecting
known planar objects in scenes.
*Test data*: use images in your data folder, for instance, box.png and box_in_scene.png.
- Create a new console project. Read two input images:
  @code{.cpp}
  Mat img1 = imread(argv[1], IMREAD_GRAYSCALE);
  Mat img2 = imread(argv[2], IMREAD_GRAYSCALE);
  @endcode
- Detect keypoints in both images and compute descriptors for each of the keypoints:
  @code{.cpp}
  // detecting keypoints and computing descriptors
  Ptr<Feature2D> surf = SURF::create();
  vector<KeyPoint> keypoints1, keypoints2;
  Mat descriptors1, descriptors2;
  surf->detectAndCompute(img1, Mat(), keypoints1, descriptors1);
  surf->detectAndCompute(img2, Mat(), keypoints2, descriptors2);
  @endcode
- Now, find the closest matches between descriptors from the first image and the second:
  @code{.cpp}
  // matching descriptors
  BFMatcher matcher(NORM_L2);
  vector<DMatch> matches;
  matcher.match(descriptors1, descriptors2, matches);
  @endcode
- Visualize the results:
  @code{.cpp}
  // drawing the results
  namedWindow("matches", 1);
  Mat img_matches;
  drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
  imshow("matches", img_matches);
  waitKey(0);
  @endcode
- Find the homography transformation between the two sets of points:
  @code{.cpp}
  vector<Point2f> points1, points2;
  // fill the arrays with the points
  ....
  Mat H = findHomography(Mat(points1), Mat(points2), RANSAC, ransacReprojThreshold);
  @endcode
- Create a set of inlier matches and draw them. Use the perspectiveTransform function to map points
  with the homography:
  @code{.cpp}
  Mat points1Projected;
  perspectiveTransform(Mat(points1), points1Projected, H);
  @endcode
- Use drawMatches for drawing the inliers (see the sketch below).
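A possible way to finish the inlier selection and drawing (a sketch, assuming *points1*/*points2*
were filled in the same order as *matches* and reusing *ransacReprojThreshold* from above):
@code{.cpp}
// Keep a match as an inlier if its projection lands close to the matched point
vector<DMatch> inlier_matches;
for (size_t i = 0; i < points1.size(); i++)
{
    Point2f diff = points1Projected.at<Point2f>((int)i) - points2[i];
    if (sqrt(diff.dot(diff)) <= ransacReprojThreshold)
        inlier_matches.push_back(matches[i]);
}
Mat img_inliers;
drawMatches(img1, keypoints1, img2, keypoints2, inlier_matches, img_inliers);
imshow("inliers", img_inliers);
waitKey(0);
@endcode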


@@ -0,0 +1,91 @@
Feature Description {#tutorial_feature_description}
===================
Goal
----
In this tutorial you will learn how to:
- Use the @ref cv::DescriptorExtractor interface in order to find the feature vectors corresponding
to the keypoints. Specifically:
- Use @ref cv::SurfDescriptorExtractor and its function @ref cv::compute to perform the
required calculations.
- Use a @ref cv::BFMatcher to match the feature vectors
- Use the function @ref cv::drawMatches to draw the detected matches.
Theory
------
Code
----
This tutorial's code is shown below.
@code{.cpp}
#include <stdio.h>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/xfeatures2d.hpp"
using namespace cv;
using namespace cv::xfeatures2d;
void readme();
/* @function main */
int main( int argc, char** argv )
{
if( argc != 3 )
{ readme(); return -1; }
Mat img_1 = imread( argv[1], IMREAD_GRAYSCALE );
Mat img_2 = imread( argv[2], IMREAD_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ std::cout << " --(!) Error reading images " << std::endl; return -1; }
//-- Step 1: Detect the keypoints using SURF Detector, compute the descriptors
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
Mat descriptors_1, descriptors_2;
detector->detectAndCompute( img_1, noArray(), keypoints_1, descriptors_1 );
detector->detectAndCompute( img_2, noArray(), keypoints_2, descriptors_2 );
//-- Step 2: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_L2);
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
//-- Draw matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );
//-- Show detected matches
imshow("Matches", img_matches );
waitKey(0);
return 0;
}
/* @function readme */
void readme()
{ std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }
@endcode
Explanation
-----------
Result
------
1. Here is the result after applying the BruteForce matcher between the two original images:
![image](images/Feature_Description_BruteForce_Result.jpg)


@@ -0,0 +1,89 @@
Feature Detection {#tutorial_feature_detection}
=================
Goal
----
In this tutorial you will learn how to:
- Use the @ref cv::FeatureDetector interface in order to find interest points. Specifically:
- Use the @ref cv::SurfFeatureDetector and its function @ref cv::detect to perform the
detection process
- Use the function @ref cv::drawKeypoints to draw the detected keypoints
Theory
------
Code
----
This tutorial's code is shown below.
@code{.cpp}
#include <stdio.h>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/xfeatures2d.hpp"
#include "opencv2/highgui.hpp"
using namespace cv;
using namespace cv::xfeatures2d;
void readme();
/* @function main */
int main( int argc, char** argv )
{
if( argc != 3 )
{ readme(); return -1; }
Mat img_1 = imread( argv[1], IMREAD_GRAYSCALE );
Mat img_2 = imread( argv[2], IMREAD_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ std::cout<< " --(!) Error reading images " << std::endl; return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector->detect( img_1, keypoints_1 );
detector->detect( img_2, keypoints_2 );
//-- Draw keypoints
Mat img_keypoints_1; Mat img_keypoints_2;
drawKeypoints( img_1, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
drawKeypoints( img_2, keypoints_2, img_keypoints_2, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
//-- Show detected (drawn) keypoints
imshow("Keypoints 1", img_keypoints_1 );
imshow("Keypoints 2", img_keypoints_2 );
waitKey(0);
return 0;
}
/* @function readme */
void readme()
{ std::cout << " Usage: ./SURF_detector <img1> <img2>" << std::endl; }
@endcode
Explanation
-----------
Result
------
1. Here is the result of the feature detection applied to the first image:
![image](images/Feature_Detection_Result_a.jpg)
2. And here is the result for the second image:
![image](images/Feature_Detection_Result_b.jpg)


@@ -0,0 +1,140 @@
Feature Matching with FLANN {#tutorial_feature_flann_matcher}
===========================
Goal
----
In this tutorial you will learn how to:
- Use the @ref cv::FlannBasedMatcher interface in order to perform a quick and efficient matching
by using the @ref cv::FLANN ( *Fast Approximate Nearest Neighbor Search Library* )
Theory
------
Code
----
This tutorial's code is shown below.
@code{.cpp}
/*
* @file SURF_FlannMatcher
* @brief SURF detector + descriptor + FLANN Matcher
* @author A. Huaman
*/
#include <stdio.h>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/xfeatures2d.hpp"
using namespace std;
using namespace cv;
using namespace cv::xfeatures2d;
void readme();
/*
* @function main
* @brief Main function
*/
int main( int argc, char** argv )
{
if( argc != 3 )
{ readme(); return -1; }
Mat img_1 = imread( argv[1], IMREAD_GRAYSCALE );
Mat img_2 = imread( argv[2], IMREAD_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ std::cout<< " --(!) Error reading images " << std::endl; return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector->detect( img_1, keypoints_1 );
detector->detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
Mat descriptors_1, descriptors_2;
detector->compute( img_1, keypoints_1, descriptors_1 );
detector->compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_1.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist,
//-- or a small arbitrary value ( 0.02 ) in the event that min_dist is very
//-- small)
//-- PS.- radiusMatch can also be used here.
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{ if( matches[i].distance <= max(2*min_dist, 0.02) )
{ good_matches.push_back( matches[i]); }
}
//-- Draw only "good" matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Show detected matches
imshow( "Good Matches", img_matches );
for( int i = 0; i < (int)good_matches.size(); i++ )
{ printf( "-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d \n", i, good_matches[i].queryIdx, good_matches[i].trainIdx ); }
waitKey(0);
return 0;
}
/*
* @function readme
*/
void readme()
{ std::cout << " Usage: ./SURF_FlannMatcher <img1> <img2>" << std::endl; }
@endcode
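The comment in the listing above mentions *radiusMatch* as an alternative; here is a minimal
sketch of that variant (the radius value is an assumption to tune, not a value from the sample):
@code{.cpp}
//-- Alternative: collect every match whose descriptor distance is below a fixed radius
std::vector< std::vector< DMatch > > radius_matches;
float max_distance = 0.25f; // assumed threshold for SURF descriptors
matcher.radiusMatch( descriptors_1, descriptors_2, radius_matches, max_distance );
std::vector< DMatch > good_matches_r;
for( size_t i = 0; i < radius_matches.size(); i++ )
{ for( size_t j = 0; j < radius_matches[i].size(); j++ )
  { good_matches_r.push_back( radius_matches[i][j] ); } }
@endcode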
Explanation
-----------
Result
------
1. Here is the result of the FLANN-based matching between the two images:
![image](images/Featur_FlannMatcher_Result.jpg)
2. Additionally, the filtered good matches are printed to the console:
![image](images/Feature_FlannMatcher_Keypoints_Result.jpg)


@@ -0,0 +1,141 @@
Features2D + Homography to find a known object {#tutorial_feature_homography}
==============================================
Goal
----
In this tutorial you will learn how to:
- Use the function @ref cv::findHomography to find the transform between matched keypoints.
- Use the function @ref cv::perspectiveTransform to map the points.
Theory
------
Code
----
This tutorial's code is shown below.
@code{.cpp}
#include <stdio.h>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/xfeatures2d.hpp"
using namespace cv;
using namespace cv::xfeatures2d;
void readme();
/* @function main */
int main( int argc, char** argv )
{
if( argc != 3 )
{ readme(); return -1; }
Mat img_object = imread( argv[1], IMREAD_GRAYSCALE );
Mat img_scene = imread( argv[2], IMREAD_GRAYSCALE );
if( !img_object.data || !img_scene.data )
{ std::cout<< " --(!) Error reading images " << std::endl; return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
std::vector<KeyPoint> keypoints_object, keypoints_scene;
detector->detect( img_object, keypoints_object );
detector->detect( img_scene, keypoints_scene );
//-- Step 2: Calculate descriptors (feature vectors)
Mat descriptors_object, descriptors_scene;
detector->compute( img_object, keypoints_object, descriptors_object );
detector->compute( img_scene, keypoints_scene, descriptors_scene );
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{ if( matches[i].distance < 3*min_dist )
{ good_matches.push_back( matches[i]); }
}
Mat img_matches;
drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Localize the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( size_t i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, RANSAC );
//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = Point2f( 0, 0 ); obj_corners[1] = Point2f( (float)img_object.cols, 0 );
obj_corners[2] = Point2f( (float)img_object.cols, (float)img_object.rows ); obj_corners[3] = Point2f( 0, (float)img_object.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
//-- Show detected matches
imshow( "Good Matches & Object detection", img_matches );
waitKey(0);
return 0;
}
/* @function readme */
void readme()
{ std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }
@endcode
Explanation
-----------
Result
------
1. And here is the result for the detected object (highlighted in green)
![image](images/Feature_Homography_Result.jpg)


@@ -0,0 +1,95 @@
2D Features framework (feature2d module) {#tutorial_table_of_content_features2d}
=========================================
Learn about how to use the feature points detectors, descriptors and matching framework found inside
OpenCV.
- @subpage tutorial_harris_detector
*Compatibility:* \> OpenCV 2.0
*Author:* Ana Huamán
Why is it a good idea to track corners? We learn to use the Harris method to detect
corners
- @subpage tutorial_good_features_to_track
*Compatibility:* \> OpenCV 2.0
*Author:* Ana Huamán
Where we use an improved method to detect corners more accurately.
- @subpage tutorial_generic_corner_detector
*Compatibility:* \> OpenCV 2.0
*Author:* Ana Huamán
Here you will learn how to use OpenCV functions to make your personalized corner detector!
- @subpage tutorial_corner_subpixeles
*Compatibility:* \> OpenCV 2.0
*Author:* Ana Huamán
Is pixel resolution enough? Here we learn a simple method to improve our accuracy.
- @subpage tutorial_feature_detection
*Compatibility:* \> OpenCV 2.0
*Author:* Ana Huamán
In this tutorial, you will use *features2d* to detect interest points.
- @subpage tutorial_feature_description
*Compatibility:* \> OpenCV 2.0
*Author:* Ana Huamán
In this tutorial, you will use *features2d* to calculate feature vectors.
- @subpage tutorial_feature_flann_matcher
*Compatibility:* \> OpenCV 2.0
*Author:* Ana Huamán
In this tutorial, you will use the FLANN library to perform fast matching.
- @subpage tutorial_feature_homography
*Compatibility:* \> OpenCV 2.0
*Author:* Ana Huamán
In this tutorial, you will use *features2d* and *calib3d* to detect an object in a scene.
- @subpage tutorial_detection_of_planar_objects
*Compatibility:* \> OpenCV 2.0
*Author:* Victor Eruhimov
You will use *features2d* and *calib3d* modules for detecting known planar objects in
scenes.
- @subpage tutorial_akaze_matching
*Compatibility:* \> OpenCV 3.0
*Author:* Fedor Morozov
Using *AKAZE* local features to find correspondence between two images.
- @subpage tutorial_akaze_tracking
*Compatibility:* \> OpenCV 3.0
*Author:* Fedor Morozov
Using *AKAZE* and *ORB* for planar object tracking.


@@ -0,0 +1,130 @@
Detecting corner locations in subpixels {#tutorial_corner_subpixeles}
========================================
Goal
----
In this tutorial you will learn how to:
- Use the OpenCV function @ref cv::cornerSubPix to find more exact corner positions (more exact
than integer pixels).
Theory
------
Code
----
This tutorial's code is shown below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/TrackingMotion/cornerSubPix_Demo.cpp)
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
/// Global variables
Mat src, src_gray;
int maxCorners = 10;
int maxTrackbar = 25;
RNG rng(12345);
const char* source_window = "Image";
/// Function header
void goodFeaturesToTrack_Demo( int, void* );
/* @function main */
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
cvtColor( src, src_gray, COLOR_BGR2GRAY );
/// Create Window
namedWindow( source_window, WINDOW_AUTOSIZE );
/// Create Trackbar to set the number of corners
createTrackbar( "Max corners:", source_window, &maxCorners, maxTrackbar, goodFeaturesToTrack_Demo);
imshow( source_window, src );
goodFeaturesToTrack_Demo( 0, 0 );
waitKey(0);
return(0);
}
/*
* @function goodFeaturesToTrack_Demo
* @brief Apply Shi-Tomasi corner detector
*/
void goodFeaturesToTrack_Demo( int, void* )
{
if( maxCorners < 1 ) { maxCorners = 1; }
/// Parameters for Shi-Tomasi algorithm
vector<Point2f> corners;
double qualityLevel = 0.01;
double minDistance = 10;
int blockSize = 3;
bool useHarrisDetector = false;
double k = 0.04;
/// Copy the source image
Mat copy;
copy = src.clone();
/// Apply corner detection
goodFeaturesToTrack( src_gray,
corners,
maxCorners,
qualityLevel,
minDistance,
Mat(),
blockSize,
useHarrisDetector,
k );
/// Draw corners detected
cout<<"** Number of corners detected: "<<corners.size()<<endl;
int r = 4;
for( size_t i = 0; i < corners.size(); i++ )
{ circle( copy, corners[i], r, Scalar(rng.uniform(0,255), rng.uniform(0,255),
rng.uniform(0,255)), -1, 8, 0 ); }
/// Show what you got
namedWindow( source_window, WINDOW_AUTOSIZE );
imshow( source_window, copy );
/// Set the needed parameters to find the refined corners
Size winSize = Size( 5, 5 );
Size zeroZone = Size( -1, -1 );
TermCriteria criteria = TermCriteria( TermCriteria::EPS + TermCriteria::MAX_ITER, 40, 0.001 );
/// Calculate the refined corner locations
cornerSubPix( src_gray, corners, winSize, zeroZone, criteria );
/// Write them down
for( size_t i = 0; i < corners.size(); i++ )
{ cout<<" -- Refined Corner ["<<i<<"] ("<<corners[i].x<<","<<corners[i].y<<")"<<endl; }
}
@endcode
Explanation
-----------
Result
------
![image](images/Corner_Subpixeles_Original_Image.jpg)
Here is the result:
![image](images/Corner_Subpixeles_Result.jpg)


@@ -0,0 +1,36 @@
Creating your own corner detector {#tutorial_generic_corner_detector}
================================
Goal
----
In this tutorial you will learn how to:
- Use the OpenCV function @ref cv::cornerEigenValsAndVecs to find the eigenvalues and eigenvectors
to determine if a pixel is a corner.
- Use the OpenCV function @ref cv::cornerMinEigenVal to find the minimum eigenvalues for corner
detection.
- To implement our own version of the Harris detector as well as the Shi-Tomasi detector, by using
the two functions above.
Theory
------
Code
----
This tutorial's code is shown below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/TrackingMotion/cornerDetector_Demo.cpp)
@includelineno cpp/tutorial_code/TrackingMotion/cornerDetector_Demo.cpp
Explanation
-----------
Result
------
![image](images/My_Harris_corner_detector_Result.jpg)
![image](images/My_Shi_Tomasi_corner_detector_Result.jpg)


@@ -0,0 +1,115 @@
Shi-Tomasi corner detector {#tutorial_good_features_to_track}
==========================
Goal
----
In this tutorial you will learn how to:
- Use the function @ref cv::goodFeaturesToTrack to detect corners using the Shi-Tomasi method.
Theory
------
Code
----
This tutorial's code is shown below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/TrackingMotion/goodFeaturesToTrack_Demo.cpp)
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
/// Global variables
Mat src, src_gray;
int maxCorners = 23;
int maxTrackbar = 100;
RNG rng(12345);
const char* source_window = "Image";
/// Function header
void goodFeaturesToTrack_Demo( int, void* );
/*
* @function main
*/
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
cvtColor( src, src_gray, COLOR_BGR2GRAY );
/// Create Window
namedWindow( source_window, WINDOW_AUTOSIZE );
/// Create Trackbar to set the number of corners
createTrackbar( "Max corners:", source_window, &maxCorners, maxTrackbar, goodFeaturesToTrack_Demo );
imshow( source_window, src );
goodFeaturesToTrack_Demo( 0, 0 );
waitKey(0);
return(0);
}
/*
* @function goodFeaturesToTrack_Demo
* @brief Apply Shi-Tomasi corner detector
*/
void goodFeaturesToTrack_Demo( int, void* )
{
if( maxCorners < 1 ) { maxCorners = 1; }
/// Parameters for Shi-Tomasi algorithm
vector<Point2f> corners;
double qualityLevel = 0.01;
double minDistance = 10;
int blockSize = 3;
bool useHarrisDetector = false;
double k = 0.04;
/// Copy the source image
Mat copy;
copy = src.clone();
/// Apply corner detection
goodFeaturesToTrack( src_gray,
corners,
maxCorners,
qualityLevel,
minDistance,
Mat(),
blockSize,
useHarrisDetector,
k );
/// Draw corners detected
cout<<"** Number of corners detected: "<<corners.size()<<endl;
int r = 4;
for( size_t i = 0; i < corners.size(); i++ )
{ circle( copy, corners[i], r, Scalar(rng.uniform(0,255), rng.uniform(0,255),
rng.uniform(0,255)), -1, 8, 0 ); }
/// Show what you got
namedWindow( source_window, WINDOW_AUTOSIZE );
imshow( source_window, copy );
}
@endcode
Explanation
-----------
Result
------
![image](images/Feature_Detection_Result_a.jpg)


@@ -0,0 +1,209 @@
Harris corner detector {#tutorial_harris_detector}
======================
Goal
----
In this tutorial you will learn:
- What features are and why they are important
- Use the function @ref cv::cornerHarris to detect corners using the Harris-Stephens method.
Theory
------
### What is a feature?
- In computer vision, we usually need to find matching points between different frames of an
environment. Why? If we know how two images relate to each other, we can use *both* images to
extract information from them.
- When we say **matching points** we are referring, in a general sense, to *characteristics* in
the scene that we can recognize easily. We call these characteristics **features**.
- **So, what characteristics should a feature have?**
- It must be *uniquely recognizable*
### Types of Image Features
To mention a few:
- Edges
- **Corners** (also known as interest points)
- Blobs (also known as regions of interest )
In this tutorial we will study the *corner* features, specifically.
### Why is a corner so special?
- Because it is the intersection of two edges, it represents a point at which the directions of
these two edges *change*. Hence, the gradient of the image (in both directions) has a high
variation, which can be used to detect it.
### How does it work?
- Let's look for corners. Since corners represent a variation in the gradient of the image, we
will look for this "variation".
- Consider a grayscale image \f$I\f$. We are going to sweep a window \f$w(x,y)\f$ (with displacements \f$u\f$
in the x direction and \f$v\f$ in the y direction) over \f$I\f$ and calculate the variation of
intensity.
\f[E(u,v) = \sum _{x,y} w(x,y)[ I(x+u,y+v) - I(x,y)]^{2}\f]
where:
- \f$w(x,y)\f$ is the window at position \f$(x,y)\f$
- \f$I(x,y)\f$ is the intensity at \f$(x,y)\f$
- \f$I(x+u,y+v)\f$ is the intensity at the moved window \f$(x+u,y+v)\f$
- Since we are looking for windows with corners, we are looking for windows with a large variation
in intensity. Hence, we have to maximize the equation above, specifically the term:
\f[\sum _{x,y}[ I(x+u,y+v) - I(x,y)]^{2}\f]
- Using *Taylor expansion*:
\f[E(u,v) \approx \sum _{x,y}[ I(x,y) + u I_{x} + vI_{y} - I(x,y)]^{2}\f]
- Expanding the equation and cancelling properly:
\f[E(u,v) \approx \sum _{x,y} u^{2}I_{x}^{2} + 2uvI_{x}I_{y} + v^{2}I_{y}^{2}\f]
- Which can be expressed in a matrix form as:
\f[E(u,v) \approx \begin{bmatrix}
u & v
\end{bmatrix}
\left (
\displaystyle \sum_{x,y}
w(x,y)
\begin{bmatrix}
I_x^{2} & I_{x}I_{y} \\
I_xI_{y} & I_{y}^{2}
\end{bmatrix}
\right )
\begin{bmatrix}
u \\
v
\end{bmatrix}\f]
- Let's denote:
\f[M = \displaystyle \sum_{x,y}
w(x,y)
\begin{bmatrix}
I_x^{2} & I_{x}I_{y} \\
I_xI_{y} & I_{y}^{2}
\end{bmatrix}\f]
- So, our equation now is:
\f[E(u,v) \approx \begin{bmatrix}
u & v
\end{bmatrix}
M
\begin{bmatrix}
u \\
v
\end{bmatrix}\f]
- A score is calculated for each window, to determine if it can possibly contain a corner:
\f[R = det(M) - k(trace(M))^{2}\f]
where:
- det(M) = \f$\lambda_{1}\lambda_{2}\f$
- trace(M) = \f$\lambda_{1}+\lambda_{2}\f$
A window with a score \f$R\f$ greater than a certain threshold is considered a "corner".
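- The eigenvalues of \f$M\f$ make this score easy to interpret; the following summary is a
standard reading of the Harris response, added here for clarity:
\f[\begin{array}{ll}
\lambda_{1} \approx 0,\ \lambda_{2} \approx 0 & \Rightarrow R \approx 0 \quad \text{(flat region)} \\
\lambda_{1} \gg \lambda_{2} \ \text{or} \ \lambda_{2} \gg \lambda_{1} & \Rightarrow R < 0 \quad \text{(edge)} \\
\lambda_{1}, \lambda_{2} \ \text{both large} & \Rightarrow R \ \text{large and positive} \quad \text{(corner)}
\end{array}\f]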
Code
----
This tutorial's code is shown below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/TrackingMotion/cornerHarris_Demo.cpp)
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
/// Global variables
Mat src, src_gray;
int thresh = 200;
int max_thresh = 255;
const char* source_window = "Source image";
const char* corners_window = "Corners detected";
/// Function header
void cornerHarris_demo( int, void* );
/* @function main */
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
cvtColor( src, src_gray, COLOR_BGR2GRAY );
/// Create a window and a trackbar
namedWindow( source_window, WINDOW_AUTOSIZE );
createTrackbar( "Threshold: ", source_window, &thresh, max_thresh, cornerHarris_demo );
imshow( source_window, src );
cornerHarris_demo( 0, 0 );
waitKey(0);
return(0);
}
/* @function cornerHarris_demo */
void cornerHarris_demo( int, void* )
{
Mat dst, dst_norm, dst_norm_scaled;
dst = Mat::zeros( src.size(), CV_32FC1 );
/// Detector parameters
int blockSize = 2;
int apertureSize = 3;
double k = 0.04;
/// Detecting corners
cornerHarris( src_gray, dst, blockSize, apertureSize, k, BORDER_DEFAULT );
/// Normalizing
normalize( dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );
convertScaleAbs( dst_norm, dst_norm_scaled );
/// Drawing a circle around corners
for( int j = 0; j < dst_norm.rows ; j++ )
{ for( int i = 0; i < dst_norm.cols; i++ )
{
if( (int) dst_norm.at<float>(j,i) > thresh )
{
circle( dst_norm_scaled, Point( i, j ), 5, Scalar(0), 2, 8, 0 );
}
}
}
/// Showing the result
namedWindow( corners_window, WINDOW_AUTOSIZE );
imshow( corners_window, dst_norm_scaled );
}
@endcode
Explanation
-----------
Result
------
The original image:
![image](images/Harris_Detector_Original_Image.jpg)
The detected corners are surrounded by small black circles:
![image](images/Harris_Detector_Result.jpg)