OpenCV reference manual (C++ part only for now) is now produced directly from RST, not from TeX.

This commit is contained in:
Vadim Pisarevsky
2011-02-22 20:43:26 +00:00
parent 32a2fde8ac
commit 371aa08006
65 changed files with 41233 additions and 98 deletions


@@ -0,0 +1,446 @@
Common Interfaces of Descriptor Extractors
==========================================
.. highlight:: cpp
Extractors of keypoint descriptors in OpenCV have wrappers with a common interface that enables switching easily
between different algorithms solving the same problem. This section is devoted to computing descriptors
that are represented as vectors in a multidimensional space. All objects that implement ``vector``
descriptor extractors inherit the
:func:`DescriptorExtractor`
interface.
.. index:: DescriptorExtractor
.. _DescriptorExtractor:
DescriptorExtractor
-------------------
`id=0.00924308242838 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorExtractor>`__
.. ctype:: DescriptorExtractor
Abstract base class for computing descriptors for image keypoints.
::
class CV_EXPORTS DescriptorExtractor
{
public:
virtual ~DescriptorExtractor();
void compute( const Mat& image, vector<KeyPoint>& keypoints,
Mat& descriptors ) const;
void compute( const vector<Mat>& images, vector<vector<KeyPoint> >& keypoints,
vector<Mat>& descriptors ) const;
virtual void read( const FileNode& );
virtual void write( FileStorage& ) const;
virtual int descriptorSize() const = 0;
virtual int descriptorType() const = 0;
static Ptr<DescriptorExtractor> create( const string& descriptorExtractorType );
protected:
...
};
..
In this interface we assume a keypoint descriptor can be represented as a
dense, fixed-dimensional vector of some basic type. Most descriptors used
in practice follow this pattern, as it makes it very easy to compute
distances between descriptors. Therefore we represent a collection of
descriptors as a
:func:`Mat`
, where each row is one keypoint descriptor.
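For example, descriptors for detected keypoints can be computed as follows (a minimal sketch; the image file name is illustrative, and ``SurfFeatureDetector`` / ``SurfDescriptorExtractor`` are concrete classes described in this manual):
::
Mat img = imread("image.png", 0);           // load a grayscale image
vector<KeyPoint> keypoints;
SurfFeatureDetector detector;
detector.detect(img, keypoints);            // detect keypoints
SurfDescriptorExtractor extractor;
Mat descriptors;                            // one row per keypoint after compute()
extractor.compute(img, keypoints, descriptors);
..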
.. index:: DescriptorExtractor::compute
cv::DescriptorExtractor::compute
--------------------------------
`id=0.622580160404 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorExtractor%3A%3Acompute>`__
.. cfunction:: void DescriptorExtractor::compute( const Mat\& image, vector<KeyPoint>\& keypoints, Mat\& descriptors ) const
Compute the descriptors for a set of keypoints detected in an image (first variant)
or image set (second variant).
:param image: The image.
:param keypoints: The keypoints. Keypoints for which a descriptor cannot be computed are removed.
:param descriptors: The descriptors. Row i is the descriptor for keypoint i.
.. cfunction:: void DescriptorExtractor::compute( const vector<Mat>\& images, vector<vector<KeyPoint> >\& keypoints, vector<Mat>\& descriptors ) const
:param images: The image set.
:param keypoints: Input keypoints collection. ``keypoints[i]`` is the set of keypoints
detected in ``images[i]``. Keypoints for which a descriptor
cannot be computed are removed.
:param descriptors: Descriptor collection. ``descriptors[i]`` are the descriptors computed for
the keypoint set ``keypoints[i]``.
.. index:: DescriptorExtractor::read
cv::DescriptorExtractor::read
-----------------------------
`id=0.708176779821 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorExtractor%3A%3Aread>`__
.. cfunction:: void DescriptorExtractor::read( const FileNode\& fn )
Reads a descriptor extractor object from a file node.
:param fn: File node from which the extractor will be read.
.. index:: DescriptorExtractor::write
cv::DescriptorExtractor::write
------------------------------
`id=0.206682397054 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorExtractor%3A%3Awrite>`__
.. cfunction:: void DescriptorExtractor::write( FileStorage\& fs ) const
Writes a descriptor extractor object to file storage.
:param fs: File storage in which the extractor will be written.
.. index:: DescriptorExtractor::create
cv::DescriptorExtractor::create
-------------------------------
`id=0.923714079643 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorExtractor%3A%3Acreate>`__
:func:`DescriptorExtractor`
.. cfunction:: Ptr<DescriptorExtractor> DescriptorExtractor::create( const string\& descriptorExtractorType )
Descriptor extractor factory that creates a descriptor extractor of the given type with
default parameters (rather than using the default constructor).
:param descriptorExtractorType: Descriptor extractor type.
Now the following descriptor extractor types are supported:
``"SIFT"`` -- :func:`SiftDescriptorExtractor`,
``"SURF"`` -- :func:`SurfDescriptorExtractor`,
``"BRIEF"`` -- :func:`BriefDescriptorExtractor`.
Also a combined format is supported: descriptor extractor adapter name
( ``"Opponent"`` -- :func:`OpponentColorDescriptorExtractor` ) + descriptor extractor name (see above),
e.g. ``"OpponentSIFT"``, etc.
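For example, an extractor computing SURF descriptors in the opponent color space can be created as follows (a usage sketch):
::
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("OpponentSURF");
..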
.. index:: SiftDescriptorExtractor
.. _SiftDescriptorExtractor:
SiftDescriptorExtractor
-----------------------
`id=0.676546819501 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/SiftDescriptorExtractor>`__
.. ctype:: SiftDescriptorExtractor
Wrapping class for computing descriptors using the
:func:`SIFT`
class.
::
class SiftDescriptorExtractor : public DescriptorExtractor
{
public:
SiftDescriptorExtractor(
const SIFT::DescriptorParams& descriptorParams=SIFT::DescriptorParams(),
const SIFT::CommonParams& commonParams=SIFT::CommonParams() );
SiftDescriptorExtractor( double magnification, bool isNormalize=true,
bool recalculateAngles=true, int nOctaves=SIFT::CommonParams::DEFAULT_NOCTAVES,
int nOctaveLayers=SIFT::CommonParams::DEFAULT_NOCTAVE_LAYERS,
int firstOctave=SIFT::CommonParams::DEFAULT_FIRST_OCTAVE,
int angleMode=SIFT::CommonParams::FIRST_ANGLE );
virtual void read (const FileNode &fn);
virtual void write (FileStorage &fs) const;
virtual int descriptorSize() const;
virtual int descriptorType() const;
protected:
...
}
..
.. index:: SurfDescriptorExtractor
.. _SurfDescriptorExtractor:
SurfDescriptorExtractor
-----------------------
`id=0.638581739296 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/SurfDescriptorExtractor>`__
.. ctype:: SurfDescriptorExtractor
Wrapping class for computing descriptors using the
:func:`SURF`
class.
::
class SurfDescriptorExtractor : public DescriptorExtractor
{
public:
SurfDescriptorExtractor( int nOctaves=4,
int nOctaveLayers=2, bool extended=false );
virtual void read (const FileNode &fn);
virtual void write (FileStorage &fs) const;
virtual int descriptorSize() const;
virtual int descriptorType() const;
protected:
...
}
..
.. index:: CalonderDescriptorExtractor
.. _CalonderDescriptorExtractor:
CalonderDescriptorExtractor
---------------------------
`id=0.301561509204 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/CalonderDescriptorExtractor>`__
.. ctype:: CalonderDescriptorExtractor
Wrapping class for computing descriptors using the
:func:`RTreeClassifier`
class.
::
template<typename T>
class CalonderDescriptorExtractor : public DescriptorExtractor
{
public:
CalonderDescriptorExtractor( const string& classifierFile );
virtual void read( const FileNode &fn );
virtual void write( FileStorage &fs ) const;
virtual int descriptorSize() const;
virtual int descriptorType() const;
protected:
...
}
..
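A usage sketch (the classifier file name is illustrative; it must point to a previously trained ``RTreeClassifier`` saved to disk, and ``img`` and ``keypoints`` are assumed to be prepared as above):
::
CalonderDescriptorExtractor<float> extractor("rtree_classifier.txt");
Mat descriptors;
extractor.compute(img, keypoints, descriptors);
..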
.. index:: OpponentColorDescriptorExtractor
.. _OpponentColorDescriptorExtractor:
OpponentColorDescriptorExtractor
--------------------------------
`id=0.081563051622 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/OpponentColorDescriptorExtractor>`__
.. ctype:: OpponentColorDescriptorExtractor
Adapts a descriptor extractor to compute descriptors in the Opponent Color Space
(refer to van de Sande et al., CGIV 2008, "Color Descriptors for Object Category Recognition").
The input RGB image is transformed into the Opponent Color Space. Then the unadapted descriptor extractor
(set in the constructor) computes descriptors on each of the three channels, and the results are concatenated
into a single color descriptor.
::
class OpponentColorDescriptorExtractor : public DescriptorExtractor
{
public:
OpponentColorDescriptorExtractor( const Ptr<DescriptorExtractor>& dextractor );
virtual void read( const FileNode& );
virtual void write( FileStorage& ) const;
virtual int descriptorSize() const;
virtual int descriptorType() const;
protected:
...
};
..
.. index:: BriefDescriptorExtractor
.. _BriefDescriptorExtractor:
BriefDescriptorExtractor
------------------------
`id=0.207875021385 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BriefDescriptorExtractor>`__
.. ctype:: BriefDescriptorExtractor
Class for computing BRIEF descriptors described in the paper of Calonder M., Lepetit V.,
Strecha C., Fua P.: "BRIEF: Binary Robust Independent Elementary Features",
11th European Conference on Computer Vision (ECCV), Heraklion, Crete. LNCS Springer, September 2010.
::
class BriefDescriptorExtractor : public DescriptorExtractor
{
public:
static const int PATCH_SIZE = 48;
static const int KERNEL_SIZE = 9;
// bytes is a length of descriptor in bytes. It can be equal 16, 32 or 64 bytes.
BriefDescriptorExtractor( int bytes = 32 );
virtual void read( const FileNode& );
virtual void write( FileStorage& ) const;
virtual int descriptorSize() const;
virtual int descriptorType() const;
protected:
...
};
..
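A usage sketch (``img`` and ``keypoints`` are assumed to be prepared as in the examples above; BRIEF descriptors are stored as bytes and matched with the Hamming distance):
::
BriefDescriptorExtractor extractor(32);   // 32-byte descriptors
Mat briefDescriptors;                     // CV_8U matrix, one row per keypoint
extractor.compute(img, keypoints, briefDescriptors);
..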


@@ -0,0 +1,637 @@
Common Interfaces of Descriptor Matchers
========================================
.. highlight:: cpp
Matchers of keypoint descriptors in OpenCV have wrappers with a common interface that enables switching easily
between different algorithms solving the same problem. This section is devoted to matching descriptors
that are represented as vectors in a multidimensional space. All objects that implement ``vector``
descriptor matchers inherit the
:func:`DescriptorMatcher`
interface.
.. index:: DMatch
.. _DMatch:
DMatch
------
`id=0.193402930617 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DMatch>`__
.. ctype:: DMatch
Match between two keypoint descriptors: query descriptor index,
train descriptor index, train image index and distance between descriptors.
::
struct DMatch
{
DMatch() : queryIdx(-1), trainIdx(-1), imgIdx(-1),
distance(std::numeric_limits<float>::max()) {}
DMatch( int _queryIdx, int _trainIdx, float _distance ) :
queryIdx(_queryIdx), trainIdx(_trainIdx), imgIdx(-1),
distance(_distance) {}
DMatch( int _queryIdx, int _trainIdx, int _imgIdx, float _distance ) :
queryIdx(_queryIdx), trainIdx(_trainIdx), imgIdx(_imgIdx),
distance(_distance) {}
int queryIdx; // query descriptor index
int trainIdx; // train descriptor index
int imgIdx; // train image index
float distance;
// less is better
bool operator<( const DMatch &m ) const;
};
..
.. index:: DescriptorMatcher
.. _DescriptorMatcher:
DescriptorMatcher
-----------------
`id=0.0185035556985 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher>`__
.. ctype:: DescriptorMatcher
Abstract base class for matching keypoint descriptors. It has two groups
of match methods: for matching descriptors of one image with another image or
with an image set.
::
class DescriptorMatcher
{
public:
virtual ~DescriptorMatcher();
virtual void add( const vector<Mat>& descriptors );
const vector<Mat>& getTrainDescriptors() const;
virtual void clear();
bool empty() const;
virtual bool isMaskSupported() const = 0;
virtual void train();
/*
* Group of methods to match descriptors from image pair.
*/
void match( const Mat& queryDescriptors, const Mat& trainDescriptors,
vector<DMatch>& matches, const Mat& mask=Mat() ) const;
void knnMatch( const Mat& queryDescriptors, const Mat& trainDescriptors,
vector<vector<DMatch> >& matches, int k,
const Mat& mask=Mat(), bool compactResult=false ) const;
void radiusMatch( const Mat& queryDescriptors, const Mat& trainDescriptors,
vector<vector<DMatch> >& matches, float maxDistance,
const Mat& mask=Mat(), bool compactResult=false ) const;
/*
* Group of methods to match descriptors from one image to image set.
*/
void match( const Mat& queryDescriptors, vector<DMatch>& matches,
const vector<Mat>& masks=vector<Mat>() );
void knnMatch( const Mat& queryDescriptors, vector<vector<DMatch> >& matches,
int k, const vector<Mat>& masks=vector<Mat>(),
bool compactResult=false );
void radiusMatch( const Mat& queryDescriptors, vector<vector<DMatch> >& matches,
float maxDistance, const vector<Mat>& masks=vector<Mat>(),
bool compactResult=false );
virtual void read( const FileNode& );
virtual void write( FileStorage& ) const;
virtual Ptr<DescriptorMatcher> clone( bool emptyTrainData=false ) const = 0;
static Ptr<DescriptorMatcher> create( const string& descriptorMatcherType );
protected:
vector<Mat> trainDescCollection;
...
};
..
.. index:: DescriptorMatcher::add
cv::DescriptorMatcher::add
--------------------------
`id=0.549221986718 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher%3A%3Aadd>`__
.. cfunction:: void add( const vector<Mat>\& descriptors )
Adds descriptors to the train descriptor collection. If the collection ``trainDescCollection`` is not empty,
the new descriptors are added to the existing train descriptors.
:param descriptors: Descriptors to add. Each ``descriptors[i]`` is a set of descriptors
from the same (one) train image.
.. index:: DescriptorMatcher::getTrainDescriptors
cv::DescriptorMatcher::getTrainDescriptors
------------------------------------------
`id=0.354691082433 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher%3A%3AgetTrainDescriptors>`__
.. cfunction:: const vector<Mat>\& getTrainDescriptors() const
Returns a constant link to the train descriptor collection (i.e. ``trainDescCollection``).
.. index:: DescriptorMatcher::clear
cv::DescriptorMatcher::clear
----------------------------
`id=0.776403699262 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher%3A%3Aclear>`__
.. cfunction:: void DescriptorMatcher::clear()
Clear train descriptor collection.
.. index:: DescriptorMatcher::empty
cv::DescriptorMatcher::empty
----------------------------
`id=0.186730120991 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher%3A%3Aempty>`__
.. cfunction:: bool DescriptorMatcher::empty() const
Returns true if there are no train descriptors in the collection.
.. index:: DescriptorMatcher::isMaskSupported
cv::DescriptorMatcher::isMaskSupported
--------------------------------------
`id=0.4880242426 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher%3A%3AisMaskSupported>`__
.. cfunction:: bool DescriptorMatcher::isMaskSupported()
Returns true if descriptor matcher supports masking permissible matches.
.. index:: DescriptorMatcher::train
cv::DescriptorMatcher::train
----------------------------
`id=0.708209257367 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher%3A%3Atrain>`__
.. cfunction:: void DescriptorMatcher::train()
Trains the descriptor matcher (e.g. trains the flann index). The ``train()`` method is run every time
before matching in all match methods. Some descriptor matchers (e.g. ``BruteForceMatcher``) have an empty
implementation of this method; other matchers really train their inner structures (e.g. ``FlannBasedMatcher``
trains ``flann::Index``).
.. index:: DescriptorMatcher::match
cv::DescriptorMatcher::match
----------------------------
`id=0.803878673329 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher%3A%3Amatch>`__
.. cfunction:: void DescriptorMatcher::match( const Mat\& queryDescriptors, const Mat\& trainDescriptors, vector<DMatch>\& matches, const Mat\& mask=Mat() ) const
Find the best match for each descriptor from a query set with train descriptors.
It is supposed that the query descriptors come from keypoints detected on the same query image.
In the first variant of this method the train descriptors are passed as an input argument, and it is
supposed that they come from keypoints detected on the same train image. In the second variant
of the method, the train descriptor collection that was set using the ``add`` method is used.
An optional mask (or masks) can be set to describe which descriptors can be matched. ``queryDescriptors[i]`` can be matched with ``trainDescriptors[j]`` only if ``mask.at<uchar>(i,j)`` is non-zero.
.. cfunction:: void DescriptorMatcher::match( const Mat\& queryDescriptors, vector<DMatch>\& matches, const vector<Mat>\& masks=vector<Mat>() )
:param queryDescriptors: Query set of descriptors.
:param trainDescriptors: Train set of descriptors. This will not be added to train descriptors collection
stored in class object.
:param matches: Matches. If a query descriptor is masked out in ``mask``, no match will be added for this descriptor,
so the ``matches`` size may be less than the query descriptors count.
:param mask: Mask specifying permissible matches between input query and train matrices of descriptors.
:param masks: The set of masks. Each ``masks[i]`` specifies permissible matches between the input query descriptors
and the stored train descriptors from the i-th image (i.e. ``trainDescCollection[i]``).
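For example, the first variant can be used as follows (a minimal sketch; ``queryDescriptors`` and ``trainDescriptors`` are assumed to be computed beforehand, e.g. with a :func:`DescriptorExtractor`):
::
BruteForceMatcher<L2<float> > matcher;
vector<DMatch> matches;
matcher.match(queryDescriptors, trainDescriptors, matches);
..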
.. index:: DescriptorMatcher::knnMatch
cv::DescriptorMatcher::knnMatch
-------------------------------
`id=0.510078848403 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher%3A%3AknnMatch>`__
:func:`DescriptorMatcher::match`
.. cfunction:: void DescriptorMatcher::knnMatch( const Mat\& queryDescriptors, const Mat\& trainDescriptors, vector<vector<DMatch> >\& matches, int k, const Mat\& mask=Mat(), bool compactResult=false ) const
Find the k best matches for each descriptor from a query set with train descriptors.
The found k (or less, if not possible) matches are returned in the order of increasing distance.
See :func:`DescriptorMatcher::match` for details about the query and train descriptors.
.. cfunction:: void DescriptorMatcher::knnMatch( const Mat\& queryDescriptors, vector<vector<DMatch> >\& matches, int k, const vector<Mat>\& masks=vector<Mat>(), bool compactResult=false )
:param queryDescriptors, trainDescriptors, mask, masks: See in :func:`DescriptorMatcher::match` .
:param matches: Matches. Each ``matches[i]`` contains k or less matches for the same query descriptor.
:param k: Count of best matches found per each query descriptor (or less if it is not possible).
:param compactResult: It is used when the mask (or masks) is not empty. If ``compactResult`` is false, the ``matches`` vector will have the same size as the ``queryDescriptors`` rows. If ``compactResult``
is true, the ``matches`` vector will not contain matches for fully masked-out query descriptors.
.. index:: DescriptorMatcher::radiusMatch
cv::DescriptorMatcher::radiusMatch
----------------------------------
`id=0.763278154174 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher%3A%3AradiusMatch>`__
:func:`DescriptorMatcher::match`
.. cfunction:: void DescriptorMatcher::radiusMatch( const Mat\& queryDescriptors, const Mat\& trainDescriptors, vector<vector<DMatch> >\& matches, float maxDistance, const Mat\& mask=Mat(), bool compactResult=false ) const
Find the best matches for each query descriptor which have a distance less than the given threshold.
The found matches are returned in the order of increasing distance. See :func:`DescriptorMatcher::match`
for details about the query and train descriptors.
.. cfunction:: void DescriptorMatcher::radiusMatch( const Mat\& queryDescriptors, vector<vector<DMatch> >\& matches, float maxDistance, const vector<Mat>\& masks=vector<Mat>(), bool compactResult=false )
:param queryDescriptors, trainDescriptors, mask, masks: See in :func:`DescriptorMatcher::match` .
:param matches, compactResult: See in :func:`DescriptorMatcher::knnMatch` .
:param maxDistance: The threshold on the distance of found matches.
.. index:: DescriptorMatcher::clone
cv::DescriptorMatcher::clone
----------------------------
`id=0.743679534249 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher%3A%3Aclone>`__
.. cfunction:: Ptr<DescriptorMatcher> DescriptorMatcher::clone( bool emptyTrainData ) const
Clone the matcher.
:param emptyTrainData: If ``emptyTrainData`` is false, the method creates a deep copy of the object, i.e. copies
both parameters and train data. If ``emptyTrainData`` is true, the method creates an object copy with the current parameters
but with empty train data.
.. index:: DescriptorMatcher::create
cv::DescriptorMatcher::create
-----------------------------
`id=0.681869512138 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/DescriptorMatcher%3A%3Acreate>`__
:func:`DescriptorMatcher`
.. cfunction:: Ptr<DescriptorMatcher> DescriptorMatcher::create( const string\& descriptorMatcherType )
Descriptor matcher factory that creates a descriptor matcher of the
given type with default parameters (rather than using the default constructor).
:param descriptorMatcherType: Descriptor matcher type.
Now the following matcher types are supported: ``"BruteForce"`` (it uses ``L2`` ),
``"BruteForce-L1"``, ``"BruteForce-Hamming"``, ``"BruteForce-HammingLUT"``, ``"FlannBased"``.
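For example (a usage sketch; ``queryDescriptors`` and ``trainDescriptors`` are assumed to be computed beforehand):
::
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
vector<DMatch> matches;
matcher->match(queryDescriptors, trainDescriptors, matches);
..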
.. index:: BruteForceMatcher
.. _BruteForceMatcher:
BruteForceMatcher
-----------------
`id=0.47821275438 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BruteForceMatcher>`__
.. ctype:: BruteForceMatcher
Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest
descriptor in the second set by trying each one. This descriptor matcher supports masking
permissible matches between descriptor sets.
::
template<class Distance>
class BruteForceMatcher : public DescriptorMatcher
{
public:
BruteForceMatcher( Distance d = Distance() );
virtual ~BruteForceMatcher();
virtual bool isMaskSupported() const;
virtual Ptr<DescriptorMatcher> clone( bool emptyTrainData=false ) const;
protected:
...
}
..
For efficiency, ``BruteForceMatcher`` is templated on the distance metric.
For float descriptors, a common choice would be
``L2<float>``
. The classes of supported distances are:
::
template<typename T>
struct Accumulator
{
typedef T Type;
};
template<> struct Accumulator<unsigned char> { typedef unsigned int Type; };
template<> struct Accumulator<unsigned short> { typedef unsigned int Type; };
template<> struct Accumulator<char> { typedef int Type; };
template<> struct Accumulator<short> { typedef int Type; };
/*
* Squared Euclidean distance functor
*/
template<class T>
struct L2
{
typedef T ValueType;
typedef typename Accumulator<T>::Type ResultType;
ResultType operator()( const T* a, const T* b, int size ) const;
};
/*
* Manhattan distance (city block distance) functor
*/
template<class T>
struct CV_EXPORTS L1
{
typedef T ValueType;
typedef typename Accumulator<T>::Type ResultType;
ResultType operator()( const T* a, const T* b, int size ) const;
...
};
/*
* Hamming distance functor
*/
struct HammingLUT
{
typedef unsigned char ValueType;
typedef int ResultType;
ResultType operator()( const unsigned char* a, const unsigned char* b,
int size ) const;
...
};
struct Hamming
{
typedef unsigned char ValueType;
typedef int ResultType;
ResultType operator()( const unsigned char* a, const unsigned char* b,
int size ) const;
...
};
..
.. index:: FlannBasedMatcher
.. _FlannBasedMatcher:
FlannBasedMatcher
-----------------
`id=0.721140850904 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/FlannBasedMatcher>`__
.. ctype:: FlannBasedMatcher
Flann based descriptor matcher. This matcher trains
:func:`flann::Index`
on the
train descriptor collection and calls its nearest search methods to find the best matches.
So this matcher may be faster than the brute force matcher when matching
against a large train collection.
``FlannBasedMatcher``
does not support masking permissible
matches between descriptor sets, because
:func:`flann::Index`
does not
support this.
::
class FlannBasedMatcher : public DescriptorMatcher
{
public:
FlannBasedMatcher(
const Ptr<flann::IndexParams>& indexParams=new flann::KDTreeIndexParams(),
const Ptr<flann::SearchParams>& searchParams=new flann::SearchParams() );
virtual void add( const vector<Mat>& descriptors );
virtual void clear();
virtual void train();
virtual bool isMaskSupported() const;
virtual Ptr<DescriptorMatcher> clone( bool emptyTrainData=false ) const;
protected:
...
};
..
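A sketch of matching a query descriptor set against a train descriptor collection (the second group of match methods); ``trainDescCollection`` here is an assumed ``vector<Mat>`` of descriptor matrices of type ``CV_32F``:
::
FlannBasedMatcher matcher;
matcher.add(trainDescCollection);   // add train descriptors (vector<Mat>)
matcher.train();                    // build the flann index
vector<DMatch> matches;
matcher.match(queryDescriptors, matches);
..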

File diff suppressed because it is too large.


@@ -0,0 +1,677 @@
Common Interfaces of Generic Descriptor Matchers
================================================
.. highlight:: cpp
Matchers of keypoint descriptors in OpenCV have wrappers with a common interface that enables switching easily
between different algorithms solving the same problem. This section is devoted to matching descriptors
that cannot be represented as vectors in a multidimensional space.
``GenericDescriptorMatcher``
is a more generic interface for descriptors. It does not make any assumptions about descriptor representation.
Every descriptor with the
:func:`DescriptorExtractor`
interface has a wrapper with the
``GenericDescriptorMatcher``
interface (see
:func:`VectorDescriptorMatcher`
).
There are descriptors, such as the One-way descriptor and Ferns, that have the
``GenericDescriptorMatcher``
interface implemented but do not support
:func:`DescriptorExtractor`
.
.. index:: GenericDescriptorMatcher
.. _GenericDescriptorMatcher:
GenericDescriptorMatcher
------------------------
`id=0.973387347242 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher>`__
.. ctype:: GenericDescriptorMatcher
Abstract interface for extracting and matching keypoint descriptors.
There are
:func:`DescriptorExtractor`
and
:func:`DescriptorMatcher`
for these purposes too, but their interfaces are intended for descriptors
represented as vectors in a multidimensional space.
``GenericDescriptorMatcher``
is a more generic interface for descriptors.
As
:func:`DescriptorMatcher`
,
``GenericDescriptorMatcher``
has two groups
of match methods: for matching keypoints of one image with another image or
with an image set.
::
class GenericDescriptorMatcher
{
public:
GenericDescriptorMatcher();
virtual ~GenericDescriptorMatcher();
virtual void add( const vector<Mat>& images,
vector<vector<KeyPoint> >& keypoints );
const vector<Mat>& getTrainImages() const;
const vector<vector<KeyPoint> >& getTrainKeypoints() const;
virtual void clear();
virtual void train() = 0;
virtual bool isMaskSupported() = 0;
void classify( const Mat& queryImage,
vector<KeyPoint>& queryKeypoints,
const Mat& trainImage,
vector<KeyPoint>& trainKeypoints ) const;
void classify( const Mat& queryImage,
vector<KeyPoint>& queryKeypoints );
/*
* Group of methods to match keypoints from image pair.
*/
void match( const Mat& queryImage, vector<KeyPoint>& queryKeypoints,
const Mat& trainImage, vector<KeyPoint>& trainKeypoints,
vector<DMatch>& matches, const Mat& mask=Mat() ) const;
void knnMatch( const Mat& queryImage, vector<KeyPoint>& queryKeypoints,
const Mat& trainImage, vector<KeyPoint>& trainKeypoints,
vector<vector<DMatch> >& matches, int k,
const Mat& mask=Mat(), bool compactResult=false ) const;
void radiusMatch( const Mat& queryImage, vector<KeyPoint>& queryKeypoints,
const Mat& trainImage, vector<KeyPoint>& trainKeypoints,
vector<vector<DMatch> >& matches, float maxDistance,
const Mat& mask=Mat(), bool compactResult=false ) const;
/*
* Group of methods to match keypoints from one image to image set.
*/
void match( const Mat& queryImage, vector<KeyPoint>& queryKeypoints,
vector<DMatch>& matches, const vector<Mat>& masks=vector<Mat>() );
void knnMatch( const Mat& queryImage, vector<KeyPoint>& queryKeypoints,
vector<vector<DMatch> >& matches, int k,
const vector<Mat>& masks=vector<Mat>(), bool compactResult=false );
void radiusMatch( const Mat& queryImage, vector<KeyPoint>& queryKeypoints,
vector<vector<DMatch> >& matches, float maxDistance,
const vector<Mat>& masks=vector<Mat>(), bool compactResult=false );
virtual void read( const FileNode& );
virtual void write( FileStorage& ) const;
virtual Ptr<GenericDescriptorMatcher> clone( bool emptyTrainData=false ) const = 0;
protected:
...
};
..
.. index:: GenericDescriptorMatcher::add
cv::GenericDescriptorMatcher::add
---------------------------------
`id=0.507600777855 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3Aadd>`__
.. cfunction:: void GenericDescriptorMatcher::add( const vector<Mat>\& images, vector<vector<KeyPoint> >\& keypoints )
Adds images and the keypoints from them to the train collection (descriptors are supposed to be calculated here).
If the train collection is not empty, the new images and the keypoints from them will be added to the
existing data.
:param images: Image collection.
:param keypoints: Point collection. Assumes that ``keypoints[i]`` are keypoints
detected in an image ``images[i]`` .
.. index:: GenericDescriptorMatcher::getTrainImages
cv::GenericDescriptorMatcher::getTrainImages
--------------------------------------------
`id=0.520364236881 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3AgetTrainImages>`__
.. cfunction:: const vector<Mat>\& GenericDescriptorMatcher::getTrainImages() const
Returns train image collection.
.. index:: GenericDescriptorMatcher::getTrainKeypoints
cv::GenericDescriptorMatcher::getTrainKeypoints
-----------------------------------------------
`id=0.179197628979 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3AgetTrainKeypoints>`__
.. cfunction:: const vector<vector<KeyPoint> >\& GenericDescriptorMatcher::getTrainKeypoints() const
Returns train keypoints collection.
.. index:: GenericDescriptorMatcher::clear
cv::GenericDescriptorMatcher::clear
-----------------------------------
`id=0.163507435554 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3Aclear>`__
.. cfunction:: void GenericDescriptorMatcher::clear()
Clears the train collection (images and keypoints).
.. index:: GenericDescriptorMatcher::train
cv::GenericDescriptorMatcher::train
-----------------------------------
`id=0.270072381935 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3Atrain>`__
.. cfunction:: void GenericDescriptorMatcher::train()
Trains the object, e.g. a tree-based structure, to extract descriptors or
to optimize descriptor matching.
.. index:: GenericDescriptorMatcher::isMaskSupported
cv::GenericDescriptorMatcher::isMaskSupported
---------------------------------------------
`id=0.208711469863 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3AisMaskSupported>`__
.. cfunction:: bool GenericDescriptorMatcher::isMaskSupported()
Returns true if generic descriptor matcher supports masking permissible matches.
.. index:: GenericDescriptorMatcher::classify
cv::GenericDescriptorMatcher::classify
--------------------------------------
`id=0.550844968727 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3Aclassify>`__
:func:`GenericDescriptorMatcher::add`
.. cfunction:: void GenericDescriptorMatcher::classify( const Mat\& queryImage, vector<KeyPoint>\& queryKeypoints, const Mat\& trainImage, vector<KeyPoint>\& trainKeypoints ) const
Classifies query keypoints under the keypoints of one train image given as an input argument
(the first version of the method) or under the train image collection that was set using
:func:`GenericDescriptorMatcher::add` (the second version).
.. cfunction:: void GenericDescriptorMatcher::classify( const Mat\& queryImage, vector<KeyPoint>\& queryKeypoints )
:param queryImage: The query image.
:param queryKeypoints: Keypoints from the query image.
:param trainImage: The train image.
:param trainKeypoints: Keypoints from the train image.
.. index:: GenericDescriptorMatcher::match
cv::GenericDescriptorMatcher::match
-----------------------------------
`id=0.91509902003 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3Amatch>`__
:func:`GenericDescriptorMatcher::add`
:func:`DescriptorMatcher::match`
.. cfunction:: void GenericDescriptorMatcher::match( const Mat\& queryImage, vector<KeyPoint>\& queryKeypoints, const Mat\& trainImage, vector<KeyPoint>\& trainKeypoints, vector<DMatch>\& matches, const Mat\& mask=Mat() ) const
Find the best match for query keypoints to the training set. In the first version of the method,
one train image and the keypoints detected on it are the input arguments. In the second version,
query keypoints are matched to the training collection that was set using
:func:`GenericDescriptorMatcher::add`. As in :func:`DescriptorMatcher::match`, a mask can be set.
.. cfunction:: void GenericDescriptorMatcher::match( const Mat\& queryImage, vector<KeyPoint>\& queryKeypoints, vector<DMatch>\& matches, const vector<Mat>\& masks=vector<Mat>() )
:param queryImage: Query image.
:param queryKeypoints: Keypoints detected in ``queryImage`` .
:param trainImage: Train image. This will not be added to train image collection
stored in class object.
:param trainKeypoints: Keypoints detected in ``trainImage`` . They will not be added to train points collection
stored in class object.
:param matches: Matches. If a query descriptor (keypoint) is masked out in ``mask``,
no match will be added for this descriptor,
so the ``matches`` size may be less than the query keypoints count.
:param mask: Mask specifying permissible matches between input query and train keypoints.
:param masks: The set of masks. Each ``masks[i]`` specifies permissible matches between the input query keypoints
and the stored train keypoints from the i-th image.
.. index:: GenericDescriptorMatcher::knnMatch
cv::GenericDescriptorMatcher::knnMatch
--------------------------------------
`id=0.828361496735 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3AknnMatch>`__
:func:`GenericDescriptorMatcher::match`
:func:`DescriptorMatcher::knnMatch`
.. cfunction:: void GenericDescriptorMatcher::knnMatch( const Mat\& queryImage, vector<KeyPoint>\& queryKeypoints, const Mat\& trainImage, vector<KeyPoint>\& trainKeypoints, vector<vector<DMatch> >\& matches, int k, const Mat\& mask=Mat(), bool compactResult=false ) const
Find the k best matches for each keypoint from a query set with train keypoints.
The found k (or less, if not possible) matches are returned in the order of increasing distance.
See :func:`GenericDescriptorMatcher::match` and :func:`DescriptorMatcher::knnMatch` for details.
.. cfunction:: void GenericDescriptorMatcher::knnMatch( const Mat\& queryImage, vector<KeyPoint>\& queryKeypoints, vector<vector<DMatch> >\& matches, int k, const vector<Mat>\& masks=vector<Mat>(), bool compactResult=false )
.. index:: GenericDescriptorMatcher::radiusMatch
cv::GenericDescriptorMatcher::radiusMatch
-----------------------------------------
`id=0.732845229707 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3AradiusMatch>`__
:func:`GenericDescriptorMatcher::match`
:func:`DescriptorMatcher::radiusMatch`
.. cfunction:: void GenericDescriptorMatcher::radiusMatch( const Mat\& queryImage, vector<KeyPoint>\& queryKeypoints, const Mat\& trainImage, vector<KeyPoint>\& trainKeypoints, vector<vector<DMatch> >\& matches, float maxDistance, const Mat\& mask=Mat(), bool compactResult=false ) const
Find the best matches for each query keypoint which have a distance less than the given threshold.
The found matches are returned in the order of increasing distance. See
:func:`GenericDescriptorMatcher::match` and :func:`DescriptorMatcher::radiusMatch` for details.
.. cfunction:: void GenericDescriptorMatcher::radiusMatch( const Mat\& queryImage, vector<KeyPoint>\& queryKeypoints, vector<vector<DMatch> >\& matches, float maxDistance, const vector<Mat>\& masks=vector<Mat>(), bool compactResult=false )
.. index:: GenericDescriptorMatcher::read
cv::GenericDescriptorMatcher::read
----------------------------------
`id=0.937930388921 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3Aread>`__
.. cfunction:: void GenericDescriptorMatcher::read( const FileNode\& fn )
Reads matcher object from a file node.
.. index:: GenericDescriptorMatcher::write
cv::GenericDescriptorMatcher::write
-----------------------------------
`id=0.509497773169 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3Awrite>`__
.. cfunction:: void GenericDescriptorMatcher::write( FileStorage\& fs ) const
Writes a matcher object to a file storage.
.. index:: GenericDescriptorMatcher::clone
cv::GenericDescriptorMatcher::clone
-----------------------------------
`id=0.864304581549 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/GenericDescriptorMatcher%3A%3Aclone>`__
.. cfunction:: Ptr<GenericDescriptorMatcher> GenericDescriptorMatcher::clone( bool emptyTrainData ) const
Clone the matcher.
:param emptyTrainData: If ``emptyTrainData`` is false, the method creates a deep copy of the object, i.e. copies
both parameters and train data. If ``emptyTrainData`` is true, the method creates an object copy with the current parameters
but with empty train data.
.. index:: OneWayDescriptorMatcher
.. _OneWayDescriptorMatcher:
OneWayDescriptorMatcher
-----------------------
`id=0.295296902287 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/OneWayDescriptorMatcher>`__
.. ctype:: OneWayDescriptorMatcher
Wrapping class for computing, matching and classification of descriptors using
:func:`OneWayDescriptorBase`
class.
::
class OneWayDescriptorMatcher : public GenericDescriptorMatcher
{
public:
class Params
{
public:
static const int POSE_COUNT = 500;
static const int PATCH_WIDTH = 24;
static const int PATCH_HEIGHT = 24;
static float GET_MIN_SCALE() { return 0.7f; }
static float GET_MAX_SCALE() { return 1.5f; }
static float GET_STEP_SCALE() { return 1.2f; }
Params( int poseCount = POSE_COUNT,
Size patchSize = Size(PATCH_WIDTH, PATCH_HEIGHT),
string pcaFilename = string(),
string trainPath = string(), string trainImagesList = string(),
float minScale = GET_MIN_SCALE(), float maxScale = GET_MAX_SCALE(),
float stepScale = GET_STEP_SCALE() );
int poseCount;
Size patchSize;
string pcaFilename;
string trainPath;
string trainImagesList;
float minScale, maxScale, stepScale;
};
OneWayDescriptorMatcher( const Params& params=Params() );
virtual ~OneWayDescriptorMatcher();
void initialize( const Params& params, const Ptr<OneWayDescriptorBase>& base=Ptr<OneWayDescriptorBase>() );
// Clears keypoints storing in collection and OneWayDescriptorBase
virtual void clear();
virtual void train();
virtual bool isMaskSupported();
virtual void read( const FileNode &fn );
virtual void write( FileStorage& fs ) const;
virtual Ptr<GenericDescriptorMatcher> clone( bool emptyTrainData=false ) const;
protected:
...
};
..
.. index:: FernDescriptorMatcher
.. _FernDescriptorMatcher:
FernDescriptorMatcher
---------------------
`id=0.410971973421 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/FernDescriptorMatcher>`__
.. ctype:: FernDescriptorMatcher
Wrapping class for computing, matching and classification of descriptors using
:func:`FernClassifier`
class.
::
class FernDescriptorMatcher : public GenericDescriptorMatcher
{
public:
class Params
{
public:
Params( int nclasses=0,
int patchSize=FernClassifier::PATCH_SIZE,
int signatureSize=FernClassifier::DEFAULT_SIGNATURE_SIZE,
int nstructs=FernClassifier::DEFAULT_STRUCTS,
int structSize=FernClassifier::DEFAULT_STRUCT_SIZE,
int nviews=FernClassifier::DEFAULT_VIEWS,
int compressionMethod=FernClassifier::COMPRESSION_NONE,
const PatchGenerator& patchGenerator=PatchGenerator() );
Params( const string& filename );
int nclasses;
int patchSize;
int signatureSize;
int nstructs;
int structSize;
int nviews;
int compressionMethod;
PatchGenerator patchGenerator;
string filename;
};
FernDescriptorMatcher( const Params& params=Params() );
virtual ~FernDescriptorMatcher();
virtual void clear();
virtual void train();
virtual bool isMaskSupported();
virtual void read( const FileNode &fn );
virtual void write( FileStorage& fs ) const;
virtual Ptr<GenericDescriptorMatcher> clone( bool emptyTrainData=false ) const;
protected:
...
};
..
.. index:: VectorDescriptorMatcher
.. _VectorDescriptorMatcher:
VectorDescriptorMatcher
-----------------------
`id=0.89575693039 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/VectorDescriptorMatcher>`__
.. ctype:: VectorDescriptorMatcher
Class used for matching descriptors that can be described as vectors in a finite-dimensional space.
::
class CV_EXPORTS VectorDescriptorMatcher : public GenericDescriptorMatcher
{
public:
VectorDescriptorMatcher( const Ptr<DescriptorExtractor>& extractor, const Ptr<DescriptorMatcher>& matcher );
virtual ~VectorDescriptorMatcher();
virtual void add( const vector<Mat>& imgCollection,
vector<vector<KeyPoint> >& pointCollection );
virtual void clear();
virtual void train();
virtual bool isMaskSupported();
virtual void read( const FileNode& fn );
virtual void write( FileStorage& fs ) const;
virtual Ptr<GenericDescriptorMatcher> clone( bool emptyTrainData=false ) const;
protected:
...
};
..
Example of creating a ``VectorDescriptorMatcher``:
::
VectorDescriptorMatcher matcher( new SurfDescriptorExtractor,
new BruteForceMatcher<L2<float> > );
..
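A usage sketch of such a matcher (keypoints are assumed to be detected beforehand, e.g. with a ``SurfFeatureDetector``):
::
vector<DMatch> matches;
matcher.match(queryImage, queryKeypoints, trainImage, trainKeypoints, matches);
..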


@@ -0,0 +1,140 @@
Drawing Function of Keypoints and Matches
=========================================
.. highlight:: cpp
.. index:: drawMatches
cv::drawMatches
---------------
`id=0.919261687295 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/drawMatches>`__
.. cfunction:: void drawMatches( const Mat\& img1, const vector<KeyPoint>\& keypoints1, const Mat\& img2, const vector<KeyPoint>\& keypoints2, const vector<DMatch>\& matches1to2, Mat\& outImg, const Scalar\& matchColor=Scalar::all(-1), const Scalar\& singlePointColor=Scalar::all(-1), const vector<char>\& matchesMask=vector<char>(), int flags=DrawMatchesFlags::DEFAULT )
This function draws matches of keypoints from two images on the output image.
A match is a line connecting two keypoints (circles).
.. cfunction:: void drawMatches( const Mat\& img1, const vector<KeyPoint>\& keypoints1, const Mat\& img2, const vector<KeyPoint>\& keypoints2, const vector<vector<DMatch> >\& matches1to2, Mat\& outImg, const Scalar\& matchColor=Scalar::all(-1), const Scalar\& singlePointColor=Scalar::all(-1), const vector<vector<char>>\& matchesMask= vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT )
:param img1: First source image.
:param keypoints1: Keypoints from first source image.
:param img2: Second source image.
:param keypoints2: Keypoints from second source image.
:param matches1to2: Matches from the first image to the second one, i.e. ``keypoints1[i]``
has a corresponding point ``keypoints2[matches1to2[i]]`` .
:param outImg: Output image. Its content depends on the ``flags`` value defining
what is drawn in the output image. See possible ``flags`` bit values below.
:param matchColor: Color of matches (lines and connected keypoints).
If ``matchColor==Scalar::all(-1)``, the color will be generated randomly.
:param singlePointColor: Color of single keypoints (circles), i.e. keypoints that do not have matches.
If ``singlePointColor==Scalar::all(-1)``, the color will be generated randomly.
:param matchesMask: Mask determining which matches will be drawn. If the mask is empty, all matches will be drawn.
:param flags: Each bit of ``flags`` sets some feature of drawing.
Possible ``flags`` bit values are defined by ``DrawMatchesFlags`` , see below.
::
struct DrawMatchesFlags
{
enum{ DEFAULT = 0, // Output image matrix will be created (Mat::create),
// i.e. existing memory of output image may be reused.
// Two source image, matches and single keypoints
// will be drawn.
// For each keypoint only the center point will be
// drawn (without the circle around keypoint with
// keypoint size and orientation).
DRAW_OVER_OUTIMG = 1, // Output image matrix will not be
// created (Mat::create). Matches will be drawn
// on existing content of output image.
NOT_DRAW_SINGLE_POINTS = 2, // Single keypoints will not be drawn.
DRAW_RICH_KEYPOINTS = 4 // For each keypoint the circle around
// keypoint with keypoint size and orientation will
// be drawn.
};
};
..
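A usage sketch combining matching and drawing (``img1``, ``keypoints1``, ``img2``, ``keypoints2`` and ``matches1to2`` are assumed to be prepared as in the feature detection and matching sections; ``imshow``/``waitKey`` are from highgui):
::
Mat outImg;
drawMatches(img1, keypoints1, img2, keypoints2, matches1to2, outImg,
            Scalar::all(-1), Scalar::all(-1), vector<char>(),
            DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
imshow("matches", outImg);
waitKey();
..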
.. index:: drawKeypoints
cv::drawKeypoints
-----------------
`id=0.694314481427 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/drawKeypoints>`__
.. cfunction:: void drawKeypoints( const Mat\& image, const vector<KeyPoint>\& keypoints, Mat\& outImg, const Scalar\& color=Scalar::all(-1), int flags=DrawMatchesFlags::DEFAULT )
Draw keypoints.
:param image: Source image.
:param keypoints: Keypoints from source image.
:param outImg: Output image. Its content depends on the ``flags`` value defining
what is drawn in the output image. See possible ``flags`` bit values below.
:param color: Color of keypoints.
:param flags: Each bit of ``flags`` sets some feature of drawing.
Possible ``flags`` bit values are defined by ``DrawMatchesFlags`` ,
see above in :func:`drawMatches` .


@@ -0,0 +1,975 @@
Feature detection and description
=================================
.. highlight:: cpp
.. index:: FAST
cv::FAST
--------
`id=0.180338558353 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/FAST>`__
.. cfunction:: void FAST( const Mat\& image, vector<KeyPoint>\& keypoints, int threshold, bool nonmaxSupression=true )
Detects corners using the FAST algorithm by E. Rosten ("Machine learning for high-speed corner detection", 2006).
:param image: The image. Keypoints (corners) will be detected on this.
:param keypoints: Keypoints detected on the image.
:param threshold: Threshold on the difference between the intensity of the center pixel and
the pixels on a circle around this pixel. See the description of the algorithm.
:param nonmaxSupression: If it is true, non-maximum suppression will be applied to the detected corners (keypoints).
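A minimal sketch (the threshold value and file name are illustrative):
::
Mat img = imread("image.png", 0);       // FAST works on a single-channel image
vector<KeyPoint> corners;
FAST(img, corners, 40, true);           // threshold=40, non-maximum suppression on
..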
.. index:: MSER
.. _MSER:
MSER
----
`id=0.0333368188128 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/MSER>`__
.. ctype:: MSER
Maximally-Stable Extremal Region Extractor
::
class MSER : public CvMSERParams
{
public:
// default constructor
MSER();
// constructor that initializes all the algorithm parameters
MSER( int _delta, int _min_area, int _max_area,
float _max_variation, float _min_diversity,
int _max_evolution, double _area_threshold,
double _min_margin, int _edge_blur_size );
// runs the extractor on the specified image; returns the MSERs,
// each encoded as a contour (vector<Point>, see findContours)
// the optional mask marks the area where MSERs are searched for
void operator()( const Mat& image, vector<vector<Point> >& msers, const Mat& mask ) const;
};
..
The class encapsulates all the parameters of the MSER extraction algorithm (see
http://en.wikipedia.org/wiki/Maximally_stable_extremal_regions ).
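A usage sketch extracting MSERs from an image with default parameters (``img`` is an assumed input image):
::
MSER mser;                               // default parameters
vector<vector<Point> > regions;
mser(img, regions, Mat());               // empty mask: search the whole image
..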
.. index:: StarDetector
.. _StarDetector:
StarDetector
------------
`id=0.378812518152 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/StarDetector>`__
.. ctype:: StarDetector
Implements the Star keypoint detector.
::
class StarDetector : CvStarDetectorParams
{
public:
// default constructor
StarDetector();
// the full constructor initialized all the algorithm parameters:
// maxSize - maximum size of the features. The following
// values of the parameter are supported:
// 4, 6, 8, 11, 12, 16, 22, 23, 32, 45, 46, 64, 90, 128
// responseThreshold - threshold for the approximated laplacian,
// used to eliminate weak features. The larger it is,
// the less features will be retrieved
// lineThresholdProjected - another threshold for the laplacian to
// eliminate edges
// lineThresholdBinarized - another threshold for the feature
// size to eliminate edges.
// The larger the 2 threshold, the more points you get.
StarDetector(int maxSize, int responseThreshold,
int lineThresholdProjected,
int lineThresholdBinarized,
int suppressNonmaxSize);
// finds keypoints in an image
void operator()(const Mat& image, vector<KeyPoint>& keypoints) const;
};
..
The class implements a modified version of the CenSurE keypoint detector described in
Agrawal08.
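A usage sketch with default parameters (``img`` is an assumed input image):
::
StarDetector star;                       // default parameters
vector<KeyPoint> keypoints;
star(img, keypoints);
..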
.. index:: SIFT
.. _SIFT:
SIFT
----
`id=0.385373212311 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/SIFT>`__
.. ctype:: SIFT
Class for extracting keypoints and computing descriptors using the approach named Scale Invariant Feature Transform (SIFT).
::
class CV_EXPORTS SIFT
{
public:
struct CommonParams
{
static const int DEFAULT_NOCTAVES = 4;
static const int DEFAULT_NOCTAVE_LAYERS = 3;
static const int DEFAULT_FIRST_OCTAVE = -1;
enum{ FIRST_ANGLE = 0, AVERAGE_ANGLE = 1 };
CommonParams();
CommonParams( int _nOctaves, int _nOctaveLayers, int _firstOctave,
int _angleMode );
int nOctaves, nOctaveLayers, firstOctave;
int angleMode;
};
struct DetectorParams
{
static double GET_DEFAULT_THRESHOLD()
{ return 0.04 / SIFT::CommonParams::DEFAULT_NOCTAVE_LAYERS / 2.0; }
static double GET_DEFAULT_EDGE_THRESHOLD() { return 10.0; }
DetectorParams();
DetectorParams( double _threshold, double _edgeThreshold );
double threshold, edgeThreshold;
};
struct DescriptorParams
{
static double GET_DEFAULT_MAGNIFICATION() { return 3.0; }
static const bool DEFAULT_IS_NORMALIZE = true;
static const int DESCRIPTOR_SIZE = 128;
DescriptorParams();
DescriptorParams( double _magnification, bool _isNormalize,
bool _recalculateAngles );
double magnification;
bool isNormalize;
bool recalculateAngles;
};
SIFT();
//! sift-detector constructor
SIFT( double _threshold, double _edgeThreshold,
int _nOctaves=CommonParams::DEFAULT_NOCTAVES,
int _nOctaveLayers=CommonParams::DEFAULT_NOCTAVE_LAYERS,
int _firstOctave=CommonParams::DEFAULT_FIRST_OCTAVE,
int _angleMode=CommonParams::FIRST_ANGLE );
//! sift-descriptor constructor
SIFT( double _magnification, bool _isNormalize=true,
bool _recalculateAngles = true,
int _nOctaves=CommonParams::DEFAULT_NOCTAVES,
int _nOctaveLayers=CommonParams::DEFAULT_NOCTAVE_LAYERS,
int _firstOctave=CommonParams::DEFAULT_FIRST_OCTAVE,
int _angleMode=CommonParams::FIRST_ANGLE );
SIFT( const CommonParams& _commParams,
const DetectorParams& _detectorParams = DetectorParams(),
const DescriptorParams& _descriptorParams = DescriptorParams() );
//! returns the descriptor size in floats (128)
int descriptorSize() const { return DescriptorParams::DESCRIPTOR_SIZE; }
//! finds the keypoints using SIFT algorithm
void operator()(const Mat& img, const Mat& mask,
vector<KeyPoint>& keypoints) const;
//! finds the keypoints and computes descriptors for them using SIFT algorithm.
//! Optionally it can compute descriptors for the user-provided keypoints
void operator()(const Mat& img, const Mat& mask,
vector<KeyPoint>& keypoints,
Mat& descriptors,
bool useProvidedKeypoints=false) const;
CommonParams getCommonParams () const { return commParams; }
DetectorParams getDetectorParams () const { return detectorParams; }
DescriptorParams getDescriptorParams () const { return descriptorParams; }
protected:
...
};
..
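A usage sketch detecting keypoints and computing SIFT descriptors in one call (``img`` is an assumed input image; no mask is used):
::
SIFT sift;                               // default parameters
vector<KeyPoint> keypoints;
Mat descriptors;                         // 128-dimensional float descriptors, one row per keypoint
sift(img, Mat(), keypoints, descriptors);
..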
.. index:: SURF
.. _SURF:
SURF
----
`id=0.43149154692 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/SURF>`__
.. ctype:: SURF
Class for extracting Speeded Up Robust Features from an image.
::
class SURF : public CvSURFParams
{
public:
// default constructor
SURF();
// constructor that initializes all the algorithm parameters
SURF(double _hessianThreshold, int _nOctaves=4,
int _nOctaveLayers=2, bool _extended=false);
// returns the number of elements in each descriptor (64 or 128)
int descriptorSize() const;
// detects keypoints using fast multi-scale Hessian detector
void operator()(const Mat& img, const Mat& mask,
vector<KeyPoint>& keypoints) const;
// detects keypoints and computes the SURF descriptors for them;
// output vector "descriptors" stores elements of descriptors and has size
// equal descriptorSize()*keypoints.size() as each descriptor is
// descriptorSize() elements of this vector.
void operator()(const Mat& img, const Mat& mask,
vector<KeyPoint>& keypoints,
vector<float>& descriptors,
bool useProvidedKeypoints=false) const;
};
..
The class
``SURF``
implements the Speeded Up Robust Features descriptor
Bay06
.
There is a fast multi-scale Hessian keypoint detector that can be used to find the keypoints
(which is the default option), but the descriptors can also be computed for user-specified keypoints.
The function can be used for object tracking and localization, image stitching, etc. See the
``find_obj.cpp``
demo in the OpenCV samples directory.
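A usage sketch (``img`` is an assumed input image; the Hessian threshold value is illustrative):
::
SURF surf(500.);                         // hessianThreshold = 500
vector<KeyPoint> keypoints;
vector<float> descriptors;               // descriptorSize() floats per keypoint
surf(img, Mat(), keypoints, descriptors);
..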
.. index:: RandomizedTree
.. _RandomizedTree:
RandomizedTree
--------------
`id=0.539311466248 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RandomizedTree>`__
.. ctype:: RandomizedTree
The class contains the base structure for
``RTreeClassifier``
::
class CV_EXPORTS RandomizedTree
{
public:
friend class RTreeClassifier;
RandomizedTree();
~RandomizedTree();
void train(std::vector<BaseKeypoint> const& base_set,
cv::RNG &rng, int depth, int views,
size_t reduced_num_dim, int num_quant_bits);
void train(std::vector<BaseKeypoint> const& base_set,
cv::RNG &rng, PatchGenerator &make_patch, int depth,
int views, size_t reduced_num_dim, int num_quant_bits);
// following two funcs are EXPERIMENTAL
//(do not use unless you know exactly what you do)
static void quantizeVector(float *vec, int dim, int N, float bnds[2],
int clamp_mode=0);
static void quantizeVector(float *src, int dim, int N, float bnds[2],
uchar *dst);
// patch_data must be a 32x32 array (no row padding)
float* getPosterior(uchar* patch_data);
const float* getPosterior(uchar* patch_data) const;
uchar* getPosterior2(uchar* patch_data);
void read(const char* file_name, int num_quant_bits);
void read(std::istream &is, int num_quant_bits);
void write(const char* file_name) const;
void write(std::ostream &os) const;
int classes() { return classes_; }
int depth() { return depth_; }
void discardFloatPosteriors() { freePosteriors(1); }
inline void applyQuantization(int num_quant_bits)
{ makePosteriors2(num_quant_bits); }
private:
int classes_;
int depth_;
int num_leaves_;
std::vector<RTreeNode> nodes_;
float **posteriors_; // 16-bytes aligned posteriors
uchar **posteriors2_; // 16-bytes aligned posteriors
std::vector<int> leaf_counts_;
void createNodes(int num_nodes, cv::RNG &rng);
void allocPosteriorsAligned(int num_leaves, int num_classes);
void freePosteriors(int which);
// which: 1=posteriors_, 2=posteriors2_, 3=both
void init(int classes, int depth, cv::RNG &rng);
void addExample(int class_id, uchar* patch_data);
void finalize(size_t reduced_num_dim, int num_quant_bits);
int getIndex(uchar* patch_data) const;
inline float* getPosteriorByIndex(int index);
inline uchar* getPosteriorByIndex2(int index);
inline const float* getPosteriorByIndex(int index) const;
void convertPosteriorsToChar();
void makePosteriors2(int num_quant_bits);
void compressLeaves(size_t reduced_num_dim);
void estimateQuantPercForPosteriors(float perc[2]);
};
..
.. index:: RandomizedTree::train
cv::RandomizedTree::train
-------------------------
`id=0.360469298211 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RandomizedTree%3A%3Atrain>`__
.. cfunction:: void train(std::vector<BaseKeypoint> const\& base_set, cv::RNG \&rng, int depth, int views, size_t reduced_num_dim, int num_quant_bits)
Trains a randomized tree using an input set of keypoints.
.. cfunction:: void train(std::vector<BaseKeypoint> const\& base_set, cv::RNG \&rng, PatchGenerator \&make_patch, int depth, int views, size_t reduced_num_dim, int num_quant_bits)
:param base_set: Vector of the ``BaseKeypoint`` type. Contains the keypoints from the image that are used for training.
:param rng: Random number generator used for training.
:param make_patch: Patch generator used for training.
:param depth: Maximum tree depth.
:param reduced_num_dim: Number of dimensions used in the compressed signature.
:param num_quant_bits: Number of bits used for quantization.
.. index:: RandomizedTree::read
cv::RandomizedTree::read
------------------------
`id=0.663893576705 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RandomizedTree%3A%3Aread>`__
.. cfunction:: void read(const char* file_name, int num_quant_bits)
Reads a pre-saved randomized tree from a file or stream.
.. cfunction:: void read(std::istream \&is, int num_quant_bits)
:param file_name: Filename of the file containing randomized tree data.
:param is: Input stream associated with the file containing randomized tree data.
:param num_quant_bits: Number of bits used for quantization.
.. index:: RandomizedTree::write
cv::RandomizedTree::write
-------------------------
`id=0.640726433619 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RandomizedTree%3A%3Awrite>`__
.. cfunction:: void write(const char* file_name) const
Writes current randomized tree to a file or stream
.. cfunction:: void write(std::ostream \&os) const
:param file_name: Filename of the file where the randomized tree data will be stored.
:param os: Output stream associated with the file where the randomized tree data will be stored.
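For example, a trained tree can be stored to disk and restored with the same quantization (a sketch; the file name and the number of quantization bits are assumptions):
::
    tree.write( "randomized_tree.bin" );
    cv::RandomizedTree restored;
    restored.read( "randomized_tree.bin", 4 /* num_quant_bits */ );
..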
.. index:: RandomizedTree::applyQuantization
cv::RandomizedTree::applyQuantization
-------------------------------------
`id=0.113364904421 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RandomizedTree%3A%3AapplyQuantization>`__
.. cfunction:: void applyQuantization(int num_quant_bits)
Applies quantization to the current randomized tree.
:param num_quant_bits: Number of bits used for quantization.
.. index:: RTreeNode
.. _RTreeNode:
RTreeNode
---------
`id=0.718763052087 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RTreeNode>`__
.. ctype:: RTreeNode
The class contains the base node structure of ``RandomizedTree``
::
struct RTreeNode
{
short offset1, offset2;
RTreeNode() {}
RTreeNode(uchar x1, uchar y1, uchar x2, uchar y2)
: offset1(y1*PATCH_SIZE + x1),
offset2(y2*PATCH_SIZE + x2)
{}
//! Left child on 0, right child on 1
inline bool operator() (uchar* patch_data) const
{
return patch_data[offset1] > patch_data[offset2];
}
};
..
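Each node stores two pixel offsets inside a 32x32 patch, and its ``operator()`` compares the corresponding intensities to choose the left or right child. The sketch below illustrates how such tests drive the descent to a leaf; it is an illustration only, not the library implementation.
::
    // Illustrative descent over a complete binary tree of the given depth;
    // patch_data points to a 32x32 patch without row padding.
    int descendToLeaf( const std::vector<cv::RTreeNode>& nodes,
                       uchar* patch_data, int depth )
    {
        int index = 0;
        for( int d = 0; d < depth; d++ )
            index = 2*index + 1 + (nodes[index](patch_data) ? 1 : 0);
        return index - (int)nodes.size(); // leaf index past the internal nodes
    }
..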
.. index:: RTreeClassifier
.. _RTreeClassifier:
RTreeClassifier
---------------
`id=0.477872539921 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RTreeClassifier>`__
.. ctype:: RTreeClassifier
The ``RTreeClassifier`` class represents the Calonder descriptor, which was originally introduced by Michael Calonder.
::
class CV_EXPORTS RTreeClassifier
{
public:
static const int DEFAULT_TREES = 48;
static const size_t DEFAULT_NUM_QUANT_BITS = 4;
RTreeClassifier();
void train(std::vector<BaseKeypoint> const& base_set,
cv::RNG &rng,
int num_trees = RTreeClassifier::DEFAULT_TREES,
int depth = DEFAULT_DEPTH,
int views = DEFAULT_VIEWS,
size_t reduced_num_dim = DEFAULT_REDUCED_NUM_DIM,
int num_quant_bits = DEFAULT_NUM_QUANT_BITS,
bool print_status = true);
void train(std::vector<BaseKeypoint> const& base_set,
cv::RNG &rng,
PatchGenerator &make_patch,
int num_trees = RTreeClassifier::DEFAULT_TREES,
int depth = DEFAULT_DEPTH,
int views = DEFAULT_VIEWS,
size_t reduced_num_dim = DEFAULT_REDUCED_NUM_DIM,
int num_quant_bits = DEFAULT_NUM_QUANT_BITS,
bool print_status = true);
// sig must point to a memory block of at least
//classes()*sizeof(float|uchar) bytes
void getSignature(IplImage *patch, uchar *sig);
void getSignature(IplImage *patch, float *sig);
void getSparseSignature(IplImage *patch, float *sig,
float thresh);
static int countNonZeroElements(float *vec, int n, double tol=1e-10);
static inline void safeSignatureAlloc(uchar **sig, int num_sig=1,
int sig_len=176);
static inline uchar* safeSignatureAlloc(int num_sig=1,
int sig_len=176);
inline int classes() { return classes_; }
inline int original_num_classes()
{ return original_num_classes_; }
void setQuantization(int num_quant_bits);
void discardFloatPosteriors();
void read(const char* file_name);
void read(std::istream &is);
void write(const char* file_name) const;
void write(std::ostream &os) const;
std::vector<RandomizedTree> trees_;
private:
int classes_;
int num_quant_bits_;
uchar **posteriors_;
ushort *ptemp_;
int original_num_classes_;
bool keep_floats_;
};
..
.. index:: RTreeClassifier::train
cv::RTreeClassifier::train
--------------------------
`id=0.173927228061 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RTreeClassifier%3A%3Atrain>`__
.. cfunction:: void train(std::vector<BaseKeypoint> const\& base_set, cv::RNG \&rng, int num_trees = RTreeClassifier::DEFAULT_TREES, int depth = DEFAULT_DEPTH, int views = DEFAULT_VIEWS, size_t reduced_num_dim = DEFAULT_REDUCED_NUM_DIM, int num_quant_bits = DEFAULT_NUM_QUANT_BITS, bool print_status = true)
Trains a randomized tree classifier using an input set of keypoints.
.. cfunction:: void train(std::vector<BaseKeypoint> const\& base_set, cv::RNG \&rng, PatchGenerator \&make_patch, int num_trees = RTreeClassifier::DEFAULT_TREES, int depth = DEFAULT_DEPTH, int views = DEFAULT_VIEWS, size_t reduced_num_dim = DEFAULT_REDUCED_NUM_DIM, int num_quant_bits = DEFAULT_NUM_QUANT_BITS, bool print_status = true)
:param base_set: Vector of ``BaseKeypoint`` type. It contains the keypoints from the image used for training.
:param rng: Random-number generator used for training.
:param make_patch: Patch generator used for training.
:param num_trees: Number of randomized trees used in the RTreeClassifier.
:param depth: Maximum tree depth.
:param reduced_num_dim: Number of dimensions used in the compressed signature.
:param num_quant_bits: Number of bits used for quantization.
:param print_status: Print the current status of training on the console.
.. index:: RTreeClassifier::getSignature
cv::RTreeClassifier::getSignature
---------------------------------
`id=0.90043980708 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RTreeClassifier%3A%3AgetSignature>`__
.. cfunction:: void getSignature(IplImage *patch, uchar *sig)
Returns a signature for an image patch.
.. cfunction:: void getSignature(IplImage *patch, float *sig)
:param patch: Image patch to compute the signature for.
:param sig: Output signature (array dimension is ``reduced_num_dim``).
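A short usage sketch (``classifier`` is assumed to be a trained ``RTreeClassifier`` and ``patch`` a 32x32 ``IplImage*``):
::
    std::vector<float> sig( classifier.classes() );
    classifier.getSignature( patch, &sig[0] );
    // sig now holds one response per class (train keypoint)
..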
.. index:: RTreeClassifier::getSparseSignature
cv::RTreeClassifier::getSparseSignature
---------------------------------------
`id=0.692099737961 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RTreeClassifier%3A%3AgetSparseSignature>`__
.. cfunction:: void getSparseSignature(IplImage *patch, float *sig, float thresh)
The function is similar to ``getSignature`` but uses a threshold to remove all signature elements below that threshold, so that the signature is compressed.
:param patch: Image patch to compute the signature for.
:param sig: Output signature (array dimension is ``reduced_num_dim``).
:param thresh: The threshold used for compressing the signature.
.. index:: RTreeClassifier::countNonZeroElements
cv::RTreeClassifier::countNonZeroElements
-----------------------------------------
`id=0.553226961988 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RTreeClassifier%3A%3AcountNonZeroElements>`__
.. cfunction:: static int countNonZeroElements(float *vec, int n, double tol=1e-10)
The function returns the number of non-zero elements in the input array.
:param vec: Input vector containing float elements.
:param n: Input vector size.
:param tol: The threshold used for counting elements. All elements smaller than ``tol`` are considered zero.
.. index:: RTreeClassifier::read
cv::RTreeClassifier::read
-------------------------
`id=0.648907224792 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RTreeClassifier%3A%3Aread>`__
.. cfunction:: void read(const char* file_name)
Reads a pre-saved ``RTreeClassifier`` from a file or stream.
.. cfunction:: void read(std::istream \&is)
:param file_name: Filename of the file containing randomized tree data.
:param is: Input stream associated with the file containing randomized tree data.
.. index:: RTreeClassifier::write
cv::RTreeClassifier::write
--------------------------
`id=0.340545032412 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RTreeClassifier%3A%3Awrite>`__
.. cfunction:: void write(const char* file_name) const
Writes the current ``RTreeClassifier`` to a file or stream.
.. cfunction:: void write(std::ostream \&os) const
:param file_name: Filename of the file where the randomized tree data will be stored.
:param os: Output stream associated with the file where the randomized tree data will be stored.
.. index:: RTreeClassifier::setQuantization
cv::RTreeClassifier::setQuantization
------------------------------------
`id=0.788175788924 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/RTreeClassifier%3A%3AsetQuantization>`__
.. cfunction:: void setQuantization(int num_quant_bits)
Applies quantization with the given number of bits to all the trees in the classifier.
:param num_quant_bits: Number of bits used for quantization.
Below is an example of using ``RTreeClassifier`` for feature matching. There are test and train images, and features are extracted from both with SURF. The output is the
:math:`best\_corr` and :math:`best\_corr\_idx` arrays, which keep the best probabilities and the corresponding train feature indices for every feature of the test image.
::
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq *objectKeypoints = 0, *objectDescriptors = 0;
CvSeq *imageKeypoints = 0, *imageDescriptors = 0;
CvSURFParams params = cvSURFParams(500, 1);
cvExtractSURF( test_image, 0, &imageKeypoints, &imageDescriptors,
storage, params );
cvExtractSURF( train_image, 0, &objectKeypoints, &objectDescriptors,
storage, params );
cv::RTreeClassifier detector;
int patch_width = cv::PATCH_SIZE;
int patch_height = cv::PATCH_SIZE;
vector<cv::BaseKeypoint> base_set;
int i=0;
CvSURFPoint* point;
for (i=0;i<(n_points > 0 ? n_points : objectKeypoints->total);i++)
{
point=(CvSURFPoint*)cvGetSeqElem(objectKeypoints,i);
base_set.push_back(
cv::BaseKeypoint(point->pt.x,point->pt.y,train_image));
}
//Detector training
cv::RNG rng( cvGetTickCount() );
cv::PatchGenerator gen(0,255,2,false,0.7,1.3,-CV_PI/3,CV_PI/3,
-CV_PI/3,CV_PI/3);
printf("RTree Classifier training...n");
detector.train(base_set,rng,gen,24,cv::DEFAULT_DEPTH,2000,
(int)base_set.size(), detector.DEFAULT_NUM_QUANT_BITS);
printf("Donen");
float* signature = new float[detector.original_num_classes()];
float* best_corr;
int* best_corr_idx;
if (imageKeypoints->total > 0)
{
best_corr = new float[imageKeypoints->total];
best_corr_idx = new int[imageKeypoints->total];
}
for(i=0; i < imageKeypoints->total; i++)
{
point=(CvSURFPoint*)cvGetSeqElem(imageKeypoints,i);
int part_idx = -1;
float prob = 0.0f;
CvRect roi = cvRect((int)(point->pt.x) - patch_width/2,
(int)(point->pt.y) - patch_height/2,
patch_width, patch_height);
cvSetImageROI(test_image, roi);
roi = cvGetImageROI(test_image);
if(roi.width != patch_width || roi.height != patch_height)
{
best_corr_idx[i] = part_idx;
best_corr[i] = prob;
}
else
{
cvSetImageROI(test_image, roi);
IplImage* roi_image =
cvCreateImage(cvSize(roi.width, roi.height),
test_image->depth, test_image->nChannels);
cvCopy(test_image,roi_image);
detector.getSignature(roi_image, signature);
for (int j = 0; j< detector.original_num_classes();j++)
{
if (prob < signature[j])
{
part_idx = j;
prob = signature[j];
}
}
best_corr_idx[i] = part_idx;
best_corr[i] = prob;
if (roi_image)
cvReleaseImage(&roi_image);
}
cvResetImageROI(test_image);
}
..
View File
@@ -0,0 +1,14 @@
*********************
2D Features Framework
*********************
.. toctree::
:maxdepth: 2
feature_detection_and_description
common_interfaces_of_feature_detectors
common_interfaces_of_descriptor_extractors
common_interfaces_of_descriptor_matchers
common_interfaces_of_generic_descriptor_matchers
drawing_function_of_keypoints_and_matches
object_categorization
View File
@@ -0,0 +1,408 @@
Object Categorization
=====================
.. highlight:: cpp
This section describes some approaches that are based on local 2D features and used for object categorization.
.. index:: BOWTrainer
.. _BOWTrainer:
BOWTrainer
----------
`id=0.926370937775 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWTrainer>`__
.. ctype:: BOWTrainer
Abstract base class for training ''bag of visual words'' vocabulary from a set of descriptors.
See, for example, ''Visual Categorization with Bags of Keypoints'' by Gabriella Csurka, Christopher R. Dance,
Lixin Fan, Jutta Willamowski, and Cedric Bray, 2004.
::
class BOWTrainer
{
public:
BOWTrainer(){}
virtual ~BOWTrainer(){}
void add( const Mat& descriptors );
const vector<Mat>& getDescriptors() const;
int descripotorsCount() const;
virtual void clear();
virtual Mat cluster() const = 0;
virtual Mat cluster( const Mat& descriptors ) const = 0;
protected:
...
};
..
.. index:: BOWTrainer::add
cv::BOWTrainer::add
-------------------
`id=0.849162389183 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWTrainer%3A%3Aadd>`__
.. cfunction:: void BOWTrainer::add( const Mat\& descriptors )
Adds descriptors to the training set. The training set is clustered using the ``cluster`` method to construct the vocabulary.
:param descriptors: Descriptors to add to the training set. Each row of the ``descriptors`` matrix is one descriptor.
.. index:: BOWTrainer::getDescriptors
cv::BOWTrainer::getDescriptors
------------------------------
`id=0.999824242082 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWTrainer%3A%3AgetDescriptors>`__
.. cfunction:: const vector<Mat>\& BOWTrainer::getDescriptors() const
Returns the training set of descriptors.
.. index:: BOWTrainer::descripotorsCount
cv::BOWTrainer::descripotorsCount
---------------------------------
`id=0.497913292449 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWTrainer%3A%3AdescripotorsCount>`__
.. cfunction:: int BOWTrainer::descripotorsCount() const
Returns the count of all descriptors stored in the training set.
.. index:: BOWTrainer::cluster
cv::BOWTrainer::cluster
-----------------------
`id=0.560094315089 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWTrainer%3A%3Acluster>`__
.. cfunction:: Mat BOWTrainer::cluster() const
Clusters train descriptors. The vocabulary consists of the cluster centers, so this method returns the vocabulary. In the first variant of the method, the train descriptors stored in the object are clustered; in the second variant, the input descriptors are clustered.
.. cfunction:: Mat BOWTrainer::cluster( const Mat\& descriptors ) const
:param descriptors: Descriptors to cluster. Each row of the ``descriptors`` matrix is one descriptor. The descriptors are not added to the inner train descriptor set.
.. index:: BOWKMeansTrainer
.. _BOWKMeansTrainer:
BOWKMeansTrainer
----------------
`id=0.588500098443 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWKMeansTrainer>`__
.. ctype:: BOWKMeansTrainer
:func:`kmeans` based class to train a visual vocabulary using the ''bag of visual words'' approach.
::
class BOWKMeansTrainer : public BOWTrainer
{
public:
BOWKMeansTrainer( int clusterCount, const TermCriteria& termcrit=TermCriteria(),
int attempts=3, int flags=KMEANS_PP_CENTERS );
virtual ~BOWKMeansTrainer(){}
// Returns trained vocabulary (i.e. cluster centers).
virtual Mat cluster() const;
virtual Mat cluster( const Mat& descriptors ) const;
protected:
...
};
..
To understand the constructor parameters, see the :func:`kmeans` function arguments.
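A minimal sketch of building a vocabulary from SURF descriptors is shown below; the ``images`` vector, the detector threshold, and the vocabulary size of 100 clusters are assumptions made for illustration.
::
    cv::SurfFeatureDetector detector( 400. );
    cv::SurfDescriptorExtractor extractor;
    cv::BOWKMeansTrainer bowTrainer( 100 ); // vocabulary size (cluster count)
    for( size_t i = 0; i < images.size(); i++ )
    {
        std::vector<cv::KeyPoint> keypoints;
        cv::Mat descriptors;
        detector.detect( images[i], keypoints );
        extractor.compute( images[i], keypoints, descriptors );
        if( !descriptors.empty() )
            bowTrainer.add( descriptors );
    }
    cv::Mat vocabulary = bowTrainer.cluster(); // each row is one visual word
..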
.. index:: BOWImgDescriptorExtractor
.. _BOWImgDescriptorExtractor:
BOWImgDescriptorExtractor
-------------------------
`id=0.166378792557 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWImgDescriptorExtractor>`__
.. ctype:: BOWImgDescriptorExtractor
Class to compute an image descriptor using the ''bag of visual words'' approach. In brief, the computation consists of the following steps:
1. Compute descriptors for the given image and its keypoint set.
2. Find the nearest visual words from the vocabulary for each keypoint descriptor.
3. Compute the image descriptor as a normalized histogram of vocabulary words encountered in the image. That is, the ``i``-th bin of the histogram is the frequency of the ``i``-th vocabulary word in the given image.
::
class BOWImgDescriptorExtractor
{
public:
BOWImgDescriptorExtractor( const Ptr<DescriptorExtractor>& dextractor,
const Ptr<DescriptorMatcher>& dmatcher );
virtual ~BOWImgDescriptorExtractor(){}
void setVocabulary( const Mat& vocabulary );
const Mat& getVocabulary() const;
void compute( const Mat& image, vector<KeyPoint>& keypoints,
Mat& imgDescriptor,
vector<vector<int> >* pointIdxsOfClusters=0,
Mat* descriptors=0 );
int descriptorSize() const;
int descriptorType() const;
protected:
...
};
..
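A sketch of computing an image descriptor with a previously trained ``vocabulary`` (for instance, the one from the sketch above) and a query image ``img``; the chosen detector, extractor, and matcher types are assumptions made for brevity.
::
    cv::Ptr<cv::DescriptorExtractor> dextractor = cv::DescriptorExtractor::create( "SURF" );
    cv::Ptr<cv::DescriptorMatcher> dmatcher = cv::DescriptorMatcher::create( "BruteForce" );
    cv::BOWImgDescriptorExtractor bowExtractor( dextractor, dmatcher );
    bowExtractor.setVocabulary( vocabulary );
    cv::SurfFeatureDetector detector( 400. );
    std::vector<cv::KeyPoint> keypoints;
    detector.detect( img, keypoints );
    cv::Mat imgDescriptor; // 1 x vocabulary.rows, a normalized word histogram
    bowExtractor.compute( img, keypoints, imgDescriptor );
..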
.. index:: BOWImgDescriptorExtractor::BOWImgDescriptorExtractor
cv::BOWImgDescriptorExtractor::BOWImgDescriptorExtractor
--------------------------------------------------------
`id=0.355574799377 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWImgDescriptorExtractor%3A%3ABOWImgDescriptorExtractor>`__
.. cfunction:: BOWImgDescriptorExtractor::BOWImgDescriptorExtractor( const Ptr<DescriptorExtractor>\& dextractor, const Ptr<DescriptorMatcher>\& dmatcher )
Constructor.
:param dextractor: Descriptor extractor that is used to compute descriptors for the input image and its keypoints.
:param dmatcher: Descriptor matcher that is used to find the nearest word of the trained vocabulary for each keypoint descriptor of the image.
.. index:: BOWImgDescriptorExtractor::setVocabulary
cv::BOWImgDescriptorExtractor::setVocabulary
--------------------------------------------
`id=0.592484692408 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWImgDescriptorExtractor%3A%3AsetVocabulary>`__
.. cfunction:: void BOWImgDescriptorExtractor::setVocabulary( const Mat\& vocabulary )
Method to set visual vocabulary.
:param vocabulary: Vocabulary (can be trained using an inheritor of :func:`BOWTrainer` ). Each row of the vocabulary is one visual word (cluster center).
.. index:: BOWImgDescriptorExtractor::getVocabulary
cv::BOWImgDescriptorExtractor::getVocabulary
--------------------------------------------
`id=0.0185667539631 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWImgDescriptorExtractor%3A%3AgetVocabulary>`__
.. cfunction:: const Mat\& BOWImgDescriptorExtractor::getVocabulary() const
Returns the set vocabulary.
.. index:: BOWImgDescriptorExtractor::compute
cv::BOWImgDescriptorExtractor::compute
--------------------------------------
`id=0.558308680471 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWImgDescriptorExtractor%3A%3Acompute>`__
.. cfunction:: void BOWImgDescriptorExtractor::compute( const Mat\& image, vector<KeyPoint>\& keypoints, Mat\& imgDescriptor, vector<vector<int> >* pointIdxsOfClusters=0, Mat* descriptors=0 )
Compute image descriptor using set visual vocabulary.
:param image: The image for which the descriptor is computed.
:param keypoints: Keypoints detected in the input image.
:param imgDescriptor: Output computed image descriptor.
:param pointIdxsOfClusters: Indices of keypoints that belong to each cluster, i.e. ``pointIdxsOfClusters[i]`` contains the indices of keypoints that belong to the ``i``-th cluster (vocabulary word). Returned if it is not 0.
:param descriptors: Descriptors of the image keypoints. Returned if it is not 0.
.. index:: BOWImgDescriptorExtractor::descriptorSize
cv::BOWImgDescriptorExtractor::descriptorSize
---------------------------------------------
`id=0.758326749957 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWImgDescriptorExtractor%3A%3AdescriptorSize>`__
.. cfunction:: int BOWImgDescriptorExtractor::descriptorSize() const
Returns the image descriptor size if the vocabulary was set, and 0 otherwise.
.. index:: BOWImgDescriptorExtractor::descriptorType
cv::BOWImgDescriptorExtractor::descriptorType
---------------------------------------------
`id=0.940227909801 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/BOWImgDescriptorExtractor%3A%3AdescriptorType>`__
.. cfunction:: int BOWImgDescriptorExtractor::descriptorType() const
Returns the image descriptor type.