Removed Sphinx documentation files
@@ -1,88 +0,0 @@
Common Interfaces of Descriptor Extractors
==========================================

.. highlight:: cpp

Extractors of keypoint descriptors in OpenCV have wrappers with a common interface that enables you to easily switch
between different algorithms solving the same problem. This section is devoted to computing descriptors
represented as vectors in a multidimensional space. All objects that implement ``vector``
descriptor extractors inherit the
:ocv:class:`DescriptorExtractor` interface.

.. note::

   * An example explaining keypoint extraction can be found at opencv_source_code/samples/cpp/descriptor_extractor_matcher.cpp
   * An example on descriptor evaluation can be found at opencv_source_code/samples/cpp/detector_descriptor_evaluation.cpp

DescriptorExtractor
-------------------
.. ocv:class:: DescriptorExtractor : public Algorithm

Abstract base class for computing descriptors for image keypoints. ::

    class CV_EXPORTS DescriptorExtractor
    {
    public:
        virtual ~DescriptorExtractor();

        void compute( InputArray image, vector<KeyPoint>& keypoints,
                      OutputArray descriptors ) const;
        void compute( InputArrayOfArrays images, vector<vector<KeyPoint> >& keypoints,
                      OutputArrayOfArrays descriptors ) const;

        virtual void read( const FileNode& );
        virtual void write( FileStorage& ) const;

        virtual int descriptorSize() const = 0;
        virtual int descriptorType() const = 0;
        virtual int defaultNorm() const = 0;

        static Ptr<DescriptorExtractor> create( const String& descriptorExtractorType );

    protected:
        ...
    };

In this interface, a keypoint descriptor can be represented as a
dense, fixed-dimension vector of a basic type. Most descriptors
follow this pattern as it simplifies computing
distances between descriptors. Therefore, a collection of
descriptors is represented as
:ocv:class:`Mat`, where each row is a keypoint descriptor.

DescriptorExtractor::compute
--------------------------------
Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant).

.. ocv:function:: void DescriptorExtractor::compute( InputArray image, vector<KeyPoint>& keypoints, OutputArray descriptors ) const

.. ocv:function:: void DescriptorExtractor::compute( InputArrayOfArrays images, vector<vector<KeyPoint> >& keypoints, OutputArrayOfArrays descriptors ) const

.. ocv:pyfunction:: cv2.DescriptorExtractor_create.compute(image, keypoints[, descriptors]) -> keypoints, descriptors

    :param image: Image.

    :param images: Image set.

    :param keypoints: Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added, for example: ``SIFT`` duplicates a keypoint with several dominant orientations (one for each orientation).

    :param descriptors: Computed descriptors. In the second variant of the method, ``descriptors[i]`` are the descriptors computed for ``keypoints[i]``. Row ``j`` of ``descriptors`` (or ``descriptors[i]``) is the descriptor for the ``j``-th keypoint.
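
The row-per-keypoint layout described above can be sketched with plain NumPy. The descriptors below are synthetic stand-ins for the output of ``compute`` (no real extractor is run); the point is only the shape convention and how row-wise storage makes descriptor distances plain vector distances:

```python
import numpy as np

# Synthetic stand-in for the output of DescriptorExtractor::compute:
# one 32-dimensional float descriptor per keypoint, stored row-wise,
# so the whole collection is a single (num_keypoints x 32) matrix.
rng = np.random.default_rng(0)
num_keypoints = 5
descriptors = rng.random((num_keypoints, 32), dtype=np.float32)

# Row j is the descriptor of the j-th keypoint.
assert descriptors.shape == (num_keypoints, 32)

# Distances between descriptors are then plain vector distances,
# e.g. the L2 distance between the descriptors of keypoints 0 and 1:
d01 = float(np.linalg.norm(descriptors[0] - descriptors[1]))
```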
DescriptorExtractor::create
-------------------------------
Creates a descriptor extractor by name.

.. ocv:function:: Ptr<DescriptorExtractor> DescriptorExtractor::create( const String& descriptorExtractorType )

.. ocv:pyfunction:: cv2.DescriptorExtractor_create(descriptorExtractorType) -> retval

    :param descriptorExtractorType: Descriptor extractor type.

The current implementation supports the following descriptor extractor types:

* ``"BRISK"`` -- :ocv:class:`BRISK`
* ``"ORB"`` -- :ocv:class:`ORB`
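
The create-by-name dispatch can be mimicked with a toy registry. The extractor classes below are hypothetical stand-ins, not the real OpenCV types, and the real ``create`` returns an empty ``Ptr`` for unknown names rather than raising:

```python
# Toy factory mirroring DescriptorExtractor::create's dispatch-by-name.
# BriskExtractor / OrbExtractor are hypothetical stand-in classes.
class BriskExtractor:
    name = "BRISK"

class OrbExtractor:
    name = "ORB"

_REGISTRY = {"BRISK": BriskExtractor, "ORB": OrbExtractor}

def create_descriptor_extractor(descriptor_extractor_type):
    try:
        return _REGISTRY[descriptor_extractor_type]()
    except KeyError:
        # The real API returns an empty Ptr here; we raise for clarity.
        raise ValueError("unknown extractor: " + descriptor_extractor_type)

extractor = create_descriptor_extractor("ORB")
```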
@@ -1,279 +0,0 @@
Common Interfaces of Descriptor Matchers
========================================

.. highlight:: cpp

Matchers of keypoint descriptors in OpenCV have wrappers with a common interface that enables you to easily switch
between different algorithms solving the same problem. This section is devoted to matching descriptors
that are represented as vectors in a multidimensional space. All objects that implement ``vector``
descriptor matchers inherit the
:ocv:class:`DescriptorMatcher` interface.

.. note::

   * An example explaining keypoint matching can be found at opencv_source_code/samples/cpp/descriptor_extractor_matcher.cpp
   * An example on descriptor matching evaluation can be found at opencv_source_code/samples/cpp/detector_descriptor_matcher_evaluation.cpp
   * An example on one to many image matching can be found at opencv_source_code/samples/cpp/matching_to_many_images.cpp

DescriptorMatcher
-----------------
.. ocv:class:: DescriptorMatcher : public Algorithm

Abstract base class for matching keypoint descriptors. It has two groups
of match methods: for matching descriptors of an image with another image or
with an image set. ::

    class DescriptorMatcher
    {
    public:
        virtual ~DescriptorMatcher();

        virtual void add( InputArrayOfArrays descriptors );

        const vector<Mat>& getTrainDescriptors() const;
        virtual void clear();
        bool empty() const;
        virtual bool isMaskSupported() const = 0;

        virtual void train();

        /*
         * Group of methods to match descriptors from an image pair.
         */
        void match( InputArray queryDescriptors, InputArray trainDescriptors,
                    vector<DMatch>& matches, InputArray mask=noArray() ) const;
        void knnMatch( InputArray queryDescriptors, InputArray trainDescriptors,
                       vector<vector<DMatch> >& matches, int k,
                       InputArray mask=noArray(), bool compactResult=false ) const;
        void radiusMatch( InputArray queryDescriptors, InputArray trainDescriptors,
                          vector<vector<DMatch> >& matches, float maxDistance,
                          InputArray mask=noArray(), bool compactResult=false ) const;
        /*
         * Group of methods to match descriptors from one image to an image set.
         */
        void match( InputArray queryDescriptors, vector<DMatch>& matches,
                    InputArrayOfArrays masks=noArray() );
        void knnMatch( InputArray queryDescriptors, vector<vector<DMatch> >& matches,
                       int k, InputArrayOfArrays masks=noArray(),
                       bool compactResult=false );
        void radiusMatch( InputArray queryDescriptors, vector<vector<DMatch> >& matches,
                          float maxDistance, InputArrayOfArrays masks=noArray(),
                          bool compactResult=false );

        virtual void read( const FileNode& );
        virtual void write( FileStorage& ) const;

        virtual Ptr<DescriptorMatcher> clone( bool emptyTrainData=false ) const = 0;

        static Ptr<DescriptorMatcher> create( const String& descriptorMatcherType );

    protected:
        vector<Mat> trainDescCollection;
        vector<UMat> utrainDescCollection;
        ...
    };

DescriptorMatcher::add
--------------------------
Adds descriptors to train a CPU (``trainDescCollection``) or GPU (``utrainDescCollection``) descriptor collection. If the collection is not empty, the new descriptors are added to the existing train descriptors.

.. ocv:function:: void DescriptorMatcher::add( InputArrayOfArrays descriptors )

    :param descriptors: Descriptors to add. Each ``descriptors[i]`` is a set of descriptors from the same train image.

DescriptorMatcher::getTrainDescriptors
------------------------------------------
Returns a constant reference to the train descriptor collection ``trainDescCollection``.

.. ocv:function:: const vector<Mat>& DescriptorMatcher::getTrainDescriptors() const

DescriptorMatcher::clear
----------------------------
Clears the train descriptor collections.

.. ocv:function:: void DescriptorMatcher::clear()

DescriptorMatcher::empty
----------------------------
Returns true if there are no train descriptors in both collections.

.. ocv:function:: bool DescriptorMatcher::empty() const

DescriptorMatcher::isMaskSupported
--------------------------------------
Returns true if the descriptor matcher supports masking permissible matches.

.. ocv:function:: bool DescriptorMatcher::isMaskSupported() const

DescriptorMatcher::train
----------------------------
Trains a descriptor matcher.

.. ocv:function:: void DescriptorMatcher::train()

Trains a descriptor matcher (for example, the flann index). In all the matching methods, ``train()`` is run every time before matching. Some descriptor matchers (for example, ``BruteForceMatcher``) have an empty implementation of this method. Other matchers really train their inner structures (for example, ``FlannBasedMatcher`` trains ``flann::Index``).

DescriptorMatcher::match
----------------------------
Finds the best match for each descriptor from a query set.

.. ocv:function:: void DescriptorMatcher::match( InputArray queryDescriptors, InputArray trainDescriptors, vector<DMatch>& matches, InputArray mask=noArray() ) const

.. ocv:function:: void DescriptorMatcher::match( InputArray queryDescriptors, vector<DMatch>& matches, InputArrayOfArrays masks=noArray() )

    :param queryDescriptors: Query set of descriptors.

    :param trainDescriptors: Train set of descriptors. This set is not added to the train descriptors collection stored in the class object.

    :param matches: Matches. If a query descriptor is masked out in ``mask``, no match is added for this descriptor. So, the ``matches`` size may be smaller than the query descriptors count.

    :param mask: Mask specifying permissible matches between an input query and train matrices of descriptors.

    :param masks: Set of masks. Each ``masks[i]`` specifies permissible matches between the input query descriptors and stored train descriptors from the i-th image ``trainDescCollection[i]``.

In the first variant of this method, the train descriptors are passed as an input argument. In the second variant, the train descriptor collection that was set by ``DescriptorMatcher::add`` is used. An optional mask (or set of masks) can be passed to specify which query and training descriptors can be matched. Namely, ``queryDescriptors[i]`` can be matched with ``trainDescriptors[j]`` only if ``mask.at<uchar>(i,j)`` is non-zero.
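
The semantics above (nearest train descriptor per query row, mask honoured, fully masked-out queries producing no match) can be sketched in NumPy. This is a minimal stand-in for the first variant of ``match``, not the OpenCV implementation:

```python
import numpy as np

# Minimal sketch of match(): for each query row, the single nearest train
# row by L2 distance. mask[i, j] != 0 permits the pair (i, j); a fully
# masked-out query contributes no match, as documented.
def brute_force_match(query, train, mask=None):
    matches = []  # (query_idx, train_idx, distance) triples, like DMatch
    for i, q in enumerate(query):
        dists = np.linalg.norm(train - q, axis=1)
        if mask is not None:
            dists = np.where(mask[i] != 0, dists, np.inf)
        j = int(np.argmin(dists))
        if np.isfinite(dists[j]):
            matches.append((i, j, float(dists[j])))
    return matches

query = np.array([[0.0, 0.0], [5.0, 5.0]])
train = np.array([[0.1, 0.0], [5.0, 4.9], [9.0, 9.0]])
matches = brute_force_match(query, train)
# query row 0 matches train row 0; query row 1 matches train row 1
```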
DescriptorMatcher::knnMatch
-------------------------------
Finds the k best matches for each descriptor from a query set.

.. ocv:function:: void DescriptorMatcher::knnMatch( InputArray queryDescriptors, InputArray trainDescriptors, vector<vector<DMatch> >& matches, int k, InputArray mask=noArray(), bool compactResult=false ) const

.. ocv:function:: void DescriptorMatcher::knnMatch( InputArray queryDescriptors, vector<vector<DMatch> >& matches, int k, InputArrayOfArrays masks=noArray(), bool compactResult=false )

    :param queryDescriptors: Query set of descriptors.

    :param trainDescriptors: Train set of descriptors. This set is not added to the train descriptors collection stored in the class object.

    :param mask: Mask specifying permissible matches between an input query and train matrices of descriptors.

    :param masks: Set of masks. Each ``masks[i]`` specifies permissible matches between the input query descriptors and stored train descriptors from the i-th image ``trainDescCollection[i]``.

    :param matches: Matches. Each ``matches[i]`` is k or fewer matches for the same query descriptor.

    :param k: Count of best matches found per query descriptor (fewer if a query descriptor has less than k possible matches in total).

    :param compactResult: Parameter used when the mask (or masks) is not empty. If ``compactResult`` is false, the ``matches`` vector has the same size as ``queryDescriptors`` rows. If ``compactResult`` is true, the ``matches`` vector does not contain matches for fully masked-out query descriptors.

These extended variants of the :ocv:func:`DescriptorMatcher::match` methods find several best matches for each query descriptor. The matches are returned in order of increasing distance. See :ocv:func:`DescriptorMatcher::match` for details about the query and train descriptors.
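
A sketch of what the first variant of ``knnMatch`` computes, again as a NumPy stand-in rather than the real implementation:

```python
import numpy as np

# Sketch of knnMatch: the k nearest train descriptors per query descriptor,
# returned in increasing order of distance.
def knn_match(query, train, k):
    all_matches = []
    for i, q in enumerate(query):
        dists = np.linalg.norm(train - q, axis=1)
        order = np.argsort(dists)[:k]
        all_matches.append([(i, int(j), float(dists[j])) for j in order])
    return all_matches

query = np.array([[0.0, 0.0]])
train = np.array([[3.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
matches = knn_match(query, train, k=2)
# nearest two train rows for the single query: index 1 (dist 1), then 0 (dist 3)
```

A common use of ``k=2`` results is Lowe's ratio test: accept ``matches[i][0]`` only if its distance is well below that of ``matches[i][1]``.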
DescriptorMatcher::radiusMatch
----------------------------------
For each query descriptor, finds the training descriptors not farther than the specified distance.

.. ocv:function:: void DescriptorMatcher::radiusMatch( InputArray queryDescriptors, InputArray trainDescriptors, vector<vector<DMatch> >& matches, float maxDistance, InputArray mask=noArray(), bool compactResult=false ) const

.. ocv:function:: void DescriptorMatcher::radiusMatch( InputArray queryDescriptors, vector<vector<DMatch> >& matches, float maxDistance, InputArrayOfArrays masks=noArray(), bool compactResult=false )

    :param queryDescriptors: Query set of descriptors.

    :param trainDescriptors: Train set of descriptors. This set is not added to the train descriptors collection stored in the class object.

    :param mask: Mask specifying permissible matches between an input query and train matrices of descriptors.

    :param masks: Set of masks. Each ``masks[i]`` specifies permissible matches between the input query descriptors and stored train descriptors from the i-th image ``trainDescCollection[i]``.

    :param matches: Found matches.

    :param compactResult: Parameter used when the mask (or masks) is not empty. If ``compactResult`` is false, the ``matches`` vector has the same size as ``queryDescriptors`` rows. If ``compactResult`` is true, the ``matches`` vector does not contain matches for fully masked-out query descriptors.

    :param maxDistance: Threshold for the distance between matched descriptors. Distance here means metric distance (e.g. Hamming distance), not the distance between coordinates (which is measured in pixels)!

For each query descriptor, the methods find training descriptors such that the distance between the query descriptor and the training descriptor is equal to or smaller than ``maxDistance``. Found matches are returned in order of increasing distance.
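
The radius criterion can be sketched the same way (a NumPy stand-in, not the OpenCV code); note that ``maxDistance`` is a descriptor-space metric threshold, not a pixel distance:

```python
import numpy as np

# Sketch of radiusMatch: every train descriptor within max_distance of the
# query descriptor, returned in increasing distance order.
def radius_match(query, train, max_distance):
    all_matches = []
    for i, q in enumerate(query):
        dists = np.linalg.norm(train - q, axis=1)
        js = np.where(dists <= max_distance)[0]
        js = js[np.argsort(dists[js])]          # sort the survivors by distance
        all_matches.append([(i, int(j), float(dists[j])) for j in js])
    return all_matches

query = np.array([[0.0, 0.0]])
train = np.array([[0.5, 0.0], [2.0, 0.0], [0.2, 0.0]])
matches = radius_match(query, train, max_distance=1.0)
# keeps train rows 2 (dist 0.2) and 0 (dist 0.5); row 1 is outside the radius
```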
DescriptorMatcher::clone
----------------------------
Clones the matcher.

.. ocv:function:: Ptr<DescriptorMatcher> DescriptorMatcher::clone( bool emptyTrainData=false ) const

    :param emptyTrainData: If ``emptyTrainData`` is false, the method creates a deep copy of the object, that is, copies both parameters and train data. If ``emptyTrainData`` is true, the method creates an object copy with the current parameters but with empty train data.

DescriptorMatcher::create
-----------------------------
Creates a descriptor matcher of a given type with the default parameters (using the default constructor).

.. ocv:function:: Ptr<DescriptorMatcher> DescriptorMatcher::create( const String& descriptorMatcherType )

    :param descriptorMatcherType: Descriptor matcher type. Now the following matcher types are supported:

        * ``BruteForce`` (it uses ``L2``)
        * ``BruteForce-L1``
        * ``BruteForce-Hamming``
        * ``BruteForce-Hamming(2)``
        * ``FlannBased``

BFMatcher
-----------------
.. ocv:class:: BFMatcher : public DescriptorMatcher

Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. This descriptor matcher supports masking permissible matches of descriptor sets.

BFMatcher::BFMatcher
--------------------
Brute-force matcher constructor.

.. ocv:function:: BFMatcher::BFMatcher( int normType=NORM_L2, bool crossCheck=false )

    :param normType: One of ``NORM_L1``, ``NORM_L2``, ``NORM_HAMMING``, ``NORM_HAMMING2``. The ``L1`` and ``L2`` norms are preferable choices for SIFT and SURF descriptors; ``NORM_HAMMING`` should be used with ORB, BRISK and BRIEF; ``NORM_HAMMING2`` should be used with ORB when ``WTA_K==3`` or ``4`` (see the ORB::ORB constructor description).

    :param crossCheck: If it is false (the default ``BFMatcher`` behaviour), the matcher finds the k nearest neighbours for each query descriptor. If ``crossCheck==true``, the ``knnMatch()`` method with ``k=1`` will only return pairs ``(i,j)`` such that for the ``i``-th query descriptor the ``j``-th descriptor in the matcher's collection is the nearest and vice versa, i.e. the ``BFMatcher`` will only return consistent pairs. This technique usually produces the best results with a minimal number of outliers when there are enough matches. It is an alternative to the ratio test used by D. Lowe in the SIFT paper.
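
The cross-check criterion (keep only mutual nearest neighbours) can be sketched in NumPy; this is a stand-in for the behaviour, not the OpenCV implementation:

```python
import numpy as np

# Sketch of crossCheck=true: keep pair (i, j) only when train row j is the
# nearest to query row i AND query row i is the nearest to train row j.
def cross_check_match(query, train):
    d = np.linalg.norm(query[:, None, :] - train[None, :, :], axis=2)
    nearest_train = d.argmin(axis=1)   # best train index for each query row
    nearest_query = d.argmin(axis=0)   # best query index for each train row
    return [(i, int(j)) for i, j in enumerate(nearest_train)
            if nearest_query[j] == i]

query = np.array([[0.0], [4.0]])
train = np.array([[0.1], [3.0], [3.9]])
pairs = cross_check_match(query, train)
# (0, 0) and (1, 2) are mutual nearest neighbours; train row 1's nearest
# query is 1, but query 1's nearest train row is 2, so row 1 yields no pair
```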
FlannBasedMatcher
-----------------
.. ocv:class:: FlannBasedMatcher : public DescriptorMatcher

Flann-based descriptor matcher. This matcher trains :ocv:class:`flann::Index_` on a train descriptor collection and calls its nearest search methods to find the best matches. So, this matcher may be faster than the brute-force matcher when matching against a large train collection. ``FlannBasedMatcher`` does not support masking permissible matches of descriptor sets because ``flann::Index`` does not support this. ::

    class FlannBasedMatcher : public DescriptorMatcher
    {
    public:
        FlannBasedMatcher(
            const Ptr<flann::IndexParams>& indexParams=new flann::KDTreeIndexParams(),
            const Ptr<flann::SearchParams>& searchParams=new flann::SearchParams() );

        virtual void add( InputArrayOfArrays descriptors );
        virtual void clear();

        virtual void train();
        virtual bool isMaskSupported() const;

        virtual Ptr<DescriptorMatcher> clone( bool emptyTrainData=false ) const;
    protected:
        ...
    };

..
@@ -1,179 +0,0 @@
Common Interfaces of Feature Detectors
======================================

.. highlight:: cpp

Feature detectors in OpenCV have wrappers with a common interface that enables you to easily switch
between different algorithms solving the same problem. All objects that implement keypoint detectors
inherit the
:ocv:class:`FeatureDetector` interface.

.. note::

   * An example explaining keypoint detection can be found at opencv_source_code/samples/cpp/descriptor_extractor_matcher.cpp

FeatureDetector
---------------
.. ocv:class:: FeatureDetector : public Algorithm

Abstract base class for 2D image feature detectors. ::

    class CV_EXPORTS FeatureDetector
    {
    public:
        virtual ~FeatureDetector();

        void detect( InputArray image, vector<KeyPoint>& keypoints,
                     InputArray mask=noArray() ) const;

        void detect( InputArrayOfArrays images,
                     vector<vector<KeyPoint> >& keypoints,
                     InputArrayOfArrays masks=noArray() ) const;

        virtual void read(const FileNode&);
        virtual void write(FileStorage&) const;

        static Ptr<FeatureDetector> create( const String& detectorType );

    protected:
        ...
    };

FeatureDetector::detect
---------------------------
Detects keypoints in an image (first variant) or image set (second variant).

.. ocv:function:: void FeatureDetector::detect( InputArray image, vector<KeyPoint>& keypoints, InputArray mask=noArray() ) const

.. ocv:function:: void FeatureDetector::detect( InputArrayOfArrays images, vector<vector<KeyPoint> >& keypoints, InputArrayOfArrays masks=noArray() ) const

.. ocv:pyfunction:: cv2.FeatureDetector_create.detect(image[, mask]) -> keypoints

    :param image: Image.

    :param images: Image set.

    :param keypoints: The detected keypoints. In the second variant of the method, ``keypoints[i]`` is a set of keypoints detected in ``images[i]``.

    :param mask: Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.

    :param masks: Masks for each input image specifying where to look for keypoints (optional). ``masks[i]`` is a mask for ``images[i]``.
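
The mask semantics above can be sketched without a real detector: a keypoint at ``(x, y)`` survives only where the 8-bit mask is non-zero. Keypoints here are plain ``(x, y)`` tuples standing in for ``cv::KeyPoint``:

```python
import numpy as np

# Sketch of detect()'s mask handling: keep a keypoint only if the mask is
# non-zero at its pixel. NumPy indexing is row-major, hence mask[y, x].
def filter_keypoints_by_mask(keypoints, mask):
    return [(x, y) for (x, y) in keypoints if mask[y, x] != 0]

mask = np.zeros((4, 4), dtype=np.uint8)
mask[0:2, 0:2] = 255                 # region of interest: top-left 2x2 block
keypoints = [(0, 0), (1, 1), (3, 3)]
kept = filter_keypoints_by_mask(keypoints, mask)
# only the two keypoints inside the region of interest remain
```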
FastFeatureDetector
-------------------
.. ocv:class:: FastFeatureDetector : public Feature2D

Wrapping class for feature detection using the
:ocv:func:`FAST` method. ::

    class FastFeatureDetector : public Feature2D
    {
    public:
        static Ptr<FastFeatureDetector> create( int threshold=1, bool nonmaxSuppression=true, int type=FastFeatureDetector::TYPE_9_16 );
    };

GFTTDetector
---------------------------
.. ocv:class:: GFTTDetector : public FeatureDetector

Wrapping class for feature detection using the
:ocv:func:`goodFeaturesToTrack` function. ::

    class GFTTDetector : public Feature2D
    {
    public:
        enum { USE_HARRIS_DETECTOR=10000 };
        static Ptr<GFTTDetector> create( int maxCorners=1000, double qualityLevel=0.01,
                                         double minDistance=1, int blockSize=3,
                                         bool useHarrisDetector=false, double k=0.04 );
    };

MSER
-------------------
.. ocv:class:: MSER : public Feature2D

Maximally stable extremal region detector. ::

    class MSER : public Feature2D
    {
    public:
        enum
        {
            DELTA=10000, MIN_AREA=10001, MAX_AREA=10002, PASS2_ONLY=10003,
            MAX_EVOLUTION=10004, AREA_THRESHOLD=10005,
            MIN_MARGIN=10006, EDGE_BLUR_SIZE=10007
        };

        //! the full constructor
        static Ptr<MSER> create( int _delta=5, int _min_area=60, int _max_area=14400,
                                 double _max_variation=0.25, double _min_diversity=.2,
                                 int _max_evolution=200, double _area_threshold=1.01,
                                 double _min_margin=0.003, int _edge_blur_size=5 );

        virtual void detectRegions( InputArray image,
                                    std::vector<std::vector<Point> >& msers,
                                    std::vector<Rect>& bboxes ) = 0;
    };

SimpleBlobDetector
-------------------
.. ocv:class:: SimpleBlobDetector : public FeatureDetector

Class for extracting blobs from an image. ::

    class SimpleBlobDetector : public FeatureDetector
    {
    public:
        struct Params
        {
            Params();
            float thresholdStep;
            float minThreshold;
            float maxThreshold;
            size_t minRepeatability;
            float minDistBetweenBlobs;

            bool filterByColor;
            uchar blobColor;

            bool filterByArea;
            float minArea, maxArea;

            bool filterByCircularity;
            float minCircularity, maxCircularity;

            bool filterByInertia;
            float minInertiaRatio, maxInertiaRatio;

            bool filterByConvexity;
            float minConvexity, maxConvexity;
        };

        static Ptr<SimpleBlobDetector> create(const SimpleBlobDetector::Params
                        &parameters = SimpleBlobDetector::Params());
    };

The class implements a simple algorithm for extracting blobs from an image:

#. Convert the source image to binary images by applying thresholding with several thresholds from ``minThreshold`` (inclusive) to ``maxThreshold`` (exclusive) with distance ``thresholdStep`` between neighboring thresholds.

#. Extract connected components from every binary image by :ocv:func:`findContours` and calculate their centers.

#. Group centers from several binary images by their coordinates. Close centers form one group that corresponds to one blob, which is controlled by the ``minDistBetweenBlobs`` parameter.

#. From the groups, estimate the final centers of the blobs and their radii, and return them as the locations and sizes of keypoints.

This class performs several filtrations of returned blobs. Set ``filterBy*`` to true/false to turn on/off the corresponding filtration. Available filtrations:

* **By color**. This filter compares the intensity of a binary image at the center of a blob to ``blobColor``. If they differ, the blob is filtered out. Use ``blobColor = 0`` to extract dark blobs and ``blobColor = 255`` to extract light blobs.

* **By area**. Extracted blobs have an area between ``minArea`` (inclusive) and ``maxArea`` (exclusive).

* **By circularity**. Extracted blobs have circularity (:math:`\frac{4*\pi*Area}{perimeter * perimeter}`) between ``minCircularity`` (inclusive) and ``maxCircularity`` (exclusive).

* **By ratio of the minimum inertia to maximum inertia**. Extracted blobs have this ratio between ``minInertiaRatio`` (inclusive) and ``maxInertiaRatio`` (exclusive).

* **By convexity**. Extracted blobs have convexity (area / area of blob convex hull) between ``minConvexity`` (inclusive) and ``maxConvexity`` (exclusive).

Default values of the parameters are tuned to extract dark circular blobs.
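
The circularity filter above is easy to verify analytically: :math:`4 \pi A / P^2` is 1.0 for an ideal disc and lower for less round shapes. A small sketch with analytic (not image-measured) areas and perimeters:

```python
import math

# Sketch of the circularity filter: 4*pi*area / perimeter^2 is 1.0 for an
# ideal disc and lower for elongated or angular shapes.
def passes_circularity(area, perimeter, min_circularity, max_circularity):
    c = 4.0 * math.pi * area / (perimeter * perimeter)
    # inclusive lower bound, exclusive upper bound, as documented above
    return min_circularity <= c < max_circularity

r = 5.0
disc = passes_circularity(math.pi * r * r, 2.0 * math.pi * r, 0.8, 1.1)  # c = 1.0
square = passes_circularity(100.0, 40.0, 0.8, 1.1)  # 10x10 square: c = pi/4, about 0.785
# the disc passes, the square does not
```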
@@ -1,86 +0,0 @@
Drawing Function of Keypoints and Matches
=========================================

drawMatches
---------------
Draws the found matches of keypoints from two images.

.. ocv:function:: void drawMatches( InputArray img1, const vector<KeyPoint>& keypoints1, InputArray img2, const vector<KeyPoint>& keypoints2, const vector<DMatch>& matches1to2, InputOutputArray outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<char>& matchesMask=vector<char>(), int flags=DrawMatchesFlags::DEFAULT )

.. ocv:function:: void drawMatches( InputArray img1, const vector<KeyPoint>& keypoints1, InputArray img2, const vector<KeyPoint>& keypoints2, const vector<vector<DMatch> >& matches1to2, InputOutputArray outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<vector<char> >& matchesMask=vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT )

.. ocv:pyfunction:: cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) -> outImg

.. ocv:pyfunction:: cv2.drawMatchesKnn(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) -> outImg

    :param img1: First source image.

    :param keypoints1: Keypoints from the first source image.

    :param img2: Second source image.

    :param keypoints2: Keypoints from the second source image.

    :param matches1to2: Matches from the first image to the second one, which means that ``keypoints1[i]`` has a corresponding point in ``keypoints2[matches[i]]``.

    :param outImg: Output image. Its content depends on the ``flags`` value defining what is drawn in the output image. See possible ``flags`` bit values below.

    :param matchColor: Color of matches (lines and connected keypoints). If ``matchColor==Scalar::all(-1)``, the color is generated randomly.

    :param singlePointColor: Color of single keypoints (circles), which means that the keypoints do not have matches. If ``singlePointColor==Scalar::all(-1)``, the color is generated randomly.

    :param matchesMask: Mask determining which matches are drawn. If the mask is empty, all matches are drawn.

    :param flags: Flags setting drawing features. Possible ``flags`` bit values are defined by ``DrawMatchesFlags``.

This function draws matches of keypoints from two images in the output image. A match is a line connecting two keypoints (circles). The structure ``DrawMatchesFlags`` is defined as follows:

.. code-block:: cpp

    struct DrawMatchesFlags
    {
        enum
        {
            DEFAULT = 0, // Output image matrix will be created (Mat::create),
                         // i.e. existing memory of output image may be reused.
                         // Two source images, matches, and single keypoints
                         // will be drawn.
                         // For each keypoint, only the center point will be
                         // drawn (without a circle around the keypoint with the
                         // keypoint size and orientation).
            DRAW_OVER_OUTIMG = 1, // Output image matrix will not be
                         // created (using Mat::create). Matches will be drawn
                         // on existing content of output image.
            NOT_DRAW_SINGLE_POINTS = 2, // Single keypoints will not be drawn.
            DRAW_RICH_KEYPOINTS = 4 // For each keypoint, the circle around
                         // keypoint with keypoint size and orientation will
                         // be drawn.
        };
    };

..
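
The canvas layout ``drawMatches`` produces under ``DEFAULT`` can be sketched with NumPy: the two source images are placed side by side, so a keypoint of the second image appears shifted right by the width of the first. This is only a layout sketch, not the OpenCV drawing code:

```python
import numpy as np

# Sketch of the drawMatches output canvas: img1 on the left, img2 on the
# right, on a canvas tall enough for the taller of the two images.
def make_match_canvas(img1, img2):
    h = max(img1.shape[0], img2.shape[0])
    out = np.zeros((h, img1.shape[1] + img2.shape[1]), dtype=img1.dtype)
    out[:img1.shape[0], :img1.shape[1]] = img1
    out[:img2.shape[0], img1.shape[1]:] = img2
    return out

def shift_keypoint2(kp, img1_width):
    # a keypoint of img2 is drawn offset by img1's width on the shared canvas
    x, y = kp
    return (x + img1_width, y)

img1 = np.ones((4, 6), dtype=np.uint8)
img2 = np.full((3, 5), 2, dtype=np.uint8)
canvas = make_match_canvas(img1, img2)
# canvas is 4 rows by 11 columns; keypoint (1, 2) of img2 lands at (7, 2)
```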
drawKeypoints
-----------------
Draws keypoints.

.. ocv:function:: void drawKeypoints( InputArray image, const vector<KeyPoint>& keypoints, InputOutputArray outImage, const Scalar& color=Scalar::all(-1), int flags=DrawMatchesFlags::DEFAULT )

.. ocv:pyfunction:: cv2.drawKeypoints(image, keypoints[, outImage[, color[, flags]]]) -> outImage

    :param image: Source image.

    :param keypoints: Keypoints from the source image.

    :param outImage: Output image. Its content depends on the ``flags`` value defining what is drawn in the output image. See possible ``flags`` bit values below.

    :param color: Color of keypoints.

    :param flags: Flags setting drawing features. Possible ``flags`` bit values are defined by ``DrawMatchesFlags``. See details above in :ocv:func:`drawMatches`.

.. note:: For the Python API, flags are modified as ``cv2.DRAW_MATCHES_FLAGS_DEFAULT``, ``cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS``, ``cv2.DRAW_MATCHES_FLAGS_DRAW_OVER_OUTIMG``, ``cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS``.
@@ -1,247 +0,0 @@
Feature Detection and Description
=================================

.. highlight:: cpp

.. note::

   * An example explaining keypoint detection and description can be found at opencv_source_code/samples/cpp/descriptor_extractor_matcher.cpp

FAST
----
Detects corners using the FAST algorithm.

.. ocv:function:: void FAST( InputArray image, vector<KeyPoint>& keypoints, int threshold, bool nonmaxSuppression=true )
.. ocv:function:: void FAST( InputArray image, vector<KeyPoint>& keypoints, int threshold, bool nonmaxSuppression, int type )

    :param image: grayscale image where keypoints (corners) are detected.

    :param keypoints: keypoints detected on the image.

    :param threshold: threshold on the difference between the intensity of the central pixel and the pixels of a circle around this pixel.

    :param nonmaxSuppression: if true, non-maximum suppression is applied to detected corners (keypoints).

    :param type: one of the three neighborhoods as defined in the paper: ``FastFeatureDetector::TYPE_9_16``, ``FastFeatureDetector::TYPE_7_12``, ``FastFeatureDetector::TYPE_5_8``

Detects corners using the FAST algorithm by [Rosten06]_.

.. note:: In the Python API, types are given as ``cv2.FAST_FEATURE_DETECTOR_TYPE_5_8``, ``cv2.FAST_FEATURE_DETECTOR_TYPE_7_12`` and ``cv2.FAST_FEATURE_DETECTOR_TYPE_9_16``. For corner detection, use the ``cv2.FAST.detect()`` method.
|
||||
|
||||
|
||||
.. [Rosten06] E. Rosten. Machine Learning for High-speed Corner Detection, 2006.
|
||||
|
||||
MSER
----
.. ocv:class:: MSER : public FeatureDetector

Maximally stable extremal region extractor. ::

    class MSER : public CvMSERParams
    {
    public:
        // default constructor
        MSER();
        // constructor that initializes all the algorithm parameters
        MSER( int _delta, int _min_area, int _max_area,
              float _max_variation, float _min_diversity,
              int _max_evolution, double _area_threshold,
              double _min_margin, int _edge_blur_size );
        // runs the extractor on the specified image; returns the MSERs,
        // each encoded as a contour (vector<Point>, see findContours)
        // the optional mask marks the area where MSERs are searched for
        void detectRegions( InputArray image, vector<vector<Point> >& msers, vector<Rect>& bboxes ) const;
    };

The class encapsulates all the parameters of the MSER extraction algorithm (see
http://en.wikipedia.org/wiki/Maximally_stable_extremal_regions). Also see http://code.opencv.org/projects/opencv/wiki/MSER for useful comments and parameter descriptions.

.. note::

   * (Python) A complete example showing the use of the MSER detector can be found at opencv_source_code/samples/python2/mser.py

ORB
---
.. ocv:class:: ORB : public Feature2D

Class implementing the ORB (*oriented BRIEF*) keypoint detector and descriptor extractor, described in [RRKB11]_. The algorithm uses FAST in pyramids to detect stable keypoints, selects the strongest features using the FAST or Harris response, finds their orientation using first-order moments, and computes the descriptors using BRIEF (where the coordinates of random point pairs (or k-tuples) are rotated according to the measured orientation).

.. [RRKB11] Ethan Rublee, Vincent Rabaud, Kurt Konolige, Gary R. Bradski: ORB: An efficient alternative to SIFT or SURF. ICCV 2011: 2564-2571.

ORB::ORB
--------
The ORB constructor

.. ocv:function:: ORB::ORB(int nfeatures = 500, float scaleFactor = 1.2f, int nlevels = 8, int edgeThreshold = 31, int firstLevel = 0, int WTA_K=2, int scoreType=ORB::HARRIS_SCORE, int patchSize=31)

.. ocv:pyfunction:: cv2.ORB([, nfeatures[, scaleFactor[, nlevels[, edgeThreshold[, firstLevel[, WTA_K[, scoreType[, patchSize]]]]]]]]) -> <ORB object>

    :param nfeatures: The maximum number of features to retain.

    :param scaleFactor: Pyramid decimation ratio, greater than 1. ``scaleFactor==2`` means the classical pyramid, where each next level has 4x fewer pixels than the previous, but such a big scale factor degrades feature matching scores dramatically. On the other hand, a scale factor too close to 1 means that covering a certain scale range requires more pyramid levels, so speed suffers.

    :param nlevels: The number of pyramid levels. The smallest level will have linear size equal to ``input_image_linear_size/pow(scaleFactor, nlevels)``.

    :param edgeThreshold: The size of the border where features are not detected. It should roughly match the ``patchSize`` parameter.

    :param firstLevel: It should be 0 in the current implementation.

    :param WTA_K: The number of points that produce each element of the oriented BRIEF descriptor. The default value 2 means classical BRIEF, where a random point pair is taken and their brightnesses compared, giving a 0/1 response. Other possible values are 3 and 4. For example, 3 means that we take 3 random points (the point coordinates are random, but they are generated from a pre-defined seed, so each element of the BRIEF descriptor is computed deterministically from the pixel rectangle), find the point of maximum brightness, and output the index of the winner (0, 1 or 2). Such output occupies 2 bits, and therefore needs a special variant of Hamming distance, denoted ``NORM_HAMMING2`` (2 bits per bin). When ``WTA_K=4``, we take 4 random points to compute each bin (which also occupies 2 bits with possible values 0, 1, 2 or 3).

    :param scoreType: The default ``HARRIS_SCORE`` means that the Harris algorithm is used to rank features (the score is written to ``KeyPoint::score`` and is used to retain the best ``nfeatures`` features); ``FAST_SCORE`` is an alternative value that produces slightly less stable keypoints, but is a little faster to compute.

    :param patchSize: size of the patch used by the oriented BRIEF descriptor. On smaller pyramid layers the perceived image area covered by a feature will, of course, be larger.

ORB::operator()
---------------
Finds keypoints in an image and computes their descriptors

.. ocv:function:: void ORB::operator()(InputArray image, InputArray mask, vector<KeyPoint>& keypoints, OutputArray descriptors, bool useProvidedKeypoints=false ) const

.. ocv:pyfunction:: cv2.ORB.detect(image[, mask]) -> keypoints

.. ocv:pyfunction:: cv2.ORB.compute(image, keypoints[, descriptors]) -> keypoints, descriptors

.. ocv:pyfunction:: cv2.ORB.detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors

    :param image: The input 8-bit grayscale image.

    :param mask: The operation mask.

    :param keypoints: The output vector of keypoints.

    :param descriptors: The output descriptors. Pass ``cv::noArray()`` if you do not need them.

    :param useProvidedKeypoints: If it is true, the method uses the provided vector of keypoints instead of detecting them.

BRISK
-----
.. ocv:class:: BRISK : public Feature2D

Class implementing the BRISK keypoint detector and descriptor extractor, described in [LCS11]_.

.. [LCS11] Stefan Leutenegger, Margarita Chli and Roland Siegwart: BRISK: Binary Robust Invariant Scalable Keypoints. ICCV 2011: 2548-2555.

BRISK::BRISK
------------
The BRISK constructor

.. ocv:function:: BRISK::BRISK(int thresh=30, int octaves=3, float patternScale=1.0f)

.. ocv:pyfunction:: cv2.BRISK([, thresh[, octaves[, patternScale]]]) -> <BRISK object>

    :param thresh: FAST/AGAST detection threshold score.

    :param octaves: detection octaves. Use 0 to do single scale.

    :param patternScale: apply this scale to the pattern used for sampling the neighbourhood of a keypoint.

BRISK::BRISK
------------
The BRISK constructor for a custom pattern

.. ocv:function:: BRISK::BRISK(std::vector<float> &radiusList, std::vector<int> &numberList, float dMax=5.85f, float dMin=8.2f, std::vector<int> indexChange=std::vector<int>())

.. ocv:pyfunction:: cv2.BRISK(radiusList, numberList[, dMax[, dMin[, indexChange]]]) -> <BRISK object>

    :param radiusList: defines the radii (in pixels) where the samples around a keypoint are taken (for keypoint scale 1).

    :param numberList: defines the number of sampling points on the sampling circle. Must be the same size as ``radiusList``.

    :param dMax: threshold for the short pairings used for descriptor formation (in pixels for keypoint scale 1).

    :param dMin: threshold for the long pairings used for orientation determination (in pixels for keypoint scale 1).

    :param indexChange: index remapping of the bits.

BRISK::operator()
-----------------
Finds keypoints in an image and computes their descriptors

.. ocv:function:: void BRISK::operator()(InputArray image, InputArray mask, vector<KeyPoint>& keypoints, OutputArray descriptors, bool useProvidedKeypoints=false ) const

.. ocv:pyfunction:: cv2.BRISK.detect(image[, mask]) -> keypoints

.. ocv:pyfunction:: cv2.BRISK.compute(image, keypoints[, descriptors]) -> keypoints, descriptors

.. ocv:pyfunction:: cv2.BRISK.detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors

    :param image: The input 8-bit grayscale image.

    :param mask: The operation mask.

    :param keypoints: The output vector of keypoints.

    :param descriptors: The output descriptors. Pass ``cv::noArray()`` if you do not need them.

    :param useProvidedKeypoints: If it is true, the method uses the provided vector of keypoints instead of detecting them.

KAZE
----
.. ocv:class:: KAZE : public Feature2D

Class implementing the KAZE keypoint detector and descriptor extractor, described in [ABD12]_. ::

    class CV_EXPORTS_W KAZE : public Feature2D
    {
    public:
        CV_WRAP KAZE();
        CV_WRAP explicit KAZE(bool extended, bool upright, float threshold = 0.001f,
                              int octaves = 4, int sublevels = 4, int diffusivity = DIFF_PM_G2);
    };

.. note:: The KAZE descriptor can only be used with KAZE or AKAZE keypoints.

.. [ABD12] KAZE Features. Pablo F. Alcantarilla, Adrien Bartoli and Andrew J. Davison. In European Conference on Computer Vision (ECCV), Florence, Italy, October 2012.

KAZE::KAZE
----------
The KAZE constructor

.. ocv:function:: KAZE::KAZE(bool extended, bool upright, float threshold, int octaves, int sublevels, int diffusivity)

    :param extended: Set to enable extraction of the extended (128-element) descriptor.

    :param upright: Set to enable use of upright descriptors (not rotation invariant).

    :param threshold: Detector response threshold to accept a point.

    :param octaves: Maximum octave evolution of the image.

    :param sublevels: Default number of sublevels per scale level.

    :param diffusivity: Diffusivity type. ``DIFF_PM_G1``, ``DIFF_PM_G2``, ``DIFF_WEICKERT`` or ``DIFF_CHARBONNIER``.

AKAZE
-----
.. ocv:class:: AKAZE : public Feature2D

Class implementing the AKAZE keypoint detector and descriptor extractor, described in [ANB13]_. ::

    class CV_EXPORTS_W AKAZE : public Feature2D
    {
    public:
        CV_WRAP AKAZE();
        CV_WRAP explicit AKAZE(int descriptor_type, int descriptor_size = 0, int descriptor_channels = 3,
                               float threshold = 0.001f, int octaves = 4, int sublevels = 4, int diffusivity = DIFF_PM_G2);
    };

.. note:: AKAZE descriptors can only be used with KAZE or AKAZE keypoints. For performance reasons, prefer *operator()* over separate calls to *detect* and *extract*.

.. [ANB13] Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. Pablo F. Alcantarilla, Jesús Nuevo and Adrien Bartoli. In British Machine Vision Conference (BMVC), Bristol, UK, September 2013.

AKAZE::AKAZE
------------
The AKAZE constructor

.. ocv:function:: AKAZE::AKAZE(int descriptor_type, int descriptor_size, int descriptor_channels, float threshold, int octaves, int sublevels, int diffusivity)

    :param descriptor_type: Type of the extracted descriptor: ``DESCRIPTOR_KAZE``, ``DESCRIPTOR_KAZE_UPRIGHT``, ``DESCRIPTOR_MLDB`` or ``DESCRIPTOR_MLDB_UPRIGHT``.

    :param descriptor_size: Size of the descriptor in bits. 0 -> full size.

    :param descriptor_channels: Number of channels in the descriptor (1, 2, 3).

    :param threshold: Detector response threshold to accept a point.

    :param octaves: Maximum octave evolution of the image.

    :param sublevels: Default number of sublevels per scale level.

    :param diffusivity: Diffusivity type. ``DIFF_PM_G1``, ``DIFF_PM_G2``, ``DIFF_WEICKERT`` or ``DIFF_CHARBONNIER``.

SIFT
----

.. ocv:class:: SIFT : public Feature2D

The SIFT algorithm has been moved to the opencv_contrib/xfeatures2d module.

@@ -1,15 +0,0 @@
*********************************
features2d. 2D Features Framework
*********************************

.. highlight:: cpp

.. toctree::
    :maxdepth: 2

    feature_detection_and_description
    common_interfaces_of_feature_detectors
    common_interfaces_of_descriptor_extractors
    common_interfaces_of_descriptor_matchers
    drawing_function_of_keypoints_and_matches
    object_categorization

@@ -1,213 +0,0 @@
Object Categorization
=====================

.. highlight:: cpp

This section describes approaches based on local 2D features, used to categorize objects.

.. note::

   * A complete Bag-Of-Words sample can be found at opencv_source_code/samples/cpp/bagofwords_classification.cpp

   * (Python) An example using the features2D framework to perform object categorization can be found at opencv_source_code/samples/python2/find_obj.py

BOWTrainer
----------
.. ocv:class:: BOWTrainer

Abstract base class for training the *bag of visual words* vocabulary from a set of descriptors.
For details, see, for example, *Visual Categorization with Bags of Keypoints* by Gabriella Csurka, Christopher R. Dance,
Lixin Fan, Jutta Willamowski, Cedric Bray, 2004. ::

    class BOWTrainer
    {
    public:
        BOWTrainer(){}
        virtual ~BOWTrainer(){}

        void add( const Mat& descriptors );
        const vector<Mat>& getDescriptors() const;
        int descriptorsCount() const;

        virtual void clear();

        virtual Mat cluster() const = 0;
        virtual Mat cluster( const Mat& descriptors ) const = 0;

    protected:
        ...
    };

BOWTrainer::add
-------------------
Adds descriptors to a training set.

.. ocv:function:: void BOWTrainer::add( const Mat& descriptors )

    :param descriptors: Descriptors to add to the training set. Each row of the ``descriptors`` matrix is a descriptor.

The training set is clustered using the ``cluster`` method to construct the vocabulary.

BOWTrainer::getDescriptors
------------------------------
Returns the training set of descriptors.

.. ocv:function:: const vector<Mat>& BOWTrainer::getDescriptors() const


BOWTrainer::descriptorsCount
---------------------------------
Returns the count of all descriptors stored in the training set.

.. ocv:function:: int BOWTrainer::descriptorsCount() const


BOWTrainer::cluster
-----------------------
Clusters train descriptors.

.. ocv:function:: Mat BOWTrainer::cluster() const

.. ocv:function:: Mat BOWTrainer::cluster( const Mat& descriptors ) const

    :param descriptors: Descriptors to cluster. Each row of the ``descriptors`` matrix is a descriptor. The descriptors are not added to the inner train descriptor set.

The vocabulary consists of cluster centers, so this method returns the vocabulary. In the first variant of the method, the train descriptors stored in the object are clustered. In the second variant, the input descriptors are clustered.

BOWKMeansTrainer
----------------
.. ocv:class:: BOWKMeansTrainer : public BOWTrainer

:ocv:func:`kmeans` -based class to train a visual vocabulary using the *bag of visual words* approach. ::

    class BOWKMeansTrainer : public BOWTrainer
    {
    public:
        BOWKMeansTrainer( int clusterCount, const TermCriteria& termcrit=TermCriteria(),
                          int attempts=3, int flags=KMEANS_PP_CENTERS );
        virtual ~BOWKMeansTrainer(){}

        // Returns trained vocabulary (i.e. cluster centers).
        virtual Mat cluster() const;
        virtual Mat cluster( const Mat& descriptors ) const;

    protected:
        ...
    };

BOWKMeansTrainer::BOWKMeansTrainer
----------------------------------

The constructor.

.. ocv:function:: BOWKMeansTrainer::BOWKMeansTrainer( int clusterCount, const TermCriteria& termcrit=TermCriteria(), int attempts=3, int flags=KMEANS_PP_CENTERS )

See :ocv:func:`kmeans` function parameters.

BOWImgDescriptorExtractor
-------------------------
.. ocv:class:: BOWImgDescriptorExtractor

Class to compute an image descriptor using the *bag of visual words*. Such a computation consists of the following steps:

#. Compute descriptors for a given image and its keypoints set.
#. Find the nearest visual words from the vocabulary for each keypoint descriptor.
#. Compute the bag-of-words image descriptor as a normalized histogram of vocabulary words encountered in the image. The ``i``-th bin of the histogram is the frequency of the ``i``-th word of the vocabulary in the given image.

The class declaration is the following: ::

    class BOWImgDescriptorExtractor
    {
    public:
        BOWImgDescriptorExtractor( const Ptr<DescriptorExtractor>& dextractor,
                                   const Ptr<DescriptorMatcher>& dmatcher );
        BOWImgDescriptorExtractor( const Ptr<DescriptorMatcher>& dmatcher );
        virtual ~BOWImgDescriptorExtractor(){}

        void setVocabulary( const Mat& vocabulary );
        const Mat& getVocabulary() const;
        void compute( InputArray image, vector<KeyPoint>& keypoints,
                      OutputArray imgDescriptor,
                      vector<vector<int> >* pointIdxsOfClusters=0,
                      Mat* descriptors=0 );
        void compute( InputArray descriptors, OutputArray imgDescriptor,
                      std::vector<std::vector<int> >* pointIdxsOfClusters=0 );
        int descriptorSize() const;
        int descriptorType() const;

    protected:
        ...
    };


BOWImgDescriptorExtractor::BOWImgDescriptorExtractor
--------------------------------------------------------
The constructor.

.. ocv:function:: BOWImgDescriptorExtractor::BOWImgDescriptorExtractor( const Ptr<DescriptorExtractor>& dextractor, const Ptr<DescriptorMatcher>& dmatcher )

.. ocv:function:: BOWImgDescriptorExtractor::BOWImgDescriptorExtractor( const Ptr<DescriptorMatcher>& dmatcher )

    :param dextractor: Descriptor extractor that is used to compute descriptors for an input image and its keypoints.

    :param dmatcher: Descriptor matcher that is used to find the nearest word of the trained vocabulary for each keypoint descriptor of the image.


BOWImgDescriptorExtractor::setVocabulary
--------------------------------------------
Sets a visual vocabulary.

.. ocv:function:: void BOWImgDescriptorExtractor::setVocabulary( const Mat& vocabulary )

    :param vocabulary: Vocabulary (can be trained using an inheritor of :ocv:class:`BOWTrainer` ). Each row of the vocabulary is a visual word (cluster center).


BOWImgDescriptorExtractor::getVocabulary
--------------------------------------------
Returns the set vocabulary.

.. ocv:function:: const Mat& BOWImgDescriptorExtractor::getVocabulary() const


BOWImgDescriptorExtractor::compute
--------------------------------------
Computes an image descriptor using the set visual vocabulary.

.. ocv:function:: void BOWImgDescriptorExtractor::compute( InputArray image, vector<KeyPoint>& keypoints, OutputArray imgDescriptor, vector<vector<int> >* pointIdxsOfClusters=0, Mat* descriptors=0 )

.. ocv:function:: void BOWImgDescriptorExtractor::compute( InputArray keypointDescriptors, OutputArray imgDescriptor, std::vector<std::vector<int> >* pointIdxsOfClusters=0 )

    :param image: Image, for which the descriptor is computed.

    :param keypoints: Keypoints detected in the input image.

    :param keypointDescriptors: Computed descriptors to match with the vocabulary.

    :param imgDescriptor: Computed output image descriptor.

    :param pointIdxsOfClusters: Indices of keypoints that belong to each cluster: ``pointIdxsOfClusters[i]`` holds the indices of the keypoints that belong to the ``i``-th cluster (word of the vocabulary). Returned if it is non-zero.

    :param descriptors: Descriptors of the image keypoints. Returned if it is non-zero.


BOWImgDescriptorExtractor::descriptorSize
---------------------------------------------
Returns an image descriptor size if the vocabulary is set. Otherwise, it returns 0.

.. ocv:function:: int BOWImgDescriptorExtractor::descriptorSize() const


BOWImgDescriptorExtractor::descriptorType
---------------------------------------------
Returns an image descriptor type.

.. ocv:function:: int BOWImgDescriptorExtractor::descriptorType() const
