Normalize whitespace in documentation and text files

This commit is contained in:
Andrey Kamaev
2012-10-17 21:42:09 +04:00
parent 9337246867
commit 0e7ca71dcc
95 changed files with 1238 additions and 1238 deletions


@@ -78,7 +78,7 @@ First, we create an instance of a keypoint detector. All detectors inherit the a
extractor.compute(img1, keypoints1, descriptors1);
extractor.compute(img2, keypoints2, descriptors2);
We create an instance of a descriptor extractor. Most of the OpenCV descriptors inherit the ``DescriptorExtractor`` abstract interface. Then we compute descriptors for each of the keypoints. The output ``Mat`` of the ``DescriptorExtractor::compute`` method contains a descriptor in row *i* for the *i*-th keypoint. Note that the method can modify the keypoints vector by removing keypoints for which a descriptor is not defined (usually these are the keypoints near the image border). The method makes sure that the output keypoints and descriptors are consistent with each other (so that the number of keypoints is equal to the descriptors row count). ::
// matching descriptors
BruteForceMatcher<L2<float> > matcher;


@@ -13,7 +13,7 @@ Images
Load an image from a file: ::
Mat img = imread(filename)
If you read a jpg file, a 3 channel image is created by default. If you need a grayscale image, use: ::
Mat img = imread(filename, 0);
@@ -23,14 +23,14 @@ If you read a jpg file, a 3 channel image is created by default. If you need a g
Save an image to a file: ::
imwrite(filename, img);
.. note:: The format of the file is determined by its extension.
.. note:: Use ``imdecode`` and ``imencode`` to read and write an image from/to memory rather than a file.
XML/YAML
--------
TBD
Basic operations with images
@@ -85,7 +85,7 @@ Memory management and reference counting
// .. fill the array
Mat pointsMat = Mat(points).reshape(1);
As a result, we get a 32FC1 matrix with 3 columns instead of a 32FC3 matrix with 1 column. ``pointsMat`` uses data from ``points`` and will not deallocate the memory when destroyed. In this particular instance, however, the developer has to make sure that the lifetime of ``points`` is longer than that of ``pointsMat``.
If we need to copy the data, this is done using, for example, ``Mat::copyTo`` or ``Mat::clone``: ::
Mat img = imread("image.jpg");
@@ -115,7 +115,7 @@ A convertion from \texttt{Mat} to C API data structures: ::
IplImage img1 = img;
CvMat m = img;
Note that there is no data copying here.
Conversion from color to grayscale: ::


@@ -6,7 +6,7 @@ Cascade Classifier Training
Introduction
============
Working with a cascade classifier includes two major stages: training and detection.
The detection stage is described in the documentation of the ``objdetect`` module of the general OpenCV documentation, which gives some basic information about the cascade classifier.
The current guide describes how to train a cascade classifier: preparation of the training data and running the training application.
@@ -18,10 +18,10 @@ There are two applications in OpenCV to train cascade classifier: ``opencv_haart
Note that ``opencv_traincascade`` application can use TBB for multi-threading. To use it in multicore mode OpenCV must be built with TBB.
There are also some auxiliary utilities related to the training.
* ``opencv_createsamples`` is used to prepare a training dataset of positive and test samples. ``opencv_createsamples`` produces a dataset of positive samples in a format that is supported by both the ``opencv_haartraining`` and ``opencv_traincascade`` applications. The output is a file with the \*.vec extension; it is a binary format which contains images.
* ``opencv_performance`` may be used to evaluate the quality of classifiers, but only for those trained by ``opencv_haartraining``. It takes a collection of marked-up images, runs the classifier and reports the performance, i.e. the number of found objects, number of missed objects, number of false alarms and other information.
Since ``opencv_haartraining`` is an obsolete application, only ``opencv_traincascade`` will be described further. The ``opencv_createsamples`` utility is needed to prepare the training data for ``opencv_traincascade``, so it will be described too.
@@ -36,7 +36,7 @@ Negative Samples
Negative samples are taken from arbitrary images. These images must not contain detected objects. Negative samples are enumerated in a special file. It is a text file in which each line contains the filename (relative to the directory of the description file) of a negative sample image. This file must be created manually. Note that negative samples and sample images are also called background samples or background sample images, and these terms are used interchangeably in this document. The described images may be of different sizes, but each image should be (though not necessarily) larger than the training window size, because these images are used to subsample a negative image to the training size.
An example of a description file:
Directory structure:
.. code-block:: text
@@ -45,14 +45,14 @@ Directory structure:
img1.jpg
img2.jpg
bg.txt
File bg.txt:
.. code-block:: text
img/img1.jpg
img/img2.jpg
Positive Samples
----------------
Positive samples are created by ``opencv_createsamples`` utility. They may be created from a single image with object or from a collection of previously marked up images.
@@ -66,37 +66,37 @@ Command line arguments:
* ``-vec <vec_file_name>``
Name of the output file containing the positive samples for training.
* ``-img <image_file_name>``
Source object image (e.g., a company logo).
* ``-bg <background_file_name>``
Background description file; contains a list of images which are used as a background for randomly distorted versions of the object.
* ``-num <number_of_samples>``
Number of positive samples to generate.
* ``-bgcolor <background_color>``
Background color (currently grayscale images are assumed); the background color denotes the transparent color. Since there might be compression artifacts, the amount of color tolerance can be specified by ``-bgthresh``. All pixels within the ``bgcolor-bgthresh`` and ``bgcolor+bgthresh`` range are interpreted as transparent.
* ``-bgthresh <background_color_threshold>``
* ``-inv``
If specified, colors will be inverted.
* ``-randinv``
If specified, colors will be inverted randomly.
* ``-maxidev <max_intensity_deviation>``
Maximal intensity deviation of pixels in foreground samples.
* ``-maxxangle <max_x_rotation_angle>``
* ``-maxyangle <max_y_rotation_angle>``
@@ -104,15 +104,15 @@ Command line arguments:
* ``-maxzangle <max_z_rotation_angle>``
Maximum rotation angles must be given in radians.
* ``-show``
Useful debugging option. If specified, each sample will be shown. Pressing ``Esc`` will continue the samples creation process without showing further samples.
* ``-w <sample_width>``
Width (in pixels) of the output samples.
* ``-h <sample_height>``
Height (in pixels) of the output samples.
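Putting the arguments above together, a hypothetical single-image invocation might look like the following (all file names and numeric values here are illustrative, not defaults):

```shell
# generate 1000 distorted 24x24 samples of logo.png over the backgrounds
# listed in bg.txt; treat intensity 0 (with tolerance 8) as transparent
opencv_createsamples -img logo.png -bg bg.txt -vec samples.vec \
    -num 1000 -bgcolor 0 -bgthresh 8 \
    -maxxangle 1.1 -maxyangle 1.1 -maxzangle 0.5 \
    -maxidev 40 -w 24 -h 24
```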
@@ -123,7 +123,7 @@ The source image is rotated randomly around all three axes. The chosen angle is
Positive samples may also be obtained from a collection of previously marked-up images. This collection is described by a text file similar to the background description file. Each line of this file corresponds to an image. The first element of the line is the filename, followed by the number of object instances. The following numbers are the coordinates of the object bounding rectangles (x, y, width, height).
An example of a description file:
Directory structure:
.. code-block:: text
@@ -132,27 +132,27 @@ Directory structure:
img1.jpg
img2.jpg
info.dat
File info.dat:
.. code-block:: text
img/img1.jpg 1 140 100 45 45
img/img2.jpg 2 100 200 50 50 50 30 25 25
Image img1.jpg contains a single object instance with the following coordinates of its bounding rectangle: (140, 100, 45, 45). Image img2.jpg contains two object instances.
In order to create positive samples from such collection, ``-info`` argument should be specified instead of ``-img``:
* ``-info <collection_file_name>``
Description file of marked up images collection.
The scheme of samples creation in this case is as follows: the object instances are taken from the images, resized to the target sample size, and stored in the output vec-file. No distortion is applied, so the only relevant arguments are ``-w``, ``-h``, ``-show`` and ``-num``.
The ``opencv_createsamples`` utility may also be used for examining samples stored in a positive samples file. In order to do this, only the ``-vec``, ``-w`` and ``-h`` parameters should be specified.
Note that for training, it does not matter how vec-files with positive samples are generated. But the ``opencv_createsamples`` utility is the only way provided by OpenCV to collect/create a vector file of positive samples.
An example vec-file is available at ``opencv/data/vec_files/trainingfaces_24-24.vec``. It can be used to train a face detector with the following window size: ``-w 24 -h 24``.
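As a sketch of the two uses described above (file names are illustrative), first creating a vec-file from a marked-up collection and then examining it:

```shell
# pack the object instances listed in info.dat into samples.vec,
# resizing each to 24x24; no distortion is applied in -info mode
opencv_createsamples -info info.dat -vec samples.vec -num 1000 -w 24 -h 24

# view the stored samples; only -vec, -w and -h are needed
opencv_createsamples -vec samples.vec -w 24 -h 24
```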
@@ -165,99 +165,99 @@ Command line arguments of ``opencv_traincascade`` application grouped by purpose
#.
Common arguments:
* ``-data <cascade_dir_name>``
Where the trained classifier should be stored.
* ``-vec <vec_file_name>``
vec-file with positive samples (created by ``opencv_createsamples`` utility).
* ``-bg <background_file_name>``
Background description file.
* ``-numPos <number_of_positive_samples>``
* ``-numNeg <number_of_negative_samples>``
Number of positive/negative samples used in training for every classifier stage.
* ``-numStages <number_of_stages>``
Number of cascade stages to be trained.
* ``-precalcValBufSize <precalculated_vals_buffer_size_in_Mb>``
Size of buffer for precalculated feature values (in Mb).
* ``-precalcIdxBufSize <precalculated_idxs_buffer_size_in_Mb>``
Size of buffer for precalculated feature indices (in Mb). The more memory you have, the faster the training process.
* ``-baseFormatSave``
This argument is relevant only for Haar-like features. If it is specified, the cascade will be saved in the old format.
#.
Cascade parameters:
* ``-stageType <BOOST(default)>``
Type of stages. Only boosted classifiers are supported as a stage type at the moment.
* ``-featureType <{HAAR(default), LBP}>``
Type of features: ``HAAR`` - Haar-like features, ``LBP`` - local binary patterns.
* ``-w <sampleWidth>``
* ``-h <sampleHeight>``
Size of training samples (in pixels). Must have exactly the same values as used during training samples creation (``opencv_createsamples`` utility).
#.
Boosted classifier parameters:
* ``-bt <{DAB, RAB, LB, GAB(default)}>``
Type of boosted classifiers: ``DAB`` - Discrete AdaBoost, ``RAB`` - Real AdaBoost, ``LB`` - LogitBoost, ``GAB`` - Gentle AdaBoost.
* ``-minHitRate <min_hit_rate>``
Minimal desired hit rate for each stage of the classifier. Overall hit rate may be estimated as (min_hit_rate^number_of_stages).
* ``-maxFalseAlarmRate <max_false_alarm_rate>``
Maximal desired false alarm rate for each stage of the classifier. Overall false alarm rate may be estimated as (max_false_alarm_rate^number_of_stages).
* ``-weightTrimRate <weight_trim_rate>``
Specifies whether trimming should be used and its weight. A decent choice is 0.95.
* ``-maxDepth <max_depth_of_weak_tree>``
Maximal depth of a weak tree. A decent choice is 1, that is, the case of stumps.
* ``-maxWeakCount <max_weak_tree_count>``
Maximal count of weak trees for every cascade stage. The boosted classifier (stage) will have as many weak trees (``<=maxWeakCount``) as needed to achieve the given ``-maxFalseAlarmRate``.
#.
Haar-like feature parameters:
* ``-mode <BASIC (default) | CORE | ALL>``
Selects the type of Haar feature set used in training. ``BASIC`` uses only the upright feature set, while ``ALL`` uses the full set of upright and 45-degree rotated features. See [Rainer2002]_ for more details.
#.
Local Binary Patterns parameters:
Local Binary Patterns don't have parameters.
After the ``opencv_traincascade`` application has finished its work, the trained cascade will be saved in the ``cascade.xml`` file in the folder which was passed as the ``-data`` parameter. The other files in this folder are created for the case of interrupted training, so you may delete them after the training is complete.
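As an illustrative sketch combining the arguments above (directory and file names, and all numeric values, are made up for the example):

```shell
# the -data directory must exist before training starts
mkdir -p cascade_dir
opencv_traincascade -data cascade_dir -vec samples.vec -bg bg.txt \
    -numPos 900 -numNeg 500 -numStages 15 -w 24 -h 24 \
    -featureType LBP -minHitRate 0.995 -maxFalseAlarmRate 0.5
```

Note that ``-numPos`` is typically set somewhat below the total number of samples stored in the vec-file, since some positive samples may be rejected between stages.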