Doxygen tutorials: warnings cleared

Maksim Shabunin 2014-11-27 19:54:13 +03:00
parent 8375182e34
commit c5536534d8
64 changed files with 889 additions and 1659 deletions

View File

@@ -199,7 +199,7 @@ if(BUILD_DOCS AND HAVE_DOXYGEN)
set(tutorial_path "${CMAKE_CURRENT_SOURCE_DIR}/tutorials")
string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_INPUT_LIST "${rootfile} ; ${paths_include} ; ${paths_doc} ; ${tutorial_path}")
string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_IMAGE_PATH "${paths_doc} ; ${tutorial_path}")
-string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_EXAMPLE_PATH "${CMAKE_SOURCE_DIR}/samples/cpp ; ${paths_doc}")
+string(REPLACE ";" " \\\n" CMAKE_DOXYGEN_EXAMPLE_PATH "${CMAKE_SOURCE_DIR}/samples ; ${paths_doc}")
set(CMAKE_DOXYGEN_LAYOUT "${CMAKE_CURRENT_SOURCE_DIR}/DoxygenLayout.xml")
set(CMAKE_DOXYGEN_OUTPUT_PATH "doxygen")
set(CMAKE_EXTRA_BIB_FILES "${bibfile} ${paths_bib}")

View File

@@ -91,7 +91,7 @@ directory mentioned above.
The application starts up with reading the settings from the configuration file. Although this is
an important part of it, it has nothing to do with the subject of this tutorial: *camera
calibration*. Therefore, I've chosen not to post the code for that part here. Technical background
-on how to do this you can find in the @ref fileInputOutputXMLYAML tutorial.
+on how to do this you can find in the @ref tutorial_file_input_output_with_xml_yml tutorial.
Explanation
-----------
@@ -486,4 +486,3 @@ here](https://www.youtube.com/watch?v=ViPN810E0SU).
<iframe title="Camera calibration With OpenCV - Chessboard or asymmetrical circle pattern." width="560" height="349" src="http://www.youtube.com/embed/ViPN810E0SU?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
\endhtmlonly

View File

@@ -51,8 +51,7 @@ plane:
\f[s\ \left [ \begin{matrix} u \\ v \\ 1 \end{matrix} \right ] = \left [ \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right ] \left [ \begin{matrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{matrix} \right ] \left [ \begin{matrix} X \\ Y \\ Z \\ 1 \end{matrix} \right ]\f]
-The complete documentation of how to manage with this equations is in @ref cv::Camera Calibration
-and 3D Reconstruction .
+The complete documentation of how to work with these equations is in @ref calib3d.
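To make the pinhole equation above concrete, here is a minimal sketch that projects a single 3D point through hypothetical intrinsics with @ref cv::projectPoints (the camera matrix values and the zero pose are illustrative, not the tutorial's):
@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>

int main()
{
    // Hypothetical intrinsics: f_x = f_y = 800, principal point (320, 240)
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800,   0, 320,
                                             0, 800, 240,
                                             0,   0,   1);
    // Identity rotation (as a Rodrigues vector) and zero translation
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);

    std::vector<cv::Point3f> object(1, cv::Point3f(0.f, 0.f, 1.f));
    std::vector<cv::Point2f> image;
    cv::projectPoints(object, rvec, tvec, K, cv::noArray(), image);

    // A point on the optical axis lands on (c_x, c_y) = (320, 240)
    std::cout << image[0] << std::endl;
    return 0;
}
@endcode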
Source code
-----------
@@ -80,7 +79,7 @@ then uses the mesh along with the [Möller–Trumbore intersection
algorithm](http://en.wikipedia.org/wiki/M%C3%B6ller%E2%80%93Trumbore_intersection_algorithm)
to compute the 3D coordinates of the found features. Finally, the 3D points and the descriptors
are stored in different lists in a YAML file in which each row is a different point. The
-technical background on how to store the files can be found in the @ref fileInputOutputXMLYAML
+technical background on how to store the files can be found in the @ref tutorial_file_input_output_with_xml_yml
tutorial.
![image](images/registration.png)
@@ -91,9 +90,9 @@ The aim of this application is to estimate in real time the object pose given its 3
The application starts up loading the 3D textured model in YAML file format with the same
structure explained in the model registration program. From the scene, the ORB features and
-descriptors are detected and extracted. Then, is used @ref cv::FlannBasedMatcher with @ref
-cv::LshIndexParams to do the matching between the scene descriptors and the model descriptors.
-Using the found matches along with @ref cv::solvePnPRansac function the @ref cv::R` and \f$t\f$ of
+descriptors are detected and extracted. Then, @ref cv::FlannBasedMatcher with
+@ref cv::flann::GenericIndex is used to do the matching between the scene descriptors and the model descriptors.
+Using the found matches along with the @ref cv::solvePnPRansac function, the `R` and `t` of
the camera are computed. Finally, a KalmanFilter is applied in order to reject bad poses.
In the case that you compiled OpenCV with the samples, you can find it in `opencv/build/bin/cpp-tutorial-pnp_detection`.
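As a hedged sketch of that pose step (function and variable names here are illustrative, not the sample's): once 2D-3D correspondences are available, @ref cv::solvePnPRansac estimates the pose and @ref cv::Rodrigues converts its axis-angle output into the 3x3 rotation matrix `R`:
@code{.cpp}
#include <opencv2/calib3d.hpp>
#include <vector>

// Estimate R and t from matched model (3D) and scene (2D) points.
void estimatePose(const std::vector<cv::Point3f>& modelPoints3d,
                  const std::vector<cv::Point2f>& scenePoints2d,
                  const cv::Mat& K,              // intrinsic calibration matrix
                  cv::Mat& R, cv::Mat& t)
{
    cv::Mat rvec, tvec, inliers;
    cv::solvePnPRansac(modelPoints3d, scenePoints2d, K, cv::noArray(),
                       rvec, tvec,
                       false,   // no extrinsic guess
                       500,     // RANSAC iterations
                       2.0f,    // reprojection error threshold (pixels)
                       0.99,    // confidence
                       inliers);
    cv::Rodrigues(rvec, R);     // axis-angle -> 3x3 rotation matrix
    t = tvec;
}
@endcode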
@@ -242,7 +241,7 @@ implemented a *class* **RobustMatcher** which has a function for keypoints detec
extraction. You can find it in
`samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/RobusMatcher.cpp`. In your
*RobusMatch* object you can use any of the 2D feature detectors of OpenCV. In this case I used
-@ref cv::ORB features because it is based on @ref cv::FAST to detect the keypoints and @ref cv::BRIEF
+@ref cv::ORB features because it is based on @ref cv::FAST to detect the keypoints and @ref cv::xfeatures2d::BriefDescriptorExtractor
to extract the descriptors, which means that it is fast and robust to rotations. You can find more
detailed information about *ORB* in the documentation.
@@ -265,9 +264,9 @@ It is the first step in our detection algorithm. The main idea is to match the s
with our model descriptors in order to know the 3D coordinates of the found features into the
current scene.
-Firstly, we have to set which matcher we want to use. In this case is used @ref
-cv::FlannBasedMatcher matcher which in terms of computational cost is faster than the @ref
-cv::BruteForceMatcher matcher as we increase the trained collection of features. Then, for
+Firstly, we have to set which matcher we want to use. In this case the @ref cv::FlannBasedMatcher
+matcher is used, which in terms of computational cost is faster than the @ref cv::BFMatcher
+matcher as we increase the trained collection of features. Then, for
the FlannBased matcher the index created is *Multi-Probe LSH: Efficient Indexing for High-Dimensional
Similarity Search*, because *ORB* descriptors are binary.
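A minimal sketch of that matcher setup (the LSH parameter values follow the common recommendation for binary descriptors; names are illustrative, not the sample's):
@code{.cpp}
#include <opencv2/features2d.hpp>
#include <vector>

// Match scene descriptors against model descriptors over an LSH index
// (suited to binary ORB descriptors) and keep matches passing the ratio test.
void robustMatch(const cv::Mat& sceneDescriptors, const cv::Mat& modelDescriptors,
                 std::vector<cv::DMatch>& goodMatches)
{
    cv::FlannBasedMatcher matcher(
        cv::makePtr<cv::flann::LshIndexParams>(6, 12, 1), // tables, key size, multi-probe level
        cv::makePtr<cv::flann::SearchParams>(50));        // search checks

    std::vector<std::vector<cv::DMatch> > knn;
    matcher.knnMatch(sceneDescriptors, modelDescriptors, knn, 2); // two best per query

    for (size_t i = 0; i < knn.size(); ++i)
        if (knn[i].size() == 2 && knn[i][0].distance < 0.8f * knn[i][1].distance)
            goodMatches.push_back(knn[i][0]);                     // ratio test
}
@endcode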
@@ -349,8 +348,8 @@ void RobustMatcher::robustMatch( const cv::Mat& frame, std::vector<cv::DMatch>&
}
@endcode
After the matches filtering we have to extract the 2D and 3D correspondences from the found scene
-keypoints and our 3D model using the obtained *DMatches* vector. For more information about @ref
-cv::DMatch check the documentation.
+keypoints and our 3D model using the obtained *DMatches* vector. For more information about
+@ref cv::DMatch check the documentation.
@code{.cpp}
// -- Step 2: Find out the 2D/3D correspondences
@@ -385,8 +384,8 @@ solution.
For the camera pose estimation I have implemented a *class* **PnPProblem**. This *class* has 4
attributes: a given calibration matrix, the rotation matrix, the translation matrix and the
rotation-translation matrix. The intrinsic calibration parameters of the camera which you are
-using to estimate the pose are necessary. In order to obtain the parameters you can check @ref
-CameraCalibrationSquareChessBoardTutorial and @ref cameraCalibrationOpenCV tutorials.
+using to estimate the pose are necessary. In order to obtain the parameters you can check the
+@ref tutorial_camera_calibration_square_chess and @ref tutorial_camera_calibration tutorials.
The following code is how to declare the *PnPProblem class* in the main program:
@code{.cpp}
@@ -543,9 +542,9 @@ Filter will be applied after detected a given number of inliers.
You can find more information about what a [Kalman
Filter](http://en.wikipedia.org/wiki/Kalman_filter) is. In this tutorial we use the OpenCV
-implementation of the @ref cv::Kalman Filter based on [Linear Kalman Filter for position and
-orientation tracking](http://campar.in.tum.de/Chair/KalmanFilter) to set the dynamics and
-measurement models.
+implementation of the @ref cv::KalmanFilter based on
+[Linear Kalman Filter for position and orientation tracking](http://campar.in.tum.de/Chair/KalmanFilter)
+to set the dynamics and measurement models.
Firstly, we have to define our state vector which will have 18 states: the positional data (x,y,z)
with its first and second derivatives (velocity and acceleration); then rotation is added in the form
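The sample initializes that 18-state, 6-measurement filter roughly as in this sketch (the noise magnitudes shown are plausible placeholders; check the sample for the exact values):
@code{.cpp}
#include <opencv2/video/tracking.hpp>

cv::KalmanFilter KF;
const int nStates = 18;       // position, velocity, acceleration (x3) plus orientation terms
const int nMeasurements = 6;  // measured: x, y, z, roll, pitch, yaw
const int nInputs = 0;        // no control action

void initKalmanFilter()
{
    KF.init(nStates, nMeasurements, nInputs, CV_64F);
    cv::setIdentity(KF.processNoiseCov, cv::Scalar::all(1e-5));     // process noise
    cv::setIdentity(KF.measurementNoiseCov, cv::Scalar::all(1e-4)); // measurement noise
    cv::setIdentity(KF.errorCovPost, cv::Scalar::all(1));           // initial error covariance
    // The transition and measurement matrices encoding the constant-acceleration
    // dynamics are filled in afterwards (see the tutorial's full listing).
}
@endcode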
@@ -796,4 +795,3 @@ here](http://www.youtube.com/user/opencvdev/videos).
<iframe title="Pose estimation of textured object using OpenCV in cluttered background" width="560" height="349" src="http://www.youtube.com/embed/YLS9bWek78k?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
\endhtmlonly

View File

@@ -83,29 +83,22 @@ Explanation
src2 = imread("../../images/WindowsLogo.jpg");
@endcode
**warning**
Since we are *adding* *src1* and *src2*, they both have to be of the same size (width and
height) and type.
-2. Now we need to generate the @ref cv::g(x)` image. For this, the function
-   add_weighted:addWeighted comes quite handy:
-   .. code-block:: cpp
-      beta = ( 1.0 - alpha );
-      addWeighted( src1, alpha, src2, beta, 0.0, dst);
-   since @ref cv::addWeighted produces:
-   .. math::
-      dst = \\alpha \\cdot src1 + \\beta \\cdot src2 + \\gamma
-   In this case, :math:`gamma` is the argument \f$0.0\f$ in the code above.
+2. Now we need to generate the `g(x)` image. For this, the function @ref cv::addWeighted comes quite handy:
+   @code{.cpp}
+   beta = ( 1.0 - alpha );
+   addWeighted( src1, alpha, src2, beta, 0.0, dst);
+   @endcode
+   since @ref cv::addWeighted produces:
+   \f[dst = \alpha \cdot src1 + \beta \cdot src2 + \gamma\f]
+   In this case, `gamma` is the argument \f$0.0\f$ in the code above.
3. Create windows, show the images and wait for the user to end the program.
Result
------
-![image](images/Adding_Images_Tutorial_Result_0.jpg)
+![image](images/Adding_Images_Tutorial_Result_Big.jpg)
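Assembling the fragments above, a complete minimal version of the blending program could look like this (a sketch, not the sample's exact listing; the image paths are the ones used in the tutorial):
@code{.cpp}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    double alpha = 0.5;
    Mat src1 = imread("../../images/LinuxLogo.jpg");
    Mat src2 = imread("../../images/WindowsLogo.jpg");
    if (src1.empty() || src2.empty())
    { std::cout << "Error loading images" << std::endl; return -1; }

    double beta = 1.0 - alpha;
    Mat dst;
    addWeighted(src1, alpha, src2, beta, 0.0, dst); // dst = alpha*src1 + beta*src2 + 0.0

    imshow("Linear Blend", dst);
    waitKey(0);
    return 0;
}
@endcode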

View File

@@ -115,6 +115,6 @@ Explanation
Result
=======
-.. image:: images/Adding_Images_Tutorial_Result_0.jpg
+.. image:: images/Adding_Images_Tutorial_Result_Big.jpg
   :alt: Blending Images Tutorial - Final Result
   :align: center

View File

@@ -97,6 +97,7 @@ int main( int argc, char** argv )
return 0;
}
@endcode
Explanation
-----------
@@ -134,11 +135,10 @@ Explanation
}
@endcode
Notice the following:
- To access each pixel in the images we are using this syntax: *image.at\<Vec3b\>(y,x)[c]*
  where *y* is the row, *x* is the column and *c* is R, G or B (0, 1 or 2).
-- Since the operation @ref cv::alpha cdot p(i,j) + beta` can give values out of range or not
-  integers (if \f$\alpha\f$ is float), we use :saturate_cast:`saturate_cast to make sure the
+- Since the operation \f$\alpha \cdot p(i,j) + \beta\f$ can give values out of range or not
+  integers (if \f$\alpha\f$ is float), we use cv::saturate_cast to make sure the
  values are valid.
5. Finally, we create windows and show the images, the usual way.
@@ -151,12 +151,13 @@ Explanation
waitKey(0);
@endcode
@note
Instead of using the **for** loops to access each pixel, we could have simply used this command:
@code{.cpp}
image.convertTo(new_image, -1, alpha, beta);
@endcode
-where @ref cv::convertTo would effectively perform *new_image = a*image + beta*. However, we
+where @ref cv::Mat::convertTo would effectively perform \f$new\_image = \alpha \cdot image + \beta\f$. However, we
wanted to show you how to access each pixel. In any case, both methods give the same result but
convertTo is more optimized and works a lot faster.
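To make the equivalence concrete, here is a hedged sketch showing both approaches side by side (the values of `alpha` and `beta` are illustrative; a 3-channel 8-bit input is assumed):
@code{.cpp}
#include <opencv2/opencv.hpp>
using namespace cv;

void contrastBrightness(const Mat& image)   // assumes a CV_8UC3 image
{
    double alpha = 2.2; // contrast control
    int beta = 50;      // brightness control

    // Manual per-pixel version with saturation:
    Mat new_image = Mat::zeros(image.size(), image.type());
    for (int y = 0; y < image.rows; y++)
        for (int x = 0; x < image.cols; x++)
            for (int c = 0; c < 3; c++)
                new_image.at<Vec3b>(y, x)[c] =
                    saturate_cast<uchar>(alpha * image.at<Vec3b>(y, x)[c] + beta);

    // Same result in one optimized call; -1 keeps the source depth:
    Mat new_image2;
    image.convertTo(new_image2, -1, alpha, beta);
}
@endcode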
@@ -171,8 +172,7 @@ Result
* Enter the alpha value [1.0-3.0]: 2.2
* Enter the beta value [0-100]: 50
@endcode
- We get this:
-![image](images/Basic_Linear_Transform_Tutorial_Result_0.jpg)
+![image](images/Basic_Linear_Transform_Tutorial_Result_big.jpg)

View File

@@ -204,6 +204,6 @@ Result
* We get this:
-.. image:: images/Basic_Linear_Transform_Tutorial_Result_0.jpg
+.. image:: images/Basic_Linear_Transform_Tutorial_Result_big.jpg
   :alt: Basic Linear Transform - Final Result
   :align: center

View File

@@ -79,10 +79,12 @@ double t = (double)getTickCount();
t = ((double)getTickCount() - t)/getTickFrequency();
cout << "Times passed in seconds: " << t << endl;
@endcode
+@anchor tutorial_how_to_scan_images_storing
How is the image matrix stored in memory?
---------------------------------------------
-As you could already read in my @ref matTheBasicImageContainer tutorial the size of the matrix
+As you could already read in my @ref tutorial_mat_the_basic_image_container tutorial the size of the matrix
depends on the color system used. More accurately, it depends on the number of channels used. In
case of a gray scale image we have something like:
@@ -110,7 +112,7 @@ Row n & \tabIt{n,0} & \tabIt{n,1} & \tabIt{n,...} & \tabIt{n, m} \\
Note that the order of the channels is inverse: BGR instead of RGB. Because in many cases the memory
is large enough to store the rows in a successive fashion the rows may follow one after another,
creating a single long row. Because everything is in a single place following one after another this
-may help to speed up the scanning process. We can use the @ref cv::isContinuous() function to *ask*
+may help to speed up the scanning process. We can use the @ref cv::Mat::isContinuous() function to *ask*
the matrix if this is the case. Continue on to the next section to find an example.
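In the spirit of that example, a sketch of the continuity-aware C-style scan (closely following the sample's `ScanImageAndReduceC`; the lookup `table` is assumed to be a 256-entry reduction table):
@code{.cpp}
#include <opencv2/core.hpp>
using namespace cv;

Mat& scanImageAndReduce(Mat& I, const uchar* const table)
{
    CV_Assert(I.depth() == CV_8U);   // accept only uchar images
    int nRows = I.rows;
    int nCols = I.cols * I.channels();
    if (I.isContinuous())            // one long row: a single inner loop suffices
    {
        nCols *= nRows;
        nRows = 1;
    }
    for (int i = 0; i < nRows; ++i)
    {
        uchar* p = I.ptr<uchar>(i);
        for (int j = 0; j < nCols; ++j)
            p[j] = table[p[j]];
    }
    return I;
}
@endcode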
The efficient way
@@ -227,12 +229,12 @@ differences I've used a quite large (2560 X 1600) image. The performance present
color images. For a more accurate value I've averaged the value I got from calling the function
a hundred times.
----------------  ----------------------
-Efficient Way    79.4717 milliseconds
-Iterator         83.7201 milliseconds
-On-The-Fly RA    93.7878 milliseconds
-LUT function     32.5759 milliseconds
----------------  ----------------------
+Method          | Time
+--------------- | ----------------------
+Efficient Way   | 79.4717 milliseconds
+Iterator        | 83.7201 milliseconds
+On-The-Fly RA   | 93.7878 milliseconds
+LUT function    | 32.5759 milliseconds
We can conclude a couple of things. If possible, use the already made functions of OpenCV (instead
of reinventing them). The fastest method turns out to be the LUT function. This is because the OpenCV
@@ -242,12 +244,10 @@ Using the on-the-fly reference access method for full image scan is the most cos
In the release mode it may beat the iterator approach or not, however it surely sacrifices for this
the safety trait of iterators.
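For reference, the LUT-based variant that wins the benchmark boils down to something like this sketch (the color-space-reduction table mirrors the tutorial's approach; `divideWith` is illustrative):
@code{.cpp}
#include <opencv2/core.hpp>
using namespace cv;

void reduceColorsWithLUT(const Mat& I, Mat& J)
{
    const int divideWith = 10;        // color space reduction factor (illustrative)
    Mat lookUpTable(1, 256, CV_8U);
    uchar* p = lookUpTable.ptr();
    for (int i = 0; i < 256; ++i)
        p[i] = (uchar)(divideWith * (i / divideWith));

    LUT(I, lookUpTable, J);           // J(i,j) = lookUpTable(I(i,j)), vectorized internally
}
@endcode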
-Finally, you may watch a sample run of the program on the [video
-posted](https://www.youtube.com/watch?v=fB3AN5fjgwc) on our YouTube channel.
+Finally, you may watch a sample run of the program on the [video posted](https://www.youtube.com/watch?v=fB3AN5fjgwc) on our YouTube channel.
\htmlonly
<div align="center">
<iframe title="How to scan images in OpenCV?" width="560" height="349" src="http://www.youtube.com/embed/fB3AN5fjgwc?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
\endhtmlonly

View File

@@ -16,8 +16,7 @@ Code
You may also find the source code in the
`samples/cpp/tutorial_code/core/ippasync/ippasync_sample.cpp` file of the OpenCV source library or
-download it from here
-\<../../../../samples/cpp/tutorial_code/core/ippasync/ippasync_sample.cpp\>.
+download it from [here](samples/cpp/tutorial_code/core/ippasync/ippasync_sample.cpp).
@includelineno cpp/tutorial_code/core/ippasync/ippasync_sample.cpp
@@ -37,8 +36,8 @@ Explanation
hppStatus sts;
hppiVirtualMatrix * virtMatrix;
@endcode
-2. Load input image or video. How to open and read video stream you can see in the @ref
-videoInputPSNRMSSIM tutorial.
+2. Load the input image or video. How to open and read a video stream is covered in the
+@ref tutorial_video_input_psnr_ssim tutorial.
@code{.cpp}
if( useCamera )
{
@@ -83,7 +82,7 @@ Explanation
result.create( image.rows, image.cols, CV_8U);
@endcode
-6. Convert Mat to [hppiMatrix](http://software.intel.com/en-us/node/501660) using @ref cv::getHpp
+6. Convert Mat to [hppiMatrix](http://software.intel.com/en-us/node/501660) using @ref cv::hpp::getHpp
and call the [hppiSobel](http://software.intel.com/en-us/node/474701) function.
@code{.cpp}
//convert Mat to hppiMatrix
@@ -134,6 +133,7 @@ Explanation
CHECK_DEL_STATUS(sts, "hppDeleteInstance");
}
@endcode
Result
------
@@ -141,4 +141,3 @@ After compiling the code above we can execute it giving an image or video path a
as an argument. For this tutorial we use the baboon.png image as input. The result is below.
![image](images/How_To_Use_IPPA_Result.jpg)

View File

@@ -19,8 +19,8 @@ this. In the following you'll learn:
General
-------
-When making the switch you first need to learn some about the new data structure for images: @ref
-matTheBasicImageContainer, this replaces the old *CvMat* and *IplImage* ones. Switching to the new
+When making the switch you first need to learn a little about the new data structure for images:
+@ref tutorial_mat_the_basic_image_container, which replaces the old *CvMat* and *IplImage* ones. Switching to the new
functions is easier. You just need to remember a couple of new things.
OpenCV 2 received reorganization. No longer are all the functions crammed into a single library. We
@@ -46,8 +46,8 @@ and the subsequent words start with a capital letter (like *copyMakeBorder*).
Now, remember that you need to link to your application all the modules you use, and in case you are
on Windows using the *DLL* system you will need to add, again, to the path all the binaries. For
-more in-depth information if you're on Windows read @ref Windows_Visual_Studio_How_To and for
-Linux an example usage is explained in @ref Linux_Eclipse_Usage.
+more in-depth information if you're on Windows read @ref tutorial_windows_visual_studio_Opencv and for
+Linux an example usage is explained in @ref tutorial_linux_eclipse.
Now for converting the *Mat* object you can use either the *IplImage* or the *CvMat* operators.
While in the C interface you used to work with pointers, here it's no longer the case. In the C++
@@ -81,11 +81,11 @@ For example:
Mat K(piL), L;
L = Mat(pI);
@endcode
A case study
------------
-Now that you have the basics done [here's
-](samples/cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp)
+Now that you have the basics done, [here's](samples/cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp)
an example that mixes the usage of the C interface with the C++ one. You will also find it in the
sample directory of the OpenCV source code library at
`samples/cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp` .
@@ -124,7 +124,7 @@ lines
Here you can observe that we may go through all the pixels of an image in three fashions: an
iterator, a C pointer and an individual element access style. You can read a more in-depth
-description of these in the @ref howToScanImagesOpenCV tutorial. Converting from the old function
+description of these in the @ref tutorial_how_to_scan_images tutorial. Converting from the old function
names is easy. Just remove the cv prefix and use the new *Mat* data structure. Here's an example of
this by using the weighted addition function:
@@ -161,4 +161,3 @@ of the OpenCV source code library.
<iframe title="Interoperability with OpenCV 1" width="560" height="349" src="http://www.youtube.com/embed/qckm-zvo31w?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
\endhtmlonly

View File

@@ -67,7 +67,7 @@ At first we make sure that the input images data is in unsigned char format. For
CV_Assert(myImage.depth() == CV_8U);  // accept only uchar images
@endcode
We create an output image with the same size and the same type as our input. As you can see in the
-@ref How_Image_Stored_Memory section, depending on the number of channels we may have one or more
+@ref tutorial_how_to_scan_images_storing "storing" section, depending on the number of channels we may have one or more
subcolumns. We will iterate through them via pointers, so the total number of elements depends on
this number.
@code{.cpp}
@@ -104,6 +104,7 @@ Result.row(Result.rows - 1).setTo(Scalar(0)); // The bottom row
Result.col(0).setTo(Scalar(0)); // The left column
Result.col(Result.cols - 1).setTo(Scalar(0)); // The right column
@endcode
The filter2D function
---------------------
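As a quick reference for this section, applying the tutorial's sharpening mask through @ref cv::filter2D reduces to a kernel definition and one call (a sketch; variable names are illustrative):
@code{.cpp}
#include <opencv2/imgproc.hpp>
using namespace cv;

void sharpen(const Mat& src, Mat& dst)
{
    // The same mask the hand-written version applies: center weighted 5,
    // the four direct neighbours weighted -1.
    Mat kernel = (Mat_<char>(3, 3) <<  0, -1,  0,
                                      -1,  5, -1,
                                       0, -1,  0);
    filter2D(src, dst, src.depth(), kernel);
}
@endcode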

View File

@@ -18,8 +18,8 @@ numerical matrices and other information describing the matrix itself. *OpenCV*
library whose main focus is to process and manipulate this information. Therefore, the first thing
you need to be familiar with is how OpenCV stores and handles images.
-*Mat*
------
+Mat
+---
OpenCV has been around since 2001. In those days the library was built around a *C* interface and to
store the image in the memory they used a C structure called *IplImage*. This is the one you'll see
@@ -62,6 +62,7 @@ To tackle this issue OpenCV uses a reference counting system. The idea is that e
its own header, however the matrix may be shared between two instances of them by having their matrix
pointers point to the same address. Moreover, the copy operators **will only copy the headers** and
the pointer to the large matrix, not the data itself.
@code{.cpp}
Mat A, C; // creates just the header parts
A = imread(argv[1], IMREAD_COLOR); // here we'll know the method used (allocate matrix)
@@ -70,6 +71,7 @@ Mat B(A); // Use the copy constructor
C = A; // Assignment operator
@endcode
All the above objects, in the end, point to the same single data matrix. Their headers are
different, however, and making a modification using any of them will affect all the other ones as
well. In practice the different objects just provide different access methods to the same underlying
@@ -85,7 +87,7 @@ for cleaning it up when it's no longer needed. The short answer is: the last obj
This is handled by using a reference counting mechanism. Whenever somebody copies a header of a
*Mat* object, a counter is increased for the matrix. Whenever a header is cleaned this counter is
decreased. When the counter reaches zero the matrix too is freed. Sometimes you will want to copy
-the matrix itself too, so OpenCV provides the @ref cv::clone() and @ref cv::copyTo() functions.
+the matrix itself too, so OpenCV provides the @ref cv::Mat::clone() and @ref cv::Mat::copyTo() functions.
@code{.cpp}
Mat F = A.clone();
Mat G;
@@ -97,10 +99,10 @@ remember from all this is that:
- Output image allocation for OpenCV functions is automatic (unless specified otherwise).
- You do not need to think about memory management with OpenCV's C++ interface.
- The assignment operator and the copy constructor only copy the header.
-- The underlying matrix of an image may be copied using the @ref cv::clone() and @ref cv::copyTo()
+- The underlying matrix of an image may be copied using the @ref cv::Mat::clone() and @ref cv::Mat::copyTo()
  functions.
-*Storing* methods
+Storing methods
-----------------
This is about how you store the pixel values. You can select the color space and the data type used.
@@ -134,10 +136,10 @@ using the float (4 byte = 32 bit) or double (8 byte = 64 bit) data types for eac
Nevertheless, remember that increasing the size of a component also increases the size of the whole
picture in the memory.
-Creating a *Mat* object explicitly
+Creating a Mat object explicitly
----------------------------------
-In the @ref Load_Save_Image tutorial you have already learned how to write a matrix to an image
+In the @ref tutorial_load_save_image tutorial you have already learned how to write a matrix to an image
file by using the @ref cv::imwrite() function. However, for debugging purposes it's much more
convenient to see the actual values. You can do this using the \<\< operator of *Mat*. Be aware that
this only works for two dimensional matrices.
@@ -146,29 +148,28 @@ Although *Mat* works really well as an image container, it is also a general mat
Therefore, it is possible to create and manipulate multidimensional matrices. You can create a Mat
object in multiple ways:
-- @ref cv::Mat() Constructor
+- @ref cv::Mat::Mat Constructor
@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-27-28
+lines 27-28
![image](images/MatBasicContainerOut1.png)
For two dimensional and multichannel images we first define their size: row and column count wise.
Then we need to specify the data type to use for storing the elements and the number of channels
per matrix point. To do this we have multiple definitions constructed according to the following
convention:
@code{.cpp}
CV_[The number of bits per item][Signed or Unsigned][Type Prefix]C[The channel number]
@endcode
For instance, *CV_8UC3* means we use unsigned char types that are 8 bits long and each pixel has
three of these to form the three channels. These are predefined for up to four channels. The
@ref cv::Scalar is a four-element short vector. Specify it and you can initialize all matrix
points with a custom value. If you need more you can create the type with the upper macro, setting
the channel number in parentheses as you can see below.
- Use C/C++ arrays and initialize via constructor
@@ -176,8 +177,8 @@ the channel number in parenthesis as you can see below.
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
lines
35-36
The upper example shows how to create a matrix with more than two dimensions. Specify its
dimension, then pass a pointer containing the size for each dimension and the rest remains the
same.
@@ -187,60 +188,57 @@ the channel number in parenthesis as you can see below.
IplImage* img = cvLoadImage("greatwave.png", 1);
Mat mtx(img); // convert IplImage* -> Mat
@endcode
-- @ref cv::Create() function:
+- @ref cv::Mat::create function:
@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-31-32
+lines 31-32
![image](images/MatBasicContainerOut2.png)
You cannot initialize the matrix values with this construction. It will only reallocate its matrix
data memory if the new size will not fit into the old one.
-- MATLAB style initializer: @ref cv::zeros() , @ref cv::ones() , @ref cv::eye() . Specify size and
+- MATLAB style initializer: @ref cv::Mat::zeros , @ref cv::Mat::ones , @ref cv::Mat::eye . Specify size and
  data type to use:
@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
lines
40-47
![image](images/MatBasicContainerOut3.png)
- For small matrices you may use comma separated initializers:
@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-50-51
+lines 50-51
![image](images/MatBasicContainerOut6.png)
-- Create a new header for an existing *Mat* object and @ref cv::clone() or @ref cv::copyTo() it.
+- Create a new header for an existing *Mat* object and @ref cv::Mat::clone or @ref cv::Mat::copyTo it.
@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-53-54
+lines 53-54
![image](images/MatBasicContainerOut7.png)
@note
You can fill out a matrix with random values using the @ref cv::randu() function. You need to
give the lower and upper value for the random values:
@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
lines
57-58
Output formatting
-----------------
@@ -253,8 +251,8 @@ format your matrix output:
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
lines
61
![image](images/MatBasicContainerOut8.png)
- Python
@@ -263,8 +261,8 @@ format your matrix output:
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
lines
62
![image](images/MatBasicContainerOut16.png)
- Comma separated values (CSV)
@@ -273,8 +271,8 @@ format your matrix output:
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
lines
64
![image](images/MatBasicContainerOut10.png)
- Numpy
@@ -283,8 +281,8 @@ format your matrix output:
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
lines
63
![image](images/MatBasicContainerOut9.png)
- C
@@ -293,8 +291,8 @@ format your matrix output:
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
lines
65
![image](images/MatBasicContainerOut11.png)
Output of other common items
@@ -308,8 +306,8 @@ OpenCV offers support for output of other common OpenCV data structures too via
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
lines
67-68
![image](images/MatBasicContainerOut12.png)
- 3D Point
@@ -318,8 +316,8 @@ OpenCV offers support for output of other common OpenCV data structures too via
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
lines
70-71
![image](images/MatBasicContainerOut13.png)
- std::vector via cv::Mat
@@ -328,8 +326,8 @@ OpenCV offers support for output of other common OpenCV data structures too via
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
lines
74-77
![image](images/MatBasicContainerOut14.png)
- std::vector of points
@@ -339,12 +337,11 @@ OpenCV offers support for output of other common OpenCV data structures too via
lines
79-83
![image](images/MatBasicContainerOut15.png)
Most of the samples here have been included in a small console application. You can download it from
-[here
-](samples/cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp)
+[here](samples/cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp)
or in the core section of the cpp samples.
You can also find a quick video demonstration of this on
@@ -355,4 +352,3 @@ You can also find a quick video demonstration of this on
<iframe title="Install OpenCV by using its source files - Part 1" width="560" height="349" src="http://www.youtube.com/embed/1tibU7vGWpk?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
\endhtmlonly

View File

@@ -13,15 +13,14 @@ In this tutorial you will learn how to:
Code
----
-- In the previous tutorial (@ref Drawing_1) we drew diverse geometric figures, giving as input
-  parameters such as coordinates (in the form of @ref cv::Points ), color, thickness, etc. You
+- In the previous tutorial (@ref tutorial_basic_geometric_drawing) we drew diverse geometric figures, giving as input
+  parameters such as coordinates (in the form of @ref cv::Point), color, thickness, etc. You
  might have noticed that we gave specific values for these arguments.
- In this tutorial, we intend to use *random* values for the drawing parameters. Also, we intend
  to populate our image with a large number of geometric figures. Since we will be initializing them
  in a random fashion, this process will be automatic and done using *loops*.
- This code is in your OpenCV sample folder. Otherwise you can grab it from
  [here](http://code.opencv.org/projects/opencv/repository/revisions/master/raw/samples/cpp/tutorial_code/core/Matrix/Drawing_2.cpp)
-  .
Explanation
-----------
@@ -213,9 +212,9 @@ Explanation
**image** minus the value of **i** (remember that for each pixel we are considering three values
such as R, G and B, so each of them will be affected)
Also remember that the subtraction operation *always* performs internally a **saturate**
operation, which means that the result obtained will always be inside the allowed range (no
negatives, and between 0 and 255 for our example).
Result
------
@@ -247,6 +246,4 @@ functions, which will produce:
colors and positions.
8. And the big end (which by the way expresses a big truth too):
-![image](images/Drawing_2_Tutorial_Result_7.jpg)
+![image](images/Drawing_2_Tutorial_Result_big.jpg)

View File

@@ -262,6 +262,6 @@ As you just saw in the Code section, the program will sequentially execute diver
#. And the big end (which by the way expresses a big truth too):
-.. image:: images/Drawing_2_Tutorial_Result_7.jpg
+.. image:: images/Drawing_2_Tutorial_Result_big.jpg
   :alt: Drawing Tutorial 2 - Final Result 7
   :align: center

View File

@@ -8,7 +8,7 @@ In this tutorial you will learn how to:
- Use the @ref cv::DescriptorExtractor interface in order to find the feature vector corresponding
  to the keypoints. Specifically:
-  - Use @ref cv::SurfDescriptorExtractor and its function @ref cv::compute to perform the
+  - Use @ref cv::xfeatures2d::SURF and its function @ref cv::xfeatures2d::SURF::compute to perform the
    required calculations.
  - Use a @ref cv::BFMatcher to match the feature vectors
  - Use the function @ref cv::drawMatches to draw the detected matches.
@@ -78,14 +78,13 @@ int main( int argc, char** argv )
void readme()
{ std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }
@endcode
Explanation
-----------
Result
------
-1. Here is the result after applying the BruteForce matcher between the two original images:
+Here is the result after applying the BruteForce matcher between the two original images:
![image](images/Feature_Description_BruteForce_Result.jpg)
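With the renamed 3.0 API referenced above, the detect/describe/match pipeline boils down to something like this sketch (requires the opencv_contrib xfeatures2d module; the Hessian threshold and names are illustrative):
@code{.cpp}
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>
using namespace cv;

void describeAndMatch(const Mat& img1, const Mat& img2, Mat& imgMatches)
{
    Ptr<xfeatures2d::SURF> surf = xfeatures2d::SURF::create(400); // minHessian = 400

    std::vector<KeyPoint> keypoints1, keypoints2;
    Mat descriptors1, descriptors2;
    surf->detectAndCompute(img1, noArray(), keypoints1, descriptors1);
    surf->detectAndCompute(img2, noArray(), keypoints2, descriptors2);

    BFMatcher matcher(NORM_L2);   // SURF descriptors are float vectors
    std::vector<DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);

    drawMatches(img1, keypoints1, img2, keypoints2, matches, imgMatches);
}
@endcode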

View File

@@ -7,7 +7,7 @@ Goal
In this tutorial you will learn how to:
- Use the @ref cv::FeatureDetector interface in order to find interest points. Specifically:
-  - Use the @ref cv::SurfFeatureDetector and its function @ref cv::detect to perform the
+  - Use the @ref cv::xfeatures2d::SURF and its function @ref cv::xfeatures2d::SURF::detect to perform the
    detection process
  - Use the function @ref cv::drawKeypoints to draw the detected keypoints
@@ -72,6 +72,7 @@ int main( int argc, char** argv )
void readme()
{ std::cout << " Usage: ./SURF_detector <img1> <img2>" << std::endl; }
@endcode
Explanation
-----------
@@ -85,5 +86,3 @@ Result
2. And here is the result for the second image:
![image](images/Feature_Detection_Result_b.jpg)

View File

@@ -7,7 +7,7 @@ Goal
In this tutorial you will learn how to:
- Use the @ref cv::FlannBasedMatcher interface in order to perform a quick and efficient matching
-  by using the @ref cv::FLANN ( *Fast Approximate Nearest Neighbor Search Library* )
+  by using the @ref flann module
Theory
------
@@ -123,6 +123,7 @@ int main( int argc, char** argv )
void readme()
{ std::cout << " Usage: ./SURF_FlannMatcher <img1> <img2>" << std::endl; }
@endcode
Explanation
-----------
@@ -136,5 +137,3 @@ Result
2. Additionally, we get as console output the keypoints filtered:
![image](images/Feature_FlannMatcher_Keypoints_Result.jpg)

View File

@@ -1,10 +1,11 @@
Similarity check (PSNR and SSIM) on the GPU {#tutorial_gpu_basics_similarity}
===========================================
+@todo update this tutorial
Goal
----
-In the @ref videoInputPSNRMSSIM tutorial I already presented the PSNR and SSIM methods for checking
+In the @ref tutorial_video_input_psnr_ssim tutorial I already presented the PSNR and SSIM methods for checking
the similarity between two images. And as you could see there, performing these takes quite some
time, especially in the case of the SSIM. However, if the performance numbers of an OpenCV
implementation for the CPU do not satisfy you and you happen to have an NVidia CUDA GPU device in
@@ -32,7 +33,7 @@ you'll find here only the functions itself.
The PSNR returns a float number that is between 30 and 50 if the two inputs are similar (higher is
better).
-@includelineno cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp
+@includelineno samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp
lines
165-210, 18-23, 210-235
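For reference, the CPU version of PSNR from the earlier tutorial computes \f$PSNR = 10 \cdot \log_{10}\left(\frac{255^2}{MSE}\right)\f$; a sketch:
@code{.cpp}
#include <opencv2/core.hpp>
#include <cmath>
using namespace cv;

double getPSNR(const Mat& I1, const Mat& I2)
{
    Mat s1;
    absdiff(I1, I2, s1);       // |I1 - I2|
    s1.convertTo(s1, CV_32F);  // the squares won't fit in 8 bits
    s1 = s1.mul(s1);           // |I1 - I2|^2

    Scalar s = sum(s1);        // per-channel sums
    double sse = s.val[0] + s.val[1] + s.val[2];

    if (sse <= 1e-10)          // practically identical images
        return 0;
    double mse = sse / (double)(I1.channels() * I1.total());
    return 10.0 * log10((255 * 255) / mse);
}
@endcode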
@ -41,7 +42,7 @@ The SSIM returns the MSSIM of the images. This is too a float number between zer
better), however we have one for each channel. Therefore, we return a *Scalar* OpenCV data better), however we have one for each channel. Therefore, we return a *Scalar* OpenCV data
structure: structure:
@includelineno cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp @includelineno samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp
lines lines
235-355, 26-42, 357- 235-355, 26-42, 357-
@ -63,7 +64,8 @@ the cv:: to avoid confusion. I'll do the later.
@code{.cpp} @code{.cpp}
#include <opencv2/gpu.hpp> // GPU structures and methods #include <opencv2/gpu.hpp> // GPU structures and methods
@endcode @endcode
GPU stands for **g**raphics **p**rocessing **u**nit. It was originally build to render graphical
GPU stands for "graphics processing unit". It was originally build to render graphical
scenes. These scenes somehow build on a lot of data. Nevertheless, these aren't all dependent one scenes. These scenes somehow build on a lot of data. Nevertheless, these aren't all dependent one
from another in a sequential way and as it is possible a parallel processing of them. Due to this a from another in a sequential way and as it is possible a parallel processing of them. Due to this a
GPU will contain multiple smaller processing units. These aren't the state of the art processors and GPU will contain multiple smaller processing units. These aren't the state of the art processors and
@ -81,7 +83,7 @@ small functions to GPU is not recommended as the upload/download time will be la
you gain by a parallel execution. you gain by a parallel execution.
Mat objects are stored only in the system memory (or the CPU cache). For getting an OpenCV matrix to Mat objects are stored only in the system memory (or the CPU cache). For getting an OpenCV matrix to
the GPU you'll need to use its GPU counterpart @ref cv::GpuMat . It works similar to the Mat with a the GPU you'll need to use its GPU counterpart @ref cv::cuda::GpuMat . It works similar to the Mat with a
2D only limitation and no reference returning for its functions (cannot mix GPU references with CPU 2D only limitation and no reference returning for its functions (cannot mix GPU references with CPU
ones). To upload a Mat object to the GPU you need to call the upload function after creating an ones). To upload a Mat object to the GPU you need to call the upload function after creating an
instance of the class. To download you may use simple assignment to a Mat object or use the download instance of the class. To download you may use simple assignment to a Mat object or use the download
@@ -120,7 +122,7 @@ Optimization
The reason for this is that you're throwing out the window the price for memory allocation and
data transfer. And on the GPU this price is very high. Another possibility for optimization is to
introduce asynchronous OpenCV GPU calls too, with the help of the @ref cv::cuda::Stream.

1. Memory allocation on the GPU is considerable. Therefore, if it's possible, allocate new memory as
   few times as possible. If you create a function that you intend to call multiple times, it is a

@@ -162,7 +164,7 @@ introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::gpu::S
    gpu::multiply(b.mu1_mu2, 2, b.t1); //b.t1 = 2 * b.mu1_mu2 + C1;
    gpu::add(b.t1, C1, b.t1);
@endcode
3. Use asynchronous calls (the @ref cv::cuda::Stream ). By default, whenever you call a gpu function
   it will wait for the call to finish and return with the result afterwards. However, it is
   possible to make asynchronous calls, meaning it will start the operation's execution, make the
   costly data allocations for the algorithm and return right away. Now you can call another
@@ -182,6 +184,7 @@ introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::gpu::S
    gpu::split(b.t1, b.vI1, stream); // Methods (pass the stream as final parameter).
    gpu::multiply(b.vI1[i], b.vI1[i], b.I1_2, stream); // I1^2
@endcode
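To show where such calls fit, a hedged sketch of the surrounding stream lifecycle (the buffer names mirror the tutorial's snippet; waitForCompletion is the stream's standard synchronization call):
@code{.cpp}
gpu::Stream stream;          // a queue of asynchronous GPU operations
// ... enqueue uploads and arithmetic, passing 'stream' as the final parameter ...
stream.waitForCompletion();  // block until everything queued above has finished
@endcode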
Result and conclusion
---------------------
@@ -22,7 +22,7 @@ In this tutorial you will learn how to:
Code
----

Let's modify the program made in the tutorial @ref tutorial_adding_images. We will let the user enter the
\f$\alpha\f$ value by using the Trackbar.
@code{.cpp}
#include <opencv2/opencv.hpp>
@@ -82,6 +82,7 @@ int main( int argc, char** argv )
  return 0;
}
@endcode

Explanation
-----------

@@ -122,7 +123,6 @@ We only analyze the code that is related to Trackbar:
}
@endcode
Note that:
- We use the value of **alpha_slider** (integer) to get a double value for **alpha**.
- **alpha_slider** is updated each time the trackbar is displaced by the user.
- We define *src1*, *src2*, *dst*, *alpha*, *alpha_slider* and *beta* as global variables,
  so they can be used everywhere.
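As a hedged sketch of how these globals tie together (the callback body mirrors the usual linear-blend logic; the window name, trackbar label and *alpha_slider_max* are illustrative):
@code{.cpp}
static void on_trackbar( int, void* )
{
   alpha = (double) alpha_slider / alpha_slider_max; // integer slider -> double weight
   beta  = 1.0 - alpha;
   addWeighted( src1, alpha, src2, beta, 0.0, dst ); // blend the two global images
   imshow( "Linear Blend", dst );
}

// inside main(), after creating the window:
createTrackbar( "Alpha", "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
@endcode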
@@ -135,10 +135,8 @@ Result
![image](images/Adding_Trackbars_Tutorial_Result_0.jpg)
- As a manner of practice, you can also add two trackbars to the program made in
  @ref tutorial_basic_linear_transform: one trackbar to set \f$\alpha\f$ and another for \f$\beta\f$. The output might
  look like:

![image](images/Adding_Trackbars_Tutorial_Result_1.jpg)
@@ -33,8 +33,8 @@ lines
How to read a video stream (online-camera or offline-file)?
-----------------------------------------------------------

Essentially, all the functionality required for video manipulation is integrated in the @ref cv::VideoCapture
C++ class. This, in turn, builds on the FFmpeg open source library. This is a basic
dependency of OpenCV, so you shouldn't need to worry about it. A video is composed of a succession
of images; in the literature we refer to these as frames. In case of a video file there is a *frame
rate* specifying just how long the interval is between two frames. While for video cameras usually there is a
@@ -42,7 +42,7 @@ limit of just how many frames they can digitalize per second, this property is l
any time the camera sees the current snapshot of the world.

The first task you need to do is to assign to a @ref cv::VideoCapture class its source. You can do
this either via the @ref cv::VideoCapture::VideoCapture constructor or its @ref cv::VideoCapture::open function. If this argument is an
integer then you will bind the class to a camera, a device. The number passed here is the ID of the
device, assigned by the operating system. If you have a single camera attached to your system its ID
will probably be zero, with further ones increasing from there. If the parameter passed to these is a
@@ -63,8 +63,8 @@ VideoCapture captRefrnc(sourceReference);
VideoCapture captUndTst;
captUndTst.open(sourceCompareWith);
@endcode
To check if the binding of the class to a video source was successful or not, use the @ref cv::VideoCapture::isOpened
function:
@code{.cpp}
if ( !captRefrnc.isOpened())
  {
@@ -73,10 +73,10 @@ if ( !captRefrnc.isOpened())
  }
@endcode
Closing the video is automatic when the object's destructor is called. However, if you want to close
it before this you need to call its @ref cv::VideoCapture::release function. The frames of the video are just
simple images. Therefore, we just need to extract them from the @ref cv::VideoCapture object and put
them inside a *Mat* one. The video streams are sequential. You may get the frames one after another
by the @ref cv::VideoCapture::read or the overloaded \>\> operator:
@code{.cpp}
Mat frameReference, frameUnderTest;
captRefrnc >> frameReference;
@@ -92,11 +92,11 @@ if( frameReference.empty() || frameUnderTest.empty())
 }
@endcode
A read method consists of a frame grab and a decoding applied to that frame. You may call these
two explicitly by using the @ref cv::VideoCapture::grab and then the @ref cv::VideoCapture::retrieve functions.
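A hedged sketch of this two-phase form, reusing the capture object from above:
@code{.cpp}
if( captRefrnc.grab() )          // advance to the next frame (fast, no decoding yet)
{
    Mat frame;
    captRefrnc.retrieve(frame);  // decode the grabbed frame into a Mat
}
@endcode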
Videos have a lot of information attached to them besides the content of the frames. These are
usually numbers, however in some cases they may be short character sequences (4 bytes or less). Due to
this, to acquire this information there is a general function named @ref cv::VideoCapture::get that returns double
values containing these properties. Use bitwise operations to decode the characters from a double
type and conversions where valid values are only integers. Its single argument is the ID of the
queried property. For example, here we get the size of the frames in the reference and test case
@@ -109,7 +109,7 @@ cout << "Reference frame resolution: Width=" << refS.width << " Height=" << ref
     << " of nr#: " << captRefrnc.get(CAP_PROP_FRAME_COUNT) << endl;
@endcode
When you are working with videos you may often want to control these values yourself. To do this
there is a @ref cv::VideoCapture::set function. Its first argument remains the name of the property you want to
change, and there is a second of double type containing the value to be set. It will return true if
it succeeds and false otherwise. A good example of this is seeking in a video file to a given time
or frame:
@@ -118,8 +118,8 @@ captRefrnc.set(CAP_PROP_POS_MSEC, 1.2); // go to the 1.2 second in the video
captRefrnc.set(CAP_PROP_POS_FRAMES, 10); // go to the 10th frame of the video
// now a read operation would read the frame at the set position
@endcode
For properties you can read and change, look into the documentation of the @ref cv::VideoCapture::get and
@ref cv::VideoCapture::set functions.
Image similarity - PSNR and SSIM
--------------------------------

@@ -175,9 +175,10 @@ the article introducing it. Nevertheless, you can get a good image of it by look
implementation below.

@sa
    SSIM is described more in-depth in the article: Z. Wang, A. C. Bovik, H. R. Sheikh and E. P.
    Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE
    Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004.

@code{.cpp}
Scalar getMSSIM( const Mat& i1, const Mat& i2)
{
@@ -5,8 +5,8 @@ Goal
----

Whenever you work with video feeds you may eventually want to save your image processing result in
the form of a new video file. For simple video outputs you can use the OpenCV built-in @ref cv::VideoWriter
class, designed for this.

- How to create a video file with OpenCV
- What type of video files you can create with OpenCV
@@ -58,11 +58,15 @@ container. No audio or other track editing support here. Nevertheless, any video
your system might work. If you encounter some of these limitations you will need to look into more
specialized video writing libraries such as *FFMpeg*, or codecs such as *HuffYUV*, *CorePNG* and *LCL*. As
an alternative, create the video track with OpenCV and expand it with sound tracks or convert it to
other formats by using video manipulation programs such as *VirtualDub* or *AviSynth*.

The *VideoWriter* class
-----------------------

The content written here builds on the assumption that you have
already read the @ref tutorial_video_input_psnr_ssim tutorial and you know how to read video files. To create a
video file you just need to create an instance of the @ref cv::VideoWriter class. You can specify
its properties either via parameters in the constructor or later on via the @ref cv::VideoWriter::open function.
Either way, the parameters are the same:

1. The name of the output that contains the container type
   in its extension. At the moment only *avi* is supported. We construct this from the input file, add
   to this the name of the channel to use, and finish it off with the container extension.
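A hedged sketch of that name construction (the input name and channel tag are placeholders):
@code{.cpp}
const string source = "input.avi";                         // the input file name
string::size_type pAt = source.find_last_of('.');          // position of the extension
const string NAME = source.substr(0, pAt) + "_R" + ".avi"; // tag the channel, keep the avi container
@endcode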
@@ -122,10 +126,10 @@ Size S = Size((int) inputVideo.get(CAP_PROP_FRAME_WIDTH), //Acquire input siz
              (int) inputVideo.get(CAP_PROP_FRAME_HEIGHT));
outputVideo.open(NAME , ex, inputVideo.get(CAP_PROP_FPS),S, true);
@endcode
Afterwards, you use the @ref cv::VideoWriter::isOpened() function to find out if the open operation succeeded or
not. The video file automatically closes when the *VideoWriter* object is destroyed. After you open
the object with success you can send the frames of the video in a sequential order by using the
@ref cv::VideoWriter::write function of the class. Alternatively, you can use its overloaded operator \<\< :
@code{.cpp}
outputVideo.write(res); //or
outputVideo << res;
@endcode
@@ -15,7 +15,9 @@ Cool Theory
-----------

@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.

Morphological Operations
------------------------

- In short: A set of operations that process images based on shapes. Morphological operations
  apply a *structuring element* to an input image and generate an output image.
@@ -59,102 +61,8 @@ Code
This tutorial's code is shown in the lines below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgProc/Morphology_1.cpp)

@includelineno samples/cpp/tutorial_code/ImgProc/Morphology_1.cpp
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "highgui.h"
#include <stdlib.h>
#include <stdio.h>
using namespace cv;
/// Global variables
Mat src, erosion_dst, dilation_dst;
int erosion_elem = 0;
int erosion_size = 0;
int dilation_elem = 0;
int dilation_size = 0;
int const max_elem = 2;
int const max_kernel_size = 21;
/* Function Headers */
void Erosion( int, void* );
void Dilation( int, void* );
/* @function main */
int main( int argc, char** argv )
{
/// Load an image
src = imread( argv[1] );
if( !src.data )
{ return -1; }
/// Create windows
namedWindow( "Erosion Demo", WINDOW_AUTOSIZE );
namedWindow( "Dilation Demo", WINDOW_AUTOSIZE );
cvMoveWindow( "Dilation Demo", src.cols, 0 );
/// Create Erosion Trackbar
createTrackbar( "Element:\n 0: Rect \n 1: Cross \n 2: Ellipse", "Erosion Demo",
&erosion_elem, max_elem,
Erosion );
createTrackbar( "Kernel size:\n 2n +1", "Erosion Demo",
&erosion_size, max_kernel_size,
Erosion );
/// Create Dilation Trackbar
createTrackbar( "Element:\n 0: Rect \n 1: Cross \n 2: Ellipse", "Dilation Demo",
&dilation_elem, max_elem,
Dilation );
createTrackbar( "Kernel size:\n 2n +1", "Dilation Demo",
&dilation_size, max_kernel_size,
Dilation );
/// Default start
Erosion( 0, 0 );
Dilation( 0, 0 );
waitKey(0);
return 0;
}
/* @function Erosion */
void Erosion( int, void* )
{
int erosion_type;
if( erosion_elem == 0 ){ erosion_type = MORPH_RECT; }
else if( erosion_elem == 1 ){ erosion_type = MORPH_CROSS; }
else if( erosion_elem == 2) { erosion_type = MORPH_ELLIPSE; }
Mat element = getStructuringElement( erosion_type,
Size( 2*erosion_size + 1, 2*erosion_size+1 ),
Point( erosion_size, erosion_size ) );
/// Apply the erosion operation
erode( src, erosion_dst, element );
imshow( "Erosion Demo", erosion_dst );
}
/* @function Dilation */
void Dilation( int, void* )
{
int dilation_type;
if( dilation_elem == 0 ){ dilation_type = MORPH_RECT; }
else if( dilation_elem == 1 ){ dilation_type = MORPH_CROSS; }
else if( dilation_elem == 2) { dilation_type = MORPH_ELLIPSE; }
Mat element = getStructuringElement( dilation_type,
Size( 2*dilation_size + 1, 2*dilation_size+1 ),
Point( dilation_size, dilation_size ) );
/// Apply the dilation operation
dilate( src, dilation_dst, element );
imshow( "Dilation Demo", dilation_dst );
}
@endcode
Explanation
-----------
@@ -195,9 +103,8 @@ Explanation
   - *src*: The source image
   - *erosion_dst*: The output image
   - *element*: This is the kernel we will use to perform the operation. If we do not
     specify it, the default is a simple `3x3` matrix. Otherwise, we can specify its
     shape. For this, we need to use the function cv::getStructuringElement :
     @code{.cpp}
     Mat element = getStructuringElement( erosion_type,
                                          Size( 2*erosion_size + 1, 2*erosion_size+1 ),
@@ -213,44 +120,42 @@ get_structuring_element:\`getStructuringElement :
     specified, it is assumed to be in the center.
   - That is all. We are ready to perform the erosion of our image.

@note Additionally, there is another parameter that allows you to perform multiple erosions
(iterations) at once. We are not using it in this simple tutorial, though; you can check out the
Reference for more details, and a one-line illustration follows the dilation code below.
3. **dilation:**

The code is below. As you can see, it is completely similar to the snippet of code for **erosion**.
Here we also have the option of defining our kernel, its anchor point and the size of the operator
to be used.
@code{.cpp}
/* @function Dilation */
void Dilation( int, void* )
{
  int dilation_type;
  if( dilation_elem == 0 ){ dilation_type = MORPH_RECT; }
  else if( dilation_elem == 1 ){ dilation_type = MORPH_CROSS; }
  else if( dilation_elem == 2) { dilation_type = MORPH_ELLIPSE; }
Mat element = getStructuringElement( dilation_type,
Size( 2*dilation_size + 1, 2*dilation_size+1 ),
Point( dilation_size, dilation_size ) );
/// Apply the dilation operation
dilate( src, dilation_dst, element );
imshow( "Dilation Demo", dilation_dst );
}
@endcode
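As promised in the note above, a hedged one-liner showing the *iterations* parameter (the arguments follow cv::erode's documented order; Mat() requests the default 3x3 kernel):
@code{.cpp}
erode( src, erosion_dst, Mat(), Point(-1,-1), 3 ); // apply the default 3x3 kernel three times in one call
@endcode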
Results
-------

Compile the code above and execute it with an image as argument. For instance, using this image:

![image](images/Morphology_1_Tutorial_Original_Image.jpg)
We get the results below. Varying the indices in the Trackbars gives different output images,
naturally. Try them out! You can even try to add a third Trackbar to control the number of
iterations.
![image](images/Morphology_1_Result.jpg)
@@ -271,6 +271,6 @@ Results
We get the results below. Varying the indices in the Trackbars gives different output images, naturally. Try them out! You can even try to add a third Trackbar to control the number of iterations.

.. image:: images/Morphology_1_Result.jpg
   :alt: Dilation and Erosion application
   :align: center
@@ -42,17 +42,17 @@ Theory
- What we want to do is to use our *model histogram* (that we know represents a skin tonality) to
  detect skin areas in our Test Image. Here are the steps:
  -# In each pixel of our Test Image (i.e. \f$p(i,j)\f$ ), collect the data and find the
     corresponding bin location for that pixel (i.e. \f$( h_{i,j}, s_{i,j} )\f$ ).
  -# Look up the *model histogram* in the corresponding bin - \f$( h_{i,j}, s_{i,j} )\f$ - and read
     the bin value.
  -# Store this bin value in a new image (*BackProjection*). Also, you may consider normalizing
     the *model histogram* first, so the output for the Test Image is visible to you.
  -# Applying the steps above, we get the following BackProjection image for our Test Image:

     ![image](images/Back_Projection_Theory4.jpg)

  -# In terms of statistics, the values stored in *BackProjection* represent the *probability*
     that a pixel in *Test Image* belongs to a skin area, based on the *model histogram* that we
     use. For instance in our Test image, the brighter areas are more probable to be skin area
     (as they actually are), whereas the darker areas have less probability (notice that these
@@ -72,13 +72,13 @@ Code
- Display the backprojection and the histogram in windows.

- **Downloadable code**:
  -# Click
     [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/Histograms_Matching/calcBackProject_Demo1.cpp)
     for the basic version (explained in this tutorial).
  -# For stuff slightly fancier (using H-S histograms and floodFill to define a mask for the
     skin area) you can check the [improved
     demo](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/Histograms_Matching/calcBackProject_Demo2.cpp)
  -# ...or you can always check out the classical
     [camshiftdemo](https://github.com/Itseez/opencv/tree/master/samples/cpp/camshiftdemo.cpp)
     in samples.
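To make the theory steps concrete, a hedged sketch of the core calls (hue_model and hue_test are assumed single-channel Hue images; the bin count is illustrative):
@code{.cpp}
int bins = 25;
float hue_range[] = { 0, 180 };
const float* ranges = { hue_range };
Mat hist;
calcHist( &hue_model, 1, 0, Mat(), hist, 1, &bins, &ranges );         // the model histogram
normalize( hist, hist, 0, 255, NORM_MINMAX, -1, Mat() );              // make the output visible
Mat backproj;
calcBackProject( &hue_test, 1, 0, hist, backproj, &ranges, 1, true ); // per-pixel probability map
@endcode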
@@ -255,5 +255,3 @@ Results
------ ------ ------
|R0| |R1| |R2|
------ ------ ------
@@ -10,7 +10,7 @@ In this tutorial you will learn how to:
- To calculate histograms of arrays of images by using the OpenCV function @ref cv::calcHist
- To normalize an array by using the function @ref cv::normalize

@note In the last tutorial (@ref tutorial_histogram_equalization) we talked about a particular kind of
histogram called *Image histogram*. Now we will consider it in its more general concept. Read on!

### What are histograms?
@@ -42,10 +42,10 @@ histogram called *Image histogram*. Now we will considerate it in its more gener
  keep count not only of color intensities, but of whatever image features we want to measure
  (i.e. gradients, directions, etc).
- Let's identify some parts of the histogram:
  -# **dims**: The number of parameters you want to collect data of. In our example, **dims = 1**
     because we are only counting the intensity values of each pixel (in a greyscale image).
  -# **bins**: It is the number of **subdivisions** in each dim. In our example, **bins = 16**
  -# **range**: The limits for the values to be measured. In this case: **range = [0,255]**
- What if you want to count two features? In this case your resulting histogram would be a 3D plot
  (in which x and y would be \f$bin_{x}\f$ and \f$bin_{y}\f$ for each feature and z would be the number of
  counts for each combination of \f$(bin_{x}, bin_{y})\f$). The same would apply for more features (of
@@ -68,82 +68,8 @@ Code
- **Downloadable code**: Click
  [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/Histograms_Matching/calcHist_Demo.cpp)
- **Code at glance:**

@includelineno samples/cpp/tutorial_code/Histograms_Matching/calcHist_Demo.cpp
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
/*
* @function main
*/
int main( int argc, char** argv )
{
Mat src, dst;
/// Load image
src = imread( argv[1], 1 );
if( !src.data )
{ return -1; }
/// Separate the image in 3 places ( B, G and R )
vector<Mat> bgr_planes;
split( src, bgr_planes );
/// Establish the number of bins
int histSize = 256;
/// Set the ranges ( for B,G,R) )
float range[] = { 0, 256 } ;
const float* histRange = { range };
bool uniform = true; bool accumulate = false;
Mat b_hist, g_hist, r_hist;
/// Compute the histograms:
calcHist( &bgr_planes[0], 1, 0, Mat(), b_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &bgr_planes[1], 1, 0, Mat(), g_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &bgr_planes[2], 1, 0, Mat(), r_hist, 1, &histSize, &histRange, uniform, accumulate );
// Draw the histograms for B, G and R
int hist_w = 512; int hist_h = 400;
int bin_w = cvRound( (double) hist_w/histSize );
Mat histImage( hist_h, hist_w, CV_8UC3, Scalar( 0,0,0) );
/// Normalize the result to [ 0, histImage.rows ]
normalize(b_hist, b_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(g_hist, g_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(r_hist, r_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
/// Draw for each channel
for( int i = 1; i < histSize; i++ )
{
line( histImage, Point( bin_w*(i-1), hist_h - cvRound(b_hist.at<float>(i-1)) ) ,
Point( bin_w*(i), hist_h - cvRound(b_hist.at<float>(i)) ),
Scalar( 255, 0, 0), 2, 8, 0 );
line( histImage, Point( bin_w*(i-1), hist_h - cvRound(g_hist.at<float>(i-1)) ) ,
Point( bin_w*(i), hist_h - cvRound(g_hist.at<float>(i)) ),
Scalar( 0, 255, 0), 2, 8, 0 );
line( histImage, Point( bin_w*(i-1), hist_h - cvRound(r_hist.at<float>(i-1)) ) ,
Point( bin_w*(i), hist_h - cvRound(r_hist.at<float>(i)) ),
Scalar( 0, 0, 255), 2, 8, 0 );
}
/// Display
namedWindow("calcHist Demo", WINDOW_AUTOSIZE );
imshow("calcHist Demo", histImage );
waitKey(0);
return 0;
}
@endcode
Explanation
-----------
@@ -169,26 +95,26 @@ Explanation
4. Now we are ready to start configuring the **histograms** for each plane. Since we are working
   with the B, G and R planes, we know that our values will range in the interval \f$[0,255]\f$
   -# Establish the number of bins (5, 10...):
      @code{.cpp}
      int histSize = 256; //from 0 to 255
      @endcode
   -# Set the range of values (as we said, between 0 and 255 )
      @code{.cpp}
      /// Set the ranges ( for B,G,R )
      float range[] = { 0, 256 } ; //the upper boundary is exclusive
      const float* histRange = { range };
      @endcode
   -# We want our bins to have the same size (uniform) and to clear the histograms at the
      beginning, so:
      @code{.cpp}
      bool uniform = true; bool accumulate = false;
      @endcode
   -# Finally, we create the Mat objects to save our histograms. Creating 3 (one for each plane):
      @code{.cpp}
      Mat b_hist, g_hist, r_hist;
      @endcode
   -# We proceed to calculate the histograms by using the OpenCV function @ref cv::calcHist :
      @code{.cpp}
      /// Compute the histograms:
      calcHist( &bgr_planes[0], 1, 0, Mat(), b_hist, 1, &histSize, &histRange, uniform, accumulate );
@@ -254,18 +180,15 @@ Explanation
            Scalar( 0, 0, 255), 2, 8, 0 );
   }
   @endcode
   we use the expression:
   @code{.cpp}
   b_hist.at<float>(i)
   @endcode
   where \f$i\f$ indicates the dimension. If it were a 2D-histogram we would use something like:
   @code{.cpp}
   b_hist.at<float>( i, j )
   @endcode

8. Finally we display our histograms and wait for the user to exit:
   @code{.cpp}
   namedWindow("calcHist Demo", WINDOW_AUTOSIZE );
@@ -275,6 +198,7 @@ Explanation
   return 0;
   @endcode

Result
------
@@ -285,5 +209,3 @@ Result
2. Produces the following histogram:

   ![image](images/Histogram_Calculation_Result.jpg)
@@ -17,7 +17,7 @@ Theory
  (\f$d(H_{1}, H_{2})\f$) to express how well both histograms match.
- OpenCV implements the function @ref cv::compareHist to perform a comparison. It also offers 4
  different metrics to compute the matching:
  -# **Correlation ( CV_COMP_CORREL )**

     \f[d(H_1,H_2) = \frac{\sum_I (H_1(I) - \bar{H_1}) (H_2(I) - \bar{H_2})}{\sqrt{\sum_I(H_1(I) - \bar{H_1})^2 \sum_I(H_2(I) - \bar{H_2})^2}}\f]
@@ -27,15 +27,15 @@ Theory
     and \f$N\f$ is the total number of histogram bins.

  -# **Chi-Square ( CV_COMP_CHISQR )**

     \f[d(H_1,H_2) = \sum _I \frac{\left(H_1(I)-H_2(I)\right)^2}{H_1(I)}\f]

  -# **Intersection ( method=CV_COMP_INTERSECT )**

     \f[d(H_1,H_2) = \sum _I \min (H_1(I), H_2(I))\f]

  -# **Bhattacharyya distance ( CV_COMP_BHATTACHARYYA )**

     \f[d(H_1,H_2) = \sqrt{1 - \frac{1}{\sqrt{\bar{H_1} \bar{H_2} N^2}} \sum_I \sqrt{H_1(I) \cdot H_2(I)}}\f]
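A hedged usage sketch of @ref cv::compareHist with the first of these metrics (hist_base and hist_test stand for previously computed, normalized histograms):
@code{.cpp}
double base_base = compareHist( hist_base, hist_base, CV_COMP_CORREL ); // self-match: 1.0
double base_test = compareHist( hist_base, hist_test, CV_COMP_CORREL ); // higher is better for correlation
@endcode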
@@ -165,4 +165,3 @@ match. As we can see, the match *base-base* is the highest of all as expected. A
that the match *base-half* is the second best match (as we predicted). For the other two metrics,
the lower the result, the better the match. We can observe that the matches between test 1 and
test 2 with respect to the base are worse, which again, was expected.
@@ -7,7 +7,7 @@ Goal
In this tutorial you will learn:

- What an image histogram is and why it is useful
- To equalize histograms of images by using the OpenCV function @ref cv::equalizeHist

Theory
------
@@ -59,54 +59,13 @@ Code
- **What does this program do?**
  - Loads an image
  - Converts the original image to grayscale
  - Equalizes the histogram by using the OpenCV function @ref cv::equalizeHist
  - Displays the source and equalized images in a window.
- **Downloadable code**: Click
  [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/Histograms_Matching/EqualizeHist_Demo.cpp)
- **Code at glance:**

@includelineno samples/cpp/tutorial_code/Histograms_Matching/EqualizeHist_Demo.cpp
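In essence the program reduces to three calls; a hedged condensation (argument checking omitted):
@code{.cpp}
Mat src = imread( argv[1], 1 );
cvtColor( src, src, COLOR_BGR2GRAY ); // equalization works on a single channel
Mat dst;
equalizeHist( src, dst );             // spread the intensity distribution
@endcode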
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace cv;
using namespace std;
/* @function main */
int main( int argc, char** argv )
{
Mat src, dst;
char* source_window = "Source image";
char* equalized_window = "Equalized Image";
/// Load image
src = imread( argv[1], 1 );
if( !src.data )
{ cout<<"Usage: ./Histogram_Demo <path_to_image>"<<endl;
return -1;}
/// Convert to grayscale
cvtColor( src, src, COLOR_BGR2GRAY );
/// Apply Histogram Equalization
equalizeHist( src, dst );
/// Display results
namedWindow( source_window, WINDOW_AUTOSIZE );
namedWindow( equalized_window, WINDOW_AUTOSIZE );
imshow( source_window, src );
imshow( equalized_window, dst );
/// Wait until user exits the program
waitKey(0);
return 0;
}
@endcode
Explanation
-----------

@@ -149,6 +108,7 @@ Explanation
   waitKey(0);
   return 0;
   @endcode
Results
-------

@@ -173,8 +133,6 @@ Results
Notice how the number of pixels is more distributed through the intensity range.

@note
Are you wondering how we drew the Histogram figures shown above? Check out the following
tutorial!
@@ -23,8 +23,8 @@ template image (patch).
- We need two primary components:
  -# **Source image (I):** The image in which we expect to find a match to the template image
  -# **Template image (T):** The patch image which will be compared to the source image

  our goal is to detect the highest matching area:
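Mechanically, a hedged sketch of the comparison itself (img and templ stand for **I** and **T**; TM_CCORR_NORMED is just one of the available methods):
@code{.cpp}
Mat result;
matchTemplate( img, templ, result, TM_CCORR_NORMED );    // slide T over I, scoring every position
double minVal, maxVal; Point minLoc, maxLoc;
minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc ); // maxLoc is the best match here
@endcode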
@@ -300,5 +300,3 @@ Results
other possible high matches.

![image](images/Template_Matching_Image_Result.jpg)
@@ -11,12 +11,12 @@ In this tutorial you will learn how to:
Theory
------

The *Canny Edge detector* was developed by John F. Canny in 1986. Also known to many as the
*optimal detector*, the Canny algorithm aims to satisfy three main criteria:
- **Low error rate:** Meaning a good detection of only existent edges.
- **Good localization:** The distance between detected edge pixels and real edge pixels has
  to be minimized.
- **Minimal response:** Only one detector response per edge.

### Steps
@@ -32,8 +32,7 @@ Theory
   \end{bmatrix}\f]

2. Find the intensity gradient of the image. For this, we follow a procedure analogous to Sobel:
   1. Apply a pair of convolution masks (in \f$x\f$ and \f$y\f$ directions):
      \f[G_{x} = \begin{bmatrix}
      -1 & 0 & +1 \\
      -2 & 0 & +2 \\
@@ -44,22 +43,20 @@ Theory
      +1 & +2 & +1
      \end{bmatrix}\f]
   2. Find the gradient strength and direction with:
      \f[\begin{array}{l}
      G = \sqrt{ G_{x}^{2} + G_{y}^{2} } \\
      \theta = \arctan(\dfrac{ G_{y} }{ G_{x} })
      \end{array}\f]
      The direction is rounded to one of four possible angles (namely 0, 45, 90 or 135).
3. *Non-maximum* suppression is applied. This removes pixels that are not considered to be part of
   an edge. Hence, only thin lines (candidate edges) will remain.
4. *Hysteresis*: The final step. Canny uses two thresholds (upper and lower):
   1. If a pixel gradient is higher than the *upper* threshold, the pixel is accepted as an edge.
   2. If a pixel gradient value is below the *lower* threshold, then it is rejected.
   3. If the pixel gradient is between the two thresholds, then it will be accepted only if it is
      connected to a pixel that is above the *upper* threshold.

Canny recommended an *upper*:*lower* ratio between 2:1 and 3:1.
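As a hedged one-call illustration of these thresholds (the 3:1 ratio mirrors the recommendation; the variable names are placeholders):
@code{.cpp}
Mat edges;
Canny( gray, edges, 50, 150, 3 ); // lower = 50, upper = 150 (a 3:1 ratio), Sobel aperture 3
@endcode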
@@ -78,76 +75,8 @@ Code
2. The tutorial code is shown in the lines below. You can also download it from
   [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp)

   @includelineno samples/cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
using namespace cv;
/// Global variables
Mat src, src_gray;
Mat dst, detected_edges;
int edgeThresh = 1;
int lowThreshold;
int const max_lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
char* window_name = "Edge Map";
/*
* @function CannyThreshold
* @brief Trackbar callback - Canny thresholds input with a ratio 1:3
*/
void CannyThreshold(int, void*)
{
/// Reduce noise with a kernel 3x3
blur( src_gray, detected_edges, Size(3,3) );
/// Canny detector
Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );
/// Using Canny's output as a mask, we display our result
dst = Scalar::all(0);
src.copyTo( dst, detected_edges);
imshow( window_name, dst );
}
/* @function main */
int main( int argc, char** argv )
{
/// Load an image
src = imread( argv[1] );
if( !src.data )
{ return -1; }
/// Create a matrix of the same type and size as src (for dst)
dst.create( src.size(), src.type() );
/// Convert the image to grayscale
cvtColor( src, src_gray, COLOR_BGR2GRAY );
/// Create a window
namedWindow( window_name, WINDOW_AUTOSIZE );
/// Create a Trackbar for user to enter threshold
createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );
/// Show the image
CannyThreshold(0, 0);
/// Wait until user exit program by pressing a key
waitKey(0);
return 0;
}
@endcode
Explanation
-----------
@@ -164,11 +93,11 @@ Explanation
   char* window_name = "Edge Map";
   @endcode
   Note the following:
   1. We establish a ratio of lower:upper threshold of 3:1 (with the variable *ratio*)
   2. We set the kernel size to \f$3\f$ (for the Sobel operations to be performed internally by the
      Canny function)
   3. We set a maximum value for the lower Threshold of \f$100\f$.
2. Loads the source image:
   @code{.cpp}
@@ -196,17 +125,17 @@ Explanation
   @endcode
   Observe the following:
   1. The variable to be controlled by the Trackbar is *lowThreshold* with a limit of
      *max_lowThreshold* (which we set to 100 previously)
   2. Each time the Trackbar registers an action, the callback function *CannyThreshold* will be
      invoked.
7. Let's check the *CannyThreshold* function, step by step:
   1. First, we blur the image with a filter of kernel size 3:
      @code{.cpp}
      blur( src_gray, detected_edges, Size(3,3) );
      @endcode
   2. Second, we apply the OpenCV function @ref cv::Canny :
      @code{.cpp}
      Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );
      @endcode
@@ -224,12 +153,12 @@ Explanation
   @code{.cpp}
   dst = Scalar::all(0);
   @endcode
9. Finally, we will use the function @ref cv::Mat::copyTo to map only the areas of the image that are
   identified as edges (on a black background).
   @code{.cpp}
   src.copyTo( dst, detected_edges);
   @endcode
   @ref cv::Mat::copyTo copies the *src* image onto *dst*. However, it will only copy the pixels in the
   locations where they have non-zero values. Since the output of the Canny detector is the edge
   contours on a black background, the resulting *dst* will be black everywhere except at the
   detected edges.
@@ -251,4 +180,3 @@ Result
![image](images/Canny_Detector_Tutorial_Result.jpg)

- Notice how the image is superposed to the black background on the edge regions.
@@ -24,8 +24,8 @@ Theory
3. In this tutorial, we will briefly explore two ways of defining the extra padding (border) for an
   image:
   -# **BORDER_CONSTANT**: Pad the image with a constant value (i.e. black or \f$0\f$)
   -# **BORDER_REPLICATE**: The row or column at the very edge of the original is replicated to
      the extra border.

   This will be seen more clearly in the Code section.
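As a quick hedged preview of both options (a fixed 10-pixel border; the Scalar value only matters in the constant case):
@code{.cpp}
Mat padded;
copyMakeBorder( src, padded, 10, 10, 10, 10, BORDER_CONSTANT, Scalar(0,0,0) ); // solid black frame
copyMakeBorder( src, padded, 10, 10, 10, 10, BORDER_REPLICATE );               // repeat the edge rows/columns
@endcode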
@@ -175,13 +175,13 @@ Explanation
   @endcode
   The arguments are:
   -# *src*: Source image
   -# *dst*: Destination image
   -# *top*, *bottom*, *left*, *right*: Length in pixels of the borders at each side of the image.
      We define them as being 5% of the original size of the image.
   -# *borderType*: Define what type of border is applied. It can be constant or replicate for
      this example.
   -# *value*: If *borderType* is *BORDER_CONSTANT*, this is the value used to fill the border
      pixels.
8. We display our output image in the window created previously
@@ -204,5 +204,3 @@ Results
   option looks:

   ![image](images/CopyMakeBorder_Tutorial_Results.jpg)
@@ -159,15 +159,15 @@ Explanation
   @endcode
   The arguments denote:
   -# *src*: Source image
   -# *dst*: Destination image
   -# *ddepth*: The depth of *dst*. A negative value (such as \f$-1\f$) indicates that the depth is
      the same as the source.
   -# *kernel*: The kernel to be scanned through the image
   -# *anchor*: The position of the anchor relative to its kernel. The location *Point(-1, -1)*
      indicates the center by default.
   -# *delta*: A value to be added to each pixel during the convolution. By default it is \f$0\f$
   -# *BORDER_DEFAULT*: We leave this value at its default (more details in the following tutorial)

7. Our program will run a *while* loop; every 500 ms the kernel size of our filter will be
   updated in the range indicated.
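For reference, a hedged version of the call these arguments belong to, paired with the normalized box kernel the tutorial builds at each step:
@code{.cpp}
Mat kernel = Mat::ones( kernel_size, kernel_size, CV_32F ) / (float)(kernel_size * kernel_size);
filter2D( src, dst, -1, kernel, Point(-1,-1), 0, BORDER_DEFAULT ); // same depth as src, centered anchor
@endcode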
@@ -180,5 +180,3 @@ Results
   the kernel size should change, as can be seen in the series of snapshots below:

   ![image](images/filter_2d_tutorial_result.jpg)
@@ -20,8 +20,8 @@ straight lines. \#. To apply the Transform, first an edge detection pre-processi
1. As you know, a line in the image space can be expressed with two variables. For example:
   -# In the **Cartesian coordinate system:** Parameters: \f$(m,b)\f$.
   -# In the **Polar coordinate system:** Parameters: \f$(r,\theta)\f$

   ![image](images/Hough_Lines_Tutorial_Theory_0.jpg)
@@ -180,7 +180,7 @@ Explanation
   available for this purpose:
3. **Standard Hough Line Transform**
   -# First, you apply the Transform:
      @code{.cpp}
      vector<Vec2f> lines;
      HoughLines(dst, lines, 1, CV_PI/180, 100, 0, 0 );
@@ -196,7 +196,7 @@ Explanation
      - *threshold*: The minimum number of intersections to "*detect*" a line
      - *srn* and *stn*: Default parameters set to zero. Check the OpenCV reference for more info.
   -# And then you display the result by drawing the lines.
      @code{.cpp}
      for( size_t i = 0; i < lines.size(); i++ )
      {
@@ -212,7 +212,7 @@ Explanation
      }
      @endcode
4. **Probabilistic Hough Line Transform**
   -# First you apply the transform:
      @code{.cpp}
      vector<Vec4i> lines;
      HoughLinesP(dst, lines, 1, CV_PI/180, 50, 50, 10 );
@@ -231,7 +231,7 @@ Explanation
      this number of points are disregarded.
      - *maxLineGap*: The maximum gap between two points to be considered in the same line.
   -# And then you display the result by drawing the lines.
      @code{.cpp}
      for( size_t i = 0; i < lines.size(); i++ )
      {
@ -267,4 +267,3 @@ We get the following result by using the Probabilistic Hough Line Transform:
You may observe that the number of lines detected varies as you change the *threshold*. The
explanation is fairly evident: if you set a higher threshold, fewer lines will be detected
(since more points are required to declare a line detected).
@ -206,16 +206,16 @@ Explanation
How do we update our mapping matrices *map_x* and *map_y*? Go on reading:

6. **Updating the mapping matrices:** We are going to perform 4 different mappings:
-# Reduce the picture to half its size and display it in the middle:
\f[h(i,j) = ( 2*i - src.cols/2 + 0.5, 2*j - src.rows/2 + 0.5)\f]
for all pairs \f$(i,j)\f$ such that: \f$\dfrac{src.cols}{4}<i<\dfrac{3 \cdot src.cols}{4}\f$ and
\f$\dfrac{src.rows}{4}<j<\dfrac{3 \cdot src.rows}{4}\f$
-# Turn the image upside down: \f$h( i, j ) = (i, src.rows - j)\f$
-# Reflect the image from left to right: \f$h(i,j) = ( src.cols - i, j )\f$
-# Combination of the two previous mappings: \f$h(i,j) = ( src.cols - i, src.rows - j )\f$

This is expressed in the following snippet (a short sketch of one case is given below). Here,
*map_x* represents the first coordinate of *h(i,j)* and *map_y* the second coordinate.
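
As a hedged sketch of just the upside-down case (assuming *map_x* and *map_y* are CV_32FC1
matrices of the source size; the full snippet is in the sample):
@code{.cpp}
// Sketch for mapping h(i,j) = (i, src.rows - j): flip the image vertically.
// For every destination pixel, map_x/map_y say where to read from in the source.
for( int j = 0; j < src.rows; j++ )
{
    for( int i = 0; i < src.cols; i++ )
    {
        map_x.at<float>(j,i) = (float)i;               // x coordinate is unchanged
        map_y.at<float>(j,i) = (float)(src.rows - j);  // y coordinate is mirrored
    }
}
remap( src, dst, map_x, map_y, INTER_LINEAR, BORDER_CONSTANT, Scalar(0,0,0) );
@endcode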
@ -277,4 +277,3 @@ Result
5. Reflecting it in both directions:

![image](images/Remap_Tutorial_Result_3.jpg)
@ -53,7 +53,7 @@ Theory
Assuming that the image to be operated is \f$I\f$:

1. We calculate two derivatives:
1. **Horizontal changes**: This is computed by convolving \f$I\f$ with a kernel \f$G_{x}\f$ with odd
size. For example for a kernel size of 3, \f$G_{x}\f$ would be computed as:
\f[G_{x} = \begin{bmatrix}
@ -62,7 +62,7 @@ Assuming that the image to be operated is \f$I\f$:
-1 & 0 & +1
\end{bmatrix} * I\f]

2. **Vertical changes**: This is computed by convolving \f$I\f$ with a kernel \f$G_{y}\f$ with odd
size. For example for a kernel size of 3, \f$G_{y}\f$ would be computed as:
\f[G_{y} = \begin{bmatrix}
@ -81,11 +81,10 @@ Assuming that the image to be operated is \f$I\f$:
\f[G = |G_{x}| + |G_{y}|\f]

@note
When the size of the kernel is `3`, the Sobel kernel shown above may produce noticeable
inaccuracies (after all, Sobel is only an approximation of the derivative). OpenCV addresses
this inaccuracy for kernels of size 3 by using the @ref cv::Scharr function. This is as fast
but more accurate than the standard Sobel function. It implements the following kernels:
\f[G_{x} = \begin{bmatrix}
-3 & 0 & +3 \\
-10 & 0 & +10 \\
@ -95,11 +94,11 @@ Assuming that the image to be operated is \f$I\f$:
0 & 0 & 0 \\
+3 & +10 & +3
\end{bmatrix}\f]

@note
You can check out more information about this function in the OpenCV reference (@ref cv::Scharr ).
Also, in the sample code below, you will notice that above the code for the @ref cv::Sobel function
there is also commented-out code for the @ref cv::Scharr function. Uncommenting it (and commenting
out the Sobel code) should give you an idea of how this function works, as sketched below.
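
As a hedged side-by-side sketch (the parameter values mirror those used later in the sample;
only one of the two calls would be active at a time):
@code{.cpp}
// Sobel derivative in x with an explicit 3x3 kernel ...
Sobel( src_gray, grad_x, CV_16S, 1, 0, 3, 1, 0, BORDER_DEFAULT );
// ... or the Scharr equivalent: no kernel-size argument, it is always 3x3.
Scharr( src_gray, grad_x, CV_16S, 1, 0, 1, 0, BORDER_DEFAULT );
@endcode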
Code
----
@ -110,65 +109,8 @@ Code
2. The tutorial's code is shown in the lines below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp)
@includelineno samples/cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp
Explanation
-----------
@ -239,5 +181,3 @@ Results
1. Here is the output of applying our basic detector to *lena.jpg*:

![image](images/Sobel_Derivatives_Tutorial_Result.jpg)
@ -18,9 +18,9 @@ Theory
transformation) followed by a *vector addition* (translation).

2. From the above, we can use an Affine Transformation to express:
-# Rotations (linear transformation)
-# Translations (vector addition)
-# Scale operations (linear transformation)

You can see that, in essence, an Affine Transformation represents a **relation** between two
images.
@ -41,7 +41,7 @@ Theory
\f[\begin{bmatrix}
a_{00} & a_{01} & b_{00} \\
a_{10} & a_{11} & b_{10}
\end{bmatrix}_{2 \times 3}\f]

Considering that we want to transform a 2D vector \f$X = \begin{bmatrix}x \\ y\end{bmatrix}\f$ by
@ -58,8 +58,8 @@ Theory
1. Excellent question. We mentioned that an Affine Transformation is basically a **relation**
between two images. The information about this relation can come, roughly, in two ways:
-# We know both \f$X\f$ and \f$T\f$ and we also know that they are related. Then our job is to
find \f$M\f$ (a short sketch of this case is given below).
-# We know \f$M\f$ and \f$X\f$. To obtain \f$T\f$ we only need to apply \f$T = M \cdot X\f$. Our information
for \f$M\f$ may be explicit (i.e. have the 2-by-3 matrix) or it can come as a geometric relation
between points.
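
For the first case, a minimal hedged sketch (the three point correspondences are arbitrary
illustrative values; *src* and *warp_dst* are assumed to be valid images):
@code{.cpp}
// Sketch: recover the 2x3 affine matrix M from three point correspondences,
// then apply it to a whole image.
Point2f srcTri[3] = { Point2f(0,0), Point2f(1,0), Point2f(0,1) };
Point2f dstTri[3] = { Point2f(0.2f,0.1f), Point2f(1.1f,0.2f), Point2f(0.1f,1.3f) };
Mat warp_mat = getAffineTransform( srcTri, dstTri );       // this is M (2x3, CV_64F)
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );    // computes T = M . X for every pixel
@endcode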
@ -219,9 +219,9 @@ Explanation
7. **Rotate:** To rotate an image, we need to know two things:
-# The center with respect to which the image will rotate
-# The angle to be rotated. In OpenCV a positive angle is counter-clockwise
-# *Optional:* A scale factor

We define these parameters with the following snippet:
@code{.cpp}
@ -269,5 +269,3 @@ Result
factor, we get:

![image](images/Warp_Affine_Tutorial_Result_Warp_Rotate.jpg)
@ -12,8 +12,7 @@ In this tutorial you will learn how to:
Theory
------

@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.

- Usually we need to convert an image to a size different than its original. For this, there are
two possible options:
@ -53,161 +52,109 @@ container:: enumeratevisibleitemswithsquare
predecessor. Iterating this process on the input image \f$G_{0}\f$ (original image) produces the
entire pyramid.

- The procedure above was useful to downsample an image. What if we want to make it bigger?:

  - First, upsize the image to twice the original in each dimension, with the new even rows and
    columns filled with zeros (\f$0\f$)
  - Perform a convolution with the same kernel shown above (multiplied by 4) to approximate the
    values of the "missing pixels"

- These two procedures (downsampling and upsampling as explained above) are implemented by the
OpenCV functions @ref cv::pyrUp and @ref cv::pyrDown , as we will see in an example with the
code below:
@note When we reduce the size of an image, we are actually *losing* information of the image.

Code
----

This tutorial's code is shown in the lines below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgProc/Pyramids.cpp)
@includelineno samples/cpp/tutorial_code/ImgProc/Pyramids.cpp
Explanation
-----------
Let's check the general structure of the program:

- Load an image (in this case it is defined in the program, the user does not have to enter it
  as an argument)
  @code{.cpp}
  /// Test image - Make sure it's divisible by 2^{n}
  src = imread( "../images/chicky_512.jpg" );
  if( !src.data )
    { printf(" No data! -- Exiting the program \n");
      return -1; }
  @endcode

- Create a Mat object to store the result of the operations (*dst*) and one to save temporary
  results (*tmp*).
  @code{.cpp}
  Mat src, dst, tmp;
  /* ... */
  tmp = src;
  dst = tmp;
  @endcode

- Create a window to display the result
  @code{.cpp}
  namedWindow( window_name, WINDOW_AUTOSIZE );
  imshow( window_name, dst );
  @endcode

- Perform an infinite loop waiting for user input.
  @code{.cpp}
  while( true )
  {
    int c;
    c = waitKey(10);

    if( (char)c == 27 )
      { break; }
    if( (char)c == 'u' )
      { pyrUp( tmp, dst, Size( tmp.cols*2, tmp.rows*2 ) );
        printf( "** Zoom In: Image x 2 \n" );
      }
    else if( (char)c == 'd' )
      { pyrDown( tmp, dst, Size( tmp.cols/2, tmp.rows/2 ) );
        printf( "** Zoom Out: Image / 2 \n" );
      }

    imshow( window_name, dst );
    tmp = dst;
  }
  @endcode

  Our program exits if the user presses *ESC*. Besides, it has two options:

  - **Perform upsampling (after pressing 'u')**
    @code{.cpp}
    pyrUp( tmp, dst, Size( tmp.cols*2, tmp.rows*2 ) );
    @endcode
    We use the function @ref cv::pyrUp with three arguments:

    - *tmp*: The current image, it is initialized with the *src* original image.
    - *dst*: The destination image (to be shown on screen, presumably double the size of the
      input image)
    - *Size( tmp.cols\*2, tmp.rows\*2 )*: The destination size. Since we are upsampling,
      @ref cv::pyrUp expects a size double that of the input image (in this case *tmp*).

  - **Perform downsampling (after pressing 'd')**
    @code{.cpp}
    pyrDown( tmp, dst, Size( tmp.cols/2, tmp.rows/2 ) );
    @endcode
    As with @ref cv::pyrUp , we use the function @ref cv::pyrDown with three arguments:

    - *tmp*: The current image, it is initialized with the *src* original image.
    - *dst*: The destination image (to be shown on screen, presumably half the size of the
      input image)
    - *Size( tmp.cols/2, tmp.rows/2 )*: The destination size. Since we are downsampling,
      @ref cv::pyrDown expects a size half that of the input image (in this case *tmp*).

  - Notice that it is important that the input image can be divided by a factor of two (in
    both dimensions). Otherwise, an error will be shown.

  - Finally, we update the input image **tmp** with the current image displayed, so the
    subsequent operations are performed on it.
    @code{.cpp}
    tmp = dst;
    @endcode
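
As a hedged aside (not part of the sample), the size restriction mentioned above could be
guarded against before zooming, for example:
@code{.cpp}
// Sketch: only zoom out if both dimensions are even, so the requested
// pyrDown destination size matches exactly.
if( tmp.cols % 2 == 0 && tmp.rows % 2 == 0 )
  { pyrDown( tmp, dst, Size( tmp.cols/2, tmp.rows/2 ) ); }
else
  { printf( " Image size is not divisible by 2 -- skipping zoom \n" ); }
@endcode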
Results
-------
@ -226,5 +173,3 @@ Results
is now:

![image](images/Pyramids_Tutorial_PyrUp_Result.jpg)
@ -16,67 +16,8 @@ Code
This tutorial's code is shown in the lines below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ShapeDescriptors/hull_demo.cpp)
@includelineno samples/cpp/tutorial_code/ShapeDescriptors/hull_demo.cpp
Explanation
-----------
@ -84,10 +25,7 @@ Explanation
Result
------

Here it is:

![Original](images/Hull_Original_Image.jpg)
![Result](images/Hull_Result.jpg)
@ -13,7 +13,7 @@ This tutorial assumes you have the following software installed and configured:
- Eclipse IDE
- ADT and CDT plugins for Eclipse

If you need help with anything of the above, you may refer to our @ref tutorial_android_dev_intro guide.

If you encounter any error after thoroughly following these steps, feel free to contact us via
[OpenCV4Android](https://groups.google.com/group/android-opencv/) discussion group or OpenCV [Q&A
@ -27,9 +27,9 @@ Pack](http://developer.nvidia.com/tegra-android-development-pack) (**TADP**) rel
for Android development environment setup.

Besides Android development tools, TADP 2.0 includes the OpenCV4Android SDK, so it may already be
installed in your system, in which case you can skip to the @ref tutorial_O4A_SDK_samples "samples" section of this tutorial.

More details regarding TADP can be found in the @ref tutorial_android_dev_intro guide.
General info
------------
@ -71,20 +71,21 @@ The structure of package contents looks as follows:
On production devices that have access to Google Play Market (and Internet) these packages will
be installed from Market on the first start of an application using OpenCV Manager API. But
devkits without Market or Internet connection require these packages to be installed manually.
Install the Manager.apk and the optional binary_pack.apk if needed. See `Manager Selection`
for details.

@note Installation from Internet is the preferred way since the OpenCV team may publish updated
versions of these packages on the Market.

- `samples` folder contains sample applications projects
and their prebuilt packages (APK). Import them into Eclipse workspace (as described below) and
browse the code to learn possible ways of using OpenCV on Android.
- `doc` folder contains various OpenCV documentation in PDF format. It's also available online at
<http://docs.opencv.org>.
@note The most recent docs (nightly build) are at <http://docs.opencv.org/2.4>. Generally, it's more
up-to-date, but can refer to not-yet-released functionality.

@todo I'm not sure that this is the best place to talk about OpenCV Manager

Starting from version 2.4.3 OpenCV4Android SDK uses OpenCV Manager API for library initialization.
OpenCV Manager is an Android service based solution providing the following benefits for OpenCV
@ -98,8 +99,8 @@ applications developers:
For additional information on OpenCV Manager see the:

- [Slides](https://docs.google.com/a/itseez.com/presentation/d/1EO_1kijgBg_BsjNp2ymk-aarg-0K279_1VZRcPplSuk/present#slide=id.p)
- [Reference Manual](http://docs.opencv.org/android/refman.html)

Manual OpenCV4Android SDK setup
-------------------------------
@ -108,21 +109,23 @@ Manual OpenCV4Android SDK setup
1. Go to the [OpenCV download page on
SourceForge](http://sourceforge.net/projects/opencvlibrary/files/opencv-android/) and download
the latest available version. Currently it's [OpenCV-2.4.9-android-sdk.zip](http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.4.9/OpenCV-2.4.9-android-sdk.zip/download).

2. Create a new folder for Android with OpenCV development. For this tutorial we have unpacked
OpenCV SDK to the `C:\Work\OpenCV4Android\` directory.
@note Better to use a path without spaces in it. Otherwise you may have problems with ndk-build.

3. Unpack the SDK archive into the chosen directory.

You can unpack it using any popular archiver (e.g. with 7-Zip):

![image](images/android_package_7zip.png)

On Unix you can use the following command:
@code{.bash}
unzip ~/Downloads/OpenCV-2.4.9-android-sdk.zip
@endcode
### Import OpenCV library and samples into Eclipse

1. Start Eclipse and choose your workspace location.
@ -143,23 +146,23 @@ unzip ~/Downloads/OpenCV-2.4.9-android-sdk.zip
already references OpenCV library. Follow the steps below to import OpenCV and samples into the
workspace:

@note OpenCV samples are indeed **dependent** on OpenCV library project so don't forget to import it to your workspace as well.

- Right click on the Package Explorer window and choose Import... option from the context
menu:

![image](images/eclipse_5_import_command.png)

- In the main panel select General --\> Existing Projects into Workspace and press Next
button:

![image](images/eclipse_6_import_existing_projects.png)

- In the Select root directory field locate your OpenCV package folder. Eclipse should
automatically locate OpenCV library and samples:

![image](images/eclipse_7_select_projects.png)

- Click Finish button to complete the import operation.

After clicking Finish button Eclipse will load all selected projects into workspace, and you
have to wait some time while it is building OpenCV samples. Just give a minute to Eclipse to
@ -171,12 +174,13 @@ unzip ~/Downloads/OpenCV-2.4.9-android-sdk.zip
![image](images/eclipse_10_crystal_clean.png)

@anchor tutorial_O4A_SDK_samples
### Running OpenCV Samples

At this point you should be able to build and run the samples. Keep in mind that face-detection and
Tutorial 2 - Mixed Processing include some native code and require Android NDK and NDK/CDT plugin
for Eclipse to build working applications. If you haven't installed these tools, see the
corresponding section of @ref tutorial_android_dev_intro.

**warning**
@ -185,21 +189,23 @@ Please consider that some samples use Android Java Camera API, which is accessib
emulator.

@note Recent *Android SDK tools, revision 19+* can run ARM v7a OS images but they are not available for
all Android versions.

Well, running samples from Eclipse is very simple:

- Connect your device with adb tool from Android SDK or create an emulator with camera support.
- See [Managing Virtual Devices](http://developer.android.com/guide/developing/devices/index.html) document for help
with Android Emulator.
- See [Using Hardware Devices](http://developer.android.com/guide/developing/device.html) for
help with real devices (not emulators).
- Select project you want to start in Package Explorer and just press Ctrl + F11 or select option
Run --\> Run from the main menu, or click Run button on the toolbar.
@note Android Emulator can take several minutes to start. So, please, be patient.

- On the first run Eclipse will ask you about the running mode for your application:

![image](images/eclipse_11_run_as.png)

- Select the Android Application option and click OK button. Eclipse will install and run the
sample.
@ -215,18 +221,17 @@ run Eclipse will ask you about the running mode for your application:
device/emulator. It will redirect you to the corresponding page on *Google Play Market*.

If you have no access to the *Market*, which is often the case with emulators - you will need to
install the packages from OpenCV4Android SDK folder manually. See `Manager Selection` for
details.
@code{.sh}
<Android SDK path>/platform-tools/adb install <OpenCV4Android SDK path>/apk/OpenCV_2.4.9_Manager_2.18_armv7a-neon.apk
@endcode
@note armeabi, armv7a-neon, arm7a-neon-android8, mips and x86 stand for platform targets:
- armeabi is for ARM v5 and ARM v6 architectures with Android API 8+,
- armv7a-neon is for NEON-optimized ARM v7 with Android API 9+,
- arm7a-neon-android8 is for NEON-optimized ARM v7 with Android API 8,
- mips is for MIPS architecture with Android API 9+,
- x86 is for Intel x86 CPUs with Android API 9+.

If using a hardware device for testing/debugging, run the following command to learn its CPU
architecture:
@code{.sh}
@ -236,9 +241,9 @@ run Eclipse will ask you about the running mode for your application:
Click Edit in the context menu of the selected device. In the window which then pops up, find
the CPU field.

You may also see section `Manager Selection` for details.

When done, you will be able to run OpenCV samples on your device/emulator seamlessly.

- Here is Sample - image-manipulations sample, running on top of stock camera-preview of the
emulator.
@ -249,6 +254,4 @@ What's next
-----------

Now that you have your instance of OpenCV4Android SDK set up and configured, you may want to proceed
to using OpenCV in your own application. You can learn how to do that in a separate @ref tutorial_dev_with_OCV_on_Android tutorial.
@ -247,6 +247,7 @@ APP_ABI := all
@note We recommend setting APP_ABI := all for all targets. If you want to specify the target
explicitly, use armeabi for ARMv5/ARMv6, armeabi-v7a for ARMv7, x86 for Intel Atom or mips for MIPS.

@anchor tutorial_android_dev_intro_ndk
Building application native part from command line
--------------------------------------------------
@ -284,6 +285,8 @@ build tool).
@code{.bash}
<path_where_NDK_is_placed>/ndk-build -B
@endcode

@anchor tutorial_android_dev_intro_eclipse
Building application native part from *Eclipse* (CDT Builder)
-------------------------------------------------------------
@ -331,9 +334,9 @@ to apply the changes.
4. Open Project Properties -\> C/C++ Build, uncheck Use default build command, replace "Build
command" text from "make" to
"${NDKROOT}/ndk-build.cmd" on Windows,
"${NDKROOT}/ndk-build" on Linux and MacOS.

![image](images/eclipse_cdt_cfg4.png)
@ -438,7 +441,7 @@ instructions](http://developer.android.com/tools/device.html) for more informati
![image](images/usb_device_connect_08.png)

![image](images/usb_device_connect_09.png)
@ -514,5 +517,4 @@ What's next
-----------

Now that you have your development environment set up and configured, you may want to proceed to
installing OpenCV4Android SDK. You can learn how to do that in a separate @ref tutorial_O4A_SDK tutorial.
@ -13,11 +13,11 @@ This tutorial assumes you have the following installed and configured:
- Eclipse IDE
- ADT and CDT plugins for Eclipse

If you need help with anything of the above, you may refer to our @ref tutorial_android_dev_intro guide.

This tutorial also assumes you have OpenCV4Android SDK already installed on your development machine
and OpenCV Manager on your testing device correspondingly. If you need help with any of these, you
may consult our @ref tutorial_O4A_SDK tutorial.

If you encounter any error after thoroughly following these steps, feel free to contact us via
[OpenCV4Android](https://groups.google.com/group/android-opencv/) discussion group or OpenCV [Q&A
@ -28,7 +28,7 @@ Using OpenCV Library Within Your Android Project
In this section we will explain how to make an existing project use OpenCV. Starting with the 2.4.2
release for Android, *OpenCV Manager* is used to provide apps with the best available version of
OpenCV. You can get more information here: `Android OpenCV Manager` and in these
[slides](https://docs.google.com/a/itseez.com/presentation/d/1EO_1kijgBg_BsjNp2ymk-aarg-0K279_1VZRcPplSuk/present#slide=id.p).

### Java
@ -52,7 +52,7 @@ Manager to access OpenCV libraries externally installed in the target system.
In most cases OpenCV Manager may be installed automatically from Google Play. For cases when
Google Play is not available, i.e. an emulator, developer board, etc., you can install it manually
using the adb tool. See `Manager Selection` for details.

Below is a very basic code snippet implementing the async initialization. It shows the basic
principles. See the "15-puzzle" OpenCV sample for details.
@ -118,7 +118,7 @@ described above.
In case of the application project **with a JNI part**, instead of manual libraries copying you
need to modify your Android.mk file: add the following two code lines after the
"include $(CLEAR_VARS)" and before
"include path_to_OpenCV-2.4.9-android-sdk/sdk/native/jni/OpenCV.mk"
@code{.make}
OPENCV_CAMERA_MODULES:=on
@ -160,6 +160,7 @@ described above.
}
}
@endcode

### Native/C++

To build your own Android application, using OpenCV as native part, the following steps should be
@ -184,12 +185,13 @@ taken:
4. Several variables can be used to customize OpenCV stuff, but you **don't need** to use them when
your application uses the async initialization via the OpenCV Manager API.

@note These variables should be set **before** the "include .../OpenCV.mk" line:
@code{.make}
OPENCV_INSTALL_MODULES:=on
@endcode

Copies necessary OpenCV dynamic libs to the project libs folder in order to include them
into the APK.
@code{.make}
OPENCV_CAMERA_MODULES:=off
@endcode
@ -200,7 +202,7 @@ Copies necessary OpenCV dynamic libs to the project libs folder in order to incl
Perform static linking with OpenCV. By default dynamic link is used and the project JNI lib
depends on libopencv_java.so.

5. The file `Application.mk` should exist and should contain lines:
@code{.make}
APP_STL := gnustl_static
APP_CPPFLAGS := -frtti -fexceptions
@ -218,8 +220,10 @@ Copies necessary OpenCV dynamic libs to the project libs folder in order to incl
@code{.make}
APP_PLATFORM := android-9
@endcode

6. Either use @ref tutorial_android_dev_intro_ndk "manual" ndk-build invocation or
@ref tutorial_android_dev_intro_eclipse "setup Eclipse CDT Builder" to build native JNI lib
before (re)building the Java part and creating an APK.
Hello OpenCV Sample
@ -246,7 +250,7 @@ application. It will be capable of accessing camera output, processing it and di
xmlns:opencv="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="match_parent" >

<org.opencv.android.JavaCameraView
android:layout_width="fill_parent"
android:layout_height="fill_parent"
@ -254,7 +258,7 @@ application. It will be capable of accessing camera output, processing it and di
android:id="@+id/HelloOpenCvView"
opencv:show_fps="true"
opencv:camera_id="any" />
</LinearLayout>
@endcode

8. Add the following permissions to the `AndroidManifest.xml` file:
@ -293,7 +297,7 @@ application. It will be capable of accessing camera output, processing it and di
}
}
};

@Override
public void onResume()
{
@ -365,7 +369,8 @@ function and it is called on retrieving frame from camera. The callback input is
CvCameraViewFrame class that represents frame from camera.

@note Do not save or use CvCameraViewFrame object out of onCameraFrame callback. This object does
not have its own state and its behavior out of callback is unpredictable!

It has rgba() and gray() methods that allow getting the frame as an RGBA or a single-channel
gray scale Mat respectively. It expects that the onCameraFrame function returns an RGBA frame
that will be drawn on the screen.
@ -61,7 +61,7 @@ The available [installation guide](https://github.com/technomancy/leiningen#inst
easy to follow:

1. [Download the script](https://raw.github.com/technomancy/leiningen/stable/bin/lein)
2. Place it on your $PATH (cf. \~/bin is a good choice if it is on your path.)
3. Set the script to be executable. (i.e. chmod 755 \~/bin/lein).

If you work on Windows, follow [these instructions](https://github.com/technomancy/leiningen#windows)
@ -6,8 +6,8 @@ Android development. This guide will help you to create your first Java (or Scal
OpenCV. We will use either [Apache Ant](http://ant.apache.org/) or [Simple Build Tool
(SBT)](http://www.scala-sbt.org/) to build the application.

If you want to use Eclipse head to @ref tutorial_java_eclipse. For further reading after this guide, look at
the @ref tutorial_android_dev_intro tutorials.

What we'll do in this guide
---------------------------
@ -58,19 +58,23 @@ or
@code{.bat}
cmake -DBUILD_SHARED_LIBS=OFF -G "Visual Studio 10" ..
@endcode

@note When OpenCV is built as a set of **static** libraries (-DBUILD_SHARED_LIBS=OFF option) the
Java bindings dynamic library is all-sufficient, i.e. doesn't depend on other OpenCV libs, but
includes all the OpenCV code inside.

Examine the output of CMake and ensure java is one of the
modules "To be built". If not, it's likely you're missing a dependency. You should troubleshoot by
looking through the CMake output for any Java-related tools that aren't found and installing them.

![image](images/cmake_output.png)

@note If CMake can't find Java in your system set the JAVA_HOME environment variable with the path to installed JDK before running it. E.g.:
@code{.bash}
export JAVA_HOME=/usr/lib/jvm/java-6-oracle
cmake -DBUILD_SHARED_LIBS=OFF ..
@endcode

Now start the build:
@code{.bash}
make -j8
@ -87,72 +91,23 @@ Java sample with Ant
--------------------

@note The described sample is provided with OpenCV library in the `opencv/samples/java/ant`
folder.

- Create a folder where you'll develop this sample application.

- In this folder create the `build.xml` file with the following content using any text editor:
@include samples/java/ant/build.xml

@note This XML file can be reused for building other Java applications. It describes a common folder structure in the lines 3 - 12 and common targets for compiling and running the application.
When reusing this XML don't forget to modify the project name in the line 1, that is also the
name of the main class (line 14). The paths to OpenCV jar and jni lib are expected as parameters
("${ocvJarDir}" in line 5 and "${ocvLibDir}" in line 37), but you can hardcode these paths for
your convenience. See [Ant documentation](http://ant.apache.org/manual/) for detailed
description of its build file format.
- Create an `src` folder next to the `build.xml` file and a `SimpleSample.java` file in it.

- Put the following Java code into the `SimpleSample.java` file:
@code{.java}
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.CvType;
@ -175,20 +130,18 @@ folder. \* Create a folder where you'll develop this sample application.
}
@endcode
- Run the following command in console in the folder containing `build.xml`:
@code{.bash}
ant -DocvJarDir=path/to/dir/containing/opencv-244.jar -DocvLibDir=path/to/dir/containing/opencv_java244/native/library
@endcode
For example:
@code{.bat}
ant -DocvJarDir=X:\opencv-2.4.4\bin -DocvLibDir=X:\opencv-2.4.4\bin\Release
@endcode
The command should initiate [re]building and running the sample. You should see on the
screen something like this:

![image](images/ant_output.png)
SBT project for Java and Scala
------------------------------
@ -370,4 +323,3 @@ It should also write the following image to `faceDetection.png`:
You're done! Now you have a sample Java application working with OpenCV, so you can start the work
on your own. We wish you good luck and many years of joyful life!

File diff suppressed because one or more lines are too long

View File

@ -12,10 +12,12 @@ Required Packages
Launch GIT client and clone OpenCV repository from [here](http://github.com/itseez/opencv)

In MacOS it can be done using the following command in Terminal:
@code{.bash}
cd ~/<my_working_directory>
git clone https://github.com/Itseez/opencv.git
@endcode
Building OpenCV from Source, using CMake and Command Line
---------------------------------------------------------
@ -24,11 +26,13 @@ Building OpenCV from Source, using CMake and Command Line
cd /
sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
@endcode
2. Build OpenCV framework:
   @code{.bash}
   cd ~/<my_working_directory>
   python opencv/platforms/ios/build_framework.py ios
   @endcode

If everything's fine, a few minutes later you will get
\~/\<my_working_directory\>/ios/opencv2.framework. You can add this framework to your Xcode
projects.
@ -36,5 +40,4 @@ projects.
Further Reading
---------------

You can find several OpenCV+iOS tutorials here: @ref tutorial_table_of_content_ios.

View File

@ -11,7 +11,7 @@ Configuring Eclipse
-------------------
First, obtain a fresh release of OpenCV [from download page](http://opencv.org/downloads.html) and
extract it under a simple location like `C:\OpenCV-2.4.6\`. I am using version 2.4.6, but the steps
are more or less the same for other versions.

Now, we will define OpenCV as a user library in Eclipse, so we can reuse the configuration for any
@ -31,12 +31,12 @@ Now select your new user library and click Add External JARs....
![image](images/4-add-external-jars.png)

Browse through `C:\OpenCV-2.4.6\build\java\` and select opencv-246.jar. After adding the jar,
extend the opencv-246.jar and select Native library location and press Edit....

![image](images/5-native-library.png)

Select External Folder... and browse to select the folder `C:\OpenCV-2.4.6\build\java\x64`. If you
have a 32-bit system you need to select the x86 folder instead of x64.

![image](images/6-external-folder.png)
@ -86,4 +86,3 @@ When you run the code you should see 3x3 identity matrix as output.
That is it, whenever you start a new project just add the OpenCV user library that you have defined
to your project and you are good to go. Enjoy your powerful, less painful development environment :)

View File

@ -1,17 +1,16 @@
Using OpenCV with Eclipse (plugin CDT) {#tutorial_linux_eclipse}
======================================

Prerequisites
-------------
Two ways, one by forming a project directly, and another by CMake.
1. Having installed [Eclipse](http://www.eclipse.org/) in your workstation (only the CDT plugin for
   C/C++ is needed). You can follow these steps:
   - Go to the Eclipse site
   - Download [Eclipse IDE for C/C++
     Developers](http://www.eclipse.org/downloads/packages/eclipse-ide-cc-developers/heliossr2).
     Choose the link according to your workstation.
2. Having installed OpenCV. If not yet, go @ref tutorial_linux_install "here".
Making a project
----------------
@ -75,46 +74,47 @@ Making a project
- Go to **Project--\>Properties**
- In **C/C++ Build**, click on **Settings**. At the right, choose the **Tool Settings** Tab.
  Here we will enter the headers and libraries info:
  -# In **GCC C++ Compiler**, go to **Includes**. In **Include paths (-I)** you should
     include the path of the folder where opencv was installed. In our example, this is
     /usr/local/include/opencv.

     ![image](images/a9.png)
     @note If you do not know where your opencv files are, open the **Terminal** and type:
     @code{.bash}
     pkg-config --cflags opencv
     @endcode
     For instance, that command gave me this output:
     @code{.bash}
     -I/usr/local/include/opencv -I/usr/local/include
     @endcode

  -# Now go to **GCC C++ Linker**, there you have to fill two spaces:

     First in **Library search path (-L)** you have to write the path to where the opencv libraries
     reside, in my case the path is:

         /usr/local/lib

     Then in **Libraries(-l)** add the OpenCV libraries that you may need. Usually just the first 3
     on the list below are enough (for simple applications). In my case, I am putting all of them
     since I plan to use the whole bunch:

         opencv_core opencv_imgproc opencv_highgui opencv_ml opencv_video opencv_features2d
         opencv_calib3d opencv_objdetect opencv_contrib opencv_legacy opencv_flann

     ![image](images/a10.png)

     If you don't know where your libraries are (or you are just psychotic and want to make sure
     the path is fine), type in **Terminal**:
     @code{.bash}
     pkg-config --libs opencv
     @endcode
     My output (in case you want to check) was:
     @code{.bash}
     -L/usr/local/lib -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_ml -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_objdetect -lopencv_contrib -lopencv_legacy -lopencv_flann
     @endcode

     Now you are done. Click **OK**
- Your project should be ready to be built. For this, go to **Project-\>Build all**
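
If you want a quick source file to confirm that the include and linker settings work, a minimal
sketch such as this (the file name is just an example) needs only the opencv_core library:
@code{.cpp}
// version_check.cpp -- illustrative example, links against opencv_core only
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // CV_VERSION is a string macro provided by the OpenCV headers
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;

    // Build and print a small identity matrix to exercise the library
    cv::Mat eye = cv::Mat::eye(3, 3, CV_64F);
    std::cout << eye << std::endl;
    return 0;
}
@endcode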
@ -171,7 +171,7 @@ int main ( int argc, char **argv )
}
@endcode
1. Create a build directory, say, under *foo*: `mkdir build`. Then `cd build`.
2. Put a `CMakeLists.txt` file in build:
@code{.bash}
PROJECT( helloworld_proj )
FIND_PACKAGE( OpenCV REQUIRED )
@ -180,21 +180,20 @@ TARGET_LINK_LIBRARIES( helloworld ${OpenCV_LIBS} )
@endcode
1. Run: `cmake-gui ..` and make sure you fill in where opencv was built.
2. Then click configure and then generate. If it's OK, **quit cmake-gui**
3. Run `make -j4` (the -j4 is optional, it just tells the compiler to build in 4 threads). Make
   sure it builds.
4. Start eclipse. Put the workspace in some directory but **not** in foo or `foo/build`
5. Right click in the Project Explorer section. Select Import and then open the C/C++ filter.
   Choose *Existing Code* as a Makefile Project.
6. Name your project, say *helloworld*. Browse to the Existing Code location `foo/build` (where
   you ran your cmake-gui from). Select *Linux GCC* in the *"Toolchain for Indexer Settings"* and
   press *Finish*.
7. Right click in the Project Explorer section. Select Properties. Under C/C++ Build, set the
   *build directory:* from something like `${workspace_loc:/helloworld}` to
   `${workspace_loc:/helloworld}/build` since that's where you are building to.
   -# You can also optionally modify the Build command: from make to something like
      `make VERBOSE=1 -j4`, which tells make to print the full build commands (handy for
      debugging the build) and to compile in 4 parallel threads.
8. Done!

View File

@ -127,10 +127,9 @@ Building OpenCV from Source Using CMake
@code{.bash}
<cmake_build_dir>/bin/opencv_test_core
@endcode

@note
If the size of the created library is a critical issue (like in case of an Android build) you
can use the install/strip command to get the smallest size possible. The *stripped* version
appears to be twice as small. However, we do not recommend using this unless those extra
megabytes do really matter.

View File

@ -3,7 +3,7 @@ Load, Modify, and Save an Image {#tutorial_load_save_image}
@note
We assume that by now you know how to load an image using @ref cv::imread and to display it in a
window (using @ref cv::imshow ). Read the @ref tutorial_display_image tutorial otherwise.
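
As a quick refresher, a minimal sketch of that prerequisite (the image path is only a placeholder):
@code{.cpp}
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Load an image from disk; the path below is an example placeholder
    cv::Mat image = cv::imread("example.jpg");
    if (image.empty())
        return -1; // loading failed

    // Display it in a window and wait for a key press
    cv::imshow("Display window", image);
    cv::waitKey(0);
    return 0;
}
@endcode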
Goals
-----
@ -103,4 +103,3 @@ And if you check in your folder (in my case *images*), you should have a newly .
![image](images/Load_Save_Image_Result_2.jpg)

Congratulations, you are done with this tutorial!

View File

@ -6,13 +6,13 @@ relatively modern version of Windows OS. If you encounter errors after following
below, feel free to contact us via our [OpenCV Q&A forum](http://answers.opencv.org). We'll do our
best to help you out.
@note To use the OpenCV library you have two options: @ref tutorial_windows_install_prebuilt or
@ref tutorial_windows_install_build. While the first one is easier to complete, it only works if you are coding
with the latest Microsoft Visual Studio IDE and doesn't take advantage of the most advanced
technologies we integrate into our library.

Installation by Using the Pre-built Libraries {#tutorial_windows_install_prebuilt}
=============================================
1. Launch a web browser of choice and go to our [page on
   Sourceforge](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/).
@ -22,14 +22,13 @@ Installation by Using the Pre-built Libraries
![image](images/OpenCV_Install_Directory.png)

5. To finalize the installation go to the @ref tutorial_windows_install_path section.

Installation by Making Your Own Libraries from the Source Files {#tutorial_windows_install_build}
===============================================================

You may find the content of this tutorial also inside the following videos:
[Part 1](https://www.youtube.com/watch?v=NnovZ1cTlMs) and [Part 2](https://www.youtube.com/watch?v=qGNWMcfWwPU), hosted on YouTube.
\htmlonly
<div align="center">
@ -37,6 +36,7 @@ You may find the content of this tutorial also inside the following videos: [Par
<iframe title="Install OpenCV by using its source files - Part 2" width="560" height="349" src="http://www.youtube.com/embed/qGNWMcfWwPU?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe> <iframe title="Install OpenCV by using its source files - Part 2" width="560" height="349" src="http://www.youtube.com/embed/qGNWMcfWwPU?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div> </div>
\endhtmlonly \endhtmlonly
**warning** **warning**
These videos above are long-obsolete and contain inaccurate information. Be careful, since These videos above are long-obsolete and contain inaccurate information. Be careful, since
@ -50,10 +50,10 @@ Building the OpenCV library from scratch requires a couple of tools installed be
- An IDE of choice (preferably), or just a C/C++ compiler that will actually make the binary files.
  Here we will use the [Microsoft Visual Studio](https://www.microsoft.com/visualstudio/en-us).
  However, you can use any other IDE that has a valid C/C++ compiler.
- [CMake](http://www.cmake.org/cmake/resources/software.html), which is a neat tool to make the project files (for your chosen IDE) from the OpenCV
  source files. It will also allow an easy configuration of the OpenCV build files, in order to
  make binary files that fit your needs exactly.
- Git to acquire the OpenCV source files. A good tool for this is [TortoiseGit](http://code.google.com/p/tortoisegit/wiki/Download). Alternatively,
  you can just download an archived version of the source files from our [page on
  Sourceforge](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/)
@ -62,35 +62,35 @@ Nevertheless, there is a couple of tools, libraries made by 3rd parties that off
the OpenCV may take advantage. These will improve its capabilities in many ways. In order to use any
of them, you need to download and install them on your system.
- The [Python libraries](http://www.python.org/downloads/) are required to build the *Python interface* of OpenCV. For now use the
  version `2.7.{x}`. This is also a must if you want to build the *OpenCV documentation*.
- [Numpy](http://numpy.scipy.org/) is a scientific computing package for Python. Required for the *Python interface*.
- [Intel Threading Building Blocks (*TBB*)](http://threadingbuildingblocks.org/file.php?fid=77) is used inside OpenCV for parallel code
  snippets. Using this will make sure that the OpenCV library will take advantage of all the cores
  you have in your system's CPU.
- [Intel Integrated Performance Primitives (*IPP*)](http://software.intel.com/en-us/articles/intel-ipp/) may be used to improve the performance
  of color conversion, Haar training and DFT functions of the OpenCV library. Watch out, since
  this isn't a free service.
- [Intel IPP Asynchronous C/C++](http://software.intel.com/en-us/intel-ipp-preview) is currently focused on delivering Intel Graphics
  support for advanced image processing and computer vision functions.
- OpenCV offers a somewhat fancier and more useful graphical user interface than the default one
  by using the [Qt framework](http://qt.nokia.com/downloads). For a quick overview of what this has to offer, look into the
  documentation's *highgui* module, under the *Qt New Functions* section. Version 4.6 or later of
  the framework is required.
- [Eigen](http://eigen.tuxfamily.org/index.php?title=Main_Page#Download) is a C++ template library for linear algebra.
- The latest [CUDA Toolkit](http://developer.nvidia.com/cuda-downloads) will allow you to use the power lying inside your GPU. This will
  drastically improve performance for some algorithms (e.g. the HOG descriptor). Getting more and
  more of our algorithms to work on the GPUs is a constant effort of the OpenCV team.
- [OpenEXR](http://www.openexr.com/downloads.html) source files are required for the library to work with this high dynamic range (HDR)
  image file format.
- The [OpenNI Framework](http://www.openni.org/) contains a set of open source APIs that provide support for natural
  interaction with devices via methods such as voice command recognition, hand gestures and body
  motion tracking.
- [Miktex](http://miktex.org/2.9/setup) is the best [TEX](https://secure.wikimedia.org/wikipedia/en/wiki/TeX) implementation on
  the Windows OS. It is required to build the *OpenCV documentation*.
- [Sphinx](http://sphinx.pocoo.org/) is a python documentation generator and is the tool that will actually create the
  *OpenCV documentation*. This on its own requires a couple of tools installed; we will cover this
  in depth at the @ref tutorial_windows_install_sphinx "How to Install Sphinx" section.
Now we will describe the steps to follow for a full build (using all the above frameworks, tools and
libraries). If you do not need the support for some of these you can just freely skip this section.
@ -99,31 +99,32 @@ libraries). If you do not need the support for some of these you can just freely
1. Make sure you have a working IDE with a valid compiler. In case of the Microsoft Visual Studio
   just install it and make sure it starts up.
2. Install [CMake](http://www.cmake.org/cmake/resources/software.html). Simply follow the wizard, no need to add it to the path. The default install
   options are OK.
3. Download and install an up-to-date version of msysgit from its [official
   site](http://code.google.com/p/msysgit/downloads/list). There is also the portable version,
   which you need only to unpack to get access to the console version of Git. Supposing that for
   some of us it could be quite enough.
4. Install [TortoiseGit](http://code.google.com/p/tortoisegit/wiki/Download). Choose the 32 or 64 bit version according to the type of OS you work in.
   While installing, locate your msysgit (if it doesn't do that automatically). Follow the
   wizard -- the default options are OK for the most part.
5. Choose a directory in your file system, where you will download the OpenCV libraries to. I
   recommend creating a new one that has a short path and no special characters in it, for example
   `D:/OpenCV`. For this tutorial I'll suggest you do so. If you use your own path and know what
   you're doing -- it's OK.
   -# Clone the repository to the selected directory. After clicking *Clone* button, a window will
      appear where you can select from what repository you want to download source files
      (<https://github.com/Itseez/opencv.git>) and to what directory (`D:/OpenCV`).
   -# Push the OK button and be patient as the repository is quite a heavy download. It will take
      some time depending on your Internet connection.
6. In this section I will cover installing the 3rd party libraries.
   -# Download the [Python libraries](http://www.python.org/downloads/) and install it with the default options. You will need a
      couple other python extensions. Luckily installing all these may be automated by a nice tool
      called [Setuptools](http://pypi.python.org/pypi/setuptools#downloads). Download and install
      again.
   @anchor tutorial_windows_install_sphinx
   -# Installing Sphinx is easy once you have installed *Setuptools*. This contains a little
      application that will automatically connect to the python databases and download the latest
      version of many python scripts. Start up a command window (enter *cmd* into the windows
      start menu and press enter) and use the *CD* command to navigate to your Python folders
@ -134,86 +135,88 @@ libraries). If you do not need the support for some of these you can just freely
      ![image](images/Sphinx_Install.png)

      @note
      The *CD* navigation command works only inside a drive. For example, if you are somewhere in the
      *C:* drive you cannot use it to go to another drive (like for example *D:*). To do so you
      first need to change drive letters. For this, simply enter the command *D:*. Then you can use
      the *CD* to navigate to specific folders inside the drive. Bonus tip: you can clear the screen by
      using the *CLS* command.

      This will also install its prerequisites [Jinja2](http://jinja.pocoo.org/docs/) and
      [Pygments](http://pygments.org/).

   -# The easiest way to install Numpy is to just download its binaries from the [sourceforge page](http://sourceforge.net/projects/numpy/files/NumPy/).
      Make sure you download and install
      exactly the binary for your python version (so for version `2.7`).
   -# Download the [Miktex](http://miktex.org/2.9/setup) and install it. Again just follow the wizard. At the fourth step make
      sure you select for the *"Install missing packages on-the-fly"* the *Yes* option, as you can
      see on the image below. Again this will take quite some time so be patient.

      ![image](images/MiktexInstall.png)
   -# For the [Intel Threading Building Blocks (*TBB*)](http://threadingbuildingblocks.org/file.php?fid=77)
      download the source files and extract
      it inside a directory on your system. For example let there be `D:/OpenCV/dep`. For installing
      the [Intel Integrated Performance Primitives (*IPP*)](http://software.intel.com/en-us/articles/intel-ipp/)
      the story is the same. For
      extracting the archives I recommend using the [7-Zip](http://www.7-zip.org/) application.

      ![image](images/IntelTBB.png)
   -# For the [Intel IPP Asynchronous C/C++](http://software.intel.com/en-us/intel-ipp-preview) download the source files and set environment
      variable **IPP_ASYNC_ROOT**. It should point to
      `<your Program Files(x86) directory>/Intel/IPP Preview */ipp directory`. Here \* denotes the
      particular preview name.
   -# In case of the [Eigen](http://eigen.tuxfamily.org/index.php?title=Main_Page#Download) library it is again a case of download and extract to the
      `D:/OpenCV/dep` directory.
   -# Same as above with [OpenEXR](http://www.openexr.com/downloads.html).
   -# For the [OpenNI Framework](http://www.openni.org/) you need to install both the [development
      build](http://www.openni.org/downloadfiles/opennimodules/openni-binaries/21-stable) and the
      [PrimeSensor
      Module](http://www.openni.org/downloadfiles/opennimodules/openni-compliant-hardware-binaries/32-stable).
   -# For the CUDA you need again two modules: the latest [CUDA Toolkit](http://developer.nvidia.com/cuda-downloads) and the *CUDA Tools SDK*.
      Download and install both of them with a *complete* option by using the 32 or 64 bit setups
      according to your OS.
   -# In case of the Qt framework you need to build yourself the binary files (unless you use the
      Microsoft Visual Studio 2008 with 32 bit compiler). To do this go to the [Qt
      Downloads](http://qt.nokia.com/downloads) page. Download the source files (not the
      installers!!!):
      ![image](images/qtDownloadThisPackage.png)

      Extract it into a nice and short named directory like `D:/OpenCV/dep/qt/`. Then you need to
      build it. Start up a *Visual* *Studio* *Command* *Prompt* (*2010*) by using the start menu
      search (or navigate through the start menu
      All Programs --\> Microsoft Visual Studio 2010 --\> Visual Studio Tools --\> Visual Studio Command Prompt (2010)).

      ![image](images/visualstudiocommandprompt.jpg)

      Now navigate to the extracted folder and enter inside it by using this console window. You
      should have a folder containing files like *Install*, *Make* and so on. Use the *dir* command
      to list files inside your current directory. Once arrived at this directory enter the
      following command:
      @code{.bash}
      configure.exe -release -no-webkit -no-phonon -no-phonon-backend -no-script -no-scripttools
                    -no-qt3support -no-multimedia -no-ltcg
      @endcode
      Completing this will take around 10-20 minutes. Then enter the next command that will take a
      lot longer (can easily take even more than a full hour):
      @code{.bash}
      nmake
      @endcode
      After this set the Qt environment variables using the following command on Windows 7:
      @code{.bash}
      setx -m QTDIR D:/OpenCV/dep/qt/qt-everywhere-opensource-src-4.7.3
      @endcode
      Also, add the built binary files path to the system path by using the [PathEditor](http://www.redfernplace.com/software-projects/patheditor/). In our
      case this is `D:/OpenCV/dep/qt/qt-everywhere-opensource-src-4.7.3/bin`.

      @note
      If you plan on doing Qt application development you can also install at this point the *Qt
      Visual Studio Add-in*. After this you can make and build Qt applications without using the *Qt
      Creator*. Everything is nicely integrated into Visual Studio.
7. Now start the *CMake (cmake-gui)*. You may again enter it in the start menu search or get it
   from the All Programs --\> CMake 2.8 --\> CMake (cmake-gui). First, select the directory for the
   source files of the OpenCV library (1). Then, specify a directory where you will build the
   binary files for OpenCV (2).
@ -307,8 +310,8 @@ This will also install its prerequisites [Jinja2](http://jinja.pocoo.org/docs/)
To test your build just go into the `Build/bin/Debug` or `Build/bin/Release` directory and start
a couple of applications like the *contours.exe*. If they run, you are done. Otherwise,
something definitely went awfully wrong. In this case you should contact us at our [Q&A forum](http://answers.opencv.org/).
If everything is okay the *contours.exe* output should resemble the following image (if
built with Qt support):
![image](images/WindowsQtContoursOutput.png) ![image](images/WindowsQtContoursOutput.png)
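
If you prefer to verify the build programmatically, a minimal sketch such as the following (the
file name is just an example) can be compiled against the freshly built libraries; it needs only
the core module:
@code{.cpp}
// build_check.cpp -- illustrative example, links against opencv_core only
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // getBuildInformation() reports the compiler, modules and 3rd-party
    // support (TBB, IPP, Qt, CUDA, ...) this build was configured with
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}
@endcode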
@ -319,18 +322,18 @@ This will also install its prerequisites [Jinja2](http://jinja.pocoo.org/docs/)
caused mostly by old video card drivers. For testing the GPU (if built) run the
*performance_gpu.exe* sample application.
Set the OpenCV environment variable and add it to the system path {#tutorial_windows_install_path}
==================================================================

First we set an environment variable to make our work easier. This will hold the build directory of
our OpenCV library that we use in our projects. Start up a command window and enter:
@code
setx -m OPENCV_DIR D:\OpenCV\Build\x86\vc10     (suggested for Visual Studio 2010 - 32 bit Windows)
setx -m OPENCV_DIR D:\OpenCV\Build\x64\vc10     (suggested for Visual Studio 2010 - 64 bit Windows)
setx -m OPENCV_DIR D:\OpenCV\Build\x86\vc11     (suggested for Visual Studio 2012 - 32 bit Windows)
setx -m OPENCV_DIR D:\OpenCV\Build\x64\vc11     (suggested for Visual Studio 2012 - 64 bit Windows)
@endcode
Here the directory is where you have your OpenCV binaries (*extracted* or *built*). You can have a
different platform (e.g. x64 instead of x86) or compiler type, so substitute the appropriate value.
Inside this you should have two folders called *lib* and *bin*. The -m should be added if you wish
@ -344,10 +347,11 @@ However, to do this the operating system needs to know where they are. The syste
a list of folders where DLLs can be found. Add the OpenCV library path to this and the OS will know
where to look if it ever needs the OpenCV binaries. Otherwise, you will need to copy the used DLLs
right beside the application's executable file (*exe*) for the OS to find it, which is highly
unpleasant if you work on many projects. To do this start up again the [PathEditor](http://www.redfernplace.com/software-projects/patheditor/) and add the
following new entry (right click in the application to bring up the menu):
@code
%OPENCV_DIR%\bin
@endcode
![image](images/PathEditorOpenCVInsertNew.png)
@ -357,7 +361,6 @@ Save it to the registry and you are done. If you ever change the location of you
or want to try out your application with a different build all you will need to do is to update the
OPENCV_DIR variable via the *setx* command inside a command window.

Now you can continue reading the tutorials with the @ref tutorial_windows_visual_studio_Opencv section.
There you will find out how to use the OpenCV library in your own projects with the help of the
Microsoft Visual Studio IDE.

View File

@ -1,23 +1,23 @@
How to build applications with OpenCV inside the "Microsoft Visual Studio" {#tutorial_windows_visual_studio_Opencv}
===========================================================================

Everything I describe here will apply to the C/C++ interface of OpenCV. I start out from the
assumption that you have read and completed with success the @ref tutorial_windows_install tutorial.
Therefore, before you go any further make sure you have an OpenCV directory that contains the OpenCV
header files plus binaries and you have set the environment variables as described in
@ref tutorial_windows_install_path.
![image](images/OpenCV_Install_Directory.jpg)
The OpenCV libraries, distributed by us, on the Microsoft Windows operating system come as
Dynamic Link Libraries (*DLL*). These have the advantage that all the content of the
library is loaded only at runtime, on demand, and that countless programs may use the same library
file. This means that if you have ten applications using the OpenCV library, there is no need to have
a version around for each one of them. Of course you need to have the *dll* of the OpenCV on all systems
where you want to run your application.
Another approach is to use static libraries that have *lib* extensions. You may build these by using
our source files as described in the @ref tutorial_windows_install tutorial. When you use this the
library will be built-in inside your *exe* file. So there is no chance that the user deletes them,
for some reason. As a drawback your application will be larger and it will take more time to
load during its startup.
@ -103,13 +103,14 @@ further to someone else who has a different OpenCV install path. Moreover, fixin
to manually modifying every explicit path. A more elegant solution is to use the environment
variables. Anything that you put inside a parenthesis started with a dollar sign will be replaced at
runtime with the current environment variable's value. Here comes in play the environment variable
setting we already made in our previous tutorial @ref tutorial_windows_install_path.

Next go to the Linker --\> General and under the *"Additional Library Directories"* add the libs
directory:
@code{.bash}
$(OPENCV_DIR)\lib
@endcode

![image](images/PropertySheetOpenCVLib.jpg)

Then you need to specify the libraries the linker should look into. To do this go to the
@ -233,5 +234,4 @@ cumbersome task. Luckily, in the Visual Studio there is a menu to automate all t
Specify here the name of the inputs and while you start your application from the Visual Studio
environment you have automatic argument passing. In the next introductory tutorial you'll see an
in-depth explanation of the above source code: @ref tutorial_display_image.
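
For reference, a minimal sketch of how such arguments typically reach the program (the window
title and variable names are illustrative):
@code{.cpp}
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        std::cout << "Usage: " << argv[0] << " <image path>" << std::endl;
        return -1;
    }
    // argv[1] receives the first value entered in the Command Arguments field
    cv::Mat image = cv::imread(argv[1]);
    if (image.empty())
        return -1; // wrong path or unreadable file

    cv::imshow("Loaded image", image);
    cv::waitKey(0);
    return 0;
}
@endcode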

View File

@ -12,9 +12,8 @@ This tutorial assumes that you have the following available:
1. Visual Studio 2012 Professional (or better) with Update 1 installed. Update 1 can be downloaded
   [here](http://www.microsoft.com/en-us/download/details.aspx?id=35774).
2. An OpenCV installation on your Windows machine (Tutorial: @ref tutorial_windows_install).
3. Ability to create and build OpenCV projects in Visual Studio (Tutorial: @ref tutorial_windows_visual_studio_Opencv).
Installation
------------
@ -178,4 +177,3 @@ Documentation](http://go.microsoft.com/fwlink/?LinkId=285461) for details--you a
documentation page by clicking on the *Help* link in the Image Watch window:

![image](images/help_button.jpg)

View File

@ -15,7 +15,7 @@ Including OpenCV library in your iOS project
The OpenCV library comes as a so-called framework, which you can directly drag-and-drop into your
XCode project. Download the latest binary from
\<<http://sourceforge.net/projects/opencvlibrary/files/opencv-ios/>\>. Alternatively follow this
guide @ref tutorial_ios_install to compile the framework manually. Once you have the framework, just
drag-and-drop into XCode:

![image](images/xcode_hello_ios_framework_drag_and_drop.png)
@ -203,4 +203,3 @@ When you are working on grayscale data, turn set grayscale = YES as the YUV colo
directly access the luminance plane.

The Accelerate framework provides some CPU-accelerated DSP filters, which come in handy in your case.


View File

@ -1,13 +1,15 @@
Introduction to Support Vector Machines {#tutorial_introduction_to_svm}
=======================================
@todo update this tutorial
Goal
----

In this tutorial you will learn how to:

- Use the OpenCV functions @ref cv::ml::SVM::train to build a classifier based on SVMs and @ref
  cv::ml::SVM::predict to test its performance.
What is a SVM?
--------------
@ -47,7 +49,7 @@ How is the optimal hyperplane computed?
Let's introduce the notation used to define formally a hyperplane:

\f[f(x) = \beta_{0} + \beta^{T} x,\f]

where \f$\beta\f$ is known as the *weight vector* and \f$\beta_{0}\f$ as the *bias*.
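
For instance (an illustrative choice of numbers, not taken from the sample code): with
\f$\beta = (3, 4)^{T}\f$ and \f$\beta_{0} = -5\f$, the point \f$x = (1, 2)^{T}\f$ yields

\f[f(x) = -5 + 3 \cdot 1 + 4 \cdot 2 = 6 > 0,\f]

so \f$x\f$ lies on the positive side of this hyperplane.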
@ -57,7 +59,7 @@ Friedman. The optimal hyperplane can be represented in an infinite number of dif
scaling of \f$\beta\f$ and \f$\beta_{0}\f$. As a matter of convention, among all the possible
representations of the hyperplane, the one chosen is

\f[|\beta_{0} + \beta^{T} x| = 1\f]

where \f$x\f$ symbolizes the training examples closest to the hyperplane. In general, the training
examples that are closest to the hyperplane are called **support vectors**. This representation is
@ -71,20 +73,18 @@ Now, we use the result of geometry that gives the distance between a point \f$x\
In particular, for the canonical hyperplane, the numerator is equal to one and the distance to the
support vectors is

\f[\mathrm{distance}_{\text{ support vectors}} = \frac{|\beta_{0} + \beta^{T} x|}{||\beta||} = \frac{1}{||\beta||}.\f]

Recall that the margin introduced in the previous section, here denoted as \f$M\f$, is twice the
distance to the closest examples:

\f[M = \frac{2}{||\beta||}\f]

Finally, the problem of maximizing \f$M\f$ is equivalent to the problem of minimizing a function
\f$L(\beta)\f$ subject to some constraints. The constraints model the requirement for the hyperplane to
classify correctly all the training examples \f$x_{i}\f$. Formally,

\f[\min_{\beta, \beta_{0}} L(\beta) = \frac{1}{2}||\beta||^{2} \text{ subject to } y_{i}(\beta^{T} x_{i} + \beta_{0}) \geq 1 \text{ } \forall i,\f]

where \f$y_{i}\f$ represents each of the labels of the training examples.
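
Continuing the illustrative numbers from above: for \f$\beta = (3, 4)^{T}\f$ we get
\f$||\beta|| = \sqrt{3^{2} + 4^{2}} = 5\f$, so the margin is

\f[M = \frac{2}{||\beta||} = \frac{2}{5} = 0.4 .\f]

Rescaling \f$\beta\f$ and \f$\beta_{0}\f$ by a common factor would leave the hyperplane unchanged but alter this value, which is why the canonical representation \f$|\beta_{0} + \beta^{T} x| = 1\f$ is fixed first.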
@ -101,19 +101,20 @@ Explanation
1. **Set up the training data**

   The training data of this exercise is formed by a set of labeled 2D-points that belong to one of
   two different classes; one of the classes consists of one point and the other of three points.
   @code{.cpp}
   float labels[4] = {1.0, -1.0, -1.0, -1.0};
   float trainingData[4][2] = {{501, 10}, {255, 10}, {501, 255}, {10, 501}};
   @endcode
The function @ref cv::CvSVM::train that will be used afterwards requires the training data to be The function @ref cv::ml::SVM::train that will be used afterwards requires the training data to be
stored as @ref cv::Mat objects of floats. Therefore, we create these objects from the arrays stored as @ref cv::Mat objects of floats. Therefore, we create these objects from the arrays
defined above: defined above:
@code{.cpp} @code{.cpp}
Mat trainingDataMat(4, 2, CV_32FC1, trainingData); Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
Mat labelsMat (4, 1, CV_32FC1, labels); Mat labelsMat (4, 1, CV_32FC1, labels);
@endcode @endcode
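With the OpenCV 3.x ml module that the updated references point to, the same arrays can also be
bundled in a @ref cv::ml::TrainData object. This is a minimal sketch assuming the 3.x API (the
snippets in this tutorial itself still use the pre-3.0 classes):
@code{.cpp}
// Sketch only: bundle samples and labels (ml::ROW_SAMPLE = one sample per row).
// Note: the 3.x ml module generally expects integer class labels (CV_32S).
int labelsInt[4] = {1, -1, -1, -1};
Mat labelsMat32s(4, 1, CV_32SC1, labelsInt);
Ptr<ml::TrainData> tdata = ml::TrainData::create(trainingDataMat, ml::ROW_SAMPLE, labelsMat32s);
@endcode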
2. **Set up SVM's parameters**

In this tutorial we have introduced the theory of SVMs in the simplest case, when the
@ -121,7 +122,7 @@ Mat labelsMat (4, 1, CV_32FC1, labels);
used in a wide variety of problems (e.g. problems with non-linearly separable data, an SVM using
a kernel function to raise the dimensionality of the examples, etc). As a consequence of this,
we have to define some parameters before training the SVM. These parameters are stored in an
object of the class @ref cv::ml::SVM::Params .
@code{.cpp}
ml::SVM::Params params;
params.svmType = ml::SVM::C_SVC;
@ -132,14 +133,16 @@ Mat labelsMat (4, 1, CV_32FC1, labels);
classification (n \f$\geq\f$ 2). This parameter is defined in the attribute
*ml::SVM::Params.svmType*.

The important feature of the SVM type **CvSVM::C_SVC** is that it deals with imperfect separation of classes (i.e. when the training data is non-linearly separable). This feature is not important here since the data is linearly separable; we chose this SVM type only because it is the most commonly used.
- *Type of SVM kernel*. We have not talked about kernel functions since they are not
  interesting for the training data we are dealing with. Nevertheless, let's explain briefly
  now the main idea behind a kernel function. It is a mapping done to the training data to
  improve its resemblance to a linearly separable set of data. This mapping consists of
  increasing the dimensionality of the data and is done efficiently using a kernel function.
  We choose here the type **ml::SVM::LINEAR** which means that no mapping is done. This
  parameter is defined in the attribute *ml::SVM::Params.kernelType*.
- *Termination criteria of the algorithm*. The SVM training procedure is implemented solving a
  constrained quadratic optimization problem in an **iterative** fashion. Here we specify a
  maximum number of iterations and a tolerance error so we allow the algorithm to finish in
@ -155,35 +158,36 @@ Mat labelsMat (4, 1, CV_32FC1, labels);
CvSVM SVM;
SVM.train(trainingDataMat, labelsMat, Mat(), Mat(), params);
@endcode
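With the OpenCV 3.x interface the same step would look roughly as follows; this is a sketch
assuming the setter-style @ref cv::ml::SVM API, not the snippet the tutorial itself ships:
@code{.cpp}
// Sketch only: create, configure and train an ml::SVM (OpenCV 3.x API).
// Note: the 3.x classifier expects integer class labels (CV_32S).
Mat labelsMat32s;
labelsMat.convertTo(labelsMat32s, CV_32S);

Ptr<ml::SVM> svm = ml::SVM::create();
svm->setType(ml::SVM::C_SVC);
svm->setKernel(ml::SVM::LINEAR);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
svm->train(trainingDataMat, ml::ROW_SAMPLE, labelsMat32s);
@endcode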
4. **Regions classified by the SVM**

The method @ref cv::ml::SVM::predict is used to classify an input sample using a trained SVM. In
this example we have used this method in order to color the space depending on the prediction done
by the SVM. In other words, an image is traversed interpreting its pixels as points of the
Cartesian plane. Each of the points is colored depending on the class predicted by the SVM; in
green if it is the class with label 1 and in blue if it is the class with label -1.
@code{.cpp}
Vec3b green(0,255,0), blue(255,0,0);

for (int i = 0; i < image.rows; ++i)
    for (int j = 0; j < image.cols; ++j)
    {
        Mat sampleMat = (Mat_<float>(1,2) << i,j);
        float response = SVM.predict(sampleMat);

        if (response == 1)
            image.at<Vec3b>(j, i) = green;
        else if (response == -1)
            image.at<Vec3b>(j, i) = blue;
    }
@endcode
5. **Support vectors**

We use here a couple of methods to obtain information about the support vectors. The method
@ref cv::ml::SVM::getSupportVectors obtains all of the support vectors. We have used this
method here to find the training examples that are support vectors and highlight them.
@code{.cpp}
int c = SVM.get_support_vector_count();
@ -194,6 +198,7 @@ for (int i = 0; i < image.rows; ++i)
    circle( image, Point( (int) v[0], (int) v[1]), 6, Scalar(128, 128, 128), thickness, lineType);
}
@endcode
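The old pair @ref cv::CvSVM::get_support_vector_count / @ref cv::CvSVM::get_support_vector used
in the snippet above is replaced in OpenCV 3.x by a single call. A hedged sketch of the
equivalent loop, assuming the Ptr<ml::SVM> model from the 3.x sketch further up:
@code{.cpp}
// Sketch only: getSupportVectors() returns all support vectors, one per row (CV_32F).
Mat sv = svm->getSupportVectors();
for (int i = 0; i < sv.rows; ++i)
{
    const float* v = sv.ptr<float>(i);
    circle(image, Point((int) v[0], (int) v[1]), 6, Scalar(128, 128, 128), thickness, lineType);
}
@endcode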
Results
-------
@ -204,5 +209,4 @@ Results
optimal separating hyperplane.
- Finally the support vectors are shown using gray rings around the training examples.

![image](images/svm_intro_result.png)
@ -182,6 +182,6 @@ Results
* Finally the support vectors are shown using gray rings around the training examples.

.. image:: images/svm_intro_result.png
    :alt: The separated planes
    :align: center
@ -1,6 +1,8 @@
Support Vector Machines for Non-Linearly Separable Data {#tutorial_non_linear_svms}
=======================================================

@todo update this tutorial

Goal
----
@ -8,7 +10,7 @@ In this tutorial you will learn how to:
- Define the optimization problem for SVMs when it is not possible to separate linearly the
  training data.
- Configure the parameters in @ref cv::ml::SVM::Params to adapt your SVM for this class of
  problems.

Motivation
@ -36,23 +38,22 @@ the biggest margin and the new one of generalizing the training data correctly b
many classification errors.

We start here from the formulation of the optimization problem of finding the hyperplane which
maximizes the **margin** (this is explained in the previous tutorial, @ref tutorial_introduction_to_svm):

\f[\min_{\beta, \beta_{0}} L(\beta) = \frac{1}{2}||\beta||^{2} \text{ subject to } y_{i}(\beta^{T} x_{i} + \beta_{0}) \geq 1 \text{ } \forall i\f]
There are multiple ways in which this model can be modified so it takes into account the
misclassification errors. For example, one could think of minimizing the same quantity plus a
constant times the number of misclassification errors in the training data, i.e.:

\f[\min ||\beta||^{2} + C \text{(\# misclassification errors)}\f]
However, this one is not a very good solution since, among some other reasons, we do not distinguish
between samples that are misclassified with a small distance to their appropriate decision region
and samples that are not. Therefore, a better solution will take into account the *distance of the
misclassified samples to their correct decision regions*, i.e.:

\f[\min ||\beta||^{2} + C \text{(distance of misclassified samples to their correct regions)}\f]
For each sample of the training data a new parameter \f$\xi_{i}\f$ is defined. Each one of these
parameters contains the distance from its corresponding training sample to its correct decision
@ -70,8 +71,7 @@ misclassified training sample to the margin of its appropriate region.
Finally, the new formulation for the optimization problem is:

\f[\min_{\beta, \beta_{0}} L(\beta) = ||\beta||^{2} + C \sum_{i} {\xi_{i}} \text{ subject to } y_{i}(\beta^{T} x_{i} + \beta_{0}) \geq 1 - \xi_{i} \text{ and } \xi_{i} \geq 0 \text{ } \forall i\f]
How should the parameter C be chosen? It is obvious that the answer to this question depends on how
the training data is distributed. Although there is no general answer, it is useful to take into
@ -101,138 +101,143 @@ Explanation
1. **Set up the training data**

The training data of this exercise is formed by a set of labeled 2D-points that belong to one of
two different classes. To make the exercise more appealing, the training data is generated
randomly using uniform probability density functions (PDFs).

We have divided the generation of the training data into two main parts.

In the first part we generate data for both classes that is linearly separable.
@code{.cpp}
// Generate random points for the class 1
Mat trainClass = trainData.rowRange(0, nLinearSamples);
// The x coordinate of the points is in [0, 0.4)
Mat c = trainClass.colRange(0, 1);
rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(0.4 * WIDTH));
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1,2);
rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(HEIGHT));
// Generate random points for the class 2
trainClass = trainData.rowRange(2*NTRAINING_SAMPLES-nLinearSamples, 2*NTRAINING_SAMPLES);
// The x coordinate of the points is in [0.6, 1]
c = trainClass.colRange(0 , 1);
rng.fill(c, RNG::UNIFORM, Scalar(0.6*WIDTH), Scalar(WIDTH));
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1,2);
rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(HEIGHT));
@endcode
In the second part we create data for both classes that is non-linearly separable, data that
overlaps.
@code{.cpp}
// Generate random points for the classes 1 and 2
trainClass = trainData.rowRange( nLinearSamples, 2*NTRAINING_SAMPLES-nLinearSamples);
// The x coordinate of the points is in [0.4, 0.6)
c = trainClass.colRange(0,1);
rng.fill(c, RNG::UNIFORM, Scalar(0.4*WIDTH), Scalar(0.6*WIDTH));
// The y coordinate of the points is in [0, 1)
c = trainClass.colRange(1,2);
rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(HEIGHT));
@endcode
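The excerpt above only generates the coordinates. As a hedged sketch (the full sample's exact
code may differ), the class labels for the two halves of trainData could be filled like this:
@code{.cpp}
// Hypothetical labelling step: first NTRAINING_SAMPLES rows are class 1, the rest class 2.
Mat labels(2*NTRAINING_SAMPLES, 1, CV_32FC1);
labels.rowRange(0, NTRAINING_SAMPLES).setTo(1);                    // class 1
labels.rowRange(NTRAINING_SAMPLES, 2*NTRAINING_SAMPLES).setTo(2);  // class 2
@endcode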
2. **Set up SVM's parameters**

@sa
In the previous tutorial @ref tutorial_introduction_to_svm there is an explanation of the attributes of the
class @ref cv::ml::SVM::Params that we configure here before training the SVM.
@code{.cpp}
CvSVMParams params;
params.svm_type = SVM::C_SVC;
params.C = 0.1;
params.kernel_type = SVM::LINEAR;
params.term_crit = TermCriteria(TermCriteria::ITER, (int)1e7, 1e-6);
@endcode

There are just two differences between the configuration we do here and the one that was done in
the previous tutorial (@ref tutorial_introduction_to_svm) that we use as reference.

- *CvSVM::C_SVC*. We chose here a small value of this parameter in order not to punish too much
  the misclassification errors in the optimization. The idea of doing this stems from the aim
  of obtaining a solution close to the one intuitively expected. However, we recommend getting a
  better insight into the problem by making adjustments to this parameter.

  @note Here there are just very few points in the overlapping region between classes. By giving a smaller value to **FRAC_LINEAR_SEP** the density of points can be incremented and the impact of the parameter **CvSVM::C_SVC** explored more deeply.

- *Termination Criteria of the algorithm*. The maximum number of iterations has to be
  increased considerably in order to solve correctly a problem with non-linearly separable
  training data. In particular, we have increased this value by five orders of magnitude.
3. **Train the SVM**

We call the method @ref cv::ml::SVM::train to build the SVM model. Watch out that the training
process may take quite a long time. Have patience when you run the program.
@code{.cpp}
CvSVM svm;
svm.train(trainData, labels, Mat(), Mat(), params);
@endcode
4. **Show the Decision Regions**

The method @ref cv::ml::SVM::predict is used to classify an input sample using a trained SVM. In
this example we have used this method in order to color the space depending on the prediction done
by the SVM. In other words, an image is traversed interpreting its pixels as points of the
Cartesian plane. Each of the points is colored depending on the class predicted by the SVM; in
dark green if it is the class with label 1 and in dark blue if it is the class with label 2.
@code{.cpp}
Vec3b green(0,100,0), blue(100,0,0);

for (int i = 0; i < I.rows; ++i)
    for (int j = 0; j < I.cols; ++j)
    {
        Mat sampleMat = (Mat_<float>(1,2) << i, j);
        float response = svm.predict(sampleMat);

        if (response == 1) I.at<Vec3b>(j, i) = green;
        else if (response == 2) I.at<Vec3b>(j, i) = blue;
    }
@endcode
5. **Show the training data**

The method @ref cv::circle is used to show the samples that compose the training data. The samples
of the class labeled with 1 are shown in light green and in light blue the samples of the class
labeled with 2.
@code{.cpp}
int thick = -1;
int lineType = 8;
float px, py;

// Class 1
for (int i = 0; i < NTRAINING_SAMPLES; ++i)
{
    px = trainData.at<float>(i,0);
    py = trainData.at<float>(i,1);
    circle(I, Point( (int) px, (int) py ), 3, Scalar(0, 255, 0), thick, lineType);
}
// Class 2
for (int i = NTRAINING_SAMPLES; i < 2*NTRAINING_SAMPLES; ++i)
{
    px = trainData.at<float>(i,0);
    py = trainData.at<float>(i,1);
    circle(I, Point( (int) px, (int) py ), 3, Scalar(255, 0, 0), thick, lineType);
}
@endcode
6. **Support vectors**

We use here the method @ref cv::ml::SVM::getSupportVectors to obtain information about the
support vectors: it returns all of the support vectors. We have used this method here to find
the training examples that are support vectors and highlight them.
@code{.cpp}
thick = 2;
lineType = 8;
int x = svm.get_support_vector_count();

for (int i = 0; i < x; ++i)
{
    const float* v = svm.get_support_vector(i);
    circle( I, Point( (int) v[0], (int) v[1]), 6, Scalar(128, 128, 128), thick, lineType);
}
@endcode
Results
-------
@ -245,14 +250,12 @@ Results
and some blue points lie on the green one.
- Finally the support vectors are shown using gray rings around the training examples.

![image](images/svm_non_linear_result.png)

You may observe a runtime instance of this on the [YouTube here](https://www.youtube.com/watch?v=vFv2yPcSo-Q).
\htmlonly
<div align="center">
<iframe title="Support Vector Machines for Non-Linearly Separable Data" width="560" height="349" src="http://www.youtube.com/embed/vFv2yPcSo-Q?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
\endhtmlonly
@ -218,7 +218,7 @@ Results
* Finally the support vectors are shown using gray rings around the training examples.

.. image:: images/svm_non_linear_result.png
    :alt: Training data and decision regions given by the SVM
    :width: 300pt
    :align: center
@ -8,8 +8,8 @@ In this tutorial you will learn how to:
- Use the @ref cv::CascadeClassifier class to detect objects in a video stream. Particularly, we
  will use the functions (a minimal usage sketch follows this list):
  - @ref cv::CascadeClassifier::load to load a .xml classifier file. It can be either a Haar or an LBP classifier
  - @ref cv::CascadeClassifier::detectMultiScale to perform the detection.
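A minimal sketch of those two calls, assuming OpenCV 3.x (the classifier file name and the
detection parameters below are illustrative, not the tutorial's exact values):
@code{.cpp}
// Sketch only: load a cascade and run multi-scale detection on a grayscale frame.
CascadeClassifier face_cascade;
if (!face_cascade.load("haarcascade_frontalface_alt.xml"))
    std::cerr << "Error loading cascade file" << std::endl;

std::vector<Rect> faces;
// frame_gray is assumed to hold an equalized grayscale frame
face_cascade.detectMultiScale(frame_gray, faces, 1.1, 2, 0 | CASCADE_SCALE_IMAGE, Size(30, 30));
@endcode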
Theory
------
@ -126,5 +126,3 @@ Result
detection. For the eyes we keep using the file used in the tutorial.

![image](images/Cascade_Classifier_Tutorial_Result_LBP.jpg)
@ -40,11 +40,11 @@ Code
In the following you can find the source code. We will let the user choose to process either a video
file or a sequence of images.

Two different methods are used to generate two foreground masks (a short sketch of creating and
applying the two subtractors follows this list):

1. @ref cv::bgsegm::BackgroundSubtractorMOG
2. @ref cv::BackgroundSubtractorMOG2
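A hedged sketch of creating and applying the two subtractors, assuming OpenCV 3.x (MOG comes
from the opencv_contrib bgsegm module, MOG2 from the main video module):
@code{.cpp}
// Sketch only: one mask per algorithm, updated for every new frame.
Ptr<BackgroundSubtractor> pMOG  = bgsegm::createBackgroundSubtractorMOG();
Ptr<BackgroundSubtractor> pMOG2 = createBackgroundSubtractorMOG2();

Mat frame, fgMaskMOG, fgMaskMOG2; // current frame and the two foreground masks
// for each new frame read from the video or image sequence:
pMOG->apply(frame, fgMaskMOG);
pMOG2->apply(frame, fgMaskMOG2);
@endcode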
The results as well as the input data are shown on the screen.
@code{.cpp}
@ -389,4 +389,3 @@ References
- Antoine Vacavant, Thierry Chateau, Alexis Wilhelm and Laurent Lequievre. A Benchmark Dataset for
  Foreground/Background Extraction. In ACCV 2012, Workshop: Background Models Challenge, LNCS
  7728, 291-300. November 2012, Daejeon, Korea.