Doxygen tutorials: cpp done
@@ -824,3 +824,11 @@
 journal = {Machine learning},
 volume = {10}
 }
+
+@inproceedings{vacavant2013benchmark,
+title={A benchmark dataset for outdoor foreground/background extraction},
+author={Vacavant, Antoine and Chateau, Thierry and Wilhelm, Alexis and Lequi{\`e}vre, Laurent},
+booktitle={Computer Vision-ACCV 2012 Workshops},
+pages={291--300},
+year={2013},
+organization={Springer}
+}
@@ -96,7 +96,7 @@ on how to do this you can find in the @ref tutorial_file_input_output_with_xml_y
 Explanation
 -----------
 
-1.  **Read the settings.**
+-#  **Read the settings.**
 @code{.cpp}
 Settings s;
 const string inputSettingsFile = argc > 1 ? argv[1] : "default.xml";
@@ -119,7 +119,7 @@ Explanation
 additional post-processing function that checks validity of the input. Only if all inputs are
 good then *goodInput* variable will be true.
 
-2.  **Get next input, if it fails or we have enough of them - calibrate**. After this we have a big
+-#  **Get next input, if it fails or we have enough of them - calibrate**. After this we have a big
 loop where we do the following operations: get the next image from the image list, camera or
 video file. If this fails or we have enough images then we run the calibration process. In case
 of image we step out of the loop and otherwise the remaining frames will be undistorted (if the
@@ -151,7 +151,7 @@ Explanation
 @endcode
 For some cameras we may need to flip the input image. Here we do this too.
 
-3.  **Find the pattern in the current input**. The formation of the equations I mentioned above aims
+-#  **Find the pattern in the current input**. The formation of the equations I mentioned above aims
 to finding major patterns in the input: in case of the chessboard this are corners of the
 squares and for the circles, well, the circles themselves. The position of these will form the
 result which will be written into the *pointBuf* vector.
@@ -212,7 +212,7 @@ Explanation
 drawChessboardCorners( view, s.boardSize, Mat(pointBuf), found );
 }
 @endcode
-4.  **Show state and result to the user, plus command line control of the application**. This part
+-#  **Show state and result to the user, plus command line control of the application**. This part
 shows text output on the image.
 @code{.cpp}
 //----------------------------- Output Text ------------------------------------------------
@@ -263,7 +263,7 @@ Explanation
 imagePoints.clear();
 }
 @endcode
-5.  **Show the distortion removal for the images too**. When you work with an image list it is not
+-#  **Show the distortion removal for the images too**. When you work with an image list it is not
 possible to remove the distortion inside the loop. Therefore, you must do this after the loop.
 Taking advantage of this now I'll expand the @ref cv::undistort function, which is in fact first
 calls @ref cv::initUndistortRectifyMap to find transformation matrices and then performs
@@ -291,6 +291,7 @@ Explanation
 }
 }
 @endcode
+
 The calibration and save
 ------------------------
 
@@ -419,6 +420,7 @@ double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix,
 return std::sqrt(totalErr/totalPoints); // calculate the arithmetical mean
 }
 @endcode
+
 Results
 -------
 
@@ -444,21 +446,21 @@ images/CameraCalibration/VID5/xx8.jpg
 Then passed `images/CameraCalibration/VID5/VID5.XML` as an input in the configuration file. Here's a
 chessboard pattern found during the runtime of the application:
 
 ![](images/CameraCalibration/fileListImage.jpg)
 
 After applying the distortion removal we get:
 
 ![](images/CameraCalibration/fileListImageUnDist.jpg)
 
 The same works for [this asymmetrical circle pattern ](acircles_pattern.png) by setting the input
 width to 4 and height to 11. This time I've used a live camera feed by specifying its ID ("1") for
 the input. Here's, how a detected pattern should look:
 
 ![](images/CameraCalibration/asymetricalPattern.jpg)
 
 In both cases in the specified output XML/YAML file you'll find the camera and distortion
 coefficients matrices:
-@code{.cpp}
+@code{.xml}
 <Camera_Matrix type_id="opencv-matrix">
 <rows>3</rows>
 <cols>3</cols>
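As an aside on the `@@ -419,6 +420,7 @@` hunk above, the line `return std::sqrt(totalErr/totalPoints);` is the final step of the reprojection-error computation. The following standalone sketch shows the aggregation idea only; the function name, signature and input numbers are illustrative, not the tutorial's actual helper, which works on projected OpenCV point sets:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: per-view L2 reprojection errors are squared, weighted
// by the number of points in each view, summed, averaged over the total point
// count, and finally square-rooted to get the RMS error.
double rmsError(const std::vector<double>& perViewErr,
                const std::vector<std::size_t>& pointsPerView)
{
    double totalErr = 0.0;
    std::size_t totalPoints = 0;
    for (std::size_t i = 0; i < perViewErr.size(); ++i)
    {
        totalErr += perViewErr[i] * perViewErr[i] * pointsPerView[i];
        totalPoints += pointsPerView[i];
    }
    return std::sqrt(totalErr / totalPoints);
}
```

With two views of 10 points each, both at error 1.0, the RMS is 1.0, which matches the "arithmetical mean" comment kept by the hunk.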
@@ -73,7 +73,7 @@ int main( int argc, char** argv )
 Explanation
 -----------
 
-1.  Since we are going to perform:
+-#  Since we are going to perform:
 
 \f[g(x) = (1 - \alpha)f_{0}(x) + \alpha f_{1}(x)\f]
 
@@ -87,7 +87,7 @@ Explanation
 Since we are *adding* *src1* and *src2*, they both have to be of the same size (width and
 height) and type.
 
-2.  Now we need to generate the `g(x)` image. For this, the function add_weighted:addWeighted comes quite handy:
+-#  Now we need to generate the `g(x)` image. For this, the function add_weighted:addWeighted comes quite handy:
 @code{.cpp}
 beta = ( 1.0 - alpha );
 addWeighted( src1, alpha, src2, beta, 0.0, dst);
@@ -96,9 +96,9 @@ Explanation
 \f[dst = \alpha \cdot src1 + \beta \cdot src2 + \gamma\f]
 In this case, `gamma` is the argument \f$0.0\f$ in the code above.
 
-3.  Create windows, show the images and wait for the user to end the program.
+-#  Create windows, show the images and wait for the user to end the program.
 
 Result
 ------
 
 ![](images/LinearBlend/Adding_Images_Tutorial_Result_0.jpg)
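The blend documented by the hunks above, \f$dst = \alpha \cdot src1 + \beta \cdot src2 + \gamma\f$ with \f$\beta = 1 - \alpha\f$, can be sketched per pixel in plain C++ without OpenCV; the function below is an illustrative stand-in for what `addWeighted` does on each 8-bit value, not OpenCV's implementation:

```cpp
#include <algorithm>
#include <cassert>

// Illustrative per-pixel blend: v = alpha*a + (1-alpha)*b + gamma,
// saturated to the 8-bit range [0, 255] and rounded to nearest.
unsigned char blendPixel(unsigned char a, unsigned char b,
                         double alpha, double gamma = 0.0)
{
    double v = alpha * a + (1.0 - alpha) * b + gamma;
    v = std::clamp(v, 0.0, 255.0);
    return static_cast<unsigned char>(v + 0.5);
}
```

For example, blending 100 and 200 with \f$\alpha = 0.5\f$ gives 150, and the saturation keeps over-bright sums from wrapping around.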
@@ -52,7 +52,7 @@ Code
 Explanation
 -----------
 
-1.  Since we plan to draw two examples (an atom and a rook), we have to create 02 images and two
+-#  Since we plan to draw two examples (an atom and a rook), we have to create 02 images and two
 windows to display them.
 @code{.cpp}
 /// Windows names
@@ -63,7 +63,7 @@ Explanation
 Mat atom_image = Mat::zeros( w, w, CV_8UC3 );
 Mat rook_image = Mat::zeros( w, w, CV_8UC3 );
 @endcode
-2.  We created functions to draw different geometric shapes. For instance, to draw the atom we used
+-#  We created functions to draw different geometric shapes. For instance, to draw the atom we used
 *MyEllipse* and *MyFilledCircle*:
 @code{.cpp}
 /// 1. Draw a simple atom:
@@ -77,7 +77,7 @@ Explanation
 /// 1.b. Creating circles
 MyFilledCircle( atom_image, Point( w/2.0, w/2.0) );
 @endcode
-3.  And to draw the rook we employed *MyLine*, *rectangle* and a *MyPolygon*:
+-#  And to draw the rook we employed *MyLine*, *rectangle* and a *MyPolygon*:
 @code{.cpp}
 /// 2. Draw a rook
 
@@ -98,7 +98,7 @@ Explanation
 MyLine( rook_image, Point( w/2, 7*w/8 ), Point( w/2, w ) );
 MyLine( rook_image, Point( 3*w/4, 7*w/8 ), Point( 3*w/4, w ) );
 @endcode
-4.  Let's check what is inside each of these functions:
+-#  Let's check what is inside each of these functions:
 - *MyLine*
 @code{.cpp}
 void MyLine( Mat img, Point start, Point end )
@@ -240,5 +240,5 @@ Result
 
 Compiling and running your program should give you a result like this:
 
 ![](images/Drawing_1_Tutorial_Result_0.png)
 
@@ -101,16 +101,16 @@ int main( int argc, char** argv )
 Explanation
 -----------
 
-1.  We begin by creating parameters to save \f$\alpha\f$ and \f$\beta\f$ to be entered by the user:
+-#  We begin by creating parameters to save \f$\alpha\f$ and \f$\beta\f$ to be entered by the user:
 @code{.cpp}
 double alpha;
 int beta;
 @endcode
-2.  We load an image using @ref cv::imread and save it in a Mat object:
+-#  We load an image using @ref cv::imread and save it in a Mat object:
 @code{.cpp}
 Mat image = imread( argv[1] );
 @endcode
-3.  Now, since we will make some transformations to this image, we need a new Mat object to store
+-#  Now, since we will make some transformations to this image, we need a new Mat object to store
 it. Also, we want this to have the following features:
 
 - Initial pixel values equal to zero
@@ -121,7 +121,7 @@ Explanation
 We observe that @ref cv::Mat::zeros returns a Matlab-style zero initializer based on
 *image.size()* and *image.type()*
 
-4.  Now, to perform the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ we will access to each
+-#  Now, to perform the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ we will access to each
 pixel in image. Since we are operating with RGB images, we will have three values per pixel (R,
 G and B), so we will also access them separately. Here is the piece of code:
 @code{.cpp}
@@ -141,7 +141,7 @@ Explanation
 integers (if \f$\alpha\f$ is float), we use cv::saturate_cast to make sure the
 values are valid.
 
-5.  Finally, we create windows and show the images, the usual way.
+-#  Finally, we create windows and show the images, the usual way.
 @code{.cpp}
 namedWindow("Original Image", 1);
 namedWindow("New Image", 1);
@@ -166,7 +166,7 @@ Result
 
 - Running our code and using \f$\alpha = 2.2\f$ and \f$\beta = 50\f$
 @code{.bash}
-\f$ ./BasicLinearTransforms lena.jpg
+$ ./BasicLinearTransforms lena.jpg
 Basic Linear Transforms
 -------------------------
 * Enter the alpha value [1.0-3.0]: 2.2
@@ -175,4 +175,4 @@ Result
 
 - We get this:
 
 ![](images/Basic_Linear_Transform_Tutorial_Result_0.jpg)
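The transform kept in context above, \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ with `cv::saturate_cast` guarding the 8-bit range, can be sketched without OpenCV; this stand-alone function mimics the clamping behaviour of `saturate_cast<uchar>` and is illustrative only:

```cpp
#include <algorithm>
#include <cassert>

// Illustrative per-pixel brightness/contrast transform: g = alpha*f + beta,
// rounded and clamped to [0, 255] the way saturate_cast<uchar> would.
unsigned char transformPixel(unsigned char f, double alpha, int beta)
{
    double g = alpha * f + beta;
    int rounded = static_cast<int>(g + 0.5);
    return static_cast<unsigned char>(std::clamp(rounded, 0, 255));
}
```

With the tutorial's sample values \f$\alpha = 2.2\f$, \f$\beta = 50\f$, a pixel of 100 would overflow to 270 and is saturated to 255 instead of wrapping around.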
@@ -22,10 +22,14 @@ OpenCV source code library.
 
 Here's a sample usage of @ref cv::dft() :
 
-@includelineno cpp/tutorial_code/core/discrete_fourier_transform/discrete_fourier_transform.cpp
-
-lines
-1-4, 6, 20-21, 24-79
+@dontinclude cpp/tutorial_code/core/discrete_fourier_transform/discrete_fourier_transform.cpp
+@until highgui.hpp
+@skipline iostream
+@skip main
+@until {
+@skip filename
+@until return 0;
+@until }
 
 Explanation
 -----------
@@ -52,7 +56,7 @@ Fourier Transform too needs to be of a discrete type resulting in a Discrete Fou
 (*DFT*). You'll want to use this whenever you need to determine the structure of an image from a
 geometrical point of view. Here are the steps to follow (in case of a gray scale input image *I*):
 
-1.  **Expand the image to an optimal size**. The performance of a DFT is dependent of the image
+-#  **Expand the image to an optimal size**. The performance of a DFT is dependent of the image
 size. It tends to be the fastest for image sizes that are multiple of the numbers two, three and
 five. Therefore, to achieve maximal performance it is generally a good idea to pad border values
 to the image to get a size with such traits. The @ref cv::getOptimalDFTSize() returns this
@@ -66,7 +70,7 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
 @endcode
 The appended pixels are initialized with zero.
 
-2.  **Make place for both the complex and the real values**. The result of a Fourier Transform is
+-#  **Make place for both the complex and the real values**. The result of a Fourier Transform is
 complex. This implies that for each image value the result is two image values (one per
 component). Moreover, the frequency domains range is much larger than its spatial counterpart.
 Therefore, we store these usually at least in a *float* format. Therefore we'll convert our
@@ -76,12 +80,12 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
 Mat complexI;
 merge(planes, 2, complexI); // Add to the expanded another plane with zeros
 @endcode
-3.  **Make the Discrete Fourier Transform**. It's possible an in-place calculation (same input as
+-#  **Make the Discrete Fourier Transform**. It's possible an in-place calculation (same input as
 output):
 @code{.cpp}
 dft(complexI, complexI); // this way the result may fit in the source matrix
 @endcode
-4.  **Transform the real and complex values to magnitude**. A complex number has a real (*Re*) and a
+-#  **Transform the real and complex values to magnitude**. A complex number has a real (*Re*) and a
 complex (imaginary - *Im*) part. The results of a DFT are complex numbers. The magnitude of a
 DFT is:
 
@@ -93,7 +97,7 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
 magnitude(planes[0], planes[1], planes[0]);// planes[0] = magnitude
 Mat magI = planes[0];
 @endcode
-5.  **Switch to a logarithmic scale**. It turns out that the dynamic range of the Fourier
+-#  **Switch to a logarithmic scale**. It turns out that the dynamic range of the Fourier
 coefficients is too large to be displayed on the screen. We have some small and some high
 changing values that we can't observe like this. Therefore the high values will all turn out as
 white points, while the small ones as black. To use the gray scale values to for visualization
@@ -106,7 +110,7 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
 magI += Scalar::all(1); // switch to logarithmic scale
 log(magI, magI);
 @endcode
-6.  **Crop and rearrange**. Remember, that at the first step, we expanded the image? Well, it's time
+-#  **Crop and rearrange**. Remember, that at the first step, we expanded the image? Well, it's time
 to throw away the newly introduced values. For visualization purposes we may also rearrange the
 quadrants of the result, so that the origin (zero, zero) corresponds with the image center.
 @code{.cpp}
@@ -128,13 +132,14 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
 q2.copyTo(q1);
 tmp.copyTo(q2);
 @endcode
-7.  **Normalize**. This is done again for visualization purposes. We now have the magnitudes,
+-#  **Normalize**. This is done again for visualization purposes. We now have the magnitudes,
 however this are still out of our image display range of zero to one. We normalize our values to
 this range using the @ref cv::normalize() function.
 @code{.cpp}
 normalize(magI, magI, 0, 1, NORM_MINMAX); // Transform the matrix with float values into a
 // viewable image form (float between values 0 and 1).
 @endcode
+
 Result
 ------
 
@@ -147,13 +152,12 @@ image about a text.
 
 In case of the horizontal text:
 
 ![](images/result_normal.jpg)
 
 In case of a rotated text:
 
 ![](images/result_rotated.jpg)
 
 You can see that the most influential components of the frequency domain (brightest dots on the
 magnitude image) follow the geometric rotation of objects on the image. From this we may calculate
 the offset and perform an image rotation to correct eventual miss alignments.
-
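The first DFT step kept above relies on `cv::getOptimalDFTSize()` returning the smallest "fast" size, one whose only prime factors are two, three and five, at or above the current size. OpenCV computes this from a precomputed table; the brute-force search below is only a sketch of the same idea:

```cpp
#include <cassert>

// Illustrative version of the getOptimalDFTSize idea: find the smallest
// candidate >= n that factors entirely into 2, 3 and 5, since DFTs are
// fastest at such sizes.
int optimalDFTSize(int n)
{
    for (int candidate = n; ; ++candidate)
    {
        int m = candidate;
        while (m % 2 == 0) m /= 2;
        while (m % 3 == 0) m /= 3;
        while (m % 5 == 0) m /= 5;
        if (m == 1) return candidate;   // only factors of 2, 3, 5 remain
    }
}
```

For example, an image row of 97 pixels would be padded to 100 (\f$2^2 \cdot 5^2\f$) before the transform.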
@@ -22,10 +22,12 @@ library.
 
 Here's a sample code of how to achieve all the stuff enumerated at the goal list.
 
-@includelineno cpp/tutorial_code/core/file_input_output/file_input_output.cpp
+@dontinclude cpp/tutorial_code/core/file_input_output/file_input_output.cpp
 
-lines
-1-7, 21-154
+@until std;
+@skip class MyData
+@until return 0;
+@until }
 
 Explanation
 -----------
@@ -36,7 +38,7 @@ structures you may serialize: *mappings* (like the STL map) and *element sequenc
 vector). The difference between these is that in a map every element has a unique name through what
 you may access it. For sequences you need to go through them to query a specific item.
 
-1.  **XML/YAML File Open and Close.** Before you write any content to such file you need to open it
+-#  **XML/YAML File Open and Close.** Before you write any content to such file you need to open it
 and at the end to close it. The XML/YAML data structure in OpenCV is @ref cv::FileStorage . To
 specify that this structure to which file binds on your hard drive you can use either its
 constructor or the *open()* function of this:
@@ -56,7 +58,7 @@ you may access it. For sequences you need to go through them to query a specific
 @code{.cpp}
 fs.release(); // explicit close
 @endcode
-2.  **Input and Output of text and numbers.** The data structure uses the same \<\< output operator
+-#  **Input and Output of text and numbers.** The data structure uses the same \<\< output operator
 that the STL library. For outputting any type of data structure we need first to specify its
 name. We do this by just simply printing out the name of this. For basic types you may follow
 this with the print of the value :
@@ -70,7 +72,7 @@ you may access it. For sequences you need to go through them to query a specific
 fs["iterationNr"] >> itNr;
 itNr = (int) fs["iterationNr"];
 @endcode
-3.  **Input/Output of OpenCV Data structures.** Well these behave exactly just as the basic C++
+-#  **Input/Output of OpenCV Data structures.** Well these behave exactly just as the basic C++
 types:
 @code{.cpp}
 Mat R = Mat_<uchar >::eye (3, 3),
@@ -82,7 +84,7 @@ you may access it. For sequences you need to go through them to query a specific
 fs["R"] >> R; // Read cv::Mat
 fs["T"] >> T;
 @endcode
-4.  **Input/Output of vectors (arrays) and associative maps.** As I mentioned beforehand, we can
+-#  **Input/Output of vectors (arrays) and associative maps.** As I mentioned beforehand, we can
 output maps and sequences (array, vector) too. Again we first print the name of the variable and
 then we have to specify if our output is either a sequence or map.
 
@@ -121,7 +123,7 @@ you may access it. For sequences you need to go through them to query a specific
 cout << "Two " << (int)(n["Two"]) << "; ";
 cout << "One " << (int)(n["One"]) << endl << endl;
 @endcode
-5.  **Read and write your own data structures.** Suppose you have a data structure such as:
+-#  **Read and write your own data structures.** Suppose you have a data structure such as:
 @code{.cpp}
 class MyData
 {
@@ -180,6 +182,7 @@ you may access it. For sequences you need to go through them to query a specific
 fs["NonExisting"] >> m; // Do not add a fs << "NonExisting" << m command for this to work
 cout << endl << "NonExisting = " << endl << m << endl;
 @endcode
+
 Result
 ------
 
@@ -270,4 +273,3 @@ here](https://www.youtube.com/watch?v=A4yqVnByMMM) .
 <iframe title="File Input and Output using XML and YAML files in OpenCV" width="560" height="349" src="http://www.youtube.com/embed/A4yqVnByMMM?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
 </div>
 \endhtmlonly
-
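The mapping-versus-sequence distinction the file I/O hunks above keep in context (named access for maps, positional iteration for sequences) can be mimicked in plain C++ without `cv::FileStorage`; the key names here are borrowed from the tutorial for flavour, but the writer itself is a made-up illustration, not OpenCV's serializer:

```cpp
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Illustrative YAML-like writer: a mapping gives every element a name you
// can look up, while a sequence only has ordered, unnamed items.
std::string writeYamlLike(const std::map<std::string, int>& mapping,
                          const std::vector<std::string>& sequence)
{
    std::ostringstream out;
    for (const auto& kv : mapping)          // map: access by unique name
        out << kv.first << ": " << kv.second << "\n";
    out << "strings:\n";
    for (const auto& item : sequence)       // sequence: must iterate to query
        out << "  - " << item << "\n";
    return out.str();
}
```

Calling it with `{{"iterationNr", 100}}` and one image path produces a small YAML-shaped document, which is the structural shape `FileStorage` emits for the same inputs.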
@@ -59,10 +59,10 @@ how_to_scan_images imageName.jpg intValueToReduce [G]
 The final argument is optional. If given the image will be loaded in gray scale format, otherwise
 the RGB color way is used. The first thing is to calculate the lookup table.
 
-@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
+@dontinclude cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
 
-lines
-49-61
+@skip int divideWith
+@until table[i]
 
 Here we first use the C++ *stringstream* class to convert the third command line argument from text
 to an integer format. Then we use a simple look and the upper formula to calculate the lookup table.
@@ -88,26 +88,12 @@ As you could already read in my @ref tutorial_mat_the_basic_image_container tuto
 depends of the color system used. More accurately, it depends from the number of channels used. In
 case of a gray scale image we have something like:
 
-\f[\newcommand{\tabItG}[1] { \textcolor{black}{#1} \cellcolor[gray]{0.8}}
-\begin{tabular} {ccccc}
-~ & \multicolumn{1}{c}{Column 0} & \multicolumn{1}{c}{Column 1} & \multicolumn{1}{c}{Column ...} & \multicolumn{1}{c}{Column m}\\
-Row 0 & \tabItG{0,0} & \tabItG{0,1} & \tabItG{...} & \tabItG{0, m} \\
-Row 1 & \tabItG{1,0} & \tabItG{1,1} & \tabItG{...} & \tabItG{1, m} \\
-Row ... & \tabItG{...,0} & \tabItG{...,1} & \tabItG{...} & \tabItG{..., m} \\
-Row n & \tabItG{n,0} & \tabItG{n,1} & \tabItG{n,...} & \tabItG{n, m} \\
-\end{tabular}\f]
+![](tutorial_how_matrix_stored_1.png)
 
 For multichannel images the columns contain as many sub columns as the number of channels. For
 example in case of an RGB color system:
 
-\f[\newcommand{\tabIt}[1] { \textcolor{yellow}{#1} \cellcolor{blue} & \textcolor{black}{#1} \cellcolor{green} & \textcolor{black}{#1} \cellcolor{red}}
-\begin{tabular} {ccccccccccccc}
-~ & \multicolumn{3}{c}{Column 0} & \multicolumn{3}{c}{Column 1} & \multicolumn{3}{c}{Column ...} & \multicolumn{3}{c}{Column m}\\
-Row 0 & \tabIt{0,0} & \tabIt{0,1} & \tabIt{...} & \tabIt{0, m} \\
-Row 1 & \tabIt{1,0} & \tabIt{1,1} & \tabIt{...} & \tabIt{1, m} \\
-Row ... & \tabIt{...,0} & \tabIt{...,1} & \tabIt{...} & \tabIt{..., m} \\
-Row n & \tabIt{n,0} & \tabIt{n,1} & \tabIt{n,...} & \tabIt{n, m} \\
-\end{tabular}\f]
+![](tutorial_how_matrix_stored_2.png)
 
 Note that the order of the channels is inverse: BGR instead of RGB. Because in many cases the memory
 is large enough to store the rows in a successive fashion the rows may follow one after another,
@@ -121,10 +107,9 @@ The efficient way
 When it comes to performance you cannot beat the classic C style operator[] (pointer) access.
 Therefore, the most efficient method we can recommend for making the assignment is:
 
-@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
-
-lines
-126-153
+@skip Mat& ScanImageAndReduceC
+@until return
+@until }
 
 Here we basically just acquire a pointer to the start of each row and go through it until it ends.
 In the special case that the matrix is stored in a continues manner we only need to request the
@@ -156,10 +141,9 @@ considered a safer way as it takes over these tasks from the user. All you need
 begin and the end of the image matrix and then just increase the begin iterator until you reach the
end. To acquire the value *pointed* by the iterator use the \* operator (add it before it).
|
end. To acquire the value *pointed* by the iterator use the \* operator (add it before it).
|
||||||
|
|
||||||
@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
|
@skip ScanImageAndReduceIterator
|
||||||
|
@until return
|
||||||
lines
|
@until }
|
||||||
155-183
|
|
||||||
|
|
||||||
In case of color images we have three uchar items per column. This may be considered a short vector
|
In case of color images we have three uchar items per column. This may be considered a short vector
|
||||||
of uchar items, that has been baptized in OpenCV with the *Vec3b* name. To access the n-th sub
|
of uchar items, that has been baptized in OpenCV with the *Vec3b* name. To access the n-th sub
|
||||||
@ -177,10 +161,9 @@ what type we are looking at the image. It's no different here as you need manual
|
|||||||
type to use at the automatic lookup. You can observe this in case of the gray scale images for the
|
type to use at the automatic lookup. You can observe this in case of the gray scale images for the
|
||||||
following source code (the usage of the + @ref cv::at() function):
|
following source code (the usage of the + @ref cv::at() function):
|
||||||
|
|
||||||
@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
|
@skip ScanImageAndReduceRandomAccess
|
||||||
|
@until return
|
||||||
lines
|
@until }
|
||||||
185-217
|
|
||||||
|
|
||||||
The functions takes your input type and coordinates and calculates on the fly the address of the
|
The functions takes your input type and coordinates and calculates on the fly the address of the
|
||||||
queried item. Then returns a reference to that. This may be a constant when you *get* the value and
|
queried item. Then returns a reference to that. This may be a constant when you *get* the value and
|
||||||
@ -209,17 +192,14 @@ OpenCV has a function that makes the modification without the need from you to w
|
|||||||
the image. We use the @ref cv::LUT() function of the core module. First we build a Mat type of the
|
the image. We use the @ref cv::LUT() function of the core module. First we build a Mat type of the
|
||||||
lookup table:
|
lookup table:
|
||||||
|
|
||||||
@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
|
@dontinclude cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
|
||||||
|
|
||||||
lines
|
@skip Mat lookUpTable
|
||||||
108-111
|
@until p[i] = table[i]
|
||||||
|
|
||||||
Finally call the function (I is our input image and J the output one):
|
Finally call the function (I is our input image and J the output one):
|
||||||
|
|
||||||
@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
|
@skipline LUT
|
||||||
|
|
||||||
lines
|
|
||||||
116
|
|
||||||
|
|
||||||
Performance Difference
|
Performance Difference
|
||||||
----------------------
|
----------------------
|
||||||
|
After Width: | Height: | Size: 1.9 KiB |
After Width: | Height: | Size: 3.8 KiB |
@@ -23,7 +23,7 @@ download it from [here](samples/cpp/tutorial_code/core/ippasync/ippasync_sample.
 Explanation
 -----------
 
-1. Create parameters for OpenCV:
+-# Create parameters for OpenCV:
 @code{.cpp}
 VideoCapture cap;
 Mat image, gray, result;
@@ -36,7 +36,7 @@ Explanation
 hppStatus sts;
 hppiVirtualMatrix * virtMatrix;
 @endcode
-2. Load input image or video. How to open and read video stream you can see in the
+-# Load input image or video. How to open and read video stream you can see in the
 @ref tutorial_video_input_psnr_ssim tutorial.
 @code{.cpp}
 if( useCamera )
@@ -56,7 +56,7 @@ Explanation
 return -1;
 }
 @endcode
-3. Create accelerator instance using
+-# Create accelerator instance using
 [hppCreateInstance](http://software.intel.com/en-us/node/501686):
 @code{.cpp}
 accelType = sAccel == "cpu" ? HPP_ACCEL_TYPE_CPU:
@@ -67,12 +67,12 @@ Explanation
 sts = hppCreateInstance(accelType, 0, &accel);
 CHECK_STATUS(sts, "hppCreateInstance");
 @endcode
-4. Create an array of virtual matrices using
+-# Create an array of virtual matrices using
 [hppiCreateVirtualMatrices](http://software.intel.com/en-us/node/501700) function.
 @code{.cpp}
 virtMatrix = hppiCreateVirtualMatrices(accel, 1);
 @endcode
-5. Prepare a matrix for input and output data:
+-# Prepare a matrix for input and output data:
 @code{.cpp}
 cap >> image;
 if(image.empty())
@@ -82,7 +82,7 @@ Explanation
 
 result.create( image.rows, image.cols, CV_8U);
 @endcode
-6. Convert Mat to [hppiMatrix](http://software.intel.com/en-us/node/501660) using @ref cv::hpp::getHpp
+-# Convert Mat to [hppiMatrix](http://software.intel.com/en-us/node/501660) using @ref cv::hpp::getHpp
 and call [hppiSobel](http://software.intel.com/en-us/node/474701) function.
 @code{.cpp}
 //convert Mat to hppiMatrix
@@ -104,14 +104,14 @@ Explanation
 HPP_DATA_TYPE_16S data type for source matrix with HPP_DATA_TYPE_8U type. You should check
 hppStatus after each call IPP Async function.
 
-7. Create windows and show the images, the usual way.
+-# Create windows and show the images, the usual way.
 @code{.cpp}
 imshow("image", image);
 imshow("rez", result);
 
 waitKey(15);
 @endcode
-8. Delete hpp matrices.
+-# Delete hpp matrices.
 @code{.cpp}
 sts = hppiFreeMatrix(src);
 CHECK_DEL_STATUS(sts,"hppiFreeMatrix");
@@ -119,7 +119,7 @@ Explanation
 sts = hppiFreeMatrix(dst);
 CHECK_DEL_STATUS(sts,"hppiFreeMatrix");
 @endcode
-9. Delete virtual matrices and accelerator instance.
+-# Delete virtual matrices and accelerator instance.
 @code{.cpp}
 if (virtMatrix)
 {
@@ -140,4 +140,4 @@ Result
 After compiling the code above we can execute it giving an image or video path and accelerator type
 as an argument. For this tutorial we use baboon.png image as input. The result is below.
 
-
+
@@ -93,20 +93,18 @@ To further help on seeing the difference the programs supports two modes: one mi
 one pure C++. If you define the *DEMO_MIXED_API_USE* you'll end up using the first. The program
 separates the color planes, does some modifications on them and in the end merge them back together.
 
-@includelineno
-cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
-
-lines
-1-10, 23-26, 29-46
-
+@dontinclude cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
+@until namespace cv
+@skip ifdef
+@until endif
+@skip main
+@until endif
 
 Here you can observe that with the new structure we have no pointer problems, although it is
 possible to use the old functions and in the end just transform the result to a *Mat* object.
 
-@includelineno
-cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
-
-lines
-48-53
+@skip convert image
+@until split
 
 Because, we want to mess around with the images luma component we first convert from the default RGB
 to the YUV color space and then split the result up into separate planes. Here the program splits:
@@ -116,11 +114,8 @@ image some Gaussian noise and then mix together the channels according to some f
 
 The scanning version looks like:
 
-@includelineno
-cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
-
-lines
-57-77
+@skip #if 1
+@until #else
 
 Here you can observe that we may go through all the pixels of an image in three fashions: an
 iterator, a C pointer and an individual element access style. You can read a more in-depth
@@ -128,26 +123,20 @@ description of these in the @ref tutorial_how_to_scan_images tutorial. Convertin
 names is easy. Just remove the cv prefix and use the new *Mat* data structure. Here's an example of
 this by using the weighted addition function:
 
-@includelineno
-cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
-
-lines
-81-113
+@until planes[0]
+@until endif
 
 As you may observe the *planes* variable is of type *Mat*. However, converting from *Mat* to
 *IplImage* is easy and made automatically with a simple assignment operator.
 
-@includelineno
-cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
-
-lines
-117-129
+@skip merge(planes
+@until #endif
 
 The new *imshow* highgui function accepts both the *Mat* and *IplImage* data structures. Compile and
 run the program and if the first image below is your input you may get either the first or second as
 output:
 
 
 
 You may observe a runtime instance of this on the [YouTube
 here](https://www.youtube.com/watch?v=qckm-zvo31w) and you can [download the source code from here
@@ -130,7 +130,7 @@ difference.
 
 For example:
 
-
+
 
 You can download this source code from [here
 ](samples/cpp/tutorial_code/core/mat_mask_operations/mat_mask_operations.cpp) or look in the
@@ -9,7 +9,7 @@ computed tomography, and magnetic resonance imaging to name a few. In every case
 see are images. However, when transforming this to our digital devices what we record are numerical
 values for each of the points of the image.
 
-
+
 
 For example in the above image you can see that the mirror of the car is nothing more than a matrix
 containing all the intensity values of the pixel points. How we get and store the pixels values may
@@ -144,18 +144,18 @@ file by using the @ref cv::imwrite() function. However, for debugging purposes i
 convenient to see the actual values. You can do this using the \<\< operator of *Mat*. Be aware that
 this only works for two dimensional matrices.
 
+@dontinclude cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
 
 Although *Mat* works really well as an image container, it is also a general matrix class.
 Therefore, it is possible to create and manipulate multidimensional matrices. You can create a Mat
 object in multiple ways:
 
 - @ref cv::Mat::Mat Constructor
 
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines 27-28
-
-
+@skip Mat M(2
+@until cout
+
+
 
 For two dimensional and multichannel images we first define their size: row and column count wise.
 
@@ -173,11 +173,8 @@ object in multiple ways:
 
 - Use C/C++ arrays and initialize via constructor
 
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-35-36
+@skip int sz
+@until Mat L
 
 The upper example shows how to create a matrix with more than two dimensions. Specify its
 dimension, then pass a pointer containing the size for each dimension and the rest remains the
@@ -188,14 +185,14 @@ object in multiple ways:
 IplImage* img = cvLoadImage("greatwave.png", 1);
 Mat mtx(img); // convert IplImage* -> Mat
 @endcode
 
 - @ref cv::Mat::create function:
-
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines 31-32
-
-
+@code
+M.create(4,4, CV_8UC(2));
+cout << "M = "<< endl << " " << M << endl << endl;
+@endcode
+
+
 
 You cannot initialize the matrix values with this construction. It will only reallocate its matrix
 data memory if the new size will not fit into the old one.
@@ -203,41 +200,31 @@ object in multiple ways:
 - MATLAB style initializer: @ref cv::Mat::zeros , @ref cv::Mat::ones , @ref cv::Mat::eye . Specify size and
 data type to use:
 
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-40-47
-
-
+@skip Mat E
+@until cout
+
+
 
 - For small matrices you may use comma separated initializers:
 
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines 50-51
-
-
+@skip Mat C
+@until cout
+
+
 
 - Create a new header for an existing *Mat* object and @ref cv::Mat::clone or @ref cv::Mat::copyTo it.
 
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines 53-54
-
-
+@skip Mat RowClone
+@until cout
+
+
 
 @note
 You can fill out a matrix with random values using the @ref cv::randu() function. You need to
 give the lower and upper value for the random values:
-
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-57-58
+@skip Mat R
+@until randu
 
 Output formatting
 -----------------
@@ -246,54 +233,26 @@ In the above examples you could see the default formatting option. OpenCV, howev
 format your matrix output:
 
 - Default
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-61
-
-
+@skipline (default)
+
+
 
 - Python
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-62
-
-
+@skipline (python)
+
+
 
 - Comma separated values (CSV)
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-64
-
-
+@skipline (csv)
+
+
 
 - Numpy
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-63
-
-
+@code
+cout << "R (numpy) = " << endl << format(R, Formatter::FMT_NUMPY ) << endl << endl;
+@endcode
+
+
 
 - C
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-65
-
-
+@skipline (c)
+
+
 
 Output of other common items
 ----------------------------
@@ -301,44 +260,24 @@ Output of other common items
 OpenCV offers support for output of other common OpenCV data structures too via the \<\< operator:
 
 - 2D Point
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-67-68
-
-
+@skip Point2f P
+@until cout
+
+
 
 - 3D Point
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-70-71
-
-
+@skip Point3f P3f
+@until cout
+
+
 
 - std::vector via cv::Mat
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-74-77
-
-
+@skip vector<float> v
+@until cout
+
+
 
 - std::vector of points
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-
-lines
-79-83
-
-
+@skip vector<Point2f> vPoints
+@until cout
+
+
 
 Most of the samples here have been included in a small console application. You can download it from
 [here](samples/cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp)
@@ -25,7 +25,7 @@ Code
 Explanation
 -----------
 
-1. Let's start by checking out the *main* function. We observe that first thing we do is creating a
+-# Let's start by checking out the *main* function. We observe that first thing we do is creating a
 *Random Number Generator* object (RNG):
 @code{.cpp}
 RNG rng( 0xFFFFFFFF );
@@ -33,7 +33,7 @@ Explanation
 RNG implements a random number generator. In this example, *rng* is a RNG element initialized
 with the value *0xFFFFFFFF*
 
-2. Then we create a matrix initialized to *zeros* (which means that it will appear as black),
+-# Then we create a matrix initialized to *zeros* (which means that it will appear as black),
 specifying its height, width and its type:
 @code{.cpp}
 /// Initialize a matrix filled with zeros
@@ -42,7 +42,7 @@ Explanation
 /// Show it in a window during DELAY ms
 imshow( window_name, image );
 @endcode
-3. Then we proceed to draw crazy stuff. After taking a look at the code, you can see that it is
+-# Then we proceed to draw crazy stuff. After taking a look at the code, you can see that it is
 mainly divided in 8 sections, defined as functions:
 @code{.cpp}
 /// Now, let's draw some lines
@@ -79,7 +79,7 @@ Explanation
 All of these functions follow the same pattern, so we will analyze only a couple of them, since
 the same explanation applies for all.
 
-4. Checking out the function **Drawing_Random_Lines**:
+-# Checking out the function **Drawing_Random_Lines**:
 @code{.cpp}
 int Drawing_Random_Lines( Mat image, char* window_name, RNG rng )
 {
@@ -133,11 +133,11 @@ Explanation
 are used as the *R*, *G* and *B* parameters for the line color. Hence, the color of the
 lines will be random too!
 
-5. The explanation above applies for the other functions generating circles, ellipses, polygones,
+-# The explanation above applies for the other functions generating circles, ellipses, polygones,
 etc. The parameters such as *center* and *vertices* are also generated randomly.
-6. Before finishing, we also should take a look at the functions *Display_Random_Text* and
+-# Before finishing, we also should take a look at the functions *Display_Random_Text* and
 *Displaying_Big_End*, since they both have a few interesting features:
-7. **Display_Random_Text:**
+-# **Display_Random_Text:**
 @code{.cpp}
 int Displaying_Random_Text( Mat image, char* window_name, RNG rng )
 {
@@ -178,7 +178,7 @@ Explanation
 As a result, we will get (analagously to the other drawing functions) **NUMBER** texts over our
 image, in random locations.
 
-8. **Displaying_Big_End**
+-# **Displaying_Big_End**
 @code{.cpp}
 int Displaying_Big_End( Mat image, char* window_name, RNG rng )
 {
@@ -222,28 +222,28 @@ Result
 As you just saw in the Code section, the program will sequentially execute diverse drawing
 functions, which will produce:
 
-1. First a random set of *NUMBER* lines will appear on screen such as it can be seen in this
+-# First a random set of *NUMBER* lines will appear on screen such as it can be seen in this
 screenshot:
 
 
 
-2. Then, a new set of figures, these time *rectangles* will follow.
-3. Now some ellipses will appear, each of them with random position, size, thickness and arc
+-# Then, a new set of figures, these time *rectangles* will follow.
+-# Now some ellipses will appear, each of them with random position, size, thickness and arc
 length:
 
 
 
-4. Now, *polylines* with 03 segments will appear on screen, again in random configurations.
+-# Now, *polylines* with 03 segments will appear on screen, again in random configurations.
 
 
 
-5. Filled polygons (in this example triangles) will follow.
-6. The last geometric figure to appear: circles!
+-# Filled polygons (in this example triangles) will follow.
+-# The last geometric figure to appear: circles!
 
 
 
-7. Near the end, the text *"Testing Text Rendering"* will appear in a variety of fonts, sizes,
+-# Near the end, the text *"Testing Text Rendering"* will appear in a variety of fonts, sizes,
 colors and positions.
-8. And the big end (which by the way expresses a big truth too):
+-# And the big end (which by the way expresses a big truth too):
 
 
@@ -4,10 +4,10 @@ AKAZE local features matching {#tutorial_akaze_matching}

Introduction
------------

In this tutorial we will learn how to use AKAZE @cite ANB13 local features to detect and match keypoints on
two images.

We will find keypoints on a pair of images with a given homography matrix, match them and count the
number of inliers (i.e. matches that fit in the given homography).

You can find an expanded version of this example here:
@@ -18,7 +18,7 @@ Data

We are going to use images 1 and 3 from the *Graffiti* sequence of the Oxford dataset.



Homography is given by a 3 by 3 matrix:
@code{.none}
@@ -35,92 +35,92 @@ You can find the images (*graf1.png*, *graf3.png*) and homography (*H1to3p.xml*)

### Explanation

-# **Load images and homography**
    @code{.cpp}
    Mat img1 = imread("graf1.png", IMREAD_GRAYSCALE);
    Mat img2 = imread("graf3.png", IMREAD_GRAYSCALE);

    Mat homography;
    FileStorage fs("H1to3p.xml", FileStorage::READ);
    fs.getFirstTopLevelNode() >> homography;
    @endcode
    We are loading grayscale images here. Homography is stored in the xml created with FileStorage.
-# **Detect keypoints and compute descriptors using AKAZE**
    @code{.cpp}
    vector<KeyPoint> kpts1, kpts2;
    Mat desc1, desc2;

    AKAZE akaze;
    akaze(img1, noArray(), kpts1, desc1);
    akaze(img2, noArray(), kpts2, desc2);
    @endcode
    We create an AKAZE object and use its *operator()* functionality. Since we don't need the *mask*
    parameter, *noArray()* is used.
-# **Use brute-force matcher to find 2-nn matches**
    @code{.cpp}
    BFMatcher matcher(NORM_HAMMING);
    vector< vector<DMatch> > nn_matches;
    matcher.knnMatch(desc1, desc2, nn_matches, 2);
    @endcode
    We use Hamming distance, because AKAZE uses a binary descriptor by default.
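The Hamming distance used above is simply the number of bits that differ between two binary descriptors. A self-contained sketch of the computation over raw descriptor bytes (the byte-vector representation is an assumption for the example):

```cpp
#include <bitset>
#include <cstdint>
#include <vector>

// Hamming distance between two equally sized binary descriptors: XOR the
// bytes and count the set bits. This is what NORM_HAMMING computes.
int hammingDistance(const std::vector<uint8_t> &a, const std::vector<uint8_t> &b) {
    int dist = 0;
    for (size_t i = 0; i < a.size(); i++)
        dist += static_cast<int>(std::bitset<8>(a[i] ^ b[i]).count());
    return dist;
}
```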
-# **Use 2-nn matches to find correct keypoint matches**
    @code{.cpp}
    for(size_t i = 0; i < nn_matches.size(); i++) {
        DMatch first = nn_matches[i][0];
        float dist1 = nn_matches[i][0].distance;
        float dist2 = nn_matches[i][1].distance;

        if(dist1 < nn_match_ratio * dist2) {
            matched1.push_back(kpts1[first.queryIdx]);
            matched2.push_back(kpts2[first.trainIdx]);
        }
    }
    @endcode
    If the closest match is *ratio* closer than the second closest one, then the match is correct.
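The ratio test above can be isolated from the OpenCV types: given each query's closest and second-closest distances, keep only the matches whose best distance is clearly smaller than the runner-up. A minimal sketch (the pair-of-distances input format is an assumption for the example):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Lowe-style ratio test on precomputed (closest, second-closest) distance
// pairs. Returns the indices of the matches that survive the filter.
std::vector<std::size_t> ratioTest(const std::vector<std::pair<float, float>> &nn,
                                   float nn_match_ratio) {
    std::vector<std::size_t> kept;
    for (std::size_t i = 0; i < nn.size(); i++)
        if (nn[i].first < nn_match_ratio * nn[i].second)
            kept.push_back(i);
    return kept;
}
```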
-# **Check if our matches fit in the homography model**
    @code{.cpp}
    for(int i = 0; i < matched1.size(); i++) {
        Mat col = Mat::ones(3, 1, CV_64F);
        col.at<double>(0) = matched1[i].pt.x;
        col.at<double>(1) = matched1[i].pt.y;

        col = homography * col;
        col /= col.at<double>(2);
        float dist = sqrt( pow(col.at<double>(0) - matched2[i].pt.x, 2) +
                           pow(col.at<double>(1) - matched2[i].pt.y, 2));

        if(dist < inlier_threshold) {
            int new_i = inliers1.size();
            inliers1.push_back(matched1[i]);
            inliers2.push_back(matched2[i]);
            good_matches.push_back(DMatch(new_i, new_i, 0));
        }
    }
    @endcode
    If the distance from the first keypoint's projection to the second keypoint is less than the
    threshold, then it fits in the homography.

    We create a new set of matches for the inliers, because it is required by the drawing function.
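The per-match check above is: project the first keypoint through the homography, de-homogenize, and measure the Euclidean distance to the second keypoint. The same arithmetic without OpenCV types, with the homography given row-major (an assumption for the example):

```cpp
#include <cmath>

// Project (x, y) through the 3x3 homography H (row-major) and return the
// Euclidean distance to the observed point (u, v) -- the quantity the
// tutorial compares against inlier_threshold.
double reprojectionError(const double H[9], double x, double y, double u, double v) {
    double X = H[0] * x + H[1] * y + H[2];
    double Y = H[3] * x + H[4] * y + H[5];
    double W = H[6] * x + H[7] * y + H[8];
    X /= W;  // normalize the homogeneous coordinate
    Y /= W;
    return std::sqrt((X - u) * (X - u) + (Y - v) * (Y - v));
}
```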
-# **Output results**
    @code{.cpp}
    Mat res;
    drawMatches(img1, inliers1, img2, inliers2, good_matches, res);
    imwrite("res.png", res);
    ...
    @endcode
    Here we save the resulting image and print some statistics.
### Results

Found matches
-------------



A-KAZE Matching Results
-----------------------
@@ -152,8 +152,9 @@ A-KAZE Matching Results
--------------------------

.. code-block:: none

    Keypoints 1   2943
    Keypoints 2   3511
    Matches       447
    Inliers       308
    Inlier Ratio  0.689038
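The *Inlier Ratio* reported in the statistics above is just the fraction of putative matches that agreed with the homography. A trivial sketch of that evaluation metric:

```cpp
#include <cmath>

// Inlier ratio: the fraction of putative matches consistent with the
// homography (0.0 when there are no matches at all).
double inlierRatio(int inliers, int matches) {
    return matches > 0 ? static_cast<double>(inliers) / matches : 0.0;
}
```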
@@ -11,16 +11,17 @@ The algorithm is as follows:

- Detect and describe keypoints on the first frame, manually set object boundaries
- For every next frame:
    -# Detect and describe keypoints
    -# Match them using bruteforce matcher
    -# Estimate homography transformation using RANSAC
    -# Filter inliers from all the matches
    -# Apply homography transformation to the bounding box to find the object
    -# Draw bounding box and inliers, compute inlier ratio as evaluation metric


Data
----

To do the tracking we need a video and object position on the first frame.

@@ -31,14 +32,16 @@ To run the code you have to specify input and output video path and object bound
@code{.none}
./planar_tracking blais.mp4 result.avi blais_bb.xml.gz
@endcode

Source Code
-----------

@includelineno cpp/tutorial_code/features2D/AKAZE_tracking/planar_tracking.cpp

Explanation
-----------

### Tracker class

This class implements the algorithm described above using the given feature detector and descriptor
matcher.
@@ -63,62 +66,60 @@ matcher.

- **Processing frames**

    -# Locate keypoints and compute descriptors
        @code{.cpp}
        (*detector)(frame, noArray(), kp, desc);
        @endcode

        To find matches between frames we have to locate the keypoints first.

        In this tutorial detectors are set up to find about 1000 keypoints on each frame.

    -# Use 2-nn matcher to find correspondences
        @code{.cpp}
        matcher->knnMatch(first_desc, desc, matches, 2);
        for(unsigned i = 0; i < matches.size(); i++) {
            if(matches[i][0].distance < nn_match_ratio * matches[i][1].distance) {
                matched1.push_back(first_kp[matches[i][0].queryIdx]);
                matched2.push_back(      kp[matches[i][0].trainIdx]);
            }
        }
        @endcode
        If the closest match is *nn_match_ratio* closer than the second closest one, then it's a
        match.
    -# Use *RANSAC* to estimate homography transformation
        @code{.cpp}
        homography = findHomography(Points(matched1), Points(matched2),
                                    RANSAC, ransac_thresh, inlier_mask);
        @endcode
        If there are at least 4 matches we can use random sample consensus to estimate image
        transformation.
    -# Save the inliers
        @code{.cpp}
        for(unsigned i = 0; i < matched1.size(); i++) {
            if(inlier_mask.at<uchar>(i)) {
                int new_i = static_cast<int>(inliers1.size());
                inliers1.push_back(matched1[i]);
                inliers2.push_back(matched2[i]);
                inlier_matches.push_back(DMatch(new_i, new_i, 0));
            }
        }
        @endcode
        Since *findHomography* computes the inliers we only have to save the chosen points and
        matches.
    -# Project object bounding box
        @code{.cpp}
        perspectiveTransform(object_bb, new_bb, homography);
        @endcode

        If there is a reasonable number of inliers we can use estimated transformation to locate
        the object.

Results
-------

You can watch the resulting [video on youtube](http://www.youtube.com/watch?v=LWY-w8AGGhE).
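The bounding-box projection step above pushes each of the four box corners through the estimated homography and de-homogenizes the result, which is what *perspectiveTransform* does. A self-contained sketch with a row-major 3x3 matrix and a hypothetical `P2` point type (both assumptions for the example):

```cpp
#include <array>

// A hypothetical 2D point type standing in for cv::Point2f.
struct P2 { double x, y; };

// Project the four bounding-box corners through the homography H
// (row-major) and de-homogenize each result.
std::array<P2, 4> projectBox(const double H[9], const std::array<P2, 4> &bb) {
    std::array<P2, 4> out;
    for (int i = 0; i < 4; i++) {
        double w = H[6] * bb[i].x + H[7] * bb[i].y + H[8];
        out[i].x = (H[0] * bb[i].x + H[1] * bb[i].y + H[2]) / w;
        out[i].y = (H[3] * bb[i].x + H[4] * bb[i].y + H[5]) / w;
    }
    return out;
}
```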
@@ -129,6 +130,7 @@ Inliers 410
Inlier ratio 0.58
Keypoints 1117
@endcode

*ORB* statistics:
@code{.none}
Matches 504
@@ -87,4 +87,4 @@ Result

Here is the result after applying the BruteForce matcher between the two original images:


@@ -79,10 +79,10 @@ Explanation
Result
------

-# Here is the result of the feature detection applied to the first image:

    

-# And here is the result for the second image:

    
@@ -130,10 +130,10 @@ Explanation
Result
------

-# Here is the result of the feature detection applied to the first image:

    

-# Additionally, we get as console output the keypoints filtered:

    
@@ -134,8 +134,8 @@ Explanation
Result
------

-# And here is the result for the detected object (highlighted in green)

    
@@ -122,9 +122,9 @@ Explanation
Result
------



Here is the result:


@@ -30,7 +30,7 @@ Explanation
Result
------




@@ -111,5 +111,5 @@ Explanation
Result
------


@@ -201,9 +201,9 @@ Result

The original image:



The detected corners are surrounded by a small black circle


@@ -1,8 +0,0 @@
-General tutorials {#tutorial_table_of_content_general}
-=================
-
-These tutorials are the bottom of the iceberg as they link together multiple of the modules
-presented above in order to solve complex problems.
@@ -24,28 +24,45 @@ The source code

You may also find the source code and the video file in the
`samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity` folder of the OpenCV
source library or download it from [here](samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp).
The full source code is quite long (due to the controlling of the application via the command line
arguments and performance measurement). Therefore, to avoid cluttering up these sections with those
you'll find here only the functions themselves.

The PSNR returns a float number; if the two inputs are similar it is between 30 and 50 (higher is
better).

@dontinclude samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp

@skip struct BufferPSNR
@until };

@skip double getPSNR(
@until return psnr;
@until }
@until }

@skip double getPSNR_CUDA(
@until return psnr;
@until }
@until }
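The PSNR described above is the mean squared error between the two frames mapped onto a logarithmic decibel scale: `10 * log10(255^2 / MSE)` for 8-bit data. A self-contained sketch over flat byte buffers (the vector representation is an assumption for the example); like the tutorial's `getPSNR`, it returns 0 for essentially identical inputs, where the true value would be infinite:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// PSNR for 8-bit data: mean squared error, then 10 * log10(255^2 / MSE).
double psnr(const std::vector<uint8_t> &a, const std::vector<uint8_t> &b) {
    double sse = 0.0;
    for (size_t i = 0; i < a.size(); i++) {
        double d = static_cast<double>(a[i]) - b[i];
        sse += d * d;
    }
    double mse = sse / a.size();
    if (mse <= 1e-10) return 0.0; // effectively identical frames
    return 10.0 * std::log10(255.0 * 255.0 / mse);
}
```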
The SSIM returns the MSSIM of the images. This is also a float number between zero and one (higher is
better); however, we have one for each channel. Therefore, we return a *Scalar* OpenCV data
structure:

@dontinclude samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp

@skip struct BufferMSSIM
@until };

@skip Scalar getMSSIM(
@until return mssim;
@until }

@skip Scalar getMSSIM_CUDA_optimized(
@until return mssim;
@until }
How to do it? - The GPU
-----------------------
@@ -124,7 +141,7 @@ The reason for this is that you're throwing out on the window the price for memo
data transfer. And on the GPU this is damn high. Another possibility for optimization is to
introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::cuda::Stream.

-# Memory allocation on the GPU is considerable. Therefore, if possible, allocate new memory as
    few times as possible. If you create a function that you intend to call multiple times it is a
    good idea to allocate any local parameters for the function only once, during the first call. To
    do this you create a data structure containing all the local variables you will use. For
@@ -148,7 +165,7 @@ introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::cuda::
    Now you access these local parameters as: *b.gI1*, *b.buf* and so on. The GpuMat will only
    reallocate itself on a new call if the new matrix size is different from the previous one.

-# Avoid unnecessary function data transfers. Any small data transfer will be a significant one once
    you go to the GPU. Therefore, if possible, make all calculations in-place (in other words, do not
    create new memory objects - for reasons explained at the previous point). For example, although
    arithmetical operations may be easier to express in one-line formulas, it will be
@@ -164,7 +181,7 @@ introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::cuda::
    gpu::multiply(b.mu1_mu2, 2, b.t1); //b.t1 = 2 * b.mu1_mu2 + C1;
    gpu::add(b.t1, C1, b.t1);
    @endcode
-# Use asynchronous calls (the @ref cv::cuda::Stream ). By default whenever you call a gpu function
    it will wait for the call to finish and return with the result afterwards. However, it is
    possible to make asynchronous calls, meaning it will call for the operation execution, make the
    costly data allocations for the algorithm and return back right away. Now you can call another
@@ -189,7 +206,7 @@ Result and conclusion
---------------------

On an Intel P8700 laptop CPU paired with a low end NVidia GT220M here are the performance numbers:
@code
Time of PSNR CPU (averaged for 10 runs): 41.4122 milliseconds. With result of: 19.2506
Time of PSNR GPU (averaged for 10 runs): 158.977 milliseconds. With result of: 19.2506
Initial call GPU optimized:              31.3418 milliseconds. With result of: 19.2506
|
Before Width: | Height: | Size: 111 KiB After Width: | Height: | Size: 111 KiB |
Before Width: | Height: | Size: 53 KiB After Width: | Height: | Size: 53 KiB |
Before Width: | Height: | Size: 120 KiB After Width: | Height: | Size: 120 KiB |
@@ -94,9 +94,8 @@ Below is the output of the program. Use the first image as the input. For the DE
the SRTM file located at the USGS here.
[<http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip>](http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip)





@@ -106,8 +106,8 @@ Results

Below is the output of the program. Use the first image as the input. For the DEM model, download the SRTM file located at the USGS here. `http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip <http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip>`_

.. image:: images/gdal_output.jpg

.. image:: images/gdal_heat-map.jpg

.. image:: images/gdal_flood-zone.jpg
@@ -7,7 +7,7 @@ Adding a Trackbar to our applications! {#tutorial_trackbar}
- Well, it is time to use some fancy GUI tools. OpenCV provides some GUI utilities (*highgui.h*)
  for you. An example of this is a **Trackbar**

  

- In this tutorial we will just modify our two previous programs so that they get the input
  information from the trackbar.
@@ -88,16 +88,16 @@ Explanation

We only analyze the code that is related to Trackbar:

-# First, we load 02 images, which are going to be blended.
    @code{.cpp}
    src1 = imread("../../images/LinuxLogo.jpg");
    src2 = imread("../../images/WindowsLogo.jpg");
    @endcode
-# To create a trackbar, first we have to create the window in which it is going to be located. So:
    @code{.cpp}
    namedWindow("Linear Blend", 1);
    @endcode
-# Now we can create the Trackbar:
    @code{.cpp}
    createTrackbar( TrackbarName, "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
    @endcode
@@ -110,7 +110,7 @@ We only analyze the code that is related to Trackbar:
    - The numerical value of Trackbar is stored in **alpha_slider**
    - Whenever the user moves the Trackbar, the callback function **on_trackbar** is called

-# Finally, we have to define the callback function **on_trackbar**
    @code{.cpp}
    void on_trackbar( int, void* )
    {
@@ -133,10 +133,10 @@ Result

- Our program produces the following output:

  

- As a manner of practice, you can also add 02 trackbars for the program made in
  @ref tutorial_basic_linear_transform. One trackbar to set \f$\alpha\f$ and another for \f$\beta\f$. The output might
  look like:

  
@@ -25,10 +25,14 @@ version of it ](samples/cpp/tutorial_code/HighGUI/video-input-psnr-ssim/video/Me
You may also find the source code and the video file in the
`samples/cpp/tutorial_code/HighGUI/video-input-psnr-ssim/` folder of the OpenCV source library.

@dontinclude cpp/tutorial_code/HighGUI/video-input-psnr-ssim/video-input-psnr-ssim.cpp

@until Scalar getMSSIM
@skip main
@until {
@skip if
@until return mssim;
@until }

How to read a video stream (online-camera or offline-file)?
-----------------------------------------------------------
@@ -243,10 +247,9 @@ for each frame, and the SSIM only for the frames where the PSNR falls below an i
visualization purpose we show both images in an OpenCV window and print the PSNR and MSSIM values to
the console. Expect to see something like:



You may observe a runtime instance of this on the [YouTube here](https://www.youtube.com/watch?v=iOcNljutOgg).

\htmlonly
<div align="center">
@ -47,7 +47,7 @@ somehow longer and includes names such as *XVID*, *DIVX*, *H264* or *LAGS* (*Lag
|
|||||||
Codec*). The full list of codecs you may use on a system depends on just what one you have
|
Codec*). The full list of codecs you may use on a system depends on just what one you have
|
||||||
installed.
|
installed.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
As you can see, things can get really complicated with videos. However, OpenCV is mainly a computer
|
As you can see, things can get really complicated with videos. However, OpenCV is mainly a computer
|
||||||
vision library, not a video streaming, codec, and writing one. Therefore, the developers tried to keep
|
vision library, not a video streaming, codec, and writing one. Therefore, the developers tried to keep
|
||||||
@ -75,7 +75,7 @@ const string source = argv[1]; // the source file name
|
|||||||
string::size_type pAt = source.find_last_of('.'); // Find extension point
|
string::size_type pAt = source.find_last_of('.'); // Find extension point
|
||||||
const string NAME = source.substr(0, pAt) + argv[2][0] + ".avi"; // Form the new name with container
|
const string NAME = source.substr(0, pAt) + argv[2][0] + ".avi"; // Form the new name with container
|
||||||
@endcode
|
@endcode
|
||||||
1. The codec to use for the video track. Now all the video codecs have a unique short name of
|
-# The codec to use for the video track. Now all the video codecs have a unique short name of
|
||||||
at most four characters. Hence the *XVID*, *DIVX* or *H264* names. This is called a four
|
at most four characters. Hence the *XVID*, *DIVX* or *H264* names. This is called a four
|
||||||
character code. You may also ask this from an input video by using its *get* function. Because
|
character code. You may also ask this from an input video by using its *get* function. Because
|
||||||
the *get* function is a general function it always returns double values. A double value is
|
the *get* function is a general function it always returns double values. A double value is
|
||||||
@ -109,13 +109,13 @@ const string NAME = source.substr(0, pAt) + argv[2][0] + ".avi"; // Form the n
|
|||||||
If you pass minus one for this argument, then a window will pop up at runtime that contains all
|
If you pass minus one for this argument, then a window will pop up at runtime that contains all
|
||||||
the codecs installed on your system and asks you to select the one to use:
|
the codecs installed on your system and asks you to select the one to use:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
2. The frames per second for the output video. Again, here I keep the input video's frames per second
|
-# The frames per second for the output video. Again, here I keep the input video's frames per second
|
||||||
by using the *get* function.
|
by using the *get* function.
|
||||||
3. The size of the frames for the output video. Here too I keep the input video's frame size
|
-# The size of the frames for the output video. Here too I keep the input video's frame size
|
||||||
by using the *get* function.
|
by using the *get* function.
|
||||||
4. The final argument is an optional one. By default it is true, meaning the output will be a
|
-# The final argument is an optional one. By default it is true, meaning the output will be a
|
||||||
color one (so for write you will send three-channel images). To create a grayscale video,
|
color one (so for write you will send three-channel images). To create a grayscale video,
|
||||||
pass a false parameter here.
|
pass a false parameter here.
|
||||||
|
|
||||||
@ -148,7 +148,7 @@ merge(spl, res);
|
|||||||
Put all this together and you'll get the source code above, whose runtime result will show something
|
Put all this together and you'll get the source code above, whose runtime result will show something
|
||||||
like this:
|
like this:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
You may observe a runtime instance of this on the [YouTube
|
You may observe a runtime instance of this on the [YouTube
|
||||||
here](https://www.youtube.com/watch?v=jpBwHxsl1_0).
|
here](https://www.youtube.com/watch?v=jpBwHxsl1_0).
|
||||||
|
@ -28,7 +28,7 @@ Morphological Operations
|
|||||||
- Finding of intensity bumps or holes in an image
|
- Finding of intensity bumps or holes in an image
|
||||||
- We will explain dilation and erosion briefly, using the following image as an example:
|
- We will explain dilation and erosion briefly, using the following image as an example:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
### Dilation
|
### Dilation
|
||||||
|
|
||||||
@ -40,7 +40,7 @@ Morphological Operations
|
|||||||
deduce, this maximizing operation causes bright regions within an image to "grow" (therefore the
|
deduce, this maximizing operation causes bright regions within an image to "grow" (therefore the
|
||||||
name *dilation*). Take as an example the image above. Applying dilation we can get:
|
name *dilation*). Take as an example the image above. Applying dilation we can get:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
The background (bright) dilates around the black regions of the letter.
|
The background (bright) dilates around the black regions of the letter.
|
||||||
|
|
||||||
@ -54,7 +54,7 @@ The background (bright) dilates around the black regions of the letter.
|
|||||||
(shown above). You can see in the result below that the bright areas of the image (the
|
(shown above). You can see in the result below that the bright areas of the image (the
|
||||||
background, apparently) get thinner, whereas the dark zones (the "writing") get bigger.
|
background, apparently) get thinner, whereas the dark zones (the "writing") get bigger.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Code
|
Code
|
||||||
----
|
----
|
||||||
@ -66,7 +66,7 @@ This tutorial code's is shown lines below. You can also download it from
|
|||||||
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
1. Most of the material shown here is already familiar to you (if you have any doubt, please refer to the tutorials in
|
-# Most of the material shown here is already familiar to you (if you have any doubt, please refer to the tutorials in
|
||||||
previous sections). Let's check the general structure of the program:
|
previous sections). Let's check the general structure of the program:
|
||||||
|
|
||||||
- Load an image (can be RGB or grayscale)
|
- Load an image (can be RGB or grayscale)
|
||||||
@ -80,7 +80,7 @@ Explanation
|
|||||||
|
|
||||||
Let's analyze these two functions:
|
Let's analyze these two functions:
|
||||||
|
|
||||||
2. **erosion:**
|
-# **erosion:**
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
/* @function Erosion */
|
/* @function Erosion */
|
||||||
void Erosion( int, void* )
|
void Erosion( int, void* )
|
||||||
@ -124,7 +124,7 @@ Explanation
|
|||||||
(iterations) at once. We are not using it in this simple tutorial, though. You can check out the
|
(iterations) at once. We are not using it in this simple tutorial, though. You can check out the
|
||||||
Reference for more details.
|
Reference for more details.
|
||||||
|
|
||||||
3. **dilation:**
|
-# **dilation:**
|
||||||
|
|
||||||
The code is below. As you can see, it is very similar to the snippet of code for **erosion**.
|
The code is below. As you can see, it is very similar to the snippet of code for **erosion**.
|
||||||
Here we also have the option of defining our kernel, its anchor point and the size of the operator
|
Here we also have the option of defining our kernel, its anchor point and the size of the operator
|
||||||
@ -152,10 +152,10 @@ Results
|
|||||||
|
|
||||||
Compile the code above and execute it with an image as argument. For instance, using this image:
|
Compile the code above and execute it with an image as argument. For instance, using this image:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
We get the results below. Varying the indices in the Trackbars gives different output images,
|
We get the results below. Varying the indices in the Trackbars gives different output images,
|
||||||
naturally. Try them out! You can even try to add a third Trackbar to control the number of
|
naturally. Try them out! You can even try to add a third Trackbar to control the number of
|
||||||
iterations.
|
iterations.
|
||||||
|
|
||||||

|

|
||||||
|
@ -56,17 +56,15 @@ enumeratevisibleitemswithsquare
|
|||||||
produce the output array.
|
produce the output array.
|
||||||
- Just to make the picture clearer, remember what a 1D Gaussian kernel looks like?
|
- Just to make the picture clearer, remember what a 1D Gaussian kernel looks like?
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Assuming that an image is 1D, you can notice that the pixel located in the middle would have the
|
Assuming that an image is 1D, you can notice that the pixel located in the middle would have the
|
||||||
biggest weight. The weight of its neighbors decreases as the spatial distance between them and
|
biggest weight. The weight of its neighbors decreases as the spatial distance between them and
|
||||||
the center pixel increases.
|
the center pixel increases.
|
||||||
|
|
||||||
@note
|
@note
|
||||||
Remember that a 2D Gaussian can be represented as :
|
Remember that a 2D Gaussian can be represented as :
|
||||||
|
|
||||||
\f[G_{0}(x, y) = A e^{ \dfrac{ -(x - \mu_{x})^{2} }{ 2\sigma^{2}_{x} } + \dfrac{ -(y - \mu_{y})^{2} }{ 2\sigma^{2}_{y} } }\f]
|
\f[G_{0}(x, y) = A e^{ \dfrac{ -(x - \mu_{x})^{2} }{ 2\sigma^{2}_{x} } + \dfrac{ -(y - \mu_{y})^{2} }{ 2\sigma^{2}_{y} } }\f]
|
||||||
|
|
||||||
where \f$\mu\f$ is the mean (the peak) and \f$\sigma^{2}\f$ is the variance (per each of the
|
where \f$\mu\f$ is the mean (the peak) and \f$\sigma^{2}\f$ is the variance (per each of the
|
||||||
variables \f$x\f$ and \f$y\f$)
|
variables \f$x\f$ and \f$y\f$)
|
||||||
|
|
||||||
@ -188,12 +186,13 @@ int display_dst( int delay );
|
|||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
@endcode
|
@endcode
|
||||||
|
|
||||||
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
1. Let's check the OpenCV functions that involve only the smoothing procedure, since the rest is
|
-# Let's check the OpenCV functions that involve only the smoothing procedure, since the rest is
|
||||||
already known by now.
|
already known by now.
|
||||||
2. **Normalized Block Filter:**
|
-# **Normalized Block Filter:**
|
||||||
|
|
||||||
OpenCV offers the function @ref cv::blur to perform smoothing with this filter.
|
OpenCV offers the function @ref cv::blur to perform smoothing with this filter.
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
@ -211,7 +210,7 @@ Explanation
|
|||||||
respect to the neighborhood. If there is a negative value, then the center of the kernel is
|
respect to the neighborhood. If there is a negative value, then the center of the kernel is
|
||||||
considered the anchor point.
|
considered the anchor point.
|
||||||
|
|
||||||
3. **Gaussian Filter:**
|
-# **Gaussian Filter:**
|
||||||
|
|
||||||
It is performed by the function @ref cv::GaussianBlur :
|
It is performed by the function @ref cv::GaussianBlur :
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
@ -231,7 +230,7 @@ Explanation
|
|||||||
- \f$\sigma_{y}\f$: The standard deviation in y. Writing \f$0\f$ implies that \f$\sigma_{y}\f$ is
|
- \f$\sigma_{y}\f$: The standard deviation in y. Writing \f$0\f$ implies that \f$\sigma_{y}\f$ is
|
||||||
calculated using kernel size.
|
calculated using kernel size.
|
||||||
|
|
||||||
4. **Median Filter:**
|
-# **Median Filter:**
|
||||||
|
|
||||||
This filter is provided by the @ref cv::medianBlur function:
|
This filter is provided by the @ref cv::medianBlur function:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
@ -245,7 +244,7 @@ Explanation
|
|||||||
- *dst*: Destination image, must be the same type as *src*
|
- *dst*: Destination image, must be the same type as *src*
|
||||||
- *i*: Size of the kernel (only one because we use a square window). Must be odd.
|
- *i*: Size of the kernel (only one because we use a square window). Must be odd.
|
||||||
|
|
||||||
5. **Bilateral Filter**
|
-# **Bilateral Filter**
|
||||||
|
|
||||||
Provided by OpenCV function @ref cv::bilateralFilter
|
Provided by OpenCV function @ref cv::bilateralFilter
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
@ -268,6 +267,4 @@ Results
|
|||||||
filters explained.
|
filters explained.
|
||||||
- Here is a snapshot of the image smoothed using *medianBlur*:
|
- Here is a snapshot of the image smoothed using *medianBlur*:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
|
|
||||||
|
@ -28,17 +28,14 @@ Theory
|
|||||||
- Let's say you have gotten a skin histogram (Hue-Saturation) based on the image below. The
|
- Let's say you have gotten a skin histogram (Hue-Saturation) based on the image below. The
|
||||||
histogram beside it is going to be our *model histogram* (which we know represents a sample of
|
histogram beside it is going to be our *model histogram* (which we know represents a sample of
|
||||||
skin tonality). You applied some mask to capture only the histogram of the skin area:
|
skin tonality). You applied some mask to capture only the histogram of the skin area:
|
||||||
|

|
||||||
------ ------
|

|
||||||
|T0| |T1|
|
|
||||||
------ ------
|
|
||||||
|
|
||||||
- Now, let's imagine that you get another hand image (Test Image) like the one below: (with its
|
- Now, let's imagine that you get another hand image (Test Image) like the one below: (with its
|
||||||
respective histogram):
|
respective histogram):
|
||||||
|

|
||||||
|

|
||||||
|
|
||||||
------ ------
|
|
||||||
|T2| |T3|
|
|
||||||
------ ------
|
|
||||||
|
|
||||||
- What we want to do is to use our *model histogram* (that we know represents a skin tonality) to
|
- What we want to do is to use our *model histogram* (that we know represents a skin tonality) to
|
||||||
detect skin areas in our Test Image. Here are the steps:
|
detect skin areas in our Test Image. Here are the steps:
|
||||||
@ -50,7 +47,7 @@ Theory
|
|||||||
the *model histogram* first, so the output for the Test Image can be visible for you.
|
the *model histogram* first, so the output for the Test Image can be visible for you.
|
||||||
-# Applying the steps above, we get the following BackProjection image for our Test Image:
|
-# Applying the steps above, we get the following BackProjection image for our Test Image:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
-# In terms of statistics, the values stored in *BackProjection* represent the *probability*
|
-# In terms of statistics, the values stored in *BackProjection* represent the *probability*
|
||||||
that a pixel in *Test Image* belongs to a skin area, based on the *model histogram* that we
|
that a pixel in *Test Image* belongs to a skin area, based on the *model histogram* that we
|
||||||
@ -83,98 +80,23 @@ Code
|
|||||||
in samples.
|
in samples.
|
||||||
|
|
||||||
- **Code at glance:**
|
- **Code at glance:**
|
||||||
@code{.cpp}
|
@includelineno samples/cpp/tutorial_code/Histograms_Matching/calcBackProject_Demo1.cpp
|
||||||
#include "opencv2/imgproc.hpp"
|
|
||||||
#include "opencv2/highgui.hpp"
|
|
||||||
|
|
||||||
#include <iostream>
|
|
||||||
|
|
||||||
using namespace cv;
|
|
||||||
using namespace std;
|
|
||||||
|
|
||||||
/// Global Variables
|
|
||||||
Mat src; Mat hsv; Mat hue;
|
|
||||||
int bins = 25;
|
|
||||||
|
|
||||||
/// Function Headers
|
|
||||||
void Hist_and_Backproj(int, void* );
|
|
||||||
|
|
||||||
/* @function main */
|
|
||||||
int main( int argc, char** argv )
|
|
||||||
{
|
|
||||||
/// Read the image
|
|
||||||
src = imread( argv[1], 1 );
|
|
||||||
/// Transform it to HSV
|
|
||||||
cvtColor( src, hsv, COLOR_BGR2HSV );
|
|
||||||
|
|
||||||
/// Use only the Hue value
|
|
||||||
hue.create( hsv.size(), hsv.depth() );
|
|
||||||
int ch[] = { 0, 0 };
|
|
||||||
mixChannels( &hsv, 1, &hue, 1, ch, 1 );
|
|
||||||
|
|
||||||
/// Create Trackbar to enter the number of bins
|
|
||||||
char* window_image = "Source image";
|
|
||||||
namedWindow( window_image, WINDOW_AUTOSIZE );
|
|
||||||
createTrackbar("* Hue bins: ", window_image, &bins, 180, Hist_and_Backproj );
|
|
||||||
Hist_and_Backproj(0, 0);
|
|
||||||
|
|
||||||
/// Show the image
|
|
||||||
imshow( window_image, src );
|
|
||||||
|
|
||||||
/// Wait until user exits the program
|
|
||||||
waitKey(0);
|
|
||||||
return 0;
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
/*
|
|
||||||
* @function Hist_and_Backproj
|
|
||||||
* @brief Callback to Trackbar
|
|
||||||
*/
|
|
||||||
void Hist_and_Backproj(int, void* )
|
|
||||||
{
|
|
||||||
MatND hist;
|
|
||||||
int histSize = MAX( bins, 2 );
|
|
||||||
float hue_range[] = { 0, 180 };
|
|
||||||
const float* ranges = { hue_range };
|
|
||||||
|
|
||||||
/// Get the Histogram and normalize it
|
|
||||||
calcHist( &hue, 1, 0, Mat(), hist, 1, &histSize, &ranges, true, false );
|
|
||||||
normalize( hist, hist, 0, 255, NORM_MINMAX, -1, Mat() );
|
|
||||||
|
|
||||||
/// Get Backprojection
|
|
||||||
MatND backproj;
|
|
||||||
calcBackProject( &hue, 1, 0, hist, backproj, &ranges, 1, true );
|
|
||||||
|
|
||||||
/// Draw the backproj
|
|
||||||
imshow( "BackProj", backproj );
|
|
||||||
|
|
||||||
/// Draw the histogram
|
|
||||||
int w = 400; int h = 400;
|
|
||||||
int bin_w = cvRound( (double) w / histSize );
|
|
||||||
Mat histImg = Mat::zeros( w, h, CV_8UC3 );
|
|
||||||
|
|
||||||
for( int i = 0; i < bins; i ++ )
|
|
||||||
{ rectangle( histImg, Point( i*bin_w, h ), Point( (i+1)*bin_w, h - cvRound( hist.at<float>(i)*h/255.0 ) ), Scalar( 0, 0, 255 ), -1 ); }
|
|
||||||
|
|
||||||
imshow( "Histogram", histImg );
|
|
||||||
}
|
|
||||||
@endcode
|
|
||||||
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
1. Declare the matrices to store our images and initialize the number of bins to be used by our
|
-# Declare the matrices to store our images and initialize the number of bins to be used by our
|
||||||
histogram:
|
histogram:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
Mat src; Mat hsv; Mat hue;
|
Mat src; Mat hsv; Mat hue;
|
||||||
int bins = 25;
|
int bins = 25;
|
||||||
@endcode
|
@endcode
|
||||||
2. Read the input image and transform it to HSV format:
|
-# Read the input image and transform it to HSV format:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
src = imread( argv[1], 1 );
|
src = imread( argv[1], 1 );
|
||||||
cvtColor( src, hsv, COLOR_BGR2HSV );
|
cvtColor( src, hsv, COLOR_BGR2HSV );
|
||||||
@endcode
|
@endcode
|
||||||
3. For this tutorial, we will use only the Hue value for our 1-D histogram (check out the fancier
|
-# For this tutorial, we will use only the Hue value for our 1-D histogram (check out the fancier
|
||||||
code in the links above if you want to use the more standard H-S histogram, which yields better
|
code in the links above if you want to use the more standard H-S histogram, which yields better
|
||||||
results):
|
results):
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
@ -182,7 +104,7 @@ Explanation
|
|||||||
int ch[] = { 0, 0 };
|
int ch[] = { 0, 0 };
|
||||||
mixChannels( &hsv, 1, &hue, 1, ch, 1 );
|
mixChannels( &hsv, 1, &hue, 1, ch, 1 );
|
||||||
@endcode
|
@endcode
|
||||||
As you can see, we use the function :mix_channels:mixChannels to get only channel 0 (Hue) from
|
As you can see, we use the function @ref cv::mixChannels to get only channel 0 (Hue) from
|
||||||
the hsv image. It gets the following parameters:
|
the hsv image. It gets the following parameters:
|
||||||
|
|
||||||
- **&hsv:** The source array from which the channels will be copied
|
- **&hsv:** The source array from which the channels will be copied
|
||||||
@ -193,7 +115,7 @@ Explanation
|
|||||||
case, the Hue(0) channel of &hsv is being copied to the 0 channel of &hue (1-channel)
|
case, the Hue(0) channel of &hsv is being copied to the 0 channel of &hue (1-channel)
|
||||||
- **1:** Number of index pairs
|
- **1:** Number of index pairs
|
||||||
|
|
||||||
4. Create a Trackbar for the user to enter the bin values. Any change on the Trackbar means a call
|
-# Create a Trackbar for the user to enter the bin values. Any change on the Trackbar means a call
|
||||||
to the **Hist_and_Backproj** callback function.
|
to the **Hist_and_Backproj** callback function.
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
char* window_image = "Source image";
|
char* window_image = "Source image";
|
||||||
@ -201,14 +123,14 @@ Explanation
|
|||||||
createTrackbar("* Hue bins: ", window_image, &bins, 180, Hist_and_Backproj );
|
createTrackbar("* Hue bins: ", window_image, &bins, 180, Hist_and_Backproj );
|
||||||
Hist_and_Backproj(0, 0);
|
Hist_and_Backproj(0, 0);
|
||||||
@endcode
|
@endcode
|
||||||
5. Show the image and wait for the user to exit the program:
|
-# Show the image and wait for the user to exit the program:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
imshow( window_image, src );
|
imshow( window_image, src );
|
||||||
|
|
||||||
waitKey(0);
|
waitKey(0);
|
||||||
return 0;
|
return 0;
|
||||||
@endcode
|
@endcode
|
||||||
6. **Hist_and_Backproj function:** Initialize the arguments needed for @ref cv::calcHist . The
|
-# **Hist_and_Backproj function:** Initialize the arguments needed for @ref cv::calcHist . The
|
||||||
number of bins comes from the Trackbar:
|
number of bins comes from the Trackbar:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
void Hist_and_Backproj(int, void* )
|
void Hist_and_Backproj(int, void* )
|
||||||
@ -218,12 +140,12 @@ Explanation
|
|||||||
float hue_range[] = { 0, 180 };
|
float hue_range[] = { 0, 180 };
|
||||||
const float* ranges = { hue_range };
|
const float* ranges = { hue_range };
|
||||||
@endcode
|
@endcode
|
||||||
7. Calculate the Histogram and normalize it to the range \f$[0,255]\f$
|
-# Calculate the Histogram and normalize it to the range \f$[0,255]\f$
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
calcHist( &hue, 1, 0, Mat(), hist, 1, &histSize, &ranges, true, false );
|
calcHist( &hue, 1, 0, Mat(), hist, 1, &histSize, &ranges, true, false );
|
||||||
normalize( hist, hist, 0, 255, NORM_MINMAX, -1, Mat() );
|
normalize( hist, hist, 0, 255, NORM_MINMAX, -1, Mat() );
|
||||||
@endcode
|
@endcode
|
||||||
8. Get the Backprojection of the same image by calling the function @ref cv::calcBackProject
|
-# Get the Backprojection of the same image by calling the function @ref cv::calcBackProject
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
MatND backproj;
|
MatND backproj;
|
||||||
calcBackProject( &hue, 1, 0, hist, backproj, &ranges, 1, true );
|
calcBackProject( &hue, 1, 0, hist, backproj, &ranges, 1, true );
|
||||||
@ -231,11 +153,11 @@ Explanation
|
|||||||
all the arguments are known (the same as used to calculate the histogram); we only add the
|
all the arguments are known (the same as used to calculate the histogram); we only add the
|
||||||
backproj matrix, which will store the backprojection of the source image (&hue)
|
backproj matrix, which will store the backprojection of the source image (&hue)
|
||||||
|
|
||||||
9. Display backproj:
|
-# Display backproj:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
imshow( "BackProj", backproj );
|
imshow( "BackProj", backproj );
|
||||||
@endcode
|
@endcode
|
||||||
10. Draw the 1-D Hue histogram of the image:
|
-# Draw the 1-D Hue histogram of the image:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
int w = 400; int h = 400;
|
int w = 400; int h = 400;
|
||||||
int bin_w = cvRound( (double) w / histSize );
|
int bin_w = cvRound( (double) w / histSize );
|
||||||
@ -246,12 +168,12 @@ Explanation
|
|||||||
|
|
||||||
imshow( "Histogram", histImg );
|
imshow( "Histogram", histImg );
|
||||||
@endcode
|
@endcode
|
||||||
|
|
||||||
Results
|
Results
|
||||||
-------
|
-------
|
||||||
|
|
||||||
1. Here is the output using a sample image (guess what? Another hand). You can play with the
|
Here is the output using a sample image (guess what? Another hand). You can play with the
|
||||||
bin values and you will observe how it affects the results:
|
bin values and you will observe how it affects the results:
|
||||||
|

|
||||||
------ ------ ------
|

|
||||||
|R0| |R1| |R2|
|

|
||||||
------ ------ ------
|
|
||||||
|
@ -21,7 +21,7 @@ histogram called *Image histogram*. Now we will considerate it in its more gener
|
|||||||
- Let's see an example. Imagine that a Matrix contains information of an image (i.e. intensity in
|
- Let's see an example. Imagine that a Matrix contains information of an image (i.e. intensity in
|
||||||
the range \f$0-255\f$):
|
the range \f$0-255\f$):
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
- What happens if we want to *count* this data in an organized way? Since we know that the *range*
|
- What happens if we want to *count* this data in an organized way? Since we know that the *range*
|
||||||
of values for this case is 256, we can segment our range into subparts (called
|
of values for this case is 256, we can segment our range into subparts (called
|
||||||
@ -36,7 +36,7 @@ histogram called *Image histogram*. Now we will considerate it in its more gener
|
|||||||
this to the example above we get the image below ( axis x represents the bins and axis y the
|
this to the example above we get the image below ( axis x represents the bins and axis y the
|
||||||
number of pixels in each of them).
|
number of pixels in each of them).
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
- This was just a simple example of how a histogram works and why it is useful. A histogram can
|
- This was just a simple example of how a histogram works and why it is useful. A histogram can
|
||||||
keep count not only of color intensities, but of any image feature that we want to measure
|
keep count not only of color intensities, but of any image feature that we want to measure
|
||||||
@ -73,18 +73,18 @@ Code
|
|||||||
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
1. Create the necessary matrices:
|
-# Create the necessary matrices:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
Mat src, dst;
|
Mat src, dst;
|
||||||
@endcode
|
@endcode
|
||||||
2. Load the source image
|
-# Load the source image
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
src = imread( argv[1], 1 );
|
src = imread( argv[1], 1 );
|
||||||
|
|
||||||
if( !src.data )
|
if( !src.data )
|
||||||
{ return -1; }
|
{ return -1; }
|
||||||
@endcode
|
@endcode
|
||||||
3. Separate the source image in its three R,G and B planes. For this we use the OpenCV function
|
-# Separate the source image in its three R,G and B planes. For this we use the OpenCV function
|
||||||
@ref cv::split :
|
@ref cv::split :
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
vector<Mat> bgr_planes;
|
vector<Mat> bgr_planes;
|
||||||
@ -93,7 +93,7 @@ Explanation
|
|||||||
our input is the image to be divided (in this case with three channels) and the output is a vector
|
our input is the image to be divided (in this case with three channels) and the output is a vector
|
||||||
of Mat )
|
of Mat )
|
||||||
|
|
||||||
4. Now we are ready to start configuring the **histograms** for each plane. Since we are working
|
-# Now we are ready to start configuring the **histograms** for each plane. Since we are working
|
||||||
with the B, G and R planes, we know that our values will range in the interval \f$[0,255]\f$
|
with the B, G and R planes, we know that our values will range in the interval \f$[0,255]\f$
|
||||||
-# Establish number of bins (5, 10...):
|
-# Establish number of bins (5, 10...):
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
@ -137,7 +137,7 @@ Explanation
|
|||||||
- **uniform** and **accumulate**: The bin sizes are the same and the histogram is cleared
|
- **uniform** and **accumulate**: The bin sizes are the same and the histogram is cleared
|
||||||
at the beginning.
|
at the beginning.
|
||||||
|
|
||||||
5. Create an image to display the histograms:
|
-# Create an image to display the histograms:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
// Draw the histograms for R, G and B
|
// Draw the histograms for R, G and B
|
||||||
int hist_w = 512; int hist_h = 400;
|
int hist_w = 512; int hist_h = 400;
|
||||||
@ -145,7 +145,7 @@ Explanation
|
|||||||
|
|
||||||
Mat histImage( hist_h, hist_w, CV_8UC3, Scalar( 0,0,0) );
|
Mat histImage( hist_h, hist_w, CV_8UC3, Scalar( 0,0,0) );
|
||||||
@endcode
|
@endcode
|
||||||
6. Notice that before drawing, we first @ref cv::normalize the histogram so its values fall in the
|
-# Notice that before drawing, we first @ref cv::normalize the histogram so its values fall in the
|
||||||
range indicated by the parameters entered:
|
range indicated by the parameters entered:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
/// Normalize the result to [ 0, histImage.rows ]
|
/// Normalize the result to [ 0, histImage.rows ]
|
||||||
@ -164,7 +164,7 @@ Explanation
|
|||||||
- **-1:** Implies that the output normalized array will be the same type as the input
|
- **-1:** Implies that the output normalized array will be the same type as the input
|
||||||
- **Mat():** Optional mask
|
- **Mat():** Optional mask
|
||||||
|
|
||||||
7. Finally, observe that to access the bin (in this case in this 1D-Histogram):
|
-# Finally, observe that to access the bin (in this case in this 1D-Histogram):
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
/// Draw for each channel
|
/// Draw for each channel
|
||||||
for( int i = 1; i < histSize; i++ )
|
for( int i = 1; i < histSize; i++ )
|
||||||
@ -189,7 +189,7 @@ Explanation
|
|||||||
b_hist.at<float>( i, j )
|
b_hist.at<float>( i, j )
|
||||||
@endcode
|
@endcode
|
||||||
|
|
||||||
8. Finally we display our histograms and wait for the user to exit:
|
-# Finally we display our histograms and wait for the user to exit:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
namedWindow("calcHist Demo", WINDOW_AUTOSIZE );
|
namedWindow("calcHist Demo", WINDOW_AUTOSIZE );
|
||||||
imshow("calcHist Demo", histImage );
|
imshow("calcHist Demo", histImage );
|
||||||
@ -202,10 +202,10 @@ Explanation
|
|||||||
Result
|
Result
|
||||||
------
|
------
|
||||||
|
|
||||||
1. Using as input argument an image like the one shown below:
|
-# Using as input argument an image like the one shown below:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
2. Produces the following histogram:
|
-# Produces the following histogram:
|
||||||
|
|
||||||

|

|
||||||
|
@ -18,25 +18,18 @@ Theory
|
|||||||
- OpenCV implements the function @ref cv::compareHist to perform a comparison. It also offers 4
|
- OpenCV implements the function @ref cv::compareHist to perform a comparison. It also offers 4
|
||||||
different metrics to compute the matching:
|
different metrics to compute the matching:
|
||||||
-# **Correlation ( CV_COMP_CORREL )**
|
-# **Correlation ( CV_COMP_CORREL )**
|
||||||
|
|
||||||
\f[d(H_1,H_2) = \frac{\sum_I (H_1(I) - \bar{H_1}) (H_2(I) - \bar{H_2})}{\sqrt{\sum_I(H_1(I) - \bar{H_1})^2 \sum_I(H_2(I) - \bar{H_2})^2}}\f]
|
\f[d(H_1,H_2) = \frac{\sum_I (H_1(I) - \bar{H_1}) (H_2(I) - \bar{H_2})}{\sqrt{\sum_I(H_1(I) - \bar{H_1})^2 \sum_I(H_2(I) - \bar{H_2})^2}}\f]
|
||||||
|
|
||||||
where
|
where
|
||||||
|
|
||||||
\f[\bar{H_k} = \frac{1}{N} \sum _J H_k(J)\f]
|
\f[\bar{H_k} = \frac{1}{N} \sum _J H_k(J)\f]
|
||||||
|
|
||||||
and \f$N\f$ is the total number of histogram bins.
|
and \f$N\f$ is the total number of histogram bins.
|
||||||
|
|
||||||
-# **Chi-Square ( CV_COMP_CHISQR )**
|
-# **Chi-Square ( CV_COMP_CHISQR )**
|
||||||
|
|
||||||
\f[d(H_1,H_2) = \sum _I \frac{\left(H_1(I)-H_2(I)\right)^2}{H_1(I)}\f]
|
\f[d(H_1,H_2) = \sum _I \frac{\left(H_1(I)-H_2(I)\right)^2}{H_1(I)}\f]
|
||||||
|
|
||||||
-# **Intersection ( method=CV_COMP_INTERSECT )**
|
-# **Intersection ( method=CV_COMP_INTERSECT )**
|
||||||
|
|
||||||
\f[d(H_1,H_2) = \sum _I \min (H_1(I), H_2(I))\f]
|
\f[d(H_1,H_2) = \sum _I \min (H_1(I), H_2(I))\f]
|
||||||
|
|
||||||
-# **Bhattacharyya distance ( CV_COMP_BHATTACHARYYA )**
|
-# **Bhattacharyya distance ( CV_COMP_BHATTACHARYYA )**
|
||||||
|
|
||||||
\f[d(H_1,H_2) = \sqrt{1 - \frac{1}{\sqrt{\bar{H_1} \bar{H_2} N^2}} \sum_I \sqrt{H_1(I) \cdot H_2(I)}}\f]
|
\f[d(H_1,H_2) = \sqrt{1 - \frac{1}{\sqrt{\bar{H_1} \bar{H_2} N^2}} \sum_I \sqrt{H_1(I) \cdot H_2(I)}}\f]
|
||||||
|
|
||||||
 Code
@@ -59,7 +52,7 @@ Code
 Explanation
 -----------

-1. Declare variables such as the matrices to store the base image and the two other images to
+-# Declare variables such as the matrices to store the base image and the two other images to
 compare ( RGB and HSV )
 @code{.cpp}
 Mat src_base, hsv_base;
@@ -67,7 +60,7 @@ Explanation
 Mat src_test2, hsv_test2;
 Mat hsv_half_down;
 @endcode
-2. Load the base image (src_base) and the other two test images:
+-# Load the base image (src_base) and the other two test images:
 @code{.cpp}
 if( argc < 4 )
 { printf("** Error. Usage: ./compareHist_Demo <image_settings0> <image_setting1> <image_settings2>\n");
@@ -78,17 +71,17 @@ Explanation
 src_test1 = imread( argv[2], 1 );
 src_test2 = imread( argv[3], 1 );
 @endcode
-3. Convert them to HSV format:
+-# Convert them to HSV format:
 @code{.cpp}
 cvtColor( src_base, hsv_base, COLOR_BGR2HSV );
 cvtColor( src_test1, hsv_test1, COLOR_BGR2HSV );
 cvtColor( src_test2, hsv_test2, COLOR_BGR2HSV );
 @endcode
-4. Also, create an image of half the base image (in HSV format):
+-# Also, create an image of half the base image (in HSV format):
 @code{.cpp}
 hsv_half_down = hsv_base( Range( hsv_base.rows/2, hsv_base.rows - 1 ), Range( 0, hsv_base.cols - 1 ) );
 @endcode
-5. Initialize the arguments to calculate the histograms (bins, ranges and channels H and S ).
+-# Initialize the arguments to calculate the histograms (bins, ranges and channels H and S).
 @code{.cpp}
 int h_bins = 50; int s_bins = 60;
 int histSize[] = { h_bins, s_bins };
@@ -100,14 +93,14 @@ Explanation

 int channels[] = { 0, 1 };
 @endcode
-6. Create the MatND objects to store the histograms:
+-# Create the MatND objects to store the histograms:
 @code{.cpp}
 MatND hist_base;
 MatND hist_half_down;
 MatND hist_test1;
 MatND hist_test2;
 @endcode
-7. Calculate the Histograms for the base image, the 2 test images and the half-down base image:
+-# Calculate the histograms for the base image, the 2 test images and the half-down base image:
 @code{.cpp}
 calcHist( &hsv_base, 1, channels, Mat(), hist_base, 2, histSize, ranges, true, false );
 normalize( hist_base, hist_base, 0, 1, NORM_MINMAX, -1, Mat() );
@@ -121,7 +114,7 @@ Explanation
 calcHist( &hsv_test2, 1, channels, Mat(), hist_test2, 2, histSize, ranges, true, false );
 normalize( hist_test2, hist_test2, 0, 1, NORM_MINMAX, -1, Mat() );
 @endcode
-8. Apply sequentially the 4 comparison methods between the histogram of the base image (hist_base)
+-# Apply sequentially the 4 comparison methods between the histogram of the base image (hist_base)
 and the other histograms:
 @code{.cpp}
 for( int i = 0; i < 4; i++ )
@@ -134,34 +127,32 @@ Explanation
 printf( " Method [%d] Perfect, Base-Half, Base-Test(1), Base-Test(2) : %f, %f, %f, %f \n", i, base_base, base_half , base_test1, base_test2 );
 }
 @endcode

 Results
 -------

-1. We use as input the following images:
-
------------ ----------- -----------
-|Base_0| |Test_1| |Test_2|
------------ ----------- -----------
+-# We use as input the following images:
+
+
+
+

 where the first one is the base (to be compared to the others), the other 2 are the test images.
 We will also compare the first image with respect to itself and with respect to half the base
 image.

-2. We should expect a perfect match when we compare the base image histogram with itself. Also,
+-# We should expect a perfect match when we compare the base image histogram with itself. Also,
 compared with the histogram of half the base image, it should present a high match since both
 are from the same source. For the other two test images, we can observe that they have very
 different lighting conditions, so the matching should not be very good:
-3. Here the numeric results:

-*Method* Base - Base Base - Half Base - Test 1 Base - Test 2
------------------ ------------- ------------- --------------- ---------------
-*Correlation* 1.000000 0.930766 0.182073 0.120447
-*Chi-square* 0.000000 4.940466 21.184536 49.273437
-*Intersection* 24.391548 14.959809 3.889029 5.775088
-*Bhattacharyya* 0.000000 0.222609 0.646576 0.801869
+-# Here the numeric results:
+
+*Method* | Base - Base | Base - Half | Base - Test 1 | Base - Test 2
+----------------- | ------------ | ------------ | -------------- | ---------------
+*Correlation* | 1.000000 | 0.930766 | 0.182073 | 0.120447
+*Chi-square* | 0.000000 | 4.940466 | 21.184536 | 49.273437
+*Intersection* | 24.391548 | 14.959809 | 3.889029 | 5.775088
+*Bhattacharyya* | 0.000000 | 0.222609 | 0.646576 | 0.801869

 For the *Correlation* and *Intersection* methods, the higher the metric, the more accurate the
 match. As we can see, the match *base-base* is the highest of all as expected. Also we can observe
 that the match *base-half* is the second best match (as we predicted). For the other two metrics,
 the lower the result, the better the match. We can observe that the matches between the test 1 and
 test 2 with respect to the base are worse, which again, was expected.

@@ -17,7 +17,7 @@ Theory
 - It is a graphical representation of the intensity distribution of an image.
 - It quantifies the number of pixels for each intensity value considered.

 

 ### What is Histogram Equalization?

@@ -29,7 +29,7 @@ Theory
 *underpopulated* intensities. After applying the equalization, we get a histogram like the
 figure in the center. The resulting image is shown in the picture at right.

 

 ### How does it work?

@@ -46,7 +46,7 @@ Theory
 is 255 ( or the maximum value for the intensity of the image ). From the example above, the
 cumulative function is:

 

 - Finally, we use a simple remapping procedure to obtain the intensity values of the equalized
 image:
@@ -69,14 +69,14 @@ Code
 Explanation
 -----------

-1. Declare the source and destination images as well as the windows names:
+-# Declare the source and destination images as well as the window names:
 @code{.cpp}
 Mat src, dst;

 char* source_window = "Source image";
 char* equalized_window = "Equalized Image";
 @endcode
-2. Load the source image:
+-# Load the source image:
 @code{.cpp}
 src = imread( argv[1], 1 );

@@ -84,18 +84,18 @@ Explanation
 { cout<<"Usage: ./Histogram_Demo <path_to_image>"<<endl;
 return -1;}
 @endcode
-3. Convert it to grayscale:
+-# Convert it to grayscale:
 @code{.cpp}
 cvtColor( src, src, COLOR_BGR2GRAY );
 @endcode
-4. Apply histogram equalization with the function @ref cv::equalizeHist :
+-# Apply histogram equalization with the function @ref cv::equalizeHist :
 @code{.cpp}
 equalizeHist( src, dst );
 @endcode
 As can be easily seen, the only arguments are the original image and the output (equalized)
 image.

-5. Display both images (original and equalized) :
+-# Display both images (original and equalized):
 @code{.cpp}
 namedWindow( source_window, WINDOW_AUTOSIZE );
 namedWindow( equalized_window, WINDOW_AUTOSIZE );
@@ -103,7 +103,7 @@ Explanation
 imshow( source_window, src );
 imshow( equalized_window, dst );
 @endcode
-6. Wait until user exists the program
+-# Wait until the user exits the program
 @code{.cpp}
 waitKey(0);
 return 0;
@@ -112,24 +112,24 @@ Explanation
 Results
 -------

-1. To appreciate better the results of equalization, let's introduce an image with not much
+-# To better appreciate the results of equalization, let's introduce an image with not much
 contrast, such as:

 

 which, by the way, has this histogram:

 

 notice that the pixels are clustered around the center of the histogram.

-2. After applying the equalization with our program, we get this result:
+-# After applying the equalization with our program, we get this result:

 

 this image has certainly more contrast. Check out its new histogram like this:

 

 Notice how the number of pixels is more distributed through the intensity range.


@@ -28,12 +28,12 @@ template image (patch).

 our goal is to detect the highest matching area:

 

 - To identify the matching area, we have to *compare* the template image against the source image
 by sliding it:

 

 - By **sliding**, we mean moving the patch one pixel at a time (left to right, up to down). At
 each location, a metric is calculated so it represents how "good" or "bad" the match at that
@@ -41,7 +41,7 @@ template image (patch).
 - For each location of **T** over **I**, you *store* the metric in the *result matrix* **(R)**.
 Each location \f$(x,y)\f$ in **R** contains the match metric:

 

 the image above is the result **R** of sliding the patch with a metric **TM_CCORR_NORMED**.
 The brightest locations indicate the highest matches. As you can see, the location marked by the
@@ -56,23 +56,23 @@ template image (patch).
 Good question. OpenCV implements template matching in the function @ref cv::matchTemplate . The
 six available methods are:

-a. **method=CV_TM_SQDIFF**
+-# **method=CV_TM_SQDIFF**

 \f[R(x,y)= \sum _{x',y'} (T(x',y')-I(x+x',y+y'))^2\f]

-b. **method=CV_TM_SQDIFF_NORMED**
+-# **method=CV_TM_SQDIFF_NORMED**

 \f[R(x,y)= \frac{\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}\f]

-c. **method=CV_TM_CCORR**
+-# **method=CV_TM_CCORR**

 \f[R(x,y)= \sum _{x',y'} (T(x',y') \cdot I(x+x',y+y'))\f]

-d. **method=CV_TM_CCORR_NORMED**
+-# **method=CV_TM_CCORR_NORMED**

 \f[R(x,y)= \frac{\sum_{x',y'} (T(x',y') \cdot I(x+x',y+y'))}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}\f]

-e. **method=CV_TM_CCOEFF**
+-# **method=CV_TM_CCOEFF**

 \f[R(x,y)= \sum _{x',y'} (T'(x',y') \cdot I(x+x',y+y'))\f]

@@ -80,7 +80,7 @@ e. **method=CV_TM_CCOEFF**

 \f[\begin{array}{l} T'(x',y')=T(x',y') - 1/(w \cdot h) \cdot \sum _{x'',y''} T(x'',y'') \\ I'(x+x',y+y')=I(x+x',y+y') - 1/(w \cdot h) \cdot \sum _{x'',y''} I(x+x'',y+y'') \end{array}\f]

-f. **method=CV_TM_CCOEFF_NORMED**
+-# **method=CV_TM_CCOEFF_NORMED**

 \f[R(x,y)= \frac{ \sum_{x',y'} (T'(x',y') \cdot I'(x+x',y+y')) }{ \sqrt{\sum_{x',y'}T'(x',y')^2 \cdot \sum_{x',y'} I'(x+x',y+y')^2} }\f]

@@ -98,93 +98,12 @@ Code
 - **Downloadable code**: Click
 [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/Histograms_Matching/MatchTemplate_Demo.cpp)
 - **Code at glance:**
-@code{.cpp}
-#include "opencv2/highgui.hpp"
-#include "opencv2/imgproc.hpp"
-#include <iostream>
-#include <stdio.h>
-
-using namespace std;
-using namespace cv;
-
-/// Global Variables
-Mat img; Mat templ; Mat result;
-char* image_window = "Source Image";
-char* result_window = "Result window";
-
-int match_method;
-int max_Trackbar = 5;
-
-/// Function Headers
-void MatchingMethod( int, void* );
-
-/* @function main */
-int main( int argc, char** argv )
-{
-/// Load image and template
-img = imread( argv[1], 1 );
-templ = imread( argv[2], 1 );
-
-/// Create windows
-namedWindow( image_window, WINDOW_AUTOSIZE );
-namedWindow( result_window, WINDOW_AUTOSIZE );
-
-/// Create Trackbar
-char* trackbar_label = "Method: \n 0: SQDIFF \n 1: SQDIFF NORMED \n 2: TM CCORR \n 3: TM CCORR NORMED \n 4: TM COEFF \n 5: TM COEFF NORMED";
-createTrackbar( trackbar_label, image_window, &match_method, max_Trackbar, MatchingMethod );
-
-MatchingMethod( 0, 0 );
-
-waitKey(0);
-return 0;
-}
-
-/*
-* @function MatchingMethod
-* @brief Trackbar callback
-*/
-void MatchingMethod( int, void* )
-{
-/// Source image to display
-Mat img_display;
-img.copyTo( img_display );
-
-/// Create the result matrix
-int result_cols = img.cols - templ.cols + 1;
-int result_rows = img.rows - templ.rows + 1;
-
-result.create( result_cols, result_rows, CV_32FC1 );
-
-/// Do the Matching and Normalize
-matchTemplate( img, templ, result, match_method );
-normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
-
-/// Localizing the best match with minMaxLoc
-double minVal; double maxVal; Point minLoc; Point maxLoc;
-Point matchLoc;
-
-minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() );
-
-/// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better
-if( match_method == CV_TM_SQDIFF || match_method == CV_TM_SQDIFF_NORMED )
-{ matchLoc = minLoc; }
-else
-{ matchLoc = maxLoc; }
-
-/// Show me what you got
-rectangle( img_display, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
-rectangle( result, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
-
-imshow( image_window, img_display );
-imshow( result_window, result );
-
-return;
-}
-@endcode
+@includelineno samples/cpp/tutorial_code/Histograms_Matching/MatchTemplate_Demo.cpp
 Explanation
 -----------

-1. Declare some global variables, such as the image, template and result matrices, as well as the
+-# Declare some global variables, such as the image, template and result matrices, as well as the
 match method and the window names:
 @code{.cpp}
 Mat img; Mat templ; Mat result;
@@ -194,33 +113,33 @@ Explanation
 int match_method;
 int max_Trackbar = 5;
 @endcode
-2. Load the source image and template:
+-# Load the source image and template:
 @code{.cpp}
 img = imread( argv[1], 1 );
 templ = imread( argv[2], 1 );
 @endcode
-3. Create the windows to show the results:
+-# Create the windows to show the results:
 @code{.cpp}
 namedWindow( image_window, WINDOW_AUTOSIZE );
 namedWindow( result_window, WINDOW_AUTOSIZE );
 @endcode
-4. Create the Trackbar to enter the kind of matching method to be used. When a change is detected
+-# Create the Trackbar to enter the kind of matching method to be used. When a change is detected
 the callback function **MatchingMethod** is called.
 @code{.cpp}
 char* trackbar_label = "Method: \n 0: SQDIFF \n 1: SQDIFF NORMED \n 2: TM CCORR \n 3: TM CCORR NORMED \n 4: TM COEFF \n 5: TM COEFF NORMED";
 createTrackbar( trackbar_label, image_window, &match_method, max_Trackbar, MatchingMethod );
 @endcode
-5. Wait until user exits the program.
+-# Wait until the user exits the program.
 @code{.cpp}
 waitKey(0);
 return 0;
 @endcode
-6. Let's check out the callback function. First, it makes a copy of the source image:
+-# Let's check out the callback function. First, it makes a copy of the source image:
 @code{.cpp}
 Mat img_display;
 img.copyTo( img_display );
 @endcode
-7. Next, it creates the result matrix that will store the matching results for each template
+-# Next, it creates the result matrix that will store the matching results for each template
 location. Observe in detail the size of the result matrix (which matches all possible locations
 for it)
 @code{.cpp}
@@ -229,18 +148,18 @@ Explanation

 result.create( result_cols, result_rows, CV_32FC1 );
 @endcode
-8. Perform the template matching operation:
+-# Perform the template matching operation:
 @code{.cpp}
 matchTemplate( img, templ, result, match_method );
 @endcode
 the arguments are naturally the input image **I**, the template **T**, the result **R** and the
 match_method (given by the Trackbar)

-9. We normalize the results:
+-# We normalize the results:
 @code{.cpp}
 normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
 @endcode
-10. We localize the minimum and maximum values in the result matrix **R** by using @ref
+-# We localize the minimum and maximum values in the result matrix **R** by using @ref
 cv::minMaxLoc .
 @code{.cpp}
 double minVal; double maxVal; Point minLoc; Point maxLoc;
@@ -256,7 +175,7 @@ Explanation
 array.
 - **Mat():** Optional mask

-11. For the first two methods ( TM_SQDIFF and MT_SQDIFF_NORMED ) the best match are the lowest
+-# For the first two methods ( TM_SQDIFF and TM_SQDIFF_NORMED ) the best matches are the lowest
 values. For all the others, higher values represent better matches. So, we save the
 corresponding value in the **matchLoc** variable:
 @code{.cpp}
@@ -265,7 +184,7 @@ Explanation
 else
 { matchLoc = maxLoc; }
 @endcode
-12. Display the source image and the result matrix. Draw a rectangle around the highest possible
+-# Display the source image and the result matrix. Draw a rectangle around the highest possible
 matching area:
 @code{.cpp}
 rectangle( img_display, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
@@ -274,29 +193,32 @@ Explanation
 imshow( image_window, img_display );
 imshow( result_window, result );
 @endcode

 Results
 -------

-1. Testing our program with an input image such as:
+-# Testing our program with an input image such as:

 

 and a template image:

 

-2. Generate the following result matrices (first row are the standard methods SQDIFF, CCORR and
+-# Generate the following result matrices (first row are the standard methods SQDIFF, CCORR and
 CCOEFF, second row are the same methods in their normalized version). In the first column, the
 darkest is the better match, for the other two columns, the brighter a location, the higher the
 match.
+
+
+
+
+
+

-|Result_0| |Result_2| |Result_4|
-------------- ------------- -------------
-|Result_1| |Result_3| |Result_5|
-
-3. The right match is shown below (black rectangle around the face of the guy at the right). Notice
+-# The right match is shown below (black rectangle around the face of the guy at the right). Notice
 that CCORR and CCOEFF gave erroneous best matches, however their normalized version did it
 right; this may be due to the fact that we are only considering the "highest match" and not the
 other possible high matches.

 

@@ -20,7 +20,7 @@ The *Canny Edge detector* was developed by John F. Canny in 1986. Also known to

 ### Steps

-1. Filter out any noise. The Gaussian filter is used for this purpose. An example of a Gaussian
+-# Filter out any noise. The Gaussian filter is used for this purpose. An example of a Gaussian
 kernel of \f$size = 5\f$ that might be used is shown below:

 \f[K = \dfrac{1}{159}\begin{bmatrix}
@@ -31,8 +31,8 @@ The *Canny Edge detector* was developed by John F. Canny in 1986. Also known to
 2 & 4 & 5 & 4 & 2
 \end{bmatrix}\f]

-2. Find the intensity gradient of the image. For this, we follow a procedure analogous to Sobel:
-1. Apply a pair of convolution masks (in \f$x\f$ and \f$y\f$ directions:
+-# Find the intensity gradient of the image. For this, we follow a procedure analogous to Sobel:
+-# Apply a pair of convolution masks (in the \f$x\f$ and \f$y\f$ directions):
 \f[G_{x} = \begin{bmatrix}
 -1 & 0 & +1 \\
 -2 & 0 & +2 \\
@@ -43,44 +43,44 @@ The *Canny Edge detector* was developed by John F. Canny in 1986. Also known to
 +1 & +2 & +1
 \end{bmatrix}\f]

-2. Find the gradient strength and direction with:
+-# Find the gradient strength and direction with:
 \f[\begin{array}{l}
 G = \sqrt{ G_{x}^{2} + G_{y}^{2} } \\
 \theta = \arctan(\dfrac{ G_{y} }{ G_{x} })
 \end{array}\f]
 The direction is rounded to one of four possible angles (namely 0, 45, 90 or 135)

-3. *Non-maximum* suppression is applied. This removes pixels that are not considered to be part of
+-# *Non-maximum* suppression is applied. This removes pixels that are not considered to be part of
 an edge. Hence, only thin lines (candidate edges) will remain.
-4. *Hysteresis*: The final step. Canny does use two thresholds (upper and lower):
+-# *Hysteresis*: The final step. Canny uses two thresholds (upper and lower):

-1. If a pixel gradient is higher than the *upper* threshold, the pixel is accepted as an edge
-2. If a pixel gradient value is below the *lower* threshold, then it is rejected.
-3. If the pixel gradient is between the two thresholds, then it will be accepted only if it is
+-# If a pixel gradient is higher than the *upper* threshold, the pixel is accepted as an edge
+-# If a pixel gradient value is below the *lower* threshold, then it is rejected.
+-# If the pixel gradient is between the two thresholds, then it will be accepted only if it is
 connected to a pixel that is above the *upper* threshold.

 Canny recommended an *upper*:*lower* ratio between 2:1 and 3:1.

-5. For more details, you can always consult your favorite Computer Vision book.
+-# For more details, you can always consult your favorite Computer Vision book.

Code
|
Code
|
||||||
----
|
----
|
||||||
|
|
||||||
1. **What does this program do?**
|
-# **What does this program do?**
|
||||||
- Asks the user to enter a numerical value to set the lower threshold for our *Canny Edge
|
- Asks the user to enter a numerical value to set the lower threshold for our *Canny Edge
|
||||||
Detector* (by means of a Trackbar)
|
Detector* (by means of a Trackbar)
|
||||||
- Applies the *Canny Detector* and generates a **mask** (bright lines representing the edges
|
- Applies the *Canny Detector* and generates a **mask** (bright lines representing the edges
|
||||||
on a black background).
|
on a black background).
|
||||||
- Applies the mask obtained on the original image and display it in a window.
|
- Applies the mask obtained on the original image and display it in a window.
|
||||||
|
|
||||||
2. The tutorial code's is shown lines below. You can also download it from
|
-# The tutorial code's is shown lines below. You can also download it from
|
||||||
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp)
|
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp)
|
||||||
@includelineno samples/cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp
|
@includelineno samples/cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp
|
||||||
|
|
||||||
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
1. Create some needed variables:
|
-# Create some needed variables:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
Mat src, src_gray;
|
Mat src, src_gray;
|
||||||
Mat dst, detected_edges;
|
Mat dst, detected_edges;
|
||||||
@ -94,12 +94,12 @@ Explanation
|
|||||||
@endcode
|
@endcode
|
||||||
Note the following:
|
Note the following:
|
||||||
|
|
||||||
1. We establish a ratio of lower:upper threshold of 3:1 (with the variable *ratio*)
|
-# We establish a ratio of lower:upper threshold of 3:1 (with the variable *ratio*)
|
||||||
2. We set the kernel size of \f$3\f$ (for the Sobel operations to be performed internally by the
|
-# We set the kernel size of \f$3\f$ (for the Sobel operations to be performed internally by the
|
||||||
Canny function)
|
Canny function)
|
||||||
3. We set a maximum value for the lower Threshold of \f$100\f$.
|
-# We set a maximum value for the lower Threshold of \f$100\f$.
|
||||||
|
|
||||||
2. Loads the source image:
|
-# Loads the source image:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
/// Load an image
|
/// Load an image
|
||||||
src = imread( argv[1] );
|
src = imread( argv[1] );
|
||||||
@ -107,35 +107,35 @@ Explanation
|
|||||||
if( !src.data )
|
if( !src.data )
|
||||||
{ return -1; }
|
{ return -1; }
|
||||||
@endcode
|
@endcode
|
||||||
3. Create a matrix of the same type and size of *src* (to be *dst*)
|
-# Create a matrix of the same type and size of *src* (to be *dst*)
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
dst.create( src.size(), src.type() );
|
dst.create( src.size(), src.type() );
|
||||||
@endcode
|
@endcode
|
||||||
4. Convert the image to grayscale (using the function @ref cv::cvtColor :
|
-# Convert the image to grayscale (using the function @ref cv::cvtColor :
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
cvtColor( src, src_gray, COLOR_BGR2GRAY );
|
cvtColor( src, src_gray, COLOR_BGR2GRAY );
|
||||||
@endcode
|
@endcode
|
||||||
5. Create a window to display the results
|
-# Create a window to display the results
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
namedWindow( window_name, WINDOW_AUTOSIZE );
|
namedWindow( window_name, WINDOW_AUTOSIZE );
|
||||||
@endcode
|
@endcode
|
||||||
6. Create a Trackbar for the user to enter the lower threshold for our Canny detector:
|
-# Create a Trackbar for the user to enter the lower threshold for our Canny detector:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );
|
createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );
|
||||||
@endcode
|
@endcode
|
||||||
Observe the following:
|
Observe the following:
|
||||||
|
|
||||||
1. The variable to be controlled by the Trackbar is *lowThreshold* with a limit of
|
-# The variable to be controlled by the Trackbar is *lowThreshold* with a limit of
|
||||||
*max_lowThreshold* (which we set to 100 previously)
|
*max_lowThreshold* (which we set to 100 previously)
|
||||||
2. Each time the Trackbar registers an action, the callback function *CannyThreshold* will be
|
-# Each time the Trackbar registers an action, the callback function *CannyThreshold* will be
|
||||||
invoked.
|
invoked.
|
||||||
|
|
||||||
7. Let's check the *CannyThreshold* function, step by step:
|
-# Let's check the *CannyThreshold* function, step by step:
|
||||||
1. First, we blur the image with a filter of kernel size 3:
|
-# First, we blur the image with a filter of kernel size 3:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
blur( src_gray, detected_edges, Size(3,3) );
|
blur( src_gray, detected_edges, Size(3,3) );
|
||||||
@endcode
|
@endcode
|
||||||
2. Second, we apply the OpenCV function @ref cv::Canny :
|
-# Second, we apply the OpenCV function @ref cv::Canny :
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );
|
Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );
|
||||||
@endcode
|
@endcode
|
||||||
@ -149,11 +149,11 @@ Explanation
|
|||||||
- *kernel_size*: We defined it to be 3 (the size of the Sobel kernel to be used
|
- *kernel_size*: We defined it to be 3 (the size of the Sobel kernel to be used
|
||||||
internally)
|
internally)
|
||||||
|
|
||||||
8. We fill a *dst* image with zeros (meaning the image is completely black).
|
-# We fill a *dst* image with zeros (meaning the image is completely black).
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
dst = Scalar::all(0);
|
dst = Scalar::all(0);
|
||||||
@endcode
|
@endcode
|
||||||
9. Finally, we will use the function @ref cv::Mat::copyTo to map only the areas of the image that are
|
-# Finally, we will use the function @ref cv::Mat::copyTo to map only the areas of the image that are
|
||||||
identified as edges (on a black background).
|
identified as edges (on a black background).
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
src.copyTo( dst, detected_edges);
|
src.copyTo( dst, detected_edges);
|
||||||
@ -163,20 +163,21 @@ Explanation
|
|||||||
contours on a black background, the resulting *dst* will be black in all the area but the
|
contours on a black background, the resulting *dst* will be black in all the area but the
|
||||||
detected edges.
|
detected edges.
|
||||||
|
|
||||||
10. We display our result:
|
-# We display our result:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
imshow( window_name, dst );
|
imshow( window_name, dst );
|
||||||
@endcode
|
@endcode
|
||||||
|
|
||||||
Result
|
Result
|
||||||
------
|
------
|
||||||
|
|
||||||
- After compiling the code above, we can run it giving as argument the path to an image. For
|
- After compiling the code above, we can run it giving as argument the path to an image. For
|
||||||
example, using as an input the following image:
|
example, using as an input the following image:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
- Moving the slider, trying different threshold, we obtain the following result:
|
- Moving the slider, trying different threshold, we obtain the following result:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
- Notice how the image is superposed to the black background on the edge regions.
|
- Notice how the image is superposed to the black background on the edge regions.
|
||||||
|
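As an aside to the Canny tutorial edited above: the two ideas its theory section walks through, Sobel-style gradient magnitude and two-threshold hysteresis, can be sketched without OpenCV. This is a minimal illustration only; the names `sobelMagnitude` and `hysteresis1d` are invented for this sketch and are not part of any OpenCV API, and the hysteresis step is shown in 1-D to keep the connectivity test tiny.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Gradient magnitude at the centre of a 3x3 patch, using the G_x and G_y
// Sobel masks quoted in the tutorial text.
double sobelMagnitude(const std::vector<std::vector<int>>& p) {
    int gx = -p[0][0] + p[0][2] - 2 * p[1][0] + 2 * p[1][2] - p[2][0] + p[2][2];
    int gy = -p[0][0] - 2 * p[0][1] - p[0][2] + p[2][0] + 2 * p[2][1] + p[2][2];
    return std::sqrt(double(gx) * gx + double(gy) * gy);
}

// 1-D illustration of hysteresis: magnitudes above `hi` are accepted outright,
// magnitudes between `lo` and `hi` survive only if connected (here: adjacent)
// to an already accepted pixel, everything below `lo` is rejected.
std::vector<int> hysteresis1d(const std::vector<double>& mag, double lo, double hi) {
    std::vector<int> edge(mag.size(), 0);
    for (std::size_t i = 0; i < mag.size(); ++i)
        if (mag[i] > hi) edge[i] = 1;              // strong pixels
    bool grew = true;
    while (grew) {                                 // grow edges into weak pixels
        grew = false;
        for (std::size_t i = 0; i < mag.size(); ++i) {
            if (edge[i] || mag[i] <= lo) continue;
            if ((i > 0 && edge[i - 1]) || (i + 1 < mag.size() && edge[i + 1])) {
                edge[i] = 1;
                grew = true;
            }
        }
    }
    return edge;
}
```

A vertical 0/255 step patch, for example, produces a gradient magnitude of 1020 with a zero vertical component, and a weak-but-connected ridge survives the hysteresis pass while an isolated low response does not.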
@@ -14,14 +14,14 @@ Theory

 @note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.

-1. In our previous tutorial we learned to use convolution to operate on images. One problem that
+-# In our previous tutorial we learned to use convolution to operate on images. One problem that
 naturally arises is how to handle the boundaries. How can we convolve them if the evaluated
 points are at the edge of the image?
-2. What most of OpenCV functions do is to copy a given image onto another slightly larger image and
+-# What most of OpenCV functions do is to copy a given image onto another slightly larger image and
 then automatically pads the boundary (by any of the methods explained in the sample code just
 below). This way, the convolution can be performed over the needed pixels without problems (the
 extra padding is cut after the operation is done).
-3. In this tutorial, we will briefly explore two ways of defining the extra padding (border) for an
+-# In this tutorial, we will briefly explore two ways of defining the extra padding (border) for an
 image:

 -# **BORDER_CONSTANT**: Pad the image with a constant value (i.e. black or \f$0\f$
@@ -33,91 +33,26 @@ Theory
 Code
 ----

-1. **What does this program do?**
+-# **What does this program do?**
 - Load an image
 - Let the user choose what kind of padding use in the input image. There are two options:

-1. *Constant value border*: Applies a padding of a constant value for the whole border.
+-# *Constant value border*: Applies a padding of a constant value for the whole border.
 This value will be updated randomly each 0.5 seconds.
-2. *Replicated border*: The border will be replicated from the pixel values at the edges of
+-# *Replicated border*: The border will be replicated from the pixel values at the edges of
 the original image.

 The user chooses either option by pressing 'c' (constant) or 'r' (replicate)
 - The program finishes when the user presses 'ESC'

-2. The tutorial code's is shown lines below. You can also download it from
+-# The tutorial code's is shown lines below. You can also download it from
 [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp)
-@code{.cpp}
-#include "opencv2/imgproc.hpp"
-#include "opencv2/highgui.hpp"
-#include <stdlib.h>
-#include <stdio.h>
-
-using namespace cv;
-
-/// Global Variables
-Mat src, dst;
-int top, bottom, left, right;
-int borderType;
-Scalar value;
-char* window_name = "copyMakeBorder Demo";
-RNG rng(12345);
-
-/* @function main */
-int main( int argc, char** argv )
-{
-
-int c;
-
-/// Load an image
-src = imread( argv[1] );
-
-if( !src.data )
-{ return -1;
-printf(" No data entered, please enter the path to an image file \n");
-}
-
-/// Brief how-to for this program
-printf( "\n \t copyMakeBorder Demo: \n" );
-printf( "\t -------------------- \n" );
-printf( " ** Press 'c' to set the border to a random constant value \n");
-printf( " ** Press 'r' to set the border to be replicated \n");
-printf( " ** Press 'ESC' to exit the program \n");
-
-/// Create window
-namedWindow( window_name, WINDOW_AUTOSIZE );
-
-/// Initialize arguments for the filter
-top = (int) (0.05*src.rows); bottom = (int) (0.05*src.rows);
-left = (int) (0.05*src.cols); right = (int) (0.05*src.cols);
-dst = src;
-
-imshow( window_name, dst );
-
-while( true )
-{
-c = waitKey(500);
-
-if( (char)c == 27 )
-{ break; }
-else if( (char)c == 'c' )
-{ borderType = BORDER_CONSTANT; }
-else if( (char)c == 'r' )
-{ borderType = BORDER_REPLICATE; }
-
-value = Scalar( rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255) );
-copyMakeBorder( src, dst, top, bottom, left, right, borderType, value );
-
-imshow( window_name, dst );
-}
-
-return 0;
-}
-@endcode
+@includelineno samples/cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp
 Explanation
 -----------

-1. First we declare the variables we are going to use:
+-# First we declare the variables we are going to use:
 @code{.cpp}
 Mat src, dst;
 int top, bottom, left, right;
@@ -129,7 +64,7 @@ Explanation
 Especial attention deserves the variable *rng* which is a random number generator. We use it to
 generate the random border color, as we will see soon.

-2. As usual we load our source image *src*:
+-# As usual we load our source image *src*:
 @code{.cpp}
 src = imread( argv[1] );

@@ -138,17 +73,17 @@ Explanation
 printf(" No data entered, please enter the path to an image file \n");
 }
 @endcode
-3. After giving a short intro of how to use the program, we create a window:
+-# After giving a short intro of how to use the program, we create a window:
 @code{.cpp}
 namedWindow( window_name, WINDOW_AUTOSIZE );
 @endcode
-4. Now we initialize the argument that defines the size of the borders (*top*, *bottom*, *left* and
+-# Now we initialize the argument that defines the size of the borders (*top*, *bottom*, *left* and
 *right*). We give them a value of 5% the size of *src*.
 @code{.cpp}
 top = (int) (0.05*src.rows); bottom = (int) (0.05*src.rows);
 left = (int) (0.05*src.cols); right = (int) (0.05*src.cols);
 @endcode
-5. The program begins a *while* loop. If the user presses 'c' or 'r', the *borderType* variable
+-# The program begins a *while* loop. If the user presses 'c' or 'r', the *borderType* variable
 takes the value of *BORDER_CONSTANT* or *BORDER_REPLICATE* respectively:
 @code{.cpp}
 while( true )
@@ -162,14 +97,14 @@ Explanation
 else if( (char)c == 'r' )
 { borderType = BORDER_REPLICATE; }
 @endcode
-6. In each iteration (after 0.5 seconds), the variable *value* is updated...
+-# In each iteration (after 0.5 seconds), the variable *value* is updated...
 @code{.cpp}
 value = Scalar( rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255) );
 @endcode
 with a random value generated by the **RNG** variable *rng*. This value is a number picked
 randomly in the range \f$[0,255]\f$

-7. Finally, we call the function @ref cv::copyMakeBorder to apply the respective padding:
+-# Finally, we call the function @ref cv::copyMakeBorder to apply the respective padding:
 @code{.cpp}
 copyMakeBorder( src, dst, top, bottom, left, right, borderType, value );
 @endcode
@@ -184,14 +119,15 @@ Explanation
 -# *value*: If *borderType* is *BORDER_CONSTANT*, this is the value used to fill the border
 pixels.

-8. We display our output image in the image created previously
+-# We display our output image in the image created previously
 @code{.cpp}
 imshow( window_name, dst );
 @endcode

 Results
 -------

-1. After compiling the code above, you can execute it giving as argument the path of an image. The
+-# After compiling the code above, you can execute it giving as argument the path of an image. The
 result should be:

 - By default, it begins with the border set to BORDER_CONSTANT. Hence, a succession of random
@@ -203,4 +139,4 @@ Results
 Below some screenshot showing how the border changes color and how the *BORDER_REPLICATE*
 option looks:

 
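As an aside to the copyMakeBorder tutorial edited above: the two border policies it demonstrates reduce to a simple rule that can be shown on a 1-D signal. This sketch is illustrative only; `padConstant` and `padReplicate` are hypothetical helper names, not OpenCV functions, and they mimic the behaviour of `BORDER_CONSTANT` and `BORDER_REPLICATE` respectively.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// BORDER_CONSTANT behaviour: surround the signal with `pad` copies of a fixed
// value (the tutorial's demo re-randomizes this value every 0.5 seconds).
std::vector<int> padConstant(const std::vector<int>& v, int pad, int value) {
    std::vector<int> out(v.size() + 2 * pad, value);
    std::copy(v.begin(), v.end(), out.begin() + pad);
    return out;
}

// BORDER_REPLICATE behaviour: repeat the outermost samples into the border.
std::vector<int> padReplicate(const std::vector<int>& v, int pad) {
    std::vector<int> out;
    out.insert(out.end(), pad, v.front());       // left border
    out.insert(out.end(), v.begin(), v.end());   // original signal
    out.insert(out.end(), pad, v.back());        // right border
    return out;
}
```

After padding, a convolution can visit every original sample without reading out of bounds, and the extra samples are discarded once the operation is done.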
@@ -23,18 +23,18 @@ In a very general sense, convolution is an operation between every part of an im
 A kernel is essentially a fixed size array of numerical coefficeints along with an *anchor point* in
 that array, which is tipically located at the center.

 

 ### How does convolution with a kernel work?

 Assume you want to know the resulting value of a particular location in the image. The value of the
 convolution is calculated in the following way:

-1. Place the kernel anchor on top of a determined pixel, with the rest of the kernel overlaying the
+-# Place the kernel anchor on top of a determined pixel, with the rest of the kernel overlaying the
 corresponding local pixels in the image.
-2. Multiply the kernel coefficients by the corresponding image pixel values and sum the result.
+-# Multiply the kernel coefficients by the corresponding image pixel values and sum the result.
-3. Place the result to the location of the *anchor* in the input image.
+-# Place the result to the location of the *anchor* in the input image.
-4. Repeat the process for all pixels by scanning the kernel over the entire image.
+-# Repeat the process for all pixels by scanning the kernel over the entire image.

 Expressing the procedure above in the form of an equation we would have:

@@ -46,7 +46,7 @@ these operations.
 Code
 ----

-1. **What does this program do?**
+-# **What does this program do?**
 - Loads an image
 - Performs a *normalized box filter*. For instance, for a kernel of size \f$size = 3\f$, the
 kernel would be:
@@ -61,7 +61,7 @@ Code

 - The filter output (with each kernel) will be shown during 500 milliseconds

-2. The tutorial code's is shown lines below. You can also download it from
+-# The tutorial code's is shown lines below. You can also download it from
 [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/filter2D_demo.cpp)
 @code{.cpp}
 #include "opencv2/imgproc.hpp"
@@ -125,26 +125,26 @@ int main ( int argc, char** argv )
 Explanation
 -----------

-1. Load an image
+-# Load an image
 @code{.cpp}
 src = imread( argv[1] );

 if( !src.data )
 { return -1; }
 @endcode
-2. Create a window to display the result
+-# Create a window to display the result
 @code{.cpp}
 namedWindow( window_name, WINDOW_AUTOSIZE );
 @endcode
-3. Initialize the arguments for the linear filter
+-# Initialize the arguments for the linear filter
 @code{.cpp}
 anchor = Point( -1, -1 );
 delta = 0;
 ddepth = -1;
 @endcode
-4. Perform an infinite loop updating the kernel size and applying our linear filter to the input
+-# Perform an infinite loop updating the kernel size and applying our linear filter to the input
 image. Let's analyze that more in detail:
-5. First we define the kernel our filter is going to use. Here it is:
+-# First we define the kernel our filter is going to use. Here it is:
 @code{.cpp}
 kernel_size = 3 + 2*( ind%5 );
 kernel = Mat::ones( kernel_size, kernel_size, CV_32F )/ (float)(kernel_size*kernel_size);
@@ -153,7 +153,7 @@ Explanation
 line actually builds the kernel by setting its value to a matrix filled with \f$1's\f$ and
 normalizing it by dividing it between the number of elements.

-6. After setting the kernel, we can generate the filter by using the function @ref cv::filter2D :
+-# After setting the kernel, we can generate the filter by using the function @ref cv::filter2D :
 @code{.cpp}
 filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );
 @endcode
@@ -169,14 +169,14 @@ Explanation
 -# *delta*: A value to be added to each pixel during the convolution. By default it is \f$0\f$
 -# *BORDER_DEFAULT*: We let this value by default (more details in the following tutorial)

-7. Our program will effectuate a *while* loop, each 500 ms the kernel size of our filter will be
+-# Our program will effectuate a *while* loop, each 500 ms the kernel size of our filter will be
 updated in the range indicated.

 Results
 -------

-1. After compiling the code above, you can execute it giving as argument the path of an image. The
+-# After compiling the code above, you can execute it giving as argument the path of an image. The
 result should be a window that shows an image blurred by a normalized filter. Each 0.5 seconds
 the kernel size should change, as can be seen in the series of snapshots below:

 
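As an aside to the filter2D tutorial edited above: the four kernel steps it lists can be sketched for a single interior pixel, and with the tutorial's normalized box kernel (all ones divided by the element count) the result is just the neighbourhood mean. `convolveAt` and `boxKernel3` are invented names for this sketch; note that, like `cv::filter2D`, it computes correlation, i.e. the kernel is not mirrored.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Image = std::vector<std::vector<double>>;

// Steps 1-3 from the text, for one interior pixel: place the kernel anchor on
// (r, c), multiply each coefficient by the pixel beneath it, sum, and return
// the value that would be written at the anchor position.
double convolveAt(const Image& img, const Image& kernel, int r, int c) {
    int kh = (int)kernel.size(), kw = (int)kernel[0].size();
    int ar = kh / 2, ac = kw / 2;              // anchor at the kernel centre
    double sum = 0.0;
    for (int i = 0; i < kh; ++i)
        for (int j = 0; j < kw; ++j)
            sum += kernel[i][j] * img[r + i - ar][c + j - ac];
    return sum;
}

// A normalized 3x3 box kernel, analogous to the tutorial's Mat::ones(...)/9.
Image boxKernel3() { return Image(3, std::vector<double>(3, 1.0 / 9.0)); }
```

Step 4, scanning the kernel over the whole image, is just this call repeated for every (r, c), with the border handled by one of the padding schemes from the previous tutorial.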
@ -23,7 +23,7 @@ Theory
|
|||||||
where \f$(x_{center}, y_{center})\f$ define the center position (green point) and \f$r\f$ is the radius,
|
where \f$(x_{center}, y_{center})\f$ define the center position (green point) and \f$r\f$ is the radius,
|
||||||
which allows us to completely define a circle, as it can be seen below:
|
which allows us to completely define a circle, as it can be seen below:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
- For sake of efficiency, OpenCV implements a detection method slightly trickier than the standard
|
- For sake of efficiency, OpenCV implements a detection method slightly trickier than the standard
|
||||||
Hough Transform: *The Hough gradient method*, which is made up of two main stages. The first
|
Hough Transform: *The Hough gradient method*, which is made up of two main stages. The first
|
||||||
@ -34,82 +34,35 @@ Theory
|
|||||||
Code
|
Code
|
||||||
----
|
----
|
||||||
|
|
||||||
1. **What does this program do?**
|
-# **What does this program do?**
|
||||||
- Loads an image and blur it to reduce the noise
|
- Loads an image and blur it to reduce the noise
|
||||||
- Applies the *Hough Circle Transform* to the blurred image .
|
- Applies the *Hough Circle Transform* to the blurred image .
|
||||||
- Display the detected circle in a window.
|
- Display the detected circle in a window.
|
||||||
|
|
||||||
2. The sample code that we will explain can be downloaded from
|
-# The sample code that we will explain can be downloaded from [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/houghcircles.cpp).
|
||||||
|TutorialHoughCirclesSimpleDownload|_. A slightly fancier version (which shows trackbars for
|
A slightly fancier version (which shows trackbars for
|
||||||
changing the threshold values) can be found |TutorialHoughCirclesFancyDownload|_.
|
changing the threshold values) can be found [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/HoughCircle_Demo.cpp).
|
||||||
@code{.cpp}
|
@includelineno samples/cpp/houghcircles.cpp
|
||||||
#include "opencv2/highgui.hpp"
|
|
||||||
#include "opencv2/imgproc.hpp"
|
|
||||||
#include <iostream>
|
|
||||||
#include <stdio.h>
|
|
||||||
|
|
||||||
using namespace cv;
|
|
||||||
|
|
||||||
/* @function main */
|
|
||||||
int main(int argc, char** argv)
|
|
||||||
{
|
|
||||||
Mat src, src_gray;
|
|
||||||
|
|
||||||
/// Read the image
|
|
||||||
src = imread( argv[1], 1 );
|
|
||||||
|
|
||||||
if( !src.data )
|
|
||||||
{ return -1; }
|
|
||||||
|
|
||||||
/// Convert it to gray
|
|
||||||
cvtColor( src, src_gray, COLOR_BGR2GRAY );
|
|
||||||
|
|
||||||
/// Reduce the noise so we avoid false circle detection
|
|
||||||
GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );
|
|
||||||
|
|
||||||
vector<Vec3f> circles;
|
|
||||||
|
|
||||||
/// Apply the Hough Transform to find the circles
|
|
||||||
HoughCircles( src_gray, circles, HOUGH_GRADIENT, 1, src_gray.rows/8, 200, 100, 0, 0 );
|
|
||||||
|
|
||||||
/// Draw the circles detected
|
|
||||||
for( size_t i = 0; i < circles.size(); i++ )
|
|
||||||
{
|
|
||||||
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
|
|
||||||
int radius = cvRound(circles[i][2]);
|
|
||||||
// circle center
|
|
||||||
circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );
|
|
||||||
// circle outline
|
|
||||||
circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Show your results
|
|
||||||
namedWindow( "Hough Circle Transform Demo", WINDOW_AUTOSIZE );
|
|
||||||
imshow( "Hough Circle Transform Demo", src );
|
|
||||||
|
|
||||||
waitKey(0);
|
|
||||||
return 0;
|
|
||||||
}
|
|
||||||
@endcode
|
|
||||||
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
1. Load an image
|
-# Load an image
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
src = imread( argv[1], 1 );
|
src = imread( argv[1], 1 );
|
||||||
|
|
||||||
if( !src.data )
|
if( !src.data )
|
||||||
{ return -1; }
|
{ return -1; }
|
||||||
@endcode
|
@endcode
|
||||||
2. Convert it to grayscale:
|
-# Convert it to grayscale:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
cvtColor( src, src_gray, COLOR_BGR2GRAY );
|
cvtColor( src, src_gray, COLOR_BGR2GRAY );
|
||||||
@endcode
|
@endcode
|
||||||
3. Apply a Gaussian blur to reduce noise and avoid false circle detection:
|
-# Apply a Gaussian blur to reduce noise and avoid false circle detection:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );
|
GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );
|
||||||
@endcode
|
@endcode
|
||||||
4. Proceed to apply Hough Circle Transform:
|
-# Proceed to apply Hough Circle Transform:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
vector<Vec3f> circles;
|
vector<Vec3f> circles;
|
||||||
|
|
||||||
@@ -129,7 +82,7 @@ Explanation
    - *min_radius = 0*: Minimum radius to be detected. If unknown, put zero as default.
    - *max_radius = 0*: Maximum radius to be detected. If unknown, put zero as default.

-5. Draw the detected circles:
+-# Draw the detected circles:
    @code{.cpp}
    for( size_t i = 0; i < circles.size(); i++ )
    {
@@ -143,19 +96,19 @@ Explanation
    @endcode
    You can see that we will draw the circle(s) in red and the center(s) with a small green dot

-6. Display the detected circle(s):
+-# Display the detected circle(s):
    @code{.cpp}
    namedWindow( "Hough Circle Transform Demo", WINDOW_AUTOSIZE );
    imshow( "Hough Circle Transform Demo", src );
    @endcode
-7. Wait for the user to exit the program
+-# Wait for the user to exit the program
    @code{.cpp}
    waitKey(0);
    @endcode

 Result
 ------

 The result of running the code above with a test image is shown below:

 

@@ -12,18 +12,22 @@ In this tutorial you will learn how to:
 Theory
 ------

-@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler. Hough
-Line Transform ---------------------\#. The Hough Line Transform is a transform used to detect
-straight lines. \#. To apply the Transform, first an edge detection pre-processing is desirable.
+@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.
+
+Hough Line Transform
+--------------------
+
+-# The Hough Line Transform is a transform used to detect straight lines.
+-# To apply the Transform, first an edge detection pre-processing is desirable.

 ### How does it work?

-1. As you know, a line in the image space can be expressed with two variables. For example:
+-# As you know, a line in the image space can be expressed with two variables. For example:

    -# In the **Cartesian coordinate system:** Parameters: \f$(m,b)\f$.
    -# In the **Polar coordinate system:** Parameters: \f$(r,\theta)\f$

    

 For Hough Transforms, we will express lines in the *Polar system*. Hence, a line equation can be
 written as:
@@ -32,7 +36,7 @@ straight lines. \#. To apply the Transform, first an edge detection pre-processi

 Arranging the terms: \f$r = x \cos \theta + y \sin \theta\f$

-1. In general for each point \f$(x_{0}, y_{0})\f$, we can define the family of lines that goes through
+-# In general for each point \f$(x_{0}, y_{0})\f$, we can define the family of lines that goes through
    that point as:

    \f[r_{\theta} = x_{0} \cdot \cos \theta + y_{0} \cdot \sin \theta\f]
@@ -40,30 +44,30 @@ Arranging the terms: \f$r = x \cos \theta + y \sin \theta\f$
    Meaning that each pair \f$(r_{\theta},\theta)\f$ represents each line that passes by
    \f$(x_{0}, y_{0})\f$.

-2. If for a given \f$(x_{0}, y_{0})\f$ we plot the family of lines that goes through it, we get a
+-# If for a given \f$(x_{0}, y_{0})\f$ we plot the family of lines that goes through it, we get a
    sinusoid. For instance, for \f$x_{0} = 8\f$ and \f$y_{0} = 6\f$ we get the following plot (in a plane
    \f$\theta\f$ - \f$r\f$):

    

    We consider only points such that \f$r > 0\f$ and \f$0< \theta < 2 \pi\f$.

-3. We can do the same operation above for all the points in an image. If the curves of two
+-# We can do the same operation above for all the points in an image. If the curves of two
    different points intersect in the plane \f$\theta\f$ - \f$r\f$, that means that both points belong to the
    same line. For instance, following with the example above and drawing the plot for two more
    points: \f$x_{1} = 4\f$, \f$y_{1} = 9\f$ and \f$x_{2} = 12\f$, \f$y_{2} = 3\f$, we get:

    

    The three plots intersect in one single point \f$(0.925, 9.6)\f$; these coordinates are the
    parameters (\f$\theta, r\f$) of the line on which \f$(x_{0}, y_{0})\f$, \f$(x_{1}, y_{1})\f$ and
    \f$(x_{2}, y_{2})\f$ lie.

-4. What does all the stuff above mean? It means that in general, a line can be *detected* by
+-# What does all the stuff above mean? It means that in general, a line can be *detected* by
    finding the number of intersections between curves. The more curves intersecting means that the
    line represented by that intersection has more points. In general, we can define a *threshold*
    of the minimum number of intersections needed to *detect* a line.
-5. This is what the Hough Line Transform does. It keeps track of the intersection between curves of
+-# This is what the Hough Line Transform does. It keeps track of the intersection between curves of
    every point in the image. If the number of intersections is above some *threshold*, then it
    declares it as a line with the parameters \f$(\theta, r_{\theta})\f$ of the intersection point.

@@ -86,83 +90,20 @@ b. **The Probabilistic Hough Line Transform**
 Code
 ----

-1. **What does this program do?**
+-# **What does this program do?**
    - Loads an image
    - Applies either a *Standard Hough Line Transform* or a *Probabilistic Line Transform*.
    - Displays the original image and the detected lines in two windows.

-2. The sample code that we will explain can be downloaded from here_. A slightly fancier version
+-# The sample code that we will explain can be downloaded from [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/houghlines.cpp). A slightly fancier version
    (which shows both Hough standard and probabilistic with trackbars for changing the threshold
-   values) can be found here_.
-@code{.cpp}
-#include "opencv2/highgui.hpp"
-#include "opencv2/imgproc.hpp"
-
-#include <iostream>
-
-using namespace cv;
-using namespace std;
-
-void help()
-{
- cout << "\nThis program demonstrates line finding with the Hough transform.\n"
-         "Usage:\n"
-         "./houghlines <image_name>, Default is pic1.jpg\n" << endl;
-}
-
-int main(int argc, char** argv)
-{
- const char* filename = argc >= 2 ? argv[1] : "pic1.jpg";
-
- Mat src = imread(filename, 0);
- if(src.empty())
- {
-     help();
-     cout << "can not open " << filename << endl;
-     return -1;
- }
-
- Mat dst, cdst;
- Canny(src, dst, 50, 200, 3);
- cvtColor(dst, cdst, COLOR_GRAY2BGR);
-
-#if 0
- vector<Vec2f> lines;
- HoughLines(dst, lines, 1, CV_PI/180, 100, 0, 0 );
-
- for( size_t i = 0; i < lines.size(); i++ )
- {
-    float rho = lines[i][0], theta = lines[i][1];
-    Point pt1, pt2;
-    double a = cos(theta), b = sin(theta);
-    double x0 = a*rho, y0 = b*rho;
-    pt1.x = cvRound(x0 + 1000*(-b));
-    pt1.y = cvRound(y0 + 1000*(a));
-    pt2.x = cvRound(x0 - 1000*(-b));
-    pt2.y = cvRound(y0 - 1000*(a));
-    line( cdst, pt1, pt2, Scalar(0,0,255), 3, LINE_AA);
- }
-#else
- vector<Vec4i> lines;
- HoughLinesP(dst, lines, 1, CV_PI/180, 50, 50, 10 );
- for( size_t i = 0; i < lines.size(); i++ )
- {
-   Vec4i l = lines[i];
-   line( cdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,0,255), 3, CV_AA);
- }
-#endif
- imshow("source", src);
- imshow("detected lines", cdst);
-
- waitKey();
-
- return 0;
-}
-@endcode
+   values) can be found [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/HoughLines_Demo.cpp).
+@includelineno samples/cpp/houghlines.cpp

 Explanation
 -----------

-1. Load an image
+-# Load an image
    @code{.cpp}
    Mat src = imread(filename, 0);
    if(src.empty())
@@ -172,14 +113,14 @@ Explanation
        return -1;
    }
    @endcode
-2. Detect the edges of the image by using a Canny detector
+-# Detect the edges of the image by using a Canny detector
    @code{.cpp}
    Canny(src, dst, 50, 200, 3);
    @endcode
    Now we will apply the Hough Line Transform. We will explain how to use both OpenCV functions
    available for this purpose:

-3. **Standard Hough Line Transform**
+-# **Standard Hough Line Transform**
    -# First, you apply the Transform:
       @code{.cpp}
       vector<Vec2f> lines;
@@ -211,7 +152,7 @@ Explanation
       line( cdst, pt1, pt2, Scalar(0,0,255), 3, LINE_AA);
    }
    @endcode
-4. **Probabilistic Hough Line Transform**
+-# **Probabilistic Hough Line Transform**
    -# First you apply the transform:
       @code{.cpp}
       vector<Vec4i> lines;
@@ -239,15 +180,16 @@ Explanation
       line( cdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,0,255), 3, LINE_AA);
    }
    @endcode
-5. Display the original image and the detected lines:
+-# Display the original image and the detected lines:
    @code{.cpp}
    imshow("source", src);
    imshow("detected lines", cdst);
    @endcode
-6. Wait until the user exits the program
+-# Wait until the user exits the program
    @code{.cpp}
    waitKey();
    @endcode

 Result
 ------

@@ -258,11 +200,11 @@ Result

 Using an input image such as:

 

 We get the following result by using the Probabilistic Hough Line Transform:

 

 You may observe that the number of lines detected varies while you change the *threshold*. The
 explanation is sort of evident: if you establish a higher threshold, fewer lines will be detected
@@ -12,16 +12,16 @@ In this tutorial you will learn how to:
 Theory
 ------

-1. In the previous tutorial we learned how to use the *Sobel Operator*. It was based on the fact
+-# In the previous tutorial we learned how to use the *Sobel Operator*. It was based on the fact
    that in the edge area, the pixel intensity shows a "jump" or a high variation of intensity.
    Getting the first derivative of the intensity, we observed that an edge is characterized by a
    maximum, as can be seen in the figure:

    

-2. And...what happens if we take the second derivative?
+-# And...what happens if we take the second derivative?

    

    You can observe that the second derivative is zero! So, we can also use this criterion to
    attempt to detect edges in an image. However, note that zeros will not only appear in edges
@@ -30,81 +30,34 @@ Theory

 ### Laplacian Operator

-1. From the explanation above, we deduce that the second derivative can be used to *detect edges*.
+-# From the explanation above, we deduce that the second derivative can be used to *detect edges*.
    Since images are "*2D*", we would need to take the derivative in both dimensions. Here, the
    Laplacian operator comes in handy.
-2. The *Laplacian operator* is defined by:
+-# The *Laplacian operator* is defined by:

    \f[Laplace(f) = \dfrac{\partial^{2} f}{\partial x^{2}} + \dfrac{\partial^{2} f}{\partial y^{2}}\f]

-1. The Laplacian operator is implemented in OpenCV by the function @ref cv::Laplacian . In fact,
+-# The Laplacian operator is implemented in OpenCV by the function @ref cv::Laplacian . In fact,
    since the Laplacian uses the gradient of images, it calls internally the *Sobel* operator to
    perform its computation.

 Code
 ----

-1. **What does this program do?**
+-# **What does this program do?**
    - Loads an image
    - Removes noise by applying a Gaussian blur and then converts the original image to grayscale
    - Applies a Laplacian operator to the grayscale image and stores the output image
    - Displays the result in a window

-2. The tutorial code's is shown lines below. You can also download it from
+-# The tutorial code is shown in the lines below. You can also download it from
    [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp)
-@code{.cpp}
-#include "opencv2/imgproc.hpp"
-#include "opencv2/highgui.hpp"
-#include <stdlib.h>
-#include <stdio.h>
-
-using namespace cv;
-
-/* @function main */
-int main( int argc, char** argv )
-{
-  Mat src, src_gray, dst;
-  int kernel_size = 3;
-  int scale = 1;
-  int delta = 0;
-  int ddepth = CV_16S;
-  char* window_name = "Laplace Demo";
-
-  int c;
-
-  /// Load an image
-  src = imread( argv[1] );
-
-  if( !src.data )
-  { return -1; }
-
-  /// Remove noise by blurring with a Gaussian filter
-  GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
-
-  /// Convert the image to grayscale
-  cvtColor( src, src_gray, COLOR_RGB2GRAY );
-
-  /// Create window
-  namedWindow( window_name, WINDOW_AUTOSIZE );
-
-  /// Apply Laplace function
-  Mat abs_dst;
-
-  Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, BORDER_DEFAULT );
-  convertScaleAbs( dst, abs_dst );
-
-  /// Show what you got
-  imshow( window_name, abs_dst );
-
-  waitKey(0);
-
-  return 0;
-}
-@endcode
+@includelineno samples/cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp

 Explanation
 -----------

-1. Create some needed variables:
+-# Create some needed variables:
    @code{.cpp}
    Mat src, src_gray, dst;
    int kernel_size = 3;
@@ -113,22 +66,22 @@ Explanation
    int ddepth = CV_16S;
    char* window_name = "Laplace Demo";
    @endcode
-2. Loads the source image:
+-# Load the source image:
    @code{.cpp}
    src = imread( argv[1] );

    if( !src.data )
    { return -1; }
    @endcode
-3. Apply a Gaussian blur to reduce noise:
+-# Apply a Gaussian blur to reduce noise:
    @code{.cpp}
    GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
    @endcode
-4. Convert the image to grayscale using @ref cv::cvtColor
+-# Convert the image to grayscale using @ref cv::cvtColor
    @code{.cpp}
    cvtColor( src, src_gray, COLOR_RGB2GRAY );
    @endcode
-5. Apply the Laplacian operator to the grayscale image:
+-# Apply the Laplacian operator to the grayscale image:
    @code{.cpp}
    Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, BORDER_DEFAULT );
    @endcode
@@ -142,27 +95,26 @@ Explanation
    this example.
    - *scale*, *delta* and *BORDER_DEFAULT*: We leave them as default values.

-6. Convert the output from the Laplacian operator to a *CV_8U* image:
+-# Convert the output from the Laplacian operator to a *CV_8U* image:
    @code{.cpp}
    convertScaleAbs( dst, abs_dst );
    @endcode
-7. Display the result in a window:
+-# Display the result in a window:
    @code{.cpp}
    imshow( window_name, abs_dst );
    @endcode

 Results
 -------

-1. After compiling the code above, we can run it giving as argument the path to an image. For
+-# After compiling the code above, we can run it giving as argument the path to an image. For
    example, using as an input:

    

-2. We obtain the following result. Notice how the trees and the silhouette of the cow are
+-# We obtain the following result. Notice how the trees and the silhouette of the cow are
    approximately well defined (except in areas in which the intensities are very similar, i.e. around
    the cow's head). Also, note that the roof of the house behind the trees (right side) is
    clearly marked. This is due to the fact that the contrast is higher in that region.

    

@@ -33,146 +33,53 @@ Theory
    What would happen? It is easily seen that the image would flip in the \f$x\f$ direction. For
    instance, consider the input image:

    

    observe how the red circle changes position with respect to \f$x\f$ (considering \f$x\f$ the horizontal
    direction):

    

 - In OpenCV, the function @ref cv::remap offers a simple remapping implementation.

 Code
 ----

-1. **What does this program do?**
+-# **What does this program do?**
    - Loads an image
    - Each second, applies 1 of 4 different remapping processes to the image and displays it
      indefinitely in a window.
    - Waits for the user to exit the program

-2. The tutorial code's is shown lines below. You can also download it from
+-# The tutorial code is shown in the lines below. You can also download it from
    [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Remap_Demo.cpp)
-@code{.cpp}
-#include "opencv2/highgui.hpp"
-#include "opencv2/imgproc.hpp"
-#include <iostream>
-#include <stdio.h>
-
-using namespace cv;
-
-/// Global variables
-Mat src, dst;
-Mat map_x, map_y;
-char* remap_window = "Remap demo";
-int ind = 0;
-
-/// Function Headers
-void update_map( void );
-
-/*
- * @function main
- */
-int main( int argc, char** argv )
-{
-  /// Load the image
-  src = imread( argv[1], 1 );
-
-  /// Create dst, map_x and map_y with the same size as src:
-  dst.create( src.size(), src.type() );
-  map_x.create( src.size(), CV_32FC1 );
-  map_y.create( src.size(), CV_32FC1 );
-
-  /// Create window
-  namedWindow( remap_window, WINDOW_AUTOSIZE );
-
-  /// Loop
-  while( true )
-  {
-    /// Each 1 sec. Press ESC to exit the program
-    int c = waitKey( 1000 );
-
-    if( (char)c == 27 )
-    { break; }
-
-    /// Update map_x & map_y. Then apply remap
-    update_map();
-    remap( src, dst, map_x, map_y, INTER_LINEAR, BORDER_CONSTANT, Scalar(0,0, 0) );
-
-    /// Display results
-    imshow( remap_window, dst );
-  }
-  return 0;
-}
-
-/*
- * @function update_map
- * @brief Fill the map_x and map_y matrices with 4 types of mappings
- */
-void update_map( void )
-{
-  ind = ind%4;
-
-  for( int j = 0; j < src.rows; j++ )
-  { for( int i = 0; i < src.cols; i++ )
-    {
-      switch( ind )
-      {
-        case 0:
-          if( i > src.cols*0.25 && i < src.cols*0.75 && j > src.rows*0.25 && j < src.rows*0.75 )
-          {
-            map_x.at<float>(j,i) = 2*( i - src.cols*0.25 ) + 0.5 ;
-            map_y.at<float>(j,i) = 2*( j - src.rows*0.25 ) + 0.5 ;
-          }
-          else
-          { map_x.at<float>(j,i) = 0 ;
-            map_y.at<float>(j,i) = 0 ;
-          }
-          break;
-        case 1:
-          map_x.at<float>(j,i) = i ;
-          map_y.at<float>(j,i) = src.rows - j ;
-          break;
-        case 2:
-          map_x.at<float>(j,i) = src.cols - i ;
-          map_y.at<float>(j,i) = j ;
-          break;
-        case 3:
-          map_x.at<float>(j,i) = src.cols - i ;
-          map_y.at<float>(j,i) = src.rows - j ;
-          break;
-      } // end of switch
-    }
-  }
-  ind++;
-}
-@endcode
+@includelineno samples/cpp/tutorial_code/ImgTrans/Remap_Demo.cpp

 Explanation
 -----------

-1. Create some variables we will use:
+-# Create some variables we will use:
    @code{.cpp}
    Mat src, dst;
    Mat map_x, map_y;
    char* remap_window = "Remap demo";
    int ind = 0;
    @endcode
-2. Load an image:
+-# Load an image:
    @code{.cpp}
    src = imread( argv[1], 1 );
    @endcode
-3. Create the destination image and the two mapping matrices (for x and y )
+-# Create the destination image and the two mapping matrices (for x and y)
    @code{.cpp}
    dst.create( src.size(), src.type() );
    map_x.create( src.size(), CV_32FC1 );
    map_y.create( src.size(), CV_32FC1 );
    @endcode
-4. Create a window to display results
+-# Create a window to display results
    @code{.cpp}
    namedWindow( remap_window, WINDOW_AUTOSIZE );
    @endcode
-5. Establish a loop. Each 1000 ms we update our mapping matrices (*mat_x* and *mat_y*) and apply
+-# Establish a loop. Every 1000 ms we update our mapping matrices (*map_x* and *map_y*) and apply
    them to our source image:
    @code{.cpp}
    while( true )
@@ -205,14 +112,11 @@ Explanation

    How do we update our mapping matrices *map_x* and *map_y*? Go on reading:

-6. **Updating the mapping matrices:** We are going to perform 4 different mappings:
+-# **Updating the mapping matrices:** We are going to perform 4 different mappings:
    -# Reduce the picture to half its size and display it in the middle:

       \f[h(i,j) = ( 2*i - src.cols/2 + 0.5, 2*j - src.rows/2 + 0.5)\f]

       for all pairs \f$(i,j)\f$ such that: \f$\dfrac{src.cols}{4}<i<\dfrac{3 \cdot src.cols}{4}\f$ and
       \f$\dfrac{src.rows}{4}<j<\dfrac{3 \cdot src.rows}{4}\f$

    -# Turn the image upside down: \f$h( i, j ) = (i, src.rows - j)\f$
    -# Reflect the image from left to right: \f$h(i,j) = ( src.cols - i, j )\f$
    -# Combination of b and c: \f$h(i,j) = ( src.cols - i, src.rows - j )\f$
@@ -254,26 +158,27 @@ for( int j = 0; j < src.rows; j++ )
    ind++;
 }
 @endcode

 Result
 ------

-1. After compiling the code above, you can execute it giving as argument an image path. For
+-# After compiling the code above, you can execute it giving as argument an image path. For
    instance, by using the following image:

    

-2. This is the result of reducing it to half the size and centering it:
+-# This is the result of reducing it to half the size and centering it:

    

-3. Turning it upside down:
+-# Turning it upside down:

    

-4. Reflecting it in the x direction:
+-# Reflecting it in the x direction:

    

-5. Reflecting it in both directions:
+-# Reflecting it in both directions:

    

|
@ -15,45 +15,45 @@ Theory
|
|||||||
|
|
||||||
@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.
|
@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.
|
||||||
|
|
||||||
1. In the last two tutorials we have seen applicative examples of convolutions. One of the most
|
-# In the last two tutorials we have seen practical examples of convolutions. One of the most
|
||||||
important convolutions is the computation of derivatives in an image (or an approximation to
|
important convolutions is the computation of derivatives in an image (or an approximation to
|
||||||
them).
|
them).
|
||||||
2. Why may be important the calculus of the derivatives in an image? Let's imagine we want to
|
-# Why may be important the calculus of the derivatives in an image? Let's imagine we want to
|
||||||
detect the *edges* present in the image. For instance:
|
detect the *edges* present in the image. For instance:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
You can easily notice that in an *edge*, the pixel intensity *changes* in a notorious way. A
|
You can easily notice that at an *edge* the pixel intensity *changes* noticeably. A
|
||||||
good way to express *changes* is by using *derivatives*. A high change in gradient indicates a
|
good way to express *changes* is by using *derivatives*. A high change in gradient indicates a
|
||||||
major change in the image.
|
major change in the image.
|
||||||
|
|
||||||
3. To be more graphical, let's assume we have a 1D-image. An edge is shown by the "jump" in
|
-# To be more graphical, let's assume we have a 1D-image. An edge is shown by the "jump" in
|
||||||
intensity in the plot below:
|
intensity in the plot below:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
4. The edge "jump" can be seen more easily if we take the first derivative (actually, here appears
|
-# The edge "jump" can be seen more easily if we take the first derivative (here it appears
|
||||||
as a maximum)
|
as a maximum)
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
5. So, from the explanation above, we can deduce that a method to detect edges in an image can be
|
-# So, from the explanation above, we can deduce that a method to detect edges in an image can be
|
||||||
performed by locating pixel locations where the gradient is higher than its neighbors (or to
|
performed by locating pixels where the gradient is higher than that of its neighbors (or, to
|
||||||
generalize, higher than a threshold).
|
generalize, higher than a threshold).
|
||||||
6. More detailed explanation, please refer to **Learning OpenCV** by Bradski and Kaehler
|
-# For a more detailed explanation, please refer to **Learning OpenCV** by Bradski and Kaehler
|
||||||
|
|
||||||
### Sobel Operator
|
### Sobel Operator
|
||||||
|
|
||||||
1. The Sobel Operator is a discrete differentiation operator. It computes an approximation of the
|
-# The Sobel Operator is a discrete differentiation operator. It computes an approximation of the
|
||||||
gradient of an image intensity function.
|
gradient of an image intensity function.
|
||||||
2. The Sobel Operator combines Gaussian smoothing and differentiation.
|
-# The Sobel Operator combines Gaussian smoothing and differentiation.
|
||||||
|
|
||||||
#### Formulation
|
#### Formulation
|
||||||
|
|
||||||
Assuming that the image to be operated is \f$I\f$:
|
Assuming that the image to be operated is \f$I\f$:
|
||||||
|
|
||||||
1. We calculate two derivatives:
|
-# We calculate two derivatives:
|
||||||
1. **Horizontal changes**: This is computed by convolving \f$I\f$ with a kernel \f$G_{x}\f$ with odd
|
-# **Horizontal changes**: This is computed by convolving \f$I\f$ with a kernel \f$G_{x}\f$ with odd
|
||||||
size. For example for a kernel size of 3, \f$G_{x}\f$ would be computed as:
|
size. For example for a kernel size of 3, \f$G_{x}\f$ would be computed as:
|
||||||
|
|
||||||
\f[G_{x} = \begin{bmatrix}
|
\f[G_{x} = \begin{bmatrix}
|
||||||
@ -62,7 +62,7 @@ Assuming that the image to be operated is \f$I\f$:
|
|||||||
-1 & 0 & +1
|
-1 & 0 & +1
|
||||||
\end{bmatrix} * I\f]
|
\end{bmatrix} * I\f]
|
||||||
|
|
||||||
2. **Vertical changes**: This is computed by convolving \f$I\f$ with a kernel \f$G_{y}\f$ with odd
|
-# **Vertical changes**: This is computed by convolving \f$I\f$ with a kernel \f$G_{y}\f$ with odd
|
||||||
size. For example for a kernel size of 3, \f$G_{y}\f$ would be computed as:
|
size. For example for a kernel size of 3, \f$G_{y}\f$ would be computed as:
|
||||||
|
|
||||||
\f[G_{y} = \begin{bmatrix}
|
\f[G_{y} = \begin{bmatrix}
|
||||||
@ -71,7 +71,7 @@ Assuming that the image to be operated is \f$I\f$:
|
|||||||
+1 & +2 & +1
|
+1 & +2 & +1
|
||||||
\end{bmatrix} * I\f]
|
\end{bmatrix} * I\f]
|
||||||
|
|
||||||
2. At each point of the image we calculate an approximation of the *gradient* in that point by
|
-# At each point of the image we calculate an approximation of the *gradient* in that point by
|
||||||
combining both results above:
|
combining both results above:
|
||||||
|
|
||||||
\f[G = \sqrt{ G_{x}^{2} + G_{y}^{2} }\f]
|
\f[G = \sqrt{ G_{x}^{2} + G_{y}^{2} }\f]
|
||||||
@ -83,7 +83,7 @@ Assuming that the image to be operated is \f$I\f$:
|
|||||||
@note
|
@note
|
||||||
When the size of the kernel is `3`, the Sobel kernel shown above may produce noticeable
|
When the size of the kernel is `3`, the Sobel kernel shown above may produce noticeable
|
||||||
inaccuracies (after all, Sobel is only an approximation of the derivative). OpenCV addresses
|
inaccuracies (after all, Sobel is only an approximation of the derivative). OpenCV addresses
|
||||||
this inaccuracy for kernels of size 3 by using the :scharr:\`Scharr function. This is as fast
|
this inaccuracy for kernels of size 3 by using the @ref cv::Scharr function. This is as fast
|
||||||
but more accurate than the standar Sobel function. It implements the following kernels:
|
but more accurate than the standard Sobel function. It implements the following kernels:
|
||||||
\f[G_{x} = \begin{bmatrix}
|
\f[G_{x} = \begin{bmatrix}
|
||||||
-3 & 0 & +3 \\
|
-3 & 0 & +3 \\
|
||||||
@ -103,18 +103,18 @@ Assuming that the image to be operated is \f$I\f$:
|
|||||||
Code
|
Code
|
||||||
----
|
----
|
||||||
|
|
||||||
1. **What does this program do?**
|
-# **What does this program do?**
|
||||||
- Applies the *Sobel Operator* and generates as output an image with the detected *edges*
|
- Applies the *Sobel Operator* and generates as output an image with the detected *edges*
|
||||||
bright on a darker background.
|
bright on a darker background.
|
||||||
|
|
||||||
2. The tutorial code's is shown lines below. You can also download it from
|
-# The tutorial code is shown in the lines below. You can also download it from
|
||||||
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp)
|
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp)
|
||||||
@includelineno samples/cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp
|
@includelineno samples/cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp
|
||||||
|
|
||||||
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
1. First we declare the variables we are going to use:
|
-# First we declare the variables we are going to use:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
Mat src, src_gray;
|
Mat src, src_gray;
|
||||||
Mat grad;
|
Mat grad;
|
||||||
@ -123,22 +123,22 @@ Explanation
|
|||||||
int delta = 0;
|
int delta = 0;
|
||||||
int ddepth = CV_16S;
|
int ddepth = CV_16S;
|
||||||
@endcode
|
@endcode
|
||||||
2. As usual we load our source image *src*:
|
-# As usual we load our source image *src*:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
src = imread( argv[1] );
|
src = imread( argv[1] );
|
||||||
|
|
||||||
if( !src.data )
|
if( !src.data )
|
||||||
{ return -1; }
|
{ return -1; }
|
||||||
@endcode
|
@endcode
|
||||||
3. First, we apply a @ref cv::GaussianBlur to our image to reduce the noise ( kernel size = 3 )
|
-# First, we apply a @ref cv::GaussianBlur to our image to reduce the noise ( kernel size = 3 )
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
|
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
|
||||||
@endcode
|
@endcode
|
||||||
4. Now we convert our filtered image to grayscale:
|
-# Now we convert our filtered image to grayscale:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
cvtColor( src, src_gray, COLOR_RGB2GRAY );
|
cvtColor( src, src_gray, COLOR_RGB2GRAY );
|
||||||
@endcode
|
@endcode
|
||||||
5. Second, we calculate the "*derivatives*" in *x* and *y* directions. For this, we use the
|
-# Second, we calculate the "*derivatives*" in *x* and *y* directions. For this, we use the
|
||||||
function @ref cv::Sobel as shown below:
|
function @ref cv::Sobel as shown below:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
Mat grad_x, grad_y;
|
Mat grad_x, grad_y;
|
||||||
@ -161,23 +161,24 @@ Explanation
|
|||||||
Notice that to calculate the gradient in *x* direction we use: \f$x_{order}= 1\f$ and
|
Notice that to calculate the gradient in *x* direction we use: \f$x_{order}= 1\f$ and
|
||||||
\f$y_{order} = 0\f$. We do analogously for the *y* direction.
|
\f$y_{order} = 0\f$. We do analogously for the *y* direction.
|
||||||
|
|
||||||
6. We convert our partial results back to *CV_8U*:
|
-# We convert our partial results back to *CV_8U*:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
convertScaleAbs( grad_x, abs_grad_x );
|
convertScaleAbs( grad_x, abs_grad_x );
|
||||||
convertScaleAbs( grad_y, abs_grad_y );
|
convertScaleAbs( grad_y, abs_grad_y );
|
||||||
@endcode
|
@endcode
|
||||||
7. Finally, we try to approximate the *gradient* by adding both directional gradients (note that
|
-# Finally, we try to approximate the *gradient* by adding both directional gradients (note that
|
||||||
this is not an exact calculation at all! but it is good for our purposes).
|
this is not an exact calculation at all, but it is good enough for our purposes).
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
|
addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
|
||||||
@endcode
|
@endcode
|
||||||
8. Finally, we show our result:
|
-# Finally, we show our result:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
imshow( window_name, grad );
|
imshow( window_name, grad );
|
||||||
@endcode
|
@endcode
|
||||||
|
|
||||||
Results
|
Results
|
||||||
-------
|
-------
|
||||||
|
|
||||||
1. Here is the output of applying our basic detector to *lena.jpg*:
|
-# Here is the output of applying our basic detector to *lena.jpg*:
|
||||||
|
|
||||||

|

|
||||||
|
@ -6,17 +6,17 @@ Goal
|
|||||||
|
|
||||||
In this tutorial you will learn how to:
|
In this tutorial you will learn how to:
|
||||||
|
|
||||||
a. Use the OpenCV function @ref cv::warpAffine to implement simple remapping routines.
|
- Use the OpenCV function @ref cv::warpAffine to implement simple remapping routines.
|
||||||
b. Use the OpenCV function @ref cv::getRotationMatrix2D to obtain a \f$2 \times 3\f$ rotation matrix
|
- Use the OpenCV function @ref cv::getRotationMatrix2D to obtain a \f$2 \times 3\f$ rotation matrix
|
||||||
|
|
||||||
Theory
|
Theory
|
||||||
------
|
------
|
||||||
|
|
||||||
### What is an Affine Transformation?
|
### What is an Affine Transformation?
|
||||||
|
|
||||||
1. It is any transformation that can be expressed in the form of a *matrix multiplication* (linear
|
-# It is any transformation that can be expressed in the form of a *matrix multiplication* (linear
|
||||||
transformation) followed by a *vector addition* (translation).
|
transformation) followed by a *vector addition* (translation).
|
||||||
2. From the above, We can use an Affine Transformation to express:
|
-# From the above, we can use an Affine Transformation to express:
|
||||||
|
|
||||||
-# Rotations (linear transformation)
|
-# Rotations (linear transformation)
|
||||||
-# Translations (vector addition)
|
-# Translations (vector addition)
|
||||||
@ -25,24 +25,28 @@ Theory
|
|||||||
you can see that, in essence, an Affine Transformation represents a **relation** between two
|
you can see that, in essence, an Affine Transformation represents a **relation** between two
|
||||||
images.
|
images.
|
||||||
|
|
||||||
3. The usual way to represent an Affine Transform is by using a \f$2 \times 3\f$ matrix.
|
-# The usual way to represent an Affine Transform is by using a \f$2 \times 3\f$ matrix.
|
||||||
|
|
||||||
\f[A = \begin{bmatrix}
|
\f[
|
||||||
|
A = \begin{bmatrix}
|
||||||
a_{00} & a_{01} \\
|
a_{00} & a_{01} \\
|
||||||
a_{10} & a_{11}
|
a_{10} & a_{11}
|
||||||
\end{bmatrix}_{2 \times 2}
|
\end{bmatrix}_{2 \times 2}
|
||||||
B = \begin{bmatrix}
|
B = \begin{bmatrix}
|
||||||
b_{00} \\
|
b_{00} \\
|
||||||
b_{10}
|
b_{10}
|
||||||
\end{bmatrix}_{2 \times 1}\f]\f[M = \begin{bmatrix}
|
\end{bmatrix}_{2 \times 1}
|
||||||
|
\f]
|
||||||
|
\f[
|
||||||
|
M = \begin{bmatrix}
|
||||||
A & B
|
A & B
|
||||||
\end{bmatrix}
|
\end{bmatrix}
|
||||||
=\f]
|
=
|
||||||
|
\begin{bmatrix}
|
||||||
begin{bmatrix}
|
a_{00} & a_{01} & b_{00} \\
|
||||||
a_{00} & a_{01} & b_{00} \\ a_{10} & a_{11} & b_{10}
|
a_{10} & a_{11} & b_{10}
|
||||||
|
\end{bmatrix}_{2 \times 3}
|
||||||
end{bmatrix}_{2 times 3}
|
\f]
|
||||||
|
|
||||||
Considering that we want to transform a 2D vector \f$X = \begin{bmatrix}x \\ y\end{bmatrix}\f$ by
|
Considering that we want to transform a 2D vector \f$X = \begin{bmatrix}x \\ y\end{bmatrix}\f$ by
|
||||||
using \f$A\f$ and \f$B\f$, we can do it equivalently with:
|
using \f$A\f$ and \f$B\f$, we can do it equivalently with:
|
||||||
@ -56,17 +60,17 @@ Theory
|
|||||||
|
|
||||||
### How do we get an Affine Transformation?
|
### How do we get an Affine Transformation?
|
||||||
|
|
||||||
1. Excellent question. We mentioned that an Affine Transformation is basically a **relation**
|
-# Excellent question. We mentioned that an Affine Transformation is basically a **relation**
|
||||||
between two images. The information about this relation can come, roughly, in two ways:
|
between two images. The information about this relation can come, roughly, in two ways:
|
||||||
-# We know both \f$X\f$ and T and we also know that they are related. Then our job is to find \f$M\f$
|
-# We know both \f$X\f$ and \f$T\f$ and we also know that they are related. Then our job is to find \f$M\f$
|
||||||
-# We know \f$M\f$ and \f$X\f$. To obtain \f$T\f$ we only need to apply \f$T = M \cdot X\f$. Our information
|
-# We know \f$M\f$ and \f$X\f$. To obtain \f$T\f$ we only need to apply \f$T = M \cdot X\f$. Our information
|
||||||
for \f$M\f$ may be explicit (i.e. have the 2-by-3 matrix) or it can come as a geometric relation
|
for \f$M\f$ may be explicit (i.e. have the 2-by-3 matrix) or it can come as a geometric relation
|
||||||
between points.
|
between points.
|
||||||
|
|
||||||
2. Let's explain a little bit better (b). Since \f$M\f$ relates 02 images, we can analyze the simplest
|
-# Let's explain point (b) a little better. Since \f$M\f$ relates two images, we can analyze the simplest
|
||||||
case in which it relates three points in both images. Look at the figure below:
|
case in which it relates three points in both images. Look at the figure below:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
the points 1, 2 and 3 (forming a triangle in image 1) are mapped into image 2, still forming a
|
the points 1, 2 and 3 (forming a triangle in image 1) are mapped into image 2, still forming a
|
||||||
triangle, but now they have changed notoriously. If we find the Affine Transformation with these
|
triangle, but now they have changed noticeably. If we find the Affine Transformation with these
|
||||||
@ -76,7 +80,7 @@ Theory
|
|||||||
Code
|
Code
|
||||||
----
|
----
|
||||||
|
|
||||||
1. **What does this program do?**
|
-# **What does this program do?**
|
||||||
- Loads an image
|
- Loads an image
|
||||||
- Applies an Affine Transform to the image. This Transform is obtained from the relation
|
- Applies an Affine Transform to the image. This Transform is obtained from the relation
|
||||||
between three points. We use the function @ref cv::warpAffine for that purpose.
|
between three points. We use the function @ref cv::warpAffine for that purpose.
|
||||||
@ -84,86 +88,14 @@ Code
|
|||||||
the image center
|
the image center
|
||||||
- Waits until the user exits the program
|
- Waits until the user exits the program
|
||||||
|
|
||||||
2. The tutorial code's is shown lines below. You can also download it from
|
-# The tutorial code is shown in the lines below. You can also download it from
|
||||||
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Geometric_Transforms_Demo.cpp)
|
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Geometric_Transforms_Demo.cpp)
|
||||||
@code{.cpp}
|
@includelineno samples/cpp/tutorial_code/ImgTrans/Geometric_Transforms_Demo.cpp
|
||||||
#include "opencv2/highgui.hpp"
|
|
||||||
#include "opencv2/imgproc.hpp"
|
|
||||||
#include <iostream>
|
|
||||||
#include <stdio.h>
|
|
||||||
|
|
||||||
using namespace cv;
|
|
||||||
using namespace std;
|
|
||||||
|
|
||||||
/// Global variables
|
|
||||||
char* source_window = "Source image";
|
|
||||||
char* warp_window = "Warp";
|
|
||||||
char* warp_rotate_window = "Warp + Rotate";
|
|
||||||
|
|
||||||
/* @function main */
|
|
||||||
int main( int argc, char** argv )
|
|
||||||
{
|
|
||||||
Point2f srcTri[3];
|
|
||||||
Point2f dstTri[3];
|
|
||||||
|
|
||||||
Mat rot_mat( 2, 3, CV_32FC1 );
|
|
||||||
Mat warp_mat( 2, 3, CV_32FC1 );
|
|
||||||
Mat src, warp_dst, warp_rotate_dst;
|
|
||||||
|
|
||||||
/// Load the image
|
|
||||||
src = imread( argv[1], 1 );
|
|
||||||
|
|
||||||
/// Set the dst image the same type and size as src
|
|
||||||
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
|
|
||||||
|
|
||||||
/// Set your 3 points to calculate the Affine Transform
|
|
||||||
srcTri[0] = Point2f( 0,0 );
|
|
||||||
srcTri[1] = Point2f( src.cols - 1, 0 );
|
|
||||||
srcTri[2] = Point2f( 0, src.rows - 1 );
|
|
||||||
|
|
||||||
dstTri[0] = Point2f( src.cols*0.0, src.rows*0.33 );
|
|
||||||
dstTri[1] = Point2f( src.cols*0.85, src.rows*0.25 );
|
|
||||||
dstTri[2] = Point2f( src.cols*0.15, src.rows*0.7 );
|
|
||||||
|
|
||||||
/// Get the Affine Transform
|
|
||||||
warp_mat = getAffineTransform( srcTri, dstTri );
|
|
||||||
|
|
||||||
/// Apply the Affine Transform just found to the src image
|
|
||||||
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
|
|
||||||
|
|
||||||
/* Rotating the image after Warp */
|
|
||||||
|
|
||||||
/// Compute a rotation matrix with respect to the center of the image
|
|
||||||
Point center = Point( warp_dst.cols/2, warp_dst.rows/2 );
|
|
||||||
double angle = -50.0;
|
|
||||||
double scale = 0.6;
|
|
||||||
|
|
||||||
/// Get the rotation matrix with the specifications above
|
|
||||||
rot_mat = getRotationMatrix2D( center, angle, scale );
|
|
||||||
|
|
||||||
/// Rotate the warped image
|
|
||||||
warpAffine( warp_dst, warp_rotate_dst, rot_mat, warp_dst.size() );
|
|
||||||
|
|
||||||
/// Show what you got
|
|
||||||
namedWindow( source_window, WINDOW_AUTOSIZE );
|
|
||||||
imshow( source_window, src );
|
|
||||||
|
|
||||||
namedWindow( warp_window, WINDOW_AUTOSIZE );
|
|
||||||
imshow( warp_window, warp_dst );
|
|
||||||
|
|
||||||
namedWindow( warp_rotate_window, WINDOW_AUTOSIZE );
|
|
||||||
imshow( warp_rotate_window, warp_rotate_dst );
|
|
||||||
|
|
||||||
/// Wait until user exits the program
|
|
||||||
waitKey(0);
|
|
||||||
|
|
||||||
return 0;
|
|
||||||
}
|
|
||||||
@endcode
|
|
||||||
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
1. Declare some variables we will use, such as the matrices to store our results and 2 arrays of
|
-# Declare some variables we will use, such as the matrices to store our results and 2 arrays of
|
||||||
points to store the 2D points that define our Affine Transform.
|
points to store the 2D points that define our Affine Transform.
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
Point2f srcTri[3];
|
Point2f srcTri[3];
|
||||||
@ -173,15 +105,15 @@ Explanation
|
|||||||
Mat warp_mat( 2, 3, CV_32FC1 );
|
Mat warp_mat( 2, 3, CV_32FC1 );
|
||||||
Mat src, warp_dst, warp_rotate_dst;
|
Mat src, warp_dst, warp_rotate_dst;
|
||||||
@endcode
|
@endcode
|
||||||
2. Load an image:
|
-# Load an image:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
src = imread( argv[1], 1 );
|
src = imread( argv[1], 1 );
|
||||||
@endcode
|
@endcode
|
||||||
3. Initialize the destination image as having the same size and type as the source:
|
-# Initialize the destination image as having the same size and type as the source:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
|
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
|
||||||
@endcode
|
@endcode
|
||||||
4. **Affine Transform:** As we explained lines above, we need two sets of 3 points to derive the
|
-# **Affine Transform:** As we explained above, we need two sets of 3 points to derive the
|
||||||
affine transform relation. Take a look:
|
affine transform relation. Take a look:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
srcTri[0] = Point2f( 0,0 );
|
srcTri[0] = Point2f( 0,0 );
|
||||||
@ -196,14 +128,14 @@ Explanation
|
|||||||
approximately the same as the ones depicted in the example figure (in the Theory section). You
|
approximately the same as the ones depicted in the example figure (in the Theory section). You
|
||||||
may note that the size and orientation of the triangle defined by the 3 points change.
|
may note that the size and orientation of the triangle defined by the 3 points change.
|
||||||
|
|
||||||
5. Armed with both sets of points, we calculate the Affine Transform by using OpenCV function @ref
|
-# Armed with both sets of points, we calculate the Affine Transform by using the OpenCV function @ref
|
||||||
cv::getAffineTransform :
|
cv::getAffineTransform :
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
warp_mat = getAffineTransform( srcTri, dstTri );
|
warp_mat = getAffineTransform( srcTri, dstTri );
|
||||||
@endcode
|
@endcode
|
||||||
We get as an output a \f$2 \times 3\f$ matrix (in this case **warp_mat**)
|
We get as an output a \f$2 \times 3\f$ matrix (in this case **warp_mat**)
|
||||||
|
|
||||||
6. We apply the Affine Transform just found to the src image
|
-# We apply the Affine Transform just found to the src image
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
|
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
|
||||||
@endcode
|
@endcode
|
||||||
@ -217,7 +149,7 @@ Explanation
|
|||||||
We just got our first transformed image! We will display it in one bit. Before that, we also
|
We just got our first transformed image! We will display it in a bit. Before that, we also
|
||||||
want to rotate it...
|
want to rotate it...
|
||||||
|
|
||||||
7. **Rotate:** To rotate an image, we need to know two things:
|
-# **Rotate:** To rotate an image, we need to know two things:
|
||||||
|
|
||||||
-# The center with respect to which the image will rotate
|
-# The center with respect to which the image will rotate
|
||||||
-# The angle to be rotated. In OpenCV a positive angle is counter-clockwise
|
-# The angle to be rotated. In OpenCV a positive angle is counter-clockwise
|
||||||
@ -229,16 +161,16 @@ Explanation
|
|||||||
double angle = -50.0;
|
double angle = -50.0;
|
||||||
double scale = 0.6;
|
double scale = 0.6;
|
||||||
@endcode
|
@endcode
|
||||||
8. We generate the rotation matrix with the OpenCV function @ref cv::getRotationMatrix2D , which
|
-# We generate the rotation matrix with the OpenCV function @ref cv::getRotationMatrix2D , which
|
||||||
returns a \f$2 \times 3\f$ matrix (in this case *rot_mat*)
|
returns a \f$2 \times 3\f$ matrix (in this case *rot_mat*)
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
rot_mat = getRotationMatrix2D( center, angle, scale );
|
rot_mat = getRotationMatrix2D( center, angle, scale );
|
||||||
@endcode
|
@endcode
|
||||||
9. We now apply the found rotation to the output of our previous Transformation.
|
-# We now apply the found rotation to the output of our previous Transformation.
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
warpAffine( warp_dst, warp_rotate_dst, rot_mat, warp_dst.size() );
|
warpAffine( warp_dst, warp_rotate_dst, rot_mat, warp_dst.size() );
|
||||||
@endcode
|
@endcode
|
||||||
10. Finally, we display our results in two windows plus the original image for good measure:
|
-# Finally, we display our results in two windows plus the original image for good measure:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
namedWindow( source_window, WINDOW_AUTOSIZE );
|
namedWindow( source_window, WINDOW_AUTOSIZE );
|
||||||
imshow( source_window, src );
|
imshow( source_window, src );
|
||||||
@ -249,23 +181,24 @@ Explanation
|
|||||||
namedWindow( warp_rotate_window, WINDOW_AUTOSIZE );
|
namedWindow( warp_rotate_window, WINDOW_AUTOSIZE );
|
||||||
imshow( warp_rotate_window, warp_rotate_dst );
|
imshow( warp_rotate_window, warp_rotate_dst );
|
||||||
@endcode
|
@endcode
|
||||||
11. We just have to wait until the user exits the program
|
-# We just have to wait until the user exits the program
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
waitKey(0);
|
waitKey(0);
|
||||||
@endcode
|
@endcode
|
||||||
|
|
||||||
Result
|
Result
|
||||||
------
|
------
|
||||||
|
|
||||||
1. After compiling the code above, we can give it the path of an image as argument. For instance,
|
-# After compiling the code above, we can give it the path of an image as an argument. For instance,
|
||||||
for a picture like:
|
for a picture like:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
after applying the first Affine Transform we obtain:
|
after applying the first Affine Transform we obtain:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
and finally, after applying a negative rotation (remember negative means clockwise) and a scale
|
and finally, after applying a negative rotation (remember negative means clockwise) and a scale
|
||||||
factor, we get:
|
factor, we get:
|
||||||
|
|
||||||

|

|
||||||
|
Before Width: | Height: | Size: 60 KiB After Width: | Height: | Size: 60 KiB |
@ -16,8 +16,9 @@ In this tutorial you will learn how to:
|
|||||||
Theory
|
Theory
|
||||||
------
|
------
|
||||||
|
|
||||||
@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler. In the
|
@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.
|
||||||
previous tutorial we covered two basic Morphology operations:
|
|
||||||
|
In the previous tutorial we covered two basic Morphology operations:
|
||||||
|
|
||||||
- Erosion
|
- Erosion
|
||||||
- Dilation.
|
- Dilation.
|
||||||
@ -37,7 +38,7 @@ discuss briefly 05 operations offered by OpenCV:
|
|||||||
at the right is the result after applying the opening transformation. We can observe that the
|
at the right is the result after applying the opening transformation. We can observe that the
|
||||||
small spaces in the corners of the letter tend to dissapear.
|
small spaces in the corners of the letter tend to disappear.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
### Closing
|
### Closing
|
||||||
|
|
||||||
@ -47,7 +48,7 @@ discuss briefly 05 operations offered by OpenCV:
|
|||||||
|
|
||||||
- Useful to remove small holes (dark regions).
|
- Useful to remove small holes (dark regions).
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
### Morphological Gradient
|
### Morphological Gradient
|
||||||
|
|
||||||
@ -57,7 +58,7 @@ discuss briefly 05 operations offered by OpenCV:
|
|||||||
|
|
||||||
- It is useful for finding the outline of an object as can be seen below:
|
- It is useful for finding the outline of an object as can be seen below:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
### Top Hat
|
### Top Hat
|
||||||
|
|
||||||
@ -65,7 +66,7 @@ discuss briefly 05 operations offered by OpenCV:
|
|||||||
|
|
||||||
\f[dst = tophat( src, element ) = src - open( src, element )\f]
|
\f[dst = tophat( src, element ) = src - open( src, element )\f]
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
### Black Hat
|
### Black Hat
|
||||||
|
|
||||||
@ -73,7 +74,7 @@ discuss briefly 05 operations offered by OpenCV:
|
|||||||
|
|
||||||
\f[dst = blackhat( src, element ) = close( src, element ) - src\f]
|
\f[dst = blackhat( src, element ) = close( src, element ) - src\f]
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Code
|
Code
|
||||||
----
|
----
|
||||||
@ -150,10 +151,11 @@ void Morphology_Operations( int, void* )
|
|||||||
imshow( window_name, dst );
|
imshow( window_name, dst );
|
||||||
}
|
}
|
||||||
@endcode
|
@endcode
|
||||||
|
|
||||||
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
1. Let's check the general structure of the program:
|
-# Let's check the general structure of the program:
|
||||||
- Load an image
|
- Load an image
|
||||||
- Create a window to display results of the Morphological operations
|
- Create a window to display results of the Morphological operations
|
||||||
- Create 03 Trackbars for the user to enter parameters:
|
- Create 3 Trackbars for the user to enter parameters:
|
||||||
@ -185,17 +187,18 @@ Explanation
|
|||||||
/*
|
/*
|
||||||
* @function Morphology_Operations
|
* @function Morphology_Operations
|
||||||
*/
|
*/
|
||||||
@endcode
|
void Morphology_Operations( int, void* )
|
||||||
void Morphology_Operations( int, void\* ) { // Since MORPH_X : 2,3,4,5 and 6 int
|
{
|
||||||
operation = morph_operator + 2;
|
// Since MORPH_X : 2,3,4,5 and 6
|
||||||
|
int operation = morph_operator + 2;
|
||||||
Mat element = getStructuringElement( morph_elem, Size( 2\*morph_size + 1,
|
|
||||||
2\*morph_size+1 ), Point( morph_size, morph_size ) );
|
Mat element = getStructuringElement( morph_elem, Size( 2*morph_size + 1, 2*morph_size+1 ), Point( morph_size, morph_size ) );
|
||||||
|
|
||||||
/// Apply the specified morphology operation morphologyEx( src, dst, operation, element
|
/// Apply the specified morphology operation
|
||||||
); imshow( window_name, dst );
|
morphologyEx( src, dst, operation, element );
|
||||||
|
imshow( window_name, dst );
|
||||||
}
|
}
|
||||||
|
@endcode
|
||||||
|
|
||||||
We can observe that the key function to perform the morphology transformations is @ref
|
We can observe that the key function to perform the morphology transformations is @ref
|
||||||
cv::morphologyEx . In this example we use four arguments (leaving the rest as defaults):
|
cv::morphologyEx . In this example we use four arguments (leaving the rest as defaults):
|
||||||
@ -225,12 +228,10 @@ Results
|
|||||||
- After compiling the code above we can execute it giving an image path as an argument. For this
|
- After compiling the code above we can execute it giving an image path as an argument. For this
|
||||||
tutorial we use as input the image: **baboon.png**:
|
tutorial we use as input the image: **baboon.png**:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
- And here are two snapshots of the display window. The first picture shows the output after using
|
- And here are two snapshots of the display window. The first picture shows the output after using
|
||||||
the operator **Opening** with a cross kernel. The second picture (right side, shows the result
|
the operator **Opening** with a cross kernel. The second picture (right side) shows the result
|
||||||
of using a **Blackhat** operator with an ellipse kernel.
|
of using a **Blackhat** operator with an ellipse kernel.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
|
|
||||||
|
@ -276,6 +276,6 @@ Results
|
|||||||
|
|
||||||
* And here are two snapshots of the display window. The first picture shows the output after using the operator **Opening** with a cross kernel. The second picture (right side, shows the result of using a **Blackhat** operator with an ellipse kernel.
|
* And here are two snapshots of the display window. The first picture shows the output after using the operator **Opening** with a cross kernel. The second picture (right side) shows the result of using a **Blackhat** operator with an ellipse kernel.
|
||||||
|
|
||||||
.. image:: images/Morphology_2_Tutorial_Cover.jpg
|
.. image:: images/Morphology_2_Tutorial_Result.jpg
|
||||||
:alt: Morphology 2: Result sample
|
:alt: Morphology 2: Result sample
|
||||||
:align: center
|
:align: center
|
||||||
|
@@ -16,8 +16,8 @@ Theory
 
 - Usually we need to convert an image to a size different than its original. For this, there are
   two possible options:
-  1. *Upsize* the image (zoom in) or
-  2. *Downsize* it (zoom out).
+  -# *Upsize* the image (zoom in) or
+  -# *Downsize* it (zoom out).
 - Although there is a *geometric transformation* function in OpenCV that -literally- resize an
   image (@ref cv::resize , which we will show in a future tutorial), in this section we analyze
   first the use of **Image Pyramids**, which are widely applied in a huge range of vision
@@ -37,7 +37,7 @@ Theory
 
 - Imagine the pyramid as a set of layers in which the higher the layer, the smaller the size.
 
-![Pyramid figure](images/Pyramids_Tutorial_Pyramid_Theory.png)
+![](images/Pyramids_Tutorial_Pyramid_Theory.png)
 
 - Every layer is numbered from bottom to top, so layer \f$(i+1)\f$ (denoted as \f$G_{i+1}\f$) is smaller
   than layer \f$i\f$ (\f$G_{i}\f$).
@@ -162,14 +162,14 @@ Results
   that comes in the *tutorial_code/image* folder. Notice that this image is \f$512 \times 512\f$,
   hence a downsample won't generate any error (\f$512 = 2^{9}\f$). The original image is shown below:
 
-![Original image](images/Pyramids_Tutorial_Original_Image.jpg)
+![](images/Pyramids_Tutorial_Original_Image.jpg)
 
 - First we apply two successive @ref cv::pyrDown operations by pressing 'd'. Our output is:
 
-![Pyramid down result](images/Pyramids_Tutorial_PyrDown_Result.jpg)
+![](images/Pyramids_Tutorial_PyrDown_Result.jpg)
 
 - Note that we should have lost some resolution due to the fact that we are diminishing the size
   of the image. This is evident after we apply @ref cv::pyrUp twice (by pressing 'u'). Our output
   is now:
 
-![Pyramid up result](images/Pyramids_Tutorial_PyrUp_Result.jpg)
+![](images/Pyramids_Tutorial_PyrUp_Result.jpg)
 
@@ -17,96 +17,14 @@ Code
 
 This tutorial code is shown in the lines below. You can also download it from
 [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ShapeDescriptors/generalContours_demo1.cpp)
-@code{.cpp}
-#include "opencv2/highgui.hpp"
-#include "opencv2/imgproc.hpp"
-#include <iostream>
-#include <stdio.h>
-#include <stdlib.h>
-
-using namespace cv;
-using namespace std;
-
-Mat src; Mat src_gray;
-int thresh = 100;
-int max_thresh = 255;
-RNG rng(12345);
-
-/// Function header
-void thresh_callback(int, void* );
-
-/* @function main */
-int main( int argc, char** argv )
-{
-  /// Load source image and convert it to gray
-  src = imread( argv[1], 1 );
-
-  /// Convert image to gray and blur it
-  cvtColor( src, src_gray, COLOR_BGR2GRAY );
-  blur( src_gray, src_gray, Size(3,3) );
-
-  /// Create Window
-  char* source_window = "Source";
-  namedWindow( source_window, WINDOW_AUTOSIZE );
-  imshow( source_window, src );
-
-  createTrackbar( " Threshold:", "Source", &thresh, max_thresh, thresh_callback );
-  thresh_callback( 0, 0 );
-
-  waitKey(0);
-  return(0);
-}
-
-/* @function thresh_callback */
-void thresh_callback(int, void* )
-{
-  Mat threshold_output;
-  vector<vector<Point> > contours;
-  vector<Vec4i> hierarchy;
-
-  /// Detect edges using Threshold
-  threshold( src_gray, threshold_output, thresh, 255, THRESH_BINARY );
-  /// Find contours
-  findContours( threshold_output, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0) );
-
-  /// Approximate contours to polygons + get bounding rects and circles
-  vector<vector<Point> > contours_poly( contours.size() );
-  vector<Rect> boundRect( contours.size() );
-  vector<Point2f>center( contours.size() );
-  vector<float>radius( contours.size() );
-
-  for( int i = 0; i < contours.size(); i++ )
-  { approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
-    boundRect[i] = boundingRect( Mat(contours_poly[i]) );
-    minEnclosingCircle( (Mat)contours_poly[i], center[i], radius[i] );
-  }
-
-
-  /// Draw polygonal contour + bonding rects + circles
-  Mat drawing = Mat::zeros( threshold_output.size(), CV_8UC3 );
-  for( int i = 0; i< contours.size(); i++ )
-  {
-    Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
-    drawContours( drawing, contours_poly, i, color, 1, 8, vector<Vec4i>(), 0, Point() );
-    rectangle( drawing, boundRect[i].tl(), boundRect[i].br(), color, 2, 8, 0 );
-    circle( drawing, center[i], (int)radius[i], color, 2, 8, 0 );
-  }
-
-  /// Show in a window
-  namedWindow( "Contours", WINDOW_AUTOSIZE );
-  imshow( "Contours", drawing );
-}
-@endcode
+@includelineno samples/cpp/tutorial_code/ShapeDescriptors/generalContours_demo1.cpp
 
 Explanation
 -----------
 
 Result
 ------
 
-1. Here it is:
-  ---------- ----------
-  |BRC_0| |BRC_1|
-  ---------- ----------
+Here it is:
+![](images/Bounding_Rects_Circles_Source_Image.jpg)
+![](images/Bounding_Rects_Circles_Result.jpg)
 
 
@@ -17,98 +17,14 @@ Code
 
 This tutorial code is shown in the lines below. You can also download it from
 [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ShapeDescriptors/generalContours_demo2.cpp)
-@code{.cpp}
-#include "opencv2/highgui.hpp"
-#include "opencv2/imgproc.hpp"
-#include <iostream>
-#include <stdio.h>
-#include <stdlib.h>
-
-using namespace cv;
-using namespace std;
-
-Mat src; Mat src_gray;
-int thresh = 100;
-int max_thresh = 255;
-RNG rng(12345);
-
-/// Function header
-void thresh_callback(int, void* );
-
-/* @function main */
-int main( int argc, char** argv )
-{
-  /// Load source image and convert it to gray
-  src = imread( argv[1], 1 );
-
-  /// Convert image to gray and blur it
-  cvtColor( src, src_gray, COLOR_BGR2GRAY );
-  blur( src_gray, src_gray, Size(3,3) );
-
-  /// Create Window
-  char* source_window = "Source";
-  namedWindow( source_window, WINDOW_AUTOSIZE );
-  imshow( source_window, src );
-
-  createTrackbar( " Threshold:", "Source", &thresh, max_thresh, thresh_callback );
-  thresh_callback( 0, 0 );
-
-  waitKey(0);
-  return(0);
-}
-
-/* @function thresh_callback */
-void thresh_callback(int, void* )
-{
-  Mat threshold_output;
-  vector<vector<Point> > contours;
-  vector<Vec4i> hierarchy;
-
-  /// Detect edges using Threshold
-  threshold( src_gray, threshold_output, thresh, 255, THRESH_BINARY );
-  /// Find contours
-  findContours( threshold_output, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0) );
-
-  /// Find the rotated rectangles and ellipses for each contour
-  vector<RotatedRect> minRect( contours.size() );
-  vector<RotatedRect> minEllipse( contours.size() );
-
-  for( int i = 0; i < contours.size(); i++ )
-  { minRect[i] = minAreaRect( Mat(contours[i]) );
-    if( contours[i].size() > 5 )
-      { minEllipse[i] = fitEllipse( Mat(contours[i]) ); }
-  }
-
-  /// Draw contours + rotated rects + ellipses
-  Mat drawing = Mat::zeros( threshold_output.size(), CV_8UC3 );
-  for( int i = 0; i< contours.size(); i++ )
-  {
-    Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
-    // contour
-    drawContours( drawing, contours, i, color, 1, 8, vector<Vec4i>(), 0, Point() );
-    // ellipse
-    ellipse( drawing, minEllipse[i], color, 2, 8 );
-    // rotated rectangle
-    Point2f rect_points[4]; minRect[i].points( rect_points );
-    for( int j = 0; j < 4; j++ )
-      line( drawing, rect_points[j], rect_points[(j+1)%4], color, 1, 8 );
-  }
-
-  /// Show in a window
-  namedWindow( "Contours", WINDOW_AUTOSIZE );
-  imshow( "Contours", drawing );
-}
-@endcode
+@includelineno samples/cpp/tutorial_code/ShapeDescriptors/generalContours_demo2.cpp
 
 Explanation
 -----------
 
 Result
 ------
 
-1. Here it is:
-  ---------- ----------
-  |BRE_0| |BRE_1|
-  ---------- ----------
+Here it is:
+![](images/Bounding_Rotated_Ellipses_Source_Image.jpg)
+![](images/Bounding_Rotated_Ellipses_Result.jpg)
 
 
@@ -17,81 +17,14 @@ Code
 
 This tutorial code is shown in the lines below. You can also download it from
 [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ShapeDescriptors/findContours_demo.cpp)
-@code{.cpp}
-#include "opencv2/highgui.hpp"
-#include "opencv2/imgproc.hpp"
-#include <iostream>
-#include <stdio.h>
-#include <stdlib.h>
-
-using namespace cv;
-using namespace std;
-
-Mat src; Mat src_gray;
-int thresh = 100;
-int max_thresh = 255;
-RNG rng(12345);
-
-/// Function header
-void thresh_callback(int, void* );
-
-/* @function main */
-int main( int argc, char** argv )
-{
-  /// Load source image and convert it to gray
-  src = imread( argv[1], 1 );
-
-  /// Convert image to gray and blur it
-  cvtColor( src, src_gray, COLOR_BGR2GRAY );
-  blur( src_gray, src_gray, Size(3,3) );
-
-  /// Create Window
-  char* source_window = "Source";
-  namedWindow( source_window, WINDOW_AUTOSIZE );
-  imshow( source_window, src );
-
-  createTrackbar( " Canny thresh:", "Source", &thresh, max_thresh, thresh_callback );
-  thresh_callback( 0, 0 );
-
-  waitKey(0);
-  return(0);
-}
-
-/* @function thresh_callback */
-void thresh_callback(int, void* )
-{
-  Mat canny_output;
-  vector<vector<Point> > contours;
-  vector<Vec4i> hierarchy;
-
-  /// Detect edges using canny
-  Canny( src_gray, canny_output, thresh, thresh*2, 3 );
-  /// Find contours
-  findContours( canny_output, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0) );
-
-  /// Draw contours
-  Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
-  for( int i = 0; i< contours.size(); i++ )
-  {
-    Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
-    drawContours( drawing, contours, i, color, 2, 8, hierarchy, 0, Point() );
-  }
-
-  /// Show in a window
-  namedWindow( "Contours", WINDOW_AUTOSIZE );
-  imshow( "Contours", drawing );
-}
-@endcode
+@includelineno samples/cpp/tutorial_code/ShapeDescriptors/findContours_demo.cpp
 
 Explanation
 -----------
 
 Result
 ------
 
-1. Here it is:
-  -------------- --------------
-  |contour_0| |contour_1|
-  -------------- --------------
+Here it is:
+![](images/Find_Contours_Original_Image.jpg)
+![](images/Find_Contours_Result.jpg)
 
 
@@ -18,102 +18,15 @@ Code
 
 This tutorial code is shown in the lines below. You can also download it from
 [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ShapeDescriptors/moments_demo.cpp)
-@code{.cpp}
-#include "opencv2/highgui.hpp"
-#include "opencv2/imgproc.hpp"
-#include <iostream>
-#include <stdio.h>
-#include <stdlib.h>
-
-using namespace cv;
-using namespace std;
-
-Mat src; Mat src_gray;
-int thresh = 100;
-int max_thresh = 255;
-RNG rng(12345);
-
-/// Function header
-void thresh_callback(int, void* );
-
-/* @function main */
-int main( int argc, char** argv )
-{
-  /// Load source image and convert it to gray
-  src = imread( argv[1], 1 );
-
-  /// Convert image to gray and blur it
-  cvtColor( src, src_gray, COLOR_BGR2GRAY );
-  blur( src_gray, src_gray, Size(3,3) );
-
-  /// Create Window
-  char* source_window = "Source";
-  namedWindow( source_window, WINDOW_AUTOSIZE );
-  imshow( source_window, src );
-
-  createTrackbar( " Canny thresh:", "Source", &thresh, max_thresh, thresh_callback );
-  thresh_callback( 0, 0 );
-
-  waitKey(0);
-  return(0);
-}
-
-/* @function thresh_callback */
-void thresh_callback(int, void* )
-{
-  Mat canny_output;
-  vector<vector<Point> > contours;
-  vector<Vec4i> hierarchy;
-
-  /// Detect edges using canny
-  Canny( src_gray, canny_output, thresh, thresh*2, 3 );
-  /// Find contours
-  findContours( canny_output, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0) );
-
-  /// Get the moments
-  vector<Moments> mu(contours.size() );
-  for( int i = 0; i < contours.size(); i++ )
-  { mu[i] = moments( contours[i], false ); }
-
-  /// Get the mass centers:
-  vector<Point2f> mc( contours.size() );
-  for( int i = 0; i < contours.size(); i++ )
-  { mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }
-
-  /// Draw contours
-  Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
-  for( int i = 0; i< contours.size(); i++ )
-  {
-    Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
-    drawContours( drawing, contours, i, color, 2, 8, hierarchy, 0, Point() );
-    circle( drawing, mc[i], 4, color, -1, 8, 0 );
-  }
-
-  /// Show in a window
-  namedWindow( "Contours", WINDOW_AUTOSIZE );
-  imshow( "Contours", drawing );
-
-  /// Calculate the area with the moments 00 and compare with the result of the OpenCV function
-  printf("\t Info: Area and Contour Length \n");
-  for( int i = 0; i< contours.size(); i++ )
-  {
-    printf(" * Contour[%d] - Area (M_00) = %.2f - Area OpenCV: %.2f - Length: %.2f \n", i, mu[i].m00, contourArea(contours[i]), arcLength( contours[i], true ) );
-    Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
-    drawContours( drawing, contours, i, color, 2, 8, hierarchy, 0, Point() );
-    circle( drawing, mc[i], 4, color, -1, 8, 0 );
-  }
-}
-@endcode
+@includelineno samples/cpp/tutorial_code/ShapeDescriptors/moments_demo.cpp
 
 Explanation
 -----------
 
 Result
 ------
 
-1. Here it is:
-  --------- --------- ---------
-  |MU_0| |MU_1| |MU_2|
-  --------- --------- ---------
+Here it is:
+![](images/Moments_Source_Image.jpg)
+![](images/Moments_Result1.jpg)
+![](images/Moments_Result2.jpg)
 
 
@@ -16,91 +16,14 @@ Code
 
 This tutorial code is shown in the lines below. You can also download it from
 [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ShapeDescriptors/pointPolygonTest_demo.cpp)
-@code{.cpp}
-#include "opencv2/highgui.hpp"
-#include "opencv2/imgproc.hpp"
-#include <iostream>
-#include <stdio.h>
-#include <stdlib.h>
-
-using namespace cv;
-using namespace std;
-
-/* @function main */
-int main( int argc, char** argv )
-{
-  /// Create an image
-  const int r = 100;
-  Mat src = Mat::zeros( Size( 4*r, 4*r ), CV_8UC1 );
-
-  /// Create a sequence of points to make a contour:
-  vector<Point2f> vert(6);
-
-  vert[0] = Point( 1.5*r, 1.34*r );
-  vert[1] = Point( 1*r, 2*r );
-  vert[2] = Point( 1.5*r, 2.866*r );
-  vert[3] = Point( 2.5*r, 2.866*r );
-  vert[4] = Point( 3*r, 2*r );
-  vert[5] = Point( 2.5*r, 1.34*r );
-
-  /// Draw it in src
-  for( int j = 0; j < 6; j++ )
-  { line( src, vert[j], vert[(j+1)%6], Scalar( 255 ), 3, 8 ); }
-
-  /// Get the contours
-  vector<vector<Point> > contours; vector<Vec4i> hierarchy;
-  Mat src_copy = src.clone();
-
-  findContours( src_copy, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE);
-
-  /// Calculate the distances to the contour
-  Mat raw_dist( src.size(), CV_32FC1 );
-
-  for( int j = 0; j < src.rows; j++ )
-  { for( int i = 0; i < src.cols; i++ )
-    { raw_dist.at<float>(j,i) = pointPolygonTest( contours[0], Point2f(i,j), true ); }
-  }
-
-  double minVal; double maxVal;
-  minMaxLoc( raw_dist, &minVal, &maxVal, 0, 0, Mat() );
-  minVal = abs(minVal); maxVal = abs(maxVal);
-
-  /// Depicting the distances graphically
-  Mat drawing = Mat::zeros( src.size(), CV_8UC3 );
-
-  for( int j = 0; j < src.rows; j++ )
-  { for( int i = 0; i < src.cols; i++ )
-    {
-      if( raw_dist.at<float>(j,i) < 0 )
-        { drawing.at<Vec3b>(j,i)[0] = 255 - (int) abs(raw_dist.at<float>(j,i))*255/minVal; }
-      else if( raw_dist.at<float>(j,i) > 0 )
-        { drawing.at<Vec3b>(j,i)[2] = 255 - (int) raw_dist.at<float>(j,i)*255/maxVal; }
-      else
-        { drawing.at<Vec3b>(j,i)[0] = 255; drawing.at<Vec3b>(j,i)[1] = 255; drawing.at<Vec3b>(j,i)[2] = 255; }
-    }
-  }
-
-  /// Create Window and show your results
-  char* source_window = "Source";
-  namedWindow( source_window, WINDOW_AUTOSIZE );
-  imshow( source_window, src );
-  namedWindow( "Distance", WINDOW_AUTOSIZE );
-  imshow( "Distance", drawing );
-
-  waitKey(0);
-  return(0);
-}
-@endcode
+@includelineno samples/cpp/tutorial_code/ShapeDescriptors/pointPolygonTest_demo.cpp
 
 Explanation
 -----------
 
 Result
 ------
 
-1. Here it is:
-  ---------- ----------
-  |PPT_0| |PPT_1|
-  ---------- ----------
+Here it is:
+![](images/Point_Polygon_Test_Source_Image.png)
+![](images/Point_Polygon_Test_Result.jpg)
 
 
@@ -12,7 +12,9 @@ Cool Theory
 -----------
 
 @note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler. What is
-Thresholding? -----------------------
+
+Thresholding?
+-------------
 
 - The simplest segmentation method
 - Application example: Separate out regions of an image corresponding to objects which we want to
@@ -25,7 +27,7 @@ Thresholding? -----------------------
   identify them (i.e. we can assign them a value of \f$0\f$ (black), \f$255\f$ (white) or any value that
   suits your needs).
 
-![Threshold simple example](images/Threshold_Tutorial_Theory_Example.jpg)
+![](images/Threshold_Tutorial_Theory_Example.jpg)
 
 ### Types of Thresholding
 
@@ -36,7 +38,7 @@ Thresholding? -----------------------
   with pixels with intensity values \f$src(x,y)\f$. The plot below depicts this. The horizontal blue
   line represents the threshold \f$thresh\f$ (fixed).
 
-![Threshold theory base figure](images/Threshold_Tutorial_Theory_Base_Figure.png)
+![](images/Threshold_Tutorial_Theory_Base_Figure.png)
 
 #### Threshold Binary
 
@@ -47,7 +49,7 @@ Thresholding? -----------------------
 - So, if the intensity of the pixel \f$src(x,y)\f$ is higher than \f$thresh\f$, then the new pixel
   intensity is set to a \f$MaxVal\f$. Otherwise, the pixels are set to \f$0\f$.
 
-![Threshold binary figure](images/Threshold_Tutorial_Theory_Binary.png)
+![](images/Threshold_Tutorial_Theory_Binary.png)
 
 #### Threshold Binary, Inverted
 
@@ -58,7 +60,7 @@ Thresholding? -----------------------
 - If the intensity of the pixel \f$src(x,y)\f$ is higher than \f$thresh\f$, then the new pixel intensity
   is set to a \f$0\f$. Otherwise, it is set to \f$MaxVal\f$.
 
-![Threshold binary inverted figure](images/Threshold_Tutorial_Theory_Binary_Inverted.png)
+![](images/Threshold_Tutorial_Theory_Binary_Inverted.png)
 
 #### Truncate
 
@@ -69,7 +71,7 @@ Thresholding? -----------------------
 - The maximum intensity value for the pixels is \f$thresh\f$, if \f$src(x,y)\f$ is greater, then its value
   is *truncated*. See figure below:
 
-![Threshold truncate figure](images/Threshold_Tutorial_Theory_Truncate.png)
+![](images/Threshold_Tutorial_Theory_Truncate.png)
 
 #### Threshold to Zero
 
@@ -79,7 +81,7 @@ Thresholding? -----------------------
 
 - If \f$src(x,y)\f$ is lower than \f$thresh\f$, the new pixel value will be set to \f$0\f$.
 
-![Threshold to zero figure](images/Threshold_Tutorial_Theory_Zero.png)
+![](images/Threshold_Tutorial_Theory_Zero.png)
 
 #### Threshold to Zero, Inverted
 
@@ -89,97 +91,19 @@ Thresholding? -----------------------
 
 - If \f$src(x,y)\f$ is greater than \f$thresh\f$, the new pixel value will be set to \f$0\f$.
 
-![Threshold to zero inverted figure](images/Threshold_Tutorial_Theory_Zero_Inverted.png)
+![](images/Threshold_Tutorial_Theory_Zero_Inverted.png)
 
 Code
 ----
 
 The tutorial code is shown in the lines below. You can also download it from
 [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgProc/Threshold.cpp)
-@code{.cpp}
-#include "opencv2/imgproc.hpp"
-#include "opencv2/highgui.hpp"
-#include <stdlib.h>
-#include <stdio.h>
-
-using namespace cv;
-
-/// Global variables
-
-int threshold_value = 0;
-int threshold_type = 3;;
-int const max_value = 255;
-int const max_type = 4;
-int const max_BINARY_value = 255;
-
-Mat src, src_gray, dst;
-char* window_name = "Threshold Demo";
-
-char* trackbar_type = "Type: \n 0: Binary \n 1: Binary Inverted \n 2: Truncate \n 3: To Zero \n 4: To Zero Inverted";
-char* trackbar_value = "Value";
-
-/// Function headers
-void Threshold_Demo( int, void* );
-
-/*
- * @function main
- */
-int main( int argc, char** argv )
-{
-  /// Load an image
-  src = imread( argv[1], 1 );
-
-  /// Convert the image to Gray
-  cvtColor( src, src_gray, COLOR_RGB2GRAY );
-
-  /// Create a window to display results
-  namedWindow( window_name, WINDOW_AUTOSIZE );
-
-  /// Create Trackbar to choose type of Threshold
-  createTrackbar( trackbar_type,
-                  window_name, &threshold_type,
-                  max_type, Threshold_Demo );
-
-  createTrackbar( trackbar_value,
-                  window_name, &threshold_value,
-                  max_value, Threshold_Demo );
-
-  /// Call the function to initialize
-  Threshold_Demo( 0, 0 );
-
-  /// Wait until user finishes program
-  while(true)
-  {
-    int c;
-    c = waitKey( 20 );
-    if( (char)c == 27 )
-      { break; }
-  }
-
-}
-
-
-/*
- * @function Threshold_Demo
- */
-void Threshold_Demo( int, void* )
-{
-  /* 0: Binary
-     1: Binary Inverted
-     2: Threshold Truncated
-     3: Threshold to Zero
-     4: Threshold to Zero Inverted
-   */
-
-  threshold( src_gray, dst, threshold_value, max_BINARY_value,threshold_type );
-
-  imshow( window_name, dst );
-}
-@endcode
+@includelineno samples/cpp/tutorial_code/ImgProc/Threshold.cpp
 
 Explanation
 -----------
 
-1. Let's check the general structure of the program:
+-# Let's check the general structure of the program:
   - Load an image. If it is RGB we convert it to Grayscale. For this, remember that we can use
     the function @ref cv::cvtColor :
     @code{.cpp}
@@ -241,23 +165,21 @@ Explanation
 Results
 -------
 
-1. After compiling this program, run it giving a path to an image as argument. For instance, for an
+-# After compiling this program, run it giving a path to an image as argument. For instance, for an
   input image as:
 
-![Original image](images/Threshold_Tutorial_Original_Image.jpg)
+![](images/Threshold_Tutorial_Original_Image.jpg)
 
-2. First, we try to threshold our image with a *binary threhold inverted*. We expect that the
+-# First, we try to threshold our image with a *binary threshold inverted*. We expect that the
   pixels brighter than the \f$thresh\f$ will turn dark, which is what actually happens, as we can see
   in the snapshot below (notice from the original image, that the doggie's tongue and eyes are
   particularly bright in comparison with the image, this is reflected in the output image).
 
-![Binary inverted result](images/Threshold_Tutorial_Binary_Inverted.jpg)
+![](images/Threshold_Tutorial_Binary_Inverted.jpg)
 
-3. Now we try with the *threshold to zero*. With this, we expect that the darkest pixels (below the
+-# Now we try with the *threshold to zero*. With this, we expect that the darkest pixels (below the
   threshold) will become completely black, whereas the pixels with value greater than the
   threshold will keep their original value. This is verified by the following snapshot of the output
   image:
 
-![Threshold to zero result](images/Threshold_Tutorial_Zero.jpg)
+![](images/Threshold_Tutorial_Zero.jpg)
 
@@ -107,19 +107,19 @@ Manual OpenCV4Android SDK setup
 
 ### Get the OpenCV4Android SDK
 
-1. Go to the [OpenCV download page on
+-# Go to the [OpenCV download page on
   SourceForge](http://sourceforge.net/projects/opencvlibrary/files/opencv-android/) and download
   the latest available version. Currently it's [OpenCV-2.4.9-android-sdk.zip](http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.4.9/OpenCV-2.4.9-android-sdk.zip/download).
-2. Create a new folder for Android with OpenCV development. For this tutorial we have unpacked
+-# Create a new folder for Android with OpenCV development. For this tutorial we have unpacked
   OpenCV SDK to the `C:\Work\OpenCV4Android\` directory.
 
 @note Better to use a path without spaces in it. Otherwise you may have problems with ndk-build.
 
-3. Unpack the SDK archive into the chosen directory.
+-# Unpack the SDK archive into the chosen directory.
 
-  You can unpack it using any popular archiver (e.g with 7-Zip_):
+  You can unpack it using any popular archiver (e.g. with 7-Zip):
 
 ![](images/android_package_7zip.png)
 
 On Unix you can use the following command:
 @code{.bash}
@ -128,15 +128,15 @@ Manual OpenCV4Android SDK setup
|
|||||||
|
|
||||||
### Import OpenCV library and samples into Eclipse

-#  Start Eclipse and choose your workspace location.

    We recommend starting your work with OpenCV for Android from a new, clean workspace. A new
    Eclipse workspace can, for example, be created in the folder where you have unpacked the
    OpenCV4Android SDK package:

    ![](images/eclipse_1_choose_workspace.png)

-#  Import OpenCV library and samples into the workspace.

    OpenCV library is packed as a ready-for-use [Android Library
    Project](http://developer.android.com/guide/developing/projects/index.html#LibraryProjects). You
    already references OpenCV library. Follow the steps below to import OpenCV and samples into the
    workspace:

    -   Right click on the Package Explorer window and choose Import... option from the context
        menu:

        ![](images/eclipse_5_import_command.png)

    -   In the main panel select General --\> Existing Projects into Workspace and press Next
        button:

        ![](images/eclipse_6_import_existing_projects.png)

    -   In the Select root directory field locate your OpenCV package folder. Eclipse should
        automatically locate OpenCV library and samples:

        ![](images/eclipse_7_select_projects.png)

    -   Click Finish button to complete the import operation.

    @note OpenCV samples are indeed **dependent** on OpenCV library project, so don't forget to import it into your workspace as well.

    After clicking the Finish button Eclipse will load all selected projects into the workspace,
    and you will have to wait a while as it builds the OpenCV samples. Just give Eclipse a minute
    to complete initialization.

    ![](images/eclipse_cdt_cfg8.png)

    Once Eclipse completes the build you will have a clean workspace without any build errors:

    ![](images/eclipse_10_crystal_clean.png)

@anchor tutorial_O4A_SDK_samples
### Running OpenCV Samples
@note Android Emulator can take several minutes to start. So, please, be patient.

-   On the first run Eclipse will ask you about the running mode for your application:

    ![](images/eclipse_11_run_as.png)

-   Select the Android Application option and click the OK button. Eclipse will install and run the
    sample.
    Manager](https://docs.google.com/a/itseez.com/presentation/d/1EO_1kijgBg_BsjNp2ymk-aarg-0K279_1VZRcPplSuk/present#slide=id.p)
    package installed. In this case you will see the following message:

    ![](images/android_emulator_opencv_manager_fail.png)

    To get rid of the message you will need to install OpenCV Manager and the appropriate
    OpenCV binary pack. Simply tap Yes if you have *Google Play Market* installed on your
    @code{.sh}
    <Android SDK path>/platform-tools/adb install <OpenCV4Android SDK path>/apk/OpenCV_2.4.9_Manager_2.18_armv7a-neon.apk
    @endcode

@note armeabi, armv7a-neon, arm7a-neon-android8, mips and x86 stand for platform targets:
-   armeabi is for ARM v5 and ARM v6 architectures with Android API 8+,
-   armv7a-neon is for NEON-optimized ARM v7 with Android API 9+,
-   arm7a-neon-android8 is for NEON-optimized ARM v7 with Android API 8,
-   mips is for MIPS architecture with Android API 9+,
-   x86 is for Intel x86 CPUs with Android API 9+.

@note
If you are using a hardware device for testing/debugging, run the following command to learn its
CPU architecture:
@code{.sh}
Click Edit in the context menu of the selected device. In the window which then pops up, find
the CPU field.

@note
You may also see the `Manager Selection` section for details.

When done, you will be able to run OpenCV samples on your device/emulator seamlessly.
-   Here is the image-manipulations sample, running on top of the stock camera preview of the
    emulator.

    ![](images/emulator_canny.png)

What's next
-----------
starting programming for Android we recommend you make sure that you are familiar with the following
key topics:

-#  [Java](http://en.wikipedia.org/wiki/Java_(programming_language)) programming language, which is
    the primary development technology for Android OS. Also, you can find the [Oracle docs on
    Java](http://docs.oracle.com/javase/) useful.
-#  [Java Native Interface (JNI)](http://en.wikipedia.org/wiki/Java_Native_Interface), which is a
    technology for running native code in a Java virtual machine. Also, you can find the [Oracle
    docs on JNI](http://docs.oracle.com/javase/7/docs/technotes/guides/jni/) useful.
-#  [Android
    Activity](http://developer.android.com/training/basics/activity-lifecycle/starting.html) and its
    lifecycle, which is an essential Android API class.
-#  OpenCV development will certainly require some knowledge of the [Android
    Camera](http://developer.android.com/guide/topics/media/camera.html) specifics.

Quick environment setup for Android development
-----------------------------------------------

If you are a beginner in Android development then we also recommend that you start with TADP.

@note *NVIDIA*'s Tegra Android Development Pack includes some special features for *NVIDIA*'s [Tegra
platform](http://www.nvidia.com/object/tegra-3-processor.html) but its use is not limited to *Tegra*
devices only.

-   You need at least *1.6 Gb* of free disk space for the install.
-   TADP will download Android SDK platforms and Android NDK from Google's server, so an Internet
    connection is required for the installation.
-   TADP may ask you to flash your development kit at the end of the installation process. Just skip
    this step if you have no [Tegra Development Kit](http://developer.nvidia.com/mobile/tegra-hardware-sales-inquiries).
-   (UNIX) TADP will ask you for *root* in the middle of installation, so you need to be a member of
    the *sudo* group.

You need the following software to be installed in order to develop for Android in Java:

-#  **Sun JDK 6** (Sun JDK 7 is also possible)

    Visit the [Java SE Downloads page](http://www.oracle.com/technetwork/java/javase/downloads/) and
    download an installer for your OS.
    guide](http://source.android.com/source/initializing.html#installing-the-jdk) for Ubuntu and Mac
    OS (only the JDK sections are applicable for OpenCV).

    @note OpenJDK is not suitable for Android development, since the Android SDK supports only Sun JDK. If you use Ubuntu, after installation of Sun JDK you should run the following command to set the Sun Java environment:
    @code{.bash}
    sudo update-java-alternatives --set java-6-sun
    @endcode

-#  **Android SDK**

    Get the latest Android SDK from <http://developer.android.com/sdk/index.html>

    Here is Google's [install guide](http://developer.android.com/sdk/installing.html) for the SDK.

    @note You can choose to download the **ADT Bundle package**, which in addition to Android SDK Tools
    includes Eclipse + ADT + NDK/CDT plugins, Android Platform-tools, the latest Android platform and
    the latest Android system image for the emulator - this is the best choice for those who are
    setting up an Android development environment for the first time!

    @note If you are running the x64 version of Ubuntu Linux, then you need the ia32 shared libraries for use on amd64 and ia64 systems to be installed. You can install them with the following command:
    @code{.bash}
    sudo apt-get install ia32-libs
    @endcode
    For Red Hat based systems the following command might be helpful:
    @code{.bash}
    sudo yum install libXtst.i386
    @endcode

-#  **Android SDK components**

    You need the following SDK components to be installed:

    successful compilation the **target** platform should be set to Android 3.0 (API 11) or
    higher. It will not prevent them from running on Android 2.2.

    ![](images/android_sdk_and_avd_manager.png)

    See [Adding Platforms and
    Packages](http://developer.android.com/sdk/installing/adding-packages.html) for help with
    installing/updating SDK components.

-#  **Eclipse IDE**

    Check the [Android SDK System Requirements](http://developer.android.com/sdk/requirements.html)
    document for a list of Eclipse versions that are compatible with the Android SDK. For OpenCV
    If you do not have Eclipse installed, you can get it from the [official
    site](http://www.eclipse.org/downloads/).

-#  **ADT plugin for Eclipse**

    These instructions are copied from the [Android Developers
    site](http://developer.android.com/sdk/installing/installing-adt.html), check it out in case of
    Assuming that you have Eclipse IDE installed, as described above, follow these steps to download
    and install the ADT plugin:

    -#  Start Eclipse, then select Help --\> Install New Software...
    -#  Click Add (in the top-right corner).
    -#  In the Add Repository dialog that appears, enter "ADT Plugin" for the Name and the following
        URL for the Location: <https://dl-ssl.google.com/android/eclipse/>
    -#  Click OK.

    @note If you have trouble acquiring the plugin, try using "http" in the Location URL instead of "https" (https is preferred for security reasons).

    -#  In the Available Software dialog, select the checkbox next to Developer Tools and click Next.
    -#  In the next window, you'll see a list of the tools to be downloaded. Click Next.

    @note If you also plan to develop native C++ code with the Android NDK, don't forget to enable NDK Plugins installation as well.

        ![](images/eclipse_inst_adt.png)

    -#  Read and accept the license agreements, then click Finish.

    @note If you get a security warning saying that the authenticity or validity of the software can't be established, click OK.

    -#  When the installation completes, restart Eclipse.

### Native development in C++

You need the following software to be installed in order to develop for Android in C++:

-#  **Android NDK**

    To compile C++ code for the Android platform you need the Android Native Development Kit (*NDK*).

    extract the archive to some folder on your computer. Here are [installation
    instructions](http://developer.android.com/tools/sdk/ndk/index.html#Installing).

    @note Before you start, you can read the official Android NDK documentation, which is in the
    Android NDK archive, in the `docs/` folder. The main article about using the Android NDK build
    system is in the `ANDROID-MK.html` file. Some additional information can be found in the
    `APPLICATION-MK.html` and `NDK-BUILD.html` files, and in `CPU-ARM-NEON.html`,
    `CPLUSPLUS-SUPPORT.html` and `PREBUILTS.html`.

-#  **CDT plugin for Eclipse**

    If you selected the NDK plugins component of the Eclipse ADT plugin for installation (see the
    picture above), your Eclipse IDE should already have the CDT plugin (CDT stands for C/C++
    Development Tooling). There are several possible ways to integrate compilation of C++ code by
    the Android NDK into the Eclipse compilation process. We recommend the approach based on the
    Eclipse CDT (C/C++ Development Tooling) Builder.

Android application structure
-----------------------------
APP_STL := gnustl_static
APP_CPPFLAGS := -frtti -fexceptions
APP_ABI := all
@endcode

@note We recommend setting APP_ABI := all for all targets. If you want to specify the target
explicitly, use armeabi for ARMv5/ARMv6, armeabi-v7a for ARMv7, x86 for Intel Atom or mips for MIPS.

not really supported and we are unlikely to help you in case you encounter some problems with
it. So, use it only if you're capable of handling the consequences yourself.

-#  Open a console and go to the root folder of the Android application
    @code{.bash}
    cd <root folder of the project>/
    @endcode
-#  Run the following command
    @code{.bash}
    <path_where_NDK_is_placed>/ndk-build
    @endcode

    @note On Windows we recommend using ndk-build.cmd in the standard Windows console (cmd.exe) rather than the similar bash script in a Cygwin shell.

    ![](images/ndk_build.png)

-#  After executing this command the C++ part of the source code is compiled.

    After that, the Java part of the application can be (re)compiled (using either *Eclipse* or the
    *Ant* build tool).
The OpenCV for Android package since version 2.4.2 contains sample projects with pre-configured
CDT Builders. For your own projects follow the steps below.

-#  Define the NDKROOT environment variable containing the path to the Android NDK in your system
    (e.g. "X:\\Apps\\android-ndk-r8" or "/opt/android-ndk-r8").

    **On Windows** an environment variable can be set via
    My Computer -\> Properties -\> Advanced -\> Environment variables. On Windows 7 it's also
**On Linux** and **MacOS** an environment variable can be set via appending a
|
**On Linux** and **MacOS** an environment variable can be set via appending a
|
||||||
"export VAR_NAME=VAR_VALUE" line to the `"~/.bashrc"` file and logging off and then on.
|
"export VAR_NAME=VAR_VALUE" line to the `"~/.bashrc"` file and logging off and then on.
|
||||||
|
|
||||||
@note It's also possible to define the NDKROOT environment variable within Eclipse IDE, but it
|
@note It's also possible to define the NDKROOT environment variable within Eclipse IDE, but it
|
||||||
should be done for every new workspace you create. If you prefer this option better than setting
|
should be done for every new workspace you create. If you prefer this option better than setting
|
||||||
system environment variable, open Eclipse menu
|
system environment variable, open Eclipse menu
|
||||||
Window -\> Preferences -\> C/C++ -\> Build -\> Environment, press the Add... button and set variable
|
Window -\> Preferences -\> C/C++ -\> Build -\> Environment, press the Add... button and set variable
|
||||||
name to NDKROOT and value to local Android NDK path. \#. After that you need to **restart Eclipse**
|
name to NDKROOT and value to local Android NDK path. \#. After that you need to **restart Eclipse**
|
||||||
to apply the changes.
|
to apply the changes.
|
||||||
|
|
||||||
1. Open Eclipse and load the Android app project to configure.
|
-# Open Eclipse and load the Android app project to configure.
|
||||||
2. Add C/C++ Nature to the project via Eclipse menu
|
|
||||||
|
-# Add C/C++ Nature to the project via Eclipse menu
|
||||||
New -\> Other -\> C/C++ -\> Convert to a C/C++ Project.
|
New -\> Other -\> C/C++ -\> Convert to a C/C++ Project.
|
||||||
|

|
||||||

|
|
||||||
|
|
||||||
And:
|
And:
|
||||||
|

|
||||||
|
|
||||||

|
-# Select the project(s) to convert. Specify "Project type" = Makefile project, "Toolchains" =
|
||||||
|
|
||||||
-# Select the project(s) to convert. Specify "Project type" = Makefile project, "Toolchains" =
    Other Toolchain.

    

-# Open Project Properties -\> C/C++ Build, uncheck Use default build command, replace the "Build
    command" text from "make" to
    "${NDKROOT}/ndk-build.cmd" on Windows,
    "${NDKROOT}/ndk-build" on Linux and MacOS.

    

-# Go to Behaviour tab and change "Workbench build type" section like shown below:

    

-# Press OK and make sure the ndk-build is successfully invoked when building the project.

    

-# If you open your C++ source file in Eclipse editor, you'll see syntax error notifications. They
    are not real errors, but additional CDT configuring is required.

    

-# Open Project Properties -\> C/C++ General -\> Paths and Symbols and add the following
    **Include** paths for **C++**:
    @code
    # for NDK r8 and prior:
    ${NDKROOT}/platforms/android-9/arch-arm/usr/include
    ${NDKROOT}/sources/cxx-stl/gnu-libstdc++/include
    ${NDKROOT}/sources/cxx-stl/gnu-libstdc++/libs/armeabi-v7a/include
    ${ProjDirPath}/../../sdk/native/jni/include

    # for NDK r8b and later:
    ${NDKROOT}/platforms/android-9/arch-arm/usr/include
    ${NDKROOT}/sources/cxx-stl/gnu-libstdc++/4.6/include
    ${NDKROOT}/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi-v7a/include
    ${ProjDirPath}/../../sdk/native/jni/include
    @endcode
    The last path should be changed to the correct absolute or relative path to the OpenCV4Android
    SDK location.

    This should clear the syntax error notifications in the Eclipse C++ editor.

    

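A quick way to sanity-check these include paths before pasting them into Eclipse is to expand the `${NDKROOT}` placeholder in a shell and verify that each directory exists. The helper below is only a sketch with a hypothetical name; it assumes a POSIX shell and that you pass the NDK root explicitly.

```shell
# Hypothetical helper: expand the ${NDKROOT} placeholder used by the Eclipse
# settings and report any include directory that does not exist on disk.
check_include_paths() {
  ndkroot="$1"; shift
  missing=0
  for p in "$@"; do
    # Substitute the placeholder with the real NDK location.
    resolved=$(printf '%s\n' "$p" | sed "s|\${NDKROOT}|$ndkroot|")
    if [ ! -d "$resolved" ]; then
      echo "missing: $resolved"
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}
```

For example, `check_include_paths "$NDKROOT" '${NDKROOT}/platforms/android-9/arch-arm/usr/include'` prints nothing when the directory is present.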
Debugging and Testing
---------------------

@ -386,18 +389,18 @@ hardware device for testing and debugging an Android project.

AVD (*Android Virtual Device*) is probably not the most convenient way to test an OpenCV-dependent
application, but surely the most uncomplicated one to configure.

-# Assuming you already have *Android SDK* and *Eclipse IDE* installed, in Eclipse go
    Window -\> AVD Manager.
-# Press the New button in AVD Manager window.
-# The Create new Android Virtual Device window will let you select some properties for your new
    device, like target API level, size of the SD-card, and other properties.

    

-# When you click the Create AVD button, your new AVD will be available in AVD Manager.
-# Press Start to launch the device. Be aware that any AVD (a.k.a. Emulator) is usually much slower
    than a hardware Android device, so it may take up to several minutes to start.
-# Go to Run -\> Run/Debug in Eclipse IDE to run your application in regular or debugging mode.
    Device Chooser will let you choose among the running devices or start a new one.

### Hardware Device

@ -412,86 +415,77 @@ instructions](http://developer.android.com/tools/device.html) for more informati

#### Windows host computer

-# Enable USB debugging on the Android device (via Settings menu).
-# Attach the Android device to your PC with a USB cable.
-# Go to Start Menu and **right-click** on Computer. Select Manage in the context menu. You may be
    asked for Administrative permissions.
-# Select Device Manager in the left pane and find an unknown device in the list. You may try
    unplugging it and then plugging it back in to check whether it is your exact equipment that
    appears in the list.

    

-# Try your luck installing Google USB drivers without any modifications: **right-click** on the
    unknown device, select Properties menu item --\> Details tab --\> Update Driver button.

    

-# Select Browse computer for driver software.

    

-# Specify the path to `<Android SDK folder>/extras/google/usb_driver/` folder.

    

-# If you get the prompt to install unverified drivers and a report about success, you've finished
    with the USB driver installation.

    

    

-# Otherwise (if you get a failure like the one shown below) follow the next steps.

    

-# Again **right-click** on the unknown device, select Properties --\> Details --\> Hardware Ids
    and copy the line like `USB\VID_XXXX&PID_XXXX&MI_XX`.

    

-# Now open file `<Android SDK folder>/extras/google/usb_driver/android_winusb.inf`. Select either
    Google.NTx86 or Google.NTamd64 section depending on your host system architecture.

    

-# There should be a record like the existing ones for your device, and you need to add one
    manually.

    

-# Save the `android_winusb.inf` file and try to install the USB driver again.

    

    

    

-# This time installation should go successfully.

    

    

-# And the unknown device is now recognized as an Android phone.

    

-# Successful device USB connection can be verified in console via the adb devices command.

    

-# Now, in Eclipse go Run -\> Run/Debug to run your application in regular or debugging mode.
    Device Chooser will let you choose among the devices.

#### Linux host computer

@ -507,7 +501,7 @@ SUBSYSTEM=="usb", ATTR{idVendor}=="1004", MODE="0666", GROUP="plugdev"

Then restart your adb server (even better, restart the system), plug in your Android device and
execute the adb devices command. You will see the list of attached devices:


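Scripts that deploy or test on a device often need to know whether anything is attached before proceeding. The helper below is a hypothetical sketch: it parses the `adb devices` output format shown above (a header line followed by `<serial>\t<state>` rows) and counts the entries in the `device` state.

```shell
# Hypothetical helper: count entries in the "device" state from `adb devices`
# output supplied on stdin (skips the "List of devices attached" header line).
count_attached_devices() {
  awk 'NR > 1 && $2 == "device" { n++ } END { print n + 0 }'
}

# Typical use: adb devices | count_attached_devices
```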
#### Mac OS host computer

@ -38,17 +38,17 @@ OpenCV. You can get more information here: `Android OpenCV Manager` and in these
Using async initialization is the **recommended** way for application development. It uses the
OpenCV Manager to access OpenCV libraries externally installed in the target system.

-# Add OpenCV library project to your workspace. Use menu
    File -\> Import -\> Existing project in your workspace.

    Press Browse button and locate OpenCV4Android SDK (`OpenCV-2.4.9-android-sdk/sdk`).

    

-# In the application project add a reference to the OpenCV Java SDK in
    Project -\> Properties -\> Android -\> Library -\> Add select OpenCV Library - 2.4.9.

    

In most cases OpenCV Manager may be installed automatically from Google Play. For the case when
Google Play is not available, i.e. emulator, developer board, etc, you can install it manually using
@ -101,18 +101,18 @@ designed mostly for development purposes. This approach is deprecated for the pr

release package is recommended to communicate with OpenCV Manager via the async initialization
described above.

-# Add the OpenCV library project to your workspace the same way as for the async initialization
    above. Use menu File -\> Import -\> Existing project in your workspace, press Browse button and
    select OpenCV SDK path (`OpenCV-2.4.9-android-sdk/sdk`).

    

-# In the application project add a reference to the OpenCV4Android SDK in
    Project -\> Properties -\> Android -\> Library -\> Add select OpenCV Library - 2.4.9;

    

-# If your application project **doesn't have a JNI part**, just copy the corresponding OpenCV
    native libs from `<OpenCV-2.4.9-android-sdk>/sdk/native/libs/<target_arch>` to your project
    directory to folder `libs/<target_arch>`.
@ -126,7 +126,7 @@ described above.

    @endcode
    The result should look like the following:
    @code{.make}
    include $(CLEAR_VARS)

    # OpenCV
    OPENCV_CAMERA_MODULES:=on
@ -139,7 +139,7 @@ described above.

    Eclipse will automatically include all the libraries from the `libs` folder to the application
    package (APK).

-# The last step of enabling OpenCV in your application is Java initialization code before calling
    OpenCV API. It can be done, for example, in the static section of the Activity class:
    @code{.java}
    static {
@ -166,23 +166,23 @@ described above.

To build your own Android application, using OpenCV as a native part, the following steps should be
taken:

-# You can use an environment variable to specify the location of OpenCV package or just hardcode
    absolute or relative path in the `jni/Android.mk` of your projects.
-# The file `jni/Android.mk` should be written for the current application using the common rules
    for this file.

    For detailed information see the Android NDK documentation from the Android NDK archive, in the
    file `<path_where_NDK_is_placed>/docs/ANDROID-MK.html`.

-# The following line:
    @code{.make}
    include C:\Work\OpenCV4Android\OpenCV-2.4.9-android-sdk\sdk\native\jni\OpenCV.mk
    @endcode
    should be inserted into the `jni/Android.mk` file **after** this line:
    @code{.make}
    include $(CLEAR_VARS)
    @endcode
-# Several variables can be used to customize OpenCV stuff, but you **don't need** to use them when
    your application uses the async initialization via the OpenCV Manager API.

    @note These variables should be set **before** the "include .../OpenCV.mk" line:

@ -202,7 +202,7 @@ taken:

    Perform static linking with OpenCV. By default dynamic link is used and the project JNI lib
    depends on libopencv_java.so.

-# The file `Application.mk` should exist and should contain the lines:
    @code{.make}
    APP_STL := gnustl_static
    APP_CPPFLAGS := -frtti -fexceptions
@ -221,7 +221,7 @@ taken:

    APP_PLATFORM := android-9
    @endcode

-# Either use @ref tutorial_android_dev_intro_ndk "manual" ndk-build invocation or
    @ref tutorial_android_dev_intro_eclipse "setup Eclipse CDT Builder" to build the native JNI lib
    before (re)building the Java part and creating an APK.

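When many sample projects need the same edit, the include-line placement described in step 3 above can also be scripted. The function below is a sketch (the name `add_opencv_include` is hypothetical); it assumes `jni/Android.mk` contains the `include $(CLEAR_VARS)` line verbatim.

```shell
# Hypothetical helper: insert "include <path-to-OpenCV.mk>" into an Android.mk
# immediately after the `include $(CLEAR_VARS)` line.
add_opencv_include() {
  mk="$1"
  opencv_mk="$2"
  awk -v inc="include $opencv_mk" '
    { print }
    $0 == "include $(CLEAR_VARS)" { print inc }  # emit the include right after
  ' "$mk" > "$mk.tmp" && mv "$mk.tmp" "$mk"
}
```

A call like `add_opencv_include jni/Android.mk "$OPENCV_SDK/sdk/native/jni/OpenCV.mk"` rewrites the makefile in place.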
@ -232,18 +232,18 @@ Hello OpenCV Sample

Here are basic steps to guide you through the process of creating a simple OpenCV-centric
application. It will be capable of accessing camera output, processing it and displaying the result.

-# Open Eclipse IDE, create a new clean workspace, create a new Android project
    File --\> New --\> Android Project
-# Set name, target, package and minSDKVersion accordingly. The minimal SDK version for build with
    OpenCV4Android SDK is 11. Minimal device API Level (for application manifest) is 8.
-# Allow Eclipse to create a default activity. Let's name the activity HelloOpenCvActivity.
-# Choose Blank Activity with full screen layout. Let's name the layout HelloOpenCvLayout.
-# Import OpenCV library project to your workspace.
-# Reference OpenCV library within your project properties.

    

-# Edit your layout file as an xml file and pass the following layout there:
    @code{.xml}
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:tools="http://schemas.android.com/tools"
@ -261,7 +261,7 @@ application. It will be capable of accessing camera output, processing it and di

    </LinearLayout>
    @endcode
-# Add the following permissions to the `AndroidManifest.xml` file:
    @code{.xml}
    </application>

@ -272,14 +272,14 @@ application. It will be capable of accessing camera output, processing it and di

    <uses-feature android:name="android.hardware.camera.front" android:required="false"/>
    <uses-feature android:name="android.hardware.camera.front.autofocus" android:required="false"/>
    @endcode
-# Set the application theme in `AndroidManifest.xml` to hide the title and system buttons.
    @code{.xml}
    <application
        android:icon="@drawable/icon"
        android:label="@string/app_name"
        android:theme="@android:style/Theme.NoTitleBar.Fullscreen" >
    @endcode
-# Add OpenCV library initialization to your activity. Fix errors by adding the required imports.
    @code{.java}
    private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
@ -305,7 +305,7 @@ application. It will be capable of accessing camera output, processing it and di

        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_6, this, mLoaderCallback);
    }
    @endcode
-# Declare that your activity implements the CvCameraViewListener2 interface and fix the
    activity-related errors by defining the missing methods. For this activity define onCreate,
    onDestroy and onPause and implement them according to the code snippet below. Fix errors by
    adding the required imports.
    @code{.java}
@ -346,7 +346,7 @@ application. It will be capable of accessing camera output, processing it and di

        return inputFrame.rgba();
    }
    @endcode
-# Run your application on device or emulator.

Let's discuss some of the most important steps. Every Android application with UI must implement Activity
and View. In the first steps we create a blank activity and a default view layout. The simplest
@ -32,9 +32,11 @@ tutorial](http://docs.opencv.org/2.4.4-beta/doc/tutorials/introduction/desktop_j

If you are in a hurry, here is a minimal quick start guide to install OpenCV on Mac OS X:

@note
I'm assuming you already installed [xcode](https://developer.apple.com/xcode/),
[jdk](http://www.oracle.com/technetwork/java/javase/downloads/index.html) and
[Cmake](http://www.cmake.org/cmake/resources/software.html).

@code{.bash}
cd ~/
mkdir opt
@ -60,9 +62,9 @@ cycle of your CLJ projects.

The available [installation guide](https://github.com/technomancy/leiningen#installation) is very
easy to follow:

-# [Download the script](https://raw.github.com/technomancy/leiningen/stable/bin/lein)
-# Place it on your $PATH (cf. \~/bin is a good choice if it is on your path.)
-# Set the script to be executable. (i.e. chmod 755 \~/bin/lein).

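If you automate your environment setup, steps 2 and 3 can be captured in a small function; step 1 (the download) is assumed to have happened already. `install_lein` is a hypothetical name, and `~/bin` is just one reasonable default for a directory on your `$PATH`.

```shell
# Hypothetical helper: place a previously downloaded lein script on the PATH
# (step 2) and make it executable (step 3).
install_lein() {
  script="$1"                 # path to the downloaded lein script
  bindir="${2:-$HOME/bin}"    # defaults to ~/bin
  mkdir -p "$bindir"
  cp "$script" "$bindir/lein"
  chmod 755 "$bindir/lein"
}
```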
If you work on Windows, follow [this instruction](https://github.com/technomancy/leiningen#windows)

@ -171,9 +173,9 @@ Your directories layout should look like the following:

tree
.
|__ native
|   |__ macosx
|       |__ x86_64
|           |__ libopencv_java247.dylib
|
|__ opencv-247.jar
|__ opencv-native-247.jar

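The same layout can be staged from a shell. The sketch below assumes the JNI library has already been built, and that the `macosx/x86_64` platform directory and `247` version suffix match your build (adjust both otherwise); `stage_native_lib` is a hypothetical helper name.

```shell
# Sketch: reproduce the native/ layout shown above, given an already-built
# JNI library (platform directory and version suffix are assumptions).
stage_native_lib() {
  built_lib="$1"                 # e.g. ~/opt/opencv/build/lib/libopencv_java247.dylib
  dest="native/macosx/x86_64"
  mkdir -p "$dest"
  cp "$built_lib" "$dest/$(basename "$built_lib")"
}
```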
@ -215,13 +217,13 @@ simple-sample/

|__ LICENSE
|__ README.md
|__ doc
|   |__ intro.md
|
|__ project.clj
|__ resources
|__ src
|   |__ simple_sample
|       |__ core.clj
|__ test
    |__ simple_sample
        |__ core_test.clj
@ -299,7 +301,9 @@ nil

Then you can start interacting with OpenCV by just referencing the fully qualified names of its
classes.

@note
[Here](http://docs.opencv.org/java/) you can find the full OpenCV Java API.

@code{.clojure}
user=> (org.opencv.core.Point. 0 0)
#<Point {0.0, 0.0}>
@ -409,6 +413,7 @@ class SimpleSample {

}
@endcode

### Add injections to the project

Before we start coding, we'd like to eliminate the tedious need to interactively load the native
@ -454,6 +459,7 @@ We're going to mimic almost verbatim the original OpenCV java tutorial to:

- change the value of every element of the second row to 1
- change the value of every element of the 6th column to 5
- print the content of the obtained matrix

@code{.clojure}
user=> (def m (Mat. 5 10 CvType/CV_8UC1 (Scalar. 0 0)))
#'user/m
@ -473,6 +479,7 @@ user=> (println (.dump m))

0, 0, 0, 0, 0, 5, 0, 0, 0, 0]
nil
@endcode

If you are accustomed to a functional language all those abused and mutating nouns are going to
irritate your preference for verbs. Even if the CLJ interop syntax is very handy and complete, there
is still an impedance mismatch between any OOP language and any FP language (being Scala a mixed
@ -483,6 +490,7 @@ To exit the REPL type (exit), ctr-D or (quit) at the REPL prompt.

user=> (exit)
Bye for now!
@endcode

### Interactively load and blur an image

In the next sample you will learn how to interactively load and blur an image from the REPL by
@ -500,7 +508,7 @@ main argument to both the GaussianBlur and the imwrite methods.

First we want to add an image file to a newly created directory for storing the static resources of
the project.


@code{.bash}
mkdir -p resources/images
cp ~/opt/opencv/doc/tutorials/introduction/desktop_java/images/lena.png resources/images/
@ -554,7 +562,7 @@ Bye for now!

@endcode
Following is the new blurred image of Lena.



Next Steps
----------

@ -577,4 +585,3 @@ the gap.

Copyright © 2013 Giacomo (Mimmo) Cosenza aka Magomimmo

Distributed under the BSD 3-clause License, the same as OpenCV.

@ -49,10 +49,11 @@ In Linux it can be achieved with the following command in Terminal:

cd ~/<my_working_directory>
git clone https://github.com/Itseez/opencv.git
@endcode

Building OpenCV
---------------

-# Create a build directory, make it current and run the following command:
    @code{.bash}
    cmake [<some optional parameters>] -DCMAKE_TOOLCHAIN_FILE=<path to the OpenCV source directory>/platforms/linux/arm-gnueabi.toolchain.cmake <path to the OpenCV source directory>
    @endcode
@ -69,13 +70,15 @@ Building OpenCV

    cmake -DCMAKE_TOOLCHAIN_FILE=../arm-gnueabi.toolchain.cmake ../../..
    @endcode

-# Run make in the build (\<cmake_binary_dir\>) directory:
    @code{.bash}
    make
    @endcode

@note
Optionally you can strip symbol info from the created library via the install/strip make target.
This option produces a smaller binary (roughly half the size) but makes further debugging harder.

### Enable hardware optimizations

@ -86,5 +89,4 @@ extensions.
|
|||||||
|
|
||||||
TBB is supported on multi core ARM SoCs also. Add -DWITH_TBB=ON and -DBUILD_TBB=ON to enable it.
|
TBB is supported on multi core ARM SoCs also. Add -DWITH_TBB=ON and -DBUILD_TBB=ON to enable it.
|
||||||
Cmake scripts download TBB sources from official project site
|
Cmake scripts download TBB sources from official project site
|
||||||
[](http://threadingbuildingblocks.org/) and build it.
|
<http://threadingbuildingblocks.org/> and build it.
|
||||||
|
|
||||||
|
@@ -33,7 +33,9 @@ from the [OpenCV SourceForge repository](http://sourceforge.net/projects/opencvl
 
 @note Windows users can find the prebuilt files needed for Java development in the
 `opencv/build/java/` folder inside the package. For other OSes it's required to build OpenCV from
-sources. Another option to get OpenCV sources is to clone [OpenCV git
+sources.
+
+Another option to get OpenCV sources is to clone [OpenCV git
 repository](https://github.com/Itseez/opencv/). In order to build OpenCV with Java bindings you need
 JDK (Java Development Kit) (we recommend [Oracle/Sun JDK 6 or
 7](http://www.oracle.com/technetwork/java/javase/downloads/)), [Apache Ant](http://ant.apache.org/)
@@ -67,7 +69,7 @@ Examine the output of CMake and ensure java is one of the
 modules "To be built". If not, it's likely you're missing a dependency. You should troubleshoot by
 looking through the CMake output for any Java-related tools that aren't found and installing them.
 
-
+
 
 @note If CMake can't find Java in your system set the JAVA_HOME environment variable with the path to installed JDK before running it. E.g.:
 @code{.bash}
@@ -141,7 +143,7 @@ folder.
 The command should initiate [re]building and running the sample. You should see on the
 screen something like this:
 
-
+
 
 SBT project for Java and Scala
 ------------------------------
@@ -203,7 +205,7 @@ eclipse # Running "eclipse" from within the sbt console
 @endcode
 You should see something like this:
 
-
+
 
 You can now import the SBT project to Eclipse using Import ... -\> Existing projects into workspace.
 Whether you actually do this is optional for the guide; we'll be using SBT to build the project, so
@@ -225,7 +227,7 @@ sbt run
 @endcode
 You should see something like this:
 
-
+
 
 ### Running SBT samples
 
@@ -241,7 +243,7 @@ sbt eclipse
 @endcode
 Next, create the directory `src/main/resources` and download this Lena image into it:
 
-
+
 
 Make sure it's called `"lena.png"`. Items in the resources directory are available to the Java
 application at runtime.
@@ -315,11 +317,11 @@ sbt run
 @endcode
 You should see something like this:
 
-
+
 
 It should also write the following image to `faceDetection.png`:
 
-
+
 
 You're done! Now you have a sample Java application working with OpenCV, so you can start the work
 on your own. We wish you good luck and many years of joyful life!
@@ -21,6 +21,8 @@ Download the source code from
 Explanation
 -----------
 
+@dontinclude cpp/tutorial_code/introduction/display_image/display_image.cpp
+
 In OpenCV 2 we have multiple modules. Each one takes care of a different area or approach towards
 image processing. You could already observe this in the structure of the user guide of these
 tutorials itself. Before you use any of them you first need to include the header files where the
@@ -31,36 +33,25 @@ You'll almost always end up using the:
 - *core* section, as here are defined the basic building blocks of the library
 - *highgui* module, as this contains the functions for input and output operations
 
-@includelineno cpp/tutorial_code/introduction/display_image/display_image.cpp
+@until <string>
 
-lines
-1-6
 
 We also include the *iostream* to facilitate console line output and input. To avoid data structure
 and function name conflicts with other libraries, OpenCV has its own namespace: *cv*. To avoid the
 need appending prior each of these the *cv::* keyword you can import the namespace in the whole file
 by using the lines:
 
-@includelineno cpp/tutorial_code/introduction/display_image/display_image.cpp
+@line using namespace cv
 
-lines
-8-9
 
 This is true for the STL library too (used for console I/O). Now, let's analyze the *main* function.
 We start up assuring that we acquire a valid image name argument from the command line. Otherwise
 take a picture by default: "HappyFish.jpg".
 
-@includelineno cpp/tutorial_code/introduction/display_image/display_image.cpp
-lines
-13-17
+@skip string
+@until }
 
 Then create a *Mat* object that will store the data of the loaded image.
 
-@includelineno cpp/tutorial_code/introduction/display_image/display_image.cpp
+@skipline Mat
 
-lines
-19
 
 Now we call the @ref cv::imread function which loads the image name specified by the first argument
 (*argv[1]*). The second argument specifies the format in what we want the image. This may be:
@@ -69,10 +60,7 @@ Now we call the @ref cv::imread function which loads the image name specified by
 - IMREAD_GRAYSCALE ( 0) loads the image as an intensity one
 - IMREAD_COLOR (\>0) loads the image in the RGB format
 
-@includelineno cpp/tutorial_code/introduction/display_image/display_image.cpp
+@skipline image = imread
 
-lines
-20
 
 @note
 OpenCV offers support for the image formats Windows bitmap (bmp), portable image formats (pbm,
@@ -94,30 +82,18 @@ the image it contains from a size point of view. It may be:
 would like the image to keep its aspect ratio (*WINDOW_KEEPRATIO*) or not
 (*WINDOW_FREERATIO*).
 
-@includelineno cpp/tutorial_code/introduction/display_image/display_image.cpp
+@skipline namedWindow
 
 
-lines
-28
 
 Finally, to update the content of the OpenCV window with a new image use the @ref cv::imshow
 function. Specify the OpenCV window name to update and the image to use during this operation:
 
-@includelineno cpp/tutorial_code/introduction/display_image/display_image.cpp
+@skipline imshow
 
 
-lines
-29
 
 Because we want our window to be displayed until the user presses a key (otherwise the program would
 end far too quickly), we use the @ref cv::waitKey function whose only parameter is just how long
 should it wait for a user input (measured in milliseconds). Zero means to wait forever.
 
-@includelineno cpp/tutorial_code/introduction/display_image/display_image.cpp
+@skipline waitKey
 
 
-lines
-31
 
 Result
 ------
@@ -130,11 +106,10 @@ Result
 @endcode
 - You should get a nice window as the one shown below:
 
-
+
 
 \htmlonly
 <div align="center">
 <iframe title="Introduction - Display an Image" width="560" height="349" src="http://www.youtube.com/embed/1OJEqpuaGc4?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
 </div>
 \endhtmlonly
 
@@ -21,13 +21,13 @@ git clone https://github.com/Itseez/opencv.git
 Building OpenCV from Source, using CMake and Command Line
 ---------------------------------------------------------
 
-1. Make symbolic link for Xcode to let OpenCV build scripts find the compiler, header files etc.
+-# Make symbolic link for Xcode to let OpenCV build scripts find the compiler, header files etc.
 @code{.bash}
 cd /
 sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
 @endcode
 
-2. Build OpenCV framework:
+-# Build OpenCV framework:
 @code{.bash}
 cd ~/<my_working_directory>
 python opencv/platforms/ios/build_framework.py ios
@@ -17,51 +17,51 @@ are more or less the same for other versions.
 Now, we will define OpenCV as a user library in Eclipse, so we can reuse the configuration for any
 project. Launch Eclipse and select Window --\> Preferences from the menu.
 
 
 
 Navigate under Java --\> Build Path --\> User Libraries and click New....
 
 
 
 Enter a name, e.g. OpenCV-2.4.6, for your new library.
 
 
 
 Now select your new user library and click Add External JARs....
 
 
 
 Browse through `C:\OpenCV-2.4.6\build\java\` and select opencv-246.jar. After adding the jar,
 extend the opencv-246.jar and select Native library location and press Edit....
 
 
 
 Select External Folder... and browse to select the folder `C:\OpenCV-2.4.6\build\java\x64`. If you
 have a 32-bit system you need to select the x86 folder instead of x64.
 
 
 
 Your user library configuration should look like this:
 
 
 
 Testing the configuration on a new Java project
 -----------------------------------------------
 
 Now start creating a new Java project.
 
 
 
 On the Java Settings step, under Libraries tab, select Add Library... and select OpenCV-2.4.6, then
 click Finish.
 
 
 
 
 
 Libraries should look like this:
 
 
 
 Now you have created and configured a new Java project it is time to test it. Create a new java
 file. Here is a starter code for your convenience:
@@ -82,7 +82,7 @@ public class Hello
 @endcode
 When you run the code you should see 3x3 identity matrix as output.
 
 
 
 That is it, whenever you start a new project just add the OpenCV user library that you have defined
 to your project and you are good to go. Enjoy your powerful, less painful development environment :)
@@ -4,45 +4,45 @@ Using OpenCV with Eclipse (plugin CDT) {#tutorial_linux_eclipse}
 Prerequisites
 -------------
 Two ways, one by forming a project directly, and another by CMake Prerequisites
-1. Having installed [Eclipse](http://www.eclipse.org/) in your workstation (only the CDT plugin for
+-# Having installed [Eclipse](http://www.eclipse.org/) in your workstation (only the CDT plugin for
 C/C++ is needed). You can follow the following steps:
 - Go to the Eclipse site
 - Download [Eclipse IDE for C/C++
 Developers](http://www.eclipse.org/downloads/packages/eclipse-ide-cc-developers/heliossr2) .
 Choose the link according to your workstation.
-2. Having installed OpenCV. If not yet, go @ref tutorial_linux_install "here".
+-# Having installed OpenCV. If not yet, go @ref tutorial_linux_install "here".
 
 Making a project
 ----------------
 
-1. Start Eclipse. Just run the executable that comes in the folder.
-2. Go to **File -\> New -\> C/C++ Project**
+-# Start Eclipse. Just run the executable that comes in the folder.
+-# Go to **File -\> New -\> C/C++ Project**
 
 
 
-3. Choose a name for your project (i.e. DisplayImage). An **Empty Project** should be okay for this
+-# Choose a name for your project (i.e. DisplayImage). An **Empty Project** should be okay for this
 example.
 
 
 
-4. Leave everything else by default. Press **Finish**.
-5. Your project (in this case DisplayImage) should appear in the **Project Navigator** (usually at
+-# Leave everything else by default. Press **Finish**.
+-# Your project (in this case DisplayImage) should appear in the **Project Navigator** (usually at
 the left side of your window).
 
 
 
-6. Now, let's add a source file using OpenCV:
+-# Now, let's add a source file using OpenCV:
 - Right click on **DisplayImage** (in the Navigator). **New -\> Folder** .
 
 
 
 - Name your folder **src** and then hit **Finish**
 - Right click on your newly created **src** folder. Choose **New source file**:
 - Call it **DisplayImage.cpp**. Hit **Finish**
 
 
 
-7. So, now you have a project with a empty .cpp file. Let's fill it with some sample code (in other
+-# So, now you have a project with an empty .cpp file. Let's fill it with some sample code (in other
 words, copy and paste the snippet below):
 @code{.cpp}
 #include <opencv2/opencv.hpp>
@@ -68,7 +68,7 @@ Making a project
 return 0;
 }
 @endcode
-8. We are only missing one final step: To tell OpenCV where the OpenCV headers and libraries are.
+-# We are only missing one final step: To tell OpenCV where the OpenCV headers and libraries are.
 For this, do the following:
 
 - Go to **Project--\>Properties**
@@ -78,7 +78,7 @@ Making a project
 include the path of the folder where opencv was installed. In our example, this is
 /usr/local/include/opencv.
 
-
+
 
 @note If you do not know where your opencv files are, open the **Terminal** and type:
 @code{.bash}
@@ -103,7 +103,7 @@ Making a project
 opencv_core opencv_imgproc opencv_highgui opencv_ml opencv_video opencv_features2d
 opencv_calib3d opencv_objdetect opencv_contrib opencv_legacy opencv_flann
 
-
+
 
 If you don't know where your libraries are (or you are just psychotic and want to make sure
 the path is fine), type in **Terminal**:
@@ -120,7 +120,7 @@ Making a project
 
 In the Console you should get something like
 
-
+
 
 If you check in your folder, there should be an executable there.
 
@@ -138,21 +138,21 @@ Assuming that the image to use as the argument would be located in
 \<DisplayImage_directory\>/images/HappyLittleFish.png. We can still do this, but let's do it from
 Eclipse:
 
-1. Go to **Run-\>Run Configurations**
-2. Under C/C++ Application you will see the name of your executable + Debug (if not, click over
+-# Go to **Run-\>Run Configurations**
+-# Under C/C++ Application you will see the name of your executable + Debug (if not, click over
 C/C++ Application a couple of times). Select the name (in this case **DisplayImage Debug**).
-3. Now, in the right side of the window, choose the **Arguments** Tab. Write the path of the image
+-# Now, in the right side of the window, choose the **Arguments** Tab. Write the path of the image
 file we want to open (path relative to the workspace/DisplayImage folder). Let's use
 **HappyLittleFish.png**:
 
 
 
-4. Click on the **Apply** button and then in Run. An OpenCV window should pop up with the fish
+-# Click on the **Apply** button and then in Run. An OpenCV window should pop up with the fish
 image (or whatever you used).
 
 
 
-5. Congratulations! You are ready to have fun with OpenCV using Eclipse.
+-# Congratulations! You are ready to have fun with OpenCV using Eclipse.
 
 ### V2: Using CMake+OpenCV with Eclipse (plugin CDT)
 
@@ -170,25 +170,25 @@ int main ( int argc, char **argv )
 return 0;
 }
 @endcode
-1. Create a build directory, say, under *foo*: mkdir /build. Then cd build.
-2. Put a `CmakeLists.txt` file in build:
+-# Create a build directory, say, under *foo*: mkdir /build. Then cd build.
+-# Put a `CmakeLists.txt` file in build:
 @code{.bash}
 PROJECT( helloworld_proj )
 FIND_PACKAGE( OpenCV REQUIRED )
 ADD_EXECUTABLE( helloworld helloworld.cxx )
 TARGET_LINK_LIBRARIES( helloworld \f${OpenCV_LIBS} )
 @endcode
-1. Run: cmake-gui .. and make sure you fill in where opencv was built.
-2. Then click configure and then generate. If it's OK, **quit cmake-gui**
-3. Run `make -j4` (the -j4 is optional, it just tells the compiler to build in 4 threads). Make
+-# Run: cmake-gui .. and make sure you fill in where opencv was built.
+-# Then click configure and then generate. If it's OK, **quit cmake-gui**
+-# Run `make -j4` (the -j4 is optional, it just tells the compiler to build in 4 threads). Make
 sure it builds.
-4. Start eclipse. Put the workspace in some directory but **not** in foo or `foo\build`
-5. Right click in the Project Explorer section. Select Import And then open the C/C++ filter.
+-# Start eclipse. Put the workspace in some directory but **not** in foo or `foo\build`
+-# Right click in the Project Explorer section. Select Import And then open the C/C++ filter.
 Choose *Existing Code* as a Makefile Project.
-6. Name your project, say *helloworld*. Browse to the Existing Code location `foo\build` (where
+-# Name your project, say *helloworld*. Browse to the Existing Code location `foo\build` (where
 you ran your cmake-gui from). Select *Linux GCC* in the *"Toolchain for Indexer Settings"* and
 press *Finish*.
-7. Right click in the Project Explorer section. Select Properties. Under C/C++ Build, set the
+-# Right click in the Project Explorer section. Select Properties. Under C/C++ Build, set the
 *build directory:* from something like `${workspace_loc:/helloworld}` to
 `${workspace_loc:/helloworld}/build` since that's where you are building to.
 
@@ -196,4 +196,4 @@ TARGET_LINK_LIBRARIES( helloworld \f${OpenCV_LIBS} )
 `make VERBOSE=1 -j4` which tells the compiler to produce detailed symbol files for debugging and
 also to compile in 4 parallel threads.
 
-8. Done!
+-# Done!
@@ -1,13 +1,12 @@
 Using OpenCV with gcc and CMake {#tutorial_linux_gcc_cmake}
 ===============================
 
-@note We assume that you have successfully installed OpenCV in your workstation. .. container::
-enumeratevisibleitemswithsquare
+@note We assume that you have successfully installed OpenCV in your workstation.
 
 - The easiest way of using OpenCV in your code is to use [CMake](http://www.cmake.org/). A few
 advantages (taken from the Wiki):
-1. No need to change anything when porting between Linux and Windows
-2. Can easily be combined with other tools by CMake( i.e. Qt, ITK and VTK )
+-# No need to change anything when porting between Linux and Windows
+-# Can easily be combined with other tools by CMake( i.e. Qt, ITK and VTK )
 - If you are not familiar with CMake, checkout the
 [tutorial](http://www.cmake.org/cmake/help/cmake_tutorial.html) on its website.
 
@@ -75,5 +74,4 @@ giving an image location as an argument, i.e.:
 @endcode
 You should get a nice window as the one shown below:
 
-
+
 
@ -49,7 +49,7 @@ git clone https://github.com/Itseez/opencv_contrib.git
|
|||||||
Building OpenCV from Source Using CMake
|
Building OpenCV from Source Using CMake
|
||||||
---------------------------------------
|
---------------------------------------
|
||||||
|
|
||||||
1. Create a temporary directory, which we denote as \<cmake_build_dir\>, where you want to put
|
-# Create a temporary directory, which we denote as \<cmake_build_dir\>, where you want to put
|
||||||
the generated Makefiles, project files as well the object files and output binaries and enter
|
the generated Makefiles, project files as well the object files and output binaries and enter
|
||||||
there.
|
there.
|
||||||
|
|
||||||
@ -59,7 +59,7 @@ Building OpenCV from Source Using CMake
|
|||||||
mkdir build
|
mkdir build
|
||||||
cd build
|
cd build
|
||||||
@endcode
|
@endcode
|
||||||
2. Configuring. Run cmake [\<some optional parameters\>] \<path to the OpenCV source directory\>
|
-# Configuring. Run cmake [\<some optional parameters\>] \<path to the OpenCV source directory\>
|
||||||
|
|
||||||
For example
|
For example
|
||||||
@code{.bash}
|
@code{.bash}
|
||||||
@ -73,14 +73,14 @@ Building OpenCV from Source Using CMake
|
|||||||
- run: “Configure”
|
- run: “Configure”
|
||||||
- run: “Generate”
|
- run: “Generate”
|
||||||
|
|
||||||
3. Description of some parameters
|
-# Description of some parameters
|
||||||
- build type: CMAKE_BUILD_TYPE=Release\\Debug
|
- build type: `CMAKE_BUILD_TYPE=Release\Debug`
|
||||||
- to build with modules from opencv_contrib set OPENCV_EXTRA_MODULES_PATH to \<path to
|
- to build with modules from opencv_contrib set OPENCV_EXTRA_MODULES_PATH to \<path to
|
||||||
opencv_contrib/modules/\>
|
opencv_contrib/modules/\>
|
||||||
- set BUILD_DOCS for building documents
|
- set BUILD_DOCS for building documents
|
||||||
- set BUILD_EXAMPLES to build all examples
|
- set BUILD_EXAMPLES to build all examples
|
||||||
|
|
||||||
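The individual options above are normally passed together in a single configure call. A sketch only -- every `-D` value and path below is an example, assuming the `build` directory from step 1 and that `opencv_contrib` was cloned next to `opencv`; adjust to your own layout:

```shell
# Run from <cmake_build_dir>; option values are examples, not requirements
cmake -D CMAKE_BUILD_TYPE=Release \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
      -D BUILD_DOCS=ON \
      -D BUILD_EXAMPLES=ON \
      ..
```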
4. [optional] Building python. Set the following python parameters:
|
-# [optional] Building python. Set the following python parameters:
|
||||||
- PYTHON2(3)_EXECUTABLE = \<path to python\>
|
- PYTHON2(3)_EXECUTABLE = \<path to python\>
|
||||||
- PYTHON_INCLUDE_DIR = /usr/include/python\<version\>
|
- PYTHON_INCLUDE_DIR = /usr/include/python\<version\>
|
||||||
- PYTHON_INCLUDE_DIR2 = /usr/include/x86_64-linux-gnu/python\<version\>
|
- PYTHON_INCLUDE_DIR2 = /usr/include/x86_64-linux-gnu/python\<version\>
|
||||||
@ -88,18 +88,18 @@ Building OpenCV from Source Using CMake
|
|||||||
- PYTHON2(3)_NUMPY_INCLUDE_DIRS =
|
- PYTHON2(3)_NUMPY_INCLUDE_DIRS =
|
||||||
/usr/lib/python\<version\>/dist-packages/numpy/core/include/
|
/usr/lib/python\<version\>/dist-packages/numpy/core/include/
|
||||||
|
|
||||||
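If CMake picks up the wrong Python on its own, the variables above can be set explicitly on the command line. A sketch for a hypothetical Python 3.4 on a 64-bit Debian-style system -- every path here is an assumption and varies between distributions:

```shell
# Example only: point CMake at a specific Python installation
cmake -D PYTHON3_EXECUTABLE=/usr/bin/python3 \
      -D PYTHON_INCLUDE_DIR=/usr/include/python3.4 \
      -D PYTHON_INCLUDE_DIR2=/usr/include/x86_64-linux-gnu/python3.4 \
      -D PYTHON3_NUMPY_INCLUDE_DIRS=/usr/lib/python3.4/dist-packages/numpy/core/include/ \
      ..
```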
5. [optional] Building java.
|
-# [optional] Building java.
|
||||||
- Unset parameter: BUILD_SHARED_LIBS
|
- Unset parameter: BUILD_SHARED_LIBS
|
||||||
- It is also useful to unset BUILD_EXAMPLES, BUILD_TESTS, BUILD_PERF_TESTS - as they all
|
- It is also useful to unset BUILD_EXAMPLES, BUILD_TESTS, BUILD_PERF_TESTS - as they all
|
||||||
will be statically linked with OpenCV and can take a lot of memory.
|
will be statically linked with OpenCV and can take a lot of memory.
|
||||||
|
|
||||||
6. Build. From the build directory execute make; it is recommended to do this in several threads
|
-# Build. From the build directory execute make; it is recommended to do this in several threads
|
||||||
|
|
||||||
For example
|
For example
|
||||||
@code{.bash}
|
@code{.bash}
|
||||||
make -j7 # runs 7 jobs in parallel
|
make -j7 # runs 7 jobs in parallel
|
||||||
@endcode
|
@endcode
|
||||||
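The right `-j` value depends on your machine. A small sketch that picks the core count automatically instead of hard-coding `7` (falls back to 4 where neither `nproc` nor `sysctl` is available):

```shell
# Detect the number of CPU cores; fall back to 4 if no detection tool exists
jobs=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 4)
echo "make -j$jobs"
# make -j"$jobs"   # run this from <cmake_build_dir>
```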
7. [optional] Building documents. Enter \<cmake_build_dir/doc/\> and run make with target
|
-# [optional] Building documents. Enter \<cmake_build_dir/doc/\> and run make with target
|
||||||
"html_docs"
|
"html_docs"
|
||||||
|
|
||||||
For example
|
For example
|
||||||
@ -107,11 +107,11 @@ Building OpenCV from Source Using CMake
|
|||||||
cd ~/opencv/build/doc/
|
cd ~/opencv/build/doc/
|
||||||
make -j7 html_docs
|
make -j7 html_docs
|
||||||
@endcode
|
@endcode
|
||||||
8. To install the libraries, execute the following from the build directory
|
-# To install the libraries, execute the following from the build directory
|
||||||
@code{.bash}
|
@code{.bash}
|
||||||
sudo make install
|
sudo make install
|
||||||
@endcode
|
@endcode
|
||||||
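If you prefer not to install system-wide, the install prefix can be changed at configure time and `make install` then needs no root rights. A sketch with a hypothetical per-user prefix (any writable directory works):

```shell
# Re-run from <cmake_build_dir>; $HOME/.local is only an example prefix
cmake -D CMAKE_INSTALL_PREFIX="$HOME/.local" ..
make install
```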
9. [optional] Running tests
|
-# [optional] Running tests
|
||||||
|
|
||||||
- Get the required test data from [OpenCV extra
|
- Get the required test data from [OpenCV extra
|
||||||
repository](https://github.com/Itseez/opencv_extra).
|
repository](https://github.com/Itseez/opencv_extra).
|
||||||
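The test binaries locate the data through the `OPENCV_TEST_DATA_PATH` environment variable. A sketch assuming opencv_extra was cloned into your home directory -- adjust the path to wherever you cloned it:

```shell
# Point the test runners at the cloned test data (hypothetical location)
export OPENCV_TEST_DATA_PATH="$HOME/opencv_extra/testdata"
echo "$OPENCV_TEST_DATA_PATH"
# ./bin/opencv_test_core   # then run a test module from <cmake_build_dir>
```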
|
@ -55,9 +55,9 @@ int main( int argc, char** argv )
|
|||||||
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
1. We begin by loading an image using @ref cv::imread , located in the path given by *imageName*.
|
-# We begin by loading an image using @ref cv::imread , located in the path given by *imageName*.
|
||||||
For this example, assume you are loading an RGB image.
|
For this example, assume you are loading an RGB image.
|
||||||
2. Now we are going to convert our image from BGR to Grayscale format. OpenCV has a really nice
|
-# Now we are going to convert our image from BGR to Grayscale format. OpenCV has a really nice
|
||||||
function to do this kind of transformations:
|
function to do this kind of transformations:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
cvtColor( image, gray_image, COLOR_BGR2GRAY );
|
cvtColor( image, gray_image, COLOR_BGR2GRAY );
|
||||||
@ -70,7 +70,7 @@ Explanation
|
|||||||
this case we use **COLOR_BGR2GRAY** (because @ref cv::imread has BGR default channel
|
this case we use **COLOR_BGR2GRAY** (because @ref cv::imread has BGR default channel
|
||||||
order in case of color images).
|
order in case of color images).
|
||||||
|
|
||||||
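Under the hood, **COLOR_BGR2GRAY** computes the standard ITU-R BT.601 weighted sum Y = 0.299 R + 0.587 G + 0.114 B for every pixel. A quick sanity check of that formula for one arbitrarily chosen pixel:

```shell
# One BGR pixel (B=200, G=100, R=50) converted to gray with the BT.601 weights
awk 'BEGIN { B=200; G=100; R=50; printf "%.0f\n", 0.299*R + 0.587*G + 0.114*B }'
# prints 96
```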
3. So now we have our new *gray_image* and want to save it on disk (otherwise it will get lost
|
-# So now we have our new *gray_image* and want to save it on disk (otherwise it will get lost
|
||||||
after the program ends). To save it, we will use a function analogous to @ref cv::imread : @ref
|
after the program ends). To save it, we will use a function analogous to @ref cv::imread : @ref
|
||||||
cv::imwrite
|
cv::imwrite
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
@ -79,7 +79,7 @@ Explanation
|
|||||||
Which will save our *gray_image* as *Gray_Image.jpg* in the folder *images* located two levels
|
Which will save our *gray_image* as *Gray_Image.jpg* in the folder *images* located two levels
|
||||||
up from my current location.
|
up from my current location.
|
||||||
|
|
||||||
4. Finally, let's check out the images. We create two windows and use them to show the original
|
-# Finally, let's check out the images. We create two windows and use them to show the original
|
||||||
image as well as the new one:
|
image as well as the new one:
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
namedWindow( imageName, WINDOW_AUTOSIZE );
|
namedWindow( imageName, WINDOW_AUTOSIZE );
|
||||||
@ -88,18 +88,18 @@ Explanation
|
|||||||
imshow( imageName, image );
|
imshow( imageName, image );
|
||||||
imshow( "Gray image", gray_image );
|
imshow( "Gray image", gray_image );
|
||||||
@endcode
|
@endcode
|
||||||
5. Add the *waitKey(0)* function call for the program to wait forever for a user key press.
|
-# Add the *waitKey(0)* function call for the program to wait forever for a user key press.
|
||||||
|
|
||||||
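On Linux the sample can be built from a shell once OpenCV is installed and visible to pkg-config; the package name differs between versions, so a sketch first detects which one is present (`display_image.cpp` and `lena.jpg` are placeholder file names, not part of this tutorial):

```shell
# The pkg-config package is `opencv4` on 4.x installs and `opencv` on older ones;
# fall back to `opencv` if pkg-config is absent
pkg=$(pkg-config --exists opencv4 2>/dev/null && echo opencv4 || echo opencv)
echo "using pkg-config package: $pkg"
# g++ display_image.cpp -o display_image $(pkg-config --cflags --libs "$pkg")
# ./display_image lena.jpg
```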
Result
|
Result
|
||||||
------
|
------
|
||||||
|
|
||||||
When you run your program you should get something like this:
|
When you run your program you should get something like this:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
And if you check in your folder (in my case *images*), you should have a new .jpg file named
|
And if you check in your folder (in my case *images*), you should have a new .jpg file named
|
||||||
*Gray_Image.jpg*:
|
*Gray_Image.jpg*:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Congratulations, you are done with this tutorial!
|
Congratulations, you are done with this tutorial!
|
||||||
|
@ -14,15 +14,15 @@ technologies we integrate into our library. .. _Windows_Install_Prebuild:
|
|||||||
Installation by Using the Pre-built Libraries {#tutorial_windows_install_prebuilt}
|
Installation by Using the Pre-built Libraries {#tutorial_windows_install_prebuilt}
|
||||||
=============================================
|
=============================================
|
||||||
|
|
||||||
1. Launch a web browser of choice and go to our [page on
|
-# Launch a web browser of choice and go to our [page on
|
||||||
Sourceforge](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/).
|
Sourceforge](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/).
|
||||||
2. Choose a build you want to use and download it.
|
-# Choose a build you want to use and download it.
|
||||||
3. Make sure you have admin rights. Unpack the self-extracting archive.
|
-# Make sure you have admin rights. Unpack the self-extracting archive.
|
||||||
4. You can check the installation at the chosen path as you can see below.
|
-# You can check the installation at the chosen path as you can see below.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
5. To finalize the installation go to the @ref tutorial_windows_install_path section.
|
-# To finalize the installation go to the @ref tutorial_windows_install_path section.
|
||||||
|
|
||||||
Installation by Making Your Own Libraries from the Source Files {#tutorial_windows_install_build}
|
Installation by Making Your Own Libraries from the Source Files {#tutorial_windows_install_build}
|
||||||
===============================================================
|
===============================================================
|
||||||
@ -97,18 +97,18 @@ libraries). If you do not need the support for some of these you can just freely
|
|||||||
|
|
||||||
### Building the library
|
### Building the library
|
||||||
|
|
||||||
1. Make sure you have a working IDE with a valid compiler. In case of the Microsoft Visual Studio
|
-# Make sure you have a working IDE with a valid compiler. In case of the Microsoft Visual Studio
|
||||||
just install it and make sure it starts up.
|
just install it and make sure it starts up.
|
||||||
2. Install [CMake](http://www.cmake.org/cmake/resources/software.html). Simply follow the wizard, no need to add it to the path. The default install
|
-# Install [CMake](http://www.cmake.org/cmake/resources/software.html). Simply follow the wizard, no need to add it to the path. The default install
|
||||||
options are OK.
|
options are OK.
|
||||||
3. Download and install an up-to-date version of msysgit from its [official
|
-# Download and install an up-to-date version of msysgit from its [official
|
||||||
site](http://code.google.com/p/msysgit/downloads/list). There is also the portable version,
|
site](http://code.google.com/p/msysgit/downloads/list). There is also the portable version,
|
||||||
which you only need to unpack to get access to the console version of Git. For some
|
which you only need to unpack to get access to the console version of Git. For some
|
||||||
users that may be quite enough.
|
users that may be quite enough.
|
||||||
4. Install [TortoiseGit](http://code.google.com/p/tortoisegit/wiki/Download). Choose the 32 or 64 bit version according to the type of OS you work in.
|
-# Install [TortoiseGit](http://code.google.com/p/tortoisegit/wiki/Download). Choose the 32 or 64 bit version according to the type of OS you work in.
|
||||||
While installing, locate your msysgit (if it doesn't do that automatically). Follow the
|
While installing, locate your msysgit (if it doesn't do that automatically). Follow the
|
||||||
wizard -- the default options are OK for the most part.
|
wizard -- the default options are OK for the most part.
|
||||||
5. Choose a directory in your file system, where you will download the OpenCV libraries to. I
|
-# Choose a directory in your file system, where you will download the OpenCV libraries to. I
|
||||||
recommend creating a new one that has a short path and no special characters in it, for example
|
recommend creating a new one that has a short path and no special characters in it, for example
|
||||||
`D:/OpenCV`. For this tutorial I suggest you do so. If you use your own path and know what
|
`D:/OpenCV`. For this tutorial I suggest you do so. If you use your own path and know what
|
||||||
you're doing -- it's OK.
|
you're doing -- it's OK.
|
||||||
@ -118,7 +118,7 @@ libraries). If you do not need the support for some of these you can just freely
|
|||||||
-# Push the OK button and be patient as the repository is quite a heavy download. It will take
|
-# Push the OK button and be patient as the repository is quite a heavy download. It will take
|
||||||
some time depending on your Internet connection.
|
some time depending on your Internet connection.
|
||||||
|
|
||||||
6. In this section I will cover installing the 3rd party libraries.
|
-# In this section I will cover installing the 3rd party libraries.
|
||||||
-# Download the [Python libraries](http://www.python.org/downloads/) and install them with the default options. You will need a
|
-# Download the [Python libraries](http://www.python.org/downloads/) and install them with the default options. You will need a
|
||||||
couple of other Python extensions. Luckily installing all these may be automated by a nice tool
|
couple of other Python extensions. Luckily installing all these may be automated by a nice tool
|
||||||
called [Setuptools](http://pypi.python.org/pypi/setuptools#downloads). Download and install
|
called [Setuptools](http://pypi.python.org/pypi/setuptools#downloads). Download and install
|
||||||
@ -131,9 +131,9 @@ libraries). If you do not need the support for some of these you can just freely
|
|||||||
Script sub-folder. Here just pass the name of the program you want to install as an
|
Script sub-folder. Here just pass the name of the program you want to install as an
|
||||||
argument to *easy_install.exe*. Add the *sphinx* argument.
|
argument to *easy_install.exe*. Add the *sphinx* argument.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
@note
|
@note
|
||||||
The *CD* navigation command works only inside a drive. For example if you are somewhere in the
|
The *CD* navigation command works only inside a drive. For example if you are somewhere in the
|
||||||
@ -152,7 +152,7 @@ libraries). If you do not need the support for some of these you can just freely
|
|||||||
sure you select for the *"Install missing packages on-the-fly"* the *Yes* option, as you can
|
sure you select for the *"Install missing packages on-the-fly"* the *Yes* option, as you can
|
||||||
see on the image below. Again this will take quite some time so be patient.
|
see on the image below. Again this will take quite some time so be patient.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
-# For the [Intel Threading Building Blocks (*TBB*)](http://threadingbuildingblocks.org/file.php?fid=77)
|
-# For the [Intel Threading Building Blocks (*TBB*)](http://threadingbuildingblocks.org/file.php?fid=77)
|
||||||
download the source files and extract
|
download the source files and extract
|
||||||
@ -161,7 +161,7 @@ libraries). If you do not need the support for some of these you can just freely
|
|||||||
the story is the same. For
|
the story is the same. For
|
||||||
extracting the archives I recommend using the [7-Zip](http://www.7-zip.org/) application.
|
extracting the archives I recommend using the [7-Zip](http://www.7-zip.org/) application.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
-# For the [Intel IPP Asynchronous C/C++](http://software.intel.com/en-us/intel-ipp-preview) download the source files and set environment
|
-# For the [Intel IPP Asynchronous C/C++](http://software.intel.com/en-us/intel-ipp-preview) download the source files and set environment
|
||||||
variable **IPP_ASYNC_ROOT**. It should point to
|
variable **IPP_ASYNC_ROOT**. It should point to
|
||||||
@ -182,14 +182,14 @@ libraries). If you do not need the support for some of these you can just freely
|
|||||||
Downloads](http://qt.nokia.com/downloads) page. Download the source files (not the
|
Downloads](http://qt.nokia.com/downloads) page. Download the source files (not the
|
||||||
installers!!!):
|
installers!!!):
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Extract it into a nice and short named directory like `D:/OpenCV/dep/qt/` . Then you need to
|
Extract it into a nice and short named directory like `D:/OpenCV/dep/qt/` . Then you need to
|
||||||
build it. Start up a *Visual* *Studio* *Command* *Prompt* (*2010*) by using the start menu
|
build it. Start up a *Visual* *Studio* *Command* *Prompt* (*2010*) by using the start menu
|
||||||
search (or navigate through the start menu
|
search (or navigate through the start menu
|
||||||
All Programs --\> Microsoft Visual Studio 2010 --\> Visual Studio Tools --\> Visual Studio Command Prompt (2010)).
|
All Programs --\> Microsoft Visual Studio 2010 --\> Visual Studio Tools --\> Visual Studio Command Prompt (2010)).
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Now navigate to the extracted folder and enter it by using this console window. You
|
Now navigate to the extracted folder and enter it by using this console window. You
|
||||||
should have a folder containing files like *Install*, *Make* and so on. Use the *dir* command
|
should have a folder containing files like *Install*, *Make* and so on. Use the *dir* command
|
||||||
@ -216,25 +216,25 @@ libraries). If you do not need the support for some of these you can just freely
|
|||||||
Visual Studio Add-in*. After this you can make and build Qt applications without using the *Qt
|
Visual Studio Add-in*. After this you can make and build Qt applications without using the *Qt
|
||||||
Creator*. Everything is nicely integrated into Visual Studio.
|
Creator*. Everything is nicely integrated into Visual Studio.
|
||||||
|
|
||||||
7. Now start the *CMake (cmake-gui)*. You may again enter it in the start menu search or get it
|
-# Now start the *CMake (cmake-gui)*. You may again enter it in the start menu search or get it
|
||||||
from the All Programs --\> CMake 2.8 --\> CMake (cmake-gui). First, select the directory for the
|
from the All Programs --\> CMake 2.8 --\> CMake (cmake-gui). First, select the directory for the
|
||||||
source files of the OpenCV library (1). Then, specify a directory where you will build the
|
source files of the OpenCV library (1). Then, specify a directory where you will build the
|
||||||
binary files for OpenCV (2).
|
binary files for OpenCV (2).
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Press the Configure button to specify the compiler (and *IDE*) you want to use. Note that in
|
Press the Configure button to specify the compiler (and *IDE*) you want to use. Note that in
|
||||||
some cases you can choose between different compilers for making either 64 bit or 32 bit libraries.
|
some cases you can choose between different compilers for making either 64 bit or 32 bit libraries.
|
||||||
Select the one you use in your application development.
|
Select the one you use in your application development.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
CMake will start up and, based on your system variables, will try to automatically locate as many
|
CMake will start up and, based on your system variables, will try to automatically locate as many
|
||||||
packages as possible. You can modify the packages to use for the build in the WITH --\> WITH_X
|
packages as possible. You can modify the packages to use for the build in the WITH --\> WITH_X
|
||||||
menu points (where *X* is the package abbreviation). Here is a list of current packages you can
|
menu points (where *X* is the package abbreviation). Here is a list of current packages you can
|
||||||
turn on or off:
|
turn on or off:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Select all the packages you want to use and press again the *Configure* button. For an easier
|
Select all the packages you want to use and press again the *Configure* button. For an easier
|
||||||
overview of the build options make sure the *Grouped* option under the binary directory
|
overview of the build options make sure the *Grouped* option under the binary directory
|
||||||
@ -242,9 +242,9 @@ libraries). If you do not need the support for some of these you can just freely
|
|||||||
directories. In these cases CMake will report an error in its output window (located at the
|
directories. In these cases CMake will report an error in its output window (located at the
|
||||||
bottom of the GUI) and set its field values to not-found constants. For example:
|
bottom of the GUI) and set its field values to not-found constants. For example:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
For these you need to manually set the queried directory or file paths. After this press again
|
For these you need to manually set the queried directory or file paths. After this press again
|
||||||
the *Configure* button to see if the value entered by you was accepted or not. Do this until all
|
the *Configure* button to see if the value entered by you was accepted or not. Do this until all
|
||||||
@ -254,7 +254,7 @@ libraries). If you do not need the support for some of these you can just freely
|
|||||||
option will make sure that they are categorized inside directories in the *Solution Explorer*.
|
option will make sure that they are categorized inside directories in the *Solution Explorer*.
|
||||||
It is a must-have feature, if you ask me.
|
It is a must-have feature, if you ask me.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Furthermore, you need to select what part of OpenCV you want to build.
|
Furthermore, you need to select what part of OpenCV you want to build.
|
||||||
|
|
||||||
@ -286,24 +286,24 @@ libraries). If you do not need the support for some of these you can just freely
|
|||||||
IDE at the startup. Now you need to build both the *Release* and the *Debug* binaries. Use the
|
IDE at the startup. Now you need to build both the *Release* and the *Debug* binaries. Use the
|
||||||
drop-down menu on your IDE to change to another of these after building for one of them.
|
drop-down menu on your IDE to change to another of these after building for one of them.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
In the end you can observe the built binary files inside the bin directory:
|
In the end you can observe the built binary files inside the bin directory:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
For the documentation you need to explicitly issue the build commands on the *doc* project for
|
For the documentation you need to explicitly issue the build commands on the *doc* project for
|
||||||
the PDF files and on the *doc_html* for the HTML ones. Each of these will call *Sphinx* to do
|
the PDF files and on the *doc_html* for the HTML ones. Each of these will call *Sphinx* to do
|
||||||
all the hard work. You can find the generated documentation inside the `Build/Doc/_html` for the
|
all the hard work. You can find the generated documentation inside the `Build/Doc/_html` for the
|
||||||
HTML pages and the PDF manuals within `Build/Doc`.
|
HTML pages and the PDF manuals within `Build/Doc`.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
To collect the header and binary files that you will use in your own projects into a
|
To collect the header and binary files that you will use in your own projects into a
|
||||||
separate directory (similarly to how the pre-built binaries ship) you need to explicitly build
|
separate directory (similarly to how the pre-built binaries ship) you need to explicitly build
|
||||||
the *Install* project.
|
the *Install* project.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
This will create an *Install* directory inside the *Build* one collecting all the built binaries
|
This will create an *Install* directory inside the *Build* one collecting all the built binaries
|
||||||
into a single place. Use this only after you built both the *Release* and *Debug* versions.
|
into a single place. Use this only after you built both the *Release* and *Debug* versions.
|
||||||
@ -314,7 +314,7 @@ libraries). If you do not need the support for some of these you can just freely
|
|||||||
If everything is okay the *contours.exe* output should resemble the following image (if
|
If everything is okay the *contours.exe* output should resemble the following image (if
|
||||||
built with Qt support):
|
built with Qt support):
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
@note
|
@note
|
||||||
If you use the GPU module (CUDA libraries) make sure you also upgrade to the latest drivers of
|
If you use the GPU module (CUDA libraries) make sure you also upgrade to the latest drivers of
|
||||||
@ -353,9 +353,9 @@ following new entry (right click in the application to bring up the menu):
|
|||||||
%OPENCV_DIR%\bin
|
%OPENCV_DIR%\bin
|
||||||
@endcode
|
@endcode
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Save it to the registry and you are done. If you ever change the location of your build directories
|
Save it to the registry and you are done. If you ever change the location of your build directories
|
||||||
or want to try out your application with a different build, all you will need to do is update the
|
or want to try out your application with a different build, all you will need to do is update the
|
||||||
|
@ -1,13 +1,13 @@
|
|||||||
How to build applications with OpenCV inside the "Microsoft Visual Studio" {#tutorial_windows_visual_studio_Opencv}
|
How to build applications with OpenCV inside the "Microsoft Visual Studio" {#tutorial_windows_visual_studio_Opencv}
|
||||||
==========================================================================
|
==========================================================================
|
||||||
|
|
||||||
Everything I describe here will apply to the C\\C++ interface of OpenCV. I start out from the
|
Everything I describe here will apply to the `C\C++` interface of OpenCV. I start out from the
|
||||||
assumption that you have read and completed with success the @ref tutorial_windows_install tutorial.
|
assumption that you have read and completed with success the @ref tutorial_windows_install tutorial.
|
||||||
Therefore, before you go any further make sure you have an OpenCV directory that contains the OpenCV
|
Therefore, before you go any further make sure you have an OpenCV directory that contains the OpenCV
|
||||||
header files plus binaries and you have set the environment variables as described here
|
header files plus binaries and you have set the environment variables as described here
|
||||||
@ref tutorial_windows_install_path.
|
@ref tutorial_windows_install_path.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
The OpenCV libraries, distributed by us, on the Microsoft Windows operating system are provided as
|
The OpenCV libraries, distributed by us, on the Microsoft Windows operating system are provided as
|
||||||
Dynamic Link Libraries (*DLLs*). These have the advantage that all the content of the
|
Dynamic Link Libraries (*DLLs*). These have the advantage that all the content of the
|
||||||
@ -58,7 +58,7 @@ create a new solution inside Visual studio by going through the File --\> New --
|
|||||||
selection. Choose *Win32 Console Application* as type. Enter its name and select the path where to
|
selection. Choose *Win32 Console Application* as type. Enter its name and select the path where to
|
||||||
create it. Then in the upcoming dialog make sure you create an empty project.
|
create it. Then in the upcoming dialog make sure you create an empty project.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
The *local* method
|
The *local* method
|
||||||
------------------
|
------------------
|
||||||
@ -75,7 +75,7 @@ you can view and modify them by using the *Property Manger*. You can bring up th
|
|||||||
View --\> Property Pages. Expand it and you can see the existing rule packages (called *Property
|
View --\> Property Pages. Expand it and you can see the existing rule packages (called *Property
|
||||||
Sheets*).
|
Sheets*).
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
The really useful part of these is that you can create a rule package *once* and later just
|
The really useful part of these is that you can create a rule package *once* and later just
|
||||||
add it to your new projects. Create it once and reuse it later. We want to create a new *Property
|
add it to your new projects. Create it once and reuse it later. We want to create a new *Property
|
||||||
@ -83,7 +83,7 @@ Sheet* that will contain all the rules that the compiler and linker needs to kno
|
|||||||
need a separate one for the Debug and the Release Builds. Start up with the Debug one as shown in
|
need a separate one for the Debug and the Release Builds. Start up with the Debug one as shown in
|
||||||
the image below:
|
the image below:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Use for example the *OpenCV_Debug* name. Then by selecting the sheet Right Click --\> Properties.
|
Use for example the *OpenCV_Debug* name. Then by selecting the sheet Right Click --\> Properties.
|
||||||
In the following I will show how to set the OpenCV rules locally, as I find it unnecessary to pollute
|
In the following I will show how to set the OpenCV rules locally, as I find it unnecessary to pollute
|
||||||
@ -93,7 +93,7 @@ group, you should add any .c/.cpp file to the project.
|
|||||||
@code{.bash}
|
@code{.bash}
|
||||||
\f$(OPENCV_DIR)\..\..\include
|
\f$(OPENCV_DIR)\..\..\include
|
||||||
@endcode
|
@endcode
|
||||||

|

|
||||||
|
|
||||||
When adding third party libraries settings it is generally a good idea to use the power behind the
|
When adding third party libraries settings it is generally a good idea to use the power behind the
|
||||||
environment variables. The full location of the OpenCV library may change on each system. Moreover,
|
environment variables. The full location of the OpenCV library may change on each system. Moreover,
|
||||||
@ -111,15 +111,15 @@ directory:
|
|||||||
$(OPENCV_DIR)\lib
|
$(OPENCV_DIR)\lib
|
||||||
@endcode
|
@endcode
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Then you need to specify the libraries the linker should look into. To do this go to the
|
Then you need to specify the libraries the linker should look into. To do this go to the
|
||||||
Linker --\> Input and under the *"Additional Dependencies"* entry add the name of all modules which
|
Linker --\> Input and under the *"Additional Dependencies"* entry add the name of all modules which
|
||||||
you want to use:
|
you want to use:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
The names of the libraries are as follows:
|
The names of the libraries are as follows:
|
||||||
@code{.bash}
|
@code{.bash}
|
||||||
@ -150,19 +150,19 @@ click ok to save and do the same with a new property inside the Release rule sec
|
|||||||
omit the *d* letters from the library names and to save the property sheets with the save icon above
|
omit the *d* letters from the library names and to save the property sheets with the save icon above
|
||||||
them.
|
them.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
You can find your property sheets inside your projects directory. At this point it is a wise
|
You can find your property sheets inside your projects directory. At this point it is a wise
|
||||||
decision to back them up into some special directory, to always have them at hand in the future,
|
decision to back them up into some special directory, to always have them at hand in the future,
|
||||||
whenever you create an OpenCV project. Note that for Visual Studio 2010 the file extension is
|
whenever you create an OpenCV project. Note that for Visual Studio 2010 the file extension is
|
||||||
*props*, while for 2008 this is *vsprops*.
|
*props*, while for 2008 this is *vsprops*.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Next time when you make a new OpenCV project just use the "Add Existing Property Sheet..." menu
|
Next time when you make a new OpenCV project just use the "Add Existing Property Sheet..." menu
|
||||||
entry inside the Property Manager to easily add the OpenCV build rules.
|
entry inside the Property Manager to easily add the OpenCV build rules.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
The *global* method
|
The *global* method
|
||||||
-------------------
|
-------------------
|
||||||
@ -175,12 +175,12 @@ by using for instance: a Property page.
|
|||||||
In Visual Studio 2008 you can find this under the:
|
In Visual Studio 2008 you can find this under the:
|
||||||
Tools --\> Options --\> Projects and Solutions --\> VC++ Directories.
|
Tools --\> Options --\> Projects and Solutions --\> VC++ Directories.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
In Visual Studio 2010 this has been moved to a global property sheet which is automatically added to
|
In Visual Studio 2010 this has been moved to a global property sheet which is automatically added to
|
||||||
every project you create:
|
every project you create:
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
The process is the same as described for the local approach. Just add the include directories
|
The process is the same as described for the local approach. Just add the include directories
|
||||||
by using the environment variable *OPENCV_DIR*.
|
by using the environment variable *OPENCV_DIR*.
|
||||||
@@ -210,7 +210,7 @@ OpenCV logo](samples/data/opencv-logo.png). Before starting up the application m
the image file in your current working directory. Modify the image file name inside the code to try
it out on other images too. Run it and voilà:



Command line arguments with Visual Studio
-----------------------------------------
@@ -230,7 +230,7 @@ with the console window on the Microsoft Windows many people come to use it almo
adding the same argument again and again while you are testing your application is, somewhat, a
cumbersome task. Luckily, Visual Studio has a menu to automate all this:



Specify here the names of the inputs, and when you start your application from the Visual Studio
environment you have automatic argument passing. In the next introductory tutorial you'll see an
@@ -10,10 +10,10 @@ Prerequisites

This tutorial assumes that you have the following available:

-# Visual Studio 2012 Professional (or better) with Update 1 installed. Update 1 can be downloaded
[here](http://www.microsoft.com/en-us/download/details.aspx?id=35774).
-# An OpenCV installation on your Windows machine (Tutorial: @ref tutorial_windows_install).
-# Ability to create and build OpenCV projects in Visual Studio (Tutorial: @ref tutorial_windows_visual_studio_Opencv).

Installation
------------
@@ -98,13 +98,13 @@ Launch the program in the debugger (Debug --\> Start Debugging, or hit *F5*). Wh
hit, the program is paused and Visual Studio displays a yellow instruction pointer at the
breakpoint:



Now you can inspect the state of your program. For example, you can bring up the *Locals* window
(Debug --\> Windows --\> Locals), which will show the names and values of the variables in the
current scope:



Note that the built-in *Locals* window will display text only. This is where the Image Watch plug-in
comes in. Image Watch is like another *Locals* window, but with an image viewer built into it. To
@@ -114,7 +114,7 @@ had Image Watch open, and where it was located between debugging sessions. This
to do this once--the next time you start debugging, Image Watch will be back where you left it.
Here's what the docked Image Watch window looks like at our breakpoint:



The radio button at the top left (*Locals/Watch*) selects what is shown in the *Image List* below:
*Locals* lists all OpenCV image objects in the current scope (this list is automatically populated).
@@ -128,7 +128,7 @@ If an image has a thumbnail, left-clicking on that image will select it for deta
*Image Viewer* on the right. The viewer lets you pan (drag mouse) and zoom (mouse wheel). It also
displays the pixel coordinate and value at the current mouse position.



Note that the second image in the list, *edges*, is shown as "invalid". This indicates that some
data members of this image object have corrupt or invalid values (for example, a negative image
@@ -146,18 +146,18 @@ Now assume you want to do a visual sanity check of the *cv::Canny()* implementat
*edges* image into the viewer by selecting it in the *Image List* and zoom into a region with a
clearly defined edge:



Right-click on the *Image Viewer* to bring up the view context menu and enable Link Views (a check
box next to the menu item indicates whether the option is enabled).



The Link Views feature keeps the view region fixed when flipping between images of the same size. To
see how this works, select the input image from the image list--you should now see the corresponding
zoomed-in region in the input image:



You may also switch back and forth between viewing input and edges with your up/down cursor keys.
That way you can easily verify that the detected edges line up nicely with the data in the input
@@ -168,12 +168,12 @@ More ...

Image Watch has a number of more advanced features, such as

-# pinning images to a *Watch* list for inspection across scopes or between debugging sessions
-# clamping, thresholding, or diff'ing images directly inside the Watch window
-# comparing an in-memory image against a reference image from a file

Please refer to the online [Image Watch
Documentation](http://go.microsoft.com/fwlink/?LinkId=285461) for details--you can also get to the
documentation page by clicking on the *Help* link in the Image Watch window:


@@ -9,46 +9,45 @@ In this tutorial we will learn how to:
- Link OpenCV framework with Xcode
- Write a simple Hello World application using OpenCV and Xcode.

Linking OpenCV iOS
------------------

Follow this step-by-step guide to link OpenCV to iOS.

-# Create a new Xcode project.
-# Now we need to link *opencv2.framework* with Xcode. Select the project Navigator in the left
hand panel and click on the project name.
-# Under TARGETS click on Build Phases. Expand the Link Binary With Libraries option.
-# Click on Add others, go to the directory where *opencv2.framework* is located, and click open.
-# Now you can start writing your application.


Hello OpenCV iOS Application
----------------------------

Now we will learn how to write a simple Hello World application in Xcode using OpenCV.

- Link your project with OpenCV as shown in the previous section.
- Open the file named *NameOfProject-Prefix.pch* (replace NameOfProject with the name of your
project) and add the following lines of code.
@code{.m}
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
@endcode


- Add the following lines of code to the viewDidLoad method in ViewController.m.
@code{.m}
UIAlertView * alert = [[UIAlertView alloc] initWithTitle:@"Hello!" message:@"Welcome to OpenCV" delegate:self cancelButtonTitle:@"Continue" otherButtonTitles:nil];
[alert show];
@endcode


- You are good to run the project.

Output
------


@@ -6,14 +6,14 @@ Goal

In this tutorial we will learn how to do basic image processing using OpenCV in iOS.

Introduction
------------

In *OpenCV* all the image processing operations are usually carried out on the *Mat* structure. In
iOS, however, to render an image on screen it has to be an instance of the *UIImage* class. To
convert an *OpenCV Mat* to a *UIImage* we use the *Core Graphics* framework available in iOS. Below
is the code needed to convert back and forth between Mats and UIImages.
@code{.m}
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
@@ -37,7 +37,7 @@ is the code needed to covert back and forth between Mat's and UIImage's.
    return cvMat;
}
@endcode
@code{.m}
- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
@@ -63,12 +63,12 @@ is the code needed to covert back and forth between Mat's and UIImage's.
@endcode
After the processing we need to convert it back to UIImage. The code below can handle both
gray-scale and color image conversions (determined by the number of channels in the *if* statement).
@code{.m}
cv::Mat greyMat;
cv::cvtColor(inputMat, greyMat, COLOR_BGR2GRAY);
@endcode
After the processing we need to convert it back to UIImage.
@code{.m}
-(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
@@ -106,10 +106,11 @@ After the processing we need to convert it back to UIImage.
    return finalImage;
}
@endcode

Output
------


Check out an instance of the running code with more image effects on
[YouTube](http://www.youtube.com/watch?v=Ko3K_xdhJ1I).
@@ -119,4 +120,3 @@ Check out an instance of running code with more Image Effects on
<iframe width="560" height="350" src="http://www.youtube.com/embed/Ko3K_xdhJ1I" frameborder="0" allowfullscreen></iframe>
</div>
\endhtmlonly

@@ -14,11 +14,11 @@ Including OpenCV library in your iOS project

The OpenCV library comes as a so-called framework, which you can directly drag-and-drop into your
Xcode project. Download the latest binary from
<http://sourceforge.net/projects/opencvlibrary/files/opencv-ios/>. Alternatively, follow the
guide @ref tutorial_ios_install to compile the framework manually. Once you have the framework, just
drag-and-drop it into Xcode:


You also have to locate the prefix header that is used for all header files in the project. The file
is typically located at "ProjectName/Supporting Files/ProjectName-Prefix.pch". There, you have to add
@@ -54,7 +54,7 @@ First, we create a simple iOS project, for example Single View Application. Then
a UIImageView and UIButton to start the camera and display the video frames. The storyboard could
look like this:



Make sure to add and connect the IBOutlets and IBActions to the corresponding ViewController:
@code{.objc}
@@ -127,7 +127,7 @@ should have at least the following frameworks in your project:
- UIKit
- Foundation



#### Processing frames

@@ -23,26 +23,28 @@ In which sense is the hyperplane obtained optimal? Let's consider the following
For a linearly separable set of 2D-points which belong to one of two classes, find a separating
straight line.



@note In this example we deal with lines and points in the Cartesian plane instead of hyperplanes
and vectors in a high dimensional space. This is a simplification of the problem. It is important to
understand that this is done only because our intuition is better built from examples that are easy
to imagine. However, the same concepts apply to tasks where the examples to classify lie in a space
whose dimension is higher than two.

In the above picture you can see that there exist multiple
lines that offer a solution to the problem. Is any of them better than the others? We can
intuitively define a criterion to estimate the worth of the lines:

- A line is bad if it passes too close to the points because it will be noise sensitive and it will
not generalize correctly. Therefore, our goal should be to find the line passing as far as
possible from all points.

Then, the operation of the SVM algorithm is based on finding the hyperplane that gives the largest
minimum distance to the training examples. Twice this distance is known as the
**margin** in SVM theory. Therefore, the optimal separating hyperplane *maximizes* the margin
of the training data.


How is the optimal hyperplane computed?
---------------------------------------
@@ -55,7 +57,9 @@ where \f$\beta\f$ is known as the *weight vector* and \f$\beta_{0}\f$ as the *bi

@sa A more in-depth description of this and of hyperplanes can be found in section 4.5 (*Separating
Hyperplanes*) of the book *Elements of Statistical Learning* by T. Hastie, R. Tibshirani and J. H.
Friedman.

The optimal hyperplane can be represented in an infinite number of different ways by
scaling \f$\beta\f$ and \f$\beta_{0}\f$. As a matter of convention, among all the possible
representations of the hyperplane, the one chosen is

@@ -99,7 +103,7 @@ Source Code
Explanation
-----------

-# **Set up the training data**

The training data of this exercise is formed by a set of labeled 2D-points that belong to one of
two different classes; one of the classes consists of one point and the other of three points.
@@ -115,7 +119,7 @@ Explanation
Mat labelsMat (4, 1, CV_32FC1, labels);
@endcode

-# **Set up SVM's parameters**

In this tutorial we have introduced the theory of SVMs in the simplest case, when the
training examples are spread into two classes that are linearly separable. However, SVMs can be
@@ -149,7 +153,7 @@ Explanation
fewer steps even if the optimal hyperplane has not been computed yet. This
parameter is defined in the structure @ref cv::cvTermCriteria.

-# **Train the SVM**

We call the method
[CvSVM::train](http://docs.opencv.org/modules/ml/doc/support_vector_machines.html#cvsvm-train)
@@ -159,7 +163,7 @@ Explanation
SVM.train(trainingDataMat, labelsMat, Mat(), Mat(), params);
@endcode

-# **Regions classified by the SVM**

The method @ref cv::ml::SVM::predict is used to classify an input sample using a trained SVM. In
this example we have used this method in order to color the space depending on the prediction done
@@ -183,7 +187,7 @@ Explanation
}
@endcode

-# **Support vectors**

We use here a couple of methods to obtain information about the support vectors.
The method @ref cv::ml::SVM::getSupportVectors obtains all of the support
@@ -209,4 +213,4 @@ Results
optimal separating hyperplane.
- Finally, the support vectors are shown using gray rings around the training examples.


@@ -61,11 +61,13 @@ region. The following picture shows non-linearly separable training data from tw
separating hyperplane and the distances to their correct regions of the samples that are
misclassified.



@note Only the distances of the samples that are misclassified are shown in the picture. The
distances of the rest of the samples are zero since they already lie in their correct decision
region.

The red and blue lines that appear in the picture are the margins to each one of the
decision regions. It is very **important** to realize that each of the \f$\xi_{i}\f$ goes from a
misclassified training sample to the margin of its appropriate region.

@ -93,13 +95,10 @@ or [download it from here ](samples/cpp/tutorial_code/ml/non_linear_svms/non_lin
|
|||||||
|
|
||||||
@includelineno cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp
|
@includelineno cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp
|
||||||
|
|
||||||
lines
|
|
||||||
1-12, 23-24, 27-
|
|
||||||
|
|
||||||
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
1. **Set up the training data**
|
-# **Set up the training data**
|
||||||
|
|
||||||
The training data of this exercise is formed by a set of labeled 2D-points that belong to one of
|
The training data of this exercise is formed by a set of labeled 2D-points that belong to one of
|
||||||
two different classes. To make the exercise more appealing, the training data is generated
|
two different classes. To make the exercise more appealing, the training data is generated
|
||||||
@@ -140,7 +139,7 @@ Explanation
rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(HEIGHT));
@endcode

-# **Set up SVM's parameters**

@sa
In the previous tutorial @ref tutorial_introduction_to_svm there is an explanation of the attributes of the
@@ -161,12 +160,13 @@ Explanation
of obtaining a solution close to the one intuitively expected. However, we recommend getting a
better insight into the problem by making adjustments to this parameter.



@note Here there are just a few points in the overlapping region between the classes. By giving a smaller value to **FRAC_LINEAR_SEP**, the density of points can be increased and the impact of the parameter **CvSVM::C_SVC** explored more deeply.

- *Termination Criteria of the algorithm*. The maximum number of iterations has to be
increased considerably in order to solve correctly a problem with non-linearly separable
training data. In particular, we have increased this value by five orders of magnitude.

-# **Train the SVM**

We call the method @ref cv::ml::SVM::train to build the SVM model. Be aware that the training
process may take quite a long time. Have patience when you run the program.
@@ -175,7 +175,7 @@ Explanation
svm.train(trainData, labels, Mat(), Mat(), params);
@endcode

-# **Show the Decision Regions**

The method @ref cv::ml::SVM::predict is used to classify an input sample using a trained SVM. In
this example we have used this method in order to color the space depending on the prediction done
@@ -195,7 +195,7 @@ Explanation
}
@endcode
|
|
||||||
5. **Show the training data**
|
-# **Show the training data**
|
||||||
|
|
||||||
The method @ref cv::circle is used to show the samples that compose the training data. The samples
|
The method @ref cv::circle is used to show the samples that compose the training data. The samples
|
||||||
of the class labeled with 1 are shown in light green and in light blue the samples of the class
|
of the class labeled with 1 are shown in light green and in light blue the samples of the class
|
||||||
}
@endcode

-# **Support vectors**

We use here a couple of methods to obtain information about the support vectors. The method
@ref cv::ml::SVM::getSupportVectors obtains all support vectors.
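To make the prediction loop described above concrete, here is a minimal, OpenCV-free sketch of coloring a grid of points by predicted class; `predict` below is a hypothetical linear stand-in for the trained SVM's @ref cv::ml::SVM::predict, not the tutorial's actual model:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the trained SVM: a fixed linear decision function.
// In the tutorial this role is played by svm.predict on a 1x2 sample Mat.
int predict(float x, float y)
{
    return (2.0f * x - y > 0.0f) ? 1 : -1;
}

// Color every point of a width x height grid by the predicted class:
// class 1 -> 1 (light green in the tutorial), class -1 -> 2 (light blue).
std::vector<int> colorDecisionRegions(int width, int height)
{
    std::vector<int> image(static_cast<std::size_t>(width) * height);
    for (int i = 0; i < height; ++i)
        for (int j = 0; j < width; ++j)
            image[static_cast<std::size_t>(i) * width + j] =
                (predict(static_cast<float>(j), static_cast<float>(i)) == 1) ? 1 : 2;
    return image;
}
```

The tutorial does exactly this per pixel, building a `Mat_<float>(1,2)` sample from the pixel coordinates instead of passing two floats.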
and some blue points lie on the green one.
- Finally the support vectors are shown using gray rings around the training examples.

![](images/svm_non_linear_result.png)

You can watch a runtime instance of this [on YouTube](https://www.youtube.com/watch?v=vFv2yPcSo-Q).
Result
------

-# Here is the result of running the code above and using as input the video stream of a built-in
webcam:

![](images/Cascade_Classifier_Tutorial_Result_Haar.jpg)

Remember to copy the files *haarcascade_frontalface_alt.xml* and
*haarcascade_eye_tree_eyeglasses.xml* into your current directory. They are located in
*opencv/data/haarcascades*.

-# This is the result of using the file *lbpcascade_frontalface.xml* (LBP trained) for the face
detection. For the eyes we keep using the file used in the tutorial.

![](images/Cascade_Classifier_Tutorial_Result_LBP.jpg)
Exposure sequence
-----------------

![](images/memorial.png)

Source Code
-----------

@includelineno cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp

Explanation
-----------

-# **Load images and exposure times**
@code{.cpp}
vector<Mat> images;
vector<float> times;
loadExposureSeq(argv[1], images, times);
@endcode
First we load input images and exposure times from a user-defined folder. The folder should
contain images and *list.txt* - a file that contains file names and inverse exposure times.

For our image sequence the list is the following:
@code{.none}
memorial00.png 0.03125
memorial01.png 0.0625
...
memorial15.png 1024
@endcode
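The parsing that a loader such as *loadExposureSeq* has to perform on *list.txt* can be sketched without OpenCV; `parseExposureList` below is a hypothetical helper that only illustrates the format (one file name plus one inverse exposure time per line), not the tutorial's actual function:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Parse the contents of list.txt: each line holds "filename inverse_exposure".
// Since the file stores inverse exposure times, the loader keeps 1/value.
void parseExposureList(const std::string& listText,
                       std::vector<std::string>& names,
                       std::vector<float>& times)
{
    std::istringstream stream(listText);
    std::string name;
    float inverseTime = 0.0f;
    while (stream >> name >> inverseTime) {
        names.push_back(name);
        times.push_back(1.0f / inverseTime);
    }
}
```

For the first list entry above, `memorial00.png 0.03125`, this yields an exposure time of 32 seconds.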
-# **Estimate camera response**
@code{.cpp}
Mat response;
Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
calibrate->process(images, response, times);
@endcode
It is necessary to know the camera response function (CRF) for many HDR construction algorithms.
We use one of the calibration algorithms to estimate the inverse CRF for all 256 pixel values.

-# **Make HDR image**
@code{.cpp}
Mat hdr;
Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
merge_debevec->process(images, hdr, times, response);
@endcode
We use Debevec's weighting scheme to construct an HDR image using the response calculated in the
previous item.
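The idea behind Debevec's weighting can be sketched for a single pixel, assuming a linear camera response; `hatWeight` and `mergePixel` below are hypothetical simplifications (the real MergeDebevec works in the log domain with the recovered response curve):

```cpp
#include <cstddef>
#include <vector>

// Triangular ("hat") weight: mid-range values are trusted most, values near
// 0 (underexposed) or 255 (overexposed) contribute little.
float hatWeight(int z)
{
    return static_cast<float>(z <= 127 ? z : 255 - z);
}

// Merge one pixel across exposures, assuming a linear response: each shot
// gives an irradiance estimate z / t, combined by a weighted average.
float mergePixel(const std::vector<int>& values, const std::vector<float>& times)
{
    float sum = 0.0f;
    float weightSum = 0.0f;
    for (std::size_t k = 0; k < values.size(); ++k) {
        const float w = hatWeight(values[k]);
        sum += w * (static_cast<float>(values[k]) / times[k]);
        weightSum += w;
    }
    return weightSum > 0.0f ? sum / weightSum : 0.0f;
}
```

A pixel that reads 100 after 1 second and 200 after 2 seconds yields the same irradiance estimate from both shots, so the merged value is that estimate.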
-# **Tonemap HDR image**
@code{.cpp}
Mat ldr;
Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
tonemap->process(hdr, ldr);
@endcode
Since we want to see our results on a common LDR display we have to map our HDR image to the
8-bit range, preserving most details. This is the main goal of tonemapping methods. We use a
tonemapper with bilateral filtering and set 2.2 as the value for gamma correction.

-# **Perform exposure fusion**
@code{.cpp}
Mat fusion;
Ptr<MergeMertens> merge_mertens = createMergeMertens();
merge_mertens->process(images, fusion);
@endcode
There is an alternative way to merge our exposures in case we don't need an HDR image. This
process is called exposure fusion and produces an LDR image that doesn't require gamma
correction. It also doesn't use the exposure values of the photographs.

-# **Write results**
@code{.cpp}
imwrite("fusion.png", fusion * 255);
imwrite("ldr.png", ldr * 255);
imwrite("hdr.hdr", hdr);
@endcode
Now it's time to look at the results. Note that an HDR image can't be stored in one of the
common image formats, so we save it to a Radiance image (.hdr). Also, all HDR imaging functions
return results in the [0, 1] range, so we should multiply the result by 255.
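The gamma correction and the [0, 1] to 8-bit scaling mentioned above can be sketched per pixel; this is a hypothetical simplification that omits the bilateral filtering step of the Durand tonemapper:

```cpp
#include <algorithm>
#include <cmath>

// Map a linear HDR intensity to an 8-bit value: clamp to [0, 1], apply
// gamma correction, then scale to [0, 255] as done before imwrite.
int tonemapTo8Bit(float linear, float gamma = 2.2f)
{
    const float clamped = std::min(1.0f, std::max(0.0f, linear));
    const float corrected = std::pow(clamped, 1.0f / gamma);
    return static_cast<int>(corrected * 255.0f + 0.5f);
}
```

Gamma 1/2.2 brightens mid-tones, which is why tonemapped mid-range values land above the linear midpoint of 128.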
Results
-------

### Tonemapped image

![](images/ldr.png)

### Exposure fusion

![](images/fusion.png)
These tutorials show how to use Viz module effectively.
general, everything that can be considered as background given the characteristics of the
observed scene.

![](images/Background_Subtraction_Tutorial_Scheme.png)

- Background modeling consists of two main steps:

  -# Background Initialization;
  -# Background Update.

  In the first step, an initial model of the background is computed, while in the second step that
  model is updated in order to adapt to possible changes in the scene.

In this tutorial you will learn how to:

-# Read data from videos by using @ref cv::VideoCapture or image sequences by using @ref
   cv::imread ;
-# Create and update the background model by using @ref cv::BackgroundSubtractor class;
-# Get and show the foreground mask by using @ref cv::imshow ;
-# Save the output by using @ref cv::imwrite to quantitatively evaluate the results.

Code
----
In the following you can find the source code. We will let the user choose to process either a video
file or a sequence of images.

Two different methods are used to generate two foreground masks:
-# @ref cv::bgsegm::BackgroundSubtractorMOG
-# @ref cv::BackgroundSubtractorMOG2

The results as well as the input data are shown on the screen.
The source file can be downloaded [here](samples/cpp/tutorial_code/video/bg_sub.cpp).

@includelineno samples/cpp/tutorial_code/video/bg_sub.cpp
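Before walking through the program, the two background modeling steps from the introduction (initialization and update) can be sketched with a simple running average; this hypothetical `RunningAverageModel` is far cruder than the Gaussian-mixture models kept by MOG/MOG2, but `alpha` plays the same role as the learning rate accepted by 'apply':

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// A minimal per-pixel background model: a running average updated with a
// learning rate, plus a fixed threshold for the foreground decision.
struct RunningAverageModel {
    std::vector<float> background; // initialized from the first frame
    float alpha;                   // learning rate, like apply's third argument
    float threshold;               // per-pixel foreground threshold

    RunningAverageModel(const std::vector<float>& firstFrame,
                        float learningRate, float fgThreshold)
        : background(firstFrame), alpha(learningRate), threshold(fgThreshold) {}

    // Classify each pixel (255 = foreground, 0 = background), then update
    // the model toward the current frame, as the tutorial does per frame.
    std::vector<int> apply(const std::vector<float>& frame)
    {
        std::vector<int> mask(frame.size(), 0);
        for (std::size_t i = 0; i < frame.size(); ++i) {
            if (std::fabs(frame[i] - background[i]) > threshold)
                mask[i] = 255;
            background[i] = (1.0f - alpha) * background[i] + alpha * frame[i];
        }
        return mask;
    }
};
```

A pixel that suddenly changes is flagged as foreground, and if the change persists the model gradually absorbs it into the background, which is exactly the adaptation behavior described above.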
Explanation
-----------

We discuss the main parts of the above code:

-# First, three Mat objects are allocated to store the current frame and two foreground masks,
obtained by using two different BS algorithms.
@code{.cpp}
Mat frame; //current frame
Mat fgMaskMOG; //fg mask generated by MOG method
Mat fgMaskMOG2; //fg mask generated by MOG2 method
@endcode
-# Two @ref cv::BackgroundSubtractor objects will be used to generate the foreground masks. In this
example, default parameters are used, but it is also possible to declare specific parameters in
the create function.
@code{.cpp}
pMOG = createBackgroundSubtractorMOG(); //MOG approach
pMOG2 = createBackgroundSubtractorMOG2(); //MOG2 approach
@endcode
-# The command line arguments are analysed. The user can choose between two options:
   - video files (by choosing the option -vid);
   - image sequences (by choosing the option -img).
@code{.cpp}
processImages(argv[2]);
}
@endcode
-# Suppose you want to process a video file. The video is read until the end is reached or the user
presses the 'q' or 'ESC' key.
@code{.cpp}
while( (char)keyboard != 'q' && (char)keyboard != 27 ){
exit(EXIT_FAILURE);
}
@endcode
-# Every frame is used both for calculating the foreground mask and for updating the background. If
you want to change the learning rate used for updating the background model, it is possible to
set a specific learning rate by passing a third parameter to the 'apply' method.
@code{.cpp}
pMOG->apply(frame, fgMaskMOG);
pMOG2->apply(frame, fgMaskMOG2);
@endcode
-# The current frame number can be extracted from the @ref cv::VideoCapture object and stamped in
the top left corner of the current frame. A white rectangle is used to highlight the black
colored frame number.
@code{.cpp}
putText(frame, frameNumberString.c_str(), cv::Point(15, 15),
FONT_HERSHEY_SIMPLEX, 0.5 , cv::Scalar(0,0,0));
@endcode
-# We are ready to show the current input frame and the results.
@code{.cpp}
//show the current frame and the fg masks
imshow("Frame", frame);
imshow("FG Mask MOG", fgMaskMOG);
imshow("FG Mask MOG 2", fgMaskMOG2);
@endcode
-# The same operations listed above can be performed using a sequence of images as input. The
processImage function is called and, instead of using a @ref cv::VideoCapture object, the images
are read by using @ref cv::imread , after determining the correct path for the next frame to
read.
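The way the next frame's path is determined can be sketched as a pure string operation; `nextFramePath` below is a hypothetical helper mirroring the `find_last_of` / `substr` logic of the tutorial's processImages (and, like the tutorial code, it does not preserve zero padding in the frame number):

```cpp
#include <cstddef>
#include <sstream>
#include <string>

// Build the path of the next frame from the current one by splitting it
// into prefix + frame number + extension and incrementing the number.
std::string nextFramePath(const std::string& current)
{
    const std::size_t slash = current.find_last_of("/\\");
    const std::size_t dot = current.find_last_of('.');
    const std::string prefix =
        current.substr(0, slash == std::string::npos ? 0 : slash + 1);
    const std::string suffix = current.substr(dot);
    const std::string numberPart = current.substr(prefix.size(), dot - prefix.size());

    int frameNumber = 0;
    std::istringstream iss(numberPart);
    iss >> frameNumber;

    std::ostringstream next;
    next << prefix << (frameNumber + 1) << suffix;
    return next.str();
}
```

For example, `images/7.png` becomes `images/8.png`; a zero-padded name such as `000123.png` becomes `124.png`, because the number is parsed as an int and re-printed.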
@endcode
The output of the program will look like the following:

![](images/Background_Subtraction_Tutorial_Result_1.png)

- The video file Video_001.avi is part of the [Background Models Challenge
(BMC)](http://bmc.univ-bpclermont.fr/) data set and it can be downloaded from the following link

@endcode
The output of the program will look like the following:

![](images/Background_Subtraction_Tutorial_Result_2.png)

- The sequence of images used in this example is part of the [Background Models Challenge
(BMC)](http://bmc.univ-bpclermont.fr/) dataset and it can be downloaded from the following link
References
----------

- [Background Models Challenge (BMC) website](http://bmc.univ-bpclermont.fr/)
- A Benchmark Dataset for Foreground/Background Extraction @cite vacavant2013benchmark
You can download the code from [here](samples/cpp/tutorial_code/viz/creating_widgets.cpp).

@includelineno samples/cpp/tutorial_code/viz/creating_widgets.cpp
Explanation
|
Explanation
|
||||||
-----------
|
-----------
|
||||||
|
|
||||||
Here is the general structure of the program:
|
Here is the general structure of the program:
|
||||||
|
|
||||||
- Extend Widget3D class to create a new 3D widget.
|
- Extend Widget3D class to create a new 3D widget.
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
class WTriangle : public viz::Widget3D
|
class WTriangle : public viz::Widget3D
|
||||||
{
|
{
|
||||||
public:
|
public:
|
||||||
WTriangle(const Point3f &pt1, const Point3f &pt2, const Point3f &pt3, const viz::Color & color = viz::Color::white());
|
WTriangle(const Point3f &pt1, const Point3f &pt2, const Point3f &pt3, const viz::Color & color = viz::Color::white());
|
||||||
};
|
};
|
||||||
@endcode
|
@endcode
|
||||||
- Assign a VTK actor to the widget.
|
- Assign a VTK actor to the widget.
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
// Store this actor in the widget in order that visualizer can access it
|
// Store this actor in the widget in order that visualizer can access it
|
||||||
viz::WidgetAccessor::setProp(*this, actor);
|
viz::WidgetAccessor::setProp(*this, actor);
|
||||||
@endcode
|
@endcode
|
||||||
- Set color of the widget.
|
- Set color of the widget.
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
// Set the color of the widget. This has to be called after WidgetAccessor.
|
// Set the color of the widget. This has to be called after WidgetAccessor.
|
||||||
setColor(color);
|
setColor(color);
|
||||||
@endcode
|
@endcode
|
||||||
- Construct a triangle widget and display it in the window.
|
- Construct a triangle widget and display it in the window.
|
||||||
@code{.cpp}
|
@code{.cpp}
|
||||||
/// Create a triangle widget
|
/// Create a triangle widget
|
||||||
WTriangle tw(Point3f(0.0,0.0,0.0), Point3f(1.0,1.0,1.0), Point3f(0.0,1.0,0.0), viz::Color::red());
|
WTriangle tw(Point3f(0.0,0.0,0.0), Point3f(1.0,1.0,1.0), Point3f(0.0,1.0,0.0), viz::Color::red());
|
||||||
|
|
||||||
|
/// Show widget in the visualizer window
|
||||||
|
myWindow.showWidget("TRIANGLE", tw);
|
||||||
|
@endcode
|
||||||
|
|
||||||
/// Show widget in the visualizer window
|
|
||||||
myWindow.showWidget("TRIANGLE", tw);
|
|
||||||
@endcode
|
|

Results
-------

Here is the result of the program.


Code
----

You can download the code from [here ](samples/cpp/tutorial_code/viz/launching_viz.cpp).
@includelineno samples/cpp/tutorial_code/viz/launching_viz.cpp

Explanation
-----------

Here is the general structure of the program:

- Create a window.
    @code{.cpp}
    /// Create a window
    viz::Viz3d myWindow("Viz Demo");
    @endcode
- Start the event loop. This event loop will run until the user terminates it by pressing **e**,
    **E**, **q** or **Q**.
    @code{.cpp}
    /// Start event loop
    myWindow.spin();
    @endcode
- Access the same window via its name. Since windows are implicitly shared, **sameWindow** is
    exactly the same as **myWindow**. If the name does not exist, a new window is created.
    @code{.cpp}
    /// Access window via its name
    viz::Viz3d sameWindow = viz::get("Viz Demo");
    @endcode
- Start a controlled event loop. Once it starts, **wasStopped** is set to false. Inside the while
    loop, **spinOnce** is called in each iteration to keep the event loop from stopping completely.
    Inside the while loop, the user can execute other statements, including ones that interact with
    the window.
    @code{.cpp}
    /// Event loop is over when pressed q, Q, e, E
    /// Start event loop once for 1 millisecond
    sameWindow.spinOnce(1, true);
    while(!sameWindow.wasStopped())
    {
        /// Interact with window

        /// Event loop for 1 millisecond
        sameWindow.spinOnce(1, true);
    }
    @endcode

Results
-------

Here is the result of the program.


Code
----

You can download the code from [here ](samples/cpp/tutorial_code/viz/transformations.cpp).
@includelineno samples/cpp/tutorial_code/viz/transformations.cpp

Explanation
-----------

Here is the general structure of the program:

- Create a visualization window.
    @code{.cpp}
    /// Create a window
    viz::Viz3d myWindow("Transformations");
    @endcode
- Get the camera pose from the camera position, camera focal point and y direction.
    @code{.cpp}
    /// Let's assume camera has the following properties
    Point3f cam_pos(3.0f,3.0f,3.0f), cam_focal_point(3.0f,3.0f,2.0f), cam_y_dir(-1.0f,0.0f,0.0f);

    /// We can get the pose of the cam using makeCameraPose
    Affine3f cam_pose = viz::makeCameraPose(cam_pos, cam_focal_point, cam_y_dir);
    @endcode
- Obtain the transform matrix knowing the axes of the camera coordinate system.
    @code{.cpp}
    /// We can get the transformation matrix from camera coordinate system to global using
    /// - makeTransformToGlobal. We need the axes of the camera
    Affine3f transform = viz::makeTransformToGlobal(Vec3f(0.0f,-1.0f,0.0f), Vec3f(-1.0f,0.0f,0.0f), Vec3f(0.0f,0.0f,-1.0f), cam_pos);
    @endcode
- Create a cloud widget from the bunny.ply file.
    @code{.cpp}
    /// Create a cloud widget.
    Mat bunny_cloud = cvcloud_load();
    viz::WCloud cloud_widget(bunny_cloud, viz::Color::green());
    @endcode
- Given the pose in the camera coordinate system, estimate the global pose.
    @code{.cpp}
    /// Pose of the widget in camera frame
    Affine3f cloud_pose = Affine3f().translate(Vec3f(0.0f,0.0f,3.0f));
    /// Pose of the widget in global frame
    Affine3f cloud_pose_global = transform * cloud_pose;
    @endcode
- If the view point is set to be global, visualize the camera coordinate frame and viewing frustum.
    @code{.cpp}
    /// Visualize camera frame
    if (!camera_pov)
    {
        viz::WCameraPosition cpw(0.5); // Coordinate axes
        viz::WCameraPosition cpw_frustum(Vec2f(0.889484, 0.523599)); // Camera frustum
        myWindow.showWidget("CPW", cpw, cam_pose);
        myWindow.showWidget("CPW_FRUSTUM", cpw_frustum, cam_pose);
    }
    @endcode
- Visualize the cloud widget with the estimated global pose.
    @code{.cpp}
    /// Visualize widget
    myWindow.showWidget("bunny", cloud_widget, cloud_pose_global);
    @endcode
- If the view point is set to be the camera's, set the viewer pose to **cam_pose**.
    @code{.cpp}
    /// Set the viewer pose to that of camera
    if (camera_pov)
        myWindow.setViewerPose(cam_pose);
    @endcode

Results
-------

-# Here is the result from the camera point of view.


-# Here is the result from the global point of view.


Code
----

You can download the code from [here ](samples/cpp/tutorial_code/viz/widget_pose.cpp).
@includelineno samples/cpp/tutorial_code/viz/widget_pose.cpp

Explanation
-----------

Here is the general structure of the program:

- Create a visualization window.
    @code{.cpp}
    /// Create a window
    viz::Viz3d myWindow("Coordinate Frame");
    @endcode
- Show the coordinate axes in the window using CoordinateSystemWidget.
    @code{.cpp}
    /// Add coordinate axes
    myWindow.showWidget("Coordinate Widget", viz::WCoordinateSystem());
    @endcode
- Display a line representing the axis (1,1,1).
    @code{.cpp}
    /// Add line to represent (1,1,1) axis
    viz::WLine axis(Point3f(-1.0f,-1.0f,-1.0f), Point3f(1.0f,1.0f,1.0f));
    axis.setRenderingProperty(viz::LINE_WIDTH, 4.0);
    myWindow.showWidget("Line Widget", axis);
    @endcode
- Construct a cube.
    @code{.cpp}
    /// Construct a cube widget
    viz::WCube cube_widget(Point3f(0.5,0.5,0.0), Point3f(0.0,0.0,-0.5), true, viz::Color::blue());
    cube_widget.setRenderingProperty(viz::LINE_WIDTH, 4.0);
    myWindow.showWidget("Cube Widget", cube_widget);
    @endcode
- Create a rotation matrix from a Rodrigues vector.
    @code{.cpp}
    /// Rotate around (1,1,1)
    rot_vec.at<float>(0,0) += CV_PI * 0.01f;
    rot_vec.at<float>(0,1) += CV_PI * 0.01f;
    rot_vec.at<float>(0,2) += CV_PI * 0.01f;

    ...

    Mat rot_mat;
    Rodrigues(rot_vec, rot_mat);
    @endcode
- Use Affine3f to set the pose of the cube.
    @code{.cpp}
    /// Construct pose
    Affine3f pose(rot_mat, Vec3f(translation, translation, translation));
    myWindow.setWidgetPose("Cube Widget", pose);
    @endcode
- Animate the rotation using wasStopped and spinOnce.
    @code{.cpp}
    while(!myWindow.wasStopped())
    {
        ...

        myWindow.spinOnce(1, true);
    }
    @endcode

Results
-------

Here is the result of the program.

\htmlonly
<div align="center">
<iframe width="420" height="315" src="https://www.youtube.com/embed/22HKMN657U0" frameborder="0" allowfullscreen></iframe>
</div>
\endhtmlonly