Doxygen tutorials: cpp done
@@ -824,3 +824,11 @@
journal = {Machine learning},
volume = {10}
}
@inproceedings{vacavant2013benchmark,
title={A benchmark dataset for outdoor foreground/background extraction},
author={Vacavant, Antoine and Chateau, Thierry and Wilhelm, Alexis and Lequi{\`e}vre, Laurent},
booktitle={Computer Vision-ACCV 2012 Workshops},
pages={291--300},
year={2013},
organization={Springer}
}

@@ -96,7 +96,7 @@ on how to do this you can find in the @ref tutorial_file_input_output_with_xml_y
Explanation
-----------

1. **Read the settings.**
-# **Read the settings.**
@code{.cpp}
Settings s;
const string inputSettingsFile = argc > 1 ? argv[1] : "default.xml";
@@ -119,7 +119,7 @@ Explanation
additional post-processing function that checks the validity of the input. Only if all inputs are
good will the *goodInput* variable be true.

2. **Get next input, if it fails or we have enough of them - calibrate**. After this we have a big
-# **Get next input, if it fails or we have enough of them - calibrate**. After this we have a big
loop where we do the following operations: get the next image from the image list, camera or
video file. If this fails or we have enough images, then we run the calibration process. In the
case of an image list we step out of the loop; otherwise the remaining frames will be undistorted (if the
@@ -151,7 +151,7 @@ Explanation
@endcode
For some cameras we may need to flip the input image. Here we do this too.

3. **Find the pattern in the current input**. The formation of the equations I mentioned above aims
-# **Find the pattern in the current input**. The formation of the equations I mentioned above aims
at finding major patterns in the input: in the case of the chessboard these are the corners of the
squares, and for the circles, well, the circles themselves. The position of these will form the
result, which will be written into the *pointBuf* vector.
@@ -212,7 +212,7 @@ Explanation
drawChessboardCorners( view, s.boardSize, Mat(pointBuf), found );
}
@endcode
4. **Show state and result to the user, plus command line control of the application**. This part
-# **Show state and result to the user, plus command line control of the application**. This part
shows text output on the image.
@code{.cpp}
//----------------------------- Output Text ------------------------------------------------
@@ -263,7 +263,7 @@ Explanation
imagePoints.clear();
}
@endcode
5. **Show the distortion removal for the images too**. When you work with an image list it is not
-# **Show the distortion removal for the images too**. When you work with an image list it is not
possible to remove the distortion inside the loop. Therefore, you must do this after the loop.
Taking advantage of this, I'll now expand on the @ref cv::undistort function, which in fact first
calls @ref cv::initUndistortRectifyMap to find transformation matrices and then performs
@@ -291,6 +291,7 @@ Explanation
}
}
@endcode
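
A minimal sketch of the two calls that @ref cv::undistort wraps, assuming the *cameraMatrix* and
*distCoeffs* computed by the calibration above (variable names are illustrative):
@code{.cpp}
Mat map1, map2, undistorted;
// compute the undistortion/rectification maps once ...
initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
    getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0),
    imageSize, CV_16SC2, map1, map2);
// ... then remap every frame with them (cheaper than calling undistort per frame)
remap(view, undistorted, map1, map2, INTER_LINEAR);
@endcode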

The calibration and save
------------------------

@@ -419,6 +420,7 @@ double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix,
return std::sqrt(totalErr/totalPoints); // calculate the arithmetical mean
}
@endcode
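
The per-view error averaged above comes from re-projecting the pattern points with the estimated
pose and comparing them with the detected ones; a compact sketch, assuming the same containers
that @ref cv::calibrateCamera was called with (names are illustrative):
@code{.cpp}
vector<Point2f> projected;
double totalErr = 0; size_t totalPoints = 0;
for( size_t i = 0; i < objectPoints.size(); ++i )
{
    // project the 3D pattern points using the pose estimated for view i
    projectPoints(objectPoints[i], rvecs[i], tvecs[i], cameraMatrix, distCoeffs, projected);
    double err = norm(imagePoints[i], projected, NORM_L2); // distance measured vs. projected
    totalErr    += err*err;
    totalPoints += objectPoints[i].size();
}
double rmsErr = std::sqrt(totalErr/totalPoints);
@endcode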

Results
-------

@@ -444,21 +446,21 @@ images/CameraCalibration/VID5/xx8.jpg
Then passed `images/CameraCalibration/VID5/VID5.XML` as an input in the configuration file. Here's a
chessboard pattern found during the runtime of the application:



After applying the distortion removal we get:



The same works for [this asymmetrical circle pattern ](acircles_pattern.png) by setting the input
width to 4 and height to 11. This time I've used a live camera feed by specifying its ID ("1") for
the input. Here's how a detected pattern should look:



In both cases in the specified output XML/YAML file you'll find the camera and distortion
coefficient matrices:
@code{.cpp}
@code{.xml}
<Camera_Matrix type_id="opencv-matrix">
<rows>3</rows>
<cols>3</cols>

@@ -39,7 +39,7 @@ DLT.
The most common simplification is to assume known calibration parameters, which is the so-called
Perspective-*n*-Point problem:



**Problem Formulation:** Given a set of correspondences between 3D points \f$p_i\f$ expressed in a world
reference frame, and their 2D projections \f$u_i\f$ onto the image, we seek to retrieve the pose (\f$R\f$
@@ -61,7 +61,7 @@ You can find the source code of this tutorial in the

The tutorial consists of two main programs:

1. **Model registration**
-# **Model registration**

This application is intended for those who don't have a 3D textured model of the object to be detected.
You can use this program to create your own textured 3D model. This program only works for planar
@@ -82,9 +82,9 @@ are stored in different lists in a file with YAML format which each row is a dif
technical background on how to store the files can be found in the @ref tutorial_file_input_output_with_xml_yml
tutorial.



2. **Model detection**
-# **Model detection**

The aim of this application is to estimate in real time the object pose given its 3D textured model.

@@ -136,12 +136,13 @@ For example, you can run the application changing the pnp method:
@code{.cpp}
./cpp-tutorial-pnp_detection --method=2
@endcode

Explanation
-----------

Here the code for the real-time application is explained in detail:

1. **Read 3D textured object model and object mesh.**
-# **Read 3D textured object model and object mesh.**

In order to load the textured model I implemented the *class* **Model** which has the function
*load()* that opens a YAML file and takes the stored 3D points with their corresponding descriptors.
@@ -202,7 +203,8 @@ You can also load different model and mesh:
@code{.cpp}
./cpp-tutorial-pnp_detection --mesh=/absolute_path_to_your_mesh.ply --model=/absolute_path_to_your_model.yml
@endcode
2. **Take input from Camera or Video**

-# **Take input from Camera or Video**

Detection requires capturing video. This is done by loading a recorded video, passing the absolute
path of its location on your machine. In order to test the application you can find a recorded
@@ -234,7 +236,8 @@ You can also load different recorded video:
@code{.cpp}
./cpp-tutorial-pnp_detection --video=/absolute_path_to_your_video.mp4
@endcode
3. **Extract ORB features and descriptors from the scene**

-# **Extract ORB features and descriptors from the scene**

The next step is to detect the scene features and extract their descriptors. For this task I
implemented a *class* **RobustMatcher** which has a function for keypoint detection and feature
@@ -258,7 +261,7 @@ rmatcher.setDescriptorExtractor(extractor);
@endcode
The features and descriptors will be computed by the *RobustMatcher* inside the matching function.

4. **Match scene descriptors with model descriptors using Flann matcher**
-# **Match scene descriptors with model descriptors using Flann matcher**

This is the first step in our detection algorithm. The main idea is to match the scene descriptors
with our model descriptors in order to know the 3D coordinates of the found features into the
@@ -369,7 +372,8 @@ not the robust matcher:
@code{.cpp}
./cpp-tutorial-pnp_detection --ratio=0.8 --keypoints=1000 --fast=false
@endcode
5. **Pose estimation using PnP + Ransac**

-# **Pose estimation using PnP + Ransac**

Once we have the 2D and 3D correspondences we have to apply a PnP algorithm in order to estimate the
camera pose. The reason why we have to use @ref cv::solvePnPRansac instead of @ref cv::solvePnP is
@@ -533,7 +537,8 @@ You can also change RANSAC parameters and PnP method:
@code{.cpp}
./cpp-tutorial-pnp_detection --error=0.25 --confidence=0.90 --iterations=250 --method=3
@endcode
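
The actual pose estimation call is elided by the hunks above; a minimal sketch of how
@ref cv::solvePnPRansac can be invoked with these parameters (variable names are illustrative,
and in older OpenCV versions the argument after *reprojectionError* is a minimum inlier count
rather than a confidence):
@code{.cpp}
Mat rvec = Mat::zeros(3, 1, CV_64F);   // output rotation vector
Mat tvec = Mat::zeros(3, 1, CV_64F);   // output translation vector
Mat inliers;                           // indices of correspondences kept by RANSAC
solvePnPRansac( list_points3d, list_points2d, cameraMatrix, distCoeffs,
                rvec, tvec, false, iterationsCount,
                reprojectionError, confidence, inliers, SOLVEPNP_ITERATIVE );
@endcode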
6. **Linear Kalman Filter for bad poses rejection**

-# **Linear Kalman Filter for bad poses rejection**

It is common in the computer vision and robotics fields that after applying detection or tracking
techniques, bad results are obtained due to some sensor errors. In order to avoid these bad
@@ -758,6 +763,7 @@ You can also modify the minimum inliers to update Kalman Filter:
@code{.cpp}
./cpp-tutorial-pnp_detection --inliers=20
@endcode
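
A minimal sketch of how such a filter can be set up with @ref cv::KalmanFilter; the
18-state / 6-measurement layout assumes a constant-acceleration model for position and
orientation, and the exact filling of the model matrices is elided:
@code{.cpp}
int nStates = 18;       // position, velocity and acceleration for x,y,z and the 3 Euler angles
int nMeasurements = 6;  // measured: 3D position + 3 Euler angles
int nInputs = 0;        // no control action
KalmanFilter KF(nStates, nMeasurements, nInputs, CV_64F);
setIdentity(KF.processNoiseCov,     Scalar::all(1e-5)); // process noise
setIdentity(KF.measurementNoiseCov, Scalar::all(1e-4)); // measurement noise
setIdentity(KF.errorCovPost,        Scalar::all(1));    // a-posteriori error covariance
// ... fill KF.transitionMatrix and KF.measurementMatrix with the motion model ...
@endcode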

Results
-------

@@ -73,7 +73,7 @@ int main( int argc, char** argv )
Explanation
-----------

1. Since we are going to perform:
-# Since we are going to perform:

\f[g(x) = (1 - \alpha)f_{0}(x) + \alpha f_{1}(x)\f]

@@ -87,7 +87,7 @@ Explanation
Since we are *adding* *src1* and *src2*, they both have to be of the same size (width and
height) and type.

2. Now we need to generate the `g(x)` image. For this, the function @ref cv::addWeighted comes quite handy:
-# Now we need to generate the `g(x)` image. For this, the function @ref cv::addWeighted comes quite handy:
@code{.cpp}
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
@@ -96,9 +96,9 @@ Explanation
\f[dst = \alpha \cdot src1 + \beta \cdot src2 + \gamma\f]
In this case, `gamma` is the argument \f$0.0\f$ in the code above.
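
Putting the pieces together, a complete minimal blending program might look like this (a sketch;
it assumes the two input files exist and share size and type):
@code{.cpp}
#include <opencv2/opencv.hpp>
using namespace cv;

int main( void )
{
    double alpha = 0.5;                   // weight of the first image
    Mat src1 = imread("LinuxLogo.jpg");
    Mat src2 = imread("WindowsLogo.jpg");
    if( src1.empty() || src2.empty() ) { return -1; }

    Mat dst;
    double beta = 1.0 - alpha;
    addWeighted( src1, alpha, src2, beta, 0.0, dst ); // dst = alpha*src1 + beta*src2 + 0
    imshow( "Linear Blend", dst );
    waitKey(0);
    return 0;
}
@endcode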

3. Create windows, show the images and wait for the user to end the program.
-# Create windows, show the images and wait for the user to end the program.

Result
------



@@ -52,7 +52,7 @@ Code
Explanation
-----------

1. Since we plan to draw two examples (an atom and a rook), we have to create two images and two
-# Since we plan to draw two examples (an atom and a rook), we have to create two images and two
windows to display them.
@code{.cpp}
/// Windows names
@@ -63,7 +63,7 @@ Explanation
Mat atom_image = Mat::zeros( w, w, CV_8UC3 );
Mat rook_image = Mat::zeros( w, w, CV_8UC3 );
@endcode
2. We created functions to draw different geometric shapes. For instance, to draw the atom we used
-# We created functions to draw different geometric shapes. For instance, to draw the atom we used
*MyEllipse* and *MyFilledCircle*:
@code{.cpp}
/// 1. Draw a simple atom:
@@ -77,7 +77,7 @@ Explanation
/// 1.b. Creating circles
MyFilledCircle( atom_image, Point( w/2.0, w/2.0) );
@endcode
3. And to draw the rook we employed *MyLine*, *rectangle* and a *MyPolygon*:
-# And to draw the rook we employed *MyLine*, *rectangle* and a *MyPolygon*:
@code{.cpp}
/// 2. Draw a rook

@@ -98,7 +98,7 @@ Explanation
MyLine( rook_image, Point( w/2, 7*w/8 ), Point( w/2, w ) );
MyLine( rook_image, Point( 3*w/4, 7*w/8 ), Point( 3*w/4, w ) );
@endcode
4. Let's check what is inside each of these functions:
-# Let's check what is inside each of these functions:
- *MyLine*
@code{.cpp}
void MyLine( Mat img, Point start, Point end )
@@ -240,5 +240,5 @@ Result

Compiling and running your program should give you a result like this:




@@ -101,16 +101,16 @@ int main( int argc, char** argv )
Explanation
-----------

1. We begin by creating parameters to save \f$\alpha\f$ and \f$\beta\f$ to be entered by the user:
-# We begin by creating parameters to save \f$\alpha\f$ and \f$\beta\f$ to be entered by the user:
@code{.cpp}
double alpha;
int beta;
@endcode
2. We load an image using @ref cv::imread and save it in a Mat object:
-# We load an image using @ref cv::imread and save it in a Mat object:
@code{.cpp}
Mat image = imread( argv[1] );
@endcode
3. Now, since we will make some transformations to this image, we need a new Mat object to store
-# Now, since we will make some transformations to this image, we need a new Mat object to store
it. Also, we want this to have the following features:

- Initial pixel values equal to zero
@@ -121,7 +121,7 @@ Explanation
We observe that @ref cv::Mat::zeros returns a Matlab-style zero initializer based on
*image.size()* and *image.type()*

4. Now, to perform the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ we will access each
-# Now, to perform the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ we will access each
pixel in the image. Since we are operating with RGB images, we will have three values per pixel (R,
G and B), so we will also access them separately. Here is the piece of code:
@code{.cpp}
@@ -141,7 +141,7 @@ Explanation
integers (if \f$\alpha\f$ is float), we use cv::saturate_cast to make sure the
values are valid.
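
Since the hunk above elides the loop body, here is a sketch of what it looks like, using the
tutorial's *image*, *new_image*, *alpha* and *beta* variables:
@code{.cpp}
for( int y = 0; y < image.rows; y++ )
{
    for( int x = 0; x < image.cols; x++ )
    {
        for( int c = 0; c < 3; c++ )  // the three channels of a color pixel
        {
            new_image.at<Vec3b>(y,x)[c] =
                saturate_cast<uchar>( alpha*( image.at<Vec3b>(y,x)[c] ) + beta );
        }
    }
}
@endcode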

5. Finally, we create windows and show the images, the usual way.
-# Finally, we create windows and show the images, the usual way.
@code{.cpp}
namedWindow("Original Image", 1);
namedWindow("New Image", 1);
@@ -166,7 +166,7 @@ Result

- Running our code and using \f$\alpha = 2.2\f$ and \f$\beta = 50\f$
@code{.bash}
\f$ ./BasicLinearTransforms lena.jpg
$ ./BasicLinearTransforms lena.jpg
Basic Linear Transforms
-------------------------
* Enter the alpha value [1.0-3.0]: 2.2
@@ -175,4 +175,4 @@ Result

- We get this:




@@ -22,10 +22,14 @@ OpenCV source code library.

Here's a sample usage of @ref cv::dft() :

@includelineno cpp/tutorial_code/core/discrete_fourier_transform/discrete_fourier_transform.cpp

lines
1-4, 6, 20-21, 24-79
@dontinclude cpp/tutorial_code/core/discrete_fourier_transform/discrete_fourier_transform.cpp
@until highgui.hpp
@skipline iostream
@skip main
@until {
@skip filename
@until return 0;
@until }

Explanation
-----------
@@ -52,7 +56,7 @@ Fourier Transform too needs to be of a discrete type resulting in a Discrete Fou
(*DFT*). You'll want to use this whenever you need to determine the structure of an image from a
geometrical point of view. Here are the steps to follow (in case of a gray scale input image *I*):

1. **Expand the image to an optimal size**. The performance of a DFT is dependent on the image
-# **Expand the image to an optimal size**. The performance of a DFT is dependent on the image
size. It tends to be the fastest for image sizes that are multiples of the numbers two, three and
five. Therefore, to achieve maximal performance it is generally a good idea to pad border values
to the image to get a size with such traits. The @ref cv::getOptimalDFTSize() returns this
@@ -66,7 +70,7 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
@endcode
The appended pixels are initialized with zero.

2. **Make place for both the complex and the real values**. The result of a Fourier Transform is
-# **Make place for both the complex and the real values**. The result of a Fourier Transform is
complex. This implies that for each image value the result is two image values (one per
component). Moreover, the frequency domain's range is much larger than its spatial counterpart.
Therefore, we usually store these at least in a *float* format. Hence we'll convert our
@@ -76,12 +80,12 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
Mat complexI;
merge(planes, 2, complexI); // Add to the expanded another plane with zeros
@endcode
3. **Make the Discrete Fourier Transform**. An in-place calculation is possible (same input as
-# **Make the Discrete Fourier Transform**. An in-place calculation is possible (same input as
output):
@code{.cpp}
dft(complexI, complexI); // this way the result may fit in the source matrix
@endcode
4. **Transform the real and complex values to magnitude**. A complex number has a real (*Re*) and a
-# **Transform the real and complex values to magnitude**. A complex number has a real (*Re*) and a
complex (imaginary - *Im*) part. The results of a DFT are complex numbers. The magnitude of a
DFT is:

@@ -93,7 +97,7 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
magnitude(planes[0], planes[1], planes[0]);// planes[0] = magnitude
Mat magI = planes[0];
@endcode
5. **Switch to a logarithmic scale**. It turns out that the dynamic range of the Fourier
-# **Switch to a logarithmic scale**. It turns out that the dynamic range of the Fourier
coefficients is too large to be displayed on the screen. We have some small and some high
changing values that we can't observe like this. Therefore the high values will all turn out as
white points, while the small ones as black. To use the gray scale values for visualization
@@ -106,7 +110,7 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
magI += Scalar::all(1); // switch to logarithmic scale
log(magI, magI);
@endcode
6. **Crop and rearrange**. Remember, that at the first step, we expanded the image? Well, it's time
-# **Crop and rearrange**. Remember, that at the first step, we expanded the image? Well, it's time
to throw away the newly introduced values. For visualization purposes we may also rearrange the
quadrants of the result, so that the origin (zero, zero) corresponds with the image center.
@code{.cpp}
@@ -128,13 +132,14 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
q2.copyTo(q1);
tmp.copyTo(q2);
@endcode
7. **Normalize**. This is done again for visualization purposes. We now have the magnitudes,
-# **Normalize**. This is done again for visualization purposes. We now have the magnitudes,
however these are still outside our image display range of zero to one. We normalize our values to
this range using the @ref cv::normalize() function.
@code{.cpp}
normalize(magI, magI, 0, 1, NORM_MINMAX); // Transform the matrix with float values into a
// viewable image form (float between values 0 and 1).
@endcode
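
Chained together, the steps above form a short pipeline; a condensed sketch for a grayscale
input *I* (error handling elided):
@code{.cpp}
Mat padded;                                              // 1. expand to optimal size
int m = getOptimalDFTSize( I.rows );
int n = getOptimalDFTSize( I.cols );
copyMakeBorder(I, padded, 0, m - I.rows, 0, n - I.cols, BORDER_CONSTANT, Scalar::all(0));

Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexI;
merge(planes, 2, complexI);                              // 2. add a zero imaginary plane

dft(complexI, complexI);                                 // 3. in-place DFT

split(complexI, planes);                                 // 4. magnitude = sqrt(Re^2 + Im^2)
magnitude(planes[0], planes[1], planes[0]);
Mat magI = planes[0];

magI += Scalar::all(1);                                  // 5. switch to logarithmic scale
log(magI, magI);

magI = magI(Rect(0, 0, magI.cols & -2, magI.rows & -2)); // 6. crop to even size for the quadrant swap
normalize(magI, magI, 0, 1, NORM_MINMAX);                // 7. stretch to the displayable [0,1] range
@endcode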

Result
------

@@ -147,13 +152,12 @@ image about a text.

In case of the horizontal text:



In case of a rotated text:



You can see that the most influential components of the frequency domain (brightest dots on the
magnitude image) follow the geometric rotation of objects on the image. From this we may calculate
the offset and perform an image rotation to correct eventual misalignments.


@@ -22,10 +22,12 @@ library.

Here's a sample code of how to achieve all the stuff enumerated at the goal list.

@includelineno cpp/tutorial_code/core/file_input_output/file_input_output.cpp
@dontinclude cpp/tutorial_code/core/file_input_output/file_input_output.cpp

lines
1-7, 21-154
@until std;
@skip class MyData
@until return 0;
@until }

Explanation
-----------
@@ -36,7 +38,7 @@ structures you may serialize: *mappings* (like the STL map) and *element sequenc
vector). The difference between these is that in a map every element has a unique name through which
you may access it. For sequences you need to go through them to query a specific item.

1. **XML/YAML File Open and Close.** Before you write any content to such a file you need to open it
-# **XML/YAML File Open and Close.** Before you write any content to such a file you need to open it
and at the end to close it. The XML/YAML data structure in OpenCV is @ref cv::FileStorage . To
specify which file this structure binds to on your hard drive, you can use either its
constructor or its *open()* function:
@@ -56,7 +58,7 @@ you may access it. For sequences you need to go through them to query a specific
@code{.cpp}
fs.release(); // explicit close
@endcode
2. **Input and Output of text and numbers.** The data structure uses the same \<\< output operator
-# **Input and Output of text and numbers.** The data structure uses the same \<\< output operator
as the STL library. For outputting any type of data structure we first need to specify its
name. We do this by simply printing out its name. For basic types you may follow
this with the print of the value:
@@ -70,7 +72,7 @@ you may access it. For sequences you need to go through them to query a specific
fs["iterationNr"] >> itNr;
itNr = (int) fs["iterationNr"];
@endcode
3. **Input/Output of OpenCV Data structures.** Well, these behave exactly like the basic C++
-# **Input/Output of OpenCV Data structures.** Well, these behave exactly like the basic C++
types:
@code{.cpp}
Mat R = Mat_<uchar >::eye (3, 3),
@@ -82,7 +84,7 @@ you may access it. For sequences you need to go through them to query a specific
fs["R"] >> R; // Read cv::Mat
fs["T"] >> T;
@endcode
4. **Input/Output of vectors (arrays) and associative maps.** As I mentioned beforehand, we can
-# **Input/Output of vectors (arrays) and associative maps.** As I mentioned beforehand, we can
output maps and sequences (array, vector) too. Again we first print the name of the variable and
then we have to specify if our output is a sequence or a map.

@@ -121,7 +123,7 @@ you may access it. For sequences you need to go through them to query a specific
cout << "Two " << (int)(n["Two"]) << "; ";
cout << "One " << (int)(n["One"]) << endl << endl;
@endcode
5. **Read and write your own data structures.** Suppose you have a data structure such as:
-# **Read and write your own data structures.** Suppose you have a data structure such as:
@code{.cpp}
class MyData
{
@@ -180,6 +182,7 @@ you may access it. For sequences you need to go through them to query a specific
fs["NonExisting"] >> m; // Do not add a fs << "NonExisting" << m command for this to work
cout << endl << "NonExisting = " << endl << m << endl;
@endcode
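
As a compact illustration of the points above, a write-then-read round trip might look like this
(a sketch; the file name and keys are illustrative):
@code{.cpp}
#include <opencv2/core/core.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main( void )
{
    {   // write
        FileStorage fs("test.yml", FileStorage::WRITE);
        fs << "iterationNr" << 100;                                    // basic type
        Mat R = Mat_<uchar>::eye(3, 3);
        fs << "R" << R;                                                // OpenCV structure
        fs << "strings" << "[" << "image1.jpg" << "image2.jpg" << "]"; // sequence
        fs << "Mapping" << "{" << "One" << 1 << "Two" << 2 << "}";     // map
    }   // fs goes out of scope -> the file is closed

    {   // read
        FileStorage fs("test.yml", FileStorage::READ);
        int itNr; fs["iterationNr"] >> itNr;
        Mat R;    fs["R"] >> R;
        cout << "iterationNr = " << itNr << endl << "R = " << endl << R << endl;
    }
    return 0;
}
@endcode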

Result
------

@@ -270,4 +273,3 @@ here](https://www.youtube.com/watch?v=A4yqVnByMMM) .
<iframe title="File Input and Output using XML and YAML files in OpenCV" width="560" height="349" src="http://www.youtube.com/embed/A4yqVnByMMM?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
\endhtmlonly


@@ -59,10 +59,10 @@ how_to_scan_images imageName.jpg intValueToReduce [G]
The final argument is optional. If given, the image will be loaded in gray scale format, otherwise
the RGB color space is used. The first thing is to calculate the lookup table.

@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
@dontinclude cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp

lines
49-61
@skip int divideWith
@until table[i]

Here we first use the C++ *stringstream* class to convert the third command line argument from text
to an integer format. Then we use a simple loop and the formula above to calculate the lookup table.
@@ -88,26 +88,12 @@ As you could already read in my @ref tutorial_mat_the_basic_image_container tuto
depends on the color system used. More accurately, it depends on the number of channels used. In
case of a gray scale image we have something like:

\f[\newcommand{\tabItG}[1] { \textcolor{black}{#1} \cellcolor[gray]{0.8}}
\begin{tabular} {ccccc}
~ & \multicolumn{1}{c}{Column 0} & \multicolumn{1}{c}{Column 1} & \multicolumn{1}{c}{Column ...} & \multicolumn{1}{c}{Column m}\\
Row 0 & \tabItG{0,0} & \tabItG{0,1} & \tabItG{...} & \tabItG{0, m} \\
Row 1 & \tabItG{1,0} & \tabItG{1,1} & \tabItG{...} & \tabItG{1, m} \\
Row ... & \tabItG{...,0} & \tabItG{...,1} & \tabItG{...} & \tabItG{..., m} \\
Row n & \tabItG{n,0} & \tabItG{n,1} & \tabItG{n,...} & \tabItG{n, m} \\
\end{tabular}\f]


For multichannel images the columns contain as many sub columns as the number of channels. For
example in case of an RGB color system:

\f[\newcommand{\tabIt}[1] { \textcolor{yellow}{#1} \cellcolor{blue} & \textcolor{black}{#1} \cellcolor{green} & \textcolor{black}{#1} \cellcolor{red}}
\begin{tabular} {ccccccccccccc}
~ & \multicolumn{3}{c}{Column 0} & \multicolumn{3}{c}{Column 1} & \multicolumn{3}{c}{Column ...} & \multicolumn{3}{c}{Column m}\\
Row 0 & \tabIt{0,0} & \tabIt{0,1} & \tabIt{...} & \tabIt{0, m} \\
Row 1 & \tabIt{1,0} & \tabIt{1,1} & \tabIt{...} & \tabIt{1, m} \\
Row ... & \tabIt{...,0} & \tabIt{...,1} & \tabIt{...} & \tabIt{..., m} \\
Row n & \tabIt{n,0} & \tabIt{n,1} & \tabIt{n,...} & \tabIt{n, m} \\
\end{tabular}\f]


Note that the order of the channels is inverse: BGR instead of RGB. Because in many cases the memory
is large enough to store the rows in a successive fashion, the rows may follow one after another,
@@ -121,10 +107,9 @@ The efficient way
When it comes to performance you cannot beat the classic C style operator[] (pointer) access.
Therefore, the most efficient method we can recommend for making the assignment is:

@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp

lines
126-153
@skip Mat& ScanImageAndReduceC
@until return
@until }

Here we basically just acquire a pointer to the start of each row and go through it until it ends.
In the special case that the matrix is stored in a continuous manner, we only need to request the
@@ -156,10 +141,9 @@ considered a safer way as it takes over these tasks from the user. All you need
begin and the end of the image matrix and then just increase the begin iterator until you reach the
end. To acquire the value *pointed* by the iterator use the \* operator (add it before it).

@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp

lines
155-183
@skip ScanImageAndReduceIterator
@until return
@until }

In case of color images we have three uchar items per column. This may be considered a short vector
of uchar items, that has been baptized in OpenCV with the *Vec3b* name. To access the n-th sub
@@ -177,10 +161,9 @@ what type we are looking at the image. It's no different here as you need manual
type to use at the automatic lookup. You can observe this in case of the gray scale images for the
following source code (the usage of the + @ref cv::at() function):

@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp

lines
185-217
@skip ScanImageAndReduceRandomAccess
@until return
@until }

The function takes your input type and coordinates and calculates on the fly the address of the
queried item. Then it returns a reference to that. This may be a constant when you *get* the value and
@@ -209,17 +192,14 @@ OpenCV has a function that makes the modification without the need from you to w
the image. We use the @ref cv::LUT() function of the core module. First we build a Mat type of the
lookup table:

@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
@dontinclude cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp

lines
108-111
@skip Mat lookUpTable
@until p[i] = table[i]

Finally call the function (I is our input image and J the output one):

@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp

lines
116
@skipline LUT
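
A condensed sketch of the whole lookup-table path, assuming *I* is the input image and
*divideWith* the reduction factor:
@code{.cpp}
uchar table[256];
for( int i = 0; i < 256; ++i )
    table[i] = (uchar)(divideWith * (i/divideWith));  // the color-reduction formula

Mat lookUpTable(1, 256, CV_8U);                       // the same table wrapped in a Mat
uchar* p = lookUpTable.ptr();
for( int i = 0; i < 256; ++i )
    p[i] = table[i];

Mat J;
LUT(I, lookUpTable, J);                               // apply the table to every pixel of I
@endcode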

Performance Difference
----------------------

(two new binary image files added: 1.9 KiB and 3.8 KiB)
@@ -23,7 +23,7 @@ download it from [here](samples/cpp/tutorial_code/core/ippasync/ippasync_sample.
Explanation
-----------

1. Create parameters for OpenCV:
-# Create parameters for OpenCV:
@code{.cpp}
VideoCapture cap;
Mat image, gray, result;
@@ -36,7 +36,7 @@ Explanation
hppStatus sts;
hppiVirtualMatrix * virtMatrix;
@endcode
2. Load input image or video. How to open and read a video stream is shown in the
-# Load input image or video. How to open and read a video stream is shown in the
@ref tutorial_video_input_psnr_ssim tutorial.
@code{.cpp}
if( useCamera )
@@ -56,7 +56,7 @@ Explanation
return -1;
}
@endcode
3. Create accelerator instance using
-# Create accelerator instance using
[hppCreateInstance](http://software.intel.com/en-us/node/501686):
@code{.cpp}
accelType = sAccel == "cpu" ? HPP_ACCEL_TYPE_CPU:
@@ -67,12 +67,12 @@ Explanation
sts = hppCreateInstance(accelType, 0, &accel);
CHECK_STATUS(sts, "hppCreateInstance");
@endcode
4. Create an array of virtual matrices using the
-# Create an array of virtual matrices using the
[hppiCreateVirtualMatrices](http://software.intel.com/en-us/node/501700) function.
@code{.cpp}
virtMatrix = hppiCreateVirtualMatrices(accel, 1);
@endcode
5. Prepare a matrix for input and output data:
-# Prepare a matrix for input and output data:
@code{.cpp}
cap >> image;
if(image.empty())
@@ -82,7 +82,7 @@ Explanation

result.create( image.rows, image.cols, CV_8U);
@endcode
6. Convert Mat to [hppiMatrix](http://software.intel.com/en-us/node/501660) using @ref cv::hpp::getHpp
-# Convert Mat to [hppiMatrix](http://software.intel.com/en-us/node/501660) using @ref cv::hpp::getHpp
and call the [hppiSobel](http://software.intel.com/en-us/node/474701) function.
@code{.cpp}
//convert Mat to hppiMatrix
@@ -104,14 +104,14 @@ Explanation
HPP_DATA_TYPE_16S data type for source matrix with HPP_DATA_TYPE_8U type. You should check
hppStatus after each IPP Async function call.

7. Create windows and show the images, the usual way.
-# Create windows and show the images, the usual way.
@code{.cpp}
imshow("image", image);
imshow("rez", result);

waitKey(15);
@endcode
8. Delete hpp matrices.
-# Delete hpp matrices.
@code{.cpp}
sts = hppiFreeMatrix(src);
CHECK_DEL_STATUS(sts,"hppiFreeMatrix");
@@ -119,7 +119,7 @@ Explanation
sts = hppiFreeMatrix(dst);
CHECK_DEL_STATUS(sts,"hppiFreeMatrix");
@endcode
9. Delete virtual matrices and accelerator instance.
-# Delete virtual matrices and accelerator instance.
@code{.cpp}
if (virtMatrix)
{
@@ -140,4 +140,4 @@ Result
After compiling the code above we can execute it giving an image or video path and accelerator type
as an argument. For this tutorial we use the baboon.png image as input. The result is below.



@@ -93,20 +93,18 @@ To further help on seeing the difference the program supports two modes: one mi
one pure C++. If you define the *DEMO_MIXED_API_USE* you'll end up using the first. The program
separates the color planes, does some modifications on them and in the end merges them back together.

@includelineno
cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp

lines
1-10, 23-26, 29-46
@dontinclude cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
@until namespace cv
@skip ifdef
@until endif
@skip main
@until endif

Here you can observe that with the new structure we have no pointer problems, although it is
possible to use the old functions and in the end just transform the result to a *Mat* object.

@includelineno
cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp

lines
48-53
@skip convert image
@until split

Because we want to mess around with the image's luma component, we first convert from the default RGB
to the YUV color space and then split the result up into separate planes. Here the program splits:
@@ -116,11 +114,8 @@ image some Gaussian noise and then mix together the channels according to some f

The scanning version looks like:

@includelineno
cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp

lines
57-77
@skip #if 1
@until #else

Here you can observe that we may go through all the pixels of an image in three fashions: an
iterator, a C pointer and an individual element access style. You can read a more in-depth
@@ -128,26 +123,20 @@ description of these in the @ref tutorial_how_to_scan_images tutorial. Convertin
names is easy. Just remove the cv prefix and use the new *Mat* data structure. Here's an example of
this by using the weighted addition function:

@includelineno
cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp

lines
81-113
@until planes[0]
@until endif

As you may observe, the *planes* variable is of type *Mat*. However, converting from *Mat* to
*IplImage* is easy and made automatically with a simple assignment operator.

@includelineno
cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp

lines
117-129
@skip merge(planes
@until #endif

The new *imshow* highgui function accepts both the *Mat* and *IplImage* data structures. Compile and
run the program and if the first image below is your input you may get either the first or second as
output:



You may observe a runtime instance of this on the [YouTube
here](https://www.youtube.com/watch?v=qckm-zvo31w) and you can [download the source code from here

@@ -130,7 +130,7 @@ difference.

For example:



You can download this source code from [here
](samples/cpp/tutorial_code/core/mat_mask_operations/mat_mask_operations.cpp) or look in the

@@ -9,7 +9,7 @@ computed tomography, and magnetic resonance imaging to name a few. In every case
see are images. However, when transforming this to our digital devices what we record are numerical
values for each of the points of the image.



For example in the above image you can see that the mirror of the car is nothing more than a matrix
containing all the intensity values of the pixel points. How we get and store the pixel values may
@@ -144,18 +144,18 @@ file by using the @ref cv::imwrite() function. However, for debugging purposes i
convenient to see the actual values. You can do this using the \<\< operator of *Mat*. Be aware that
this only works for two dimensional matrices.

@dontinclude cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

Although *Mat* works really well as an image container, it is also a general matrix class.
Therefore, it is possible to create and manipulate multidimensional matrices. You can create a Mat
object in multiple ways:

- @ref cv::Mat::Mat Constructor

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
@skip Mat M(2
@until cout

lines 27-28



For two dimensional and multichannel images we first define their size: row and column count wise.

@@ -173,11 +173,8 @@ object in multiple ways:

- Use C/C++ arrays and initialize via constructor

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines
35-36
@skip int sz
@until Mat L

The example above shows how to create a matrix with more than two dimensions. Specify its
dimension, then pass a pointer containing the size for each dimension and the rest remains the
@@ -188,14 +185,14 @@ object in multiple ways:
IplImage* img = cvLoadImage("greatwave.png", 1);
Mat mtx(img); // convert IplImage* -> Mat
@endcode

- @ref cv::Mat::create function:
@code
M.create(4,4, CV_8UC(2));
cout << "M = "<< endl << " " << M << endl << endl;
@endcode

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines 31-32



You cannot initialize the matrix values with this construction. It will only reallocate its matrix
data memory if the new size will not fit into the old one.
@@ -203,41 +200,31 @@ object in multiple ways:
- MATLAB style initializer: @ref cv::Mat::zeros , @ref cv::Mat::ones , @ref cv::Mat::eye . Specify size and
data type to use:

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
@skip Mat E
@until cout

lines
40-47



- For small matrices you may use comma separated initializers:

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
@skip Mat C
@until cout

lines 50-51



- Create a new header for an existing *Mat* object and @ref cv::Mat::clone or @ref cv::Mat::copyTo it.

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
@skip Mat RowClone
@until cout

lines 53-54



@note
You can fill out a matrix with random values using the @ref cv::randu() function. You need to
give the lower and upper value for the random values:
@skip Mat R
@until randu

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines
57-58

Output formatting
-----------------
@@ -246,54 +233,26 @@ In the above examples you could see the default formatting option. OpenCV, howev
format your matrix output:

- Default

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines
61


@skipline (default)


- Python

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines
62


@skipline (python)


- Comma separated values (CSV)

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines
64


@skipline (csv)


- Numpy

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines
63


@code
cout << "R (numpy) = " << endl << format(R, Formatter::FMT_NUMPY ) << endl << endl;
@endcode


- C

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines
65


@skipline (c)


Output of other common items
----------------------------
@@ -301,44 +260,24 @@ Output of other common items
OpenCV offers support for output of other common OpenCV data structures too via the \<\< operator:

- 2D Point

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines
67-68


@skip Point2f P
@until cout


- 3D Point

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines
70-71


@skip Point3f P3f
@until cout


- std::vector via cv::Mat

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines
74-77


@skip vector<float> v
@until cout


- std::vector of points

@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp

lines
79-83


@skip vector<Point2f> vPoints
@until cout


Most of the samples here have been included in a small console application. You can download it from
[here](samples/cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp)

@@ -25,7 +25,7 @@ Code
Explanation
-----------

1. Let's start by checking out the *main* function. We observe that the first thing we do is create a
-# Let's start by checking out the *main* function. We observe that the first thing we do is create a
*Random Number Generator* object (RNG):
@code{.cpp}
RNG rng( 0xFFFFFFFF );
@@ -33,7 +33,7 @@ Explanation
RNG implements a random number generator. In this example, *rng* is a RNG element initialized
with the value *0xFFFFFFFF*

2. Then we create a matrix initialized to *zeros* (which means that it will appear as black),
-# Then we create a matrix initialized to *zeros* (which means that it will appear as black),
specifying its height, width and its type:
@code{.cpp}
/// Initialize a matrix filled with zeros
@@ -42,7 +42,7 @@ Explanation
/// Show it in a window during DELAY ms
imshow( window_name, image );
@endcode
3. Then we proceed to draw crazy stuff. After taking a look at the code, you can see that it is
-# Then we proceed to draw crazy stuff. After taking a look at the code, you can see that it is
mainly divided into 8 sections, defined as functions:
@code{.cpp}
/// Now, let's draw some lines
@@ -79,7 +79,7 @@ Explanation
All of these functions follow the same pattern, so we will analyze only a couple of them, since
the same explanation applies for all.

4. Checking out the function **Drawing_Random_Lines**:
-# Checking out the function **Drawing_Random_Lines**:
@code{.cpp}
int Drawing_Random_Lines( Mat image, char* window_name, RNG rng )
{
@@ -133,11 +133,11 @@ Explanation
are used as the *R*, *G* and *B* parameters for the line color. Hence, the color of the
lines will be random too!
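
The body of the function, elided by the hunk above, boils down to a loop of the following shape
(a sketch using the tutorial's *NUMBER* and *DELAY* constants and its *randomColor* helper):
@code{.cpp}
Point pt1, pt2;
for( int i = 0; i < NUMBER; i++ )
{
    pt1.x = rng.uniform( x_1, x_2 );  // random endpoints inside the allowed range
    pt1.y = rng.uniform( y_1, y_2 );
    pt2.x = rng.uniform( x_1, x_2 );
    pt2.y = rng.uniform( y_1, y_2 );

    line( image, pt1, pt2, randomColor(rng), rng.uniform(1, 10), 8 );
    imshow( window_name, image );
    if( waitKey( DELAY ) >= 0 ) { return -1; }
}
return 0;
@endcode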

5. The explanation above applies for the other functions generating circles, ellipses, polygons,
-# The explanation above applies for the other functions generating circles, ellipses, polygons,
etc. The parameters such as *center* and *vertices* are also generated randomly.
6. Before finishing, we also should take a look at the functions *Display_Random_Text* and
-# Before finishing, we also should take a look at the functions *Display_Random_Text* and
*Displaying_Big_End*, since they both have a few interesting features:
7. **Display_Random_Text:**
-# **Display_Random_Text:**
@code{.cpp}
int Displaying_Random_Text( Mat image, char* window_name, RNG rng )
{
@@ -178,7 +178,7 @@ Explanation
As a result, we will get (analogously to the other drawing functions) **NUMBER** texts over our
image, in random locations.

8. **Displaying_Big_End**
-# **Displaying_Big_End**
@code{.cpp}
int Displaying_Big_End( Mat image, char* window_name, RNG rng )
{
@@ -222,28 +222,28 @@ Result
As you just saw in the Code section, the program will sequentially execute diverse drawing
functions, which will produce:

1. First a random set of *NUMBER* lines will appear on screen, as can be seen in this
-# First a random set of *NUMBER* lines will appear on screen, as can be seen in this
screenshot:



2. Then, a new set of figures, this time *rectangles*, will follow.
3. Now some ellipses will appear, each of them with random position, size, thickness and arc
-# Then, a new set of figures, this time *rectangles*, will follow.
-# Now some ellipses will appear, each of them with random position, size, thickness and arc
length:



4. Now, *polylines* with 3 segments will appear on screen, again in random configurations.
-# Now, *polylines* with 3 segments will appear on screen, again in random configurations.



5. Filled polygons (in this example triangles) will follow.
6. The last geometric figure to appear: circles!
-# Filled polygons (in this example triangles) will follow.
-# The last geometric figure to appear: circles!



7. Near the end, the text *"Testing Text Rendering"* will appear in a variety of fonts, sizes,
-# Near the end, the text *"Testing Text Rendering"* will appear in a variety of fonts, sizes,
colors and positions.
8. And the big end (which by the way expresses a big truth too):
-# And the big end (which by the way expresses a big truth too):



@@ -4,10 +4,10 @@ AKAZE local features matching {#tutorial_akaze_matching}
Introduction
------------

In this tutorial we will learn how to use [AKAZE]_ local features to detect and match keypoints on
In this tutorial we will learn how to use AKAZE @cite ANB13 local features to detect and match keypoints on
two images.

We will find keypoints on a pair of images with a given homography matrix, match them and count the
number of inliers (i.e. matches that fit in the given homography).

You can find an expanded version of this example here:
@@ -18,7 +18,7 @@ Data

We are going to use images 1 and 3 from the *Graffiti* sequence of the Oxford dataset.



The homography is given by a 3 by 3 matrix:
@code{.none}
@@ -35,7 +35,7 @@ You can find the images (*graf1.png*, *graf3.png*) and homography (*H1to3p.xml*)

### Explanation

1. **Load images and homography**
-# **Load images and homography**
@code{.cpp}
Mat img1 = imread("graf1.png", IMREAD_GRAYSCALE);
Mat img2 = imread("graf3.png", IMREAD_GRAYSCALE);
@@ -46,7 +46,7 @@ fs.getFirstTopLevelNode() >> homography;
@endcode
We are loading grayscale images here. The homography is stored in the XML created with FileStorage.

1. **Detect keypoints and compute descriptors using AKAZE**
-# **Detect keypoints and compute descriptors using AKAZE**
@code{.cpp}
vector<KeyPoint> kpts1, kpts2;
Mat desc1, desc2;
@@ -58,7 +58,7 @@ akaze(img2, noArray(), kpts2, desc2);
We create an AKAZE object and use its *operator()* functionality. Since we don't need the *mask*
parameter, *noArray()* is used.

1. **Use brute-force matcher to find 2-nn matches**
-# **Use brute-force matcher to find 2-nn matches**
@code{.cpp}
BFMatcher matcher(NORM_HAMMING);
vector< vector<DMatch> > nn_matches;
@@ -66,7 +66,7 @@ matcher.knnMatch(desc1, desc2, nn_matches, 2);
@endcode
We use Hamming distance, because AKAZE uses a binary descriptor by default.

1. **Use 2-nn matches to find correct keypoint matches**
-# **Use 2-nn matches to find correct keypoint matches**
@code{.cpp}
for(size_t i = 0; i < nn_matches.size(); i++) {
DMatch first = nn_matches[i][0];
@@ -81,7 +81,7 @@ for(size_t i = 0; i < nn_matches.size(); i++) {
@endcode
If the closest match is *ratio* closer than the second closest one, then the match is correct.
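
Spelled out, the ratio test inside that loop is simply (a sketch, with *nn_match_ratio* as the
threshold):
@code{.cpp}
const float nn_match_ratio = 0.8f;        // nearest-neighbour matching ratio
DMatch first  = nn_matches[i][0];
DMatch second = nn_matches[i][1];
if( first.distance < nn_match_ratio * second.distance )
{
    matched1.push_back(kpts1[first.queryIdx]);  // keep keypoints of accepted matches
    matched2.push_back(kpts2[first.trainIdx]);
}
@endcode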

1. **Check if our matches fit in the homography model**
-# **Check if our matches fit in the homography model**
@code{.cpp}
for(int i = 0; i < matched1.size(); i++) {
Mat col = Mat::ones(3, 1, CV_64F);
@@ -106,7 +106,7 @@ then it fits in the homography.

We create a new set of matches for the inliers, because it is required by the drawing function.

1. **Output results**
-# **Output results**
@code{.cpp}
Mat res;
drawMatches(img1, inliers1, img2, inliers2, good_matches, res);
@@ -120,7 +120,7 @@ Here we save the resulting image and print some statistics.
Found matches
-------------



A-KAZE Matching Results
-----------------------

@@ -152,8 +152,9 @@ A-KAZE Matching Results
--------------------------

.. code-block:: none
Keypoints 1: 2943
Keypoints 2: 3511
Matches: 447
Inliers: 308
Inlier Ratio: 0.689038

Keypoints 1 2943
Keypoints 2 3511
Matches 447
Inliers 308
Inlier Ratio 0.689038

@@ -11,16 +11,17 @@ The algorithm is as follows:

- Detect and describe keypoints on the first frame, manually set object boundaries
- For every next frame:
1. Detect and describe keypoints
2. Match them using bruteforce matcher
3. Estimate homography transformation using RANSAC
4. Filter inliers from all the matches
5. Apply homography transformation to the bounding box to find the object
6. Draw bounding box and inliers, compute inlier ratio as evaluation metric
-# Detect and describe keypoints
-# Match them using bruteforce matcher
-# Estimate homography transformation using RANSAC
-# Filter inliers from all the matches
-# Apply homography transformation to the bounding box to find the object
-# Draw bounding box and inliers, compute inlier ratio as evaluation metric



### Data
Data
----

To do the tracking we need a video and the object position on the first frame.

@@ -31,14 +32,16 @@ To run the code you have to specify input and output video path and object bound
@code{.none}
./planar_tracking blais.mp4 result.avi blais_bb.xml.gz
@endcode
### Source Code

Source Code
-----------

@includelineno cpp/tutorial_code/features2D/AKAZE_tracking/planar_tracking.cpp

### Explanation
Explanation
-----------

Tracker class
-------------
### Tracker class

This class implements the algorithm described above using the given feature detector and descriptor
matcher.
@@ -63,7 +66,7 @@ matcher.

- **Processing frames**

1. Locate keypoints and compute descriptors
-# Locate keypoints and compute descriptors
@code{.cpp}
(*detector)(frame, noArray(), kp, desc);
@endcode
@@ -72,7 +75,7 @@ matcher.

In this tutorial detectors are set up to find about 1000 keypoints on each frame.

1. Use 2-nn matcher to find correspondences
-# Use 2-nn matcher to find correspondences
@code{.cpp}
matcher->knnMatch(first_desc, desc, matches, 2);
for(unsigned i = 0; i < matches.size(); i++) {
@@ -82,20 +85,18 @@ matcher.
}
}
@endcode

If the closest match is *nn_match_ratio* closer than the second closest one, then it's a
match.

2. Use *RANSAC* to estimate homography transformation
-# Use *RANSAC* to estimate homography transformation
@code{.cpp}
homography = findHomography(Points(matched1), Points(matched2),
RANSAC, ransac_thresh, inlier_mask);
@endcode

If there are at least 4 matches we can use random sample consensus to estimate the image
transformation.

3. Save the inliers
-# Save the inliers
@code{.cpp}
for(unsigned i = 0; i < matched1.size(); i++) {
if(inlier_mask.at<uchar>(i)) {
@@ -106,11 +107,10 @@ matcher.
}
}
@endcode

Since *findHomography* computes the inliers we only have to save the chosen points and
matches.

4. Project object bounding box
-# Project object bounding box
@code{.cpp}
perspectiveTransform(object_bb, new_bb, homography);
@endcode
@@ -118,7 +118,8 @@ matcher.
If there is a reasonable number of inliers we can use the estimated transformation to locate the
object.

### Results
Results
-------

You can watch the resulting [video on youtube](http://www.youtube.com/watch?v=LWY-w8AGGhE).

@@ -129,6 +130,7 @@ Inliers 410
Inlier ratio 0.58
Keypoints 1117
@endcode

*ORB* statistics:
@code{.none}
Matches 504

@ -87,4 +87,4 @@ Result

Here is the result after applying the BruteForce matcher between the two original images:



@ -79,10 +79,10 @@ Explanation
Result
------

1. Here is the result of the feature detection applied to the first image:
-# Here is the result of the feature detection applied to the first image:



2. And here is the result for the second image:
-# And here is the result for the second image:



@ -130,10 +130,10 @@ Explanation
Result
------

1. Here is the result of the feature detection applied to the first image:
-# Here is the result of the feature detection applied to the first image:



2. Additionally, we get the filtered keypoints as console output:
-# Additionally, we get the filtered keypoints as console output:



@ -134,8 +134,8 @@ Explanation
Result
------

1. And here is the result for the detected object (highlighted in green)
-# And here is the result for the detected object (highlighted in green)



@ -122,9 +122,9 @@ Explanation
Result
------




Here is the result:



@ -30,7 +30,7 @@ Explanation
Result
------





@ -111,5 +111,5 @@ Explanation
Result
------



@ -201,9 +201,9 @@ Result

The original image:



The detected corners are surrounded by a small black circle:



@ -1,8 +0,0 @@
General tutorials {#tutorial_table_of_content_general}
=================

These tutorials are the bottom of the iceberg, as they link together several of the modules
presented above in order to solve complex problems.
@ -24,28 +24,45 @@ The source code

You may also find the source code and the video file in the
`samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity` folder of the OpenCV
source library or download it from here
\<../../../../samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp\>. The
full source code is quite long (due to the controlling of the application via the command line
source library or download it from [here](samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp).
The full source code is quite long (due to the controlling of the application via the command line
arguments and performance measurement). Therefore, to avoid cluttering up these sections with those
you'll find here only the functions themselves.

The PSNR returns a float number between 30 and 50 if the two inputs are similar (higher is
better).
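For reference, the definition behind that number, with a minimal CPU sketch (the helper name and the 255 peak value for 8-bit images are assumptions, not part of the sample):

\f[PSNR = 10 \cdot \log_{10} \left( \frac{MAX_I^2}{MSE} \right)\f]

@code{.cpp}
// Hypothetical helper: PSNR of two same-size 8-bit images.
double getPSNRSketch(const Mat& I1, const Mat& I2)
{
    Mat s1;
    absdiff(I1, I2, s1);         // |I1 - I2|
    s1.convertTo(s1, CV_32F);    // cannot square 8-bit values in place
    s1 = s1.mul(s1);             // |I1 - I2|^2

    Scalar s = sum(s1);          // per-channel sums
    double sse = s.val[0] + s.val[1] + s.val[2];
    if( sse <= 1e-10 ) return 0; // images are (nearly) identical

    double mse = sse / (double)(I1.channels() * I1.total());
    return 10.0 * log10((255.0 * 255.0) / mse);
}
@endcode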

@includelineno samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp
@dontinclude samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp

lines
165-210, 18-23, 210-235
@skip struct BufferPSNR
@until };

@skip double getPSNR(
@until return psnr;
@until }
@until }

@skip double getPSNR_CUDA(
@until return psnr;
@until }
@until }

The SSIM returns the MSSIM of the images. This is also a float number between zero and one (higher is
better); however, we have one for each channel. Therefore, we return a *Scalar* OpenCV data
structure:

@includelineno samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp
@dontinclude samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp

lines
235-355, 26-42, 357-
@skip struct BufferMSSIM
@until };

@skip Scalar getMSSIM(
@until return mssim;
@until }

@skip Scalar getMSSIM_CUDA_optimized(
@until return mssim;
@until }

How to do it? - The GPU
-----------------------
@ -124,7 +141,7 @@ The reason for this is that you're throwing out on the window the price for memo
data transfer. And on the GPU this is damn high. Another possibility for optimization is to
introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::cuda::Stream.

1. Memory allocation on the GPU is considerable. Therefore, if possible, allocate new memory as
-# Memory allocation on the GPU is considerable. Therefore, if possible, allocate new memory as
few times as possible. If you create a function you intend to call multiple times, it is a
good idea to allocate any local parameters for the function only once, during the first call. To
do this you create a data structure containing all the local variables you will use. For
@ -148,7 +165,7 @@ introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::cuda::
Now you access these local parameters as: *b.gI1*, *b.buf* and so on. The GpuMat will only
reallocate itself on a new call if the new matrix size is different from the previous one.
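A hedged sketch of such a buffer structure (the sample's *BufferPSNR* follows this pattern; the exact members here are illustrative, not a copy of the sample):
@code{.cpp}
// Scratch space allocated on first use and reused across calls.
struct BufferPSNR
{
    gpu::GpuMat gI1, gI2; // device copies of the two input images
    gpu::GpuMat gs;       // squared-difference buffer
    gpu::GpuMat t1, t2;   // conversion temporaries
    gpu::GpuMat buf;      // workspace for the reductions
};
BufferPSNR bufferPSNR;    // created once, passed by reference afterwards
@endcode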

2. Avoid unnecessary function data transfers. Any small data transfer will be significant once
-# Avoid unnecessary function data transfers. Any small data transfer will be significant once
you go to the GPU. Therefore, if possible make all calculations in-place (in other words do not
create new memory objects - for reasons explained at the previous point). For example, although
arithmetical operations may be easier to express in one-line formulas, it will be
@ -164,7 +181,7 @@ introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::cuda::
gpu::multiply(b.mu1_mu2, 2, b.t1); //b.t1 = 2 * b.mu1_mu2 + C1;
gpu::add(b.t1, C1, b.t1);
@endcode
3. Use asynchronous calls (the @ref cv::cuda::Stream ). By default whenever you call a gpu function
-# Use asynchronous calls (the @ref cv::cuda::Stream ). By default whenever you call a gpu function
it will wait for the call to finish and return with the result afterwards. However, it is
possible to make asynchronous calls, meaning it will call for the operation execution, make the
costly data allocations for the algorithm and return back right away. Now you can call another
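A minimal sketch of the idea, assuming the @ref cv::cuda::Stream API of this OpenCV version (the variable names are illustrative):
@code{.cpp}
cv::cuda::Stream stream;
gI1.upload(I1, stream);                  // asynchronous host-to-device copy
gI2.upload(I2, stream);
cv::cuda::absdiff(gI1, gI2, gs, stream); // queued on the stream, returns at once
// ... queue further work, or do CPU-side processing here ...
stream.waitForCompletion();              // block only when the result is needed
@endcode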

@ -189,7 +206,7 @@ Result and conclusion
---------------------

On an Intel P8700 laptop CPU paired with a low end NVidia GT220M here are the performance numbers:
@code{.cpp}
@code
Time of PSNR CPU (averaged for 10 runs): 41.4122 milliseconds. With result of: 19.2506
Time of PSNR GPU (averaged for 10 runs): 158.977 milliseconds. With result of: 19.2506
Initial call GPU optimized: 31.3418 milliseconds. With result of: 19.2506
@ -94,9 +94,8 @@ Below is the output of the program. Use the first image as the input. For the DE
the SRTM file located at the USGS here.
[<http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip>](http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip)








@ -106,8 +106,8 @@ Results

Below is the output of the program. Use the first image as the input. For the DEM model, download the SRTM file located at the USGS here. `http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip <http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip>`_

.. image:: images/output.jpg
.. image:: images/gdal_output.jpg

.. image:: images/heat-map.jpg
.. image:: images/gdal_heat-map.jpg

.. image:: images/flood-zone.jpg
.. image:: images/gdal_flood-zone.jpg
@ -7,7 +7,7 @@ Adding a Trackbar to our applications! {#tutorial_trackbar}
- Well, it is time to use some fancy GUI tools. OpenCV provides some GUI utilities (*highgui.h*)
for you. An example of this is a **Trackbar**:



- In this tutorial we will just modify our two previous programs so that they get the input
information from the trackbar.
@ -88,16 +88,16 @@ Explanation

We only analyze the code that is related to the Trackbar:

1. First, we load two images, which are going to be blended.
-# First, we load two images, which are going to be blended.
@code{.cpp}
src1 = imread("../../images/LinuxLogo.jpg");
src2 = imread("../../images/WindowsLogo.jpg");
@endcode
2. To create a trackbar, first we have to create the window in which it is going to be located. So:
-# To create a trackbar, first we have to create the window in which it is going to be located. So:
@code{.cpp}
namedWindow("Linear Blend", 1);
@endcode
3. Now we can create the Trackbar:
-# Now we can create the Trackbar:
@code{.cpp}
createTrackbar( TrackbarName, "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
@endcode
@ -110,7 +110,7 @@ We only analyze the code that is related to Trackbar:
- The numerical value of the Trackbar is stored in **alpha_slider**
- Whenever the user moves the Trackbar, the callback function **on_trackbar** is called

4. Finally, we have to define the callback function **on_trackbar**
-# Finally, we have to define the callback function **on_trackbar**
@code{.cpp}
void on_trackbar( int, void* )
{
@ -133,10 +133,10 @@ Result

- Our program produces the following output:



- As a manner of practice, you can also add two trackbars for the program made in
@ref tutorial_basic_linear_transform. One trackbar to set \f$\alpha\f$ and another for \f$\beta\f$. The output might
look like:


@ -25,10 +25,14 @@ version of it ](samples/cpp/tutorial_code/HighGUI/video-input-psnr-ssim/video/Me
You may also find the source code and the video file in the
`samples/cpp/tutorial_code/HighGUI/video-input-psnr-ssim/` folder of the OpenCV source library.

@includelineno cpp/tutorial_code/HighGUI/video-input-psnr-ssim/video-input-psnr-ssim.cpp
@dontinclude cpp/tutorial_code/HighGUI/video-input-psnr-ssim/video-input-psnr-ssim.cpp

lines
1-15, 29-31, 33-208
@until Scalar getMSSIM
@skip main
@until {
@skip if
@until return mssim;
@until }

How to read a video stream (online-camera or offline-file)?
-----------------------------------------------------------
@ -243,10 +247,9 @@ for each frame, and the SSIM only for the frames where the PSNR falls below an i
visualization purpose we show both images in an OpenCV window and print the PSNR and MSSIM values to
the console. Expect to see something like:



You may observe a runtime instance of this on the [YouTube
here](https://www.youtube.com/watch?v=iOcNljutOgg).
You may observe a runtime instance of this on the [YouTube here](https://www.youtube.com/watch?v=iOcNljutOgg).

\htmlonly
<div align="center">

@ -47,7 +47,7 @@ somehow longer and includes names such as *XVID*, *DIVX*, *H264* or *LAGS* (*Lag
Codec*). The full list of codecs you may use on a system depends on just what one you have
installed.



As you can see things can get really complicated with videos. However, OpenCV is mainly a computer
vision library, not a video stream, codec and writing one. Therefore, the developers tried to keep
@ -75,7 +75,7 @@ const string source = argv[1]; // the source file name
string::size_type pAt = source.find_last_of('.'); // Find extension point
const string NAME = source.substr(0, pAt) + argv[2][0] + ".avi"; // Form the new name with container
@endcode
1. The codec to use for the video track. Now all the video codecs have a unique short name of
-# The codec to use for the video track. Now all the video codecs have a unique short name of
maximum four characters. Hence, the *XVID*, *DIVX* or *H264* names. This is called a four
character code. You may also ask this from an input video by using its *get* function. Because
the *get* function is a general function it always returns double values. A double value is
@ -109,13 +109,13 @@ const string NAME = source.substr(0, pAt) + argv[2][0] + ".avi";   // Form the n
If you pass minus one for this argument, then a window will pop up at runtime that contains all
the codecs installed on your system and asks you to select the one to use:



2. The frames per second for the output video. Again, here I keep the input video's frames per second
-# The frames per second for the output video. Again, here I keep the input video's frames per second
by using the *get* function.
3. The size of the frames for the output video. Here too I keep the input video's frame size
-# The size of the frames for the output video. Here too I keep the input video's frame size
by using the *get* function.
4. The final argument is an optional one. By default it is true and says that the output will be a
-# The final argument is an optional one. By default it is true and says that the output will be a
colorful one (so for write you will send three channel images). To create a gray scale video
pass a false parameter here.
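Putting the four arguments together, a hedged sketch of opening the writer (codec, FPS and frame size queried from *inputVideo*; the *CAP_PROP_* constant names assume this OpenCV version):
@code{.cpp}
int ex = static_cast<int>(inputVideo.get(CAP_PROP_FOURCC)); // codec of the input
double fps = inputVideo.get(CAP_PROP_FPS);
Size S((int) inputVideo.get(CAP_PROP_FRAME_WIDTH),
       (int) inputVideo.get(CAP_PROP_FRAME_HEIGHT));

VideoWriter outputVideo;
outputVideo.open(NAME, ex, fps, S, true);                   // true: color output
if (!outputVideo.isOpened())
    cout << "Could not open the output video for write: " << NAME << endl;
@endcode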

@ -148,7 +148,7 @@ merge(spl, res);
Put all this together and you'll get the source code above, whose runtime result will show something
around the idea:



You may observe a runtime instance of this on the [YouTube
here](https://www.youtube.com/watch?v=jpBwHxsl1_0).
@ -28,7 +28,7 @@ Morphological Operations
- Finding of intensity bumps or holes in an image
- We will explain dilation and erosion briefly, using the following image as an example:



### Dilation

@ -40,7 +40,7 @@ Morphological Operations
deduce, this maximizing operation causes bright regions within an image to "grow" (therefore the
name *dilation*). Take as an example the image above. Applying dilation we can get:



The background (bright) dilates around the black regions of the letter.
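A one-call sketch of the operation just described (with *Mat()* as the element, OpenCV uses a default 3x3 rectangular kernel; the variable names are illustrative):
@code{.cpp}
Mat dilation_dst;
dilate( src, dilation_dst, Mat() ); // max filter: bright regions grow
@endcode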

@ -54,7 +54,7 @@ The background (bright) dilates around the black regions of the letter.
(shown above). You can see in the result below that the bright areas of the image (the
background, apparently) get thinner, whereas the dark zones (the "writing") get bigger.



Code
----
@ -66,7 +66,7 @@ This tutorial code's is shown lines below. You can also download it from
Explanation
-----------

1. Most of the stuff shown is known by you (if you have any doubt, please refer to the tutorials in
-# Most of the stuff shown is known by you (if you have any doubt, please refer to the tutorials in
previous sections). Let's check the general structure of the program:

- Load an image (can be RGB or grayscale)
@ -80,7 +80,7 @@ Explanation

Let's analyze these two functions:

2. **erosion:**
-# **erosion:**
@code{.cpp}
/* @function Erosion */
void Erosion( int, void* )
@ -124,7 +124,7 @@ Explanation
(iterations) at once. We are not using it in this simple tutorial, though. You can check out the
Reference for more details.

3. **dilation:**
-# **dilation:**

The code is below. As you can see, it is completely similar to the snippet of code for **erosion**.
Here we also have the option of defining our kernel, its anchor point and the size of the operator
@ -152,10 +152,10 @@ Results

Compile the code above and execute it with an image as argument. For instance, using this image:



We get the results below. Varying the indices in the Trackbars gives different output images,
naturally. Try them out! You can even try to add a third Trackbar to control the number of
iterations.


@ -56,7 +56,7 @@ enumeratevisibleitemswithsquare
produce the output array.
- Just to make the picture clearer, remember what a 1D Gaussian kernel looks like?



Assuming that an image is 1D, you can notice that the pixel located in the middle would have the
biggest weight. The weight of its neighbors decreases as the spatial distance between them and
@ -64,9 +64,7 @@ enumeratevisibleitemswithsquare

@note
Remember that a 2D Gaussian can be represented as:

\f[G_{0}(x, y) = A e^{ \dfrac{ -(x - \mu_{x})^{2} }{ 2\sigma^{2}_{x} } + \dfrac{ -(y - \mu_{y})^{2} }{ 2\sigma^{2}_{y} } }\f]

where \f$\mu\f$ is the mean (the peak) and \f$\sigma\f$ represents the variance (per each of the
variables \f$x\f$ and \f$y\f$)

@ -188,12 +186,13 @@ int display_dst( int delay );
return 0;
}
@endcode

Explanation
-----------

1. Let's check the OpenCV functions that involve only the smoothing procedure, since the rest is
-# Let's check the OpenCV functions that involve only the smoothing procedure, since the rest is
already known by now.
2. **Normalized Block Filter:**
-# **Normalized Block Filter:**

OpenCV offers the function @ref cv::blur to perform smoothing with this filter.
@code{.cpp}
@ -211,7 +210,7 @@ Explanation
respect to the neighborhood. If there is a negative value, then the center of the kernel is
considered the anchor point.

3. **Gaussian Filter:**
-# **Gaussian Filter:**

It is performed by the function @ref cv::GaussianBlur :
@code{.cpp}
@ -231,7 +230,7 @@ Explanation
- \f$\sigma_{y}\f$: The standard deviation in y. Writing \f$0\f$ implies that \f$\sigma_{y}\f$ is
calculated using kernel size.

4. **Median Filter:**
-# **Median Filter:**

This filter is provided by the @ref cv::medianBlur function:
@code{.cpp}
@ -245,7 +244,7 @@ Explanation
- *dst*: Destination image, must be the same type as *src*
- *i*: Size of the kernel (only one because we use a square window). Must be odd.

5. **Bilateral Filter**
-# **Bilateral Filter**

Provided by OpenCV function @ref cv::bilateralFilter
@code{.cpp}
@ -268,6 +267,4 @@ Results
filters explained.
- Here is a snapshot of the image smoothed using *medianBlur*:




@ -28,17 +28,14 @@ Theory
- Let's say you have gotten a skin histogram (Hue-Saturation) based on the image below. The
histogram beside it is going to be our *model histogram* (which we know represents a sample of
skin tonality). You applied some mask to capture only the histogram of the skin area:

------ ------
|T0| |T1|
------ ------



- Now, let's imagine that you get another hand image (Test Image) like the one below (with its
respective histogram):



------ ------
|T2| |T3|
------ ------

- What we want to do is to use our *model histogram* (that we know represents a skin tonality) to
detect skin areas in our Test Image. Here are the steps
@ -50,7 +47,7 @@ Theory
the *model histogram* first, so the output for the Test Image can be visible for you.
-# Applying the steps above, we get the following BackProjection image for our Test Image:



-# In terms of statistics, the values stored in *BackProjection* represent the *probability*
that a pixel in *Test Image* belongs to a skin area, based on the *model histogram* that we
@ -83,98 +80,23 @@ Code
in samples.

- **Code at glance:**
@code{.cpp}
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
@includelineno samples/cpp/tutorial_code/Histograms_Matching/calcBackProject_Demo1.cpp

#include <iostream>

using namespace cv;
using namespace std;

/// Global Variables
Mat src; Mat hsv; Mat hue;
int bins = 25;

/// Function Headers
void Hist_and_Backproj(int, void* );

/* @function main */
int main( int argc, char** argv )
{
/// Read the image
src = imread( argv[1], 1 );
/// Transform it to HSV
cvtColor( src, hsv, COLOR_BGR2HSV );

/// Use only the Hue value
hue.create( hsv.size(), hsv.depth() );
int ch[] = { 0, 0 };
mixChannels( &hsv, 1, &hue, 1, ch, 1 );

/// Create Trackbar to enter the number of bins
char* window_image = "Source image";
namedWindow( window_image, WINDOW_AUTOSIZE );
createTrackbar("* Hue bins: ", window_image, &bins, 180, Hist_and_Backproj );
Hist_and_Backproj(0, 0);

/// Show the image
imshow( window_image, src );

/// Wait until user exits the program
waitKey(0);
return 0;
}


/*
* @function Hist_and_Backproj
* @brief Callback to Trackbar
*/
void Hist_and_Backproj(int, void* )
{
MatND hist;
int histSize = MAX( bins, 2 );
float hue_range[] = { 0, 180 };
const float* ranges = { hue_range };

/// Get the Histogram and normalize it
calcHist( &hue, 1, 0, Mat(), hist, 1, &histSize, &ranges, true, false );
normalize( hist, hist, 0, 255, NORM_MINMAX, -1, Mat() );

/// Get Backprojection
MatND backproj;
calcBackProject( &hue, 1, 0, hist, backproj, &ranges, 1, true );

/// Draw the backproj
imshow( "BackProj", backproj );

/// Draw the histogram
int w = 400; int h = 400;
int bin_w = cvRound( (double) w / histSize );
Mat histImg = Mat::zeros( w, h, CV_8UC3 );

for( int i = 0; i < bins; i ++ )
{ rectangle( histImg, Point( i*bin_w, h ), Point( (i+1)*bin_w, h - cvRound( hist.at<float>(i)*h/255.0 ) ), Scalar( 0, 0, 255 ), -1 ); }

imshow( "Histogram", histImg );
}
@endcode
Explanation
-----------

1. Declare the matrices to store our images and initialize the number of bins to be used by our
-# Declare the matrices to store our images and initialize the number of bins to be used by our
histogram:
@code{.cpp}
Mat src; Mat hsv; Mat hue;
int bins = 25;
@endcode
2. Read the input image and transform it to HSV format:
-# Read the input image and transform it to HSV format:
@code{.cpp}
src = imread( argv[1], 1 );
cvtColor( src, hsv, COLOR_BGR2HSV );
@endcode
3. For this tutorial, we will use only the Hue value for our 1-D histogram (check out the fancier
-# For this tutorial, we will use only the Hue value for our 1-D histogram (check out the fancier
code in the links above if you want to use the more standard H-S histogram, which yields better
results):
@code{.cpp}
@ -182,7 +104,7 @@ Explanation
int ch[] = { 0, 0 };
mixChannels( &hsv, 1, &hue, 1, ch, 1 );
@endcode
As you see, we use the function :mix_channels:mixChannels to get only the channel 0 (Hue) from
As you see, we use the function @ref cv::mixChannels to get only the channel 0 (Hue) from
the hsv image. It gets the following parameters:

- **&hsv:** The source array from which the channels will be copied
@ -193,7 +115,7 @@ Explanation
case, the Hue(0) channel of &hsv is being copied to the 0 channel of &hue (1-channel)
- **1:** Number of index pairs

4. Create a Trackbar for the user to enter the bin values. Any change on the Trackbar means a call
-# Create a Trackbar for the user to enter the bin values. Any change on the Trackbar means a call
to the **Hist_and_Backproj** callback function.
@code{.cpp}
char* window_image = "Source image";
@ -201,14 +123,14 @@ Explanation
createTrackbar("* Hue bins: ", window_image, &bins, 180, Hist_and_Backproj );
Hist_and_Backproj(0, 0);
@endcode
5. Show the image and wait for the user to exit the program:
-# Show the image and wait for the user to exit the program:
@code{.cpp}
imshow( window_image, src );

waitKey(0);
return 0;
@endcode
6. **Hist_and_Backproj function:** Initialize the arguments needed for @ref cv::calcHist . The
-# **Hist_and_Backproj function:** Initialize the arguments needed for @ref cv::calcHist . The
number of bins comes from the Trackbar:
@code{.cpp}
void Hist_and_Backproj(int, void* )
@ -218,12 +140,12 @@ Explanation
float hue_range[] = { 0, 180 };
const float* ranges = { hue_range };
@endcode
7. Calculate the Histogram and normalize it to the range \f$[0,255]\f$
-# Calculate the Histogram and normalize it to the range \f$[0,255]\f$
@code{.cpp}
calcHist( &hue, 1, 0, Mat(), hist, 1, &histSize, &ranges, true, false );
normalize( hist, hist, 0, 255, NORM_MINMAX, -1, Mat() );
@endcode
8. Get the Backprojection of the same image by calling the function @ref cv::calcBackProject
-# Get the Backprojection of the same image by calling the function @ref cv::calcBackProject
@code{.cpp}
MatND backproj;
calcBackProject( &hue, 1, 0, hist, backproj, &ranges, 1, true );
@ -231,11 +153,11 @@ Explanation
all the arguments are known (the same as used to calculate the histogram), only we add the
backproj matrix, which will store the backprojection of the source image (&hue)

9. Display backproj:
-# Display backproj:
@code{.cpp}
imshow( "BackProj", backproj );
@endcode
10. Draw the 1-D Hue histogram of the image:
-# Draw the 1-D Hue histogram of the image:
@code{.cpp}
int w = 400; int h = 400;
int bin_w = cvRound( (double) w / histSize );
@ -246,12 +168,12 @@ Explanation

imshow( "Histogram", histImg );
@endcode

Results
-------

1. Here is the output using a sample image ( guess what? Another hand ). You can play with the
Here is the output using a sample image ( guess what? Another hand ). You can play with the
bin values and you will observe how it affects the results:

------ ------ ------
|R0| |R1| |R2|
------ ------ ------



@ -21,7 +21,7 @@ histogram called *Image histogram*. Now we will considerate it in its more gener
- Let's see an example. Imagine that a Matrix contains information of an image (i.e. intensity in
the range \f$0-255\f$):



- What happens if we want to *count* this data in an organized way? Since we know that the *range*
of information value for this case is 256 values, we can segment our range in subparts (called
@ -36,7 +36,7 @@ histogram called *Image histogram*. Now we will considerate it in its more gener
this to the example above we get the image below ( axis x represents the bins and axis y the
number of pixels in each of them).



- This was just a simple example of how a histogram works and why it is useful. A histogram can
keep count not only of color intensities, but of whatever image features we want to measure
@ -73,18 +73,18 @@ Code
Explanation
-----------

1. Create the necessary matrices:
-# Create the necessary matrices:
@code{.cpp}
Mat src, dst;
@endcode
2. Load the source image
-# Load the source image
@code{.cpp}
src = imread( argv[1], 1 );

if( !src.data )
{ return -1; }
@endcode
3. Separate the source image into its three R, G and B planes. For this we use the OpenCV function
-# Separate the source image into its three R, G and B planes. For this we use the OpenCV function
@ref cv::split :
@code{.cpp}
vector<Mat> bgr_planes;
@ -93,7 +93,7 @@ Explanation
our input is the image to be divided (this case with three channels) and the output is a vector
of Mat )

4. Now we are ready to start configuring the **histograms** for each plane. Since we are working
-# Now we are ready to start configuring the **histograms** for each plane. Since we are working
with the B, G and R planes, we know that our values will range in the interval \f$[0,255]\f$
-# Establish the number of bins (5, 10...):
@code{.cpp}
@ -137,7 +137,7 @@ Explanation
- **uniform** and **accumulate**: The bin sizes are the same and the histogram is cleared
at the beginning.

5. Create an image to display the histograms:
-# Create an image to display the histograms:
@code{.cpp}
// Draw the histograms for R, G and B
int hist_w = 512; int hist_h = 400;
@ -145,7 +145,7 @@ Explanation

Mat histImage( hist_h, hist_w, CV_8UC3, Scalar( 0,0,0) );
@endcode
6. Notice that before drawing, we first @ref cv::normalize the histogram so its values fall in the
-# Notice that before drawing, we first @ref cv::normalize the histogram so its values fall in the
range indicated by the parameters entered:
@code{.cpp}
/// Normalize the result to [ 0, histImage.rows ]
@ -164,7 +164,7 @@ Explanation
- **-1:** Implies that the output normalized array will be the same type as the input
- **Mat():** Optional mask

7. Finally, observe that to access the bin (in this case in this 1D-Histogram):
-# Finally, observe that to access the bin (in this case in this 1D-Histogram):
@code{.cpp}
/// Draw for each channel
for( int i = 1; i < histSize; i++ )
@ -189,7 +189,7 @@ Explanation
b_hist.at<float>( i, j )
@endcode

8. Finally we display our histograms and wait for the user to exit:
-# Finally we display our histograms and wait for the user to exit:
@code{.cpp}
namedWindow("calcHist Demo", WINDOW_AUTOSIZE );
imshow("calcHist Demo", histImage );
@ -202,10 +202,10 @@ Explanation
Result
------

1. Using as input argument an image like the one shown below:
-# Using as input argument an image like the one shown below:



2. Produces the following histogram:
-# Produces the following histogram:


@ -18,25 +18,18 @@ Theory
- OpenCV implements the function @ref cv::compareHist to perform a comparison. It also offers 4
different metrics to compute the matching:
-# **Correlation ( CV_COMP_CORREL )**

\f[d(H_1,H_2) = \frac{\sum_I (H_1(I) - \bar{H_1}) (H_2(I) - \bar{H_2})}{\sqrt{\sum_I(H_1(I) - \bar{H_1})^2 \sum_I(H_2(I) - \bar{H_2})^2}}\f]

where

\f[\bar{H_k} = \frac{1}{N} \sum _J H_k(J)\f]

and \f$N\f$ is the total number of histogram bins.

-# **Chi-Square ( CV_COMP_CHISQR )**

\f[d(H_1,H_2) = \sum _I \frac{\left(H_1(I)-H_2(I)\right)^2}{H_1(I)}\f]

-# **Intersection ( method=CV_COMP_INTERSECT )**

\f[d(H_1,H_2) = \sum _I \min (H_1(I), H_2(I))\f]

-# **Bhattacharyya distance ( CV_COMP_BHATTACHARYYA )**

\f[d(H_1,H_2) = \sqrt{1 - \frac{1}{\sqrt{\bar{H_1} \bar{H_2} N^2}} \sum_I \sqrt{H_1(I) \cdot H_2(I)}}\f]
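A minimal usage sketch tying these constants to the function (the two histograms are assumed to be already calculated and normalized, as done later in this tutorial):
@code{.cpp}
// Lower is better for Chi-Square and Bhattacharyya; higher is better
// for Correlation and Intersection.
double d = compareHist( hist_base, hist_test1, CV_COMP_BHATTACHARYYA );
@endcode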

Code
@ -59,7 +52,7 @@ Code
Explanation
-----------

1. Declare variables such as the matrices to store the base image and the two other images to
-# Declare variables such as the matrices to store the base image and the two other images to
compare ( RGB and HSV )
@code{.cpp}
Mat src_base, hsv_base;
@ -67,7 +60,7 @@ Explanation
Mat src_test2, hsv_test2;
Mat hsv_half_down;
@endcode
2. Load the base image (src_base) and the other two test images:
-# Load the base image (src_base) and the other two test images:
@code{.cpp}
if( argc < 4 )
{ printf("** Error. Usage: ./compareHist_Demo <image_settings0> <image_setting1> <image_settings2>\n");
@ -78,17 +71,17 @@ Explanation
src_test1 = imread( argv[2], 1 );
src_test2 = imread( argv[3], 1 );
@endcode
3. Convert them to HSV format:
-# Convert them to HSV format:
@code{.cpp}
cvtColor( src_base, hsv_base, COLOR_BGR2HSV );
cvtColor( src_test1, hsv_test1, COLOR_BGR2HSV );
cvtColor( src_test2, hsv_test2, COLOR_BGR2HSV );
@endcode
4. Also, create an image of half the base image (in HSV format):
-# Also, create an image of half the base image (in HSV format):
@code{.cpp}
hsv_half_down = hsv_base( Range( hsv_base.rows/2, hsv_base.rows - 1 ), Range( 0, hsv_base.cols - 1 ) );
@endcode
5. Initialize the arguments to calculate the histograms (bins, ranges and channels H and S ).
-# Initialize the arguments to calculate the histograms (bins, ranges and channels H and S ).
@code{.cpp}
int h_bins = 50; int s_bins = 60;
int histSize[] = { h_bins, s_bins };
@ -100,14 +93,14 @@ Explanation

int channels[] = { 0, 1 };
@endcode
6. Create the MatND objects to store the histograms:
-# Create the MatND objects to store the histograms:
@code{.cpp}
MatND hist_base;
MatND hist_half_down;
MatND hist_test1;
MatND hist_test2;
@endcode
7. Calculate the Histograms for the base image, the 2 test images and the half-down base image:
-# Calculate the Histograms for the base image, the 2 test images and the half-down base image:
@code{.cpp}
calcHist( &hsv_base, 1, channels, Mat(), hist_base, 2, histSize, ranges, true, false );
normalize( hist_base, hist_base, 0, 1, NORM_MINMAX, -1, Mat() );
@ -121,7 +114,7 @@ Explanation
calcHist( &hsv_test2, 1, channels, Mat(), hist_test2, 2, histSize, ranges, true, false );
normalize( hist_test2, hist_test2, 0, 1, NORM_MINMAX, -1, Mat() );
@endcode
8. Apply sequentially the 4 comparison methods between the histogram of the base image (hist_base)
-# Apply sequentially the 4 comparison methods between the histogram of the base image (hist_base)
and the other histograms:
@code{.cpp}
for( int i = 0; i < 4; i++ )
@ -134,32 +127,30 @@ Explanation
printf( " Method [%d] Perfect, Base-Half, Base-Test(1), Base-Test(2) : %f, %f, %f, %f \n", i, base_base, base_half , base_test1, base_test2 );
}
@endcode

Results
-------

1. We use as input the following images:

----------- ----------- -----------
|Base_0| |Test_1| |Test_2|
----------- ----------- -----------

-# We use as input the following images:



where the first one is the base (to be compared to the others), and the other 2 are the test images.
We will also compare the first image with respect to itself and with respect to half of the base
image.

2. We should expect a perfect match when we compare the base image histogram with itself. Also,
-# We should expect a perfect match when we compare the base image histogram with itself. Also,
when compared with the histogram of half the base image, it should present a high match since both
are from the same source. For the other two test images, we can observe that they have very
different lighting conditions, so the matching should not be very good:
3. Here are the numeric results:

*Method* Base - Base Base - Half Base - Test 1 Base - Test 2
----------------- ------------- ------------- --------------- ---------------
*Correlation* 1.000000 0.930766 0.182073 0.120447
*Chi-square* 0.000000 4.940466 21.184536 49.273437
*Intersection* 24.391548 14.959809 3.889029 5.775088
*Bhattacharyya* 0.000000 0.222609 0.646576 0.801869

-# Here are the numeric results:
*Method* | Base - Base | Base - Half | Base - Test 1 | Base - Test 2
----------------- | ------------ | ------------ | -------------- | ---------------
*Correlation* | 1.000000 | 0.930766 | 0.182073 | 0.120447
*Chi-square* | 0.000000 | 4.940466 | 21.184536 | 49.273437
*Intersection* | 24.391548 | 14.959809 | 3.889029 | 5.775088
*Bhattacharyya* | 0.000000 | 0.222609 | 0.646576 | 0.801869
For the *Correlation* and *Intersection* methods, the higher the metric, the more accurate the
match. As we can see, the match *base-base* is the highest of all as expected. Also we can observe
that the match *base-half* is the second best match (as we predicted). For the other two metrics,
@ -17,7 +17,7 @@ Theory
- It is a graphical representation of the intensity distribution of an image.
- It quantifies the number of pixels for each intensity value considered.



### What is Histogram Equalization?

@ -29,7 +29,7 @@ Theory
*underpopulated* intensities. After applying the equalization, we get a histogram like the
figure in the center. The resulting image is shown in the picture at right.



### How does it work?

@ -46,7 +46,7 @@ Theory
is 255 ( or the maximum value for the intensity of the image ). From the example above, the
cumulative function is:



- Finally, we use a simple remapping procedure to obtain the intensity values of the equalized
image:
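(The hunk cuts the formula itself; for reference, the standard remapping uses the normalized cumulative histogram \f$H^{'}\f$ - this is the textbook formulation, not a quote from this file:)

\f[equalized( x, y ) = H^{'}( src(x,y) )\f]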

@ -69,14 +69,14 @@ Code
Explanation
-----------

1. Declare the source and destination images as well as the window names:
-# Declare the source and destination images as well as the window names:
@code{.cpp}
Mat src, dst;

char* source_window = "Source image";
char* equalized_window = "Equalized Image";
@endcode
2. Load the source image:
-# Load the source image:
@code{.cpp}
src = imread( argv[1], 1 );

@ -84,18 +84,18 @@ Explanation
{ cout<<"Usage: ./Histogram_Demo <path_to_image>"<<endl;
return -1;}
@endcode
3. Convert it to grayscale:
-# Convert it to grayscale:
@code{.cpp}
cvtColor( src, src, COLOR_BGR2GRAY );
@endcode
4. Apply histogram equalization with the function @ref cv::equalizeHist :
-# Apply histogram equalization with the function @ref cv::equalizeHist :
@code{.cpp}
equalizeHist( src, dst );
@endcode
As can be easily seen, the only arguments are the original image and the output (equalized)
image.

5. Display both images (original and equalized):
-# Display both images (original and equalized):
@code{.cpp}
namedWindow( source_window, WINDOW_AUTOSIZE );
namedWindow( equalized_window, WINDOW_AUTOSIZE );
@ -103,7 +103,7 @@ Explanation
imshow( source_window, src );
imshow( equalized_window, dst );
@endcode
6. Wait until the user exits the program
-# Wait until the user exits the program
@code{.cpp}
waitKey(0);
return 0;
@ -112,24 +112,24 @@ Explanation
Results
-------

1. To better appreciate the results of equalization, let's introduce an image with not much
-# To better appreciate the results of equalization, let's introduce an image with not much
contrast, such as:



which, by the way, has this histogram:



notice that the pixels are clustered around the center of the histogram.

2. After applying the equalization with our program, we get this result:
-# After applying the equalization with our program, we get this result:



this image has certainly more contrast. Check out its new histogram like this:



Notice how the number of pixels is more distributed through the intensity range.
@ -28,12 +28,12 @@ template image (patch).

our goal is to detect the highest matching area:



- To identify the matching area, we have to *compare* the template image against the source image
by sliding it:



- By **sliding**, we mean moving the patch one pixel at a time (left to right, up to down). At
each location, a metric is calculated so it represents how "good" or "bad" the match at that
@ -41,7 +41,7 @@ template image (patch).
- For each location of **T** over **I**, you *store* the metric in the *result matrix* **(R)**.
Each location \f$(x,y)\f$ in **R** contains the match metric:



the image above is the result **R** of sliding the patch with a metric **TM_CCORR_NORMED**.
The brightest locations indicate the highest matches. As you can see, the location marked by the
@ -56,23 +56,23 @@ template image (patch).
Good question. OpenCV implements Template matching in the function @ref cv::matchTemplate . The
available methods are 6:

a. **method=CV_TM_SQDIFF**
-# **method=CV_TM_SQDIFF**

\f[R(x,y)= \sum _{x',y'} (T(x',y')-I(x+x',y+y'))^2\f]

b. **method=CV_TM_SQDIFF_NORMED**
-# **method=CV_TM_SQDIFF_NORMED**

\f[R(x,y)= \frac{\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}\f]

c. **method=CV_TM_CCORR**
-# **method=CV_TM_CCORR**

\f[R(x,y)= \sum _{x',y'} (T(x',y') \cdot I(x+x',y+y'))\f]

d. **method=CV_TM_CCORR_NORMED**
-# **method=CV_TM_CCORR_NORMED**

\f[R(x,y)= \frac{\sum_{x',y'} (T(x',y') \cdot I(x+x',y+y'))}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}\f]

e. **method=CV_TM_CCOEFF**
-# **method=CV_TM_CCOEFF**

\f[R(x,y)= \sum _{x',y'} (T'(x',y') \cdot I(x+x',y+y'))\f]

@ -80,7 +80,7 @@ e. **method=CV_TM_CCOEFF**

\f[\begin{array}{l} T'(x',y')=T(x',y') - 1/(w \cdot h) \cdot \sum _{x'',y''} T(x'',y'') \\ I'(x+x',y+y')=I(x+x',y+y') - 1/(w \cdot h) \cdot \sum _{x'',y''} I(x+x'',y+y'') \end{array}\f]

f. **method=CV_TM_CCOEFF_NORMED**
-# **method=CV_TM_CCOEFF_NORMED**

\f[R(x,y)= \frac{ \sum_{x',y'} (T'(x',y') \cdot I'(x+x',y+y')) }{ \sqrt{\sum_{x',y'}T'(x',y')^2 \cdot \sum_{x',y'} I'(x+x',y+y')^2} }\f]
@ -98,93 +98,12 @@ Code
- **Downloadable code**: Click
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/Histograms_Matching/MatchTemplate_Demo.cpp)
- **Code at glance:**
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
@includelineno samples/cpp/tutorial_code/Histograms_Matching/MatchTemplate_Demo.cpp

using namespace std;
using namespace cv;

/// Global Variables
Mat img; Mat templ; Mat result;
char* image_window = "Source Image";
char* result_window = "Result window";

int match_method;
int max_Trackbar = 5;

/// Function Headers
void MatchingMethod( int, void* );

/* @function main */
int main( int argc, char** argv )
{
/// Load image and template
img = imread( argv[1], 1 );
templ = imread( argv[2], 1 );

/// Create windows
namedWindow( image_window, WINDOW_AUTOSIZE );
namedWindow( result_window, WINDOW_AUTOSIZE );

/// Create Trackbar
char* trackbar_label = "Method: \n 0: SQDIFF \n 1: SQDIFF NORMED \n 2: TM CCORR \n 3: TM CCORR NORMED \n 4: TM COEFF \n 5: TM COEFF NORMED";
createTrackbar( trackbar_label, image_window, &match_method, max_Trackbar, MatchingMethod );

MatchingMethod( 0, 0 );

waitKey(0);
return 0;
}

/*
* @function MatchingMethod
* @brief Trackbar callback
*/
void MatchingMethod( int, void* )
{
/// Source image to display
Mat img_display;
img.copyTo( img_display );

/// Create the result matrix
int result_cols = img.cols - templ.cols + 1;
int result_rows = img.rows - templ.rows + 1;

result.create( result_cols, result_rows, CV_32FC1 );

/// Do the Matching and Normalize
matchTemplate( img, templ, result, match_method );
normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );

/// Localizing the best match with minMaxLoc
double minVal; double maxVal; Point minLoc; Point maxLoc;
Point matchLoc;

minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() );

/// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better
if( match_method == CV_TM_SQDIFF || match_method == CV_TM_SQDIFF_NORMED )
{ matchLoc = minLoc; }
else
{ matchLoc = maxLoc; }

/// Show me what you got
rectangle( img_display, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
rectangle( result, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );

imshow( image_window, img_display );
imshow( result_window, result );

return;
}
@endcode
Explanation
-----------

1. Declare some global variables, such as the image, template and result matrices, as well as the
-# Declare some global variables, such as the image, template and result matrices, as well as the
match method and the window names:
@code{.cpp}
Mat img; Mat templ; Mat result;
@ -194,33 +113,33 @@ Explanation
int match_method;
int max_Trackbar = 5;
@endcode
2. Load the source image and template:
-# Load the source image and template:
@code{.cpp}
img = imread( argv[1], 1 );
templ = imread( argv[2], 1 );
@endcode
3. Create the windows to show the results:
-# Create the windows to show the results:
@code{.cpp}
namedWindow( image_window, WINDOW_AUTOSIZE );
namedWindow( result_window, WINDOW_AUTOSIZE );
@endcode
4. Create the Trackbar to enter the kind of matching method to be used. When a change is detected
-# Create the Trackbar to enter the kind of matching method to be used. When a change is detected
the callback function **MatchingMethod** is called.
@code{.cpp}
char* trackbar_label = "Method: \n 0: SQDIFF \n 1: SQDIFF NORMED \n 2: TM CCORR \n 3: TM CCORR NORMED \n 4: TM COEFF \n 5: TM COEFF NORMED";
createTrackbar( trackbar_label, image_window, &match_method, max_Trackbar, MatchingMethod );
@endcode
5. Wait until the user exits the program.
-# Wait until the user exits the program.
@code{.cpp}
waitKey(0);
return 0;
@endcode
6. Let's check out the callback function. First, it makes a copy of the source image:
-# Let's check out the callback function. First, it makes a copy of the source image:
@code{.cpp}
Mat img_display;
img.copyTo( img_display );
@endcode
7. Next, it creates the result matrix that will store the matching results for each template
-# Next, it creates the result matrix that will store the matching results for each template
location. Observe in detail the size of the result matrix (which matches all possible locations
for it)
@code{.cpp}
@ -229,18 +148,18 @@ Explanation

result.create( result_cols, result_rows, CV_32FC1 );
@endcode
8. Perform the template matching operation:
-# Perform the template matching operation:
@code{.cpp}
matchTemplate( img, templ, result, match_method );
@endcode
the arguments are naturally the input image **I**, the template **T**, the result **R** and the
match_method (given by the Trackbar)

9. We normalize the results:
-# We normalize the results:
@code{.cpp}
normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
@endcode
10. We localize the minimum and maximum values in the result matrix **R** by using @ref
-# We localize the minimum and maximum values in the result matrix **R** by using @ref
cv::minMaxLoc .
@code{.cpp}
double minVal; double maxVal; Point minLoc; Point maxLoc;
@ -256,7 +175,7 @@ Explanation
array.
- **Mat():** Optional mask

11. For the first two methods ( TM_SQDIFF and TM_SQDIFF_NORMED ) the best matches are the lowest
-# For the first two methods ( TM_SQDIFF and TM_SQDIFF_NORMED ) the best matches are the lowest
values. For all the others, higher values represent better matches. So, we save the
corresponding value in the **matchLoc** variable:
@code{.cpp}
@ -265,7 +184,7 @@ Explanation
else
{ matchLoc = maxLoc; }
@endcode
12. Display the source image and the result matrix. Draw a rectangle around the highest possible
-# Display the source image and the result matrix. Draw a rectangle around the highest possible
matching area:
@code{.cpp}
rectangle( img_display, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
@ -274,29 +193,32 @@ Explanation
imshow( image_window, img_display );
imshow( result_window, result );
@endcode

Results
-------

1. Testing our program with an input image such as:
-# Testing our program with an input image such as:



and a template image:



2. Generates the following result matrices (first row are the standard methods SQDIFF, CCORR and
-# Generates the following result matrices (first row are the standard methods SQDIFF, CCORR and
CCOEFF, second row are the same methods in their normalized version). In the first column, the
darkest is the better match; for the other two columns, the brighter a location, the higher the
match.







|Result_0| |Result_2| |Result_4|
------------- ------------- -------------
|Result_1| |Result_3| |Result_5|

3. The right match is shown below (black rectangle around the face of the guy at the right). Notice
-# The right match is shown below (black rectangle around the face of the guy at the right). Notice
that CCORR and CCOEFF gave erroneous best matches; however their normalized versions did it
right. This may be due to the fact that we are only considering the "highest match" and not the
other possible high matches.


@ -20,7 +20,7 @@ The *Canny Edge detector* was developed by John F. Canny in 1986. Also known to

### Steps

1. Filter out any noise. The Gaussian filter is used for this purpose. An example of a Gaussian
-# Filter out any noise. The Gaussian filter is used for this purpose. An example of a Gaussian
kernel of \f$size = 5\f$ that might be used is shown below:

\f[K = \dfrac{1}{159}\begin{bmatrix}
@ -31,8 +31,8 @@ The *Canny Edge detector* was developed by John F. Canny in 1986. Also known to
2 & 4 & 5 & 4 & 2
\end{bmatrix}\f]

2. Find the intensity gradient of the image. For this, we follow a procedure analogous to Sobel:
1. Apply a pair of convolution masks (in \f$x\f$ and \f$y\f$ directions):
-# Find the intensity gradient of the image. For this, we follow a procedure analogous to Sobel:
-# Apply a pair of convolution masks (in \f$x\f$ and \f$y\f$ directions):
\f[G_{x} = \begin{bmatrix}
-1 & 0 & +1 \\
-2 & 0 & +2 \\
@ -43,44 +43,44 @@ The *Canny Edge detector* was developed by John F. Canny in 1986. Also known to
+1 & +2 & +1
\end{bmatrix}\f]

2. Find the gradient strength and direction with:
-# Find the gradient strength and direction with:
\f[\begin{array}{l}
G = \sqrt{ G_{x}^{2} + G_{y}^{2} } \\
\theta = \arctan(\dfrac{ G_{y} }{ G_{x} })
\end{array}\f]
The direction is rounded to one of four possible angles (namely 0, 45, 90 or 135)

3. *Non-maximum* suppression is applied. This removes pixels that are not considered to be part of
-# *Non-maximum* suppression is applied. This removes pixels that are not considered to be part of
an edge. Hence, only thin lines (candidate edges) will remain.
4. *Hysteresis*: The final step. Canny does use two thresholds (upper and lower):
-# *Hysteresis*: The final step. Canny does use two thresholds (upper and lower):

1. If a pixel gradient is higher than the *upper* threshold, the pixel is accepted as an edge
2. If a pixel gradient value is below the *lower* threshold, then it is rejected.
3. If the pixel gradient is between the two thresholds, then it will be accepted only if it is
-# If a pixel gradient is higher than the *upper* threshold, the pixel is accepted as an edge
-# If a pixel gradient value is below the *lower* threshold, then it is rejected.
-# If the pixel gradient is between the two thresholds, then it will be accepted only if it is
connected to a pixel that is above the *upper* threshold.

Canny recommended an *upper*:*lower* ratio between 2:1 and 3:1.

5. For more details, you can always consult your favorite Computer Vision book.
-# For more details, you can always consult your favorite Computer Vision book.
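A hedged sketch of the threshold recommendation from step 4 in code (illustrative values; the tutorial program below derives the upper threshold the same way):
@code{.cpp}
int lowThreshold = 50;  // illustrative lower threshold
int ratio = 3;          // Canny's recommended 3:1 upper:lower ratio
Canny( src_gray, detected_edges, lowThreshold, lowThreshold*ratio );
@endcode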
|
||||
|
||||
Code
|
||||
----
|
||||
|
||||
1. **What does this program do?**
|
||||
-# **What does this program do?**
|
||||
- Asks the user to enter a numerical value to set the lower threshold for our *Canny Edge
|
||||
Detector* (by means of a Trackbar)
|
||||
- Applies the *Canny Detector* and generates a **mask** (bright lines representing the edges
|
||||
on a black background).
|
||||
- Applies the mask obtained on the original image and display it in a window.
|
||||
|
||||
2. The tutorial code's is shown lines below. You can also download it from
|
||||
-# The tutorial code's is shown lines below. You can also download it from
|
||||
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp)
|
||||
@includelineno samples/cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp
|
||||
|
||||
Explanation
-----------

-#  Create some needed variables:
    @code{.cpp}
    Mat src, src_gray;
    Mat dst, detected_edges;
@ -94,12 +94,12 @@ Explanation
    @endcode
    Note the following:

    -#  We establish a ratio of lower:upper threshold of 3:1 (with the variable *ratio*).
    -#  We set the kernel size to \f$3\f$ (for the Sobel operations to be performed internally by the
        Canny function).
    -#  We set a maximum value for the lower threshold of \f$100\f$.

-#  Loads the source image:
    @code{.cpp}
    /// Load an image
    src = imread( argv[1] );
@ -107,35 +107,35 @@ Explanation

    if( !src.data )
    { return -1; }
    @endcode
-#  Create a matrix of the same type and size as *src* (to be *dst*):
    @code{.cpp}
    dst.create( src.size(), src.type() );
    @endcode
-#  Convert the image to grayscale (using the function @ref cv::cvtColor ):
    @code{.cpp}
    cvtColor( src, src_gray, COLOR_BGR2GRAY );
    @endcode
-#  Create a window to display the results:
    @code{.cpp}
    namedWindow( window_name, WINDOW_AUTOSIZE );
    @endcode
-#  Create a Trackbar for the user to enter the lower threshold for our Canny detector:
    @code{.cpp}
    createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );
    @endcode
    Observe the following:

    -#  The variable to be controlled by the Trackbar is *lowThreshold*, with a limit of
        *max_lowThreshold* (which we set to 100 previously).
    -#  Each time the Trackbar registers an action, the callback function *CannyThreshold* will be
        invoked.

-#  Let's check the *CannyThreshold* function, step by step:
    -#  First, we blur the image with a filter of kernel size 3:
        @code{.cpp}
        blur( src_gray, detected_edges, Size(3,3) );
        @endcode
    -#  Second, we apply the OpenCV function @ref cv::Canny :
        @code{.cpp}
        Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );
        @endcode
@ -149,11 +149,11 @@ Explanation
        -   *kernel_size*: We defined it to be 3 (the size of the Sobel kernel to be used
            internally).

-#  We fill a *dst* image with zeros (meaning the image is completely black):
    @code{.cpp}
    dst = Scalar::all(0);
    @endcode
-#  Finally, we will use the function @ref cv::Mat::copyTo to map only the areas of the image that are
    identified as edges (on a black background):
    @code{.cpp}
    src.copyTo( dst, detected_edges);
@ -163,20 +163,21 @@ Explanation
    contours on a black background, the resulting *dst* will be black in all the area but the
    detected edges.

-#  We display our result:
    @code{.cpp}
    imshow( window_name, dst );
    @endcode

Result
------

-   After compiling the code above, we can run it giving as argument the path to an image, for
    example, using the following input image:

    

-   Moving the slider, trying different thresholds, we obtain the following result:

    

-   Notice how the image is superposed to the black background on the edge regions.

@ -14,14 +14,14 @@ Theory

@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.

-#  In our previous tutorial we learned to use convolution to operate on images. One problem that
    naturally arises is how to handle the boundaries. How can we convolve them if the evaluated
    points are at the edge of the image?
-#  What most OpenCV functions do is to copy a given image onto another slightly larger image and
    then automatically pad the boundary (by any of the methods explained in the sample code just
    below). This way, the convolution can be performed over the needed pixels without problems (the
    extra padding is cut after the operation is done).
-#  In this tutorial, we will briefly explore two ways of defining the extra padding (border) for an
    image:

    -#  **BORDER_CONSTANT**: Pad the image with a constant value (i.e. black or \f$0\f$).

@ -33,91 +33,26 @@ Theory
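
As a quick taste of the two options, here is a minimal sketch; the input file name `img.png` and
the fixed 10-pixel border are illustrative choices:
@code{.cpp}
Mat src = imread( "img.png" );
Mat dst_constant, dst_replicate;
int b = 10;   // border size in pixels, applied on every side

/// BORDER_CONSTANT: every border pixel is set to the given value (black here)
copyMakeBorder( src, dst_constant, b, b, b, b, BORDER_CONSTANT, Scalar(0,0,0) );

/// BORDER_REPLICATE: the outermost rows/columns of src are repeated outwards
copyMakeBorder( src, dst_replicate, b, b, b, b, BORDER_REPLICATE );
@endcode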

Code
----

-#  **What does this program do?**
    -   Load an image
    -   Let the user choose what kind of padding to use in the input image. There are two options:

        -#  *Constant value border*: Applies a padding of a constant value for the whole border.
            This value will be updated randomly every 0.5 seconds.
        -#  *Replicated border*: The border will be replicated from the pixel values at the edges of
            the original image.

        The user chooses either option by pressing 'c' (constant) or 'r' (replicate).
    -   The program finishes when the user presses 'ESC'.

-#  The tutorial code is shown in the lines below. You can also download it from
    [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp)
    @includelineno samples/cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp

Explanation
-----------

-#  First we declare the variables we are going to use:
    @code{.cpp}
    Mat src, dst;
    int top, bottom, left, right;
@ -129,7 +64,7 @@ Explanation
    @endcode
    Special attention should be paid to the variable *rng*, which is a random number generator. We
    use it to generate the random border color, as we will see soon.

-#  As usual we load our source image *src*:
    @code{.cpp}
    src = imread( argv[1] );

@ -138,17 +73,17 @@ Explanation
    printf(" No data entered, please enter the path to an image file \n");
    }
    @endcode
-#  After giving a short intro of how to use the program, we create a window:
    @code{.cpp}
    namedWindow( window_name, WINDOW_AUTOSIZE );
    @endcode
-#  Now we initialize the arguments that define the size of the borders (*top*, *bottom*, *left* and
    *right*). We give them a value of 5% of the size of *src*.
    @code{.cpp}
    top = (int) (0.05*src.rows); bottom = (int) (0.05*src.rows);
    left = (int) (0.05*src.cols); right = (int) (0.05*src.cols);
    @endcode
-#  The program begins a *while* loop. If the user presses 'c' or 'r', the *borderType* variable
    takes the value of *BORDER_CONSTANT* or *BORDER_REPLICATE* respectively:
    @code{.cpp}
    while( true )
@ -162,14 +97,14 @@ Explanation
    else if( (char)c == 'r' )
    { borderType = BORDER_REPLICATE; }
    @endcode
-#  In each iteration (after 0.5 seconds), the variable *value* is updated...
    @code{.cpp}
    value = Scalar( rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255) );
    @endcode
    ...with a random value generated by the **RNG** variable *rng*. This value is a number picked
    randomly in the range \f$[0,255]\f$.

-#  Finally, we call the function @ref cv::copyMakeBorder to apply the respective padding:
    @code{.cpp}
    copyMakeBorder( src, dst, top, bottom, left, right, borderType, value );
    @endcode
@ -184,14 +119,15 @@ Explanation
    -   *value*: If *borderType* is *BORDER_CONSTANT*, this is the value used to fill the border
        pixels.

-#  We display our output image in the window created previously:
    @code{.cpp}
    imshow( window_name, dst );
    @endcode

Results
-------

-#  After compiling the code above, you can execute it giving as argument the path of an image. The
    result should be:

    -   By default, it begins with the border set to BORDER_CONSTANT. Hence, a succession of random
@ -203,4 +139,4 @@ Results
    Below are some screenshots showing how the border changes color and how the *BORDER_REPLICATE*
    option looks:

    

@ -23,18 +23,18 @@ In a very general sense, convolution is an operation between every part of an im

A kernel is essentially a fixed size array of numerical coefficients along with an *anchor point* in
that array, which is typically located at the center.



### How does convolution with a kernel work?

Assume you want to know the resulting value of a particular location in the image. The value of the
convolution is calculated in the following way:

-#  Place the kernel anchor on top of a determined pixel, with the rest of the kernel overlaying the
    corresponding local pixels in the image.
-#  Multiply the kernel coefficients by the corresponding image pixel values and sum the result.
-#  Place the result at the location of the *anchor* in the input image.
-#  Repeat the process for all pixels by scanning the kernel over the entire image.

Expressing the procedure above in the form of an equation we would have:

\f[H(x,y) = \sum_{i=0}^{M_{i} - 1} \sum_{j=0}^{M_{j}-1} I(x+i - a_{i}, y + j - a_{j}) K(i,j)\f]

@ -46,7 +46,7 @@ these operations.
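
To make the procedure concrete, here is a minimal sketch of the same computation for a single
pixel; it assumes a single-channel float image *I* and a float kernel *K* anchored at its center
(hypothetical inputs, with \f$(x,y)\f$ far enough from the border). @ref cv::filter2D performs this
for every pixel at once, with proper border handling:
@code{.cpp}
float convolveAt( const Mat& I, const Mat& K, int x, int y )
{
    float sum = 0.f;
    for( int i = 0; i < K.rows; i++ )     // overlay the kernel around the pixel
      for( int j = 0; j < K.cols; j++ )   // multiply coefficients and accumulate
        sum += K.at<float>(i,j) * I.at<float>( y + i - K.rows/2, x + j - K.cols/2 );
    return sum;                           // the value stored at the anchor location
}
@endcode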

Code
----

-#  **What does this program do?**
    -   Loads an image
    -   Performs a *normalized box filter*. For instance, for a kernel of size \f$size = 3\f$, the
        kernel would be:

@ -61,7 +61,7 @@ Code

        \f[K = \dfrac{1}{3 \cdot 3} \begin{bmatrix}
        1 & 1 & 1  \\
        1 & 1 & 1  \\
        1 & 1 & 1
        \end{bmatrix}\f]

    -   The filter output (with each kernel) will be shown for 500 milliseconds

-#  The tutorial code is shown in the lines below. You can also download it from
    [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/filter2D_demo.cpp)
    @code{.cpp}
    #include "opencv2/imgproc.hpp"
@ -125,26 +125,26 @@ int main ( int argc, char** argv )

Explanation
-----------

-#  Load an image
    @code{.cpp}
    src = imread( argv[1] );

    if( !src.data )
    { return -1; }
    @endcode
-#  Create a window to display the result
    @code{.cpp}
    namedWindow( window_name, WINDOW_AUTOSIZE );
    @endcode
-#  Initialize the arguments for the linear filter
    @code{.cpp}
    anchor = Point( -1, -1 );
    delta = 0;
    ddepth = -1;
    @endcode
-#  Perform an infinite loop updating the kernel size and applying our linear filter to the input
    image. Let's analyze that more in detail:
-#  First we define the kernel our filter is going to use. Here it is:
    @code{.cpp}
    kernel_size = 3 + 2*( ind%5 );
    kernel = Mat::ones( kernel_size, kernel_size, CV_32F )/ (float)(kernel_size*kernel_size);
@ -153,7 +153,7 @@ Explanation
    line actually builds the kernel by setting its value to a matrix filled with \f$1's\f$ and
    normalizing it by dividing it by the number of elements.

-#  After setting the kernel, we can generate the filter by using the function @ref cv::filter2D :
    @code{.cpp}
    filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );
    @endcode
@ -169,14 +169,14 @@ Explanation
    -#  *delta*: A value to be added to each pixel during the convolution. By default it is \f$0\f$.
    -#  *BORDER_DEFAULT*: We leave this value at its default (more details in the following tutorial).

-#  Our program runs a *while* loop; every 500 ms the kernel size of our filter will be
    updated in the range indicated.
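
A small sketch of the equivalence at work (illustrative; it assumes *src* has been loaded as
above): with the normalized kernel, @ref cv::filter2D produces the same result as the built-in
@ref cv::blur for the same kernel size, since both default to *BORDER_DEFAULT*.
@code{.cpp}
int kernel_size = 5;   // one of the values 3, 5, 7, 9, 11 produced by 3 + 2*( ind%5 )
Mat kernel = Mat::ones( kernel_size, kernel_size, CV_32F )
             / (float)(kernel_size*kernel_size);   // coefficients sum to 1

Mat dst_filter, dst_blur;
filter2D( src, dst_filter, -1, kernel );                 // normalized box filter by hand
blur( src, dst_blur, Size(kernel_size, kernel_size) );   // built-in equivalent
@endcode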

Results
-------

-#  After compiling the code above, you can execute it giving as argument the path of an image. The
    result should be a window that shows an image blurred by a normalized filter. Every 0.5 seconds
    the kernel size should change, as can be seen in the series of snapshots below:

    

@ -23,7 +23,7 @@ Theory

where \f$(x_{center}, y_{center})\f$ define the center position (green point) and \f$r\f$ is the radius,
which allows us to completely define a circle, as can be seen below:



-   For the sake of efficiency, OpenCV implements a detection method slightly trickier than the standard
    Hough Transform: *the Hough gradient method*, which is made up of two main stages. The first
@ -34,82 +34,35 @@ Theory

Code
----

-#  **What does this program do?**
    -   Loads an image and blurs it to reduce the noise
    -   Applies the *Hough Circle Transform* to the blurred image.
    -   Displays the detected circles in a window.

-#  The sample code that we will explain can be downloaded from [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/houghcircles.cpp).
    A slightly fancier version (which shows trackbars for
    changing the threshold values) can be found [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/HoughCircle_Demo.cpp).
    @includelineno samples/cpp/houghcircles.cpp

Explanation
-----------

-#  Load an image
    @code{.cpp}
    src = imread( argv[1], 1 );

    if( !src.data )
    { return -1; }
    @endcode
-#  Convert it to grayscale:
    @code{.cpp}
    cvtColor( src, src_gray, COLOR_BGR2GRAY );
    @endcode
-#  Apply a Gaussian blur to reduce noise and avoid false circle detection:
    @code{.cpp}
    GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );
    @endcode
-#  Proceed to apply Hough Circle Transform:
    @code{.cpp}
    vector<Vec3f> circles;

@ -129,7 +82,7 @@ Explanation
    -   *min_radius = 0*: Minimum radius to be detected. If unknown, put zero as default.
    -   *max_radius = 0*: Maximum radius to be detected. If unknown, put zero as default.

-#  Draw the detected circles:
    @code{.cpp}
    for( size_t i = 0; i < circles.size(); i++ )
    {
@ -143,19 +96,19 @@ Explanation
    @endcode
    You can see that we will draw the circles in red and the centers with a small green dot.

-#  Display the detected circle(s):
    @code{.cpp}
    namedWindow( "Hough Circle Transform Demo", WINDOW_AUTOSIZE );
    imshow( "Hough Circle Transform Demo", src );
    @endcode
-#  Wait for the user to exit the program
    @code{.cpp}
    waitKey(0);
    @endcode
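
Putting the steps together, here is a compact sketch of the whole flow; the input path `img.png`
is illustrative and the parameter values are the ones discussed above:
@code{.cpp}
Mat src = imread( "img.png", 1 ), src_gray;
cvtColor( src, src_gray, COLOR_BGR2GRAY );
GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );

vector<Vec3f> circles;
/// dp = 1, minimum center distance = rows/8, Canny upper threshold = 200,
/// accumulator threshold = 100, radius range unrestricted (0, 0)
HoughCircles( src_gray, circles, HOUGH_GRADIENT, 1, src_gray.rows/8, 200, 100, 0, 0 );

for( size_t i = 0; i < circles.size(); i++ )
{
    Point center( cvRound(circles[i][0]), cvRound(circles[i][1]) );
    circle( src, center, cvRound(circles[i][2]), Scalar(0,0,255), 3 );  // red outline
}
@endcode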

Result
------

The result of running the code above with a test image is shown below:



@ -12,18 +12,22 @@ In this tutorial you will learn how to:

Theory
------

@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.

Hough Line Transform
--------------------

-#  The Hough Line Transform is a transform used to detect straight lines.
-#  To apply the Transform, first an edge detection pre-processing is desirable.

### How does it work?

-#  As you know, a line in the image space can be expressed with two variables. For example:

    -#  In the **Cartesian coordinate system:** Parameters: \f$(m,b)\f$.
    -#  In the **Polar coordinate system:** Parameters: \f$(r,\theta)\f$.

    

    For Hough Transforms, we will express lines in the *Polar system*. Hence, a line equation can be
    written as:

@ -32,7 +36,7 @@ straight lines. \#. To apply the Transform, first an edge detection pre-processi

    \f[y = \left ( -\dfrac{\cos \theta}{\sin \theta} \right ) x + \left ( \dfrac{r}{\sin \theta} \right )\f]

Arranging the terms: \f$r = x \cos \theta + y \sin \theta\f$

-#  In general for each point \f$(x_{0}, y_{0})\f$, we can define the family of lines that goes through
    that point as:

    \f[r_{\theta} = x_{0} \cdot \cos \theta + y_{0} \cdot \sin \theta\f]

@ -40,30 +44,30 @@ Arranging the terms: \f$r = x \cos \theta + y \sin \theta\f$

    Meaning that each pair \f$(r_{\theta},\theta)\f$ represents each line that passes by
    \f$(x_{0}, y_{0})\f$.

-#  If for a given \f$(x_{0}, y_{0})\f$ we plot the family of lines that goes through it, we get a
    sinusoid. For instance, for \f$x_{0} = 8\f$ and \f$y_{0} = 6\f$ we get the following plot (in a plane
    \f$\theta\f$ - \f$r\f$):

    

    We consider only points such that \f$r > 0\f$ and \f$0< \theta < 2 \pi\f$.

-#  We can do the same operation above for all the points in an image. If the curves of two
    different points intersect in the plane \f$\theta\f$ - \f$r\f$, that means that both points belong to
    the same line. For instance, continuing with the example above and drawing the plot for two more
    points: \f$x_{1} = 4\f$, \f$y_{1} = 9\f$ and \f$x_{2} = 12\f$, \f$y_{2} = 3\f$, we get:

    

    The three plots intersect in one single point \f$(0.925, 9.6)\f$; these coordinates are the
    parameters (\f$\theta, r\f$) of the line on which \f$(x_{0}, y_{0})\f$, \f$(x_{1}, y_{1})\f$ and
    \f$(x_{2}, y_{2})\f$ lie.

-#  What does all the stuff above mean? It means that in general, a line can be *detected* by
    finding the number of intersections between curves. The more curves intersect, the more points
    the line represented by that intersection has. In general, we can define a *threshold* of the
    minimum number of intersections needed to *detect* a line.
-#  This is what the Hough Line Transform does. It keeps track of the intersection between curves of
    every point in the image. If the number of intersections is above some *threshold*, then it
    declares it as a line with the parameters \f$(\theta, r_{\theta})\f$ of the intersection point.
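
A quick numerical check of that intersection point: each of the three sample points should satisfy
\f$r = x \cos \theta + y \sin \theta\f$ for \f$\theta = 0.925\f$.
@code{.cpp}
#include <cstdio>
#include <cmath>

int main( )
{
    const double theta = 0.925;
    const double x[3] = { 8, 4, 12 }, y[3] = { 6, 9, 3 };
    for( int k = 0; k < 3; k++ )
        std::printf( "r = %.2f\n", x[k]*std::cos(theta) + y[k]*std::sin(theta) );
    // all three values come out close to 9.6, so the points lie on a single line
    return 0;
}
@endcode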

@ -86,83 +90,20 @@ b. **The Probabilistic Hough Line Transform**

Code
----

-#  **What does this program do?**
    -   Loads an image
    -   Applies either a *Standard Hough Line Transform* or a *Probabilistic Line Transform*.
    -   Displays the original image and the detected lines in two windows.

-#  The sample code that we will explain can be downloaded from [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/houghlines.cpp). A slightly fancier version
    (which shows both Hough standard and probabilistic with trackbars for changing the threshold
    values) can be found [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/HoughLines_Demo.cpp).
    @includelineno samples/cpp/houghlines.cpp

Explanation
-----------

-#  Load an image
    @code{.cpp}
    Mat src = imread(filename, 0);
    if(src.empty())
@ -172,14 +113,14 @@ Explanation
    return -1;
    }
    @endcode
-#  Detect the edges of the image by using a Canny detector
    @code{.cpp}
    Canny(src, dst, 50, 200, 3);
    @endcode
    Now we will apply the Hough Line Transform. We will explain how to use both OpenCV functions
    available for this purpose:

-#  **Standard Hough Line Transform**
    -#  First, you apply the Transform:
        @code{.cpp}
        vector<Vec2f> lines;
@ -211,7 +152,7 @@ Explanation
        line( cdst, pt1, pt2, Scalar(0,0,255), 3, LINE_AA);
        }
        @endcode
-#  **Probabilistic Hough Line Transform**
    -#  First you apply the transform:
        @code{.cpp}
        vector<Vec4i> lines;
@ -239,15 +180,16 @@ Explanation
        line( cdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,0,255), 3, LINE_AA);
        }
        @endcode
-#  Display the original image and the detected lines:
    @code{.cpp}
    imshow("source", src);
    imshow("detected lines", cdst);
    @endcode
-#  Wait until the user exits the program
    @code{.cpp}
    waitKey();
    @endcode

Result
------

@ -258,11 +200,11 @@ Result

Using an input image such as:



We get the following result by using the Probabilistic Hough Line Transform:



You may observe that the number of lines detected varies while you change the *threshold*. The
explanation is sort of evident: if you establish a higher threshold, fewer lines will be detected

@ -12,16 +12,16 @@ In this tutorial you will learn how to:

Theory
------

-#  In the previous tutorial we learned how to use the *Sobel Operator*. It was based on the fact
    that in the edge area, the pixel intensity shows a "jump" or a high variation of intensity.
    Getting the first derivative of the intensity, we observed that an edge is characterized by a
    maximum, as can be seen in the figure:

    

-#  And...what happens if we take the second derivative?

    

    You can observe that the second derivative is zero! So, we can also use this criterion to
    attempt to detect edges in an image. However, note that zeros will not only appear in edges
@ -30,81 +30,34 @@ Theory

### Laplacian Operator

-#  From the explanation above, we deduce that the second derivative can be used to *detect edges*.
    Since images are "*2D*", we would need to take the derivative in both dimensions. Here, the
    Laplacian operator comes in handy.
-#  The *Laplacian operator* is defined by:

    \f[Laplace(f) = \dfrac{\partial^{2} f}{\partial x^{2}} + \dfrac{\partial^{2} f}{\partial y^{2}}\f]

-#  The Laplacian operator is implemented in OpenCV by the function @ref cv::Laplacian . In fact,
    since the Laplacian uses the gradient of images, it calls internally the *Sobel* operator to
    perform its computation.
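
As a hedged illustration of the definition: for the smallest aperture (*ksize = 1*),
@ref cv::Laplacian applies the classic \f$3 \times 3\f$ finite-difference kernel, which can be
reproduced with @ref cv::filter2D (the variable names below are illustrative):
@code{.cpp}
Mat laplace_kernel = (Mat_<float>(3,3) << 0,  1, 0,
                                          1, -4, 1,
                                          0,  1, 0);
Mat laplace;
filter2D( src_gray, laplace, CV_16S, laplace_kernel );  // ~ Laplacian( src_gray, ..., ksize = 1 )
@endcode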

Code
----

-#  **What does this program do?**
    -   Loads an image
    -   Removes noise by applying a Gaussian blur and then converts the original image to grayscale
    -   Applies a Laplacian operator to the grayscale image and stores the output image
    -   Displays the result in a window

-#  The tutorial code is shown in the lines below. You can also download it from
    [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp)
    @includelineno samples/cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp

Explanation
-----------

-#  Create some needed variables:
    @code{.cpp}
    Mat src, src_gray, dst;
    int kernel_size = 3;
@ -113,22 +66,22 @@ Explanation
    int ddepth = CV_16S;
    char* window_name = "Laplace Demo";
    @endcode
-#  Loads the source image:
    @code{.cpp}
    src = imread( argv[1] );

    if( !src.data )
    { return -1; }
    @endcode
-#  Apply a Gaussian blur to reduce noise:
    @code{.cpp}
    GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
    @endcode
-#  Convert the image to grayscale using @ref cv::cvtColor
    @code{.cpp}
    cvtColor( src, src_gray, COLOR_RGB2GRAY );
    @endcode
-#  Apply the Laplacian operator to the grayscale image:
    @code{.cpp}
    Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, BORDER_DEFAULT );
    @endcode
@ -142,27 +95,26 @@ Explanation
    this example.
    -   *scale*, *delta* and *BORDER_DEFAULT*: We leave them as default values.

-#  Convert the output from the Laplacian operator to a *CV_8U* image:
    @code{.cpp}
    convertScaleAbs( dst, abs_dst );
    @endcode
-#  Display the result in a window:
    @code{.cpp}
    imshow( window_name, abs_dst );
    @endcode

Results
-------

-#  After compiling the code above, we can run it giving as argument the path to an image. For
    example, using as an input:

    

-#  We obtain the following result. Notice how the trees and the silhouette of the cow are
    approximately well defined (except in areas in which the intensity is very similar, i.e. around
    the cow's head). Also, note that the roof of the house behind the trees (right side) is
    clearly marked. This is due to the fact that the contrast is higher in that region.

    

@ -33,146 +33,53 @@ Theory
    What would happen? It is easily seen that the image would flip in the \f$x\f$ direction. For
    instance, consider the input image:

    

    observe how the red circle changes position with respect to \f$x\f$ (considering \f$x\f$ the horizontal
    direction):

    

-   In OpenCV, the function @ref cv::remap offers a simple remapping implementation.
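
A minimal sketch of exactly this flip (the input file name `img.png` is illustrative): we fill the
two maps so that each destination pixel \f$(i,j)\f$ reads the source at \f$(src.cols - i, j)\f$,
then call @ref cv::remap.
@code{.cpp}
Mat src = imread( "img.png", 1 ), dst;
Mat map_x( src.size(), CV_32FC1 ), map_y( src.size(), CV_32FC1 );

for( int j = 0; j < src.rows; j++ )
{ for( int i = 0; i < src.cols; i++ )
    {
      map_x.at<float>(j,i) = (float)(src.cols - i);   // mirror the x coordinate
      map_y.at<float>(j,i) = (float)j;                // keep the y coordinate
    }
}
remap( src, dst, map_x, map_y, INTER_LINEAR );
@endcode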

Code
----

-#  **What does this program do?**
    -   Loads an image
    -   Each second, applies 1 of 4 different remapping processes to the image and displays it
        indefinitely in a window.
    -   Waits for the user to exit the program

-#  The tutorial code is shown in the lines below. You can also download it from
    [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Remap_Demo.cpp)
    @includelineno samples/cpp/tutorial_code/ImgTrans/Remap_Demo.cpp

Explanation
-----------

-#  Create some variables we will use:
    @code{.cpp}
    Mat src, dst;
    Mat map_x, map_y;
    char* remap_window = "Remap demo";
    int ind = 0;
    @endcode
-#  Load an image:
    @code{.cpp}
    src = imread( argv[1], 1 );
    @endcode
-#  Create the destination image and the two mapping matrices (for x and y):
    @code{.cpp}
    dst.create( src.size(), src.type() );
    map_x.create( src.size(), CV_32FC1 );
    map_y.create( src.size(), CV_32FC1 );
    @endcode
-#  Create a window to display results:
    @code{.cpp}
    namedWindow( remap_window, WINDOW_AUTOSIZE );
    @endcode
-#  Establish a loop. Every 1000 ms we update our mapping matrices (*mat_x* and *mat_y*) and apply
    them to our source image:
    @code{.cpp}
    while( true )
@ -205,14 +112,11 @@ Explanation

    How do we update our mapping matrices *mat_x* and *mat_y*? Go on reading:

-#  **Updating the mapping matrices:** We are going to perform 4 different mappings:
    -#  Reduce the picture to half its size and display it in the middle:

        \f[h(i,j) = ( 2*i - src.cols/2 + 0.5, 2*j - src.rows/2 + 0.5)\f]

        for all pairs \f$(i,j)\f$ such that: \f$\dfrac{src.cols}{4}<i<\dfrac{3 \cdot src.cols}{4}\f$ and
        \f$\dfrac{src.rows}{4}<j<\dfrac{3 \cdot src.rows}{4}\f$

    -#  Turn the image upside down: \f$h( i, j ) = (i, src.rows - j)\f$
    -#  Reflect the image from left to right: \f$h(i,j) = ( src.cols - i, j )\f$
    -#  Combination of b and c: \f$h(i,j) = ( src.cols - i, src.rows - j )\f$

@ -254,26 +158,27 @@ for( int j = 0; j < src.rows; j++ )
        ind++;
    }
    @endcode

Result
------

-#  After compiling the code above, you can execute it giving as argument an image path. For
    instance, by using the following image:

    

-#  This is the result of reducing it to half the size and centering it:

    

-#  Turning it upside down:

    

-#  Reflecting it in the x direction:

    

-#  Reflecting it in both directions:

    

@ -15,45 +15,45 @@ Theory

@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.

-#  In the last two tutorials we have seen applicative examples of convolutions. One of the most
    important convolutions is the computation of derivatives in an image (or an approximation to
    them).
-#  Why might the computation of derivatives in an image be important? Let's imagine we want to
    detect the *edges* present in the image. For instance:

    

    You can easily notice that in an *edge*, the pixel intensity *changes* in a noticeable way. A
    good way to express *changes* is by using *derivatives*. A high change in gradient indicates a
    major change in the image.

-#  To be more graphical, let's assume we have a 1D-image. An edge is shown by the "jump" in
    intensity in the plot below:

    

-#  The edge "jump" can be seen more easily if we take the first derivative (actually, here it
    appears as a maximum)

    

-#  So, from the explanation above, we can deduce that a method to detect edges in an image can be
    performed by locating pixel locations where the gradient is higher than its neighbors (or to
    generalize, higher than a threshold).
-#  For a more detailed explanation, please refer to **Learning OpenCV** by Bradski and Kaehler.

### Sobel Operator

-#  The Sobel Operator is a discrete differentiation operator. It computes an approximation of the
    gradient of an image intensity function.
-#  The Sobel Operator combines Gaussian smoothing and differentiation.

#### Formulation

Assuming that the image to be operated is \f$I\f$:

-#  We calculate two derivatives:
    -#  **Horizontal changes**: This is computed by convolving \f$I\f$ with a kernel \f$G_{x}\f$ with odd
        size. For example for a kernel size of 3, \f$G_{x}\f$ would be computed as:

@ -62,7 +62,7 @@ Assuming that the image to be operated is \f$I\f$:

        \f[G_{x} = \begin{bmatrix}
        -1 & 0 & +1  \\
        -2 & 0 & +2  \\
        -1 & 0 & +1
        \end{bmatrix} * I\f]

    -#  **Vertical changes**: This is computed by convolving \f$I\f$ with a kernel \f$G_{y}\f$ with odd
        size. For example for a kernel size of 3, \f$G_{y}\f$ would be computed as:

@ -71,7 +71,7 @@ Assuming that the image to be operated is \f$I\f$:

        \f[G_{y} = \begin{bmatrix}
        -1 & -2 & -1  \\
         0 &  0 &  0  \\
        +1 & +2 & +1
        \end{bmatrix} * I\f]

-#  At each point of the image we calculate an approximation of the *gradient* in that point by
    combining both results above:

    \f[G = \sqrt{ G_{x}^{2} + G_{y}^{2} }\f]

@ -83,7 +83,7 @@ Assuming that the image to be operated is \f$I\f$:
@note
    When the size of the kernel is `3`, the Sobel kernel shown above may produce noticeable
    inaccuracies (after all, Sobel is only an approximation of the derivative). OpenCV addresses
    this inaccuracy for kernels of size 3 by using the @ref cv::Scharr function. This is as fast
    but more accurate than the standard Sobel function. It implements the following kernels:

@ -103,18 +103,18 @@ Assuming that the image to be operated is \f$I\f$:

    \f[G_{x} = \begin{bmatrix}
    -3 & 0 & +3  \\
    -10 & 0 & +10  \\
    -3 & 0 & +3
    \end{bmatrix}
    \qquad
    G_{y} = \begin{bmatrix}
    -3 & -10 & -3  \\
     0 &  0  &  0  \\
    +3 & +10 & +3
    \end{bmatrix}\f]

Code
----

-#  **What does this program do?**
    -   Applies the *Sobel Operator* and generates as output an image with the detected *edges*
        bright on a darker background.

-#  The tutorial code is shown in the lines below. You can also download it from
    [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp)
    @includelineno samples/cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp

Explanation
-----------

-#  First we declare the variables we are going to use:
    @code{.cpp}
    Mat src, src_gray;
    Mat grad;
@ -123,22 +123,22 @@ Explanation
    int delta = 0;
    int ddepth = CV_16S;
    @endcode
-#  As usual we load our source image *src*:
    @code{.cpp}
    src = imread( argv[1] );

    if( !src.data )
    { return -1; }
    @endcode
-#  First, we apply a @ref cv::GaussianBlur to our image to reduce the noise (kernel size = 3)
    @code{.cpp}
    GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
    @endcode
-#  Now we convert our filtered image to grayscale:
    @code{.cpp}
    cvtColor( src, src_gray, COLOR_RGB2GRAY );
    @endcode
-#  Second, we calculate the "*derivatives*" in *x* and *y* directions. For this, we use the
    function @ref cv::Sobel as shown below:
    @code{.cpp}
    Mat grad_x, grad_y;
@ -161,23 +161,24 @@ Explanation
    Notice that to calculate the gradient in *x* direction we use: \f$x_{order}= 1\f$ and
    \f$y_{order} = 0\f$. We do analogously for the *y* direction.

-#  We convert our partial results back to *CV_8U*:
    @code{.cpp}
    convertScaleAbs( grad_x, abs_grad_x );
    convertScaleAbs( grad_y, abs_grad_y );
    @endcode
-#  Finally, we try to approximate the *gradient* by adding both directional gradients (note that
    this is not an exact calculation at all, but it is good for our purposes).
    @code{.cpp}
    addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
    @endcode
-#  Finally, we show our result:
    @code{.cpp}
    imshow( window_name, grad );
    @endcode
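
If the approximation of step 7 is not good enough for your purposes, a sketch of the exact
magnitude (assuming *grad_x* and *grad_y* from above) is to convert to float and use
@ref cv::magnitude instead of @ref cv::addWeighted :
@code{.cpp}
Mat fx, fy, mag, grad_exact;
grad_x.convertTo( fx, CV_32F );
grad_y.convertTo( fy, CV_32F );
magnitude( fx, fy, mag );             // sqrt( fx^2 + fy^2 ), per pixel
mag.convertTo( grad_exact, CV_8U );   // back to a displayable 8-bit image
@endcode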

Results
-------

-#  Here is the output of applying our basic detector to *lena.jpg*:

    

@ -6,17 +6,17 @@ Goal

In this tutorial you will learn how to:

-   Use the OpenCV function @ref cv::warpAffine to implement simple remapping routines.
-   Use the OpenCV function @ref cv::getRotationMatrix2D to obtain a \f$2 \times 3\f$ rotation matrix.

Theory
------

### What is an Affine Transformation?

-#  It is any transformation that can be expressed in the form of a *matrix multiplication* (linear
    transformation) followed by a *vector addition* (translation).
-#  From the above, we can use an Affine Transformation to express:

    -#  Rotations (linear transformation)
    -#  Translations (vector addition)
@ -25,24 +25,28 @@ Theory
    you can see that, in essence, an Affine Transformation represents a **relation** between two
    images.

-#  The usual way to represent an Affine Transform is by using a \f$2 \times 3\f$ matrix.

    \f[
    A = \begin{bmatrix}
        a_{00} & a_{01} \\
        a_{10} & a_{11}
        \end{bmatrix}_{2 \times 2}
    \qquad
    B = \begin{bmatrix}
        b_{00} \\
        b_{10}
        \end{bmatrix}_{2 \times 1}
    \f]
    \f[
    M = \begin{bmatrix}
        A & B
        \end{bmatrix}
    =
    \begin{bmatrix}
        a_{00} & a_{01} & b_{00} \\
        a_{10} & a_{11} & b_{10}
    \end{bmatrix}_{2 \times 3}
    \f]

    Considering that we want to transform a 2D vector \f$X = \begin{bmatrix}x \\ y\end{bmatrix}\f$ by
    using \f$A\f$ and \f$B\f$, we can do it equivalently with:

    \f[T = A \cdot \begin{bmatrix}x \\ y\end{bmatrix} + B = M \cdot [x, y, 1]^{T}\f]

@ -56,17 +60,17 @@ Theory
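
A small numeric sketch of that relation (the values are illustrative): applying the
\f$2 \times 3\f$ matrix \f$M\f$ to a point in homogeneous form \f$[x, y, 1]^{T}\f$ yields the
transformed point \f$T\f$.
@code{.cpp}
Mat M = (Mat_<double>(2,3) << 1, 0, 10,
                              0, 1, 20);   // pure translation by (10, 20)
Mat X = (Mat_<double>(3,1) << 5, 7, 1);    // the point (5,7) in homogeneous form
Mat T = M * X;                             // T = (15, 27)
@endcode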

### How do we get an Affine Transformation?

-#  Excellent question. We mentioned that an Affine Transformation is basically a **relation**
    between two images. The information about this relation can come, roughly, in two ways:
    -#  We know both \f$X\f$ and \f$T\f$ and we also know that they are related. Then our job is to find \f$M\f$.
    -#  We know \f$M\f$ and \f$X\f$. To obtain \f$T\f$ we only need to apply \f$T = M \cdot X\f$. Our information
        for \f$M\f$ may be explicit (i.e. have the 2-by-3 matrix) or it can come as a geometric relation
        between points.

-#  Let's explain a little bit better (b). Since \f$M\f$ relates two images, we can analyze the simplest
    case in which it relates three points in both images. Look at the figure below:

    

    the points 1, 2 and 3 (forming a triangle in image 1) are mapped into image 2, still forming a
    triangle, but now they have changed noticeably. If we find the Affine Transformation with these
@ -76,7 +80,7 @@ Theory

Code
----

-#  **What does this program do?**
    -   Loads an image
    -   Applies an Affine Transform to the image. This Transform is obtained from the relation
        between three points. We use the function @ref cv::warpAffine for that purpose.
@ -84,86 +88,14 @@ Code
        the image center
    -   Waits until the user exits the program

-#  The tutorial code is shown in the lines below. You can also download it from
    [here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Geometric_Transforms_Demo.cpp)
    @includelineno samples/cpp/tutorial_code/ImgTrans/Geometric_Transforms_Demo.cpp

using namespace cv;
|
||||
using namespace std;
|
||||
|
||||
/// Global variables
|
||||
char* source_window = "Source image";
|
||||
char* warp_window = "Warp";
|
||||
char* warp_rotate_window = "Warp + Rotate";
|
||||
|
||||
/* @function main */
|
||||
int main( int argc, char** argv )
|
||||
{
|
||||
Point2f srcTri[3];
|
||||
Point2f dstTri[3];
|
||||
|
||||
Mat rot_mat( 2, 3, CV_32FC1 );
|
||||
Mat warp_mat( 2, 3, CV_32FC1 );
|
||||
Mat src, warp_dst, warp_rotate_dst;
|
||||
|
||||
/// Load the image
|
||||
src = imread( argv[1], 1 );
|
||||
|
||||
/// Set the dst image the same type and size as src
|
||||
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
|
||||
|
||||
/// Set your 3 points to calculate the Affine Transform
|
||||
srcTri[0] = Point2f( 0,0 );
|
||||
srcTri[1] = Point2f( src.cols - 1, 0 );
|
||||
srcTri[2] = Point2f( 0, src.rows - 1 );
|
||||
|
||||
dstTri[0] = Point2f( src.cols*0.0, src.rows*0.33 );
|
||||
dstTri[1] = Point2f( src.cols*0.85, src.rows*0.25 );
|
||||
dstTri[2] = Point2f( src.cols*0.15, src.rows*0.7 );
|
||||
|
||||
/// Get the Affine Transform
|
||||
warp_mat = getAffineTransform( srcTri, dstTri );
|
||||
|
||||
/// Apply the Affine Transform just found to the src image
|
||||
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
|
||||
|
||||
/* Rotating the image after Warp */
|
||||
|
||||
/// Compute a rotation matrix with respect to the center of the image
|
||||
Point center = Point( warp_dst.cols/2, warp_dst.rows/2 );
|
||||
double angle = -50.0;
|
||||
double scale = 0.6;
|
||||
|
||||
/// Get the rotation matrix with the specifications above
|
||||
rot_mat = getRotationMatrix2D( center, angle, scale );
|
||||
|
||||
/// Rotate the warped image
|
||||
warpAffine( warp_dst, warp_rotate_dst, rot_mat, warp_dst.size() );
|
||||
|
||||
/// Show what you got
|
||||
namedWindow( source_window, WINDOW_AUTOSIZE );
|
||||
imshow( source_window, src );
|
||||
|
||||
namedWindow( warp_window, WINDOW_AUTOSIZE );
|
||||
imshow( warp_window, warp_dst );
|
||||
|
||||
namedWindow( warp_rotate_window, WINDOW_AUTOSIZE );
|
||||
imshow( warp_rotate_window, warp_rotate_dst );
|
||||
|
||||
/// Wait until user exits the program
|
||||
waitKey(0);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@endcode
|
||||
Explanation
-----------

1. Declare some variables we will use, such as the matrices to store our results and 2 arrays of
-# Declare some variables we will use, such as the matrices to store our results and 2 arrays of
   points to store the 2D points that define our Affine Transform.
   @code{.cpp}
   Point2f srcTri[3];
@ -173,15 +105,15 @@ Explanation
   Mat warp_mat( 2, 3, CV_32FC1 );
   Mat src, warp_dst, warp_rotate_dst;
   @endcode
2. Load an image:
-# Load an image:
   @code{.cpp}
   src = imread( argv[1], 1 );
   @endcode
3. Initialize the destination image as having the same size and type as the source:
-# Initialize the destination image as having the same size and type as the source:
   @code{.cpp}
   warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
   @endcode
4. **Affine Transform:** As we explained above, we need two sets of 3 points to derive the
-# **Affine Transform:** As we explained above, we need two sets of 3 points to derive the
   affine transform relation. Take a look:
   @code{.cpp}
   srcTri[0] = Point2f( 0,0 );
@ -196,14 +128,14 @@ Explanation
   approximately the same as the ones depicted in the example figure (in the Theory section). You
   may note that the size and orientation of the triangle defined by the 3 points change.

5. Armed with both sets of points, we calculate the Affine Transform by using the OpenCV function @ref
-# Armed with both sets of points, we calculate the Affine Transform by using the OpenCV function @ref
   cv::getAffineTransform :
   @code{.cpp}
   warp_mat = getAffineTransform( srcTri, dstTri );
   @endcode
   We get as an output a \f$2 \times 3\f$ matrix (in this case **warp_mat**); a small sketch after
   this list shows how the same matrix can also be reused on individual points.

6. We apply the Affine Transform just found to the src image
-# We apply the Affine Transform just found to the src image
   @code{.cpp}
   warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
   @endcode
@ -217,7 +149,7 @@ Explanation
   We just got our first transformed image! We will display it in a bit. Before that, we also
   want to rotate it...

7. **Rotate:** To rotate an image, we need to know two things:
-# **Rotate:** To rotate an image, we need to know two things:

   -# The center with respect to which the image will rotate
   -# The angle to be rotated. In OpenCV a positive angle is counter-clockwise
@ -229,16 +161,16 @@ Explanation
   double angle = -50.0;
   double scale = 0.6;
   @endcode
8. We generate the rotation matrix with the OpenCV function @ref cv::getRotationMatrix2D , which
-# We generate the rotation matrix with the OpenCV function @ref cv::getRotationMatrix2D , which
   returns a \f$2 \times 3\f$ matrix (in this case *rot_mat*)
   @code{.cpp}
   rot_mat = getRotationMatrix2D( center, angle, scale );
   @endcode
9. We now apply the found rotation to the output of our previous Transformation.
-# We now apply the found rotation to the output of our previous Transformation.
   @code{.cpp}
   warpAffine( warp_dst, warp_rotate_dst, rot_mat, warp_dst.size() );
   @endcode
10. Finally, we display our results in two windows plus the original image for good measure:
-# Finally, we display our results in two windows plus the original image for good measure:
    @code{.cpp}
    namedWindow( source_window, WINDOW_AUTOSIZE );
    imshow( source_window, src );
@ -249,23 +181,24 @@ Explanation
    namedWindow( warp_rotate_window, WINDOW_AUTOSIZE );
    imshow( warp_rotate_window, warp_rotate_dst );
    @endcode
11. We just have to wait until the user exits the program
-# We just have to wait until the user exits the program
    @code{.cpp}
    waitKey(0);
    @endcode
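
A quick companion sketch (not part of the sample above, and only a minimal illustration): the
same \f$2 \times 3\f$ matrix that @ref cv::warpAffine applies to a whole image can also map
individual points with @ref cv::transform. The point coordinates below are arbitrary
placeholders; *warp_mat* is assumed to be the matrix computed in step 5.
@code{.cpp}
// Map single points with the 2x3 affine matrix: out = A*in + B for every point
vector<Point2f> in_pts, out_pts;
in_pts.push_back( Point2f( 10, 20 ) );   // arbitrary sample points
in_pts.push_back( Point2f( 30, 40 ) );
transform( in_pts, out_pts, warp_mat );  // out_pts now holds the transformed coordinates
@endcode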

Result
------

1. After compiling the code above, we can give it the path of an image as argument. For instance,
-# After compiling the code above, we can give it the path of an image as argument. For instance,
   for a picture like:

   
   

   after applying the first Affine Transform we obtain:

   
   

   and finally, after applying a negative rotation (remember negative means clockwise) and a scale
   factor, we get:

   
   
Before Width: | Height: | Size: 60 KiB After Width: | Height: | Size: 60 KiB |
@ -16,8 +16,9 @@ In this tutorial you will learn how to:
Theory
------

@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler. In the
previous tutorial we covered two basic Morphology operations:
@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.

In the previous tutorial we covered two basic Morphology operations:

- Erosion
- Dilation.
@ -37,7 +38,7 @@ discuss briefly 05 operations offered by OpenCV:
  at the right is the result after applying the opening transformation. We can observe that the
  small spaces in the corners of the letter tend to disappear.

  
  

### Closing

@ -47,7 +48,7 @@ discuss briefly 05 operations offered by OpenCV:

- Useful to remove small holes (dark regions).

  
  

### Morphological Gradient

@ -57,7 +58,7 @@ discuss briefly 05 operations offered by OpenCV:

- It is useful for finding the outline of an object as can be seen below:

  
  

### Top Hat

@ -65,7 +66,7 @@ discuss briefly 05 operations offered by OpenCV:

\f[dst = tophat( src, element ) = src - open( src, element )\f]

  
  

### Black Hat

@ -73,7 +74,7 @@ discuss briefly 05 operations offered by OpenCV:

\f[dst = blackhat( src, element ) = close( src, element ) - src\f]

  
  

Code
----
@ -150,10 +151,11 @@ void Morphology_Operations( int, void* )
  imshow( window_name, dst );
}
@endcode

Explanation
-----------

1. Let's check the general structure of the program:
-# Let's check the general structure of the program:
   - Load an image
   - Create a window to display results of the Morphological operations
   - Create 03 Trackbars for the user to enter parameters:
@ -185,17 +187,18 @@ Explanation
/*
 * @function Morphology_Operations
 */
@endcode
void Morphology_Operations( int, void\* ) { // Since MORPH_X : 2,3,4,5 and 6 int
operation = morph_operator + 2;
void Morphology_Operations( int, void* )
{
  // Since MORPH_X : 2,3,4,5 and 6
  int operation = morph_operator + 2;

Mat element = getStructuringElement( morph_elem, Size( 2\*morph_size + 1,
2\*morph_size+1 ), Point( morph_size, morph_size ) );

/// Apply the specified morphology operation morphologyEx( src, dst, operation, element
); imshow( window_name, dst );
  Mat element = getStructuringElement( morph_elem, Size( 2*morph_size + 1, 2*morph_size+1 ), Point( morph_size, morph_size ) );

  /// Apply the specified morphology operation
  morphologyEx( src, dst, operation, element );
  imshow( window_name, dst );
}
@endcode

We can observe that the key function to perform the morphology transformations is @ref
cv::morphologyEx . In this example we use four arguments (leaving the rest as defaults):
@ -225,12 +228,10 @@ Results

- After compiling the code above we can execute it giving an image path as an argument. For this
  tutorial we use as input the image: **baboon.png**:

  
  

- And here are two snapshots of the display window. The first picture shows the output after using
  the operator **Opening** with a cross kernel. The second picture (right side) shows the result
  of using a **Blackhat** operator with an ellipse kernel.

  

  

@ -276,6 +276,6 @@ Results

* And here are two snapshots of the display window. The first picture shows the output after using the operator **Opening** with a cross kernel. The second picture (right side) shows the result of using a **Blackhat** operator with an ellipse kernel.

.. image:: images/Morphology_2_Tutorial_Cover.jpg
.. image:: images/Morphology_2_Tutorial_Result.jpg
   :alt: Morphology 2: Result sample
   :align: center
@ -16,8 +16,8 @@ Theory

- Usually we need to convert an image to a size different than its original. For this, there are
  two possible options:
  1. *Upsize* the image (zoom in) or
  2. *Downsize* it (zoom out).
  -# *Upsize* the image (zoom in) or
  -# *Downsize* it (zoom out).
- Although there is a *geometric transformation* function in OpenCV that literally resizes an
  image (@ref cv::resize , which we will show in a future tutorial), in this section we analyze
  first the use of **Image Pyramids**, which are widely applied in a huge range of vision
@ -37,7 +37,7 @@ Theory

- Imagine the pyramid as a set of layers in which the higher the layer, the smaller the size.

  
  

- Every layer is numbered from bottom to top, so layer \f$(i+1)\f$ (denoted as \f$G_{i+1}\f$) is smaller
  than layer \f$i\f$ (\f$G_{i}\f$); a short sketch of moving between layers follows below.
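
A minimal sketch of moving between neighbouring pyramid layers (the input file name is a
placeholder assumption; any image with even width and height works well):
@code{.cpp}
Mat current = imread( "input.png" );                                // placeholder input image
Mat down, up;
pyrDown( current, down, Size( current.cols/2, current.rows/2 ) );   // G_i -> G_{i+1}: blur + halve
pyrUp  ( down,    up,   Size( down.cols*2,    down.rows*2 ) );      // back up: the size returns,
                                                                    // the lost detail does not
@endcode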
@ -162,14 +162,14 @@ Results
  that comes in the *tutorial_code/image* folder. Notice that this image is \f$512 \times 512\f$,
  hence a downsample won't generate any error (\f$512 = 2^{9}\f$). The original image is shown below:

  
  

- First we apply two successive @ref cv::pyrDown operations by pressing 'd'. Our output is:

  
  

- Note that we should have lost some resolution due to the fact that we are diminishing the size
  of the image. This is evident after we apply @ref cv::pyrUp twice (by pressing 'u'). Our output
  is now:

  
  
@ -17,96 +17,14 @@ Code

This tutorial code is shown below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ShapeDescriptors/generalContours_demo1.cpp)
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
@includelineno samples/cpp/tutorial_code/ShapeDescriptors/generalContours_demo1.cpp

using namespace cv;
using namespace std;

Mat src; Mat src_gray;
int thresh = 100;
int max_thresh = 255;
RNG rng(12345);

/// Function header
void thresh_callback(int, void* );

/* @function main */
int main( int argc, char** argv )
{
  /// Load source image and convert it to gray
  src = imread( argv[1], 1 );

  /// Convert image to gray and blur it
  cvtColor( src, src_gray, COLOR_BGR2GRAY );
  blur( src_gray, src_gray, Size(3,3) );

  /// Create Window
  char* source_window = "Source";
  namedWindow( source_window, WINDOW_AUTOSIZE );
  imshow( source_window, src );

  createTrackbar( " Threshold:", "Source", &thresh, max_thresh, thresh_callback );
  thresh_callback( 0, 0 );

  waitKey(0);
  return(0);
}

/* @function thresh_callback */
void thresh_callback(int, void* )
{
  Mat threshold_output;
  vector<vector<Point> > contours;
  vector<Vec4i> hierarchy;

  /// Detect edges using Threshold
  threshold( src_gray, threshold_output, thresh, 255, THRESH_BINARY );
  /// Find contours
  findContours( threshold_output, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0) );

  /// Approximate contours to polygons + get bounding rects and circles
  vector<vector<Point> > contours_poly( contours.size() );
  vector<Rect> boundRect( contours.size() );
  vector<Point2f> center( contours.size() );
  vector<float> radius( contours.size() );

  for( int i = 0; i < contours.size(); i++ )
  {
    approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
    boundRect[i] = boundingRect( Mat(contours_poly[i]) );
    minEnclosingCircle( (Mat)contours_poly[i], center[i], radius[i] );
  }

  /// Draw polygonal contour + bounding rects + circles
  Mat drawing = Mat::zeros( threshold_output.size(), CV_8UC3 );
  for( int i = 0; i< contours.size(); i++ )
  {
    Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
    drawContours( drawing, contours_poly, i, color, 1, 8, vector<Vec4i>(), 0, Point() );
    rectangle( drawing, boundRect[i].tl(), boundRect[i].br(), color, 2, 8, 0 );
    circle( drawing, center[i], (int)radius[i], color, 2, 8, 0 );
  }

  /// Show in a window
  namedWindow( "Contours", WINDOW_AUTOSIZE );
  imshow( "Contours", drawing );
}
@endcode
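
As a small variation on the callback above (a sketch only, reusing the *contours* and *drawing*
variables from the listing), you could keep just the bounding rectangle of the largest contour
instead of drawing one per contour:
@code{.cpp}
// Find the contour with the largest area and draw only its bounding box.
int largest = -1;
double maxArea = 0.0;
for( int i = 0; i < contours.size(); i++ )
{
  double area = contourArea( contours[i] );
  if( area > maxArea ) { maxArea = area; largest = i; }
}
if( largest >= 0 )
{
  Rect box = boundingRect( contours[largest] );
  rectangle( drawing, box.tl(), box.br(), Scalar( 0, 255, 0 ), 2 );
}
@endcode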

Explanation
-----------

Result
------

1. Here it is:

   ---------- ----------
   |BRC_0| |BRC_1|
   ---------- ----------


Here it is:


@ -17,98 +17,14 @@ Code

This tutorial code is shown below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ShapeDescriptors/generalContours_demo2.cpp)
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
@includelineno samples/cpp/tutorial_code/ShapeDescriptors/generalContours_demo2.cpp

using namespace cv;
using namespace std;

Mat src; Mat src_gray;
int thresh = 100;
int max_thresh = 255;
RNG rng(12345);

/// Function header
void thresh_callback(int, void* );

/* @function main */
int main( int argc, char** argv )
{
  /// Load source image and convert it to gray
  src = imread( argv[1], 1 );

  /// Convert image to gray and blur it
  cvtColor( src, src_gray, COLOR_BGR2GRAY );
  blur( src_gray, src_gray, Size(3,3) );

  /// Create Window
  char* source_window = "Source";
  namedWindow( source_window, WINDOW_AUTOSIZE );
  imshow( source_window, src );

  createTrackbar( " Threshold:", "Source", &thresh, max_thresh, thresh_callback );
  thresh_callback( 0, 0 );

  waitKey(0);
  return(0);
}

/* @function thresh_callback */
void thresh_callback(int, void* )
{
  Mat threshold_output;
  vector<vector<Point> > contours;
  vector<Vec4i> hierarchy;

  /// Detect edges using Threshold
  threshold( src_gray, threshold_output, thresh, 255, THRESH_BINARY );
  /// Find contours
  findContours( threshold_output, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0) );

  /// Find the rotated rectangles and ellipses for each contour
  vector<RotatedRect> minRect( contours.size() );
  vector<RotatedRect> minEllipse( contours.size() );

  for( int i = 0; i < contours.size(); i++ )
  {
    minRect[i] = minAreaRect( Mat(contours[i]) );
    if( contours[i].size() > 5 )
      { minEllipse[i] = fitEllipse( Mat(contours[i]) ); }
  }

  /// Draw contours + rotated rects + ellipses
  Mat drawing = Mat::zeros( threshold_output.size(), CV_8UC3 );
  for( int i = 0; i< contours.size(); i++ )
  {
    Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
    // contour
    drawContours( drawing, contours, i, color, 1, 8, vector<Vec4i>(), 0, Point() );
    // ellipse
    ellipse( drawing, minEllipse[i], color, 2, 8 );
    // rotated rectangle
    Point2f rect_points[4]; minRect[i].points( rect_points );
    for( int j = 0; j < 4; j++ )
      line( drawing, rect_points[j], rect_points[(j+1)%4], color, 1, 8 );
  }

  /// Show in a window
  namedWindow( "Contours", WINDOW_AUTOSIZE );
  imshow( "Contours", drawing );
}
@endcode
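
Two details worth noting in this listing: @ref cv::fitEllipse needs at least five points, which
is why the callback guards that call with a size check, and both @ref cv::minAreaRect and @ref
cv::fitEllipse return a RotatedRect. A tiny sketch of reading its fields (assuming *minRect*
from the callback above is non-empty):
@code{.cpp}
// A RotatedRect carries its center, size and rotation angle.
RotatedRect rr = minRect[0];
printf( "center (%.1f, %.1f), size %.1f x %.1f, angle %.1f\n",
        rr.center.x, rr.center.y, rr.size.width, rr.size.height, rr.angle );
@endcode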

Explanation
-----------

Result
------

1. Here it is:

   ---------- ----------
   |BRE_0| |BRE_1|
   ---------- ----------


Here it is:


@ -17,81 +17,14 @@ Code

This tutorial code is shown below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ShapeDescriptors/findContours_demo.cpp)
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
@includelineno samples/cpp/tutorial_code/ShapeDescriptors/findContours_demo.cpp

using namespace cv;
using namespace std;

Mat src; Mat src_gray;
int thresh = 100;
int max_thresh = 255;
RNG rng(12345);

/// Function header
void thresh_callback(int, void* );

/* @function main */
int main( int argc, char** argv )
{
  /// Load source image and convert it to gray
  src = imread( argv[1], 1 );

  /// Convert image to gray and blur it
  cvtColor( src, src_gray, COLOR_BGR2GRAY );
  blur( src_gray, src_gray, Size(3,3) );

  /// Create Window
  char* source_window = "Source";
  namedWindow( source_window, WINDOW_AUTOSIZE );
  imshow( source_window, src );

  createTrackbar( " Canny thresh:", "Source", &thresh, max_thresh, thresh_callback );
  thresh_callback( 0, 0 );

  waitKey(0);
  return(0);
}

/* @function thresh_callback */
void thresh_callback(int, void* )
{
  Mat canny_output;
  vector<vector<Point> > contours;
  vector<Vec4i> hierarchy;

  /// Detect edges using canny
  Canny( src_gray, canny_output, thresh, thresh*2, 3 );
  /// Find contours
  findContours( canny_output, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0) );

  /// Draw contours
  Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
  for( int i = 0; i< contours.size(); i++ )
  {
    Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
    drawContours( drawing, contours, i, color, 2, 8, hierarchy, 0, Point() );
  }

  /// Show in a window
  namedWindow( "Contours", WINDOW_AUTOSIZE );
  imshow( "Contours", drawing );
}
@endcode
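
A short sketch of using the *hierarchy* output (reusing the variables from the callback above):
with RETR_TREE each entry is a Vec4i holding the indices [next, previous, first child, parent],
with -1 where a link is absent, so the outermost contours can be walked via the "next" links:
@code{.cpp}
// Draw only the top-level (outermost) contours by following the "next" links.
if( !contours.empty() )
{
  for( int i = 0; i >= 0; i = hierarchy[i][0] )
  {
    drawContours( drawing, contours, i, Scalar( 0, 0, 255 ), 2, 8, hierarchy, 0, Point() );
  }
}
@endcode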

Explanation
-----------

Result
------

1. Here it is:

   -------------- --------------
   |contour_0| |contour_1|
   -------------- --------------


Here it is:


@ -18,102 +18,15 @@ Code

This tutorial code is shown below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ShapeDescriptors/moments_demo.cpp)
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
@includelineno samples/cpp/tutorial_code/ShapeDescriptors/moments_demo.cpp

using namespace cv;
using namespace std;

Mat src; Mat src_gray;
int thresh = 100;
int max_thresh = 255;
RNG rng(12345);

/// Function header
void thresh_callback(int, void* );

/* @function main */
int main( int argc, char** argv )
{
  /// Load source image and convert it to gray
  src = imread( argv[1], 1 );

  /// Convert image to gray and blur it
  cvtColor( src, src_gray, COLOR_BGR2GRAY );
  blur( src_gray, src_gray, Size(3,3) );

  /// Create Window
  char* source_window = "Source";
  namedWindow( source_window, WINDOW_AUTOSIZE );
  imshow( source_window, src );

  createTrackbar( " Canny thresh:", "Source", &thresh, max_thresh, thresh_callback );
  thresh_callback( 0, 0 );

  waitKey(0);
  return(0);
}

/* @function thresh_callback */
void thresh_callback(int, void* )
{
  Mat canny_output;
  vector<vector<Point> > contours;
  vector<Vec4i> hierarchy;

  /// Detect edges using canny
  Canny( src_gray, canny_output, thresh, thresh*2, 3 );
  /// Find contours
  findContours( canny_output, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0) );

  /// Get the moments
  vector<Moments> mu( contours.size() );
  for( int i = 0; i < contours.size(); i++ )
    { mu[i] = moments( contours[i], false ); }

  /// Get the mass centers:
  vector<Point2f> mc( contours.size() );
  for( int i = 0; i < contours.size(); i++ )
    { mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }

  /// Draw contours
  Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
  for( int i = 0; i< contours.size(); i++ )
  {
    Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
    drawContours( drawing, contours, i, color, 2, 8, hierarchy, 0, Point() );
    circle( drawing, mc[i], 4, color, -1, 8, 0 );
  }

  /// Show in a window
  namedWindow( "Contours", WINDOW_AUTOSIZE );
  imshow( "Contours", drawing );

  /// Calculate the area with the moments 00 and compare with the result of the OpenCV function
  printf("\t Info: Area and Contour Length \n");
  for( int i = 0; i< contours.size(); i++ )
  {
    printf(" * Contour[%d] - Area (M_00) = %.2f - Area OpenCV: %.2f - Length: %.2f \n", i, mu[i].m00, contourArea(contours[i]), arcLength( contours[i], true ) );
    Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
    drawContours( drawing, contours, i, color, 2, 8, hierarchy, 0, Point() );
    circle( drawing, mc[i], 4, color, -1, 8, 0 );
  }
}
@endcode
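
For reference, the mass centers computed in the callback follow the standard moment formulas,
with \f$m_{00}\f$ also being the area that the printf above compares against @ref
cv::contourArea (notation as in the code, where mu[i].m10 is \f$m_{10}\f$ and so on):

\f[\bar{x} = \frac{m_{10}}{m_{00}}, \qquad \bar{y} = \frac{m_{01}}{m_{00}}\f]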

Explanation
-----------

Result
------

1. Here it is:

   --------- --------- ---------
   |MU_0| |MU_1| |MU_2|
   --------- --------- ---------


Here it is:



@ -16,91 +16,14 @@ Code

This tutorial code is shown below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ShapeDescriptors/pointPolygonTest_demo.cpp)
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
@includelineno samples/cpp/tutorial_code/ShapeDescriptors/pointPolygonTest_demo.cpp

using namespace cv;
using namespace std;

/* @function main */
int main( int argc, char** argv )
{
  /// Create an image
  const int r = 100;
  Mat src = Mat::zeros( Size( 4*r, 4*r ), CV_8UC1 );

  /// Create a sequence of points to make a contour:
  vector<Point2f> vert(6);

  vert[0] = Point( 1.5*r, 1.34*r );
  vert[1] = Point( 1*r, 2*r );
  vert[2] = Point( 1.5*r, 2.866*r );
  vert[3] = Point( 2.5*r, 2.866*r );
  vert[4] = Point( 3*r, 2*r );
  vert[5] = Point( 2.5*r, 1.34*r );

  /// Draw it in src
  for( int j = 0; j < 6; j++ )
    { line( src, vert[j], vert[(j+1)%6], Scalar( 255 ), 3, 8 ); }

  /// Get the contours
  vector<vector<Point> > contours; vector<Vec4i> hierarchy;
  Mat src_copy = src.clone();

  findContours( src_copy, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE);

  /// Calculate the distances to the contour
  Mat raw_dist( src.size(), CV_32FC1 );

  for( int j = 0; j < src.rows; j++ )
  { for( int i = 0; i < src.cols; i++ )
      { raw_dist.at<float>(j,i) = pointPolygonTest( contours[0], Point2f(i,j), true ); }
  }

  double minVal; double maxVal;
  minMaxLoc( raw_dist, &minVal, &maxVal, 0, 0, Mat() );
  minVal = abs(minVal); maxVal = abs(maxVal);

  /// Depicting the distances graphically
  Mat drawing = Mat::zeros( src.size(), CV_8UC3 );

  for( int j = 0; j < src.rows; j++ )
  { for( int i = 0; i < src.cols; i++ )
    {
      if( raw_dist.at<float>(j,i) < 0 )
        { drawing.at<Vec3b>(j,i)[0] = 255 - (int) abs(raw_dist.at<float>(j,i))*255/minVal; }
      else if( raw_dist.at<float>(j,i) > 0 )
        { drawing.at<Vec3b>(j,i)[2] = 255 - (int) raw_dist.at<float>(j,i)*255/maxVal; }
      else
        { drawing.at<Vec3b>(j,i)[0] = 255; drawing.at<Vec3b>(j,i)[1] = 255; drawing.at<Vec3b>(j,i)[2] = 255; }
    }
  }

  /// Create Window and show your results
  char* source_window = "Source";
  namedWindow( source_window, WINDOW_AUTOSIZE );
  imshow( source_window, src );
  namedWindow( "Distance", WINDOW_AUTOSIZE );
  imshow( "Distance", drawing );

  waitKey(0);
  return(0);
}
@endcode
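
The demo asks @ref cv::pointPolygonTest for signed distances by passing *true* as the last
argument. With *measureDist=false* the function only classifies the point, which is cheaper. A
minimal sketch (reusing *contours* and *r* from the listing above):
@code{.cpp}
// Classify a single point against the first contour: +1 inside, 0 on the edge, -1 outside.
double side = pointPolygonTest( contours[0], Point2f( 2*r, 2*r ), false );
if( side > 0 )      printf( "inside\n" );
else if( side < 0 ) printf( "outside\n" );
else                printf( "on the contour\n" );
@endcode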

Explanation
-----------

Result
------

1. Here it is:

   ---------- ----------
   |PPT_0| |PPT_1|
   ---------- ----------


Here it is:


@ -12,7 +12,9 @@ Cool Theory
-----------

@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler. What is
Thresholding? -----------------------
@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.

What is Thresholding?
---------------------

- The simplest segmentation method
- Application example: Separate out regions of an image corresponding to objects which we want to
@ -25,7 +27,7 @@ Thresholding? -----------------------
  identify them (i.e. we can assign them a value of \f$0\f$ (black), \f$255\f$ (white) or any value that
  suits your needs).

  
  

### Types of Thresholding

@ -36,7 +38,7 @@ Thresholding? -----------------------
  with pixels with intensity values \f$src(x,y)\f$. The plot below depicts this. The horizontal blue
  line represents the threshold \f$thresh\f$ (fixed).

  
  

#### Threshold Binary

@ -47,7 +49,7 @@ Thresholding? -----------------------
- So, if the intensity of the pixel \f$src(x,y)\f$ is higher than \f$thresh\f$, then the new pixel
  intensity is set to \f$MaxVal\f$. Otherwise, the pixels are set to \f$0\f$.

  
  

#### Threshold Binary, Inverted

@ -58,7 +60,7 @@ Thresholding? -----------------------
- If the intensity of the pixel \f$src(x,y)\f$ is higher than \f$thresh\f$, then the new pixel intensity
  is set to \f$0\f$. Otherwise, it is set to \f$MaxVal\f$.

  
  

#### Truncate

@ -69,7 +71,7 @@ Thresholding? -----------------------
- The maximum intensity value for the pixels is \f$thresh\f$; if \f$src(x,y)\f$ is greater, then its value
  is *truncated*. See figure below:

  
  

#### Threshold to Zero

@ -79,7 +81,7 @@ Thresholding? -----------------------

- If \f$src(x,y)\f$ is lower than \f$thresh\f$, the new pixel value will be set to \f$0\f$.

  
  

#### Threshold to Zero, Inverted

@ -89,97 +91,19 @@ Thresholding? -----------------------

- If \f$src(x,y)\f$ is greater than \f$thresh\f$, the new pixel value will be set to \f$0\f$.

  
  

Code
----

The tutorial code is shown below. You can also download it from
[here](https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/ImgProc/Threshold.cpp)
@code{.cpp}
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
@includelineno samples/cpp/tutorial_code/ImgProc/Threshold.cpp

using namespace cv;

/// Global variables

int threshold_value = 0;
int threshold_type = 3;
int const max_value = 255;
int const max_type = 4;
int const max_BINARY_value = 255;

Mat src, src_gray, dst;
char* window_name = "Threshold Demo";

char* trackbar_type = "Type: \n 0: Binary \n 1: Binary Inverted \n 2: Truncate \n 3: To Zero \n 4: To Zero Inverted";
char* trackbar_value = "Value";

/// Function headers
void Threshold_Demo( int, void* );

/*
 * @function main
 */
int main( int argc, char** argv )
{
  /// Load an image
  src = imread( argv[1], 1 );

  /// Convert the image to Gray
  cvtColor( src, src_gray, COLOR_RGB2GRAY );

  /// Create a window to display results
  namedWindow( window_name, WINDOW_AUTOSIZE );

  /// Create Trackbar to choose type of Threshold
  createTrackbar( trackbar_type,
                  window_name, &threshold_type,
                  max_type, Threshold_Demo );

  createTrackbar( trackbar_value,
                  window_name, &threshold_value,
                  max_value, Threshold_Demo );

  /// Call the function to initialize
  Threshold_Demo( 0, 0 );

  /// Wait until user finishes program
  while(true)
  {
    int c;
    c = waitKey( 20 );
    if( (char)c == 27 )
      { break; }
  }

}

/*
 * @function Threshold_Demo
 */
void Threshold_Demo( int, void* )
{
  /* 0: Binary
     1: Binary Inverted
     2: Threshold Truncated
     3: Threshold to Zero
     4: Threshold to Zero Inverted
   */

  threshold( src_gray, dst, threshold_value, max_BINARY_value, threshold_type );

  imshow( window_name, dst );
}
@endcode
Explanation
-----------

1. Let's check the general structure of the program:
-# Let's check the general structure of the program:
   - Load an image. If it is RGB we convert it to Grayscale. For this, remember that we can use
     the function @ref cv::cvtColor :
     @code{.cpp}
@ -241,23 +165,21 @@ Explanation
Results
-------

1. After compiling this program, run it giving a path to an image as argument. For instance, for an
-# After compiling this program, run it giving a path to an image as argument. For instance, for an
   input image as:

   
   

2. First, we try to threshold our image with a *binary threshold inverted*. We expect that the
-# First, we try to threshold our image with a *binary threshold inverted*. We expect that the
   pixels brighter than the \f$thresh\f$ will turn dark, which is what actually happens, as we can see
   in the snapshot below (notice from the original image, that the doggie's tongue and eyes are
   particularly bright in comparison with the image; this is reflected in the output image).

   
   

3. Now we try with the *threshold to zero*. With this, we expect that the darkest pixels (below the
-# Now we try with the *threshold to zero*. With this, we expect that the darkest pixels (below the
   threshold) will become completely black, whereas the pixels with value greater than the
   threshold will keep their original value. This is verified by the following snapshot of the output
   image:

   

   
@ -107,19 +107,19 @@ Manual OpenCV4Android SDK setup

### Get the OpenCV4Android SDK

1. Go to the [OpenCV download page on
-# Go to the [OpenCV download page on
   SourceForge](http://sourceforge.net/projects/opencvlibrary/files/opencv-android/) and download
   the latest available version. Currently it's [OpenCV-2.4.9-android-sdk.zip](http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.4.9/OpenCV-2.4.9-android-sdk.zip/download).
2. Create a new folder for Android development with OpenCV. For this tutorial we have unpacked
-# Create a new folder for Android development with OpenCV. For this tutorial we have unpacked
   the OpenCV SDK to the `C:\Work\OpenCV4Android\` directory.

@note It is better to use a path without spaces in it. Otherwise you may have problems with ndk-build.

3. Unpack the SDK archive into the chosen directory.
-# Unpack the SDK archive into the chosen directory.

   You can unpack it using any popular archiver (e.g. with 7-Zip_):
   You can unpack it using any popular archiver (e.g. with 7-Zip):

   
   

   On Unix you can use the following command:
   @code{.bash}
@ -128,15 +128,15 @@ Manual OpenCV4Android SDK setup

### Import OpenCV library and samples to the Eclipse

1. Start Eclipse and choose your workspace location.
-# Start Eclipse and choose your workspace location.

   We recommend starting to work with OpenCV for Android from a new clean workspace. A new Eclipse
   workspace can for example be created in the folder where you have unpacked the OpenCV4Android SDK
   package:

   
   

2. Import OpenCV library and samples into workspace.
-# Import OpenCV library and samples into workspace.

   OpenCV library is packed as a ready-for-use [Android Library
   Project](http://developer.android.com/guide/developing/projects/index.html#LibraryProjects). You
@ -146,33 +146,34 @@ Manual OpenCV4Android SDK setup
   already references OpenCV library. Follow the steps below to import OpenCV and samples into the
   workspace:

@note OpenCV samples are indeed **dependent** on OpenCV library project so don't forget to import it to your workspace as well.
   - Right click on the Package Explorer window and choose Import... option from the context
     menu:

     
     

   - In the main panel select General --\> Existing Projects into Workspace and press Next
     button:

     
     

   - In the Select root directory field locate your OpenCV package folder. Eclipse should
     automatically locate OpenCV library and samples:

     
     

   - Click Finish button to complete the import operation.

@note OpenCV samples are indeed **dependent** on OpenCV library project so don't forget to import it to your workspace as well.

   After clicking Finish button Eclipse will load all selected projects into workspace, and you
   have to wait some time while it is building OpenCV samples. Just give Eclipse a minute to
   complete initialization.

   
   

   Once Eclipse completes the build you will have a clean workspace without any build errors:

   
   

@anchor tutorial_O4A_SDK_samples
### Running OpenCV Samples
@ -205,7 +206,7 @@ Well, running samples from Eclipse is very simple:
@note Android Emulator can take several minutes to start. So, please, be patient.

- On the first run Eclipse will ask you about the running mode for your application:

  
  

- Select the Android Application option and click OK button. Eclipse will install and run the
  sample.
@ -214,7 +215,7 @@ Well, running samples from Eclipse is very simple:
  Manager](https://docs.google.com/a/itseez.com/presentation/d/1EO_1kijgBg_BsjNp2ymk-aarg-0K279_1VZRcPplSuk/present#slide=id.p)
  package installed. In this case you will see the following message:

  
  

  To get rid of the message you will need to install OpenCV Manager and the appropriate
  OpenCV binary pack. Simply tap Yes if you have *Google Play Market* installed on your
@ -226,12 +227,15 @@ Well, running samples from Eclipse is very simple:
  @code{.sh}
  <Android SDK path>/platform-tools/adb install <OpenCV4Android SDK path>/apk/OpenCV_2.4.9_Manager_2.18_armv7a-neon.apk
  @endcode

@note armeabi, armv7a-neon, arm7a-neon-android8, mips and x86 stand for platform targets:
- armeabi is for ARM v5 and ARM v6 architectures with Android API 8+,
- armv7a-neon is for NEON-optimized ARM v7 with Android API 9+,
- arm7a-neon-android8 is for NEON-optimized ARM v7 with Android API 8,
- mips is for MIPS architecture with Android API 9+,
- x86 is for Intel x86 CPUs with Android API 9+.

@note
If using a hardware device for testing/debugging, run the following command to learn its CPU
architecture:
@code{.sh}
@ -241,6 +245,7 @@ Well, running samples from Eclipse is very simple:
Click Edit in the context menu of the selected device. In the window which then pops up, find
the CPU field.

@note
You may also see section `Manager Selection` for details.

When done, you will be able to run OpenCV samples on your device/emulator seamlessly.
@ -248,7 +253,7 @@ Well, running samples from Eclipse is very simple:
- Here is the image-manipulations sample, running on top of the stock camera-preview of the
  emulator.

  
  

What's next
-----------
@ -19,16 +19,16 @@ Development for Android significantly differs from development for other platfor
starting programming for Android we recommend you make sure that you are familiar with the following
key topics:

1. [Java](http://en.wikipedia.org/wiki/Java_(programming_language)) programming language that is
-# [Java](http://en.wikipedia.org/wiki/Java_(programming_language)) programming language that is
   the primary development technology for Android OS. Also, you can find [Oracle docs on
   Java](http://docs.oracle.com/javase/) useful.
2. [Java Native Interface (JNI)](http://en.wikipedia.org/wiki/Java_Native_Interface) that is a
-# [Java Native Interface (JNI)](http://en.wikipedia.org/wiki/Java_Native_Interface) that is a
   technology for running native code in the Java virtual machine. Also, you can find [Oracle docs on
   JNI](http://docs.oracle.com/javase/7/docs/technotes/guides/jni/) useful.
3. [Android
-# [Android
   Activity](http://developer.android.com/training/basics/activity-lifecycle/starting.html) and its
   lifecycle, that is an essential Android API class.
4. OpenCV development will certainly require some knowledge of the [Android
-# OpenCV development will certainly require some knowledge of the [Android
   Camera](http://developer.android.com/guide/topics/media/camera.html) specifics.

Quick environment setup for Android development
@ -44,14 +44,15 @@ environment setup automatically and you can skip the rest of the guide.

If you are a beginner in Android development then we also recommend you to start with TADP.

@note *NVIDIA*'s Tegra Android Development Pack includes some special features for *NVIDIA*’s Tegra
platform_ but its use is not limited to *Tegra* devices only.
@note *NVIDIA*'s Tegra Android Development Pack includes some special features for *NVIDIA*’s [Tegra
platform](http://www.nvidia.com/object/tegra-3-processor.html)
but its use is not limited to *Tegra* devices only.
- You need at least *1.6 Gb* free
  disk space for the install.

- TADP will download Android SDK platforms and Android NDK from Google's server, so an Internet
  connection is required for the installation.
- TADP may ask you to flash your development kit at the end of the installation process. Just skip
  this step if you have no Tegra Development Kit_.
  this step if you have no [Tegra Development Kit](http://developer.nvidia.com/mobile/tegra-hardware-sales-inquiries).
- (UNIX) TADP will ask you for *root* in the middle of installation, so you need to be a member of
  the *sudo* group.

@ -62,7 +63,7 @@ Manual environment setup for Android development

You need the following software to be installed in order to develop for Android in Java:

1. **Sun JDK 6** (Sun JDK 7 is also possible)
-# **Sun JDK 6** (Sun JDK 7 is also possible)

   Visit [Java SE Downloads page](http://www.oracle.com/technetwork/java/javase/downloads/) and
   download an installer for your OS.
@ -75,7 +76,8 @@ You need the following software to be installed in order to develop for Android
   @code{.bash}
   sudo update-java-alternatives --set java-6-sun
   @endcode
1. **Android SDK**

-# **Android SDK**

   Get the latest Android SDK from <http://developer.android.com/sdk/index.html>

@ -94,7 +96,8 @@ up Android development environment the first time!
   @code{.bash}
   sudo yum install libXtst.i386
   @endcode
1. **Android SDK components**

-# **Android SDK components**

   You need the following SDK components to be installed:

@ -110,13 +113,13 @@ up Android development environment the first time!
   successful compilation the **target** platform should be set to Android 3.0 (API 11) or
   higher. It will not prevent them from running on Android 2.2.

   
   

   See [Adding Platforms and
   Packages](http://developer.android.com/sdk/installing/adding-packages.html) for help with
   installing/updating SDK components.

2. **Eclipse IDE**
-# **Eclipse IDE**

   Check the [Android SDK System Requirements](http://developer.android.com/sdk/requirements.html)
   document for a list of Eclipse versions that are compatible with the Android SDK. For OpenCV
@ -126,7 +129,7 @@ up Android development environment the first time!
   If you have no Eclipse installed, you can get it from the [official
   site](http://www.eclipse.org/downloads/).

3. **ADT plugin for Eclipse**
-# **ADT plugin for Eclipse**

   These instructions are copied from [Android Developers
   site](http://developer.android.com/sdk/installing/installing-adt.html), check it out in case of
@ -135,33 +138,34 @@ up Android development environment the first time!
   Assuming that you have Eclipse IDE installed, as described above, follow these steps to download
   and install the ADT plugin:

   1. Start Eclipse, then select Help --\> Install New Software...
   2. Click Add (in the top-right corner).
   3. In the Add Repository dialog that appears, enter "ADT Plugin" for the Name and the following
      URL for the Location:
   -# Start Eclipse, then select Help --\> Install New Software...
   -# Click Add (in the top-right corner).
   -# In the Add Repository dialog that appears, enter "ADT Plugin" for the Name and the following
      URL for the Location: <https://dl-ssl.google.com/android/eclipse/>

      <https://dl-ssl.google.com/android/eclipse/>

   4. Click OK
   -# Click OK

@note If you have trouble acquiring the plugin, try using "http" in the Location URL, instead of "https" (https is preferred for security reasons).
   1. In the Available Software dialog, select the checkbox next to Developer Tools and click
      Next.
   2. In the next window, you'll see a list of the tools to be downloaded. Click Next.

   -# In the Available Software dialog, select the checkbox next to Developer Tools and click Next.

   -# In the next window, you'll see a list of the tools to be downloaded. Click Next.

@note If you also plan to develop native C++ code with Android NDK don't forget to enable NDK Plugins installations as well.
   

   1. Read and accept the license agreements, then click Finish.
   

   -# Read and accept the license agreements, then click Finish.

@note If you get a security warning saying that the authenticity or validity of the software can't be established, click OK.
   1. When the installation completes, restart Eclipse.

   -# When the installation completes, restart Eclipse.
### Native development in C++
|
||||
|
||||
You need the following software to be installed in order to develop for Android in C++:
|
||||
|
||||
1. **Android NDK**
|
||||
-# **Android NDK**
|
||||
|
||||
To compile C++ code for Android platform you need Android Native Development Kit (*NDK*).
|
||||
|
||||
@ -173,8 +177,9 @@ You need the following software to be installed in order to develop for Android
|
||||
@note Before start you can read official Android NDK documentation which is in the Android NDK
|
||||
archive, in the folder `docs/`. The main article about using Android NDK build system is in the
|
||||
`ANDROID-MK.html` file. Some additional information you can find in the `APPLICATION-MK.html`,
|
||||
`NDK-BUILD.html` files, and `CPU-ARM-NEON.html`, `CPLUSPLUS-SUPPORT.html`, `PREBUILTS.html`. \#.
|
||||
**CDT plugin for Eclipse**
|
||||
`NDK-BUILD.html` files, and `CPU-ARM-NEON.html`, `CPLUSPLUS-SUPPORT.html`, `PREBUILTS.html`.
|
||||
|
||||
-# **CDT plugin for Eclipse**
|
||||
|
||||
If you selected for installation the NDK plugins component of Eclipse ADT plugin (see the picture
|
||||
above) your Eclipse IDE should already have CDT plugin (that means C/C++ Development Tooling).
|
||||
@ -244,6 +249,7 @@ APP_STL := gnustl_static
|
||||
APP_CPPFLAGS := -frtti -fexceptions
|
||||
APP_ABI := all
|
||||
@endcode
|
||||
|
||||
@note We recommend setting APP_ABI := all for all targets. If you want to specify the target
|
||||
explicitly, use armeabi for ARMv5/ARMv6, armeabi-v7a for ARMv7, x86 for Intel Atom or mips for MIPS.
|
||||
|
||||
@ -260,18 +266,18 @@ We strongly reccomend using cmd.exe (standard Windows console) instead of Cygwin
|
||||
not really supported and we are unlikely to help you in case you encounter some problems with
|
||||
it. So, use it only if you're capable of handling the consequences yourself.
|
||||
|
||||
1. Open console and go to the root folder of an Android application
|
||||
-# Open console and go to the root folder of an Android application
|
||||
@code{.bash}
|
||||
cd <root folder of the project>/
|
||||
@endcode
|
||||
2. Run the following command
|
||||
-# Run the following command
|
||||
@code{.bash}
|
||||
<path_where_NDK_is_placed>/ndk-build
|
||||
@endcode
|
||||
@note On Windows we recommend to use ndk-build.cmd in standard Windows console (cmd.exe) rather than the similar bash script in Cygwin shell.
|
||||

|
||||

|
||||
|
||||
1. After executing this command the C++ part of the source code is compiled.
|
||||
-# After executing this command the C++ part of the source code is compiled.
|
||||
|
||||
After that the Java part of the application can be (re)compiled (using either *Eclipse* or *Ant*
|
||||
build tool).
|
||||
@ -299,8 +305,8 @@ Builder.
|
||||
OpenCV for Android package since version 2.4.2 contains sample projects
|
||||
pre-configured CDT Builders. For your own projects follow the steps below.
|
||||
|
||||
1. Define the NDKROOT environment variable containing the path to Android NDK in your system (e.g.
|
||||
"X:\\\\Apps\\\\android-ndk-r8" or "/opt/android-ndk-r8").
|
||||
-# Define the NDKROOT environment variable containing the path to Android NDK in your system (e.g.
|
||||
"X:\\Apps\\android-ndk-r8" or "/opt/android-ndk-r8").
|
||||
|
||||
**On Windows** an environment variable can be set via
|
||||
My Computer -\> Properties -\> Advanced -\> Environment variables. On Windows 7 it's also
|
||||
@ -316,64 +322,61 @@ Window -\> Preferences -\> C/C++ -\> Build -\> Environment, press the Add... but
|
||||
name to NDKROOT and value to local Android NDK path. \#. After that you need to **restart Eclipse**
|
||||
to apply the changes.
|
||||
|
||||
1. Open Eclipse and load the Android app project to configure.
|
||||
2. Add C/C++ Nature to the project via Eclipse menu
|
||||
-# Open Eclipse and load the Android app project to configure.
|
||||
|
||||
-# Add C/C++ Nature to the project via Eclipse menu
|
||||
New -\> Other -\> C/C++ -\> Convert to a C/C++ Project.
|
||||
|
||||

|
||||
|
||||

|
||||
And:
|
||||

|
||||
|
||||

|
||||
|
||||
3. Select the project(s) to convert. Specify "Project type" = Makefile project, "Toolchains" =
|
||||
-# Select the project(s) to convert. Specify "Project type" = Makefile project, "Toolchains" =
|
||||
Other Toolchain.
|
||||

|
||||
|
||||

|
||||
|
||||
4. Open Project Properties -\> C/C++ Build, uncheck Use default build command, replace "Build
|
||||
-# Open Project Properties -\> C/C++ Build, uncheck Use default build command, replace "Build
|
||||
command" text from "make" to
|
||||
|
||||
"${NDKROOT}/ndk-build.cmd" on Windows,
|
||||
|
||||
"${NDKROOT}/ndk-build" on Linux and MacOS.
|
||||
|
||||

|
||||

|
||||
|
||||
5. Go to Behaviour tab and change "Workbench build type" section like shown below:
|
||||
-# Go to Behaviour tab and change "Workbench build type" section like shown below:
|
||||
|
||||

|
||||

|
||||
|
||||
6. Press OK and make sure the ndk-build is successfully invoked when building the project.
|
||||
-# Press OK and make sure the ndk-build is successfully invoked when building the project.
|
||||
|
||||

|
||||

|
||||
|
||||
7. If you open your C++ source file in Eclipse editor, you'll see syntax error notifications. They
|
||||
-# If you open your C++ source file in Eclipse editor, you'll see syntax error notifications. They
|
||||
are not real errors, but additional CDT configuring is required.
|
||||
|
||||

|
||||

|
||||
|
||||
8. Open Project Properties -\> C/C++ General -\> Paths and Symbols and add the following
|
||||
-# Open Project Properties -\> C/C++ General -\> Paths and Symbols and add the following
|
||||
**Include** paths for **C++**:
|
||||
|
||||
@code
|
||||
# for NDK r8 and prior:
|
||||
\f${NDKROOT}/platforms/android-9/arch-arm/usr/include
|
||||
\f${NDKROOT}/sources/cxx-stl/gnu-libstdc++/include
|
||||
\f${NDKROOT}/sources/cxx-stl/gnu-libstdc++/libs/armeabi-v7a/include
|
||||
\f${ProjDirPath}/../../sdk/native/jni/include
|
||||
${NDKROOT}/platforms/android-9/arch-arm/usr/include
|
||||
${NDKROOT}/sources/cxx-stl/gnu-libstdc++/include
|
||||
${NDKROOT}/sources/cxx-stl/gnu-libstdc++/libs/armeabi-v7a/include
|
||||
${ProjDirPath}/../../sdk/native/jni/include
|
||||
|
||||
# for NDK r8b and later:
|
||||
\f${NDKROOT}/platforms/android-9/arch-arm/usr/include
|
||||
\f${NDKROOT}/sources/cxx-stl/gnu-libstdc++/4.6/include
|
||||
\f${NDKROOT}/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi-v7a/include
|
||||
\f${ProjDirPath}/../../sdk/native/jni/include
|
||||
|
||||
${NDKROOT}/platforms/android-9/arch-arm/usr/include
|
||||
${NDKROOT}/sources/cxx-stl/gnu-libstdc++/4.6/include
|
||||
${NDKROOT}/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi-v7a/include
|
||||
${ProjDirPath}/../../sdk/native/jni/include
|
||||
@endcode
|
||||
The last path should be changed to the correct absolute or relative path to OpenCV4Android SDK
|
||||
location.
|
||||
|
||||
This should clear the syntax error notifications in Eclipse C++ editor.
|
||||
|
||||

|
||||

|
||||
|
||||
Debugging and Testing
|
||||
---------------------
|
||||
@ -386,18 +389,18 @@ hardware device for testing and debugging an Android project.
AVD (*Android Virtual Device*) is probably not the most convenient way to test an OpenCV-dependent
application, but surely the most uncomplicated one to configure.

-# Assuming you already have *Android SDK* and *Eclipse IDE* installed, in Eclipse go
Window -\> AVD Manager.
-# Press the New button in the AVD Manager window.
-# The Create new Android Virtual Device window will let you select some properties for your new
device, like target API level, size of SD-card and others.



-# When you click the Create AVD button, your new AVD will be available in AVD Manager.
-# Press Start to launch the device. Be aware that any AVD (a.k.a. Emulator) is usually much slower
than a hardware Android device, so it may take up to several minutes to start.
-# Go Run -\> Run/Debug in Eclipse IDE to run your application in regular or debugging mode.
Device Chooser will let you choose among the running devices or to start a new one.
### Hardware Device

@ -412,86 +415,77 @@ instructions](http://developer.android.com/tools/device.html) for more informati

#### Windows host computer

-# Enable USB debugging on the Android device (via Settings menu).
-# Attach the Android device to your PC with a USB cable.
-# Go to Start Menu and **right-click** on Computer. Select Manage in the context menu. You may be
asked for Administrative permissions.
-# Select Device Manager in the left pane and find an unknown device in the list. You may try
unplugging it and then plugging it back in to check whether it's exactly your equipment that
appears in the list.


-# Try your luck installing Google USB drivers without any modifications: **right-click** on the
unknown device, select Properties menu item --\> Details tab --\> Update Driver button.


-# Select Browse computer for driver software.


-# Specify the path to the `<Android SDK folder>/extras/google/usb_driver/` folder.


-# If you get a prompt to install unverified drivers and a report about success - you've finished
the USB driver installation.




-# Otherwise (getting the failure like shown below) follow the next steps.


-# Again **right-click** on the unknown device, select Properties --\> Details --\> Hardware Ids
and copy the line like `USB\VID_XXXX&PID_XXXX&MI_XX`.



-# Now open file `<Android SDK folder>/extras/google/usb_driver/android_winusb.inf`. Select either
the Google.NTx86 or Google.NTamd64 section depending on your host system architecture.


-# There should be a record like the existing ones for your device, and you need to add one manually.


-# Save the `android_winusb.inf` file and try to install the USB driver again.



-# This time installation should go successfully.



-# And the unknown device is now recognized as an Android phone.


-# Successful device USB connection can be verified in console via the adb devices command, as
shown below.



-# Now, in Eclipse go Run -\> Run/Debug to run your application in regular or debugging mode.
Device Chooser will let you choose among the devices.
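For reference, a successful check looks roughly like this (a sketch - the serial number shown is
made up and will differ for your device):
@code{.bash}
adb devices
# List of devices attached
# 0123456789ABCDEF    device
@endcode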
#### Linux host computer

@ -507,7 +501,7 @@ SUBSYSTEM=="usb", ATTR{idVendor}=="1004", MODE="0666", GROUP="plugdev"
Then restart your adb server (even better, restart the system), plug in your Android device and
execute the adb devices command. You will see the list of attached devices:



#### Mac OS host computer
@ -38,17 +38,17 @@ OpenCV. You can get more information here: `Android OpenCV Manager` and in these

Using async initialization is the **recommended** way for application development. It uses the OpenCV
Manager to access OpenCV libraries externally installed in the target system.

-# Add the OpenCV library project to your workspace. Use menu
File -\> Import -\> Existing project in your workspace.

Press the Browse button and locate OpenCV4Android SDK (`OpenCV-2.4.9-android-sdk/sdk`).



-# In the application project add a reference to the OpenCV Java SDK in
Project -\> Properties -\> Android -\> Library -\> Add select OpenCV Library - 2.4.9.


In most cases OpenCV Manager may be installed automatically from Google Play. For cases when
Google Play is not available, e.g. on an emulator or a developer board, you can install it manually using

@ -101,18 +101,18 @@ designed mostly for development purposes. This approach is deprecated for the pr
release package is recommended to communicate with OpenCV Manager via the async initialization
described above.

-# Add the OpenCV library project to your workspace the same way as for the async initialization
above. Use menu File -\> Import -\> Existing project in your workspace, press the Browse button and
select the OpenCV SDK path (`OpenCV-2.4.9-android-sdk/sdk`).



-# In the application project add a reference to the OpenCV4Android SDK in
Project -\> Properties -\> Android -\> Library -\> Add select OpenCV Library - 2.4.9;



-# If your application project **doesn't have a JNI part**, just copy the corresponding OpenCV
native libs from `<OpenCV-2.4.9-android-sdk>/sdk/native/libs/<target_arch>` to your project
directory to folder `libs/<target_arch>`.
@ -126,7 +126,7 @@ described above.
@endcode
The result should look like the following:
@code{.make}
include $(CLEAR_VARS)

# OpenCV
OPENCV_CAMERA_MODULES:=on
@ -139,7 +139,7 @@ described above.
Eclipse will automatically include all the libraries from the `libs` folder to the application
package (APK).
-# The last step of enabling OpenCV in your application is Java initialization code before calling
OpenCV API. It can be done, for example, in the static section of the Activity class:
@code{.java}
static {
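    // (The hunk below elides the rest of this snippet; what follows is a minimal
    // sketch assuming the standard OpenCVLoader API of the OpenCV 2.4 Android SDK
    // and android.util.Log - adapt the tag to your own activity.)
    if (!OpenCVLoader.initDebug()) {
        // Handle initialization error: the native OpenCV libs were not found
        Log.e("MyActivity", "Static OpenCV initialization failed");
    }
}
@endcode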
@ -166,23 +166,23 @@ described above.
To build your own Android application, using OpenCV as a native part, the following steps should be
taken:

-# You can use an environment variable to specify the location of the OpenCV package or just hardcode
an absolute or relative path in the `jni/Android.mk` of your projects.
-# The file `jni/Android.mk` should be written for the current application using the common rules
for this file.

For detailed information see the Android NDK documentation from the Android NDK archive, in the
file `<path_where_NDK_is_placed>/docs/ANDROID-MK.html`.

-# The following line:
@code{.make}
include C:\Work\OpenCV4Android\OpenCV-2.4.9-android-sdk\sdk\native\jni\OpenCV.mk
@endcode
should be inserted into the `jni/Android.mk` file **after** this line:
@code{.make}
include $(CLEAR_VARS)
@endcode
-# Several variables can be used to customize OpenCV stuff, but you **don't need** to use them when
your application uses the async initialization via the OpenCV Manager API. See the sketch after
this item for an example.

@note These variables should be set **before** the "include .../OpenCV.mk" line:
@ -202,7 +202,7 @@ taken:
Perform static linking with OpenCV. By default a dynamic link is used and the project JNI lib
depends on libopencv_java.so.
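A sketch of a customized `jni/Android.mk` fragment using such variables (the values are
illustrative, not a recommendation; the variable names follow the OpenCV.mk conventions referred
to above):
@code{.make}
include $(CLEAR_VARS)

# Customization variables must come before including OpenCV.mk
OPENCV_CAMERA_MODULES:=off
OPENCV_INSTALL_MODULES:=on
OPENCV_LIB_TYPE:=STATIC

include C:\Work\OpenCV4Android\OpenCV-2.4.9-android-sdk\sdk\native\jni\OpenCV.mk
@endcode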
-# The file `Application.mk` should exist and should contain lines:
@code{.make}
APP_STL := gnustl_static
APP_CPPFLAGS := -frtti -fexceptions
@ -221,7 +221,7 @@ taken:
APP_PLATFORM := android-9
@endcode
-# Either use @ref tutorial_android_dev_intro_ndk "manual" ndk-build invocation or
@ref tutorial_android_dev_intro_eclipse "setup Eclipse CDT Builder" to build the native JNI lib
before (re)building the Java part and creating an APK.
@ -232,18 +232,18 @@ Hello OpenCV Sample
Here are basic steps to guide you through the process of creating a simple OpenCV-centric
application. It will be capable of accessing camera output, processing it and displaying the result.

-# Open Eclipse IDE, create a new clean workspace, create a new Android project
File --\> New --\> Android Project
-# Set name, target, package and minSDKVersion accordingly. The minimal SDK version for build with
OpenCV4Android SDK is 11. Minimal device API Level (for application manifest) is 8.
-# Allow Eclipse to create a default activity. Let's name the activity HelloOpenCvActivity.
-# Choose Blank Activity with full screen layout. Let's name the layout HelloOpenCvLayout.
-# Import the OpenCV library project to your workspace.
-# Reference the OpenCV library within your project properties.



-# Edit your layout file as an xml file and paste the following layout there:
@code{.xml}
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
@ -261,7 +261,7 @@ application. It will be capable of accessing camera output, processing it and di

</LinearLayout>
@endcode
-# Add the following permissions to the `AndroidManifest.xml` file:
@code{.xml}
</application>

@ -272,14 +272,14 @@ application. It will be capable of accessing camera output, processing it and di
<uses-feature android:name="android.hardware.camera.front" android:required="false"/>
<uses-feature android:name="android.hardware.camera.front.autofocus" android:required="false"/>
@endcode
-# Set the application theme in AndroidManifest.xml to hide the title and system buttons.
@code{.xml}
<application
    android:icon="@drawable/icon"
    android:label="@string/app_name"
    android:theme="@android:style/Theme.NoTitleBar.Fullscreen" >
@endcode
-# Add OpenCV library initialization to your activity. Fix errors by adding the required imports.
@code{.java}
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
    @Override
@ -305,7 +305,7 @@ application. It will be capable of accessing camera output, processing it and di
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_6, this, mLoaderCallback);
    }
@endcode
-# Define that your activity implements the CvCameraViewListener2 interface and fix activity-related
errors by defining the missing methods. For this activity define onCreate, onDestroy and onPause and
implement them according to the code snippet below (see also the sketch after this list). Fix errors
by adding the required imports.
@code{.java}
@ -346,7 +346,7 @@ application. It will be capable of accessing camera output, processing it and di
    return inputFrame.rgba();
}
@endcode
-# Run your application on device or emulator.
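Since the hunks above elide parts of the snippets, here is a compact sketch of the pieces this
sample relies on. The layout and view ids are illustrative assumptions matching the names chosen in
the steps above; the OpenCV classes are from the 2.4 Android SDK, and `mLoaderCallback` with the
`onResume()`/`initAsync()` wiring is the one shown earlier:
@code{.java}
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.opencv.core.Mat;
import android.app.Activity;
import android.os.Bundle;
import android.view.WindowManager;

public class HelloOpenCvActivity extends Activity implements CvCameraViewListener2 {
    private CameraBridgeViewBase mOpenCvCameraView;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        setContentView(R.layout.HelloOpenCvLayout);
        mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.HelloOpenCvView);
        mOpenCvCameraView.setCvCameraViewListener(this);
    }

    @Override
    public void onPause() {
        super.onPause();
        if (mOpenCvCameraView != null)
            mOpenCvCameraView.disableView(); // stop camera delivery while in background
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        if (mOpenCvCameraView != null)
            mOpenCvCameraView.disableView();
    }

    public void onCameraViewStarted(int width, int height) { }

    public void onCameraViewStopped() { }

    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        return inputFrame.rgba(); // pass the camera frame through unchanged
    }
}
@endcode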
Let's discuss the most important steps. Every Android application with UI must implement Activity
and View. In the first steps we create a blank activity and a default view layout. The simplest
@ -32,9 +32,11 @@ tutorial](http://docs.opencv.org/2.4.4-beta/doc/tutorials/introduction/desktop_j

If you are in a hurry, here is a minimal quick start guide to install OpenCV on Mac OS X:

@note
I'm assuming you already installed [xcode](https://developer.apple.com/xcode/),
[jdk](http://www.oracle.com/technetwork/java/javase/downloads/index.html) and
[Cmake](http://www.cmake.org/cmake/resources/software.html).

@code{.bash}
cd ~/
mkdir opt
@ -60,9 +62,9 @@ cycle of your CLJ projects.
The available [installation guide](https://github.com/technomancy/leiningen#installation) is very
easy to follow:

-# [Download the script](https://raw.github.com/technomancy/leiningen/stable/bin/lein)
-# Place it on your $PATH (cf. \~/bin is a good choice if it is on your path.)
-# Set the script to be executable (i.e. chmod 755 \~/bin/lein).

If you work on Windows, follow [this instruction](https://github.com/technomancy/leiningen#windows)
@ -171,9 +173,9 @@ Your directories layout should look like the following:
tree
.
|__ native
|   |__ macosx
|       |__ x86_64
|           |__ libopencv_java247.dylib
|
|__ opencv-247.jar
|__ opencv-native-247.jar
@ -215,13 +217,13 @@ simple-sample/
|__ LICENSE
|__ README.md
|__ doc
|   |__ intro.md
|
|__ project.clj
|__ resources
|__ src
|   |__ simple_sample
|       |__ core.clj
|__ test
    |__ simple_sample
        |__ core_test.clj
@ -299,7 +301,9 @@ nil
Then you can start interacting with OpenCV by just referencing the fully qualified names of its
classes.

@note
[Here](http://docs.opencv.org/java/) you can find the full OpenCV Java API.

@code{.clojure}
user=> (org.opencv.core.Point. 0 0)
#<Point {0.0, 0.0}>
@ -409,6 +413,7 @@ class SimpleSample {

}
@endcode

### Add injections to the project

Before we start coding, we'd like to eliminate the boring need of interactively loading the native
@ -454,6 +459,7 @@ We're going to mimic almost verbatim the original OpenCV java tutorial to:
- change the value of every element of the second row to 1
- change the value of every element of the 6th column to 5
- print the content of the obtained matrix

@code{.clojure}
user=> (def m (Mat. 5 10 CvType/CV_8UC1 (Scalar. 0 0)))
#'user/m
@ -473,6 +479,7 @@ user=> (println (.dump m))
0, 0, 0, 0, 0, 5, 0, 0, 0, 0]
nil
@endcode

If you are accustomed to a functional language, all those abused and mutating nouns are going to
irritate your preference for verbs. Even if the CLJ interop syntax is very handy and complete, there
is still an impedance mismatch between any OOP language and any FP language (Scala being a mixed
@ -483,6 +490,7 @@ To exit the REPL type (exit), ctr-D or (quit) at the REPL prompt.
user=> (exit)
Bye for now!
@endcode

### Interactively load and blur an image

In the next sample you will learn how to interactively load and blur an image from the REPL by
@ -500,7 +508,7 @@ main argument to both the GaussianBlur and the imwrite methods.
First we want to add an image file to a newly created directory for storing the static resources of
the project.


@code{.bash}
mkdir -p resources/images
cp ~/opt/opencv/doc/tutorials/introduction/desktop_java/images/lena.png resources/images/
@ -554,7 +562,7 @@ Bye for now!
@endcode
Following is the new blurred image of Lena.



Next Steps
----------
@ -577,4 +585,3 @@ the gap.
Copyright © 2013 Giacomo (Mimmo) Cosenza aka Magomimmo

Distributed under the BSD 3-clause License, the same as OpenCV.
@ -49,10 +49,11 @@ In Linux it can be achieved with the following command in Terminal:
cd ~/<my_working_directory>
git clone https://github.com/Itseez/opencv.git
@endcode

Building OpenCV
---------------

-# Create a build directory, make it current and run the following command:
@code{.bash}
cmake [<some optional parameters>] -DCMAKE_TOOLCHAIN_FILE=<path to the OpenCV source directory>/platforms/linux/arm-gnueabi.toolchain.cmake <path to the OpenCV source directory>
@endcode
@ -69,10 +70,12 @@ Building OpenCV

cmake -DCMAKE_TOOLCHAIN_FILE=../arm-gnueabi.toolchain.cmake ../../..
@endcode
-# Run make in the build (\<cmake_binary_dir\>) directory:
@code{.bash}
make
@endcode
@note
Optionally you can strip symbol info from the created library via the install/strip make target.
This option produces a smaller binary (roughly twice smaller) but makes further debugging harder.
@ -86,5 +89,4 @@ extensions.

TBB is supported on multi-core ARM SoCs also. Add -DWITH_TBB=ON and -DBUILD_TBB=ON to enable it, as
in the sketch below. CMake scripts download the TBB sources from the official project site
<http://threadingbuildingblocks.org/> and build them.
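A hedged example of such a configuration run, reusing the toolchain invocation shown earlier with
the two TBB flags added:
@code{.bash}
cmake -DCMAKE_TOOLCHAIN_FILE=../arm-gnueabi.toolchain.cmake -DWITH_TBB=ON -DBUILD_TBB=ON ../../..
@endcode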
@ -33,7 +33,9 @@ from the [OpenCV SourceForge repository](http://sourceforge.net/projects/opencvl

@note Windows users can find the prebuilt files needed for Java development in the
`opencv/build/java/` folder inside the package. For other OSes it's required to build OpenCV from
sources.

Another option to get OpenCV sources is to clone [OpenCV git
repository](https://github.com/Itseez/opencv/). In order to build OpenCV with Java bindings you need
JDK (Java Development Kit) (we recommend [Oracle/Sun JDK 6 or
7](http://www.oracle.com/technetwork/java/javase/downloads/)), [Apache Ant](http://ant.apache.org/)
@ -67,7 +69,7 @@ Examine the output of CMake and ensure java is one of the
modules "To be built". If not, it's likely you're missing a dependency. You should troubleshoot by
looking through the CMake output for any Java-related tools that aren't found and installing them.



@note If CMake can't find Java in your system, set the JAVA_HOME environment variable with the path to the installed JDK before running it. E.g.:
@code{.bash}
@ -141,7 +143,7 @@ folder.
The command should initiate [re]building and running the sample. You should see on the
screen something like this:



SBT project for Java and Scala
------------------------------
@ -203,7 +205,7 @@ eclipse # Running "eclipse" from within the sbt console
@endcode
You should see something like this:



You can now import the SBT project to Eclipse using Import ... -\> Existing projects into workspace.
Whether you actually do this is optional for the guide; we'll be using SBT to build the project, so
@ -225,7 +227,7 @@ sbt run
@endcode
You should see something like this:



### Running SBT samples

@ -241,7 +243,7 @@ sbt eclipse
@endcode
Next, create the directory `src/main/resources` and download this Lena image into it:



Make sure it's called `"lena.png"`. Items in the resources directory are available to the Java
application at runtime, as the sketch below shows.
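A minimal illustration of reading such a bundled resource from Java (assuming the OpenCV 2.4 Java
API is on the classpath; the class name is hypothetical):
@code{.java}
import java.io.File;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;

class ResourceDemo {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) throws Exception {
        // Resolve the bundled resource to a filesystem path and load it
        String path = new File(ResourceDemo.class.getResource("/lena.png").toURI()).getAbsolutePath();
        Mat image = Highgui.imread(path);
        System.out.println("Loaded image: " + image.width() + "x" + image.height());
    }
}
@endcode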
@ -315,11 +317,11 @@ sbt run
@endcode
You should see something like this:



It should also write the following image to `faceDetection.png`:



You're done! Now you have a sample Java application working with OpenCV, so you can start working
on your own. We wish you good luck and many years of joyful life!
@ -21,6 +21,8 @@ Download the source code from
Explanation
-----------

@dontinclude cpp/tutorial_code/introduction/display_image/display_image.cpp

In OpenCV 2 we have multiple modules. Each one takes care of a different area or approach towards
image processing. You could already observe this in the structure of the user guide of these
tutorials itself. Before you use any of them you first need to include the header files where the
@ -31,36 +33,25 @@ You'll almost always end up using the:
- *core* section, as here are defined the basic building blocks of the library
- *highgui* module, as this contains the functions for input and output operations

@until <string>

We also include the *iostream* to facilitate console line output and input. To avoid data structure
and function name conflicts with other libraries, OpenCV has its own namespace: *cv*. To avoid the
need to prepend the *cv::* keyword to each of these, you can import the namespace for the whole file
by using the lines:

@line using namespace cv

This is true for the STL library too (used for console I/O). Now, let's analyze the *main* function.
We start by making sure that we acquire a valid image name argument from the command line. Otherwise
take a picture by default: "HappyFish.jpg".

@skip string
@until }

Then create a *Mat* object that will store the data of the loaded image.

@skipline Mat
Now we call the @ref cv::imread function which loads the image name specified by the first argument
(*argv[1]*). The second argument specifies the format in which we want the image. This may be:
@ -69,10 +60,7 @@ Now we call the @ref cv::imread function which loads the image name specified by
- IMREAD_GRAYSCALE ( 0) loads the image as an intensity one
- IMREAD_COLOR (\>0) loads the image in the RGB format

@skipline image = imread

@note
OpenCV offers support for the image formats Windows bitmap (bmp), portable image formats (pbm,
@ -94,30 +82,18 @@ the image it contains from a size point of view. It may be:
would like the image to keep its aspect ratio (*WINDOW_KEEPRATIO*) or not
(*WINDOW_FREERATIO*).

@skipline namedWindow

Finally, to update the content of the OpenCV window with a new image use the @ref cv::imshow
function. Specify the OpenCV window name to update and the image to use during this operation:

@skipline imshow

Because we want our window to be displayed until the user presses a key (otherwise the program would
end far too quickly), we use the @ref cv::waitKey function whose only parameter is just how long it
should wait for user input (measured in milliseconds). Zero means to wait forever. The whole program
is sketched below.

@skipline waitKey
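For convenience, here is a self-contained sketch of the program these snippets come from, written
from the steps above; the official sample in
`cpp/tutorial_code/introduction/display_image/display_image.cpp` remains the authoritative version:
@code{.cpp}
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <string>

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    string imageName("HappyFish.jpg");  // picture taken by default
    if( argc > 1 )
        imageName = argv[1];

    Mat image;
    image = imread(imageName, IMREAD_COLOR);  // read the file

    if( image.empty() )                       // check for invalid input
    {
        cout << "Could not open or find the image" << endl;
        return -1;
    }

    namedWindow("Display window", WINDOW_AUTOSIZE);  // create a window for display
    imshow("Display window", image);                 // show our image inside it

    waitKey(0);  // wait for a keystroke in the window
    return 0;
}
@endcode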
Result
------
@ -130,11 +106,10 @@ Result
@endcode
- You should get a nice window as the one shown below:



\htmlonly
<div align="center">
<iframe title="Introduction - Display an Image" width="560" height="349" src="http://www.youtube.com/embed/1OJEqpuaGc4?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
\endhtmlonly
|
||||
Building OpenCV from Source, using CMake and Command Line
|
||||
---------------------------------------------------------
|
||||
|
||||
1. Make symbolic link for Xcode to let OpenCV build scripts find the compiler, header files etc.
|
||||
-# Make symbolic link for Xcode to let OpenCV build scripts find the compiler, header files etc.
|
||||
@code{.bash}
|
||||
cd /
|
||||
sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
|
||||
@endcode
|
||||
|
||||
2. Build OpenCV framework:
|
||||
-# Build OpenCV framework:
|
||||
@code{.bash}
|
||||
cd ~/<my_working_directory>
|
||||
python opencv/platforms/ios/build_framework.py ios
|
||||
|
@ -17,51 +17,51 @@ are more or less the same for other versions.
Now, we will define OpenCV as a user library in Eclipse, so we can reuse the configuration for any
project. Launch Eclipse and select Window --\> Preferences from the menu.



Navigate under Java --\> Build Path --\> User Libraries and click New....



Enter a name, e.g. OpenCV-2.4.6, for your new library.



Now select your new user library and click Add External JARs....



Browse through `C:\OpenCV-2.4.6\build\java\` and select opencv-246.jar. After adding the jar,
extend the opencv-246.jar and select Native library location and press Edit....



Select External Folder... and browse to select the folder `C:\OpenCV-2.4.6\build\java\x64`. If you
have a 32-bit system you need to select the x86 folder instead of x64.



Your user library configuration should look like this:



Testing the configuration on a new Java project
-----------------------------------------------

Now start creating a new Java project.



On the Java Settings step, under Libraries tab, select Add Library... and select OpenCV-2.4.6, then
click Finish.





Libraries should look like this:



Now that you have created and configured a new Java project it is time to test it. Create a new java
file. Here is a starter code for your convenience:
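The hunk below elides part of the snippet; a complete minimal sketch in the same spirit (the class
name is yours to choose) would be:
@code{.java}
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class Hello
{
    public static void main( String[] args )
    {
        System.loadLibrary( Core.NATIVE_LIBRARY_NAME ); // load the native OpenCV library
        Mat m = Mat.eye( 3, 3, CvType.CV_8UC1 );        // 3x3 identity matrix
        System.out.println( "m = " + m.dump() );
    }
}
@endcode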
@ -82,7 +82,7 @@ public class Hello
@endcode
When you run the code you should see a 3x3 identity matrix as output.



That is it. Whenever you start a new project just add the OpenCV user library that you have defined
to your project and you are good to go. Enjoy your powerful, less painful development environment :)
@ -4,45 +4,45 @@ Using OpenCV with Eclipse (plugin CDT) {#tutorial_linux_eclipse}
Prerequisites
-------------
Two ways, one by forming a project directly, and another by CMake:
-# Having installed [Eclipse](http://www.eclipse.org/) in your workstation (only the CDT plugin for
C/C++ is needed). You can follow these steps:
- Go to the Eclipse site
- Download [Eclipse IDE for C/C++
Developers](http://www.eclipse.org/downloads/packages/eclipse-ide-cc-developers/heliossr2).
Choose the link according to your workstation.
-# Having installed OpenCV. If not yet, go @ref tutorial_linux_install "here".
Making a project
----------------

-# Start Eclipse. Just run the executable that comes in the folder.
-# Go to **File -\> New -\> C/C++ Project**



-# Choose a name for your project (i.e. DisplayImage). An **Empty Project** should be okay for this
example.



-# Leave everything else by default. Press **Finish**.
-# Your project (in this case DisplayImage) should appear in the **Project Navigator** (usually at
the left side of your window).



-# Now, let's add a source file using OpenCV:
- Right click on **DisplayImage** (in the Navigator). **New -\> Folder**.



- Name your folder **src** and then hit **Finish**
- Right click on your newly created **src** folder. Choose **New source file**:
- Call it **DisplayImage.cpp**. Hit **Finish**



-# So, now you have a project with an empty .cpp file. Let's fill it with some sample code (in other
words, copy and paste the snippet below):
@code{.cpp}
#include <opencv2/opencv.hpp>
@ -68,7 +68,7 @@ Making a project
    return 0;
}
@endcode
-# We are only missing one final step: to tell Eclipse where the OpenCV headers and libraries are.
For this, do the following:

- Go to **Project--\>Properties**
@ -78,7 +78,7 @@ Making a project
include the path of the folder where opencv was installed. In our example, this is
/usr/local/include/opencv.

|
||||

|
||||
|
||||
@note If you do not know where your opencv files are, open the **Terminal** and type:
|
||||
@code{.bash}
|
||||
@ -103,7 +103,7 @@ Making a project
|
||||
opencv_core opencv_imgproc opencv_highgui opencv_ml opencv_video opencv_features2d
|
||||
opencv_calib3d opencv_objdetect opencv_contrib opencv_legacy opencv_flann
|
||||
|
||||

|
||||

|
||||
|
||||
If you don't know where your libraries are (or you are just psychotic and want to make sure
|
||||
the path is fine), type in **Terminal**:
|
||||
@ -120,7 +120,7 @@ Making a project
|
||||
|
||||
In the Console you should get something like
|
||||
|
||||

|
||||

|
||||
|
||||
If you check in your folder, there should be an executable there.
|
||||
|
||||
@ -138,21 +138,21 @@ Assuming that the image to use as the argument would be located in
\<DisplayImage_directory\>/images/HappyLittleFish.png. We can still do this, but let's do it from
Eclipse:

-# Go to **Run-\>Run Configurations**
-# Under C/C++ Application you will see the name of your executable + Debug (if not, click over
C/C++ Application a couple of times). Select the name (in this case **DisplayImage Debug**).
-# Now, in the right side of the window, choose the **Arguments** Tab. Write the path of the image
file we want to open (path relative to the workspace/DisplayImage folder). Let's use
**HappyLittleFish.png**:



-# Click on the **Apply** button and then on Run. An OpenCV window should pop up with the fish
image (or whatever you used).



-# Congratulations! You are ready to have fun with OpenCV using Eclipse.
### V2: Using CMake+OpenCV with Eclipse (plugin CDT)

@ -170,25 +170,25 @@ int main ( int argc, char **argv )
    return 0;
}
@endcode
-# Create a build directory, say, under *foo*: mkdir build. Then cd build.
-# Put a `CmakeLists.txt` file in build:
@code{.bash}
PROJECT( helloworld_proj )
FIND_PACKAGE( OpenCV REQUIRED )
ADD_EXECUTABLE( helloworld helloworld.cxx )
TARGET_LINK_LIBRARIES( helloworld ${OpenCV_LIBS} )
@endcode
-# Run: cmake-gui .. and make sure you fill in where opencv was built.
-# Then click configure and then generate. If it's OK, **quit cmake-gui**
-# Run `make -j4` (the -j4 is optional, it just tells the compiler to build in 4 threads). Make
sure it builds.
-# Start eclipse. Put the workspace in some directory but **not** in foo or `foo\build`
-# Right click in the Project Explorer section. Select Import and then open the C/C++ filter.
Choose *Existing Code* as a Makefile Project.
-# Name your project, say *helloworld*. Browse to the Existing Code location `foo\build` (where
you ran your cmake-gui from). Select *Linux GCC* in the *"Toolchain for Indexer Settings"* and
press *Finish*.
-# Right click in the Project Explorer section. Select Properties. Under C/C++ Build, set the
*build directory:* from something like `${workspace_loc:/helloworld}` to
`${workspace_loc:/helloworld}/build` since that's where you are building to.
@ -196,4 +196,4 @@ TARGET_LINK_LIBRARIES( helloworld \f${OpenCV_LIBS} )
`make VERBOSE=1 -j4` which tells the compiler to produce detailed symbol files for debugging and
also to compile in 4 parallel threads.

-# Done!

@ -1,13 +1,12 @@
Using OpenCV with gcc and CMake {#tutorial_linux_gcc_cmake}
===============================

@note We assume that you have successfully installed OpenCV in your workstation.
- The easiest way of using OpenCV in your code is to use [CMake](http://www.cmake.org/); a minimal
`CMakeLists.txt` sketch follows this list. A few advantages (taken from the Wiki):
-# No need to change anything when porting between Linux and Windows
-# Can easily be combined with other tools by CMake (i.e. Qt, ITK and VTK)
- If you are not familiar with CMake, check out the
[tutorial](http://www.cmake.org/cmake/help/cmake_tutorial.html) on its website.
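A minimal `CMakeLists.txt` for the DisplayImage example used in these tutorials - a sketch written
in the same style as the helloworld project above, not the exact file shipped with OpenCV:
@code{.bash}
cmake_minimum_required(VERSION 2.8)
PROJECT( DisplayImage )
FIND_PACKAGE( OpenCV REQUIRED )
ADD_EXECUTABLE( DisplayImage DisplayImage.cpp )
TARGET_LINK_LIBRARIES( DisplayImage ${OpenCV_LIBS} )
@endcode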
@ -75,5 +74,4 @@ giving an image location as an argument, i.e.:
@endcode
You should get a nice window as the one shown below:


@ -49,7 +49,7 @@ git clone https://github.com/Itseez/opencv_contrib.git
Building OpenCV from Source Using CMake
---------------------------------------

-# Create a temporary directory, which we denote as \<cmake_build_dir\>, where you want to put
the generated Makefiles and project files, as well as the object files and output binaries, and
enter it.

@ -59,7 +59,7 @@ Building OpenCV from Source Using CMake
mkdir build
cd build
@endcode
-# Configuring. Run cmake [\<some optional parameters\>] \<path to the OpenCV source directory\>

For example
@code{.bash}
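# (the concrete command is elided by the hunk below; a typical invocation -
# an assumption to adapt to your own paths - is:)
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local ..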
@ -73,14 +73,14 @@ Building OpenCV from Source Using CMake
- run: “Configure”
- run: “Generate”

-# Description of some parameters
- build type: `CMAKE_BUILD_TYPE=Release\Debug`
- to build with modules from opencv_contrib set OPENCV_EXTRA_MODULES_PATH to \<path to
opencv_contrib/modules/\>
- set BUILD_DOCS for building documents
- set BUILD_EXAMPLES to build all examples

-# [optional] Building python. Set the following python parameters:
- PYTHON2(3)_EXECUTABLE = \<path to python\>
- PYTHON_INCLUDE_DIR = /usr/include/python\<version\>
- PYTHON_INCLUDE_DIR2 = /usr/include/x86_64-linux-gnu/python\<version\>
@ -88,18 +88,18 @@ Building OpenCV from Source Using CMake
- PYTHON2(3)_NUMPY_INCLUDE_DIRS =
/usr/lib/python\<version\>/dist-packages/numpy/core/include/
-# [optional] Building java.
- Unset parameter: BUILD_SHARED_LIBS
- It is also useful to unset BUILD_EXAMPLES, BUILD_TESTS, BUILD_PERF_TESTS - as they all
will be statically linked with OpenCV and can take a lot of memory.

-# Build. From the build directory execute make; it is recommended to do this in several threads.

For example
@code{.bash}
make -j7 # runs 7 jobs in parallel
@endcode
-# [optional] Building documents. Enter \<cmake_build_dir/doc/\> and run make with target
"html_docs"

For example
@ -107,11 +107,11 @@ Building OpenCV from Source Using CMake
cd ~/opencv/build/doc/
make -j7 html_docs
@endcode
-# To install libraries, from the build directory execute
@code{.bash}
sudo make install
@endcode
-# [optional] Running tests

- Get the required test data from [OpenCV extra
repository](https://github.com/Itseez/opencv_extra).
@ -55,9 +55,9 @@ int main( int argc, char** argv )
Explanation
-----------

-# We begin by loading an image using @ref cv::imread , located in the path given by *imageName*.
For this example, assume you are loading an RGB image.
-# Now we are going to convert our image from BGR to Grayscale format. OpenCV has a really nice
function to do this kind of transformation:
@code{.cpp}
cvtColor( image, gray_image, COLOR_BGR2GRAY );
@ -70,7 +70,7 @@ Explanation
this case we use **COLOR_BGR2GRAY** (because @ref cv::imread has BGR default channel
order in case of color images).

-# So now we have our new *gray_image* and want to save it on disk (otherwise it will get lost
after the program ends). To save it, we will use a function analogous to @ref cv::imread : @ref
cv::imwrite
@code{.cpp}
@ -79,7 +79,7 @@ Explanation
Which will save our *gray_image* as *Gray_Image.jpg* in the folder *images* located two levels
up from my current location.

-# Finally, let's check out the images. We create two windows and use them to show the original
image as well as the new one:
@code{.cpp}
namedWindow( imageName, WINDOW_AUTOSIZE );
@ -88,18 +88,18 @@ Explanation
imshow( imageName, image );
imshow( "Gray image", gray_image );
@endcode
-# Add the *waitKey(0)* function call for the program to wait forever for a user key press.

Result
------

When you run your program you should get something like this:



And if you check in your folder (in my case *images*), you should have a new .jpg file named
*Gray_Image.jpg*:



Congratulations, you are done with this tutorial!
@ -14,15 +14,15 @@ technologies we integrate into our library. .. _Windows_Install_Prebuild:
Installation by Using the Pre-built Libraries {#tutorial_windows_install_prebuilt}
=============================================

-# Launch a web browser of choice and go to our [page on
Sourceforge](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/).
-# Choose a build you want to use and download it.
-# Make sure you have admin rights. Unpack the self-extracting archive.
-# You can check the installation at the chosen path as you can see below.



-# To finalize the installation go to the @ref tutorial_windows_install_path section.
Installation by Making Your Own Libraries from the Source Files {#tutorial_windows_install_build}
===============================================================

@ -97,18 +97,18 @@ libraries). If you do not need the support for some of these you can just freely

### Building the library

-# Make sure you have a working IDE with a valid compiler. In case of the Microsoft Visual Studio
just install it and make sure it starts up.
-# Install [CMake](http://www.cmake.org/cmake/resources/software.html). Simply follow the wizard, no need to add it to the path. The default install
options are OK.
-# Download and install an up-to-date version of msysgit from its [official
site](http://code.google.com/p/msysgit/downloads/list). There is also the portable version,
which you only need to unpack to get access to the console version of Git. For some of us that
could be quite enough.
-# Install [TortoiseGit](http://code.google.com/p/tortoisegit/wiki/Download). Choose the 32 or 64 bit version according to the type of OS you work in.
While installing, locate your msysgit (if it doesn't do that automatically). Follow the
wizard -- the default options are OK for the most part.
-# Choose a directory in your file system, where you will download the OpenCV libraries to. I
recommend creating a new one that has a short path and no special characters in it, for example
`D:/OpenCV`. For this tutorial I'll suggest you do so. If you use your own path and know what
you're doing -- it's OK.
@ -118,7 +118,7 @@ libraries). If you do not need the support for some of these you can just freely
-# Push the OK button and be patient as the repository is quite a heavy download. It will take
some time depending on your Internet connection.

-# In this section I will cover installing the 3rd party libraries.
-# Download the [Python libraries](http://www.python.org/downloads/) and install them with the default options. You will need a
couple of other python extensions. Luckily installing all these may be automated by a nice tool
called [Setuptools](http://pypi.python.org/pypi/setuptools#downloads). Download and install
@ -131,9 +131,9 @@ libraries). If you do not need the support for some of these you can just freely
Script sub-folder. Here just pass the name of the program you want to install as an argument
to *easy_install.exe*. Add the *sphinx* argument.



@note
The *CD* navigation command works only inside a drive. For example if you are somewhere in the
@ -152,7 +152,7 @@ libraries). If you do not need the support for some of these you can just freely
sure you select for the *"Install missing packages on-the-fly"* the *Yes* option, as you can
see on the image below. Again this will take quite some time so be patient.



-# For the [Intel Threading Building Blocks (*TBB*)](http://threadingbuildingblocks.org/file.php?fid=77)
download the source files and extract
@ -161,7 +161,7 @@ libraries). If you do not need the support for some of these you can just freely
the story is the same. For
extracting the archives I recommend using the [7-Zip](http://www.7-zip.org/) application.



-# For the [Intel IPP Asynchronous C/C++](http://software.intel.com/en-us/intel-ipp-preview) download the source files and set environment
variable **IPP_ASYNC_ROOT**. It should point to
@ -182,14 +182,14 @@ libraries). If you do not need the support for some of these you can just freely
Downloads](http://qt.nokia.com/downloads) page. Download the source files (not the
installers!!!):



Extract it into a nice, short-named directory like `D:/OpenCV/dep/qt/`. Then you need to
build it. Start up a *Visual* *Studio* *Command* *Prompt* (*2010*) by using the start menu
search (or navigate through the start menu
All Programs --\> Microsoft Visual Studio 2010 --\> Visual Studio Tools --\> Visual Studio Command Prompt (2010)).



Now navigate to the extracted folder and enter it by using this console window. You
should have a folder containing files like *Install*, *Make* and so on. Use the *dir* command
@ -216,25 +216,25 @@ libraries). If you do not need the support for some of these you can just freely
Visual Studio Add-in*. After this you can make and build Qt applications without using the *Qt
Creator*. Everything is nicely integrated into Visual Studio.
-# Now start the *CMake (cmake-gui)*. You may again enter it in the start menu search or get it
from the All Programs --\> CMake 2.8 --\> CMake (cmake-gui). First, select the directory for the
source files of the OpenCV library (1). Then, specify a directory where you will build the
binary files for OpenCV (2).



Press the Configure button to specify the compiler (and *IDE*) you want to use. Note that
you can choose between different compilers for making either 64 bit or 32 bit libraries.
Select the one you use in your application development.



CMake will start out and, based on your system variables, will try to automatically locate as many
packages as possible. You can modify the packages to use for the build in the WITH --\> WITH_X
menu points (where *X* is the package abbreviation). Here is a list of current packages you can
turn on or off:



Select all the packages you want to use and press again the *Configure* button. For an easier
overview of the build options make sure the *Grouped* option under the binary directory
@ -242,9 +242,9 @@ libraries). If you do not need the support for some of these you can just freely
directories. In case of these CMake will throw an error in its output window (located at the
bottom of the GUI) and set its field values to not-found constants. For example:





For these you need to manually set the queried directories or file paths. After this press the
*Configure* button again to see if the value you entered was accepted or not. Do this until all
@ -254,7 +254,7 @@ libraries). If you do not need the support for some of these you can just freely
option will make sure that they are categorized inside directories in the *Solution Explorer*.
It is a must-have feature, if you ask me.



Furthermore, you need to select what part of OpenCV you want to build.
@ -286,24 +286,24 @@ libraries). If you do not need the support for some of these you can just freely
IDE at the startup. Now you need to build both the *Release* and the *Debug* binaries. Use the
drop-down menu on your IDE to change to another of these after building for one of them.



In the end you can observe the built binary files inside the bin directory:



For the documentation you need to explicitly issue the build commands on the *doc* project for
the PDF files and on the *doc_html* for the HTML ones. Each of these will call *Sphinx* to do
all the hard work. You can find the generated documentation inside the `Build/Doc/_html` for the
HTML pages and within the `Build/Doc` the PDF manuals.



To collect the header and the binary files, that you will use during your own projects, into a
separate directory (similarly to how the pre-built binaries ship) you need to explicitly build
the *Install* project.



This will create an *Install* directory inside the *Build* one collecting all the built binaries
into a single place. Use this only after you built both the *Release* and *Debug* versions.
@ -314,7 +314,7 @@ libraries). If you do not need the support for some of these you can just freely
If everything is okay the *contours.exe* output should resemble the following image (if
built with Qt support):



@note
If you use the GPU module (CUDA libraries) make sure you also upgrade to the latest drivers of
@ -353,9 +353,9 @@ following new entry (right click in the application to bring up the menu):
%OPENCV_DIR%\bin
@endcode




Save it to the registry and you are done, as sketched below. If you ever change the location of your
build directories or want to try out your application with a different build, all you will need to
do is to update the
@ -1,13 +1,13 @@
How to build applications with OpenCV inside the "Microsoft Visual Studio" {#tutorial_windows_visual_studio_Opencv}
==========================================================================

Everything I describe here will apply to the `C\C++` interface of OpenCV. I start out from the
assumption that you have read and completed with success the @ref tutorial_windows_install tutorial.
Therefore, before you go any further make sure you have an OpenCV directory that contains the OpenCV
header files plus binaries and you have set the environment variables as described here:
@ref tutorial_windows_install_path.



The OpenCV libraries, distributed by us, on the Microsoft Windows operating system come as
Dynamic Linked Libraries (*DLL*). These have the advantage that all the content of the
@ -58,7 +58,7 @@ create a new solution inside Visual studio by going through the File --\> New --
selection. Choose *Win32 Console Application* as type. Enter its name and select the path where to
create it. Then in the upcoming dialog make sure you create an empty project.



The *local* method
------------------
@ -75,7 +75,7 @@ you can view and modify them by using the *Property Manager*. You can bring up th
View --\> Property Pages. Expand it and you can see the existing rule packages (called *Property
Sheets*).



The really useful stuff of these is that you may create a rule package *once* and you can later just
add it to your new projects. Create it once and reuse it later. We want to create a new *Property
@ -83,7 +83,7 @@ Sheet* that will contain all the rules that the compiler and linker needs to kno
need a separate one for the Debug and the Release Builds. Start up with the Debug one as shown in
the image below:



Use for example the *OpenCV_Debug* name. Then by selecting the sheet Right Click --\> Properties.
In the following I will show how to set the OpenCV rules locally, as I find it unnecessary to pollute
@ -93,7 +93,7 @@ group, you should add any .c/.cpp file to the project.
@code{.bash}
$(OPENCV_DIR)\..\..\include
@endcode


When adding third party library settings it is generally a good idea to use the power behind the
environment variables. The full location of the OpenCV library may change on each system. Moreover,
@ -111,15 +111,15 @@ directory:
$(OPENCV_DIR)\lib
@endcode

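If the *OPENCV_DIR* variable does not exist yet on your machine, it can be created once from a command prompt (run as administrator, since the *-m* switch makes it machine-wide); a minimal sketch, where the path is only an example and should point at your own build directory:
@code{.bash}
setx -m OPENCV_DIR D:\OpenCV\Build\x86\vc10
@endcode
Afterwards every project can refer to the install through $(OPENCV_DIR) without hard-coding the location.
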

|
||||

|
||||
|
||||
Then you need to specify the libraries in which the linker should look into. To do this go to the
|
||||
Linker --\> Input and under the *"Additional Dependencies"* entry add the name of all modules which
|
||||
you want to use:
|
||||
|
||||

|
||||

|
||||
|
||||

|
||||

|
||||
|
||||
The names of the libraries are as follow:
|
||||
@code{.bash}
|
||||
@ -150,19 +150,19 @@ click ok to save and do the same with a new property inside the Release rule sec
omit the *d* letters from the library names and to save the property sheets with the save icon above
them.

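As a hedged illustration of that naming scheme (the numeric suffix depends on the OpenCV version you built, so treat it as a placeholder):
@code{.bash}
opencv_core<version>d.lib    <-- Debug property sheet
opencv_core<version>.lib     <-- Release property sheet
@endcode
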

|
||||

|
||||
|
||||
You can find your property sheets inside your projects directory. At this point it is a wise
|
||||
decision to back them up into some special directory, to always have them at hand in the future,
|
||||
whenever you create an OpenCV project. Note that for Visual Studio 2010 the file extension is
|
||||
*props*, while for 2008 this is *vsprops*.
|
||||
|
||||

|
||||

|
||||
|
||||
Next time when you make a new OpenCV project just use the "Add Existing Property Sheet..." menu
|
||||
entry inside the Property Manager to easily add the OpenCV build rules.
|
||||
|
||||

|
||||

|
||||
|
||||
The *global* method
-------------------
@ -175,12 +175,12 @@ by using for instance: a Property page.
In Visual Studio 2008 you can find this under the:
Tools --\> Options --\> Projects and Solutions --\> VC++ Directories.



In Visual Studio 2010 this has been moved to a global property sheet which is automatically added to
every project you create:



The process is the same as described in case of the local approach. Just add the include directories
by using the environment variable *OPENCV_DIR*.
@ -210,7 +210,7 @@ OpenCV logo](samples/data/opencv-logo.png). Before starting up the application m
the image file in your current working directory. Modify the image file name inside the code to try
it out on other images too. Run it and voilà:



Command line arguments with Visual Studio
-----------------------------------------
@ -230,7 +230,7 @@ with the console window on the Microsoft Windows many people come to use it almo
adding the same argument again and again while you are testing your application is a somewhat
cumbersome task. Luckily, in the Visual Studio there is a menu to automate all this:



Specify here the name of the inputs and while you start your application from the Visual Studio
environment you have automatic argument passing. In the next introductory tutorial you'll see an

@ -10,10 +10,10 @@ Prerequisites

This tutorial assumes that you have the following available:

-# Visual Studio 2012 Professional (or better) with Update 1 installed. Update 1 can be downloaded
   [here](http://www.microsoft.com/en-us/download/details.aspx?id=35774).
-# An OpenCV installation on your Windows machine (Tutorial: @ref tutorial_windows_install).
-# Ability to create and build OpenCV projects in Visual Studio (Tutorial: @ref tutorial_windows_visual_studio_Opencv).

Installation
------------
@ -98,13 +98,13 @@ Launch the program in the debugger (Debug --\> Start Debugging, or hit *F5*). Wh
hit, the program is paused and Visual Studio displays a yellow instruction pointer at the
breakpoint:


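For orientation, the program being stepped through here is a small edge detector along these lines (a sketch with illustrative names, not the verbatim sample):
@code{.cpp}
cv::Mat input = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
cv::Mat edges;
cv::Canny(input, edges, 50, 150); // set the breakpoint here and inspect 'edges'
@endcode
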
Now you can inspect the state of your program. For example, you can bring up the *Locals* window
(Debug --\> Windows --\> Locals), which will show the names and values of the variables in the
current scope:



Note that the built-in *Locals* window will display text only. This is where the Image Watch plug-in
comes in. Image Watch is like another *Locals* window, but with an image viewer built into it. To
@ -114,7 +114,7 @@ had Image Watch open, and where it was located between debugging sessions. This
to do this once--the next time you start debugging, Image Watch will be back where you left it.
Here's what the docked Image Watch window looks like at our breakpoint:



The radio button at the top left (*Locals/Watch*) selects what is shown in the *Image List* below:
*Locals* lists all OpenCV image objects in the current scope (this list is automatically populated).
@ -128,7 +128,7 @@ If an image has a thumbnail, left-clicking on that image will select it for deta
*Image Viewer* on the right. The viewer lets you pan (drag mouse) and zoom (mouse wheel). It also
displays the pixel coordinate and value at the current mouse position.



Note that the second image in the list, *edges*, is shown as "invalid". This indicates that some
data members of this image object have corrupt or invalid values (for example, a negative image
@ -146,18 +146,18 @@ Now assume you want to do a visual sanity check of the *cv::Canny()* implementat
*edges* image into the viewer by selecting it in the *Image List* and zoom into a region with a
clearly defined edge:



Right-click on the *Image Viewer* to bring up the view context menu and enable Link Views (a check
box next to the menu item indicates whether the option is enabled).



The Link Views feature keeps the view region fixed when flipping between images of the same size. To
see how this works, select the input image from the image list--you should now see the corresponding
zoomed-in region in the input image:



You may also switch back and forth between viewing input and edges with your up/down cursor keys.
That way you can easily verify that the detected edges line up nicely with the data in the input
@ -168,12 +168,12 @@ More ...

Image Watch has a number of more advanced features, such as

-# pinning images to a *Watch* list for inspection across scopes or between debugging sessions
-# clamping, thresholding, or diff'ing images directly inside the Watch window
-# comparing an in-memory image against a reference image from a file

Please refer to the online [Image Watch
Documentation](http://go.microsoft.com/fwlink/?LinkId=285461) for details--you can also get to the
documentation page by clicking on the *Help* link in the Image Watch window:


@ -9,46 +9,45 @@ In this tutorial we will learn how to:
- Link OpenCV framework with Xcode
- How to write a simple Hello World application using OpenCV and Xcode.

Linking OpenCV iOS
------------------

Follow this step by step guide to link OpenCV to iOS.

-# Create a new XCode project.
-# Now we need to link *opencv2.framework* with Xcode. Select the project Navigator in the left
   hand panel and click on project name.
-# Under the TARGETS click on Build Phases. Expand Link Binary With Libraries option.
-# Click on Add others and go to the directory where *opencv2.framework* is located and click open.
-# Now you can start writing your application.



Hello OpenCV iOS Application
----------------------------

Now we will learn how to write a simple Hello World Application in Xcode using OpenCV.

- Link your project with OpenCV as shown in the previous section.
- Open the file named *NameOfProject-Prefix.pch* (replace NameOfProject with the name of your
  project) and add the following lines of code.
@code{.m}
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
@endcode


- Add the following lines of code to the viewDidLoad method in ViewController.m.
@code{.m}
UIAlertView * alert = [[UIAlertView alloc] initWithTitle:@"Hello!" message:@"Welcome to OpenCV" delegate:self cancelButtonTitle:@"Continue" otherButtonTitles:nil];
[alert show];
@endcode


- You are good to run the project.

Output
------


@ -6,14 +6,14 @@ Goal

In this tutorial we will learn how to do basic image processing using OpenCV in iOS.

Introduction
------------

In *OpenCV* all the image processing operations are usually carried out on the *Mat* structure. In
iOS however, to render an image on screen it has to be an instance of the *UIImage* class. To
convert an *OpenCV Mat* to an *UIImage* we use the *Core Graphics* framework available in iOS. Below
is the code needed to convert back and forth between Mat and UIImage.
@code{.m}
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
@ -37,7 +37,7 @@ is the code needed to convert back and forth between Mat and UIImage.
return cvMat;
}
@endcode
@code{.m}
- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
@ -63,12 +63,12 @@ is the code needed to convert back and forth between Mat and UIImage.
@endcode
Once you have the Mat you can process it; for example, by converting it to gray-scale:
@code{.m}
cv::Mat greyMat;
cv::cvtColor(inputMat, greyMat, COLOR_BGR2GRAY);
@endcode
After the processing we need to convert the result back to UIImage. The code below can handle both
gray-scale and color image conversions (determined by the number of channels in the *if* statement).
@code{.m}
-(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
@ -106,10 +106,11 @@ After the processing we need to convert it back to UIImage.
return finalImage;
}
@endcode
Output
------



Check out an instance of running code with more Image Effects on
[YouTube](http://www.youtube.com/watch?v=Ko3K_xdhJ1I).
@ -119,4 +120,3 @@ Check out an instance of running code with more Image Effects on
<iframe width="560" height="350" src="http://www.youtube.com/embed/Ko3K_xdhJ1I" frameborder="0" allowfullscreen></iframe>
</div>
\endhtmlonly

@ -14,11 +14,11 @@ Including OpenCV library in your iOS project

The OpenCV library comes as a so-called framework, which you can directly drag-and-drop into your
XCode project. Download the latest binary from
<http://sourceforge.net/projects/opencvlibrary/files/opencv-ios/>. Alternatively follow this
guide @ref tutorial_ios_install to compile the framework manually. Once you have the framework, just
drag-and-drop it into XCode:



Also you have to locate the prefix header that is used for all header files in the project. The file
is typically located at "ProjectName/Supporting Files/ProjectName-Prefix.pch". There, you have to add
@ -54,7 +54,7 @@ First, we create a simple iOS project, for example Single View Application. Then
an UIImageView and UIButton to start the camera and display the video frames. The storyboard could
look like that:



Make sure to add and connect the IBOutlets and IBActions to the corresponding ViewController:
@code{.objc}
@ -127,7 +127,7 @@ should have at least the following frameworks in your project:
- UIKit
- Foundation



#### Processing frames

@ -23,17 +23,19 @@ In which sense is the hyperplane obtained optimal? Let's consider the following
For a linearly separable set of 2D-points which belong to one of two classes, find a separating
straight line.



@note In this example we deal with lines and points in the Cartesian plane instead of hyperplanes
and vectors in a high dimensional space. This is a simplification of the problem. It is important to
understand that this is done only because our intuition is better built from examples that are easy
to imagine. However, the same concepts apply to tasks where the examples to classify lie in a space
whose dimension is higher than two.

In the above picture you can see that there exist multiple
lines that offer a solution to the problem. Is any of them better than the others? We can
intuitively define a criterion to estimate the worth of the lines:

- A line is bad if it passes too close to the points because it will be noise sensitive and it will
  not generalize correctly. Therefore, our goal should be to find the line passing as far as
  possible from all points.

@ -42,7 +44,7 @@ minimum distance to the training examples. Twice, this distance receives the imp
**margin** within SVM's theory. Therefore, the optimal separating hyperplane *maximizes* the margin
of the training data.



How is the optimal hyperplane computed?
---------------------------------------
@ -55,7 +57,9 @@ where \f$\beta\f$ is known as the *weight vector* and \f$\beta_{0}\f$ as the *bi

@sa A more in-depth description of this and hyperplanes can be found in section 4.5 (*Separating
Hyperplanes*) of the book: *Elements of Statistical Learning* by T. Hastie, R. Tibshirani and J. H.
Friedman.

The optimal hyperplane can be represented in an infinite number of different ways by
scaling of \f$\beta\f$ and \f$\beta_{0}\f$. As a matter of convention, among all the possible
representations of the hyperplane, the one chosen is

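The hunk cuts the sentence short; for reference, the convention it leads to in the standard SVM formulation is the *canonical hyperplane*:

\f[|\beta_{0} + \beta^{T} x| = 1\f]

where \f$x\f$ symbolizes the training examples closest to the hyperplane (the *support vectors*). Under this convention the margin evaluates to \f$2 / \|\beta\|\f$, which is exactly the quantity the optimization maximizes.
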
@ -99,7 +103,7 @@ Source Code
Explanation
-----------

-# **Set up the training data**

The training data of this exercise is formed by a set of labeled 2D-points that belong to one of
two different classes; one of the classes consists of one point and the other of three points.
@ -115,7 +119,7 @@ Explanation
Mat labelsMat (4, 1, CV_32FC1, labels);
@endcode

-# **Set up SVM's parameters**

In this tutorial we have introduced the theory of SVMs in the most simple case, when the
training examples are spread into two classes that are linearly separable. However, SVMs can be
@ -149,7 +153,7 @@ Explanation
less number of steps even if the optimal hyperplane has not been computed yet. This
parameter is defined in a structure @ref cv::cvTermCriteria .

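As a compact sketch of such a setup (these are the C-API names this version of the tutorial relies on; adapt them to your OpenCV release):
@code{.cpp}
CvSVMParams params;
params.svm_type    = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
@endcode
Here the criterion stops training after at most 100 iterations or once the tolerance 1e-6 is reached, whichever comes first.
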
-# **Train the SVM**

We call the method
[CvSVM::train](http://docs.opencv.org/modules/ml/doc/support_vector_machines.html#cvsvm-train)
@ -159,7 +163,7 @@ Explanation
SVM.train(trainingDataMat, labelsMat, Mat(), Mat(), params);
@endcode

-# **Regions classified by the SVM**

The method @ref cv::ml::SVM::predict is used to classify an input sample using a trained SVM. In
this example we have used this method in order to color the space depending on the prediction done
@ -183,7 +187,7 @@ Explanation
}
@endcode

-# **Support vectors**

We use here a couple of methods to obtain information about the support vectors.
The method @ref cv::ml::SVM::getSupportVectors obtains all of the support
@ -209,4 +213,4 @@ Results
optimal separating hyperplane.
- Finally the support vectors are shown using gray rings around the training examples.


@ -61,11 +61,13 @@ region. The following picture shows non-linearly separable training data from tw
separating hyperplane and the distances to their correct regions of the samples that are
misclassified.



@note Only the distances of the samples that are misclassified are shown in the picture. The
distances of the rest of the samples are zero since they already lie in their correct decision
region.

The red and blue lines that appear on the picture are the margins to each one of the
decision regions. It is very **important** to realize that each of the \f$\xi_{i}\f$ goes from a
misclassified training sample to the margin of its appropriate region.

@ -93,13 +95,10 @@ or [download it from here ](samples/cpp/tutorial_code/ml/non_linear_svms/non_lin

@includelineno cpp/tutorial_code/ml/non_linear_svms/non_linear_svms.cpp

Explanation
-----------

-# **Set up the training data**

The training data of this exercise is formed by a set of labeled 2D-points that belong to one of
two different classes. To make the exercise more appealing, the training data is generated
@ -140,7 +139,7 @@ Explanation
rng.fill(c, RNG::UNIFORM, Scalar(1), Scalar(HEIGHT));
@endcode

-# **Set up SVM's parameters**

@sa
In the previous tutorial @ref tutorial_introduction_to_svm there is an explanation of the attributes of the
@ -162,11 +161,12 @@ Explanation
better insight of the problem by making adjustments to this parameter.

@note Here there are just a very few points in the overlapping region between classes. By giving a smaller value to **FRAC_LINEAR_SEP** the density of points can be increased and the impact of the parameter **CvSVM::C_SVC** explored in depth.

- *Termination Criteria of the algorithm*. The maximum number of iterations has to be
  increased considerably in order to solve correctly a problem with non-linearly separable
  training data. In particular, we have increased this value by five orders of magnitude; a
  sketch of what that means in code follows this item.

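A hedged sketch of that termination setting (C-API names as used elsewhere in this tutorial):
@code{.cpp}
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, (int)1e7, 1e-6); // up to 10^7 iterations
@endcode
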
-# **Train the SVM**

We call the method @ref cv::ml::SVM::train to build the SVM model. Watch out that the training
process may take quite a long time. Have patience when you run the program.
@ -175,7 +175,7 @@ Explanation
svm.train(trainData, labels, Mat(), Mat(), params);
@endcode

-# **Show the Decision Regions**

The method @ref cv::ml::SVM::predict is used to classify an input sample using a trained SVM. In
this example we have used this method in order to color the space depending on the prediction done
@ -195,7 +195,7 @@ Explanation
}
@endcode

-# **Show the training data**

The method @ref cv::circle is used to show the samples that compose the training data. The samples
of the class labeled with 1 are shown in light green and in light blue the samples of the class
@ -220,7 +220,7 @@ Explanation
}
@endcode

-# **Support vectors**

We use here a couple of methods to obtain information about the support vectors. The method
@ref cv::ml::SVM::getSupportVectors obtains all support vectors.
@ -250,7 +250,7 @@ Results
and some blue points lay on the green one.
- Finally the support vectors are shown using gray rings around the training examples.



You may observe a runtime instance of this on [YouTube here](https://www.youtube.com/watch?v=vFv2yPcSo-Q).

@ -113,16 +113,16 @@ Explanation
Result
------

-# Here is the result of running the code above and using as input the video stream of a built-in
webcam:



Remember to copy the files *haarcascade_frontalface_alt.xml* and
*haarcascade_eye_tree_eyeglasses.xml* into your current directory. They are located in
*opencv/data/haarcascades*.

-# This is the result of using the file *lbpcascade_frontalface.xml* (LBP trained) for the face
detection. For the eyes we keep using the file used in the tutorial.


@ -26,15 +26,17 @@ be implemented using different algorithms so take a look at the reference manual
Exposure sequence
-----------------



Source Code
-----------

@includelineno cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp

Explanation
-----------

-# **Load images and exposure times**
@code{.cpp}
vector<Mat> images;
vector<float> times;
@ -50,7 +52,8 @@ memorial01.png 0.0625
...
memorial15.png 1024
@endcode

-# **Estimate camera response**
@code{.cpp}
Mat response;
Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
@ -59,7 +62,7 @@ calibrate->process(images, response, times);
It is necessary to know the camera response function (CRF) for a lot of HDR construction algorithms.
We use one of the calibration algorithms to estimate the inverse CRF for all 256 pixel values.

-# **Make HDR image**
@code{.cpp}
Mat hdr;
Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
@ -68,7 +71,7 @@ merge_debevec->process(images, hdr, times, response);
We use Debevec's weighting scheme to construct an HDR image using the response calculated in the
previous item.

-# **Tonemap HDR image**
@code{.cpp}
Mat ldr;
Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
@ -78,7 +81,7 @@ Since we want to see our results on common LDR display we have to map our HDR im
preserving most details. It is the main goal of tonemapping methods. We use a tonemapper with
bilateral filtering and set 2.2 as the value for gamma correction.

-# **Perform exposure fusion**
@code{.cpp}
Mat fusion;
Ptr<MergeMertens> merge_mertens = createMergeMertens();
@ -88,7 +91,7 @@ There is an alternative way to merge our exposures in case when we don't need HD
process is called exposure fusion and produces an LDR image that doesn't require gamma correction. It
also doesn't use exposure values of the photographs.

-# **Write results**
@code{.cpp}
imwrite("fusion.png", fusion * 255);
imwrite("ldr.png", ldr * 255);
@ -98,15 +101,13 @@ Now it's time to look at the results. Note that HDR image can't be stored in one
formats, so we save it to a Radiance image (.hdr). Also, all HDR imaging functions return results in
the [0, 1] range, so we should multiply the result by 255.

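For completeness, the HDR result itself can be saved in Radiance format in the same step (a one-line sketch; the filename is illustrative):
@code{.cpp}
imwrite("memorial.hdr", hdr); // .hdr keeps the full dynamic range, no 255 scaling needed
@endcode
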
Results
-------

### Tonemapped image



### Exposure fusion


@ -75,8 +75,3 @@ As always, we would be happy to hear your comments and receive your contribution

These tutorials show how to use the Viz module effectively.

- @subpage tutorial_table_of_content_general

These tutorials
are the bottom of the iceberg, as they link together several of the modules presented above in
order to solve complex problems.
@ -9,12 +9,12 @@ How to Use Background Subtraction Methods {#tutorial_background_subtraction}
general, everything that can be considered as background given the characteristics of the
observed scene.



- Background modeling consists of two main steps:

  -# Background Initialization;
  -# Background Update.

In the first step, an initial model of the background is computed, while in the second step that
model is updated in order to adapt to possible changes in the scene.
@ -28,11 +28,11 @@ Goals

In this tutorial you will learn how to:

-# Read data from videos by using @ref cv::VideoCapture or image sequences by using @ref
cv::imread ;
-# Create and update the background model by using @ref cv::BackgroundSubtractor class;
-# Get and show the foreground mask by using @ref cv::imshow ;
-# Save the output by using @ref cv::imwrite to quantitatively evaluate the results.

Code
----
@ -40,201 +40,28 @@ Code
In the following you can find the source code. We will let the user choose to process either a video
file or a sequence of images.

Two different methods are used to generate two foreground masks:
-# @ref cv::bgsegm::BackgroundSubtractorMOG
-# @ref cv::BackgroundSubtractorMOG2

The results as well as the input data are shown on the screen.
The source file can be downloaded [here ](samples/cpp/tutorial_code/video/bg_sub.cpp).
@includelineno samples/cpp/tutorial_code/video/bg_sub.cpp

Explanation
-----------

We discuss the main parts of the above code:

-# First, three Mat objects are allocated to store the current frame and two foreground masks,
obtained by using two different BS algorithms.
@code{.cpp}
Mat frame; //current frame
Mat fgMaskMOG; //fg mask generated by MOG method
Mat fgMaskMOG2; //fg mask generated by MOG2 method
@endcode
-# Two @ref cv::BackgroundSubtractor objects will be used to generate the foreground masks. In this
example, default parameters are used, but it is also possible to declare specific parameters in
the create function.
@code{.cpp}
@ -245,8 +72,7 @@ We discuss the main parts of the above code:
pMOG = createBackgroundSubtractorMOG(); //MOG approach
pMOG2 = createBackgroundSubtractorMOG2(); //MOG2 approach
@endcode
-# The command line arguments are analysed. The user can choose between two options:
   - video files (by choosing the option -vid);
   - image sequences (by choosing the option -img).
@code{.cpp}
@ -259,7 +85,7 @@ We discuss the main parts of the above code:
processImages(argv[2]);
}
@endcode
-# Suppose you want to process a video file. The video is read until the end is reached or the user
presses the button 'q' or the button 'ESC'.
@code{.cpp}
while( (char)keyboard != 'q' && (char)keyboard != 27 ){
@ -270,7 +96,7 @@ We discuss the main parts of the above code:
exit(EXIT_FAILURE);
}
@endcode
-# Every frame is used both for calculating the foreground mask and for updating the background. If
you want to change the learning rate used for updating the background model, it is possible to
set a specific learning rate by passing a third parameter to the 'apply' method, as sketched
right after this snippet.
@code{.cpp}
@ -278,7 +104,7 @@ We discuss the main parts of the above code:
pMOG->apply(frame, fgMaskMOG);
pMOG2->apply(frame, fgMaskMOG2);
@endcode
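A hedged one-liner showing that optional third argument (the value here is only an example; valid learning rates lie in [0, 1], and a negative value keeps the automatically chosen one):
@code{.cpp}
pMOG2->apply(frame, fgMaskMOG2, 0.005); // fixed learning rate instead of the default
@endcode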
-# The current frame number can be extracted from the @ref cv::VideoCapture object and stamped in
the top left corner of the current frame. A white rectangle is used to highlight the black
colored frame number.
@code{.cpp}
@ -291,14 +117,14 @@ We discuss the main parts of the above code:
putText(frame, frameNumberString.c_str(), cv::Point(15, 15),
FONT_HERSHEY_SIMPLEX, 0.5 , cv::Scalar(0,0,0));
@endcode
-# We are ready to show the current input frame and the results.
@code{.cpp}
//show the current frame and the fg masks
imshow("Frame", frame);
imshow("FG Mask MOG", fgMaskMOG);
imshow("FG Mask MOG 2", fgMaskMOG2);
@endcode
-# The same operations listed above can be performed using a sequence of images as input. The
processImages function is called and, instead of using a @ref cv::VideoCapture object, the images
are read by using @ref cv::imread , after determining the correct path for the next frame to
read; a sketch of that bookkeeping follows this list.
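A minimal sketch of the filename bookkeeping mentioned above, assuming frames named like `1.png`, `2.png`, ... (names are illustrative):
@code{.cpp}
// split "path/to/7.png" into prefix "path/to/", number 7 and suffix ".png"
size_t slash = fn.find_last_of("/\\");
size_t dot   = fn.find_last_of('.');
istringstream iss(fn.substr(slash + 1, dot - slash - 1));
int frameNumber = 0;
iss >> frameNumber;
ostringstream oss;
oss << (frameNumber + 1);
string nextFrameFilename = fn.substr(0, slash + 1) + oss.str() + fn.substr(dot);
frame = imread(nextFrameFilename);
@endcode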
@ -338,7 +164,7 @@ Results
@endcode
The output of the program will look as the following:



- The video file Video_001.avi is part of the [Background Models Challenge
(BMC)](http://bmc.univ-bpclermont.fr/) data set and it can be downloaded from the following link
@ -350,7 +176,7 @@ Results
@endcode
The output of the program will look as the following:



- The sequence of images used in this example is part of the [Background Models Challenge
(BMC)](http://bmc.univ-bpclermont.fr/) dataset and it can be downloaded from the following link
@ -385,7 +211,5 @@ the accuracy of the results.
References
----------

- [Background Models Challenge (BMC) website](http://bmc.univ-bpclermont.fr/)
- A Benchmark Dataset for Foreground/Background Extraction @cite vacavant2013benchmark

@ -13,97 +13,8 @@ Code
----

You can download the code from [here ](samples/cpp/tutorial_code/viz/creating_widgets.cpp).
@includelineno samples/cpp/tutorial_code/viz/creating_widgets.cpp

Explanation
-----------

@ -135,10 +46,10 @@ WTriangle tw(Point3f(0.0,0.0,0.0), Point3f(1.0,1.0,1.0), Point3f(0.0,1.0,0.0), v
/// Show widget in the visualizer window
myWindow.showWidget("TRIANGLE", tw);
@endcode

Results
-------

Here is the result of the program.


@ -15,52 +15,8 @@ Code
----

You can download the code from [here ](samples/cpp/tutorial_code/viz/launching_viz.cpp).
@includelineno samples/cpp/tutorial_code/viz/launching_viz.cpp

Explanation
-----------

@ -99,10 +55,10 @@ while(!sameWindow.wasStopped())
sameWindow.spinOnce(1, true);
}
@endcode

Results
-------

Here is the result of the program.


@ -14,96 +14,8 @@ Code
----

You can download the code from [here ](samples/cpp/tutorial_code/viz/transformations.cpp).
@includelineno samples/cpp/tutorial_code/viz/transformations.cpp

Explanation
-----------

@ -163,15 +75,14 @@ myWindow.showWidget("bunny", cloud_widget, cloud_pose_global);
if (camera_pov)
myWindow.setViewerPose(cam_pose);
@endcode

Results
-------

-# Here is the result from the camera point of view.



-# Here is the result from global point of view.


@ -14,66 +14,8 @@ Code
----

You can download the code from [here ](samples/cpp/tutorial_code/viz/widget_pose.cpp).
@includelineno samples/cpp/tutorial_code/viz/widget_pose.cpp

Explanation
-----------

@ -130,6 +72,7 @@ while(!myWindow.wasStopped())
myWindow.spinOnce(1, true);
}
@endcode

Results
-------

@ -140,4 +83,3 @@ Here is the result of the program.
<iframe width="420" height="315" src="https://www.youtube.com/embed/22HKMN657U0" frameborder="0" allowfullscreen></iframe>
</div>
\endhtmlonly