Doxygen tutorials: cpp done

This commit is contained in:
Maksim Shabunin
2014-11-28 16:21:28 +03:00
parent c5536534d8
commit 36a04ef8de
92 changed files with 2142 additions and 3691 deletions


@@ -73,7 +73,7 @@ int main( int argc, char** argv )
Explanation
-----------
-1. Since we are going to perform:
+-# Since we are going to perform:
\f[g(x) = (1 - \alpha)f_{0}(x) + \alpha f_{1}(x)\f]
@@ -87,7 +87,7 @@ Explanation
Since we are *adding* *src1* and *src2*, they both have to be of the same size (width and
height) and type.
-2. Now we need to generate the `g(x)` image. For this, the function add_weighted:addWeighted comes quite handy:
+-# Now we need to generate the `g(x)` image. For this, the function @ref cv::addWeighted comes quite handy:
@code{.cpp}
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
@@ -96,9 +96,9 @@ Explanation
\f[dst = \alpha \cdot src1 + \beta \cdot src2 + \gamma\f]
In this case, `gamma` is the argument \f$0.0\f$ in the code above.
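As a plain-C++ sanity check of this formula (a sketch only — `blend` is our hypothetical helper, not an OpenCV API; real code should simply call addWeighted):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical helper: per-pixel weighted sum dst = alpha*src1 + beta*src2 + gamma,
// clamped to the valid 8-bit range, as addWeighted does for CV_8U images.
std::vector<std::uint8_t> blend(const std::vector<std::uint8_t>& src1,
                                const std::vector<std::uint8_t>& src2,
                                double alpha, double beta, double gamma)
{
    std::vector<std::uint8_t> dst(src1.size());
    for (std::size_t i = 0; i < src1.size(); ++i)
    {
        double v = alpha * src1[i] + beta * src2[i] + gamma;
        dst[i] = static_cast<std::uint8_t>(std::min(255.0, std::max(0.0, v)));
    }
    return dst;
}
```

With `alpha = beta = 0.5` a pixel pair (100, 200) blends to 150.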
-3. Create windows, show the images and wait for the user to end the program.
+-# Create windows, show the images and wait for the user to end the program.
Result
------
-![image](images/Adding_Images_Tutorial_Result_Big.jpg)
+![](images/Adding_Images_Tutorial_Result_Big.jpg)


@@ -52,7 +52,7 @@ Code
Explanation
-----------
-1. Since we plan to draw two examples (an atom and a rook), we have to create two images and two
+-# Since we plan to draw two examples (an atom and a rook), we have to create two images and two
windows to display them.
@code{.cpp}
/// Windows names
@@ -63,7 +63,7 @@ Explanation
Mat atom_image = Mat::zeros( w, w, CV_8UC3 );
Mat rook_image = Mat::zeros( w, w, CV_8UC3 );
@endcode
-2. We created functions to draw different geometric shapes. For instance, to draw the atom we used
+-# We created functions to draw different geometric shapes. For instance, to draw the atom we used
*MyEllipse* and *MyFilledCircle*:
@code{.cpp}
/// 1. Draw a simple atom:
@@ -77,7 +77,7 @@ Explanation
/// 1.b. Creating circles
MyFilledCircle( atom_image, Point( w/2.0, w/2.0) );
@endcode
-3. And to draw the rook we employed *MyLine*, *rectangle* and a *MyPolygon*:
+-# And to draw the rook we employed *MyLine*, *rectangle* and a *MyPolygon*:
@code{.cpp}
/// 2. Draw a rook
@@ -98,7 +98,7 @@ Explanation
MyLine( rook_image, Point( w/2, 7*w/8 ), Point( w/2, w ) );
MyLine( rook_image, Point( 3*w/4, 7*w/8 ), Point( 3*w/4, w ) );
@endcode
-4. Let's check what is inside each of these functions:
+-# Let's check what is inside each of these functions:
- *MyLine*
@code{.cpp}
void MyLine( Mat img, Point start, Point end )
@@ -240,5 +240,5 @@ Result
Compiling and running your program should give you a result like this:
-![image](images/Drawing_1_Tutorial_Result_0.png)
+![](images/Drawing_1_Tutorial_Result_0.png)


@@ -101,16 +101,16 @@ int main( int argc, char** argv )
Explanation
-----------
-1. We begin by creating parameters to save \f$\alpha\f$ and \f$\beta\f$ to be entered by the user:
+-# We begin by creating parameters to save \f$\alpha\f$ and \f$\beta\f$ to be entered by the user:
@code{.cpp}
double alpha;
int beta;
@endcode
-2. We load an image using @ref cv::imread and save it in a Mat object:
+-# We load an image using @ref cv::imread and save it in a Mat object:
@code{.cpp}
Mat image = imread( argv[1] );
@endcode
-3. Now, since we will make some transformations to this image, we need a new Mat object to store
+-# Now, since we will make some transformations to this image, we need a new Mat object to store
it. Also, we want this to have the following features:
- Initial pixel values equal to zero
@@ -121,7 +121,7 @@ Explanation
We observe that @ref cv::Mat::zeros returns a Matlab-style zero initializer based on
*image.size()* and *image.type()*
-4. Now, to perform the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ we will access each
+-# Now, to perform the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ we will access each
pixel in the image. Since we are operating with RGB images, we will have three values per pixel (R,
G and B), so we will also access them separately. Here is the piece of code:
@code{.cpp}
@@ -141,7 +141,7 @@ Explanation
integers (if \f$\alpha\f$ is float), we use cv::saturate_cast to make sure the
values are valid.
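A minimal sketch of the clamping involved (plain C++; `saturate_to_uchar` and `transform_pixel` are our stand-in names, not the real cv::saturate_cast implementation):

```cpp
#include <algorithm>
#include <cstdint>

// Stand-in for cv::saturate_cast<uchar>: round to nearest, then clamp to [0, 255].
std::uint8_t saturate_to_uchar(double v)
{
    double r = v < 0.0 ? v - 0.5 : v + 0.5;   // round half away from zero
    return static_cast<std::uint8_t>(std::min(255.0, std::max(0.0, r)));
}

// One channel of g(i,j) = alpha * f(i,j) + beta.
std::uint8_t transform_pixel(std::uint8_t f, double alpha, int beta)
{
    return saturate_to_uchar(alpha * f + beta);
}
```

For example, alpha = 2.2 and beta = 50 push a pixel value of 100 to 270, which saturates to 255.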
-5. Finally, we create windows and show the images, the usual way.
+-# Finally, we create windows and show the images, the usual way.
@code{.cpp}
namedWindow("Original Image", 1);
namedWindow("New Image", 1);
@@ -166,7 +166,7 @@ Result
- Running our code and using \f$\alpha = 2.2\f$ and \f$\beta = 50\f$
@code{.bash}
-\f$ ./BasicLinearTransforms lena.jpg
+$ ./BasicLinearTransforms lena.jpg
Basic Linear Transforms
-------------------------
* Enter the alpha value [1.0-3.0]: 2.2
@@ -175,4 +175,4 @@ Result
- We get this:
-![image](images/Basic_Linear_Transform_Tutorial_Result_big.jpg)
+![](images/Basic_Linear_Transform_Tutorial_Result_big.jpg)


@@ -22,10 +22,14 @@ OpenCV source code library.
Here's a sample usage of @ref cv::dft() :
-@includelineno cpp/tutorial_code/core/discrete_fourier_transform/discrete_fourier_transform.cpp
-lines
-1-4, 6, 20-21, 24-79
+@dontinclude cpp/tutorial_code/core/discrete_fourier_transform/discrete_fourier_transform.cpp
+@until highgui.hpp
+@skipline iostream
+@skip main
+@until {
+@skip filename
+@until return 0;
+@until }
Explanation
-----------
@@ -52,7 +56,7 @@ Fourier Transform too needs to be of a discrete type resulting in a Discrete Fou
(*DFT*). You'll want to use this whenever you need to determine the structure of an image from a
geometrical point of view. Here are the steps to follow (in case of a gray scale input image *I*):
-1. **Expand the image to an optimal size**. The performance of a DFT is dependent on the image
+-# **Expand the image to an optimal size**. The performance of a DFT is dependent on the image
size. It tends to be the fastest for image sizes that are multiples of the numbers two, three and
five. Therefore, to achieve maximal performance it is generally a good idea to pad border values
to the image to get a size with such traits. The @ref cv::getOptimalDFTSize() returns this
@@ -66,7 +70,7 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
@endcode
The appended pixels are initialized with zero.
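The size rule above can be sketched in plain C++ (an illustrative re-implementation under the stated assumption that "optimal" means the smallest size not below n whose only prime factors are 2, 3 and 5; real code should call @ref cv::getOptimalDFTSize()):

```cpp
// Smallest candidate >= n that factors completely into 2s, 3s and 5s.
int optimal_dft_size(int n)
{
    for (int candidate = n; ; ++candidate)
    {
        int r = candidate;
        while (r % 2 == 0) r /= 2;
        while (r % 3 == 0) r /= 3;
        while (r % 5 == 0) r /= 5;
        if (r == 1)
            return candidate;  // no prime factor other than 2, 3, 5 remains
    }
}
```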
-2. **Make place for both the complex and the real values**. The result of a Fourier Transform is
+-# **Make place for both the complex and the real values**. The result of a Fourier Transform is
complex. This implies that for each image value the result is two image values (one per
component). Moreover, the frequency domain's range is much larger than its spatial counterpart.
Hence, we usually store these at least in a *float* format. Therefore we'll convert our
@@ -76,12 +80,12 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
Mat complexI;
merge(planes, 2, complexI); // Add to the expanded another plane with zeros
@endcode
-3. **Make the Discrete Fourier Transform**. An in-place calculation is possible (same input as
+-# **Make the Discrete Fourier Transform**. An in-place calculation is possible (same input as
output):
@code{.cpp}
dft(complexI, complexI); // this way the result may fit in the source matrix
@endcode
-4. **Transform the real and complex values to magnitude**. A complex number has a real (*Re*) and a
+-# **Transform the real and complex values to magnitude**. A complex number has a real (*Re*) and a
complex (imaginary - *Im*) part. The results of a DFT are complex numbers. The magnitude of a
DFT is:
@@ -93,7 +97,7 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
magnitude(planes[0], planes[1], planes[0]);// planes[0] = magnitude
Mat magI = planes[0];
@endcode
-5. **Switch to a logarithmic scale**. It turns out that the dynamic range of the Fourier
+-# **Switch to a logarithmic scale**. It turns out that the dynamic range of the Fourier
coefficients is too large to be displayed on the screen. We have some small and some high
changing values that we can't observe like this. Therefore the high values will all turn out as
white points, while the small ones as black. To use the gray scale values for visualization
@@ -106,7 +110,7 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
magI += Scalar::all(1); // switch to logarithmic scale
log(magI, magI);
@endcode
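Per bin, the magnitude and log-scale steps above boil down to a one-liner (plain-C++ sketch; `log_magnitude` is our illustrative name):

```cpp
#include <cmath>

// Magnitude of one DFT bin, compressed to a logarithmic scale:
// M = sqrt(Re^2 + Im^2), then M1 = log(1 + M).
double log_magnitude(double re, double im)
{
    return std::log(1.0 + std::sqrt(re * re + im * im));
}
```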
-6. **Crop and rearrange**. Remember, that at the first step, we expanded the image? Well, it's time
+-# **Crop and rearrange**. Remember, that at the first step, we expanded the image? Well, it's time
to throw away the newly introduced values. For visualization purposes we may also rearrange the
quadrants of the result, so that the origin (zero, zero) corresponds with the image center.
@code{.cpp}
@@ -128,13 +132,14 @@ geometrical point of view. Here are the steps to follow (in case of a gray scale
q2.copyTo(q1);
tmp.copyTo(q2);
@endcode
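The quadrant shuffle can be sketched on a plain 2D array (illustrative only; like the OpenCV code above it assumes even dimensions after cropping):

```cpp
#include <utility>
#include <vector>

// Swap diagonally opposite quadrants so the (0,0) origin moves to the center:
// top-left <-> bottom-right and top-right <-> bottom-left.
void swap_quadrants(std::vector<std::vector<int>>& m)
{
    const std::size_t cy = m.size() / 2;
    const std::size_t cx = m[0].size() / 2;
    for (std::size_t y = 0; y < cy; ++y)
        for (std::size_t x = 0; x < cx; ++x)
        {
            std::swap(m[y][x],      m[y + cy][x + cx]);
            std::swap(m[y][x + cx], m[y + cy][x]);
        }
}
```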
-7. **Normalize**. This is done again for visualization purposes. We now have the magnitudes,
+-# **Normalize**. This is done again for visualization purposes. We now have the magnitudes,
however these are still out of our image display range of zero to one. We normalize our values to
this range using the @ref cv::normalize() function.
@code{.cpp}
normalize(magI, magI, 0, 1, NORM_MINMAX); // Transform the matrix with float values into a
// viewable image form (float between values 0 and 1).
@endcode
Result
------
@@ -147,13 +152,12 @@ image about a text.
In case of the horizontal text:
-![image](images/result_normal.jpg)
+![](images/result_normal.jpg)
In case of a rotated text:
-![image](images/result_rotated.jpg)
+![](images/result_rotated.jpg)
You can see that the most influential components of the frequency domain (brightest dots on the
magnitude image) follow the geometric rotation of objects on the image. From this we may calculate
the offset and perform an image rotation to correct possible misalignments.


@@ -22,10 +22,12 @@ library.
Here's a sample code of how to achieve all the stuff enumerated at the goal list.
-@includelineno cpp/tutorial_code/core/file_input_output/file_input_output.cpp
-lines
-1-7, 21-154
+@dontinclude cpp/tutorial_code/core/file_input_output/file_input_output.cpp
+@until std;
+@skip class MyData
+@until return 0;
+@until }
Explanation
-----------
@@ -36,7 +38,7 @@ structures you may serialize: *mappings* (like the STL map) and *element sequenc
vector). The difference between these is that in a map every element has a unique name through which
you may access it. For sequences you need to go through them to query a specific item.
-1. **XML/YAML File Open and Close.** Before you write any content to such file you need to open it
+-# **XML/YAML File Open and Close.** Before you write any content to such file you need to open it
and close it at the end. The XML/YAML data structure in OpenCV is @ref cv::FileStorage . To
specify the file on your hard drive that this structure binds to, you can use either its
constructor or its *open()* function:
@@ -56,7 +58,7 @@ you may access it. For sequences you need to go through them to query a specific
@code{.cpp}
fs.release(); // explicit close
@endcode
-2. **Input and Output of text and numbers.** The data structure uses the same \<\< output operator
+-# **Input and Output of text and numbers.** The data structure uses the same \<\< output operator
as the STL library. For outputting any type of data structure we first need to specify its
name. We do this by simply printing out its name. For basic types you may follow
this with the print of the value:
@@ -70,7 +72,7 @@ you may access it. For sequences you need to go through them to query a specific
fs["iterationNr"] >> itNr;
itNr = (int) fs["iterationNr"];
@endcode
-3. **Input/Output of OpenCV Data structures.** These behave exactly like the basic C++
+-# **Input/Output of OpenCV Data structures.** These behave exactly like the basic C++
types:
@code{.cpp}
Mat R = Mat_<uchar >::eye (3, 3),
@@ -82,7 +84,7 @@ you may access it. For sequences you need to go through them to query a specific
fs["R"] >> R; // Read cv::Mat
fs["T"] >> T;
@endcode
-4. **Input/Output of vectors (arrays) and associative maps.** As I mentioned beforehand, we can
+-# **Input/Output of vectors (arrays) and associative maps.** As I mentioned beforehand, we can
output maps and sequences (array, vector) too. Again, we first print the name of the variable and
then we have to specify whether our output is a sequence or a map.
@@ -121,7 +123,7 @@ you may access it. For sequences you need to go through them to query a specific
cout << "Two " << (int)(n["Two"]) << "; ";
cout << "One " << (int)(n["One"]) << endl << endl;
@endcode
-5. **Read and write your own data structures.** Suppose you have a data structure such as:
+-# **Read and write your own data structures.** Suppose you have a data structure such as:
@code{.cpp}
class MyData
{
@@ -180,6 +182,7 @@ you may access it. For sequences you need to go through them to query a specific
fs["NonExisting"] >> m; // Do not add a fs << "NonExisting" << m command for this to work
cout << endl << "NonExisting = " << endl << m << endl;
@endcode
Result
------
@@ -270,4 +273,3 @@ here](https://www.youtube.com/watch?v=A4yqVnByMMM) .
<iframe title="File Input and Output using XML and YAML files in OpenCV" width="560" height="349" src="http://www.youtube.com/embed/A4yqVnByMMM?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
\endhtmlonly


@@ -59,10 +59,10 @@ how_to_scan_images imageName.jpg intValueToReduce [G]
The final argument is optional. If given, the image will be loaded in gray scale format; otherwise
the RGB color format is used. The first thing is to calculate the lookup table.
-@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
-lines
-49-61
+@dontinclude cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
+@skip int divideWith
+@until table[i]
Here we first use the C++ *stringstream* class to convert the third command line argument from text
to an integer format. Then we use a simple loop and the formula above to calculate the lookup table.
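In plain C++ the table construction amounts to the following (an illustrative sketch of the tutorial's table; `make_lookup_table` is our hypothetical name):

```cpp
#include <array>
#include <cstdint>

// 256-entry lookup table for color space reduction: every intensity is
// snapped down to the nearest multiple of divideWith (integer division).
std::array<std::uint8_t, 256> make_lookup_table(int divideWith)
{
    std::array<std::uint8_t, 256> table{};
    for (int i = 0; i < 256; ++i)
        table[i] = static_cast<std::uint8_t>(divideWith * (i / divideWith));
    return table;
}
```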
@@ -88,26 +88,12 @@ As you could already read in my @ref tutorial_mat_the_basic_image_container tuto
depends on the color system used. More accurately, it depends on the number of channels used. In
case of a gray scale image we have something like:
-\f[\newcommand{\tabItG}[1] { \textcolor{black}{#1} \cellcolor[gray]{0.8}}
-\begin{tabular} {ccccc}
-~ & \multicolumn{1}{c}{Column 0} & \multicolumn{1}{c}{Column 1} & \multicolumn{1}{c}{Column ...} & \multicolumn{1}{c}{Column m}\\
-Row 0 & \tabItG{0,0} & \tabItG{0,1} & \tabItG{...} & \tabItG{0, m} \\
-Row 1 & \tabItG{1,0} & \tabItG{1,1} & \tabItG{...} & \tabItG{1, m} \\
-Row ... & \tabItG{...,0} & \tabItG{...,1} & \tabItG{...} & \tabItG{..., m} \\
-Row n & \tabItG{n,0} & \tabItG{n,1} & \tabItG{n,...} & \tabItG{n, m} \\
-\end{tabular}\f]
+![](tutorial_how_matrix_stored_1.png)
For multichannel images the columns contain as many sub columns as the number of channels. For
example in case of an RGB color system:
-\f[\newcommand{\tabIt}[1] { \textcolor{yellow}{#1} \cellcolor{blue} & \textcolor{black}{#1} \cellcolor{green} & \textcolor{black}{#1} \cellcolor{red}}
-\begin{tabular} {ccccccccccccc}
-~ & \multicolumn{3}{c}{Column 0} & \multicolumn{3}{c}{Column 1} & \multicolumn{3}{c}{Column ...} & \multicolumn{3}{c}{Column m}\\
-Row 0 & \tabIt{0,0} & \tabIt{0,1} & \tabIt{...} & \tabIt{0, m} \\
-Row 1 & \tabIt{1,0} & \tabIt{1,1} & \tabIt{...} & \tabIt{1, m} \\
-Row ... & \tabIt{...,0} & \tabIt{...,1} & \tabIt{...} & \tabIt{..., m} \\
-Row n & \tabIt{n,0} & \tabIt{n,1} & \tabIt{n,...} & \tabIt{n, m} \\
-\end{tabular}\f]
+![](tutorial_how_matrix_stored_2.png)
Note that the order of the channels is inverse: BGR instead of RGB. Because in many cases the memory
is large enough to store the rows in a successive fashion, the rows may follow one after another,
@@ -121,10 +107,9 @@ The efficient way
When it comes to performance you cannot beat the classic C style operator[] (pointer) access.
Therefore, the most efficient method we can recommend for making the assignment is:
@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
-lines
-126-153
+@skip Mat& ScanImageAndReduceC
+@until return
+@until }
Here we basically just acquire a pointer to the start of each row and go through it until it ends.
In the special case that the matrix is stored in a continuous manner we only need to request the
@@ -156,10 +141,9 @@ considered a safer way as it takes over these tasks from the user. All you need
begin and the end of the image matrix and then just increase the begin iterator until you reach the
end. To acquire the value *pointed* by the iterator use the \* operator (add it before it).
@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
-lines
-155-183
+@skip ScanImageAndReduceIterator
+@until return
+@until }
In case of color images we have three uchar items per column. This may be considered a short vector
of uchar items, that has been baptized in OpenCV with the *Vec3b* name. To access the n-th sub
@@ -177,10 +161,9 @@ what type we are looking at the image. It's no different here as you need manual
type to use for the automatic lookup. You can observe this in case of the gray scale images for the
following source code (the usage of the @ref cv::at() function):
@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
-lines
-185-217
+@skip ScanImageAndReduceRandomAccess
+@until return
+@until }
The function takes your input type and coordinates and calculates the address of the queried item
on the fly, then returns a reference to it. This may be a constant when you *get* the value and
@@ -209,17 +192,14 @@ OpenCV has a function that makes the modification without the need from you to w
the image. We use the @ref cv::LUT() function of the core module. First we build a Mat type of the
lookup table:
-@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
-lines
-108-111
+@dontinclude cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
+@skip Mat lookUpTable
+@until p[i] = table[i]
Finally call the function (I is our input image and J the output one):
-@includelineno cpp/tutorial_code/core/how_to_scan_images/how_to_scan_images.cpp
-lines
-116
+@skipline LUT
Performance Difference
----------------------



@@ -23,7 +23,7 @@ download it from [here](samples/cpp/tutorial_code/core/ippasync/ippasync_sample.
Explanation
-----------
-1. Create parameters for OpenCV:
+-# Create parameters for OpenCV:
@code{.cpp}
VideoCapture cap;
Mat image, gray, result;
@@ -36,7 +36,7 @@ Explanation
hppStatus sts;
hppiVirtualMatrix * virtMatrix;
@endcode
-2. Load input image or video. How to open and read a video stream is covered in the
+-# Load input image or video. How to open and read a video stream is covered in the
@ref tutorial_video_input_psnr_ssim tutorial.
@code{.cpp}
if( useCamera )
@@ -56,7 +56,7 @@ Explanation
return -1;
}
@endcode
-3. Create accelerator instance using
+-# Create accelerator instance using
[hppCreateInstance](http://software.intel.com/en-us/node/501686):
@code{.cpp}
accelType = sAccel == "cpu" ? HPP_ACCEL_TYPE_CPU:
@@ -67,12 +67,12 @@ Explanation
sts = hppCreateInstance(accelType, 0, &accel);
CHECK_STATUS(sts, "hppCreateInstance");
@endcode
-4. Create an array of virtual matrices using
+-# Create an array of virtual matrices using
[hppiCreateVirtualMatrices](http://software.intel.com/en-us/node/501700) function.
@code{.cpp}
virtMatrix = hppiCreateVirtualMatrices(accel, 1);
@endcode
-5. Prepare a matrix for input and output data:
+-# Prepare a matrix for input and output data:
@code{.cpp}
cap >> image;
if(image.empty())
@@ -82,7 +82,7 @@ Explanation
result.create( image.rows, image.cols, CV_8U);
@endcode
-6. Convert Mat to [hppiMatrix](http://software.intel.com/en-us/node/501660) using @ref cv::hpp::getHpp
+-# Convert Mat to [hppiMatrix](http://software.intel.com/en-us/node/501660) using @ref cv::hpp::getHpp
and call [hppiSobel](http://software.intel.com/en-us/node/474701) function.
@code{.cpp}
//convert Mat to hppiMatrix
@@ -104,14 +104,14 @@ Explanation
HPP_DATA_TYPE_16S data type for source matrix with HPP_DATA_TYPE_8U type. You should check
hppStatus after each call to an IPP Async function.
-7. Create windows and show the images, the usual way.
+-# Create windows and show the images, the usual way.
@code{.cpp}
imshow("image", image);
imshow("rez", result);
waitKey(15);
@endcode
-8. Delete hpp matrices.
+-# Delete hpp matrices.
@code{.cpp}
sts = hppiFreeMatrix(src);
CHECK_DEL_STATUS(sts,"hppiFreeMatrix");
@@ -119,7 +119,7 @@ Explanation
sts = hppiFreeMatrix(dst);
CHECK_DEL_STATUS(sts,"hppiFreeMatrix");
@endcode
-9. Delete virtual matrices and accelerator instance.
+-# Delete virtual matrices and accelerator instance.
@code{.cpp}
if (virtMatrix)
{
@@ -140,4 +140,4 @@ Result
After compiling the code above we can execute it giving an image or video path and accelerator type
as an argument. For this tutorial we use baboon.png image as input. The result is below.
-![image](images/How_To_Use_IPPA_Result.jpg)
+![](images/How_To_Use_IPPA_Result.jpg)


@@ -93,20 +93,18 @@ To further help on seeing the difference the programs supports two modes: one mi
one pure C++. If you define the *DEMO_MIXED_API_USE* you'll end up using the first. The program
separates the color planes, does some modifications on them and in the end merge them back together.
-@includelineno
-cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
-lines
-1-10, 23-26, 29-46
+@dontinclude cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
+@until namespace cv
+@skip ifdef
+@until endif
+@skip main
+@until endif
Here you can observe that with the new structure we have no pointer problems, although it is
possible to use the old functions and in the end just transform the result to a *Mat* object.
-@includelineno
-cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
-lines
-48-53
+@skip convert image
+@until split
Because we want to mess around with the image's luma component, we first convert from the default RGB
to the YUV color space and then split the result up into separate planes. Here the program splits:
@@ -116,11 +114,8 @@ image some Gaussian noise and then mix together the channels according to some f
The scanning version looks like:
-@includelineno
-cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
-lines
-57-77
+@skip #if 1
+@until #else
Here you can observe that we may go through all the pixels of an image in three fashions: an
iterator, a C pointer and an individual element access style. You can read a more in-depth
@@ -128,26 +123,20 @@ description of these in the @ref tutorial_how_to_scan_images tutorial. Convertin
names is easy. Just remove the cv prefix and use the new *Mat* data structure. Here's an example of
this by using the weighted addition function:
-@includelineno
-cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
-lines
-81-113
+@until planes[0]
+@until endif
As you may observe the *planes* variable is of type *Mat*. However, converting from *Mat* to
*IplImage* is easy and made automatically with a simple assignment operator.
-@includelineno
-cpp/tutorial_code/core/interoperability_with_OpenCV_1/interoperability_with_OpenCV_1.cpp
-lines
-117-129
+@skip merge(planes
+@until #endif
The new *imshow* highgui function accepts both the *Mat* and *IplImage* data structures. Compile and
run the program and if the first image below is your input you may get either the first or second as
output:
-![image](images/outputInteropOpenCV1.jpg)
+![](images/outputInteropOpenCV1.jpg)
You may observe a runtime instance of this on the [YouTube
here](https://www.youtube.com/watch?v=qckm-zvo31w) and you can [download the source code from here


@@ -130,7 +130,7 @@ difference.
For example:
-![image](images/resultMatMaskFilter2D.png)
+![](images/resultMatMaskFilter2D.png)
You can download this source code from [here
](samples/cpp/tutorial_code/core/mat_mask_operations/mat_mask_operations.cpp) or look in the


@@ -9,7 +9,7 @@ computed tomography, and magnetic resonance imaging to name a few. In every case
see are images. However, when transforming this to our digital devices what we record are numerical
values for each of the points of the image.
-![image](images/MatBasicImageForComputer.jpg)
+![](images/MatBasicImageForComputer.jpg)
For example in the above image you can see that the mirror of the car is nothing more than a matrix
containing all the intensity values of the pixel points. How we get and store the pixel values may
@@ -144,18 +144,18 @@ file by using the @ref cv::imwrite() function. However, for debugging purposes i
convenient to see the actual values. You can do this using the \<\< operator of *Mat*. Be aware that
this only works for two dimensional matrices.
@dontinclude cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
Although *Mat* works really well as an image container, it is also a general matrix class.
Therefore, it is possible to create and manipulate multidimensional matrices. You can create a Mat
object in multiple ways:
- @ref cv::Mat::Mat Constructor
@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines 27-28
+@skip Mat M(2
+@until cout
-![image](images/MatBasicContainerOut1.png)
+![](images/MatBasicContainerOut1.png)
For two dimensional and multichannel images we first define their size: row and column count wise.
@@ -173,11 +173,8 @@ object in multiple ways:
- Use C/C++ arrays and initialize via constructor
@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-35-36
+@skip int sz
+@until Mat L
The upper example shows how to create a matrix with more than two dimensions. Specify its
dimension, then pass a pointer containing the size for each dimension and the rest remains the
@@ -188,14 +185,14 @@ object in multiple ways:
IplImage* img = cvLoadImage("greatwave.png", 1);
Mat mtx(img); // convert IplImage* -> Mat
@endcode
- @ref cv::Mat::create function:
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines 31-32
+@code
+M.create(4,4, CV_8UC(2));
+cout << "M = "<< endl << " " << M << endl << endl;
+@endcode
-![image](images/MatBasicContainerOut2.png)
+![](images/MatBasicContainerOut2.png)
You cannot initialize the matrix values with this construction. It will only reallocate its matrix
data memory if the new size will not fit into the old one.
@@ -203,41 +200,31 @@ object in multiple ways:
- MATLAB style initializer: @ref cv::Mat::zeros , @ref cv::Mat::ones , @ref cv::Mat::eye . Specify size and
data type to use:
@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-40-47
+@skip Mat E
+@until cout
-![image](images/MatBasicContainerOut3.png)
+![](images/MatBasicContainerOut3.png)
- For small matrices you may use comma separated initializers:
@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines 50-51
+@skip Mat C
+@until cout
-![image](images/MatBasicContainerOut6.png)
+![](images/MatBasicContainerOut6.png)
- Create a new header for an existing *Mat* object and @ref cv::Mat::clone or @ref cv::Mat::copyTo it.
@includelineno
cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines 53-54
+@skip Mat RowClone
+@until cout
-![image](images/MatBasicContainerOut7.png)
+![](images/MatBasicContainerOut7.png)
@note
-You can fill out a matrix with random values using the @ref cv::randu() function. You need to
-give the lower and upper value for the random values:
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-57-58
+You can fill out a matrix with random values using the @ref cv::randu() function. You need to
+give the lower and upper value for the random values:
+@skip Mat R
+@until randu
Output formatting
-----------------
@@ -246,54 +233,26 @@ In the above examples you could see the default formatting option. OpenCV, howev
format your matrix output:
- Default
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-61
+@skipline (default)
-![image](images/MatBasicContainerOut8.png)
+![](images/MatBasicContainerOut8.png)
- Python
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-62
+@skipline (python)
-![image](images/MatBasicContainerOut16.png)
+![](images/MatBasicContainerOut16.png)
- Comma separated values (CSV)
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-64
+@skipline (csv)
-![image](images/MatBasicContainerOut10.png)
+![](images/MatBasicContainerOut10.png)
- Numpy
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-63
+@code
+cout << "R (numpy) = " << endl << format(R, Formatter::FMT_NUMPY ) << endl << endl;
+@endcode
-![image](images/MatBasicContainerOut9.png)
+![](images/MatBasicContainerOut9.png)
- C
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-65
+@skipline (c)
-![image](images/MatBasicContainerOut11.png)
+![](images/MatBasicContainerOut11.png)
Output of other common items
----------------------------
@@ -301,44 +260,24 @@ Output of other common items
OpenCV offers support for output of other common OpenCV data structures too via the \<\< operator:
- 2D Point
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-67-68
+@skip Point2f P
+@until cout
-![image](images/MatBasicContainerOut12.png)
+![](images/MatBasicContainerOut12.png)
- 3D Point
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-70-71
+@skip Point3f P3f
+@until cout
-![image](images/MatBasicContainerOut13.png)
+![](images/MatBasicContainerOut13.png)
- std::vector via cv::Mat
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-74-77
+@skip vector<float> v
+@until cout
-![image](images/MatBasicContainerOut14.png)
+![](images/MatBasicContainerOut14.png)
- std::vector of points
-@includelineno
-cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp
-lines
-79-83
+@skip vector<Point2f> vPoints
+@until cout
-![image](images/MatBasicContainerOut15.png)
+![](images/MatBasicContainerOut15.png)
Most of the samples here have been included in a small console application. You can download it from
[here](samples/cpp/tutorial_code/core/mat_the_basic_image_container/mat_the_basic_image_container.cpp)


@@ -25,7 +25,7 @@ Code
Explanation
-----------
-1. Let's start by checking out the *main* function. We observe that the first thing we do is create a
+-# Let's start by checking out the *main* function. We observe that the first thing we do is create a
*Random Number Generator* object (RNG):
@code{.cpp}
RNG rng( 0xFFFFFFFF );
@@ -33,7 +33,7 @@ Explanation
RNG implements a random number generator. In this example, *rng* is a RNG element initialized
with the value *0xFFFFFFFF*
-2. Then we create a matrix initialized to *zeros* (which means that it will appear as black),
+-# Then we create a matrix initialized to *zeros* (which means that it will appear as black),
specifying its height, width and its type:
@code{.cpp}
/// Initialize a matrix filled with zeros
@@ -42,7 +42,7 @@ Explanation
/// Show it in a window during DELAY ms
imshow( window_name, image );
@endcode
-3. Then we proceed to draw crazy stuff. After taking a look at the code, you can see that it is
+-# Then we proceed to draw crazy stuff. After taking a look at the code, you can see that it is
mainly divided in 8 sections, defined as functions:
@code{.cpp}
/// Now, let's draw some lines
@@ -79,7 +79,7 @@ Explanation
All of these functions follow the same pattern, so we will analyze only a couple of them, since
the same explanation applies for all.
-4. Checking out the function **Drawing_Random_Lines**:
+-# Checking out the function **Drawing_Random_Lines**:
@code{.cpp}
int Drawing_Random_Lines( Mat image, char* window_name, RNG rng )
{
@@ -133,11 +133,11 @@ Explanation
are used as the *R*, *G* and *B* parameters for the line color. Hence, the color of the
lines will be random too!
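The pattern — draw coordinates and colors from a uniform range — can be sketched with the standard library instead of cv::RNG (illustrative names only; the tutorial itself uses rng.uniform(x_1, x_2)):

```cpp
#include <random>

struct RandomPoint { int x, y; };

// Draw one point whose coordinates are uniformly distributed in [lo, hi],
// mirroring the rng.uniform(x_1, x_2) calls in the tutorial.
RandomPoint random_point(std::mt19937& gen, int lo, int hi)
{
    std::uniform_int_distribution<int> coord(lo, hi);
    return { coord(gen), coord(gen) };
}
```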
-5. The explanation above applies for the other functions generating circles, ellipses, polygons,
+-# The explanation above applies for the other functions generating circles, ellipses, polygons,
etc. The parameters such as *center* and *vertices* are also generated randomly.
-6. Before finishing, we also should take a look at the functions *Display_Random_Text* and
+-# Before finishing, we also should take a look at the functions *Display_Random_Text* and
*Displaying_Big_End*, since they both have a few interesting features:
-7. **Display_Random_Text:**
+-# **Display_Random_Text:**
@code{.cpp}
int Displaying_Random_Text( Mat image, char* window_name, RNG rng )
{
@@ -178,7 +178,7 @@ Explanation
As a result, we will get (analogously to the other drawing functions) **NUMBER** texts over our
image, in random locations.
-8. **Displaying_Big_End**
+-# **Displaying_Big_End**
@code{.cpp}
int Displaying_Big_End( Mat image, char* window_name, RNG rng )
{
@@ -222,28 +222,28 @@ Result
As you just saw in the Code section, the program will sequentially execute diverse drawing
functions, which will produce:
-1. First a random set of *NUMBER* lines will appear on screen, as can be seen in this
+-# First a random set of *NUMBER* lines will appear on screen, as can be seen in this
screenshot:
-![image](images/Drawing_2_Tutorial_Result_0.jpg)
+![](images/Drawing_2_Tutorial_Result_0.jpg)
-2. Then, a new set of figures, this time *rectangles*, will follow.
-3. Now some ellipses will appear, each of them with random position, size, thickness and arc
+-# Then, a new set of figures, this time *rectangles*, will follow.
+-# Now some ellipses will appear, each of them with random position, size, thickness and arc
length:
-![image](images/Drawing_2_Tutorial_Result_2.jpg)
+![](images/Drawing_2_Tutorial_Result_2.jpg)
-4. Now, *polylines* with 3 segments will appear on screen, again in random configurations.
+-# Now, *polylines* with 3 segments will appear on screen, again in random configurations.
-![image](images/Drawing_2_Tutorial_Result_3.jpg)
+![](images/Drawing_2_Tutorial_Result_3.jpg)
-5. Filled polygons (in this example triangles) will follow.
-6. The last geometric figure to appear: circles!
+-# Filled polygons (in this example triangles) will follow.
+-# The last geometric figure to appear: circles!
-![image](images/Drawing_2_Tutorial_Result_5.jpg)
+![](images/Drawing_2_Tutorial_Result_5.jpg)
-7. Near the end, the text *"Testing Text Rendering"* will appear in a variety of fonts, sizes,
+-# Near the end, the text *"Testing Text Rendering"* will appear in a variety of fonts, sizes,
colors and positions.
-8. And the big end (which by the way expresses a big truth too):
+-# And the big end (which by the way expresses a big truth too):
-![image](images/Drawing_2_Tutorial_Result_big.jpg)
+![](images/Drawing_2_Tutorial_Result_big.jpg)