Doxygen tutorials: python final edits

This commit is contained in:
Maksim Shabunin 2014-12-01 15:46:05 +03:00
parent 875f922332
commit 812ce48c36
49 changed files with 426 additions and 353 deletions

View File

@ -5,7 +5,7 @@ Goal
----
In this section,
- We will learn about distortions in camera, intrinsic and extrinsic parameters of camera etc.
- We will learn to find these parameters, undistort images etc.
Basics
@ -96,9 +96,11 @@ ones.
@sa Instead of chess board, we can use some circular grid, but then use the function
**cv2.findCirclesGrid()** to find the pattern. It is said that less number of images are enough when
using circular grid.
Once we find the corners, we can increase their accuracy using **cv2.cornerSubPix()**. We can also
draw the pattern using **cv2.drawChessboardCorners()**. All these steps are included in below code:
@code{.py}
import numpy as np
import cv2
@ -225,4 +227,3 @@ Exercises
---------
-# Try camera calibration with circular grid.

View File

@ -5,7 +5,7 @@ Goal
----
In this session,
- We will learn to create depth map from stereo images.
Basics
------

View File

@ -5,7 +5,7 @@ Goal
----
In this section,
- We will learn to exploit calib3d module to create some 3D effects in images.
Basics
------

View File

@ -21,29 +21,30 @@ Accessing and Modifying pixel values
Let's load a color image first:
@code{.py}
>>> import cv2
>>> import numpy as np
>>> img = cv2.imread('messi5.jpg')
@endcode
You can access a pixel value by its row and column coordinates. For BGR image, it returns an array
of Blue, Green, Red values. For grayscale image, just corresponding intensity is returned.
@code{.py}
>>> px = img[100,100]
>>> print px
[157 166 200]
# accessing only blue pixel
>>> blue = img[100,100,0]
>>> print blue
157
@endcode
You can modify the pixel values the same way.
@code{.py}
>>> img[100,100] = [255,255,255]
>>> print img[100,100]
[255 255 255]
@endcode
**warning**
Numpy is an optimized library for fast array calculations. So simply accessing each and every pixel
@ -52,18 +53,20 @@ values and modifying it will be very slow and it is discouraged.
@note Above mentioned method is normally used for selecting a region of array, say first 5 rows and
last 3 columns like that. For individual pixel access, Numpy array methods, array.item() and
array.itemset() are considered to be better. But it always returns a scalar. So if you want to access
all B,G,R values, you need to call array.item() separately for all.
Better pixel accessing and editing method :
@code{.py}
# accessing RED value
>>> img.item(10,10,2)
59
# modifying RED value
>>> img.itemset((10,10,2),100)
>>> img.item(10,10,2)
100
@endcode
Accessing Image Properties
--------------------------
@ -73,23 +76,29 @@ etc.
Shape of image is accessed by img.shape. It returns a tuple of number of rows, columns and channels
(if image is color):
@code{.py}
>>> print img.shape
(342, 548, 3)
@endcode
@note If image is grayscale, tuple returned contains only number of rows and columns. So it is a
good method to check if loaded image is grayscale or color image.
Total number of pixels is accessed by `img.size`:
@code{.py}
>>> print img.size
562248
@endcode
Image datatype is obtained by `img.dtype`:
@code{.py}
>>> print img.dtype
uint8
@endcode
@note img.dtype is very important while debugging because a large number of errors in OpenCV-Python
code are caused by invalid datatype.
Image ROI
---------
Sometimes, you will have to play with certain region of images. For eye detection in images, first
face detection is done all over the image and when face is obtained, we select the face region alone
@ -99,8 +108,8 @@ are always on faces :D ) and performance (because we search for a small area)
ROI is again obtained using Numpy indexing. Here I am selecting the ball and copying it to another
region in the image:
@code{.py}
>>> ball = img[280:340, 330:390]
>>> img[273:333, 100:160] = ball
@endcode
Check the results below:
@ -113,18 +122,19 @@ Sometimes you will need to work separately on B,G,R channels of image. Then you
BGR images to single planes. Or another time, you may need to join these individual channels to BGR
image. You can do it simply by:
@code{.py}
>>> b,g,r = cv2.split(img)
>>> img = cv2.merge((b,g,r))
@endcode
Or
@code
>>> b = img[:,:,0]
@endcode
Suppose, you want to make all the red pixels to zero, you need not split like this and put it equal
to zero. You can simply use Numpy indexing, and that is faster.
@code{.py}
>>> img[:,:,2] = 0
@endcode
**warning**
cv2.split() is a costly operation (in terms of time). So do it only if you need it. Otherwise go
@ -140,10 +150,9 @@ padding etc. This function takes following arguments:
- **src** - input image
- **top**, **bottom**, **left**, **right** - border width in number of pixels in corresponding
directions
- **borderType** - Flag defining what kind of border to be added. It can be following types:
- **cv2.BORDER_CONSTANT** - Adds a constant colored border. The value should be given
as next argument.
- **cv2.BORDER_REFLECT** - Border will be mirror reflection of the border elements,
like this : *fedcba|abcdefgh|hgfedcb*
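As a hedged usage sketch of the arguments above (the input file name is only a placeholder):
@code{.py}
import cv2

img = cv2.imread('opencv_logo.png')
blue = [255,0,0]
# 10-pixel constant-colored border on every side; value gives the color
constant = cv2.copyMakeBorder(img,10,10,10,10,cv2.BORDER_CONSTANT,value=blue)
# 10-pixel mirrored border
reflect = cv2.copyMakeBorder(img,10,10,10,10,cv2.BORDER_REFLECT)
@endcode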

View File

@ -16,15 +16,17 @@ res = img1 + img2. Both images should be of same depth and type, or second image
scalar value.
@note There is a difference between OpenCV addition and Numpy addition. OpenCV addition is a
saturated operation while Numpy addition is a modulo operation.
For example, consider below sample:
@code{.py}
>>> x = np.uint8([250])
>>> y = np.uint8([10])
>>> print cv2.add(x,y) # 250+10 = 260 => 255
[[255]]
>>> print x+y          # 250+10 = 260 % 256 = 4
[4]
@endcode
It will be more visible when you add two images. OpenCV function will provide a better result. So
@ -114,4 +116,3 @@ Exercises
-# Create a slide show of images in a folder with smooth transition between images using
cv2.addWeighted function

View File

@ -113,8 +113,11 @@ working on this issue)*
@note Python scalar operations are faster than Numpy scalar operations. So for operations including
one or two elements, Python scalar is better than Numpy arrays. Numpy takes advantage when size of
array is a little bit bigger.
We will try one more example. This time, we will compare the performance of **cv2.countNonZero()**
and **np.count_nonzero()** for same image.
@code{.py}
In [35]: %timeit z = cv2.countNonZero(img)
100000 loops, best of 3: 15.8 us per loop

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter
- We will see the basics of BRIEF algorithm
Theory
------

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will understand the basics of FAST algorithm
- We will find corners using OpenCV functionalities for FAST algorithm.
Theory

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will mix up the feature matching and findHomography from calib3d module to find known
objects in a complex image.
Basics

View File

@ -43,7 +43,7 @@ determine if a window can contain a corner or not.
\f[R = det(M) - k(trace(M))^2\f]
where
- \f$det(M) = \lambda_1 \lambda_2\f$
- \f$trace(M) = \lambda_1 + \lambda_2\f$
- \f$\lambda_1\f$ and \f$\lambda_2\f$ are the eigen values of M
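As a hedged sketch, OpenCV computes this response map directly with **cv2.cornerHarris()** (the parameter values are illustrative, and gray is assumed to be a grayscale input image):
@code{.py}
gray = np.float32(gray)                  # cornerHarris expects float32 input
dst = cv2.cornerHarris(gray,2,3,0.04)    # blockSize=2, ksize=3, k=0.04
img[dst > 0.01*dst.max()] = [0,0,255]    # mark strong responses in red
@endcode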

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter
- We will see how to match features in one image with others.
- We will use the Brute-Force matcher and FLANN Matcher in OpenCV
Basics of Brute-Force Matcher

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will see the basics of ORB
Theory
------

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will learn about the concepts of SIFT algorithm
- We will learn to find SIFT Keypoints and Descriptors.
Theory
@ -155,7 +155,7 @@ sift = cv2.SIFT()
kp, des = sift.detectAndCompute(gray,None)
@endcode
Here kp will be a list of keypoints and des is a numpy array of shape
\f$Number\_of\_Keypoints \times 128\f$.
So we got keypoints, descriptors etc. Now we want to see how to match keypoints in different images.
That we will learn in coming chapters.

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will see the basics of SURF
- We will see SURF functionalities in OpenCV
Theory
@ -76,40 +76,40 @@ and descriptors.
First we will see a simple demo on how to find SURF keypoints and descriptors and draw it. All
examples are shown in Python terminal since it is just the same as SIFT.
@code{.py}
>>> img = cv2.imread('fly.png',0)
# Create SURF object. You can specify params here or later.
# Here I set Hessian Threshold to 400
>>> surf = cv2.SURF(400)
# Find keypoints and descriptors directly
>>> kp, des = surf.detectAndCompute(img,None)
>>> len(kp)
699
@endcode
699 keypoints is too much to show in a picture. We reduce it to some 50 to draw it on an image.
While matching, we may need all those features, but not now. So we increase the Hessian Threshold.
@code{.py}
# Check present Hessian threshold
>>> print surf.hessianThreshold
400.0
# We set it to some 50000. Remember, it is just for representing in picture.
# In actual cases, it is better to have a value 300-500
>>> surf.hessianThreshold = 50000
# Again compute keypoints and check its number.
>>> kp, des = surf.detectAndCompute(img,None)
>>> print len(kp)
47
@endcode
It is less than 50. Let's draw it on the image.
@code{.py}
>>> img2 = cv2.drawKeypoints(img,kp,None,(255,0,0),4)
>>> plt.imshow(img2),plt.show()
@endcode
See the result below. You can see that SURF is more like a blob detector. It detects the white blobs
on wings of butterfly. You can test it with other images.
@ -119,16 +119,16 @@ on wings of butterfly. You can test it with other images.
Now I want to apply U-SURF, so that it won't find the orientation.
@code{.py}
# Check upright flag, if it False, set it to True
>>> print surf.upright
False
>>> surf.upright = True
# Recompute the feature points and draw it
>>> kp = surf.detect(img,None)
>>> img2 = cv2.drawKeypoints(img,kp,None,(255,0,0),4)
>>> plt.imshow(img2),plt.show()
@endcode
See the results below. All the orientations are shown in same direction. It is faster than the
previous. If you are working on cases where orientation is not a problem (like panorama stitching)
@ -139,19 +139,19 @@ etc, this is better.
Finally we check the descriptor size and change it to 128 if it is only 64-dim.
@code{.py}
# Find size of descriptor
>>> print surf.descriptorSize()
64
# That means flag, "extended" is False.
>>> surf.extended
False
# So we make it to True to get 128-dim descriptors.
>>> surf.extended = True
>>> kp, des = surf.detectAndCompute(img,None)
>>> print surf.descriptorSize()
128
>>> print des.shape
(47, 128)
@endcode
Remaining part is matching which we will do in another chapter.

View File

(image file changed; size unchanged at 14 KiB)

View File

@ -70,20 +70,18 @@ pts = np.array([[10,5],[20,30],[70,20],[50,10]], np.int32)
pts = pts.reshape((-1,1,2))
cv2.polylines(img,[pts],True,(0,255,255))
@endcode
@note If third argument is False, you will get polylines joining all the points, not a closed
shape.
@note cv2.polylines() can be used to draw multiple lines. Just create a list of all the lines you
want to draw and pass it to the function. All lines will be drawn individually. It is a much better
and faster way to draw a group of lines than calling cv2.line() for each line.
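A small hedged sketch of that batched call (assuming img and np as in the earlier snippets; the coordinates are illustrative):
@code{.py}
line1 = np.array([[10,10],[120,80]], np.int32).reshape((-1,1,2))
line2 = np.array([[10,80],[120,10]], np.int32).reshape((-1,1,2))
cv2.polylines(img,[line1,line2],False,(0,255,255))   # both segments drawn in one call
@endcode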
### Adding Text to Images:
To put texts in images, you need to specify following things.
- Text data that you want to write
- Position coordinates of where you want to put it (i.e. bottom-left corner where data starts).
- Font type (Check **cv2.putText()** docs for supported fonts)
- Font Scale (specifies the size of font)
@ -95,12 +93,13 @@ We will write **OpenCV** on our image in white color.
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(img,'OpenCV',(10,500), font, 4,(255,255,255),2,cv2.LINE_AA)
@endcode
### Result
So it is time to see the final result of our drawing. As you studied in previous articles, display
the image to see it.
![image](images/drawing_result.jpg)
Additional Resources
--------------------
@ -112,4 +111,3 @@ Exercises
---------
-# Try to create the logo of OpenCV using drawing functions available in OpenCV.

View File

@ -90,7 +90,7 @@ Result
----------
So it is time to see the final result of our drawing. As you studied in previous articles, display the image to see it.
.. image:: images/drawing_result.jpg
:alt: Drawing Functions in OpenCV
:align: center

View File

@ -23,8 +23,9 @@ Second argument is a flag which specifies the way image should be read.
- cv2.IMREAD_GRAYSCALE : Loads image in grayscale mode
- cv2.IMREAD_UNCHANGED : Loads image as such including alpha channel
@note Instead of these three flags, you can simply pass integers 1, 0 or -1 respectively.
See the code below:
@code{.py}
import numpy as np
import cv2
@ -32,6 +33,7 @@ import cv2
# Load a color image in grayscale
img = cv2.imread('messi5.jpg',0)
@endcode
**warning**
Even if the image path is wrong, it won't throw any error, but print img will give you None
@ -58,15 +60,19 @@ the program continues. If **0** is passed, it waits indefinitely for a key strok
set to detect specific key strokes like, if key a is pressed etc which we will discuss below.
@note Besides binding keyboard events this function also processes many other GUI events, so you
MUST use it to actually display the image.
**cv2.destroyAllWindows()** simply destroys all the windows we created. If you want to destroy any
specific window, use the function **cv2.destroyWindow()** where you pass the exact window name as
the argument.
@note There is a special case where you can already create a window and load image to it later. In
that case, you can specify whether window is resizable or not. It is done with the function
**cv2.namedWindow()**. By default, the flag is cv2.WINDOW_AUTOSIZE. But if you specify flag to be
cv2.WINDOW_NORMAL, you can resize window. It will be helpful when image is too large in dimension
and adding track bar to windows.
See the code below:
@code{.py}
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow('image',img)
@ -100,6 +106,7 @@ elif k == ord('s'): # wait for 's' key to save and exit
cv2.imwrite('messigray.png',img)
cv2.destroyAllWindows()
@endcode
**warning**
If you are using a 64-bit machine, you will have to modify k = cv2.waitKey(0) line as follows :
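The usual fix masks the return value to its lowest byte:
@code{.py}
k = cv2.waitKey(0) & 0xFF
@endcode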
@ -126,9 +133,13 @@ A screen-shot of the window will look like this :
![image](images/matplotlib_screenshot.jpg)
@sa Plenty of plotting options are available in Matplotlib. Please refer to Matplotlib docs for more
details. Some, we will see on the way.
__warning__
Color image loaded by OpenCV is in BGR mode. But Matplotlib displays in RGB mode. So color images
will not be displayed correctly in Matplotlib if image is read with OpenCV. Please see the exercises
for more details.
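A minimal sketch of the standard workaround, converting to RGB before plotting (the file name is a placeholder):
@code{.py}
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('messi5.jpg')                  # OpenCV loads in BGR order
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # reorder channels for Matplotlib
plt.imshow(img_rgb)
plt.show()
@endcode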
Additional Resources
--------------------
@ -140,4 +151,3 @@ Exercises
-# There is some problem when you try to load color image in OpenCV and display it in Matplotlib.
Read [this discussion](http://stackoverflow.com/a/15074748/1134940) and understand it.

View File

@ -60,9 +60,7 @@ For example, I can check the frame width and height by cap.get(3) and cap.get(4)
640x480 by default. But I want to modify it to 320x240. Just use ret = cap.set(3,320) and
ret = cap.set(4,240).
@note If you are getting an error, make sure camera is working fine using any other camera application
(like Cheese in Linux).
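A short sketch of those get/set calls (assuming a working default camera at index 0):
@code{.py}
import cv2

cap = cv2.VideoCapture(0)
print cap.get(3), cap.get(4)   # current frame width and height
ret = cap.set(3,320)           # request 320x240 frames
ret = cap.set(4,240)
@endcode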
Playing Video from file
@ -90,10 +88,9 @@ while(cap.isOpened()):
cap.release()
cv2.destroyAllWindows()
@endcode
@note Make sure proper versions of ffmpeg or gstreamer are installed. Sometimes, it is a headache to
work with Video Capture, mostly due to wrong installation of ffmpeg/gstreamer.
Saving a Video
--------------
@ -148,6 +145,7 @@ cap.release()
out.release()
cv2.destroyAllWindows()
@endcode
Additional Resources
--------------------

View File

@ -17,52 +17,55 @@ Canny Edge Detection is a popular edge detection algorithm. It was developed by
-# **Noise Reduction**
Since edge detection is susceptible to noise in the image, first step is to remove the noise in the
image with a 5x5 Gaussian filter. We have already seen this in previous chapters.
-# **Finding Intensity Gradient of the Image**
Smoothened image is then filtered with a Sobel kernel in both horizontal and vertical direction to
get first derivative in horizontal direction (\f$G_x\f$) and vertical direction (\f$G_y\f$). From these two
images, we can find edge gradient and direction for each pixel as follows:
\f[
Edge\_Gradient \; (G) = \sqrt{G_x^2 + G_y^2} \\
Angle \; (\theta) = \tan^{-1} \bigg(\frac{G_y}{G_x}\bigg)
\f]
Gradient direction is always perpendicular to edges. It is rounded to one of four angles
representing vertical, horizontal and two diagonal directions.
-# **Non-maximum Suppression**
After getting gradient magnitude and direction, a full scan of image is done to remove any unwanted
pixels which may not constitute the edge. For this, at every pixel, pixel is checked if it is a
local maximum in its neighborhood in the direction of gradient. Check the image below:
![image](images/nms.jpg)
Point A is on the edge ( in vertical direction). Gradient direction is normal to the edge. Point B
and C are in gradient directions. So point A is checked with point B and C to see if it forms a
local maximum. If so, it is considered for next stage, otherwise, it is suppressed ( put to zero).
In short, the result you get is a binary image with "thin edges".
-# **Hysteresis Thresholding**
This stage decides which of all edges are really edges and which are not. For this, we need two
threshold values, minVal and maxVal. Any edges with intensity gradient more than maxVal are sure to
be edges and those below minVal are sure to be non-edges, so discarded. Those who lie between these
two thresholds are classified edges or non-edges based on their connectivity. If they are connected
to "sure-edge" pixels, they are considered to be part of edges. Otherwise, they are also discarded.
See the image below:
![image](images/hysteresis.jpg)
The edge A is above the maxVal, so considered as "sure-edge". Although edge C is below maxVal, it is
connected to edge A, so that also considered as valid edge and we get that full curve. But edge B,
although it is above minVal and is in same region as that of edge C, it is not connected to any
"sure-edge", so it is discarded. So it is very important that we select minVal and maxVal
accordingly to get the correct result.
This stage also removes small pixels noises on the assumption that edges are long lines.
So what we finally get is strong edges in the image.
@ -74,7 +77,7 @@ argument is our input image. Second and third arguments are our minVal and maxVa
Third argument is aperture_size. It is the size of Sobel kernel used for find image gradients. By
default it is 3. Last argument is L2gradient which specifies the equation for finding gradient
magnitude. If it is True, it uses the equation mentioned above which is more accurate, otherwise it
uses this function: \f$Edge\_Gradient \; (G) = |G_x| + |G_y|\f$. By default, it is False.
@code{.py}
import cv2
import numpy as np
@ -98,8 +101,7 @@ Additional Resources
--------------------
-# Canny edge detector at [Wikipedia](http://en.wikipedia.org/wiki/Canny_edge_detector)
-# [Canny Edge Detection Tutorial](http://dasl.mem.drexel.edu/alumni/bGreen/www.pages.drexel.edu/_weg22/can_tut.html) by
Bill Green, 2002.
Exercises
@ -107,4 +109,3 @@ Exercises
-# Write a small application to find the Canny edge detection whose threshold values can be varied
using two trackbars. This way, you can understand the effect of threshold values.

View File

@ -22,13 +22,16 @@ For BGR \f$\rightarrow\f$ Gray conversion we use the flags cv2.COLOR_BGR2GRAY. S
\f$\rightarrow\f$ HSV, we use the flag cv2.COLOR_BGR2HSV. To get other flags, just run following
commands in your Python terminal :
@code{.py}
>>> import cv2
>>> flags = [i for i in dir(cv2) if i.startswith('COLOR_')]
>>> print flags
@endcode
@note For HSV, Hue range is [0,179], Saturation range is [0,255] and Value range is [0,255].
Different softwares use different scales. So if you are comparing OpenCV values with them, you need
to normalize these ranges.
Object Tracking
---------------
Now we know how to convert BGR image to HSV, we can use this to extract a colored object. In HSV, it
is easier to represent a color than RGB color-space. In our application, we will try to extract
@ -81,15 +84,19 @@ Below image shows tracking of the blue object:
@note This is the simplest method in object tracking. Once you learn functions of contours, you can
do plenty of things like find centroid of this object and use it to track the object, draw diagrams
just by moving your hand in front of camera and many other funny stuffs.
How to find HSV values to track?
--------------------------------
This is a common question found in [stackoverflow.com](www.stackoverflow.com). It is very simple and
you can use the same function, cv2.cvtColor(). Instead of passing an image, you just pass the BGR
values you want. For example, to find the HSV value of Green, try following commands in Python
terminal:
@code{.py}
>>> green = np.uint8([[[0,255,0 ]]])
>>> hsv_green = cv2.cvtColor(green,cv2.COLOR_BGR2HSV)
>>> print hsv_green
[[[ 60 255 255]]]
@endcode
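A hedged sketch of how such bounds are then used (hsv being a frame converted with cv2.COLOR_BGR2HSV; the exact numbers are illustrative):
@code{.py}
lower_green = np.array([50,100,100])   # H-10
upper_green = np.array([70,255,255])   # H+10
mask = cv2.inRange(hsv, lower_green, upper_green)   # white where the pixel is in range
@endcode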
Now you take [H-10, 100,100] and [H+10, 255, 255] as lower bound and upper bound respectively. Apart
@ -104,4 +111,3 @@ Exercises
-# Try to find a way to extract more than one colored objects, for eg, extract red, blue, green
objects simultaneously.

View File

@ -9,7 +9,7 @@ In this article, we will learn
- To find the different features of contours, like area, perimeter, centroid, bounding box etc
- You will see plenty of functions related to contours.
1. Moments
----------
Image moments help you to calculate some features like center of mass of the object, area of the
@ -36,6 +36,7 @@ follows:
cx = int(M['m10']/M['m00'])
cy = int(M['m01']/M['m00'])
@endcode
2. Contour Area
---------------
@ -43,6 +44,7 @@ Contour area is given by the function **cv2.contourArea()** or from moments, **M
@code{.py}
area = cv2.contourArea(cnt)
@endcode
3. Contour Perimeter
--------------------
@ -51,6 +53,7 @@ argument specify whether shape is a closed contour (if passed True), or just a c
@code{.py}
perimeter = cv2.arcLength(cnt,True)
@endcode
4. Contour Approximation
------------------------
@ -74,7 +77,7 @@ curve is closed or not.
![image](images/approx.jpg)
5. Convex Hull
--------------
Convex Hull will look similar to contour approximation, but it is not (Both may provide same results
@ -113,7 +116,7 @@ cnt[129] = [[234, 202]] which is same as first result (and so on for others).
You will see it again when we discuss about convexity defects.
6. Checking Convexity
---------------------
There is a function to check if a curve is convex or not, **cv2.isContourConvex()**. It just returns
@ -121,6 +124,7 @@ whether True or False. Not a big deal.
@code{.py}
k = cv2.isContourConvex(cnt)
@endcode
7. Bounding Rectangle
---------------------
@ -136,6 +140,7 @@ Let (x,y) be the top-left coordinate of the rectangle and (w,h) be its width and
x,y,w,h = cv2.boundingRect(cnt)
cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
@endcode
### 7.b. Rotated Rectangle
Here, bounding rectangle is drawn with minimum area, so it considers the rotation also. The function
@ -153,7 +158,7 @@ rectangle is the rotated rect.
![image](images/boundingrect.png)
8. Minimum Enclosing Circle
---------------------------
Next we find the circumcircle of an object using the function **cv2.minEnclosingCircle()**. It is a
@ -166,7 +171,7 @@ cv2.circle(img,center,radius,(0,255,0),2)
@endcode
![image](images/circumcircle.png)
9. Fitting an Ellipse
---------------------
Next one is to fit an ellipse to an object. It returns the rotated rectangle in which the ellipse is
@ -177,7 +182,7 @@ cv2.ellipse(img,ellipse,(0,255,0),2)
@endcode
![image](images/fitellipse.png)
10. Fitting a Line
------------------
Similarly we can fit a line to a set of points. Below image contains a set of white points. We can

View File

@ -8,50 +8,54 @@ documentation](http://www.mathworks.in/help/images/ref/regionprops.html).
*(NB : Centroid, Area, Perimeter etc also belong to this category, but we have seen it in last
chapter)*
1. Aspect Ratio
---------------
It is the ratio of width to height of bounding rect of the object.
\f[Aspect \; Ratio = \frac{Width}{Height}\f]
@code{.py}
x,y,w,h = cv2.boundingRect(cnt)
aspect_ratio = float(w)/h
@endcode
2. Extent
---------
Extent is the ratio of contour area to bounding rectangle area.
\f[Extent = \frac{Object \; Area}{Bounding \; Rectangle \; Area}\f]
@code{.py}
area = cv2.contourArea(cnt)
x,y,w,h = cv2.boundingRect(cnt)
rect_area = w*h
extent = float(area)/rect_area
@endcode
3. Solidity
-----------
Solidity is the ratio of contour area to its convex hull area.
\f[Solidity = \frac{Contour \; Area}{Convex \; Hull \; Area}\f]
@code{.py}
area = cv2.contourArea(cnt)
hull = cv2.convexHull(cnt)
hull_area = cv2.contourArea(hull)
solidity = float(area)/hull_area
@endcode
4. Equivalent Diameter
----------------------
Equivalent Diameter is the diameter of the circle whose area is same as the contour area.
\f[Equivalent \; Diameter = \sqrt{\frac{4 \times Contour \; Area}{\pi}}\f]
@code{.py}
area = cv2.contourArea(cnt)
equi_diameter = np.sqrt(4*area/np.pi)
@endcode
5. Orientation
--------------
@ -60,6 +64,7 @@ Minor Axis lengths.
@code{.py}
(x,y),(MA,ma),angle = cv2.fitEllipse(cnt)
@endcode
6. Mask and Pixel Points
------------------------
@ -75,13 +80,14 @@ are given to do the same. Results are also same, but with a slight difference. N
coordinates in **(row, column)** format, while OpenCV gives coordinates in **(x,y)** format. So
basically the answers will be interchanged. Note that, **row = x** and **column = y**.
7. Maximum Value, Minimum Value and their locations
---------------------------------------------------
We can find these parameters using a mask image.
@code{.py}
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(imgray,mask = mask)
@endcode
8. Mean Color or Mean Intensity
-------------------------------
@ -90,6 +96,7 @@ grayscale mode. We again use the same mask to do it.
@code{.py}
mean_val = cv2.mean(im,mask = mask)
@endcode
9. Extreme Points
-----------------
@ -111,4 +118,3 @@ Exercises
---------
-# There are still some features left in matlab regionprops doc. Try to implement them.

View File

@ -62,8 +62,11 @@ But most of the time, below method will be useful:
cnt = contours[4]
cv2.drawContours(img, [cnt], 0, (0,255,0), 3)
@endcode
@note Last two methods are same, but when you go forward, you will see last one is more useful.
Contour Approximation Method
============================
This is the third argument in cv2.findContours function. What does it denote actually?
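A rough sketch of the difference between the two common flags (thresh is assumed to be a binary image; note that the number of values cv2.findContours returns differs between OpenCV versions):
@code{.py}
# CHAIN_APPROX_NONE stores every boundary point
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
print len(contours[0])
# CHAIN_APPROX_SIMPLE compresses straight segments to their end points
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
print len(contours[0])
@endcode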

View File

@ -55,26 +55,35 @@ So each contour has its own information regarding what hierarchy it is, who is i
parent etc. OpenCV represents it as an array of four values : **[Next, Previous, First_Child,
Parent]**
<center>*"Next denotes next contour at the same hierarchical level."*</center>
For eg, take contour-0 in our picture. Who is next contour in its same level ? It is contour-1. So
simply put Next = 1. Similarly for Contour-1, next is contour-2. So Next = 2.
What about contour-2? There is no next contour in the same level. So simply, put Next = -1. What
about contour-4? It is in same level with contour-5. So its next contour is contour-5, so Next = 5.
<center>*"Previous denotes previous contour at the same hierarchical level."*</center>
It is same as above. Previous contour of contour-1 is contour-0 in the same level. Similarly for
contour-2, it is contour-1. And for contour-0, there is no previous, so put it as -1.
<center>*"First_Child denotes its first child contour."*</center>
There is no need of any explanation. For contour-2, child is contour-2a. So it gets the
corresponding index value of contour-2a. What about contour-3a? It has two children. But we take
only first child. And it is contour-4. So First_Child = 4 for contour-3a.
<center>*"Parent denotes index of its parent contour."*</center>
It is just opposite of **First_Child**. Both for contour-4 and contour-5, parent contour is
contour-3a. For contour-3a, it is contour-3 and so on.
@note If there is no child or parent, that field is taken as -1
So now we know about the hierarchy style used in OpenCV, we can check into Contour Retrieval Modes
in OpenCV with the help of same image given above. ie what do flags like cv2.RETR_LIST,
cv2.RETR_TREE, cv2.RETR_CCOMP, cv2.RETR_EXTERNAL etc mean?
Contour Retrieval Mode
----------------------
@ -92,7 +101,7 @@ Below is the result I got, and each row is hierarchy details of corresponding co
row corresponds to contour 0. Next contour is contour 1. So Next = 1. There is no previous contour,
so Previous = 0. And the remaining two, as told before, it is -1.
@code{.py}
>>> hierarchy
array([[[ 1, -1, -1, -1],
[ 2, 0, -1, -1],
[ 3, 1, -1, -1],
@ -114,7 +123,7 @@ So, in our image, how many extreme outer contours are there? ie at hierarchy-0 l
contours 0,1,2, right? Now try to find the contours using this flag. Here also, values given to each
element is same as above. Compare it with above result. Below is what I got :
@code{.py}
>>> hierarchy
array([[[ 1, -1, -1, -1],
[ 2, 0, -1, -1],
[-1, 1, -1, -1]]])
@ -159,7 +168,7 @@ no child, parent is contour-3. So array is [-1,-1,-1,3].
Remaining you can fill up. This is the final answer I got:
@code{.py}
>>> hierarchy
array([[[ 3, -1, 1, -1],
[ 2, -1, -1, 0],
[-1, 1, -1, 0],
@ -170,6 +179,7 @@ array([[[ 3, -1, 1, -1],
[ 8, 5, -1, -1],
[-1, 7, -1, -1]]])
@endcode
### 4. RETR_TREE
And this is the final guy, Mr.Perfect. It retrieves all the contours and creates a full family
@ -189,7 +199,7 @@ contour-2. Parent is contour-0. So array is [-1,-1,2,0].
And remaining, try yourself. Below is the full answer:
@code{.py}
>>> hierarchy
array([[[ 7, -1, 1, -1],
[-1, -1, 2, 0],
[-1, -1, 3, 1],
@ -200,6 +210,7 @@ array([[[ 7, -1, 1, -1],
[ 8, 0, -1, -1],
[-1, 7, -1, -1]]])
@endcode
Additional Resources
--------------------

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter, we will learn about
- Convexity defects and how to find them.
- Finding shortest distance from a point to a polygon
- Matching different shapes
@ -23,11 +23,15 @@ call would look like below:
hull = cv2.convexHull(cnt,returnPoints = False)
defects = cv2.convexityDefects(cnt,hull)
@endcode
@note Remember we have to pass returnPoints = False while finding convex hull, in order to find
convexity defects.
It returns an array where each row contains these values - **[ start point, end point, farthest
point, approximate distance to farthest point ]**. We can visualize it using an image. We draw a
line joining start point and end point, then draw a circle at the farthest point. Remember first
three values returned are indices of cnt. So we have to bring those values from cnt.
@code{.py}
import cv2
import numpy as np
@ -72,8 +76,9 @@ False, it finds whether the point is inside or outside or on the contour (it ret
respectively).
@note If you don't want to find the distance, make sure third argument is False, because, it is a
time consuming process. So, making it False gives about 2-3X speedup.
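A tiny sketch of the two modes (the query point is illustrative):
@code{.py}
dist = cv2.pointPolygonTest(cnt,(50,50),True)     # signed distance to the contour
inside = cv2.pointPolygonTest(cnt,(50,50),False)  # +1 inside, -1 outside, 0 on the contour
@endcode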
### 3. Match Shapes
OpenCV comes with a function **cv2.matchShapes()** which enables us to compare two shapes, or two
contours and returns a metric showing the similarity. The lower the result, the better match it is.
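A minimal sketch, assuming cnt1 and cnt2 were extracted beforehand with cv2.findContours:
@code{.py}
ret = cv2.matchShapes(cnt1,cnt2,1,0.0)   # method 1: Hu-moment based distance
print ret
@endcode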
@ -110,7 +115,10 @@ See, even image rotation doesn't affect much on this comparison.
@sa [Hu-Moments](http://en.wikipedia.org/wiki/Image_moment#Rotation_invariant_moments) are seven
moments invariant to translation, rotation and scale. Seventh one is skew-invariant. Those values
can be found using **cv2.HuMoments()** function.
Additional Resources
====================
Exercises
---------
@ -120,6 +128,5 @@ Exercises
inside curve is blue depending on the distance. Similarly outside points are red. Contour edges
are marked with White. So problem is simple. Write a code to create such a representation of
distance.
-# Compare images of digits or letters using **cv2.matchShapes()**. ( That would be a simple step
towards OCR )

View File

@ -5,7 +5,7 @@ Goals
-----
Learn to:
- Blur the images with various low pass filters
- Apply custom-made filters to images (2D convolution)
2D Convolution ( Image Filtering )
@ -61,7 +61,9 @@ specify the width and height of kernel. A 3x3 normalized box filter would look l
\f[K = \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}\f]
@note If you don't want to use normalized box filter, use **cv2.boxFilter()**. Pass an argument
normalize=False to the function.
Check a sample demo below with a kernel of 5x5 size:
@code{.py}
import cv2
import numpy as np

View File

(image file changed; size unchanged at 22 KiB)

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter
- We will see GrabCut algorithm to extract foreground in images
- We will create an interactive application for this.
Theory
@ -59,7 +59,7 @@ So what happens in background ?
It is illustrated in below image (Image Courtesy: <http://www.cs.ru.ac.za/research/g02m1682/>)
![image](images/grabcut_scheme.jpg)
Demo
----
@ -152,6 +152,5 @@ Exercises
-# OpenCV samples contain a sample grabcut.py which is an interactive tool using grabcut. Check it.
Also watch this [youtube video](http://www.youtube.com/watch?v=kAwxLTDDAwU) on how to use it.
-# Here, you can make this into an interactive sample with drawing rectangle and strokes with mouse,
create trackbar to adjust stroke width etc.

View File

@ -36,7 +36,7 @@ So what happens in background ?
It is illustrated in below image (Image Courtesy: http://www.cs.ru.ac.za/research/g02m1682/)
.. image:: images/grabcut_scheme.jpg
:alt: Simplified Diagram of GrabCut Algorithm
:align: center

View File

@ -82,6 +82,7 @@ idea what color is there on a first look, unless you know the Hue values of diff
I prefer this method. It is simple and better.
@note While using this function, remember, interpolation flag should be nearest for better results.
Consider code:
@code{.py}
import cv2

View File

@ -5,7 +5,7 @@ Goal
----
Learn to
- Find histograms, using both OpenCV and Numpy functions
- Plot histograms, using OpenCV and Matplotlib functions
- You will see these functions : **cv2.calcHist()**, **np.histogram()** etc.
@ -39,8 +39,8 @@ terminologies related with histograms.
**BINS** : The above histogram shows the number of pixels for every pixel value, ie from 0 to 255. ie
you need 256 values to show the above histogram. But consider, what if you need not find the number
of pixels for all pixel values separately, but number of pixels in a interval of pixel values? say
for example, you need to find the number of pixels lying between 0 to 15, then 16 to 31, ..., 240 to 255.
You will need only 16 values to represent the histogram. And that is what is shown in example
given in [OpenCV Tutorials on
histograms](http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_calculation/histogram_calculation.html#histogram-calculation).
@ -60,18 +60,20 @@ intensity values.
So now we use **cv2.calcHist()** function to find the histogram. Let's familiarize with the function
and its parameters :
<center><em>cv2.calcHist(images, channels, mask, histSize, ranges[, hist[, accumulate]])</em></center>
-# images : it is the source image of type uint8 or float32. it should be given in square brackets,
ie, "[img]".
-# channels : it is also given in square brackets. It is the index of channel for which we
calculate histogram. For example, if input is grayscale image, its value is [0]. For color
image, you can pass [0], [1] or [2] to calculate histogram of blue, green or red channel
respectively.
-# mask : mask image. To find histogram of full image, it is given as "None". But if you want to
find histogram of particular region of image, you have to create a mask image for that and give
it as mask. (I will show an example later.)
-# histSize : this represents our BIN count. Need to be given in square brackets. For full scale,
we pass [256].
-# ranges : this is our RANGE. Normally, it is [0,256].
So let's start with a sample image. Simply load an image in grayscale mode and find its full
histogram.
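A minimal sketch of that call (the file name is a placeholder):
@code{.py}
img = cv2.imread('home.jpg',0)                       # load in grayscale
hist = cv2.calcHist([img],[0],None,[256],[0,256])    # 256-bin full-range histogram
@endcode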
@ -98,13 +100,15 @@ np.histogram(). So for one-dimensional histograms, you can better try that. Don'
minlength = 256 in np.bincount. For example, hist = np.bincount(img.ravel(),minlength=256)
@note OpenCV function is faster (around 40X) than np.histogram(). So stick with OpenCV
function.
Now we should plot histograms, but how?
Plotting Histograms
-------------------
There are two ways for this,
-# Short Way : use Matplotlib plotting functions
-# Long Way : use OpenCV drawing functions
### 1. Using Matplotlib
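The short way boils down to a one-liner; a sketch assuming img is the grayscale image from above:
@code{.py}
plt.hist(img.ravel(),256,[0,256])
plt.show()
@endcode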

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will learn to use Hough Transform to find circles in an image.
- We will see these functions: **cv2.HoughCircles()**
Theory

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will understand the concept of Hough Transform.
- We will see how to use it to detect lines in an image.
- We will see following functions: **cv2.HoughLines()**, **cv2.HoughLinesP()**
@ -49,19 +49,15 @@ each point, the cell (50,90) will be incremented or voted up, while other cells
voted up. This way, at the end, the cell (50,90) will have maximum votes. So if you search the
accumulator for maximum votes, you get the value (50,90) which says, there is a line in this image
at distance 50 from origin and at angle 90 degrees. It is well shown in below animation (Image
Courtesy: [Amos Storkey](http://homepages.inf.ed.ac.uk/amos/hough.html) )
![](images/houghlinesdemo.gif)
This is how hough transform for lines works. It is simple, and may be you can implement it using
Numpy on your own. Below is an image which shows the accumulator. Bright spots at some locations
denotes they are the parameters of possible lines in the image. (Image courtesy: [Wikipedia](http://en.wikipedia.org/wiki/Hough_transform))
![](images/houghlines2.jpg)
Hough Transform in OpenCV
=========================
@ -112,9 +108,11 @@ Bettinger's home page](http://phdfb1.free.fr/robot/mscthesis/node14.html)
![image](images/houghlines4.png)
OpenCV implementation is based on Robust Detection of Lines Using the Progressive Probabilistic
Hough Transform by Matas, J. and Galambos, C. and Kittler, J.V.. The function used is
**cv2.HoughLinesP()**. It has two new arguments.
- **minLineLength** - Minimum length of line. Line segments shorter than this are rejected.
- **maxLineGap** - Maximum allowed gap between line segments to treat them as single line.
Best thing is that, it directly returns the two endpoints of lines. In previous case, you got only
the parameters of lines, and you had to find all the points. Here, everything is direct and simple.
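A hedged sketch of the call (threshold values are illustrative; edges would come from cv2.Canny, and the output array layout differs slightly between OpenCV versions):
@code{.py}
lines = cv2.HoughLinesP(edges,1,np.pi/180,100,minLineLength=100,maxLineGap=10)
for x1,y1,x2,y2 in lines[0]:                 # OpenCV 2.4-style (1,N,4) output
    cv2.line(img,(x1,y1),(x2,y2),(0,255,0),2)
@endcode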

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will learn different morphological operations like Erosion, Dilation, Opening, Closing
etc.
- We will see different functions like : **cv2.erode()**, **cv2.dilate()**,
**cv2.morphologyEx()** etc.
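A compact sketch of those three calls (img and the kernel here are placeholders):
@code{.py}
kernel = np.ones((5,5),np.uint8)
erosion = cv2.erode(img,kernel,iterations = 1)
dilation = cv2.dilate(img,kernel,iterations = 1)
opening = cv2.morphologyEx(img,cv2.MORPH_OPEN,kernel)
@endcode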
@ -122,9 +122,9 @@ We manually created a structuring elements in the previous examples with help of
rectangular shape. But in some cases, you may need elliptical/circular shaped kernels. So for this
purpose, OpenCV has a function, **cv2.getStructuringElement()**. You just pass the shape and size of
the kernel, you get the desired kernel.
@code{.py}
# Rectangular Kernel
>>> cv2.getStructuringElement(cv2.MORPH_RECT,(5,5))
array([[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1],
@ -132,7 +132,7 @@ array([[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1]], dtype=uint8)
# Elliptical Kernel
>>> cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
array([[0, 0, 1, 0, 0],
[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1],
@ -140,7 +140,7 @@ array([[0, 0, 1, 0, 0],
[0, 0, 1, 0, 0]], dtype=uint8)
# Cross-shaped Kernel
>>> cv2.getStructuringElement(cv2.MORPH_CROSS,(5,5))
array([[0, 0, 1, 0, 0],
[0, 0, 1, 0, 0],
[1, 1, 1, 1, 1],

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will learn about Image Pyramids
- We will use Image pyramids to create a new fruit, "Orapple"
- We will see these functions: **cv2.pyrUp()**, **cv2.pyrDown()**

View File

@ -5,7 +5,7 @@ Goals
-----
In this chapter, you will learn
- To find objects in an image using Template Matching
- You will see these functions : **cv2.matchTemplate()**, **cv2.minMaxLoc()**
Theory
@ -24,7 +24,9 @@ is the maximum/minimum value. Take it as the top-left corner of rectangle and ta
and height of the rectangle. That rectangle is your region of template.
@note If you are using cv2.TM_SQDIFF as comparison method, minimum value gives the best match.
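A hedged sketch of that bookkeeping (res being the cv2.matchTemplate output, w and h the template size):
@code{.py}
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
top_left = max_loc     # use min_loc instead for cv2.TM_SQDIFF / TM_SQDIFF_NORMED
cv2.rectangle(img, top_left, (top_left[0]+w, top_left[1]+h), 255, 2)
@endcode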
Template Matching in OpenCV
---------------------------
Here, as an example, we will search for Messi's face in his photo. So I created a template as below:

View File

@ -54,7 +54,9 @@ for i in xrange(6):
plt.show()
@endcode
@note To plot multiple images, we have used plt.subplot() function. Please checkout Matplotlib docs
for more details.
Result is given below :
![image](images/threshold.jpg)
@ -70,7 +72,7 @@ results for images with varying illumination.
It has three special input params and only one output argument.
**Adaptive Method** - It decides how thresholding value is calculated.
- cv2.ADAPTIVE_THRESH_MEAN_C : threshold value is the mean of neighbourhood area.
- cv2.ADAPTIVE_THRESH_GAUSSIAN_C : threshold value is the weighted sum of neighbourhood
values where weights are a gaussian window.
@ -229,4 +231,3 @@ Exercises
---------
-# There are some optimizations available for Otsu's binarization. You can search and implement it.

View File

@ -5,7 +5,7 @@ Goal
----
In this section, we will learn
- To find the Fourier Transform of images using OpenCV
- To utilize the FFT functions available in Numpy
- Some applications of Fourier Transform
- We will see following functions : **cv2.dft()**, **cv2.idft()** etc
@ -134,11 +134,14 @@ plt.subplot(122),plt.imshow(magnitude_spectrum, cmap = 'gray')
plt.title('Magnitude Spectrum'), plt.xticks([]), plt.yticks([])
plt.show()
@endcode
@note You can also use **cv2.cartToPolar()** which returns both magnitude and phase in a single shot
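A one-line sketch (assuming dft_shift is the shifted two-channel cv2.dft output, as in the surrounding code):
@code{.py}
mag, ang = cv2.cartToPolar(dft_shift[:,:,0], dft_shift[:,:,1])
@endcode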
So, now we have to do inverse DFT. In previous session, we created a HPF, this time we will see how
to remove high frequency contents in the image, ie we apply LPF to image. It actually blurs the
image. For this, we create a mask first with high value (1) at low frequencies, ie we pass the LF
content, and 0 at HF region.
@code{.py}
rows, cols = img.shape
crow,ccol = rows/2 , cols/2
@ -165,7 +168,10 @@ See the result:
@note As usual, OpenCV functions **cv2.dft()** and **cv2.idft()** are faster than Numpy
counterparts. But Numpy functions are more user-friendly. For more details about performance issues,
see below section.
Performance Optimization of DFT
===============================
Performance of DFT calculation is better for some array size. It is fastest when array size is power
of two. The arrays whose size is a product of 2s, 3s, and 5s are also processed quite

View File

@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will learn to use marker-based image segmentation using watershed algorithm
- We will see: **cv2.watershed()**
Theory
@ -146,4 +146,3 @@ Exercises
-# OpenCV samples has an interactive sample on watershed segmentation, watershed.py. Run it, Enjoy
it, then learn it.

View File

@ -13,39 +13,32 @@ Understanding Parameters
-# **samples** : It should be of **np.float32** data type, and each feature should be put in a
single column.
-# **nclusters(K)** : Number of clusters required at end
-# **criteria** : It is the iteration termination criteria. When this criteria is satisfied, algorithm iteration stops. Actually, it should be a tuple of 3 parameters. They are `( type, max_iter, epsilon )`:
-# type of termination criteria. It has 3 flags as below:
- **cv2.TERM_CRITERIA_EPS** - stop the algorithm iteration if specified accuracy, *epsilon*, is reached.
- **cv2.TERM_CRITERIA_MAX_ITER** - stop the algorithm after the specified number of iterations, *max_iter*.
- **cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER** - stop the iteration when any of the above condition is met.
-# max_iter - An integer specifying maximum number of iterations.
-# epsilon - Required accuracy
-# **attempts** : Flag to specify the number of times the algorithm is executed using different
initial labellings. The algorithm returns the labels that yield the best compactness. This
compactness is returned as output.
-# **flags** : This flag is used to specify how initial centers are taken. Normally two flags are
used for this : **cv2.KMEANS_PP_CENTERS** and **cv2.KMEANS_RANDOM_CENTERS**.
### Output parameters
-# **compactness** : It is the sum of squared distances from each point to its corresponding
center.
-# **labels** : This is the label array (same as 'code' in the previous article) where each
element is marked '0', '1', ...
-# **centers** : This is the array of centers of clusters.
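Putting the input and output parameters together, a minimal sketch of the call (the data here is
made up; note that some OpenCV versions also take a bestLabels argument between the data and the
criteria, for which None can be passed):
@code{.py}
import numpy as np
import cv2

# hypothetical one-dimensional data: 25 random values as an Nx1 float32 array
samples = np.random.randint(0, 100, (25, 1)).astype(np.float32)

# stop after 10 iterations or when an accuracy (epsilon) of 1.0 is reached
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)

# K = 2 clusters, 10 attempts, random initial centers
compactness, labels, centers = cv2.kmeans(samples, 2, criteria, 10,
                                          cv2.KMEANS_RANDOM_CENTERS)
@endcode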
Now we will see how to apply K-Means algorithm with three examples.
1. Data with Only One Feature
-----------------------------
Consider a set of data with only one feature, i.e. one-dimensional. For example, we can take our
@ -104,7 +97,7 @@ Below is the output we got:
![image](images/oc_1d_clustered.png)
2. Data with Multiple Features
------------------------------
In the previous example, we took only height for the t-shirt problem. Here, we will take both height and
@ -153,7 +146,7 @@ Below is the output we get:
![image](images/oc_2d_clustered.jpg)
3. Color Quantization
---------------------
Color Quantization is the process of reducing the number of colors in an image. One reason to do so is
View File
@ -62,9 +62,9 @@ Now **Step - 2** and **Step - 3** are iterated until both centroids are converge
*(Or it may be stopped depending on the criteria we provide, like a maximum number of iterations, or
a specific accuracy being reached, etc.)* **These points are such that the sum of distances between
the test data and their corresponding centroids is minimum**. Or simply, the sum of distances between
\f$C1 \leftrightarrow Red\_Points\f$ and \f$C2 \leftrightarrow Blue\_Points\f$ is minimum.
\f[minimize \;\bigg[J = \sum_{All\: Red\_Points}distance(C1,Red\_Point) + \sum_{All\: Blue\_Points}distance(C2,Blue\_Point)\bigg]\f]
The final result looks almost like below:
View File
@ -5,7 +5,7 @@ Goal
----
In this chapter
- We will use our knowledge on kNN to build a basic OCR application.
- We will try it with the digits and alphabets data that comes with OpenCV (a minimal kNN sketch follows).
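As a refresher, a minimal sketch of the kNN calls we will build on (random data just to show the
call pattern; the OpenCV 2.4-style cv2.KNearest API is assumed):
@code{.py}
import numpy as np
import cv2

# hypothetical training data: 25 samples with 2 features each, 2 classes
trainData = np.random.randint(0, 100, (25, 2)).astype(np.float32)
responses = np.random.randint(0, 2, (25, 1)).astype(np.float32)

knn = cv2.KNearest()
knn.train(trainData, responses)

newcomer = np.random.randint(0, 100, (1, 2)).astype(np.float32)
ret, results, neighbours, dist = knn.find_nearest(newcomer, 5)
@endcode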
OCR of Hand-written Digits
View File
@ -5,7 +5,7 @@ Goal
----
In this chapter
- We will get an intuitive understanding of SVM
Theory
------
@ -79,11 +79,15 @@ mapping function which maps a two-dimensional point to three-dimensional space a
Let us define a kernel function \f$K(p,q)\f$ which does a dot product between two points, shown below:
\f[
\begin{aligned}
K(p,q) = \phi(p).\phi(q) &= \phi(p)^T \phi(q) \\
&= (p_{1}^2,p_{2}^2,\sqrt{2} p_1 p_2).(q_{1}^2,q_{2}^2,\sqrt{2} q_1 q_2) \\
&= p_{1}^2 q_{1}^2 + p_{2}^2 q_{2}^2 + 2 p_1 q_1 p_2 q_2 \\
&= (p_1 q_1 + p_2 q_2)^2 \\
\phi(p).\phi(q) &= (p.q)^2
\end{aligned}
\f]
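As a quick numerical check of this identity, here is a small sketch with made-up points p and q:
@code{.py}
import numpy as np

def phi(x):
    # the explicit mapping to three-dimensional space used above
    return np.array([x[0]**2, x[1]**2, np.sqrt(2)*x[0]*x[1]])

p = np.array([1.0, 2.0])
q = np.array([3.0, 4.0])

print phi(p).dot(phi(q))   # dot product in 3D: 121.0
print p.dot(q)**2          # squared dot product in 2D: 121.0
@endcode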
It means a dot product in three-dimensional space can be achieved using the squared dot product in
two-dimensional space. This can be applied to higher-dimensional spaces. So we can calculate higher
View File
@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will learn how to remove small noise, strokes etc. in old photographs by a method called
inpainting
- We will see inpainting functionalities in OpenCV.
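As a preview, a minimal sketch of the call we will meet later (file names are placeholders; the mask
is a grayscale image whose non-zero pixels mark the damaged region):
@code{.py}
import cv2

img = cv2.imread('damaged_photo.jpg')    # placeholder: photo with scratches
mask = cv2.imread('mask.png', 0)         # placeholder: non-zero pixels = damage

# inpaint with a radius of 3 pixels around each damaged point
dst = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
@endcode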
View File
@ -59,7 +59,7 @@ OpenCV provides four variations of this technique.
4. **cv2.fastNlMeansDenoisingColoredMulti()** - same as above, but for color images.
Common arguments are:
- h : parameter deciding filter strength. A higher h value removes noise better, but removes
details of the image too. (10 is ok)
- hForColorComponents : same as h, but for color images only. (normally same as h)
- templateWindowSize : should be odd. (recommended 7)
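A minimal sketch of a typical call (the file name is a placeholder; the final argument,
searchWindowSize, is the remaining common argument and should also be odd, with 21 the usual
recommendation):
@code{.py}
import cv2

img = cv2.imread('noisy_image.png')   # placeholder noisy color image

# h = 10, hForColorComponents = 10, templateWindowSize = 7, searchWindowSize = 21
dst = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
@endcode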
View File
@ -5,7 +5,7 @@ Goals
-----
In this tutorial
- We will learn to set up OpenCV-Python in your Fedora system. Below steps are tested for
Fedora 18 (64-bit) and Fedora 19 (32-bit).
Introduction
@ -24,13 +24,13 @@ Installing OpenCV-Python from Pre-built Binaries
------------------------------------------------
Install all packages with the following command in a terminal, as root.
@code{.sh}
$ yum install numpy opencv*
@endcode
Open Python IDLE (or IPython) and type the following code in the Python terminal.
@code{.py}
>>> import cv2
>>> print cv2.__version__
@endcode
If the results are printed out without any errors, congratulations !!! You have installed
OpenCV-Python successfully.
@ -57,14 +57,14 @@ dependencies, you can leave if you don't want.
We need **CMake** to configure the installation, **GCC** for compilation, **Python-devel** and
**Numpy** for creating Python extensions etc.
@code{.sh}
yum install cmake
yum install python-devel numpy
yum install gcc gcc-c++
@endcode
Next we need **GTK** support for GUI features, Camera support (libdc1394, libv4l), Media Support
(ffmpeg, gstreamer) etc.
@code{.sh}
yum install gtk2-devel
yum install libdc1394-devel
yum install libv4l-devel
@ -80,7 +80,7 @@ below. You can either leave it or install it, your call :)
OpenCV comes with supporting files for image formats like PNG, JPEG, JPEG2000, TIFF, WebP etc. But
they may be a little old. If you want the latest libraries, you can install development files for
these formats.
@code{.sh}
yum install libpng-devel
yum install libjpeg-turbo-devel
yum install jasper-devel
@ -91,13 +91,13 @@ yum install libwebp-devel
Several OpenCV functions are parallelized with **Intel's Threading Building Blocks** (TBB). But if
you want to enable it, you need to install TBB first. ( Also while configuring installation with
CMake, don't forget to pass -D WITH_TBB=ON. More details below.)
@code{.sh}
yum install tbb-devel
@endcode
OpenCV uses another library **Eigen** for optimized mathematical operations. So if you have Eigen
installed in your system, you can exploit it. ( Also while configuring installation with CMake,
don't forget to pass -D WITH_EIGEN=ON. More details below.)
@code{.sh}
yum install eigen3-devel
@endcode
If you want to build **documentation** ( *Yes, you can create offline version of OpenCV's complete
@ -106,7 +106,7 @@ internet always if any question, and it is quite FAST!!!* ), you need to install
documentation generation tool) and **pdflatex** (if you want to create a PDF version of it). ( Also
while configuring installation with CMake, don't forget to pass -D BUILD_DOCS=ON. More details
below.)
@code{.sh}
yum install python-sphinx
yum install texlive
@endcode
@ -117,7 +117,7 @@ site](http://sourceforge.net/projects/opencvlibrary/). Then extract the folder.
Or you can download the latest source from OpenCV's github repo. (If you want to contribute to
OpenCV, choose this. It always keeps your OpenCV up-to-date.) For that, you need to install **Git** first.
@code{.sh}
yum install git
git clone https://github.com/Itseez/opencv.git
@endcode
@ -126,7 +126,7 @@ take some time depending upon your internet connection.
Now open a terminal window and navigate to the downloaded OpenCV folder. Create a new build folder
and navigate to it.
@code{.sh}
mkdir build
cd build
@endcode
@ -136,12 +136,12 @@ Now we have installed all the required dependencies, let's install OpenCV. Insta
configured with CMake. It specifies which modules are to be installed, the installation path, which
additional libraries to be used, whether documentation and examples are to be compiled etc. The
below command is normally used for configuration (executed from the build folder).
@code{.sh}
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
@endcode
It specifies that the build type is "Release Mode" and the installation path is /usr/local. Observe
the -D before each option and the .. at the end. In short, this is the format:
@code{.sh}
cmake [-D <flag>] [-D <flag>] ..
@endcode
You can specify as many flags as you want, but each flag should be preceded by -D.
@ -154,26 +154,26 @@ modules (since we use OpenCV-Python, we don't need GPU related modules. It saves
understanding.)*
- Enable TBB and Eigen support:
@code{.sh}
cmake -D WITH_TBB=ON -D WITH_EIGEN=ON ..
@endcode
- Enable documentation and disable tests and samples
@code{.sh}
cmake -D BUILD_DOCS=ON -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=OFF ..
@endcode
- Disable all GPU related modules.
@code{.sh}
cmake -D WITH_OPENCL=OFF -D WITH_CUDA=OFF -D BUILD_opencv_gpu=OFF -D BUILD_opencv_gpuarithm=OFF -D BUILD_opencv_gpubgsegm=OFF -D BUILD_opencv_gpucodec=OFF -D BUILD_opencv_gpufeatures2d=OFF -D BUILD_opencv_gpufilters=OFF -D BUILD_opencv_gpuimgproc=OFF -D BUILD_opencv_gpulegacy=OFF -D BUILD_opencv_gpuoptflow=OFF -D BUILD_opencv_gpustereo=OFF -D BUILD_opencv_gpuwarping=OFF ..
@endcode
- Set installation path and build type
@code{.sh}
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
@endcode
Each time you enter a cmake statement, it prints out the resulting configuration setup. In the final
setup you get, make sure that the following fields are filled (below are some important parts of the
configuration I got). These fields should be filled appropriately on your system also. Otherwise
something has gone wrong, so check whether you have performed the above steps correctly.
@code{.sh}
-- GUI:
-- GTK+ 2.x: YES (ver 2.24.19)
-- GThread : YES (ver 2.36.3)
@ -219,7 +219,7 @@ Many other flags and settings are there. It is left for you for further explorat
Now build the files using the make command and install them using the make install command. make
install should be executed as root.
@code{.sh}
make
su
make install
@ -230,20 +230,20 @@ should be able to find OpenCV module. You have two options for that.
-# **Move the module to any folder in Python Path** : The Python path can be found by entering
import sys; print sys.path in a Python terminal. It will print out many locations. Move
/usr/local/lib/python2.7/site-packages/cv2.so to any of these folders. For example,
@code{.sh}
su
mv /usr/local/lib/python2.7/site-packages/cv2.so /usr/lib/python2.7/site-packages
@endcode
But you will have to do this every time you install OpenCV.
-# **Add /usr/local/lib/python2.7/site-packages to the PYTHONPATH** : This is to be done only once.
Just open \~/.bashrc and add the following line to it, then log out and log back in.
@code{.sh}
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.7/site-packages
@endcode
Thus OpenCV installation is finished. Open a terminal and try import cv2.
To build the documentation, just enter the following commands:
@code{.sh}
make docs
make html_docs
@endcode
@ -256,4 +256,3 @@ Exercises
---------
-# Compile OpenCV from source in your Fedora machine.
View File
@ -7,33 +7,37 @@ Goals
In this tutorial
- We will learn to setup OpenCV-Python in your Windows system.
Below steps are tested on a Windows 7 64-bit machine with Visual Studio 2010 and Visual Studio 2012.
The screenshots show VS2012.
Installing OpenCV from prebuilt binaries
----------------------------------------
-# Below Python packages are to be downloaded and installed to their default locations.
-# [Python-2.7.x](http://python.org/ftp/python/2.7.5/python-2.7.5.msi).
-# [Numpy](http://sourceforge.net/projects/numpy/files/NumPy/1.7.1/numpy-1.7.1-win32-superpack-python2.7.exe/download).
-# [Matplotlib](https://downloads.sourceforge.net/project/matplotlib/matplotlib/matplotlib-1.3.0/matplotlib-1.3.0.win32-py2.7.exe) (*Matplotlib is optional, but recommended since we use it a lot in our tutorials*).
-# Install all packages into their default locations. Python will be installed to **C:/Python27/**.
-# After installation, open Python IDLE. Enter import numpy and make sure Numpy is working fine.
-# Download the latest OpenCV release from [sourceforge
site](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/2.4.6/OpenCV-2.4.6.0.exe/download)
and double-click to extract it.
-# Go to the **opencv/build/python/2.7** folder.
-# Copy **cv2.pyd** to **C:/Python27/lib/site-packages**.
-# Open Python IDLE and type the following code in the Python terminal.
@code
>>> import cv2
>>> print cv2.__version__
@endcode
If the results are printed out without any errors, congratulations !!! You have installed
OpenCV-Python successfully.
@ -43,55 +47,54 @@ Building OpenCV from source
-# Download and install Visual Studio and CMake.
-# [Visual Studio 2012](http://go.microsoft.com/?linkid=9816768)
-# [CMake](http://www.cmake.org/files/v2.8/cmake-2.8.11.2-win32-x86.exe)
-# Download and install necessary Python packages to their default locations
-# [Python 2.7.x](http://python.org/ftp/python/2.7.5/python-2.7.5.msi)
-# [Numpy](http://sourceforge.net/projects/numpy/files/NumPy/1.7.1/numpy-1.7.1-win32-superpack-python2.7.exe/download)
-# [Matplotlib](https://downloads.sourceforge.net/project/matplotlib/matplotlib/matplotlib-1.3.0/matplotlib-1.3.0.win32-py2.7.exe)
(*Matplotlib is optional, but recommended since we use it a lot in our tutorials.*)
@note In this case, we are using 32-bit binaries of Python packages. But if you want to use
OpenCV for x64, 64-bit binaries of the Python packages are to be installed. The problem is that
there are no official 64-bit binaries of Numpy. You have to build it on your own. For that, you
have to use the same compiler used to build Python. When you start Python IDLE, it shows the
compiler details. You can get more [information here](http://stackoverflow.com/q/2676763/1134940).
So your system must have the same Visual Studio version and build Numpy from source.
@note Another method to have 64-bit Python packages is to use ready-made Python distributions
from third-parties like [Anaconda](http://www.continuum.io/downloads),
[Enthought](https://www.enthought.com/downloads/) etc. They will be bigger in size, but will have
everything you need, everything in a single shell. You can also download 32-bit versions.
-# Make sure Python and Numpy are working fine.
-# Download OpenCV source. It can be from
[Sourceforge](http://sourceforge.net/projects/opencvlibrary/) (for official release version) or
from [Github](https://github.com/Itseez/opencv) (for latest source).
-# Extract it to a folder, opencv and create a new folder build in it.
-# Open CMake-gui (*Start \> All Programs \> CMake-gui*)
-# Fill the fields as follows (see the image below):
-# Click on **Browse Source...** and locate the opencv folder.
-# Click on **Browse Build...** and locate the build folder we created.
-# Click on **Configure**.
![image](images/Capture1.jpg)
-# It will open a new window to select the compiler. Choose appropriate compiler (here,
Visual Studio 11) and click **Finish**.
![image](images/Capture2.png)
-# Wait until analysis is finished.
-# You will see all the fields are marked in red. Click on the **WITH** field to expand it. It
decides what extra features you need. So mark appropriate fields. See the below image:
@ -103,33 +106,37 @@ Make sure Python and Numpy are working fine.
![image](images/Capture5.png)
-# Remaining fields specify what modules are to be built. Since GPU modules are not yet supported
by OpenCV-Python, you can completely avoid them to save time (but if you work with them, keep them
there). See the image below:
![image](images/Capture6.png)
-# Now click on **ENABLE** field to expand it. Make sure **ENABLE_SOLUTION_FOLDERS** is unchecked
(Solution folders are not supported by Visual Studio Express edition). See the image below:
![image](images/Capture7.png)
-# Also make sure that in the **PYTHON** field, everything is filled in. (Ignore
PYTHON_DEBUG_LIBRARY). See the image below:
![image](images/Capture80.png)
-# Finally click the **Generate** button.
-# Now go to our **opencv/build** folder. There you will find **OpenCV.sln** file. Open it with
Visual Studio.
-# Check build mode as **Release** instead of **Debug**.
-# In the solution explorer, right-click on the **Solution** (or **ALL_BUILD**) and build it. It
will take some time to finish.
-# Again, right-click on **INSTALL** and build it. Now OpenCV-Python will be installed.
![image](images/Capture8.png)
-# Open Python IDLE and enter import cv2. If no error, it is installed correctly.
@note We have installed with no other support like TBB, Eigen, Qt, documentation etc. It would be
difficult to explain all of it here. A more detailed video will be added soon, or you can just hack around.
@ -140,6 +147,5 @@ Additional Resources
Exercises
---------
If you have a Windows machine, compile OpenCV from source. Do all kinds of hacks. If you meet
any problem, visit the OpenCV forum and explain your problem.
View File
@ -5,7 +5,7 @@ Goal
----
In this chapter,
- We will understand the concepts of optical flow and its estimation using Lucas-Kanade
method.
- We will use functions like **cv2.calcOpticalFlowPyrLK()** to track feature points in a
video (a short preview sketch follows).
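As a preview of the API, a minimal sketch tracking corners across two consecutive frames (the video
name is a placeholder):
@code{.py}
import numpy as np
import cv2

cap = cv2.VideoCapture('slow.mp4')            # placeholder input video

ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)

# pick some good corners to track in the first frame
p0 = cv2.goodFeaturesToTrack(old_gray, 100, 0.3, 7)

ret, frame = cap.read()
frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# track the points from the old frame into the new one;
# st == 1 marks points that were successfully found
p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None,
                                       winSize=(15, 15), maxLevel=2)
good_new = p1[st == 1]
@endcode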