Doxygen tutorials: python final edits

This commit is contained in:
Maksim Shabunin
2014-12-01 15:46:05 +03:00
parent 875f922332
commit 812ce48c36
49 changed files with 426 additions and 353 deletions

View File

@@ -13,39 +13,32 @@ Understanding Parameters
-# **samples** : It should be of **np.float32** data type, and each feature should be put in a
single column.
-# **nclusters(K)** : Number of clusters required at the end
-# **criteria** : It is the iteration termination criteria. When this criteria is satisfied, the
   algorithm iteration stops. Actually, it should be a tuple of 3 parameters. They are
   `( type, max_iter, epsilon )`:
    -# type of termination criteria. It has 3 flags as below:
        - **cv2.TERM_CRITERIA_EPS** - stop the algorithm iteration if the specified accuracy,
          *epsilon*, is reached.
        - **cv2.TERM_CRITERIA_MAX_ITER** - stop the algorithm after the specified number of
          iterations, *max_iter*.
        - **cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER** - stop the iteration when any of
          the above conditions is met.
    -# max_iter - An integer specifying the maximum number of iterations.
    -# epsilon - Required accuracy
-# **attempts** : Flag to specify the number of times the algorithm is executed using different
initial labellings. The algorithm returns the labels that yield the best compactness. This
compactness is returned as output.
-# **flags** : This flag is used to specify how initial centers are taken. Normally two flags are
used for this : **cv2.KMEANS_PP_CENTERS** and **cv2.KMEANS_RANDOM_CENTERS**.
### Output parameters
-# **compactness** : It is the sum of squared distances from each point to its corresponding
   center.
-# **labels** : This is the label array (same as 'code' in the previous article) where each
   element is marked '0', '1', ...
-# **centers** : This is the array of cluster centers.
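To see how these parameters fit together, here is a minimal sketch of a typical cv2.kmeans() call;
the random data, K = 2, and the termination constants below are placeholder choices, not values
prescribed by this tutorial:

@code{.py}
import numpy as np
import cv2

# 25 random one-dimensional samples as a single float32 column
Z = np.random.randint(25, 100, 25).reshape(-1, 1).astype(np.float32)

# Stop after 10 iterations or when the accuracy epsilon = 1.0 is reached
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)

# Run 10 attempts with random initial centers and keep the best labelling
compactness, labels, centers = cv2.kmeans(Z, 2, None, criteria, 10,
                                          cv2.KMEANS_RANDOM_CENTERS)

print(compactness)     # sum of squared distances to the assigned centers
print(labels.ravel())  # cluster index (0 or 1) for every sample
print(centers)         # the two cluster centers
@endcode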
Now we will see how to apply the K-Means algorithm with three examples.
1. Data with Only One Feature
-----------------------------
Consider a set of data with only one feature, i.e. one-dimensional. For example, we can take our
@@ -104,7 +97,7 @@ Below is the output we got:
![image](images/oc_1d_clustered.png)
2. Data with Multiple Features
------------------------------
In the previous example, we took only height for the t-shirt problem. Here, we will take both height and
@@ -153,7 +146,7 @@ Below is the output we get:
![image](images/oc_2d_clustered.jpg)
3. Color Quantization
---------------------
Color Quantization is the process of reducing the number of colors in an image. One reason to do so is

View File

@@ -62,9 +62,9 @@ Now **Step - 2** and **Step - 3** are iterated until both centroids are converge
*(Or it may be stopped depending on the criteria we provide, like a maximum number of iterations or
a specific accuracy being reached, etc.)* **These points are such that the sum of distances between
the test data and their corresponding centroids is minimum**. Or simply, the sum of distances between
\f$C1 \leftrightarrow Red\_Points\f$ and \f$C2 \leftrightarrow Blue\_Points\f$ is minimum.
\f[minimize \;\bigg[J = \sum_{All\: Red\_Points}distance(C1,Red\_Point) + \sum_{All\: Blue\_Points}distance(C2,Blue\_Point)\bigg]\f]
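To make this objective concrete, here is a small NumPy sketch (toy one-dimensional values and a
hypothetical red/blue grouping) that computes the two centroids and the value of J for one fixed
assignment:

@code{.py}
import numpy as np

# Toy 1-D data, already split into hypothetical red and blue groups
red_points = np.array([20.0, 25.0, 30.0])
blue_points = np.array([70.0, 75.0, 80.0])

# Step-2/Step-3 style update: each centroid is the mean of its own group
C1 = red_points.mean()
C2 = blue_points.mean()

# J = sum of distances between every point and its corresponding centroid
J = np.abs(red_points - C1).sum() + np.abs(blue_points - C2).sum()
print(C1, C2, J)   # 25.0 75.0 20.0
@endcode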
The final result looks almost like this:

View File

@@ -5,7 +5,7 @@ Goal
----
In this chapter
- We will use our knowledge on kNN to build a basic OCR application.
- We will try it with the Digits and Alphabets data that comes with OpenCV.
OCR of Hand-written Digits

View File

@@ -5,7 +5,7 @@ Goal
----
In this chapter
- We will see an intuitive understanding of SVM
Theory
------
@@ -79,11 +79,15 @@ mapping function which maps a two-dimensional point to three-dimensional space a
Let us define a kernel function \f$K(p,q)\f$ which does a dot product between two points, shown below:
\f[
\begin{aligned}
K(p,q) = \phi(p).\phi(q) &= \phi(p)^T \phi(q) \\
&= (p_{1}^2,p_{2}^2,\sqrt{2} p_1 p_2).(q_{1}^2,q_{2}^2,\sqrt{2} q_1 q_2) \\
&= p_{1}^2 q_{1}^2 + p_{2}^2 q_{2}^2 + 2 p_1 q_1 p_2 q_2 \\
&= (p_1 q_1 + p_2 q_2)^2 \\
\phi(p).\phi(q) &= (p.q)^2
\end{aligned}
\f]
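As a quick numerical sanity check of this identity, the sketch below compares the explicit
three-dimensional mapping \f$\phi\f$ with the squared two-dimensional dot product for a pair of
random points (the random inputs are just an illustration):

@code{.py}
import numpy as np

def phi(x):
    # Explicit mapping to 3-D: (x1^2, x2^2, sqrt(2)*x1*x2)
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

p = np.random.rand(2)
q = np.random.rand(2)

# Both values are equal up to floating point rounding
print(np.dot(phi(p), phi(q)))
print(np.dot(p, q) ** 2)
@endcode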
It means a dot product in three-dimensional space can be achieved using the squared dot product in
two-dimensional space. This can be applied to higher dimensional spaces. So we can calculate higher