:param criteria: Specifies the termination criteria of the iterative search algorithm (the search stops after the specified maximum number of iterations ``criteria.maxCount`` or when the search window moves by less than ``criteria.epsilon``).
:param derivLambda: The relative weight of the spatial image derivatives' impact on the optical flow estimation. If ``derivLambda=0``, only the image intensity is used; if ``derivLambda=1``, only derivatives are used. Any other value between 0 and 1 means that both derivatives and the image intensity are used (in the corresponding proportions).
* **OPTFLOW_USE_INITIAL_FLOW** Use initial estimations stored in ``nextPts``. If the flag is not set, then initially :math:`\texttt{nextPts}\leftarrow\texttt{prevPts}`
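A rough usage sketch (assuming the parameters above belong to :func:`calcOpticalFlowPyrLK`; the file names, the feature-selection step, and all numeric values are only illustrative, and ``derivLambda`` and ``flags`` are left at their defaults) ::

    #include <opencv2/opencv.hpp>
    #include <vector>

    using namespace cv;

    int main()
    {
        // Two consecutive grayscale frames (file names are placeholders).
        Mat prevImg = imread("frame0.png", 0);
        Mat nextImg = imread("frame1.png", 0);

        // Pick some corners to track in the first frame.
        std::vector<Point2f> prevPts, nextPts;
        goodFeaturesToTrack(prevImg, prevPts, 100, 0.01, 10);

        std::vector<uchar> status;
        std::vector<float> err;

        // Stop after 30 iterations or when the window moves by less than 0.01 px.
        TermCriteria criteria(TermCriteria::COUNT | TermCriteria::EPS, 30, 0.01);

        calcOpticalFlowPyrLK(prevImg, nextImg, prevPts, nextPts, status, err,
                             Size(21, 21), 3, criteria);
        return 0;
    }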
:param nextImg: The second input image of the same size and the same type as ``prevImg``
:param flow: The computed flow image; it will have the same size as ``prevImg`` and type ``CV_32FC2``
:param pyrScale: Specifies the image scale (<1) to build the pyramids for each image. ``pyrScale=0.5`` means the classical pyramid, where each next layer is half the size of the previous one
:param levels: The number of pyramid layers, including the initial image. ``levels=1`` means that no extra layers are created and only the original images are used
:param winsize: The averaging window size; larger values increase the algorithm's robustness to image noise and give a better chance of detecting fast motion, but yield a more blurred motion field
:param polyN: Size of the pixel neighborhood used to find the polynomial expansion in each pixel. Larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, ``polyN=5`` or ``polyN=7``
:param polySigma: Standard deviation of the Gaussian used to smooth the derivatives that serve as a basis for the polynomial expansion. For ``polyN=5`` you can set ``polySigma=1.1``; for ``polyN=7`` a good value would be ``polySigma=1.5``
:param flags: The operation flags; can be a combination of the following:

    * **OPTFLOW_FARNEBACK_GAUSSIAN** Use a Gaussian :math:`\texttt{winsize}\times\texttt{winsize}` filter instead of a box filter of the same size for optical flow estimation. Usually, this option gives a more accurate flow than with a box filter, at the cost of lower speed (and normally ``winsize`` for a Gaussian window should be set to a larger value to achieve the same level of robustness)
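For reference, a minimal dense-flow call using these parameters might look as follows (the frame names and all numeric values are only illustrative) ::

    #include <opencv2/opencv.hpp>

    using namespace cv;

    int main()
    {
        Mat prevImg = imread("frame0.png", 0);
        Mat nextImg = imread("frame1.png", 0);
        Mat flow;  // filled with CV_32FC2 per-pixel displacements

        calcOpticalFlowFarneback(prevImg, nextImg, flow,
                                 0.5,   // pyrScale: classical half-resolution pyramid
                                 3,     // levels
                                 15,    // winsize
                                 3,     // iterations per pyramid level
                                 5,     // polyN
                                 1.1,   // polySigma, a reasonable match for polyN=5
                                 0);    // flags, e.g. OPTFLOW_FARNEBACK_GAUSSIAN
        return 0;
    }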
That is, MHI pixels where motion occurs are set to the current ``timestamp``, while the pixels where motion last happened long ago are cleared.
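A minimal sketch of how a silhouette could be produced and the history image updated (the helper name ``updateMHI``, the frame-differencing step and the threshold are only illustrative; ``mhi`` is assumed to be a pre-allocated ``CV_32FC1`` image of the same size as the frames) ::

    #include <opencv2/opencv.hpp>

    using namespace cv;

    // prevFrame and currFrame are consecutive grayscale (8-bit, 1-channel) frames.
    void updateMHI(const Mat& prevFrame, const Mat& currFrame, Mat& mhi, double timestamp)
    {
        Mat silhouette;
        absdiff(currFrame, prevFrame, silhouette);                  // frame difference
        threshold(silhouette, silhouette, 30, 255, THRESH_BINARY);  // non-zero where motion occurred
        double duration = 1.0;                                      // keep 1 second of motion history
        updateMotionHistory(silhouette, mhi, timestamp, duration);
    }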
:param mask: The output mask image; will have the type ``CV_8UC1`` and the same size as ``mhi``. Its non-zero elements will mark pixels where the motion gradient data is correct
:param orientation: The output motion gradient orientation image; will have the same type and the same size as ``mhi``. Each pixel of it will contain the motion orientation in degrees, from 0 to 360.
:param delta1, delta2: The minimal and maximal allowed difference between ``mhi`` values within a pixel neighborhood. That is, the function finds the minimum ( :math:`m(x,y)` ) and maximum ( :math:`M(x,y)` ) ``mhi`` values over a :math:`3 \times 3` neighborhood of each pixel and marks the motion orientation at :math:`(x, y)` as valid only if

    .. math::

        \min(\texttt{delta1}, \texttt{delta2}) \le M(x,y)-m(x,y) \le \max(\texttt{delta1}, \texttt{delta2})
:func:`phase` are used, so that the computed angle is measured in degrees and covers the full range 0..360). Also, the ``mask`` is filled to indicate pixels where the computed angle is valid.
:param mask: Mask image. It may be a conjunction of a valid gradient mask, also calculated by :func:`calcMotionGradient`, and the mask of the region whose direction needs to be calculated
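Continuing the motion-template sketch, the gradient mask, the per-pixel orientation and the global direction of motion could be obtained roughly as follows (the helper name and the ``delta1``/``delta2``/``duration`` values are only illustrative) ::

    #include <opencv2/opencv.hpp>

    using namespace cv;

    // mhi and timestamp as in the earlier sketch.
    double globalMotionAngle(const Mat& mhi, double timestamp)
    {
        Mat mask, orientation;
        // delta1/delta2 bound the allowed mhi difference within each 3x3 neighborhood.
        calcMotionGradient(mhi, mask, orientation, 0.05, 0.5, 3);
        // Average motion direction, in degrees from 0 to 360, over the valid pixels.
        return calcGlobalOrientation(orientation, mask, mhi, timestamp, 1.0);
    }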
:func:`meanShift` and then adjusts the window size and finds the optimal rotation. The function returns the rotated rectangle structure that includes the object position, size, and orientation. The next position of the search window can be obtained with ``RotatedRect::boundingRect()``.
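A hedged sketch of a single CamShift tracking step (the helper name is illustrative; ``backproj`` is assumed to come from :func:`calcBackProject`, and the termination values are only an example) ::

    #include <opencv2/opencv.hpp>

    using namespace cv;

    // One CamShift step: backproj is the object back projection,
    // trackWindow holds the previous object location and is updated in place.
    RotatedRect camShiftStep(const Mat& backproj, Rect& trackWindow)
    {
        RotatedRect box = CamShift(backproj, trackWindow,
                                   TermCriteria(TermCriteria::COUNT | TermCriteria::EPS, 10, 1));
        // Seed the next search; may need clipping to the image boundaries.
        trackWindow = box.boundingRect();
        return box;
    }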
The function implements an iterative object search algorithm. It takes as input the back projection of an object and the initial position. The mass center of the back projection image inside ``window`` is computed, and the search window center is shifted to it. The procedure is repeated until the specified number of iterations ``criteria.maxCount`` is reached or until the window center shifts by less than ``criteria.epsilon``. The algorithm is used inside
:func:`CamShift` and, unlike
:func:`CamShift`, the search window size and orientation do not change during the search. You can simply pass the output of
:func:`calcBackProject` to this function, but better results can be obtained if you pre-filter the back projection and remove the noise (e.g. by retrieving connected components with
:func:`findContours`, throwing away contours with a small area ( :func:`contourArea` ) and rendering the remaining contours with :func:`drawContours` ).
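A minimal sketch of a single meanShift step (the helper name is illustrative; ``backproj`` is assumed to be a back projection, possibly pre-filtered as described above, and the termination values are only an example) ::

    #include <opencv2/opencv.hpp>

    using namespace cv;

    // One meanShift step: window holds the initial search window and is
    // updated in place to the converged object location.
    int meanShiftStep(const Mat& backproj, Rect& window)
    {
        return meanShift(backproj, window,
                         TermCriteria(TermCriteria::COUNT | TermCriteria::EPS, 10, 1));
    }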
However, you can modify ``transitionMatrix``, ``controlMatrix``, and ``measurementMatrix`` to get the extended Kalman filter functionality. See the OpenCV sample ``kalman.c``.
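A brief constant-velocity sketch (the state layout, the noise levels, and the sample measurement are only illustrative and are not taken from ``kalman.c``) ::

    #include <opencv2/opencv.hpp>

    using namespace cv;

    int main()
    {
        // State: (x, y, vx, vy); measurement: (x, y).
        KalmanFilter kf(4, 2, 0);
        kf.transitionMatrix = (Mat_<float>(4, 4) <<
            1, 0, 1, 0,
            0, 1, 0, 1,
            0, 0, 1, 0,
            0, 0, 0, 1);
        setIdentity(kf.measurementMatrix);
        setIdentity(kf.processNoiseCov, Scalar::all(1e-4));
        setIdentity(kf.measurementNoiseCov, Scalar::all(1e-1));
        setIdentity(kf.errorCovPost, Scalar::all(1));

        Mat prediction = kf.predict();                          // a priori state estimate
        Mat measurement = (Mat_<float>(2, 1) << 10.f, 20.f);    // e.g. a detected position
        Mat corrected = kf.correct(measurement);                // a posteriori state estimate
        return 0;
    }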