merged revisions r7808 from 2.4 branch
Commit: 2659453694 (parent: 42e0214de5)
@@ -446,14 +446,16 @@ gpu::reprojectImageTo3D
 ---------------------------
 Reprojects a disparity image to 3D space.
 
-.. ocv:function:: void gpu::reprojectImageTo3D(const GpuMat& disp, GpuMat& xyzw, const Mat& Q, Stream& stream = Stream::Null())
+.. ocv:function:: void gpu::reprojectImageTo3D(const GpuMat& disp, GpuMat& xyzw, const Mat& Q, int dst_cn = 4, Stream& stream = Stream::Null())
 
     :param disp: Input disparity image. ``CV_8U`` and ``CV_16S`` types are supported.
 
-    :param xyzw: Output 4-channel floating-point image of the same size as ``disp`` . Each element of ``xyzw(x,y)`` contains 3D coordinates ``(x,y,z,1)`` of the point ``(x,y)`` , computed from the disparity map.
+    :param xyzw: Output 3- or 4-channel floating-point image of the same size as ``disp`` . Each element of ``xyzw(x,y)`` contains 3D coordinates ``(x,y,z)`` or ``(x,y,z,1)`` of the point ``(x,y)`` , computed from the disparity map.
 
     :param Q: :math:`4 \times 4` perspective transformation matrix that can be obtained via :ocv:func:`stereoRectify` .
 
+    :param dst_cn: The number of channels for the output image. Can be 3 or 4.
+
     :param stream: Stream for the asynchronous version.
 
 .. seealso:: :ocv:func:`reprojectImageTo3D`
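A minimal usage sketch of the updated signature, assuming ``d_disp`` already holds a ``CV_8U`` or ``CV_16S`` disparity map on the GPU and ``Q`` was obtained from :ocv:func:`stereoRectify`. ::

    cv::gpu::GpuMat d_xyz;
    // dst_cn = 3 requests a CV_32FC3 result holding (x,y,z) per pixel;
    // the default dst_cn = 4 keeps the previous (x,y,z,1) layout.
    cv::gpu::reprojectImageTo3D(d_disp, d_xyz, Q, 3);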
@@ -11,18 +11,19 @@ gpu::SURF_GPU
 
 Class used for extracting Speeded Up Robust Features (SURF) from an image. ::
 
-    class SURF_GPU : public CvSURFParams
+    class SURF_GPU
     {
     public:
        enum KeypointLayout
        {
-           SF_X = 0,
-           SF_Y,
-           SF_LAPLACIAN,
-           SF_SIZE,
-           SF_DIR,
-           SF_HESSIAN,
-           SF_FEATURE_STRIDE
+           X_ROW = 0,
+           Y_ROW,
+           LAPLACIAN_ROW,
+           OCTAVE_ROW,
+           SIZE_ROW,
+           ANGLE_ROW,
+           HESSIAN_ROW,
+           ROWS_COUNT
        };
 
        //! the default constructor
@@ -69,6 +70,13 @@ Class used for extracting Speeded Up Robust Features (SURF) from an image. ::
 
        void releaseMemory();
 
+       // SURF parameters
+       double hessianThreshold;
+       int nOctaves;
+       int nOctaveLayers;
+       bool extended;
+       bool upright;
+
        //! max keypoints = keypointsRatio * img.size().area()
        float keypointsRatio;
 
@@ -82,14 +90,15 @@ Class used for extracting Speeded Up Robust Features (SURF) from an image. ::
 
 The class ``SURF_GPU`` implements Speeded Up Robust Features descriptor. There is a fast multi-scale Hessian keypoint detector that can be used to find the keypoints (which is the default option). But the descriptors can also be computed for the user-specified keypoints. Only 8-bit grayscale images are supported.
 
-The class ``SURF_GPU`` can store results in the GPU and CPU memory. It provides functions to convert results between CPU and GPU version ( ``uploadKeypoints``, ``downloadKeypoints``, ``downloadDescriptors`` ). The format of CPU results is the same as ``SURF`` results. GPU results are stored in ``GpuMat``. The ``keypoints`` matrix is :math:`\texttt{nFeatures} \times 6` matrix with the ``CV_32FC1`` type.
+The class ``SURF_GPU`` can store results in the GPU and CPU memory. It provides functions to convert results between CPU and GPU version ( ``uploadKeypoints``, ``downloadKeypoints``, ``downloadDescriptors`` ). The format of CPU results is the same as ``SURF`` results. GPU results are stored in ``GpuMat``. The ``keypoints`` matrix is :math:`\texttt{nFeatures} \times 7` matrix with the ``CV_32FC1`` type.
 
-* ``keypoints.ptr<float>(SF_X)[i]`` contains x coordinate of the i-th feature.
-* ``keypoints.ptr<float>(SF_Y)[i]`` contains y coordinate of the i-th feature.
-* ``keypoints.ptr<float>(SF_LAPLACIAN)[i]`` contains the laplacian sign of the i-th feature.
-* ``keypoints.ptr<float>(SF_SIZE)[i]`` contains the size of the i-th feature.
-* ``keypoints.ptr<float>(SF_DIR)[i]`` contain orientation of the i-th feature.
-* ``keypoints.ptr<float>(SF_HESSIAN)[i]`` contains the response of the i-th feature.
+* ``keypoints.ptr<float>(X_ROW)[i]`` contains x coordinate of the i-th feature.
+* ``keypoints.ptr<float>(Y_ROW)[i]`` contains y coordinate of the i-th feature.
+* ``keypoints.ptr<float>(LAPLACIAN_ROW)[i]`` contains the laplacian sign of the i-th feature.
+* ``keypoints.ptr<float>(OCTAVE_ROW)[i]`` contains the octave of the i-th feature.
+* ``keypoints.ptr<float>(SIZE_ROW)[i]`` contains the size of the i-th feature.
+* ``keypoints.ptr<float>(ANGLE_ROW)[i]`` contains the orientation of the i-th feature.
+* ``keypoints.ptr<float>(HESSIAN_ROW)[i]`` contains the response of the i-th feature.
 
 The ``descriptors`` matrix is :math:`\texttt{nFeatures} \times \texttt{descriptorSize}` matrix with the ``CV_32FC1`` type.
 
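A short sketch of the renamed keypoint layout, assuming ``d_img`` is an 8-bit grayscale image already uploaded to the GPU. ::

    cv::gpu::SURF_GPU surf;
    cv::gpu::GpuMat d_keypoints, d_descriptors;
    surf(d_img, cv::gpu::GpuMat(), d_keypoints, d_descriptors);

    // GPU layout: one row per field, one column per feature
    cv::Mat h_keypoints;
    d_keypoints.download(h_keypoints);
    float x0 = h_keypoints.ptr<float>(cv::gpu::SURF_GPU::X_ROW)[0];

    // or convert straight to the CPU SURF format
    std::vector<cv::KeyPoint> keypoints;
    surf.downloadKeypoints(d_keypoints, keypoints);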
@@ -118,17 +127,17 @@ Class used for corner detection using the FAST algorithm. ::
        // all features have same size
        static const int FEATURE_SIZE = 7;
 
        explicit FAST_GPU(int threshold, bool nonmaxSupression = true,
                          double keypointsRatio = 0.05);
 
        void operator ()(const GpuMat& image, const GpuMat& mask, GpuMat& keypoints);
        void operator ()(const GpuMat& image, const GpuMat& mask,
                         std::vector<KeyPoint>& keypoints);
 
        void downloadKeypoints(const GpuMat& d_keypoints,
                               std::vector<KeyPoint>& keypoints);
 
        void convertKeypoints(const Mat& h_keypoints,
                              std::vector<KeyPoint>& keypoints);
 
        void release();
@@ -169,7 +178,7 @@ gpu::FAST_GPU::operator ()
 -------------------------------------
 Finds the keypoints using FAST detector.
 
 .. ocv:function:: void gpu::FAST_GPU::operator ()(const GpuMat& image, const GpuMat& mask, GpuMat& keypoints)
 .. ocv:function:: void gpu::FAST_GPU::operator ()(const GpuMat& image, const GpuMat& mask, std::vector<KeyPoint>& keypoints)
 
     :param image: Image where keypoints (corners) are detected. Only 8-bit grayscale images are supported.
@@ -177,8 +186,8 @@ Finds the keypoints using FAST detector.
     :param mask: Optional input mask that marks the regions where we should detect features.
 
     :param keypoints: The output vector of keypoints. Can be stored both in CPU and GPU memory. For GPU memory:
 
         * keypoints.ptr<Vec2s>(LOCATION_ROW)[i] will contain location of i'th point
         * keypoints.ptr<float>(RESPONSE_ROW)[i] will contain response of i'th point (if non-maximum suppression is applied)
 
 
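A usage sketch, assuming ``d_img`` is an 8-bit grayscale ``GpuMat``. ::

    cv::gpu::FAST_GPU fast(20 /*threshold*/, true /*nonmaxSupression*/);
    std::vector<cv::KeyPoint> keypoints;
    fast(d_img, cv::gpu::GpuMat(), keypoints);   // empty GpuMat = no mask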
@@ -258,16 +267,18 @@ Class for extracting ORB features and descriptors from an image. ::
            DEFAULT_FAST_THRESHOLD = 20
        };
 
-       explicit ORB_GPU(size_t n_features = 500,
-                        const ORB::CommonParams& detector_params = ORB::CommonParams());
+       explicit ORB_GPU(int nFeatures = 500, float scaleFactor = 1.2f,
+                        int nLevels = 8, int edgeThreshold = 31,
+                        int firstLevel = 0, int WTA_K = 2,
+                        int scoreType = 0, int patchSize = 31);
 
        void operator()(const GpuMat& image, const GpuMat& mask,
                        std::vector<KeyPoint>& keypoints);
        void operator()(const GpuMat& image, const GpuMat& mask, GpuMat& keypoints);
 
        void operator()(const GpuMat& image, const GpuMat& mask,
                        std::vector<KeyPoint>& keypoints, GpuMat& descriptors);
        void operator()(const GpuMat& image, const GpuMat& mask,
                        GpuMat& keypoints, GpuMat& descriptors);
 
        void downloadKeyPoints(GpuMat& d_keypoints, std::vector<KeyPoint>& keypoints);
@@ -292,11 +303,17 @@ gpu::ORB_GPU::ORB_GPU
 -------------------------------------
 Constructor.
 
-.. ocv:function:: gpu::ORB_GPU::ORB_GPU(size_t n_features = 500, const ORB::CommonParams& detector_params = ORB::CommonParams())
+.. ocv:function:: gpu::ORB_GPU::ORB_GPU(int nFeatures = 500, float scaleFactor = 1.2f, int nLevels = 8, int edgeThreshold = 31, int firstLevel = 0, int WTA_K = 2, int scoreType = 0, int patchSize = 31)
 
-    :param n_features: Number of features to detect.
+    :param nFeatures: The number of desired features.
 
-    :param detector_params: ORB detector parameters.
+    :param scaleFactor: Coefficient by which we divide the dimensions from one scale pyramid level to the next.
 
+    :param nLevels: The number of levels in the scale pyramid.
+
+    :param edgeThreshold: How far from the boundary the points should be.
+
+    :param firstLevel: The level at which the image is given. If 1, that means we will also look at the image `scaleFactor` times bigger.
+
 
 
@@ -313,9 +330,9 @@ Detects keypoints and computes descriptors for them.
 .. ocv:function:: void gpu::ORB_GPU::operator()(const GpuMat& image, const GpuMat& mask, GpuMat& keypoints, GpuMat& descriptors)
 
     :param image: Input 8-bit grayscale image.
 
     :param mask: Optional input mask that marks the regions where we should detect features.
 
     :param keypoints: The input/output vector of keypoints. Can be stored both in CPU and GPU memory. For GPU memory:
 
         * ``keypoints.ptr<float>(X_ROW)[i]`` contains x coordinate of the i'th feature.
@@ -324,7 +341,7 @@ Detects keypoints and computes descriptors for them.
         * ``keypoints.ptr<float>(ANGLE_ROW)[i]`` contains orientation of the i'th feature.
         * ``keypoints.ptr<float>(OCTAVE_ROW)[i]`` contains the octave of the i'th feature.
         * ``keypoints.ptr<float>(SIZE_ROW)[i]`` contains the size of the i'th feature.
 
     :param descriptors: Computed descriptors. If ``blurForDescriptor`` is true, the image will be blurred before descriptor calculation.
 
 
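A usage sketch of the new constructor and the GPU-memory output, assuming ``d_img`` is an 8-bit grayscale ``GpuMat``. ::

    cv::gpu::ORB_GPU orb(1000, 1.2f, 8);          // nFeatures, scaleFactor, nLevels; other parameters keep their defaults
    cv::gpu::GpuMat d_keypoints, d_descriptors;
    orb(d_img, cv::gpu::GpuMat(), d_keypoints, d_descriptors);

    std::vector<cv::KeyPoint> keypoints;
    orb.downloadKeyPoints(d_keypoints, keypoints);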
@@ -409,9 +426,9 @@ Brute-force descriptor matcher. For each descriptor in the first set, this match
                      GpuMat& trainIdx, GpuMat& distance, GpuMat& allDist, int k,
                      const GpuMat& mask = GpuMat(), Stream& stream = Stream::Null());
 
        static void knnMatchDownload(const GpuMat& trainIdx, const GpuMat& distance,
                                     std::vector< std::vector<DMatch> >& matches, bool compactResult = false);
        static void knnMatchConvert(const Mat& trainIdx, const Mat& distance,
                                    std::vector< std::vector<DMatch> >& matches, bool compactResult = false);
 
        void knnMatch(const GpuMat& query, const GpuMat& train,
@@ -369,21 +369,19 @@ gpu::createLinearFilter_GPU
 -------------------------------
 Creates a non-separable linear filter.
 
-.. ocv:function:: Ptr<FilterEngine_GPU> gpu::createLinearFilter_GPU(int srcType, int dstType, const Mat& kernel, const Point& anchor = Point(-1,-1))
+.. ocv:function:: Ptr<FilterEngine_GPU> gpu::createLinearFilter_GPU(int srcType, int dstType, const Mat& kernel, Point anchor = Point(-1,-1), int borderType = BORDER_DEFAULT)
 
 .. ocv:function:: Ptr<BaseFilter_GPU> gpu::getLinearFilter_GPU(int srcType, int dstType, const Mat& kernel, const Size& ksize, Point anchor = Point(-1, -1))
 
-    :param srcType: Input image type. ``CV_8UC1`` and ``CV_8UC4`` types are supported.
+    :param srcType: Input image type. Supports ``CV_8U`` , ``CV_16U`` and ``CV_32F`` one and four channel images.
 
     :param dstType: Output image type. The same type as ``src`` is supported.
 
-    :param kernel: 2D array of filter coefficients. Floating-point coefficients will be converted to fixed-point representation before the actual processing.
+    :param kernel: 2D array of filter coefficients. Floating-point coefficients will be converted to fixed-point representation before the actual processing. Supports size up to 16. For larger kernels use :ocv:func:`gpu::convolve`.
 
-    :param ksize: Kernel size. Supports size up to 16. For larger kernels use :ocv:func:`gpu::convolve`.
-
     :param anchor: Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.
 
-.. note:: This filter does not check out-of-border accesses, so only a proper sub-matrix of a bigger matrix has to be passed to it.
+    :param borderType: Pixel extrapolation method. For details, see :ocv:func:`borderInterpolate` .
 
 .. seealso:: :ocv:func:`createLinearFilter`
 
@@ -393,9 +391,9 @@ gpu::filter2D
 -----------------
 Applies the non-separable 2D linear filter to an image.
 
-.. ocv:function:: void gpu::filter2D(const GpuMat& src, GpuMat& dst, int ddepth, const Mat& kernel, Point anchor=Point(-1,-1), Stream& stream = Stream::Null())
+.. ocv:function:: void gpu::filter2D(const GpuMat& src, GpuMat& dst, int ddepth, const Mat& kernel, Point anchor=Point(-1,-1), int borderType = BORDER_DEFAULT, Stream& stream = Stream::Null())
 
-    :param src: Source image. ``CV_8UC1`` , ``CV_8UC4`` and ``CV_32FC1`` source types are supported.
+    :param src: Source image. Supports ``CV_8U`` , ``CV_16U`` and ``CV_32F`` one and four channel images.
 
     :param dst: Destination image. The size and the number of channels is the same as ``src`` .
 
@@ -405,9 +403,9 @@ Applies the non-separable 2D linear filter to an image.
 
     :param anchor: Anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor resides within the kernel. The special default value (-1,-1) means that the anchor is at the kernel center.
 
-    :param stream: Stream for the asynchronous version.
+    :param borderType: Pixel extrapolation method. For details, see :ocv:func:`borderInterpolate` .
 
-.. note:: This filter does not check out-of-border accesses, so only a proper sub-matrix of a bigger matrix has to be passed to it.
+    :param stream: Stream for the asynchronous version.
 
 .. seealso:: :ocv:func:`filter2D`, :ocv:func:`gpu::convolve`
 
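A usage sketch with the new ``borderType`` argument, assuming ``d_src`` is a ``CV_8UC4`` ``GpuMat``. ::

    cv::Mat kernel = cv::Mat::ones(3, 3, CV_32F) / 9.0f;   // 3x3 box filter
    cv::gpu::GpuMat d_dst;
    cv::gpu::filter2D(d_src, d_dst, d_src.depth(), kernel,
                      cv::Point(-1, -1), cv::BORDER_REPLICATE);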
@@ -417,7 +415,7 @@ gpu::Laplacian
 ------------------
 Applies the Laplacian operator to an image.
 
-.. ocv:function:: void gpu::Laplacian(const GpuMat& src, GpuMat& dst, int ddepth, int ksize = 1, double scale = 1, Stream& stream = Stream::Null())
+.. ocv:function:: void gpu::Laplacian(const GpuMat& src, GpuMat& dst, int ddepth, int ksize = 1, double scale = 1, int borderType = BORDER_DEFAULT, Stream& stream = Stream::Null())
 
     :param src: Source image. ``CV_8UC1`` and ``CV_8UC4`` source types are supported.
 
@@ -429,6 +427,8 @@ Applies the Laplacian operator to an image.
 
     :param scale: Optional scale factor for the computed Laplacian values. By default, no scaling is applied (see :ocv:func:`getDerivKernels` ).
 
+    :param borderType: Pixel extrapolation method. For details, see :ocv:func:`borderInterpolate` .
+
     :param stream: Stream for the asynchronous version.
 
 .. note:: This filter does not check out-of-border accesses, so only a proper sub-matrix of a bigger matrix has to be passed to it.
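A short sketch of where the new ``borderType`` argument goes, assuming ``d_src`` is a ``CV_8UC1`` ``GpuMat``. ::

    cv::gpu::GpuMat d_dst;
    // ddepth set to the source depth, which is the only depth the GPU version supports
    cv::gpu::Laplacian(d_src, d_dst, d_src.depth(), 1, 1.0, cv::BORDER_DEFAULT);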
@@ -256,7 +256,7 @@ Class providing a memory buffer for :ocv:func:`gpu::convolve` function, plus it
        void create(Size image_size, Size templ_size);
        static Size estimateBlockSize(Size result_size, Size templ_size);
    };
 
 You can use the field `user_block_size` to set a specific block size for the :ocv:func:`gpu::convolve` function. If you leave its default value `Size(0,0)`, automatic estimation of the block size will be used (which is optimized for speed). By varying `user_block_size` you can reduce memory requirements at the cost of speed.
 
 gpu::ConvolveBuf::create
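A sketch of tuning ``user_block_size``, assuming ``d_image`` and ``d_templ`` are single-channel ``CV_32F`` images already on the GPU. ::

    cv::gpu::ConvolveBuf buf;
    buf.user_block_size = cv::Size(256, 256);   // smaller blocks: less memory, more kernel launches
    cv::gpu::GpuMat d_result;
    cv::gpu::convolve(d_image, d_templ, d_result, false /*ccorr*/, buf);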
@@ -283,7 +283,7 @@ Computes a convolution (or cross-correlation) of two images.
     :param ccorr: Flags to evaluate cross-correlation instead of convolution.
 
     :param buf: Optional buffer to avoid extra memory allocations and to adjust some specific parameters. See :ocv:class:`gpu::ConvolveBuf`.
 
     :param stream: Stream for the asynchronous version.
 
 .. seealso:: :ocv:func:`gpu::filter2D`
@@ -320,9 +320,9 @@ Computes a proximity map for a raster template and an image where the template i
     :param result: Map containing comparison results ( ``CV_32FC1`` ). If ``image`` is *W x H* and ``templ`` is *w x h*, then ``result`` must be *W-w+1 x H-h+1*.
 
     :param method: Specifies the way to compare the template with the image.
 
     :param buf: Optional buffer to avoid extra memory allocations and to adjust some specific parameters. See :ocv:class:`gpu::MatchTemplateBuf`.
 
     :param stream: Stream for the asynchronous version.
 
 The following methods are supported for the ``CV_8U`` depth images for now:
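A usage sketch, assuming ``d_image`` and ``d_templ`` are ``CV_8U`` images already on the GPU. ::

    cv::gpu::GpuMat d_result;                    // CV_32FC1, (W-w+1) x (H-h+1)
    cv::gpu::matchTemplate(d_image, d_templ, d_result, CV_TM_CCORR);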
@@ -355,7 +355,7 @@ Applies a generic geometrical transformation to an image.
     :param xmap: X values. Only ``CV_32FC1`` type is supported.
 
     :param ymap: Y values. Only ``CV_32FC1`` type is supported.
 
     :param interpolation: Interpolation method (see :ocv:func:`resize` ). ``INTER_NEAREST`` , ``INTER_LINEAR`` and ``INTER_CUBIC`` are supported for now.
 
     :param borderMode: Pixel extrapolation method (see :ocv:func:`borderInterpolate` ). ``BORDER_REFLECT101`` , ``BORDER_REPLICATE`` , ``BORDER_CONSTANT`` , ``BORDER_REFLECT`` and ``BORDER_WRAP`` are supported for now.
@@ -495,6 +495,28 @@ Applies an affine transformation to an image.
 
 
 
+gpu::buildWarpAffineMaps
+------------------------
+Builds transformation maps for affine transformation.
+
+.. ocv:function:: void buildWarpAffineMaps(const Mat& M, bool inverse, Size dsize, GpuMat& xmap, GpuMat& ymap, Stream& stream = Stream::Null())
+
+    :param M: *2x3* transformation matrix.
+
+    :param inverse: Flag specifying that ``M`` is an inverse transformation ( ``dst=>src`` ).
+
+    :param dsize: Size of the destination image.
+
+    :param xmap: X values with ``CV_32FC1`` type.
+
+    :param ymap: Y values with ``CV_32FC1`` type.
+
+    :param stream: Stream for the asynchronous version.
+
+.. seealso:: :ocv:func:`gpu::warpAffine` , :ocv:func:`gpu::remap`
+
+
+
 gpu::warpPerspective
 ------------------------
 Applies a perspective transformation to an image.
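A sketch of the intended pairing with :ocv:func:`gpu::remap`, assuming ``d_src`` is already on the GPU; the perspective variant documented below works the same way with a *3x3* matrix. ::

    cv::Mat M = cv::getRotationMatrix2D(cv::Point2f(100.f, 100.f), 30.0, 1.0);  // 2x3
    cv::gpu::GpuMat d_xmap, d_ymap, d_dst;
    cv::gpu::buildWarpAffineMaps(M, false /*inverse*/, d_src.size(), d_xmap, d_ymap);
    cv::gpu::remap(d_src, d_dst, d_xmap, d_ymap, cv::INTER_LINEAR);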
@@ -517,6 +539,28 @@ Applies a perspective transformation to an image.
 
 
 
+gpu::buildWarpPerspectiveMaps
+-----------------------------
+Builds transformation maps for perspective transformation.
+
+.. ocv:function:: void buildWarpPerspectiveMaps(const Mat& M, bool inverse, Size dsize, GpuMat& xmap, GpuMat& ymap, Stream& stream = Stream::Null())
+
+    :param M: *3x3* transformation matrix.
+
+    :param inverse: Flag specifying that ``M`` is an inverse transformation ( ``dst=>src`` ).
+
+    :param dsize: Size of the destination image.
+
+    :param xmap: X values with ``CV_32FC1`` type.
+
+    :param ymap: Y values with ``CV_32FC1`` type.
+
+    :param stream: Stream for the asynchronous version.
+
+.. seealso:: :ocv:func:`gpu::warpPerspective` , :ocv:func:`gpu::remap`
+
+
+
 gpu::rotate
 ---------------
 Rotates an image around the origin (0,0) and then shifts it.
@@ -562,7 +606,7 @@ Forms a border around an image.
     :param right: Number of pixels in each direction from the source image rectangle to extrapolate. For example: ``top=1, bottom=1, left=1, right=1`` mean that 1 pixel-wide border needs to be built.
 
     :param borderType: Border type. See :ocv:func:`borderInterpolate` for details. ``BORDER_REFLECT101`` , ``BORDER_REPLICATE`` , ``BORDER_CONSTANT`` , ``BORDER_REFLECT`` and ``BORDER_WRAP`` are supported for now.
 
     :param value: Border value.
 
     :param stream: Stream for the asynchronous version.
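A usage sketch for the border-forming call described above, assuming ``d_src`` is already on the GPU. ::

    cv::gpu::GpuMat d_dst;
    cv::gpu::copyMakeBorder(d_src, d_dst, 1, 1, 1, 1,
                            cv::BORDER_CONSTANT, cv::Scalar::all(0));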
@@ -796,17 +840,17 @@ Composites two images using alpha opacity values contained in each image.
     :param alpha_op: Flag specifying the alpha-blending operation:
 
         * **ALPHA_OVER**
         * **ALPHA_IN**
         * **ALPHA_OUT**
         * **ALPHA_ATOP**
         * **ALPHA_XOR**
         * **ALPHA_PLUS**
         * **ALPHA_OVER_PREMUL**
         * **ALPHA_IN_PREMUL**
         * **ALPHA_OUT_PREMUL**
         * **ALPHA_ATOP_PREMUL**
         * **ALPHA_XOR_PREMUL**
         * **ALPHA_PLUS_PREMUL**
         * **ALPHA_PREMUL**
 
     :param stream: Stream for the asynchronous version.
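A usage sketch, assuming ``d_img1`` and ``d_img2`` are ``CV_8UC4`` images already on the GPU. ::

    cv::gpu::GpuMat d_dst;
    cv::gpu::alphaComp(d_img1, d_img2, d_dst, cv::gpu::ALPHA_OVER);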
@@ -14,13 +14,13 @@ Performs generalized matrix multiplication.
     :param src1: First multiplied input matrix that should have ``CV_32FC1`` , ``CV_64FC1`` , ``CV_32FC2`` , or ``CV_64FC2`` type.
 
     :param src2: Second multiplied input matrix of the same type as ``src1`` .
 
     :param alpha: Weight of the matrix product.
 
     :param src3: Third optional delta matrix added to the matrix product. It should have the same type as ``src1`` and ``src2`` .
 
     :param beta: Weight of ``src3`` .
 
     :param dst: Destination matrix. It has the proper size and the same type as input matrices.
 
     :param flags: Operation flags:
@@ -30,13 +30,15 @@ Performs generalized matrix multiplication.
         * **GEMM_3_T** transpose ``src3``
 
     :param stream: Stream for the asynchronous version.
 
 The function performs generalized matrix multiplication similar to the ``gemm`` functions in BLAS level 3. For example, ``gemm(src1, src2, alpha, src3, beta, dst, GEMM_1_T + GEMM_3_T)`` corresponds to
 
 .. math::
 
     \texttt{dst} = \texttt{alpha} \cdot \texttt{src1} ^T \cdot \texttt{src2} + \texttt{beta} \cdot \texttt{src3} ^T
 
+.. note:: Transposition operation doesn't support ``CV_64FC2`` input type.
+
 .. seealso:: :ocv:func:`gemm`
 
 
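A sketch matching the formula above, assuming ``d_src1``, ``d_src2`` and ``d_src3`` are ``CV_32FC1`` matrices of compatible sizes (the GPU ``gemm`` needs a CUBLAS-enabled build). ::

    cv::gpu::GpuMat d_dst;
    // dst = 1.0 * src1^T * src2 + 1.0 * src3^T
    cv::gpu::gemm(d_src1, d_src2, 1.0, d_src3, 1.0, d_dst, cv::GEMM_1_T + cv::GEMM_3_T);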
@@ -89,9 +89,9 @@ Constructor.
     :param minDistance: Minimum possible Euclidean distance between the returned corners.
 
     :param blockSize: Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. See :ocv:func:`cornerEigenValsAndVecs` .
 
     :param useHarrisDetector: Parameter indicating whether to use a Harris detector (see :ocv:func:`gpu::cornerHarris`) or :ocv:func:`gpu::cornerMinEigenVal`.
 
     :param harrisK: Free parameter of the Harris detector.
 
 
@@ -100,7 +100,7 @@ gpu::GoodFeaturesToTrackDetector_GPU::operator ()
 -------------------------------------------------
 Finds the most prominent corners in the image.
 
 .. ocv:function:: void gpu::GoodFeaturesToTrackDetector_GPU::operator ()(const GpuMat& image, GpuMat& corners, const GpuMat& mask = GpuMat())
 
     :param image: Input 8-bit, single-channel image.
 
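A usage sketch, assuming ``d_gray`` is an 8-bit single-channel ``GpuMat``. ::

    cv::gpu::GoodFeaturesToTrackDetector_GPU detector(1000 /*maxCorners*/,
                                                      0.01 /*qualityLevel*/,
                                                      10.0 /*minDistance*/);
    cv::gpu::GpuMat d_corners;                   // 1 x N, CV_32FC2
    detector(d_gray, d_corners);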
@@ -202,6 +202,7 @@ Class used for calculating an optical flow. ::
        double derivLambda;
        bool useInitialFlow;
        float minEigThreshold;
+       bool getMinEigenVals;
 
        void releaseMemory();
    };
@@ -228,7 +229,7 @@ Calculate an optical flow for a sparse feature set.
 
     :param status: Output status vector (CV_8UC1 type). Each element of the vector is set to 1 if the flow for the corresponding features has been found. Otherwise, it is set to 0.
 
-    :param err: Output vector (CV_32FC1 type) that contains min eigen value. It can be NULL, if not needed.
+    :param err: Output vector (CV_32FC1 type) that contains the difference between patches around the original and moved points, or the min eigen value if ``getMinEigenVals`` is checked. It can be NULL, if not needed.
 
 .. seealso:: :ocv:func:`calcOpticalFlowPyrLK`
 
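A sketch of the sparse call with the new ``getMinEigenVals`` flag and ``err`` output, assuming ``d_prev`` and ``d_next`` are 8-bit grayscale frames on the GPU and ``d_prevPts`` holds the points to track (for example, the ``d_corners`` output of ``GoodFeaturesToTrackDetector_GPU``). ::

    cv::gpu::PyrLKOpticalFlow pyrlk;
    pyrlk.getMinEigenVals = false;               // err = patch difference instead of min eigenvalue
    cv::gpu::GpuMat d_nextPts, d_status, d_err;
    pyrlk.sparse(d_prev, d_next, d_prevPts, d_nextPts, d_status, &d_err);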
@@ -244,11 +245,11 @@ Calculate dense optical flow.
 
     :param nextImg: Second input image of the same size and the same type as ``prevImg`` .
 
     :param u: Horizontal component of the optical flow of the same size as input images, 32-bit floating-point, single-channel
 
     :param v: Vertical component of the optical flow of the same size as input images, 32-bit floating-point, single-channel
 
-    :param err: Output vector (CV_32FC1 type) that contains min eigen value. It can be NULL, if not needed.
+    :param err: Output vector (CV_32FC1 type) that contains the difference between patches around the original and moved points, or the min eigen value if ``getMinEigenVals`` is checked. It can be NULL, if not needed.
 
 
 
@@ -283,7 +283,7 @@ CV_EXPORTS Ptr<FilterEngine_GPU> createMorphologyFilter_GPU(int op, int type, co
                                                             const Point& anchor = Point(-1,-1), int iterations = 1);
 
 //! returns 2D filter with the specified kernel
-//! supports CV_8UC1 and CV_8UC4 types
+//! supports CV_8U, CV_16U and CV_32F one and four channel image
 CV_EXPORTS Ptr<BaseFilter_GPU> getLinearFilter_GPU(int srcType, int dstType, const Mat& kernel, Point anchor = Point(-1, -1), int borderType = BORDER_DEFAULT);
 
 //! returns the non-separable linear filter engine
@@ -1458,12 +1458,13 @@ public:
     //! finds the keypoints using fast hessian detector used in SURF
     //! supports CV_8UC1 images
     //! keypoints will have nFeature cols and 6 rows
-    //! keypoints.ptr<float>(SF_X)[i] will contain x coordinate of i'th feature
-    //! keypoints.ptr<float>(SF_Y)[i] will contain y coordinate of i'th feature
-    //! keypoints.ptr<float>(SF_LAPLACIAN)[i] will contain laplacian sign of i'th feature
-    //! keypoints.ptr<float>(SF_SIZE)[i] will contain size of i'th feature
-    //! keypoints.ptr<float>(SF_DIR)[i] will contain orientation of i'th feature
-    //! keypoints.ptr<float>(SF_HESSIAN)[i] will contain response of i'th feature
+    //! keypoints.ptr<float>(X_ROW)[i] will contain x coordinate of i'th feature
+    //! keypoints.ptr<float>(Y_ROW)[i] will contain y coordinate of i'th feature
+    //! keypoints.ptr<float>(LAPLACIAN_ROW)[i] will contain laplacian sign of i'th feature
+    //! keypoints.ptr<float>(OCTAVE_ROW)[i] will contain octave of i'th feature
+    //! keypoints.ptr<float>(SIZE_ROW)[i] will contain size of i'th feature
+    //! keypoints.ptr<float>(ANGLE_ROW)[i] will contain orientation of i'th feature
+    //! keypoints.ptr<float>(HESSIAN_ROW)[i] will contain response of i'th feature
     void operator()(const GpuMat& img, const GpuMat& mask, GpuMat& keypoints);
     //! finds the keypoints and computes their descriptors.
     //! Optionally it can compute descriptors for the user-provided keypoints and recompute keypoints direction