Tutorials: moved imgcodecs and videoio tutorials to separate pages
Binary file not shown.
@@ -1,82 +0,0 @@
Using Creative Senz3D and other Intel Perceptual Computing SDK compatible depth sensors {#tutorial_intelperc}
=======================================================================================

Depth sensors compatible with the Intel Perceptual Computing SDK are supported through the VideoCapture
class. The depth map, the RGB image and several other output formats can be retrieved through the
familiar VideoCapture interface.

In order to use a depth sensor with OpenCV you should complete the following preliminary steps:

-#  Install the Intel Perceptual Computing SDK (from <http://www.intel.com/software/perceptual>).

-#  Configure OpenCV with Intel Perceptual Computing SDK support by setting the WITH_INTELPERC flag in
    CMake. If the Intel Perceptual Computing SDK is found in the install folders, OpenCV will be built
    with the Intel Perceptual Computing SDK library (see the INTELPERC status in the CMake log). If the
    CMake process does not find the Intel Perceptual Computing SDK installation folder automatically,
    change the corresponding CMake variables INTELPERC_LIB_DIR and INTELPERC_INCLUDE_DIR to the
    proper values.

-#  Build OpenCV.

VideoCapture can retrieve the following data:

-#  data given from the depth generator:
    -   CAP_INTELPERC_DEPTH_MAP - each pixel is a 16-bit integer. The value indicates the
        distance from an object to the camera's XY plane or the Cartesian depth. (CV_16UC1)
    -   CAP_INTELPERC_UVDEPTH_MAP - each pixel contains two 32-bit floating point values in
        the range 0-1, representing the mapping of depth coordinates to color
        coordinates. (CV_32FC2) A usage sketch follows the grab/retrieve example below.
    -   CAP_INTELPERC_IR_MAP - each pixel is a 16-bit integer. The value indicates the
        intensity of the reflected laser beam. (CV_16UC1)

-#  data given from the RGB image generator:
    -   CAP_INTELPERC_IMAGE - color image. (CV_8UC3)

In order to get the depth map from the depth sensor use VideoCapture::operator \>\>, e.g.:
@code{.cpp}
    VideoCapture capture( CAP_INTELPERC );
    for(;;)
    {
        Mat depthMap;
        capture >> depthMap;

        if( waitKey( 30 ) >= 0 )
            break;
    }
@endcode
For getting several data maps use VideoCapture::grab and VideoCapture::retrieve, e.g.:
@code{.cpp}
    VideoCapture capture(CAP_INTELPERC);
    for(;;)
    {
        Mat depthMap;
        Mat image;
        Mat irImage;

        capture.grab();

        capture.retrieve( depthMap, CAP_INTELPERC_DEPTH_MAP );
        capture.retrieve( image, CAP_INTELPERC_IMAGE );
        capture.retrieve( irImage, CAP_INTELPERC_IR_MAP );

        if( waitKey( 30 ) >= 0 )
            break;
    }
@endcode
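The UV-depth map described above can be used to look up, for each depth pixel, the matching pixel in
the color image: each normalized (u, v) value points into the color image. The snippet below is a
minimal sketch of that lookup, assuming the capture, depthMap and image objects from the example
above; the variable names and the bounds check are illustrative only.
@code{.cpp}
    Mat uvMap;
    capture.retrieve( uvMap, CAP_INTELPERC_UVDEPTH_MAP ); // CV_32FC2, values in the range 0-1

    int x = depthMap.cols / 2, y = depthMap.rows / 2;      // some depth pixel of interest
    Vec2f uv = uvMap.at<Vec2f>(y, x);
    if( uv[0] >= 0 && uv[0] <= 1 && uv[1] >= 0 && uv[1] <= 1 )
    {
        int colorX = cvRound( uv[0] * (image.cols - 1) );  // normalized u -> image column
        int colorY = cvRound( uv[1] * (image.rows - 1) );  // normalized v -> image row
        Vec3b bgr = image.at<Vec3b>(colorY, colorX);       // color seen at this depth pixel
    }
@endcode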
For setting and getting some properties of the sensor's data generators use the VideoCapture::set and
VideoCapture::get methods respectively, e.g.:
@code{.cpp}
    VideoCapture capture( CAP_INTELPERC );
    capture.set( CAP_INTELPERC_DEPTH_GENERATOR | CAP_PROP_INTELPERC_PROFILE_IDX, 0 );
    cout << "FPS " << capture.get( CAP_INTELPERC_DEPTH_GENERATOR+CAP_PROP_FPS ) << endl;
@endcode
Since two types of sensor data generators are supported (an image generator and a depth generator),
there are two flags that should be used to set/get properties of the needed generator:

-   CAP_INTELPERC_IMAGE_GENERATOR -- a flag for accessing the image generator properties.
-   CAP_INTELPERC_DEPTH_GENERATOR -- a flag for accessing the depth generator properties. This
    flag value is assumed by default if neither of the two possible values is set explicitly.

For more information please refer to the usage example
[intelperc_capture.cpp](https://github.com/Itseez/opencv/tree/master/samples/cpp/intelperc_capture.cpp)
in the opencv/samples/cpp folder.
@@ -1,138 +0,0 @@
Using Kinect and other OpenNI compatible depth sensors {#tutorial_kinect_openni}
======================================================

Depth sensors compatible with OpenNI (Kinect, XtionPRO, ...) are supported through the VideoCapture
class. The depth map, the BGR image and several other output formats can be retrieved through the
familiar VideoCapture interface.

In order to use a depth sensor with OpenCV you should complete the following preliminary steps:

-#  Install the OpenNI library (from <http://www.openni.org/downloadfiles>) and the PrimeSensor Module
    for OpenNI (from <https://github.com/avin2/SensorKinect>). The installation should be done
    to the default folders listed in the instructions of these products, e.g.:
    @code{.text}
    OpenNI:
        Linux & MacOSX:
            Libs into: /usr/lib
            Includes into: /usr/include/ni
        Windows:
            Libs into: c:/Program Files/OpenNI/Lib
            Includes into: c:/Program Files/OpenNI/Include
    PrimeSensor Module:
        Linux & MacOSX:
            Bins into: /usr/bin
        Windows:
            Bins into: c:/Program Files/Prime Sense/Sensor/Bin
    @endcode
    If one or both products were installed to other folders, change the corresponding CMake
    variables OPENNI_LIB_DIR, OPENNI_INCLUDE_DIR and/or OPENNI_PRIME_SENSOR_MODULE_BIN_DIR
    accordingly.

-#  Configure OpenCV with OpenNI support by setting the WITH_OPENNI flag in CMake. If OpenNI is found
    in the install folders, OpenCV will be built with the OpenNI library (see the OpenNI status in the
    CMake log) even if the PrimeSensor Modules are not found (see the OpenNI PrimeSensor Modules status
    in the CMake log). Without the PrimeSensor module OpenCV will still compile successfully with the
    OpenNI library, but VideoCapture will not grab data from a Kinect sensor.

-#  Build OpenCV.

VideoCapture can retrieve the following data:

-#  data given from the depth generator:
    -   CAP_OPENNI_DEPTH_MAP - depth values in mm (CV_16UC1)
    -   CAP_OPENNI_POINT_CLOUD_MAP - XYZ in meters (CV_32FC3); a usage sketch follows the
        grab/retrieve example below
    -   CAP_OPENNI_DISPARITY_MAP - disparity in pixels (CV_8UC1)
    -   CAP_OPENNI_DISPARITY_MAP_32F - disparity in pixels (CV_32FC1)
    -   CAP_OPENNI_VALID_DEPTH_MASK - mask of valid pixels (not occluded, not shaded, etc.)
        (CV_8UC1)

-#  data given from the BGR image generator:
    -   CAP_OPENNI_BGR_IMAGE - color image (CV_8UC3)
    -   CAP_OPENNI_GRAY_IMAGE - gray image (CV_8UC1)

In order to get the depth map from the depth sensor use VideoCapture::operator \>\>, e.g.:
@code{.cpp}
    VideoCapture capture( CAP_OPENNI );
    for(;;)
    {
        Mat depthMap;
        capture >> depthMap;

        if( waitKey( 30 ) >= 0 )
            break;
    }
@endcode
For getting several data maps use VideoCapture::grab and VideoCapture::retrieve, e.g.:
@code{.cpp}
    VideoCapture capture(0); // or CAP_OPENNI
    for(;;)
    {
        Mat depthMap;
        Mat bgrImage;

        capture.grab();

        capture.retrieve( depthMap, CAP_OPENNI_DEPTH_MAP );
        capture.retrieve( bgrImage, CAP_OPENNI_BGR_IMAGE );

        if( waitKey( 30 ) >= 0 )
            break;
    }
@endcode
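The point cloud map and the valid depth mask listed above can be combined to keep only reliably
measured 3D points. The snippet below is a minimal sketch of that idea, meant to sit inside the same
grab/retrieve loop as above; the variable names are illustrative only.
@code{.cpp}
    Mat pointCloud, validMask;
    capture.retrieve( pointCloud, CAP_OPENNI_POINT_CLOUD_MAP );  // CV_32FC3, XYZ in meters
    capture.retrieve( validMask, CAP_OPENNI_VALID_DEPTH_MASK );  // CV_8UC1, non-zero = valid

    std::vector<Point3f> points;
    for( int y = 0; y < pointCloud.rows; y++ )
        for( int x = 0; x < pointCloud.cols; x++ )
            if( validMask.at<uchar>(y, x) )                      // skip occluded/shaded pixels
                points.push_back( pointCloud.at<Point3f>(y, x) );
@endcode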
For setting and getting some properties of the sensor's data generators use the VideoCapture::set and
VideoCapture::get methods respectively, e.g.:
@code{.cpp}
    VideoCapture capture( CAP_OPENNI );
    capture.set( CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE, CAP_OPENNI_VGA_30HZ );
    cout << "FPS " << capture.get( CAP_OPENNI_IMAGE_GENERATOR+CAP_PROP_FPS ) << endl;
@endcode
Since two types of sensor data generators are supported (an image generator and a depth generator),
there are two flags that should be used to set/get properties of the needed generator:

-   CAP_OPENNI_IMAGE_GENERATOR -- a flag for accessing the image generator properties.
-   CAP_OPENNI_DEPTH_GENERATOR -- a flag for accessing the depth generator properties. This flag
    value is assumed by default if neither of the two possible values is set explicitly.

Some depth sensors (for example XtionPRO) do not have an image generator. In order to check for it you
can get the CAP_OPENNI_IMAGE_GENERATOR_PRESENT property.
@code{.cpp}
    bool isImageGeneratorPresent = capture.get( CAP_PROP_OPENNI_IMAGE_GENERATOR_PRESENT ) != 0; // or == 1
@endcode
Flags specifying the needed generator type must be used in combination with a particular generator
property. The following properties of cameras available through OpenNI interfaces are supported (a
combined set/get sketch follows the list):

-   For image generator:

    -   CAP_PROP_OPENNI_OUTPUT_MODE -- Three output modes are supported: CAP_OPENNI_VGA_30HZ,
        used by default (the image generator returns images in VGA resolution at 30 FPS),
        CAP_OPENNI_SXGA_15HZ (the image generator returns images in SXGA resolution at 15 FPS) and
        CAP_OPENNI_SXGA_30HZ (the image generator returns images in SXGA resolution at 30 FPS; this
        mode is supported by the XtionPRO Live). The depth generator's maps are always in VGA
        resolution.

-   For depth generator:

    -   CAP_PROP_OPENNI_REGISTRATION -- Flag that registers the remapping of the depth map to the
        image map by changing the depth generator's view point (if the flag is "on") or sets this
        view point to its normal one (if the flag is "off"). The images resulting from the
        registration process are pixel-aligned, which means that every pixel in the image is aligned
        to a pixel in the depth image.

    The following properties are available for getting only:

    -   CAP_PROP_OPENNI_FRAME_MAX_DEPTH -- The maximum supported depth of the Kinect in mm.
    -   CAP_PROP_OPENNI_BASELINE -- Baseline value in mm.
    -   CAP_PROP_OPENNI_FOCAL_LENGTH -- Focal length in pixels.
    -   CAP_PROP_FRAME_WIDTH -- Frame width in pixels.
    -   CAP_PROP_FRAME_HEIGHT -- Frame height in pixels.
    -   CAP_PROP_FPS -- Frame rate in FPS.

-   Some typical flag combinations "generator type + property" are defined as single flags:

    -   CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE = CAP_OPENNI_IMAGE_GENERATOR + CAP_PROP_OPENNI_OUTPUT_MODE
    -   CAP_OPENNI_DEPTH_GENERATOR_BASELINE = CAP_OPENNI_DEPTH_GENERATOR + CAP_PROP_OPENNI_BASELINE
    -   CAP_OPENNI_DEPTH_GENERATOR_FOCAL_LENGTH = CAP_OPENNI_DEPTH_GENERATOR + CAP_PROP_OPENNI_FOCAL_LENGTH
    -   CAP_OPENNI_DEPTH_GENERATOR_REGISTRATION = CAP_OPENNI_DEPTH_GENERATOR + CAP_PROP_OPENNI_REGISTRATION
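
As a rough illustration of how these combined flags might be used together (assuming a capture
opened with CAP_OPENNI as above), one could switch registration on and query a few depth generator
parameters; the exact values printed depend on the sensor:
@code{.cpp}
    capture.set( CAP_OPENNI_DEPTH_GENERATOR_REGISTRATION, 1 );  // align depth to the color image

    double baseline = capture.get( CAP_OPENNI_DEPTH_GENERATOR_BASELINE );      // mm
    double focal    = capture.get( CAP_OPENNI_DEPTH_GENERATOR_FOCAL_LENGTH );  // pixels
    double maxDepth = capture.get( CAP_OPENNI_DEPTH_GENERATOR + CAP_PROP_OPENNI_FRAME_MAX_DEPTH ); // mm
    cout << "baseline " << baseline << " focal " << focal << " max depth " << maxDepth << endl;
@endcode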

For more information please refer to the usage example
[openni_capture.cpp](https://github.com/Itseez/opencv/tree/master/samples/cpp/openni_capture.cpp) in
the opencv/samples/cpp folder.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -1,101 +0,0 @@
Reading Geospatial Raster files with GDAL {#tutorial_raster_io_gdal}
=========================================

Geospatial raster data is a heavily used product in Geographic Information Systems and
Photogrammetry. Raster data typically can represent imagery and Digital Elevation Models (DEM). The
standard library for loading GIS imagery is the Geographic Data Abstraction Library [(GDAL)](http://www.gdal.org). In this
example, we will show techniques for loading GIS raster formats using native OpenCV functions. In
addition, we will show an example of how OpenCV can use this data for novel and interesting
purposes.

Goals
-----

The primary objectives for this tutorial:

-   How to use OpenCV [imread](@ref imread) to load satellite imagery.
-   How to use OpenCV [imread](@ref imread) to load SRTM Digital Elevation Models.
-   Given the corner coordinates of both the image and DEM, correlate the elevation data to the
    image to find elevations for each pixel (a coordinate-interpolation sketch follows below).
-   Show a basic, easy-to-implement example of a terrain heat map.
-   Show a basic use of DEM data coupled with ortho-rectified imagery.

To implement these goals, the following code takes a Digital Elevation Model as well as a GeoTiff
image of San Francisco as input. The image and DEM data are processed to generate a terrain heat
map of the image and to label the areas of the city which would be affected should the water level
of the bay rise 10, 50, and 100 meters.
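
As a rough sketch of that correlation step, a pixel position can be mapped to geographic coordinates
by linear interpolation between the known corner coordinates of the raster. The helper below is
illustrative only (the function and variable names are not part of the sample); the full program in
the next section performs this mapping with its own helper functions.
@code{.cpp}
    // Linearly interpolate a pixel position to world coordinates, given the
    // coordinates of the raster's top-left and bottom-right corners.
    cv::Point2d pixel2world( int x, int y, cv::Size size,
                             cv::Point2d topLeft, cv::Point2d bottomRight )
    {
        double rx = (double)x / (size.width  - 1);   // relative position along the row
        double ry = (double)y / (size.height - 1);   // relative position down the column
        return cv::Point2d( topLeft.x + rx * (bottomRight.x - topLeft.x),
                            topLeft.y + ry * (bottomRight.y - topLeft.y) );
    }
@endcode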

Code
----

@include cpp/tutorial_code/HighGUI/GDAL_IO/gdal-image.cpp

How to Read Raster Data using GDAL
----------------------------------

This demonstration uses the default OpenCV imread function. The primary difference is that in order
to force GDAL to load the image, you must use the appropriate flag.
@code{.cpp}
    cv::Mat image = cv::imread( argv[1], cv::IMREAD_LOAD_GDAL );
@endcode
When loading digital elevation models, the actual numeric value of each pixel is essential and
cannot be scaled or truncated. For example, with image data a pixel represented as a double with a
value of 1 has an equal appearance to a pixel which is represented as an unsigned character with a
value of 255. With terrain data, the pixel value represents the elevation in meters. In order to
ensure that OpenCV preserves the native value, use the GDAL flag in imread with the ANYDEPTH flag.
@code{.cpp}
    cv::Mat dem = cv::imread( argv[2], cv::IMREAD_LOAD_GDAL | cv::IMREAD_ANYDEPTH );
@endcode
If you know beforehand the type of DEM model you are loading, then it may be a safe bet to test the
Mat::type() or Mat::depth() using an assert or other mechanism. NASA or DOD specification documents
can provide the input types for various elevation models. The major types, SRTM and DTED, are both
signed shorts.
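For instance, a minimal sketch of such a check for SRTM data, assuming the GDAL loader reports it as
a single-channel signed 16-bit matrix, could be:
@code{.cpp}
    // SRTM elevation data is stored as signed 16-bit integers, so a single-channel
    // signed-short Mat is expected here; adjust the check for other DEM formats.
    CV_Assert( dem.type() == CV_16SC1 );
@endcode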

Notes
-----

### Lat/Lon (Geographic) Coordinates should normally be avoided

The Geographic Coordinate System is a spherical coordinate system, meaning that using it with
Cartesian mathematics is technically incorrect. This demo uses it to increase readability and
is accurate enough to make the point. A better coordinate system would be Universal Transverse
Mercator.

### Finding the corner coordinates

One easy method to find the corner coordinates of an image is to use the command-line tool gdalinfo.
For imagery which is ortho-rectified and contains the projection information, you can use the
[USGS EarthExplorer](http://earthexplorer.usgs.gov).
@code{.bash}
\f$> gdalinfo N37W123.hgt

Driver: SRTMHGT/SRTMHGT File Format
Files: N37W123.hgt
Size is 3601, 3601
Coordinate System is:
GEOGCS["WGS 84",
    DATUM["WGS_1984",

... more output ...

Corner Coordinates:
Upper Left (-123.0001389, 38.0001389) (123d 0' 0.50"W, 38d 0' 0.50"N)
Lower Left (-123.0001389, 36.9998611) (123d 0' 0.50"W, 36d59'59.50"N)
Upper Right (-121.9998611, 38.0001389) (121d59'59.50"W, 38d 0' 0.50"N)
Lower Right (-121.9998611, 36.9998611) (121d59'59.50"W, 36d59'59.50"N)
Center (-122.5000000, 37.5000000) (122d30' 0.00"W, 37d30' 0.00"N)

... more output ...
@endcode
Results
-------

Below is the output of the program. Use the first image as the input. For the DEM model, download
the SRTM file located at the USGS here.
[<http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip>](http://dds.cr.usgs.gov/srtm/version2_1/SRTM1/Region_04/N37W123.hgt.zip)





@@ -1,8 +1,7 @@
High Level GUI and Media (highgui module) {#tutorial_table_of_content_highgui}
=========================================

This section contains valuable tutorials about how to read/save your image/video files and how to
use the built-in graphical user interface of the library.
This section contains tutorials about how to use the built-in graphical user interface of the library.

-   @subpage tutorial_trackbar

@@ -11,15 +10,3 @@ use the built-in graphical user interface of the library.

    *Author:* Ana Huamán

    We will learn how to add a Trackbar to our applications

-   @subpage tutorial_raster_io_gdal

    *Compatibility:* \> OpenCV 2.0

    *Author:* Marvin Smith

    Read common GIS Raster and DEM files to display and manipulate geographic data.

-   @subpage tutorial_kinect_openni

-   @subpage tutorial_intelperc