Merge branch 'master' of code.opencv.org:opencv
@@ -2446,6 +2446,6 @@ The above methods are usually enough for users. If you want to make your own alg
* Make a class and specify ``Algorithm`` as its base class.

* The algorithm parameters should be the class members. See ``Algorithm::get()`` for the list of possible types of the parameters.

* Add a public virtual method ``AlgorithmInfo* info() const;`` to your class.

* Add a constructor function, an ``AlgorithmInfo`` instance and implement the ``info()`` method. The simplest way is to take http://code.opencv.org/svn/opencv/trunk/opencv/modules/ml/src/ml_init.cpp as the reference and modify it according to the list of your parameters.

* Add a constructor function, an ``AlgorithmInfo`` instance and implement the ``info()`` method. The simplest way is to take http://code.opencv.org/projects/opencv/repository/revisions/master/entry/modules/ml/src/ml_init.cpp as the reference and modify it according to the list of your parameters.

* Add some public function (e.g. ``initModule_<mymodule>()``) that calls ``info()`` of your algorithm and put it into the same source file as the ``info()`` implementation. This is to force the C++ linker to include this object file into the target application. See ``Algorithm::create()`` for details. A minimal sketch of these steps follows below.
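The following is only a minimal sketch of the steps above, modeled on the 2.4-era ``Algorithm`` machinery; ``MyAlgo``, the ``threshold`` parameter and ``initModule_mymodule()`` are hypothetical names introduced here for illustration, not part of OpenCV::

    #include <opencv2/core/core.hpp>

    using namespace cv;

    // Hypothetical algorithm exposing one parameter through the Algorithm interface.
    class MyAlgo : public Algorithm
    {
    public:
        MyAlgo() : threshold(0.5) {}
        AlgorithmInfo* info() const;   // declared here, defined next to the AlgorithmInfo instance

        double threshold;              // readable via get<double>("threshold"), writable via set()
    };

    static Algorithm* createMyAlgo() { return new MyAlgo; }

    static AlgorithmInfo& myalgo_info()
    {
        static AlgorithmInfo info("MyModule.MyAlgo", createMyAlgo);
        return info;
    }

    AlgorithmInfo* MyAlgo::info() const
    {
        static volatile bool initialized = false;
        if( !initialized )
        {
            initialized = true;
            MyAlgo obj;
            myalgo_info().addParam(obj, "threshold", obj.threshold);
        }
        return &myalgo_info();
    }

    // Calling this from user code forces the linker to keep this object file.
    bool initModule_mymodule()
    {
        Ptr<Algorithm> a = createMyAlgo();
        return a->info() != 0;
    }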
@@ -42,7 +42,7 @@ You can always determine at runtime whether the OpenCV GPU-built binaries (or PT
Utilizing Multiple GPUs
-----------------------

In the current version, each of the OpenCV GPU algorithms can use only a single GPU. So, to utilize multiple GPUs, you have to manually distribute the work between GPUs.

Switching the active device can be done using the :ocv:func:`gpu::setDevice()` function. For more details please read the CUDA C Programming Guide.

While developing algorithms for multiple GPUs, note the data passing overhead. For primitive functions and small images, it can be significant, which may eliminate all the advantages of having multiple GPUs. But for high-level algorithms, consider using multi-GPU acceleration. For example, the Stereo Block Matching algorithm has been successfully parallelized using the following algorithm:
@@ -59,5 +59,5 @@ While developing algorithms for multiple GPUs, note a data passing overhead. For
With this algorithm, a dual GPU gave a 180% performance increase compared to the single Fermi GPU. For a source code example, see
http://code.opencv.org/svn/opencv/trunk/opencv/samples/gpu/.
http://code.opencv.org/projects/opencv/repository/revisions/master/entry/samples/gpu/.
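Beyond the full samples linked above, the sketch below illustrates the manual work distribution described in this section; ``processOnTwoGpus`` is a hypothetical helper, ``gpu::boxFilter`` is used only as a stand-in operation, and a single-channel input plus two CUDA devices are assumed::

    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    using namespace cv;

    // Processes the two halves of a single-channel image on two different GPUs.
    // Done sequentially here for brevity; for real overlap each half would be
    // handled by its own host thread, since gpu::setDevice() is per-thread state.
    void processOnTwoGpus(const Mat& src, Mat& dstTop, Mat& dstBottom)
    {
        CV_Assert(gpu::getCudaEnabledDeviceCount() >= 2 && src.type() == CV_8UC1);

        {
            gpu::setDevice(0);                                  // device 0 gets the top half
            gpu::GpuMat d_src(src.rowRange(0, src.rows / 2)), d_dst;
            gpu::boxFilter(d_src, d_dst, -1, Size(5, 5));       // any GPU operation would do
            d_dst.download(dstTop);
        }                                                       // device 0 buffers are released here
        {
            gpu::setDevice(1);                                  // switch the active device
            gpu::GpuMat d_src(src.rowRange(src.rows / 2, src.rows)), d_dst;
            gpu::boxFilter(d_src, d_dst, -1, Size(5, 5));
            d_dst.download(dstBottom);
        }
    }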
@@ -294,7 +294,7 @@ The methods/functions grab the next frame from video file or camera and return t
The primary use of the function is in multi-camera environments, especially when the cameras do not have hardware synchronization. That is, you call ``VideoCapture::grab()`` for each camera and after that call the slower method ``VideoCapture::retrieve()`` to decode and get a frame from each camera. This way the overhead of demosaicing or Motion JPEG decompression etc. is eliminated and the retrieved frames from different cameras will be closer in time.

Also, when a connected camera is multi-head (for example, a stereo camera or a Kinect device), the correct way of retrieving data from it is to call :ocv:func:`VideoCapture::grab` first and then call :ocv:func:`VideoCapture::retrieve` one or more times with different values of the ``channel`` parameter. See http://code.opencv.org/svn/opencv/trunk/opencv/samples/cpp/kinect_maps.cpp
Also, when a connected camera is multi-head (for example, a stereo camera or a Kinect device), the correct way of retrieving data from it is to call :ocv:func:`VideoCapture::grab` first and then call :ocv:func:`VideoCapture::retrieve` one or more times with different values of the ``channel`` parameter. See http://code.opencv.org/projects/opencv/repository/revisions/master/entry/samples/cpp/kinect_maps.cpp
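A hedged sketch of the grab-then-retrieve pattern for two unsynchronized cameras; the device indices 0 and 1 and the window names are assumptions for illustration::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>

    using namespace cv;

    int main()
    {
        VideoCapture cam0(0), cam1(1);            // two cameras, assumed to be present
        if (!cam0.isOpened() || !cam1.isOpened())
            return -1;

        Mat frame0, frame1;
        for (;;)
        {
            // Grab both frames first so they are as close in time as possible...
            if (!cam0.grab() || !cam1.grab())
                break;

            // ...then do the (slower) decoding for each camera.
            cam0.retrieve(frame0);
            cam1.retrieve(frame1);

            imshow("camera 0", frame0);
            imshow("camera 1", frame1);
            if (waitKey(30) >= 0)
                break;
        }
        return 0;
    }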
VideoCapture::retrieve
@@ -203,7 +203,7 @@ Sets mouse handler for the specified window
:param winname: Window name

:param onMouse: Mouse callback. See OpenCV samples, such as http://code.opencv.org/svn/opencv/trunk/opencv/samples/cpp/ffilldemo.cpp, on how to specify and use the callback.
:param onMouse: Mouse callback. See OpenCV samples, such as http://code.opencv.org/projects/opencv/repository/revisions/master/entry/samples/cpp/ffilldemo.cpp, on how to specify and use the callback.

:param userdata: The optional parameter passed to the callback.
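For reference, a minimal hedged sketch of registering such a callback; the window name, the dummy image and the printed message are arbitrary choices, not taken from the samples::

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <iostream>

    using namespace cv;

    // Prints the clicked coordinates; 'userdata' carries a pointer to the displayed image.
    static void onMouse(int event, int x, int y, int /*flags*/, void* userdata)
    {
        if (event == EVENT_LBUTTONDOWN)
        {
            const Mat* img = static_cast<const Mat*>(userdata);
            std::cout << "clicked (" << x << ", " << y << ") in a "
                      << img->cols << "x" << img->rows << " image" << std::endl;
        }
    }

    int main()
    {
        Mat img(480, 640, CV_8UC3, Scalar::all(128));
        namedWindow("demo");
        setMouseCallback("demo", onMouse, &img);   // pass the image as user data
        imshow("demo", img);
        waitKey(0);
        return 0;
    }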
@@ -202,7 +202,7 @@ Approximates a polygonal curve(s) with the specified precision.
The functions ``approxPolyDP`` approximate a curve or a polygon with another curve/polygon with fewer vertices so that the distance between them is less than or equal to the specified precision. It uses the Douglas-Peucker algorithm
http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm

See http://code.opencv.org/svn/opencv/trunk/opencv/samples/cpp/contours.cpp for the function usage model.
See http://code.opencv.org/projects/opencv/repository/revisions/master/entry/samples/cpp/contours.cpp for the function usage model.
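As a short illustration (a sketch, not the contours.cpp sample itself), the following assumes a binary input image and approximates each contour with a tolerance of 2% of its perimeter::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <vector>

    using namespace cv;

    // Approximates each contour of a binary image with a coarser polygon.
    // 'epsilon' is the maximum allowed distance between the original curve
    // and its approximation, here chosen as 2% of the contour perimeter.
    void approximateContours(const Mat& binaryImage, std::vector<std::vector<Point> >& polygons)
    {
        Mat work = binaryImage.clone();              // findContours modifies its input
        std::vector<std::vector<Point> > contours;
        findContours(work, contours, RETR_LIST, CHAIN_APPROX_SIMPLE);

        polygons.resize(contours.size());
        for (size_t i = 0; i < contours.size(); i++)
        {
            double epsilon = 0.02 * arcLength(contours[i], true);
            approxPolyDP(contours[i], polygons[i], epsilon, true);
        }
    }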
ApproxChains
@@ -21,7 +21,7 @@ The word "cascade" in the classifier name means that the resultant classifier co
The feature used in a particular classifier is specified by its shape (1a, 2b, etc.), position within the region of interest and the scale (this scale is not the same as the scale used at the detection stage, though these two scales are multiplied). For example, in the case of the third line feature (2c) the response is calculated as the difference between the sum of image pixels under the rectangle covering the whole feature (including the two white stripes and the black stripe in the middle) and the sum of the image pixels under the black stripe multiplied by 3 in order to compensate for the differences in the size of areas. The sums of pixel values over rectangular regions are calculated rapidly using integral images (see below and the :ocv:func:`integral` description).

To see the object detector at work, have a look at the facedetect demo:
http://code.opencv.org/svn/opencv/trunk/opencv/samples/cpp/facedetect.cpp
http://code.opencv.org/projects/opencv/repository/revisions/master/entry/samples/cpp/facedetect.cpp

The following reference is for the detection part only. There is a separate application called ``opencv_traincascade`` that can train a cascade of boosted classifiers from a set of samples.
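For orientation, a hedged sketch of the detection-side API only; the cascade file name is an assumption, and the facedetect sample linked above remains the complete reference program::

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/objdetect/objdetect.hpp>
    #include <vector>

    using namespace cv;

    // Returns the bounding boxes of detected faces in a BGR image.
    std::vector<Rect> detectFaces(const Mat& image)
    {
        // Path to a pretrained cascade shipped with OpenCV (adjust to your install).
        CascadeClassifier cascade("haarcascade_frontalface_alt.xml");
        CV_Assert(!cascade.empty());

        Mat gray;
        cvtColor(image, gray, COLOR_BGR2GRAY);
        equalizeHist(gray, gray);              // improves robustness to lighting

        std::vector<Rect> faces;
        cascade.detectMultiScale(gray, faces, 1.1, 3, 0, Size(30, 30));
        return faces;
    }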
@@ -444,7 +444,7 @@ inline int predictCategoricalStump( CascadeClassifier& cascade, Ptr<FeatureEvalu
    CascadeClassifier::Data::Stage* cascadeStages = &cascade.data.stages[0];

#ifdef HAVE_TEGRA_OPTIMIZATION
    float tmp; // float accumulator -- float operations are quicker
    float tmp = 0; // float accumulator -- float operations are quicker
#endif
    for( int si = 0; si < nstages; si++ )
    {
@@ -56,6 +56,7 @@ parse_patterns = (
    {'name': "tests_dir", 'default': None, 'pattern': re.compile("^EXECUTABLE_OUTPUT_PATH:PATH=(.+)$")},
    {'name': "build_type", 'default': "Release", 'pattern': re.compile("^CMAKE_BUILD_TYPE:STRING=(.*)$")},
    {'name': "svnversion_path", 'default': None, 'pattern': re.compile("^SVNVERSION_PATH:FILEPATH=(.*)$")},
    {'name': "git_executable", 'default': None, 'pattern': re.compile("^GIT_EXECUTABLE:FILEPATH=(.*)$")},
    {'name': "cxx_flags", 'default': "", 'pattern': re.compile("^CMAKE_CXX_FLAGS:STRING=(.*)$")},
    {'name': "cxx_flags_debug", 'default': "", 'pattern': re.compile("^CMAKE_CXX_FLAGS_DEBUG:STRING=(.*)$")},
    {'name': "cxx_flags_release", 'default': "", 'pattern': re.compile("^CMAKE_CXX_FLAGS_RELEASE:STRING=(.*)$")},
@@ -303,13 +304,15 @@ class RunInfo(object):
        # detect target arch
        if self.targetos == "android":
            if "armeabi-v7a" in self.android_abi:
                self.targetarch = "ARMv7a"
                self.targetarch = "armv7a"
            elif "armeabi-v6" in self.android_abi:
                self.targetarch = "ARMv6"
                self.targetarch = "armv6"
            elif "armeabi" in self.android_abi:
                self.targetarch = "ARMv5te"
                self.targetarch = "armv5te"
            elif "x86" in self.android_abi:
                self.targetarch = "x86"
            elif "mips" in self.android_abi:
                self.targetarch = "mips"
            else:
                self.targetarch = "ARM"
        elif self.is_x64 and hostmachine in ["AMD64", "x86_64"]:
@@ -327,19 +330,38 @@ class RunInfo(object):
        self.hardware = None

        self.getSvnVersion(self.cmake_home, "cmake_home_svn")
        self.cmake_home_vcver = self.getVCVersion(self.cmake_home)
        if self.opencv_home == self.cmake_home:
            self.opencv_home_svn = self.cmake_home_svn
            self.opencv_home_vcver = self.cmake_home_vcver
        else:
            self.getSvnVersion(self.opencv_home, "opencv_home_svn")
            self.opencv_home_vcver = self.getVCVersion(self.opencv_home)

        self.tests = self.getAvailableTestApps()

    def getSvnVersion(self, path, name):
    def getVCVersion(self, root_path):
        if os.path.isdir(os.path.join(root_path, ".svn")):
            return self.getSvnVersion(root_path)
        elif os.path.isdir(os.path.join(root_path, ".git")):
            return self.getGitHash(root_path)
        return None

    def getGitHash(self, path):
        if not path or not self.git_executable:
            return None
        try:
            output = Popen([self.git_executable, "rev-parse", "--short", "HEAD"], stdout=PIPE, stderr=PIPE, cwd = path).communicate()
            if not output[1]:
                return output[0].strip()
            else:
                return None
        except OSError:
            return None

    def getSvnVersion(self, path):
        if not path:
            val = None
        elif not self.svnversion_path and hostos == 'nt':
            val = self.tryGetSvnVersionWithTortoise(path, name)
            val = self.tryGetSvnVersionWithTortoise(path)
        else:
            svnversion = self.svnversion_path
            if not svnversion:
@@ -354,9 +376,9 @@ class RunInfo(object):
            val = None
        if val:
            val = val.replace(" ", "_")
        setattr(self, name, val)
        return val

    def tryGetSvnVersionWithTortoise(self, path, name):
    def tryGetSvnVersionWithTortoise(self, path):
        try:
            wcrev = "SubWCRev.exe"
            dir = tempfile.mkdtemp()
@@ -408,13 +430,13 @@ class RunInfo(object):
        if app.startswith(self.nameprefix):
            app = app[len(self.nameprefix):]

        if self.cmake_home_svn:
            if self.cmake_home_svn == self.opencv_home_svn:
                rev = self.cmake_home_svn
            elif self.opencv_home_svn:
                rev = self.cmake_home_svn + "-" + self.opencv_home_svn
        if self.cmake_home_vcver:
            if self.cmake_home_vcver == self.opencv_home_vcver:
                rev = self.cmake_home_vcver
            elif self.opencv_home_vcver:
                rev = self.cmake_home_vcver + "-" + self.opencv_home_vcver
            else:
                rev = self.cmake_home_svn
                rev = self.cmake_home_vcver
        else:
            rev = None
        if rev:
@@ -486,7 +508,6 @@ class RunInfo(object):
            else:
                prev_option = prev_option + " " + opt
        options.append(tmpfile[1])
        print options
        output = Popen(options, stdout=PIPE, stderr=PIPE).communicate()
        compiler_output = output[1]
        os.remove(tmpfile[1])
@@ -508,7 +529,7 @@ class RunInfo(object):
            hw = "CUDA_"
        else:
            hw = ""
        tstamp = timestamp.strftime("%Y-%m-%d--%H-%M-%S")
        tstamp = timestamp.strftime("%Y%m%d-%H%M%S")
        return "%s_%s_%s_%s%s%s.xml" % (app, self.targetos, self.targetarch, hw, rev, tstamp)

    def getTest(self, name):