resolved conflicts, updated the Retina class interface and optimized a heavy RetinaColor process

This commit is contained in:
alexandre benoit
2013-04-29 19:06:35 +02:00
2166 changed files with 342437 additions and 211189 deletions

View File

@@ -1 +1 @@
ocv_define_module(contrib opencv_imgproc opencv_calib3d opencv_features2d opencv_ml opencv_video opencv_objdetect OPTIONAL opencv_highgui)
ocv_define_module(contrib opencv_imgproc opencv_calib3d opencv_ml opencv_video opencv_objdetect OPTIONAL opencv_highgui)

View File

@@ -10,3 +10,4 @@ The module contains some recently added functionality that has not been stabiliz
stereo
FaceRecognizer Documentation <facerec/index>
Retina Documentation <retina/index>
openfabmap

View File

@@ -42,18 +42,18 @@ In OpenCV 2.4 you only need :ocv:func:`applyColorMap` to apply a colormap on a g
.. code-block:: cpp
#include <opencv2/contrib/contrib.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/contrib.hpp>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
int main(int argc, const char *argv[]) {
// Get the path to the image, if one was given as an argument.
string filename;
String filename;
if (argc > 1) {
filename = string(argv[1]);
filename = String(argv[1]);
}
// The following lines show how to apply a colormap on a given image
// and display it with cv::imshow. An exception is
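For reference, here is a minimal, self-contained sketch of the whole sample using the new-style headers shown above (the fallback file name is illustrative):

.. code-block:: cpp

    #include <opencv2/contrib.hpp>
    #include <opencv2/core.hpp>
    #include <opencv2/highgui.hpp>

    using namespace cv;

    int main(int argc, const char *argv[]) {
        // Use the first argument as the image path, or a default name.
        String filename = (argc > 1) ? String(argv[1]) : "input.jpg";
        // applyColorMap expects an 8-bit grayscale (or BGR) image.
        Mat img = imread(filename, IMREAD_GRAYSCALE);
        if (img.empty())
            return -1; // invalid path: nothing to show
        Mat colored;
        applyColorMap(img, colored, COLORMAP_JET);
        imshow("colormap", colored);
        waitKey(0);
        return 0;
    }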

View File

@@ -30,10 +30,10 @@ a unified access to all face recognition algorithms in OpenCV. ::
virtual void predict(InputArray src, int &label, double &confidence) const = 0;
// Serializes this object to a given filename.
virtual void save(const string& filename) const;
virtual void save(const String& filename) const;
// Deserializes this object from a given filename.
virtual void load(const string& filename);
virtual void load(const String& filename);
// Serializes this object to a given cv::FileStorage.
virtual void save(FileStorage& fs) const = 0;
@@ -52,7 +52,7 @@ I'll go a bit more into detail explaining :ocv:class:`FaceRecognizer`, because i
* So-called “virtual constructor”. That is, each Algorithm derivative is registered at program start, and you can get the list of registered algorithms and create an instance of a particular algorithm by its name (see :ocv:func:`Algorithm::create`). If you plan to add your own algorithms, it is good practice to add a unique prefix to them to distinguish them from other algorithms.
* Setting/Retrieving algorithm parameters by name. If you have used the video capturing functionality from the OpenCV highgui module, you are probably familiar with :ocv:cfunc:`cvSetCaptureProperty`, :ocv:cfunc:`cvGetCaptureProperty`, :ocv:func:`VideoCapture::set` and :ocv:func:`VideoCapture::get`. :ocv:class:`Algorithm` provides a similar mechanism where, instead of integer ids, you specify the parameter names as text strings (see the sketch after this list). See :ocv:func:`Algorithm::set` and :ocv:func:`Algorithm::get` for details.
* Setting/Retrieving algorithm parameters by name. If you have used the video capturing functionality from the OpenCV highgui module, you are probably familiar with :ocv:cfunc:`cvSetCaptureProperty`, :ocv:cfunc:`cvGetCaptureProperty`, :ocv:func:`VideoCapture::set` and :ocv:func:`VideoCapture::get`. :ocv:class:`Algorithm` provides a similar mechanism where, instead of integer ids, you specify the parameter names as text Strings (see the sketch after this list). See :ocv:func:`Algorithm::set` and :ocv:func:`Algorithm::get` for details.
* Reading and writing parameters from/to XML or YAML files. Every Algorithm derivative can store all its parameters and then read them back. There is no need to re-implement it each time.
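A hedged sketch of this name-based access (the ``threshold`` parameter is used here for illustration; the available names depend on the concrete algorithm):

.. code-block:: cpp

    // Set and read back a parameter by its text name instead of an integer id.
    Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
    model->set("threshold", 100.0);            // Algorithm::set
    double t = model->getDouble("threshold");  // Algorithm::getDouble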
@@ -113,7 +113,7 @@ Since every :ocv:class:`FaceRecognizer` is a :ocv:class:`Algorithm`, you can use
// Create a FaceRecognizer:
Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
// And here's how to get its name:
std::string name = model->name();
String name = model->name();
FaceRecognizer::train
@@ -251,7 +251,7 @@ FaceRecognizer::save
Saves a :ocv:class:`FaceRecognizer` and its model state.
.. ocv:function:: void FaceRecognizer::save(const string& filename) const
.. ocv:function:: void FaceRecognizer::save(const String& filename) const
Saves this model to a given filename, either as XML or YAML.
@@ -265,7 +265,7 @@ Saves a :ocv:class:`FaceRecognizer` and its model state.
Every :ocv:class:`FaceRecognizer` overwrites ``FaceRecognizer::save(FileStorage& fs)``
to save the internal model state. ``FaceRecognizer::save(const string& filename)`` saves
to save the internal model state. ``FaceRecognizer::save(const String& filename)`` saves
the state of a model to the given filename.
The suffix ``const`` means that prediction does not affect the internal model
@@ -276,13 +276,13 @@ FaceRecognizer::load
Loads a :ocv:class:`FaceRecognizer` and its model state.
.. ocv:function:: void FaceRecognizer::load( const string& filename )
.. ocv:function:: void FaceRecognizer::load( const String& filename )
.. ocv:function:: void FaceRecognizer::load( const FileStorage& fs ) = 0
Loads a persisted model and state from a given XML or YAML file. Every
:ocv:class:`FaceRecognizer` has to overwrite ``FaceRecognizer::load(FileStorage& fs)``
to enable loading the model state. ``FaceRecognizer::load(FileStorage& fs)`` in
turn gets called by ``FaceRecognizer::load(const string& filename)``, to ease
turn gets called by ``FaceRecognizer::load(const String& filename)``, to ease
loading a model.
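Putting ``save`` and ``load`` together, a minimal round trip might look like this (the file name is illustrative; ``images``/``labels`` are a ``vector<Mat>`` and ``vector<int>`` prepared beforehand):

.. code-block:: cpp

    // Train a model, persist it, and restore it into a fresh instance.
    Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
    model->train(images, labels);
    model->save("eigenfaces.yml");    // serialized as XML or YAML
    Ptr<FaceRecognizer> restored = createEigenFaceRecognizer();
    restored->load("eigenfaces.yml"); // model state is read back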
createEigenFaceRecognizer

View File

@@ -7,7 +7,7 @@ Face Recognition with OpenCV
Introduction
============
`OpenCV (Open Source Computer Vision) <http://opencv.willowgarage.com>`_ is a popular computer vision library started by `Intel <http://www.intel.com>`_ in 1999. The cross-platform library sets its focus on real-time image processing and includes patent-free implementations of the latest computer vision algorithms. In 2008 `Willow Garage <http://www.willowgarage.com>`_ took over support and OpenCV 2.3.1 now comes with a programming interface to C, C++, `Python <http://www.python.org>`_ and `Android <http://www.android.com>`_. OpenCV is released under a BSD license so it is used in academic projects and commercial products alike.
`OpenCV (Open Source Computer Vision) <http://opencv.org>`_ is a popular computer vision library started by `Intel <http://www.intel.com>`_ in 1999. The cross-platform library sets its focus on real-time image processing and includes patent-free implementations of the latest computer vision algorithms. In 2008 `Willow Garage <http://www.willowgarage.com>`_ took over support and OpenCV 2.3.1 now comes with a programming interface to C, C++, `Python <http://www.python.org>`_ and `Android <http://www.android.com>`_. OpenCV is released under a BSD license so it is used in academic projects and commercial products alike.
OpenCV 2.4 now comes with the new :ocv:class:`FaceRecognizer` class for face recognition, so you can start experimenting with face recognition right away. This document is the guide I wished for when I was working my way into face recognition. It shows you how to perform face recognition with :ocv:class:`FaceRecognizer` in OpenCV (with full source code listings) and gives you an introduction to the algorithms behind it. I'll also show how to create the visualizations you can find in many publications, because a lot of people asked for them.

View File

@@ -6,7 +6,7 @@ project(facerec_cpp_samples)
#SET(OpenCV_DIR /path/to/your/opencv/installation)
# packages
find_package(OpenCV REQUIRED) # http://opencv.willowgarage.com
find_package(OpenCV REQUIRED) # http://opencv.org
# probably you should loop through the sample files here
add_executable(facerec_demo facerec_demo.cpp)

View File

@@ -1,3 +1,5 @@
#!/usr/bin/env python
import sys
import os.path

View File

@@ -16,9 +16,9 @@
* See <http://www.opensource.org/licenses/bsd-license>
*/
#include "opencv2/core/core.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core.hpp"
#include "opencv2/contrib.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>

View File

@@ -16,9 +16,9 @@
* See <http://www.opensource.org/licenses/bsd-license>
*/
#include "opencv2/core/core.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core.hpp"
#include "opencv2/contrib.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
#include <fstream>

View File

@@ -16,9 +16,9 @@
* See <http://www.opensource.org/licenses/bsd-license>
*/
#include "opencv2/core/core.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core.hpp"
#include "opencv2/contrib.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
#include <fstream>

View File

@@ -16,9 +16,9 @@
* See <http://www.opensource.org/licenses/bsd-license>
*/
#include "opencv2/core/core.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core.hpp"
#include "opencv2/contrib.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
#include <fstream>

View File

@@ -16,9 +16,9 @@
* See <http://www.opensource.org/licenses/bsd-license>
*/
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/contrib.hpp"
#include "opencv2/core.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
#include <fstream>

View File

@@ -16,11 +16,11 @@
* See <http://www.opensource.org/licenses/bsd-license>
*/
#include "opencv2/core/core.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/core.hpp"
#include "opencv2/contrib.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/objdetect.hpp"
#include <iostream>
#include <fstream>

View File

@@ -1,4 +1,4 @@
openFABMAP
OpenFABMAP
========================================
.. highlight:: cpp

View File

@@ -16,7 +16,7 @@ Class which provides the main controls to the Gipsa/Listic labs human retina mo
**NOTE: See the Retina tutorial in the tutorial/contrib section for complementary explanations.**
The retina can be set up with various parameters. By default, the retina cancels mean luminance and enhances all details of the visual scene. In order to use your own parameters, first call the *write(std::string fs)* method once; it writes a proper XML file with all default parameters. Then tweak that file on your own and reload it at any time using the *setup(std::string fs)* method. These methods update a *Retina::RetinaParameters* member structure that is described hereafter; a short usage sketch follows the class listing below. ::
The retina can be set up with various parameters. By default, the retina cancels mean luminance and enhances all details of the visual scene. In order to use your own parameters, first call the *write(String fs)* method once; it writes a proper XML file with all default parameters. Then tweak that file on your own and reload it at any time using the *setup(String fs)* method. These methods update a *Retina::RetinaParameters* member structure that is described hereafter; a short usage sketch follows the class listing below. ::
class Retina
{
@@ -49,12 +49,12 @@ The retina can be set up with various parameters. By default, the retina can
Size getOutputSize ();
// setup methods with specific parameters specification of global xml config file loading/write
void setup (std::string retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
void setup (String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
void setup (FileStorage &fs, const bool applyDefaultSetupOnFailure=true);
void setup (RetinaParameters newParameters);
struct Retina::RetinaParameters getParameters ();
const std::string printSetup ();
virtual void write (std::string fs) const;
const String printSetup ();
virtual void write (String fs) const;
virtual void write (FileStorage &fs) const;
void setupOPLandIPLParvoChannel (const bool colorMode=true, const bool normaliseOutput=true, const float photoreceptorsLocalAdaptationSensitivity=0.7, const float photoreceptorsTemporalConstant=0.5, const float photoreceptorsSpatialConstant=0.53, const float horizontalCellsGain=0, const float HcellsTemporalConstant=1, const float HcellsSpatialConstant=7, const float ganglionCellsSensitivity=0.7);
void setupIPLMagnoChannel (const bool normaliseOutput=true, const float parasolCells_beta=0, const float parasolCells_tau=0, const float parasolCells_k=7, const float amacrinCellsTemporalCutFrequency=1.2, const float V0CompressionParameter=0.95, const float localAdaptintegration_tau=0, const float localAdaptintegration_k=7);
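A short sketch of the write/setup round trip described above (assuming ``retina`` is an already constructed ``cv::Retina`` instance; the file name is illustrative):

.. code-block:: cpp

    // Dump all default parameters once, edit the XML offline, then reload.
    retina.write("retinaParams.xml");  // writes the full default setup
    // ... tweak retinaParams.xml by hand ...
    retina.setup("retinaParams.xml");  // reload the edited setup at any time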
@@ -208,7 +208,7 @@ Retina::getMagno
Retina::getParameters
+++++++++++++++++++++
.. ocv:function:: struct Retina::RetinaParameters Retina::getParameters()
.. ocv:function:: Retina::RetinaParameters Retina::getParameters()
Retrieves the current parameter values in a *Retina::RetinaParameters* structure
@@ -248,7 +248,7 @@ Retina::getOutputSize
Retina::printSetup
++++++++++++++++++
.. ocv:function:: const std::string Retina::printSetup()
.. ocv:function:: const String Retina::printSetup()
Outputs a string showing the parameter setup currently in use
@@ -277,7 +277,7 @@ Retina::setColorSaturation
Retina::setup
+++++++++++++
.. ocv:function:: void Retina::setup(std::string retinaParameterFile = "", const bool applyDefaultSetupOnFailure = true )
.. ocv:function:: void Retina::setup(String retinaParameterFile = "", const bool applyDefaultSetupOnFailure = true )
.. ocv:function:: void Retina::setup(FileStorage & fs, const bool applyDefaultSetupOnFailure = true )
.. ocv:function:: void Retina::setup(RetinaParameters newParameters)
@@ -291,7 +291,7 @@ Retina::setup
Retina::write
+++++++++++++
.. ocv:function:: void Retina::write( std::string fs ) const
.. ocv:function:: void Retina::write( String fs ) const
.. ocv:function:: void Retina::write( FileStorage& fs ) const
Writes XML/YAML-formatted parameter information
@@ -336,8 +336,9 @@ Retina::RetinaParameters
========================
.. ocv:struct:: Retina::RetinaParameters
This structure merges all the parameters that can be adjusted through the **Retina::setup()**, **Retina::setupOPLandIPLParvoChannel** and **Retina::setupIPLMagnoChannel** setup methods
For better clarity, check the explanations in the comments of the methods setupOPLandIPLParvoChannel and setupIPLMagnoChannel. ::
This structure merges all the parameters that can be adjusted through the **Retina::setup()**, **Retina::setupOPLandIPLParvoChannel** and **Retina::setupIPLMagnoChannel** setup methods
For better clarity, check the explanations in the comments of the methods setupOPLandIPLParvoChannel and setupIPLMagnoChannel. ::
class RetinaParameters{
struct OPLandIplParvoParameters{ // Outer Plexiform Layer (OPL) and Inner Plexiform Layer Parvocellular (IplParvo) parameters

View File

@@ -0,0 +1,639 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_CONTRIB_HPP__
#define __OPENCV_CONTRIB_HPP__
#include "opencv2/core.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/objdetect.hpp"
namespace cv
{
class CV_EXPORTS Octree
{
public:
struct Node
{
Node() {}
int begin, end;
float x_min, x_max, y_min, y_max, z_min, z_max;
int maxLevels;
bool isLeaf;
int children[8];
};
Octree();
Octree( const std::vector<Point3f>& points, int maxLevels = 10, int minPoints = 20 );
virtual ~Octree();
virtual void buildTree( const std::vector<Point3f>& points, int maxLevels = 10, int minPoints = 20 );
virtual void getPointsWithinSphere( const Point3f& center, float radius,
std::vector<Point3f>& points ) const;
const std::vector<Node>& getNodes() const { return nodes; }
private:
int minPoints;
std::vector<Point3f> points;
std::vector<Node> nodes;
virtual void buildNext(size_t node_ind);
};
class CV_EXPORTS Mesh3D
{
public:
struct EmptyMeshException {};
Mesh3D();
Mesh3D(const std::vector<Point3f>& vtx);
~Mesh3D();
void buildOctree();
void clearOctree();
float estimateResolution(float tryRatio = 0.1f);
void computeNormals(float normalRadius, int minNeighbors = 20);
void computeNormals(const std::vector<int>& subset, float normalRadius, int minNeighbors = 20);
void writeAsVrml(const String& file, const std::vector<Scalar>& colors = std::vector<Scalar>()) const;
std::vector<Point3f> vtx;
std::vector<Point3f> normals;
float resolution;
Octree octree;
const static Point3f allzero;
};
class CV_EXPORTS SpinImageModel
{
public:
/* model parameters, leave unset for default or auto estimate */
float normalRadius;
int minNeighbors;
float binSize;
int imageWidth;
float lambda;
float gamma;
float T_GeometriccConsistency;
float T_GroupingCorespondances;
/* public interface */
SpinImageModel();
explicit SpinImageModel(const Mesh3D& mesh);
~SpinImageModel();
void selectRandomSubset(float ratio);
void setSubset(const std::vector<int>& subset);
void compute();
void match(const SpinImageModel& scene, std::vector< std::vector<Vec2i> >& result);
Mat packRandomScaledSpins(bool separateScale = false, size_t xCount = 10, size_t yCount = 10) const;
size_t getSpinCount() const { return spinImages.rows; }
Mat getSpinImage(size_t index) const { return spinImages.row((int)index); }
const Point3f& getSpinVertex(size_t index) const { return mesh.vtx[subset[index]]; }
const Point3f& getSpinNormal(size_t index) const { return mesh.normals[subset[index]]; }
const Mesh3D& getMesh() const { return mesh; }
Mesh3D& getMesh() { return mesh; }
/* static utility functions */
static bool spinCorrelation(const Mat& spin1, const Mat& spin2, float lambda, float& result);
static Point2f calcSpinMapCoo(const Point3f& point, const Point3f& vertex, const Point3f& normal);
static float geometricConsistency(const Point3f& pointScene1, const Point3f& normalScene1,
const Point3f& pointModel1, const Point3f& normalModel1,
const Point3f& pointScene2, const Point3f& normalScene2,
const Point3f& pointModel2, const Point3f& normalModel2);
static float groupingCreteria(const Point3f& pointScene1, const Point3f& normalScene1,
const Point3f& pointModel1, const Point3f& normalModel1,
const Point3f& pointScene2, const Point3f& normalScene2,
const Point3f& pointModel2, const Point3f& normalModel2,
float gamma);
protected:
void defaultParams();
void matchSpinToModel(const Mat& spin, std::vector<int>& indeces,
std::vector<float>& corrCoeffs, bool useExtremeOutliers = true) const;
void repackSpinImages(const std::vector<uchar>& mask, Mat& spinImages, bool reAlloc = true) const;
std::vector<int> subset;
Mesh3D mesh;
Mat spinImages;
};
class CV_EXPORTS TickMeter
{
public:
TickMeter();
void start();
void stop();
int64 getTimeTicks() const;
double getTimeMicro() const;
double getTimeMilli() const;
double getTimeSec() const;
int64 getCounter() const;
void reset();
private:
int64 counter;
int64 sumTime;
int64 startTime;
};
//CV_EXPORTS std::ostream& operator<<(std::ostream& out, const TickMeter& tm);
class CV_EXPORTS SelfSimDescriptor
{
public:
SelfSimDescriptor();
SelfSimDescriptor(int _ssize, int _lsize,
int _startDistanceBucket=DEFAULT_START_DISTANCE_BUCKET,
int _numberOfDistanceBuckets=DEFAULT_NUM_DISTANCE_BUCKETS,
int _nangles=DEFAULT_NUM_ANGLES);
SelfSimDescriptor(const SelfSimDescriptor& ss);
virtual ~SelfSimDescriptor();
SelfSimDescriptor& operator = (const SelfSimDescriptor& ss);
size_t getDescriptorSize() const;
Size getGridSize( Size imgsize, Size winStride ) const;
virtual void compute(const Mat& img, std::vector<float>& descriptors, Size winStride=Size(),
const std::vector<Point>& locations=std::vector<Point>()) const;
virtual void computeLogPolarMapping(Mat& mappingMask) const;
virtual void SSD(const Mat& img, Point pt, Mat& ssd) const;
int smallSize;
int largeSize;
int startDistanceBucket;
int numberOfDistanceBuckets;
int numberOfAngles;
enum { DEFAULT_SMALL_SIZE = 5, DEFAULT_LARGE_SIZE = 41,
DEFAULT_NUM_ANGLES = 20, DEFAULT_START_DISTANCE_BUCKET = 3,
DEFAULT_NUM_DISTANCE_BUCKETS = 7 };
};
CV_EXPORTS_W int chamerMatching( Mat& img, Mat& templ,
CV_OUT std::vector<std::vector<Point> >& results, CV_OUT std::vector<float>& cost,
double templScale=1, int maxMatches = 20,
double minMatchDistance = 1.0, int padX = 3,
int padY = 3, int scales = 5, double minScale = 0.6, double maxScale = 1.6,
double orientationWeight = 0.5, double truncate = 20);
class CV_EXPORTS_W StereoVar
{
public:
// Flags
enum {USE_INITIAL_DISPARITY = 1, USE_EQUALIZE_HIST = 2, USE_SMART_ID = 4, USE_AUTO_PARAMS = 8, USE_MEDIAN_FILTERING = 16};
enum {CYCLE_O, CYCLE_V};
enum {PENALIZATION_TICHONOV, PENALIZATION_CHARBONNIER, PENALIZATION_PERONA_MALIK};
//! the default constructor
CV_WRAP StereoVar();
//! the full constructor taking all the necessary algorithm parameters
CV_WRAP StereoVar(int levels, double pyrScale, int nIt, int minDisp, int maxDisp, int poly_n, double poly_sigma, float fi, float lambda, int penalization, int cycle, int flags);
//! the destructor
virtual ~StereoVar();
//! the stereo correspondence operator that computes disparity map for the specified rectified stereo pair
CV_WRAP_AS(compute) virtual void operator()(const Mat& left, const Mat& right, CV_OUT Mat& disp);
CV_PROP_RW int levels;
CV_PROP_RW double pyrScale;
CV_PROP_RW int nIt;
CV_PROP_RW int minDisp;
CV_PROP_RW int maxDisp;
CV_PROP_RW int poly_n;
CV_PROP_RW double poly_sigma;
CV_PROP_RW float fi;
CV_PROP_RW float lambda;
CV_PROP_RW int penalization;
CV_PROP_RW int cycle;
CV_PROP_RW int flags;
private:
void autoParams();
void FMG(Mat &I1, Mat &I2, Mat &I2x, Mat &u, int level);
void VCycle_MyFAS(Mat &I1_h, Mat &I2_h, Mat &I2x_h, Mat &u_h, int level);
void VariationalSolver(Mat &I1_h, Mat &I2_h, Mat &I2x_h, Mat &u_h, int level);
};
CV_EXPORTS void polyfit(const Mat& srcx, const Mat& srcy, Mat& dst, int order);
class CV_EXPORTS Directory
{
public:
static std::vector<String> GetListFiles ( const String& path, const String & exten = "*", bool addPath = true );
static std::vector<String> GetListFilesR ( const String& path, const String & exten = "*", bool addPath = true );
static std::vector<String> GetListFolders( const String& path, const String & exten = "*", bool addPath = true );
};
/*
* Generation of a set of different colors in the following way:
* 1) generate more colors than needed ("factor" times more) in RGB,
* 2) convert them to Lab,
* 3) choose the needed number of colors from the set so that they are
* maximally different from each other,
* 4) convert the chosen colors back to RGB
*/
CV_EXPORTS void generateColors( std::vector<Scalar>& colors, size_t count, size_t factor=100 );
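// Usage sketch (illustrative, not part of the original header): build a
// palette of 16 well-separated colors.
//
//     std::vector<cv::Scalar> palette;
//     cv::generateColors(palette, 16); // optional third argument: oversampling "factor"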
/*
* Estimate the rigid body motion from frame0 to frame1. The method is based on the paper
* "Real-Time Visual Odometry from Dense RGB-D Images", F. Steinbucker, J. Strum, D. Cremers, ICCV, 2011.
*/
enum { ROTATION = 1,
TRANSLATION = 2,
RIGID_BODY_MOTION = 4
};
CV_EXPORTS bool RGBDOdometry( Mat& Rt, const Mat& initRt,
const Mat& image0, const Mat& depth0, const Mat& mask0,
const Mat& image1, const Mat& depth1, const Mat& mask1,
const Mat& cameraMatrix, float minDepth=0.f, float maxDepth=4.f, float maxDepthDiff=0.07f,
const std::vector<int>& iterCounts=std::vector<int>(),
const std::vector<float>& minGradientMagnitudes=std::vector<float>(),
int transformType=RIGID_BODY_MOTION );
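// Usage sketch (illustrative, not part of the original header): estimate the
// camera motion between two RGB-D frames; on success Rt receives a 4x4
// rigid-body transform.
//
//     Mat Rt;
//     bool found = RGBDOdometry(Rt, Mat(), gray0, depth0, Mat(),
//                               gray1, depth1, Mat(), cameraMatrix);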
/**
*Bilinear interpolation technique.
*
*The value of a desired cortical pixel is obtained through a bilinear interpolation of the values
*of the four nearest neighbouring Cartesian pixels to the center of the RF.
*The same principle is applied to the inverse transformation.
*
*More details can be found in http://dx.doi.org/10.1007/978-3-642-23968-7_5
*/
class CV_EXPORTS LogPolar_Interp
{
public:
LogPolar_Interp() {}
/**
*Constructor
*\param w the width of the input image
*\param h the height of the input image
*\param center the transformation center: where the output precision is maximal
*\param R the number of rings of the cortical image (default value 70 pixels)
*\param ro0 the radius of the blind spot (default value 3 pixels)
*\param full \a 1 (default value) means that the retinal image (the inverse transform) is computed within the circumscribing circle.
* \a 0 means that the retinal image is computed within the inscribed circle.
*\param S the number of sectors of the cortical image (default value 70 pixels).
* Its value is usually computed internally to obtain a pixel aspect ratio equal to 1.
*\param sp \a 1 (default value) means that the parameter \a S is internally computed.
* \a 0 means that the parameter \a S is provided by the user.
*/
LogPolar_Interp(int w, int h, Point2i center, int R=70, double ro0=3.0,
int interp=INTER_LINEAR, int full=1, int S=117, int sp=1);
/**
*Transformation from Cartesian image to cortical (log-polar) image.
*\param source the Cartesian image
*\return the transformed image (cortical image)
*/
const Mat to_cortical(const Mat &source);
/**
*Transformation from cortical image to retinal (inverse log-polar) image.
*\param source the cortical image
*\return the transformed image (retinal image)
*/
const Mat to_cartesian(const Mat &source);
/**
*Destructor
*/
~LogPolar_Interp();
protected:
Mat Rsri;
Mat Csri;
int S, R, M, N;
int top, bottom,left,right;
double ro0, romax, a, q;
int interp;
Mat ETAyx;
Mat CSIyx;
void create_map(int M, int N, int R, int S, double ro0);
};
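// Usage sketch (illustrative, not part of the original header): given a
// loaded image 'frame', the forward and inverse transforms pair up as:
//
//     LogPolar_Interp lp(frame.cols, frame.rows,
//                        Point2i(frame.cols / 2, frame.rows / 2));
//     Mat cortical = lp.to_cortical(frame);     // Cartesian -> log-polar
//     Mat back     = lp.to_cartesian(cortical); // log-polar -> Cartesian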
/**
*Overlapping circular receptive fields technique
*
*The Cartesian plane is divided in two regions: the fovea and the periphery.
*The fovea (oversampling) is handled by using the bilinear interpolation technique described above, whereas in
*the periphery we use the overlapping Gaussian circular RFs.
*
*More details can be found in http://dx.doi.org/10.1007/978-3-642-23968-7_5
*/
class CV_EXPORTS LogPolar_Overlapping
{
public:
LogPolar_Overlapping() {}
/**
*Constructor
*\param w the width of the input image
*\param h the height of the input image
*\param center the transformation center: where the output precision is maximal
*\param R the number of rings of the cortical image (default value 70 pixels)
*\param ro0 the radius of the blind spot (default value 3 pixels)
*\param full \a 1 (default value) means that the retinal image (the inverse transform) is computed within the circumscribing circle.
* \a 0 means that the retinal image is computed within the inscribed circle.
*\param S the number of sectors of the cortical image (default value 70 pixels).
* Its value is usually computed internally to obtain a pixel aspect ratio equal to 1.
*\param sp \a 1 (default value) means that the parameter \a S is internally computed.
* \a 0 means that the parameter \a S is provided by the user.
*/
LogPolar_Overlapping(int w, int h, Point2i center, int R=70,
double ro0=3.0, int full=1, int S=117, int sp=1);
/**
*Transformation from Cartesian image to cortical (log-polar) image.
*\param source the Cartesian image
*\return the transformed image (cortical image)
*/
const Mat to_cortical(const Mat &source);
/**
*Transformation from cortical image to retinal (inverse log-polar) image.
*\param source the cortical image
*\return the transformed image (retinal image)
*/
const Mat to_cartesian(const Mat &source);
/**
*Destructor
*/
~LogPolar_Overlapping();
protected:
Mat Rsri;
Mat Csri;
std::vector<int> Rsr;
std::vector<int> Csr;
std::vector<double> Wsr;
int S, R, M, N, ind1;
int top, bottom,left,right;
double ro0, romax, a, q;
struct kernel
{
kernel() { w = 0; }
std::vector<double> weights;
int w;
};
Mat ETAyx;
Mat CSIyx;
std::vector<kernel> w_ker_2D;
void create_map(int M, int N, int R, int S, double ro0);
};
/**
* Adjacent receptive fields technique
*
*All the Cartesian pixels, whose coordinates in the cortical domain share the same integer part, are assigned to the same RF.
*The precision of the boundaries of the RF can be improved by breaking each pixel into subpixels and assigning each of them to the correct RF.
*This technique is implemented from: Traver, V., Pla, F.: Log-polar mapping template design: From task-level requirements
*to geometry parameters. Image Vision Comput. 26(10) (2008) 1354-1370
*
*More details can be found in http://dx.doi.org/10.1007/978-3-642-23968-7_5
*/
class CV_EXPORTS LogPolar_Adjacent
{
public:
LogPolar_Adjacent() {}
/**
*Constructor
*\param w the width of the input image
*\param h the height of the input image
*\param center the transformation center: where the output precision is maximal
*\param R the number of rings of the cortical image (default value 70 pixels)
*\param ro0 the radius of the blind spot (default value 3 pixels)
*\param smin the size of the subpixel (default value 0.25 pixel)
*\param full \a 1 (default value) means that the retinal image (the inverse transform) is computed within the circumscribing circle.
* \a 0 means that the retinal image is computed within the inscribed circle.
*\param S the number of sectors of the cortical image (default value 70 pixels).
* Its value is usually computed internally to obtain a pixel aspect ratio equal to 1.
*\param sp \a 1 (default value) means that the parameter \a S is internally computed.
* \a 0 means that the parameter \a S is provided by the user.
*/
LogPolar_Adjacent(int w, int h, Point2i center, int R=70, double ro0=3.0, double smin=0.25, int full=1, int S=117, int sp=1);
/**
*Transformation from Cartesian image to cortical (log-polar) image.
*\param source the Cartesian image
*\return the transformed image (cortical image)
*/
const Mat to_cortical(const Mat &source);
/**
*Transformation from cortical image to retinal (inverse log-polar) image.
*\param source the cortical image
*\return the transformed image (retinal image)
*/
const Mat to_cartesian(const Mat &source);
/**
*Destructor
*/
~LogPolar_Adjacent();
protected:
struct pixel
{
pixel() { u = v = 0; a = 0.; }
int u;
int v;
double a;
};
int S, R, M, N;
int top, bottom,left,right;
double ro0, romax, a, q;
std::vector<std::vector<pixel> > L;
std::vector<double> A;
void subdivide_recursively(double x, double y, int i, int j, double length, double smin);
bool get_uv(double x, double y, int&u, int&v);
void create_map(int M, int N, int R, int S, double ro0, double smin);
};
CV_EXPORTS Mat subspaceProject(InputArray W, InputArray mean, InputArray src);
CV_EXPORTS Mat subspaceReconstruct(InputArray W, InputArray mean, InputArray src);
class CV_EXPORTS LDA
{
public:
// Initializes an LDA with num_components (default 0) and specifies how
// samples are aligned (default dataAsRow=true).
LDA(int num_components = 0) :
_num_components(num_components) {};
// Initializes and performs a Discriminant Analysis with Fisher's
// Optimization Criterion on given data in src and corresponding labels
// in labels. If 0 (or less) number of components are given, they are
// automatically determined for given data in computation.
LDA(InputArrayOfArrays src, InputArray labels,
int num_components = 0) :
_num_components(num_components)
{
this->compute(src, labels); //! compute eigenvectors and eigenvalues
}
// Serializes this object to a given filename.
void save(const String& filename) const;
// Deserializes this object from a given filename.
void load(const String& filename);
// Serializes this object to a given cv::FileStorage.
void save(FileStorage& fs) const;
// Deserializes this object from a given cv::FileStorage.
void load(const FileStorage& node);
// Destructor.
~LDA() {}
//! Compute the discriminants for data in src and labels.
void compute(InputArrayOfArrays src, InputArray labels);
// Projects samples into the LDA subspace.
Mat project(InputArray src);
// Reconstructs projections from the LDA subspace.
Mat reconstruct(InputArray src);
// Returns the eigenvectors of this LDA.
Mat eigenvectors() const { return _eigenvectors; };
// Returns the eigenvalues of this LDA.
Mat eigenvalues() const { return _eigenvalues; }
protected:
bool _dataAsRow;
int _num_components;
Mat _eigenvectors;
Mat _eigenvalues;
void lda(InputArrayOfArrays src, InputArray labels);
};
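// Usage sketch (illustrative, not part of the original header): compute the
// discriminants for samples 'data' with integer class 'labels', then project
// and reconstruct:
//
//     LDA lda(data, labels);          // computes eigenvectors/eigenvalues
//     Mat projected     = lda.project(data);
//     Mat reconstructed = lda.reconstruct(projected);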
class CV_EXPORTS_W FaceRecognizer : public Algorithm
{
public:
//! virtual destructor
virtual ~FaceRecognizer() {}
// Trains a FaceRecognizer.
CV_WRAP virtual void train(InputArrayOfArrays src, InputArray labels) = 0;
// Updates a FaceRecognizer.
CV_WRAP virtual void update(InputArrayOfArrays src, InputArray labels);
// Gets a prediction from a FaceRecognizer.
virtual int predict(InputArray src) const = 0;
// Predicts the label and confidence for a given sample.
CV_WRAP virtual void predict(InputArray src, CV_OUT int &label, CV_OUT double &confidence) const = 0;
// Serializes this object to a given filename.
CV_WRAP virtual void save(const String& filename) const;
// Deserializes this object from a given filename.
CV_WRAP virtual void load(const String& filename);
// Serializes this object to a given cv::FileStorage.
virtual void save(FileStorage& fs) const = 0;
// Deserializes this object from a given cv::FileStorage.
virtual void load(const FileStorage& fs) = 0;
};
CV_EXPORTS_W Ptr<FaceRecognizer> createEigenFaceRecognizer(int num_components = 0, double threshold = DBL_MAX);
CV_EXPORTS_W Ptr<FaceRecognizer> createFisherFaceRecognizer(int num_components = 0, double threshold = DBL_MAX);
CV_EXPORTS_W Ptr<FaceRecognizer> createLBPHFaceRecognizer(int radius=1, int neighbors=8,
int grid_x=8, int grid_y=8, double threshold = DBL_MAX);
enum
{
COLORMAP_AUTUMN = 0,
COLORMAP_BONE = 1,
COLORMAP_JET = 2,
COLORMAP_WINTER = 3,
COLORMAP_RAINBOW = 4,
COLORMAP_OCEAN = 5,
COLORMAP_SUMMER = 6,
COLORMAP_SPRING = 7,
COLORMAP_COOL = 8,
COLORMAP_HSV = 9,
COLORMAP_PINK = 10,
COLORMAP_HOT = 11
};
CV_EXPORTS_W void applyColorMap(InputArray src, OutputArray dst, int colormap);
CV_EXPORTS bool initModule_contrib();
}
#include "opencv2/contrib/retina.hpp"
#include "opencv2/contrib/openfabmap.hpp"
#endif

View File

@@ -0,0 +1,384 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_CONTRIB_COMPAT_HPP__
#define __OPENCV_CONTRIB_COMPAT_HPP__
#include "opencv2/core/core_c.h"
#ifdef __cplusplus
/****************************************************************************************\
* Adaptive Skin Detector *
\****************************************************************************************/
class CV_EXPORTS CvAdaptiveSkinDetector
{
private:
enum {
GSD_HUE_LT = 3,
GSD_HUE_UT = 33,
GSD_INTENSITY_LT = 15,
GSD_INTENSITY_UT = 250
};
class CV_EXPORTS Histogram
{
private:
enum {
HistogramSize = (GSD_HUE_UT - GSD_HUE_LT + 1)
};
protected:
int findCoverageIndex(double surfaceToCover, int defaultValue = 0);
public:
CvHistogram *fHistogram;
Histogram();
virtual ~Histogram();
void findCurveThresholds(int &x1, int &x2, double percent = 0.05);
void mergeWith(Histogram *source, double weight);
};
int nStartCounter, nFrameCount, nSkinHueLowerBound, nSkinHueUpperBound, nMorphingMethod, nSamplingDivider;
double fHistogramMergeFactor, fHuePercentCovered;
Histogram histogramHueMotion, skinHueHistogram;
IplImage *imgHueFrame, *imgSaturationFrame, *imgLastGrayFrame, *imgMotionFrame, *imgFilteredFrame;
IplImage *imgShrinked, *imgTemp, *imgGrayFrame, *imgHSVFrame;
protected:
void initData(IplImage *src, int widthDivider, int heightDivider);
void adaptiveFilter();
public:
enum {
MORPHING_METHOD_NONE = 0,
MORPHING_METHOD_ERODE = 1,
MORPHING_METHOD_ERODE_ERODE = 2,
MORPHING_METHOD_ERODE_DILATE = 3
};
CvAdaptiveSkinDetector(int samplingDivider = 1, int morphingMethod = MORPHING_METHOD_NONE);
virtual ~CvAdaptiveSkinDetector();
virtual void process(IplImage *inputBGRImage, IplImage *outputHueMask);
};
/****************************************************************************************\
* Fuzzy MeanShift Tracker *
\****************************************************************************************/
class CV_EXPORTS CvFuzzyPoint {
public:
double x, y, value;
CvFuzzyPoint(double _x, double _y);
};
class CV_EXPORTS CvFuzzyCurve {
private:
std::vector<CvFuzzyPoint> points;
double value, centre;
bool between(double x, double x1, double x2);
public:
CvFuzzyCurve();
~CvFuzzyCurve();
void setCentre(double _centre);
double getCentre();
void clear();
void addPoint(double x, double y);
double calcValue(double param);
double getValue();
void setValue(double _value);
};
class CV_EXPORTS CvFuzzyFunction {
public:
std::vector<CvFuzzyCurve> curves;
CvFuzzyFunction();
~CvFuzzyFunction();
void addCurve(CvFuzzyCurve *curve, double value = 0);
void resetValues();
double calcValue();
CvFuzzyCurve *newCurve();
};
class CV_EXPORTS CvFuzzyRule {
private:
CvFuzzyCurve *fuzzyInput1, *fuzzyInput2;
CvFuzzyCurve *fuzzyOutput;
public:
CvFuzzyRule();
~CvFuzzyRule();
void setRule(CvFuzzyCurve *c1, CvFuzzyCurve *c2, CvFuzzyCurve *o1);
double calcValue(double param1, double param2);
CvFuzzyCurve *getOutputCurve();
};
class CV_EXPORTS CvFuzzyController {
private:
std::vector<CvFuzzyRule*> rules;
public:
CvFuzzyController();
~CvFuzzyController();
void addRule(CvFuzzyCurve *c1, CvFuzzyCurve *c2, CvFuzzyCurve *o1);
double calcOutput(double param1, double param2);
};
class CV_EXPORTS CvFuzzyMeanShiftTracker
{
private:
class FuzzyResizer
{
private:
CvFuzzyFunction iInput, iOutput;
CvFuzzyController fuzzyController;
public:
FuzzyResizer();
int calcOutput(double edgeDensity, double density);
};
class SearchWindow
{
public:
FuzzyResizer *fuzzyResizer;
int x, y;
int width, height, maxWidth, maxHeight, ellipseHeight, ellipseWidth;
int ldx, ldy, ldw, ldh, numShifts, numIters;
int xGc, yGc;
long m00, m01, m10, m11, m02, m20;
double ellipseAngle;
double density;
unsigned int depthLow, depthHigh;
int verticalEdgeLeft, verticalEdgeRight, horizontalEdgeTop, horizontalEdgeBottom;
SearchWindow();
~SearchWindow();
void setSize(int _x, int _y, int _width, int _height);
void initDepthValues(IplImage *maskImage, IplImage *depthMap);
bool shift();
void extractInfo(IplImage *maskImage, IplImage *depthMap, bool initDepth);
void getResizeAttribsEdgeDensityLinear(int &resizeDx, int &resizeDy, int &resizeDw, int &resizeDh);
void getResizeAttribsInnerDensity(int &resizeDx, int &resizeDy, int &resizeDw, int &resizeDh);
void getResizeAttribsEdgeDensityFuzzy(int &resizeDx, int &resizeDy, int &resizeDw, int &resizeDh);
bool meanShift(IplImage *maskImage, IplImage *depthMap, int maxIteration, bool initDepth);
};
public:
enum TrackingState
{
tsNone = 0,
tsSearching = 1,
tsTracking = 2,
tsSetWindow = 3,
tsDisabled = 10
};
enum ResizeMethod {
rmEdgeDensityLinear = 0,
rmEdgeDensityFuzzy = 1,
rmInnerDensity = 2
};
enum {
MinKernelMass = 1000
};
SearchWindow kernel;
int searchMode;
private:
enum
{
MaxMeanShiftIteration = 5,
MaxSetSizeIteration = 5
};
void findOptimumSearchWindow(SearchWindow &searchWindow, IplImage *maskImage, IplImage *depthMap, int maxIteration, int resizeMethod, bool initDepth);
public:
CvFuzzyMeanShiftTracker();
~CvFuzzyMeanShiftTracker();
void track(IplImage *maskImage, IplImage *depthMap, int resizeMethod, bool resetSearch, int minKernelMass = MinKernelMass);
};
namespace cv
{
typedef bool (*BundleAdjustCallback)(int iteration, double norm_error, void* user_data);
class CV_EXPORTS LevMarqSparse {
public:
LevMarqSparse();
LevMarqSparse(int npoints, // number of points
int ncameras, // number of cameras
int nPointParams, // number of params per one point (3 in case of 3D points)
int nCameraParams, // number of parameters per one camera
int nErrParams, // number of parameters in measurement vector
// for 1 point at one camera (2 in case of 2D projections)
Mat& visibility, // visibility matrix. rows correspond to points, columns correspond to cameras
// 1 - point is visible for the camera, 0 - invisible
Mat& P0, // starting vector of parameters, first cameras then points
Mat& X, // measurements, in order of visibility. non visible cases are skipped
TermCriteria criteria, // termination criteria
// callback for estimation of Jacobian matrices
void (*fjac)(int i, int j, Mat& point_params,
Mat& cam_params, Mat& A, Mat& B, void* data),
// callback for estimation of backprojection errors
void (*func)(int i, int j, Mat& point_params,
Mat& cam_params, Mat& estim, void* data),
void* data, // user-specific data passed to the callbacks
BundleAdjustCallback cb, void* user_data
);
virtual ~LevMarqSparse();
virtual void run( int npoints, // number of points
int ncameras, // number of cameras
int nPointParams, // number of params per one point (3 in case of 3D points)
int nCameraParams, // number of parameters per one camera
int nErrParams, // number of parameters in measurement vector
// for 1 point at one camera (2 in case of 2D projections)
Mat& visibility, // visibility matrix. rows correspond to points, columns correspond to cameras
// 1 - point is visible for the camera, 0 - invisible
Mat& P0, // starting vector of parameters, first cameras then points
Mat& X, // measurements, in order of visibility. non visible cases are skipped
TermCriteria criteria, // termination criteria
// callback for estimation of Jacobian matrices
void (CV_CDECL * fjac)(int i, int j, Mat& point_params,
Mat& cam_params, Mat& A, Mat& B, void* data),
// callback for estimation of backprojection errors
void (CV_CDECL * func)(int i, int j, Mat& point_params,
Mat& cam_params, Mat& estim, void* data),
void* data // user-specific data passed to the callbacks
);
virtual void clear();
// useful function to do simple bundle adjustment tasks
static void bundleAdjust(std::vector<Point3d>& points, // positions of points in global coordinate system (input and output)
const std::vector<std::vector<Point2d> >& imagePoints, // projections of 3d points for every camera
const std::vector<std::vector<int> >& visibility, // visibility of 3d points for every camera
std::vector<Mat>& cameraMatrix, // intrinsic matrices of all cameras (input and output)
std::vector<Mat>& R, // rotation matrices of all cameras (input and output)
std::vector<Mat>& T, // translation vector of all cameras (input and output)
std::vector<Mat>& distCoeffs, // distortion coefficients of all cameras (input and output)
const TermCriteria& criteria=
TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, DBL_EPSILON),
BundleAdjustCallback cb = 0, void* user_data = 0);
public:
virtual void optimize(CvMat &_vis); //main function that runs minimization
//iteratively asks for measurement for visible camera-point pairs
void ask_for_proj(CvMat &_vis,bool once=false);
//iteratively asks for Jacobians for every camera_point pair
void ask_for_projac(CvMat &_vis);
CvMat* err; //error X-hX
double prevErrNorm, errNorm;
double lambda;
CvTermCriteria criteria;
int iters;
CvMat** U; //size of array is equal to number of cameras
CvMat** V; //size of array is equal to number of points
CvMat** inv_V_star; //inverse of V*
CvMat** A;
CvMat** B;
CvMat** W;
CvMat* X; //measurement
CvMat* hX; //current measurement estimation given the new parameter vector
CvMat* prevP; //current already accepted parameter.
CvMat* P; // parameters used to evaluate function with new params
// these parameters may be rejected
CvMat* deltaP; //computed increase of parameters (result of normal system solution)
CvMat** ea; // sum_i AijT * e_ij , used as right part of normal equation
// length of array is j = number of cameras
CvMat** eb; // sum_j BijT * e_ij , used as right part of normal equation
// length of array is i = number of points
CvMat** Yj; //length of array is i = num_points
CvMat* S; //big matrix of block Sjk , each block has size num_cam_params x num_cam_params
CvMat* JtJ_diag; //diagonal of JtJ, used to backup diagonal elements before augmentation
CvMat* Vis_index; // matrix which element is index of measurement for point i and camera j
int num_cams;
int num_points;
int num_err_param;
int num_cam_param;
int num_point_param;
//target function and jacobian pointers, which needs to be initialized
void (*fjac)(int i, int j, Mat& point_params, Mat& cam_params, Mat& A, Mat& B, void* data);
void (*func)(int i, int j, Mat& point_params, Mat& cam_params, Mat& estim, void* data);
void* data;
BundleAdjustCallback cb;
void* user_data;
};
} // cv
#endif /* __cplusplus */
#endif /* __OPENCV_CONTRIB_COMPAT_HPP__ */

View File

@@ -7,11 +7,12 @@
// copy or use the software.
//
//
// License Agreement
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
@@ -40,936 +41,8 @@
//
//M*/
#ifndef __OPENCV_CONTRIB_HPP__
#define __OPENCV_CONTRIB_HPP__
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#ifdef __cplusplus
/****************************************************************************************\
* Adaptive Skin Detector *
\****************************************************************************************/
class CV_EXPORTS CvAdaptiveSkinDetector
{
private:
enum {
GSD_HUE_LT = 3,
GSD_HUE_UT = 33,
GSD_INTENSITY_LT = 15,
GSD_INTENSITY_UT = 250
};
class CV_EXPORTS Histogram
{
private:
enum {
HistogramSize = (GSD_HUE_UT - GSD_HUE_LT + 1)
};
protected:
int findCoverageIndex(double surfaceToCover, int defaultValue = 0);
public:
CvHistogram *fHistogram;
Histogram();
virtual ~Histogram();
void findCurveThresholds(int &x1, int &x2, double percent = 0.05);
void mergeWith(Histogram *source, double weight);
};
int nStartCounter, nFrameCount, nSkinHueLowerBound, nSkinHueUpperBound, nMorphingMethod, nSamplingDivider;
double fHistogramMergeFactor, fHuePercentCovered;
Histogram histogramHueMotion, skinHueHistogram;
IplImage *imgHueFrame, *imgSaturationFrame, *imgLastGrayFrame, *imgMotionFrame, *imgFilteredFrame;
IplImage *imgShrinked, *imgTemp, *imgGrayFrame, *imgHSVFrame;
protected:
void initData(IplImage *src, int widthDivider, int heightDivider);
void adaptiveFilter();
public:
enum {
MORPHING_METHOD_NONE = 0,
MORPHING_METHOD_ERODE = 1,
MORPHING_METHOD_ERODE_ERODE = 2,
MORPHING_METHOD_ERODE_DILATE = 3
};
CvAdaptiveSkinDetector(int samplingDivider = 1, int morphingMethod = MORPHING_METHOD_NONE);
virtual ~CvAdaptiveSkinDetector();
virtual void process(IplImage *inputBGRImage, IplImage *outputHueMask);
};
/****************************************************************************************\
* Fuzzy MeanShift Tracker *
\****************************************************************************************/
class CV_EXPORTS CvFuzzyPoint {
public:
double x, y, value;
CvFuzzyPoint(double _x, double _y);
};
class CV_EXPORTS CvFuzzyCurve {
private:
std::vector<CvFuzzyPoint> points;
double value, centre;
bool between(double x, double x1, double x2);
public:
CvFuzzyCurve();
~CvFuzzyCurve();
void setCentre(double _centre);
double getCentre();
void clear();
void addPoint(double x, double y);
double calcValue(double param);
double getValue();
void setValue(double _value);
};
class CV_EXPORTS CvFuzzyFunction {
public:
std::vector<CvFuzzyCurve> curves;
CvFuzzyFunction();
~CvFuzzyFunction();
void addCurve(CvFuzzyCurve *curve, double value = 0);
void resetValues();
double calcValue();
CvFuzzyCurve *newCurve();
};
class CV_EXPORTS CvFuzzyRule {
private:
CvFuzzyCurve *fuzzyInput1, *fuzzyInput2;
CvFuzzyCurve *fuzzyOutput;
public:
CvFuzzyRule();
~CvFuzzyRule();
void setRule(CvFuzzyCurve *c1, CvFuzzyCurve *c2, CvFuzzyCurve *o1);
double calcValue(double param1, double param2);
CvFuzzyCurve *getOutputCurve();
};
class CV_EXPORTS CvFuzzyController {
private:
std::vector<CvFuzzyRule*> rules;
public:
CvFuzzyController();
~CvFuzzyController();
void addRule(CvFuzzyCurve *c1, CvFuzzyCurve *c2, CvFuzzyCurve *o1);
double calcOutput(double param1, double param2);
};
class CV_EXPORTS CvFuzzyMeanShiftTracker
{
private:
class FuzzyResizer
{
private:
CvFuzzyFunction iInput, iOutput;
CvFuzzyController fuzzyController;
public:
FuzzyResizer();
int calcOutput(double edgeDensity, double density);
};
class SearchWindow
{
public:
FuzzyResizer *fuzzyResizer;
int x, y;
int width, height, maxWidth, maxHeight, ellipseHeight, ellipseWidth;
int ldx, ldy, ldw, ldh, numShifts, numIters;
int xGc, yGc;
long m00, m01, m10, m11, m02, m20;
double ellipseAngle;
double density;
unsigned int depthLow, depthHigh;
int verticalEdgeLeft, verticalEdgeRight, horizontalEdgeTop, horizontalEdgeBottom;
SearchWindow();
~SearchWindow();
void setSize(int _x, int _y, int _width, int _height);
void initDepthValues(IplImage *maskImage, IplImage *depthMap);
bool shift();
void extractInfo(IplImage *maskImage, IplImage *depthMap, bool initDepth);
void getResizeAttribsEdgeDensityLinear(int &resizeDx, int &resizeDy, int &resizeDw, int &resizeDh);
void getResizeAttribsInnerDensity(int &resizeDx, int &resizeDy, int &resizeDw, int &resizeDh);
void getResizeAttribsEdgeDensityFuzzy(int &resizeDx, int &resizeDy, int &resizeDw, int &resizeDh);
bool meanShift(IplImage *maskImage, IplImage *depthMap, int maxIteration, bool initDepth);
};
public:
enum TrackingState
{
tsNone = 0,
tsSearching = 1,
tsTracking = 2,
tsSetWindow = 3,
tsDisabled = 10
};
enum ResizeMethod {
rmEdgeDensityLinear = 0,
rmEdgeDensityFuzzy = 1,
rmInnerDensity = 2
};
enum {
MinKernelMass = 1000
};
SearchWindow kernel;
int searchMode;
private:
enum
{
MaxMeanShiftIteration = 5,
MaxSetSizeIteration = 5
};
void findOptimumSearchWindow(SearchWindow &searchWindow, IplImage *maskImage, IplImage *depthMap, int maxIteration, int resizeMethod, bool initDepth);
public:
CvFuzzyMeanShiftTracker();
~CvFuzzyMeanShiftTracker();
void track(IplImage *maskImage, IplImage *depthMap, int resizeMethod, bool resetSearch, int minKernelMass = MinKernelMass);
};
namespace cv
{
class CV_EXPORTS Octree
{
public:
struct Node
{
Node() {}
int begin, end;
float x_min, x_max, y_min, y_max, z_min, z_max;
int maxLevels;
bool isLeaf;
int children[8];
};
Octree();
Octree( const vector<Point3f>& points, int maxLevels = 10, int minPoints = 20 );
virtual ~Octree();
virtual void buildTree( const vector<Point3f>& points, int maxLevels = 10, int minPoints = 20 );
virtual void getPointsWithinSphere( const Point3f& center, float radius,
vector<Point3f>& points ) const;
const vector<Node>& getNodes() const { return nodes; }
private:
int minPoints;
vector<Point3f> points;
vector<Node> nodes;
virtual void buildNext(size_t node_ind);
};
class CV_EXPORTS Mesh3D
{
public:
struct EmptyMeshException {};
Mesh3D();
Mesh3D(const vector<Point3f>& vtx);
~Mesh3D();
void buildOctree();
void clearOctree();
float estimateResolution(float tryRatio = 0.1f);
void computeNormals(float normalRadius, int minNeighbors = 20);
void computeNormals(const vector<int>& subset, float normalRadius, int minNeighbors = 20);
void writeAsVrml(const String& file, const vector<Scalar>& colors = vector<Scalar>()) const;
vector<Point3f> vtx;
vector<Point3f> normals;
float resolution;
Octree octree;
const static Point3f allzero;
};
class CV_EXPORTS SpinImageModel
{
public:
/* model parameters, leave unset for default or auto estimate */
float normalRadius;
int minNeighbors;
float binSize;
int imageWidth;
float lambda;
float gamma;
float T_GeometriccConsistency;
float T_GroupingCorespondances;
/* public interface */
SpinImageModel();
explicit SpinImageModel(const Mesh3D& mesh);
~SpinImageModel();
void setLogger(std::ostream* log);
void selectRandomSubset(float ratio);
void setSubset(const vector<int>& subset);
void compute();
void match(const SpinImageModel& scene, vector< vector<Vec2i> >& result);
Mat packRandomScaledSpins(bool separateScale = false, size_t xCount = 10, size_t yCount = 10) const;
size_t getSpinCount() const { return spinImages.rows; }
Mat getSpinImage(size_t index) const { return spinImages.row((int)index); }
const Point3f& getSpinVertex(size_t index) const { return mesh.vtx[subset[index]]; }
const Point3f& getSpinNormal(size_t index) const { return mesh.normals[subset[index]]; }
const Mesh3D& getMesh() const { return mesh; }
Mesh3D& getMesh() { return mesh; }
/* static utility functions */
static bool spinCorrelation(const Mat& spin1, const Mat& spin2, float lambda, float& result);
static Point2f calcSpinMapCoo(const Point3f& point, const Point3f& vertex, const Point3f& normal);
static float geometricConsistency(const Point3f& pointScene1, const Point3f& normalScene1,
const Point3f& pointModel1, const Point3f& normalModel1,
const Point3f& pointScene2, const Point3f& normalScene2,
const Point3f& pointModel2, const Point3f& normalModel2);
static float groupingCreteria(const Point3f& pointScene1, const Point3f& normalScene1,
const Point3f& pointModel1, const Point3f& normalModel1,
const Point3f& pointScene2, const Point3f& normalScene2,
const Point3f& pointModel2, const Point3f& normalModel2,
float gamma);
protected:
void defaultParams();
void matchSpinToModel(const Mat& spin, vector<int>& indeces,
vector<float>& corrCoeffs, bool useExtremeOutliers = true) const;
void repackSpinImages(const vector<uchar>& mask, Mat& spinImages, bool reAlloc = true) const;
vector<int> subset;
Mesh3D mesh;
Mat spinImages;
std::ostream* out;
};
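/* Usage sketch (illustrative): spin-image matching between a model and a scene;
 * "modelMesh" and "sceneMesh" are assumed, pre-built cv::Mesh3D objects.
 *
 *     cv::SpinImageModel model(modelMesh), scene(sceneMesh);
 *     model.selectRandomSubset(0.5f);                  // optional vertex subsampling
 *     model.compute();
 *     scene.compute();
 *     std::vector<std::vector<cv::Vec2i> > matches;    // groups of (model, scene) index pairs
 *     model.match(scene, matches);
 */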
class CV_EXPORTS TickMeter
{
public:
TickMeter();
void start();
void stop();
int64 getTimeTicks() const;
double getTimeMicro() const;
double getTimeMilli() const;
double getTimeSec() const;
int64 getCounter() const;
void reset();
private:
int64 counter;
int64 sumTime;
int64 startTime;
};
CV_EXPORTS std::ostream& operator<<(std::ostream& out, const TickMeter& tm);
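/* Usage sketch (illustrative): time a code section; the meter accumulates over
 * repeated start()/stop() pairs and the stream operator prints a summary.
 *
 *     cv::TickMeter tm;
 *     tm.start();
 *     // ... code under measurement ...
 *     tm.stop();
 *     std::cout << tm.getTimeMilli() << " ms over " << tm.getCounter() << " runs\n";
 */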
class CV_EXPORTS SelfSimDescriptor
{
public:
SelfSimDescriptor();
SelfSimDescriptor(int _ssize, int _lsize,
int _startDistanceBucket=DEFAULT_START_DISTANCE_BUCKET,
int _numberOfDistanceBuckets=DEFAULT_NUM_DISTANCE_BUCKETS,
int _nangles=DEFAULT_NUM_ANGLES);
SelfSimDescriptor(const SelfSimDescriptor& ss);
virtual ~SelfSimDescriptor();
SelfSimDescriptor& operator = (const SelfSimDescriptor& ss);
size_t getDescriptorSize() const;
Size getGridSize( Size imgsize, Size winStride ) const;
virtual void compute(const Mat& img, vector<float>& descriptors, Size winStride=Size(),
const vector<Point>& locations=vector<Point>()) const;
virtual void computeLogPolarMapping(Mat& mappingMask) const;
virtual void SSD(const Mat& img, Point pt, Mat& ssd) const;
int smallSize;
int largeSize;
int startDistanceBucket;
int numberOfDistanceBuckets;
int numberOfAngles;
enum { DEFAULT_SMALL_SIZE = 5, DEFAULT_LARGE_SIZE = 41,
DEFAULT_NUM_ANGLES = 20, DEFAULT_START_DISTANCE_BUCKET = 3,
DEFAULT_NUM_DISTANCE_BUCKETS = 7 };
};
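/* Usage sketch (illustrative): dense self-similarity descriptors over an
 * (assumed grayscale) image with an example 8x8 window stride.
 *
 *     cv::Mat gray;                                    // assumed: CV_8UC1 input
 *     cv::SelfSimDescriptor ssd;                       // default window sizes
 *     std::vector<float> descriptors;
 *     ssd.compute(gray, descriptors, cv::Size(8, 8));
 *     size_t floatsPerPoint = ssd.getDescriptorSize();
 */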
typedef bool (*BundleAdjustCallback)(int iteration, double norm_error, void* user_data);
class LevMarqSparse {
public:
LevMarqSparse();
LevMarqSparse(int npoints, // number of points
int ncameras, // number of cameras
int nPointParams, // number of params per one point (3 in case of 3D points)
int nCameraParams, // number of parameters per one camera
int nErrParams, // number of parameters in measurement vector
// for 1 point at one camera (2 in case of 2D projections)
Mat& visibility, // visibility matrix. rows correspond to points, columns correspond to cameras
// 1 - point is visible for the camera, 0 - invisible
Mat& P0, // starting vector of parameters, first cameras then points
Mat& X, // measurements, in order of visibility. non visible cases are skipped
TermCriteria criteria, // termination criteria
// callback for estimation of Jacobian matrices
void (CV_CDECL * fjac)(int i, int j, Mat& point_params,
Mat& cam_params, Mat& A, Mat& B, void* data),
// callback for estimation of backprojection errors
void (CV_CDECL * func)(int i, int j, Mat& point_params,
Mat& cam_params, Mat& estim, void* data),
void* data, // user-specific data passed to the callbacks
BundleAdjustCallback cb, void* user_data
);
virtual ~LevMarqSparse();
virtual void run( int npoints, // number of points
int ncameras, // number of cameras
int nPointParams, // number of params per one point (3 in case of 3D points)
int nCameraParams, // number of parameters per one camera
int nErrParams, // number of parameters in measurement vector
// for 1 point at one camera (2 in case of 2D projections)
Mat& visibility, // visibility matrix. rows correspond to points, columns correspond to cameras
// 1 - point is visible for the camera, 0 - invisible
Mat& P0, // starting vector of parameters, first cameras then points
Mat& X, // measurements, in order of visibility. non visible cases are skipped
TermCriteria criteria, // termination criteria
// callback for estimation of Jacobian matrices
void (CV_CDECL * fjac)(int i, int j, Mat& point_params,
Mat& cam_params, Mat& A, Mat& B, void* data),
// callback for estimation of backprojection errors
void (CV_CDECL * func)(int i, int j, Mat& point_params,
Mat& cam_params, Mat& estim, void* data),
void* data // user-specific data passed to the callbacks
);
virtual void clear();
// useful function to do simple bundle adjustment tasks
static void bundleAdjust(vector<Point3d>& points, // positions of points in global coordinate system (input and output)
const vector<vector<Point2d> >& imagePoints, // projections of 3d points for every camera
const vector<vector<int> >& visibility, // visibility of 3d points for every camera
vector<Mat>& cameraMatrix, // intrinsic matrices of all cameras (input and output)
vector<Mat>& R, // rotation matrices of all cameras (input and output)
vector<Mat>& T, // translation vector of all cameras (input and output)
vector<Mat>& distCoeffs, // distortion coefficients of all cameras (input and output)
const TermCriteria& criteria=
TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, DBL_EPSILON),
BundleAdjustCallback cb = 0, void* user_data = 0);
public:
virtual void optimize(CvMat &_vis); //main function that runs minimization
//iteratively asks for measurements for visible camera-point pairs
void ask_for_proj(CvMat &_vis,bool once=false);
//iteratively asks for Jacobians for every camera-point pair
void ask_for_projac(CvMat &_vis);
CvMat* err; //error X-hX
double prevErrNorm, errNorm;
double lambda;
CvTermCriteria criteria;
int iters;
CvMat** U; //size of array is equal to number of cameras
CvMat** V; //size of array is equal to number of points
CvMat** inv_V_star; //inverse of V*
CvMat** A;
CvMat** B;
CvMat** W;
CvMat* X; //measurement
CvMat* hX; //current measurement estimation given the new parameter vector
CvMat* prevP; //current already accepted parameter.
CvMat* P; // parameters used to evaluate function with new params
// these parameters may be rejected
CvMat* deltaP; //computed increase of parameters (result of the normal system solution)
CvMat** ea; // sum_i AijT * e_ij, used as the right-hand side of the normal equations
// length of array is j = number of cameras
CvMat** eb; // sum_j BijT * e_ij, used as the right-hand side of the normal equations
// length of array is i = number of points
CvMat** Yj; //length of array is i = num_points
CvMat* S; //big matrix of blocks Sjk; each block has size num_cam_params x num_cam_params
CvMat* JtJ_diag; //diagonal of JtJ, used to back up diagonal elements before augmentation
CvMat* Vis_index; // matrix whose (i, j) element is the index of the measurement for point i and camera j
int num_cams;
int num_points;
int num_err_param;
int num_cam_param;
int num_point_param;
//target function and jacobian pointers, which need to be initialized
void (*fjac)(int i, int j, Mat& point_params, Mat& cam_params, Mat& A, Mat& B, void* data);
void (*func)(int i, int j, Mat& point_params, Mat& cam_params, Mat& estim, void* data);
void* data;
BundleAdjustCallback cb;
void* user_data;
};
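/* Usage sketch (illustrative): the static bundleAdjust() wrapper refines 3D
 * points and camera parameters in place; all containers are assumed to be
 * pre-filled and mutually consistent.
 *
 *     std::vector<cv::Point3d> points;                       // initial 3D estimates
 *     std::vector<std::vector<cv::Point2d> > imagePoints;    // per-camera projections
 *     std::vector<std::vector<int> > visibility;             // 1 = point seen by camera
 *     std::vector<cv::Mat> cameraMatrix, R, T, distCoeffs;   // per-camera parameters
 *     cv::LevMarqSparse::bundleAdjust(points, imagePoints, visibility,
 *                                     cameraMatrix, R, T, distCoeffs);
 */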
CV_EXPORTS_W int chamerMatching( Mat& img, Mat& templ,
CV_OUT vector<vector<Point> >& results, CV_OUT vector<float>& cost,
double templScale=1, int maxMatches = 20,
double minMatchDistance = 1.0, int padX = 3,
int padY = 3, int scales = 5, double minScale = 0.6, double maxScale = 1.6,
double orientationWeight = 0.5, double truncate = 20);
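/* Usage sketch (illustrative): chamfer matching operates on edge maps, so both
 * inputs are passed through Canny first; thresholds are arbitrary examples and
 * "image"/"templ" are assumed CV_8UC1 inputs.
 *
 *     cv::Mat imgEdges, tplEdges;
 *     cv::Canny(image, imgEdges, 80, 160);
 *     cv::Canny(templ, tplEdges, 80, 160);
 *     std::vector<std::vector<cv::Point> > results;
 *     std::vector<float> costs;
 *     int best = cv::chamerMatching(imgEdges, tplEdges, results, costs);
 *     // best < 0 means no match; otherwise results[best]/costs[best] is the winner
 */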
class CV_EXPORTS_W StereoVar
{
public:
// Flags
enum {USE_INITIAL_DISPARITY = 1, USE_EQUALIZE_HIST = 2, USE_SMART_ID = 4, USE_AUTO_PARAMS = 8, USE_MEDIAN_FILTERING = 16};
enum {CYCLE_O, CYCLE_V};
enum {PENALIZATION_TICHONOV, PENALIZATION_CHARBONNIER, PENALIZATION_PERONA_MALIK};
//! the default constructor
CV_WRAP StereoVar();
//! the full constructor taking all the necessary algorithm parameters
CV_WRAP StereoVar(int levels, double pyrScale, int nIt, int minDisp, int maxDisp, int poly_n, double poly_sigma, float fi, float lambda, int penalization, int cycle, int flags);
//! the destructor
virtual ~StereoVar();
//! the stereo correspondence operator that computes disparity map for the specified rectified stereo pair
CV_WRAP_AS(compute) virtual void operator()(const Mat& left, const Mat& right, Mat& disp);
CV_PROP_RW int levels;
CV_PROP_RW double pyrScale;
CV_PROP_RW int nIt;
CV_PROP_RW int minDisp;
CV_PROP_RW int maxDisp;
CV_PROP_RW int poly_n;
CV_PROP_RW double poly_sigma;
CV_PROP_RW float fi;
CV_PROP_RW float lambda;
CV_PROP_RW int penalization;
CV_PROP_RW int cycle;
CV_PROP_RW int flags;
private:
void autoParams();
void FMG(Mat &I1, Mat &I2, Mat &I2x, Mat &u, int level);
void VCycle_MyFAS(Mat &I1_h, Mat &I2_h, Mat &I2x_h, Mat &u_h, int level);
void VariationalSolver(Mat &I1_h, Mat &I2_h, Mat &I2x_h, Mat &u_h, int level);
};
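/* Usage sketch (illustrative): variational disparity estimation on a rectified
 * grayscale pair; the constructor arguments below are example settings only.
 *
 *     cv::StereoVar stereo(3, 0.5, 25, -64, 0, 3, 1.5, 25.0f, 0.03f,
 *                          cv::StereoVar::PENALIZATION_TICHONOV,
 *                          cv::StereoVar::CYCLE_V,
 *                          cv::StereoVar::USE_AUTO_PARAMS);
 *     cv::Mat disparity;
 *     stereo(leftRectified, rightRectified, disparity);  // inputs assumed CV_8UC1
 */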
CV_EXPORTS void polyfit(const Mat& srcx, const Mat& srcy, Mat& dst, int order);
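/* Usage sketch (illustrative): least-squares fit of a 2nd-order polynomial to
 * sampled (x, y) pairs; the single-column float layout is an assumption.
 *
 *     cv::Mat x = (cv::Mat_<float>(4, 1) << 0.f, 1.f, 2.f, 3.f);
 *     cv::Mat y = (cv::Mat_<float>(4, 1) << 1.f, 2.f, 5.f, 10.f);  // roughly 1 + x^2
 *     cv::Mat coeffs;
 *     cv::polyfit(x, y, coeffs, 2);                    // fitted polynomial coefficients
 */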
class CV_EXPORTS Directory
{
public:
static std::vector<std::string> GetListFiles ( const std::string& path, const std::string & exten = "*", bool addPath = true );
static std::vector<std::string> GetListFilesR ( const std::string& path, const std::string & exten = "*", bool addPath = true );
static std::vector<std::string> GetListFolders( const std::string& path, const std::string & exten = "*", bool addPath = true );
};
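/* Usage sketch (illustrative): enumerate files and subfolders of a directory.
 *
 *     std::vector<std::string> jpgs =
 *         cv::Directory::GetListFiles("/data/images", "*.jpg");  // non-recursive
 *     std::vector<std::string> all  =
 *         cv::Directory::GetListFilesR("/data/images");          // recursive
 *     std::vector<std::string> dirs =
 *         cv::Directory::GetListFolders("/data/images");
 */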
/*
 * Generation of a set of different colors in the following way:
 * 1) generate "factor" times more colors than needed, in RGB,
 * 2) convert them to Lab,
 * 3) choose the needed number of colors from that set so that they are
 *    maximally different from each other,
 * 4) convert the chosen colors back to RGB
 */
CV_EXPORTS void generateColors( std::vector<Scalar>& colors, size_t count, size_t factor=100 );
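/* Usage sketch (illustrative): request 16 well-separated colors, e.g. for
 * drawing distinct object labels; the default factor generates 100x more
 * candidates internally.
 *
 *     std::vector<cv::Scalar> palette;
 *     cv::generateColors(palette, 16);
 *     // palette[i] can now be passed to cv::rectangle, cv::putText, etc.
 */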
/*
* Estimate the rigid body motion from frame0 to frame1. The method is based on the paper
* "Real-Time Visual Odometry from Dense RGB-D Images", F. Steinbucker, J. Strum, D. Cremers, ICCV, 2011.
*/
enum { ROTATION = 1,
TRANSLATION = 2,
RIGID_BODY_MOTION = 4
};
CV_EXPORTS bool RGBDOdometry( Mat& Rt, const Mat& initRt,
const Mat& image0, const Mat& depth0, const Mat& mask0,
const Mat& image1, const Mat& depth1, const Mat& mask1,
const Mat& cameraMatrix, float minDepth=0.f, float maxDepth=4.f, float maxDepthDiff=0.07f,
const std::vector<int>& iterCounts=std::vector<int>(),
const std::vector<float>& minGradientMagnitudes=std::vector<float>(),
int transformType=RIGID_BODY_MOTION );
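/* Usage sketch (illustrative): estimate the rigid motion between two RGB-D
 * frames; "gray0/gray1" are assumed 8-bit grayscale images, "depth0/depth1"
 * float depth maps in meters, and "cameraMatrix" the known 3x3 intrinsics.
 *
 *     cv::Mat Rt;                                      // output transform
 *     bool found = cv::RGBDOdometry(Rt, cv::Mat(),     // no initial guess
 *                                   gray0, depth0, cv::Mat(),
 *                                   gray1, depth1, cv::Mat(),
 *                                   cameraMatrix);
 *     // per the comment above, Rt describes the motion from frame0 to frame1
 */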
/**
*Bilinear interpolation technique.
*
*The value of a desired cortical pixel is obtained through bilinear interpolation of the values
*of the four Cartesian pixels nearest to the center of the RF.
*The same principle is applied to the inverse transformation.
*
*More details can be found in http://dx.doi.org/10.1007/978-3-642-23968-7_5
*/
class CV_EXPORTS LogPolar_Interp
{
public:
LogPolar_Interp() {}
/**
*Constructor
*\param w the width of the input image
*\param h the height of the input image
*\param center the transformation center: where the output precision is maximal
*\param R the number of rings of the cortical image (default value 70 pixels)
*\param ro0 the radius of the blind spot (default value 3 pixels)
*\param full \a 1 (default value) means that the retinal image (the inverse transform) is computed within the circumscribing circle.
* \a 0 means that the retinal image is computed within the inscribed circle.
*\param S the number of sectors of the cortical image (default value 70 pixels).
* Its value is usually internally computed to obtain a pixel aspect ratio equal to 1.
*\param sp \a 1 (default value) means that the parameter \a S is internally computed.
* \a 0 means that the parameter \a S is provided by the user.
*/
LogPolar_Interp(int w, int h, Point2i center, int R=70, double ro0=3.0,
int interp=INTER_LINEAR, int full=1, int S=117, int sp=1);
/**
*Transformation from Cartesian image to cortical (log-polar) image.
*\param source the Cartesian image
*\return the transformed image (cortical image)
*/
const Mat to_cortical(const Mat &source);
/**
*Transformation from cortical image to retinal (inverse log-polar) image.
*\param source the cortical image
*\return the transformed image (retinal image)
*/
const Mat to_cartesian(const Mat &source);
/**
*Destructor
*/
~LogPolar_Interp();
protected:
Mat Rsri;
Mat Csri;
int S, R, M, N;
int top, bottom, left, right;
double ro0, romax, a, q;
int interp;
Mat ETAyx;
Mat CSIyx;
void create_map(int M, int N, int R, int S, double ro0);
};
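/* Usage sketch (illustrative): forward and inverse log-polar mapping centred on
 * the image middle; LogPolar_Overlapping and LogPolar_Adjacent below expose the
 * same to_cortical()/to_cartesian() interface.
 *
 *     cv::Mat src;                                     // assumed input image
 *     cv::LogPolar_Interp lp(src.cols, src.rows,
 *                            cv::Point2i(src.cols / 2, src.rows / 2));
 *     cv::Mat cortical = lp.to_cortical(src);          // Cartesian -> log-polar
 *     cv::Mat back     = lp.to_cartesian(cortical);    // approximate inverse
 */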
/**
*Overlapping circular receptive fields technique
*
*The Cartesian plane is divided in two regions: the fovea and the periphery.
*The fovea (oversampling) is handled by using the bilinear interpolation technique described above, whereas in
*the periphery we use the overlapping Gaussian circular RFs.
*
*More details can be found in http://dx.doi.org/10.1007/978-3-642-23968-7_5
*/
class CV_EXPORTS LogPolar_Overlapping
{
public:
LogPolar_Overlapping() {}
/**
*Constructor
*\param w the width of the input image
*\param h the height of the input image
*\param center the transformation center: where the output precision is maximal
*\param R the number of rings of the cortical image (default value 70 pixels)
*\param ro0 the radius of the blind spot (default value 3 pixels)
*\param full \a 1 (default value) means that the retinal image (the inverse transform) is computed within the circumscribing circle.
* \a 0 means that the retinal image is computed within the inscribed circle.
*\param S the number of sectors of the cortical image (default value 70 pixels).
* Its value is usually internally computed to obtain a pixel aspect ratio equal to 1.
*\param sp \a 1 (default value) means that the parameter \a S is internally computed.
* \a 0 means that the parameter \a S is provided by the user.
*/
LogPolar_Overlapping(int w, int h, Point2i center, int R=70,
double ro0=3.0, int full=1, int S=117, int sp=1);
/**
*Transformation from Cartesian image to cortical (log-polar) image.
*\param source the Cartesian image
*\return the transformed image (cortical image)
*/
const Mat to_cortical(const Mat &source);
/**
*Transformation from cortical image to retinal (inverse log-polar) image.
*\param source the cortical image
*\return the transformed image (retinal image)
*/
const Mat to_cartesian(const Mat &source);
/**
*Destructor
*/
~LogPolar_Overlapping();
protected:
Mat Rsri;
Mat Csri;
vector<int> Rsr;
vector<int> Csr;
vector<double> Wsr;
int S, R, M, N, ind1;
int top, bottom, left, right;
double ro0, romax, a, q;
struct kernel
{
kernel() { w = 0; }
vector<double> weights;
int w;
};
Mat ETAyx;
Mat CSIyx;
vector<kernel> w_ker_2D;
void create_map(int M, int N, int R, int S, double ro0);
};
/**
* Adjacent receptive fields technique
*
*All the Cartesian pixels, whose coordinates in the cortical domain share the same integer part, are assigned to the same RF.
*The precision of the boundaries of the RF can be improved by breaking each pixel into subpixels and assigning each of them to the correct RF.
*This technique is implemented from: Traver, V., Pla, F.: Log-polar mapping template design: From task-level requirements
*to geometry parameters. Image Vision Comput. 26(10) (2008) 1354-1370
*
*More details can be found in http://dx.doi.org/10.1007/978-3-642-23968-7_5
*/
class CV_EXPORTS LogPolar_Adjacent
{
public:
LogPolar_Adjacent() {}
/**
*Constructor
*\param w the width of the input image
*\param h the height of the input image
*\param center the transformation center: where the output precision is maximal
*\param R the number of rings of the cortical image (default value 70 pixels)
*\param ro0 the radius of the blind spot (default value 3 pixels)
*\param smin the size of the subpixel (default value 0.25 pixel)
*\param full \a 1 (default value) means that the retinal image (the inverse transform) is computed within the circumscribing circle.
* \a 0 means that the retinal image is computed within the inscribed circle.
*\param S the number of sectors of the cortical image (default value 70 pixels).
* Its value is usually internally computed to obtain a pixel aspect ratio equal to 1.
*\param sp \a 1 (default value) means that the parameter \a S is internally computed.
* \a 0 means that the parameter \a S is provided by the user.
*/
LogPolar_Adjacent(int w, int h, Point2i center, int R=70, double ro0=3.0, double smin=0.25, int full=1, int S=117, int sp=1);
/**
*Transformation from Cartesian image to cortical (log-polar) image.
*\param source the Cartesian image
*\return the transformed image (cortical image)
*/
const Mat to_cortical(const Mat &source);
/**
*Transformation from cortical image to retinal (inverse log-polar) image.
*\param source the cortical image
*\return the transformed image (retinal image)
*/
const Mat to_cartesian(const Mat &source);
/**
*Destructor
*/
~LogPolar_Adjacent();
protected:
struct pixel
{
pixel() { u = v = 0; a = 0.; }
int u;
int v;
double a;
};
int S, R, M, N;
int top, bottom, left, right;
double ro0, romax, a, q;
vector<vector<pixel> > L;
vector<double> A;
void subdivide_recursively(double x, double y, int i, int j, double length, double smin);
bool get_uv(double x, double y, int&u, int&v);
void create_map(int M, int N, int R, int S, double ro0, double smin);
};
CV_EXPORTS Mat subspaceProject(InputArray W, InputArray mean, InputArray src);
CV_EXPORTS Mat subspaceReconstruct(InputArray W, InputArray mean, InputArray src);
class CV_EXPORTS LDA
{
public:
// Initializes an LDA with num_components (default 0) and specifies how
// samples are aligned (default dataAsRow=true).
LDA(int num_components = 0) :
_num_components(num_components) {};
// Initializes and performs a Discriminant Analysis with Fisher's
// Optimization Criterion on given data in src and corresponding labels
// in labels. If 0 (or less) components are given, their number is
// determined automatically from the given data.
LDA(InputArrayOfArrays src, InputArray labels,
int num_components = 0) :
_num_components(num_components)
{
this->compute(src, labels); //! compute eigenvectors and eigenvalues
}
// Serializes this object to a given filename.
void save(const String& filename) const;
// Deserializes this object from a given filename.
void load(const String& filename);
// Serializes this object to a given cv::FileStorage.
void save(FileStorage& fs) const;
// Deserializes this object from a given cv::FileStorage.
void load(const FileStorage& node);
// Destructor.
~LDA() {}
//! Compute the discriminants for data in src and labels.
void compute(InputArrayOfArrays src, InputArray labels);
// Projects samples into the LDA subspace.
Mat project(InputArray src);
// Reconstructs projections from the LDA subspace.
Mat reconstruct(InputArray src);
// Returns the eigenvectors of this LDA.
Mat eigenvectors() const { return _eigenvectors; };
// Returns the eigenvalues of this LDA.
Mat eigenvalues() const { return _eigenvalues; }
protected:
bool _dataAsRow;
int _num_components;
Mat _eigenvectors;
Mat _eigenvalues;
void lda(InputArrayOfArrays src, InputArray labels);
};
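/* Usage sketch (illustrative): Fisher discriminant analysis on row-sample data;
 * "data" (one sample per row) and "labels" are assumed inputs. The free
 * functions subspaceProject/subspaceReconstruct above perform the same mapping
 * given an explicit basis W and mean.
 *
 *     cv::Mat data;                                    // assumed: one sample per row
 *     std::vector<int> labels;                         // assumed: one class id per row
 *     cv::LDA lda(data, labels);
 *     cv::Mat projected     = lda.project(data.row(0));
 *     cv::Mat reconstructed = lda.reconstruct(projected);
 */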
class CV_EXPORTS_W FaceRecognizer : public Algorithm
{
public:
//! virtual destructor
virtual ~FaceRecognizer() {}
// Trains a FaceRecognizer.
CV_WRAP virtual void train(InputArrayOfArrays src, InputArray labels) = 0;
// Updates a FaceRecognizer.
CV_WRAP virtual void update(InputArrayOfArrays src, InputArray labels);
// Gets a prediction from a FaceRecognizer.
virtual int predict(InputArray src) const = 0;
// Predicts the label and confidence for a given sample.
CV_WRAP virtual void predict(InputArray src, CV_OUT int &label, CV_OUT double &confidence) const = 0;
// Serializes this object to a given filename.
CV_WRAP virtual void save(const String& filename) const;
// Deserializes this object from a given filename.
CV_WRAP virtual void load(const String& filename);
// Serializes this object to a given cv::FileStorage.
virtual void save(FileStorage& fs) const = 0;
// Deserializes this object from a given cv::FileStorage.
virtual void load(const FileStorage& fs) = 0;
};
CV_EXPORTS_W Ptr<FaceRecognizer> createEigenFaceRecognizer(int num_components = 0, double threshold = DBL_MAX);
CV_EXPORTS_W Ptr<FaceRecognizer> createFisherFaceRecognizer(int num_components = 0, double threshold = DBL_MAX);
CV_EXPORTS_W Ptr<FaceRecognizer> createLBPHFaceRecognizer(int radius=1, int neighbors=8,
int grid_x=8, int grid_y=8, double threshold = DBL_MAX);
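/* Usage sketch (illustrative): train an Eigenfaces model and classify a probe
 * image; "images", "labels" and "probe" are assumed, caller-provided data with
 * all images of equal size.
 *
 *     std::vector<cv::Mat> images;                     // assumed: grayscale faces
 *     std::vector<int> labels;                         // assumed: one id per image
 *     cv::Ptr<cv::FaceRecognizer> model = cv::createEigenFaceRecognizer();
 *     model->train(images, labels);
 *     int id = -1; double confidence = 0.0;
 *     model->predict(probe, id, confidence);
 */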
enum
{
COLORMAP_AUTUMN = 0,
COLORMAP_BONE = 1,
COLORMAP_JET = 2,
COLORMAP_WINTER = 3,
COLORMAP_RAINBOW = 4,
COLORMAP_OCEAN = 5,
COLORMAP_SUMMER = 6,
COLORMAP_SPRING = 7,
COLORMAP_COOL = 8,
COLORMAP_HSV = 9,
COLORMAP_PINK = 10,
COLORMAP_HOT = 11
};
CV_EXPORTS_W void applyColorMap(InputArray src, OutputArray dst, int colormap);
CV_EXPORTS bool initModule_contrib();
}
#include "opencv2/contrib/retina.hpp"
#include "opencv2/contrib/openfabmap.hpp"
#endif
#ifdef __OPENCV_BUILD
#error this is a compatibility header which should not be used inside the OpenCV library
#endif
#include "opencv2/contrib.hpp"

View File

@@ -2,8 +2,8 @@
#if defined(__linux__) || defined(LINUX) || defined(__APPLE__) || defined(ANDROID)
#include <opencv2/core/core.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/core.hpp>
#include <opencv2/objdetect.hpp>
#include <vector>

View File

@@ -43,12 +43,11 @@
#ifndef __OPENCV_HYBRIDTRACKER_H_
#define __OPENCV_HYBRIDTRACKER_H_
#include "opencv2/core/core.hpp"
#include "opencv2/core/operations.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/core.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/video/tracking.hpp"
#include "opencv2/ml/ml.hpp"
#include "opencv2/ml.hpp"
#ifdef __cplusplus
@@ -76,9 +75,9 @@ struct CV_EXPORTS CvMeanShiftTrackerParams
CvTermCriteria term_crit = CvTermCriteria());
int tracking_type;
vector<float> h_range;
vector<float> s_range;
vector<float> v_range;
std::vector<float> h_range;
std::vector<float> s_range;
std::vector<float> v_range;
CvTermCriteria term_crit;
};
@@ -145,7 +144,7 @@ class CV_EXPORTS CvFeatureTracker
private:
Ptr<Feature2D> dd;
Ptr<DescriptorMatcher> matcher;
vector<DMatch> matches;
std::vector<DMatch> matches;
Mat prev_image;
Mat prev_image_bw;
@@ -153,7 +152,7 @@ private:
Point2d prev_center;
int ittr;
vector<Point2f> features[2];
std::vector<Point2f> features[2];
public:
Mat disp_matches;

View File

@@ -52,8 +52,8 @@
#ifndef __OPENCV_OPENFABMAP_H_
#define __OPENCV_OPENFABMAP_H_
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include <vector>
#include <list>
@@ -65,10 +65,6 @@ namespace cv {
namespace of2 {
using std::list;
using std::map;
using std::multiset;
/*
Return data format of a FABMAP compare call
*/
@@ -115,50 +111,50 @@ public:
//methods to add training data for sampling method
virtual void addTraining(const Mat& queryImgDescriptor);
virtual void addTraining(const vector<Mat>& queryImgDescriptors);
virtual void addTraining(const std::vector<Mat>& queryImgDescriptors);
//methods to add to the test data
virtual void add(const Mat& queryImgDescriptor);
virtual void add(const vector<Mat>& queryImgDescriptors);
virtual void add(const std::vector<Mat>& queryImgDescriptors);
//accessors
const vector<Mat>& getTrainingImgDescriptors() const;
const vector<Mat>& getTestImgDescriptors() const;
const std::vector<Mat>& getTrainingImgDescriptors() const;
const std::vector<Mat>& getTestImgDescriptors() const;
//Main FabMap image comparison
void compare(const Mat& queryImgDescriptor,
vector<IMatch>& matches, bool addQuery = false,
std::vector<IMatch>& matches, bool addQuery = false,
const Mat& mask = Mat());
void compare(const Mat& queryImgDescriptor,
const Mat& testImgDescriptors, vector<IMatch>& matches,
const Mat& testImgDescriptors, std::vector<IMatch>& matches,
const Mat& mask = Mat());
void compare(const Mat& queryImgDescriptor,
const vector<Mat>& testImgDescriptors,
vector<IMatch>& matches, const Mat& mask = Mat());
void compare(const vector<Mat>& queryImgDescriptors, vector<
const std::vector<Mat>& testImgDescriptors,
std::vector<IMatch>& matches, const Mat& mask = Mat());
void compare(const std::vector<Mat>& queryImgDescriptors, std::vector<
IMatch>& matches, bool addQuery = false, const Mat& mask =
Mat());
void compare(const vector<Mat>& queryImgDescriptors,
const vector<Mat>& testImgDescriptors,
vector<IMatch>& matches, const Mat& mask = Mat());
void compare(const std::vector<Mat>& queryImgDescriptors,
const std::vector<Mat>& testImgDescriptors,
std::vector<IMatch>& matches, const Mat& mask = Mat());
protected:
void compareImgDescriptor(const Mat& queryImgDescriptor,
int queryIndex, const vector<Mat>& testImgDescriptors,
vector<IMatch>& matches);
int queryIndex, const std::vector<Mat>& testImgDescriptors,
std::vector<IMatch>& matches);
void addImgDescriptor(const Mat& queryImgDescriptor);
//the getLikelihoods method is overridden for each different FabMap
//method.
virtual void getLikelihoods(const Mat& queryImgDescriptor,
const vector<Mat>& testImgDescriptors,
vector<IMatch>& matches);
const std::vector<Mat>& testImgDescriptors,
std::vector<IMatch>& matches);
virtual double getNewPlaceLikelihood(const Mat& queryImgDescriptor);
//turn likelihoods into probabilities (also add in motion model if used)
void normaliseDistribution(vector<IMatch>& matches);
void normaliseDistribution(std::vector<IMatch>& matches);
//Chow-Liu Tree
int pq(int q);
@@ -174,9 +170,9 @@ protected:
//data
Mat clTree;
vector<Mat> trainingImgDescriptors;
vector<Mat> testImgDescriptors;
vector<IMatch> priorMatches;
std::vector<Mat> trainingImgDescriptors;
std::vector<Mat> testImgDescriptors;
std::vector<IMatch> priorMatches;
//parameters
double PzGe;
@@ -203,8 +199,8 @@ public:
protected:
//FabMap1 implementation of likelihood comparison
void getLikelihoods(const Mat& queryImgDescriptor, const vector<
Mat>& testImgDescriptors, vector<IMatch>& matches);
void getLikelihoods(const Mat& queryImgDescriptor, const std::vector<
Mat>& testImgDescriptors, std::vector<IMatch>& matches);
};
/*
@@ -219,8 +215,8 @@ public:
protected:
//FabMap look-up-table implementation of the likelihood comparison
void getLikelihoods(const Mat& queryImgDescriptor, const vector<
Mat>& testImgDescriptors, vector<IMatch>& matches);
void getLikelihoods(const Mat& queryImgDescriptor, const std::vector<
Mat>& testImgDescriptors, std::vector<IMatch>& matches);
//precomputed data
int (*table)[8];
@@ -243,8 +239,8 @@ public:
protected:
//FabMap Fast Bail-out implementation of the likelihood comparison
void getLikelihoods(const Mat& queryImgDescriptor, const vector<
Mat>& testImgDescriptors, vector<IMatch>& matches);
void getLikelihoods(const Mat& queryImgDescriptor, const std::vector<
Mat>& testImgDescriptors, std::vector<IMatch>& matches);
//structure used to determine word comparison order
struct WordStats {
@@ -268,7 +264,7 @@ protected:
};
//private fast bail-out necessary functions
void setWordStatistics(const Mat& queryImgDescriptor, multiset<WordStats>& wordData);
void setWordStatistics(const Mat& queryImgDescriptor, std::multiset<WordStats>& wordData);
double limitbisection(double v, double m);
double bennettInequality(double v, double m, double delta);
static bool compInfo(const WordStats& first, const WordStats& second);
@@ -295,39 +291,39 @@ public:
void addTraining(const Mat& queryImgDescriptors) {
FabMap::addTraining(queryImgDescriptors);
}
void addTraining(const vector<Mat>& queryImgDescriptors);
void addTraining(const std::vector<Mat>& queryImgDescriptors);
void add(const Mat& queryImgDescriptors) {
FabMap::add(queryImgDescriptors);
}
void add(const vector<Mat>& queryImgDescriptors);
void add(const std::vector<Mat>& queryImgDescriptors);
protected:
//FabMap2 implementation of the likelihood comparison
void getLikelihoods(const Mat& queryImgDescriptor, const vector<
Mat>& testImgDescriptors, vector<IMatch>& matches);
void getLikelihoods(const Mat& queryImgDescriptor, const std::vector<
Mat>& testImgDescriptors, std::vector<IMatch>& matches);
double getNewPlaceLikelihood(const Mat& queryImgDescriptor);
//the likelihood function using the inverted index
void getIndexLikelihoods(const Mat& queryImgDescriptor, vector<
double>& defaults, map<int, vector<int> >& invertedMap,
vector<IMatch>& matches);
void getIndexLikelihoods(const Mat& queryImgDescriptor, std::vector<
double>& defaults, std::map<int, std::vector<int> >& invertedMap,
std::vector<IMatch>& matches);
void addToIndex(const Mat& queryImgDescriptor,
vector<double>& defaults,
map<int, vector<int> >& invertedMap);
std::vector<double>& defaults,
std::map<int, std::vector<int> >& invertedMap);
//data
vector<double> d1, d2, d3, d4;
vector<vector<int> > children;
std::vector<double> d1, d2, d3, d4;
std::vector<std::vector<int> > children;
// TODO: inverted map a vector?
vector<double> trainingDefaults;
map<int, vector<int> > trainingInvertedMap;
std::vector<double> trainingDefaults;
std::map<int, std::vector<int> > trainingInvertedMap;
vector<double> testDefaults;
map<int, vector<int> > testInvertedMap;
std::vector<double> testDefaults;
std::map<int, std::vector<int> > testInvertedMap;
};
/*
@@ -342,14 +338,14 @@ public:
//add data to the chow-liu tree before calling make
void add(const Mat& imgDescriptor);
void add(const vector<Mat>& imgDescriptors);
void add(const std::vector<Mat>& imgDescriptors);
const vector<Mat>& getImgDescriptors() const;
const std::vector<Mat>& getImgDescriptors() const;
Mat make(double infoThreshold = 0.0);
private:
vector<Mat> imgDescriptors;
std::vector<Mat> imgDescriptors;
Mat mergedImgDescriptors;
typedef struct info {
@@ -364,18 +360,18 @@ private:
double CP(int a, bool za, int b, bool zb); // a | b
//calculating mutual information of all edges
void createBaseEdges(list<info>& edges, double infoThreshold);
void createBaseEdges(std::list<info>& edges, double infoThreshold);
double calcMutInfo(int word1, int word2);
static bool sortInfoScores(const info& first, const info& second);
//selecting minimum spanning edges with maximum information
bool reduceEdgesToMinSpan(list<info>& edges);
bool reduceEdgesToMinSpan(std::list<info>& edges);
//building the tree structure
Mat buildTree(int root_word, list<info> &edges);
Mat buildTree(int root_word, std::list<info> &edges);
void recAddToTree(Mat &cltree, int q, int pq,
list<info> &remaining_edges);
vector<int> extractChildren(list<info> &remaining_edges, int q);
std::list<info> &remaining_edges);
std::vector<int> extractChildren(std::list<info> &remaining_edges, int q);
};

View File

@@ -72,7 +72,7 @@
* Author: Alexandre Benoit
*/
#include "opencv2/core/core.hpp" // for all OpenCV core functionalities access, including cv::Exception support
#include "opencv2/core.hpp" // for all OpenCV core functionalities access, including cv::Exception support
#include <valarray>
namespace cv
@@ -85,10 +85,8 @@ enum RETINA_COLORSAMPLINGMETHOD
RETINA_COLOR_BAYER//!< standard bayer sampling
};
class RetinaFilter;
/**
* @class Retina a wrapper class which allows the Gipsa/Listic Labs model to be used.
* @class Retina a wrapper class which allows the Gipsa/Listic Labs model to be used with OpenCV.
* This retina model allows spatio-temporal image processing (applied on still images, video sequences).
* As a summary, these are the retina model properties:
* => It applies a spectral whitening (mid-frequency details enhancement)
@@ -115,7 +113,7 @@ class CV_EXPORTS Retina : public Algorithm {
public:
// parameters structure for better clarity, check explanations in the comments of the methods setupOPLandIPLParvoChannel and setupIPLMagnoChannel
struct RetinaParameters{
struct RetinaParameters{
struct OPLandIplParvoParameters{ // Outer Plexiform Layer (OPL) and Inner Plexiform Layer Parvocellular (IplParvo) parameters
OPLandIplParvoParameters():colorMode(true),
normaliseOutput(true),
@@ -163,15 +161,14 @@ public:
* @param retinaParameterFile : the parameters filename
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
*/
virtual void setup(std::string retinaParameterFile="", const bool applyDefaultSetupOnFailure=true)=0;
virtual void setup(String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true)=0;
/**
* try to open an XML retina parameters file to adjust current retina instance setup
* => if the xml file does not exist, then default setup is applied
* => warning, Exceptions are thrown if the read XML file is not valid
* @param fs : the open Filestorage which contains retina parameters
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
*/
virtual void setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure=true)=0;
@@ -193,14 +190,13 @@ public:
* parameters setup display method
* @return a string which contains formatted parameters information
*/
virtual const std::string printSetup()=0;
virtual const String printSetup()=0;
/**
* write xml/yml formatted parameters information
* @param fs : the filename of the xml file that will be opened and written with formatted parameters information
*/
virtual void write( std::string fs ) const=0;
virtual void write( String fs ) const=0;
/**
* write xml/yml formatted parameters information

View File

@@ -35,6 +35,8 @@
//M*/
#include "precomp.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/contrib/compat.hpp"
#define ASD_INTENSITY_SET_PIXEL(pointer, qq) {(*pointer) = (unsigned char)qq;}

View File

@@ -40,7 +40,9 @@
//M*/
#include "precomp.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/contrib/compat.hpp"
#include "opencv2/calib3d/calib3d_c.h"
#include <iostream>
using namespace cv;
@@ -359,7 +361,7 @@ void LevMarqSparse::ask_for_proj(CvMat &/*_vis*/,bool once) {
cvGetSubRect( P, &cam_mat, cvRect( 0, j * num_cam_param, 1, num_cam_param ));
CvMat measur_mat;
cvGetSubRect( hX, &measur_mat, cvRect( 0, ind * num_err_param, 1, num_err_param ));
Mat _point_mat(&point_mat), _cam_mat(&cam_mat), _measur_mat(&measur_mat);
Mat _point_mat = cv::cvarrToMat(&point_mat), _cam_mat = cv::cvarrToMat(&cam_mat), _measur_mat = cv::cvarrToMat(&measur_mat);
func( i, j, _point_mat, _cam_mat, _measur_mat, data);
assert( ind*num_err_param == ((int*)(Vis_index->data.ptr + i * Vis_index->step))[j]);
ind+=1;
@@ -398,7 +400,7 @@ void LevMarqSparse::ask_for_projac(CvMat &/*_vis*/) //should be evaluated at p
//CvMat* Bij = B_line[j];
//CvMat* Bij = ((CvMat**)(B->data.ptr + B->step * i))[j];
Mat _point_mat(&point_mat), _cam_mat(&cam_mat), _Aij(Aij), _Bij(Bij);
Mat _point_mat = cv::cvarrToMat(&point_mat), _cam_mat = cv::cvarrToMat(&cam_mat), _Aij = cv::cvarrToMat(Aij), _Bij = cv::cvarrToMat(Bij);
(*fjac)(i, j, _point_mat, _cam_mat, _Aij, _Bij, data);
}
}
@@ -989,13 +991,13 @@ static void func_new(int i, int j, Mat& point_params, Mat& cam_params, Mat& esti
func(i,j,&_point_params,&_cam_params,&_estim,data);
};
void LevMarqSparse::bundleAdjust( vector<Point3d>& points, //positions of points in global coordinate system (input and output)
const vector<vector<Point2d> >& imagePoints, //projections of 3d points for every camera
const vector<vector<int> >& visibility, //visibility of 3d points for every camera
vector<Mat>& cameraMatrix, //intrinsic matrices of all cameras (input and output)
vector<Mat>& R, //rotation matrices of all cameras (input and output)
vector<Mat>& T, //translation vector of all cameras (input and output)
vector<Mat>& distCoeffs, //distortion coefficients of all cameras (input and output)
void LevMarqSparse::bundleAdjust( std::vector<Point3d>& points, //positions of points in global coordinate system (input and output)
const std::vector<std::vector<Point2d> >& imagePoints, //projections of 3d points for every camera
const std::vector<std::vector<int> >& visibility, //visibility of 3d points for every camera
std::vector<Mat>& cameraMatrix, //intrinsic matrices of all cameras (input and output)
std::vector<Mat>& R, //rotation matrices of all cameras (input and output)
std::vector<Mat>& T, //translation vector of all cameras (input and output)
std::vector<Mat>& distCoeffs, //distortion coefficients of all cameras (input and output)
const TermCriteria& criteria,
BundleAdjustCallback cb, void* user_data) {
//,enum{MOTION_AND_STRUCTURE,MOTION,STRUCTURE})
@@ -1100,16 +1102,17 @@ void LevMarqSparse::bundleAdjust( vector<Point3d>& points, //positions of points
}
//fill camera params
//R.clear();T.clear();cameraMatrix.clear();
Mat levmarP = cv::cvarrToMat(levmar.P);
for( int i = 0; i < num_cameras; i++ ) {
//rotation
Mat rot_vec = Mat(levmar.P).rowRange(i*num_cam_param, i*num_cam_param+3);
Mat rot_vec = levmarP.rowRange(i*num_cam_param, i*num_cam_param+3);
Rodrigues( rot_vec, R[i] );
//translation
T[i] = Mat(levmar.P).rowRange(i*num_cam_param + 3, i*num_cam_param+6);
levmarP.rowRange(i*num_cam_param + 3, i*num_cam_param+6).copyTo(T[i]);
//intrinsic camera matrix
double* intr_data = (double*)cameraMatrix[i].data;
double* intr = (double*)(Mat(levmar.P).data + Mat(levmar.P).step * (i*num_cam_param+6));
double* intr = (double*)(levmarP.data +levmarP.step * (i*num_cam_param+6));
//focals
intr_data[0] = intr[0]; //fx
intr_data[4] = intr[1]; //fy
@@ -1119,7 +1122,7 @@ void LevMarqSparse::bundleAdjust( vector<Point3d>& points, //positions of points
//add distortion if exists
if( distCoeffs.size() ) {
Mat(levmar.P).rowRange(i*num_cam_param + 10, i*num_cam_param+10+numdist).copyTo(distCoeffs[i]);
levmarP.rowRange(i*num_cam_param + 10, i*num_cam_param+10+numdist).copyTo(distCoeffs[i]);
}
}
}

View File

@@ -180,7 +180,7 @@ void BasicRetinaFilter::setLPfilterParameters(const float beta, const float tau,
}
float _temp = (1.0f+_beta)/(2.0f*_mu*_alpha);
float a = _filteringCoeficientsTable[tableOffset] = 1.0f + _temp - (float)sqrt( (1.0f+_temp)*(1.0f+_temp) - 1.0f);
float a = _filteringCoeficientsTable[tableOffset] = 1.0f + _temp - (float)std::sqrt( (1.0f+_temp)*(1.0f+_temp) - 1.0f);
_filteringCoeficientsTable[1+tableOffset]=(1.0f-a)*(1.0f-a)*(1.0f-a)*(1.0f-a)/(1.0f+_beta);
_filteringCoeficientsTable[2+tableOffset] =tau;
@@ -210,17 +210,17 @@ void BasicRetinaFilter::setProgressiveFilterConstants_CentredAccuracy(const floa
float _alpha=0.8f;
float _temp = (1.0f+_beta)/(2.0f*_mu*_alpha);
float a=_filteringCoeficientsTable[tableOffset] = 1.0f + _temp - (float)sqrt( (1.0f+_temp)*(1.0f+_temp) - 1.0f);
float a=_filteringCoeficientsTable[tableOffset] = 1.0f + _temp - (float)std::sqrt( (1.0f+_temp)*(1.0f+_temp) - 1.0f);
_filteringCoeficientsTable[tableOffset+1]=(1.0f-a)*(1.0f-a)*(1.0f-a)*(1.0f-a)/(1.0f+_beta);
_filteringCoeficientsTable[tableOffset+2] =tau;
float commonFactor=alpha0/(float)sqrt(_halfNBcolumns*_halfNBcolumns+_halfNBrows*_halfNBrows+1.0f);
float commonFactor=alpha0/(float)std::sqrt(_halfNBcolumns*_halfNBcolumns+_halfNBrows*_halfNBrows+1.0f);
//memset(_progressiveSpatialConstant, 255, _filterOutput.getNBpixels());
for (unsigned int idColumn=0;idColumn<_halfNBcolumns; ++idColumn)
for (unsigned int idRow=0;idRow<_halfNBrows; ++idRow)
{
// computing local spatial constant
float localSpatialConstantValue=commonFactor*sqrt((float)(idColumn*idColumn)+(float)(idRow*idRow));
float localSpatialConstantValue=commonFactor*std::sqrt((float)(idColumn*idColumn)+(float)(idRow*idRow));
if (localSpatialConstantValue>1.0f)
localSpatialConstantValue=1.0f;
@@ -236,7 +236,7 @@ void BasicRetinaFilter::setProgressiveFilterConstants_CentredAccuracy(const floa
_progressiveGain[_halfNBcolumns-1+idColumn+_filterOutput.getNBcolumns()*(_halfNBrows-1-idRow)]=localGain;
_progressiveGain[_halfNBcolumns-1-idColumn+_filterOutput.getNBcolumns()*(_halfNBrows-1-idRow)]=localGain;
//std::cout<<commonFactor<<", "<<sqrt((_halfNBcolumns-1-idColumn)+(_halfNBrows-idRow-1))<<", "<<(_halfNBcolumns-1-idColumn)<<", "<<(_halfNBrows-idRow-1)<<", "<<localSpatialConstantValue<<std::endl;
//std::cout<<commonFactor<<", "<<std::sqrt((_halfNBcolumns-1-idColumn)+(_halfNBrows-idRow-1))<<", "<<(_halfNBcolumns-1-idColumn)<<", "<<(_halfNBrows-idRow-1)<<", "<<localSpatialConstantValue<<std::endl;
}
}
@@ -266,7 +266,7 @@ void BasicRetinaFilter::setProgressiveFilterConstants_CustomAccuracy(const float
}
unsigned int tableOffset=filterIndex*3;
float _temp = (1.0f+_beta)/(2.0f*_mu*_alpha);
float a=_filteringCoeficientsTable[tableOffset] = 1.0f + _temp - (float)sqrt( (1.0f+_temp)*(1.0f+_temp) - 1.0f);
float a=_filteringCoeficientsTable[tableOffset] = 1.0f + _temp - (float)std::sqrt( (1.0f+_temp)*(1.0f+_temp) - 1.0f);
_filteringCoeficientsTable[tableOffset+1]=(1.0f-a)*(1.0f-a)*(1.0f-a)*(1.0f-a)/(1.0f+_beta);
_filteringCoeficientsTable[tableOffset+2] =tau;
@@ -286,7 +286,7 @@ void BasicRetinaFilter::setProgressiveFilterConstants_CustomAccuracy(const float
float localGain=(1.0f-localSpatialConstantValue)*(1.0f-localSpatialConstantValue)*(1.0f-localSpatialConstantValue)*(1.0f-localSpatialConstantValue)/(1.0f+_beta);
_progressiveGain[index]=localGain;
//std::cout<<commonFactor<<", "<<sqrt((_halfNBcolumns-1-idColumn)+(_halfNBrows-idRow-1))<<", "<<(_halfNBcolumns-1-idColumn)<<", "<<(_halfNBrows-idRow-1)<<", "<<localSpatialConstantValue<<std::endl;
//std::cout<<commonFactor<<", "<<std::sqrt((_halfNBcolumns-1-idColumn)+(_halfNBrows-idRow-1))<<", "<<(_halfNBcolumns-1-idColumn)<<", "<<(_halfNBrows-idRow-1)<<", "<<localSpatialConstantValue<<std::endl;
}
}

View File

@@ -112,7 +112,6 @@
//#define __BASIC_RETINA_ELEMENT_DEBUG
//using namespace std;
namespace cv
{
class BasicRetinaFilter

View File

@@ -90,7 +90,7 @@ Mat BOWMSCTrainer::cluster(const Mat& _descriptors) const {
Mat icovar = Mat::eye(_descriptors.cols,_descriptors.cols,_descriptors.type());
vector<Mat> initialCentres;
std::vector<Mat> initialCentres;
initialCentres.push_back(_descriptors.row(0));
for (int i = 1; i < _descriptors.rows; i++) {
double minDist = DBL_MAX;

View File

@@ -46,7 +46,7 @@
#include "precomp.hpp"
#include "opencv2/opencv_modules.hpp"
#ifdef HAVE_OPENCV_HIGHGUI
# include "opencv2/highgui/highgui.hpp"
# include "opencv2/highgui.hpp"
#endif
#include <iostream>
#include <queue>
@@ -54,8 +54,6 @@
namespace cv
{
using std::queue;
typedef std::pair<int,int> coordinate_t;
typedef float orientation_t;
typedef std::vector<coordinate_t> template_coords_t;
@@ -144,7 +142,7 @@ private:
LocationScaleImageRange(const std::vector<Point>& locations, const std::vector<float>& _scales) :
locations_(locations), scales_(_scales)
{
assert(locations.size()==_scales.size());
CV_Assert(locations.size()==_scales.size());
}
ImageIterator* iterator() const
@@ -395,7 +393,7 @@ private:
LocationScaleImageIterator(const std::vector<Point>& locations, const std::vector<float>& _scales) :
locations_(locations), scales_(_scales)
{
assert(locations.size()==_scales.size());
CV_Assert(locations.size()==_scales.size());
reset();
}
@@ -622,10 +620,9 @@ void ChamferMatcher::Matching::followContour(Mat& templ_img, template_coords_t&
{
const int dir[][2] = { {-1,-1}, {-1,0}, {-1,1}, {0,1}, {1,1}, {1,0}, {1,-1}, {0,-1} };
coordinate_t next;
coordinate_t next_temp;
unsigned char ptr;
assert (direction==-1 || !coords.empty());
CV_Assert (direction==-1 || !coords.empty());
coordinate_t crt = coords.back();
@@ -767,8 +764,8 @@ void ChamferMatcher::Matching::findContourOrientations(const template_coords_t&
}
// get the middle two angles
nth_element(angles.begin(), angles.begin()+M-1, angles.end());
nth_element(angles.begin()+M-1, angles.begin()+M, angles.end());
std::nth_element(angles.begin(), angles.begin()+M-1, angles.end());
std::nth_element(angles.begin()+M-1, angles.begin()+M, angles.end());
// sort(angles.begin(), angles.end());
// average them to compute tangent
@@ -825,7 +822,7 @@ ChamferMatcher::Template::Template(Mat& edge_image, float scale_) : addr_width(-
}
vector<int>& ChamferMatcher::Template::getTemplateAddresses(int width)
std::vector<int>& ChamferMatcher::Template::getTemplateAddresses(int width)
{
if (addr_width!=width) {
addr.resize(coords.size());
@@ -906,19 +903,18 @@ void ChamferMatcher::Template::show() const
p2.x = x + pad*(int)(sin(orientations[i])*100)/100;
p2.y = y + pad*(int)(cos(orientations[i])*100)/100;
line(templ_color, p1,p2, CV_RGB(255,0,0));
line(templ_color, p1,p2, Scalar(255,0,0));
}
}
circle(templ_color,Point(center.x + pad, center.y + pad),1,CV_RGB(0,255,0));
circle(templ_color,Point(center.x + pad, center.y + pad),1,Scalar(0,255,0));
#ifdef HAVE_OPENCV_HIGHGUI
namedWindow("templ",1);
imshow("templ",templ_color);
cvWaitKey(0);
waitKey();
#else
CV_Error(CV_StsNotImplemented, "OpenCV has been compiled without GUI support");
CV_Error(Error::StsNotImplemented, "OpenCV has been compiled without GUI support");
#endif
templ_color.release();
@@ -931,15 +927,13 @@ void ChamferMatcher::Template::show() const
void ChamferMatcher::Matching::addTemplateFromImage(Mat& templ, float scale)
{
Template* cmt = new Template(templ, scale);
if(templates.size() > 0)
templates.clear();
templates.clear();
templates.push_back(cmt);
cmt->show();
}
void ChamferMatcher::Matching::addTemplate(Template& template_){
if(templates.size() > 0)
templates.clear();
templates.clear();
templates.push_back(&template_);
}
/**
@@ -1065,7 +1059,7 @@ void ChamferMatcher::Matching::fillNonContourOrientations(Mat& annotated_img, Ma
int cols = annotated_img.cols;
int rows = annotated_img.rows;
assert(orientation_img.cols==cols && orientation_img.rows==rows);
CV_Assert(orientation_img.cols==cols && orientation_img.rows==rows);
for (int y=0;y<rows;++y) {
for (int x=0;x<cols;++x) {
@@ -1285,7 +1279,7 @@ void ChamferMatcher::showMatch(Mat& img, int index)
std::cout << "Index too big.\n" << std::endl;
}
assert(img.channels()==3);
CV_Assert(img.channels()==3);
Match match = matches[index];
@@ -1304,7 +1298,7 @@ void ChamferMatcher::showMatch(Mat& img, int index)
void ChamferMatcher::showMatch(Mat& img, Match match)
{
assert(img.channels()==3);
CV_Assert(img.channels()==3);
const template_coords_t& templ_coords = match.tpl->coords;
for (size_t i=0;i<templ_coords.size();++i) {

View File

@@ -73,7 +73,7 @@ void ChowLiuTree::add(const Mat& imgDescriptor) {
}
void ChowLiuTree::add(const vector<Mat>& _imgDescriptors) {
void ChowLiuTree::add(const std::vector<Mat>& _imgDescriptors) {
for (size_t i = 0; i < _imgDescriptors.size(); i++) {
add(_imgDescriptors[i]);
}
@@ -164,10 +164,10 @@ cv::Mat ChowLiuTree::buildTree(int root_word, std::list<info> &edges) {
//independence from a parent node.
//find all children and do the same
vector<int> nextqs = extractChildren(edges, q);
std::vector<int> nextqs = extractChildren(edges, q);
int pq = q;
vector<int>::iterator nextq;
std::vector<int>::iterator nextq;
for(nextq = nextqs.begin(); nextq != nextqs.end(); nextq++) {
recAddToTree(cltree, *nextq, pq, edges);
}
@@ -186,16 +186,16 @@ void ChowLiuTree::recAddToTree(cv::Mat &cltree, int q, int pq,
cltree.at<double>(3, q) = CP(q, true, pq, false);
//find all children and do the same
vector<int> nextqs = extractChildren(remaining_edges, q);
std::vector<int> nextqs = extractChildren(remaining_edges, q);
pq = q;
vector<int>::iterator nextq;
std::vector<int>::iterator nextq;
for(nextq = nextqs.begin(); nextq != nextqs.end(); nextq++) {
recAddToTree(cltree, *nextq, pq, remaining_edges);
}
}
vector<int> ChowLiuTree::extractChildren(std::list<info> &remaining_edges, int q) {
std::vector<int> ChowLiuTree::extractChildren(std::list<info> &remaining_edges, int q) {
std::vector<int> children;
std::list<info>::iterator edge = remaining_edges.begin();
@@ -225,16 +225,16 @@ double ChowLiuTree::calcMutInfo(int word1, int word2) {
double accumulation = 0;
double P00 = JP(word1, false, word2, false);
if(P00) accumulation += P00 * log(P00 / (P(word1, false)*P(word2, false)));
if(P00) accumulation += P00 * std::log(P00 / (P(word1, false)*P(word2, false)));
double P01 = JP(word1, false, word2, true);
if(P01) accumulation += P01 * log(P01 / (P(word1, false)*P(word2, true)));
if(P01) accumulation += P01 * std::log(P01 / (P(word1, false)*P(word2, true)));
double P10 = JP(word1, true, word2, false);
if(P10) accumulation += P10 * log(P10 / (P(word1, true)*P(word2, false)));
if(P10) accumulation += P10 * std::log(P10 / (P(word1, true)*P(word2, false)));
double P11 = JP(word1, true, word2, true);
if(P11) accumulation += P11 * log(P11 / (P(word1, true)*P(word2, true)));
if(P11) accumulation += P11 * std::log(P11 / (P(word1, true)*P(word2, true)));
return accumulation;
}

View File

@@ -40,9 +40,9 @@ static Mat linspace(float x0, float x1, int n)
static void sortMatrixRowsByIndices(InputArray _src, InputArray _indices, OutputArray _dst)
{
if(_indices.getMat().type() != CV_32SC1)
CV_Error(CV_StsUnsupportedFormat, "cv::sortRowsByIndices only works on integer indices!");
CV_Error(Error::StsUnsupportedFormat, "cv::sortRowsByIndices only works on integer indices!");
Mat src = _src.getMat();
vector<int> indices = _indices.getMat();
std::vector<int> indices = _indices.getMat();
_dst.create(src.rows, src.cols, src.type());
Mat dst = _dst.getMat();
for(size_t idx = 0; idx < indices.size(); idx++) {
@@ -64,8 +64,8 @@ static Mat argsort(InputArray _src, bool ascending=true)
{
Mat src = _src.getMat();
if (src.rows != 1 && src.cols != 1)
CV_Error(CV_StsBadArg, "cv::argsort only sorts 1D matrices.");
int flags = CV_SORT_EVERY_ROW+(ascending ? CV_SORT_ASCENDING : CV_SORT_DESCENDING);
CV_Error(Error::StsBadArg, "cv::argsort only sorts 1D matrices.");
int flags = SORT_EVERY_ROW | (ascending ? SORT_ASCENDING : SORT_DESCENDING);
Mat sorted_indices;
sortIdx(src.reshape(1,1),sorted_indices,flags);
return sorted_indices;
@@ -76,7 +76,7 @@ Mat interp1_(const Mat& X_, const Mat& Y_, const Mat& XI)
{
int n = XI.rows;
// sort input table
vector<int> sort_indices = argsort(X_);
std::vector<int> sort_indices = argsort(X_);
Mat X = sortMatrixRowsByIndices(X_,sort_indices);
Mat Y = sortMatrixRowsByIndices(Y_,sort_indices);
@@ -116,8 +116,8 @@ static Mat interp1(InputArray _x, InputArray _Y, InputArray _xi)
Mat Y = _Y.getMat();
Mat xi = _xi.getMat();
// check types & alignment
assert((x.type() == Y.type()) && (Y.type() == xi.type()));
assert((x.cols == 1) && (x.rows == Y.rows) && (x.cols == Y.cols));
CV_Assert((x.type() == Y.type()) && (Y.type() == xi.type()));
CV_Assert((x.cols == 1) && (x.rows == Y.rows) && (x.cols == Y.cols));
// call templated interp1
switch(x.type()) {
case CV_8SC1: return interp1_<char>(x,Y,xi); break;
@@ -127,7 +127,7 @@ static Mat interp1(InputArray _x, InputArray _Y, InputArray _xi)
case CV_32SC1: return interp1_<int>(x,Y,xi); break;
case CV_32FC1: return interp1_<float>(x,Y,xi); break;
case CV_64FC1: return interp1_<double>(x,Y,xi); break;
default: CV_Error(CV_StsUnsupportedFormat, ""); break;
default: CV_Error(Error::StsUnsupportedFormat, ""); break;
}
return Mat();
}
@@ -473,7 +473,7 @@ namespace colormap
void ColorMap::operator()(InputArray _src, OutputArray _dst) const
{
if(_lut.total() != 256)
CV_Error(CV_StsAssert, "cv::LUT only supports tables of size 256.");
CV_Error(Error::StsAssert, "cv::LUT only supports tables of size 256.");
Mat src = _src.getMat();
// Return original matrix if wrong type is given (is fail loud better here?)
if(src.type() != CV_8UC1 && src.type() != CV_8UC3)
@@ -483,8 +483,8 @@ namespace colormap
}
// Turn into a BGR matrix into its grayscale representation.
if(src.type() == CV_8UC3)
cvtColor(src.clone(), src, CV_BGR2GRAY);
cvtColor(src.clone(), src, CV_GRAY2BGR);
cvtColor(src.clone(), src, COLOR_BGR2GRAY);
cvtColor(src.clone(), src, COLOR_GRAY2BGR);
// Apply the ColorMap.
LUT(src, _lut, _dst);
}
@@ -521,7 +521,7 @@ namespace colormap
colormap == COLORMAP_WINTER ? (colormap::ColorMap*)(new colormap::Winter) : 0;
if( !cm )
CV_Error( CV_StsBadArg, "Unknown colormap id; use one of COLORMAP_*");
CV_Error( Error::StsBadArg, "Unknown colormap id; use one of COLORMAP_*");
(*cm)(src, dst);

View File

@@ -43,7 +43,6 @@
#include "opencv2/contrib/hybridtracker.hpp"
using namespace cv;
using namespace std;
CvMeanShiftTracker::CvMeanShiftTracker(CvMeanShiftTrackerParams _params) : params(_params)
{
@@ -61,7 +60,7 @@ void CvMeanShiftTracker::newTrackingWindow(Mat image, Rect selection)
float srange[] = { 0, 1 };
const float* ranges[] = {hrange, srange};
cvtColor(image, hsv, CV_BGR2HSV);
cvtColor(image, hsv, COLOR_BGR2HSV);
inRange(hsv, Scalar(0, 30, MIN(10, 256)), Scalar(180, 256, MAX(10, 256)), mask);
hue.create(hsv.size(), CV_8UC2);
@@ -84,7 +83,7 @@ RotatedRect CvMeanShiftTracker::updateTrackingWindow(Mat image)
float srange[] = { 0, 1 };
const float* ranges[] = {hrange, srange};
cvtColor(image, hsv, CV_BGR2HSV);
cvtColor(image, hsv, COLOR_BGR2HSV);
inRange(hsv, Scalar(0, 30, MIN(10, 256)), Scalar(180, 256, MAX(10, 256)), mask);
hue.create(hsv.size(), CV_8UC2);
mixChannels(&hsv, 1, &hue, 1, channels, 2);

View File

@@ -1,5 +1,8 @@
#if defined(__linux__) || defined(LINUX) || defined(__APPLE__) || defined(ANDROID)
#include "opencv2/contrib/detection_based_tracker.hpp"
#include "opencv2/core/utility.hpp"
#include <pthread.h>
#if defined(DEBUG) || defined(_DEBUG)
#undef DEBUGLOGS
@@ -12,7 +15,6 @@
#ifdef ANDROID
#include <android/log.h>
#include <pthread.h>
#define LOG_TAG "OBJECT_DETECTOR"
#define LOGD0(...) ((void)__android_log_print(ANDROID_LOG_DEBUG, LOG_TAG, __VA_ARGS__))
#define LOGI0(...) ((void)__android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__))
@@ -42,7 +44,6 @@
using namespace cv;
using namespace std;
static inline cv::Point2f centerRect(const cv::Rect& r)
{
@@ -70,7 +71,7 @@ class cv::DetectionBasedTracker::SeparateDetectionWork
public:
SeparateDetectionWork(cv::DetectionBasedTracker& _detectionBasedTracker, cv::Ptr<DetectionBasedTracker::IDetector> _detector);
virtual ~SeparateDetectionWork();
bool communicateWithDetectingThread(const Mat& imageGray, vector<Rect>& rectsWhereRegions);
bool communicateWithDetectingThread(const Mat& imageGray, std::vector<Rect>& rectsWhereRegions);
bool run();
void stop();
void resetTracking();
@@ -226,7 +227,7 @@ void cv::DetectionBasedTracker::SeparateDetectionWork::workcycleObjectDetector()
{
static double freq = getTickFrequency();
LOGD("DetectionBasedTracker::SeparateDetectionWork::workcycleObjectDetector() --- start");
vector<Rect> objects;
std::vector<Rect> objects;
CV_Assert(stateThread==STATE_THREAD_WORKING_SLEEPING);
pthread_mutex_lock(&mutex);
@@ -384,7 +385,7 @@ void cv::DetectionBasedTracker::SeparateDetectionWork::resetTracking()
}
bool cv::DetectionBasedTracker::SeparateDetectionWork::communicateWithDetectingThread(const Mat& imageGray, vector<Rect>& rectsWhereRegions)
bool cv::DetectionBasedTracker::SeparateDetectionWork::communicateWithDetectingThread(const Mat& imageGray, std::vector<Rect>& rectsWhereRegions)
{
static double freq = getTickFrequency();
@@ -498,7 +499,7 @@ void DetectionBasedTracker::process(const Mat& imageGray)
Mat imageDetect=imageGray;
vector<Rect> rectsWhereRegions;
std::vector<Rect> rectsWhereRegions;
bool shouldHandleResult=false;
if (!separateDetectionWork.empty()) {
shouldHandleResult = separateDetectionWork->communicateWithDetectingThread(imageGray, rectsWhereRegions);
@@ -534,7 +535,7 @@ void DetectionBasedTracker::process(const Mat& imageGray)
}
LOGI("DetectionBasedTracker::process: tracked objects num==%d", (int)trackedObjects.size());
vector<Rect> detectedObjectsInRegions;
std::vector<Rect> detectedObjectsInRegions;
LOGD("DetectionBasedTracker::process: rectsWhereRegions.size()=%d", (int)rectsWhereRegions.size());
for(size_t i=0; i < rectsWhereRegions.size(); i++) {
@@ -609,7 +610,7 @@ void cv::DetectionBasedTracker::resetTracking()
trackedObjects.clear();
}
void cv::DetectionBasedTracker::updateTrackedObjects(const vector<Rect>& detectedObjects)
void cv::DetectionBasedTracker::updateTrackedObjects(const std::vector<Rect>& detectedObjects)
{
enum {
NEW_RECTANGLE=-1,
@@ -624,7 +625,7 @@ void cv::DetectionBasedTracker::updateTrackedObjects(const vector<Rect>& detecte
trackedObjects[i].numDetectedFrames++;
}
vector<int> correspondence(detectedObjects.size(), NEW_RECTANGLE);
std::vector<int> correspondence(detectedObjects.size(), NEW_RECTANGLE);
correspondence.clear();
correspondence.resize(detectedObjects.size(), NEW_RECTANGLE);
@@ -830,7 +831,7 @@ Rect cv::DetectionBasedTracker::calcTrackedObjectPositionToShow(int i, ObjectSta
return res;
}
void cv::DetectionBasedTracker::detectInRegion(const Mat& img, const Rect& r, vector<Rect>& detectedObjectsInRegions)
void cv::DetectionBasedTracker::detectInRegion(const Mat& img, const Rect& r, std::vector<Rect>& detectedObjectsInRegions)
{
Rect r0(Point(), img.size());
Rect r1 = scale_rect(r, innerParameters.coeffTrackingWindowSize);
@@ -843,7 +844,7 @@ void cv::DetectionBasedTracker::detectInRegion(const Mat& img, const Rect& r, ve
int d = cvRound(std::min(r.width, r.height) * innerParameters.coeffObjectSizeToTrack);
vector<Rect> tmpobjects;
std::vector<Rect> tmpobjects;
Mat img1(img, r1);//subimage for rectangle -- without data copying
LOGD("DetectionBasedTracker::detectInRegion: img1.size()=%d x %d, d=%d",

View File

@@ -21,11 +21,9 @@
namespace cv
{
using std::set;
// Reads a sequence from a FileNode::SEQ with type _Tp into a result vector.
template<typename _Tp>
inline void readFileNodeList(const FileNode& fn, vector<_Tp>& result) {
inline void readFileNodeList(const FileNode& fn, std::vector<_Tp>& result) {
if (fn.type() == FileNode::SEQ) {
for (FileNodeIterator it = fn.begin(); it != fn.end();) {
_Tp item;
@@ -37,10 +35,10 @@ inline void readFileNodeList(const FileNode& fn, vector<_Tp>& result) {
// Writes a list of given items to a cv::FileStorage.
template<typename _Tp>
inline void writeFileNodeList(FileStorage& fs, const string& name,
const vector<_Tp>& items) {
inline void writeFileNodeList(FileStorage& fs, const String& name,
const std::vector<_Tp>& items) {
// typedefs
typedef typename vector<_Tp>::const_iterator constVecIterator;
typedef typename std::vector<_Tp>::const_iterator constVecIterator;
// write the elements in items to fs
fs << name << "[";
for (constVecIterator it = items.begin(); it != items.end(); ++it) {
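A minimal round-trip sketch (not part of this patch) of the FileStorage sequence syntax these helpers wrap; the file name "labels.yml" and node name "labels" are made up for illustration:

.. code-block:: cpp

    #include <opencv2/core.hpp>
    #include <vector>

    int main() {
        std::vector<int> labels;
        labels.push_back(1);
        labels.push_back(2);

        // writeFileNodeList-style output: open a sequence, stream items
        cv::FileStorage out("labels.yml", cv::FileStorage::WRITE);
        out << "labels" << "[";
        for (size_t i = 0; i < labels.size(); i++)
            out << labels[i];
        out << "]";
        out.release();

        // readFileNodeList-style input: iterate the FileNode::SEQ
        cv::FileStorage in("labels.yml", cv::FileStorage::READ);
        cv::FileNode fn = in["labels"];
        std::vector<int> restored;
        for (cv::FileNodeIterator it = fn.begin(); it != fn.end();) {
            int item;
            it >> item;                 // reads and advances the iterator
            restored.push_back(item);
        }
        return restored.size() == labels.size() ? 0 : 1;
    }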
@@ -52,8 +50,8 @@ inline void writeFileNodeList(FileStorage& fs, const string& name,
static Mat asRowMatrix(InputArrayOfArrays src, int rtype, double alpha=1, double beta=0) {
// make sure the input data is a vector of matrices or a vector of vectors
if(src.kind() != _InputArray::STD_VECTOR_MAT && src.kind() != _InputArray::STD_VECTOR_VECTOR) {
string error_message = "The data is expected as InputArray::STD_VECTOR_MAT (a std::vector<Mat>) or _InputArray::STD_VECTOR_VECTOR (a std::vector< vector<...> >).";
CV_Error(CV_StsBadArg, error_message);
String error_message = "The data is expected as InputArray::STD_VECTOR_MAT (a std::vector<Mat>) or _InputArray::STD_VECTOR_VECTOR (a std::vector< std::vector<...> >).";
CV_Error(Error::StsBadArg, error_message);
}
// number of samples
size_t n = src.total();
@@ -68,8 +66,8 @@ static Mat asRowMatrix(InputArrayOfArrays src, int rtype, double alpha=1, double
for(unsigned int i = 0; i < n; i++) {
// make sure data can be reshaped, throw exception if not!
if(src.getMat(i).total() != d) {
string error_message = format("Wrong number of elements in matrix #%d! Expected %d was %d.", i, d, src.getMat(i).total());
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Wrong number of elements in matrix #%d! Expected %d was %d.", i, d, src.getMat(i).total());
CV_Error(Error::StsBadArg, error_message);
}
// get a hold of the current row
Mat xi = data.row(i);
@@ -86,13 +84,13 @@ static Mat asRowMatrix(InputArrayOfArrays src, int rtype, double alpha=1, double
// Removes duplicate elements in a given vector.
template<typename _Tp>
inline vector<_Tp> remove_dups(const vector<_Tp>& src) {
typedef typename set<_Tp>::const_iterator constSetIterator;
typedef typename vector<_Tp>::const_iterator constVecIterator;
set<_Tp> set_elems;
inline std::vector<_Tp> remove_dups(const std::vector<_Tp>& src) {
typedef typename std::set<_Tp>::const_iterator constSetIterator;
typedef typename std::vector<_Tp>::const_iterator constVecIterator;
std::set<_Tp> set_elems;
for (constVecIterator it = src.begin(); it != src.end(); ++it)
set_elems.insert(*it);
vector<_Tp> elems;
std::vector<_Tp> elems;
for (constSetIterator it = set_elems.begin(); it != set_elems.end(); ++it)
elems.push_back(*it);
return elems;
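One behavioural note when reading the callers below: because remove_dups round-trips through a std::set, the unique elements come back sorted rather than in first-seen order. A hypothetical call to the template above:

.. code-block:: cpp

    // Hypothetical usage: the duplicates in {3, 1, 3, 2} collapse and
    // the result comes back sorted.
    std::vector<int> labels;
    labels.push_back(3); labels.push_back(1);
    labels.push_back(3); labels.push_back(2);
    std::vector<int> unique_labels = remove_dups(labels);  // {1, 2, 3}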
@@ -106,7 +104,7 @@ class Eigenfaces : public FaceRecognizer
private:
int _num_components;
double _threshold;
vector<Mat> _projections;
std::vector<Mat> _projections;
Mat _labels;
Mat _eigenvectors;
Mat _eigenvalues;
@@ -162,7 +160,7 @@ private:
Mat _eigenvectors;
Mat _eigenvalues;
Mat _mean;
vector<Mat> _projections;
std::vector<Mat> _projections;
Mat _labels;
public:
@@ -220,7 +218,7 @@ private:
int _neighbors;
double _threshold;
vector<Mat> _histograms;
std::vector<Mat> _histograms;
Mat _labels;
// Computes an LBPH model with images in src and
@@ -300,23 +298,29 @@ public:
//------------------------------------------------------------------------------
// FaceRecognizer
//------------------------------------------------------------------------------
void FaceRecognizer::update(InputArrayOfArrays, InputArray) {
string error_msg = format("This FaceRecognizer (%s) does not support updating, you have to use FaceRecognizer::train to update it.", this->name().c_str());
CV_Error(CV_StsNotImplemented, error_msg);
void FaceRecognizer::update(InputArrayOfArrays src, InputArray labels ) {
if( dynamic_cast<LBPH*>(this) != 0 )
{
dynamic_cast<LBPH*>(this)->update( src, labels );
return;
}
String error_msg = format("This FaceRecognizer (%s) does not support updating, you have to use FaceRecognizer::train to update it.", this->name().c_str());
CV_Error(Error::StsNotImplemented, error_msg);
}
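With this dispatch, incremental training now works for LBPH models while the other recognizers keep throwing. A hedged usage sketch (function and variable names are made up):

.. code-block:: cpp

    #include <opencv2/contrib.hpp>
    #include <vector>

    // Append new samples to an already trained recognizer. Succeeds for
    // a model from createLBPHFaceRecognizer(); Eigenfaces/Fisherfaces
    // still raise Error::StsNotImplemented here.
    void addSamples(cv::Ptr<cv::FaceRecognizer> model,
                    const std::vector<cv::Mat>& images,
                    const std::vector<int>& labels)
    {
        model->update(images, labels);
    }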
void FaceRecognizer::save(const string& filename) const {
void FaceRecognizer::save(const String& filename) const {
FileStorage fs(filename, FileStorage::WRITE);
if (!fs.isOpened())
CV_Error(CV_StsError, "File can't be opened for writing!");
CV_Error(Error::StsError, "File can't be opened for writing!");
this->save(fs);
fs.release();
}
void FaceRecognizer::load(const string& filename) {
void FaceRecognizer::load(const String& filename) {
FileStorage fs(filename, FileStorage::READ);
if (!fs.isOpened())
CV_Error(CV_StsError, "File can't be opened for writing!");
CV_Error(Error::StsError, "File can't be opened for writing!");
this->load(fs);
fs.release();
}
@@ -326,18 +330,18 @@ void FaceRecognizer::load(const string& filename) {
//------------------------------------------------------------------------------
void Eigenfaces::train(InputArrayOfArrays _src, InputArray _local_labels) {
if(_src.total() == 0) {
string error_message = format("Empty training data was given. You'll need more than one sample to learn a model.");
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Empty training data was given. You'll need more than one sample to learn a model.");
CV_Error(Error::StsBadArg, error_message);
} else if(_local_labels.getMat().type() != CV_32SC1) {
string error_message = format("Labels must be given as integer (CV_32SC1). Expected %d, but was %d.", CV_32SC1, _local_labels.type());
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Labels must be given as integer (CV_32SC1). Expected %d, but was %d.", CV_32SC1, _local_labels.type());
CV_Error(Error::StsBadArg, error_message);
}
// make sure data has correct size
if(_src.total() > 1) {
for(int i = 1; i < static_cast<int>(_src.total()); i++) {
if(_src.getMat(i-1).total() != _src.getMat(i).total()) {
string error_message = format("In the Eigenfaces method all input samples (training images) must be of equal size! Expected %d pixels, but was %d pixels.", _src.getMat(i-1).total(), _src.getMat(i).total());
CV_Error(CV_StsUnsupportedFormat, error_message);
String error_message = format("In the Eigenfaces method all input samples (training images) must be of equal size! Expected %d pixels, but was %d pixels.", _src.getMat(i-1).total(), _src.getMat(i).total());
CV_Error(Error::StsUnsupportedFormat, error_message);
}
}
}
@@ -350,8 +354,8 @@ void Eigenfaces::train(InputArrayOfArrays _src, InputArray _local_labels) {
int n = data.rows;
// assert there are as much samples as labels
if(static_cast<int>(labels.total()) != n) {
string error_message = format("The number of samples (src) must equal the number of labels (labels)! len(src)=%d, len(labels)=%d.", n, labels.total());
CV_Error(CV_StsBadArg, error_message);
String error_message = format("The number of samples (src) must equal the number of labels (labels)! len(src)=%d, len(labels)=%d.", n, labels.total());
CV_Error(Error::StsBadArg, error_message);
}
// clear existing model data
_labels.release();
@@ -361,7 +365,7 @@ void Eigenfaces::train(InputArrayOfArrays _src, InputArray _local_labels) {
_num_components = n;
// perform the PCA
PCA pca(data, Mat(), CV_PCA_DATA_AS_ROW, _num_components);
PCA pca(data, Mat(), PCA::DATA_AS_ROW, _num_components);
// copy the PCA results
_mean = pca.mean.reshape(1,1); // store the mean vector
_eigenvalues = pca.eigenvalues.clone(); // eigenvalues by row
@@ -381,12 +385,12 @@ void Eigenfaces::predict(InputArray _src, int &minClass, double &minDist) const
// make sure the user is passing correct data
if(_projections.empty()) {
// throw error if no data (or simply return -1?)
string error_message = "This Eigenfaces model is not computed yet. Did you call Eigenfaces::train?";
CV_Error(CV_StsError, error_message);
String error_message = "This Eigenfaces model is not computed yet. Did you call Eigenfaces::train?";
CV_Error(Error::StsError, error_message);
} else if(_eigenvectors.rows != static_cast<int>(src.total())) {
// check data alignment just for clearer exception messages
string error_message = format("Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with %d elements, but got %d.", _eigenvectors.rows, src.total());
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with %d elements, but got %d.", _eigenvectors.rows, src.total());
CV_Error(Error::StsBadArg, error_message);
}
// project into PCA subspace
Mat q = subspaceProject(_eigenvectors, _mean, src.reshape(1,1));
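subspaceProject/subspaceReconstruct are the workhorses of predict(); a self-contained sketch with arbitrary sizes (20 samples, 100 features, 5 components kept), using the post-migration PCA::DATA_AS_ROW constant introduced above:

.. code-block:: cpp

    #include <opencv2/core.hpp>
    #include <opencv2/contrib.hpp>

    int main() {
        cv::Mat data(20, 100, CV_64FC1);   // 20 samples as rows
        cv::randu(data, 0.0, 1.0);
        cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW, 5);

        // PCA stores eigenvectors by row; the face code transposes them
        // so that W holds one eigenvector per column.
        cv::Mat W = pca.eigenvectors.t();                         // 100 x 5
        cv::Mat sample = data.row(0);                             // 1 x 100
        cv::Mat q = cv::subspaceProject(W, pca.mean, sample);     // 1 x 5
        cv::Mat r = cv::subspaceReconstruct(W, pca.mean, q);      // 1 x 100
        // r approximates sample using only the 5 leading components
        return 0;
    }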
@@ -435,18 +439,18 @@ void Eigenfaces::save(FileStorage& fs) const {
//------------------------------------------------------------------------------
void Fisherfaces::train(InputArrayOfArrays src, InputArray _lbls) {
if(src.total() == 0) {
string error_message = format("Empty training data was given. You'll need more than one sample to learn a model.");
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Empty training data was given. You'll need more than one sample to learn a model.");
CV_Error(Error::StsBadArg, error_message);
} else if(_lbls.getMat().type() != CV_32SC1) {
string error_message = format("Labels must be given as integer (CV_32SC1). Expected %d, but was %d.", CV_32SC1, _lbls.type());
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Labels must be given as integer (CV_32SC1). Expected %d, but was %d.", CV_32SC1, _lbls.type());
CV_Error(Error::StsBadArg, error_message);
}
// make sure data has correct size
if(src.total() > 1) {
for(int i = 1; i < static_cast<int>(src.total()); i++) {
if(src.getMat(i-1).total() != src.getMat(i).total()) {
string error_message = format("In the Fisherfaces method all input samples (training images) must be of equal size! Expected %d pixels, but was %d pixels.", src.getMat(i-1).total(), src.getMat(i).total());
CV_Error(CV_StsUnsupportedFormat, error_message);
String error_message = format("In the Fisherfaces method all input samples (training images) must be of equal size! Expected %d pixels, but was %d pixels.", src.getMat(i-1).total(), src.getMat(i).total());
CV_Error(Error::StsUnsupportedFormat, error_message);
}
}
}
@@ -457,17 +461,17 @@ void Fisherfaces::train(InputArrayOfArrays src, InputArray _lbls) {
int N = data.rows;
// make sure labels are passed in correct shape
if(labels.total() != (size_t) N) {
string error_message = format("The number of samples (src) must equal the number of labels (labels)! len(src)=%d, len(labels)=%d.", N, labels.total());
CV_Error(CV_StsBadArg, error_message);
String error_message = format("The number of samples (src) must equal the number of labels (labels)! len(src)=%d, len(labels)=%d.", N, labels.total());
CV_Error(Error::StsBadArg, error_message);
} else if(labels.rows != 1 && labels.cols != 1) {
string error_message = format("Expected the labels in a matrix with one row or column! Given dimensions are rows=%s, cols=%d.", labels.rows, labels.cols);
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Expected the labels in a matrix with one row or column! Given dimensions are rows=%s, cols=%d.", labels.rows, labels.cols);
CV_Error(Error::StsBadArg, error_message);
}
// clear existing model data
_labels.release();
_projections.clear();
// safely copy from cv::Mat to std::vector
vector<int> ll;
std::vector<int> ll;
for(unsigned int i = 0; i < labels.total(); i++) {
ll.push_back(labels.at<int>(i));
}
@@ -477,7 +481,7 @@ void Fisherfaces::train(InputArrayOfArrays src, InputArray _lbls) {
if((_num_components <= 0) || (_num_components > (C-1)))
_num_components = (C-1);
// perform a PCA and keep (N-C) components
PCA pca(data, Mat(), CV_PCA_DATA_AS_ROW, (N-C));
PCA pca(data, Mat(), PCA::DATA_AS_ROW, (N-C));
// project the data and perform a LDA on it
LDA lda(pca.project(data),labels, _num_components);
// store the total mean vector
@@ -501,11 +505,11 @@ void Fisherfaces::predict(InputArray _src, int &minClass, double &minDist) const
// check data alignment just for clearer exception messages
if(_projections.empty()) {
// throw error if no data (or simply return -1?)
string error_message = "This Fisherfaces model is not computed yet. Did you call Fisherfaces::train?";
CV_Error(CV_StsBadArg, error_message);
String error_message = "This Fisherfaces model is not computed yet. Did you call Fisherfaces::train?";
CV_Error(Error::StsBadArg, error_message);
} else if(src.total() != (size_t) _eigenvectors.rows) {
string error_message = format("Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with %d elements, but got %d.", _eigenvectors.rows, src.total());
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with %d elements, but got %d.", _eigenvectors.rows, src.total());
CV_Error(Error::StsBadArg, error_message);
}
// project into LDA subspace
Mat q = subspaceProject(_eigenvectors, _mean, src.reshape(1,1));
@@ -636,8 +640,8 @@ static void elbp(InputArray src, OutputArray dst, int radius, int neighbors)
case CV_32FC1: elbp_<float>(src,dst, radius, neighbors); break;
case CV_64FC1: elbp_<double>(src,dst, radius, neighbors); break;
default:
string error_msg = format("Using Original Local Binary Patterns for feature extraction only works on single-channel images (given %d). Please pass the image data as a grayscale image!", type);
CV_Error(CV_StsNotImplemented, error_msg);
String error_msg = format("Using Original Local Binary Patterns for feature extraction only works on single-channel images (given %d). Please pass the image data as a grayscale image!", type);
CV_Error(Error::StsNotImplemented, error_msg);
break;
}
}
@@ -683,7 +687,7 @@ static Mat histc(InputArray _src, int minVal, int maxVal, bool normed)
return histc_(src, minVal, maxVal, normed);
break;
default:
CV_Error(CV_StsUnmatchedFormats, "This type is not implemented yet."); break;
CV_Error(Error::StsUnmatchedFormats, "This type is not implemented yet."); break;
}
return Mat();
}
@@ -764,25 +768,25 @@ void LBPH::update(InputArrayOfArrays _in_src, InputArray _in_labels) {
void LBPH::train(InputArrayOfArrays _in_src, InputArray _in_labels, bool preserveData) {
if(_in_src.kind() != _InputArray::STD_VECTOR_MAT && _in_src.kind() != _InputArray::STD_VECTOR_VECTOR) {
string error_message = "The images are expected as InputArray::STD_VECTOR_MAT (a std::vector<Mat>) or _InputArray::STD_VECTOR_VECTOR (a std::vector< vector<...> >).";
CV_Error(CV_StsBadArg, error_message);
String error_message = "The images are expected as InputArray::STD_VECTOR_MAT (a std::vector<Mat>) or _InputArray::STD_VECTOR_VECTOR (a std::vector< std::vector<...> >).";
CV_Error(Error::StsBadArg, error_message);
}
if(_in_src.total() == 0) {
string error_message = format("Empty training data was given. You'll need more than one sample to learn a model.");
CV_Error(CV_StsUnsupportedFormat, error_message);
String error_message = format("Empty training data was given. You'll need more than one sample to learn a model.");
CV_Error(Error::StsUnsupportedFormat, error_message);
} else if(_in_labels.getMat().type() != CV_32SC1) {
string error_message = format("Labels must be given as integer (CV_32SC1). Expected %d, but was %d.", CV_32SC1, _in_labels.type());
CV_Error(CV_StsUnsupportedFormat, error_message);
String error_message = format("Labels must be given as integer (CV_32SC1). Expected %d, but was %d.", CV_32SC1, _in_labels.type());
CV_Error(Error::StsUnsupportedFormat, error_message);
}
// get the vector of matrices
vector<Mat> src;
std::vector<Mat> src;
_in_src.getMatVector(src);
// get the label matrix
Mat labels = _in_labels.getMat();
// check if data is well-aligned
if(labels.total() != src.size()) {
string error_message = format("The number of samples (src) must equal the number of labels (labels). Was len(samples)=%d, len(labels)=%d.", src.size(), _labels.total());
CV_Error(CV_StsBadArg, error_message);
String error_message = format("The number of samples (src) must equal the number of labels (labels). Was len(samples)=%d, len(labels)=%d.", src.size(), _labels.total());
CV_Error(Error::StsBadArg, error_message);
}
// if this model should be trained without preserving old data, delete old model data
if(!preserveData) {
@@ -812,8 +816,8 @@ void LBPH::train(InputArrayOfArrays _in_src, InputArray _in_labels, bool preserv
void LBPH::predict(InputArray _src, int &minClass, double &minDist) const {
if(_histograms.empty()) {
// throw error if no data (or simply return -1?)
string error_message = "This LBPH model is not computed yet. Did you call the train method?";
CV_Error(CV_StsBadArg, error_message);
String error_message = "This LBPH model is not computed yet. Did you call the train method?";
CV_Error(Error::StsBadArg, error_message);
}
Mat src = _src.getMat();
// get the spatial histogram from input image
@@ -828,7 +832,7 @@ void LBPH::predict(InputArray _src, int &minClass, double &minDist) const {
minDist = DBL_MAX;
minClass = -1;
for(size_t sampleIdx = 0; sampleIdx < _histograms.size(); sampleIdx++) {
double dist = compareHist(_histograms[sampleIdx], query, CV_COMP_CHISQR);
double dist = compareHist(_histograms[sampleIdx], query, HISTCMP_CHISQR);
if((dist < minDist) && (dist < _threshold)) {
minDist = dist;
minClass = _labels.at<int>((int) sampleIdx);
@@ -889,7 +893,7 @@ CV_INIT_ALGORITHM(LBPH, "FaceRecognizer.LBPH",
bool initModule_contrib()
{
Ptr<Algorithm> efaces = createEigenfaces(), ffaces = createFisherfaces(), lbph = createLBPH();
Ptr<Algorithm> efaces = createEigenfaces_hidden(), ffaces = createFisherfaces_hidden(), lbph = createLBPH_hidden();
return efaces->info() != 0 && ffaces->info() != 0 && lbph->info() != 0;
}
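CV_INIT_ALGORITHM registers each recognizer under its text name, so after initModule_contrib() the models can also be created through the Algorithm factory. A sketch using the "FaceRecognizer.LBPH" name registered above:

.. code-block:: cpp

    #include <opencv2/contrib.hpp>

    int main() {
        cv::initModule_contrib();   // force registration of the three models
        cv::Ptr<cv::FaceRecognizer> model =
            cv::Algorithm::create<cv::FaceRecognizer>("FaceRecognizer.LBPH");
        return model.empty() ? 1 : 0;
    }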

View File

@@ -42,7 +42,7 @@
#include "precomp.hpp"
#include <stdio.h>
#include <iostream>
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/contrib/hybridtracker.hpp"
using namespace cv;
@@ -80,7 +80,7 @@ CvFeatureTracker::~CvFeatureTracker()
void CvFeatureTracker::newTrackingWindow(Mat image, Rect selection)
{
image.copyTo(prev_image);
cvtColor(prev_image, prev_image_bw, CV_BGR2GRAY);
cvtColor(prev_image, prev_image_bw, COLOR_BGR2GRAY);
prev_trackwindow = selection;
prev_center.x = selection.x;
prev_center.y = selection.y;
@@ -98,8 +98,8 @@ Rect CvFeatureTracker::updateTrackingWindow(Mat image)
Rect CvFeatureTracker::updateTrackingWindowWithSIFT(Mat image)
{
ittr++;
vector<KeyPoint> prev_keypoints, curr_keypoints;
vector<Point2f> prev_keys, curr_keys;
std::vector<KeyPoint> prev_keypoints, curr_keypoints;
std::vector<Point2f> prev_keys, curr_keys;
Mat prev_desc, curr_desc;
Rect window = prev_trackwindow;
@@ -131,7 +131,7 @@ Rect CvFeatureTracker::updateTrackingWindowWithSIFT(Mat image)
curr_keys.push_back(curr_keypoints[matches[i].trainIdx].pt);
}
Mat T = findHomography(prev_keys, curr_keys, CV_LMEDS);
Mat T = findHomography(prev_keys, curr_keys, LMEDS);
prev_trackwindow.x += cvRound(T.at<double> (0, 2));
prev_trackwindow.y += cvRound(T.at<double> (1, 2));
@@ -148,12 +148,12 @@ Rect CvFeatureTracker::updateTrackingWindowWithFlow(Mat image)
ittr++;
Size subPixWinSize(10,10), winSize(31,31);
Mat image_bw;
TermCriteria termcrit(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03);
vector<uchar> status;
vector<float> err;
TermCriteria termcrit(TermCriteria::COUNT | TermCriteria::EPS, 20, 0.03);
std::vector<uchar> status;
std::vector<float> err;
cvtColor(image, image_bw, CV_BGR2GRAY);
cvtColor(prev_image, prev_image_bw, CV_BGR2GRAY);
cvtColor(image, image_bw, COLOR_BGR2GRAY);
cvtColor(prev_image, prev_image_bw, COLOR_BGR2GRAY);
if (ittr == 1)
{

View File

@@ -35,6 +35,7 @@
//M*/
#include "precomp.hpp"
#include "opencv2/contrib/compat.hpp"
CvFuzzyPoint::CvFuzzyPoint(double _x, double _y)
{
@@ -380,6 +381,7 @@ void CvFuzzyMeanShiftTracker::SearchWindow::initDepthValues(IplImage *maskImage,
{
if (*depthData)
{
d = *depthData;
m1 += d;
if (d < mind)
mind = d;

View File

@@ -39,7 +39,6 @@
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "opencv2/core/core.hpp"
#include "precomp.hpp"
#include <iostream>
@@ -86,7 +85,7 @@ static void downsamplePoints( const Mat& src, Mat& dst, size_t count )
candidatePointsMask.at<uchar>(0, maxLoc.x) = 0;
Mat minDists;
reduce( activedDists, minDists, 0, CV_REDUCE_MIN );
reduce( activedDists, minDists, 0, REDUCE_MIN );
minMaxLoc( minDists, 0, &maxVal, 0, &maxLoc, candidatePointsMask );
dst.at<Point3_<uchar> >((int)i) = src.at<Point3_<uchar> >(maxLoc.x);
}
@@ -119,7 +118,7 @@ void cv::generateColors( std::vector<Scalar>& colors, size_t count, size_t facto
// Convert the colors set to Lab space.
// Distances between colors in this space correspond to human perception.
Mat lab;
cvtColor( bgr, lab, CV_BGR2Lab);
cvtColor( bgr, lab, COLOR_BGR2Lab);
// Subsample colors from the generated set so as to
// maximize the minimum distances between them.
@@ -129,7 +128,7 @@ void cv::generateColors( std::vector<Scalar>& colors, size_t count, size_t facto
// Convert subsampled colors back to RGB
Mat bgr_subset;
cvtColor( lab_subset, bgr_subset, CV_Lab2BGR );
cvtColor( lab_subset, bgr_subset, COLOR_Lab2BGR );
CV_Assert( bgr_subset.total() == count );
for( size_t i = 0; i < count; i++ )

View File

@@ -43,7 +43,6 @@
#include "opencv2/contrib/hybridtracker.hpp"
using namespace cv;
using namespace std;
CvHybridTrackerParams::CvHybridTrackerParams(float _ft_tracker_weight, float _ms_tracker_weight,
CvFeatureTrackerParams _ft_params,
@@ -83,7 +82,7 @@ CvHybridTracker::~CvHybridTracker() {
inline float CvHybridTracker::getL2Norm(Point2f p1, Point2f p2) {
float distance = (p1.x - p2.x) * (p1.x - p2.x) + (p1.y - p2.y) * (p1.y
- p2.y);
return sqrt(distance);
return std::sqrt(distance);
}
Mat CvHybridTracker::getDistanceProjection(Mat image, Point2f center) {

View File

@@ -193,7 +193,7 @@ bool ImageLogPolProjection::_initLogRetinaSampling(const double reductionFactor,
//double rlim=1.0/reductionFactor*(minDimension/2.0+samplingStrenght);
// input frame dimensions INdependent log sampling:
_azero=(1.0+reductionFactor*sqrt(samplingStrenght))/(reductionFactor*reductionFactor*samplingStrenght-1.0);
_azero=(1.0+reductionFactor*std::sqrt(samplingStrenght))/(reductionFactor*reductionFactor*samplingStrenght-1.0);
_alim=(1.0+_azero)/reductionFactor;
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::initLogRetinaSampling: rlim= "<<rlim<<std::endl;
@@ -223,10 +223,10 @@ bool ImageLogPolProjection::_initLogRetinaSampling(const double reductionFactor,
// get the pixel position in the original picture
// -> input frame dimensions dependent log sampling:
//double scale = samplingStrenght/(rlim-(double)sqrt(idRow*idRow+idColumn*idColumn));
//double scale = samplingStrenght/(rlim-(double)std::sqrt(idRow*idRow+idColumn*idColumn));
// -> input frame dimensions INdependent log sampling:
double scale=getOriginalRadiusLength((double)sqrt((double)(idRow*idRow+idColumn*idColumn)));
double scale=getOriginalRadiusLength((double)std::sqrt((double)(idRow*idRow+idColumn*idColumn)));
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::initLogRetinaSampling: scale= "<<scale<<std::endl;
std::cout<<"ImageLogPolProjection::initLogRetinaSampling: scale2= "<<scale2<<std::endl;
@@ -243,7 +243,7 @@ bool ImageLogPolProjection::_initLogRetinaSampling(const double reductionFactor,
// manage border effects
double length=u*u+v*v;
double radiusRatio=sqrt(rMax/length);
double radiusRatio=std::sqrt(rMax/length);
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::(inputH, inputW)="<<halfInputRows<<", "<<halfInputColumns<<", Rmax2="<<rMax<<std::endl;
@@ -362,14 +362,14 @@ bool ImageLogPolProjection::_initLogPolarCortexSampling(const double reductionFa
//std::cout<<"ImageLogPolProjection::Starting cortex projection"<<std::endl;
// compute the transformation: get theta and radius with regard to the output sampled pixel
double diagonalLenght=sqrt((double)(_outputNBcolumns*_outputNBcolumns+_outputNBrows*_outputNBrows));
double diagonalLenght=std::sqrt((double)(_outputNBcolumns*_outputNBcolumns+_outputNBrows*_outputNBrows));
for (unsigned int radiusIndex=0;radiusIndex<_outputNBcolumns;++radiusIndex)
for(unsigned int orientationIndex=0;orientationIndex<_outputNBrows;++orientationIndex)
{
double x=1.0+sinh(radiusAxis[radiusIndex])*cos(orientationAxis[orientationIndex]);
double y=sinh(radiusAxis[radiusIndex])*sin(orientationAxis[orientationIndex]);
// get the input picture coordinate
double R=diagonalLenght*sqrt(x*x+y*y)/(5.0+sqrt(x*x+y*y));
double R=diagonalLenght*std::sqrt(x*x+y*y)/(5.0+std::sqrt(x*x+y*y));
double theta=atan2(y,x);
// convert input polar coord into cartesian/C compatble coordinate
unsigned int columnIndex=(unsigned int)(cos(theta)*R)+halfInputColumns;

View File

@@ -1,5 +1,5 @@
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/contrib.hpp"
#ifdef WIN32
#include <windows.h>
@@ -10,11 +10,11 @@
namespace cv
{
std::vector<std::string> Directory::GetListFiles( const std::string& path, const std::string & exten, bool addPath )
std::vector<String> Directory::GetListFiles( const String& path, const String & exten, bool addPath )
{
std::vector<std::string> list;
std::vector<String> list;
list.clear();
std::string path_f = path + "/" + exten;
String path_f = path + "/" + exten;
#ifdef WIN32
WIN32_FIND_DATA FindFileData;
HANDLE hFind;
@@ -57,10 +57,10 @@ namespace cv
if (dirp->d_type == DT_REG)
{
if (exten.compare("*") == 0)
list.push_back(static_cast<std::string>(dirp->d_name));
list.push_back(static_cast<String>(dirp->d_name));
else
if (std::string(dirp->d_name).find(exten) != std::string::npos)
list.push_back(static_cast<std::string>(dirp->d_name));
if (String(dirp->d_name).find(exten) != String::npos)
list.push_back(static_cast<String>(dirp->d_name));
}
}
closedir(dp);
@@ -69,10 +69,10 @@ namespace cv
return list;
}
std::vector<std::string> Directory::GetListFolders( const std::string& path, const std::string & exten, bool addPath )
std::vector<String> Directory::GetListFolders( const String& path, const String & exten, bool addPath )
{
std::vector<std::string> list;
std::string path_f = path + "/" + exten;
std::vector<String> list;
String path_f = path + "/" + exten;
list.clear();
#ifdef WIN32
WIN32_FIND_DATA FindFileData;
@@ -117,10 +117,10 @@ namespace cv
strcmp(dirp->d_name, "..") != 0 )
{
if (exten.compare("*") == 0)
list.push_back(static_cast<std::string>(dirp->d_name));
list.push_back(static_cast<String>(dirp->d_name));
else
if (std::string(dirp->d_name).find(exten) != std::string::npos)
list.push_back(static_cast<std::string>(dirp->d_name));
if (String(dirp->d_name).find(exten) != String::npos)
list.push_back(static_cast<String>(dirp->d_name));
}
}
closedir(dp);
@@ -129,16 +129,16 @@ namespace cv
return list;
}
std::vector<std::string> Directory::GetListFilesR ( const std::string& path, const std::string & exten, bool addPath )
std::vector<String> Directory::GetListFilesR ( const String& path, const String & exten, bool addPath )
{
std::vector<std::string> list = Directory::GetListFiles(path, exten, addPath);
std::vector<String> list = Directory::GetListFiles(path, exten, addPath);
std::vector<std::string> dirs = Directory::GetListFolders(path, exten, addPath);
std::vector<String> dirs = Directory::GetListFolders(path, exten, addPath);
std::vector<std::string>::const_iterator it;
std::vector<String>::const_iterator it;
for (it = dirs.begin(); it != dirs.end(); ++it)
{
std::vector<std::string> cl = Directory::GetListFiles(*it, exten, addPath);
std::vector<String> cl = Directory::GetListFiles(*it, exten, addPath);
list.insert(list.end(), cl.begin(), cl.end());
}
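A usage sketch for the migrated directory helpers (the path is made up). Note from the implementation above that exten is matched as a plain substring on POSIX, so pass ".jpg" rather than a glob pattern:

.. code-block:: cpp

    #include <opencv2/contrib.hpp>
    #include <cstdio>

    int main() {
        std::vector<cv::String> files =
            cv::Directory::GetListFilesR("/data/faces", ".jpg", true);
        for (size_t i = 0; i < files.size(); i++)
            std::printf("%s\n", files[i].c_str());
        return 0;
    }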

View File

@@ -24,20 +24,15 @@
namespace cv
{
using std::map;
using std::set;
using std::cout;
using std::endl;
// Removes duplicate elements in a given vector.
template<typename _Tp>
inline vector<_Tp> remove_dups(const vector<_Tp>& src) {
typedef typename set<_Tp>::const_iterator constSetIterator;
typedef typename vector<_Tp>::const_iterator constVecIterator;
set<_Tp> set_elems;
inline std::vector<_Tp> remove_dups(const std::vector<_Tp>& src) {
typedef typename std::set<_Tp>::const_iterator constSetIterator;
typedef typename std::vector<_Tp>::const_iterator constVecIterator;
std::set<_Tp> set_elems;
for (constVecIterator it = src.begin(); it != src.end(); ++it)
set_elems.insert(*it);
vector<_Tp> elems;
std::vector<_Tp> elems;
for (constSetIterator it = set_elems.begin(); it != set_elems.end(); ++it)
elems.push_back(*it);
return elems;
@@ -47,10 +42,10 @@ static Mat argsort(InputArray _src, bool ascending=true)
{
Mat src = _src.getMat();
if (src.rows != 1 && src.cols != 1) {
string error_message = "Wrong shape of input matrix! Expected a matrix with one row or column.";
CV_Error(CV_StsBadArg, error_message);
String error_message = "Wrong shape of input matrix! Expected a matrix with one row or column.";
CV_Error(Error::StsBadArg, error_message);
}
int flags = CV_SORT_EVERY_ROW+(ascending ? CV_SORT_ASCENDING : CV_SORT_DESCENDING);
int flags = SORT_EVERY_ROW | (ascending ? SORT_ASCENDING : SORT_DESCENDING);
Mat sorted_indices;
sortIdx(src.reshape(1,1),sorted_indices,flags);
return sorted_indices;
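The new-style flag combination behaves exactly like the old CV_SORT_* pair; a minimal sketch on a made-up row vector:

.. code-block:: cpp

    #include <opencv2/core.hpp>

    int main() {
        float vals[4] = {0.3f, 0.1f, 0.4f, 0.2f};
        cv::Mat src(1, 4, CV_32FC1, vals);
        cv::Mat idx;
        cv::sortIdx(src, idx, cv::SORT_EVERY_ROW | cv::SORT_ASCENDING);
        // idx is now {1, 3, 0, 2}: the positions of the ascending values
        return 0;
    }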
@@ -59,8 +54,8 @@ static Mat argsort(InputArray _src, bool ascending=true)
static Mat asRowMatrix(InputArrayOfArrays src, int rtype, double alpha=1, double beta=0) {
// make sure the input data is a vector of matrices or a vector of vectors
if(src.kind() != _InputArray::STD_VECTOR_MAT && src.kind() != _InputArray::STD_VECTOR_VECTOR) {
string error_message = "The data is expected as InputArray::STD_VECTOR_MAT (a std::vector<Mat>) or _InputArray::STD_VECTOR_VECTOR (a std::vector< vector<...> >).";
CV_Error(CV_StsBadArg, error_message);
String error_message = "The data is expected as InputArray::STD_VECTOR_MAT (a std::vector<Mat>) or _InputArray::STD_VECTOR_VECTOR (a std::vector< std::vector<...> >).";
CV_Error(Error::StsBadArg, error_message);
}
// number of samples
size_t n = src.total();
@@ -75,8 +70,8 @@ static Mat asRowMatrix(InputArrayOfArrays src, int rtype, double alpha=1, double
for(int i = 0; i < (int)n; i++) {
// make sure data can be reshaped, throw exception if not!
if(src.getMat(i).total() != d) {
string error_message = format("Wrong number of elements in matrix #%d! Expected %d was %d.", i, (int)d, (int)src.getMat(i).total());
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Wrong number of elements in matrix #%d! Expected %d was %d.", i, (int)d, (int)src.getMat(i).total());
CV_Error(Error::StsBadArg, error_message);
}
// get a hold of the current row
Mat xi = data.row(i);
@@ -92,10 +87,10 @@ static Mat asRowMatrix(InputArrayOfArrays src, int rtype, double alpha=1, double
static void sortMatrixColumnsByIndices(InputArray _src, InputArray _indices, OutputArray _dst) {
if(_indices.getMat().type() != CV_32SC1) {
CV_Error(CV_StsUnsupportedFormat, "cv::sortColumnsByIndices only works on integer indices!");
CV_Error(Error::StsUnsupportedFormat, "cv::sortColumnsByIndices only works on integer indices!");
}
Mat src = _src.getMat();
vector<int> indices = _indices.getMat();
std::vector<int> indices = _indices.getMat();
_dst.create(src.rows, src.cols, src.type());
Mat dst = _dst.getMat();
for(size_t idx = 0; idx < indices.size(); idx++) {
@@ -183,13 +178,13 @@ Mat subspaceProject(InputArray _W, InputArray _mean, InputArray _src) {
int d = src.cols;
// make sure the data has the correct shape
if(W.rows != d) {
string error_message = format("Wrong shapes for given matrices. Was size(src) = (%d,%d), size(W) = (%d,%d).", src.rows, src.cols, W.rows, W.cols);
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Wrong shapes for given matrices. Was size(src) = (%d,%d), size(W) = (%d,%d).", src.rows, src.cols, W.rows, W.cols);
CV_Error(Error::StsBadArg, error_message);
}
// make sure mean is correct if not empty
if(!mean.empty() && (mean.total() != (size_t) d)) {
string error_message = format("Wrong mean shape for the given data matrix. Expected %d, but was %d.", d, mean.total());
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Wrong mean shape for the given data matrix. Expected %d, but was %d.", d, mean.total());
CV_Error(Error::StsBadArg, error_message);
}
// create temporary matrices
Mat X, Y;
@@ -221,13 +216,13 @@ Mat subspaceReconstruct(InputArray _W, InputArray _mean, InputArray _src)
int d = src.cols;
// make sure the data has the correct shape
if(W.cols != d) {
string error_message = format("Wrong shapes for given matrices. Was size(src) = (%d,%d), size(W) = (%d,%d).", src.rows, src.cols, W.rows, W.cols);
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Wrong shapes for given matrices. Was size(src) = (%d,%d), size(W) = (%d,%d).", src.rows, src.cols, W.rows, W.cols);
CV_Error(Error::StsBadArg, error_message);
}
// make sure mean is correct if not empty
if(!mean.empty() && (mean.total() != (size_t) W.rows)) {
string error_message = format("Wrong mean shape for the given eigenvector matrix. Expected %d, but was %d.", W.cols, mean.total());
CV_Error(CV_StsBadArg, error_message);
String error_message = format("Wrong mean shape for the given eigenvector matrix. Expected %d, but was %d.", W.cols, mean.total());
CV_Error(Error::StsBadArg, error_message);
}
// initialize temporary matrices
Mat X, Y;
@@ -330,7 +325,7 @@ private:
int n1 = nn - 1;
int low = 0;
int high = nn - 1;
double eps = pow(2.0, -52.0);
double eps = std::pow(2.0, -52.0);
double exshift = 0.0;
double p = 0, q = 0, r = 0, s = 0, z = 0, t, w, x, y;
@@ -342,7 +337,7 @@ private:
d[i] = H[i][i];
e[i] = 0.0;
}
for (int j = max(i - 1, 0); j < nn; j++) {
for (int j = std::max(i - 1, 0); j < nn; j++) {
norm = norm + std::abs(H[i][j]);
}
}
@@ -380,7 +375,7 @@ private:
w = H[n1][n1 - 1] * H[n1 - 1][n1];
p = (H[n1 - 1][n1 - 1] - H[n1][n1]) / 2.0;
q = p * p + w;
z = sqrt(std::abs(q));
z = std::sqrt(std::abs(q));
H[n1][n1] = H[n1][n1] + exshift;
H[n1 - 1][n1 - 1] = H[n1 - 1][n1 - 1] + exshift;
x = H[n1][n1];
@@ -404,7 +399,7 @@ private:
s = std::abs(x) + std::abs(z);
p = x / s;
q = z / s;
r = sqrt(p * p + q * q);
r = std::sqrt(p * p + q * q);
p = p / r;
q = q / r;
@@ -475,7 +470,7 @@ private:
s = (y - x) / 2.0;
s = s * s + w;
if (s > 0) {
s = sqrt(s);
s = std::sqrt(s);
if (y < x) {
s = -s;
}
@@ -539,7 +534,7 @@ private:
if (x == 0.0) {
break;
}
s = sqrt(p * p + q * q + r * r);
s = std::sqrt(p * p + q * q + r * r);
if (p < 0) {
s = -s;
}
@@ -570,7 +565,7 @@ private:
// Column modification
for (int i = 0; i <= min(n1, k + 3); i++) {
for (int i = 0; i <= std::min(n1, k + 3); i++) {
p = x * H[i][k] + y * H[i][k + 1];
if (notlast) {
p = p + z * H[i][k + 2];
@@ -721,7 +716,7 @@ private:
// Overflow control
t = max(std::abs(H[i][n1 - 1]), std::abs(H[i][n1]));
t = std::max(std::abs(H[i][n1 - 1]), std::abs(H[i][n1]));
if ((eps * t) * t > 1) {
for (int j = i; j <= n1; j++) {
H[j][n1 - 1] = H[j][n1 - 1] / t;
@@ -748,7 +743,7 @@ private:
for (int j = nn - 1; j >= low; j--) {
for (int i = low; i <= high; i++) {
z = 0.0;
for (int k = low; k <= min(j, high); k++) {
for (int k = low; k <= std::min(j, high); k++) {
z = z + V[i][k] * H[k][j];
}
V[i][j] = z;
@@ -782,7 +777,7 @@ private:
ort[i] = H[i][m - 1] / scale;
h += ort[i] * ort[i];
}
double g = sqrt(h);
double g = std::sqrt(h);
if (ort[m] > 0) {
g = -g;
}
@@ -941,20 +936,20 @@ public:
//------------------------------------------------------------------------------
// Linear Discriminant Analysis implementation
//------------------------------------------------------------------------------
void LDA::save(const string& filename) const {
void LDA::save(const String& filename) const {
FileStorage fs(filename, FileStorage::WRITE);
if (!fs.isOpened()) {
CV_Error(CV_StsError, "File can't be opened for writing!");
CV_Error(Error::StsError, "File can't be opened for writing!");
}
this->save(fs);
fs.release();
}
// Deserializes this object from a given filename.
void LDA::load(const string& filename) {
void LDA::load(const String& filename) {
FileStorage fs(filename, FileStorage::READ);
if (!fs.isOpened())
CV_Error(CV_StsError, "File can't be opened for writing!");
CV_Error(Error::StsError, "File can't be opened for writing!");
this->load(fs);
fs.release();
}
@@ -978,7 +973,7 @@ void LDA::load(const FileStorage& fs) {
void LDA::lda(InputArrayOfArrays _src, InputArray _lbls) {
// get data
Mat src = _src.getMat();
vector<int> labels;
std::vector<int> labels;
// safely copy the labels
{
Mat tmp = _lbls.getMat();
@@ -991,9 +986,9 @@ void LDA::lda(InputArrayOfArrays _src, InputArray _lbls) {
// ensure working matrix is double precision
src.convertTo(data, CV_64FC1);
// maps the labels, so they're ascending: [0,1,...,C]
vector<int> mapped_labels(labels.size());
vector<int> num2label = remove_dups(labels);
map<int, int> label2num;
std::vector<int> mapped_labels(labels.size());
std::vector<int> num2label = remove_dups(labels);
std::map<int, int> label2num;
for (int i = 0; i < (int)num2label.size(); i++)
label2num[num2label[i]] = i;
for (size_t i = 0; i < labels.size(); i++)
@@ -1006,19 +1001,19 @@ void LDA::lda(InputArrayOfArrays _src, InputArray _lbls) {
// we can't do an LDA on one class; what would you
// want to separate from each other then?
if(C == 1) {
string error_message = "At least two classes are needed to perform a LDA. Reason: Only one class was given!";
CV_Error(CV_StsBadArg, error_message);
String error_message = "At least two classes are needed to perform a LDA. Reason: Only one class was given!";
CV_Error(Error::StsBadArg, error_message);
}
// throw an error if there are fewer labels than samples
if (labels.size() != static_cast<size_t>(N)) {
string error_message = format("The number of samples must equal the number of labels. Given %d labels, %d samples. ", labels.size(), N);
CV_Error(CV_StsBadArg, error_message);
String error_message = format("The number of samples must equal the number of labels. Given %d labels, %d samples. ", labels.size(), N);
CV_Error(Error::StsBadArg, error_message);
}
// warn if the within-class scatter matrix becomes singular
if (N < D) {
cout << "Warning: Less observations than feature dimension given!"
<< "Computation will probably fail."
<< endl;
std::cout << "Warning: Less observations than feature dimension given!"
<< "Computation will probably fail."
<< std::endl;
}
// clip number of components to be a valid number
if ((_num_components <= 0) || (_num_components > (C - 1))) {
@@ -1027,8 +1022,8 @@ void LDA::lda(InputArrayOfArrays _src, InputArray _lbls) {
// holds the mean over all classes
Mat meanTotal = Mat::zeros(1, D, data.type());
// holds the mean for each class
vector<Mat> meanClass(C);
vector<int> numClass(C);
std::vector<Mat> meanClass(C);
std::vector<int> numClass(C);
// initialize
for (int i = 0; i < C; i++) {
numClass[i] = 0;
@@ -1076,7 +1071,7 @@ void LDA::lda(InputArrayOfArrays _src, InputArray _lbls) {
// reshape eigenvalues, so they are stored by column
_eigenvalues = _eigenvalues.reshape(1, 1);
// get sorted indices descending by their eigenvalue
vector<int> sorted_indices = argsort(_eigenvalues, false);
std::vector<int> sorted_indices = argsort(_eigenvalues, false);
// now sort eigenvalues and eigenvectors accordingly
_eigenvalues = sortMatrixColumnsByIndices(_eigenvalues, sorted_indices);
_eigenvectors = sortMatrixColumnsByIndices(_eigenvectors, sorted_indices);
@@ -1094,8 +1089,8 @@ void LDA::compute(InputArrayOfArrays _src, InputArray _lbls) {
lda(_src.getMat(), _lbls);
break;
default:
string error_message= format("InputArray Datatype %d is not supported.", _src.kind());
CV_Error(CV_StsBadArg, error_message);
String error_message= format("InputArray Datatype %d is not supported.", _src.kind());
CV_Error(Error::StsBadArg, error_message);
break;
}
}

View File

@@ -75,13 +75,13 @@ LogPolar_Interp::LogPolar_Interp(int w, int h, Point2i center, int _R, double _r
int rtmp;
if (center.x<=w/2 && center.y>=h/2)
rtmp=(int)sqrt((float)center.y*center.y + (float)(w-center.x)*(w-center.x));
rtmp=(int)std::sqrt((float)center.y*center.y + (float)(w-center.x)*(w-center.x));
else if (center.x>=w/2 && center.y>=h/2)
rtmp=(int)sqrt((float)center.y*center.y + (float)center.x*center.x);
rtmp=(int)std::sqrt((float)center.y*center.y + (float)center.x*center.x);
else if (center.x>=w/2 && center.y<=h/2)
rtmp=(int)sqrt((float)(h-center.y)*(h-center.y) + (float)center.x*center.x);
rtmp=(int)std::sqrt((float)(h-center.y)*(h-center.y) + (float)center.x*center.x);
else //if (center.x<=w/2 && center.y<=h/2)
rtmp=(int)sqrt((float)(h-center.y)*(h-center.y) + (float)(w-center.x)*(w-center.x));
rtmp=(int)std::sqrt((float)(h-center.y)*(h-center.y) + (float)(w-center.x)*(w-center.x));
M=2*rtmp; N=2*rtmp;
@@ -97,8 +97,8 @@ LogPolar_Interp::LogPolar_Interp(int w, int h, Point2i center, int _R, double _r
if (sp){
int jc=M/2-1, ic=N/2-1;
int _romax=min(ic, jc);
double _a=exp(log((double)(_romax/2-1)/(double)ro0)/(double)R);
int _romax=std::min(ic, jc);
double _a=std::exp(std::log((double)(_romax/2-1)/(double)ro0)/(double)R);
S=(int) floor(2*CV_PI/(_a-1)+0.5);
}
@@ -116,8 +116,8 @@ void LogPolar_Interp::create_map(int _M, int _n, int _R, int _s, double _ro0)
ro0=_ro0;
int jc=N/2-1, ic=M/2-1;
romax=min(ic, jc);
a=exp(log((double)romax/(double)ro0)/(double)R);
romax=std::min(ic, jc);
a=std::exp(std::log((double)romax/(double)ro0)/(double)R);
q=((double)S)/(2*CV_PI);
Rsri = Mat::zeros(S,R,CV_32FC1);
@@ -129,8 +129,8 @@ void LogPolar_Interp::create_map(int _M, int _n, int _R, int _s, double _ro0)
{
for(int u=0; u<R; u++)
{
Rsri.at<float>(v,u)=(float)(ro0*pow(a,u)*sin(v/q)+jc);
Csri.at<float>(v,u)=(float)(ro0*pow(a,u)*cos(v/q)+ic);
Rsri.at<float>(v,u)=(float)(ro0*std::pow(a,u)*sin(v/q)+jc);
Csri.at<float>(v,u)=(float)(ro0*std::pow(a,u)*cos(v/q)+ic);
}
}
@@ -150,7 +150,7 @@ void LogPolar_Interp::create_map(int _M, int _n, int _R, int _s, double _ro0)
ETAyx.at<float>(j,i)=(float)(q*theta);
double ro2=(j-jc)*(j-jc)+(i-ic)*(i-ic);
CSIyx.at<float>(j,i)=(float)(0.5*log(ro2/(ro0*ro0))/log(a));
CSIyx.at<float>(j,i)=(float)(0.5*std::log(ro2/(ro0*ro0))/std::log(a));
}
}
}
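The base a above is chosen so that R rings span the interval [ro0, romax] exactly, i.e. ro0*a^R = romax and ring u sits at radius ro0*a^u. A quick standalone check with made-up numbers:

.. code-block:: cpp

    #include <cmath>
    #include <cassert>

    int main() {
        double ro0 = 3.0, romax = 128.0;   // made-up radii
        int R = 70;                        // number of rings
        double a = std::exp(std::log(romax / ro0) / (double)R);
        // ring u lies at radius ro0 * a^u; the last ring reaches romax
        assert(std::fabs(ro0 * std::pow(a, (double)R) - romax) < 1e-9);
        return 0;
    }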
@@ -221,13 +221,13 @@ LogPolar_Overlapping::LogPolar_Overlapping(int w, int h, Point2i center, int _R,
int rtmp;
if (center.x<=w/2 && center.y>=h/2)
rtmp=(int)sqrt((float)center.y*center.y + (float)(w-center.x)*(w-center.x));
rtmp=(int)std::sqrt((float)center.y*center.y + (float)(w-center.x)*(w-center.x));
else if (center.x>=w/2 && center.y>=h/2)
rtmp=(int)sqrt((float)center.y*center.y + (float)center.x*center.x);
rtmp=(int)std::sqrt((float)center.y*center.y + (float)center.x*center.x);
else if (center.x>=w/2 && center.y<=h/2)
rtmp=(int)sqrt((float)(h-center.y)*(h-center.y) + (float)center.x*center.x);
rtmp=(int)std::sqrt((float)(h-center.y)*(h-center.y) + (float)center.x*center.x);
else //if (center.x<=w/2 && center.y<=h/2)
rtmp=(int)sqrt((float)(h-center.y)*(h-center.y) + (float)(w-center.x)*(w-center.x));
rtmp=(int)std::sqrt((float)(h-center.y)*(h-center.y) + (float)(w-center.x)*(w-center.x));
M=2*rtmp; N=2*rtmp;
@@ -244,8 +244,8 @@ LogPolar_Overlapping::LogPolar_Overlapping(int w, int h, Point2i center, int _R,
if (sp){
int jc=M/2-1, ic=N/2-1;
int _romax=min(ic, jc);
double _a=exp(log((double)(_romax/2-1)/(double)ro0)/(double)R);
int _romax=std::min(ic, jc);
double _a=std::exp(std::log((double)(_romax/2-1)/(double)ro0)/(double)R);
S=(int) floor(2*CV_PI/(_a-1)+0.5);
}
@@ -261,8 +261,8 @@ void LogPolar_Overlapping::create_map(int _M, int _n, int _R, int _s, double _ro
ro0=_ro0;
int jc=N/2-1, ic=M/2-1;
romax=min(ic, jc);
a=exp(log((double)romax/(double)ro0)/(double)R);
romax=std::min(ic, jc);
a=std::exp(std::log((double)romax/(double)ro0)/(double)R);
q=((double)S)/(2*CV_PI);
ind1=0;
@@ -279,8 +279,8 @@ void LogPolar_Overlapping::create_map(int _M, int _n, int _R, int _s, double _ro
{
for(int u=0; u<R; u++)
{
Rsri.at<float>(v,u)=(float)(ro0*pow(a,u)*sin(v/q)+jc);
Csri.at<float>(v,u)=(float)(ro0*pow(a,u)*cos(v/q)+ic);
Rsri.at<float>(v,u)=(float)(ro0*std::pow(a,u)*sin(v/q)+jc);
Csri.at<float>(v,u)=(float)(ro0*std::pow(a,u)*cos(v/q)+ic);
Rsr[v*R+u]=(int)floor(Rsri.at<float>(v,u));
Csr[v*R+u]=(int)floor(Csri.at<float>(v,u));
}
@@ -290,7 +290,7 @@ void LogPolar_Overlapping::create_map(int _M, int _n, int _R, int _s, double _ro
for(int i=0; i<R; i++)
{
Wsr[i]=ro0*(a-1)*pow(a,i-1);
Wsr[i]=ro0*(a-1)*std::pow(a,i-1);
if((Wsr[i]>1)&&(done==false))
{
ind1=i;
@@ -314,7 +314,7 @@ void LogPolar_Overlapping::create_map(int _M, int _n, int _R, int _s, double _ro
ETAyx.at<float>(j,i)=(float)(q*theta);
double ro2=(j-jc)*(j-jc)+(i-ic)*(i-ic);
CSIyx.at<float>(j,i)=(float)(0.5*log(ro2/(ro0*ro0))/log(a));
CSIyx.at<float>(j,i)=(float)(0.5*std::log(ro2/(ro0*ro0))/std::log(a));
}
}
@@ -332,7 +332,7 @@ void LogPolar_Overlapping::create_map(int _M, int _n, int _R, int _s, double _ro
for(int j=0; j<2*w+1; j++)
for(int i=0; i<2*w+1; i++)
{
(w_ker_2D[v*R+u].weights)[j*(2*w+1)+i]=exp(-(pow(i-w-dx, 2)+pow(j-w-dy, 2))/(2*sigma*sigma));
(w_ker_2D[v*R+u].weights)[j*(2*w+1)+i]=std::exp(-(std::pow(i-w-dx, 2)+std::pow(j-w-dy, 2))/(2*sigma*sigma));
tot+=(w_ker_2D[v*R+u].weights)[j*(2*w+1)+i];
}
for(int j=0; j<(2*w+1); j++)
@@ -351,7 +351,7 @@ const Mat LogPolar_Overlapping::to_cortical(const Mat &source)
remap(source_border,out,Csri,Rsri,INTER_LINEAR);
int wm=w_ker_2D[R-1].w;
vector<int> IMG((M+2*wm+1)*(N+2*wm+1), 0);
std::vector<int> IMG((M+2*wm+1)*(N+2*wm+1), 0);
for(int j=0; j<N; j++)
for(int i=0; i<M; i++)
@@ -388,8 +388,8 @@ const Mat LogPolar_Overlapping::to_cartesian(const Mat &source)
int wm=w_ker_2D[R-1].w;
vector<double> IMG((N+2*wm+1)*(M+2*wm+1), 0.);
vector<double> NOR((N+2*wm+1)*(M+2*wm+1), 0.);
std::vector<double> IMG((N+2*wm+1)*(M+2*wm+1), 0.);
std::vector<double> NOR((N+2*wm+1)*(M+2*wm+1), 0.);
for(int v=0; v<S; v++)
for(int u=ind1; u<R; u++)
@@ -416,7 +416,7 @@ const Mat LogPolar_Overlapping::to_cartesian(const Mat &source)
{
/*if(NOR[(M+2*wm+1)*j+i]>0)
ret[M*(j-wm)+i-wm]=(int) floor(IMG[(M+2*wm+1)*j+i]+0.5);*/
//int ro=(int)floor(sqrt((double)((j-wm-yc)*(j-wm-yc)+(i-wm-xc)*(i-wm-xc))));
//int ro=(int)floor(std::sqrt((double)((j-wm-yc)*(j-wm-yc)+(i-wm-xc)*(i-wm-xc))));
int csi=(int) floor(CSIyx.at<float>(j-wm,i-wm));
if((csi>=(ind1-(w_ker_2D[ind1]).w))&&(csi<R))
@@ -446,13 +446,13 @@ LogPolar_Adjacent::LogPolar_Adjacent(int w, int h, Point2i center, int _R, doubl
int rtmp;
if (center.x<=w/2 && center.y>=h/2)
rtmp=(int)sqrt((float)center.y*center.y + (float)(w-center.x)*(w-center.x));
rtmp=(int)std::sqrt((float)center.y*center.y + (float)(w-center.x)*(w-center.x));
else if (center.x>=w/2 && center.y>=h/2)
rtmp=(int)sqrt((float)center.y*center.y + (float)center.x*center.x);
rtmp=(int)std::sqrt((float)center.y*center.y + (float)center.x*center.x);
else if (center.x>=w/2 && center.y<=h/2)
rtmp=(int)sqrt((float)(h-center.y)*(h-center.y) + (float)center.x*center.x);
rtmp=(int)std::sqrt((float)(h-center.y)*(h-center.y) + (float)center.x*center.x);
else //if (center.x<=w/2 && center.y<=h/2)
rtmp=(int)sqrt((float)(h-center.y)*(h-center.y) + (float)(w-center.x)*(w-center.x));
rtmp=(int)std::sqrt((float)(h-center.y)*(h-center.y) + (float)(w-center.x)*(w-center.x));
M=2*rtmp; N=2*rtmp;
@@ -468,8 +468,8 @@ LogPolar_Adjacent::LogPolar_Adjacent(int w, int h, Point2i center, int _R, doubl
if (sp){
int jc=M/2-1, ic=N/2-1;
int _romax=min(ic, jc);
double _a=exp(log((double)(_romax/2-1)/(double)ro0)/(double)R);
int _romax=std::min(ic, jc);
double _a=std::exp(std::log((double)(_romax/2-1)/(double)ro0)/(double)R);
S=(int) floor(2*CV_PI/(_a-1)+0.5);
}
@@ -484,9 +484,9 @@ void LogPolar_Adjacent::create_map(int _M, int _n, int _R, int _s, double _ro0,
R=_R;
S=_s;
ro0=_ro0;
romax=min(M/2.0, N/2.0);
romax=std::min(M/2.0, N/2.0);
a=exp(log(romax/ro0)/(double)R);
a=std::exp(std::log(romax/ro0)/(double)R);
q=S/(2*CV_PI);
A.resize(R*S);
@@ -572,7 +572,7 @@ const Mat LogPolar_Adjacent::to_cortical(const Mat &source)
Mat source_border;
copyMakeBorder(source,source_border,top,bottom,left,right,BORDER_CONSTANT,Scalar(0));
vector<double> map(R*S, 0.);
std::vector<double> map(R*S, 0.);
for(int j=0; j<N; j++)
for(int i=0; i<M; i++)
@@ -597,7 +597,7 @@ const Mat LogPolar_Adjacent::to_cortical(const Mat &source)
const Mat LogPolar_Adjacent::to_cartesian(const Mat &source)
{
vector<double> map(M*N, 0.);
std::vector<double> map(M*N, 0.);
for(int j=0; j<N; j++)
for(int i=0; i<M; i++)
@@ -621,7 +621,7 @@ const Mat LogPolar_Adjacent::to_cartesian(const Mat &source)
bool LogPolar_Adjacent::get_uv(double x, double y, int&u, int&v)
{
double ro=sqrt(x*x+y*y), theta;
double ro=std::sqrt(x*x+y*y), theta;
if(x>0)
theta=atan(y/x);
else
@@ -635,7 +635,7 @@ bool LogPolar_Adjacent::get_uv(double x, double y, int&u, int&v)
}
else
{
u= (int) floor(log(ro/ro0)/log(a));
u= (int) floor(std::log(ro/ro0)/std::log(a));
if(theta>=0)
v= (int) floor(q*theta);
else

View File

@@ -144,7 +144,7 @@ void MagnoRetinaFilter::resize(const unsigned int NBrows, const unsigned int NBc
void MagnoRetinaFilter::setCoefficientsTable(const float parasolCells_beta, const float parasolCells_tau, const float parasolCells_k, const float amacrinCellsTemporalCutFrequency, const float localAdaptIntegration_tau, const float localAdaptIntegration_k )
{
_temporalCoefficient=(float)exp(-1.0f/amacrinCellsTemporalCutFrequency);
_temporalCoefficient=(float)std::exp(-1.0f/amacrinCellsTemporalCutFrequency);
// the first set of parameters is dedicated to the low pass filtering property of the ganglion cells
BasicRetinaFilter::setLPfilterParameters(parasolCells_beta, parasolCells_tau, parasolCells_k, 0);
// the second set of parameters is dedicated to the ganglion cells' output integration for their local adaptation property

View File

@@ -101,7 +101,7 @@ namespace
return true;
}
void fillMinMax(const vector<Point3f>& points, Octree::Node& node)
void fillMinMax(const std::vector<Point3f>& points, Octree::Node& node)
{
node.x_max = node.y_max = node.z_max = std::numeric_limits<float>::min();
node.x_min = node.y_min = node.z_min = std::numeric_limits<float>::max();
@@ -171,7 +171,7 @@ namespace cv
{
}
Octree::Octree(const vector<Point3f>& points3d, int maxLevels, int _minPoints)
Octree::Octree(const std::vector<Point3f>& points3d, int maxLevels, int _minPoints)
{
buildTree(points3d, maxLevels, _minPoints);
}
@@ -180,7 +180,7 @@ namespace cv
{
}
void Octree::getPointsWithinSphere(const Point3f& center, float radius, vector<Point3f>& out) const
void Octree::getPointsWithinSphere(const Point3f& center, float radius, std::vector<Point3f>& out) const
{
out.clear();
@@ -256,9 +256,9 @@ namespace cv
}
}
void Octree::buildTree(const vector<Point3f>& points3d, int maxLevels, int _minPoints)
void Octree::buildTree(const std::vector<Point3f>& points3d, int maxLevels, int _minPoints)
{
assert((size_t)maxLevels * 8 < MAX_STACK_SIZE);
CV_Assert((size_t)maxLevels * 8 < MAX_STACK_SIZE);
points.resize(points3d.size());
std::copy(points3d.begin(), points3d.end(), points.begin());
minPoints = _minPoints;
@@ -286,9 +286,9 @@ namespace cv
{
size_t size = nodes[nodeInd].end - nodes[nodeInd].begin;
vector<size_t> boxBorders(MAX_LEAFS+1, 0);
vector<size_t> boxIndices(size);
vector<Point3f> tempPoints(size);
std::vector<size_t> boxBorders(MAX_LEAFS+1, 0);
std::vector<size_t> boxIndices(size);
std::vector<Point3f> tempPoints(size);
for (int i = nodes[nodeInd].begin, j = 0; i < nodes[nodeInd].end; ++i, ++j)
{
@@ -304,7 +304,7 @@ namespace cv
for (size_t i = 1; i < boxBorders.size(); ++i)
boxBorders[i] += boxBorders[i-1];
vector<size_t> writeInds(boxBorders.begin(), boxBorders.end());
std::vector<size_t> writeInds(boxBorders.begin(), boxBorders.end());
for (size_t i = 0; i < size; ++i)
{

View File

@@ -61,7 +61,7 @@ namespace cv {
namespace of2 {
static double logsumexp(double a, double b) {
return a > b ? log(1 + exp(b - a)) + a : log(1 + exp(a - b)) + b;
return a > b ? std::log(1 + std::exp(b - a)) + a : std::log(1 + std::exp(a - b)) + b;
}
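This is the standard numerically stable log-sum-exp: factoring the larger argument out of log(e^a + e^b) keeps the remaining exponential in [0, 1], so nothing overflows even for strongly negative log-likelihoods. A standalone sketch:

.. code-block:: cpp

    #include <cmath>
    #include <cstdio>

    static double logsumexp(double a, double b) {
        return a > b ? std::log(1 + std::exp(b - a)) + a
                     : std::log(1 + std::exp(a - b)) + b;
    }

    int main() {
        double a = -1000.0, b = -1001.0;   // typical log-likelihood scale
        // the naive std::log(std::exp(a) + std::exp(b)) underflows to -inf
        std::printf("%f\n", logsumexp(a, b));   // about -999.6867
        return 0;
    }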
FabMap::FabMap(const Mat& _clTree, double _PzGe,
@@ -103,14 +103,14 @@ const std::vector<cv::Mat>& FabMap::getTestImgDescriptors() const {
void FabMap::addTraining(const Mat& queryImgDescriptor) {
CV_Assert(!queryImgDescriptor.empty());
vector<Mat> queryImgDescriptors;
std::vector<Mat> queryImgDescriptors;
for (int i = 0; i < queryImgDescriptor.rows; i++) {
queryImgDescriptors.push_back(queryImgDescriptor.row(i));
}
addTraining(queryImgDescriptors);
}
void FabMap::addTraining(const vector<Mat>& queryImgDescriptors) {
void FabMap::addTraining(const std::vector<Mat>& queryImgDescriptors) {
for (size_t i = 0; i < queryImgDescriptors.size(); i++) {
CV_Assert(!queryImgDescriptors[i].empty());
CV_Assert(queryImgDescriptors[i].rows == 1);
@@ -122,7 +122,7 @@ void FabMap::addTraining(const vector<Mat>& queryImgDescriptors) {
void FabMap::add(const cv::Mat& queryImgDescriptor) {
CV_Assert(!queryImgDescriptor.empty());
vector<Mat> queryImgDescriptors;
std::vector<Mat> queryImgDescriptors;
for (int i = 0; i < queryImgDescriptor.rows; i++) {
queryImgDescriptors.push_back(queryImgDescriptor.row(i));
}
@@ -140,10 +140,10 @@ void FabMap::add(const std::vector<cv::Mat>& queryImgDescriptors) {
}
void FabMap::compare(const Mat& queryImgDescriptor,
vector<IMatch>& matches, bool addQuery,
std::vector<IMatch>& matches, bool addQuery,
const Mat& mask) {
CV_Assert(!queryImgDescriptor.empty());
vector<Mat> queryImgDescriptors;
std::vector<Mat> queryImgDescriptors;
for (int i = 0; i < queryImgDescriptor.rows; i++) {
queryImgDescriptors.push_back(queryImgDescriptor.row(i));
}
@@ -151,16 +151,16 @@ void FabMap::compare(const Mat& queryImgDescriptor,
}
void FabMap::compare(const Mat& queryImgDescriptor,
const Mat& testImgDescriptor, vector<IMatch>& matches,
const Mat& testImgDescriptor, std::vector<IMatch>& matches,
const Mat& mask) {
CV_Assert(!queryImgDescriptor.empty());
vector<Mat> queryImgDescriptors;
std::vector<Mat> queryImgDescriptors;
for (int i = 0; i < queryImgDescriptor.rows; i++) {
queryImgDescriptors.push_back(queryImgDescriptor.row(i));
}
CV_Assert(!testImgDescriptor.empty());
vector<Mat> _testImgDescriptors;
std::vector<Mat> _testImgDescriptors;
for (int i = 0; i < testImgDescriptor.rows; i++) {
_testImgDescriptors.push_back(testImgDescriptor.row(i));
}
@@ -169,18 +169,18 @@ void FabMap::compare(const Mat& queryImgDescriptor,
}
void FabMap::compare(const Mat& queryImgDescriptor,
const vector<Mat>& _testImgDescriptors,
vector<IMatch>& matches, const Mat& mask) {
const std::vector<Mat>& _testImgDescriptors,
std::vector<IMatch>& matches, const Mat& mask) {
CV_Assert(!queryImgDescriptor.empty());
vector<Mat> queryImgDescriptors;
std::vector<Mat> queryImgDescriptors;
for (int i = 0; i < queryImgDescriptor.rows; i++) {
queryImgDescriptors.push_back(queryImgDescriptor.row(i));
}
compare(queryImgDescriptors,_testImgDescriptors,matches,mask);
}
void FabMap::compare(const vector<Mat>& queryImgDescriptors,
vector<IMatch>& matches, bool addQuery, const Mat& /*mask*/) {
void FabMap::compare(const std::vector<Mat>& queryImgDescriptors,
std::vector<IMatch>& matches, bool addQuery, const Mat& /*mask*/) {
// TODO: add first query if empty (is this necessary)
@@ -199,9 +199,9 @@ void FabMap::compare(const vector<Mat>& queryImgDescriptors,
}
}
void FabMap::compare(const vector<Mat>& queryImgDescriptors,
const vector<Mat>& _testImgDescriptors,
vector<IMatch>& matches, const Mat& /*mask*/) {
void FabMap::compare(const std::vector<Mat>& queryImgDescriptors,
const std::vector<Mat>& _testImgDescriptors,
std::vector<IMatch>& matches, const Mat& /*mask*/) {
CV_Assert(!(flags & MOTION_MODEL));
for (size_t i = 0; i < _testImgDescriptors.size(); i++) {
@@ -225,10 +225,10 @@ void FabMap::compare(const vector<Mat>& queryImgDescriptors,
}
void FabMap::compareImgDescriptor(const Mat& queryImgDescriptor,
int queryIndex, const vector<Mat>& _testImgDescriptors,
vector<IMatch>& matches) {
int queryIndex, const std::vector<Mat>& _testImgDescriptors,
std::vector<IMatch>& matches) {
vector<IMatch> queryMatches;
std::vector<IMatch> queryMatches;
queryMatches.push_back(IMatch(queryIndex,-1,
getNewPlaceLikelihood(queryImgDescriptor),0));
getLikelihoods(queryImgDescriptor,_testImgDescriptors,queryMatches);
@@ -240,7 +240,7 @@ void FabMap::compareImgDescriptor(const Mat& queryImgDescriptor,
}
void FabMap::getLikelihoods(const Mat& /*queryImgDescriptor*/,
const vector<Mat>& /*testImgDescriptors*/, vector<IMatch>& /*matches*/) {
const std::vector<Mat>& /*testImgDescriptors*/, std::vector<IMatch>& /*matches*/) {
}
@@ -252,7 +252,7 @@ double FabMap::getNewPlaceLikelihood(const Mat& queryImgDescriptor) {
for (int q = 0; q < clTree.cols; q++) {
zq = queryImgDescriptor.at<float>(0,q) > 0;
logP += log(Pzq(q, false) * PzqGeq(zq, false) +
logP += std::log(Pzq(q, false) * PzqGeq(zq, false) +
Pzq(q, true) * PzqGeq(zq, true));
}
} else {
@@ -269,7 +269,7 @@ double FabMap::getNewPlaceLikelihood(const Mat& queryImgDescriptor) {
beta = Pzq(q, !zq) * PzqGeq(zq, true) * PzqGzpq(q, zq, zpq);
p += Pzq(q, true) * beta / (alpha + beta);
logP += log(p);
logP += std::log(p);
}
}
return logP;
@@ -279,7 +279,7 @@ double FabMap::getNewPlaceLikelihood(const Mat& queryImgDescriptor) {
CV_Assert(!trainingImgDescriptors.empty());
CV_Assert(numSamples > 0);
vector<Mat> sampledImgDescriptors;
std::vector<Mat> sampledImgDescriptors;
// TODO: this method can result in the same sample being added
// multiple times. Is this desired?
@@ -289,7 +289,7 @@ double FabMap::getNewPlaceLikelihood(const Mat& queryImgDescriptor) {
sampledImgDescriptors.push_back(trainingImgDescriptors[index]);
}
vector<IMatch> matches;
std::vector<IMatch> matches;
getLikelihoods(queryImgDescriptor,sampledImgDescriptors,matches);
double averageLogLikelihood = -DBL_MAX + matches.front().likelihood + 1;
@@ -298,34 +298,34 @@ double FabMap::getNewPlaceLikelihood(const Mat& queryImgDescriptor) {
logsumexp(matches[i].likelihood, averageLogLikelihood);
}
return averageLogLikelihood - log((double)numSamples);
return averageLogLikelihood - std::log((double)numSamples);
}
return 0;
}
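The sampling branch above returns the log of the *average* sampled likelihood: assuming ``logsumexp(a, b)`` accumulates :math:`\log(e^a + e^b)`, the value computed is

.. math::

    \bar{L}_{\mathrm{new}} \;=\; \log\!\Big(\frac{1}{N}\sum_{k=1}^{N} e^{L_k}\Big) \;=\; \operatorname{logsumexp}(L_1,\dots,L_N) \;-\; \log N,

with :math:`N` = ``numSamples`` and :math:`L_k` the likelihood of the query against the :math:`k`-th sampled training descriptor.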
void FabMap::normaliseDistribution(vector<IMatch>& matches) {
void FabMap::normaliseDistribution(std::vector<IMatch>& matches) {
CV_Assert(!matches.empty());
if (flags & MOTION_MODEL) {
matches[0].match = matches[0].likelihood + log(Pnew);
matches[0].match = matches[0].likelihood + std::log(Pnew);
if (priorMatches.size() > 2) {
matches[1].match = matches[1].likelihood;
matches[1].match += log(
matches[1].match += std::log(
(2 * (1-mBias) * priorMatches[1].match +
priorMatches[1].match +
2 * mBias * priorMatches[2].match) / 3);
for (size_t i = 2; i < priorMatches.size()-1; i++) {
matches[i].match = matches[i].likelihood;
matches[i].match += log(
matches[i].match += std::log(
(2 * (1-mBias) * priorMatches[i-1].match +
priorMatches[i].match +
2 * mBias * priorMatches[i+1].match)/3);
}
matches[priorMatches.size()-1].match =
matches[priorMatches.size()-1].likelihood;
matches[priorMatches.size()-1].match += log(
matches[priorMatches.size()-1].match += std::log(
(2 * (1-mBias) * priorMatches[priorMatches.size()-2].match +
priorMatches[priorMatches.size()-1].match +
2 * mBias * priorMatches[priorMatches.size()-1].match)/3);
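In the motion-model branch above, each match score is the place likelihood plus the log of a smoothed transition prior built from the previous distribution; for an interior place :math:`i` the code computes

.. math::

    m_i \;=\; \ell_i \;+\; \log\frac{2(1-b)\,p_{i-1} \;+\; p_i \;+\; 2b\,p_{i+1}}{3},

where :math:`\ell_i` is the likelihood, :math:`b` = ``mBias`` and :math:`p_j` are the previous match probabilities; the boundary places reuse their own mass in place of the missing neighbour.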
@@ -348,7 +348,7 @@ void FabMap::normaliseDistribution(vector<IMatch>& matches) {
//normalise
for (size_t i = 0; i < matches.size(); i++) {
matches[i].match = exp(matches[i].match - logsum);
matches[i].match = std::exp(matches[i].match - logsum);
}
//smooth final probabilities
@@ -368,7 +368,7 @@ void FabMap::normaliseDistribution(vector<IMatch>& matches) {
logsum = logsumexp(logsum, matches[i].likelihood);
}
for (size_t i = 0; i < matches.size(); i++) {
matches[i].match = exp(matches[i].likelihood - logsum);
matches[i].match = std::exp(matches[i].likelihood - logsum);
}
for (size_t i = 0; i < matches.size(); i++) {
matches[i].match = sFactor*matches[i].match +
@@ -444,7 +444,7 @@ FabMap1::~FabMap1() {
}
void FabMap1::getLikelihoods(const Mat& queryImgDescriptor,
const vector<Mat>& testImageDescriptors, vector<IMatch>& matches) {
const std::vector<Mat>& testImageDescriptors, std::vector<IMatch>& matches) {
for (size_t i = 0; i < testImageDescriptors.size(); i++) {
bool zq, zpq, Lzq;
@@ -455,7 +455,7 @@ void FabMap1::getLikelihoods(const Mat& queryImgDescriptor,
zpq = queryImgDescriptor.at<float>(0,pq(q)) > 0;
Lzq = testImageDescriptors[i].at<float>(0,q) > 0;
logP += log((this->*PzGL)(q, zq, zpq, Lzq));
logP += std::log((this->*PzGL)(q, zq, zpq, Lzq));
}
matches.push_back(IMatch(0,(int)i,logP,0));
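``FabMap1`` above evaluates the Chow-Liu factorisation directly: for every word :math:`q` with tree parent :math:`p_q`, it accumulates

.. math::

    \log P(Z_{\mathrm{query}} \mid L_i) \;=\; \sum_{q} \log P\bigl(z_q \mid z_{p_q},\, \mathcal{L}^i_q\bigr),

where the per-word factor is supplied through the ``PzGL`` member-function pointer and :math:`\mathcal{L}^i_q` is the word's state in test descriptor :math:`i`.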
@@ -467,7 +467,7 @@ FabMapLUT::FabMapLUT(const Mat& _clTree, double _PzGe, double _PzGNe,
FabMap(_clTree, _PzGe, _PzGNe, _flags, _numSamples), precision(_precision) {
int nWords = clTree.cols;
double precFactor = (double)pow(10.0, precision);
double precFactor = (double)std::pow(10.0, precision);
table = new int[nWords][8];
@@ -478,7 +478,7 @@ FabMap(_clTree, _PzGe, _PzGNe, _flags, _numSamples), precision(_precision) {
bool zq = (bool) ((i >> 1) & 0x01);
bool zpq = (bool) (i & 1);
table[q][i] = -(int)(log((this->*PzGL)(q, zq, zpq, Lzq))
table[q][i] = -(int)(std::log((this->*PzGL)(q, zq, zpq, Lzq))
* precFactor);
}
}
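The lookup table built above quantises each per-word log-likelihood to an integer, presumably so ``getLikelihoods`` can sum plain ``int`` values and rescale once:

.. math::

    \mathrm{table}[q][i] \;\approx\; -\,10^{p}\,\log P\bigl(z_q \mid z_{p_q}, \mathcal{L}_q\bigr),

with :math:`p` = ``precision`` (the ``(int)`` cast truncates); the accumulated sum is later scaled by :math:`10^{-p}` (the second ``precFactor``) and negated to recover the log-likelihood.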
@@ -489,9 +489,9 @@ FabMapLUT::~FabMapLUT() {
}
void FabMapLUT::getLikelihoods(const Mat& queryImgDescriptor,
const vector<Mat>& testImageDescriptors, vector<IMatch>& matches) {
const std::vector<Mat>& testImageDescriptors, std::vector<IMatch>& matches) {
double precFactor = (double)pow(10.0, -precision);
double precFactor = (double)std::pow(10.0, -precision);
for (size_t i = 0; i < testImageDescriptors.size(); i++) {
unsigned long long int logP = 0;
@@ -517,13 +517,13 @@ FabMapFBO::~FabMapFBO() {
}
void FabMapFBO::getLikelihoods(const Mat& queryImgDescriptor,
const vector<Mat>& testImageDescriptors, vector<IMatch>& matches) {
const std::vector<Mat>& testImageDescriptors, std::vector<IMatch>& matches) {
std::multiset<WordStats> wordData;
setWordStatistics(queryImgDescriptor, wordData);
vector<int> matchIndices;
vector<IMatch> queryMatches;
std::vector<int> matchIndices;
std::vector<IMatch> queryMatches;
for (size_t i = 0; i < testImageDescriptors.size(); i++) {
queryMatches.push_back(IMatch(0,(int)i,0,0));
@@ -544,7 +544,7 @@ void FabMapFBO::getLikelihoods(const Mat& queryImgDescriptor,
bool Lzq =
testImageDescriptors[matchIndices[i]].at<float>(0,wordIter->q) > 0;
queryMatches[matchIndices[i]].likelihood +=
log((this->*PzGL)(wordIter->q,zq,zpq,Lzq));
std::log((this->*PzGL)(wordIter->q,zq,zpq,Lzq));
currBest =
std::max(queryMatches[matchIndices[i]].likelihood, currBest);
}
@@ -553,9 +553,9 @@ void FabMapFBO::getLikelihoods(const Mat& queryImgDescriptor,
continue;
double delta = std::max(limitbisection(wordIter->V, wordIter->M),
-log(rejectionThreshold));
-std::log(rejectionThreshold));
vector<int>::iterator matchIter = matchIndices.begin();
std::vector<int>::iterator matchIter = matchIndices.begin();
while (matchIter != matchIndices.end()) {
if (currBest - queryMatches[*matchIter].likelihood > delta) {
queryMatches[*matchIter].likelihood = bailedOut;
@@ -568,7 +568,7 @@ void FabMapFBO::getLikelihoods(const Mat& queryImgDescriptor,
for (size_t i = 0; i < queryMatches.size(); i++) {
if (queryMatches[i].likelihood == bailedOut) {
queryMatches[i].likelihood = currBest + log(rejectionThreshold);
queryMatches[i].likelihood = currBest + std::log(rejectionThreshold);
}
}
matches.insert(matches.end(), queryMatches.begin(), queryMatches.end());
@@ -595,11 +595,11 @@ void FabMapFBO::setWordStatistics(const Mat& queryImgDescriptor,
zq = queryImgDescriptor.at<float>(0,wordIter->q) > 0;
zpq = queryImgDescriptor.at<float>(0,pq(wordIter->q)) > 0;
d = log((this->*PzGL)(wordIter->q, zq, zpq, true)) -
log((this->*PzGL)(wordIter->q, zq, zpq, false));
d = std::log((this->*PzGL)(wordIter->q, zq, zpq, true)) -
std::log((this->*PzGL)(wordIter->q, zq, zpq, false));
V += pow(d, 2.0) * 2 *
(Pzq(wordIter->q, true) - pow(Pzq(wordIter->q, true), 2.0));
V += std::pow(d, 2.0) * 2 *
(Pzq(wordIter->q, true) - std::pow(Pzq(wordIter->q, true), 2.0));
M = std::max(M, fabs(d));
wordIter->V = V;
@@ -631,8 +631,8 @@ double FabMapFBO::limitbisection(double v, double m) {
double FabMapFBO::bennettInequality(double v, double m, double delta) {
double DMonV = delta * m / v;
double f_delta = log(DMonV + sqrt(pow(DMonV, 2.0) + 1));
return exp((v / pow(m, 2.0))*(cosh(f_delta) - 1 - DMonV * f_delta));
double f_delta = std::log(DMonV + std::sqrt(std::pow(DMonV, 2.0) + 1));
return std::exp((v / std::pow(m, 2.0))*(cosh(f_delta) - 1 - DMonV * f_delta));
}
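For reference, the bound coded above is Bennett's inequality with the variance :math:`v` and maximum per-word deviation :math:`m` accumulated in ``setWordStatistics``:

.. math::

    \exp\!\Big(\frac{v}{m^{2}}\big(\cosh f_{\delta} - 1 - \tfrac{\delta m}{v} f_{\delta}\big)\Big),
    \qquad
    f_{\delta} \;=\; \log\!\Big(\tfrac{\delta m}{v} + \sqrt{\tfrac{\delta^{2} m^{2}}{v^{2}} + 1}\Big) \;=\; \operatorname{arsinh}\tfrac{\delta m}{v};

``limitbisection`` inverts this numerically to find the largest likelihood gap :math:`\delta` a trailing hypothesis can still close, which drives the fast bail-out.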
bool FabMapFBO::compInfo(const WordStats& first, const WordStats& second) {
@@ -647,13 +647,13 @@ FabMap(_clTree, _PzGe, _PzGNe, _flags) {
children.resize(clTree.cols);
for (int q = 0; q < clTree.cols; q++) {
d1.push_back(log((this->*PzGL)(q, false, false, true) /
d1.push_back(std::log((this->*PzGL)(q, false, false, true) /
(this->*PzGL)(q, false, false, false)));
d2.push_back(log((this->*PzGL)(q, false, true, true) /
d2.push_back(std::log((this->*PzGL)(q, false, true, true) /
(this->*PzGL)(q, false, true, false)) - d1[q]);
d3.push_back(log((this->*PzGL)(q, true, false, true) /
d3.push_back(std::log((this->*PzGL)(q, true, false, true) /
(this->*PzGL)(q, true, false, false))- d1[q]);
d4.push_back(log((this->*PzGL)(q, true, true, true) /
d4.push_back(std::log((this->*PzGL)(q, true, true, true) /
(this->*PzGL)(q, true, true, false))- d1[q]);
children[pq(q)].push_back(q);
}
@@ -664,7 +664,7 @@ FabMap2::~FabMap2() {
}
void FabMap2::addTraining(const vector<Mat>& queryImgDescriptors) {
void FabMap2::addTraining(const std::vector<Mat>& queryImgDescriptors) {
for (size_t i = 0; i < queryImgDescriptors.size(); i++) {
CV_Assert(!queryImgDescriptors[i].empty());
CV_Assert(queryImgDescriptors[i].rows == 1);
@@ -676,7 +676,7 @@ void FabMap2::addTraining(const vector<Mat>& queryImgDescriptors) {
}
void FabMap2::add(const vector<Mat>& queryImgDescriptors) {
void FabMap2::add(const std::vector<Mat>& queryImgDescriptors) {
for (size_t i = 0; i < queryImgDescriptors.size(); i++) {
CV_Assert(!queryImgDescriptors[i].empty());
CV_Assert(queryImgDescriptors[i].rows == 1);
@@ -688,15 +688,15 @@ void FabMap2::add(const vector<Mat>& queryImgDescriptors) {
}
void FabMap2::getLikelihoods(const Mat& queryImgDescriptor,
const vector<Mat>& testImageDescriptors, vector<IMatch>& matches) {
const std::vector<Mat>& testImageDescriptors, std::vector<IMatch>& matches) {
if (&testImageDescriptors == &testImgDescriptors) {
getIndexLikelihoods(queryImgDescriptor, testDefaults, testInvertedMap,
matches);
} else {
CV_Assert(!(flags & MOTION_MODEL));
vector<double> defaults;
std::map<int, vector<int> > invertedMap;
std::vector<double> defaults;
std::map<int, std::vector<int> > invertedMap;
for (size_t i = 0; i < testImageDescriptors.size(); i++) {
addToIndex(testImageDescriptors[i],defaults,invertedMap);
}
@@ -708,7 +708,7 @@ double FabMap2::getNewPlaceLikelihood(const Mat& queryImgDescriptor) {
CV_Assert(!trainingImgDescriptors.empty());
vector<IMatch> matches;
std::vector<IMatch> matches;
getIndexLikelihoods(queryImgDescriptor, trainingDefaults,
trainingInvertedMap, matches);
@@ -718,13 +718,13 @@ double FabMap2::getNewPlaceLikelihood(const Mat& queryImgDescriptor) {
logsumexp(matches[i].likelihood, averageLogLikelihood);
}
return averageLogLikelihood - log((double)trainingDefaults.size());
return averageLogLikelihood - std::log((double)trainingDefaults.size());
}
void FabMap2::addToIndex(const Mat& queryImgDescriptor,
vector<double>& defaults,
std::map<int, vector<int> >& invertedMap) {
std::vector<double>& defaults,
std::map<int, std::vector<int> >& invertedMap) {
defaults.push_back(0);
for (int q = 0; q < clTree.cols; q++) {
if (queryImgDescriptor.at<float>(0,q) > 0) {
@@ -736,10 +736,10 @@ void FabMap2::addToIndex(const Mat& queryImgDescriptor,
void FabMap2::getIndexLikelihoods(const Mat& queryImgDescriptor,
std::vector<double>& defaults,
std::map<int, vector<int> >& invertedMap,
std::map<int, std::vector<int> >& invertedMap,
std::vector<IMatch>& matches) {
vector<int>::iterator LwithI, child;
std::vector<int>::iterator LwithI, child;
std::vector<double> likelihoods = defaults;

View File

@@ -43,16 +43,13 @@
#ifndef __OPENCV_PRECOMP_H__
#define __OPENCV_PRECOMP_H__
#ifdef HAVE_CVCONFIG_H
#include "cvconfig.h"
#endif
#include "opencv2/contrib.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/objdetect.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/core/utility.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/core/internal.hpp"
#include "opencv2/core/private.hpp"
namespace cv
{

View File

@@ -70,7 +70,8 @@
*/
#include "precomp.hpp"
#include "retinafilter.hpp"
#include <iostream>
#include <cstdio>
#include <sstream>
namespace cv
{
@@ -113,7 +114,7 @@ public:
* @param retinaParameterFile : the parameters filename
* @param applyDefaultSetupOnFailure : set to true to apply the default setup if the parameter file cannot be loaded

*/
void setup(std::string retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
void setup(String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
/**
@@ -143,13 +144,13 @@ public:
* parameters setup display method
* @return a string which contains formatted parameters information
*/
const std::string printSetup();
const String printSetup();
/**
* write xml/yml formatted parameters information
* @param fs : the filename of the xml file that will be opened and written with formatted parameters information
*/
virtual void write( std::string fs ) const;
virtual void write( String fs ) const;
/**
@@ -323,26 +324,26 @@ void RetinaImpl::setColorSaturation(const bool saturateColors, const float color
struct Retina::RetinaParameters RetinaImpl::getParameters(){return _retinaParameters;}
void RetinaImpl::setup(std::string retinaParameterFile, const bool applyDefaultSetupOnFailure)
void RetinaImpl::setup(String retinaParameterFile, const bool applyDefaultSetupOnFailure)
{
try
{
// opening retinaParameterFile in read mode
cv::FileStorage fs(retinaParameterFile, cv::FileStorage::READ);
setup(fs, applyDefaultSetupOnFailure);
}catch(Exception &e)
{
std::cout<<"RetinaImpl::setup: wrong/unappropriate xml parameter file : error report :`n=>"<<e.what()<<std::endl;
if (applyDefaultSetupOnFailure)
{
std::cout<<"RetinaImpl::setup: resetting retina with default parameters"<<std::endl;
setupOPLandIPLParvoChannel();
setupIPLMagnoChannel();
}
catch(Exception &e)
{
printf("Retina::setup: wrong/unappropriate xml parameter file : error report :`n=>%s\n", e.what());
if (applyDefaultSetupOnFailure)
{
printf("Retina::setup: resetting retina with default parameters\n");
setupOPLandIPLParvoChannel();
setupIPLMagnoChannel();
}
else
{
std::cout<<"=> keeping current parameters"<<std::endl;
printf("=> keeping current parameters\n");
}
}
}
@@ -354,7 +355,7 @@ void RetinaImpl::setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailur
// read parameters file if it exists or apply default setup if asked for
if (!fs.isOpened())
{
std::cout<<"RetinaImpl::setup: provided parameters file could not be open... skeeping configuration"<<std::endl;
printf("Retina::setup: provided parameters file could not be open... skeeping configuration\n");
return;
// implicit else case : retinaParameterFile could be open (it exists at least)
}
@@ -386,18 +387,18 @@ void RetinaImpl::setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailur
}catch(Exception &e)
{
std::cout<<"RetinaImpl::setup: resetting retina with default parameters"<<std::endl;
printf("RetinaImpl::setup: resetting retina with default parameters\n");
if (applyDefaultSetupOnFailure)
{
setupOPLandIPLParvoChannel();
setupIPLMagnoChannel();
}
std::cout<<"RetinaImpl::setup: wrong/unappropriate xml parameter file : error report :`n=>"<<e.what()<<std::endl;
std::cout<<"=> keeping current parameters"<<std::endl;
printf("Retina::setup: wrong/unappropriate xml parameter file : error report :`n=>%s\n", e.what());
printf("=> keeping current parameters\n");
}
// report current configuration
std::cout<<printSetup()<<std::endl;
printf("%s\n", printSetup().c_str());
}
void RetinaImpl::setup(cv::Retina::RetinaParameters newConfiguration)
@@ -410,7 +411,7 @@ void RetinaImpl::setup(cv::Retina::RetinaParameters newConfiguration)
}
const std::string RetinaImpl::printSetup()
const String RetinaImpl::printSetup()
{
std::stringstream outmessage;
@@ -440,10 +441,10 @@ const std::string RetinaImpl::printSetup()
<< "\n\t localAdaptintegration_tau : " << _retinaParameters.IplMagno.localAdaptintegration_tau
<< "\n\t localAdaptintegration_k : " << _retinaParameters.IplMagno.localAdaptintegration_k
<<"}";
return outmessage.str();
return outmessage.str().c_str();
}
void RetinaImpl::write( std::string fs ) const
void RetinaImpl::write( String fs ) const
{
FileStorage parametersSaveFile(fs, cv::FileStorage::WRITE );
write(parametersSaveFile);
@@ -602,7 +603,7 @@ void RetinaImpl::_init(const cv::Size inputSz, const bool colorMode, RETINA_COLO
_retinaFilter->clearAllBuffers();
// report current configuration
std::cout<<printSetup()<<std::endl;
printf("%s\n", printSetup().c_str());
}
void RetinaImpl::_convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, cv::Mat &outBuffer)
@@ -686,7 +687,7 @@ bool RetinaImpl::_convertCvMat2ValarrayBuffer(const cv::Mat inputMatToConvert, s
inputMatToConvert.convertTo(dst, dsttype);
}
else
CV_Error(CV_StsUnsupportedFormat, "input image must be single channel (gray levels), bgr format (color) or bgra (color with transparency which won't be considered)");
CV_Error(Error::StsUnsupportedFormat, "input image must be single channel (gray levels), bgr format (color) or bgra (color with transparency which won't be considered)");
return imageNumberOfChannels>1; // return bool : false for gray level image processing, true for color mode
}

View File

@@ -206,7 +206,7 @@ namespace cv
{
for (j=0;j<(int)_photoreceptorsPrefilter.getNBcolumns();++j)
{
float distanceToCenter=sqrt(((float)(i-halfRows)*(i-halfRows)+(j-halfColumns)*(j-halfColumns)));
float distanceToCenter=std::sqrt(((float)(i-halfRows)*(i-halfRows)+(j-halfColumns)*(j-halfColumns)));
if (distanceToCenter<minDistance)
{
float a=*(hybridParvoMagnoCoefTablePTR++)=0.5f+0.5f*(float)cos(CV_PI*distanceToCenter/minDistance);
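The coefficient filled into the table above is a raised cosine of the distance to the image centre,

.. math::

    a(d) \;=\; \tfrac{1}{2}\Big(1 + \cos\frac{\pi d}{d_{\min}}\Big),

decaying smoothly from 1 at the centre to 0 at ``minDistance``; precomputing it once per pixel presumably keeps the trigonometry out of the per-frame processing loop.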

View File

@@ -44,20 +44,22 @@
#define SHOW_DEBUG_IMAGES 0
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/core.hpp"
#include "opencv2/calib3d.hpp"
#if SHOW_DEBUG_IMAGES
# include "opencv2/highgui/highgui.hpp"
# include "opencv2/highgui.hpp"
#endif
#include <iostream>
#include <limits>
#include "opencv2/core/internal.hpp"
#if defined(HAVE_EIGEN) && EIGEN_WORLD_VERSION == 3
# ifdef ANDROID
template <typename Scalar> Scalar log2(Scalar v) { using std::log; return log(v)/log(Scalar(2)); }
template <typename Scalar> Scalar log2(Scalar v) { return std::log(v)/std::log(Scalar(2)); }
# endif
# if defined __GNUC__ && defined __APPLE__
# pragma GCC diagnostic ignored "-Wshadow"
# endif
# include <unsupported/Eigen/MatrixFunctions>
# include <Eigen/Dense>
@@ -169,7 +171,7 @@ static void warpImage( const Mat& image, const Mat& depth,
{
const Rect rect = Rect(0, 0, image.cols, image.rows);
vector<Point2f> points2d;
std::vector<Point2f> points2d;
Mat cloud, transformedCloud;
cvtDepth2Cloud( depth, cloud, cameraMatrix );
@@ -307,11 +309,11 @@ static
void buildPyramids( const Mat& image0, const Mat& image1,
const Mat& depth0, const Mat& depth1,
const Mat& cameraMatrix, int sobelSize, double sobelScale,
const vector<float>& minGradMagnitudes,
vector<Mat>& pyramidImage0, vector<Mat>& pyramidDepth0,
vector<Mat>& pyramidImage1, vector<Mat>& pyramidDepth1,
vector<Mat>& pyramid_dI_dx1, vector<Mat>& pyramid_dI_dy1,
vector<Mat>& pyramidTexturedMask1, vector<Mat>& pyramidCameraMatrix )
const std::vector<float>& minGradMagnitudes,
std::vector<Mat>& pyramidImage0, std::vector<Mat>& pyramidDepth0,
std::vector<Mat>& pyramidImage1, std::vector<Mat>& pyramidDepth1,
std::vector<Mat>& pyramid_dI_dx1, std::vector<Mat>& pyramid_dI_dy1,
std::vector<Mat>& pyramidTexturedMask1, std::vector<Mat>& pyramidCameraMatrix )
{
const int pyramidMaxLevel = (int)minGradMagnitudes.size() - 1;
@@ -420,7 +422,7 @@ bool computeKsi( int transformType,
computeCFuncPtr = computeC_Translation;
}
else
CV_Error( CV_StsBadFlag, "Unsupported value of transformation type flag.");
CV_Error(Error::StsBadFlag, "Unsupported value of transformation type flag.");
Mat C( correspsCount, Cwidth, CV_64FC1 );
Mat dI_dt( correspsCount, 1, CV_64FC1 );
@@ -532,10 +534,10 @@ bool cv::RGBDOdometry( cv::Mat& Rt, const Mat& initRt,
minGradientMagnitudes.size() == iterCounts.size() );
CV_Assert( initRt.empty() || (initRt.type()==CV_64FC1 && initRt.size()==Size(4,4) ) );
vector<int> defaultIterCounts;
vector<float> defaultMinGradMagnitudes;
vector<int> const* iterCountsPtr = &iterCounts;
vector<float> const* minGradientMagnitudesPtr = &minGradientMagnitudes;
std::vector<int> defaultIterCounts;
std::vector<float> defaultMinGradMagnitudes;
std::vector<int> const* iterCountsPtr = &iterCounts;
std::vector<float> const* minGradientMagnitudesPtr = &minGradientMagnitudes;
if( iterCounts.empty() || minGradientMagnitudes.empty() )
{
@@ -557,7 +559,7 @@ bool cv::RGBDOdometry( cv::Mat& Rt, const Mat& initRt,
preprocessDepth( depth0, depth1, validMask0, validMask1, minDepth, maxDepth );
vector<Mat> pyramidImage0, pyramidDepth0,
std::vector<Mat> pyramidImage0, pyramidDepth0,
pyramidImage1, pyramidDepth1, pyramid_dI_dx1, pyramid_dI_dy1, pyramidTexturedMask1,
pyramidCameraMatrix;
buildPyramids( image0, image1, depth0, depth1, cameraMatrix, sobelSize, sobelScale, *minGradientMagnitudesPtr,

View File

@@ -145,8 +145,8 @@ void SelfSimDescriptor::SSD(const Mat& img, Point pt, Mat& ssd) const
}
void SelfSimDescriptor::compute(const Mat& img, vector<float>& descriptors, Size winStride,
const vector<Point>& locations) const
void SelfSimDescriptor::compute(const Mat& img, std::vector<float>& descriptors, Size winStride,
const std::vector<Point>& locations) const
{
CV_Assert( img.depth() == CV_8U );
@@ -156,7 +156,7 @@ void SelfSimDescriptor::compute(const Mat& img, vector<float>& descriptors, Size
int i, nwindows = locations.empty() ? gridSize.width*gridSize.height : (int)locations.size();
int border = largeSize/2 + smallSize/2;
int fsize = (int)getDescriptorSize();
vector<float> tempFeature(fsize+1);
std::vector<float> tempFeature(fsize+1);
descriptors.resize(fsize*nwindows + 1);
Mat ssd(largeSize, largeSize, CV_32F), mappingMask;
computeLogPolarMapping(mappingMask);

View File

@@ -49,49 +49,41 @@
#include <set>
using namespace cv;
using namespace std;
/********************************* local utility *********************************/
namespace cv
{
using std::log;
using std::max;
using std::min;
using std::sqrt;
}
namespace
{
const static Scalar colors[] =
{
CV_RGB(255, 0, 0),
CV_RGB( 0, 255, 0),
CV_RGB( 0, 0, 255),
CV_RGB(255, 255, 0),
CV_RGB(255, 0, 255),
CV_RGB( 0, 255, 255),
CV_RGB(255, 127, 127),
CV_RGB(127, 127, 255),
CV_RGB(127, 255, 127),
CV_RGB(255, 255, 127),
CV_RGB(127, 255, 255),
CV_RGB(255, 127, 255),
CV_RGB(127, 0, 0),
CV_RGB( 0, 127, 0),
CV_RGB( 0, 0, 127),
CV_RGB(127, 127, 0),
CV_RGB(127, 0, 127),
CV_RGB( 0, 127, 127)
Scalar(255, 0, 0),
Scalar( 0, 255, 0),
Scalar( 0, 0, 255),
Scalar(255, 255, 0),
Scalar(255, 0, 255),
Scalar( 0, 255, 255),
Scalar(255, 127, 127),
Scalar(127, 127, 255),
Scalar(127, 255, 127),
Scalar(255, 255, 127),
Scalar(127, 255, 255),
Scalar(255, 127, 255),
Scalar(127, 0, 0),
Scalar( 0, 127, 0),
Scalar( 0, 0, 127),
Scalar(127, 127, 0),
Scalar(127, 0, 127),
Scalar( 0, 127, 127)
};
size_t colors_mum = sizeof(colors)/sizeof(colors[0]);
#if defined __cplusplus && __cplusplus > 199711L
#else
template<class FwIt, class T> void iota(FwIt first, FwIt last, T value) { while(first != last) *first++ = value++; }
#endif
template<class FwIt, class T> inline void _iota(FwIt first, FwIt last, T value)
{
while(first != last) *first++ = value++;
}
void computeNormals( const Octree& Octree, const vector<Point3f>& centers, vector<Point3f>& normals,
vector<uchar>& mask, float normalRadius, int minNeighbors = 20)
void computeNormals( const Octree& Octree, const std::vector<Point3f>& centers, std::vector<Point3f>& normals,
std::vector<uchar>& mask, float normalRadius, int minNeighbors = 20)
{
size_t normals_size = centers.size();
normals.resize(normals_size);
@@ -105,7 +97,7 @@ void computeNormals( const Octree& Octree, const vector<Point3f>& centers, vecto
mask[m] = 1;
}
vector<Point3f> buffer;
std::vector<Point3f> buffer;
buffer.reserve(128);
SVD svd;
@@ -207,7 +199,7 @@ void convertTransformMatrix(const float* matrix, float* sseMatrix)
inline __m128 transformSSE(const __m128* matrix, const __m128& in)
{
assert(((size_t)matrix & 15) == 0);
CV_DbgAssert(((size_t)matrix & 15) == 0);
__m128 a0 = _mm_mul_ps(_mm_load_ps((float*)(matrix+0)), _mm_shuffle_ps(in,in,_MM_SHUFFLE(0,0,0,0)));
__m128 a1 = _mm_mul_ps(_mm_load_ps((float*)(matrix+1)), _mm_shuffle_ps(in,in,_MM_SHUFFLE(1,1,1,1)));
__m128 a2 = _mm_mul_ps(_mm_load_ps((float*)(matrix+2)), _mm_shuffle_ps(in,in,_MM_SHUFFLE(2,2,2,2)));
@@ -223,14 +215,14 @@ inline __m128i _mm_mullo_epi32_emul(const __m128i& a, __m128i& b)
#endif
void computeSpinImages( const Octree& Octree, const vector<Point3f>& points, const vector<Point3f>& normals,
vector<uchar>& mask, Mat& spinImages, int imageWidth, float binSize)
void computeSpinImages( const Octree& Octree, const std::vector<Point3f>& points, const std::vector<Point3f>& normals,
std::vector<uchar>& mask, Mat& spinImages, int imageWidth, float binSize)
{
float pixelsPerMeter = 1.f / binSize;
float support = imageWidth * binSize;
assert(normals.size() == points.size());
assert(mask.size() == points.size());
CV_Assert(normals.size() == points.size());
CV_Assert(mask.size() == points.size());
size_t points_size = points.size();
mask.resize(points_size);
@@ -243,12 +235,12 @@ void computeSpinImages( const Octree& Octree, const vector<Point3f>& points, con
int nthreads = getNumThreads();
int i;
vector< vector<Point3f> > pointsInSpherePool(nthreads);
std::vector< std::vector<Point3f> > pointsInSpherePool(nthreads);
for(i = 0; i < nthreads; i++)
pointsInSpherePool[i].reserve(2048);
float halfSuppport = support / 2;
float searchRad = support * sqrt(5.f) / 2; // sqrt(sup*sup + (sup/2) * (sup/2) )
float searchRad = support * std::sqrt(5.f) / 2; // std::sqrt(sup*sup + (sup/2) * (sup/2) )
#ifdef _OPENMP
#pragma omp parallel for num_threads(nthreads)
@@ -258,8 +250,8 @@ void computeSpinImages( const Octree& Octree, const vector<Point3f>& points, con
if (mask[i] == 0)
continue;
int t = cvGetThreadNum();
vector<Point3f>& pointsInSphere = pointsInSpherePool[t];
int t = getThreadNum();
std::vector<Point3f>& pointsInSphere = pointsInSpherePool[t];
const Point3f& center = points[i];
Octree.getPointsWithinSphere(center, searchRad, pointsInSphere);
@@ -297,7 +289,7 @@ void computeSpinImages( const Octree& Octree, const vector<Point3f>& points, con
__m128 ppm4 = _mm_set1_ps(pixelsPerMeter);
__m128i height4m1 = _mm_set1_epi32(spinImage.rows-1);
__m128i width4m1 = _mm_set1_epi32(spinImage.cols-1);
assert( spinImage.step <= 0xffff );
CV_Assert( spinImage.step <= 0xffff );
__m128i step4 = _mm_set1_epi16((short)step);
__m128i zero4 = _mm_setzero_si128();
__m128i one4i = _mm_set1_epi32(1);
@@ -398,7 +390,7 @@ void computeSpinImages( const Octree& Octree, const vector<Point3f>& points, con
if (beta >= support || beta < 0)
continue;
alpha = sqrt( (new_center.x - pt.x) * (new_center.x - pt.x) +
alpha = std::sqrt( (new_center.x - pt.x) * (new_center.x - pt.x) +
(new_center.y - pt.y) * (new_center.y - pt.y) );
float n1f = beta * pixelsPerMeter;
@@ -432,7 +424,7 @@ void computeSpinImages( const Octree& Octree, const vector<Point3f>& points, con
const Point3f cv::Mesh3D::allzero(0.f, 0.f, 0.f);
cv::Mesh3D::Mesh3D() { resolution = -1; }
cv::Mesh3D::Mesh3D(const vector<Point3f>& _vtx)
cv::Mesh3D::Mesh3D(const std::vector<Point3f>& _vtx)
{
resolution = -1;
vtx.resize(_vtx.size());
@@ -450,14 +442,14 @@ float cv::Mesh3D::estimateResolution(float /*tryRatio*/)
const int minReasonable = 10;
int tryNum = static_cast<int>(tryRatio * vtx.size());
tryNum = min(max(tryNum, minReasonable), (int)vtx.size());
tryNum = std::min(std::max(tryNum, minReasonable), (int)vtx.size());
CvMat desc = cvMat((int)vtx.size(), 3, CV_32F, &vtx[0]);
CvFeatureTree* tr = cvCreateKDTree(&desc);
vector<double> dist(tryNum * neighbors);
vector<int> inds(tryNum * neighbors);
vector<Point3f> query;
std::vector<double> dist(tryNum * neighbors);
std::vector<int> inds(tryNum * neighbors);
std::vector<Point3f> query;
RNG& rng = theRNG();
for(int i = 0; i < tryNum; ++i)
@@ -476,11 +468,11 @@ float cv::Mesh3D::estimateResolution(float /*tryRatio*/)
dist.resize(remove(dist.begin(), dist.end(), invalid_dist) - dist.begin());
sort(dist, less<double>());
std::sort(dist.begin(), dist.end());
return resolution = (float)dist[ dist.size() / 2 ];
#else
CV_Error(CV_StsNotImplemented, "");
CV_Error(Error::StsNotImplemented, "");
return 1.f;
#endif
}
@@ -489,49 +481,49 @@ float cv::Mesh3D::estimateResolution(float /*tryRatio*/)
void cv::Mesh3D::computeNormals(float normalRadius, int minNeighbors)
{
buildOctree();
vector<uchar> mask;
std::vector<uchar> mask;
::computeNormals(octree, vtx, normals, mask, normalRadius, minNeighbors);
}
void cv::Mesh3D::computeNormals(const vector<int>& subset, float normalRadius, int minNeighbors)
void cv::Mesh3D::computeNormals(const std::vector<int>& subset, float normalRadius, int minNeighbors)
{
buildOctree();
vector<uchar> mask(vtx.size(), 0);
std::vector<uchar> mask(vtx.size(), 0);
for(size_t i = 0; i < subset.size(); ++i)
mask[subset[i]] = 1;
::computeNormals(octree, vtx, normals, mask, normalRadius, minNeighbors);
}
void cv::Mesh3D::writeAsVrml(const String& file, const vector<Scalar>& _colors) const
void cv::Mesh3D::writeAsVrml(const String& file, const std::vector<Scalar>& _colors) const
{
ofstream ofs(file.c_str());
std::ofstream ofs(file.c_str());
ofs << "#VRML V2.0 utf8" << endl;
ofs << "Shape" << std::endl << "{" << endl;
ofs << "geometry PointSet" << endl << "{" << endl;
ofs << "coord Coordinate" << endl << "{" << endl;
ofs << "point[" << endl;
ofs << "#VRML V2.0 utf8" << std::endl;
ofs << "Shape" << std::endl << "{" << std::endl;
ofs << "geometry PointSet" << std::endl << "{" << std::endl;
ofs << "coord Coordinate" << std::endl << "{" << std::endl;
ofs << "point[" << std::endl;
for(size_t i = 0; i < vtx.size(); ++i)
ofs << vtx[i].x << " " << vtx[i].y << " " << vtx[i].z << endl;
ofs << vtx[i].x << " " << vtx[i].y << " " << vtx[i].z << std::endl;
ofs << "]" << endl; //point[
ofs << "}" << endl; //Coordinate{
ofs << "]" << std::endl; //point[
ofs << "}" << std::endl; //Coordinate{
if (vtx.size() == _colors.size())
{
ofs << "color Color" << endl << "{" << endl;
ofs << "color[" << endl;
ofs << "color Color" << std::endl << "{" << std::endl;
ofs << "color[" << std::endl;
for(size_t i = 0; i < _colors.size(); ++i)
ofs << (float)_colors[i][2] << " " << (float)_colors[i][1] << " " << (float)_colors[i][0] << endl;
ofs << (float)_colors[i][2] << " " << (float)_colors[i][1] << " " << (float)_colors[i][0] << std::endl;
ofs << "]" << endl; //color[
ofs << "}" << endl; //color Color{
ofs << "]" << std::endl; //color[
ofs << "}" << std::endl; //color Color{
}
ofs << "}" << endl; //PointSet{
ofs << "}" << endl; //Shape{
ofs << "}" << std::endl; //PointSet{
ofs << "}" << std::endl; //Shape{
}
@@ -624,7 +616,7 @@ bool cv::SpinImageModel::spinCorrelation(const Mat& spin1, const Mat& spin2, flo
if (Nsum11 == sum1sum1 || Nsum22 == sum2sum2)
return false;
double corr = (Nsum12 - sum1 * sum2) / sqrt( (Nsum11 - sum1sum1) * (Nsum22 - sum2sum2) );
double corr = (Nsum12 - sum1 * sum2) / std::sqrt( (Nsum11 - sum1sum1) * (Nsum22 - sum2sum2) );
double atanh = Math::atanh(corr);
result = (float)( atanh * atanh - lambda * ( 1.0 / (N - 3) ) );
return true;
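The measure above is Fisher's variance-stabilising transform of the spin-image correlation: for a Pearson correlation :math:`\rho` over :math:`N` pixel pairs, :math:`\operatorname{atanh}\rho` has variance close to :math:`1/(N-3)`, so

.. math::

    s \;=\; \operatorname{atanh}^{2}(\rho) \;-\; \frac{\lambda}{N-3}

discounts correlations estimated from few overlapping pixels (larger ``lambda`` penalises them harder).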
@@ -636,13 +628,13 @@ inline Point2f cv::SpinImageModel::calcSpinMapCoo(const Point3f& p, const Point3
float normalNorm = (float)norm(n);
float beta = PmV.dot(n) / normalNorm;
float pmcNorm = (float)norm(PmV);
float alpha = sqrt( pmcNorm * pmcNorm - beta * beta);
float alpha = std::sqrt( pmcNorm * pmcNorm - beta * beta);
return Point2f(alpha, beta);*/
float pmv_x = p.x - v.x, pmv_y = p.y - v.y, pmv_z = p.z - v.z;
float beta = (pmv_x * n.x + pmv_y * n.y + pmv_z * n.z) / sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
float alpha = sqrt( pmv_x * pmv_x + pmv_y * pmv_y + pmv_z * pmv_z - beta * beta);
float beta = (pmv_x * n.x + pmv_y * n.y + pmv_z * n.z) / std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
float alpha = std::sqrt( pmv_x * pmv_x + pmv_y * pmv_y + pmv_z * pmv_z - beta * beta);
return Point2f(alpha, beta);
}
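As the commented-out reference code above spells out, these are the standard spin-map coordinates of a point :math:`p` about an oriented vertex :math:`(v, n)`:

.. math::

    \beta \;=\; \frac{(p-v)\cdot n}{\lVert n\rVert}, \qquad
    \alpha \;=\; \sqrt{\lVert p-v\rVert^{2} - \beta^{2}};

the expanded numerator is the dot product :math:`(p-v)\cdot n`, i.e. ``pmv_x*n.x + pmv_y*n.y + pmv_z*n.z``.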
@@ -664,7 +656,7 @@ inline float cv::SpinImageModel::geometricConsistency(const Point3f& pointScene1
double gc12 = 2 * norm(Sm1_to_m2 - Ss1_to_s2) / (n_Sm1_to_m2 + n_Ss1_to_s2 ) ;
return (float)max(gc12, gc21);
return (float)std::max(gc12, gc21);
}
inline float cv::SpinImageModel::groupingCreteria(const Point3f& pointScene1, const Point3f& normalScene1,
@@ -682,28 +674,27 @@ inline float cv::SpinImageModel::groupingCreteria(const Point3f& pointScene1, co
double n_Ss2_to_s1 = norm(Ss2_to_s1 = calcSpinMapCoo(pointScene2, pointScene1, normalScene1));
double gc21 = 2 * norm(Sm2_to_m1 - Ss2_to_s1) / (n_Sm2_to_m1 + n_Ss2_to_s1 );
double wgc21 = gc21 / (1 - exp( -(n_Sm2_to_m1 + n_Ss2_to_s1) * gamma05_inv ) );
double wgc21 = gc21 / (1 - std::exp( -(n_Sm2_to_m1 + n_Ss2_to_s1) * gamma05_inv ) );
double n_Sm1_to_m2 = norm(Sm1_to_m2 = calcSpinMapCoo(pointModel1, pointModel2, normalModel2));
double n_Ss1_to_s2 = norm(Ss1_to_s2 = calcSpinMapCoo(pointScene1, pointScene2, normalScene2));
double gc12 = 2 * norm(Sm1_to_m2 - Ss1_to_s2) / (n_Sm1_to_m2 + n_Ss1_to_s2 );
double wgc12 = gc12 / (1 - exp( -(n_Sm1_to_m2 + n_Ss1_to_s2) * gamma05_inv ) );
double wgc12 = gc12 / (1 - std::exp( -(n_Sm1_to_m2 + n_Ss1_to_s2) * gamma05_inv ) );
return (float)max(wgc12, wgc21);
return (float)std::max(wgc12, wgc21);
}
cv::SpinImageModel::SpinImageModel(const Mesh3D& _mesh) : mesh(_mesh) , out(0)
cv::SpinImageModel::SpinImageModel(const Mesh3D& _mesh) : mesh(_mesh)
{
if (mesh.vtx.empty())
throw Mesh3D::EmptyMeshException();
defaultParams();
}
cv::SpinImageModel::SpinImageModel() : out(0) { defaultParams(); }
cv::SpinImageModel::~SpinImageModel() {}
void cv::SpinImageModel::setLogger(ostream* log) { out = log; }
cv::SpinImageModel::SpinImageModel() { defaultParams(); }
cv::SpinImageModel::~SpinImageModel() {}
void cv::SpinImageModel::defaultParams()
{
@@ -723,14 +714,14 @@ void cv::SpinImageModel::defaultParams()
Mat cv::SpinImageModel::packRandomScaledSpins(bool separateScale, size_t xCount, size_t yCount) const
{
int spinNum = (int)getSpinCount();
int num = min(spinNum, (int)(xCount * yCount));
int num = std::min(spinNum, (int)(xCount * yCount));
if (num == 0)
return Mat();
RNG& rng = theRNG();
vector<Mat> spins;
std::vector<Mat> spins;
for(int i = 0; i < num; ++i)
spins.push_back(getSpinImage( rng.next() % spinNum ).reshape(1, imageWidth));
@@ -750,7 +741,7 @@ Mat cv::SpinImageModel::packRandomScaledSpins(bool separateScale, size_t xCount,
{
double m;
minMaxLoc(spins[i], 0, &m);
totalMax = max(m, totalMax);
totalMax = std::max(m, totalMax);
}
for(int i = 0; i < num; ++i)
@@ -764,7 +755,7 @@ Mat cv::SpinImageModel::packRandomScaledSpins(bool separateScale, size_t xCount,
int sz = spins.front().cols;
Mat result((int)(yCount * sz + (yCount - 1)), (int)(xCount * sz + (xCount - 1)), CV_8UC3);
result = colors[(static_cast<int64>(cvGetTickCount()/cvGetTickFrequency())/1000) % colors_mum];
result = colors[(static_cast<int64>(getTickCount()/getTickFrequency())/1000) % colors_mum];
int pos = 0;
for(int y = 0; y < (int)yCount; ++y)
@@ -778,7 +769,7 @@ Mat cv::SpinImageModel::packRandomScaledSpins(bool separateScale, size_t xCount,
int endx = (x + 1) * sz + x;
Mat color;
cvtColor(spins[pos++], color, CV_GRAY2BGR);
cvtColor(spins[pos++], color, COLOR_GRAY2BGR);
Mat roi = result(Range(starty, endy), Range(startx, endx));
color.copyTo(roi);
}
@@ -787,7 +778,7 @@ Mat cv::SpinImageModel::packRandomScaledSpins(bool separateScale, size_t xCount,
void cv::SpinImageModel::selectRandomSubset(float ratio)
{
ratio = min(max(ratio, 0.f), 1.f);
ratio = std::min(std::max(ratio, 0.f), 1.f);
size_t vtxSize = mesh.vtx.size();
size_t setSize = static_cast<size_t>(vtxSize * ratio);
@@ -799,14 +790,14 @@ void cv::SpinImageModel::selectRandomSubset(float ratio)
else if (setSize == vtxSize)
{
subset.resize(vtxSize);
iota(subset.begin(), subset.end(), 0);
_iota(subset.begin(), subset.end(), 0);
}
else
{
RNG& rnd = theRNG();
vector<size_t> left(vtxSize);
iota(left.begin(), left.end(), (size_t)0);
std::vector<size_t> left(vtxSize);
_iota(left.begin(), left.end(), (size_t)0);
subset.resize(setSize);
for(size_t i = 0; i < setSize; ++i)
@@ -817,20 +808,20 @@ void cv::SpinImageModel::selectRandomSubset(float ratio)
left[pos] = left.back();
left.resize(left.size() - 1);
}
sort(subset, less<int>());
std::sort(subset.begin(), subset.end());
}
}
void cv::SpinImageModel::setSubset(const vector<int>& ss)
void cv::SpinImageModel::setSubset(const std::vector<int>& ss)
{
subset = ss;
}
void cv::SpinImageModel::repackSpinImages(const vector<uchar>& mask, Mat& _spinImages, bool reAlloc) const
void cv::SpinImageModel::repackSpinImages(const std::vector<uchar>& mask, Mat& _spinImages, bool reAlloc) const
{
if (reAlloc)
{
size_t spinCount = mask.size() - count(mask.begin(), mask.end(), (uchar)0);
size_t spinCount = mask.size() - std::count(mask.begin(), mask.end(), (uchar)0);
Mat newImgs((int)spinCount, _spinImages.cols, _spinImages.type());
int pos = 0;
@@ -846,7 +837,7 @@ void cv::SpinImageModel::repackSpinImages(const vector<uchar>& mask, Mat& _spinI
{
int last = (int)mask.size();
int dest = (int)(find(mask.begin(), mask.end(), (uchar)0) - mask.begin());
int dest = (int)(std::find(mask.begin(), mask.end(), (uchar)0) - mask.begin());
if (dest == last)
return;
@@ -879,21 +870,21 @@ void cv::SpinImageModel::compute()
{
mesh.computeNormals(normalRadius, minNeighbors);
subset.resize(mesh.vtx.size());
iota(subset.begin(), subset.end(), 0);
_iota(subset.begin(), subset.end(), 0);
}
else
mesh.computeNormals(subset, normalRadius, minNeighbors);
vector<uchar> mask(mesh.vtx.size(), 0);
std::vector<uchar> mask(mesh.vtx.size(), 0);
for(size_t i = 0; i < subset.size(); ++i)
if (mesh.normals[subset[i]] == Mesh3D::allzero)
subset[i] = -1;
else
mask[subset[i]] = 1;
subset.resize( remove(subset.begin(), subset.end(), -1) - subset.begin() );
subset.resize( std::remove(subset.begin(), subset.end(), -1) - subset.begin() );
vector<Point3f> vtx;
vector<Point3f> normals;
std::vector<Point3f> vtx;
std::vector<Point3f> normals;
for(size_t i = 0; i < mask.size(); ++i)
if(mask[i])
{
@@ -901,7 +892,7 @@ void cv::SpinImageModel::compute()
normals.push_back(mesh.normals[i]);
}
vector<uchar> spinMask(vtx.size(), 1);
std::vector<uchar> spinMask(vtx.size(), 1);
computeSpinImages( mesh.octree, vtx, normals, spinMask, spinImages, imageWidth, binSize);
repackSpinImages(spinMask, spinImages);
@@ -909,19 +900,19 @@ void cv::SpinImageModel::compute()
for(size_t i = 0; i < mask.size(); ++i)
if(mask[i])
if (spinMask[mask_pos++] == 0)
subset.resize( remove(subset.begin(), subset.end(), (int)i) - subset.begin() );
subset.resize( std::remove(subset.begin(), subset.end(), (int)i) - subset.begin() );
}
void cv::SpinImageModel::matchSpinToModel(const Mat& spin, vector<int>& indeces, vector<float>& corrCoeffs, bool useExtremeOutliers) const
void cv::SpinImageModel::matchSpinToModel(const Mat& spin, std::vector<int>& indeces, std::vector<float>& corrCoeffs, bool useExtremeOutliers) const
{
const SpinImageModel& model = *this;
indeces.clear();
corrCoeffs.clear();
vector<float> corrs(model.spinImages.rows);
vector<uchar> masks(model.spinImages.rows);
vector<float> cleanCorrs;
std::vector<float> corrs(model.spinImages.rows);
std::vector<uchar> masks(model.spinImages.rows);
std::vector<float> cleanCorrs;
cleanCorrs.reserve(model.spinImages.rows);
for(int i = 0; i < model.spinImages.rows; ++i)
@@ -936,7 +927,7 @@ void cv::SpinImageModel::matchSpinToModel(const Mat& spin, vector<int>& indeces,
if(total < 5)
return;
sort(cleanCorrs, less<float>());
std::sort(cleanCorrs.begin(), cleanCorrs.end());
float lower_fourth = cleanCorrs[(1 * total) / 4 - 1];
float upper_fourth = cleanCorrs[(3 * total) / 4 - 0];
@@ -971,7 +962,7 @@ struct Match
operator float() const { return measure; }
};
typedef set<size_t> group_t;
typedef std::set<size_t> group_t;
typedef group_t::iterator iter;
typedef group_t::const_iterator citer;
@@ -986,10 +977,10 @@ struct WgcHelper
float Wgc(const size_t corespInd, const group_t& group) const
{
const float* wgcLine = mat.ptr<float>((int)corespInd);
float maximum = numeric_limits<float>::min();
float maximum = std::numeric_limits<float>::min();
for(citer pos = group.begin(); pos != group.end(); ++pos)
maximum = max(wgcLine[*pos], maximum);
maximum = std::max(wgcLine[*pos], maximum);
return maximum;
}
@@ -999,7 +990,7 @@ private:
}
void cv::SpinImageModel::match(const SpinImageModel& scene, vector< vector<Vec2i> >& result)
void cv::SpinImageModel::match(const SpinImageModel& scene, std::vector< std::vector<Vec2i> >& result)
{
if (mesh.vtx.empty())
throw Mesh3D::EmptyMeshException();
@@ -1007,8 +998,8 @@ private:
result.clear();
SpinImageModel& model = *this;
const float infinity = numeric_limits<float>::infinity();
const float float_max = numeric_limits<float>::max();
const float infinity = std::numeric_limits<float>::infinity();
const float float_max = std::numeric_limits<float>::max();
/* estimate gamma */
if (model.gamma == 0.f)
@@ -1021,40 +1012,35 @@ private:
/* estimate lambda */
if (model.lambda == 0.f)
{
vector<int> nonzero(model.spinImages.rows);
std::vector<int> nonzero(model.spinImages.rows);
for(int i = 0; i < model.spinImages.rows; ++i)
nonzero[i] = countNonZero(model.spinImages.row(i));
sort(nonzero, less<int>());
std::sort(nonzero.begin(), nonzero.end());
model.lambda = static_cast<float>( nonzero[ nonzero.size()/2 ] ) / 2;
}
TickMeter corr_timer;
corr_timer.start();
vector<Match> allMatches;
std::vector<Match> allMatches;
for(int i = 0; i < scene.spinImages.rows; ++i)
{
vector<int> indeces;
vector<float> coeffs;
std::vector<int> indeces;
std::vector<float> coeffs;
matchSpinToModel(scene.spinImages.row(i), indeces, coeffs);
for(size_t t = 0; t < indeces.size(); ++t)
allMatches.push_back(Match(i, indeces[t], coeffs[t]));
if (out) if (i % 100 == 0) *out << "Comparing scene spinimage " << i << " of " << scene.spinImages.rows << endl;
}
corr_timer.stop();
if (out) *out << "Spin correlation time = " << corr_timer << endl;
if (out) *out << "Matches number = " << allMatches.size() << endl;
if(allMatches.empty())
return;
/* filtering by similarity measure */
const float fraction = 0.5f;
float maxMeasure = max_element(allMatches.begin(), allMatches.end(), less<float>())->measure;
float maxMeasure = max_element(allMatches.begin(), allMatches.end(), std::less<float>())->measure;
allMatches.erase(
remove_if(allMatches.begin(), allMatches.end(), bind2nd(less<float>(), maxMeasure * fraction)),
remove_if(allMatches.begin(), allMatches.end(), bind2nd(std::less<float>(), maxMeasure * fraction)),
allMatches.end());
if (out) *out << "Matches number [filtered by similarity measure] = " << allMatches.size() << endl;
int matchesSize = (int)allMatches.size();
if(matchesSize == 0)
@@ -1101,17 +1087,14 @@ private:
allMatches[i].measure = infinity;
}
allMatches.erase(
remove_if(allMatches.begin(), allMatches.end(), bind2nd(equal_to<float>(), infinity)),
std::remove_if(allMatches.begin(), allMatches.end(), std::bind2nd(std::equal_to<float>(), infinity)),
allMatches.end());
if (out) *out << "Matches number [filtered by geometric consistency] = " << allMatches.size() << endl;
matchesSize = (int)allMatches.size();
if(matchesSize == 0)
return;
if (out) *out << "grouping ..." << endl;
Mat groupingMat((int)matchesSize, (int)matchesSize, CV_32F);
groupingMat = Scalar(0);
@@ -1153,14 +1136,12 @@ private:
for(int i = 0; i < matchesSize; ++i)
allMatchesInds.insert(i);
vector<float> buf(matchesSize);
std::vector<float> buf(matchesSize);
float *buf_beg = &buf[0];
vector<group_t> groups;
std::vector<group_t> groups;
for(int g = 0; g < matchesSize; ++g)
{
if (out) if (g % 100 == 0) *out << "G = " << g << endl;
group_t left = allMatchesInds;
group_t group;
@@ -1174,7 +1155,7 @@ private:
break;
std::transform(left.begin(), left.end(), buf_beg, WgcHelper(group, groupingMat));
size_t minInd = min_element(buf_beg, buf_beg + left_size) - buf_beg;
size_t minInd = std::min_element(buf_beg, buf_beg + left_size) - buf_beg;
if (buf[minInd] < model.T_GroupingCorespondances) /* can add corespondance to group */
{
@@ -1197,7 +1178,7 @@ private:
{
const group_t& group = groups[i];
vector< Vec2i > outgrp;
std::vector< Vec2i > outgrp;
for(citer pos = group.begin(); pos != group.end(); ++pos)
{
const Match& m = allMatches[*pos];
@@ -1209,16 +1190,16 @@ private:
cv::TickMeter::TickMeter() { reset(); }
int64 cv::TickMeter::getTimeTicks() const { return sumTime; }
double cv::TickMeter::getTimeMicro() const { return (double)getTimeTicks()/cvGetTickFrequency(); }
double cv::TickMeter::getTimeMicro() const { return (double)getTimeTicks()/getTickFrequency()*1e6; }
double cv::TickMeter::getTimeMilli() const { return getTimeMicro()*1e-3; }
double cv::TickMeter::getTimeSec() const { return getTimeMilli()*1e-3; }
int64 cv::TickMeter::getCounter() const { return counter; }
void cv::TickMeter::reset() {startTime = 0; sumTime = 0; counter = 0; }
void cv::TickMeter::start(){ startTime = cvGetTickCount(); }
void cv::TickMeter::start(){ startTime = getTickCount(); }
void cv::TickMeter::stop()
{
int64 time = cvGetTickCount();
int64 time = getTickCount();
if ( startTime == 0 )
return;
@@ -1228,4 +1209,4 @@ void cv::TickMeter::stop()
startTime = 0;
}
std::ostream& cv::operator<<(std::ostream& out, const TickMeter& tm){ return out << tm.getTimeSec() << "sec"; }
//std::ostream& cv::operator<<(std::ostream& out, const TickMeter& tm){ return out << tm.getTimeSec() << "sec"; }
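A minimal usage sketch for the ``TickMeter`` implemented above, using only the methods shown here (with the stream operator commented out, the elapsed time is read back explicitly):

.. code-block:: cpp

    cv::TickMeter tm;
    tm.start();
    // ... code being timed ...
    tm.stop();                       // adds the elapsed interval to the running total
    double ms = tm.getTimeMilli();   // elapsed milliseconds across all start/stop pairs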

View File

@@ -144,11 +144,11 @@ void StereoVar::VariationalSolver(Mat &I1, Mat &I2, Mat &I2x, Mat &u, int level)
if (flags & USE_SMART_ID) {
double scale = pow(pyrScale, (double) level) * (1 + pyrScale);
double scale = std::pow(pyrScale, (double) level) * (1 + pyrScale);
N = (int) (N / scale);
}
double scale = pow(pyrScale, (double) level);
double scale = std::pow(pyrScale, (double) level);
Fi /= (float) scale;
l *= (float) scale;
@@ -239,8 +239,8 @@ void StereoVar::VariationalSolver(Mat &I1, Mat &I2, Mat &I2x, Mat &u, int level)
void StereoVar::VCycle_MyFAS(Mat &I1, Mat &I2, Mat &I2x, Mat &_u, int level)
{
CvSize imgSize = _u.size();
CvSize frmSize = cvSize((int) (imgSize.width * pyrScale + 0.5), (int) (imgSize.height * pyrScale + 0.5));
Size imgSize = _u.size();
Size frmSize = Size((int) (imgSize.width * pyrScale + 0.5), (int) (imgSize.height * pyrScale + 0.5));
Mat I1_h, I2_h, I2x_h, u_h, U, U_h;
//PRE relaxation
@@ -284,8 +284,8 @@ void StereoVar::VCycle_MyFAS(Mat &I1, Mat &I2, Mat &I2x, Mat &_u, int level)
void StereoVar::FMG(Mat &I1, Mat &I2, Mat &I2x, Mat &u, int level)
{
double scale = pow(pyrScale, (double) level);
CvSize frmSize = cvSize((int) (u.cols * scale + 0.5), (int) (u.rows * scale + 0.5));
double scale = std::pow(pyrScale, (double) level);
Size frmSize = Size((int) (u.cols * scale + 0.5), (int) (u.rows * scale + 0.5));
Mat I1_h, I2_h, I2x_h, u_h;
//scaling DOWN
@@ -336,7 +336,7 @@ void StereoVar::autoParams()
if (maxD) {
levels = 0;
while ( pow(pyrScale, levels) * maxD > 1.5) levels ++;
while ( std::pow(pyrScale, levels) * maxD > 1.5) levels ++;
levels++;
}
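``autoParams`` above sizes the pyramid so the largest disparity shrinks to roughly a pixel at the coarsest level:

.. math::

    \mathrm{levels} \;=\; \min\{\,\ell : \mathrm{pyrScale}^{\ell}\cdot\mathrm{maxD} \le 1.5\,\} \;+\; 1,

the extra level being the full-resolution pass (``pyrScale`` is below 1, so each level scales the disparity range down).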
@@ -350,7 +350,7 @@ void StereoVar::autoParams()
void StereoVar::operator ()( const Mat& left, const Mat& right, Mat& disp )
{
CV_Assert(left.size() == right.size() && left.type() == right.type());
CvSize imgSize = left.size();
Size imgSize = left.size();
int MaxD = MAX(labs(minDisp), labs(maxDisp));
int SignD = 1; if (MIN(minDisp, maxDisp) < 0) SignD = -1;
if (minDisp >= maxDisp) {MaxD = 256; SignD = 1;}
@@ -367,8 +367,8 @@ void StereoVar::operator ()( const Mat& left, const Mat& right, Mat& disp )
// Preprocessing
Mat leftgray, rightgray;
if (left.type() != CV_8UC1) {
cvtColor(left, leftgray, CV_BGR2GRAY);
cvtColor(right, rightgray, CV_BGR2GRAY);
cvtColor(left, leftgray, COLOR_BGR2GRAY);
cvtColor(right, rightgray, COLOR_BGR2GRAY);
} else {
left.copyTo(leftgray);
right.copyTo(rightgray);
@@ -378,8 +378,8 @@ void StereoVar::operator ()( const Mat& left, const Mat& right, Mat& disp )
equalizeHist(rightgray, rightgray);
}
if (poly_sigma > 0.0001) {
GaussianBlur(leftgray, leftgray, cvSize(poly_n, poly_n), poly_sigma);
GaussianBlur(rightgray, rightgray, cvSize(poly_n, poly_n), poly_sigma);
GaussianBlur(leftgray, leftgray, Size(poly_n, poly_n), poly_sigma);
GaussianBlur(rightgray, rightgray, Size(poly_n, poly_n), poly_sigma);
}
if (flags & USE_AUTO_PARAMS) {

View File

@@ -322,7 +322,7 @@ namespace cv
double diff=(*(bufferPTR++)-meanValue);
standardDeviation+=diff*diff;
}
return sqrt(standardDeviation/this->size());
return std::sqrt(standardDeviation/this->size());
};
/**
@@ -513,7 +513,7 @@ namespace cv
stdValue+=inputMinusMean*inputMinusMean;
}
stdValue=sqrt(stdValue/((type)_NBpixels));
stdValue=std::sqrt(stdValue/((type)_NBpixels));
// adjust luminance in regard of mean and std value;
inputOutputBufferPTR=inputOutputBuffer;
for (size_t index=0;index<_NBpixels;++index, ++inputOutputBufferPTR)

View File

@@ -1,13 +1,16 @@
#ifdef __GNUC__
# pragma GCC diagnostic ignored "-Wmissing-declarations"
# pragma GCC diagnostic ignored "-Wmissing-prototypes" //OSX
# if defined __clang__ || defined __APPLE__
# pragma GCC diagnostic ignored "-Wmissing-prototypes"
# pragma GCC diagnostic ignored "-Wextra"
# endif
#endif
#ifndef __OPENCV_TEST_PRECOMP_HPP__
#define __OPENCV_TEST_PRECOMP_HPP__
#include "opencv2/ts/ts.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/ts.hpp"
#include "opencv2/contrib.hpp"
#include <iostream>
#endif