Merged upstream master

This commit is contained in:
Hilton Bristow
2014-03-04 11:14:03 +10:00
808 changed files with 31931 additions and 96605 deletions

View File

@@ -40,6 +40,6 @@ else()
get_filename_component(wrapper_name "${wrapper}" NAME)
install(FILES "${LIBRARY_OUTPUT_PATH}/${wrapper_name}"
DESTINATION ${OPENCV_LIB_INSTALL_PATH}
COMPONENT main)
COMPONENT libs)
endforeach()
endif()

View File

@@ -63,4 +63,4 @@ if (NOT (CMAKE_BUILD_TYPE MATCHES "debug"))
endif()
install(TARGETS ${the_target} LIBRARY DESTINATION ${OPENCV_LIB_INSTALL_PATH} COMPONENT main)
install(TARGETS ${the_target} LIBRARY DESTINATION ${OPENCV_LIB_INSTALL_PATH} COMPONENT libs)

View File

@@ -1,3 +0,0 @@
set(the_description "Biologically inspired algorithms")
ocv_warnings_disable(CMAKE_CXX_FLAGS -Wundef)
ocv_define_module(bioinspired opencv_core OPTIONAL opencv_highgui opencv_ocl)

View File

@@ -1,10 +0,0 @@
********************************************************************
bioinspired. Biologically inspired vision models and derived tools
********************************************************************
The module provides biological visual system models (human visual system and others). It also provides derived objects that take advantage of those bio-inspired models.
.. toctree::
:maxdepth: 2
Human retina documentation <retina/index>

Binary file not shown (removed image, 13 KiB)

Binary file not shown (removed image, 22 KiB)

Binary file not shown (removed image, 19 KiB)

View File

@@ -1,493 +0,0 @@
Retina: a biomimetic human retina model
*****************************************
.. highlight:: cpp
Retina
======
.. ocv:class:: Retina : public Algorithm
**Note**: do not forget that the retina model is included in the *cv::bioinspired* namespace.
Introduction
++++++++++++
Class which provides the main controls to the Gipsa/Listic labs human retina model. This is a non-separable spatio-temporal filter modelling the two main retina information channels:
* foveal vision for detailed color vision: the parvocellular pathway.
* peripheral vision for sensitive transient signal detection (motion and events): the magnocellular pathway.
From a general point of view, this filter whitens the image spectrum and corrects luminance thanks to local adaptation. Another important property is its ability to filter out spatio-temporal noise while enhancing details.
This model originates from Jeanny Herault's work [Herault2010]_. It was developed during Alexandre Benoit's PhD and his current research [Benoit2010]_, [Strat2013]_ (he currently maintains this module within OpenCV). It includes the work of other PhD students of Jeanny's, such as [Chaix2007]_, and the log polar transformations of Barthelemy Durette described in Jeanny's book.
**NOTES:**
* For ease of use in computer vision applications, the two retina channels are applied homogeneously on all the input images. This does not follow the real retina topology, but it can still be reproduced using the log sampling capabilities proposed within the class.
* See the tutorial/contrib section for an extended retina description and code usage examples.
Preliminary illustration
++++++++++++++++++++++++
As a preliminary presentation, let's start with a visual example. We apply the filter to a low quality color JPEG image with backlight problems. Here is the considered input... *"Well, my eyes were able to see more than this strange black shadow..."*
.. image:: images/retinaInput.jpg
:alt: a low quality color jpeg image with backlight problems.
:align: center
Below is the retina foveal model applied to the entire image with default parameters. Here contours are enforced and halo effects are deliberately visible with this configuration. See the parameters discussion below, and increase horizontalCellsGain towards 1 to remove them.
.. image:: images/retinaOutput_default.jpg
:alt: the retina foveal model applied on the entire image with default parameters. Here contours are enforced, luminance is corrected and halo effects are voluntary visible with this configuration, increase horizontalCellsGain near 1 to remove them.
:align: center
Below is a second retina foveal model output applied to the entire image with a parameter setup focused on naturalness perception. *"Hey, I now recognize my cat, looking at the mountains at the end of the day!"* Here contours are enforced and luminance is corrected, but halos are avoided with this configuration. The backlight effect is corrected and highlight details are still preserved. So, even on a low quality JPEG image, if some luminance information remains, the retina is able to reconstruct a proper visual signal. Such a configuration is also useful for compressing High Dynamic Range (*HDR*) images to 8-bit images, as discussed in [Benoit2010]_ and in the demonstration code below.
As shown at the end of the page, the changes from the default parameters are:
* horizontalCellsGain=0.3
* photoreceptorsLocalAdaptationSensitivity=ganglionCellsSensitivity=0.89
.. image:: images/retinaOutput_realistic.jpg
:alt: the retina foveal model applied on the entire image with 'naturalness' parameters. Here contours are enforced but halos are avoided with this configuration; horizontalCellsGain is 0.3 and photoreceptorsLocalAdaptationSensitivity=ganglionCellsSensitivity=0.89.
:align: center
As observed in this preliminary demo, the retina can be set up with various parameters. By default, as shown on the figure above, the retina strongly reduces mean luminance energy and enforces all details of the visual scene. Luminance energy and halo effects can be modulated (from exaggerated to cancelled, as shown in the two examples). In order to use your own parameters, you can call the *write(String fs)* method at least once: it writes a proper XML file with all default parameters. Then, tweak it on your own and reload it at any time using the *setup(String fs)* method. These methods update a *Retina::RetinaParameters* member structure that is described hereafter. XML parameter file samples are shown at the end of the page.
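For example, a minimal sketch of this write/tweak/reload cycle (assuming a *Retina* instance *myRetina* is already allocated) could look like this:

.. code-block:: cpp

    // dump the default parameters to an XML file...
    myRetina->write("RetinaDefaultParameters.xml");
    // ...edit the file by hand (e.g. rename it), then reload the tweaked setup at any time
    myRetina->setup("RetinaSpecificParameters.xml");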
Here is an overview of the abstract Retina interface; allocate one instance with one of the *createRetina* functions::
namespace cv{namespace bioinspired{
class Retina : public Algorithm
{
public:
// parameters setup instance
struct RetinaParameters; // this class is detailed later
// main method for input frame processing (general use method; can also perform High Dynamic Range tone mapping)
void run (InputArray inputImage);
// specific method aiming at correcting luminance only (faster High Dynamic Range tone mapping)
void applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage);
// output buffer retrieval methods
// -> foveal color vision details channel with luminance and noise correction
void getParvo (OutputArray retinaOutput_parvo);
void getParvoRAW (OutputArray retinaOutput_parvo);// retrieve original output buffers without any normalisation
const Mat getParvoRAW () const;// retrieve original output buffers without any normalisation
// -> peripheral monochrome motion and events (transient information) channel
void getMagno (OutputArray retinaOutput_magno);
void getMagnoRAW (OutputArray retinaOutput_magno); // retrieve original output buffers without any normalisation
const Mat getMagnoRAW () const;// retrieve original output buffers without any normalisation
// reset retina buffers... equivalent to closing your eyes for some seconds
void clearBuffers ();
// retrieve input and output buffer sizes
Size getInputSize ();
Size getOutputSize ();
// setup methods with specific parameters specification or global xml config file loading/write
void setup (String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
void setup (FileStorage &fs, const bool applyDefaultSetupOnFailure=true);
void setup (RetinaParameters newParameters);
struct Retina::RetinaParameters getParameters ();
const String printSetup ();
virtual void write (String fs) const;
virtual void write (FileStorage &fs) const;
void setupOPLandIPLParvoChannel (const bool colorMode=true, const bool normaliseOutput=true, const float photoreceptorsLocalAdaptationSensitivity=0.7, const float photoreceptorsTemporalConstant=0.5, const float photoreceptorsSpatialConstant=0.53, const float horizontalCellsGain=0, const float HcellsTemporalConstant=1, const float HcellsSpatialConstant=7, const float ganglionCellsSensitivity=0.7);
void setupIPLMagnoChannel (const bool normaliseOutput=true, const float parasolCells_beta=0, const float parasolCells_tau=0, const float parasolCells_k=7, const float amacrinCellsTemporalCutFrequency=1.2, const float V0CompressionParameter=0.95, const float localAdaptintegration_tau=0, const float localAdaptintegration_k=7);
void setColorSaturation (const bool saturateColors=true, const float colorSaturationValue=4.0);
void activateMovingContoursProcessing (const bool activate);
void activateContoursProcessing (const bool activate);
};
// Allocators
cv::Ptr<Retina> createRetina (Size inputSize);
cv::Ptr<Retina> createRetina (Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
}} // cv and bioinspired namespaces end
.. Sample code::
* An example on retina tone mapping can be found at opencv_source_code/samples/cpp/OpenEXRimages_HDR_Retina_toneMapping.cpp
* An example on retina tone mapping on video input can be found at opencv_source_code/samples/cpp/OpenEXRimages_HDR_Retina_toneMapping_video.cpp
* A complete example illustrating the retina interface can be found at opencv_source_code/samples/cpp/retinaDemo.cpp
Description
+++++++++++
Class which allows the `Gipsa <http://www.gipsa-lab.inpg.fr>`_ (preliminary work) / `Listic <http://www.listic.univ-savoie.fr>`_ (code maintainer and user) labs retina model to be used. This class allows human retina spatio-temporal image processing to be applied to still images, image sequences and video sequences. Briefly, these are the main human retina model properties:
* spectral whitening (mid-frequency details enhancement)
* high frequency spatio-temporal noise reduction (temporal noise and high frequency spatial noise are minimized)
* low frequency luminance reduction (luminance range compression): high luminance regions no longer hide details in darker regions
* local logarithmic luminance compression allows details to be enhanced even in low light conditions
Use: this model can be used for spatio-temporal video effects as well as for (a minimal processing loop is sketched after this list):
* performing texture analysis with an enhanced signal to noise ratio and enhanced details, robust against input image luminance ranges (check out the parvocellular retina channel output, by using the provided **getParvo** methods)
* performing motion analysis that also benefits from the previously cited properties (check out the magnocellular retina channel output, by using the provided **getMagno** methods)
* general image/video sequence description using either one or both channels. An example of the use of Retina in a Bag of Words approach is given in [Strat2013]_.
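As an illustration, here is a minimal, hedged sketch of such a processing loop on a webcam stream (2.4-style headers are assumed; error handling omitted):

.. code-block:: cpp

    #include <opencv2/core.hpp>
    #include <opencv2/highgui.hpp>
    #include <opencv2/bioinspired.hpp>

    int main()
    {
        cv::VideoCapture capture(0);            // webcam; pass a filename for a video instead
        cv::Mat frame, parvoOut, magnoOut;
        capture >> frame;                       // grab a first frame to know the input size
        cv::Ptr<cv::bioinspired::Retina> myRetina =
            cv::bioinspired::createRetina(frame.size());
        while (capture.read(frame))
        {
            myRetina->run(frame);               // feed the new frame
            myRetina->getParvo(parvoOut);       // details channel (foveal vision)
            myRetina->getMagno(magnoOut);       // transient/motion channel (peripheral vision)
        }
        return 0;
    }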
Literature
==========
For more information, refer to the following papers:
* Model description:
.. [Benoit2010] Benoit A., Caplier A., Durette B., Herault, J., "Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
* Model use in a Bag of Words approach:
.. [Strat2013] Strat S., Benoit A., Lambert P., "Retina enhanced SIFT descriptors for video indexing", CBMI2013, Veszprém, Hungary, 2013.
* Please have a look at the reference work of Jeanny Herault, which you can read in his book:
.. [Herault2010] Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
This retina filter code includes the research contributions of PhD/research colleagues from whom code has been redrawn by the author:
* take a look at the *retinacolor.hpp* module to discover Brice Chaix de Lavarene's PhD color mosaicing/demosaicing work and his reference paper:
.. [Chaix2007] B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* take a look at *imagelogpolprojection.hpp* to discover the retina spatial log sampling which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed, originating from discussions with Jeanny. More information in the above cited book.
* Meylan et al.'s work on HDR tone mapping is implemented as a specific method within the model:
.. [Meylan2007] L. Meylan, D. Alleysson, S. Susstrunk, "A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images", Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
Demos and experiments
=======================
**NOTE: Complementary to the following examples, have a look at the Retina tutorial in the tutorial/contrib section for further explanations.**
Take a look at the C++ examples provided with OpenCV:
* **samples/cpp/retinademo.cpp** shows how to use the retina module for detail enhancement (Parvo channel output) and transient map observation (Magno channel output). You can play with images, video sequences and webcam video.
Typical uses are (provided your OpenCV installation is located in the folder *OpenCVReleaseFolder*):
* image processing: **OpenCVReleaseFolder/bin/retinademo -image myPicture.jpg**
* video processing: **OpenCVReleaseFolder/bin/retinademo -video myMovie.avi**
* webcam processing: **OpenCVReleaseFolder/bin/retinademo -video**
**Note:** This demo generates the file *RetinaDefaultParameters.xml*, which contains the default parameters of the retina. Rename it to *RetinaSpecificParameters.xml*, adjust the parameters the way you want, and rerun the program to check the effect.
* **samples/cpp/OpenEXRimages_HDR_Retina_toneMapping.cpp** shows how to use the retina to perform High Dynamic Range (HDR) luminance compression
Take an HDR image using bracketing with your camera, generate an OpenEXR image, and then process it using the demo.
Typical use, supposing that you have an OpenEXR image such as *memorial.exr* (present in the samples/cpp/ folder):
**OpenCVReleaseFolder/bin/OpenEXRimages_HDR_Retina_toneMapping memorial.exr [optional: 'fast']**
Note that some sliders are made available to allow you to play with luminance compression.
If the 'fast' option is not used, tone mapping is performed using the full retina model [Benoit2010]_, which includes the spectral whitening that allows luminance energy to be reduced. When using the 'fast' option, a simpler method is used: an adaptation of the algorithm presented in [Meylan2007]_. This method also gives good results and is faster to process, but it sometimes requires a bit more parameter adjustment.
Methods description
===================
The main methods to control the retina model are detailed below.
Ptr<Retina>::createRetina
+++++++++++++++++++++++++
.. ocv:function:: Ptr<cv::bioinspired::Retina> createRetina(Size inputSize)
.. ocv:function:: Ptr<cv::bioinspired::Retina> createRetina(Size inputSize, const bool colorMode, cv::bioinspired::RETINA_COLORSAMPLINGMETHOD colorSamplingMethod = cv::bioinspired::RETINA_COLOR_BAYER, const bool useRetinaLogSampling = false, const double reductionFactor = 1.0, const double samplingStrenght = 10.0 )
Constructors from standardized interfaces: retrieve a smart pointer to a Retina instance. A usage sketch follows the parameter list.
:param inputSize: the input frame size
:param colorMode: the chosen processing mode: with or without color processing
:param colorSamplingMethod: specifies which kind of color sampling will be used:
* cv::bioinspired::RETINA_COLOR_RANDOM: each pixel position is either R, G or B in a random choice
* cv::bioinspired::RETINA_COLOR_DIAGONAL: color sampling is RGBRGBRGB..., line 2 BRGBRGBRG..., line 3 GBRGBRGBR...
* cv::bioinspired::RETINA_COLOR_BAYER: standard Bayer sampling
:param useRetinaLogSampling: activate retina log sampling; if true, the 2 following parameters can be used
:param reductionFactor: only useful if useRetinaLogSampling=true; specifies the reduction factor of the output frame (since the center (fovea) is high resolution and the periphery can be undersampled, the output can be reduced without precision loss)
:param samplingStrenght: only useful if useRetinaLogSampling=true; specifies the strength of the log scale that is applied
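As an illustration, a hedged sketch of allocating a color retina with log sampling enabled (*inputSize* is assumed to be defined; reductionFactor and samplingStrenght are left at their documented defaults):

.. code-block:: cpp

    cv::Ptr<cv::bioinspired::Retina> myRetina =
        cv::bioinspired::createRetina(inputSize,
                                      true,                                 // colorMode
                                      cv::bioinspired::RETINA_COLOR_BAYER,  // colorSamplingMethod
                                      true,                                 // useRetinaLogSampling
                                      1.0,                                  // reductionFactor
                                      10.0);                                // samplingStrenght (API spelling)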
Retina::activateContoursProcessing
++++++++++++++++++++++++++++++++++
.. ocv:function:: void Retina::activateContoursProcessing(const bool activate)
Activate/deactivate the Parvocellular pathway processing (contours information extraction); by default, it is activated.
:param activate: true if the Parvocellular (contours information extraction) output should be activated, false if not. If activated, the Parvocellular output can be retrieved using the **getParvo** methods
Retina::activateMovingContoursProcessing
++++++++++++++++++++++++++++++++++++++++
.. ocv:function:: void Retina::activateMovingContoursProcessing(const bool activate)
Activate/deactivate the Magnocellular pathway processing (motion information extraction); by default, it is activated.
:param activate: true if the Magnocellular output should be activated, false if not. If activated, the Magnocellular output can be retrieved using the **getMagno** methods
Retina::clearBuffers
++++++++++++++++++++
.. ocv:function:: void Retina::clearBuffers()
Clears all retina buffers (equivalent to opening the eyes after a long period of closed eyes ;o). Watch out for the temporal transition occurring just after this method call.
Retina::getParvo
++++++++++++++++
.. ocv:function:: void Retina::getParvo( OutputArray retinaOutput_parvo )
.. ocv:function:: void Retina::getParvoRAW( OutputArray retinaOutput_parvo )
.. ocv:function:: const Mat Retina::getParvoRAW() const
Accessor of the details channel of the retina (models foveal vision). Warning: the getParvoRAW methods return buffers that are not rescaled within the range [0;255], while the non-RAW method returns a normalized matrix.
:param retinaOutput_parvo: the output buffer (reallocated if necessary); the format can be:
* a Mat: this output is rescaled for standard 8-bit image processing use in OpenCV
* RAW methods actually return a 1D matrix (the encoding is R1, R2, ..., Rn, G1, G2, ..., Gn, B1, B2, ..., Bn); this output is the original retina filter model output, without any quantization or rescaling (see the unpacking sketch below).
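For instance, here is a hedged sketch of unpacking the planar RAW parvo buffer into per-channel 2D views; it assumes a color retina whose RAW buffer is a continuous CV_32F matrix (the internal retina format is float):

.. code-block:: cpp

    cv::Mat parvoRAW = myRetina->getParvoRAW();   // planar 1D buffer: R1..Rn, G1..Gn, B1..Bn
    cv::Size outSize = myRetina->getOutputSize();
    cv::Mat planes = parvoRAW.reshape(1, 3);      // one row per color channel (no data copy)
    cv::Mat red   = planes.row(0).reshape(1, outSize.height);
    cv::Mat green = planes.row(1).reshape(1, outSize.height);
    cv::Mat blue  = planes.row(2).reshape(1, outSize.height);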
Retina::getMagno
++++++++++++++++
.. ocv:function:: void Retina::getMagno( OutputArray retinaOutput_magno )
.. ocv:function:: void Retina::getMagnoRAW( OutputArray retinaOutput_magno )
.. ocv:function:: const Mat Retina::getMagnoRAW() const
Accessor of the motion channel of the retina (models peripheral vision). Warning: the getMagnoRAW methods return buffers that are not rescaled within the range [0;255], while the non-RAW method returns a normalized matrix.
:param retinaOutput_magno: the output buffer (reallocated if necessary); the format can be:
* a Mat: this output is rescaled for standard 8-bit image processing use in OpenCV
* RAW methods actually return a 1D matrix (the encoding is M1, M2, ..., Mn); this output is the original retina filter model output, without any quantization or rescaling.
Retina::getInputSize
++++++++++++++++++++
.. ocv:function:: Size Retina::getInputSize()
Retrieve the retina input buffer size
:return: the retina input buffer size
Retina::getOutputSize
+++++++++++++++++++++
.. ocv:function:: Size Retina::getOutputSize()
Retrieve the retina output buffer size, which can differ from the input size if a spatial log transformation is applied
:return: the retina output buffer size
Retina::printSetup
++++++++++++++++++
.. ocv:function:: const String Retina::printSetup()
Outputs a string showing the current parameters setup
:return: a string which contains formatted parameters information
Retina::run
+++++++++++
.. ocv:function:: void Retina::run(InputArray inputImage)
Method which allows the retina to be applied to an input image. After run, the encapsulated retina module is ready to deliver its outputs using the dedicated accessors; see the getParvo and getMagno methods
:param inputImage: the input Mat image to be processed; can be gray level or BGR coded in any format (from 8-bit to 16-bit)
Retina::applyFastToneMapping
++++++++++++++++++++++++++++
.. ocv:function:: void Retina::applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage)
Method which processes an image to correct its luminance: correct backlight problems, enhance details in shadows. This method is designed to perform High Dynamic Range image tone mapping (compressing >8 bit/pixel images to 8 bit/pixel). It is a simplified version of the Retina Parvocellular model (a simplified version of the run/getParvo method calls), since it does not include the spatio-temporal filter modelling the Outer Plexiform Layer of the retina, which performs spectral whitening among other things. However, it works well for tone mapping and is faster.
Check the demos and experiments section for examples, and for the way to perform tone mapping using the original retina model and this method. A usage sketch follows the parameter list.
:param inputImage: the input image to process (should be coded in float format: CV_32F, CV_32FC1, CV_32FC3, CV_32FC4; the 4th channel won't be considered).
:param outputToneMappedImage: the output 8-bit/channel tone mapped image (CV_8U or CV_8UC3 format).
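As an illustration, here is a hedged sketch of the fast tone mapping path; loading *memorial.exr* assumes OpenCV was built with OpenEXR support:

.. code-block:: cpp

    cv::Mat hdrImage = cv::imread("memorial.exr", -1);   // load unchanged: expected CV_32FC3
    cv::Mat ldrImage;                                    // receives the 8-bit tone mapped result
    cv::Ptr<cv::bioinspired::Retina> myRetina =
        cv::bioinspired::createRetina(hdrImage.size());
    myRetina->applyFastToneMapping(hdrImage, ldrImage);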
Retina::setColorSaturation
++++++++++++++++++++++++++
.. ocv:function:: void Retina::setColorSaturation(const bool saturateColors = true, const float colorSaturationValue = 4.0 )
Activate color saturation as the final step of the color demultiplexing process. This saturation is a sigmoid function applied to each channel of the demultiplexed image.
:param saturateColors: boolean that activates color saturation (if true) or deactivates it (if false)
:param colorSaturationValue: the saturation factor: a simple factor applied to the chrominance buffers
Retina::setup
+++++++++++++
.. ocv:function:: void Retina::setup(String retinaParameterFile = "", const bool applyDefaultSetupOnFailure = true )
.. ocv:function:: void Retina::setup(FileStorage & fs, const bool applyDefaultSetupOnFailure = true )
.. ocv:function:: void Retina::setup(RetinaParameters newParameters)
Try to open an XML retina parameters file to adjust the current retina instance setup. If the XML file does not exist, the default setup is applied. Warning: exceptions are thrown if the read XML file is not valid. A small sketch of the parameters-structure variant follows the parameter list.
:param retinaParameterFile: the parameters filename
:param applyDefaultSetupOnFailure: set to true to apply the default setup if loading fails (otherwise an error is thrown)
:param fs: the open FileStorage which contains the retina parameters
:param newParameters: a parameters structure updated with the new target configuration. You can retrieve the current parameters structure using the *Retina::RetinaParameters Retina::getParameters()* method and update it before calling *setup*.
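For instance, a small sketch of this retrieve/update/apply cycle (assuming *myRetina* is already allocated):

.. code-block:: cpp

    cv::bioinspired::Retina::RetinaParameters params = myRetina->getParameters();
    params.OPLandIplParvo.horizontalCellsGain = 0.3f;   // e.g. keep more of the mean luminance
    myRetina->setup(params);                            // apply the updated structure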
Retina::write
+++++++++++++
.. ocv:function:: void Retina::write( String fs ) const
.. ocv:function:: void Retina::write( FileStorage& fs ) const
Write xml/yml formatted parameters information
:param fs: the filename of the xml file that will be opened and written with formatted parameters information
Retina::setupIPLMagnoChannel
++++++++++++++++++++++++++++
.. ocv:function:: void Retina::setupIPLMagnoChannel(const bool normaliseOutput = true, const float parasolCells_beta = 0, const float parasolCells_tau = 0, const float parasolCells_k = 7, const float amacrinCellsTemporalCutFrequency = 1.2, const float V0CompressionParameter = 0.95, const float localAdaptintegration_tau = 0, const float localAdaptintegration_k = 7 )
Set parameter values for the Inner Plexiform Layer (IPL) magnocellular channel. This channel processes signals output from the OPL processing stage in peripheral vision; it allows motion information enhancement and is decorrelated from the details channel. See the reference papers for more details. A call sketch follows the parameter list.
:param normaliseOutput: specifies if the output is rescaled between 0 and 255 (true) or not (false)
:param parasolCells_beta: the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
:param parasolCells_tau: the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)
:param parasolCells_k: the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
:param amacrinCellsTemporalCutFrequency: the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 1.2
:param V0CompressionParameter: the compression strength of the ganglion cells local adaptation output; set a value between 0.6 and 1 for best results. A high value increases the sensitivity to low values... and the output saturates faster. Recommended value: 0.95
:param localAdaptintegration_tau: specifies the temporal constant of the low-pass filter involved in the computation of the local "motion mean" for the local adaptation computation
:param localAdaptintegration_k: specifies the spatial constant of the low-pass filter involved in the computation of the local "motion mean" for the local adaptation computation
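A call sketch using the typical values quoted above (assuming *myRetina* is already allocated):

.. code-block:: cpp

    myRetina->setupIPLMagnoChannel(true,    // normaliseOutput
                                   0.f,     // parasolCells_beta
                                   0.f,     // parasolCells_tau
                                   7.f,     // parasolCells_k
                                   1.2f,    // amacrinCellsTemporalCutFrequency
                                   0.95f,   // V0CompressionParameter
                                   0.f,     // localAdaptintegration_tau
                                   7.f);    // localAdaptintegration_k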
Retina::setupOPLandIPLParvoChannel
++++++++++++++++++++++++++++++++++
.. ocv:function:: void Retina::setupOPLandIPLParvoChannel(const bool colorMode = true, const bool normaliseOutput = true, const float photoreceptorsLocalAdaptationSensitivity = 0.7, const float photoreceptorsTemporalConstant = 0.5, const float photoreceptorsSpatialConstant = 0.53, const float horizontalCellsGain = 0, const float HcellsTemporalConstant = 1, const float HcellsSpatialConstant = 7, const float ganglionCellsSensitivity = 0.7 )
Setup the OPL and IPL parvo channels (see the biological model). OPL stands for Outer Plexiform Layer of the retina; it performs the spatio-temporal filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating global luminance (low frequency energy). IPL parvo is the next processing stage after the OPL; it refers to a part of the Inner Plexiform layer of the retina and allows high contour sensitivity in foveal vision. See the reference papers for more information. A call sketch follows the parameter list.
:param colorMode: specifies if color is processed (true) or the input is processed as a gray level image (false)
:param normaliseOutput: specifies if the output is rescaled between 0 and 255 (true) or not (false)
:param photoreceptorsLocalAdaptationSensitivity: the photoreceptors sensitivity; range is 0-1 (more log compression effect when the value increases)
:param photoreceptorsTemporalConstant: the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
:param photoreceptorsSpatialConstant: the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
:param horizontalCellsGain: gain of the horizontal cells network; if 0, the mean value of the output is zero; if the parameter is near 1, the luminance is not filtered and is still reachable at the output. Typical value is 0
:param HcellsTemporalConstant: the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as for the photoreceptors
:param HcellsSpatialConstant: the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixels; this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
:param ganglionCellsSensitivity: the compression strength of the ganglion cells local adaptation output; set a value between 0.6 and 1 for best results. A high value increases the sensitivity to low values... and the output saturates faster. Recommended value: 0.7
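A call sketch reproducing the 'naturalness' oriented values discussed at the top of this page (assuming *myRetina* is already allocated):

.. code-block:: cpp

    myRetina->setupOPLandIPLParvoChannel(true,    // colorMode
                                         true,    // normaliseOutput
                                         0.89f,   // photoreceptorsLocalAdaptationSensitivity
                                         0.9f,    // photoreceptorsTemporalConstant (frames)
                                         0.53f,   // photoreceptorsSpatialConstant (pixels)
                                         0.3f,    // horizontalCellsGain: keep some mean luminance
                                         0.5f,    // HcellsTemporalConstant (frames)
                                         7.f,     // HcellsSpatialConstant (pixels)
                                         0.89f);  // ganglionCellsSensitivity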
Retina::RetinaParameters
========================
.. ocv:struct:: Retina::RetinaParameters
This structure merges all the parameters that can be adjusted through the **Retina::setup()**, **Retina::setupOPLandIPLParvoChannel** and **Retina::setupIPLMagnoChannel** setup methods.
Parameters structure, presented for clarity; check the explanations in the comments of the setupOPLandIPLParvoChannel and setupIPLMagnoChannel methods. ::
class RetinaParameters{
struct OPLandIplParvoParameters{ // Outer Plexiform Layer (OPL) and Inner Plexiform Layer Parvocellular (IplParvo) parameters
OPLandIplParvoParameters():colorMode(true),
normaliseOutput(true), // specifies if (true) output is rescaled between 0 and 255 or not (false)
photoreceptorsLocalAdaptationSensitivity(0.7f), // the photoreceptors sensitivity range is 0-1 (more log compression effect when value increases)
photoreceptorsTemporalConstant(0.5f),// the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
photoreceptorsSpatialConstant(0.53f),// the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
horizontalCellsGain(0.0f),//gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, then, the luminance is not filtered and is still reachable at the output, typical value is 0
hcellsTemporalConstant(1.f),// the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors. Reduce to 0.5 to limit retina after effects.
hcellsSpatialConstant(7.f),//the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixels, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
ganglionCellsSensitivity(0.7f)//the compression strength of the ganglion cells local adaptation output, set a value between 0.6 and 1 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 0.7
{};// default setup
bool colorMode, normaliseOutput;
float photoreceptorsLocalAdaptationSensitivity, photoreceptorsTemporalConstant, photoreceptorsSpatialConstant, horizontalCellsGain, hcellsTemporalConstant, hcellsSpatialConstant, ganglionCellsSensitivity;
};
struct IplMagnoParameters{ // Inner Plexiform Layer Magnocellular channel (IplMagno)
IplMagnoParameters():
normaliseOutput(true), //specifies if (true) output is rescaled between 0 and 255 or not (false)
parasolCells_beta(0.f), // the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
parasolCells_tau(0.f), //the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)
parasolCells_k(7.f), //the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
amacrinCellsTemporalCutFrequency(1.2f), //the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 1.2
V0CompressionParameter(0.95f), // the compression strength of the ganglion cells local adaptation output, set a value between 0.6 and 1 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 0.95
localAdaptintegration_tau(0.f), // specifies the temporal constant of the low-pass filter involved in the computation of the local "motion mean" for the local adaptation computation
localAdaptintegration_k(7.f) // specifies the spatial constant of the low-pass filter involved in the computation of the local "motion mean" for the local adaptation computation
{};// default setup
bool normaliseOutput;
float parasolCells_beta, parasolCells_tau, parasolCells_k, amacrinCellsTemporalCutFrequency, V0CompressionParameter, localAdaptintegration_tau, localAdaptintegration_k;
};
struct OPLandIplParvoParameters OPLandIplParvo;
struct IplMagnoParameters IplMagno;
};
Retina parameters files examples
++++++++++++++++++++++++++++++++
Here is the default configuration file of the retina module. It gives results such as the first retina output shown at the top of this page.
.. code-block:: xml
<?xml version="1.0"?>
<opencv_storage>
<OPLandIPLparvo>
<colorMode>1</colorMode>
<normaliseOutput>1</normaliseOutput>
<photoreceptorsLocalAdaptationSensitivity>7.5e-01</photoreceptorsLocalAdaptationSensitivity>
<photoreceptorsTemporalConstant>9.0e-01</photoreceptorsTemporalConstant>
<photoreceptorsSpatialConstant>5.3e-01</photoreceptorsSpatialConstant>
<horizontalCellsGain>0.01</horizontalCellsGain>
<hcellsTemporalConstant>0.5</hcellsTemporalConstant>
<hcellsSpatialConstant>7.</hcellsSpatialConstant>
<ganglionCellsSensitivity>7.5e-01</ganglionCellsSensitivity></OPLandIPLparvo>
<IPLmagno>
<normaliseOutput>1</normaliseOutput>
<parasolCells_beta>0.</parasolCells_beta>
<parasolCells_tau>0.</parasolCells_tau>
<parasolCells_k>7.</parasolCells_k>
<amacrinCellsTemporalCutFrequency>2.0e+00</amacrinCellsTemporalCutFrequency>
<V0CompressionParameter>9.5e-01</V0CompressionParameter>
<localAdaptintegration_tau>0.</localAdaptintegration_tau>
<localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
</opencv_storage>
Here is the 'realistic' setup used to obtain the second retina output shown at the top of this page.
.. code-block:: xml
<?xml version="1.0"?>
<opencv_storage>
<OPLandIPLparvo>
<colorMode>1</colorMode>
<normaliseOutput>1</normaliseOutput>
<photoreceptorsLocalAdaptationSensitivity>8.9e-01</photoreceptorsLocalAdaptationSensitivity>
<photoreceptorsTemporalConstant>9.0e-01</photoreceptorsTemporalConstant>
<photoreceptorsSpatialConstant>5.3e-01</photoreceptorsSpatialConstant>
<horizontalCellsGain>0.3</horizontalCellsGain>
<hcellsTemporalConstant>0.5</hcellsTemporalConstant>
<hcellsSpatialConstant>7.</hcellsSpatialConstant>
<ganglionCellsSensitivity>8.9e-01</ganglionCellsSensitivity></OPLandIPLparvo>
<IPLmagno>
<normaliseOutput>1</normaliseOutput>
<parasolCells_beta>0.</parasolCells_beta>
<parasolCells_tau>0.</parasolCells_tau>
<parasolCells_k>7.</parasolCells_k>
<amacrinCellsTemporalCutFrequency>2.0e+00</amacrinCellsTemporalCutFrequency>
<V0CompressionParameter>9.5e-01</V0CompressionParameter>
<localAdaptintegration_tau>0.</localAdaptintegration_tau>
<localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
</opencv_storage>

View File

@@ -1,50 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_BIOINSPIRED_HPP__
#define __OPENCV_BIOINSPIRED_HPP__
#include "opencv2/core.hpp"
#include "opencv2/bioinspired/retina.hpp"
#include "opencv2/bioinspired/retinafasttonemapping.hpp"
#endif

View File

@@ -1,311 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract features from still images & image sequences, from contour details to spatio-temporal motion features, etc., for high level visual scene analysis. Also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2013
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from whom code has been redrawn by the author :
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene's color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover the retina spatial log sampling which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed, originating from discussions with Jeanny.
** ====> more information in the above cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#ifndef __OPENCV_BIOINSPIRED_RETINA_HPP__
#define __OPENCV_BIOINSPIRED_RETINA_HPP__
/*
* Retina.hpp
*
* Created on: Jul 19, 2011
* Author: Alexandre Benoit
*/
#include "opencv2/core.hpp" // for all OpenCV core functionalities access, including cv::Exception support
namespace cv{
namespace bioinspired{
enum {
RETINA_COLOR_RANDOM, //!< each pixel position is either R, G or B in a random choice
RETINA_COLOR_DIAGONAL,//!< color sampling is RGBRGBRGB..., line 2 BRGBRGBRG..., line 3, GBRGBRGBR...
RETINA_COLOR_BAYER//!< standard Bayer sampling
};
/**
* @class Retina a wrapper class which allows the Gipsa/Listic Labs model to be used with OpenCV.
* This retina model allows spatio-temporal image processing (applied on still images, video sequences).
* As a summary, these are the retina model properties:
* => It applies a spectral whitening (mid-frequency details enhancement)
* => high frequency spatio-temporal noise reduction
* => low frequency luminance reduction (luminance range compression)
* => local logarithmic luminance compression allows details to be enhanced in low light conditions
*
* USE : this model can be used basically for spatio-temporal video effects but also for :
* _using the getParvo method output matrix : texture analysis with enhanced signal to noise ratio and enhanced details robust against input images luminance ranges
* _using the getMagno method output matrix : motion analysis also with the previously cited properties
*
* for more information, refer to the following papers :
* Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
* Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
*
* The retina filter includes the research contributions of PhD/research colleagues from whom code has been redrawn by the author :
* _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene's color mosaicing/demosaicing and the reference paper:
* ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* _take a look at imagelogpolprojection.hpp to discover the retina spatial log sampling which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed, originating from discussions with Jeanny.
* ====> more information in the above cited Jeanny Herault's book.
*/
class CV_EXPORTS Retina : public Algorithm {
public:
// parameters structure for better clarity, check explanations in the comments of methods : setupOPLandIPLParvoChannel and setupIPLMagnoChannel
struct RetinaParameters{
struct OPLandIplParvoParameters{ // Outer Plexiform Layer (OPL) and Inner Plexiform Layer Parvocellular (IplParvo) parameters
OPLandIplParvoParameters():colorMode(true),
normaliseOutput(true),
photoreceptorsLocalAdaptationSensitivity(0.75f),
photoreceptorsTemporalConstant(0.9f),
photoreceptorsSpatialConstant(0.53f),
horizontalCellsGain(0.01f),
hcellsTemporalConstant(0.5f),
hcellsSpatialConstant(7.f),
ganglionCellsSensitivity(0.75f){};// default setup
bool colorMode, normaliseOutput;
float photoreceptorsLocalAdaptationSensitivity, photoreceptorsTemporalConstant, photoreceptorsSpatialConstant, horizontalCellsGain, hcellsTemporalConstant, hcellsSpatialConstant, ganglionCellsSensitivity;
};
struct IplMagnoParameters{ // Inner Plexiform Layer Magnocellular channel (IplMagno)
IplMagnoParameters():
normaliseOutput(true),
parasolCells_beta(0.f),
parasolCells_tau(0.f),
parasolCells_k(7.f),
amacrinCellsTemporalCutFrequency(2.0f),
V0CompressionParameter(0.95f),
localAdaptintegration_tau(0.f),
localAdaptintegration_k(7.f){};// default setup
bool normaliseOutput;
float parasolCells_beta, parasolCells_tau, parasolCells_k, amacrinCellsTemporalCutFrequency, V0CompressionParameter, localAdaptintegration_tau, localAdaptintegration_k;
};
struct OPLandIplParvoParameters OPLandIplParvo;
struct IplMagnoParameters IplMagno;
};
/**
* retrieve retina input buffer size
*/
virtual Size getInputSize()=0;
/**
* retrieve retina output buffer size
*/
virtual Size getOutputSize()=0;
/**
* try to open an XML retina parameters file to adjust current retina instance setup
* => if the xml file does not exist, then default setup is applied
* => warning, Exceptions are thrown if read XML file is not valid
* @param retinaParameterFile : the parameters filename
* @param applyDefaultSetupOnFailure : set to true to apply the default setup if the file loading fails (otherwise an exception is thrown)
*/
virtual void setup(String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true)=0;
/**
* try to open an XML retina parameters file to adjust current retina instance setup
* => if the xml file does not exist, then default setup is applied
* => warning, Exceptions are thrown if read XML file is not valid
* @param fs : the open Filestorage which contains retina parameters
* @param applyDefaultSetupOnFailure : set to true to apply the default setup if the file loading fails (otherwise an exception is thrown)
*/
virtual void setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure=true)=0;
/**
* adjust the current retina instance setup with the provided parameters structure
* @param newParameters : a parameters structure updated with the new target configuration
*/
virtual void setup(RetinaParameters newParameters)=0;
/**
* @return the current parameters setup
*/
virtual struct Retina::RetinaParameters getParameters()=0;
/**
* parameters setup display method
* @return a string which contains formatted parameters information
*/
virtual const String printSetup()=0;
/**
* write xml/yml formated parameters information
* @param fs : the filename of the xml file that will be opened and written with formatted parameters information
*/
virtual void write( String fs ) const=0;
/**
* write xml/yml formated parameters information
* @param fs : a cv::Filestorage object ready to be filled
*/
virtual void write( FileStorage& fs ) const=0;
/**
* setup the OPL and IPL parvo channels (see biological model)
* OPL is referred to as the Outer Plexiform Layer of the retina, it allows the spatio-temporal filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating global luminance (low frequency energy)
* IPL parvo is the OPL next processing stage, it refers to the Inner Plexiform layer of the retina, it allows high contours sensitivity in foveal vision.
* for more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
* @param colorMode : specifies if color is processed (true) or the image is processed as gray level (false)
* @param normaliseOutput : specifies if (true) output is rescaled between 0 and 255 or not (false)
* @param photoreceptorsLocalAdaptationSensitivity: the photoreceptors sensitivity range is 0-1 (more log compression effect when value increases)
* @param photoreceptorsTemporalConstant: the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
* @param photoreceptorsSpatialConstant: the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
* @param horizontalCellsGain: gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, then, the luminance is not filtered and is still reachable at the output, typical value is 0
* @param HcellsTemporalConstant: the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors
* @param HcellsSpatialConstant: the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixels, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
* @param ganglionCellsSensitivity: the compression strength of the ganglion cells local adaptation output, set a value between 0.6 and 1 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 0.7
*/
virtual void setupOPLandIPLParvoChannel(const bool colorMode=true, const bool normaliseOutput = true, const float photoreceptorsLocalAdaptationSensitivity=0.7, const float photoreceptorsTemporalConstant=0.5, const float photoreceptorsSpatialConstant=0.53, const float horizontalCellsGain=0, const float HcellsTemporalConstant=1, const float HcellsSpatialConstant=7, const float ganglionCellsSensitivity=0.7)=0;
/**
* set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel
* this channel processes signals output from the OPL processing stage in peripheral vision, it allows motion information enhancement. It is decorrelated from the details channel. See reference paper for more details.
* @param normaliseOutput : specifies if (true) output is rescaled between 0 and 255 or not (false)
* @param parasolCells_beta: the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
* @param parasolCells_tau: the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)
* @param parasolCells_k: the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
* @param amacrinCellsTemporalCutFrequency: the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 1.2
* @param V0CompressionParameter: the compression strength of the ganglion cells local adaptation output, set a value between 0.6 and 1 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 0.95
* @param localAdaptintegration_tau: specifies the temporal constant of the low pas filter involved in the computation of the local "motion mean" for the local adaptation computation
* @param localAdaptintegration_k: specifies the spatial constant of the low pas filter involved in the computation of the local "motion mean" for the local adaptation computation
*/
virtual void setupIPLMagnoChannel(const bool normaliseOutput = true, const float parasolCells_beta=0, const float parasolCells_tau=0, const float parasolCells_k=7, const float amacrinCellsTemporalCutFrequency=1.2, const float V0CompressionParameter=0.95, const float localAdaptintegration_tau=0, const float localAdaptintegration_k=7)=0;
/**
* method which allows retina to be applied on an input image, after run, the encapsulated retina module is ready to deliver its outputs using dedicated accessors, see getParvo and getMagno methods
* @param inputImage : the input cv::Mat image to be processed, can be gray level or BGR coded in any format (from 8-bit to 16-bit)
*/
virtual void run(InputArray inputImage)=0;
/**
* method that applies a luminance correction (initially High Dynamic Range (HDR) tone mapping) using only the 2 local adaptation stages of the retina parvo channel : photoreceptors level and ganglion cells level. Spatio-temporal filtering is applied but limited to temporal smoothing and possibly high frequency attenuation. This is a lighter method than the one available using the regular run method. It is then faster but it does not include complete temporal filtering nor retina spectral whitening. It can therefore have a more limited effect on images with a very high dynamic range. This is an adaptation of the original still image HDR tone mapping algorithm from the work of David Alleysson, Sabine Susstrunk and Laurence Meylan; please cite:
* -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
@param inputImage the input image to process RGB or gray levels
@param outputToneMappedImage the output tone mapped image
*/
virtual void applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage)=0;
/**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : the output buffer (reallocated if necessary), this output is rescaled for standard 8-bit image processing use in OpenCV
*/
virtual void getParvo(OutputArray retinaOutput_parvo)=0;
/**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : a cv::Mat header filled with the internal parvo buffer of the retina module. This output is the original retina filter model output, without any quantization or rescaling
*/
virtual void getParvoRAW(OutputArray retinaOutput_parvo)=0;
/**
* accessor of the motion channel of the retina (models peripheral vision)
* @param retinaOutput_magno : the output buffer (reallocated if necessary), this output is rescaled for standard 8-bit image processing use in OpenCV
*/
virtual void getMagno(OutputArray retinaOutput_magno)=0;
/**
* accessor of the motion channel of the retina (models peripheral vision)
* @param retinaOutput_magno : a cv::Mat header filled with the internal retina magno buffer of the retina module. This output is the original retina filter model output, without any quantification or rescaling
*/
virtual void getMagnoRAW(OutputArray retinaOutput_magno)=0;
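/*
* Typical processing loop sketch combining run() and the accessors above
* (the video capture handling is an illustrative assumption):
*
*   cv::Mat frame, parvoOut, magnoOut;
*   while (videoCapture.read(frame))
*   {
*       myRetina->run(frame);          // feed the new frame
*       myRetina->getParvo(parvoOut);  // details channel, rescaled for 8-bit use
*       myRetina->getMagno(magnoOut);  // motion channel, rescaled for 8-bit use
*   }
*/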
// original API level data accessors : get buffer addresses from a Mat header, similar to getParvoRAW and getMagnoRAW...
virtual const Mat getMagnoRAW() const=0;
virtual const Mat getParvoRAW() const=0;
/**
* activate color saturation as the final step of the color demultiplexing process
* -> this saturation is a sigmoid function applied to each channel of the demultiplexed image.
* @param saturateColors: boolean that activates color saturation (if true) or deactivates it (if false)
* @param colorSaturationValue: the saturation factor
*/
virtual void setColorSaturation(const bool saturateColors=true, const float colorSaturationValue=4.0)=0;
/**
* clear all retina buffers (equivalent to opening the eyes after a long period of eye closure ;o)
*/
virtual void clearBuffers()=0;
/**
* Activate/deactivate the Magnocellular pathway processing (motion information extraction); by default, it is activated
* @param activate: true if Magnocellular output should be activated, false if not
*/
virtual void activateMovingContoursProcessing(const bool activate)=0;
/**
* Activate/deactivate the Parvocellular pathway processing (contours information extraction); by default, it is activated
* @param activate: true if Parvocellular (contours information extraction) output should be activated, false if not
*/
virtual void activateContoursProcessing(const bool activate)=0;
};
CV_EXPORTS Ptr<Retina> createRetina(Size inputSize);
CV_EXPORTS Ptr<Retina> createRetina(Size inputSize, const bool colorMode, int colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
CV_EXPORTS Ptr<Retina> createRetina_OCL(Size inputSize);
CV_EXPORTS Ptr<Retina> createRetina_OCL(Size inputSize, const bool colorMode, int colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
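/*
* Instantiation sketch (parameter values are examples only): a color retina with
* retina-like log sampling enabled, halving the processed resolution and focusing
* on the image center:
*
*   cv::Ptr<cv::bioinspired::Retina> myRetina =
*       cv::bioinspired::createRetina(inputSize, true, RETINA_COLOR_BAYER, true, 2.0, 10.0);
*/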
}
}
#endif /* __OPENCV_BIOINSPIRED_RETINA_HPP__ */

View File

@@ -1,121 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author&collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2013
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
**
**
**
**
** This class is based on image processing tools of the author, already used within the Retina class (this is the same code as the method retina::applyFastToneMapping, but in an independent class that is light from a memory requirement point of view). It implements an adaptation of the efficient tone mapping algorithm proposed in David Alleysson, Sabine Susstrunk and Laurence Meylan's work, please cite:
** -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
**
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#ifndef __OPENCV_BIOINSPIRED_RETINAFASTTONEMAPPING_HPP__
#define __OPENCV_BIOINSPIRED_RETINAFASTTONEMAPPING_HPP__
/*
* retinafasttonemapping.hpp
*
* Created on: May 26, 2013
* Author: Alexandre Benoit
*/
#include "opencv2/core.hpp" // for all OpenCV core functionalities access, including cv::Exception support
namespace cv{
namespace bioinspired{
/**
* @class RetinaFastToneMapping a wrapper class which allows the tone mapping algorithm of Meylan&al(2007) to be used with OpenCV.
* This algorithm is already implemented in the Retina class (retina::applyFastToneMapping) but using it through this class does not require the whole retina model to be allocated. This allows a light memory use for low memory devices (smartphones, etc.).
* As a summary, these are the model properties:
* => 2 stages of local luminance adaptation with a different local neighborhood for each.
* => first stage models the retina photoreceptors local luminance adaptation
* => second stage models the ganglion cells local information adaptation
* => compared to the initial publication, this class uses spatio-temporal low pass filters instead of spatial only filters.
* ====> this can help noise robustness and temporal stability for video sequence use cases.
* for more information, refer to the following papers :
* Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
* Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
* regarding spatio-temporal filter and the bigger retina model :
* Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
*/
class CV_EXPORTS RetinaFastToneMapping : public Algorithm
{
public:
/**
* method that applies a luminance correction (initially High Dynamic Range (HDR) tone mapping) using only the 2 local adaptation stages of the retina parvocellular channel : photoreceptors level and ganglion cells level. Spatio-temporal filtering is applied but limited to temporal smoothing and possibly high frequency attenuation. This is a lighter method than the one available using the regular retina::run method; it is faster but it does not include complete temporal filtering nor retina spectral whitening, so it can have a more limited effect on images with a very high dynamic range. This is an adaptation of the original still image HDR tone mapping algorithm of David Alleysson, Sabine Susstrunk and Laurence Meylan's work, please cite:
* -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
@param inputImage the input image to process (RGB or gray levels)
@param outputToneMappedImage the output tone mapped image
*/
virtual void applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage)=0;
/**
* setup method that updates tone mapping behaviors by adjusting the local luminance computation area
* @param photoreceptorsNeighborhoodRadius the first stage local adaptation area
* @param ganglioncellsNeighborhoodRadius the second stage local adaptation area
* @param meanLuminanceModulatorK the factor applied to modulate the meanLuminance information (default is 1, see reference paper)
*/
virtual void setup(const float photoreceptorsNeighborhoodRadius=3.f, const float ganglioncellsNeighborhoodRadius=1.f, const float meanLuminanceModulatorK=1.f)=0;
};
CV_EXPORTS Ptr<RetinaFastToneMapping> createRetinaFastToneMapping(Size inputSize);
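/*
* Usage sketch (illustrative; hdrInput and toneMapped are hypothetical cv::Mat variables):
*
*   cv::Ptr<cv::bioinspired::RetinaFastToneMapping> toneMapper =
*       cv::bioinspired::createRetinaFastToneMapping(hdrInput.size());
*   toneMapper->setup(3.f, 1.f, 1.f); // keep the default adaptation areas
*   toneMapper->applyFastToneMapping(hdrInput, toneMapped);
*/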
}
}
#endif /* __OPENCV_BIOINSPIRED_RETINAFASTTONEMAPPING_HPP__ */

View File

@@ -1,888 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author&collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract still images & image sequences features, from contours details to motion spatio-temporal features, etc. for high level visual scene analysis. Also contribute to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of phd/research colleagues from which code has been redrawn by the author :
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#include "precomp.hpp"
#include <iostream>
#include <cstdlib>
#include "basicretinafilter.hpp"
#include <cmath>
namespace cv
{
namespace bioinspired
{
// @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
//////////////////////////////////////////////////////////
// BASIC RETINA FILTER
//////////////////////////////////////////////////////////
// Constructor and destructor of the basic retina filter
BasicRetinaFilter::BasicRetinaFilter(const unsigned int NBrows, const unsigned int NBcolumns, const unsigned int parametersListSize, const bool useProgressiveFilter)
:_filterOutput(NBrows, NBcolumns),
_localBuffer(NBrows*NBcolumns),
_filteringCoeficientsTable(3*parametersListSize),
_progressiveSpatialConstant(0),// pointer to a local table containing local spatial constant (allocated with the object)
_progressiveGain(0)
{
#ifdef T_BASIC_RETINA_ELEMENT_DEBUG
std::cout<<"BasicRetinaFilter::BasicRetinaFilter: new filter, size="<<NBrows<<", "<<NBcolumns<<std::endl;
#endif
_halfNBrows=_filterOutput.getNBrows()/2;
_halfNBcolumns=_filterOutput.getNBcolumns()/2;
if (useProgressiveFilter)
{
#ifdef T_BASIC_RETINA_ELEMENT_DEBUG
std::cout<<"BasicRetinaFilter::BasicRetinaFilter: _progressiveSpatialConstant_Tbuffer"<<std::endl;
#endif
_progressiveSpatialConstant.resize(_filterOutput.size());
#ifdef T_BASIC_RETINA_ELEMENT_DEBUG
std::cout<<"BasicRetinaFilter::BasicRetinaFilter: new _progressiveGain_Tbuffer"<<NBrows<<", "<<NBcolumns<<std::endl;
#endif
_progressiveGain.resize(_filterOutput.size());
}
#ifdef T_BASIC_RETINA_ELEMENT_DEBUG
std::cout<<"BasicRetinaFilter::BasicRetinaFilter: new filter, size="<<NBrows<<", "<<NBcolumns<<std::endl;
#endif
// set default values
_maxInputValue=256.0;
// reset all buffers
clearAllBuffers();
#ifdef T_BASIC_RETINA_ELEMENT_DEBUG
std::cout<<"BasicRetinaFilter::Init BasicRetinaElement at specified frame size OK, size="<<this->size()<<std::endl;
#endif
}
BasicRetinaFilter::~BasicRetinaFilter()
{
#ifdef BASIC_RETINA_ELEMENT_DEBUG
std::cout<<"BasicRetinaFilter::BasicRetinaElement Deleted OK"<<std::endl;
#endif
}
////////////////////////////////////
// functions of the basic filter
////////////////////////////////////
// resize all allocated buffers
void BasicRetinaFilter::resize(const unsigned int NBrows, const unsigned int NBcolumns)
{
std::cout<<"BasicRetinaFilter::resize( "<<NBrows<<", "<<NBcolumns<<")"<<std::endl;
// resizing buffers
_filterOutput.resizeBuffer(NBrows, NBcolumns);
// updating variables
_halfNBrows=_filterOutput.getNBrows()/2;
_halfNBcolumns=_filterOutput.getNBcolumns()/2;
_localBuffer.resize(_filterOutput.size());
// in case of spatial adapted filter
if (_progressiveSpatialConstant.size()>0)
{
_progressiveSpatialConstant.resize(_filterOutput.size());
_progressiveGain.resize(_filterOutput.size());
}
// reset buffers
clearAllBuffers();
}
// Change coefficients table
void BasicRetinaFilter::setLPfilterParameters(const float beta, const float tau, const float desired_k, const unsigned int filterIndex)
{
float _beta = beta+tau;
float k=desired_k;
// check if the spatial constant is correct (avoid 0 value to avoid division by 0)
if (desired_k<=0)
{
k=0.001f;
std::cerr<<"BasicRetinaFilter::spatial constant of the low pass filter must be superior to zero !!! correcting parameter setting to 0,001"<<std::endl;
}
float _alpha = k*k;
float _mu = 0.8f;
unsigned int tableOffset=filterIndex*3;
if (k<=0)
{
std::cerr<<"BasicRetinaFilter::spatial filtering coefficient must be superior to zero, correcting value to 0.01"<<std::endl;
_alpha=0.0001f;
}
float _temp = (1.0f+_beta)/(2.0f*_mu*_alpha);
float a = _filteringCoeficientsTable[tableOffset] = 1.0f + _temp - (float)std::sqrt( (1.0f+_temp)*(1.0f+_temp) - 1.0f);
_filteringCoeficientsTable[1+tableOffset]=(1.0f-a)*(1.0f-a)*(1.0f-a)*(1.0f-a)/(1.0f+_beta);
_filteringCoeficientsTable[2+tableOffset] =tau;
//std::cout<<"BasicRetinaFilter::normal:"<<(1.0-a)*(1.0-a)*(1.0-a)*(1.0-a)/(1.0+_beta)<<" -> old:"<<(1-a)*(1-a)*(1-a)*(1-a)/(1+_beta)<<std::endl;
//std::cout<<"BasicRetinaFilter::a="<<a<<", gain="<<_filteringCoeficientsTable[1+tableOffset]<<", tau="<<tau<<std::endl;
}
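/*
* Reading of the computation above (reconstructed from this code, not quoted from the
* reference papers): each 1D pass implements y[n] = x[n] + a*y[n-1], whose DC gain is
* 1/(1-a); the 2D filter cascades four such passes, hence the normalization gain
* (1-a)^4/(1+_beta) stored next to the pole 'a'. A standalone sketch of the coefficients:
*
*   const float mu = 0.8f;
*   float temp = (1.0f + beta + tau) / (2.0f * mu * k * k);
*   float a    = 1.0f + temp - std::sqrt((1.0f + temp) * (1.0f + temp) - 1.0f);
*   float gain = (1.0f - a) * (1.0f - a) * (1.0f - a) * (1.0f - a) / (1.0f + beta + tau);
*/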
void BasicRetinaFilter::setProgressiveFilterConstants_CentredAccuracy(const float beta, const float tau, const float alpha0, const unsigned int filterIndex)
{
// check if dedicated buffers are already allocated, if not create them
if (_progressiveSpatialConstant.size()!=_filterOutput.size())
{
_progressiveSpatialConstant.resize(_filterOutput.size());
_progressiveGain.resize(_filterOutput.size());
}
float _beta = beta+tau;
float _mu=0.8f;
if (alpha0<=0)
{
std::cerr<<"BasicRetinaFilter::spatial filtering coefficient must be superior to zero, correcting value to 0.01"<<std::endl;
//alpha0=0.0001;
}
unsigned int tableOffset=filterIndex*3;
float _alpha=0.8f;
float _temp = (1.0f+_beta)/(2.0f*_mu*_alpha);
float a=_filteringCoeficientsTable[tableOffset] = 1.0f + _temp - (float)std::sqrt( (1.0f+_temp)*(1.0f+_temp) - 1.0f);
_filteringCoeficientsTable[tableOffset+1]=(1.0f-a)*(1.0f-a)*(1.0f-a)*(1.0f-a)/(1.0f+_beta);
_filteringCoeficientsTable[tableOffset+2] =tau;
float commonFactor=alpha0/(float)std::sqrt(_halfNBcolumns*_halfNBcolumns+_halfNBrows*_halfNBrows+1.0f);
//memset(_progressiveSpatialConstant, 255, _filterOutput.getNBpixels());
for (unsigned int idColumn=0;idColumn<_halfNBcolumns; ++idColumn)
for (unsigned int idRow=0;idRow<_halfNBrows; ++idRow)
{
// computing local spatial constant
float localSpatialConstantValue=commonFactor*std::sqrt((float)(idColumn*idColumn)+(float)(idRow*idRow));
if (localSpatialConstantValue>1.0f)
localSpatialConstantValue=1.0f;
_progressiveSpatialConstant[_halfNBcolumns-1+idColumn+_filterOutput.getNBcolumns()*(_halfNBrows-1+idRow)]=localSpatialConstantValue;
_progressiveSpatialConstant[_halfNBcolumns-1-idColumn+_filterOutput.getNBcolumns()*(_halfNBrows-1+idRow)]=localSpatialConstantValue;
_progressiveSpatialConstant[_halfNBcolumns-1+idColumn+_filterOutput.getNBcolumns()*(_halfNBrows-1-idRow)]=localSpatialConstantValue;
_progressiveSpatialConstant[_halfNBcolumns-1-idColumn+_filterOutput.getNBcolumns()*(_halfNBrows-1-idRow)]=localSpatialConstantValue;
// computing local gain
float localGain=(1-localSpatialConstantValue)*(1-localSpatialConstantValue)*(1-localSpatialConstantValue)*(1-localSpatialConstantValue)/(1+_beta);
_progressiveGain[_halfNBcolumns-1+idColumn+_filterOutput.getNBcolumns()*(_halfNBrows-1+idRow)]=localGain;
_progressiveGain[_halfNBcolumns-1-idColumn+_filterOutput.getNBcolumns()*(_halfNBrows-1+idRow)]=localGain;
_progressiveGain[_halfNBcolumns-1+idColumn+_filterOutput.getNBcolumns()*(_halfNBrows-1-idRow)]=localGain;
_progressiveGain[_halfNBcolumns-1-idColumn+_filterOutput.getNBcolumns()*(_halfNBrows-1-idRow)]=localGain;
//std::cout<<commonFactor<<", "<<std::sqrt((_halfNBcolumns-1-idColumn)+(_halfNBrows-idRow-1))<<", "<<(_halfNBcolumns-1-idColumn)<<", "<<(_halfNBrows-idRow-1)<<", "<<localSpatialConstantValue<<std::endl;
}
}
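/*
* In compact form (reconstructed from the loop above): the local spatial constant
* follows a radial ramp clipped to 1,
*
*   k(r) = min(1, alpha0 * r / sqrt(halfNBcolumns^2 + halfNBrows^2 + 1))
*
* where r is the pixel distance to the image center, and the matching local gain is
* (1 - k(r))^4 / (1 + beta + tau): filtering is light at the center and grows
* stronger towards the borders.
*/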
void BasicRetinaFilter::setProgressiveFilterConstants_CustomAccuracy(const float beta, const float tau, const float k, const std::valarray<float> &accuracyMap, const unsigned int filterIndex)
{
if (accuracyMap.size()!=_filterOutput.size())
{
std::cerr<<"BasicRetinaFilter::setProgressiveFilterConstants_CustomAccuracy: error: input accuracy map does not match filter size, init skept"<<std::endl;
return ;
}
// check if dedicated buffers are already allocated, if not create them
if (_progressiveSpatialConstant.size()!=_filterOutput.size())
{
_progressiveSpatialConstant.resize(accuracyMap.size());
_progressiveGain.resize(accuracyMap.size());
}
float _beta = beta+tau;
float _alpha=k*k;
float _mu=0.8f;
if (k<=0)
{
std::cerr<<"BasicRetinaFilter::spatial filtering coefficient must be superior to zero, correcting value to 0.01"<<std::endl;
//alpha0=0.0001;
}
unsigned int tableOffset=filterIndex*3;
float _temp = (1.0f+_beta)/(2.0f*_mu*_alpha);
float a=_filteringCoeficientsTable[tableOffset] = 1.0f + _temp - (float)std::sqrt( (1.0f+_temp)*(1.0f+_temp) - 1.0f);
_filteringCoeficientsTable[tableOffset+1]=(1.0f-a)*(1.0f-a)*(1.0f-a)*(1.0f-a)/(1.0f+_beta);
_filteringCoeficientsTable[tableOffset+2] =tau;
//memset(_progressiveSpatialConstant, 255, _filterOutput.getNBpixels());
for (unsigned int idColumn=0;idColumn<_filterOutput.getNBcolumns(); ++idColumn)
for (unsigned int idRow=0;idRow<_filterOutput.getNBrows(); ++idRow)
{
// computing local spatial constant
unsigned int index=idColumn+idRow*_filterOutput.getNBcolumns();
float localSpatialConstantValue=_a*accuracyMap[index];
if (localSpatialConstantValue>1)
localSpatialConstantValue=1;
_progressiveSpatialConstant[index]=localSpatialConstantValue;
// computing local gain
float localGain=(1.0f-localSpatialConstantValue)*(1.0f-localSpatialConstantValue)*(1.0f-localSpatialConstantValue)*(1.0f-localSpatialConstantValue)/(1.0f+_beta);
_progressiveGain[index]=localGain;
//std::cout<<commonFactor<<", "<<std::sqrt((_halfNBcolumns-1-idColumn)+(_halfNBrows-idRow-1))<<", "<<(_halfNBcolumns-1-idColumn)<<", "<<(_halfNBrows-idRow-1)<<", "<<localSpatialConstantValue<<std::endl;
}
}
///////////////////////////////////////////////////////////////////////
/// Local luminance adaptation functions
// run local adaptation filter and save result in _filterOutput
const std::valarray<float> &BasicRetinaFilter::runFilter_LocalAdapdation(const std::valarray<float> &inputFrame, const std::valarray<float> &localLuminance)
{
_localLuminanceAdaptation(get_data(inputFrame), get_data(localLuminance), &_filterOutput[0]);
return _filterOutput;
}
// run local adaptation filter at a specific output address
void BasicRetinaFilter::runFilter_LocalAdapdation(const std::valarray<float> &inputFrame, const std::valarray<float> &localLuminance, std::valarray<float> &outputFrame)
{
_localLuminanceAdaptation(get_data(inputFrame), get_data(localLuminance), &outputFrame[0]);
}
// run local adaptation filter and save result in _filterOutput with autonomous low pass filtering before adaptation
const std::valarray<float> &BasicRetinaFilter::runFilter_LocalAdapdation_autonomous(const std::valarray<float> &inputFrame)
{
_spatiotemporalLPfilter(get_data(inputFrame), &_filterOutput[0]);
_localLuminanceAdaptation(get_data(inputFrame), &_filterOutput[0], &_filterOutput[0]);
return _filterOutput;
}
// run local adaptation filter at a specific output address with autonomous low pass filtering before adaptation
void BasicRetinaFilter::runFilter_LocalAdapdation_autonomous(const std::valarray<float> &inputFrame, std::valarray<float> &outputFrame)
{
_spatiotemporalLPfilter(get_data(inputFrame), &_filterOutput[0]);
_localLuminanceAdaptation(get_data(inputFrame), &_filterOutput[0], &outputFrame[0]);
}
// local luminance adaptation of the input with regard to the localLuminance buffer; the input is rewritten and becomes the output
void BasicRetinaFilter::_localLuminanceAdaptation(float *inputOutputFrame, const float *localLuminance)
{
_localLuminanceAdaptation(inputOutputFrame, localLuminance, inputOutputFrame, false);
/* const float *localLuminancePTR=localLuminance;
float *inputOutputFramePTR=inputOutputFrame;
for (register unsigned int IDpixel=0 ; IDpixel<_filterOutput.getNBpixels() ; ++IDpixel, ++inputOutputFramePTR)
{
float X0=*(localLuminancePTR++)*_localLuminanceFactor+_localLuminanceAddon;
*(inputOutputFramePTR) = (_maxInputValue+X0)**inputOutputFramePTR/(*inputOutputFramePTR +X0+0.00000000001);
}
*/
}
// local luminance adaptation of the input with regard to the localLuminance buffer
void BasicRetinaFilter::_localLuminanceAdaptation(const float *inputFrame, const float *localLuminance, float *outputFrame, const bool updateLuminanceMean)
{
if (updateLuminanceMean)
{ float meanLuminance=0;
const float *luminancePTR=inputFrame;
for (unsigned int i=0;i<_filterOutput.getNBpixels();++i)
meanLuminance+=*(luminancePTR++);
meanLuminance/=_filterOutput.getNBpixels();
//float tempMeanValue=meanLuminance+_meanInputValue*_tau;
updateCompressionParameter(meanLuminance);
}
#ifdef MAKE_PARALLEL
cv::parallel_for_(cv::Range(0,_filterOutput.getNBpixels()), Parallel_localAdaptation(localLuminance, inputFrame, outputFrame, _localLuminanceFactor, _localLuminanceAddon, _maxInputValue));
#else
//std::cout<<meanLuminance<<std::endl;
const float *localLuminancePTR=localLuminance;
const float *inputFramePTR=inputFrame;
float *outputFramePTR=outputFrame;
for (register unsigned int IDpixel=0 ; IDpixel<_filterOutput.getNBpixels() ; ++IDpixel, ++inputFramePTR, ++outputFramePTR)
{
float X0=*(localLuminancePTR++)*_localLuminanceFactor+_localLuminanceAddon;
// TODO : the following line can lead to a divide by zero ! A small offset is added, take care if the offset is too large in case of High Dynamic Range images which can use very small values...
*(outputFramePTR) = (_maxInputValue+X0)**inputFramePTR/(*inputFramePTR +X0+0.00000000001);
//std::cout<<"BasicRetinaFilter::inputFrame[IDpixel]=%f, X0=%f, outputFrame[IDpixel]=%f\n", inputFrame[IDpixel], X0, outputFrame[IDpixel]);
}
#endif
}
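/*
* Worked example of the compression formula above, out = (M + X0)*in/(in + X0),
* with illustrative values M = _maxInputValue = 256 and X0 = 128:
*
*   in =   8  ->  384*8/136   ~  22.6  (dark values are strongly amplified)
*   in = 128  ->  384*128/256  = 192
*   in = 255  ->  384*255/383 ~ 255.7  (bright values are compressed towards M)
*/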
// local adaptation applied on a range of values which can be positive and negative
void BasicRetinaFilter::_localLuminanceAdaptationPosNegValues(const float *inputFrame, const float *localLuminance, float *outputFrame)
{
const float *localLuminancePTR=localLuminance;
const float *inputFramePTR=inputFrame;
float *outputFramePTR=outputFrame;
float factor=_maxInputValue*2.0f/(float)CV_PI;
for (register unsigned int IDpixel=0 ; IDpixel<_filterOutput.getNBpixels() ; ++IDpixel, ++inputFramePTR)
{
float X0=*(localLuminancePTR++)*_localLuminanceFactor+_localLuminanceAddon;
*(outputFramePTR++) = factor*atan(*inputFramePTR/X0);//(_maxInputValue+X0)**inputFramePTR/(*inputFramePTR +X0);
//std::cout<<"BasicRetinaFilter::inputFrame[IDpixel]=%f, X0=%f, outputFrame[IDpixel]=%f\n", inputFrame[IDpixel], X0, outputFrame[IDpixel]);
}
}
///////////////////////////////////////////////////////////////////////
/// Spatio temporal Low Pass filter functions
// run LP filter and save result in the basic retina element buffer
const std::valarray<float> &BasicRetinaFilter::runFilter_LPfilter(const std::valarray<float> &inputFrame, const unsigned int filterIndex)
{
_spatiotemporalLPfilter(get_data(inputFrame), &_filterOutput[0], filterIndex);
return _filterOutput;
}
// run LP filter for a new frame input and save result at a specific output address
void BasicRetinaFilter::runFilter_LPfilter(const std::valarray<float> &inputFrame, std::valarray<float> &outputFrame, const unsigned int filterIndex)
{
_spatiotemporalLPfilter(get_data(inputFrame), &outputFrame[0], filterIndex);
}
// run LP filter on the input data and rewrite it
void BasicRetinaFilter::runFilter_LPfilter_Autonomous(std::valarray<float> &inputOutputFrame, const unsigned int filterIndex)
{
unsigned int coefTableOffset=filterIndex*3;
/**********/
_a=_filteringCoeficientsTable[coefTableOffset];
_gain=_filteringCoeficientsTable[1+coefTableOffset];
_tau=_filteringCoeficientsTable[2+coefTableOffset];
// launch the series of 1D directional filters in order to compute the 2D low pass filter
_horizontalCausalFilter(&inputOutputFrame[0], 0, _filterOutput.getNBrows());
_horizontalAnticausalFilter(&inputOutputFrame[0], 0, _filterOutput.getNBrows());
_verticalCausalFilter(&inputOutputFrame[0], 0, _filterOutput.getNBcolumns());
_verticalAnticausalFilter_multGain(&inputOutputFrame[0], 0, _filterOutput.getNBcolumns());
}
// run LP filter for a new frame input and save result at a specific output address
void BasicRetinaFilter::_spatiotemporalLPfilter(const float *inputFrame, float *outputFrame, const unsigned int filterIndex)
{
unsigned int coefTableOffset=filterIndex*3;
/**********/
_a=_filteringCoeficientsTable[coefTableOffset];
_gain=_filteringCoeficientsTable[1+coefTableOffset];
_tau=_filteringCoeficientsTable[2+coefTableOffset];
// launch the series of 1D directional filters in order to compute the 2D low pass filter
_horizontalCausalFilter_addInput(inputFrame, outputFrame, 0,_filterOutput.getNBrows());
_horizontalAnticausalFilter(outputFrame, 0, _filterOutput.getNBrows());
_verticalCausalFilter(outputFrame, 0, _filterOutput.getNBcolumns());
_verticalAnticausalFilter_multGain(outputFrame, 0, _filterOutput.getNBcolumns());
}
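/*
* The same scheme sketched on a single row, to show the in-place recursion pair
* (hypothetical standalone code, equivalent to one horizontal causal/anticausal
* sequence without the input add):
*
*   for (int n = 1; n < width; ++n)        // causal pass, left to right
*       row[n] += a * row[n - 1];
*   for (int n = width - 2; n >= 0; --n)   // anticausal pass, right to left
*       row[n] += a * row[n + 1];
*/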
// run SQUARING LP filter for a new frame input and save result at a specific output address
float BasicRetinaFilter::_squaringSpatiotemporalLPfilter(const float *inputFrame, float *outputFrame, const unsigned int filterIndex)
{
unsigned int coefTableOffset=filterIndex*3;
/**********/
_a=_filteringCoeficientsTable[coefTableOffset];
_gain=_filteringCoeficientsTable[1+coefTableOffset];
_tau=_filteringCoeficientsTable[2+coefTableOffset];
// launch the series of 1D directional filters in order to compute the 2D low pass filter
_squaringHorizontalCausalFilter(inputFrame, outputFrame, 0, _filterOutput.getNBrows());
_horizontalAnticausalFilter(outputFrame, 0, _filterOutput.getNBrows());
_verticalCausalFilter(outputFrame, 0, _filterOutput.getNBcolumns());
return _verticalAnticausalFilter_returnMeanValue(outputFrame, 0, _filterOutput.getNBcolumns());
}
/////////////////////////////////////////////////
// standard version of the 1D low pass filters
// horizontal causal filter, processes the output buffer in place (no input add)
void BasicRetinaFilter::_horizontalCausalFilter(float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd)
{
//#pragma omp parallel for
for (unsigned int IDrow=IDrowStart; IDrow<IDrowEnd; ++IDrow)
{
register float* outputPTR=outputFrame+(IDrowStart+IDrow)*_filterOutput.getNBcolumns();
register float result=0;
for (unsigned int index=0; index<_filterOutput.getNBcolumns(); ++index)
{
result = *(outputPTR)+ _a* result;
*(outputPTR++) = result;
}
}
}
// horizontal causal filter which adds the input inside
void BasicRetinaFilter::_horizontalCausalFilter_addInput(const float *inputFrame, float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd)
{
#ifdef MAKE_PARALLEL
cv::parallel_for_(cv::Range(IDrowStart,IDrowEnd), Parallel_horizontalCausalFilter_addInput(inputFrame, outputFrame, IDrowStart, _filterOutput.getNBcolumns(), _a, _tau));
#else
for (unsigned int IDrow=IDrowStart; IDrow<IDrowEnd; ++IDrow)
{
register float* outputPTR=outputFrame+(IDrowStart+IDrow)*_filterOutput.getNBcolumns();
register const float* inputPTR=inputFrame+(IDrowStart+IDrow)*_filterOutput.getNBcolumns();
register float result=0;
for (unsigned int index=0; index<_filterOutput.getNBcolumns(); ++index)
{
result = *(inputPTR++) + _tau**(outputPTR)+ _a* result;
*(outputPTR++) = result;
}
}
#endif
}
// horizontal anticausal filter (basic way, no add on)
void BasicRetinaFilter::_horizontalAnticausalFilter(float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd)
{
#ifdef MAKE_PARALLEL
cv::parallel_for_(cv::Range(IDrowStart,IDrowEnd), Parallel_horizontalAnticausalFilter(outputFrame, IDrowEnd, _filterOutput.getNBcolumns(), _a ));
#else
for (unsigned int IDrow=IDrowStart; IDrow<IDrowEnd; ++IDrow)
{
register float* outputPTR=outputFrame+(IDrowEnd-IDrow)*(_filterOutput.getNBcolumns())-1;
register float result=0;
for (unsigned int index=0; index<_filterOutput.getNBcolumns(); ++index)
{
result = *(outputPTR)+ _a* result;
*(outputPTR--) = result;
}
}
#endif
}
// horizontal anticausal filter which multiplies the output by _gain
void BasicRetinaFilter::_horizontalAnticausalFilter_multGain(float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd)
{
//#pragma omp parallel for
for (unsigned int IDrow=IDrowStart; IDrow<IDrowEnd; ++IDrow)
{
register float* outputPTR=outputFrame+(IDrowEnd-IDrow)*(_filterOutput.getNBcolumns())-1;
register float result=0;
for (unsigned int index=0; index<_filterOutput.getNBcolumns(); ++index)
{
result = *(outputPTR)+ _a* result;
*(outputPTR--) = _gain*result;
}
}
}
// vertical causal filter
void BasicRetinaFilter::_verticalCausalFilter(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd)
{
#ifdef MAKE_PARALLEL
cv::parallel_for_(cv::Range(IDcolumnStart,IDcolumnEnd), Parallel_verticalCausalFilter(outputFrame, _filterOutput.getNBrows(), _filterOutput.getNBcolumns(), _a ));
#else
for (unsigned int IDcolumn=IDcolumnStart; IDcolumn<IDcolumnEnd; ++IDcolumn)
{
register float result=0;
register float *outputPTR=outputFrame+IDcolumn;
for (unsigned int index=0; index<_filterOutput.getNBrows(); ++index)
{
result = *(outputPTR) + _a * result;
*(outputPTR) = result;
outputPTR+=_filterOutput.getNBcolumns();
}
}
#endif
}
// vertical anticausal filter (basic way, no add on)
void BasicRetinaFilter::_verticalAnticausalFilter(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd)
{
float* offset=outputFrame+_filterOutput.getNBpixels()-_filterOutput.getNBcolumns();
//#pragma omp parallel for
for (unsigned int IDcolumn=IDcolumnStart; IDcolumn<IDcolumnEnd; ++IDcolumn)
{
register float result=0;
register float *outputPTR=offset+IDcolumn;
for (unsigned int index=0; index<_filterOutput.getNBrows(); ++index)
{
result = *(outputPTR) + _a * result;
*(outputPTR) = result;
outputPTR-=_filterOutput.getNBcolumns();
}
}
}
// vertical anticausal filter which multiplies the output by _gain
void BasicRetinaFilter::_verticalAnticausalFilter_multGain(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd)
{
#ifdef MAKE_PARALLEL
cv::parallel_for_(cv::Range(IDcolumnStart,IDcolumnEnd), Parallel_verticalAnticausalFilter_multGain(outputFrame, _filterOutput.getNBrows(), _filterOutput.getNBcolumns(), _a, _gain ));
#else
float* offset=outputFrame+_filterOutput.getNBpixels()-_filterOutput.getNBcolumns();
//#pragma omp parallel for
for (unsigned int IDcolumn=IDcolumnStart; IDcolumn<IDcolumnEnd; ++IDcolumn)
{
register float result=0;
register float *outputPTR=offset+IDcolumn;
for (unsigned int index=0; index<_filterOutput.getNBrows(); ++index)
{
result = *(outputPTR) + _a * result;
*(outputPTR) = _gain*result;
outputPTR-=_filterOutput.getNBcolumns();
}
}
#endif
}
/////////////////////////////////////////
// specific modifications of 1D filters
// -> squaring horizontal causal filter
void BasicRetinaFilter::_squaringHorizontalCausalFilter(const float *inputFrame, float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd)
{
register float* outputPTR=outputFrame+IDrowStart*_filterOutput.getNBcolumns();
register const float* inputPTR=inputFrame+IDrowStart*_filterOutput.getNBcolumns();
for (unsigned int IDrow=IDrowStart; IDrow<IDrowEnd; ++IDrow)
{
register float result=0;
for (unsigned int index=0; index<_filterOutput.getNBcolumns(); ++index)
{
result = *(inputPTR)**(inputPTR) + _tau**(outputPTR)+ _a* result;
*(outputPTR++) = result;
++inputPTR;
}
}
}
// vertical anticausal filter that returns the mean value of its result
float BasicRetinaFilter::_verticalAnticausalFilter_returnMeanValue(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd)
{
register float meanValue=0;
float* offset=outputFrame+_filterOutput.getNBpixels()-_filterOutput.getNBcolumns();
for (unsigned int IDcolumn=IDcolumnStart; IDcolumn<IDcolumnEnd; ++IDcolumn)
{
register float result=0;
register float *outputPTR=offset+IDcolumn;
for (unsigned int index=0; index<_filterOutput.getNBrows(); ++index)
{
result = *(outputPTR) + _a * result;
*(outputPTR) = _gain*result;
meanValue+=*(outputPTR);
outputPTR-=_filterOutput.getNBcolumns();
}
}
return meanValue/(float)_filterOutput.getNBpixels();
}
// LP filter with integration restricted to specific areas (where the binary parameters image is true)
void BasicRetinaFilter::_localSquaringSpatioTemporalLPfilter(const float *inputFrame, float *LPfilterOutput, const unsigned int *integrationAreas, const unsigned int filterIndex)
{
unsigned int coefTableOffset=filterIndex*3;
_a=_filteringCoeficientsTable[coefTableOffset+0];
_gain=_filteringCoeficientsTable[coefTableOffset+1];
_tau=_filteringCoeficientsTable[coefTableOffset+2];
// launch the series of 1D directional filters in order to compute the 2D low pass filter
_local_squaringHorizontalCausalFilter(inputFrame, LPfilterOutput, 0, _filterOutput.getNBrows(), integrationAreas);
_local_horizontalAnticausalFilter(LPfilterOutput, 0, _filterOutput.getNBrows(), integrationAreas);
_local_verticalCausalFilter(LPfilterOutput, 0, _filterOutput.getNBcolumns(), integrationAreas);
_local_verticalAnticausalFilter_multGain(LPfilterOutput, 0, _filterOutput.getNBcolumns(), integrationAreas);
}
// LP filter on specific parts of the picture instead of the whole image
// same functions (some of them) but taking a binary flag to allow integration; a false flag means no data change at the output...
// this function takes an image as input and squares it before computing
void BasicRetinaFilter::_local_squaringHorizontalCausalFilter(const float *inputFrame, float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd, const unsigned int *integrationAreas)
{
register float* outputPTR=outputFrame+IDrowStart*_filterOutput.getNBcolumns();
register const float* inputPTR=inputFrame+IDrowStart*_filterOutput.getNBcolumns();
const unsigned int *integrationAreasPTR=integrationAreas;
for (unsigned int IDrow=IDrowStart; IDrow<IDrowEnd; ++IDrow)
{
register float result=0;
for (unsigned int index=0; index<_filterOutput.getNBcolumns(); ++index)
{
if (*(integrationAreasPTR++))
result = *(inputPTR)**(inputPTR) + _tau**(outputPTR)+ _a* result;
else
result=0;
*(outputPTR++) = result;
++inputPTR;
}
}
}
void BasicRetinaFilter::_local_horizontalAnticausalFilter(float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd, const unsigned int *integrationAreas)
{
register float* outputPTR=outputFrame+IDrowEnd*(_filterOutput.getNBcolumns())-1;
const unsigned int *integrationAreasPTR=integrationAreas;
for (unsigned int IDrow=IDrowStart; IDrow<IDrowEnd; ++IDrow)
{
register float result=0;
for (unsigned int index=0; index<_filterOutput.getNBcolumns(); ++index)
{
if (*(integrationAreasPTR++))
result = *(outputPTR)+ _a* result;
else
result=0;
*(outputPTR--) = result;
}
}
}
void BasicRetinaFilter::_local_verticalCausalFilter(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd, const unsigned int *integrationAreas)
{
const unsigned int *integrationAreasPTR=integrationAreas;
for (unsigned int IDcolumn=IDcolumnStart; IDcolumn<IDcolumnEnd; ++IDcolumn)
{
register float result=0;
register float *outputPTR=outputFrame+IDcolumn;
for (unsigned int index=0; index<_filterOutput.getNBrows(); ++index)
{
if (*(integrationAreasPTR++))
result = *(outputPTR)+ _a* result;
else
result=0;
*(outputPTR) = result;
outputPTR+=_filterOutput.getNBcolumns();
}
}
}
// this function applies _gain at the output
void BasicRetinaFilter::_local_verticalAnticausalFilter_multGain(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd, const unsigned int *integrationAreas)
{
const unsigned int *integrationAreasPTR=integrationAreas;
float* offset=outputFrame+_filterOutput.getNBpixels()-_filterOutput.getNBcolumns();
for (unsigned int IDcolumn=IDcolumnStart; IDcolumn<IDcolumnEnd; ++IDcolumn)
{
register float result=0;
register float *outputPTR=offset+IDcolumn;
for (unsigned int index=0; index<_filterOutput.getNBrows(); ++index)
{
if (*(integrationAreasPTR++))
result = *(outputPTR)+ _a* result;
else
result=0;
*(outputPTR) = _gain*result;
outputPTR-=_filterOutput.getNBcolumns();
}
}
}
////////////////////////////////////////////////////
// run LP filter for a new frame input and save result at a specific output address
// -> USE IRREGULAR SPATIAL CONSTANT
// irregular filter computed from a buffer and rewrites it
void BasicRetinaFilter::_spatiotemporalLPfilter_Irregular(float *inputOutputFrame, const unsigned int filterIndex)
{
if (_progressiveGain.size()==0)
{
std::cerr<<"BasicRetinaFilter::runProgressiveFilter: cannot perform filtering, no progressive filter settled up"<<std::endl;
return;
}
unsigned int coefTableOffset=filterIndex*3;
/**********/
//_a=_filteringCoeficientsTable[coefTableOffset];
_tau=_filteringCoeficientsTable[2+coefTableOffset];
// launch the series of 1D directional filters in order to compute the 2D low pass filter
_horizontalCausalFilter_Irregular(inputOutputFrame, 0, (int)_filterOutput.getNBrows());
_horizontalAnticausalFilter_Irregular(inputOutputFrame, 0, (int)_filterOutput.getNBrows(), &_progressiveSpatialConstant[0]);
_verticalCausalFilter_Irregular(inputOutputFrame, 0, (int)_filterOutput.getNBcolumns(), &_progressiveSpatialConstant[0]);
_verticalAnticausalFilter_Irregular_multGain(inputOutputFrame, 0, (int)_filterOutput.getNBcolumns());
}
// irregular filter computed from one buffer, result written to another
void BasicRetinaFilter::_spatiotemporalLPfilter_Irregular(const float *inputFrame, float *outputFrame, const unsigned int filterIndex)
{
if (_progressiveGain.size()==0)
{
std::cerr<<"BasicRetinaFilter::runProgressiveFilter: cannot perform filtering, no progressive filter settled up"<<std::endl;
return;
}
unsigned int coefTableOffset=filterIndex*3;
/**********/
//_a=_filteringCoeficientsTable[coefTableOffset];
_tau=_filteringCoeficientsTable[2+coefTableOffset];
// launch the series of 1D directional filters in order to compute the 2D low pass filter
_horizontalCausalFilter_Irregular_addInput(inputFrame, outputFrame, 0, (int)_filterOutput.getNBrows());
_horizontalAnticausalFilter_Irregular(outputFrame, 0, (int)_filterOutput.getNBrows(), &_progressiveSpatialConstant[0]);
_verticalCausalFilter_Irregular(outputFrame, 0, (int)_filterOutput.getNBcolumns(), &_progressiveSpatialConstant[0]);
_verticalAnticausalFilter_Irregular_multGain(outputFrame, 0, (int)_filterOutput.getNBcolumns());
}
// 1D filters with irregular spatial constant
// horizontal causal filter which runs on its input buffer
void BasicRetinaFilter::_horizontalCausalFilter_Irregular(float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd)
{
register float* outputPTR=outputFrame+IDrowStart*_filterOutput.getNBcolumns();
register const float* spatialConstantPTR=&_progressiveSpatialConstant[0]+IDrowStart*_filterOutput.getNBcolumns();
for (unsigned int IDrow=IDrowStart; IDrow<IDrowEnd; ++IDrow)
{
register float result=0;
for (unsigned int index=0; index<_filterOutput.getNBcolumns(); ++index)
{
result = *(outputPTR)+ *(spatialConstantPTR++)* result;
*(outputPTR++) = result;
}
}
}
// horizontal causal filter with add input
void BasicRetinaFilter::_horizontalCausalFilter_Irregular_addInput(const float *inputFrame, float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd)
{
register float* outputPTR=outputFrame+IDrowStart*_filterOutput.getNBcolumns();
register const float* inputPTR=inputFrame+IDrowStart*_filterOutput.getNBcolumns();
register const float* spatialConstantPTR=&_progressiveSpatialConstant[0]+IDrowStart*_filterOutput.getNBcolumns();
for (unsigned int IDrow=IDrowStart; IDrow<IDrowEnd; ++IDrow)
{
register float result=0;
for (unsigned int index=0; index<_filterOutput.getNBcolumns(); ++index)
{
result = *(inputPTR++) + _tau**(outputPTR)+ *(spatialConstantPTR++)* result;
*(outputPTR++) = result;
}
}
}
// horizontal anticausal filter (basic way, no add on)
void BasicRetinaFilter::_horizontalAnticausalFilter_Irregular(float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd, const float *spatialConstantBuffer)
{
#ifdef MAKE_PARALLEL
cv::parallel_for_(cv::Range(IDrowStart,IDrowEnd), Parallel_horizontalAnticausalFilter_Irregular(outputFrame, spatialConstantBuffer, IDrowEnd, _filterOutput.getNBcolumns()));
#else
register float* outputPTR=outputFrame+IDrowEnd*(_filterOutput.getNBcolumns())-1;
register const float* spatialConstantPTR=spatialConstantBuffer+IDrowEnd*(_filterOutput.getNBcolumns())-1;
for (unsigned int IDrow=IDrowStart; IDrow<IDrowEnd; ++IDrow)
{
register float result=0;
for (unsigned int index=0; index<_filterOutput.getNBcolumns(); ++index)
{
result = *(outputPTR)+ *(spatialConstantPTR--)* result;
*(outputPTR--) = result;
}
}
#endif
}
// vertical causal filter
void BasicRetinaFilter::_verticalCausalFilter_Irregular(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd, const float *spatialConstantBuffer)
{
#ifdef MAKE_PARALLEL
cv::parallel_for_(cv::Range(IDcolumnStart,IDcolumnEnd), Parallel_verticalCausalFilter_Irregular(outputFrame, spatialConstantBuffer, _filterOutput.getNBrows(), _filterOutput.getNBcolumns()));
#else
for (unsigned int IDcolumn=IDcolumnStart; IDcolumn<IDcolumnEnd; ++IDcolumn)
{
register float result=0;
register float *outputPTR=outputFrame+IDcolumn;
register const float *spatialConstantPTR=spatialConstantBuffer+IDcolumn;
for (unsigned int index=0; index<_filterOutput.getNBrows(); ++index)
{
result = *(outputPTR) + *(spatialConstantPTR) * result;
*(outputPTR) = result;
outputPTR+=_filterOutput.getNBcolumns();
spatialConstantPTR+=_filterOutput.getNBcolumns();
}
}
#endif
}
// vertical anticausal filter which multiplies the output by _gain
void BasicRetinaFilter::_verticalAnticausalFilter_Irregular_multGain(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd)
{
float* outputOffset=outputFrame+_filterOutput.getNBpixels()-_filterOutput.getNBcolumns();
const float* constantOffset=&_progressiveSpatialConstant[0]+_filterOutput.getNBpixels()-_filterOutput.getNBcolumns();
const float* gainOffset=&_progressiveGain[0]+_filterOutput.getNBpixels()-_filterOutput.getNBcolumns();
for (unsigned int IDcolumn=IDcolumnStart; IDcolumn<IDcolumnEnd; ++IDcolumn)
{
register float result=0;
register float *outputPTR=outputOffset+IDcolumn;
register const float *spatialConstantPTR=constantOffset+IDcolumn;
register const float *progressiveGainPTR=gainOffset+IDcolumn;
for (unsigned int index=0; index<_filterOutput.getNBrows(); ++index)
{
result = *(outputPTR) + *(spatialConstantPTR) * result;
*(outputPTR) = *(progressiveGainPTR)*result;
outputPTR-=_filterOutput.getNBcolumns();
spatialConstantPTR-=_filterOutput.getNBcolumns();
progressiveGainPTR-=_filterOutput.getNBcolumns();
}
}
}
}// end of namespace bioinspired
}// end of namespace cv

View File

@@ -1,657 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author&collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract still images & image sequences features, from contours details to motion spatio-temporal features, etc. for high level visual scene analysis. Also contribute to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of phd/research colleagues from which code has been redrawn by the author :
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#ifndef BASICRETINAELEMENT_HPP_
#define BASICRETINAELEMENT_HPP_
#include <cstring>
/**
* @class BasicRetinaFilter
* @brief Brief overview, this class provides tools for low level image processing:
* --> this class is able to perform:
* -> first order Low pass optimized filtering
* -> local luminance adaptation (able to correct back light problems and contrast enhancement)
* -> progressive low pass filter filtering (higher filtering on the borders than on the center)
* -> image data resampling between 0 and 255 with different options (linear rescaling, sigmoid)
*
* NOTE : initially the retina model was based on double format scalar values but
* a good memory/precision compromise is float...
* also the double format precision does not make much sense from a biological point of view (neuron value coding is not that precise)
*
* TYPICAL USE:
*
* // create object at a specified picture size
* BasicRetinaFilter *_photoreceptorsPrefilter;
* _photoreceptorsPrefilter =new BasicRetinaFilter(sizeRows, sizeWindows);
*
* // init gain, spatial and temporal parameters:
* _photoreceptorsPrefilter->setCoefficientsTable(gain,temporalConstant, spatialConstant);
*
* // during program execution, call the filter for local luminance correction or low pass filtering for an input picture called "FrameBuffer":
* _photoreceptorsPrefilter->runFilter_LocalAdapdation(FrameBuffer);
* // or (Low pass first order filter)
* _photoreceptorsPrefilter->runFilter_LPfilter(FrameBuffer);
* // get output frame and its size:
* const unsigned int output_nbRows=_photoreceptorsPrefilter->getNBrows();
* const unsigned int output_nbColumns=_photoreceptorsPrefilter->getNBcolumns();
* const double *outputFrame=_photoreceptorsPrefilter->getOutput();
*
* // at the end of the program, destroy object:
* delete _photoreceptorsPrefilter;
* @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
* Creation date 2007
* synthesis of the work described in Alexandre BENOIT thesis: "Le systeme visuel humain au secours de la vision par ordinateur"
*/
#include <iostream>
#include "templatebuffer.hpp"
//#define __BASIC_RETINA_ELEMENT_DEBUG
namespace cv
{
namespace bioinspired
{
class BasicRetinaFilter
{
public:
/**
 * constructor of the base bio-inspired toolbox; parameters are only linked to the image input size and to the number of filtering capabilities of the object
 * @param NBrows: number of rows of the input image
 * @param NBcolumns: number of columns of the input image
 * @param parametersListSize: specifies the number of parameter sets (each parameter set represents a specific low pass spatio-temporal filter)
 * @param useProgressiveFilter: specifies if the filter has irregular (progressive) filtering capabilities (this can be activated later using the setProgressiveFilterConstants_xxx methods)
*/
BasicRetinaFilter(const unsigned int NBrows, const unsigned int NBcolumns, const unsigned int parametersListSize=1, const bool useProgressiveFilter=false);
/**
 * standard destructor
*/
~BasicRetinaFilter();
/**
* function which clears the output buffer of the object
*/
inline void clearOutputBuffer(){_filterOutput=0;};
/**
* function which clears the secondary buffer of the object
*/
inline void clearSecondaryBuffer(){_localBuffer=0;};
/**
* function which clears the output and the secondary buffer of the object
*/
inline void clearAllBuffers(){clearOutputBuffer();clearSecondaryBuffer();};
/**
 * resize basic retina filter object (resizes all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
/**
 * forbidden method inherited from parent std::valarray
 * prefer not to use this method since the filter matrices become vectors
*/
 void resize(const unsigned int){std::cerr<<"error: method not accessible"<<std::endl;};
/**
* low pass filter call and run (models the homogeneous cells network at the retina level, for example horizontal cells or photoreceptors)
* @param inputFrame: the input image to be processed
* @param filterIndex: the offset which specifies the parameter set that should be used for the filtering
* @return the processed image, the output is reachable later by using function getOutput()
*/
const std::valarray<float> &runFilter_LPfilter(const std::valarray<float> &inputFrame, const unsigned int filterIndex=0); // run the LP filter for a new frame input and save result in _filterOutput
/**
* low pass filter call and run (models the homogeneous cells network at the retina level, for example horizontal cells or photoreceptors)
* @param inputFrame: the input image to be processed
 * @param outputFrame: the output buffer in which the result is written
* @param filterIndex: the offset which specifies the parameter set that should be used for the filtering
*/
 void runFilter_LPfilter(const std::valarray<float> &inputFrame, std::valarray<float> &outputFrame, const unsigned int filterIndex=0); // run LP filter on a specific output address
/**
* low pass filter call and run (models the homogeneous cells network at the retina level, for example horizontal cells or photoreceptors)
 * @param inputOutputFrame: the input image to be processed, overwritten with the result
* @param filterIndex: the offset which specifies the parameter set that should be used for the filtering
*/
void runFilter_LPfilter_Autonomous(std::valarray<float> &inputOutputFrame, const unsigned int filterIndex=0);// run LP filter on the input data and rewrite it
/**
* local luminance adaptation call and run (contrast enhancement property of the photoreceptors)
* @param inputOutputFrame: the input image to be processed
* @param localLuminance: an image which represents the local luminance of the inputFrame parameter, in general, it is its low pass spatial filtering
* @return the processed image, the output is reachable later by using function getOutput()
*/
const std::valarray<float> &runFilter_LocalAdapdation(const std::valarray<float> &inputOutputFrame, const std::valarray<float> &localLuminance);// run local adaptation filter and save result in _filterOutput
/**
* local luminance adaptation call and run (contrast enhancement property of the photoreceptors)
* @param inputFrame: the input image to be processed
* @param localLuminance: an image which represents the local luminance of the inputFrame parameter, in general, it is its low pass spatial filtering
 * @param outputFrame: the output buffer in which the result is written
*/
 void runFilter_LocalAdapdation(const std::valarray<float> &inputFrame, const std::valarray<float> &localLuminance, std::valarray<float> &outputFrame); // run local adaptation filter on a specific output address
/**
* local luminance adaptation call and run (contrast enhancement property of the photoreceptors)
* @param inputFrame: the input image to be processed
* @return the processed image, the output is reachable later by using function getOutput()
*/
const std::valarray<float> &runFilter_LocalAdapdation_autonomous(const std::valarray<float> &inputFrame);// run local adaptation filter and save result in _filterOutput
/**
* local luminance adaptation call and run (contrast enhancement property of the photoreceptors)
* @param inputFrame: the input image to be processed
 * @param outputFrame: the output buffer in which the result is written
*/
 void runFilter_LocalAdapdation_autonomous(const std::valarray<float> &inputFrame, std::valarray<float> &outputFrame); // run local adaptation filter on a specific output address
/**
 * run low pass filtering with progressive parameters (models the retina log sampling of the photoreceptors and the resulting low pass filtering effect: a more powerful low pass filtering effect in the corners)
 * @param inputFrame: the input image to be processed (the result is written back into this buffer)
 * @param filterIndex: the index which specifies the parameter set that should be used for the filtering
*/
inline void runProgressiveFilter(std::valarray<float> &inputFrame, const unsigned int filterIndex=0){_spatiotemporalLPfilter_Irregular(&inputFrame[0], filterIndex);};
/**
 * run low pass filtering with progressive parameters (models the retina log sampling of the photoreceptors and the resulting low pass filtering effect: a more powerful low pass filtering effect in the corners)
 * @param inputFrame: the input image to be processed
 * @param outputFrame: the output buffer in which the result is written
* @param filterIndex: the index which specifies the parameter set that should be used for the filtering
*/
inline void runProgressiveFilter(const std::valarray<float> &inputFrame,
std::valarray<float> &outputFrame,
const unsigned int filterIndex=0)
{_spatiotemporalLPfilter_Irregular(get_data(inputFrame), &outputFrame[0], filterIndex);};
/**
* first order spatio-temporal low pass filter setup function
* @param beta: gain of the filter (generally set to zero)
* @param tau: time constant of the filter (unit is frame for video processing)
* @param k: spatial constant of the filter (unit is pixels)
* @param filterIndex: the index which specifies the parameter set that should be used for the filtering
*/
void setLPfilterParameters(const float beta, const float tau, const float k, const unsigned int filterIndex=0); // change the parameters of the filter
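 // illustrative setup sketch (the parameter values below are assumptions chosen for the example, not canonical retina settings):
 //   BasicRetinaFilter filter(480, 640, 2);            // two parameter sets
 //   filter.setLPfilterParameters(0.f, 0.5f, 1.f, 0);  // set 0: tau=0.5 frame, k=1 pixel
 //   filter.setLPfilterParameters(0.f, 0.f, 3.f, 1);   // set 1: purely spatial, wider kernel
 //   filter.runFilter_LPfilter(inputBuffer, 0);        // filter with parameter set 0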
/**
* first order spatio-temporal low pass filter setup function
* @param beta: gain of the filter (generally set to zero)
* @param tau: time constant of the filter (unit is frame for video processing)
* @param alpha0: spatial constant of the filter (unit is pixels) on the border of the image
* @param filterIndex: the index which specifies the parameter set that should be used for the filtering
*/
void setProgressiveFilterConstants_CentredAccuracy(const float beta, const float tau, const float alpha0, const unsigned int filterIndex=0);
/**
* first order spatio-temporal low pass filter setup function
* @param beta: gain of the filter (generally set to zero)
* @param tau: time constant of the filter (unit is frame for video processing)
* @param alpha0: spatial constant of the filter (unit is pixels) on the border of the image
 * @param accuracyMap: an image (float format) whose values range between 0 and 1, where 0 means apply no filtering, 1 means apply the filtering as specified in the parameter set, and intermediate values allow smooth variations of the filtering strength
* @param filterIndex: the index which specifies the parameter set that should be used for the filtering
*/
void setProgressiveFilterConstants_CustomAccuracy(const float beta, const float tau, const float alpha0, const std::valarray<float> &accuracyMap, const unsigned int filterIndex=0);
/**
* local luminance adaptation setup, this function should be applied for normal local adaptation (not for tone mapping operation)
 * @param v0: compression effect for the local luminance adaptation processing, set a value between 0.6 and 0.9 for best results, a high value yields a strong compression effect
 * @param maxInputValue: the maximum amplitude value measured after local adaptation processing (cf. functions runFilter_LocalAdapdation & runFilter_LocalAdapdation_autonomous)
 * @param meanLuminance: the a priori mean luminance of the input data (should be 128 for 8 bit images but can vary greatly in case of High Dynamic Range Images (HDRI))
*/
void setV0CompressionParameter(const float v0, const float maxInputValue, const float){ _v0=v0*maxInputValue; _localLuminanceFactor=v0; _localLuminanceAddon=maxInputValue*(1.0f-v0); _maxInputValue=maxInputValue;};
/**
* update local luminance adaptation setup, initial maxInputValue is kept. This function should be applied for normal local adaptation (not for tone mapping operation)
 * @param v0: compression effect for the local luminance adaptation processing, set a value between 0.6 and 0.9 for best results, a high value yields a strong compression effect
 * @param meanLuminance: the a priori mean luminance of the input data (should be 128 for 8 bit images but can vary greatly in case of High Dynamic Range Images (HDRI))
*/
void setV0CompressionParameter(const float v0, const float meanLuminance){ this->setV0CompressionParameter(v0, _maxInputValue, meanLuminance);};
/**
* local luminance adaptation setup, this function should be applied for normal local adaptation (not for tone mapping operation)
 * @param v0: compression effect for the local luminance adaptation processing, set a value between 0.6 and 0.9 for best results, a high value yields a strong compression effect
*/
void setV0CompressionParameter(const float v0){ _v0=v0*_maxInputValue; _localLuminanceFactor=v0; _localLuminanceAddon=_maxInputValue*(1.0f-v0);};
/**
* local luminance adaptation setup, this function should be applied for local adaptation applied to tone mapping operation
 * @param v0: compression effect for the local luminance adaptation processing, set a value between 0.6 and 0.9 for best results, a high value yields a strong compression effect
 * @param maxInputValue: the maximum amplitude value measured after local adaptation processing (cf. functions runFilter_LocalAdapdation & runFilter_LocalAdapdation_autonomous)
 * @param meanLuminance: the a priori mean luminance of the input data (should be 128 for 8 bit images but can vary greatly in case of High Dynamic Range Images (HDRI))
*/
void setV0CompressionParameterToneMapping(const float v0, const float maxInputValue, const float meanLuminance=128.0f){ _v0=v0*maxInputValue; _localLuminanceFactor=1.0f; _localLuminanceAddon=meanLuminance*v0; _maxInputValue=maxInputValue;};
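 // note (illustrative reading, not part of the original documentation): the local adaptation
 // implemented by _localLuminanceAdaptation() follows a Michaelis-Menten like compression law,
 //   output = (maxInputValue + X0) * input / (input + X0)
 // with X0 = _localLuminanceFactor * localLuminance + _localLuminanceAddon.
 // e.g. with v0=0.7 and maxInputValue=255 (8 bit mode), a dark area (local luminance near 0)
 // gets X0 of about 76.5 and an input gain above 4, which is what corrects backlight problems.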
/**
* update compression parameters while keeping v0 parameter value
* @param meanLuminance the input frame mean luminance
*/
inline void updateCompressionParameter(const float meanLuminance){_localLuminanceFactor=1; _localLuminanceAddon=meanLuminance*_v0;};
/**
* @return the v0 compression parameter used to compute the local adaptation
*/
float getV0CompressionParameter(){ return _v0/_maxInputValue;};
/**
* @return the output result of the object
*/
inline const std::valarray<float> &getOutput() const {return _filterOutput;};
/**
* @return number of rows of the filter
*/
inline unsigned int getNBrows(){return _filterOutput.getNBrows();};
/**
* @return number of columns of the filter
*/
inline unsigned int getNBcolumns(){return _filterOutput.getNBcolumns();};
/**
* @return number of pixels of the filter
*/
inline unsigned int getNBpixels(){return _filterOutput.getNBpixels();};
/**
* force filter output to be normalized between 0 and maxValue
* @param maxValue: the maximum output value that is required
*/
inline void normalizeGrayOutput_0_maxOutputValue(const float maxValue){_filterOutput.normalizeGrayOutput_0_maxOutputValue(maxValue);};
/**
 * force filter output to be normalized around 0 and rescaled with a sigmoid effect (extreme values saturation)
*/
inline void normalizeGrayOutputCentredSigmoide(){_filterOutput.normalizeGrayOutputCentredSigmoide();};
/**
 * force filter output to be normalized: data centering and standard deviation normalisation
*/
inline void centerReductImageLuminance(){_filterOutput.centerReductImageLuminance();};
/**
* @return the maximum input buffer value
*/
inline float getMaxInputValue(){return this->_maxInputValue;};
/**
 * @param newMaxInputValue: the new maximum input buffer value
*/
inline void setMaxInputValue(const float newMaxInputValue){this->_maxInputValue=newMaxInputValue;};
protected:
/////////////////////////
// data buffers
TemplateBuffer<float> _filterOutput; // primary buffer (contains processing outputs)
std::valarray<float> _localBuffer; // local secondary buffer
/////////////////////////
// PARAMETERS
unsigned int _halfNBrows;
unsigned int _halfNBcolumns;
// parameters buffers
std::valarray <float>_filteringCoeficientsTable;
std::valarray <float>_progressiveSpatialConstant;// pointer to a local table containing local spatial constant (allocated with the object)
 std::valarray <float>_progressiveGain;// pointer to a local table containing the local gain (allocated with the object)
// local adaptation filtering parameters
float _v0; //value used for local luminance adaptation function
float _maxInputValue;
float _meanInputValue;
float _localLuminanceFactor;
float _localLuminanceAddon;
// protected data related to standard low pass filters parameters
float _a;
float _tau;
float _gain;
/////////////////////////
// FILTERS METHODS
 // Basic spatio-temporal low pass filter used by each retina filter
void _spatiotemporalLPfilter(const float *inputFrame, float *LPfilterOutput, const unsigned int coefTableOffset=0);
float _squaringSpatiotemporalLPfilter(const float *inputFrame, float *outputFrame, const unsigned int filterIndex=0);
// LP filter with an irregular spatial filtering
// -> rewrites the input buffer
void _spatiotemporalLPfilter_Irregular(float *inputOutputFrame, const unsigned int filterIndex=0);
// writes the output on another buffer
void _spatiotemporalLPfilter_Irregular(const float *inputFrame, float *outputFrame, const unsigned int filterIndex=0);
 // LP filter that squares the input and computes the output ONLY on the areas where the integrationAreas map is TRUE
void _localSquaringSpatioTemporalLPfilter(const float *inputFrame, float *LPfilterOutput, const unsigned int *integrationAreas, const unsigned int filterIndex=0);
 // local luminance adaptation of the input with regard to the localLuminance buffer
void _localLuminanceAdaptation(const float *inputFrame, const float *localLuminance, float *outputFrame, const bool updateLuminanceMean=true);
 // local luminance adaptation of the input with regard to the localLuminance buffer; the input is overwritten and becomes the output
void _localLuminanceAdaptation(float *inputOutputFrame, const float *localLuminance);
// local adaptation applied on a range of values which can be positive and negative
void _localLuminanceAdaptationPosNegValues(const float *inputFrame, const float *localLuminance, float *outputFrame);
//////////////////////////////////////////////////////////////
// 1D directional filters used for the 2D low pass filtering
// 1D filters with image input
void _horizontalCausalFilter_addInput(const float *inputFrame, float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd);
// 1D filters with image input that is squared in the function // parallelized with TBB
void _squaringHorizontalCausalFilter(const float *inputFrame, float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd);
// vertical anticausal filter that returns the mean value of its result
float _verticalAnticausalFilter_returnMeanValue(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd);
 // simplest functions: only perform 1D filtering with output=input (no add on)
void _horizontalCausalFilter(float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd);
void _horizontalAnticausalFilter(float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd); // parallelized with TBB
void _verticalCausalFilter(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd); // parallelized with TBB
void _verticalAnticausalFilter(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd);
 // perform 1D filtering with a varying spatial coefficient
void _horizontalCausalFilter_Irregular(float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd);
void _horizontalCausalFilter_Irregular_addInput(const float *inputFrame, float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd);
void _horizontalAnticausalFilter_Irregular(float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd, const float *spatialConstantBuffer); // parallelized with TBB
void _verticalCausalFilter_Irregular(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd, const float *spatialConstantBuffer); // parallelized with TBB
void _verticalAnticausalFilter_Irregular_multGain(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd);
// 1D filters in which the output is multiplied by _gain
void _verticalAnticausalFilter_multGain(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd); // this functions affects _gain at the output // parallelized with TBB
void _horizontalAnticausalFilter_multGain(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd); // this functions affects _gain at the output
// LP filter on specific parts of the picture instead of all the image
 // same functions (some of them) but taking a binary flag to allow integration; a false flag means 0 at the output...
void _local_squaringHorizontalCausalFilter(const float *inputFrame, float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd, const unsigned int *integrationAreas);
void _local_horizontalAnticausalFilter(float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd, const unsigned int *integrationAreas);
void _local_verticalCausalFilter(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd, const unsigned int *integrationAreas);
void _local_verticalAnticausalFilter_multGain(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd, const unsigned int *integrationAreas); // this functions affects _gain at the output
#ifdef MAKE_PARALLEL
/******************************************************
 ** IF some parallelizing thread methods are available, then the main loops are parallelized using these functors
 ** ==> main idea: parallelize the main filter loops; only the most used methods are parallelized for now... TODO : increase the number of parallelized methods as necessary
 ** ==> functor names = Parallel_$$$ where $$$ = the name of the serial method that is parallelized
 ** ==> functor constructors can differ from the parameters used with their related serial functions
*/
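 // illustrative dispatch sketch (the call site below is an assumption; the actual calls live in the .cpp):
 // each functor is handed to cv::parallel_for_ over the relevant rows/columns range, e.g.
 //   cv::parallel_for_(cv::Range(0, nbRows),
 //                     Parallel_horizontalAnticausalFilter(&outputFrame[0], IDrowEnd, nbColumns, _a));
 // cv::parallel_for_ splits the Range into chunks and calls operator()(const Range&) on each chunk.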
 #define _DEBUG_TBB // define DEBUG_TBB in order to display additional data on stdout
class Parallel_horizontalAnticausalFilter: public cv::ParallelLoopBody
{
private:
float *outputFrame;
unsigned int IDrowEnd, nbColumns;
float filterParam_a;
public:
 // constructor which takes the input image pointer and processing limits
Parallel_horizontalAnticausalFilter(float *bufferToProcess, const unsigned int idEnd, const unsigned int nbCols, const float a )
:outputFrame(bufferToProcess), IDrowEnd(idEnd), nbColumns(nbCols), filterParam_a(a)
{
#ifdef DEBUG_TBB
std::cout<<"Parallel_horizontalAnticausalFilter::Parallel_horizontalAnticausalFilter :"
<<"\n\t idEnd="<<IDrowEnd
<<"\n\t nbCols="<<nbColumns
<<"\n\t filterParam="<<filterParam_a
<<std::endl;
#endif
}
virtual void operator()( const Range& r ) const {
#ifdef DEBUG_TBB
std::cout<<"Parallel_horizontalAnticausalFilter::operator() :"
<<"\n\t range size="<<r.size()
<<"\n\t first index="<<r.start
//<<"\n\t last index="<<filterParam
<<std::endl;
#endif
for (int IDrow=r.start; IDrow!=r.end; ++IDrow)
{
register float* outputPTR=outputFrame+(IDrowEnd-IDrow)*(nbColumns)-1;
register float result=0;
for (unsigned int index=0; index<nbColumns; ++index)
{
result = *(outputPTR)+ filterParam_a* result;
*(outputPTR--) = result;
}
}
}
};
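 // note (illustrative): the recurrence above, result = frame[i] + a * result, is a first order
 // IIR smoothing pass; running a causal pass then an anticausal pass (as the filters in this
 // file do) yields a symmetric exponential smoothing along each line/column.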
class Parallel_horizontalCausalFilter_addInput: public cv::ParallelLoopBody
{
private:
const float *inputFrame;
float *outputFrame;
unsigned int IDrowStart, nbColumns;
float filterParam_a, filterParam_tau;
public:
Parallel_horizontalCausalFilter_addInput(const float *bufferToAddAsInputProcess, float *bufferToProcess, const unsigned int idStart, const unsigned int nbCols, const float a, const float tau)
:inputFrame(bufferToAddAsInputProcess), outputFrame(bufferToProcess), IDrowStart(idStart), nbColumns(nbCols), filterParam_a(a), filterParam_tau(tau){}
virtual void operator()( const Range& r ) const {
for (int IDrow=r.start; IDrow!=r.end; ++IDrow)
{
register float* outputPTR=outputFrame+(IDrowStart+IDrow)*nbColumns;
register const float* inputPTR=inputFrame+(IDrowStart+IDrow)*nbColumns;
register float result=0;
for (unsigned int index=0; index<nbColumns; ++index)
{
result = *(inputPTR++) + filterParam_tau**(outputPTR)+ filterParam_a* result;
*(outputPTR++) = result;
}
}
}
};
class Parallel_verticalCausalFilter: public cv::ParallelLoopBody
{
private:
float *outputFrame;
unsigned int nbRows, nbColumns;
float filterParam_a;
public:
Parallel_verticalCausalFilter(float *bufferToProcess, const unsigned int nbRws, const unsigned int nbCols, const float a )
:outputFrame(bufferToProcess), nbRows(nbRws), nbColumns(nbCols), filterParam_a(a){}
virtual void operator()( const Range& r ) const {
for (int IDcolumn=r.start; IDcolumn!=r.end; ++IDcolumn)
{
register float result=0;
register float *outputPTR=outputFrame+IDcolumn;
for (unsigned int index=0; index<nbRows; ++index)
{
result = *(outputPTR) + filterParam_a * result;
*(outputPTR) = result;
outputPTR+=nbColumns;
}
}
}
};
class Parallel_verticalAnticausalFilter_multGain: public cv::ParallelLoopBody
{
private:
float *outputFrame;
unsigned int nbRows, nbColumns;
float filterParam_a, filterParam_gain;
public:
Parallel_verticalAnticausalFilter_multGain(float *bufferToProcess, const unsigned int nbRws, const unsigned int nbCols, const float a, const float gain)
:outputFrame(bufferToProcess), nbRows(nbRws), nbColumns(nbCols), filterParam_a(a), filterParam_gain(gain){}
virtual void operator()( const Range& r ) const {
float* offset=outputFrame+nbColumns*nbRows-nbColumns;
for (int IDcolumn=r.start; IDcolumn!=r.end; ++IDcolumn)
{
register float result=0;
register float *outputPTR=offset+IDcolumn;
for (unsigned int index=0; index<nbRows; ++index)
{
result = *(outputPTR) + filterParam_a * result;
*(outputPTR) = filterParam_gain*result;
outputPTR-=nbColumns;
}
}
}
};
class Parallel_localAdaptation: public cv::ParallelLoopBody
{
private:
const float *localLuminance, *inputFrame;
float *outputFrame;
float localLuminanceFactor, localLuminanceAddon, maxInputValue;
public:
Parallel_localAdaptation(const float *localLum, const float *inputImg, float *bufferToProcess, const float localLuminanceFact, const float localLuminanceAdd, const float maxInputVal)
:localLuminance(localLum), inputFrame(inputImg),outputFrame(bufferToProcess), localLuminanceFactor(localLuminanceFact), localLuminanceAddon(localLuminanceAdd), maxInputValue(maxInputVal) {};
virtual void operator()( const Range& r ) const {
const float *localLuminancePTR=localLuminance+r.start;
const float *inputFramePTR=inputFrame+r.start;
float *outputFramePTR=outputFrame+r.start;
for (register int IDpixel=r.start ; IDpixel!=r.end ; ++IDpixel, ++inputFramePTR, ++outputFramePTR)
{
float X0=*(localLuminancePTR++)*localLuminanceFactor+localLuminanceAddon;
// TODO : the following line can lead to a divide by zero ! A small offset is added, take care if the offset is too large in case of High Dynamic Range images which can use very small values...
*(outputFramePTR) = (maxInputValue+X0)**inputFramePTR/(*inputFramePTR +X0+0.00000000001f);
//std::cout<<"BasicRetinaFilter::inputFrame[IDpixel]=%f, X0=%f, outputFrame[IDpixel]=%f\n", inputFrame[IDpixel], X0, outputFrame[IDpixel]);
}
}
};
//////////////////////////////////////////
 /// Specific filtering methods which manage a non-constant spatial filtering parameter (used by retinacolor and log projectors)
class Parallel_horizontalAnticausalFilter_Irregular: public cv::ParallelLoopBody
{
private:
float *outputFrame;
const float *spatialConstantBuffer;
unsigned int IDrowEnd, nbColumns;
public:
Parallel_horizontalAnticausalFilter_Irregular(float *bufferToProcess, const float *spatialConst, const unsigned int idEnd, const unsigned int nbCols)
:outputFrame(bufferToProcess), spatialConstantBuffer(spatialConst), IDrowEnd(idEnd), nbColumns(nbCols){}
virtual void operator()( const Range& r ) const {
for (int IDrow=r.start; IDrow!=r.end; ++IDrow)
{
register float* outputPTR=outputFrame+(IDrowEnd-IDrow)*(nbColumns)-1;
register const float* spatialConstantPTR=spatialConstantBuffer+(IDrowEnd-IDrow)*(nbColumns)-1;
register float result=0;
for (unsigned int index=0; index<nbColumns; ++index)
{
result = *(outputPTR)+ *(spatialConstantPTR--)* result;
*(outputPTR--) = result;
}
}
}
};
class Parallel_verticalCausalFilter_Irregular: public cv::ParallelLoopBody
{
private:
float *outputFrame;
const float *spatialConstantBuffer;
unsigned int nbRows, nbColumns;
public:
Parallel_verticalCausalFilter_Irregular(float *bufferToProcess, const float *spatialConst, const unsigned int nbRws, const unsigned int nbCols)
:outputFrame(bufferToProcess), spatialConstantBuffer(spatialConst), nbRows(nbRws), nbColumns(nbCols){}
virtual void operator()( const Range& r ) const {
for (int IDcolumn=r.start; IDcolumn!=r.end; ++IDcolumn)
{
register float result=0;
register float *outputPTR=outputFrame+IDcolumn;
register const float* spatialConstantPTR=spatialConstantBuffer+IDcolumn;
for (unsigned int index=0; index<nbRows; ++index)
{
result = *(outputPTR) + *(spatialConstantPTR) * result;
*(outputPTR) = result;
outputPTR+=nbColumns;
spatialConstantPTR+=nbColumns;
}
}
}
};
#endif
};
}// end of namespace bioinspired
}// end of namespace cv
#endif

View File

@@ -1,451 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
 ** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
 ** Use: extract features from still images & image sequences, from contour details to spatio-temporal motion features, etc., for high level visual scene analysis. Also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
 ** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
 ** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
 ** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
 ** The retina filter includes the research contributions of PhD/research colleagues whose code has been redrawn by the author :
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
 ** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
 ** ====> more information in Jeanny Herault's book cited above.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#include "precomp.hpp"
#include "imagelogpolprojection.hpp"
#include <cmath>
#include <iostream>
// @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
namespace cv
{
namespace bioinspired
{
// constructor
ImageLogPolProjection::ImageLogPolProjection(const unsigned int nbRows, const unsigned int nbColumns, const PROJECTIONTYPE projection, const bool colorModeCapable)
:BasicRetinaFilter(nbRows, nbColumns),
_sampledFrame(0),
_tempBuffer(_localBuffer),
_transformTable(0),
_irregularLPfilteredFrame(_filterOutput)
{
_inputDoubleNBpixels=nbRows*nbColumns*2;
_selectedProjection = projection;
_reductionFactor=0;
_initOK=false;
_usefullpixelIndex=0;
_colorModeCapable=colorModeCapable;
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::allocating"<<std::endl;
#endif
if (_colorModeCapable)
{
_tempBuffer.resize(nbRows*nbColumns*3);
}
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::done"<<std::endl;
#endif
clearAllBuffers();
}
// destructor
ImageLogPolProjection::~ImageLogPolProjection()
{
}
// reset buffers method
void ImageLogPolProjection::clearAllBuffers()
{
_sampledFrame=0;
_tempBuffer=0;
BasicRetinaFilter::clearAllBuffers();
}
/**
 * resize image log-polar projection object (resizes all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void ImageLogPolProjection::resize(const unsigned int NBrows, const unsigned int NBcolumns)
{
BasicRetinaFilter::resize(NBrows, NBcolumns);
initProjection(_reductionFactor, _samplingStrenght);
// reset buffers method
clearAllBuffers();
}
// init functions depending on the projection type
bool ImageLogPolProjection::initProjection(const double reductionFactor, const double samplingStrenght)
{
switch(_selectedProjection)
{
case RETINALOGPROJECTION:
return _initLogRetinaSampling(reductionFactor, samplingStrenght);
break;
case CORTEXLOGPOLARPROJECTION:
return _initLogPolarCortexSampling(reductionFactor, samplingStrenght);
break;
default:
std::cout<<"ImageLogPolProjection::no projection setted up... performing default retina projection... take care"<<std::endl;
return _initLogRetinaSampling(reductionFactor, samplingStrenght);
break;
}
}
// -> private init functions dedicated to each projection
bool ImageLogPolProjection::_initLogRetinaSampling(const double reductionFactor, const double samplingStrenght)
{
_initOK=false;
if (_selectedProjection!=RETINALOGPROJECTION)
{
std::cerr<<"ImageLogPolProjection::initLogRetinaSampling: could not initialize logPolar projection for a log projection system\n -> you probably chose the wrong init function, use initLogPolarCortexSampling() instead"<<std::endl;
return false;
}
if (reductionFactor<1.0)
{
std::cerr<<"ImageLogPolProjection::initLogRetinaSampling: reduction factor must be superior to 0, skeeping initialisation..."<<std::endl;
return false;
}
// compute image output size
_outputNBrows=predictOutputSize(this->getNBrows(), reductionFactor);
_outputNBcolumns=predictOutputSize(this->getNBcolumns(), reductionFactor);
_outputNBpixels=_outputNBrows*_outputNBcolumns;
_outputDoubleNBpixels=_outputNBrows*_outputNBcolumns*2;
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::initLogRetinaSampling: Log resampled image resampling factor: "<<reductionFactor<<", strenght:"<<samplingStrenght<<std::endl;
std::cout<<"ImageLogPolProjection::initLogRetinaSampling: Log resampled image size: "<<_outputNBrows<<"*"<<_outputNBcolumns<<std::endl;
#endif
// setup progressive prefilter that will be applied BEFORE log sampling
setProgressiveFilterConstants_CentredAccuracy(0.f, 0.f, 0.99f);
// (re)create the image output buffer and transform table if the reduction factor changed
_sampledFrame.resize(_outputNBpixels*(1+(unsigned int)_colorModeCapable*2));
 // specifying the new reduction factor after preliminary checks
_reductionFactor=reductionFactor;
_samplingStrenght=samplingStrenght;
 // compute the rlim for symmetric rows/columns sampling; the rlim is based on the smallest dimension
_minDimension=(double)(_filterOutput.getNBrows() < _filterOutput.getNBcolumns() ? _filterOutput.getNBrows() : _filterOutput.getNBcolumns());
// input frame dimensions dependent log sampling:
//double rlim=1.0/reductionFactor*(minDimension/2.0+samplingStrenght);
// input frame dimensions INdependent log sampling:
_azero=(1.0+reductionFactor*std::sqrt(samplingStrenght))/(reductionFactor*reductionFactor*samplingStrenght-1.0);
_alim=(1.0+_azero)/reductionFactor;
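 // note (illustrative): _azero and _alim define the inverse radial mapping used below,
 //   scale(r) = _azero / (_alim - 2*r/_minDimension)      (cf. getOriginalRadiusLength()),
 // so the local rescaling factor grows from the image center towards the borders,
 // which produces the foveated (log) sampling.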
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::initLogRetinaSampling: rlim= "<<rlim<<std::endl;
std::cout<<"ImageLogPolProjection::initLogRetinaSampling: alim= "<<alim<<std::endl;
#endif
// get half frame size
unsigned int halfOutputRows = _outputNBrows/2-1;
unsigned int halfOutputColumns = _outputNBcolumns/2-1;
unsigned int halfInputRows = _filterOutput.getNBrows()/2-1;
unsigned int halfInputColumns = _filterOutput.getNBcolumns()/2-1;
// computing log sampling matrix by computing quarters of images
// the original new image center (_filterOutput.getNBrows()/2, _filterOutput.getNBcolumns()/2) being at coordinate (_filterOutput.getNBrows()/(2*_reductionFactor), _filterOutput.getNBcolumns()/(2*_reductionFactor))
 // -> use a temporary transform table which is bigger than the final one; we only report pixel coordinates that are included in the sampled picture
 std::valarray<unsigned int> tempTransformTable(2*_outputNBpixels); // the structure is: (pixelOutputCoordinate n)(pixelInputCoordinate n)(pixelOutputCoordinate n+1)(pixelInputCoordinate n+1)...
_usefullpixelIndex=0;
double rMax=0;
halfInputRows<halfInputColumns ? rMax=(double)(halfInputRows*halfInputRows):rMax=(double)(halfInputColumns*halfInputColumns);
for (unsigned int idRow=0;idRow<halfOutputRows; ++idRow)
{
for (unsigned int idColumn=0;idColumn<halfOutputColumns; ++idColumn)
{
// get the pixel position in the original picture
// -> input frame dimensions dependent log sampling:
//double scale = samplingStrenght/(rlim-(double)std::sqrt(idRow*idRow+idColumn*idColumn));
// -> input frame dimensions INdependent log sampling:
double scale=getOriginalRadiusLength((double)std::sqrt((double)(idRow*idRow+idColumn*idColumn)));
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::initLogRetinaSampling: scale= "<<scale<<std::endl;
std::cout<<"ImageLogPolProjection::initLogRetinaSampling: scale2= "<<scale2<<std::endl;
#endif
if (scale < 0) ///check it later
scale = 10000;
#ifdef IMAGELOGPOLPROJECTION_DEBUG
// std::cout<<"ImageLogPolProjection::initLogRetinaSampling: scale= "<<scale<<std::endl;
#endif
unsigned int u=(unsigned int)floor((double)idRow*scale);
unsigned int v=(unsigned int)floor((double)idColumn*scale);
// manage border effects
double length=u*u+v*v;
double radiusRatio=std::sqrt(rMax/length);
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::(inputH, inputW)="<<halfInputRows<<", "<<halfInputColumns<<", Rmax2="<<rMax<<std::endl;
std::cout<<"before ==> ImageLogPolProjection::(u, v)="<<u<<", "<<v<<", r="<<u*u+v*v<<std::endl;
std::cout<<"ratio ="<<radiusRatio<<std::endl;
#endif
if (radiusRatio < 1.0)
{
u=(unsigned int)floor(radiusRatio*double(u));
v=(unsigned int)floor(radiusRatio*double(v));
}
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"after ==> ImageLogPolProjection::(u, v)="<<u<<", "<<v<<", r="<<u*u+v*v<<std::endl;
std::cout<<"ImageLogPolProjection::("<<(halfOutputRows-idRow)<<", "<<idColumn+halfOutputColumns<<") <- ("<<halfInputRows-u<<", "<<v+halfInputColumns<<")"<<std::endl;
std::cout<<(halfOutputRows-idRow)+(halfOutputColumns+idColumn)*_outputNBrows<<" -> "<<(halfInputRows-u)+_filterOutput.getNBrows()*(halfInputColumns+v)<<std::endl;
#endif
if ((u<halfInputRows)&&(v<halfInputColumns))
{
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"*** VALID ***"<<std::endl;
#endif
// set pixel coordinate of the input picture in the transform table at the current log sampled pixel
// 1st quadrant
tempTransformTable[_usefullpixelIndex++]=(halfOutputColumns+idColumn)+(halfOutputRows-idRow)*_outputNBcolumns;
tempTransformTable[_usefullpixelIndex++]=_filterOutput.getNBcolumns()*(halfInputRows-u)+(halfInputColumns+v);
// 2nd quadrant
tempTransformTable[_usefullpixelIndex++]=(halfOutputColumns+idColumn)+(halfOutputRows+idRow)*_outputNBcolumns;
tempTransformTable[_usefullpixelIndex++]=_filterOutput.getNBcolumns()*(halfInputRows+u)+(halfInputColumns+v);
// 3rd quadrant
tempTransformTable[_usefullpixelIndex++]=(halfOutputColumns-idColumn)+(halfOutputRows-idRow)*_outputNBcolumns;
tempTransformTable[_usefullpixelIndex++]=_filterOutput.getNBcolumns()*(halfInputRows-u)+(halfInputColumns-v);
 // 4th quadrant
tempTransformTable[_usefullpixelIndex++]=(halfOutputColumns-idColumn)+(halfOutputRows+idRow)*_outputNBcolumns;
tempTransformTable[_usefullpixelIndex++]=_filterOutput.getNBcolumns()*(halfInputRows+u)+(halfInputColumns-v);
}
}
}
// (re)creating and filling the transform table
_transformTable.resize(_usefullpixelIndex);
memcpy(&_transformTable[0], &tempTransformTable[0], sizeof(unsigned int)*_usefullpixelIndex);
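 // illustrative use of the table just built (this is what runProjection() does below;
 // "lowPassFilteredInput" is a placeholder name, not a member of this class):
 //   for (unsigned int i=0; i<_usefullpixelIndex; i+=2)
 //       _sampledFrame[_transformTable[i]] = lowPassFilteredInput[_transformTable[i+1]];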
// reset all buffers
clearAllBuffers();
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::initLogRetinaSampling: init done successfully"<<std::endl;
#endif
_initOK=true;
return _initOK;
}
bool ImageLogPolProjection::_initLogPolarCortexSampling(const double reductionFactor, const double)
{
_initOK=false;
if (_selectedProjection!=CORTEXLOGPOLARPROJECTION)
{
std::cerr<<"ImageLogPolProjection::could not initialize log projection for a logPolar projection system\n -> you probably chose the wrong init function, use initLogRetinaSampling() instead"<<std::endl;
return false;
}
if (reductionFactor<1.0)
{
std::cerr<<"ImageLogPolProjection::reduction factor must be superior to 0, skeeping initialisation..."<<std::endl;
return false;
}
// compute the smallest image size
unsigned int minDimension=(_filterOutput.getNBrows() < _filterOutput.getNBcolumns() ? _filterOutput.getNBrows() : _filterOutput.getNBcolumns());
 // specifying the new reduction factor after preliminary checks
_reductionFactor=reductionFactor;
// compute image output size
_outputNBrows=(unsigned int)((double)minDimension/reductionFactor);
_outputNBcolumns=(unsigned int)((double)minDimension/reductionFactor);
_outputNBpixels=_outputNBrows*_outputNBcolumns;
_outputDoubleNBpixels=_outputNBrows*_outputNBcolumns*2;
// get half frame size
//unsigned int halfOutputRows = _outputNBrows/2-1;
//unsigned int halfOutputColumns = _outputNBcolumns/2-1;
unsigned int halfInputRows = _filterOutput.getNBrows()/2-1;
unsigned int halfInputColumns = _filterOutput.getNBcolumns()/2-1;
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::Log resampled image size: "<<_outputNBrows<<"*"<<_outputNBcolumns<<std::endl;
#endif
// setup progressive prefilter that will be applied BEFORE log sampling
setProgressiveFilterConstants_CentredAccuracy(0.f, 0.f, 0.99f);
// (re)create the image output buffer and transform table if the reduction factor changed
_sampledFrame.resize(_outputNBpixels*(1+(unsigned int)_colorModeCapable*2));
 // create the radius and orientation axes and fill them; radius in [0; 2.3], orientation in (-2*pi; 0]
std::valarray<double> radiusAxis(_outputNBcolumns);
double radiusStep=2.30/(double)_outputNBcolumns;
for (unsigned int i=0;i<_outputNBcolumns;++i)
{
radiusAxis[i]=i*radiusStep;
}
std::valarray<double> orientationAxis(_outputNBrows);
double orientationStep=-2.0*CV_PI/(double)_outputNBrows;
for (unsigned int io=0;io<_outputNBrows;++io)
{
orientationAxis[io]=io*orientationStep;
}
 // -> use a temporary transform table which is bigger than the final one; we only report pixel coordinates that are included in the sampled picture
 std::valarray<unsigned int> tempTransformTable(2*_outputNBpixels); // the structure is: (pixelOutputCoordinate n)(pixelInputCoordinate n)(pixelOutputCoordinate n+1)(pixelInputCoordinate n+1)...
_usefullpixelIndex=0;
//std::cout<<"ImageLogPolProjection::Starting cortex projection"<<std::endl;
 // compute the transformation: get theta and radius with regard to the output sampled pixel
double diagonalLenght=std::sqrt((double)(_outputNBcolumns*_outputNBcolumns+_outputNBrows*_outputNBrows));
for (unsigned int radiusIndex=0;radiusIndex<_outputNBcolumns;++radiusIndex)
for(unsigned int orientationIndex=0;orientationIndex<_outputNBrows;++orientationIndex)
{
double x=1.0+sinh(radiusAxis[radiusIndex])*cos(orientationAxis[orientationIndex]);
double y=sinh(radiusAxis[radiusIndex])*sin(orientationAxis[orientationIndex]);
// get the input picture coordinate
double R=diagonalLenght*std::sqrt(x*x+y*y)/(5.0+std::sqrt(x*x+y*y));
double theta=atan2(y,x);
 // convert input polar coordinates into cartesian/C compatible coordinates
unsigned int columnIndex=(unsigned int)(cos(theta)*R)+halfInputColumns;
unsigned int rowIndex=(unsigned int)(sin(theta)*R)+halfInputRows;
//std::cout<<"ImageLogPolProjection::R="<<R<<" / Theta="<<theta<<" / (x, y)="<<columnIndex<<", "<<rowIndex<<std::endl;
if ((columnIndex<_filterOutput.getNBcolumns())&&(columnIndex>0)&&(rowIndex<_filterOutput.getNBrows())&&(rowIndex>0))
{
// set coordinate
tempTransformTable[_usefullpixelIndex++]=radiusIndex+orientationIndex*_outputNBcolumns;
tempTransformTable[_usefullpixelIndex++]= columnIndex+rowIndex*_filterOutput.getNBcolumns();
}
}
// (re)creating and filling the transform table
_transformTable.resize(_usefullpixelIndex);
memcpy(&_transformTable[0], &tempTransformTable[0], sizeof(unsigned int)*_usefullpixelIndex);
// reset all buffers
clearAllBuffers();
_initOK=true;
return true;
}
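 // note (illustrative reading of _initLogPolarCortexSampling above): each output pixel
 // (radiusIndex, orientationIndex) is mapped back to the input through the complex point
 // z = 1 + sinh(r)*exp(j*theta); its modulus is then compressed by R = diagonal*|z|/(5+|z|),
 // which keeps the projection roughly linear near the center and logarithmic towards the borders.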
// action function
std::valarray<float> &ImageLogPolProjection::runProjection(const std::valarray<float> &inputFrame, const bool colorMode)
{
if (_colorModeCapable&&colorMode)
{
// progressive filtering and storage of the result in _tempBuffer
_spatiotemporalLPfilter_Irregular(get_data(inputFrame), &_irregularLPfilteredFrame[0]);
 _spatiotemporalLPfilter_Irregular(&_irregularLPfilteredFrame[0], &_tempBuffer[0]); // warning: a temporal issue may occur if the temporal constant is not zero !!!
_spatiotemporalLPfilter_Irregular(get_data(inputFrame)+_filterOutput.getNBpixels(), &_irregularLPfilteredFrame[0]);
_spatiotemporalLPfilter_Irregular(&_irregularLPfilteredFrame[0], &_tempBuffer[0]+_filterOutput.getNBpixels());
_spatiotemporalLPfilter_Irregular(get_data(inputFrame)+_filterOutput.getNBpixels()*2, &_irregularLPfilteredFrame[0]);
_spatiotemporalLPfilter_Irregular(&_irregularLPfilteredFrame[0], &_tempBuffer[0]+_filterOutput.getNBpixels()*2);
// applying image projection/resampling
register unsigned int *transformTablePTR=&_transformTable[0];
for (unsigned int i=0 ; i<_usefullpixelIndex ; i+=2, transformTablePTR+=2)
{
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::i:"<<i<<"output(max="<<_outputNBpixels<<")="<<_transformTable[i]<<" / intput(max="<<_filterOutput.getNBpixels()<<")="<<_transformTable[i+1]<<std::endl;
#endif
_sampledFrame[*(transformTablePTR)]=_tempBuffer[*(transformTablePTR+1)];
_sampledFrame[*(transformTablePTR)+_outputNBpixels]=_tempBuffer[*(transformTablePTR+1)+_filterOutput.getNBpixels()];
_sampledFrame[*(transformTablePTR)+_outputDoubleNBpixels]=_tempBuffer[*(transformTablePTR+1)+_inputDoubleNBpixels];
}
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::runProjection: color image projection OK"<<std::endl;
#endif
//normalizeGrayOutput_0_maxOutputValue(_sampledFrame, _outputNBpixels);
}else
{
_spatiotemporalLPfilter_Irregular(get_data(inputFrame), &_irregularLPfilteredFrame[0]);
_spatiotemporalLPfilter_Irregular(&_irregularLPfilteredFrame[0], &_irregularLPfilteredFrame[0]);
// applying image projection/resampling
register unsigned int *transformTablePTR=&_transformTable[0];
for (unsigned int i=0 ; i<_usefullpixelIndex ; i+=2, transformTablePTR+=2)
{
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"i:"<<i<<"output(max="<<_outputNBpixels<<")="<<_transformTable[i]<<" / intput(max="<<_filterOutput.getNBpixels()<<")="<<_transformTable[i+1]<<std::endl;
#endif
_sampledFrame[*(transformTablePTR)]=_irregularLPfilteredFrame[*(transformTablePTR+1)];
}
//normalizeGrayOutput_0_maxOutputValue(_sampledFrame, _outputNBpixels);
#ifdef IMAGELOGPOLPROJECTION_DEBUG
std::cout<<"ImageLogPolProjection::runProjection: gray level image projection OK"<<std::endl;
#endif
}
return _sampledFrame;
}
}// end of namespace bioinspired
}// end of namespace cv

View File

@@ -1,243 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
 ** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
 ** Use: extract features from still images & image sequences, from contour details to spatio-temporal motion features, etc., for high level visual scene analysis. Also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
 ** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
 ** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
 ** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
 ** The retina filter includes the research contributions of PhD/research colleagues whose code has been redrawn by the author :
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
 ** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
 ** ====> more information in Jeanny Herault's book cited above.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#ifndef IMAGELOGPOLPROJECTION_H_
#define IMAGELOGPOLPROJECTION_H_
/**
* @class ImageLogPolProjection
* @brief class able to perform a log sampling of an image input (models the log sampling of the photoreceptors of the retina)
* or a log polar projection which models the retina information projection on the primary visual cortex: a linear projection in the center for detail analysis and a log projection of the borders (low spatial frequency motion information in general)
*
 * collaboration: Barthelemy DURETTE who experimented with the retina log projection
 * -> "Traitements visuels bio-mimétiques pour la suppléance perceptive", internal technical report, May 2005, Gipsa-lab/DIS, Grenoble, FRANCE
*
 * TYPICAL USE:
*
* // create object, here for a log sampling (keyword:RETINALOGPROJECTION): (dynamic object allocation sample)
* ImageLogPolProjection *imageSamplingTool;
* imageSamplingTool = new ImageLogPolProjection(frameSizeRows, frameSizeColumns, RETINALOGPROJECTION);
*
* // init log projection:
* imageSamplingTool->initProjection(1.0, 15.0);
*
* // during program execution, call the log transform applied to a frame called "FrameBuffer" :
* imageSamplingTool->runProjection(FrameBuffer);
* // get output frame and its size:
* const unsigned int logSampledFrame_nbRows=imageSamplingTool->getOutputNBrows();
* const unsigned int logSampledFrame_nbColumns=imageSamplingTool->getOutputNBcolumns();
 * const std::valarray<float> &logSampledFrame=imageSamplingTool->getSampledFrame();
*
* // at the end of the program, destroy object:
* delete imageSamplingTool;
*
* @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
* Creation date 2007
*/
//#define __IMAGELOGPOLPROJECTION_DEBUG // used for std output debug information
#include "basicretinafilter.hpp"
namespace cv
{
namespace bioinspired
{
class ImageLogPolProjection:public BasicRetinaFilter
{
public:
enum PROJECTIONTYPE{RETINALOGPROJECTION, CORTEXLOGPOLARPROJECTION};
/**
* constructor, just specifies the image input size and the projection type, no projection initialisation is done
* -> use initLogRetinaSampling() or initLogPolarCortexSampling() for that
* @param nbRows: number of rows of the input image
* @param nbColumns: number of columns of the input image
* @param projection: the type of projection, RETINALOGPROJECTION or CORTEXLOGPOLARPROJECTION
* @param colorMode: specifies if the projection is applied on a grayscale image (false) or color images (3 layers) (true)
*/
ImageLogPolProjection(const unsigned int nbRows, const unsigned int nbColumns, const PROJECTIONTYPE projection, const bool colorMode=false);
/**
* standard destructor
*/
virtual ~ImageLogPolProjection();
/**
* function that clears all buffers of the object
*/
void clearAllBuffers();
/**
 * resize image log-polar projection object (resizes all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
/**
* init function depending on the projection type
 * @param reductionFactor: the size reduction factor of the output image with regard to the size of the input image; must be superior or equal to 1
 * @param samplingStrenght: specifies the strength of the log compression effect (magnifying coefficient)
* @return true if the init was performed without any errors
*/
bool initProjection(const double reductionFactor, const double samplingStrenght);
/**
 * main function of the class: runs the projection
* @param inputFrame: the input frame to be processed
* @param colorMode: the input buffer color mode: false=gray levels, true = 3 color channels mode
* @return the output frame
*/
std::valarray<float> &runProjection(const std::valarray<float> &inputFrame, const bool colorMode=false);
/**
* @return the numbers of rows (height) of the images OUTPUTS of the object
*/
inline unsigned int getOutputNBrows(){return _outputNBrows;};
/**
* @return the number of columns (width) of the images output by the object
*/
inline unsigned int getOutputNBcolumns(){return _outputNBcolumns;};
/**
* static helper which predicts the output frame dimension obtained for a given input dimension
* @param size: one of the input frame initial dimensions to be processed
* @param reductionFactor: the size reduction factor applied by the projection
* @return the corresponding output frame dimension
*/
inline static unsigned int predictOutputSize(const unsigned int size, const double reductionFactor){return (unsigned int)((double)size/reductionFactor);};
/**
* @return the output of the filter which applies an irregular low pass spatial filter to the image input (see BasicRetinaFilter::runProgressiveFilter())
*/
inline const std::valarray<float> &getIrregularLPfilteredInputFrame() const {return _irregularLPfilteredFrame;};
/**
* function which retrieves the output frame updated by the runProjection(...) call through BasicRetinaFilter::runProgressiveFilter(...)
* @return the projection result
*/
inline const std::valarray<float> &getSampledFrame() const {return _sampledFrame;};
/**
* function which gives the transformation table; its size is (getNBrows()*getNBcolumns()*2)
* @return the transformation matrix [outputPixIndex_i, inputPixIndex_i, outputPixIndex_i+1, inputPixIndex_i+1....]
*/
inline const std::valarray<unsigned int> &getSamplingMap() const {return _transformTable;};
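/**
* reverse radius mapping (descriptive note for the formula below): given a radius measured in
* the projected output image, returns an estimate of the corresponding radius in the input
* image, derived from the log sampling parameters azero/alim set at init time
*/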
inline double getOriginalRadiusLength(const double projectedRadiusLength){return _azero/(_alim-projectedRadiusLength*2.0/_minDimension);};
// unsigned int getInputPixelIndex(const unsigned int ){ return _transformTable[index*2+1]};
private:
PROJECTIONTYPE _selectedProjection;
// size of the image output
unsigned int _outputNBrows;
unsigned int _outputNBcolumns;
unsigned int _outputNBpixels;
unsigned int _outputDoubleNBpixels;
unsigned int _inputDoubleNBpixels;
// is the object able to manage color flag
bool _colorModeCapable;
// sampling strength factor
double _samplingStrenght;
// sampling reduction factor
double _reductionFactor;
// log sampling parameters
double _azero;
double _alim;
double _minDimension;
// template buffers
std::valarray<float>_sampledFrame;
std::valarray<float>&_tempBuffer;
std::valarray<unsigned int>_transformTable;
std::valarray<float> &_irregularLPfilteredFrame; // just a reference for easier understanding
unsigned int _usefullpixelIndex;
// init transformation tables
bool _computeLogProjection();
bool _computeLogPolarProjection();
// specifies if init was done correctly
bool _initOK;
// private init projections functions called by "initProjection(...)" function
bool _initLogRetinaSampling(const double reductionFactor, const double samplingStrenght);
bool _initLogPolarCortexSampling(const double reductionFactor, const double samplingStrenght);
ImageLogPolProjection(const ImageLogPolProjection&);
ImageLogPolProjection& operator=(const ImageLogPolProjection&);
};
}// end of namespace bioinspired
}// end of namespace cv
#endif /*IMAGELOGPOLPROJECTION_H_*/

View File

@@ -1,212 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract still image & image sequence features, from contour details to motion spatio-temporal features, etc. for high level visual scene analysis. Also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues whose code has been redrawn by the author:
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#include "precomp.hpp"
#include <iostream>
#include "magnoretinafilter.hpp"
#include <cmath>
namespace cv
{
namespace bioinspired
{
// Constructor and Destructor of the IPL magno retina filter
MagnoRetinaFilter::MagnoRetinaFilter(const unsigned int NBrows, const unsigned int NBcolumns)
:BasicRetinaFilter(NBrows, NBcolumns, 2),
_previousInput_ON(NBrows*NBcolumns),
_previousInput_OFF(NBrows*NBcolumns),
_amacrinCellsTempOutput_ON(NBrows*NBcolumns),
_amacrinCellsTempOutput_OFF(NBrows*NBcolumns),
_magnoXOutputON(NBrows*NBcolumns),
_magnoXOutputOFF(NBrows*NBcolumns),
_localProcessBufferON(NBrows*NBcolumns),
_localProcessBufferOFF(NBrows*NBcolumns)
{
_magnoYOutput=&_filterOutput;
_magnoYsaturated=&_localBuffer;
clearAllBuffers();
#ifdef IPL_RETINA_ELEMENT_DEBUG
std::cout<<"MagnoRetinaFilter::Init IPL retina filter at specified frame size OK"<<std::endl;
#endif
}
MagnoRetinaFilter::~MagnoRetinaFilter()
{
#ifdef IPL_RETINA_ELEMENT_DEBUG
std::cout<<"MagnoRetinaFilter::Delete IPL retina filter OK"<<std::endl;
#endif
}
// function that clears all buffers of the object
void MagnoRetinaFilter::clearAllBuffers()
{
BasicRetinaFilter::clearAllBuffers();
_previousInput_ON=0;
_previousInput_OFF=0;
_amacrinCellsTempOutput_ON=0;
_amacrinCellsTempOutput_OFF=0;
_magnoXOutputON=0;
_magnoXOutputOFF=0;
_localProcessBufferON=0;
_localProcessBufferOFF=0;
}
/**
* resize retina magno filter object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void MagnoRetinaFilter::resize(const unsigned int NBrows, const unsigned int NBcolumns)
{
BasicRetinaFilter::resize(NBrows, NBcolumns);
_previousInput_ON.resize(NBrows*NBcolumns);
_previousInput_OFF.resize(NBrows*NBcolumns);
_amacrinCellsTempOutput_ON.resize(NBrows*NBcolumns);
_amacrinCellsTempOutput_OFF.resize(NBrows*NBcolumns);
_magnoXOutputON.resize(NBrows*NBcolumns);
_magnoXOutputOFF.resize(NBrows*NBcolumns);
_localProcessBufferON.resize(NBrows*NBcolumns);
_localProcessBufferOFF.resize(NBrows*NBcolumns);
// to be sure, relink buffers
_magnoYOutput=&_filterOutput;
_magnoYsaturated=&_localBuffer;
// reset all buffers
clearAllBuffers();
}
void MagnoRetinaFilter::setCoefficientsTable(const float parasolCells_beta, const float parasolCells_tau, const float parasolCells_k, const float amacrinCellsTemporalCutFrequency, const float localAdaptIntegration_tau, const float localAdaptIntegration_k )
{
_temporalCoefficient=(float)std::exp(-1.0f/amacrinCellsTemporalCutFrequency);
// the first set of parameters is dedicated to the low pass filtering property of the ganglion cells
BasicRetinaFilter::setLPfilterParameters(parasolCells_beta, parasolCells_tau, parasolCells_k, 0);
// the second set of parameters is dedicated to the ganglion cells output integration for their local adaptation property
BasicRetinaFilter::setLPfilterParameters(0, localAdaptIntegration_tau, localAdaptIntegration_k, 1);
}
void MagnoRetinaFilter::_amacrineCellsComputing(const float *OPL_ON, const float *OPL_OFF)
{
#ifdef MAKE_PARALLEL
cv::parallel_for_(cv::Range(0,_filterOutput.getNBpixels()), Parallel_amacrineCellsComputing(OPL_ON, OPL_OFF, &_previousInput_ON[0], &_previousInput_OFF[0], &_amacrinCellsTempOutput_ON[0], &_amacrinCellsTempOutput_OFF[0], _temporalCoefficient));
#else
register const float *OPL_ON_PTR=OPL_ON;
register const float *OPL_OFF_PTR=OPL_OFF;
register float *previousInput_ON_PTR= &_previousInput_ON[0];
register float *previousInput_OFF_PTR= &_previousInput_OFF[0];
register float *amacrinCellsTempOutput_ON_PTR= &_amacrinCellsTempOutput_ON[0];
register float *amacrinCellsTempOutput_OFF_PTR= &_amacrinCellsTempOutput_OFF[0];
for (unsigned int IDpixel=0 ; IDpixel<this->getNBpixels(); ++IDpixel)
{
/* Compute ON and OFF amacrin cells high pass temporal filter */
float magnoXonPixelResult = _temporalCoefficient*(*amacrinCellsTempOutput_ON_PTR+ *OPL_ON_PTR-*previousInput_ON_PTR);
*(amacrinCellsTempOutput_ON_PTR++)=((float)(magnoXonPixelResult>0))*magnoXonPixelResult;
float magnoXoffPixelResult = _temporalCoefficient*(*amacrinCellsTempOutput_OFF_PTR+ *OPL_OFF_PTR-*previousInput_OFF_PTR);
*(amacrinCellsTempOutput_OFF_PTR++)=((float)(magnoXoffPixelResult>0))*magnoXoffPixelResult;
/* prepare next loop */
*(previousInput_ON_PTR++)=*(OPL_ON_PTR++);
*(previousInput_OFF_PTR++)=*(OPL_OFF_PTR++);
}
#endif
}
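// Note on the recurrence above (explanatory sketch): with a = _temporalCoefficient
// = exp(-1/amacrinCellsTemporalCutFrequency), each pixel stream x[t] goes through a
// rectified first order high pass IIR filter
//   y[t] = max(0, a * (y[t-1] + x[t] - x[t-1]))
// so static inputs fade out and only temporal transients (moving contours) are kept.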
// launch filter that runs the whole IPL magno filter
const std::valarray<float> &MagnoRetinaFilter::runFilter(const std::valarray<float> &OPL_ON, const std::valarray<float> &OPL_OFF)
{
// Compute the high pass temporal filter
_amacrineCellsComputing(get_data(OPL_ON), get_data(OPL_OFF));
// apply low pass filtering on ON and OFF ways after temporal high pass filtering
_spatiotemporalLPfilter(&_amacrinCellsTempOutput_ON[0], &_magnoXOutputON[0], 0);
_spatiotemporalLPfilter(&_amacrinCellsTempOutput_OFF[0], &_magnoXOutputOFF[0], 0);
// local adaptation of the ganglion cells to the local contrast of the moving contours
_spatiotemporalLPfilter(&_magnoXOutputON[0], &_localProcessBufferON[0], 1);
_localLuminanceAdaptation(&_magnoXOutputON[0], &_localProcessBufferON[0]);
_spatiotemporalLPfilter(&_magnoXOutputOFF[0], &_localProcessBufferOFF[0], 1);
_localLuminanceAdaptation(&_magnoXOutputOFF[0], &_localProcessBufferOFF[0]);
/* Compute MagnoY */
register float *magnoYOutput= &(*_magnoYOutput)[0];
register float *magnoXOutputON_PTR= &_magnoXOutputON[0];
register float *magnoXOutputOFF_PTR= &_magnoXOutputOFF[0];
for (register unsigned int IDpixel=0 ; IDpixel<_filterOutput.getNBpixels() ; ++IDpixel)
*(magnoYOutput++)=*(magnoXOutputON_PTR++)+*(magnoXOutputOFF_PTR++);
return (*_magnoYOutput);
}
}// end of namespace bioinspired
}// end of namespace cv

View File

@@ -1,245 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract still image & image sequence features, from contour details to motion spatio-temporal features, etc. for high level visual scene analysis. Also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues whose code has been redrawn by the author:
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#ifndef MagnoRetinaFilter_H_
#define MagnoRetinaFilter_H_
/**
* @class MagnoRetinaFilter
* @brief class which describes the magnocellular channel of the retina:
* -> performs moving contours extraction with powerful local data enhancement
*
* TYPICAL USE:
*
* // create object at a specified picture size
* MagnoRetinaFilter *movingContoursExtractor;
* movingContoursExtractor =new MagnoRetinaFilter(frameSizeRows, frameSizeColumns);
*
* // init gain, spatial and temporal parameters (six arguments; illustrative values taken from the typical values documented below):
* movingContoursExtractor->setCoefficientsTable(0, 0, 5, 5, 0, 5);
*
* // during program execution, call the filter for moving contours extraction using the OPL bipolar cells outputs OPL_ON and OPL_OFF (see runFilter() below):
* movingContoursExtractor->runFilter(OPL_ON, OPL_OFF);
*
* // get the output frame, check in the class description below for more outputs:
* const std::valarray<float> &movingContours=movingContoursExtractor->getMagnoYsaturated();
*
* // at the end of the program, destroy object:
* delete movingContoursExtractor;
* @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
* Creation date 2007
* Based on Alexandre BENOIT thesis: "Le système visuel humain au secours de la vision par ordinateur"
*/
#include "basicretinafilter.hpp"
//#define _IPL_RETINA_ELEMENT_DEBUG
namespace cv
{
namespace bioinspired
{
class MagnoRetinaFilter: public BasicRetinaFilter
{
public:
/**
* constructor parameters are only linked to image input size
* @param NBrows: number of rows of the input image
* @param NBcolumns: number of columns of the input image
*/
MagnoRetinaFilter(const unsigned int NBrows, const unsigned int NBcolumns);
/**
* destructor
*/
virtual ~MagnoRetinaFilter();
/**
* function that clears all buffers of the object
*/
void clearAllBuffers();
/**
* resize retina magno filter object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
/**
* set parameters values
* @param parasolCells_beta: the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
* @param parasolCells_tau: the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)
* @param parasolCells_k: the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
* @param amacrinCellsTemporalCutFrequency: the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 5
* @param localAdaptIntegration_tau: specifies the temporal constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
* @param localAdaptIntegration_k: specifies the spatial constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
*/
void setCoefficientsTable(const float parasolCells_beta, const float parasolCells_tau, const float parasolCells_k, const float amacrinCellsTemporalCutFrequency, const float localAdaptIntegration_tau, const float localAdaptIntegration_k);
/**
* launch filter that runs the whole IPL magno filter (model of the magnocellular channel of the Inner Plexiform Layer of the retina)
* @param OPL_ON: the output of the bipolar ON cells of the retina (available from the ParvoRetinaFilter class, getBipolarCellsON() function)
* @param OPL_OFF: the output of the bipolar OFF cells of the retina (available from the ParvoRetinaFilter class, getBipolarCellsOFF() function)
* @return the processed result without post-processing
*/
const std::valarray<float> &runFilter(const std::valarray<float> &OPL_ON, const std::valarray<float> &OPL_OFF);
/**
* @return the Magnocellular ON channel filtering output
*/
inline const std::valarray<float> &getMagnoON() const {return _magnoXOutputON;};
/**
* @return the Magnocellular OFF channel filtering output
*/
inline const std::valarray<float> &getMagnoOFF() const {return _magnoXOutputOFF;};
/**
* @return the Magnocellular Y (sum of the ON and OFF magno channels) filtering output
*/
inline const std::valarray<float> &getMagnoYsaturated() const {return *_magnoYsaturated;};
/**
* applies an image normalization which saturates the high output values by the use of an asymmetric sigmoid
*/
inline void normalizeGrayOutputNearZeroCentreredSigmoide(){_filterOutput.normalizeGrayOutputNearZeroCentreredSigmoide(&(*_magnoYOutput)[0], &(*_magnoYsaturated)[0]);};
/**
* @return the temporal constant of the first low pass filter (parasol cells)
*/
inline float getTemporalConstant(){return this->_filteringCoeficientsTable[2];};
private:
// related pointers to these buffers
std::valarray<float> _previousInput_ON;
std::valarray<float> _previousInput_OFF;
std::valarray<float> _amacrinCellsTempOutput_ON;
std::valarray<float> _amacrinCellsTempOutput_OFF;
std::valarray<float> _magnoXOutputON;
std::valarray<float> _magnoXOutputOFF;
std::valarray<float> _localProcessBufferON;
std::valarray<float> _localProcessBufferOFF;
// reference to parent buffers and allow better readability
TemplateBuffer<float> *_magnoYOutput;
std::valarray<float> *_magnoYsaturated;
// variables
float _temporalCoefficient;
// amacrine cells filter : high pass temporal filter
void _amacrineCellsComputing(const float *ONinput, const float *OFFinput);
#ifdef MAKE_PARALLEL
/******************************************************
** IF some parallelizing thread methods are available, then, main loops are parallelized using these functors
** ==> main idea: parallelize the main filter loops; only the most used methods are parallelized so far... TODO : increase the number of parallelized methods as necessary
** ==> functor names = Parallel_$$$ where $$$ = the name of the serial method that is parallelized
** ==> functors constructors can differ from the parameters used with their related serial functions
*/
class Parallel_amacrineCellsComputing: public cv::ParallelLoopBody
{
private:
const float *OPL_ON, *OPL_OFF;
float *previousInput_ON, *previousInput_OFF, *amacrinCellsTempOutput_ON, *amacrinCellsTempOutput_OFF;
float temporalCoefficient;
public:
Parallel_amacrineCellsComputing(const float *OPL_ON_PTR, const float *OPL_OFF_PTR, float *previousInput_ON_PTR, float *previousInput_OFF_PTR, float *amacrinCellsTempOutput_ON_PTR, float *amacrinCellsTempOutput_OFF_PTR, float temporalCoefficientVal)
:OPL_ON(OPL_ON_PTR), OPL_OFF(OPL_OFF_PTR), previousInput_ON(previousInput_ON_PTR), previousInput_OFF(previousInput_OFF_PTR), amacrinCellsTempOutput_ON(amacrinCellsTempOutput_ON_PTR), amacrinCellsTempOutput_OFF(amacrinCellsTempOutput_OFF_PTR), temporalCoefficient(temporalCoefficientVal) {}
virtual void operator()( const Range& r ) const {
register const float *OPL_ON_PTR=OPL_ON+r.start;
register const float *OPL_OFF_PTR=OPL_OFF+r.start;
register float *previousInput_ON_PTR= previousInput_ON+r.start;
register float *previousInput_OFF_PTR= previousInput_OFF+r.start;
register float *amacrinCellsTempOutput_ON_PTR= amacrinCellsTempOutput_ON+r.start;
register float *amacrinCellsTempOutput_OFF_PTR= amacrinCellsTempOutput_OFF+r.start;
for (int IDpixel=r.start ; IDpixel!=r.end; ++IDpixel)
{
/* Compute ON and OFF amacrin cells high pass temporal filter */
float magnoXonPixelResult = temporalCoefficient*(*amacrinCellsTempOutput_ON_PTR+ *OPL_ON_PTR-*previousInput_ON_PTR);
*(amacrinCellsTempOutput_ON_PTR++)=((float)(magnoXonPixelResult>0))*magnoXonPixelResult;
float magnoXoffPixelResult = temporalCoefficient*(*amacrinCellsTempOutput_OFF_PTR+ *OPL_OFF_PTR-*previousInput_OFF_PTR);
*(amacrinCellsTempOutput_OFF_PTR++)=((float)(magnoXoffPixelResult>0))*magnoXoffPixelResult;
/* prepare next loop */
*(previousInput_ON_PTR++)=*(OPL_ON_PTR++);
*(previousInput_OFF_PTR++)=*(OPL_OFF_PTR++);
}
}
};
#endif
};
}// end of namespace bioinspired
}// end of namespace cv
#endif /*MagnoRetinaFilter_H_*/

View File

@@ -1,779 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2010-2013, Multicoreware, Inc., all rights reserved.
// Copyright (C) 2010-2013, Advanced Micro Devices, Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// @Authors
// Peng Xiao, pengxiao@multicorewareinc.com
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors as is and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
// data (which is float) is aligned to 32 bytes
#define WIDTH_MULTIPLE (32 >> 2)
/////////////////////////////////////////////////////////
//*******************************************************
// basicretinafilter
//////////////// _spatiotemporalLPfilter ////////////////
//_horizontalCausalFilter_addInput
kernel void horizontalCausalFilter_addInput(
global const float * input,
global float * output,
const int cols,
const int rows,
const int elements_per_row,
const int in_offset,
const int out_offset,
const float _tau,
const float _a
)
{
int gid = get_global_id(0);
if(gid >= rows)
{
return;
}
global const float * iptr =
input + mad24(gid, elements_per_row, in_offset / 4);
global float * optr =
output + mad24(gid, elements_per_row, out_offset / 4);
float res;
float4 in_v4, out_v4, res_v4 = (float4)(0);
//vectorize to increase throughput
for(int i = 0; i < cols / 4; ++i, iptr += 4, optr += 4)
{
in_v4 = vload4(0, iptr);
out_v4 = vload4(0, optr);
res_v4.x = in_v4.x + _tau * out_v4.x + _a * res_v4.w;
res_v4.y = in_v4.y + _tau * out_v4.y + _a * res_v4.x;
res_v4.z = in_v4.z + _tau * out_v4.z + _a * res_v4.y;
res_v4.w = in_v4.w + _tau * out_v4.w + _a * res_v4.z;
vstore4(res_v4, 0, optr);
}
res = res_v4.w;
// process the remaining columns when cols is not a multiple of 4
for(int i = 0; i < cols % 4; ++i, ++iptr, ++optr)
{
res = *iptr + _tau * *optr + _a * res;
*optr = res;
}
}
//_horizontalAnticausalFilter
kernel void horizontalAnticausalFilter(
global float * output,
const int cols,
const int rows,
const int elements_per_row,
const int out_offset,
const float _a
)
{
int gid = get_global_id(0);
if(gid >= rows)
{
return;
}
global float * optr = output +
mad24(gid + 1, elements_per_row, - 1 + out_offset / 4);
float4 result_v4 = (float4)(0), out_v4;
float result = 0;
// we assume elements_per_row is a multiple of WIDTH_MULTIPLE
for(int i = 0; i < WIDTH_MULTIPLE; ++ i, -- optr)
{
if(i >= elements_per_row - cols)
{
result = *optr + _a * result;
}
*optr = result;
}
result_v4.x = result;
optr -= 3;
for(int i = WIDTH_MULTIPLE / 4; i < elements_per_row / 4; ++i, optr -= 4)
{
// shift left, `offset` is type `size_t` so it cannot be negative
out_v4 = vload4(0, optr);
result_v4.w = out_v4.w + _a * result_v4.x;
result_v4.z = out_v4.z + _a * result_v4.w;
result_v4.y = out_v4.y + _a * result_v4.z;
result_v4.x = out_v4.x + _a * result_v4.y;
vstore4(result_v4, 0, optr);
}
}
//_verticalCausalFilter
kernel void verticalCausalFilter(
global float * output,
const int cols,
const int rows,
const int elements_per_row,
const int out_offset,
const float _a
)
{
int gid = get_global_id(0);
if(gid >= cols)
{
return;
}
global float * optr = output + gid + out_offset / 4;
float result = 0;
for(int i = 0; i < rows; ++i, optr += elements_per_row)
{
result = *optr + _a * result;
*optr = result;
}
}
//_verticalAnticausalFilter_multGain
kernel void verticalAnticausalFilter_multGain(
global float * output,
const int cols,
const int rows,
const int elements_per_row,
const int out_offset,
const float _a,
const float _gain
)
{
int gid = get_global_id(0);
if(gid >= cols)
{
return;
}
global float * optr = output + (rows - 1) * elements_per_row + gid + out_offset / 4;
float result = 0;
for(int i = 0; i < rows; ++i, optr -= elements_per_row)
{
result = *optr + _a * result;
*optr = _gain * result;
}
}
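// For reference (serial sketch of what the four kernels above compute): each row is first
// filtered by the causal recurrence y[i] = x[i] + _tau * y0[i] + _a * y[i-1] (y0 being the
// previous content of the output buffer), then by the anticausal recurrence
// y[i] = y[i] + _a * y[i+1] scanning right to left; the same causal/anticausal pair is then
// applied along columns, the last pass multiplying the result by _gain.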
//
// end of _spatiotemporalLPfilter
/////////////////////////////////////////////////////////////////////
//////////////// horizontalAnticausalFilter_Irregular ////////////////
kernel void horizontalAnticausalFilter_Irregular(
global float * output,
global float * buffer,
const int cols,
const int rows,
const int elements_per_row,
const int out_offset,
const int buffer_offset
)
{
int gid = get_global_id(0);
if(gid >= rows)
{
return;
}
global float * optr =
output + mad24(rows - gid, elements_per_row, -1 + out_offset / 4);
global float * bptr =
buffer + mad24(rows - gid, elements_per_row, -1 + buffer_offset / 4);
float4 buf_v4, out_v4, res_v4 = (float4)(0);
float result = 0;
// we assume elements_per_row is a multiple of WIDTH_MULTIPLE
for(int i = 0; i < WIDTH_MULTIPLE; ++ i, -- optr, -- bptr)
{
if(i >= elements_per_row - cols)
{
result = *optr + *bptr * result;
}
*optr = result;
}
res_v4.x = result;
optr -= 3;
bptr -= 3;
for(int i = WIDTH_MULTIPLE / 4; i < elements_per_row / 4; ++i, optr -= 4, bptr -= 4)
{
buf_v4 = vload4(0, bptr);
out_v4 = vload4(0, optr);
res_v4.w = out_v4.w + buf_v4.w * res_v4.x;
res_v4.z = out_v4.z + buf_v4.z * res_v4.w;
res_v4.y = out_v4.y + buf_v4.y * res_v4.z;
res_v4.x = out_v4.x + buf_v4.x * res_v4.y;
vstore4(res_v4, 0, optr);
}
}
//////////////// verticalCausalFilter_Irregular ////////////////
kernel void verticalCausalFilter_Irregular(
global float * output,
global float * buffer,
const int cols,
const int rows,
const int elements_per_row,
const int out_offset,
const int buffer_offset
)
{
int gid = get_global_id(0);
if(gid >= cols)
{
return;
}
global float * optr = output + gid + out_offset / 4;
global float * bptr = buffer + gid + buffer_offset / 4;
float result = 0;
for(int i = 0; i < rows; ++i, optr += elements_per_row, bptr += elements_per_row)
{
result = *optr + *bptr * result;
*optr = result;
}
}
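// Explanatory note: the two "Irregular" kernels above follow the same causal/anticausal
// recurrences as the filters at the top of this file, but with a spatially varying
// coefficient read per pixel from `buffer` instead of the constant _a.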
//////////////// _adaptiveHorizontalCausalFilter_addInput ////////////////
kernel void adaptiveHorizontalCausalFilter_addInput(
global const float * input,
global const float * gradient,
global float * output,
const int cols,
const int rows,
const int elements_per_row,
const int in_offset,
const int grad_offset,
const int out_offset
)
{
int gid = get_global_id(0);
if(gid >= rows)
{
return;
}
global const float * iptr =
input + mad24(gid, elements_per_row, in_offset / 4);
global const float * gptr =
gradient + mad24(gid, elements_per_row, grad_offset / 4);
global float * optr =
output + mad24(gid, elements_per_row, out_offset / 4);
float4 in_v4, grad_v4, out_v4, res_v4 = (float4)(0);
for(int i = 0; i < cols / 4; ++i, iptr += 4, gptr += 4, optr += 4)
{
in_v4 = vload4(0, iptr);
grad_v4 = vload4(0, gptr);
res_v4.x = in_v4.x + grad_v4.x * res_v4.w;
res_v4.y = in_v4.y + grad_v4.y * res_v4.x;
res_v4.z = in_v4.z + grad_v4.z * res_v4.y;
res_v4.w = in_v4.w + grad_v4.w * res_v4.z;
vstore4(res_v4, 0, optr);
}
for(int i = 0; i < cols % 4; ++i, ++iptr, ++gptr, ++optr)
{
res_v4.w = *iptr + *gptr * res_v4.w;
*optr = res_v4.w;
}
}
//////////////// _adaptiveVerticalAnticausalFilter_multGain ////////////////
kernel void adaptiveVerticalAnticausalFilter_multGain(
global const float * gradient,
global float * output,
const int cols,
const int rows,
const int elements_per_row,
const int grad_offset,
const int out_offset,
const float gain
)
{
int gid = get_global_id(0);
if(gid >= cols)
{
return;
}
int start_idx = mad24(rows - 1, elements_per_row, gid);
global const float * gptr = gradient + start_idx + grad_offset / 4;
global float * optr = output + start_idx + out_offset / 4;
float result = 0;
for(int i = 0; i < rows; ++i, gptr -= elements_per_row, optr -= elements_per_row)
{
result = *optr + *gptr * result;
*optr = gain * result;
}
}
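// Explanatory note: the two "adaptive" kernels above replace the constant recursion
// coefficient with a per pixel value read from the gradient buffer (presumably the
// 0.06/0.57 coefficients written by computeGradient further below), which steers the
// recursive filtering along image contours during demosaicing.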
//////////////// _localLuminanceAdaptation ////////////////
// FIXME:
// This kernel seems to have a precision problem on GPU
kernel void localLuminanceAdaptation(
global const float * luma,
global const float * input,
global float * output,
const int cols,
const int rows,
const int elements_per_row,
const float _localLuminanceAddon,
const float _localLuminanceFactor,
const float _maxInputValue
)
{
int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
int offset = mad24(gidy, elements_per_row, gidx);
float X0 = luma[offset] * _localLuminanceFactor + _localLuminanceAddon;
float input_val = input[offset];
// output of the following line may be different between GPU and CPU
output[offset] = (_maxInputValue + X0) * input_val / (input_val + X0 + 0.00000000001f);
}
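// Explanatory note: the line above implements a Michaelis-Menten like compression,
//   out = in * (maxInputValue + X0) / (in + X0),
// where X0 is derived from the local luminance, so dark regions are amplified more than
// bright ones.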
// end of basicretinafilter
//*******************************************************
/////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////
//******************************************************
// magno
// TODO: this kernel has too many buffer accesses; better to use
// vector reads/writes for fetch efficiency
kernel void amacrineCellsComputing(
global const float * opl_on,
global const float * opl_off,
global float * prev_in_on,
global float * prev_in_off,
global float * out_on,
global float * out_off,
const int cols,
const int rows,
const int elements_per_row,
const float coeff
)
{
int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
int offset = mad24(gidy, elements_per_row, gidx);
opl_on += offset;
opl_off += offset;
prev_in_on += offset;
prev_in_off += offset;
out_on += offset;
out_off += offset;
float magnoXonPixelResult = coeff * (*out_on + *opl_on - *prev_in_on);
*out_on = fmax(magnoXonPixelResult, 0);
float magnoXoffPixelResult = coeff * (*out_off + *opl_off - *prev_in_off);
*out_off = fmax(magnoXoffPixelResult, 0);
*prev_in_on = *opl_on;
*prev_in_off = *opl_off;
}
/////////////////////////////////////////////////////////
//******************************************************
// parvo
// TODO: this kernel has too many buffer accesses, needs optimization
kernel void OPL_OnOffWaysComputing(
global float4 * photo_out,
global float4 * horiz_out,
global float4 * bipol_on,
global float4 * bipol_off,
global float4 * parvo_on,
global float4 * parvo_off,
const int cols,
const int rows,
const int elements_per_row
)
{
int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx * 4 >= cols || gidy >= rows)
{
return;
}
// we assume elements_per_row is a multiple of 4
int offset = mad24(gidy, elements_per_row >> 2, gidx);
photo_out += offset;
horiz_out += offset;
bipol_on += offset;
bipol_off += offset;
parvo_on += offset;
parvo_off += offset;
float4 diff = *photo_out - *horiz_out;
float4 isPositive;// = convert_float4(diff > (float4)(0.0f, 0.0f, 0.0f, 0.0f));
isPositive.x = diff.x > 0.0f;
isPositive.y = diff.y > 0.0f;
isPositive.z = diff.z > 0.0f;
isPositive.w = diff.w > 0.0f;
float4 res_on = isPositive * diff;
float4 res_off = (isPositive - (float4)(1.0f)) * diff;
*bipol_on = res_on;
*parvo_on = res_on;
*bipol_off = res_off;
*parvo_off = res_off;
}
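// Explanatory note: with diff = photoreceptors - horizontal cells, the lines above split the
// signal into two rectified channels, res_on = max(diff, 0) and res_off = max(-diff, 0),
// which feed both the bipolar and the parvo ON/OFF buffers.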
/////////////////////////////////////////////////////////
//******************************************************
// retinacolor
inline int bayerSampleOffset(int step, int rows, int x, int y)
{
return mad24(y, step, x) +
((y % 2) + (x % 2)) * rows * step;
}
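// Explanatory note: the multiplexed buffer stacks three color planes vertically; the plane
// index is (y % 2) + (x % 2), which evaluates to 0, 1 or 2 on a Bayer mosaic (the two green
// sites of the mosaic presumably sharing the middle plane).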
/////// colorMultiplexing //////
kernel void runColorMultiplexingBayer(
global const float * input,
global float * output,
const int cols,
const int rows,
const int elements_per_row
)
{
int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
int offset = mad24(gidy, elements_per_row, gidx);
output[offset] = input[bayerSampleOffset(elements_per_row, rows, gidx, gidy)];
}
kernel void runColorDemultiplexingBayer(
global const float * input,
global float * output,
const int cols,
const int rows,
const int elements_per_row
)
{
int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
int offset = mad24(gidy, elements_per_row, gidx);
output[bayerSampleOffset(elements_per_row, rows, gidx, gidy)] = input[offset];
}
kernel void demultiplexAssign(
global const float * input,
global float * output,
const int cols,
const int rows,
const int elements_per_row
)
{
int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
int offset = bayerSampleOffset(elements_per_row, rows, gidx, gidy);
output[offset] = input[offset];
}
//// normalizeGrayOutputCentredSigmoide
kernel void normalizeGrayOutputCentredSigmoide(
global const float * input,
global float * output,
const int cols,
const int rows,
const int elements_per_row,
const float meanval,
const float X0
)
{
int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
int offset = mad24(gidy, elements_per_row, gidx);
float input_val = input[offset];
output[offset] = meanval +
(meanval + X0) * (input_val - meanval) / (fabs(input_val - meanval) + X0);
}
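// Explanatory note: the kernel above recenters values around meanval and compresses them
// with a symmetric sigmoid, out = mean + (mean + X0) * (in - mean) / (|in - mean| + X0),
// which bounds the output while preserving the sign of the deviation from the mean.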
//// normalize by photoreceptors density
kernel void normalizePhotoDensity(
global const float * chroma,
global const float * colorDensity,
global const float * multiplex,
global float * luma,
global float * demultiplex,
const int cols,
const int rows,
const int elements_per_row,
const float pG
)
{
const int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
const int offset = mad24(gidy, elements_per_row, gidx);
int index = offset;
float Cr = chroma[index] * colorDensity[index];
index += elements_per_row * rows;
float Cg = chroma[index] * colorDensity[index];
index += elements_per_row * rows;
float Cb = chroma[index] * colorDensity[index];
const float luma_res = (Cr + Cg + Cb) * pG;
luma[offset] = luma_res;
demultiplex[bayerSampleOffset(elements_per_row, rows, gidx, gidy)] =
multiplex[offset] - luma_res;
}
//////// computeGradient ///////
// TODO:
// this function may be accelerated using image2d_t or local (LDS) memory
kernel void computeGradient(
global const float * luma,
global float * gradient,
const int cols,
const int rows,
const int elements_per_row
)
{
int gidx = get_global_id(0) + 2, gidy = get_global_id(1) + 2;
if(gidx >= cols - 2 || gidy >= rows - 2)
{
return;
}
int offset = mad24(gidy, elements_per_row, gidx);
luma += offset;
// horizontal and vertical local gradients
const float v_grad = fabs(luma[elements_per_row] - luma[- elements_per_row]);
const float h_grad = fabs(luma[1] - luma[-1]);
// neighborhood horizontal and vertical gradients
const float cur_val = luma[0];
const float v_grad_p = fabs(cur_val - luma[- 2 * elements_per_row]);
const float h_grad_p = fabs(cur_val - luma[- 2]);
const float v_grad_n = fabs(cur_val - luma[2 * elements_per_row]);
const float h_grad_n = fabs(cur_val - luma[2]);
const float horiz_grad = 0.5f * h_grad + 0.25f * (h_grad_p + h_grad_n);
const float verti_grad = 0.5f * v_grad + 0.25f * (v_grad_p + v_grad_n);
const bool is_vertical_greater = horiz_grad < verti_grad;
gradient[offset + elements_per_row * rows] = is_vertical_greater ? 0.06f : 0.57f;
gradient[offset ] = is_vertical_greater ? 0.57f : 0.06f;
}
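// Explanatory note: for each pixel, the weaker gradient direction receives the strong
// recursion coefficient (0.57) and the other direction the weak one (0.06); the two
// coefficient planes feed the adaptive filtering kernels defined earlier in this file.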
/////// substractResidual ///////
kernel void substractResidual(
global float * input,
const int cols,
const int rows,
const int elements_per_row,
const float pR,
const float pG,
const float pB
)
{
const int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
int indices [3] =
{
mad24(gidy, elements_per_row, gidx),
mad24(gidy + rows, elements_per_row, gidx),
mad24(gidy + 2 * rows, elements_per_row, gidx)
};
float vals[3] = {input[indices[0]], input[indices[1]], input[indices[2]]};
float residu = pR * vals[0] + pG * vals[1] + pB * vals[2];
input[indices[0]] = vals[0] - residu;
input[indices[1]] = vals[1] - residu;
input[indices[2]] = vals[2] - residu;
}
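// Explanatory note: the kernel above computes the luminance residue
// residu = pR*R + pG*G + pB*B and subtracts it from each of the three channels,
// leaving (approximately) pure chrominance information.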
///// clipRGBOutput_0_maxInputValue /////
kernel void clipRGBOutput_0_maxInputValue(
global float * input,
const int cols,
const int rows,
const int elements_per_row,
const float maxVal
)
{
const int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
const int offset = mad24(gidy, elements_per_row, gidx);
float val = input[offset];
val = clamp(val, 0.0f, maxVal);
input[offset] = val;
}
//// normalizeGrayOutputNearZeroCentreredSigmoide ////
kernel void normalizeGrayOutputNearZeroCentreredSigmoide(
global float * input,
global float * output,
const int cols,
const int rows,
const int elements_per_row,
const float maxVal,
const float X0cube
)
{
const int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
const int offset = mad24(gidy, elements_per_row, gidx);
float currentCubeLuminance = input[offset];
currentCubeLuminance = currentCubeLuminance * currentCubeLuminance * currentCubeLuminance;
output[offset] = currentCubeLuminance * X0cube / (X0cube + currentCubeLuminance);
}
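// Explanatory note: the kernel above applies a near zero centred sigmoid to the cubed
// input, out = v^3 * X0cube / (X0cube + v^3), which flattens small values and saturates
// large ones.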
//// centerReductImageLuminance ////
kernel void centerReductImageLuminance(
global float * input,
const int cols,
const int rows,
const int elements_per_row,
const float mean,
const float std_dev
)
{
const int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
const int offset = mad24(gidy, elements_per_row, gidx);
float val = input[offset];
input[offset] = (val - mean) / std_dev;
}
//// inverseValue ////
kernel void inverseValue(
global float * input,
const int cols,
const int rows,
const int elements_per_row
)
{
const int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
const int offset = mad24(gidy, elements_per_row, gidx);
input[offset] = 1.f / input[offset];
}
#define CV_PI 3.1415926535897932384626433832795
//// _processRetinaParvoMagnoMapping ////
kernel void processRetinaParvoMagnoMapping(
global float * parvo,
global float * magno,
global float * output,
const int cols,
const int rows,
const int halfCols,
const int halfRows,
const int elements_per_row,
const float minDistance
)
{
const int gidx = get_global_id(0), gidy = get_global_id(1);
if(gidx >= cols || gidy >= rows)
{
return;
}
const int offset = mad24(gidy, elements_per_row, gidx);
float distanceToCenter =
sqrt(((float)(gidy - halfRows) * (gidy - halfRows) + (gidx - halfCols) * (gidx - halfCols)));
float a = distanceToCenter < minDistance ?
(0.5f + 0.5f * (float)cos(CV_PI * distanceToCenter / minDistance)) : 0;
float b = 1.f - a;
output[offset] = parvo[offset] * a + magno[offset] * b;
}
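// Explanatory note: this computes a raised cosine foveal blend,
//   a(d) = 0.5 + 0.5 * cos(CV_PI * d / minDistance) for d < minDistance, 0 otherwise,
// so the parvo (detail) channel dominates at the image center and the magno (motion)
// channel takes over in the periphery.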

View File

@@ -1,233 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract still image & image sequence features, from contour details to motion spatio-temporal features, etc. for high level visual scene analysis. Also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues whose code has been redrawn by the author:
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#include "precomp.hpp"
#include "parvoretinafilter.hpp"
// @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
#include <iostream>
#include <cmath>
namespace cv
{
namespace bioinspired
{
//////////////////////////////////////////////////////////
// OPL RETINA FILTER
//////////////////////////////////////////////////////////
// Constructor and Destructor of the OPL retina filter
ParvoRetinaFilter::ParvoRetinaFilter(const unsigned int NBrows, const unsigned int NBcolumns)
:BasicRetinaFilter(NBrows, NBcolumns, 3),
_photoreceptorsOutput(NBrows*NBcolumns),
_horizontalCellsOutput(NBrows*NBcolumns),
_parvocellularOutputON(NBrows*NBcolumns),
_parvocellularOutputOFF(NBrows*NBcolumns),
_bipolarCellsOutputON(NBrows*NBcolumns),
_bipolarCellsOutputOFF(NBrows*NBcolumns),
_localAdaptationOFF(NBrows*NBcolumns)
{
// link to the required local parent adaptation buffers
_localAdaptationON=&_localBuffer;
_parvocellularOutputONminusOFF=&_filterOutput;
// (*_localAdaptationON)=&_localBuffer;
// (*_parvocellularOutputONminusOFF)=&(BasicRetinaFilter::TemplateBuffer);
// init: set all the values to 0
clearAllBuffers();
#ifdef OPL_RETINA_ELEMENT_DEBUG
std::cout<<"ParvoRetinaFilter::Init OPL retina filter at specified frame size OK\n"<<std::endl;
#endif
}
ParvoRetinaFilter::~ParvoRetinaFilter()
{
#ifdef OPL_RETINA_ELEMENT_DEBUG
std::cout<<"ParvoRetinaFilter::Delete OPL retina filter OK"<<std::endl;
#endif
}
////////////////////////////////////
// functions of the PARVO filter
////////////////////////////////////
// function that clears all buffers of the object
void ParvoRetinaFilter::clearAllBuffers()
{
BasicRetinaFilter::clearAllBuffers();
_photoreceptorsOutput=0;
_horizontalCellsOutput=0;
_parvocellularOutputON=0;
_parvocellularOutputOFF=0;
_bipolarCellsOutputON=0;
_bipolarCellsOutputOFF=0;
_localAdaptationOFF=0;
}
/**
* resize parvo retina filter object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void ParvoRetinaFilter::resize(const unsigned int NBrows, const unsigned int NBcolumns)
{
BasicRetinaFilter::resize(NBrows, NBcolumns);
_photoreceptorsOutput.resize(NBrows*NBcolumns);
_horizontalCellsOutput.resize(NBrows*NBcolumns);
_parvocellularOutputON.resize(NBrows*NBcolumns);
_parvocellularOutputOFF.resize(NBrows*NBcolumns);
_bipolarCellsOutputON.resize(NBrows*NBcolumns);
_bipolarCellsOutputOFF.resize(NBrows*NBcolumns);
_localAdaptationOFF.resize(NBrows*NBcolumns);
// link to the required local parent adaptation buffers
_localAdaptationON=&_localBuffer;
_parvocellularOutputONminusOFF=&_filterOutput;
// clean buffers
clearAllBuffers();
}
// change the parameters of the filter
void ParvoRetinaFilter::setOPLandParvoFiltersParameters(const float beta1, const float tau1, const float k1, const float beta2, const float tau2, const float k2)
{
// init photoreceptors low pass filter
setLPfilterParameters(beta1, tau1, k1);
// init horizontal cells low pass filter
setLPfilterParameters(beta2, tau2, k2, 1);
// init parasol ganglion cells low pass filter (default parameters)
setLPfilterParameters(0, tau1, k1, 2);
}
// update/set size of the frames
// run filter for a new frame input
// output return is (*_parvocellularOutputONminusOFF)
const std::valarray<float> &ParvoRetinaFilter::runFilter(const std::valarray<float> &inputFrame, const bool useParvoOutput)
{
_spatiotemporalLPfilter(get_data(inputFrame), &_photoreceptorsOutput[0]);
_spatiotemporalLPfilter(&_photoreceptorsOutput[0], &_horizontalCellsOutput[0], 1);
_OPL_OnOffWaysComputing();
if (useParvoOutput)
{
// local adaptation processes on ON and OFF ways
_spatiotemporalLPfilter(&_bipolarCellsOutputON[0], &(*_localAdaptationON)[0], 2);
_localLuminanceAdaptation(&_parvocellularOutputON[0], &(*_localAdaptationON)[0]);
_spatiotemporalLPfilter(&_bipolarCellsOutputOFF[0], &_localAdaptationOFF[0], 2);
_localLuminanceAdaptation(&_parvocellularOutputOFF[0], &_localAdaptationOFF[0]);
//// Final loop that computes the main output of this filter
//
//// loop that computes the difference between the locally adapted parvocellular ON and OFF outputs
register float *parvocellularOutputONminusOFF_PTR=&(*_parvocellularOutputONminusOFF)[0];
register float *parvocellularOutputON_PTR=&_parvocellularOutputON[0];
register float *parvocellularOutputOFF_PTR=&_parvocellularOutputOFF[0];
for (register unsigned int IDpixel=0 ; IDpixel<_filterOutput.getNBpixels() ; ++IDpixel)
*(parvocellularOutputONminusOFF_PTR++)= (*(parvocellularOutputON_PTR++)-*(parvocellularOutputOFF_PTR++));
}
return (*_parvocellularOutputONminusOFF);
}
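// Summary of the parvo pipeline implemented above (explanatory sketch):
//   photoreceptorsOutput  = LPfilter#0(inputFrame)           // spatio-temporal smoothing
//   horizontalCellsOutput = LPfilter#1(photoreceptorsOutput)
//   bipolar ON / OFF      = rectified(photoreceptorsOutput - horizontalCellsOutput)
//   parvo ON / OFF        = local luminance adaptation applied to the copied bipolar outputs
//   returned output       = parvo ON - parvo OFF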
void ParvoRetinaFilter::_OPL_OnOffWaysComputing() // WARNING : this method requires many buffer accesses, parallelizing can increase bandwidth & core efficiency
{
// loop that makes the difference between photoreceptor cells output and horizontal cells
// positive part goes on the ON way, negative part goes on the OFF way
#ifdef MAKE_PARALLEL
cv::parallel_for_(cv::Range(0,_filterOutput.getNBpixels()), Parallel_OPL_OnOffWaysComputing(&_photoreceptorsOutput[0], &_horizontalCellsOutput[0], &_bipolarCellsOutputON[0], &_bipolarCellsOutputOFF[0], &_parvocellularOutputON[0], &_parvocellularOutputOFF[0]));
#else
float *photoreceptorsOutput_PTR= &_photoreceptorsOutput[0];
float *horizontalCellsOutput_PTR= &_horizontalCellsOutput[0];
float *bipolarCellsON_PTR = &_bipolarCellsOutputON[0];
float *bipolarCellsOFF_PTR = &_bipolarCellsOutputOFF[0];
float *parvocellularOutputON_PTR= &_parvocellularOutputON[0];
float *parvocellularOutputOFF_PTR= &_parvocellularOutputOFF[0];
// compute bipolar cells response equal to photoreceptors minus horizontal cells response
// and copy the result to the parvocellular outputs... keeping their state before local contrast adaptation for the final result
for (register unsigned int IDpixel=0 ; IDpixel<_filterOutput.getNBpixels() ; ++IDpixel)
{
float pixelDifference = *(photoreceptorsOutput_PTR++) -*(horizontalCellsOutput_PTR++);
// test condition used to write pixelDifference to the ON or OFF buffer and 0 to the other
float isPositive=(float) (pixelDifference>0.0f);
// ON and OFF channels writing step
*(parvocellularOutputON_PTR++)=*(bipolarCellsON_PTR++) = isPositive*pixelDifference;
*(parvocellularOutputOFF_PTR++)=*(bipolarCellsOFF_PTR++)= (isPositive-1.0f)*pixelDifference;
}
#endif
}
}// end of namespace bioinspired
}// end of namespace cv

View File

@@ -1,263 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author&collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract still image & image sequence features, from contour details to motion spatio-temporal features, etc. for high level visual scene analysis. It also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from which code has been redrawn by the author :
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#ifndef ParvoRetinaFilter_H_
#define ParvoRetinaFilter_H_
/**
* @class ParvoRetinaFilter
* @brief class which describes the OPL retina model and the Inner Plexiform Layer parvocellular channel of the retina:
* -> performs contour extraction with powerful local data enhancement, as done at the retina level
* -> spectrum whitening occurs at the OPL (Outer Plexiform Layer) of the retina: corrects the 1/f spectrum tendency of natural images
* ---> enhances details with mid spatial frequencies, attenuates low spatial frequencies (luminance), attenuates high temporal frequencies and high spatial frequencies, etc.
*
* TYPICAL USE:
*
* // create object at a specified picture size
* ParvoRetinaFilter *contoursExtractor;
* contoursExtractor =new ParvoRetinaFilter(frameSizeRows, frameSizeColumns);
*
* // init gain, spatial and temporal parameters:
* contoursExtractor->setOPLandParvoFiltersParameters(0, 0.7, 1, 0, 7, 1);
*
* // during program execution, call the filter for contour extraction on an input picture called "FrameBuffer":
* // (the returned reference is the ON-minus-OFF contours output, check in the class description below for more outputs)
* const std::valarray<float> &contours = contoursExtractor->runFilter(FrameBuffer);
*
* // at the end of the program, destroy object:
* delete contoursExtractor;
* @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
* Creation date 2007
* Based on Alexandre BENOIT thesis: "Le système visuel humain au secours de la vision par ordinateur"
*
*/
#include "basicretinafilter.hpp"
//#define _OPL_RETINA_ELEMENT_DEBUG
namespace cv
{
namespace bioinspired
{
// retina classes that derive from the BasicRetinaFilter base class
class ParvoRetinaFilter: public BasicRetinaFilter
{
public:
/**
* constructor parameters are only linked to image input size
* @param NBrows: number of rows of the input image
* @param NBcolumns: number of columns of the input image
*/
ParvoRetinaFilter(const unsigned int NBrows=480, const unsigned int NBcolumns=640);
/**
* standard destructor
*/
virtual ~ParvoRetinaFilter();
/**
* resize method, keeps initial parameters, all buffers are flushed
* @param NBrows: number of rows of the input image
* @param NBcolumns: number of columns of the input image
*/
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
/**
* function that clears all buffers of the object
*/
void clearAllBuffers();
/**
* setup the OPL and IPL parvo channels
* @param beta1: gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, the amplitude is boosted but it should only be used for values rescaling... if needed
* @param tau1: the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
* @param k1: the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
* @param beta2: gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, then, the luminance is not filtered and is still reachable at the output, typical value is 0
* @param tau2: the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors
* @param k2: the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixels, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
*/
void setOPLandParvoFiltersParameters(const float beta1, const float tau1, const float k1, const float beta2, const float tau2, const float k2);
/**
* setup more precisely the low pass filter used for the ganglion cells low pass filtering (used for local luminance adaptation)
* @param tau: time constant of the filter (unit is frame for video processing)
* @param k: spatial constant of the filter (unit is pixels)
*/
void setGanglionCellsLocalAdaptationLPfilterParameters(const float tau, const float k){BasicRetinaFilter::setLPfilterParameters(0, tau, k, 2);}; // change the parameters of the filter
/**
* launch filter that runs the OPL spatiotemporal filtering and optionally finalizes the IPL Parvo filter (model of the Parvocellular channel of the Inner Plexiform Layer of the retina)
* @param inputFrame: the input image to be processed, this can be the direct gray level input frame, but a better efficacy is expected if the input is preliminarily processed by the photoreceptors local adaptation (possible to achieve with the help of a BasicRetinaFilter object)
* @param useParvoOutput: set true if the final IPL filtering step has to be computed (local contrast enhancement)
* @return the processed Parvocellular channel output (updated only if useParvoOutput is true)
* @details: in any case, after this function call, photoreceptors and horizontal cells outputs are updated, use getPhotoreceptorsLPfilteringOutput() and getHorizontalCellsOutput() to get them
* also, bipolar cells outputs are accessible (difference between photoreceptors and horizontal cells: the ON output holds its positive part, the OFF output holds the magnitude of its negative part), use the following access methods: getBipolarCellsON() and getBipolarCellsOFF()
* if useParvoOutput is true, the complete Parvocellular channel is computed, more outputs are updated and can be accessed through: getParvoON(), getParvoOFF() and their difference with getOutput()
*/
const std::valarray<float> &runFilter(const std::valarray<float> &inputFrame, const bool useParvoOutput=true); // output return is _parvocellularOutputONminusOFF
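// Hedged usage sketch (illustrative only; 'preprocessedFrame' is an assumed
// NBrows*NBcolumns valarray, ideally already passed through photoreceptors local adaptation):
//   ParvoRetinaFilter parvo(NBrows, NBcolumns);
//   const std::valarray<float> &contours = parvo.runFilter(preprocessedFrame, true);
//   const std::valarray<float> &onWay  = parvo.getParvoON();
//   const std::valarray<float> &offWay = parvo.getParvoOFF();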
/**
* @return the output of the photoreceptors filtering step (high cut frequency spatio-temporal low pass filter)
*/
inline const std::valarray<float> &getPhotoreceptorsLPfilteringOutput() const {return _photoreceptorsOutput;};
/**
* @return the output of the horizontal cells filtering step (low cut frequency spatio-temporal low pass filter)
*/
inline const std::valarray<float> &getHorizontalCellsOutput() const { return _horizontalCellsOutput;};
/**
* @return the output Parvocellular ON channel of the retina model
*/
inline const std::valarray<float> &getParvoON() const {return _parvocellularOutputON;};
/**
* @return the output Parvocellular OFF channel of the retina model
*/
inline const std::valarray<float> &getParvoOFF() const {return _parvocellularOutputOFF;};
/**
* @return the output of the Bipolar cells of the ON channel of the retina model, same as function getParvoON() but without luminance local adaptation
*/
inline const std::valarray<float> &getBipolarCellsON() const {return _bipolarCellsOutputON;};
/**
* @return the output of the Bipolar cells of the OFF channel of the retina model, same as function getParvoOFF() but without luminance local adaptation
*/
inline const std::valarray<float> &getBipolarCellsOFF() const {return _bipolarCellsOutputOFF;};
/**
* @return the photoreceptors' temporal constant
*/
inline float getPhotoreceptorsTemporalConstant(){return this->_filteringCoeficientsTable[2];};
/**
* @return the horizontal cells' temporal constant
*/
inline float getHcellsTemporalConstant(){return this->_filteringCoeficientsTable[5];};
private:
// template buffers
std::valarray <float>_photoreceptorsOutput;
std::valarray <float>_horizontalCellsOutput;
std::valarray <float>_parvocellularOutputON;
std::valarray <float>_parvocellularOutputOFF;
std::valarray <float>_bipolarCellsOutputON;
std::valarray <float>_bipolarCellsOutputOFF;
std::valarray <float>_localAdaptationOFF;
std::valarray <float> *_localAdaptationON;
TemplateBuffer<float> *_parvocellularOutputONminusOFF;
// private functions
void _OPL_OnOffWaysComputing();
#ifdef MAKE_PARALLEL
/******************************************************
** IF some parallelizing thread methods are available, then, main loops are parallelized using these functors
** ==> main idea: parallelise the main filter loops, then, only the most used methods are parallelized... TODO : increase the number of parallelised methods as necessary
** ==> functors names = Parallel_$$$ where $$$= the name of the serial method that is parallelised
** ==> functors constructors can differ from the parameters used with their related serial functions
*/
class Parallel_OPL_OnOffWaysComputing: public cv::ParallelLoopBody
{
private:
float *photoreceptorsOutput, *horizontalCellsOutput, *bipolarCellsON, *bipolarCellsOFF, *parvocellularOutputON, *parvocellularOutputOFF;
public:
Parallel_OPL_OnOffWaysComputing(float *photoreceptorsOutput_PTR, float *horizontalCellsOutput_PTR, float *bipolarCellsON_PTR, float *bipolarCellsOFF_PTR, float *parvocellularOutputON_PTR, float *parvocellularOutputOFF_PTR)
:photoreceptorsOutput(photoreceptorsOutput_PTR), horizontalCellsOutput(horizontalCellsOutput_PTR), bipolarCellsON(bipolarCellsON_PTR), bipolarCellsOFF(bipolarCellsOFF_PTR), parvocellularOutputON(parvocellularOutputON_PTR), parvocellularOutputOFF(parvocellularOutputOFF_PTR) {}
virtual void operator()( const Range& r ) const {
// compute bipolar cells response equal to photoreceptors minus horizontal cells response
// and copy the result on parvo cellular outputs... keeping time before their local contrast adaptation for final result
float *photoreceptorsOutput_PTR= photoreceptorsOutput+r.start;
float *horizontalCellsOutput_PTR= horizontalCellsOutput+r.start;
float *bipolarCellsON_PTR = bipolarCellsON+r.start;
float *bipolarCellsOFF_PTR = bipolarCellsOFF+r.start;
float *parvocellularOutputON_PTR= parvocellularOutputON+r.start;
float *parvocellularOutputOFF_PTR= parvocellularOutputOFF+r.start;
for (register int IDpixel=r.start ; IDpixel!=r.end ; ++IDpixel)
{
float pixelDifference = *(photoreceptorsOutput_PTR++) -*(horizontalCellsOutput_PTR++);
// test condition used to write pixelDifference to the ON or OFF buffer and 0 to the other
float isPositive=(float) (pixelDifference>0.0f);
// ON and OFF channels writing step
*(parvocellularOutputON_PTR++)=*(bipolarCellsON_PTR++) = isPositive*pixelDifference;
*(parvocellularOutputOFF_PTR++)=*(bipolarCellsOFF_PTR++)= (isPositive-1.0f)*pixelDifference;
}
}
};
#endif
};
}// end of namespace bioinspired
}// end of namespace cv
#endif

View File

@@ -1,68 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_PRECOMP_H__
#define __OPENCV_PRECOMP_H__
#include "opencv2/opencv_modules.hpp"
#include "opencv2/bioinspired.hpp"
#include "opencv2/core/utility.hpp"
#include "opencv2/core/private.hpp"
#include "opencv2/core/ocl.hpp"
#include <valarray>
#ifdef HAVE_OPENCV_OCL
#include "opencv2/ocl/private/util.hpp"
#endif
namespace cv
{
// special function to get pointer to constant valarray elements, since
// simple &arr[0] does not compile on VS2005/VS2008.
template<typename T> inline const T* get_data(const std::valarray<T>& arr)
{ return &((std::valarray<T>&)arr)[0]; }
}
#endif

View File

@@ -1,743 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author&collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract still image & image sequence features, from contour details to motion spatio-temporal features, etc. for high level visual scene analysis. It also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from which code has been redrawn by the author :
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
/*
* Retina.cpp
*
* Created on: Jul 19, 2011
* Author: Alexandre Benoit
*/
#include "precomp.hpp"
#include "retinafilter.hpp"
#include <cstdio>
#include <sstream>
#include <valarray>
namespace cv
{
namespace bioinspired
{
class RetinaImpl : public Retina
{
public:
/**
* Main constructor with most common use setup : create an instance of color ready retina model
* @param inputSize : the input frame size
*/
RetinaImpl(Size inputSize);
/**
* Complete Retina filter constructor which allows all basic structural parameters definition
* @param inputSize : the input frame size
* @param colorMode : the chosen processing mode : with or without color processing
* @param colorSamplingMethod: specifies which kind of color sampling will be used
* @param useRetinaLogSampling: activate retina log sampling, if true, the 2 following parameters can be used
* @param reductionFactor: only useful if param useRetinaLogSampling=true, specifies the reduction factor of the output frame (as the center (fovea) is high resolution and corners can be underscaled, a reduction of the output is allowed without precision loss)
* @param samplingStrenght: only useful if param useRetinaLogSampling=true, specifies the strength of the log scale that is applied
*/
RetinaImpl(Size inputSize, const bool colorMode, int colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
virtual ~RetinaImpl();
/**
* retrieve retina input buffer size
*/
Size getInputSize();
/**
* retrieve retina output buffer size
*/
Size getOutputSize();
/**
* try to open an XML retina parameters file to adjust current retina instance setup
* => if the xml file does not exist, then default setup is applied
* => warning, Exceptions are thrown if the read XML file is not valid
* @param retinaParameterFile : the parameters filename
* @param applyDefaultSetupOnFailure : set to true if the default setup must be applied when the parameter file cannot be used
*/
void setup(String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
/**
* try to open an XML retina parameters file to adjust current retina instance setup
* => if the xml file does not exist, then default setup is applied
* => warning, Exceptions are thrown if the read XML file is not valid
* @param fs : the open Filestorage which contains retina parameters
* @param applyDefaultSetupOnFailure : set to true if the default setup must be applied when the parameters cannot be read
*/
void setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure=true);
/**
* adjust the current retina instance setup from a parameters structure
* @param newParameters : a parameters structure updated with the new target configuration
*/
void setup(Retina::RetinaParameters newParameters);
/**
* @return the current parameters setup
*/
struct Retina::RetinaParameters getParameters();
/**
* parameters setup display method
* @return a string which contains formatted parameters information
*/
const String printSetup();
/**
* write xml/yml formated parameters information
* @param fs : the filename of the xml file that will be opened and written with formatted parameters information
*/
virtual void write( String fs ) const;
/**
* write xml/yml formated parameters information
* @param fs : a cv::Filestorage object ready to be filled
*/
virtual void write( FileStorage& fs ) const;
/**
* setup the OPL and IPL parvo channels (see biological model)
* OPL is referred as Outer Plexiform Layer of the retina, it allows the spatio-temporal filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating global luminance (low frequency energy)
* IPL parvo is the OPL next processing stage, it refers to the Inner Plexiform layer of the retina, it allows high contour sensitivity in foveal vision.
* for more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
* @param colorMode : specifies if color is processed (true) or not (false, gray level images are then processed)
* @param normaliseOutput : specifies if the output is rescaled between 0 and 255 (true) or not (false)
* @param photoreceptorsLocalAdaptationSensitivity: the photoreceptors sensitivity range is 0-1 (more log compression effect when value increases)
* @param photoreceptorsTemporalConstant: the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
* @param photoreceptorsSpatialConstant: the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
* @param horizontalCellsGain: gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, then, the luminance is not filtered and is still reachable at the output, typical value is 0
* @param HcellsTemporalConstant: the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as for the photoreceptors
* @param HcellsSpatialConstant: the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixels, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
* @param ganglionCellsSensitivity: the compression strength of the ganglion cells local adaptation output, set a value between 160 and 250 for best results, a high value further increases low value sensitivity... and the output saturates faster, recommended value: 230
*/
void setupOPLandIPLParvoChannel(const bool colorMode=true, const bool normaliseOutput = true, const float photoreceptorsLocalAdaptationSensitivity=0.7, const float photoreceptorsTemporalConstant=0.5, const float photoreceptorsSpatialConstant=0.53, const float horizontalCellsGain=0, const float HcellsTemporalConstant=1, const float HcellsSpatialConstant=7, const float ganglionCellsSensitivity=0.7);
/**
* set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel
* this channel processes signals output from the OPL processing stage in peripheral vision, it allows motion information enhancement. It is decorrelated from the details channel. See reference paper for more details.
* @param normaliseOutput : specifies if the output is rescaled between 0 and 255 (true) or not (false)
* @param parasolCells_beta: the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
* @param parasolCells_tau: the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)
* @param parasolCells_k: the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
* @param amacrinCellsTemporalCutFrequency: the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 5
* @param V0CompressionParameter: the compression strength of the ganglion cells local adaptation output, set a value between 160 and 250 for best results, a high value further increases low value sensitivity... and the output saturates faster, recommended value: 200
* @param localAdaptintegration_tau: specifies the temporal constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
* @param localAdaptintegration_k: specifies the spatial constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
*/
void setupIPLMagnoChannel(const bool normaliseOutput = true, const float parasolCells_beta=0, const float parasolCells_tau=0, const float parasolCells_k=7, const float amacrinCellsTemporalCutFrequency=1.2, const float V0CompressionParameter=0.95, const float localAdaptintegration_tau=0, const float localAdaptintegration_k=7);
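// Hedged configuration sketch (illustrative only, through the public Retina interface;
// the values are the documented defaults of the two setup methods above):
//   retina->setupOPLandIPLParvoChannel(true, true, 0.7f, 0.5f, 0.53f, 0.f, 1.f, 7.f, 0.7f);
//   retina->setupIPLMagnoChannel(true, 0.f, 0.f, 7.f, 1.2f, 0.95f, 0.f, 7.f);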
/**
* method which allows the retina to be applied on an input image; after run, the encapsulated retina module is ready to deliver its outputs using dedicated accessors, see getParvo and getMagno methods
* @param inputImage : the input cv::Mat image to be processed, can be gray level or BGR coded in any format (from 8bit to 16bits)
*/
void run(InputArray inputImage);
/**
* method that applies a luminance correction (initially High Dynamic Range (HDR) tone mapping) using only the 2 local adaptation stages of the retina parvo channel : photoreceptors level and ganglion cells level. Spatio temporal filtering is applied but limited to temporal smoothing and eventually high frequencies attenuation. This is a lighter method than the one available using the regular run method. It is then faster but it does not include complete temporal filtering nor retina spectral whitening. This is an adaptation of the original still image HDR tone mapping algorithm of David Alleyson, Sabine Susstrunk and Laurence Meylan's work, please cite:
* -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
@param inputImage the input image to process RGB or gray levels
@param outputToneMappedImage the output tone mapped image
*/
void applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage);
/**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : the output buffer (reallocated if necessary), this output is rescaled for standard 8bits image processing use in OpenCV
*/
void getParvo(OutputArray retinaOutput_parvo);
/**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : a cv::Mat header filled with the internal parvo buffer of the retina module. This output is the original retina filter model output, without any quantification or rescaling
*/
void getParvoRAW(OutputArray retinaOutput_parvo);
/**
* accessor of the motion channel of the retina (models peripheral vision)
* @param retinaOutput_magno : the output buffer (reallocated if necessary), this output is rescaled for standard 8bits image processing use in OpenCV
*/
void getMagno(OutputArray retinaOutput_magno);
/**
* accessor of the motion channel of the retina (models peripheral vision)
* @param retinaOutput_magno : a cv::Mat header filled with the internal retina magno buffer of the retina module. This output is the original retina filter model output, without any quantification or rescaling
*/
void getMagnoRAW(OutputArray retinaOutput_magno);
// original API level data accessors : get buffers addresses from a Mat header, similar to getParvoRAW and getMagnoRAW...
const Mat getMagnoRAW() const;
const Mat getParvoRAW() const;
/**
* activate color saturation as the final step of the color demultiplexing process
* -> this saturation is a sigmoid function applied to each channel of the demultiplexed image.
* @param saturateColors: boolean that activates color saturation (if true) or deactivates it (if false)
* @param colorSaturationValue: the saturation factor
*/
void setColorSaturation(const bool saturateColors=true, const float colorSaturationValue=4.0);
/**
* clear all retina buffers (equivalent to opening the eyes after a long period with the eyes closed ;o)
*/
void clearBuffers();
/**
* Activate/deactivate the Magnocellular pathway processing (motion information extraction), by default, it is activated
* @param activate: true if Magnocellular output should be activated, false if not
*/
void activateMovingContoursProcessing(const bool activate);
/**
* Activate/deactivate the Parvocellular pathway processing (contours information extraction), by default, it is activated
* @param activate: true if Parvocellular (contours information extraction) output should be activated, false if not
*/
void activateContoursProcessing(const bool activate);
private:
// Parameters setup members
RetinaParameters _retinaParameters; // structure of parameters
// Retina model related modules
std::valarray<float> _inputBuffer; //!< buffer used to convert input cv::Mat to internal retina buffers format (valarrays)
// pointer to retina model
RetinaFilter* _retinaFilter; //!< the pointer to the retina module, allocated with instance construction
//! private method called by constructors, gathers their parameters and use them in a unified way
void _init(const Size inputSize, const bool colorMode, int colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
/**
* exports a valarray buffer outing from bioinspired objects to a cv::Mat in CV_8UC1 (gray level picture) or CV_8UC3 (color) format
* @param grayMatrixToConvert the valarray to export to OpenCV
* @param nbRows : the number of rows of the valarray flattened matrix
* @param nbColumns : the number of columns of the valarray flattened matrix
* @param colorMode : a flag which mentions if matrix is color (true) or graylevel (false)
* @param outBuffer : the output matrix which is reallocated to satisfy Retina output buffer dimensions
*/
void _convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, OutputArray outBuffer);
/**
* convert a cv::Mat to a valarray buffer in float format
* @param inputMatToConvert : the OpenCV cv::Mat that has to be converted to gray or RGB valarray buffer that will be processed by the retina model
* @param outputValarrayMatrix : the output valarray
* @return the input image color mode (color=true, gray levels=false)
*/
bool _convertCvMat2ValarrayBuffer(InputArray inputMatToConvert, std::valarray<float> &outputValarrayMatrix);
};
// smart pointers allocation :
Ptr<Retina> createRetina(Size inputSize){ return makePtr<RetinaImpl>(inputSize); }
Ptr<Retina> createRetina(Size inputSize, const bool colorMode, int colorSamplingMethod, const bool useRetinaLogSampling, const double reductionFactor, const double samplingStrenght){
return makePtr<RetinaImpl>(inputSize, colorMode, colorSamplingMethod, useRetinaLogSampling, reductionFactor, samplingStrenght);
}
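/* Hedged usage sketch (illustrative only): a typical per-frame processing loop built on
 * the factories above; 'inputFrame' stands for any 8-bit gray or BGR cv::Mat.
 *
 *   cv::Ptr<Retina> retina = createRetina(inputFrame.size());
 *   retina->run(inputFrame);
 *   cv::Mat parvoOut, magnoOut;
 *   retina->getParvo(parvoOut); // foveal details channel, rescaled for 8-bit use
 *   retina->getMagno(magnoOut); // peripheral motion channel, rescaled for 8-bit use
 */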
// RetinaImpl code
RetinaImpl::RetinaImpl(const cv::Size inputSz)
{
_retinaFilter = 0;
_init(inputSz, true, RETINA_COLOR_BAYER, false);
}
RetinaImpl::RetinaImpl(const cv::Size inputSz, const bool colorMode, int colorSamplingMethod, const bool useRetinaLogSampling, const double reductionFactor, const double samplingStrenght)
{
_retinaFilter = 0;
_init(inputSz, colorMode, colorSamplingMethod, useRetinaLogSampling, reductionFactor, samplingStrenght);
}
RetinaImpl::~RetinaImpl()
{
if (_retinaFilter)
delete _retinaFilter;
}
/**
* retrieve retina input buffer size
*/
Size RetinaImpl::getInputSize(){return cv::Size(_retinaFilter->getInputNBcolumns(), _retinaFilter->getInputNBrows());}
/**
* retrieve retina output buffer size
*/
Size RetinaImpl::getOutputSize(){return cv::Size(_retinaFilter->getOutputNBcolumns(), _retinaFilter->getOutputNBrows());}
void RetinaImpl::setColorSaturation(const bool saturateColors, const float colorSaturationValue)
{
_retinaFilter->setColorSaturation(saturateColors, colorSaturationValue);
}
struct Retina::RetinaParameters RetinaImpl::getParameters(){return _retinaParameters;}
void RetinaImpl::setup(String retinaParameterFile, const bool applyDefaultSetupOnFailure)
{
try
{
// opening retinaParameterFile in read mode
cv::FileStorage fs(retinaParameterFile, cv::FileStorage::READ);
setup(fs, applyDefaultSetupOnFailure);
}
catch(Exception &e)
{
printf("Retina::setup: wrong/unappropriate xml parameter file : error report :`n=>%s\n", e.what());
if (applyDefaultSetupOnFailure)
{
printf("Retina::setup: resetting retina with default parameters\n");
setupOPLandIPLParvoChannel();
setupIPLMagnoChannel();
}
else
{
printf("=> keeping current parameters\n");
}
}
}
void RetinaImpl::setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure)
{
try
{
// read parameters file if it exists or apply default setup if asked for
if (!fs.isOpened())
{
printf("Retina::setup: provided parameters file could not be open... skeeping configuration\n");
return;
// implicit else case : retinaParameterFile could be opened (it exists at least)
}
// OPL and Parvo init first... update at the same time the parameters structure and the retina core
cv::FileNode rootFn = fs.root(), currFn=rootFn["OPLandIPLparvo"];
currFn["colorMode"]>>_retinaParameters.OPLandIplParvo.colorMode;
currFn["normaliseOutput"]>>_retinaParameters.OPLandIplParvo.normaliseOutput;
currFn["photoreceptorsLocalAdaptationSensitivity"]>>_retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity;
currFn["photoreceptorsTemporalConstant"]>>_retinaParameters.OPLandIplParvo.photoreceptorsTemporalConstant;
currFn["photoreceptorsSpatialConstant"]>>_retinaParameters.OPLandIplParvo.photoreceptorsSpatialConstant;
currFn["horizontalCellsGain"]>>_retinaParameters.OPLandIplParvo.horizontalCellsGain;
currFn["hcellsTemporalConstant"]>>_retinaParameters.OPLandIplParvo.hcellsTemporalConstant;
currFn["hcellsSpatialConstant"]>>_retinaParameters.OPLandIplParvo.hcellsSpatialConstant;
currFn["ganglionCellsSensitivity"]>>_retinaParameters.OPLandIplParvo.ganglionCellsSensitivity;
setupOPLandIPLParvoChannel(_retinaParameters.OPLandIplParvo.colorMode, _retinaParameters.OPLandIplParvo.normaliseOutput, _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity, _retinaParameters.OPLandIplParvo.photoreceptorsTemporalConstant, _retinaParameters.OPLandIplParvo.photoreceptorsSpatialConstant, _retinaParameters.OPLandIplParvo.horizontalCellsGain, _retinaParameters.OPLandIplParvo.hcellsTemporalConstant, _retinaParameters.OPLandIplParvo.hcellsSpatialConstant, _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity);
// init retina IPL magno setup... update at the same time the parameters structure and the retina core
currFn=rootFn["IPLmagno"];
currFn["normaliseOutput"]>>_retinaParameters.IplMagno.normaliseOutput;
currFn["parasolCells_beta"]>>_retinaParameters.IplMagno.parasolCells_beta;
currFn["parasolCells_tau"]>>_retinaParameters.IplMagno.parasolCells_tau;
currFn["parasolCells_k"]>>_retinaParameters.IplMagno.parasolCells_k;
currFn["amacrinCellsTemporalCutFrequency"]>>_retinaParameters.IplMagno.amacrinCellsTemporalCutFrequency;
currFn["V0CompressionParameter"]>>_retinaParameters.IplMagno.V0CompressionParameter;
currFn["localAdaptintegration_tau"]>>_retinaParameters.IplMagno.localAdaptintegration_tau;
currFn["localAdaptintegration_k"]>>_retinaParameters.IplMagno.localAdaptintegration_k;
setupIPLMagnoChannel(_retinaParameters.IplMagno.normaliseOutput, _retinaParameters.IplMagno.parasolCells_beta, _retinaParameters.IplMagno.parasolCells_tau, _retinaParameters.IplMagno.parasolCells_k, _retinaParameters.IplMagno.amacrinCellsTemporalCutFrequency,_retinaParameters.IplMagno.V0CompressionParameter, _retinaParameters.IplMagno.localAdaptintegration_tau, _retinaParameters.IplMagno.localAdaptintegration_k);
}catch(Exception &e)
{
printf("RetinaImpl::setup: resetting retina with default parameters\n");
if (applyDefaultSetupOnFailure)
{
setupOPLandIPLParvoChannel();
setupIPLMagnoChannel();
}
printf("Retina::setup: wrong/unappropriate xml parameter file : error report :`n=>%s\n", e.what());
printf("=> keeping current parameters\n");
}
// report current configuration
printf("%s\n", printSetup().c_str());
}
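/* Hedged example of a parameters file accepted by the reader above (YAML form shown;
 * node and key names match the reads above, the values are the documented defaults
 * and are only illustrative):
 *
 * %YAML:1.0
 * OPLandIPLparvo:
 *    colorMode: 1
 *    normaliseOutput: 1
 *    photoreceptorsLocalAdaptationSensitivity: 0.7
 *    photoreceptorsTemporalConstant: 0.5
 *    photoreceptorsSpatialConstant: 0.53
 *    horizontalCellsGain: 0.
 *    hcellsTemporalConstant: 1.
 *    hcellsSpatialConstant: 7.
 *    ganglionCellsSensitivity: 0.7
 * IPLmagno:
 *    normaliseOutput: 1
 *    parasolCells_beta: 0.
 *    parasolCells_tau: 0.
 *    parasolCells_k: 7.
 *    amacrinCellsTemporalCutFrequency: 1.2
 *    V0CompressionParameter: 0.95
 *    localAdaptintegration_tau: 0.
 *    localAdaptintegration_k: 7.
 */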
void RetinaImpl::setup(Retina::RetinaParameters newConfiguration)
{
// simply copy structures
memcpy(&_retinaParameters, &newConfiguration, sizeof(Retina::RetinaParameters));
// apply setup
setupOPLandIPLParvoChannel(_retinaParameters.OPLandIplParvo.colorMode, _retinaParameters.OPLandIplParvo.normaliseOutput, _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity, _retinaParameters.OPLandIplParvo.photoreceptorsTemporalConstant, _retinaParameters.OPLandIplParvo.photoreceptorsSpatialConstant, _retinaParameters.OPLandIplParvo.horizontalCellsGain, _retinaParameters.OPLandIplParvo.hcellsTemporalConstant, _retinaParameters.OPLandIplParvo.hcellsSpatialConstant, _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity);
setupIPLMagnoChannel(_retinaParameters.IplMagno.normaliseOutput, _retinaParameters.IplMagno.parasolCells_beta, _retinaParameters.IplMagno.parasolCells_tau, _retinaParameters.IplMagno.parasolCells_k, _retinaParameters.IplMagno.amacrinCellsTemporalCutFrequency,_retinaParameters.IplMagno.V0CompressionParameter, _retinaParameters.IplMagno.localAdaptintegration_tau, _retinaParameters.IplMagno.localAdaptintegration_k);
}
const String RetinaImpl::printSetup()
{
std::stringstream outmessage;
// displaying OPL and IPL parvo setup
outmessage<<"Current Retina instance setup :"
<<"\nOPLandIPLparvo"<<"{"
<< "\n\t colorMode : " << _retinaParameters.OPLandIplParvo.colorMode
<< "\n\t normalizeParvoOutput :" << _retinaParameters.OPLandIplParvo.normaliseOutput
<< "\n\t photoreceptorsLocalAdaptationSensitivity : " << _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity
<< "\n\t photoreceptorsTemporalConstant : " << _retinaParameters.OPLandIplParvo.photoreceptorsTemporalConstant
<< "\n\t photoreceptorsSpatialConstant : " << _retinaParameters.OPLandIplParvo.photoreceptorsSpatialConstant
<< "\n\t horizontalCellsGain : " << _retinaParameters.OPLandIplParvo.horizontalCellsGain
<< "\n\t hcellsTemporalConstant : " << _retinaParameters.OPLandIplParvo.hcellsTemporalConstant
<< "\n\t hcellsSpatialConstant : " << _retinaParameters.OPLandIplParvo.hcellsSpatialConstant
<< "\n\t parvoGanglionCellsSensitivity : " << _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity
<<"}\n";
// displaying IPL magno setup
outmessage<<"Current Retina instance setup :"
<<"\nIPLmagno"<<"{"
<< "\n\t normaliseOutput : " << _retinaParameters.IplMagno.normaliseOutput
<< "\n\t parasolCells_beta : " << _retinaParameters.IplMagno.parasolCells_beta
<< "\n\t parasolCells_tau : " << _retinaParameters.IplMagno.parasolCells_tau
<< "\n\t parasolCells_k : " << _retinaParameters.IplMagno.parasolCells_k
<< "\n\t amacrinCellsTemporalCutFrequency : " << _retinaParameters.IplMagno.amacrinCellsTemporalCutFrequency
<< "\n\t V0CompressionParameter : " << _retinaParameters.IplMagno.V0CompressionParameter
<< "\n\t localAdaptintegration_tau : " << _retinaParameters.IplMagno.localAdaptintegration_tau
<< "\n\t localAdaptintegration_k : " << _retinaParameters.IplMagno.localAdaptintegration_k
<<"}";
return outmessage.str().c_str();
}
void RetinaImpl::write( String fs ) const
{
FileStorage parametersSaveFile(fs, cv::FileStorage::WRITE );
write(parametersSaveFile);
}
void RetinaImpl::write( FileStorage& fs ) const
{
if (!fs.isOpened())
return; // basic error case
fs<<"OPLandIPLparvo"<<"{";
fs << "colorMode" << _retinaParameters.OPLandIplParvo.colorMode;
fs << "normaliseOutput" << _retinaParameters.OPLandIplParvo.normaliseOutput;
fs << "photoreceptorsLocalAdaptationSensitivity" << _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity;
fs << "photoreceptorsTemporalConstant" << _retinaParameters.OPLandIplParvo.photoreceptorsTemporalConstant;
fs << "photoreceptorsSpatialConstant" << _retinaParameters.OPLandIplParvo.photoreceptorsSpatialConstant;
fs << "horizontalCellsGain" << _retinaParameters.OPLandIplParvo.horizontalCellsGain;
fs << "hcellsTemporalConstant" << _retinaParameters.OPLandIplParvo.hcellsTemporalConstant;
fs << "hcellsSpatialConstant" << _retinaParameters.OPLandIplParvo.hcellsSpatialConstant;
fs << "ganglionCellsSensitivity" << _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity;
fs << "}";
fs<<"IPLmagno"<<"{";
fs << "normaliseOutput" << _retinaParameters.IplMagno.normaliseOutput;
fs << "parasolCells_beta" << _retinaParameters.IplMagno.parasolCells_beta;
fs << "parasolCells_tau" << _retinaParameters.IplMagno.parasolCells_tau;
fs << "parasolCells_k" << _retinaParameters.IplMagno.parasolCells_k;
fs << "amacrinCellsTemporalCutFrequency" << _retinaParameters.IplMagno.amacrinCellsTemporalCutFrequency;
fs << "V0CompressionParameter" << _retinaParameters.IplMagno.V0CompressionParameter;
fs << "localAdaptintegration_tau" << _retinaParameters.IplMagno.localAdaptintegration_tau;
fs << "localAdaptintegration_k" << _retinaParameters.IplMagno.localAdaptintegration_k;
fs<<"}";
}
void RetinaImpl::setupOPLandIPLParvoChannel(const bool colorMode, const bool normaliseOutput, const float photoreceptorsLocalAdaptationSensitivity, const float photoreceptorsTemporalConstant, const float photoreceptorsSpatialConstant, const float horizontalCellsGain, const float HcellsTemporalConstant, const float HcellsSpatialConstant, const float ganglionCellsSensitivity)
{
// retina core parameters setup
_retinaFilter->setColorMode(colorMode);
_retinaFilter->setPhotoreceptorsLocalAdaptationSensitivity(photoreceptorsLocalAdaptationSensitivity);
_retinaFilter->setOPLandParvoParameters(0, photoreceptorsTemporalConstant, photoreceptorsSpatialConstant, horizontalCellsGain, HcellsTemporalConstant, HcellsSpatialConstant, ganglionCellsSensitivity);
_retinaFilter->setParvoGanglionCellsLocalAdaptationSensitivity(ganglionCellsSensitivity);
_retinaFilter->activateNormalizeParvoOutput_0_maxOutputValue(normaliseOutput);
// update parameters structure
_retinaParameters.OPLandIplParvo.colorMode = colorMode;
_retinaParameters.OPLandIplParvo.normaliseOutput = normaliseOutput;
_retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity = photoreceptorsLocalAdaptationSensitivity;
_retinaParameters.OPLandIplParvo.photoreceptorsTemporalConstant = photoreceptorsTemporalConstant;
_retinaParameters.OPLandIplParvo.photoreceptorsSpatialConstant = photoreceptorsSpatialConstant;
_retinaParameters.OPLandIplParvo.horizontalCellsGain = horizontalCellsGain;
_retinaParameters.OPLandIplParvo.hcellsTemporalConstant = HcellsTemporalConstant;
_retinaParameters.OPLandIplParvo.hcellsSpatialConstant = HcellsSpatialConstant;
_retinaParameters.OPLandIplParvo.ganglionCellsSensitivity = ganglionCellsSensitivity;
}
void RetinaImpl::setupIPLMagnoChannel(const bool normaliseOutput, const float parasolCells_beta, const float parasolCells_tau, const float parasolCells_k, const float amacrinCellsTemporalCutFrequency, const float V0CompressionParameter, const float localAdaptintegration_tau, const float localAdaptintegration_k)
{
_retinaFilter->setMagnoCoefficientsTable(parasolCells_beta, parasolCells_tau, parasolCells_k, amacrinCellsTemporalCutFrequency, V0CompressionParameter, localAdaptintegration_tau, localAdaptintegration_k);
_retinaFilter->activateNormalizeMagnoOutput_0_maxOutputValue(normaliseOutput);
// update parameters structure
_retinaParameters.IplMagno.normaliseOutput = normaliseOutput;
_retinaParameters.IplMagno.parasolCells_beta = parasolCells_beta;
_retinaParameters.IplMagno.parasolCells_tau = parasolCells_tau;
_retinaParameters.IplMagno.parasolCells_k = parasolCells_k;
_retinaParameters.IplMagno.amacrinCellsTemporalCutFrequency = amacrinCellsTemporalCutFrequency;
_retinaParameters.IplMagno.V0CompressionParameter = V0CompressionParameter;
_retinaParameters.IplMagno.localAdaptintegration_tau = localAdaptintegration_tau;
_retinaParameters.IplMagno.localAdaptintegration_k = localAdaptintegration_k;
}
void RetinaImpl::run(InputArray inputMatToConvert)
{
// first convert input image to the compatible format : std::valarray<float>
const bool colorMode = _convertCvMat2ValarrayBuffer(inputMatToConvert.getMat(), _inputBuffer);
// process the retina
if (!_retinaFilter->runFilter(_inputBuffer, colorMode, false, _retinaParameters.OPLandIplParvo.colorMode && colorMode, false))
throw cv::Exception(-1, "RetinaImpl cannot be applied, wrong input buffer size", "RetinaImpl::run", "RetinaImpl.h", 0);
}
void RetinaImpl::applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage)
{
// first convert input image to the compatible format :
const bool colorMode = _convertCvMat2ValarrayBuffer(inputImage.getMat(), _inputBuffer);
const unsigned int nbPixels=_retinaFilter->getOutputNBrows()*_retinaFilter->getOutputNBcolumns();
// process tone mapping
if (colorMode)
{
std::valarray<float> imageOutput(nbPixels*3);
_retinaFilter->runRGBToneMapping(_inputBuffer, imageOutput, true, _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity, _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity);
_convertValarrayBuffer2cvMat(imageOutput, _retinaFilter->getOutputNBrows(), _retinaFilter->getOutputNBcolumns(), true, outputToneMappedImage);
}else
{
std::valarray<float> imageOutput(nbPixels);
_retinaFilter->runGrayToneMapping(_inputBuffer, imageOutput, _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity, _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity);
_convertValarrayBuffer2cvMat(imageOutput, _retinaFilter->getOutputNBrows(), _retinaFilter->getOutputNBcolumns(), false, outputToneMappedImage);
}
}
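/* Hedged usage sketch (illustrative only): stand-alone tone mapping without a full
 * spatio-temporal retina run; 'hdrFrame' stands for any supported cv::Mat input.
 *
 *   cv::Ptr<Retina> retina = createRetina(hdrFrame.size());
 *   cv::Mat toneMapped;
 *   retina->applyFastToneMapping(hdrFrame, toneMapped);
 */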
void RetinaImpl::getParvo(OutputArray retinaOutput_parvo)
{
if (_retinaFilter->getColorMode())
{
// reallocate output buffer (if necessary)
_convertValarrayBuffer2cvMat(_retinaFilter->getColorOutput(), _retinaFilter->getOutputNBrows(), _retinaFilter->getOutputNBcolumns(), true, retinaOutput_parvo);
}else
{
// reallocate output buffer (if necessary)
_convertValarrayBuffer2cvMat(_retinaFilter->getContours(), _retinaFilter->getOutputNBrows(), _retinaFilter->getOutputNBcolumns(), false, retinaOutput_parvo);
}
//retinaOutput_parvo/=255.0;
}
void RetinaImpl::getMagno(OutputArray retinaOutput_magno)
{
// reallocate output buffer (if necessary)
_convertValarrayBuffer2cvMat(_retinaFilter->getMovingContours(), _retinaFilter->getOutputNBrows(), _retinaFilter->getOutputNBcolumns(), false, retinaOutput_magno);
//retinaOutput_magno/=255.0;
}
// original API level data accessors : copy buffers if size matches, reallocate if required
void RetinaImpl::getMagnoRAW(OutputArray magnoOutputBufferCopy){
// get magno channel header
const cv::Mat magnoChannel=cv::Mat(getMagnoRAW());
// copy data
magnoChannel.copyTo(magnoOutputBufferCopy);
}
void RetinaImpl::getParvoRAW(OutputArray parvoOutputBufferCopy){
// get parvo channel header
const cv::Mat parvoChannel=cv::Mat(getParvoRAW());
// copy data
parvoChannel.copyTo(parvoOutputBufferCopy);
}
// original API level data accessors : get buffers addresses...
const Mat RetinaImpl::getMagnoRAW() const {
// create a cv::Mat header for the valarray
return Mat((int)_retinaFilter->getMovingContours().size(),1, CV_32F, (void*)get_data(_retinaFilter->getMovingContours()));
}
const Mat RetinaImpl::getParvoRAW() const {
if (_retinaFilter->getColorMode()) // check if color mode is enabled
{
// create a cv::Mat table (for RGB planes as a single vector)
return Mat((int)_retinaFilter->getColorOutput().size(), 1, CV_32F, (void*)get_data(_retinaFilter->getColorOutput()));
}
// otherwise, output is gray level
// create a cv::Mat header for the valarray
return Mat((int)_retinaFilter->getContours().size(), 1, CV_32F, (void*)get_data(_retinaFilter->getContours()));
}
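// Note (assumption drawn from the flat Nx1 CV_32F headers built above): for gray level
// output, a 2D view can be obtained without copying the data, e.g.
//   cv::Mat parvo2D = retina->getParvoRAW().reshape(1, retina->getOutputSize().height);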
// private method called by constructors
void RetinaImpl::_init(const cv::Size inputSz, const bool colorMode, int colorSamplingMethod, const bool useRetinaLogSampling, const double reductionFactor, const double samplingStrenght)
{
// basic error check
if (inputSz.height*inputSz.width <= 0)
throw cv::Exception(-1, "Bad retina size setup : size height and with must be superior to zero", "RetinaImpl::setup", "Retina.cpp", 0);
unsigned int nbPixels=inputSz.height*inputSz.width;
// resize buffers if size does not match
_inputBuffer.resize(nbPixels*3); // buffer supports gray images but also 3 channels color buffers... (larger is better...)
// allocate the retina model
if (_retinaFilter)
delete _retinaFilter;
_retinaFilter = new RetinaFilter(inputSz.height, inputSz.width, colorMode, colorSamplingMethod, useRetinaLogSampling, reductionFactor, samplingStrenght);
_retinaParameters.OPLandIplParvo.colorMode = colorMode;
// apply the current (default) parameters setup
setup(_retinaParameters);
// init retina
_retinaFilter->clearAllBuffers();
// report current configuration
printf("%s\n", printSetup().c_str());
}
void RetinaImpl::_convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, OutputArray outBuffer)
{
// fill output buffer with the valarray buffer
const float *valarrayPTR=get_data(grayMatrixToConvert);
if (!colorMode)
{
outBuffer.create(cv::Size(nbColumns, nbRows), CV_8U);
Mat outMat = outBuffer.getMat();
for (unsigned int i=0;i<nbRows;++i)
{
for (unsigned int j=0;j<nbColumns;++j)
{
cv::Point pixel(j,i);
outMat.at<unsigned char>(pixel)=(unsigned char)*(valarrayPTR++);
}
}
}else
{
const unsigned int nbPixels=nbColumns*nbRows;
const unsigned int doubleNBpixels=nbColumns*nbRows*2;
outBuffer.create(cv::Size(nbColumns, nbRows), CV_8UC3);
Mat outMat = outBuffer.getMat();
for (unsigned int i=0;i<nbRows;++i)
{
for (unsigned int j=0;j<nbColumns;++j,++valarrayPTR)
{
cv::Point pixel(j,i);
cv::Vec3b pixelValues;
pixelValues[2]=(unsigned char)*(valarrayPTR);
pixelValues[1]=(unsigned char)*(valarrayPTR+nbPixels);
pixelValues[0]=(unsigned char)*(valarrayPTR+doubleNBpixels);
outMat.at<cv::Vec3b>(pixel)=pixelValues;
}
}
}
}
bool RetinaImpl::_convertCvMat2ValarrayBuffer(InputArray inputMat, std::valarray<float> &outputValarrayMatrix)
{
const Mat inputMatToConvert=inputMat.getMat();
// first check input consistency
if (inputMatToConvert.empty())
throw cv::Exception(-1, "RetinaImpl cannot be applied, input buffer is empty", "RetinaImpl::run", "RetinaImpl.h", 0);
// retrieve color mode from image input
int imageNumberOfChannels = inputMatToConvert.channels();
// convert to float AND fill the valarray buffer
typedef float T; // define here the target pixel format, here, float
const int dsttype = DataType<T>::depth; // output buffer is float format
const unsigned int nbPixels=inputMatToConvert.rows*inputMatToConvert.cols;
const unsigned int doubleNBpixels=nbPixels*2;
if(imageNumberOfChannels==4)
{
// create a cv::Mat table (for RGBA planes)
cv::Mat planes[4] =
{
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[doubleNBpixels]),
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[nbPixels]),
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[0])
};
planes[3] = cv::Mat(inputMatToConvert.size(), dsttype); // last channel (alpha) does not point into the valarray (not useful in our case)
// split color cv::Mat into 4 planes... it fills the valarray directly
cv::split(Mat_<Vec<T, 4> >(inputMatToConvert), planes);
}
else if (imageNumberOfChannels==3)
{
// create a cv::Mat table (for RGB planes)
cv::Mat planes[] =
{
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[doubleNBpixels]),
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[nbPixels]),
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[0])
};
// split color cv::Mat into 3 planes... it fills the valarray directly
cv::split(cv::Mat_<Vec<T, 3> >(inputMatToConvert), planes);
}
else if(imageNumberOfChannels==1)
{
// create a cv::Mat header for the valarray
cv::Mat dst(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[0]);
inputMatToConvert.convertTo(dst, dsttype);
}
else
CV_Error(Error::StsUnsupportedFormat, "input image must be single channel (gray levels), bgr format (color) or bgra (color with transparency, which won't be considered)");
return imageNumberOfChannels>1; // return bool : false for gray level image processing, true for color mode
}
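// Editor's note: both conversion helpers above share the same planar valarray layout: three
// contiguous planes of nbPixels floats ordered R, G, B (BGR input is split with B at offset
// 2*nbPixels, G at nbPixels, R at 0, and recomposed symmetrically on output). A sketch of
// addressing one pixel (r, c) under that assumption:
static void exampleReadPlanarPixel(const std::valarray<float> &buf, unsigned int r, unsigned int c,
                                   unsigned int nbColumns, unsigned int nbPixels,
                                   float &R, float &G, float &B)
{
    const unsigned int idx = r * nbColumns + c;
    R = buf[idx];                  // red plane
    G = buf[idx + nbPixels];       // green plane
    B = buf[idx + 2 * nbPixels];   // blue plane
}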
void RetinaImpl::clearBuffers() {_retinaFilter->clearAllBuffers();}
void RetinaImpl::activateMovingContoursProcessing(const bool activate){_retinaFilter->activateMovingContoursProcessing(activate);}
void RetinaImpl::activateContoursProcessing(const bool activate){_retinaFilter->activateContoursProcessing(activate);}
}// end of namespace bioinspired
}// end of namespace cv

File diff suppressed because it is too large

View File

@@ -1,634 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2010-2013, Multicoreware, Inc., all rights reserved.
// Copyright (C) 2010-2013, Advanced Micro Devices, Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// @Authors
// Peng Xiao, pengxiao@multicorewareinc.com
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors as is and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OCL_RETINA_HPP__
#define __OCL_RETINA_HPP__
#include "precomp.hpp"
#ifdef HAVE_OPENCV_OCL
// please refer to the C++ headers for API comments
namespace cv
{
namespace bioinspired
{
namespace ocl
{
void normalizeGrayOutputCentredSigmoide(const float meanValue, const float sensitivity, cv::ocl::oclMat &in, cv::ocl::oclMat &out, const float maxValue = 255.f);
void normalizeGrayOutput_0_maxOutputValue(cv::ocl::oclMat &inputOutputBuffer, const float maxOutputValue = 255.0);
void normalizeGrayOutputNearZeroCentreredSigmoide(cv::ocl::oclMat &inputPicture, cv::ocl::oclMat &outputBuffer, const float sensitivity = 40, const float maxOutputValue = 255.0f);
void centerReductImageLuminance(cv::ocl::oclMat &inputOutputBuffer);
class BasicRetinaFilter
{
public:
BasicRetinaFilter(const unsigned int NBrows, const unsigned int NBcolumns, const unsigned int parametersListSize = 1, const bool useProgressiveFilter = false);
~BasicRetinaFilter();
inline void clearOutputBuffer()
{
_filterOutput = 0;
};
inline void clearSecondaryBuffer()
{
_localBuffer = 0;
};
inline void clearAllBuffers()
{
clearOutputBuffer();
clearSecondaryBuffer();
};
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
const cv::ocl::oclMat &runFilter_LPfilter(const cv::ocl::oclMat &inputFrame, const unsigned int filterIndex = 0);
void runFilter_LPfilter(const cv::ocl::oclMat &inputFrame, cv::ocl::oclMat &outputFrame, const unsigned int filterIndex = 0);
void runFilter_LPfilter_Autonomous(cv::ocl::oclMat &inputOutputFrame, const unsigned int filterIndex = 0);
const cv::ocl::oclMat &runFilter_LocalAdapdation(const cv::ocl::oclMat &inputOutputFrame, const cv::ocl::oclMat &localLuminance);
void runFilter_LocalAdapdation(const cv::ocl::oclMat &inputFrame, const cv::ocl::oclMat &localLuminance, cv::ocl::oclMat &outputFrame);
const cv::ocl::oclMat &runFilter_LocalAdapdation_autonomous(const cv::ocl::oclMat &inputFrame);
void runFilter_LocalAdapdation_autonomous(const cv::ocl::oclMat &inputFrame, cv::ocl::oclMat &outputFrame);
void setLPfilterParameters(const float beta, const float tau, const float k, const unsigned int filterIndex = 0);
inline void setV0CompressionParameter(const float v0, const float maxInputValue, const float)
{
_v0 = v0 * maxInputValue;
_localLuminanceFactor = v0;
_localLuminanceAddon = maxInputValue * (1.0f - v0);
_maxInputValue = maxInputValue;
};
inline void setV0CompressionParameter(const float v0, const float meanLuminance)
{
this->setV0CompressionParameter(v0, _maxInputValue, meanLuminance);
};
inline void setV0CompressionParameter(const float v0)
{
_v0 = v0 * _maxInputValue;
_localLuminanceFactor = v0;
_localLuminanceAddon = _maxInputValue * (1.0f - v0);
};
inline void setV0CompressionParameterToneMapping(const float v0, const float maxInputValue, const float meanLuminance = 128.0f)
{
_v0 = v0 * maxInputValue;
_localLuminanceFactor = 1.0f;
_localLuminanceAddon = meanLuminance * _v0;
_maxInputValue = maxInputValue;
};
inline void updateCompressionParameter(const float meanLuminance)
{
_localLuminanceFactor = 1;
_localLuminanceAddon = meanLuminance * _v0;
};
inline float getV0CompressionParameter()
{
return _v0 / _maxInputValue;
};
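// Editor's note: for context, the local adaptation these parameters drive is, in the CPU
// BasicRetinaFilter counterpart, a Michaelis-Menten style compression (assumption: the OCL
// kernels mirror it):
//
//   X0  = localLuminance * _localLuminanceFactor + _localLuminanceAddon;
//   out = (_maxInputValue + X0) * in / (in + X0 + epsilon);  // epsilon avoids division by zero
//
// so _v0 sets how strongly the local mean luminance compresses the response.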
inline const cv::ocl::oclMat &getOutput() const
{
return _filterOutput;
};
inline unsigned int getNBrows()
{
return _filterOutput.rows;
};
inline unsigned int getNBcolumns()
{
return _filterOutput.cols;
};
inline unsigned int getNBpixels()
{
return _filterOutput.size().area();
};
inline void normalizeGrayOutput_0_maxOutputValue(const float maxValue)
{
ocl::normalizeGrayOutput_0_maxOutputValue(_filterOutput, maxValue);
};
inline void normalizeGrayOutputCentredSigmoide()
{
ocl::normalizeGrayOutputCentredSigmoide(0.0, 2.0, _filterOutput, _filterOutput);
};
inline void centerReductImageLuminance()
{
ocl::centerReductImageLuminance(_filterOutput);
};
inline float getMaxInputValue()
{
return this->_maxInputValue;
};
inline void setMaxInputValue(const float newMaxInputValue)
{
this->_maxInputValue = newMaxInputValue;
};
protected:
int _NBrows;
int _NBcols;
unsigned int _halfNBrows;
unsigned int _halfNBcolumns;
cv::ocl::oclMat _filterOutput;
cv::ocl::oclMat _localBuffer;
std::valarray<float> _filteringCoeficientsTable;
float _v0;
float _maxInputValue;
float _meanInputValue;
float _localLuminanceFactor;
float _localLuminanceAddon;
float _a;
float _tau;
float _gain;
void _spatiotemporalLPfilter(const cv::ocl::oclMat &inputFrame, cv::ocl::oclMat &LPfilterOutput, const unsigned int coefTableOffset = 0);
float _squaringSpatiotemporalLPfilter(const cv::ocl::oclMat &inputFrame, cv::ocl::oclMat &outputFrame, const unsigned int filterIndex = 0);
void _spatiotemporalLPfilter_Irregular(const cv::ocl::oclMat &inputFrame, cv::ocl::oclMat &outputFrame, const unsigned int filterIndex = 0);
void _localSquaringSpatioTemporalLPfilter(const cv::ocl::oclMat &inputFrame, cv::ocl::oclMat &LPfilterOutput, const unsigned int *integrationAreas, const unsigned int filterIndex = 0);
void _localLuminanceAdaptation(const cv::ocl::oclMat &inputFrame, const cv::ocl::oclMat &localLuminance, cv::ocl::oclMat &outputFrame, const bool updateLuminanceMean = true);
void _localLuminanceAdaptation(cv::ocl::oclMat &inputOutputFrame, const cv::ocl::oclMat &localLuminance);
void _localLuminanceAdaptationPosNegValues(const cv::ocl::oclMat &inputFrame, const cv::ocl::oclMat &localLuminance, float *outputFrame);
void _horizontalCausalFilter_addInput(const cv::ocl::oclMat &inputFrame, cv::ocl::oclMat &outputFrame);
void _horizontalAnticausalFilter(cv::ocl::oclMat &outputFrame);
void _verticalCausalFilter(cv::ocl::oclMat &outputFrame);
void _horizontalAnticausalFilter_Irregular(cv::ocl::oclMat &outputFrame, const cv::ocl::oclMat &spatialConstantBuffer);
void _verticalCausalFilter_Irregular(cv::ocl::oclMat &outputFrame, const cv::ocl::oclMat &spatialConstantBuffer);
void _verticalAnticausalFilter_multGain(cv::ocl::oclMat &outputFrame);
};
class MagnoRetinaFilter: public BasicRetinaFilter
{
public:
MagnoRetinaFilter(const unsigned int NBrows, const unsigned int NBcolumns);
virtual ~MagnoRetinaFilter();
void clearAllBuffers();
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
void setCoefficientsTable(const float parasolCells_beta, const float parasolCells_tau, const float parasolCells_k, const float amacrinCellsTemporalCutFrequency, const float localAdaptIntegration_tau, const float localAdaptIntegration_k);
const cv::ocl::oclMat &runFilter(const cv::ocl::oclMat &OPL_ON, const cv::ocl::oclMat &OPL_OFF);
inline const cv::ocl::oclMat &getMagnoON() const
{
return _magnoXOutputON;
};
inline const cv::ocl::oclMat &getMagnoOFF() const
{
return _magnoXOutputOFF;
};
inline const cv::ocl::oclMat &getMagnoYsaturated() const
{
return _magnoYsaturated;
};
inline void normalizeGrayOutputNearZeroCentreredSigmoide()
{
ocl::normalizeGrayOutputNearZeroCentreredSigmoide(_magnoYOutput, _magnoYsaturated);
};
inline float getTemporalConstant()
{
return this->_filteringCoeficientsTable[2];
};
private:
cv::ocl::oclMat _previousInput_ON;
cv::ocl::oclMat _previousInput_OFF;
cv::ocl::oclMat _amacrinCellsTempOutput_ON;
cv::ocl::oclMat _amacrinCellsTempOutput_OFF;
cv::ocl::oclMat _magnoXOutputON;
cv::ocl::oclMat _magnoXOutputOFF;
cv::ocl::oclMat _localProcessBufferON;
cv::ocl::oclMat _localProcessBufferOFF;
cv::ocl::oclMat _magnoYOutput;
cv::ocl::oclMat _magnoYsaturated;
float _temporalCoefficient;
void _amacrineCellsComputing(const cv::ocl::oclMat &OPL_ON, const cv::ocl::oclMat &OPL_OFF);
};
class ParvoRetinaFilter: public BasicRetinaFilter
{
public:
ParvoRetinaFilter(const unsigned int NBrows = 480, const unsigned int NBcolumns = 640);
virtual ~ParvoRetinaFilter();
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
void clearAllBuffers();
void setOPLandParvoFiltersParameters(const float beta1, const float tau1, const float k1, const float beta2, const float tau2, const float k2);
inline void setGanglionCellsLocalAdaptationLPfilterParameters(const float tau, const float k)
{
BasicRetinaFilter::setLPfilterParameters(0, tau, k, 2);
};
const cv::ocl::oclMat &runFilter(const cv::ocl::oclMat &inputFrame, const bool useParvoOutput = true);
inline const cv::ocl::oclMat &getPhotoreceptorsLPfilteringOutput() const
{
return _photoreceptorsOutput;
};
inline const cv::ocl::oclMat &getHorizontalCellsOutput() const
{
return _horizontalCellsOutput;
};
inline const cv::ocl::oclMat &getParvoON() const
{
return _parvocellularOutputON;
};
inline const cv::ocl::oclMat &getParvoOFF() const
{
return _parvocellularOutputOFF;
};
inline const cv::ocl::oclMat &getBipolarCellsON() const
{
return _bipolarCellsOutputON;
};
inline const cv::ocl::oclMat &getBipolarCellsOFF() const
{
return _bipolarCellsOutputOFF;
};
inline float getPhotoreceptorsTemporalConstant()
{
return this->_filteringCoeficientsTable[2];
};
inline float getHcellsTemporalConstant()
{
return this->_filteringCoeficientsTable[5];
};
private:
cv::ocl::oclMat _photoreceptorsOutput;
cv::ocl::oclMat _horizontalCellsOutput;
cv::ocl::oclMat _parvocellularOutputON;
cv::ocl::oclMat _parvocellularOutputOFF;
cv::ocl::oclMat _bipolarCellsOutputON;
cv::ocl::oclMat _bipolarCellsOutputOFF;
cv::ocl::oclMat _localAdaptationOFF;
cv::ocl::oclMat _localAdaptationON;
cv::ocl::oclMat _parvocellularOutputONminusOFF;
void _OPL_OnOffWaysComputing();
};
class RetinaColor: public BasicRetinaFilter
{
public:
RetinaColor(const unsigned int NBrows, const unsigned int NBcolumns, const int samplingMethod = RETINA_COLOR_DIAGONAL);
virtual ~RetinaColor();
void clearAllBuffers();
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
inline void runColorMultiplexing(const cv::ocl::oclMat &inputRGBFrame)
{
runColorMultiplexing(inputRGBFrame, _multiplexedFrame);
};
void runColorMultiplexing(const cv::ocl::oclMat &demultiplexedInputFrame, cv::ocl::oclMat &multiplexedFrame);
void runColorDemultiplexing(const cv::ocl::oclMat &multiplexedColorFrame, const bool adaptiveFiltering = false, const float maxInputValue = 255.0);
void setColorSaturation(const bool saturateColors = true, const float colorSaturationValue = 4.0)
{
_saturateColors = saturateColors;
_colorSaturationValue = colorSaturationValue;
};
void setChrominanceLPfilterParameters(const float beta, const float tau, const float k)
{
setLPfilterParameters(beta, tau, k);
};
bool applyKrauskopfLMS2Acr1cr2Transform(cv::ocl::oclMat &result);
bool applyLMS2LabTransform(cv::ocl::oclMat &result);
inline const cv::ocl::oclMat &getMultiplexedFrame() const
{
return _multiplexedFrame;
};
inline const cv::ocl::oclMat &getDemultiplexedColorFrame() const
{
return _demultiplexedColorFrame;
};
inline const cv::ocl::oclMat &getLuminance() const
{
return _luminance;
};
inline const cv::ocl::oclMat &getChrominance() const
{
return _chrominance;
};
void clipRGBOutput_0_maxInputValue(cv::ocl::oclMat &inputOutputBuffer, const float maxOutputValue = 255.0);
void normalizeRGBOutput_0_maxOutputValue(const float maxOutputValue = 255.0);
inline void setDemultiplexedColorFrame(const cv::ocl::oclMat &demultiplexedImage)
{
_demultiplexedColorFrame = demultiplexedImage;
};
protected:
inline unsigned int bayerSampleOffset(unsigned int index)
{
return index + ((index / getNBcolumns()) % 2) * getNBpixels() + ((index % getNBcolumns()) % 2) * getNBpixels();
}
inline Rect getROI(int idx)
{
return Rect(0, idx * _NBrows, _NBcols, _NBrows);
}
int _samplingMethod;
bool _saturateColors;
float _colorSaturationValue;
cv::ocl::oclMat _luminance;
cv::ocl::oclMat _multiplexedFrame;
cv::ocl::oclMat _RGBmosaic;
cv::ocl::oclMat _tempMultiplexedFrame;
cv::ocl::oclMat _demultiplexedTempBuffer;
cv::ocl::oclMat _demultiplexedColorFrame;
cv::ocl::oclMat _chrominance;
cv::ocl::oclMat _colorLocalDensity;
cv::ocl::oclMat _imageGradient;
float _pR, _pG, _pB;
bool _objectInit;
void _initColorSampling();
void _adaptiveSpatialLPfilter(const cv::ocl::oclMat &inputFrame, const cv::ocl::oclMat &gradient, cv::ocl::oclMat &outputFrame);
void _adaptiveHorizontalCausalFilter_addInput(const cv::ocl::oclMat &inputFrame, const cv::ocl::oclMat &gradient, cv::ocl::oclMat &outputFrame);
void _adaptiveVerticalAnticausalFilter_multGain(const cv::ocl::oclMat &gradient, cv::ocl::oclMat &outputFrame);
void _computeGradient(const cv::ocl::oclMat &luminance, cv::ocl::oclMat &gradient);
void _normalizeOutputs_0_maxOutputValue(void);
void _applyImageColorSpaceConversion(const cv::ocl::oclMat &inputFrame, cv::ocl::oclMat &outputFrame, const float *transformTable);
};
class RetinaFilter
{
public:
RetinaFilter(const unsigned int sizeRows, const unsigned int sizeColumns, const bool colorMode = false, const int samplingMethod = RETINA_COLOR_BAYER, const bool useRetinaLogSampling = false, const double reductionFactor = 1.0, const double samplingStrenght = 10.0);
~RetinaFilter();
void clearAllBuffers();
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
bool checkInput(const cv::ocl::oclMat &input, const bool colorMode);
bool runFilter(const cv::ocl::oclMat &imageInput, const bool useAdaptiveFiltering = true, const bool processRetinaParvoMagnoMapping = false, const bool useColorMode = false, const bool inputIsColorMultiplexed = false);
void setGlobalParameters(const float OPLspatialResponse1 = 0.7, const float OPLtemporalresponse1 = 1, const float OPLassymetryGain = 0, const float OPLspatialResponse2 = 5, const float OPLtemporalresponse2 = 1, const float LPfilterSpatialResponse = 5, const float LPfilterGain = 0, const float LPfilterTemporalresponse = 0, const float MovingContoursExtractorCoefficient = 5, const bool normalizeParvoOutput_0_maxOutputValue = false, const bool normalizeMagnoOutput_0_maxOutputValue = false, const float maxOutputValue = 255.0, const float maxInputValue = 255.0, const float meanValue = 128.0);
inline void setPhotoreceptorsLocalAdaptationSensitivity(const float V0CompressionParameter)
{
_photoreceptorsPrefilter.setV0CompressionParameter(1 - V0CompressionParameter);
_setInitPeriodCount();
};
inline void setParvoGanglionCellsLocalAdaptationSensitivity(const float V0CompressionParameter)
{
_ParvoRetinaFilter.setV0CompressionParameter(V0CompressionParameter);
_setInitPeriodCount();
};
inline void setGanglionCellsLocalAdaptationLPfilterParameters(const float spatialResponse, const float temporalResponse)
{
_ParvoRetinaFilter.setGanglionCellsLocalAdaptationLPfilterParameters(temporalResponse, spatialResponse);
_setInitPeriodCount();
};
inline void setMagnoGanglionCellsLocalAdaptationSensitivity(const float V0CompressionParameter)
{
_MagnoRetinaFilter.setV0CompressionParameter(V0CompressionParameter);
_setInitPeriodCount();
};
void setOPLandParvoParameters(const float beta1, const float tau1, const float k1, const float beta2, const float tau2, const float k2, const float V0CompressionParameter)
{
_ParvoRetinaFilter.setOPLandParvoFiltersParameters(beta1, tau1, k1, beta2, tau2, k2);
_ParvoRetinaFilter.setV0CompressionParameter(V0CompressionParameter);
_setInitPeriodCount();
};
void setMagnoCoefficientsTable(const float parasolCells_beta, const float parasolCells_tau, const float parasolCells_k, const float amacrinCellsTemporalCutFrequency, const float V0CompressionParameter, const float localAdaptintegration_tau, const float localAdaptintegration_k)
{
_MagnoRetinaFilter.setCoefficientsTable(parasolCells_beta, parasolCells_tau, parasolCells_k, amacrinCellsTemporalCutFrequency, localAdaptintegration_tau, localAdaptintegration_k);
_MagnoRetinaFilter.setV0CompressionParameter(V0CompressionParameter);
_setInitPeriodCount();
};
inline void activateNormalizeParvoOutput_0_maxOutputValue(const bool normalizeParvoOutput_0_maxOutputValue)
{
_normalizeParvoOutput_0_maxOutputValue = normalizeParvoOutput_0_maxOutputValue;
};
inline void activateNormalizeMagnoOutput_0_maxOutputValue(const bool normalizeMagnoOutput_0_maxOutputValue)
{
_normalizeMagnoOutput_0_maxOutputValue = normalizeMagnoOutput_0_maxOutputValue;
};
inline void setMaxOutputValue(const float maxOutputValue)
{
_maxOutputValue = maxOutputValue;
};
void setColorMode(const bool desiredColorMode)
{
_useColorMode = desiredColorMode;
};
inline void setColorSaturation(const bool saturateColors = true, const float colorSaturationValue = 4.0)
{
_colorEngine.setColorSaturation(saturateColors, colorSaturationValue);
};
inline const cv::ocl::oclMat &getLocalAdaptation() const
{
return _photoreceptorsPrefilter.getOutput();
};
inline const cv::ocl::oclMat &getPhotoreceptors() const
{
return _ParvoRetinaFilter.getPhotoreceptorsLPfilteringOutput();
};
inline const cv::ocl::oclMat &getHorizontalCells() const
{
return _ParvoRetinaFilter.getHorizontalCellsOutput();
};
inline bool areContoursProcessed()
{
return _useParvoOutput;
};
bool getParvoFoveaResponse(cv::ocl::oclMat &parvoFovealResponse);
inline void activateContoursProcessing(const bool useParvoOutput)
{
_useParvoOutput = useParvoOutput;
};
const cv::ocl::oclMat &getContours();
inline const cv::ocl::oclMat &getContoursON() const
{
return _ParvoRetinaFilter.getParvoON();
};
inline const cv::ocl::oclMat &getContoursOFF() const
{
return _ParvoRetinaFilter.getParvoOFF();
};
inline bool areMovingContoursProcessed()
{
return _useMagnoOutput;
};
inline void activateMovingContoursProcessing(const bool useMagnoOutput)
{
_useMagnoOutput = useMagnoOutput;
};
inline const cv::ocl::oclMat &getMovingContours() const
{
return _MagnoRetinaFilter.getOutput();
};
inline const cv::ocl::oclMat &getMovingContoursSaturated() const
{
return _MagnoRetinaFilter.getMagnoYsaturated();
};
inline const cv::ocl::oclMat &getMovingContoursON() const
{
return _MagnoRetinaFilter.getMagnoON();
};
inline const cv::ocl::oclMat &getMovingContoursOFF() const
{
return _MagnoRetinaFilter.getMagnoOFF();
};
inline const cv::ocl::oclMat &getRetinaParvoMagnoMappedOutput() const
{
return _retinaParvoMagnoMappedFrame;
};
inline const cv::ocl::oclMat &getParvoContoursChannel() const
{
return _colorEngine.getLuminance();
};
inline const cv::ocl::oclMat &getParvoChrominance() const
{
return _colorEngine.getChrominance();
};
inline const cv::ocl::oclMat &getColorOutput() const
{
return _colorEngine.getDemultiplexedColorFrame();
};
inline bool isColorMode()
{
return _useColorMode;
};
bool getColorMode()
{
return _useColorMode;
};
inline bool isInitTransitionDone()
{
if (_ellapsedFramesSinceLastReset < _globalTemporalConstant)
{
return false;
}
return true;
};
inline float getRetinaSamplingBackProjection(const float projectedRadiusLength)
{
return projectedRadiusLength;
};
inline unsigned int getInputNBrows()
{
return _photoreceptorsPrefilter.getNBrows();
};
inline unsigned int getInputNBcolumns()
{
return _photoreceptorsPrefilter.getNBcolumns();
};
inline unsigned int getInputNBpixels()
{
return _photoreceptorsPrefilter.getNBpixels();
};
inline unsigned int getOutputNBrows()
{
return _photoreceptorsPrefilter.getNBrows();
};
inline unsigned int getOutputNBcolumns()
{
return _photoreceptorsPrefilter.getNBcolumns();
};
inline unsigned int getOutputNBpixels()
{
return _photoreceptorsPrefilter.getNBpixels();
};
private:
bool _useParvoOutput;
bool _useMagnoOutput;
unsigned int _ellapsedFramesSinceLastReset;
unsigned int _globalTemporalConstant;
cv::ocl::oclMat _retinaParvoMagnoMappedFrame;
BasicRetinaFilter _photoreceptorsPrefilter;
ParvoRetinaFilter _ParvoRetinaFilter;
MagnoRetinaFilter _MagnoRetinaFilter;
RetinaColor _colorEngine;
bool _useMinimalMemoryForToneMappingONLY;
bool _normalizeParvoOutput_0_maxOutputValue;
bool _normalizeMagnoOutput_0_maxOutputValue;
float _maxOutputValue;
bool _useColorMode;
void _setInitPeriodCount();
void _processRetinaParvoMagnoMapping();
void _runGrayToneMapping(const cv::ocl::oclMat &grayImageInput, cv::ocl::oclMat &grayImageOutput , const float PhotoreceptorsCompression = 0.6, const float ganglionCellsCompression = 0.6);
};
} /* namespace ocl */
} /* namespace bioinspired */
} /* namespace cv */
#endif /* HAVE_OPENCV_OCL */
#endif /* __OCL_RETINA_HPP__ */

View File

@@ -1,725 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract features from still images & image sequences, from contour details to spatio-temporal motion features, etc., for high-level visual scene analysis. Also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues whose code has been redrawn by the author:
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling, which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above-cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#include "precomp.hpp"
#include "retinacolor.hpp"
// @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
#include <iostream>
#include <ctime>
namespace cv
{
namespace bioinspired
{
// init static values
static float _LMStoACr1Cr2[]={1.0, 1.0, 0.0, 1.0, -1.0, 0.0, -0.5, -0.5, 1.0};
//static double _ACr1Cr2toLMS[]={0.5, 0.5, 0.0, 0.5, -0.5, 0.0, 0.5, 0.0, 1.0};
static float _LMStoLab[]={0.5774f, 0.5774f, 0.5774f, 0.4082f, 0.4082f, -0.8165f, 0.7071f, -0.7071f, 0.f};
// constructor/destructor
RetinaColor::RetinaColor(const unsigned int NBrows, const unsigned int NBcolumns, const int samplingMethod)
:BasicRetinaFilter(NBrows, NBcolumns, 3),
_colorSampling(NBrows*NBcolumns),
_RGBmosaic(NBrows*NBcolumns*3),
_tempMultiplexedFrame(NBrows*NBcolumns),
_demultiplexedTempBuffer(NBrows*NBcolumns*3),
_demultiplexedColorFrame(NBrows*NBcolumns*3),
_chrominance(NBrows*NBcolumns*3),
_colorLocalDensity(NBrows*NBcolumns*3),
_imageGradient(NBrows*NBcolumns*2)
{
// link to parent buffers (let's recycle !)
_luminance=&_filterOutput;
_multiplexedFrame=&_localBuffer;
_objectInit=false;
_samplingMethod=samplingMethod;
_saturateColors=false;
_colorSaturationValue=4.0;
// set default spatio-temporal filter parameters
setLPfilterParameters(0.0, 0.0, 1.5);
setLPfilterParameters(0.0, 0.0, 10.5, 1);// for the low pass filter dedicated to contours energy extraction (demultiplexing process)
setLPfilterParameters(0.f, 0.f, 0.9f, 2);
// init default value on image Gradient
_imageGradient=0.57f;
// init color sampling map
_initColorSampling();
// flush all buffers
clearAllBuffers();
}
RetinaColor::~RetinaColor()
{
}
/**
* function that clears all buffers of the object
*/
void RetinaColor::clearAllBuffers()
{
BasicRetinaFilter::clearAllBuffers();
_tempMultiplexedFrame=0.f;
_demultiplexedTempBuffer=0.f;
_demultiplexedColorFrame=0.f;
_chrominance=0.f;
_imageGradient=0.57f;
}
/**
* resize retina color filter object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void RetinaColor::resize(const unsigned int NBrows, const unsigned int NBcolumns)
{
BasicRetinaFilter::clearAllBuffers();
_colorSampling.resize(NBrows*NBcolumns);
_RGBmosaic.resize(NBrows*NBcolumns*3);
_tempMultiplexedFrame.resize(NBrows*NBcolumns);
_demultiplexedTempBuffer.resize(NBrows*NBcolumns*3);
_demultiplexedColorFrame.resize(NBrows*NBcolumns*3);
_chrominance.resize(NBrows*NBcolumns*3);
_colorLocalDensity.resize(NBrows*NBcolumns*3);
_imageGradient.resize(NBrows*NBcolumns*2);
// link to parent buffers (let's recycle !)
_luminance=&_filterOutput;
_multiplexedFrame=&_localBuffer;
// init color sampling map
_initColorSampling();
// clean buffers
clearAllBuffers();
}
void RetinaColor::_initColorSampling()
{
// filling the conversion table for multiplexed <=> demultiplexed frame
srand((unsigned)time(NULL));
// preInit cones probabilities
_pR=_pB=_pG=0;
switch (_samplingMethod)
{
case RETINA_COLOR_RANDOM:
for (unsigned int index=0 ; index<this->getNBpixels(); ++index)
{
// random RGB sampling
unsigned int colorIndex=rand()%24;
if (colorIndex<8){
colorIndex=0;
++_pR;
}else
{
if (colorIndex<21){
colorIndex=1;
++_pG;
}else{
colorIndex=2;
++_pB;
}
}
_colorSampling[index] = colorIndex*this->getNBpixels()+index;
}
_pR/=(float)this->getNBpixels();
_pG/=(float)this->getNBpixels();
_pB/=(float)this->getNBpixels();
std::cout<<"Color channels proportions: pR, pG, pB= "<<_pR<<", "<<_pG<<", "<<_pB<<", "<<std::endl;
break;
case RETINA_COLOR_DIAGONAL:
for (unsigned int index=0 ; index<this->getNBpixels(); ++index)
{
_colorSampling[index] = index+((index%3+(index%_filterOutput.getNBcolumns()))%3)*_filterOutput.getNBpixels();
}
_pR=_pB=_pG=1.f/3;
break;
case RETINA_COLOR_BAYER: // default sets bayer sampling
for (unsigned int index=0 ; index<_filterOutput.getNBpixels(); ++index)
{
//First line: R G R G
_colorSampling[index] = index+((index/_filterOutput.getNBcolumns())%2)*_filterOutput.getNBpixels()+((index%_filterOutput.getNBcolumns())%2)*_filterOutput.getNBpixels();
//First line: G R G R
//_colorSampling[index] = 3*index+((index/_filterOutput.getNBcolumns())%2)+((index%_filterOutput.getNBcolumns()+1)%2);
}
_pR=_pB=0.25;
_pG=0.5;
break;
default:
#ifdef RETINACOLORDEBUG
std::cerr<<"RetinaColor::No or wrong color sampling method, skeeping"<<std::endl;
#endif
return;
break; // unreachable after the return above, kept for structure
}
// filling the mosaic buffer:
_RGBmosaic=0;
for (unsigned int index=0 ; index<_filterOutput.getNBpixels(); ++index)
// the RGB _RGBmosaic buffer contains 1 where the pixel corresponds to a sampled color
_RGBmosaic[_colorSampling[index]]=1.0;
// computing photoreceptors local density
_spatiotemporalLPfilter(&_RGBmosaic[0], &_colorLocalDensity[0]);
_spatiotemporalLPfilter(&_RGBmosaic[0]+_filterOutput.getNBpixels(), &_colorLocalDensity[0]+_filterOutput.getNBpixels());
_spatiotemporalLPfilter(&_RGBmosaic[0]+_filterOutput.getDoubleNBpixels(), &_colorLocalDensity[0]+_filterOutput.getDoubleNBpixels());
unsigned int maxNBpixels=3*_filterOutput.getNBpixels();
register float *colorLocalDensityPTR=&_colorLocalDensity[0];
for (unsigned int i=0;i<maxNBpixels;++i, ++colorLocalDensityPTR)
*colorLocalDensityPTR=1.f/ *colorLocalDensityPTR;
#ifdef RETINACOLORDEBUG
std::cout<<"INIT _colorLocalDensity max, min: "<<_colorLocalDensity.max()<<", "<<_colorLocalDensity.min()<<std::endl;
#endif
// end of the init step
_objectInit=true;
}
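// Editor's note: a worked example of the Bayer offset above. The selected plane reduces to
// (row % 2) + (col % 2), i.e. 0 -> R, 1 -> G, 2 -> B, which yields the classic mosaic:
//
//   row 0: R G R G ...
//   row 1: G B G B ...
//
// matching the "First line: R G R G" comment in the sampling loop.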
// public functions
void RetinaColor::runColorDemultiplexing(const std::valarray<float> &multiplexedColorFrame, const bool adaptiveFiltering, const float maxInputValue)
{
// demultiplex the grey frame to RGB frame
// -> first set demultiplexed frame to 0
_demultiplexedTempBuffer=0;
// -> demultiplex process
register unsigned int *colorSamplingPTR=&_colorSampling[0];
register const float *multiplexedColorFramePtr=get_data(multiplexedColorFrame);
for (unsigned int indexa=0; indexa<_filterOutput.getNBpixels() ; ++indexa)
_demultiplexedTempBuffer[*(colorSamplingPTR++)]=*(multiplexedColorFramePtr++);
// interpolate the demultiplexed frame depending on the color sampling method
if (!adaptiveFiltering)
_interpolateImageDemultiplexedImage(&_demultiplexedTempBuffer[0]);
// low pass filtering the demultiplexed frame
_spatiotemporalLPfilter(&_demultiplexedTempBuffer[0], &_chrominance[0]);
_spatiotemporalLPfilter(&_demultiplexedTempBuffer[0]+_filterOutput.getNBpixels(), &_chrominance[0]+_filterOutput.getNBpixels());
_spatiotemporalLPfilter(&_demultiplexedTempBuffer[0]+_filterOutput.getDoubleNBpixels(), &_chrominance[0]+_filterOutput.getDoubleNBpixels());
/*if (_samplingMethod==RETINA_COLOR_BAYER)
{
_applyRIFfilter(_chrominance, _chrominance);
_applyRIFfilter(_chrominance+_filterOutput.getNBpixels(), _chrominance+_filterOutput.getNBpixels());
_applyRIFfilter(_chrominance+_filterOutput.getDoubleNBpixels(), _chrominance+_filterOutput.getDoubleNBpixels());
}*/
// normalize by the photoreceptors local density and retrieve the local luminance
register float *chrominancePTR= &_chrominance[0];
register float *colorLocalDensityPTR= &_colorLocalDensity[0];
register float *luminance= &(*_luminance)[0];
if (!adaptiveFiltering)// compute the gradient on the luminance
{
if (_samplingMethod==RETINA_COLOR_RANDOM)
for (unsigned int indexc=0; indexc<_filterOutput.getNBpixels() ; ++indexc, ++chrominancePTR, ++colorLocalDensityPTR, ++luminance)
{
// normalize by photoreceptors density
float Cr=*(chrominancePTR)*_colorLocalDensity[indexc];
float Cg=*(chrominancePTR+_filterOutput.getNBpixels())*_colorLocalDensity[indexc+_filterOutput.getNBpixels()];
float Cb=*(chrominancePTR+_filterOutput.getDoubleNBpixels())*_colorLocalDensity[indexc+_filterOutput.getDoubleNBpixels()];
*luminance=(Cr+Cg+Cb)*_pG;
*(chrominancePTR)=Cr-*luminance;
*(chrominancePTR+_filterOutput.getNBpixels())=Cg-*luminance;
*(chrominancePTR+_filterOutput.getDoubleNBpixels())=Cb-*luminance;
}
else
for (unsigned int indexc=0; indexc<_filterOutput.getNBpixels() ; ++indexc, ++chrominancePTR, ++colorLocalDensityPTR, ++luminance)
{
float Cr=*(chrominancePTR);
float Cg=*(chrominancePTR+_filterOutput.getNBpixels());
float Cb=*(chrominancePTR+_filterOutput.getDoubleNBpixels());
*luminance=_pR*Cr+_pG*Cg+_pB*Cb;
*(chrominancePTR)=Cr-*luminance;
*(chrominancePTR+_filterOutput.getNBpixels())=Cg-*luminance;
*(chrominancePTR+_filterOutput.getDoubleNBpixels())=Cb-*luminance;
}
// in order to get the color image, each colored map needs to be added the luminance
// -> to do so, compute: multiplexedColorFrame - remultiplexed chrominances
runColorMultiplexing(_chrominance, _tempMultiplexedFrame);
//lum = 1/3((f*(ImR))/(f*mR) + (f*(ImG))/(f*mG) + (f*(ImB))/(f*mB));
float *luminancePTR= &(*_luminance)[0];
chrominancePTR= &_chrominance[0];
float *demultiplexedColorFramePTR= &_demultiplexedColorFrame[0];
for (unsigned int indexp=0; indexp<_filterOutput.getNBpixels() ; ++indexp, ++luminancePTR, ++chrominancePTR, ++demultiplexedColorFramePTR)
{
*luminancePTR=(multiplexedColorFrame[indexp]-_tempMultiplexedFrame[indexp]);
*(demultiplexedColorFramePTR)=*(chrominancePTR)+*luminancePTR;
*(demultiplexedColorFramePTR+_filterOutput.getNBpixels())=*(chrominancePTR+_filterOutput.getNBpixels())+*luminancePTR;
*(demultiplexedColorFramePTR+_filterOutput.getDoubleNBpixels())=*(chrominancePTR+_filterOutput.getDoubleNBpixels())+*luminancePTR;
}
}else
{
register const float *multiplexedColorFramePTR= get_data(multiplexedColorFrame);
for (unsigned int indexc=0; indexc<_filterOutput.getNBpixels() ; ++indexc, ++chrominancePTR, ++colorLocalDensityPTR, ++luminance, ++multiplexedColorFramePTR)
{
// normalize by photoreceptors density
float Cr=*(chrominancePTR)*_colorLocalDensity[indexc];
float Cg=*(chrominancePTR+_filterOutput.getNBpixels())*_colorLocalDensity[indexc+_filterOutput.getNBpixels()];
float Cb=*(chrominancePTR+_filterOutput.getDoubleNBpixels())*_colorLocalDensity[indexc+_filterOutput.getDoubleNBpixels()];
*luminance=(Cr+Cg+Cb)*_pG;
_demultiplexedTempBuffer[_colorSampling[indexc]] = *multiplexedColorFramePTR - *luminance;
}
// compute the gradient of the luminance
#ifdef MAKE_PARALLEL // call the TemplateBuffer TBB clipping method
cv::parallel_for_(cv::Range(2,_filterOutput.getNBrows()-2), Parallel_computeGradient(_filterOutput.getNBcolumns(), _filterOutput.getNBrows(), &(*_luminance)[0], &_imageGradient[0]));
#else
_computeGradient(&(*_luminance)[0]);
#endif
// adaptively filter the submosaics to get the adaptive densities, here the buffer _chrominance is used as a temp buffer
_adaptiveSpatialLPfilter(&_RGBmosaic[0], &_chrominance[0]);
_adaptiveSpatialLPfilter(&_RGBmosaic[0]+_filterOutput.getNBpixels(), &_chrominance[0]+_filterOutput.getNBpixels());
_adaptiveSpatialLPfilter(&_RGBmosaic[0]+_filterOutput.getDoubleNBpixels(), &_chrominance[0]+_filterOutput.getDoubleNBpixels());
_adaptiveSpatialLPfilter(&_demultiplexedTempBuffer[0], &_demultiplexedColorFrame[0]);
_adaptiveSpatialLPfilter(&_demultiplexedTempBuffer[0]+_filterOutput.getNBpixels(), &_demultiplexedColorFrame[0]+_filterOutput.getNBpixels());
_adaptiveSpatialLPfilter(&_demultiplexedTempBuffer[0]+_filterOutput.getDoubleNBpixels(), &_demultiplexedColorFrame[0]+_filterOutput.getDoubleNBpixels());
/* for (unsigned int index=0; index<_filterOutput.getNBpixels()*3 ; ++index) // this loop could be removed by passing the density to the filtering function
_demultiplexedColorFrame[index] /= _chrominance[index];*/
_demultiplexedColorFrame/=_chrominance; // more optimal ;o)
// compute and substract the residual luminance
for (unsigned int index=0; index<_filterOutput.getNBpixels() ; ++index)
{
float residu = _pR*_demultiplexedColorFrame[index] + _pG*_demultiplexedColorFrame[index+_filterOutput.getNBpixels()] + _pB*_demultiplexedColorFrame[index+_filterOutput.getDoubleNBpixels()];
_demultiplexedColorFrame[index] = _demultiplexedColorFrame[index] - residu;
_demultiplexedColorFrame[index+_filterOutput.getNBpixels()] = _demultiplexedColorFrame[index+_filterOutput.getNBpixels()] - residu;
_demultiplexedColorFrame[index+_filterOutput.getDoubleNBpixels()] = _demultiplexedColorFrame[index+_filterOutput.getDoubleNBpixels()] - residu;
}
// multiplex the obtained chrominance
runColorMultiplexing(_demultiplexedColorFrame, _tempMultiplexedFrame);
_demultiplexedTempBuffer=0;
// get the luminance and add it to each chrominance
for (unsigned int index=0; index<_filterOutput.getNBpixels() ; ++index)
{
(*_luminance)[index]=multiplexedColorFrame[index]-_tempMultiplexedFrame[index];
_demultiplexedTempBuffer[_colorSampling[index]] = _demultiplexedColorFrame[_colorSampling[index]];//multiplexedColorFrame[index] - (*_luminance)[index];
}
_spatiotemporalLPfilter(&_demultiplexedTempBuffer[0], &_demultiplexedTempBuffer[0]);
_spatiotemporalLPfilter(&_demultiplexedTempBuffer[0]+_filterOutput.getNBpixels(), &_demultiplexedTempBuffer[0]+_filterOutput.getNBpixels());
_spatiotemporalLPfilter(&_demultiplexedTempBuffer[0]+_filterOutput.getDoubleNBpixels(), &_demultiplexedTempBuffer[0]+_filterOutput.getDoubleNBpixels());
// get the luminance and add it to each chrominance
for (unsigned int index=0; index<_filterOutput.getNBpixels() ; ++index)
{
_demultiplexedColorFrame[index] = _demultiplexedTempBuffer[index]*_colorLocalDensity[index]+ (*_luminance)[index];
_demultiplexedColorFrame[index+_filterOutput.getNBpixels()] = _demultiplexedTempBuffer[index+_filterOutput.getNBpixels()]*_colorLocalDensity[index+_filterOutput.getNBpixels()]+ (*_luminance)[index];
_demultiplexedColorFrame[index+_filterOutput.getDoubleNBpixels()] = _demultiplexedTempBuffer[index+_filterOutput.getDoubleNBpixels()]*_colorLocalDensity[index+_filterOutput.getDoubleNBpixels()]+ (*_luminance)[index];
}
}
// eliminate saturated colors by simply clipping values to the input range
clipRGBOutput_0_maxInputValue(NULL, maxInputValue);
/* transfer the image gradient in order to check validity
memcpy((*_luminance), _imageGradient, sizeof(float)*_filterOutput.getNBpixels());
memcpy(_demultiplexedColorFrame, _imageGradient+_filterOutput.getNBpixels(), sizeof(float)*_filterOutput.getNBpixels());
memcpy(_demultiplexedColorFrame+_filterOutput.getNBpixels(), _imageGradient+_filterOutput.getNBpixels(), sizeof(float)*_filterOutput.getNBpixels());
memcpy(_demultiplexedColorFrame+2*_filterOutput.getNBpixels(), _imageGradient+_filterOutput.getNBpixels(), sizeof(float)*_filterOutput.getNBpixels());
*/
if (_saturateColors)
{
TemplateBuffer<float>::normalizeGrayOutputCentredSigmoide(128, _colorSaturationValue, maxInputValue, &_demultiplexedColorFrame[0], &_demultiplexedColorFrame[0], _filterOutput.getNBpixels());
TemplateBuffer<float>::normalizeGrayOutputCentredSigmoide(128, _colorSaturationValue, maxInputValue, &_demultiplexedColorFrame[0]+_filterOutput.getNBpixels(), &_demultiplexedColorFrame[0]+_filterOutput.getNBpixels(), _filterOutput.getNBpixels());
TemplateBuffer<float>::normalizeGrayOutputCentredSigmoide(128, _colorSaturationValue, maxInputValue, &_demultiplexedColorFrame[0]+_filterOutput.getNBpixels()*2, &_demultiplexedColorFrame[0]+_filterOutput.getNBpixels()*2, _filterOutput.getNBpixels());
}
}
// color multiplexing: input frame size=_NBrows*_filterOutput.getNBcolumns()*3, multiplexedFrame output size=_NBrows*_filterOutput.getNBcolumns()
void RetinaColor::runColorMultiplexing(const std::valarray<float> &demultiplexedInputFrame, std::valarray<float> &multiplexedFrame)
{
// multiply each color layer by its bayer mask
register unsigned int *colorSamplingPTR= &_colorSampling[0];
register float *multiplexedFramePTR= &multiplexedFrame[0];
for (unsigned int indexp=0; indexp<_filterOutput.getNBpixels(); ++indexp)
*(multiplexedFramePTR++)=demultiplexedInputFrame[*(colorSamplingPTR++)];
}
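// Editor's note: multiplexing is a pure gather through the precomputed _colorSampling map;
// each output pixel keeps only the plane value sampled at that location. Pointer-free
// equivalent of the loop above:
//
//   for (unsigned int p = 0; p < _filterOutput.getNBpixels(); ++p)
//       multiplexedFrame[p] = demultiplexedInputFrame[_colorSampling[p]];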
void RetinaColor::normalizeRGBOutput_0_maxOutputValue(const float maxOutputValue)
{
//normalizeGrayOutputCentredSigmoide(0.0, 2, _chrominance);
TemplateBuffer<float>::normalizeGrayOutput_0_maxOutputValue(&_demultiplexedColorFrame[0], 3*_filterOutput.getNBpixels(), maxOutputValue);
//normalizeGrayOutputCentredSigmoide(0.0, 2, _chrominance+_filterOutput.getNBpixels());
//normalizeGrayOutput_0_maxOutputValue(_demultiplexedColorFrame+_filterOutput.getNBpixels(), _filterOutput.getNBpixels(), maxOutputValue);
//normalizeGrayOutputCentredSigmoide(0.0, 2, _chrominance+2*_filterOutput.getNBpixels());
//normalizeGrayOutput_0_maxOutputValue(_demultiplexedColorFrame+_filterOutput.getDoubleNBpixels(), _filterOutput.getNBpixels(), maxOutputValue);
TemplateBuffer<float>::normalizeGrayOutput_0_maxOutputValue(&(*_luminance)[0], _filterOutput.getNBpixels(), maxOutputValue);
}
/// normalize output between 0 and maxOutputValue;
void RetinaColor::clipRGBOutput_0_maxInputValue(float *inputOutputBuffer, const float maxInputValue)
{
//std::cout<<"RetinaColor::normalizing RGB frame..."<<std::endl;
// if inputOutputBuffer is unassigned, fall back to the internal demultiplexed color frame
if (inputOutputBuffer==NULL)
inputOutputBuffer= &_demultiplexedColorFrame[0];
#ifdef MAKE_PARALLEL // call the TemplateBuffer TBB clipping method
cv::parallel_for_(cv::Range(0,_filterOutput.getNBpixels()*3), Parallel_clipBufferValues<float>(inputOutputBuffer, 0, maxInputValue));
#else
register float *inputOutputBufferPTR=inputOutputBuffer;
for (register unsigned int jf = 0; jf < _filterOutput.getNBpixels()*3; ++jf, ++inputOutputBufferPTR)
{
if (*inputOutputBufferPTR>maxInputValue)
*inputOutputBufferPTR=maxInputValue;
else if (*inputOutputBufferPTR<0)
*inputOutputBufferPTR=0;
}
#endif
//std::cout<<"RetinaColor::...normalizing RGB frame OK"<<std::endl;
}
void RetinaColor::_interpolateImageDemultiplexedImage(float *inputOutputBuffer)
{
switch(_samplingMethod)
{
case RETINA_COLOR_RANDOM:
return; // no need to interpolate
break;
case RETINA_COLOR_DIAGONAL:
_interpolateSingleChannelImage111(inputOutputBuffer);
break;
case RETINA_COLOR_BAYER: // default sets bayer sampling
_interpolateBayerRGBchannels(inputOutputBuffer);
break;
default:
std::cerr<<"RetinaColor::No or wrong color sampling method, skeeping"<<std::endl;
return;
break; // unreachable after the return above, kept for structure
}
}
void RetinaColor::_interpolateSingleChannelImage111(float *inputOutputBuffer)
{
for (unsigned int indexr=0 ; indexr<_filterOutput.getNBrows(); ++indexr)
{
for (unsigned int indexc=1 ; indexc<_filterOutput.getNBcolumns()-1; ++indexc)
{
unsigned int index=indexc+indexr*_filterOutput.getNBcolumns();
inputOutputBuffer[index]=(inputOutputBuffer[index-1]+inputOutputBuffer[index]+inputOutputBuffer[index+1])/3.f;
}
}
for (unsigned int indexc=0 ; indexc<_filterOutput.getNBcolumns(); ++indexc)
{
for (unsigned int indexr=1 ; indexr<_filterOutput.getNBrows()-1; ++indexr)
{
unsigned int index=indexc+indexr*_filterOutput.getNBcolumns();
inputOutputBuffer[index]=(inputOutputBuffer[index-_filterOutput.getNBcolumns()]+inputOutputBuffer[index]+inputOutputBuffer[index+_filterOutput.getNBcolumns()])/3.f;
}
}
}
void RetinaColor::_interpolateBayerRGBchannels(float *inputOutputBuffer)
{
for (unsigned int indexr=0 ; indexr<_filterOutput.getNBrows()-1; indexr+=2)
{
for (unsigned int indexc=1 ; indexc<_filterOutput.getNBcolumns()-1; indexc+=2)
{
unsigned int indexR=indexc+indexr*_filterOutput.getNBcolumns();
unsigned int indexB=_filterOutput.getDoubleNBpixels()+indexc+1+(indexr+1)*_filterOutput.getNBcolumns();
inputOutputBuffer[indexR]=(inputOutputBuffer[indexR-1]+inputOutputBuffer[indexR+1])/2.f;
inputOutputBuffer[indexB]=(inputOutputBuffer[indexB-1]+inputOutputBuffer[indexB+1])/2.f;
}
}
for (unsigned int indexr=1 ; indexr<_filterOutput.getNBrows()-1; indexr+=2)
{
for (unsigned int indexc=0 ; indexc<_filterOutput.getNBcolumns(); ++indexc)
{
unsigned int indexR=indexc+indexr*_filterOutput.getNBcolumns();
unsigned int indexB=_filterOutput.getDoubleNBpixels()+indexc+1+(indexr+1)*_filterOutput.getNBcolumns();
inputOutputBuffer[indexR]=(inputOutputBuffer[indexR-_filterOutput.getNBcolumns()]+inputOutputBuffer[indexR+_filterOutput.getNBcolumns()])/2.f;
inputOutputBuffer[indexB]=(inputOutputBuffer[indexB-_filterOutput.getNBcolumns()]+inputOutputBuffer[indexB+_filterOutput.getNBcolumns()])/2.f;
}
}
for (unsigned int indexr=1 ; indexr<_filterOutput.getNBrows()-1; ++indexr)
for (unsigned int indexc=0 ; indexc<_filterOutput.getNBcolumns(); indexc+=2)
{
unsigned int indexG=_filterOutput.getNBpixels()+indexc+(indexr)*_filterOutput.getNBcolumns()+indexr%2;
inputOutputBuffer[indexG]=(inputOutputBuffer[indexG-1]+inputOutputBuffer[indexG+1]+inputOutputBuffer[indexG-_filterOutput.getNBcolumns()]+inputOutputBuffer[indexG+_filterOutput.getNBcolumns()])*0.25f;
}
}
void RetinaColor::_applyRIFfilter(const float *sourceBuffer, float *destinationBuffer)
{
for (unsigned int indexr=1 ; indexr<_filterOutput.getNBrows()-1; ++indexr)
{
for (unsigned int indexc=1 ; indexc<_filterOutput.getNBcolumns()-1; ++indexc)
{
unsigned int index=indexc+indexr*_filterOutput.getNBcolumns();
_tempMultiplexedFrame[index]=(4.f*sourceBuffer[index]+sourceBuffer[index-1-_filterOutput.getNBcolumns()]+sourceBuffer[index-1+_filterOutput.getNBcolumns()]+sourceBuffer[index+1-_filterOutput.getNBcolumns()]+sourceBuffer[index+1+_filterOutput.getNBcolumns()])*0.125f;
}
}
memcpy(destinationBuffer, &_tempMultiplexedFrame[0], sizeof(float)*_filterOutput.getNBpixels());
}
void RetinaColor::_getNormalizedContoursImage(const float *inputFrame, float *outputFrame)
{
float maxValue=0.f;
float normalisationFactor=1.f/3.f;
for (unsigned int indexr=1 ; indexr<_filterOutput.getNBrows()-1; ++indexr)
{
for (unsigned int indexc=1 ; indexc<_filterOutput.getNBcolumns()-1; ++indexc)
{
unsigned int index=indexc+indexr*_filterOutput.getNBcolumns();
outputFrame[index]=normalisationFactor*fabs(8.f*inputFrame[index]-inputFrame[index-1]-inputFrame[index+1]-inputFrame[index-_filterOutput.getNBcolumns()]-inputFrame[index+_filterOutput.getNBcolumns()]-inputFrame[index-1-_filterOutput.getNBcolumns()]-inputFrame[index-1+_filterOutput.getNBcolumns()]-inputFrame[index+1-_filterOutput.getNBcolumns()]-inputFrame[index+1+_filterOutput.getNBcolumns()]);
if (outputFrame[index]>maxValue)
maxValue=outputFrame[index];
}
}
normalisationFactor=1.f/maxValue;
// normalisation [0, 1]
for (unsigned int indexp=0 ; indexp<_filterOutput.getNBpixels(); ++indexp)
outputFrame[indexp]=outputFrame[indexp]*normalisationFactor;
}
//////////////////////////////////////////////////////////
// ADAPTIVE BASIC RETINA FILTER
//////////////////////////////////////////////////////////
// run LP filter for a new frame input and save result at a specific output address
void RetinaColor::_adaptiveSpatialLPfilter(const float *inputFrame, float *outputFrame)
{
_gain = (1-0.57f)*(1-0.57f)*(1-0.06f)*(1-0.06f); // normalization gain of the cascaded 1D recursive filters (coefficients 0.57 and 0.06)
// launch the series of 1D directional filters in order to compute the 2D low pass filter
// -> horizontal filters work with the first layer of imageGradient
_adaptiveHorizontalCausalFilter_addInput(inputFrame, outputFrame, 0, _filterOutput.getNBrows());
_horizontalAnticausalFilter_Irregular(outputFrame, 0, _filterOutput.getNBrows(), &_imageGradient[0]);
// -> vertical filters work with the second layer of imageGradient
_verticalCausalFilter_Irregular(outputFrame, 0, _filterOutput.getNBcolumns(), &_imageGradient[0]+_filterOutput.getNBpixels());
_adaptiveVerticalAnticausalFilter_multGain(outputFrame, 0, _filterOutput.getNBcolumns());
}
// horizontal causal filter which adds the input inside... replaces the parent _horizontalCausalFilter_Irregular_addInput by avoiding a product for each pixel
void RetinaColor::_adaptiveHorizontalCausalFilter_addInput(const float *inputFrame, float *outputFrame, unsigned int IDrowStart, unsigned int IDrowEnd)
{
#ifdef MAKE_PARALLEL
cv::parallel_for_(cv::Range(IDrowStart,IDrowEnd), Parallel_adaptiveHorizontalCausalFilter_addInput(inputFrame, outputFrame, &_imageGradient[0], _filterOutput.getNBcolumns()));
#else
register float* outputPTR=outputFrame+IDrowStart*_filterOutput.getNBcolumns();
register const float* inputPTR=inputFrame+IDrowStart*_filterOutput.getNBcolumns();
register const float *imageGradientPTR= &_imageGradient[0]+IDrowStart*_filterOutput.getNBcolumns();
for (unsigned int IDrow=IDrowStart; IDrow<IDrowEnd; ++IDrow)
{
register float result=0;
for (unsigned int index=0; index<_filterOutput.getNBcolumns(); ++index)
{
//std::cout<<(*imageGradientPTR)<<" ";
result = *(inputPTR++) + (*imageGradientPTR)* result;
*(outputPTR++) = result;
++imageGradientPTR;
}
// std::cout<<" "<<std::endl;
}
#endif
}
// vertical anticausal filter which multiplies the output by _gain... replaces the parent _verticalAnticausalFilter_multGain by avoiding a product for each pixel and taking into account the second layer of the _imageGradient buffer
void RetinaColor::_adaptiveVerticalAnticausalFilter_multGain(float *outputFrame, unsigned int IDcolumnStart, unsigned int IDcolumnEnd)
{
#ifdef MAKE_PARALLEL
cv::parallel_for_(cv::Range(IDcolumnStart,IDcolumnEnd), Parallel_adaptiveVerticalAnticausalFilter_multGain(outputFrame, &_imageGradient[0]+_filterOutput.getNBpixels(), _filterOutput.getNBrows(), _filterOutput.getNBcolumns(), _gain));
#else
float* outputOffset=outputFrame+_filterOutput.getNBpixels()-_filterOutput.getNBcolumns();
float* gradOffset= &_imageGradient[0]+_filterOutput.getNBpixels()*2-_filterOutput.getNBcolumns();
for (unsigned int IDcolumn=IDcolumnStart; IDcolumn<IDcolumnEnd; ++IDcolumn)
{
register float result=0;
register float *outputPTR=outputOffset+IDcolumn;
register float *imageGradientPTR=gradOffset+IDcolumn;
for (unsigned int index=0; index<_filterOutput.getNBrows(); ++index)
{
result = *(outputPTR) + (*(imageGradientPTR)) * result;
*(outputPTR) = _gain*result;
outputPTR-=_filterOutput.getNBcolumns();
imageGradientPTR-=_filterOutput.getNBcolumns();
}
}
#endif
}
///////////////////////////
void RetinaColor::_computeGradient(const float *luminance)
{
for (unsigned int idLine=2;idLine<_filterOutput.getNBrows()-2;++idLine)
{
for (unsigned int idColumn=2;idColumn<_filterOutput.getNBcolumns()-2;++idColumn)
{
const unsigned int pixelIndex=idColumn+_filterOutput.getNBcolumns()*idLine;
// horizontal and vertical local gradients
const float verticalGrad=fabs(luminance[pixelIndex+_filterOutput.getNBcolumns()]-luminance[pixelIndex-_filterOutput.getNBcolumns()]);
const float horizontalGrad=fabs(luminance[pixelIndex+1]-luminance[pixelIndex-1]);
// neighborhood horizontal and vertical gradients
const float verticalGrad_p=fabs(luminance[pixelIndex]-luminance[pixelIndex-2*_filterOutput.getNBcolumns()]);
const float horizontalGrad_p=fabs(luminance[pixelIndex]-luminance[pixelIndex-2]);
const float verticalGrad_n=fabs(luminance[pixelIndex+2*_filterOutput.getNBcolumns()]-luminance[pixelIndex]);
const float horizontalGrad_n=fabs(luminance[pixelIndex+2]-luminance[pixelIndex]);
const float horizontalGradient=0.5f*horizontalGrad+0.25f*(horizontalGrad_p+horizontalGrad_n);
const float verticalGradient=0.5f*verticalGrad+0.25f*(verticalGrad_p+verticalGrad_n);
// compare local gradient means and fill the appropriate filtering coefficient value that will be used in the adaptive filters
if (horizontalGradient<verticalGradient)
{
_imageGradient[pixelIndex+_filterOutput.getNBpixels()]=0.06f;
_imageGradient[pixelIndex]=0.57f;
}
else
{
_imageGradient[pixelIndex+_filterOutput.getNBpixels()]=0.57f;
_imageGradient[pixelIndex]=0.06f;
}
}
}
}
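// Illustrative sketch (not part of the original file): for each pixel, the loop above
// boils down to picking a pair of recursion coefficients from the local gradient
// comparison; written as a tiny helper, the decision rule is simply:
static inline void pickAdaptiveCoefficientsSketch(const float horizontalGradient, const float verticalGradient, float &horizontalCoefficient, float &verticalCoefficient)
{
// smooth strongly (0.57) along the direction with the weaker gradient,
// weakly (0.06) across the stronger one, so that edges are preserved
if (horizontalGradient<verticalGradient)
{
horizontalCoefficient=0.57f;
verticalCoefficient=0.06f;
}else
{
horizontalCoefficient=0.06f;
verticalCoefficient=0.57f;
}
}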
bool RetinaColor::applyKrauskopfLMS2Acr1cr2Transform(std::valarray<float> &result)
{
bool processSuccess=true;
// basic preliminary error check
if (result.size()!=_demultiplexedColorFrame.size())
{
std::cerr<<"RetinaColor::applyKrauskopfLMS2Acr1cr2Transform: input buffer does not match retina buffer size, conversion aborted"<<std::endl;
return false;
}
// apply transformation
_applyImageColorSpaceConversion(_demultiplexedColorFrame, result, _LMStoACr1Cr2);
return processSuccess;
}
bool RetinaColor::applyLMS2LabTransform(std::valarray<float> &result)
{
bool processSuccess=true;
// basic preliminary error check
if (result.size()!=_demultiplexedColorFrame.size())
{
std::cerr<<"RetinaColor::applyKrauskopfLMS2Acr1cr2Transform: input buffer does not match retina buffer size, conversion aborted"<<std::endl;
return false;
}
// apply transformation
_applyImageColorSpaceConversion(_demultiplexedColorFrame, result, _LMStoLab);
return processSuccess;
}
// template function able to perform a custom color space transformation
void RetinaColor::_applyImageColorSpaceConversion(const std::valarray<float> &inputFrameBuffer, std::valarray<float> &outputFrameBuffer, const float *transformTable)
{
// two step methods in order to allow inputFrame and outputFrame to be the same
unsigned int nbPixels=(unsigned int)(inputFrameBuffer.size()/3), dbpixels=(unsigned int)(2*inputFrameBuffer.size()/3);
const float *inputFrame=get_data(inputFrameBuffer);
float *outputFrame= &outputFrameBuffer[0];
for (unsigned int dataIndex=0; dataIndex<nbPixels;++dataIndex, ++outputFrame, ++inputFrame)
{
// first step: compute each new value
float layer1 = *(inputFrame)**(transformTable+0) +*(inputFrame+nbPixels)**(transformTable+1) +*(inputFrame+dbpixels)**(transformTable+2);
float layer2 = *(inputFrame)**(transformTable+3) +*(inputFrame+nbPixels)**(transformTable+4) +*(inputFrame+dbpixels)**(transformTable+5);
float layer3 = *(inputFrame)**(transformTable+6) +*(inputFrame+nbPixels)**(transformTable+7) +*(inputFrame+dbpixels)**(transformTable+8);
// second step: write the output
*(outputFrame) = layer1;
*(outputFrame+nbPixels) = layer2;
*(outputFrame+dbpixels) = layer3;
}
}
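// Illustrative note (not part of the original file): the conversion above is a plain
// 3x3 matrix product applied pixel-wise to planar data. For one pixel with channel
// values (c0, c1, c2) and a row-major table m[9], it computes:
// out0 = m[0]*c0 + m[1]*c1 + m[2]*c2
// out1 = m[3]*c0 + m[4]*c1 + m[5]*c2
// out2 = m[6]*c0 + m[7]*c1 + m[8]*c2
// Because the results are written to temporaries first, inputFrameBuffer and
// outputFrameBuffer may safely alias the same valarray.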
}// end of namespace bioinspired
}// end of namespace cv

View File

@@ -1,389 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract features from still images & image sequences, from contour details to spatio-temporal motion features, etc., for high-level visual scene analysis. Also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from whom code has been redrawn by the author :
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene's color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling, which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from discussions with Jeanny.
** ====> more information in Jeanny Herault's book cited above.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
/**
* @class RetinaColor a color multiplexing/demultiplexing (demosaicing) model based on a human vision inspiration. Different mosaicing strategies can be used, including random sampling!
* => please take a look at the nice and efficient demosaicing strategy introduced by B. Chaix de Lavarene; see the cited paper for more mathematical details
* @brief Retina color sampling model which allows classical Bayer sampling, random sampling and potentially several other methods! Low color errors on corners!
* -> Based on the research of:
* .Brice Chaix Lavarene (chaix@lis.inpg.fr)
* .Jeanny Herault (herault@lis.inpg.fr)
* .David Alleyson (david.alleyson@upmf-grenoble.fr)
* .collaboration: alexandre benoit (benoit.alexandre.vision@gmail.com or benoit@lis.inpg.fr)
* Please cite: B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC / Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
* Creation date 2007
*/
#ifndef RETINACOLOR_HPP_
#define RETINACOLOR_HPP_
#include "basicretinafilter.hpp"
//#define __RETINACOLORDEBUG //define RETINACOLORDEBUG in order to display debug data
namespace cv
{
namespace bioinspired
{
class RetinaColor: public BasicRetinaFilter
{
public:
/**
* constructor of the retina color processing model
* @param NBrows: number of rows of the input image
* @param NBcolumns: number of columns of the input image
* @param samplingMethod: the chosen color sampling method
*/
RetinaColor(const unsigned int NBrows, const unsigned int NBcolumns, const int samplingMethod=RETINA_COLOR_BAYER);
/**
* standard destructor
*/
virtual ~RetinaColor();
/**
* function that clears all buffers of the object
*/
void clearAllBuffers();
/**
* resize retina color filter object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
/**
* color multiplexing function: a demultiplexed RGB frame of size M*N*3 is transformed into a multiplexed M*N*1 pixels frame where each pixel is either Red, or Green or Blue
* @param inputRGBFrame: the input RGB frame to be processed
* @return nothing, but the multiplexed frame is made available through the getMultiplexedFrame() function
*/
inline void runColorMultiplexing(const std::valarray<float> &inputRGBFrame){runColorMultiplexing(inputRGBFrame, *_multiplexedFrame);};
/**
* color multiplexing function: a demultiplexed RGB frame of size M*N*3 is transformed into a multiplexed M*N*1 pixels frame where each pixel is either Red, or Green or Blue if using RGB images
* @param demultiplexedInputFrame: the demultiplexed input frame to be processed of size M*N*3
* @param multiplexedFrame: the resulting multiplexed frame
*/
void runColorMultiplexing(const std::valarray<float> &demultiplexedInputFrame, std::valarray<float> &multiplexedFrame);
/**
* color demultiplexing function: a multiplexed frame of size M*N*1 pixels is transformed into a RGB demultiplexed M*N*3 pixels frame
* @param multiplexedColorFrame: the input multiplexed frame to be processed
* @param adaptiveFiltering: specifies if an adaptive filtering has to be performed rather than standard filtering (adaptive filtering allows a better rendering)
* @param maxInputValue: the maximum input data value (should be 255 for 8-bit images but it can change in the case of High Dynamic Range Images (HDRI))
* @return nothing, but the output demultiplexed frame is made available through the getDemultiplexedColorFrame() function; also use getLuminance() and getChrominance() in order to retrieve either luminance or chrominance
*/
void runColorDemultiplexing(const std::valarray<float> &multiplexedColorFrame, const bool adaptiveFiltering=false, const float maxInputValue=255.0);
/**
* activate color saturation as the final step of the color demultiplexing process
* -> this saturation is a sigmoid function applied to each channel of the demultiplexed image.
* @param saturateColors: boolean that activates color saturation (if true) or deactivates it (if false)
* @param colorSaturationValue: the saturation factor
* */
void setColorSaturation(const bool saturateColors=true, const float colorSaturationValue=4.0){_saturateColors=saturateColors; _colorSaturationValue=colorSaturationValue;};
/**
* set parameters of the low pass spatio-temporal filter used to retrieve the low chrominance
* @param beta: gain of the filter (generally set to zero)
* @param tau: time constant of the filter (unit is frame for video processing), typically 0 when considering static processing, 1 or more if a temporal smoothing effect is required
* @param k: spatial constant of the filter (unit is pixels), typical value is 2.5
*/
void setChrominanceLPfilterParameters(const float beta, const float tau, const float k){setLPfilterParameters(beta, tau, k);};
/**
* apply to the retina color output the Krauskopf transformation which leads to an opponent color system: the output colorspace is ACr1Cr2 if the retina input was the LMS color space
* @param result: the buffer to fill with the transformed colorspace retina output
* @return true if process ended successfully
*/
bool applyKrauskopfLMS2Acr1cr2Transform(std::valarray<float> &result);
/**
* apply to the retina color output the CIE Lab color transformation
* @param result: the buffer to fill with the transformed colorspace retina output
* @return true if process ended successfully
*/
bool applyLMS2LabTransform(std::valarray<float> &result);
/**
* @return the multiplexed frame result (use this after function runColorMultiplexing)
*/
inline const std::valarray<float> &getMultiplexedFrame() const {return *_multiplexedFrame;};
/**
* @return the demultiplexed frame result (use this after function runColorDemultiplexing)
*/
inline const std::valarray<float> &getDemultiplexedColorFrame() const {return _demultiplexedColorFrame;};
/**
* @return the luminance of the processed frame (use this after function runColorDemultiplexing)
*/
inline const std::valarray<float> &getLuminance() const {return *_luminance;};
/**
* @return the chrominance of the processed frame (use this after function runColorDemultiplexing)
*/
inline const std::valarray<float> &getChrominance() const {return _chrominance;};
/**
* standard 0 to 255 image clipping function applied to RGB images (of size M*N*3 pixels)
* @param inputOutputBuffer: the image to be normalized (rewrites the input); if no parameter is given, the built-in buffer reachable by the getOutput() function is normalized
* @param maxOutputValue: the maximum value allowed at the output (values above it will be clipped)
*/
void clipRGBOutput_0_maxInputValue(float *inputOutputBuffer, const float maxOutputValue=255.0);
/**
* standard 0 to 255 image normalization function applied to RGB images (of size M*N*3 pixels)
* @param maxOutputValue: the maximum value allowed at the output (values above it will be clipped)
*/
void normalizeRGBOutput_0_maxOutputValue(const float maxOutputValue=255.0);
/**
* return the color sampling map: an Nrows*Mcolumns image in which each pixel value is the offset address of the sampled pixel in an Nrows*Mcolumns*3 color image ordered by layers: layer1, layer2, layer3
*/
inline const std::valarray<unsigned int> &getSamplingMap() const {return _colorSampling;};
/**
* function used (to bypass processing) to manually set the color output
* @param demultiplexedImage: the color image (luminance+chrominance) which has to be written in the object buffer
*/
inline void setDemultiplexedColorFrame(const std::valarray<float> &demultiplexedImage){_demultiplexedColorFrame=demultiplexedImage;};
protected:
// protected data
int _samplingMethod;
bool _saturateColors;
float _colorSaturationValue;
// links to parent buffers (more convenient names)
TemplateBuffer<float> *_luminance;
std::valarray<float> *_multiplexedFrame;
// instance buffers
std::valarray<unsigned int> _colorSampling; // table (size _nbRows*_nbColumns) which specifies the color of each pixel
std::valarray<float> _RGBmosaic;
std::valarray<float> _tempMultiplexedFrame;
std::valarray<float> _demultiplexedTempBuffer;
std::valarray<float> _demultiplexedColorFrame;
std::valarray<float> _chrominance;
std::valarray<float> _colorLocalDensity;// buffer which contains the local density of the R, G and B photoreceptors for a normalization use
std::valarray<float> _imageGradient;
// variables
float _pR, _pG, _pB; // probabilities of color R, G and B
bool _objectInit;
// protected functions
void _initColorSampling();
void _interpolateImageDemultiplexedImage(float *inputOutputBuffer);
void _interpolateSingleChannelImage111(float *inputOutputBuffer);
void _interpolateBayerRGBchannels(float *inputOutputBuffer);
void _applyRIFfilter(const float *sourceBuffer, float *destinationBuffer);
void _getNormalizedContoursImage(const float *inputFrame, float *outputFrame);
// -> special adaptive filters dedicated to low pass filtering on the chrominance (skips filtering on the edges)
void _adaptiveSpatialLPfilter(const float *inputFrame, float *outputFrame);
void _adaptiveHorizontalCausalFilter_addInput(const float *inputFrame, float *outputFrame, const unsigned int IDrowStart, const unsigned int IDrowEnd); // TBB parallelized
void _adaptiveVerticalAnticausalFilter_multGain(float *outputFrame, const unsigned int IDcolumnStart, const unsigned int IDcolumnEnd);
void _computeGradient(const float *luminance);
void _normalizeOutputs_0_maxOutputValue(void);
// color space transform
void _applyImageColorSpaceConversion(const std::valarray<float> &inputFrame, std::valarray<float> &outputFrame, const float *transformTable);
#ifdef MAKE_PARALLEL
/******************************************************
** If some parallelizing thread methods are available, then main loops are parallelized using these functors
** ==> main idea: parallelize the main filter loops; only the most used methods are parallelized so far... TODO : increase the number of parallelized methods as necessary
** ==> functor names = Parallel_$$$ where $$$ = the name of the serial method that is parallelized
** ==> functor constructors can differ from the parameters used with their related serial functions
*/
/* Template :
class Parallel_ : public cv::ParallelLoopBody
{
private:
public:
Parallel_()
: {}
virtual void operator()( const cv::Range& r ) const {
}
};
*/
class Parallel_adaptiveHorizontalCausalFilter_addInput: public cv::ParallelLoopBody
{
private:
float *outputFrame;
const float *inputFrame, *imageGradient;
unsigned int nbColumns;
public:
Parallel_adaptiveHorizontalCausalFilter_addInput(const float *inputImg, float *bufferToProcess, const float *imageGrad, const unsigned int nbCols)
:outputFrame(bufferToProcess), inputFrame(inputImg), imageGradient(imageGrad), nbColumns(nbCols) {};
virtual void operator()( const Range& r ) const {
register float* outputPTR=outputFrame+r.start*nbColumns;
register const float* inputPTR=inputFrame+r.start*nbColumns;
register const float *imageGradientPTR= imageGradient+r.start*nbColumns;
for (int IDrow=r.start; IDrow!=r.end; ++IDrow)
{
register float result=0;
for (unsigned int index=0; index<nbColumns; ++index)
{
result = *(inputPTR++) + (*imageGradientPTR++)* result;
*(outputPTR++) = result;
}
}
}
};
class Parallel_adaptiveVerticalAnticausalFilter_multGain: public cv::ParallelLoopBody
{
private:
float *outputFrame;
const float *imageGradient;
unsigned int nbRows, nbColumns;
float filterParam_gain;
public:
Parallel_adaptiveVerticalAnticausalFilter_multGain(float *bufferToProcess, const float *imageGrad, const unsigned int nbRws, const unsigned int nbCols, const float gain)
:outputFrame(bufferToProcess), imageGradient(imageGrad), nbRows(nbRws), nbColumns(nbCols), filterParam_gain(gain){}
virtual void operator()( const Range& r ) const {
float* offset=outputFrame+nbColumns*nbRows-nbColumns;
const float* gradOffset= imageGradient+nbColumns*nbRows-nbColumns;
for (int IDcolumn=r.start; IDcolumn!=r.end; ++IDcolumn)
{
register float result=0;
register float *outputPTR=offset+IDcolumn;
register const float *imageGradientPTR=gradOffset+IDcolumn;
for (unsigned int index=0; index<nbRows; ++index)
{
result = *(outputPTR) + *(imageGradientPTR) * result;
*(outputPTR) = filterParam_gain*result;
outputPTR-=nbColumns;
imageGradientPTR-=nbColumns;
}
}
}
};
class Parallel_computeGradient: public cv::ParallelLoopBody
{
private:
float *imageGradient;
const float *luminance;
unsigned int nbColumns, doubleNbColumns, nbRows, nbPixels;
public:
Parallel_computeGradient(const unsigned int nbCols, const unsigned int nbRws, const float *lum, float *imageGrad)
:imageGradient(imageGrad), luminance(lum), nbColumns(nbCols), doubleNbColumns(2*nbCols), nbRows(nbRws), nbPixels(nbRws*nbCols){};
virtual void operator()( const Range& r ) const {
for (int idLine=r.start;idLine!=r.end;++idLine)
{
for (unsigned int idColumn=2;idColumn<nbColumns-2;++idColumn)
{
const unsigned int pixelIndex=idColumn+nbColumns*idLine;
// horizontal and vertical local gradients
const float verticalGrad=fabs(luminance[pixelIndex+nbColumns]-luminance[pixelIndex-nbColumns]);
const float horizontalGrad=fabs(luminance[pixelIndex+1]-luminance[pixelIndex-1]);
// neighborhood horizontal and vertical gradients
const float verticalGrad_p=fabs(luminance[pixelIndex]-luminance[pixelIndex-doubleNbColumns]);
const float horizontalGrad_p=fabs(luminance[pixelIndex]-luminance[pixelIndex-2]);
const float verticalGrad_n=fabs(luminance[pixelIndex+doubleNbColumns]-luminance[pixelIndex]);
const float horizontalGrad_n=fabs(luminance[pixelIndex+2]-luminance[pixelIndex]);
const float horizontalGradient=0.5f*horizontalGrad+0.25f*(horizontalGrad_p+horizontalGrad_n);
const float verticalGradient=0.5f*verticalGrad+0.25f*(verticalGrad_p+verticalGrad_n);
// compare local gradient means and fill the appropriate filtering coefficient value that will be used in the adaptive filters
if (horizontalGradient<verticalGradient)
{
imageGradient[pixelIndex+nbPixels]=0.06f;
imageGradient[pixelIndex]=0.57f;
}
else
{
imageGradient[pixelIndex+nbPixels]=0.57f;
imageGradient[pixelIndex]=0.06f;
}
}
}
}
};
#endif
};
}// end of namespace bioinspired
}// end of namespace cv
#endif /*RETINACOLOR_HPP_*/
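// Usage sketch (illustrative only, not part of the original header): multiplex an RGB
// frame into a Bayer-like mosaic and demultiplex it back. It assumes planar float
// buffers (R plane, then G, then B), as used throughout this module, and is shown as
// a comment since it does not belong to this translation unit:
//
// #include "retinacolor.hpp"
// void demoColorRoundTrip(const std::valarray<float> &rgbPlanar, unsigned int rows, unsigned int cols)
// {
// cv::bioinspired::RetinaColor engine(rows, cols, RETINA_COLOR_BAYER);
// engine.runColorMultiplexing(rgbPlanar); // M*N*3 -> M*N mosaic
// engine.runColorDemultiplexing(engine.getMultiplexedFrame(), true, 255.0); // adaptive filtering enabled
// const std::valarray<float> &rgbOut = engine.getDemultiplexedColorFrame(); // reconstructed planar RGB
// (void)rgbOut;
// }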

View File

@@ -1,316 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2013
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
**
** This class is based on image processing tools of the author and already used within the Retina class (this is the same code as the method retina::applyFastToneMapping, but in an independent class; it is light from a memory requirement point of view). It implements an adaptation of the efficient tone mapping algorithm proposed by David Alleyson, Sabine Susstruck and Laurence Meylan's work, please cite:
** -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
**
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
/*
* retinafasttonemapping.cpp
*
* Created on: May 26, 2013
* Author: Alexandre Benoit
*/
#include "precomp.hpp"
#include "basicretinafilter.hpp"
#include "retinacolor.hpp"
#include <cstdio>
#include <sstream>
#include <valarray>
namespace cv
{
namespace bioinspired
{
/**
* @class RetinaFastToneMappingImpl a wrapper class which allows the tone mapping algorithm of Meylan&al(2007) to be used with OpenCV.
* This algorithm is already implemented in the Retina class (retina::applyFastToneMapping), but using it through this class does not require the whole retina model to be allocated. This allows a light memory footprint, suited to low memory devices (smartphones, etc.).
* As a summary, these are the model properties:
* => 2 stages of local luminance adaptation with a different local neighborhood for each.
* => first stage models the retina photoreceptors local luminance adaptation
* => second stage models the ganglion cells local information adaptation
* => compared to the initial publication, this class uses spatio-temporal low pass filters instead of spatial only filters.
* ====> this can help noise robustness and temporal stability for video sequence use cases.
* for more information, refer to the following papers :
* Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
* Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
* regarding spatio-temporal filter and the bigger retina model :
* Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
*/
class RetinaFastToneMappingImpl : public RetinaFastToneMapping
{
public:
/**
* constructor
* @param imageInput: the size of the images to process
*/
RetinaFastToneMappingImpl(Size imageInput)
{
// basic error check
if (imageInput.height <= 0 || imageInput.width <= 0)
throw cv::Exception(-1, "Bad retina size setup : height and width must be greater than zero", "RetinaFastToneMappingImpl::RetinaFastToneMappingImpl", "retinafasttonemapping.cpp", 0);
const unsigned int nbPixels=imageInput.height*imageInput.width;
// resize buffers
_inputBuffer.resize(nbPixels*3); // buffer supports gray images but also 3 channels color buffers... (larger is better...)
_imageOutput.resize(nbPixels*3);
_temp2.resize(nbPixels);
// allocate the main filter with 2 parameter sets (one for each low pass filter)
_multiuseFilter = makePtr<BasicRetinaFilter>(imageInput.height, imageInput.width, 2);
// allocate the color manager (multiplexer/demultiplexer)
_colorEngine = makePtr<RetinaColor>(imageInput.height, imageInput.width);
// setup filter behaviors with default values
setup();
}
/**
* basic destructor
*/
virtual ~RetinaFastToneMappingImpl(){};
/**
* method that applies a luminance correction (initially High Dynamic Range (HDR) tone mapping) using only the 2 local adaptation stages of the retina parvocellular channel : photoreceptors level and ganglion cells level. Spatio-temporal filtering is applied but limited to temporal smoothing and eventually high frequencies attenuation. This is a lighter method than the one available using the regular retina::run method. It is then faster, but it includes neither complete temporal filtering nor retina spectral whitening; as a consequence, it can have a more limited effect on images with a very high dynamic range. This is an adaptation of the original still image HDR tone mapping algorithm of David Alleyson, Sabine Susstruck and Laurence Meylan's work, please cite:
* -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
@param inputImage the input image to process RGB or gray levels
@param outputToneMappedImage the output tone mapped image
*/
virtual void applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage)
{
// first convert input image to the compatible format :
const bool colorMode = _convertCvMat2ValarrayBuffer(inputImage.getMat(), _inputBuffer);
// process tone mapping
if (colorMode)
{
_runRGBToneMapping(_inputBuffer, _imageOutput, true);
_convertValarrayBuffer2cvMat(_imageOutput, _multiuseFilter->getNBrows(), _multiuseFilter->getNBcolumns(), true, outputToneMappedImage);
}else
{
_runGrayToneMapping(_inputBuffer, _imageOutput);
_convertValarrayBuffer2cvMat(_imageOutput, _multiuseFilter->getNBrows(), _multiuseFilter->getNBcolumns(), false, outputToneMappedImage);
}
}
/**
* setup method that updates tone mapping behaviors by adjusting the local luminance computation areas
* @param photoreceptorsNeighborhoodRadius the first stage local adaptation area
* @param ganglioncellsNeighborhoodRadius the second stage local adaptation area
* @param meanLuminanceModulatorK the factor applied to modulate the meanLuminance information (default is 1, see reference paper)
*/
virtual void setup(const float photoreceptorsNeighborhoodRadius=3.f, const float ganglioncellsNeighborhoodRadius=1.f, const float meanLuminanceModulatorK=1.f)
{
// setup the spatio-temporal properties of each filter
_meanLuminanceModulatorK = meanLuminanceModulatorK;
_multiuseFilter->setV0CompressionParameter(1.f, 255.f, 128.f);
_multiuseFilter->setLPfilterParameters(0.f, 0.f, photoreceptorsNeighborhoodRadius, 1);
_multiuseFilter->setLPfilterParameters(0.f, 0.f, ganglioncellsNeighborhoodRadius, 2);
}
private:
// a filter able to perform local adaptation and low pass spatio-temporal filtering
cv::Ptr <BasicRetinaFilter> _multiuseFilter;
cv::Ptr <RetinaColor> _colorEngine;
//!< buffer used to convert input cv::Mat to internal retina buffers format (valarrays)
std::valarray<float> _inputBuffer;
std::valarray<float> _imageOutput;
std::valarray<float> _temp2;
float _meanLuminanceModulatorK;
void _convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, OutputArray outBuffer)
{
// fill output buffer with the valarray buffer
const float *valarrayPTR=get_data(grayMatrixToConvert);
if (!colorMode)
{
outBuffer.create(cv::Size(nbColumns, nbRows), CV_8U);
Mat outMat = outBuffer.getMat();
for (unsigned int i=0;i<nbRows;++i)
{
for (unsigned int j=0;j<nbColumns;++j)
{
cv::Point2d pixel(j,i);
outMat.at<unsigned char>(pixel)=(unsigned char)*(valarrayPTR++);
}
}
}else
{
const unsigned int nbPixels=nbColumns*nbRows;
const unsigned int doubleNBpixels=nbColumns*nbRows*2;
outBuffer.create(cv::Size(nbColumns, nbRows), CV_8UC3);
Mat outMat = outBuffer.getMat();
for (unsigned int i=0;i<nbRows;++i)
{
for (unsigned int j=0;j<nbColumns;++j,++valarrayPTR)
{
cv::Point2d pixel(j,i);
cv::Vec3b pixelValues;
pixelValues[2]=(unsigned char)*(valarrayPTR);
pixelValues[1]=(unsigned char)*(valarrayPTR+nbPixels);
pixelValues[0]=(unsigned char)*(valarrayPTR+doubleNBpixels);
outMat.at<cv::Vec3b>(pixel)=pixelValues;
}
}
}
}
bool _convertCvMat2ValarrayBuffer(InputArray inputMat, std::valarray<float> &outputValarrayMatrix)
{
const Mat inputMatToConvert=inputMat.getMat();
// first check input consistency
if (inputMatToConvert.empty())
throw cv::Exception(-1, "RetinaImpl cannot be applied, input buffer is empty", "RetinaImpl::run", "RetinaImpl.h", 0);
// retrieve color mode from image input
int imageNumberOfChannels = inputMatToConvert.channels();
// convert to float AND fill the valarray buffer
typedef float T; // define here the target pixel format, here, float
const int dsttype = DataType<T>::depth; // output buffer is float format
const unsigned int nbPixels=inputMat.getMat().rows*inputMat.getMat().cols;
const unsigned int doubleNBpixels=inputMat.getMat().rows*inputMat.getMat().cols*2;
if(imageNumberOfChannels==4)
{
// create a cv::Mat table (for RGBA planes)
cv::Mat planes[4] =
{
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[doubleNBpixels]),
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[nbPixels]),
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[0])
};
planes[3] = cv::Mat(inputMatToConvert.size(), dsttype); // the last channel (alpha) does not point into the valarray (not useful in our case)
// split color cv::Mat in 4 planes... it fills the valarray directly
cv::split(Mat_<Vec<T, 4> >(inputMatToConvert), planes);
}
else if (imageNumberOfChannels==3)
{
// create a cv::Mat table (for RGB planes)
cv::Mat planes[] =
{
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[doubleNBpixels]),
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[nbPixels]),
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[0])
};
// split color cv::Mat in 3 planes... it fills the valarray directly
cv::split(cv::Mat_<Vec<T, 3> >(inputMatToConvert), planes);
}
else if(imageNumberOfChannels==1)
{
// create a cv::Mat header for the valarray
cv::Mat dst(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[0]);
inputMatToConvert.convertTo(dst, dsttype);
}
else
CV_Error(Error::StsUnsupportedFormat, "input image must be single channel (gray levels), bgr format (color) or bgra (color with transparency, which won't be considered)");
return imageNumberOfChannels>1; // return bool : false for gray level image processing, true for color mode
}
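// Illustrative note (not part of the original file): the planar split trick above has a
// symmetric counterpart for the output direction; wrapping the three valarray layers in
// cv::Mat headers and calling cv::merge would assemble the interleaved image in one call
// (bluePlane/greenPlane/redPlane/outputMat are hypothetical names):
//
// cv::Mat layers[] = { bluePlane, greenPlane, redPlane }; // headers over the valarray memory
// cv::merge(layers, 3, outputMat);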
// run the initialized retina filter in order to perform gray image tone mapping; after this call, all retina outputs are updated
void _runGrayToneMapping(const std::valarray<float> &grayImageInput, std::valarray<float> &grayImageOutput)
{
// apply tone mapping on the multiplexed image
// -> photoreceptors local adaptation (large area adaptation)
_multiuseFilter->runFilter_LPfilter(grayImageInput, grayImageOutput, 0); // compute low pass filtering modeling the horizontal cells filtering to access local luminance
_multiuseFilter->setV0CompressionParameterToneMapping(1.f, grayImageOutput.max(), _meanLuminanceModulatorK*grayImageOutput.sum()/(float)_multiuseFilter->getNBpixels());
_multiuseFilter->runFilter_LocalAdapdation(grayImageInput, grayImageOutput, _temp2); // adapt contrast to local luminance
// -> ganglion cells local adaptation (short area adaptation)
_multiuseFilter->runFilter_LPfilter(_temp2, grayImageOutput, 1); // compute low pass filtering with high cut frequency (removes spatio-temporal noise)
_multiuseFilter->setV0CompressionParameterToneMapping(1.f, _temp2.max(), _meanLuminanceModulatorK*grayImageOutput.sum()/(float)_multiuseFilter->getNBpixels());
_multiuseFilter->runFilter_LocalAdapdation(_temp2, grayImageOutput, grayImageOutput); // adapt contrast to local luminance
}
// run the initialized retina filter in order to perform color tone mapping; after this call, all retina outputs are updated
void _runRGBToneMapping(const std::valarray<float> &RGBimageInput, std::valarray<float> &RGBimageOutput, const bool useAdaptiveFiltering)
{
// multiplex the image with the color sampling method specified in the constructor
_colorEngine->runColorMultiplexing(RGBimageInput);
// apply tone mapping on the multiplexed image
_runGrayToneMapping(_colorEngine->getMultiplexedFrame(), RGBimageOutput);
// demultiplex the tone mapped image
_colorEngine->runColorDemultiplexing(RGBimageOutput, useAdaptiveFiltering, _multiuseFilter->getMaxInputValue());//_ColorEngine->getMultiplexedFrame());//_ParvoRetinaFilter->getPhotoreceptorsLPfilteringOutput());
// rescaling result between 0 and 255
_colorEngine->normalizeRGBOutput_0_maxOutputValue(255.0);
// return the result
RGBimageOutput=_colorEngine->getDemultiplexedColorFrame();
}
};
CV_EXPORTS Ptr<RetinaFastToneMapping> createRetinaFastToneMapping(Size inputSize)
{
return makePtr<RetinaFastToneMappingImpl>(inputSize);
}
}// end of namespace bioinspired
}// end of namespace cv
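// Usage sketch (illustrative only, not part of the original file): tone map an image
// with the light wrapper above; the include path is an assumption of this sketch:
//
// #include <opencv2/bioinspired.hpp>
// void demoFastToneMapping(const cv::Mat &input, cv::Mat &toneMapped)
// {
// cv::Ptr<cv::bioinspired::RetinaFastToneMapping> mapper = cv::bioinspired::createRetinaFastToneMapping(input.size());
// mapper->setup(3.f, 1.f, 1.f); // photoreceptors / ganglion cells neighborhood radii, mean luminance gain
// mapper->applyFastToneMapping(input, toneMapped); // works on gray or BGR input
// }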

View File

@@ -1,526 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract features from still images & image sequences, from contour details to spatio-temporal motion features, etc., for high-level visual scene analysis. Also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from whom code has been redrawn by the author :
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene's color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling, which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from discussions with Jeanny.
** ====> more information in Jeanny Herault's book cited above.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#include "precomp.hpp"
#include "retinafilter.hpp"
// @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
#include <iostream>
#include <cmath>
namespace cv
{
namespace bioinspired
{
// standard constructor without any log sampling of the input frame
RetinaFilter::RetinaFilter(const unsigned int sizeRows, const unsigned int sizeColumns, const bool colorMode, const int samplingMethod, const bool useRetinaLogSampling, const double reductionFactor, const double samplingStrenght)
:
_retinaParvoMagnoMappedFrame(0),
_retinaParvoMagnoMapCoefTable(0),
_photoreceptorsPrefilter((1-(int)useRetinaLogSampling)*sizeRows+useRetinaLogSampling*ImageLogPolProjection::predictOutputSize(sizeRows, reductionFactor), (1-(int)useRetinaLogSampling)*sizeColumns+useRetinaLogSampling*ImageLogPolProjection::predictOutputSize(sizeColumns, reductionFactor), 4),
_ParvoRetinaFilter((1-(int)useRetinaLogSampling)*sizeRows+useRetinaLogSampling*ImageLogPolProjection::predictOutputSize(sizeRows, reductionFactor), (1-(int)useRetinaLogSampling)*sizeColumns+useRetinaLogSampling*ImageLogPolProjection::predictOutputSize(sizeColumns, reductionFactor)),
_MagnoRetinaFilter((1-(int)useRetinaLogSampling)*sizeRows+useRetinaLogSampling*ImageLogPolProjection::predictOutputSize(sizeRows, reductionFactor), (1-(int)useRetinaLogSampling)*sizeColumns+useRetinaLogSampling*ImageLogPolProjection::predictOutputSize(sizeColumns, reductionFactor)),
_colorEngine((1-(int)useRetinaLogSampling)*sizeRows+useRetinaLogSampling*ImageLogPolProjection::predictOutputSize(sizeRows, reductionFactor), (1-(int)useRetinaLogSampling)*sizeColumns+useRetinaLogSampling*ImageLogPolProjection::predictOutputSize(sizeColumns, reductionFactor), samplingMethod),
// configure retina photoreceptors log sampling... if necessary
_photoreceptorsLogSampling(NULL)
{
#ifdef RETINADEBUG
std::cout<<"RetinaFilter::size( "<<_photoreceptorsPrefilter.getNBrows()<<", "<<_photoreceptorsPrefilter.getNBcolumns()<<")"<<" =? "<<_photoreceptorsPrefilter.getNBpixels()<<std::endl;
#endif
if (useRetinaLogSampling)
{
_photoreceptorsLogSampling = new ImageLogPolProjection(sizeRows, sizeColumns, ImageLogPolProjection::RETINALOGPROJECTION, true);
if (!_photoreceptorsLogSampling->initProjection(reductionFactor, samplingStrenght))
{
std::cerr<<"RetinaFilter::Problem initializing photoreceptors log sampling, could not setup retina filter"<<std::endl;
delete _photoreceptorsLogSampling;
_photoreceptorsLogSampling=NULL;
}
else
{
#ifdef RETINADEBUG
std::cout<<"_photoreceptorsLogSampling::size( "<<_photoreceptorsLogSampling->getNBrows()<<", "<<_photoreceptorsLogSampling->getNBcolumns()<<")"<<" =? "<<_photoreceptorsLogSampling->getNBpixels()<<std::endl;
#endif
}
}
// set default processing activities
_useParvoOutput=true;
_useMagnoOutput=true;
_useColorMode=colorMode;
// create hybrid output and related coefficient table
_createHybridTable();
// set default parameters
setGlobalParameters();
// stability controls values init
_setInitPeriodCount();
_globalTemporalConstant=25;
// reset all buffers
clearAllBuffers();
// std::cout<<"RetinaFilter::size( "<<this->getNBrows()<<", "<<this->getNBcolumns()<<")"<<_filterOutput.size()<<" =? "<<_filterOutput.getNBpixels()<<std::endl;
}
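// Illustrative note (not part of the original file): the size expressions in the
// initializer list above select between the raw input size and the log-sampled size
// without a conditional, since (1-b)*size + b*predictOutputSize(size, k) equals size
// when b==0 (no log sampling) and the predicted log-polar output size when b==1.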
// destructor
RetinaFilter::~RetinaFilter()
{
if (_photoreceptorsLogSampling!=NULL)
delete _photoreceptorsLogSampling;
}
// function that clears all buffers of the object
void RetinaFilter::clearAllBuffers()
{
_photoreceptorsPrefilter.clearAllBuffers();
_ParvoRetinaFilter.clearAllBuffers();
_MagnoRetinaFilter.clearAllBuffers();
_colorEngine.clearAllBuffers();
if (_photoreceptorsLogSampling!=NULL)
_photoreceptorsLogSampling->clearAllBuffers();
// stability controls value init
_setInitPeriodCount();
}
/**
* resize retina filter object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void RetinaFilter::resize(const unsigned int NBrows, const unsigned int NBcolumns)
{
unsigned int rows=NBrows, cols=NBcolumns;
// resize the optional member and adjust the other modules' size if required
if (_photoreceptorsLogSampling)
{
_photoreceptorsLogSampling->resize(NBrows, NBcolumns);
rows=_photoreceptorsLogSampling->getOutputNBrows();
cols=_photoreceptorsLogSampling->getOutputNBcolumns();
}
_photoreceptorsPrefilter.resize(rows, cols);
_ParvoRetinaFilter.resize(rows, cols);
_MagnoRetinaFilter.resize(rows, cols);
_colorEngine.resize(rows, cols);
// reset parvo magno mapping
_createHybridTable();
// clean buffers
clearAllBuffers();
}
// stability controls value init
void RetinaFilter::_setInitPeriodCount()
{
// find out the maximum temporal constant value and apply a security factor
// deliberately overestimated value (obviously too long) but appropriate for simple use
_globalTemporalConstant=(unsigned int)(_ParvoRetinaFilter.getPhotoreceptorsTemporalConstant()+_ParvoRetinaFilter.getHcellsTemporalConstant()+_MagnoRetinaFilter.getTemporalConstant());
// reset frame counter
_ellapsedFramesSinceLastReset=0;
}
void RetinaFilter::_createHybridTable()
{
// create hybrid output and related coefficient table
_retinaParvoMagnoMappedFrame.resize(_photoreceptorsPrefilter.getNBpixels());
_retinaParvoMagnoMapCoefTable.resize(_photoreceptorsPrefilter.getNBpixels()*2);
// fill _hybridParvoMagnoCoefTable
int i, j, halfRows=_photoreceptorsPrefilter.getNBrows()/2, halfColumns=_photoreceptorsPrefilter.getNBcolumns()/2;
float *hybridParvoMagnoCoefTablePTR= &_retinaParvoMagnoMapCoefTable[0];
float minDistance=MIN(halfRows, halfColumns)*0.7f;
for (i=0;i<(int)_photoreceptorsPrefilter.getNBrows();++i)
{
for (j=0;j<(int)_photoreceptorsPrefilter.getNBcolumns();++j)
{
float distanceToCenter=std::sqrt(((float)(i-halfRows)*(i-halfRows)+(j-halfColumns)*(j-halfColumns)));
if (distanceToCenter<minDistance)
{
float a=*(hybridParvoMagnoCoefTablePTR++)=0.5f+0.5f*(float)cos(CV_PI*distanceToCenter/minDistance);
*(hybridParvoMagnoCoefTablePTR++)=1.f-a;
}else
{
*(hybridParvoMagnoCoefTablePTR++)=0.f;
*(hybridParvoMagnoCoefTablePTR++)=1.f;
}
}
}
}
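// Illustrative sketch (not part of the original file): each pixel pair (a, 1-a) written
// above follows a raised cosine of the distance to the image centre; pixels closer than
// minDistance blend towards the parvo (detail) channel, the others are pure magno (motion):
static inline float parvoBlendWeightSketch(const float distanceToCenter, const float minDistance)
{
if (distanceToCenter>=minDistance)
return 0.f; // periphery: fully magno
// fovea: raised cosine falloff from 1 at the centre to 0 at minDistance
return 0.5f+0.5f*(float)cos(CV_PI*distanceToCenter/minDistance);
}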
// setup parameters function and global data filling
void RetinaFilter::setGlobalParameters(const float OPLspatialResponse1, const float OPLtemporalresponse1, const float OPLassymetryGain, const float OPLspatialResponse2, const float OPLtemporalresponse2, const float LPfilterSpatialResponse, const float LPfilterGain, const float LPfilterTemporalresponse, const float MovingContoursExtractorCoefficient, const bool normalizeParvoOutput_0_maxOutputValue, const bool normalizeMagnoOutput_0_maxOutputValue, const float maxOutputValue, const float maxInputValue, const float meanValue)
{
_normalizeParvoOutput_0_maxOutputValue=normalizeParvoOutput_0_maxOutputValue;
_normalizeMagnoOutput_0_maxOutputValue=normalizeMagnoOutput_0_maxOutputValue;
_maxOutputValue=maxOutputValue;
_photoreceptorsPrefilter.setV0CompressionParameter(0.9f, maxInputValue, meanValue);
_photoreceptorsPrefilter.setLPfilterParameters(10, 0, 1.5, 1); // keeps a low pass filter with high cut frequency in memory (useful for the tone mapping function)
_photoreceptorsPrefilter.setLPfilterParameters(10, 0, 3.0, 2); // keeps a low pass filter with low cut frequency in memory (useful for the tone mapping function)
_photoreceptorsPrefilter.setLPfilterParameters(0, 0, 10, 3); // keeps a low pass filter with low cut frequency in memory (useful for the tone mapping function)
//this->setV0CompressionParameter(0.6, maxInputValue, meanValue); // keeps log compression sensitivity parameter (usefull for the tone mapping function)
_ParvoRetinaFilter.setOPLandParvoFiltersParameters(0,OPLtemporalresponse1, OPLspatialResponse1, OPLassymetryGain, OPLtemporalresponse2, OPLspatialResponse2);
_ParvoRetinaFilter.setV0CompressionParameter(0.9f, maxInputValue, meanValue);
_MagnoRetinaFilter.setCoefficientsTable(LPfilterGain, LPfilterTemporalresponse, LPfilterSpatialResponse, MovingContoursExtractorCoefficient, 0, 2.0f*LPfilterSpatialResponse);
_MagnoRetinaFilter.setV0CompressionParameter(0.7f, maxInputValue, meanValue);
// stability controls value init
_setInitPeriodCount();
}
bool RetinaFilter::checkInput(const std::valarray<float> &input, const bool)
{
BasicRetinaFilter *inputTarget=&_photoreceptorsPrefilter;
if (_photoreceptorsLogSampling)
inputTarget=_photoreceptorsLogSampling;
bool test=input.size()==inputTarget->getNBpixels() || input.size()==(inputTarget->getNBpixels()*3) ;
if (!test)
{
std::cerr<<"RetinaFilter::checkInput: input buffer does not match retina buffer size, conversion aborted"<<std::endl;
std::cout<<"RetinaFilter::checkInput: input size="<<input.size()<<" / "<<"retina size="<<inputTarget->getNBpixels()<<std::endl;
return false;
}
return true;
}
// main function that runs the filter for a given input frame
bool RetinaFilter::runFilter(const std::valarray<float> &imageInput, const bool useAdaptiveFiltering, const bool processRetinaParvoMagnoMapping, const bool useColorMode, const bool inputIsColorMultiplexed)
{
// preliminary check
bool processSuccess=true;
if (!checkInput(imageInput, useColorMode))
return false;
// run the color multiplexing if needed and compute each sub-filter of the retina:
// -> local adaptation
// -> contours OPL extraction
// -> moving contours extraction
// stability controls value update
++_ellapsedFramesSinceLastReset;
_useColorMode=useColorMode;
/* pointer to the appropriate input data;
* by default (gray level mode) the input is processed directly,
* if color or another mode must be considered, specific preprocessing is applied
*/
const std::valarray<float> *selectedPhotoreceptorsLocalAdaptationInput= &imageInput;
const std::valarray<float> *selectedPhotoreceptorsColorInput=&imageInput;
//********** Following is input data specific photoreceptors processing
if (_photoreceptorsLogSampling)
{
_photoreceptorsLogSampling->runProjection(imageInput, useColorMode);
selectedPhotoreceptorsColorInput=selectedPhotoreceptorsLocalAdaptationInput=&(_photoreceptorsLogSampling->getSampledFrame());
}
if (useColorMode&& (!inputIsColorMultiplexed)) // not multiplexed color input case
{
_colorEngine.runColorMultiplexing(*selectedPhotoreceptorsColorInput);
selectedPhotoreceptorsLocalAdaptationInput=&(_colorEngine.getMultiplexedFrame());
}
//********** Following is generic Retina processing
// photoreceptors local adaptation
_photoreceptorsPrefilter.runFilter_LocalAdapdation(*selectedPhotoreceptorsLocalAdaptationInput, _ParvoRetinaFilter.getHorizontalCellsOutput());
// safety pixel values checks
//_photoreceptorsPrefilter.normalizeGrayOutput_0_maxOutputValue(_maxOutputValue);
// run parvo filter
_ParvoRetinaFilter.runFilter(_photoreceptorsPrefilter.getOutput(), _useParvoOutput);
if (_useParvoOutput)
{
_ParvoRetinaFilter.normalizeGrayOutputCentredSigmoide(); // models the saturation of the cells, useful for visualization of the ON-OFF Parvo output; bipolar cells outputs do not change !!!
_ParvoRetinaFilter.centerReductImageLuminance(); // best for further spectrum analysis
if (_normalizeParvoOutput_0_maxOutputValue)
_ParvoRetinaFilter.normalizeGrayOutput_0_maxOutputValue(_maxOutputValue);
}
if (_useParvoOutput&&_useMagnoOutput)
{
_MagnoRetinaFilter.runFilter(_ParvoRetinaFilter.getBipolarCellsON(), _ParvoRetinaFilter.getBipolarCellsOFF());
if (_normalizeMagnoOutput_0_maxOutputValue)
{
_MagnoRetinaFilter.normalizeGrayOutput_0_maxOutputValue(_maxOutputValue);
}
_MagnoRetinaFilter.normalizeGrayOutputNearZeroCentreredSigmoide();
}
if (_useParvoOutput&&_useMagnoOutput&&processRetinaParvoMagnoMapping)
{
_processRetinaParvoMagnoMapping();
if (_useColorMode)
_colorEngine.runColorDemultiplexing(_retinaParvoMagnoMappedFrame, useAdaptiveFiltering, _maxOutputValue);//_ColorEngine->getMultiplexedFrame());//_ParvoRetinaFilter->getPhotoreceptorsLPfilteringOutput());
return processSuccess;
}
if (_useParvoOutput&&_useColorMode)
{
_colorEngine.runColorDemultiplexing(_ParvoRetinaFilter.getOutput(), useAdaptiveFiltering, _maxOutputValue);//_ColorEngine->getMultiplexedFrame());//_ParvoRetinaFilter->getPhotoreceptorsLPfilteringOutput());
// compute A Cr1 Cr2 to LMS color space conversion
//if (true)
// _applyImageColorSpaceConversion(_ColorEngine->getChrominance(), lmsTempBuffer.Buffer(), _LMStoACr1Cr2);
}
return processSuccess;
}
const std::valarray<float> &RetinaFilter::getContours()
{
if (_useColorMode)
return _colorEngine.getLuminance();
else
return _ParvoRetinaFilter.getOutput();
}
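// Usage sketch (illustrative only, not part of the original file): one way to drive the
// filter internally, using planar float frames; the argument values are example choices
// and all parameters are passed explicitly:
//
// RetinaFilter retina(rows, cols, true /*colorMode*/, RETINA_COLOR_BAYER, false /*no log sampling*/, 1.0, 10.0);
// retina.runFilter(rgbPlanarFrame, true /*adaptive demultiplexing*/, false /*no parvo/magno mapping*/, true /*color*/, false /*not multiplexed yet*/);
// const std::valarray<float> &contours = retina.getContours(); // parvo (detail) channel output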
// run the initialized retina filter in order to perform gray image tone mapping; after this call, all retina outputs are updated
void RetinaFilter::runGrayToneMapping(const std::valarray<float> &grayImageInput, std::valarray<float> &grayImageOutput, const float PhotoreceptorsCompression, const float ganglionCellsCompression)
{
// preliminary check
if (!checkInput(grayImageInput, false))
return;
this->_runGrayToneMapping(grayImageInput, grayImageOutput, PhotoreceptorsCompression, ganglionCellsCompression);
}
// run the initialized retina filter in order to perform gray image tone mapping; after this call, all retina outputs are updated
void RetinaFilter::_runGrayToneMapping(const std::valarray<float> &grayImageInput, std::valarray<float> &grayImageOutput, const float PhotoreceptorsCompression, const float ganglionCellsCompression)
{
// stability controls value update
++_ellapsedFramesSinceLastReset;
std::valarray<float> temp2(grayImageInput.size());
// apply tone mapping on the multiplexed image
// -> photoreceptors local adaptation (large area adaptation)
_photoreceptorsPrefilter.runFilter_LPfilter(grayImageInput, grayImageOutput, 2); // compute low pass filtering modeling the horizontal cells filtering to access local luminance
_photoreceptorsPrefilter.setV0CompressionParameterToneMapping(1.f-PhotoreceptorsCompression, grayImageOutput.max(), 1.f*grayImageOutput.sum()/(float)_photoreceptorsPrefilter.getNBpixels());
_photoreceptorsPrefilter.runFilter_LocalAdapdation(grayImageInput, grayImageOutput, temp2); // adapt contrast to local luminance
// -> ganglion cells local adaptation (short area adaptation)
_photoreceptorsPrefilter.runFilter_LPfilter(temp2, grayImageOutput, 1); // compute low pass filtering (high cut frequency, removes spatio-temporal noise)
_photoreceptorsPrefilter.setV0CompressionParameterToneMapping(1.f-ganglionCellsCompression, temp2.max(), 1.f*temp2.sum()/(float)_photoreceptorsPrefilter.getNBpixels());
_photoreceptorsPrefilter.runFilter_LocalAdapdation(temp2, grayImageOutput, grayImageOutput); // adapt contrast to local luminance
}
// run the initialized retina filter in order to perform color tone mapping; after this call all retina outputs are updated
void RetinaFilter::runRGBToneMapping(const std::valarray<float> &RGBimageInput, std::valarray<float> &RGBimageOutput, const bool useAdaptiveFiltering, const float PhotoreceptorsCompression, const float ganglionCellsCompression)
{
// preliminary check
if (!checkInput(RGBimageInput, true))
return;
// multiplex the image with the color sampling method specified in the constructor
_colorEngine.runColorMultiplexing(RGBimageInput);
// apply tone mapping on the multiplexed image
_runGrayToneMapping(_colorEngine.getMultiplexedFrame(), RGBimageOutput, PhotoreceptorsCompression, ganglionCellsCompression);
// demultiplex the tone mapped image
_colorEngine.runColorDemultiplexing(RGBimageOutput, useAdaptiveFiltering, _photoreceptorsPrefilter.getMaxInputValue());//_ColorEngine->getMultiplexedFrame());//_ParvoRetinaFilter->getPhotoreceptorsLPfilteringOutput());
// rescaling result between 0 and 255
_colorEngine.normalizeRGBOutput_0_maxOutputValue(255.0);
// return the result
RGBimageOutput=_colorEngine.getDemultiplexedColorFrame();
}
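// Usage sketch (hypothetical buffer names; assumes an RGB frame flattened into a
// std::valarray<float> with the layout expected by the color engine):
//   RetinaFilter retina(rows, columns, /*colorMode=*/true);
//   retina.runRGBToneMapping(rgbInput, rgbOutput, /*useAdaptiveFiltering=*/true, 0.6f, 0.6f);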
void RetinaFilter::runLMSToneMapping(const std::valarray<float> &, std::valarray<float> &, const bool, const float, const float)
{
std::cerr<<"not working, sorry"<<std::endl;
/* // preliminary check
const std::valarray<float> &bufferInput=checkInput(LMSimageInput, true);
if (!bufferInput)
return NULL;
if (!_useColorMode)
std::cerr<<"RetinaFilter::Can not call tone mapping oeration if the retina filter was created for gray scale images"<<std::endl;
// create a temporary buffer of size nrows, Mcolumns, 3 layers
std::valarray<float> lmsTempBuffer(LMSimageInput);
std::cout<<"RetinaFilter::--->min LMS value="<<lmsTempBuffer.min()<<std::endl;
// setup local adaptation parameter at the photoreceptors level
setV0CompressionParameter(PhotoreceptorsCompression, _maxInputValue);
// get the local energy of each color channel
// ->L
_spatiotemporalLPfilter(LMSimageInput, _filterOutput, 1);
setV0CompressionParameterToneMapping(PhotoreceptorsCompression, _maxInputValue, this->sum()/_NBpixels);
_localLuminanceAdaptation(LMSimageInput, _filterOutput, lmsTempBuffer.Buffer());
// ->M
_spatiotemporalLPfilter(LMSimageInput+_NBpixels, _filterOutput, 1);
setV0CompressionParameterToneMapping(PhotoreceptorsCompression, _maxInputValue, this->sum()/_NBpixels);
_localLuminanceAdaptation(LMSimageInput+_NBpixels, _filterOutput, lmsTempBuffer.Buffer()+_NBpixels);
// ->S
_spatiotemporalLPfilter(LMSimageInput+_NBpixels*2, _filterOutput, 1);
setV0CompressionParameterToneMapping(PhotoreceptorsCompression, _maxInputValue, this->sum()/_NBpixels);
_localLuminanceAdaptation(LMSimageInput+_NBpixels*2, _filterOutput, lmsTempBuffer.Buffer()+_NBpixels*2);
// eliminate negative values
for (unsigned int i=0;i<lmsTempBuffer.size();++i)
if (lmsTempBuffer.Buffer()[i]<0)
lmsTempBuffer.Buffer()[i]=0;
std::cout<<"RetinaFilter::->min LMS value="<<lmsTempBuffer.min()<<std::endl;
// compute LMS to A Cr1 Cr2 color space conversion
_applyImageColorSpaceConversion(lmsTempBuffer.Buffer(), lmsTempBuffer.Buffer(), _LMStoACr1Cr2);
TemplateBuffer <float> acr1cr2TempBuffer(_NBrows, _NBcolumns, 3);
memcpy(acr1cr2TempBuffer.Buffer(), lmsTempBuffer.Buffer(), sizeof(float)*_NBpixels*3);
// compute A Cr1 Cr2 to LMS color space conversion
_applyImageColorSpaceConversion(acr1cr2TempBuffer.Buffer(), lmsTempBuffer.Buffer(), _ACr1Cr2toLMS);
// eliminate negative values
for (unsigned int i=0;i<lmsTempBuffer.size();++i)
if (lmsTempBuffer.Buffer()[i]<0)
lmsTempBuffer.Buffer()[i]=0;
// rewrite output to the appropriate buffer
_colorEngine->setDemultiplexedColorFrame(lmsTempBuffer.Buffer());
*/
}
// return image with center Parvo and peripheral Magno channels
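// per pixel, the mapping computes hybrid(i) = parvo(i)*w(2*i) + magno(i)*w(2*i+1),
// where the interleaved weights w come from _retinaParvoMagnoMapCoefTable
// (foveal Parvo weight first, parafoveal Magno weight second)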
void RetinaFilter::_processRetinaParvoMagnoMapping()
{
register float *hybridParvoMagnoPTR= &_retinaParvoMagnoMappedFrame[0];
register const float *parvoOutputPTR= get_data(_ParvoRetinaFilter.getOutput());
register const float *magnoXOutputPTR= get_data(_MagnoRetinaFilter.getOutput());
register float *hybridParvoMagnoCoefTablePTR= &_retinaParvoMagnoMapCoefTable[0];
for (unsigned int i=0 ; i<_photoreceptorsPrefilter.getNBpixels() ; ++i, hybridParvoMagnoCoefTablePTR+=2)
{
float hybridValue=*(parvoOutputPTR++)**(hybridParvoMagnoCoefTablePTR)+*(magnoXOutputPTR++)**(hybridParvoMagnoCoefTablePTR+1);
*(hybridParvoMagnoPTR++)=hybridValue;
}
TemplateBuffer<float>::normalizeGrayOutput_0_maxOutputValue(&_retinaParvoMagnoMappedFrame[0], _photoreceptorsPrefilter.getNBpixels());
}
bool RetinaFilter::getParvoFoveaResponse(std::valarray<float> &parvoFovealResponse)
{
if (!_useParvoOutput)
return false;
if (parvoFovealResponse.size() != _ParvoRetinaFilter.getNBpixels())
return false;
register const float *parvoOutputPTR= get_data(_ParvoRetinaFilter.getOutput());
register float *fovealParvoResponsePTR= &parvoFovealResponse[0];
register float *hybridParvoMagnoCoefTablePTR= &_retinaParvoMagnoMapCoefTable[0];
for (unsigned int i=0 ; i<_photoreceptorsPrefilter.getNBpixels() ; ++i, hybridParvoMagnoCoefTablePTR+=2)
{
*(fovealParvoResponsePTR++)=*(parvoOutputPTR++)**(hybridParvoMagnoCoefTablePTR);
}
return true;
}
// method to retrieve the parafoveal magnocellular pathway response (no energy motion in fovea)
bool RetinaFilter::getMagnoParaFoveaResponse(std::valarray<float> &magnoParafovealResponse)
{
if (!_useMagnoOutput)
return false;
if (magnoParafovealResponse.size() != _MagnoRetinaFilter.getNBpixels())
return false;
register const float *magnoXOutputPTR= get_data(_MagnoRetinaFilter.getOutput());
register float *parafovealMagnoResponsePTR=&magnoParafovealResponse[0];
register float *hybridParvoMagnoCoefTablePTR=&_retinaParvoMagnoMapCoefTable[0]+1;
for (unsigned int i=0 ; i<_photoreceptorsPrefilter.getNBpixels() ; ++i, hybridParvoMagnoCoefTablePTR+=2)
{
*(parafovealMagnoResponsePTR++)=*(magnoXOutputPTR++)**(hybridParvoMagnoCoefTablePTR);
}
return true;
}
}// end of namespace bioinspired
}// end of namespace cv

View File

@@ -1,548 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author&collaborators for computed vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract still images & image sequences features, from contours details to motion spatio-temporal features, etc. for high level visual scene analysis. Also contribute to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** Theses algorithm have been developped by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work have been carried out thanks to Jeanny Herault who's research and great discussions are the basis of all this work, please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of phd/research collegues from which code has been redrawn by the author :
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more informations in the above cited Jeanny Heraults's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
/**
* @class RetinaFilter
* @brief class which describes the retina model developed at the LIS/GIPSA-LAB www.gipsa-lab.inpg.fr:
* -> performs contours and moving contours extraction with powerful local data enhancement, as done at the retina level
* Based on Alexandre BENOIT thesis: "Le systeme visuel humain au secours de la vision par ordinateur"
*
* => various optimisations and enhancements added after 2007 such as tone mapping capabilities, see reference paper cited in the licence and :
* Benoit A.,Alleysson D., Herault J., Le Callet P. (2009), "Spatio-Temporal Tone Mapping Operator based on a Retina model", Computational Color Imaging Workshop (CCIW09),pp 12-22, Saint Etienne, France
*
* TYPICAL USE:
*
* // create object at a specified picture size
* RetinaFilter *retina;
* retina = new RetinaFilter(frameSizeRows, frameSizeColumns, RGBmode);
*
* // init gain, spatial and temporal parameters (here left at their default values):
* retina->setGlobalParameters();
*
* // during program execution, call the filter for local luminance correction, contours extraction and moving contours extraction from an input picture called "FrameBuffer":
* retina->runFilter(FrameBuffer);
*
* // get the different output frames, check in the class description below for more outputs:
* const std::valarray<float> correctedLuminance=retina->getLocalAdaptation();
* const std::valarray<float> contours=retina->getContours();
* const std::valarray<float> movingContours=retina->getMovingContours();
*
* // at the end of the program, destroy object:
* delete retina;
*
* @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC / Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
* Creation date 2007
*/
#ifndef RETINACLASSES_H_
#define RETINACLASSES_H_
#include "basicretinafilter.hpp"
#include "parvoretinafilter.hpp"
#include "magnoretinafilter.hpp"
// optional includes (depending on the related publications)
#include "imagelogpolprojection.hpp"
#include "retinacolor.hpp"
//#define __RETINADEBUG // define RETINADEBUG to display debug data
namespace cv
{
namespace bioinspired
{
// retina class that process the 3 outputs of the retina filtering stages
class RetinaFilter//: public BasicRetinaFilter
{
public:
/**
* constructor of the retina filter model with log sampling of the input frame (models the photoreceptors log sampling (central high resolution fovea and lower precision borders))
* @param sizeRows: number of rows of the input image
* @param sizeColumns: number of columns of the input image
* @param colorMode: specifies if the retina works with color (true) or stays in grayscale processing (false), can be adjusted online by the use of setColorMode method
* @param samplingMethod: specifies which kind of color sampling will be used
* @param useRetinaLogSampling: activate retina log sampling, if true, the 2 following parameters can be used
* @param reductionFactor: only useful if param useRetinaLogSampling=true, specifies the reduction factor of the output frame (as the center (fovea) is high resolution and corners can be underscaled, the output can be reduced without loss of precision)
* @param samplingStrenght: only useful if param useRetinaLogSampling=true, specifies the strength of the log scale that is applied
*/
RetinaFilter(const unsigned int sizeRows, const unsigned int sizeColumns, const bool colorMode=false, const int samplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
/**
* standard destructor
*/
~RetinaFilter();
/**
* function that clears all buffers of the object
*/
void clearAllBuffers();
/**
* resize retina parvo filter object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
/**
* Input buffer checker: checks whether the passed image buffer matches the retina filter expectations
* @param input: the input image buffer
* @param colorMode: specify whether the retina should consider the input as color or not
* @return true if compatible, false otherwise
*/
bool checkInput(const std::valarray<float> &input, const bool colorMode);
/**
* run the initialized retina filter; after this call all retina outputs are updated
* @param imageInput: image input buffer, can be grayscale or RGB image respecting the size specified at the constructor level
* @param useAdaptiveFiltering: set true if you want to use adaptive color demultiplexing (solves some color artifact problems), see RetinaColor for citation references
* @param processRetinaParvoMagnoMapping: tells whether the main outputs take into account the mapping of the Parvo and Magno channels on the retina (centred Parvo (fovea) and Magno outside (parafovea))
* @param useColorMode: color information is used if true; warning, if the input is only gray level, a buffer overflow error will occur
-> note that if color mode is activated and processRetinaParvoMagnoMapping==true, then the demultiplexed color frame (accessible through getColorOutput()) will be a color contours frame in the fovea and gray level moving contours outside
@param inputIsColorMultiplexed: set true if the input data is a multiplexed color image (using Bayer sampling for example); the color sampling method must correspond to the RETINA_COLORSAMPLINGMETHOD passed at constructor!
* @return true if process ran well, false in case of failure
*/
bool runFilter(const std::valarray<float> &imageInput, const bool useAdaptiveFiltering=true, const bool processRetinaParvoMagnoMapping=false, const bool useColorMode=false, const bool inputIsColorMultiplexed=false);
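// usage sketch (hypothetical object and buffer names):
//   retinaFilter.runFilter(frameBuffer); // default flags: adaptive filtering on, no Parvo/Magno mapping, gray mode, non-multiplexed input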
/**
* run the initialized retina filter in order to perform gray image tone mapping; after this call the retina outputs are updated
* the algorithm is based on David Alleysson, Sabine Süsstrunk and Laurence Meylan's work, please cite:
* -> Meylan L., Alleysson D., and Süsstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, No. 9, September, 1st, 2007, pp. 2807-2816
* the resulting gray frame is written to grayImageOutput
* @param grayImageInput: gray image input buffer respecting the size specified at the constructor level
* @param grayImageOutput: gray image output buffer receiving the tone mapped frame
* @param PhotoreceptorsCompression: sets the log compression parameters applied at the photoreceptors level (enhance luminance in dark areas)
* @param ganglionCellsCompression: sets the log compression applied at the ganglion cells output (enhance contrast)
*/
void runGrayToneMapping(const std::valarray<float> &grayImageInput, std::valarray<float> &grayImageOutput, const float PhotoreceptorsCompression=0.6, const float ganglionCellsCompression=0.6);
/**
* run the initialized retina filter in order to perform color tone mapping applied on an RGB image; after this call the color output of the retina is updated (use function getColorOutput() to grab it)
* the algorithm is based on David Alleysson, Sabine Süsstrunk and Laurence Meylan's work, please cite:
* -> Meylan L., Alleysson D., and Süsstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, No. 9, September, 1st, 2007, pp. 2807-2816
* get the resulting RGB frame by calling function getColorOutput()
* @param RGBimageInput: RGB image input buffer respecting the size specified at the constructor level
* @param useAdaptiveFiltering: set true if you want to use adaptive color demultiplexing (solves some color artifact problems), see RetinaColor for citation references
* @param PhotoreceptorsCompression: sets the log compression parameters applied at the photoreceptors level (enhance luminance in dark areas)
* @param ganglionCellsCompression: sets the log compression applied at the ganglion cells output (enhance contrast)
*/
void runRGBToneMapping(const std::valarray<float> &RGBimageInput, std::valarray<float> &imageOutput, const bool useAdaptiveFiltering, const float PhotoreceptorsCompression=0.6, const float ganglionCellsCompression=0.6);
/**
* run the initialized retina filter in order to perform tone mapping applied on an LMS image; after this call the color output of the retina is updated (use function getColorOutput() to grab it)
* get the resulting frame by calling function getColorOutput()
* @param LMSimageInput: LMS image input buffer respecting the size specified at the constructor level
* @param useAdaptiveFiltering: set true if you want to use adaptive color demultiplexing (solves some color artifact problems), see RetinaColor for citation references
* @param PhotoreceptorsCompression: sets the log compression parameters applied at the photoreceptors level (enhance luminance in dark areas)
* @param ganglionCellsCompression: sets the log compression applied at the ganglion cells output (enhance contrast)
*/
void runLMSToneMapping(const std::valarray<float> &LMSimageInput, std::valarray<float> &imageOutput, const bool useAdaptiveFiltering, const float PhotoreceptorsCompression=0.6, const float ganglionCellsCompression=0.6);
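// NOTE: the LMS tone mapping implementation is currently disabled in retinafilter.cpp and only reports an error message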
/**
* set up function of the retina filter: the whole retina is initialized at this step, some specific parameters are set by default, use setOPLandParvoCoefficientsTable() and setMagnoCoefficientsTable() in order to set up the retina with more options
* @param OPLspatialResponse1: (equal to k1 in setOPLandParvoCoefficientsTable() function) the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
* @param OPLtemporalresponse1: (equal to tau1 in setOPLandParvoCoefficientsTable() function) the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
* @param OPLassymetryGain: (equal to beta2 in setOPLandParvoCoefficientsTable() function) gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, then, the luminance is not filtered and is still reachable at the output, typical value is 0
* @param OPLspatialResponse2: (equal to k2 in setOPLandParvoCoefficientsTable() function) the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixel
* @param OPLtemporalresponse2: (equal to tau2 in setOPLandParvoCoefficientsTable() function) the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors
* @param LPfilterSpatialResponse: (equal to parasolCells_k in setMagnoCoefficientsTable() function) the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
* @param LPfilterGain: (equal to parasolCells_beta in setMagnoCoefficientsTable() function) the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
* @param LPfilterTemporalresponse: (equal to parasolCells_tau in setMagnoCoefficientsTable() function) the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)
* @param MovingContoursExtractorCoefficient: (equal to amacrinCellsTemporalCutFrequency in setMagnoCoefficientsTable() function) the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 5
* @param normalizeParvoOutput_0_maxOutputValue: specifies if the Parvo cellular output should be normalized between 0 and maxOutputValue (true) or not (false) in order to remain at a null mean value, true value is recommended for visualisation
* @param normalizeMagnoOutput_0_maxOutputValue: specifies if the Magno cellular output should be normalized between 0 and maxOutputValue (true) or not (false), setting true may be hazardous because it can enhance the noise response when nothing is moving
* @param maxOutputValue: the maximum amplitude value of the normalized outputs (generally 255 for 8bit per channel pictures)
* @param maxInputValue: the maximum pixel value of the input picture (generally 255 for 8bit per channel pictures), specify it in other case (for example High Dynamic Range Images)
* @param meanValue: the global mean value of the input data, useful for local adaptation setup
*/
void setGlobalParameters(const float OPLspatialResponse1=0.7, const float OPLtemporalresponse1=1, const float OPLassymetryGain=0, const float OPLspatialResponse2=5, const float OPLtemporalresponse2=1, const float LPfilterSpatialResponse=5, const float LPfilterGain=0, const float LPfilterTemporalresponse=0, const float MovingContoursExtractorCoefficient=5, const bool normalizeParvoOutput_0_maxOutputValue=false, const bool normalizeMagnoOutput_0_maxOutputValue=false, const float maxOutputValue=255.0, const float maxInputValue=255.0, const float meanValue=128.0);
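// usage sketch: calling it with no arguments applies the documented defaults, i.e.
//   retinaFilter.setGlobalParameters(); // == setGlobalParameters(0.7, 1, 0, 5, 1, 5, 0, 0, 5, false, false, 255.0, 255.0, 128.0)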
/**
* setup the local luminance adaptation capability
* @param V0CompressionParameter: the compression strength of the photoreceptors local adaptation output, set a value between 160 and 250 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 160
*/
inline void setPhotoreceptorsLocalAdaptationSensitivity(const float V0CompressionParameter){_photoreceptorsPrefilter.setV0CompressionParameter(1-V0CompressionParameter);_setInitPeriodCount();};
/**
* setup the local luminance adaptation capability
* @param V0CompressionParameter: the compression strength of the parvocellular pathway (details) local adaptation output, set a value between 160 and 250 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 160
*/
inline void setParvoGanglionCellsLocalAdaptationSensitivity(const float V0CompressionParameter){_ParvoRetinaFilter.setV0CompressionParameter(V0CompressionParameter);_setInitPeriodCount();};
/**
* setup the local luminance adaptation area of integration
* @param spatialResponse: the spatial constant of the low pass filter applied on the bipolar cells output in order to compute local contrast mean values
* @param temporalResponse: the temporal constant of the low pass filter applied on the bipolar cells output in order to compute local contrast mean values (generally set to zero: immediate response)
*/
inline void setGanglionCellsLocalAdaptationLPfilterParameters(const float spatialResponse, const float temporalResponse){_ParvoRetinaFilter.setGanglionCellsLocalAdaptationLPfilterParameters(temporalResponse, spatialResponse);_setInitPeriodCount();};
/**
* setup the local luminance adaptation capability
* @param V0CompressionParameter: the compression strength of the magnocellular pathway (motion) local adaptation output, set a value between 160 and 250 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 160
*/
inline void setMagnoGanglionCellsLocalAdaptationSensitivity(const float V0CompressionParameter){_MagnoRetinaFilter.setV0CompressionParameter(V0CompressionParameter);_setInitPeriodCount();};
/**
* setup the OPL and IPL parvo channels
* @param beta1: gain of the horizontal cells network, if 0, then the mean value of the output is zero (default value), if the parameter is near 1, the amplitude is boosted but it should only be used for values rescaling... if needed
* @param tau1: the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
* @param k1: the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
* @param beta2: gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, then, the luminance is not filtered and is still reachable at the output, typical value is 0
* @param tau2: the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors
* @param k2: the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixel, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
* @param V0CompressionParameter: the compression strength of the ganglion cells local adaptation output, set a value between 160 and 250 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 230
*/
void setOPLandParvoParameters(const float beta1, const float tau1, const float k1, const float beta2, const float tau2, const float k2, const float V0CompressionParameter){_ParvoRetinaFilter.setOPLandParvoFiltersParameters(beta1, tau1, k1, beta2, tau2, k2);_ParvoRetinaFilter.setV0CompressionParameter(V0CompressionParameter);_setInitPeriodCount();};
/**
* set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel
* @param parasolCells_beta: the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
* @param parasolCells_tau: the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)
* @param parasolCells_k: the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
* @param amacrinCellsTemporalCutFrequency: the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 5
* @param V0CompressionParameter: the compression strength of the ganglion cells local adaptation output, set a value between 160 and 250 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 200
* @param localAdaptintegration_tau: specifies the temporal constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
* @param localAdaptintegration_k: specifies the spatial constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
*/
void setMagnoCoefficientsTable(const float parasolCells_beta, const float parasolCells_tau, const float parasolCells_k, const float amacrinCellsTemporalCutFrequency, const float V0CompressionParameter, const float localAdaptintegration_tau, const float localAdaptintegration_k){_MagnoRetinaFilter.setCoefficientsTable(parasolCells_beta, parasolCells_tau, parasolCells_k, amacrinCellsTemporalCutFrequency, localAdaptintegration_tau, localAdaptintegration_k);_MagnoRetinaFilter.setV0CompressionParameter(V0CompressionParameter);_setInitPeriodCount();};
/**
* set if the parvo output should be or not normalized between 0 and 255 (for display purpose generally)
* @param normalizeParvoOutput_0_maxOutputValue: true if normalization should be done
*/
inline void activateNormalizeParvoOutput_0_maxOutputValue(const bool normalizeParvoOutput_0_maxOutputValue){_normalizeParvoOutput_0_maxOutputValue=normalizeParvoOutput_0_maxOutputValue;};
/**
* set if the magno output should be or not normalized between 0 and 255 (for display purpose generally); take care, if nothing is moving, the noise will be enhanced !!!
* @param normalizeMagnoOutput_0_maxOutputValue: true if normalization should be done
*/
inline void activateNormalizeMagnoOutput_0_maxOutputValue(const bool normalizeMagnoOutput_0_maxOutputValue){_normalizeMagnoOutput_0_maxOutputValue=normalizeMagnoOutput_0_maxOutputValue;};
/**
* setup the maximum amplitude value of the normalized outputs (generally 255 for 8bit per channel pictures)
* @param maxOutputValue: maximum amplitude value of the normalized outputs (generally 255 for 8bit per channel pictures)
*/
inline void setMaxOutputValue(const float maxOutputValue){_maxOutputValue=maxOutputValue;};
/**
* sets the color mode of the frame grabber
* @param desiredColorMode: true if the user needs color information, false for graylevels
*/
void setColorMode(const bool desiredColorMode){_useColorMode=desiredColorMode;};
/**
* activate color saturation as the final step of the color demultiplexing process
* -> this saturation is a sigmoid function applied to each channel of the demultiplexed image.
* @param saturateColors: boolean that activates color saturation (if true) or deactivates it (if false)
* @param colorSaturationValue: the saturation factor
* */
inline void setColorSaturation(const bool saturateColors=true, const float colorSaturationValue=4.0){_colorEngine.setColorSaturation(saturateColors, colorSaturationValue);};
/////////////////////////////////////////////////////////////////
// function that retrieve the main retina outputs, one by one, or all in a structure
/**
* @return the input image sampled by the photoreceptors spatial sampling
*/
inline const std::valarray<float> &getPhotoreceptorsSampledFrame() const
{
CV_Assert(_photoreceptorsLogSampling);
return _photoreceptorsLogSampling->getSampledFrame();
};
/**
* @return photoreceptors output, locally adapted luminance only, no high frequency spatio-temporal noise reduction at the next retina processing stages, use getPhotoreceptors method to get complete photoreceptors output
*/
inline const std::valarray<float> &getLocalAdaptation() const {return _photoreceptorsPrefilter.getOutput();};
/**
* @return photoreceptors output: locally adapted luminance and high frequency spatio-temporal noise reduction, high luminance is a little saturated at this stage, but this is corrected naturally at the next retina processing stages
*/
inline const std::valarray<float> &getPhotoreceptors() const {return _ParvoRetinaFilter.getPhotoreceptorsLPfilteringOutput();};
/**
* @return the local luminance of the processed frame (it is the horizontal cells output)
*/
inline const std::valarray<float> &getHorizontalCells() const {return _ParvoRetinaFilter.getHorizontalCellsOutput();};
///////// CONTOURS part, PARVOCELLULAR RETINA PATHWAY
/**
* @return true if Parvocellular output is activated, false if not
*/
inline bool areContoursProcessed(){return _useParvoOutput;};
/**
* method to retrieve the foveal parvocellular pathway response (no details energy in parafovea)
* @param parvoFovealResponse: buffer that will be filled with the response of the parvocellular pathway in the foveal area
* @return true if process succeeded (if buffer exists, if its size matches retina size, if parvo channel is activated and if mapping is initialized)
*/
bool getParvoFoveaResponse(std::valarray<float> &parvoFovealResponse);
/**
* @param useParvoOutput: true if Parvocellular output should be activated, false if not
*/
inline void activateContoursProcessing(const bool useParvoOutput){_useParvoOutput=useParvoOutput;};
/**
* @return the parvocellular contours information (details), should be used at the fovea level
*/
const std::valarray<float> &getContours(); // Parvocellular output
/**
* @return the parvocellular contours ON information (details), should be used at the fovea level
*/
inline const std::valarray<float> &getContoursON() const {return _ParvoRetinaFilter.getParvoON();};// Parvocellular ON output
/**
* @return the parvocellular contours OFF information (details), should be used at the fovea level
*/
inline const std::valarray<float> &getContoursOFF() const {return _ParvoRetinaFilter.getParvoOFF();};// Parvocellular OFF output
///////// MOVING CONTOURS part, MAGNOCELLULAR RETINA PATHWAY
/**
* @return true if Magnocellular output is activated, false if not
*/
inline bool areMovingContoursProcessed(){return _useMagnoOutput;};
/**
* method to retrieve the parafoveal magnocellular pathway response (no motion energy in fovea)
* @param magnoParafovealResponse: buffer that will be filled with the response of the magnocellular pathway in the parafoveal area
* @return true if process succeeded (if buffer exists, if its size matches retina size, if magno channel is activated and if mapping is initialized)
*/
bool getMagnoParaFoveaResponse(std::valarray<float> &magnoParafovealResponse);
/**
* @param useMagnoOutput: true if Magnoocellular output should be activated, false if not
*/
inline void activateMovingContoursProcessing(const bool useMagnoOutput){_useMagnoOutput=useMagnoOutput;};
/**
* @return the magnocellular moving contours information (motion), should be used at the parafovea level without post-processing
*/
inline const std::valarray<float> &getMovingContours() const {return _MagnoRetinaFilter.getOutput();};// Magnocellular output
/**
* @return the magnocellular moving contours information (motion), should be used at the parafovea level with asymmetric sigmoid post-processing which saturates motion information
*/
inline const std::valarray<float> &getMovingContoursSaturated() const {return _MagnoRetinaFilter.getMagnoYsaturated();};// Saturated Magnocellular output
/**
* @return the magnocellular moving contours ON information (motion), should be used at the parafovea level without post-processing
*/
inline const std::valarray<float> &getMovingContoursON() const {return _MagnoRetinaFilter.getMagnoON();};// Magnocellular ON output
/**
* @return the magnocellular moving contours OFF information (motion), should be used at the parafovea level without post-processing
*/
inline const std::valarray<float> &getMovingContoursOFF() const {return _MagnoRetinaFilter.getMagnoOFF();};// Magnocellular OFF output
/**
* @return a gray level image with center Parvo and peripheral Magno X channels; WARNING, the result is only valid if you previously called function runFilter(imageInput, processRetinaParvoMagnoMapping=true);
* -> will be accessible even if color mode is activated (but the image is color sampled so quality is poor); to get the same result in color, use function getColorOutput()
*/
inline const std::valarray<float> &getRetinaParvoMagnoMappedOutput() const {return _retinaParvoMagnoMappedFrame;};// return image with center Parvo and peripheral Magno channels
/**
* color processing dedicated functions
* @return the parvo channel (contours, details) of the processed frame, grayscale output
*/
inline const std::valarray<float> &getParvoContoursChannel() const {return _colorEngine.getLuminance();};
/**
* color processing dedicated functions
* @return the chrominance of the processed frame (same colorspace as the input output, usually RGB)
*/
inline const std::valarray<float> &getParvoChrominance() const {return _colorEngine.getChrominance();}; // only retrieve chrominance
/**
* color processing dedicated functions
* @return the parvo + chrominance channels of the processed frame (same colorspace as the input output, usually RGB)
*/
inline const std::valarray<float> &getColorOutput() const {return _colorEngine.getDemultiplexedColorFrame();};// retrieve luminance+chrominance
/**
* apply to the retina color output the Krauskopf transformation which leads to an opponent color system: the output colorspace is ACr1Cr2 if the input of the retina was the LMS color space
* @param result: the input buffer to fill with the transformed colorspace retina output
* @return true if process ended successfully
*/
inline bool applyKrauskopfLMS2Acr1cr2Transform(std::valarray<float> &result){return _colorEngine.applyKrauskopfLMS2Acr1cr2Transform(result);};
/**
* apply to the retina color output the LMS to Lab transformation: the output colorspace is CIE Lab if the input of the retina was the LMS color space
* @param result: the input buffer to fill with the transformed colorspace retina output
* @return true if process ended successfully
*/
inline bool applyLMS2LabTransform(std::valarray<float> &result){return _colorEngine.applyLMS2LabTransform(result);};
/**
* color processing dedicated functions
* @return the retina initialized mode, true if color mode (RGB), false if grayscale
*/
inline bool isColorMode(){return _useColorMode;}; // return true if RGB mode, false if gray level mode
/**
* @return the irregular low pass filter output at the photoreceptors level
*/
inline const std::valarray<float> &getIrregularLPfilteredInputFrame() const {return _photoreceptorsLogSampling->getIrregularLPfilteredInputFrame();};
/**
* @return true if color mode is activated, false if gray levels processing
*/
bool getColorMode(){return _useColorMode;};
/**
*
* @return true if a sufficient number of frames has been processed since the last parameters update, so that the filter has reached its stable state (steady-state regime)
*/
inline bool isInitTransitionDone(){if (_ellapsedFramesSinceLastReset<_globalTemporalConstant)return false; return true;};
/**
* find a distance in the input image space given a distance known in the retina log sampled space
* @param projectedRadiusLength: the distance to image center in the retina log sampled space
* @return the distance to image center in the input image space
*/
inline float getRetinaSamplingBackProjection(const float projectedRadiusLength)
{
if (_photoreceptorsLogSampling)
return (float)_photoreceptorsLogSampling->getOriginalRadiusLength(projectedRadiusLength);
return projectedRadiusLength;
};
/////////////////:
// retina dimensions getters
/**
* @return number of rows of the filter
*/
inline unsigned int getInputNBrows(){if (_photoreceptorsLogSampling) return _photoreceptorsLogSampling->getNBrows();else return _photoreceptorsPrefilter.getNBrows();};
/**
* @return number of columns of the filter
*/
inline unsigned int getInputNBcolumns(){if (_photoreceptorsLogSampling) return _photoreceptorsLogSampling->getNBcolumns();else return _photoreceptorsPrefilter.getNBcolumns();};
/**
* @return number of pixels of the filter
*/
inline unsigned int getInputNBpixels(){if (_photoreceptorsLogSampling) return _photoreceptorsLogSampling->getNBpixels();else return _photoreceptorsPrefilter.getNBpixels();};
/**
* @return the height of the frame output
*/
inline unsigned int getOutputNBrows(){return _photoreceptorsPrefilter.getNBrows();};
/**
* @return the width of the frame output
*/
inline unsigned int getOutputNBcolumns(){return _photoreceptorsPrefilter.getNBcolumns();};
/**
* @return the numbers of output pixels (width*height) of the images used by the object
*/
inline unsigned int getOutputNBpixels(){return _photoreceptorsPrefilter.getNBpixels();};
private:
// processing activation flags
bool _useParvoOutput;
bool _useMagnoOutput;
// filter stability controls
unsigned int _ellapsedFramesSinceLastReset;
unsigned int _globalTemporalConstant;
// private template buffers and related access pointers
std::valarray<float> _retinaParvoMagnoMappedFrame;
std::valarray<float> _retinaParvoMagnoMapCoefTable;
// private objects of the class
BasicRetinaFilter _photoreceptorsPrefilter;
ParvoRetinaFilter _ParvoRetinaFilter;
MagnoRetinaFilter _MagnoRetinaFilter;
RetinaColor _colorEngine;
ImageLogPolProjection *_photoreceptorsLogSampling;
bool _useMinimalMemoryForToneMappingONLY;
bool _normalizeParvoOutput_0_maxOutputValue;
bool _normalizeMagnoOutput_0_maxOutputValue;
float _maxOutputValue;
bool _useColorMode;
// private functions
void _setInitPeriodCount();
void _createHybridTable();
void _processRetinaParvoMagnoMapping();
void _runGrayToneMapping(const std::valarray<float> &grayImageInput, std::valarray<float> &grayImageOutput ,const float PhotoreceptorsCompression=0.6, const float ganglionCellsCompression=0.6);
};
}// end of namespace bioinspired
}// end of namespace cv
#endif /*RETINACLASSES_H_*/

View File

@@ -1,555 +0,0 @@
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author&collaborators for computed vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract still images & image sequences features, from contours details to motion spatio-temporal features, etc. for high level visual scene analysis. Also contribute to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** Theses algorithm have been developped by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work have been carried out thanks to Jeanny Herault who's research and great discussions are the basis of all this work, please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of phd/research collegues from which code has been redrawn by the author :
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more informations in the above cited Jeanny Heraults's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#ifndef __TEMPLATEBUFFER_HPP__
#define __TEMPLATEBUFFER_HPP__
#include <valarray>
#include <cstdlib>
#include <iostream>
#include <cmath>
//#define __TEMPLATEBUFFERDEBUG //define TEMPLATEBUFFERDEBUG in order to display debug information
namespace cv
{
namespace bioinspired
{
//// If a parallelization method is available, you should define MAKE_PARALLEL; otherwise, the classical serial code will be used
#define MAKE_PARALLEL
// ==> then include required includes
#ifdef MAKE_PARALLEL
// ==> declare useful generic tools
template <class type>
class Parallel_clipBufferValues: public cv::ParallelLoopBody
{
private:
type *bufferToClip;
type minValue, maxValue;
public:
Parallel_clipBufferValues(type* bufferToProcess, const type min, const type max)
: bufferToClip(bufferToProcess), minValue(min), maxValue(max){}
virtual void operator()( const cv::Range &r ) const {
register type *inputOutputBufferPTR=bufferToClip+r.start;
for (register int jf = r.start; jf != r.end; ++jf, ++inputOutputBufferPTR)
{
if (*inputOutputBufferPTR>maxValue)
*inputOutputBufferPTR=maxValue;
else if (*inputOutputBufferPTR<minValue)
*inputOutputBufferPTR=minValue;
}
}
};
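// usage sketch (hypothetical names): clip every value of a float buffer to [0, 255]
// using the framework threads, as done in clipHistogram() below:
//   parallel_for_(cv::Range(0, bufferSize), Parallel_clipBufferValues<float>(bufferPTR, 0.f, 255.f));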
#endif
/**
* @class TemplateBuffer
* @brief this class is a simple template memory buffer which contains basic functions to get information on or normalize the buffer content
* note that thanks to the parent STL template class "valarray", it is possible to easily perform operations on the full array such as addition, product etc.
* @author Alexandre BENOIT (benoit.alexandre.vision@gmail.com), helped by Gelu IONESCU (gelu.ionescu@lis.inpg.fr)
* creation date: september 2007
*/
template <class type> class TemplateBuffer : public std::valarray<type>
{
public:
/**
* constructor for monodimensional array
* @param dim: the size of the vector
*/
TemplateBuffer(const size_t dim=0)
: std::valarray<type>((type)0, dim)
{
_NBrows=1;
_NBcolumns=dim;
_NBdepths=1;
_NBpixels=dim;
_doubleNBpixels=2*dim;
}
/**
* constructor by copy for monodimensional array
* @param pVal: the pointer to a buffer to copy
* @param dim: the size of the vector
*/
TemplateBuffer(const type* pVal, const size_t dim)
: std::valarray<type>(pVal, dim)
{
_NBrows=1;
_NBcolumns=dim;
_NBdepths=1;
_NBpixels=dim;
_doubleNBpixels=2*dim;
}
/**
* constructor for bidimensional array
* @param dimRows: the number of rows of the buffer
* @param dimColumns: the number of columns of the buffer
* @param depth: the number of layers of the buffer in its third dimension (3 for color images, 1 for gray images)
*/
TemplateBuffer(const size_t dimRows, const size_t dimColumns, const size_t depth=1)
: std::valarray<type>((type)0, dimRows*dimColumns*depth)
{
#ifdef TEMPLATEBUFFERDEBUG
std::cout<<"TemplateBuffer::TemplateBuffer: new buffer, size="<<dimRows<<", "<<dimColumns<<", "<<depth<<"valarraySize="<<this->size()<<std::endl;
#endif
_NBrows=dimRows;
_NBcolumns=dimColumns;
_NBdepths=depth;
_NBpixels=dimRows*dimColumns;
_doubleNBpixels=2*dimRows*dimColumns;
//_createTableIndex();
#ifdef TEMPLATEBUFFERDEBUG
std::cout<<"TemplateBuffer::TemplateBuffer: construction successful"<<std::endl;
#endif
}
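// e.g. (hypothetical sizes): TemplateBuffer<float> frame(480, 640, 3); // zero-initialized 480x640 RGB buffer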
/**
* copy constructor
* @param toCopy
* @return the constructed instance
*
TemplateBuffer(const TemplateBuffer &toCopy)
:_NBrows(toCopy.getNBrows()),_NBcolumns(toCopy.getNBcolumns()),_NBdepths(toCopy.getNBdephs()), _NBpixels(toCopy.getNBpixels()), _doubleNBpixels(toCopy.getNBpixels()*2)
//std::valarray<type>(toCopy)
{
memcpy(Buffer(), toCopy.Buffer(), this->size());
}*/
/**
* destructor
*/
virtual ~TemplateBuffer()
{
#ifdef TEMPLATEBUFFERDEBUG
std::cout<<"~TemplateBuffer"<<std::endl;
#endif
}
/**
* delete the buffer content (set zeros)
*/
inline void setZero(){std::valarray<type>::operator=(0);};//memset(Buffer(), 0, sizeof(type)*_NBpixels);};
/**
* @return the numbers of rows (height) of the images used by the object
*/
inline unsigned int getNBrows(){return (unsigned int)_NBrows;};
/**
* @return the numbers of columns (width) of the images used by the object
*/
inline unsigned int getNBcolumns(){return (unsigned int)_NBcolumns;};
/**
* @return the numbers of pixels (width*height) of the images used by the object
*/
inline unsigned int getNBpixels(){return (unsigned int)_NBpixels;};
/**
* @return the numbers of pixels (width*height) of the images used by the object
*/
inline unsigned int getDoubleNBpixels(){return (unsigned int)_doubleNBpixels;};
/**
* @return the numbers of depths (3rd dimension: 1 for gray images, 3 for rgb images) of the images used by the object
*/
inline unsigned int getDepthSize(){return (unsigned int)_NBdepths;};
/**
* resize the buffer and recompute table index etc.
*/
void resizeBuffer(const size_t dimRows, const size_t dimColumns, const size_t depth=1)
{
this->resize(dimRows*dimColumns*depth);
_NBrows=dimRows;
_NBcolumns=dimColumns;
_NBdepths=depth;
_NBpixels=dimRows*dimColumns;
_doubleNBpixels=2*dimRows*dimColumns;
}
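// note: std::valarray::resize() value-initializes every element, so any previous buffer content is discarded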
inline TemplateBuffer<type> & operator=(const std::valarray<type> &b)
{
//std::cout<<"TemplateBuffer<type> & operator= affect vector: "<<std::endl;
std::valarray<type>::operator=(b);
return *this;
}
inline TemplateBuffer<type> & operator=(const type &b)
{
//std::cout<<"TemplateBuffer<type> & operator= affect value: "<<b<<std::endl;
std::valarray<type>::operator=(b);
return *this;
}
/* inline const type &operator[](const unsigned int &b)
{
return (*this)[b];
}
*/
/**
* @return the buffer adress in non const mode
*/
inline type* Buffer() { return &(*this)[0]; }
///////////////////////////////////////////////////////
// Standard Image manipulation functions
/**
* standard 0 to 255 image normalization function
* @param inputOutputBuffer: the image to be normalized (rewrites the input), if no parameter, then, the built in buffer reachable by getOutput() function is normalized
* @param nbPixels: specifies the number of pixels on which the normalization should be performed, if 0, then all pixels specified in the constructor are processed
* @param maxOutputValue: the maximum output value
*/
static void normalizeGrayOutput_0_maxOutputValue(type *inputOutputBuffer, const size_t nbPixels, const type maxOutputValue=(type)255.0);
/**
* standard 0 to 255 image normalization function
* @param inputOutputBuffer: the image to be normalized (rewrites the input), if no parameter, then, the built in buffer reachable by getOutput() function is normalized
* @param nbPixels: specifies the number of pixels on which the normalization should be performed, if 0, then all pixels specified in the constructor are processed
* @param maxOutputValue: the maximum output value
*/
void normalizeGrayOutput_0_maxOutputValue(const type maxOutputValue=(type)255.0){normalizeGrayOutput_0_maxOutputValue(this->Buffer(), this->size(), maxOutputValue);};
/**
* sigmoid image normalization function (saturates min and max values)
* @param meanValue: specifies the mean value of the pixels to be processed
* @param sensitivity: strength of the sigmoid
* @param inputPicture: the image to be normalized, if no parameter, then the built-in buffer reachable by the getOutput() function is normalized
* @param outputBuffer: the output buffer to which the result is written, if no parameter, then the built-in buffer reachable by the getOutput() function is normalized
* @param maxOutputValue: the maximum output value
*/
static void normalizeGrayOutputCentredSigmoide(const type meanValue, const type sensitivity, const type maxOutputValue, type *inputPicture, type *outputBuffer, const unsigned int nbPixels);
/**
* sigmoid image normalization function on the current buffer (saturates min and max values)
* @param meanValue: specifies the mean value of the pixels to be processed
* @param sensitivity: strength of the sigmoid
* @param maxOutputValue: the maximum output value
*/
inline void normalizeGrayOutputCentredSigmoide(const type meanValue=(type)0.0, const type sensitivity=(type)2.0, const type maxOutputValue=(type)255.0){ (void)maxOutputValue; /* note: maxOutputValue is currently ignored, the output range is hardcoded to 255.0 below */ normalizeGrayOutputCentredSigmoide(meanValue, sensitivity, 255.0, this->Buffer(), this->Buffer(), this->getNBpixels());};
/**
* sigmoid image normalization function (saturates min and max values); here the sigmoid is centred on low values (strong saturation of the medium and high values)
* @param inputPicture: the image to be normalized; if omitted, the built-in buffer reachable by the getOutput() function is normalized
* @param outputBuffer: the output buffer to which the result is written; if omitted, the built-in buffer reachable by the getOutput() function is used
* @param sensitivity: strength of the sigmoid
* @param maxOutputValue: the maximum output value
*/
void normalizeGrayOutputNearZeroCentreredSigmoide(type *inputPicture=(type*)NULL, type *outputBuffer=(type*)NULL, const type sensitivity=(type)40, const type maxOutputValue=(type)255.0);
/**
* center and reduce the image: (image-mean)/std
* @param inputOutputBuffer: the image to be normalized; if omitted, the built-in buffer is processed in place
*/
void centerReductImageLuminance(type *inputOutputBuffer=(type*)NULL);
/**
* @return standard deviation of the buffer
*/
double getStandardDeviation()
{
double standardDeviation=0;
double meanValue=getMean();
type *bufferPTR=Buffer();
for (unsigned int i=0;i<this->size();++i)
{
double diff=(*(bufferPTR++)-meanValue);
standardDeviation+=diff*diff;
}
return std::sqrt(standardDeviation/this->size());
};
/**
* Clip buffer histogram
* @param minRatio: the minimum ratio of the lower pixel values, range=[0,1], lower than maxRatio
* @param maxRatio: the maximum ratio of the higher pixel values, range=[0,1], higher than minRatio
* @param maxOutputValue: the maximum output value used for the final renormalization
*/
void clipHistogram(double minRatio, double maxRatio, double maxOutputValue)
{
if (minRatio>=maxRatio)
{
std::cerr<<"TemplateBuffer::clipHistogram: minRatio must be inferior to maxRatio, buffer unchanged"<<std::endl;
return;
}
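// Two-pass clipping: pass 1 scans for the actual pixel values closest to the
// ratio-derived thresholds; pass 2 clamps everything to
// [updatedLowValue, updatedHighValue] and renormalizes to [0, maxOutputValue].
// Anchoring the clip levels to values that really occur in the buffer avoids
// clamping to levels no pixel ever takes.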
// find the pixel values closest to the high and low thresholds
const double maxThreshold=this->max()*maxRatio;
const double minThreshold=(this->max()-this->min())*minRatio+this->min();
type *bufferPTR=this->Buffer();
double deltaH=maxThreshold;
double deltaL=maxThreshold;
double updatedHighValue=maxThreshold;
double updatedLowValue=maxThreshold;
for (unsigned int i=0;i<this->size();++i)
{
double currentValue=(double)*(bufferPTR++);
// update the pixel value closest to (below) the high threshold
double highValueTest=maxThreshold-currentValue;
if (highValueTest>0)
{
if (deltaH>highValueTest)
{
deltaH=highValueTest;
updatedHighValue=currentValue;
}
}
// update the pixel value closest to (above) the low threshold
double lowValueTest=currentValue-minThreshold;
if (lowValueTest>0)
{
if (deltaL>lowValueTest)
{
deltaL=lowValueTest;
updatedLowValue=currentValue;
}
}
}
}
std::cout<<"Tdebug"<<std::endl;
std::cout<<"deltaL="<<deltaL<<", deltaH="<<deltaH<<std::endl;
std::cout<<"this->max()"<<this->max()<<"maxThreshold="<<maxThreshold<<"updatedHighValue="<<updatedHighValue<<std::endl;
std::cout<<"this->min()"<<this->min()<<"minThreshold="<<minThreshold<<"updatedLowValue="<<updatedLowValue<<std::endl;
// clamp values outside the updated thresholds
bufferPTR=this->Buffer();
#ifdef MAKE_PARALLEL // call the TemplateBuffer multithreaded clipping method
parallel_for_(cv::Range(0,this->size()), Parallel_clipBufferValues<type>(bufferPTR, updatedLowValue, updatedHighValue));
#else
for (unsigned int i=0;i<this->size();++i, ++bufferPTR)
{
if (*bufferPTR<updatedLowValue)
*bufferPTR=updatedLowValue;
else if (*bufferPTR>updatedHighValue)
*bufferPTR=updatedHighValue;
}
#endif
normalizeGrayOutput_0_maxOutputValue(this->Buffer(), this->size(), maxOutputValue);
}
/**
* @return the mean value of the vector
*/
inline double getMean(){return this->sum()/this->size();};
protected:
size_t _NBrows;
size_t _NBcolumns;
size_t _NBdepths;
size_t _NBpixels;
size_t _doubleNBpixels;
// utilities
static type _abs(const type x);
};
///////////////////////////////////////////////////////////////////////
/// normalize output between 0 and maxOutputValue; can be applied to images of a different size than the declared one if the nbPixels parameter is set
template <class type>
void TemplateBuffer<type>::normalizeGrayOutput_0_maxOutputValue(type *inputOutputBuffer, const size_t processedPixels, const type maxOutputValue)
{
type maxValue=inputOutputBuffer[0], minValue=inputOutputBuffer[0];
// get the min and max value
register type *inputOutputBufferPTR=inputOutputBuffer;
for (register size_t j = 0; j<processedPixels; ++j)
{
type pixValue = *(inputOutputBufferPTR++);
if (maxValue < pixValue)
maxValue = pixValue;
else if (minValue > pixValue)
minValue = pixValue;
}
// change the range of the data to 0..maxOutputValue
// (guard against a division by zero on constant images)
const type valueRange = maxValue-minValue;
type factor = valueRange!=0 ? maxOutputValue/valueRange : (type)0;
type offset = (type)(-minValue*factor);
inputOutputBufferPTR=inputOutputBuffer;
for (register size_t j = 0; j < processedPixels; ++j, ++inputOutputBufferPTR)
*inputOutputBufferPTR=*(inputOutputBufferPTR)*factor+offset;
}
// normalize data with a sigmoid centred near 0 (saturates large positive values)
template <class type>
void TemplateBuffer<type>::normalizeGrayOutputNearZeroCentreredSigmoide(type *inputBuffer, type *outputBuffer, const type sensitivity, const type maxOutputValue)
{
if (inputBuffer==NULL)
inputBuffer=Buffer();
if (outputBuffer==NULL)
outputBuffer=Buffer();
type X0cube=sensitivity*sensitivity*sensitivity;
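// Compression law (a Michaelis-Menten-like form with exponent 3):
// out = maxOutputValue * x^3 / (x^3 + sensitivity^3),
// nearly linear (in x^3) close to zero, saturating above the sensitivity knee.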
register type *inputBufferPTR=inputBuffer;
register type *outputBufferPTR=outputBuffer;
for (register size_t j = 0; j < _NBpixels; ++j, ++inputBufferPTR)
{
type currentCubeLuminance=*inputBufferPTR**inputBufferPTR**inputBufferPTR;
*(outputBufferPTR++)=maxOutputValue*currentCubeLuminance/(currentCubeLuminance+X0cube);
}
}
// normalize and adjust luminance with a sigmoid centred on the given mean value (typically 128)
template <class type>
void TemplateBuffer<type>::normalizeGrayOutputCentredSigmoide(const type meanValue, const type sensitivity, const type maxOutputValue, type *inputBuffer, type *outputBuffer, const unsigned int nbPixels)
{
if (sensitivity==1.0)
{
std::cerr<<"TemplateBuffer::TemplateBuffer<type>::normalizeGrayOutputCentredSigmoide error: 2nd parameter (sensitivity) must not equal 0, copying original data..."<<std::endl;
memcpy(outputBuffer, inputBuffer, sizeof(type)*nbPixels);
return;
}
type X0=maxOutputValue/(sensitivity-(type)1.0);
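// Centred sigmoid: out = mean + (mean + X0) * (x - mean) / (|x - mean| + X0),
// i.e. values near meanValue pass almost unchanged while large deviations
// saturate towards mean +/- (mean + X0).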
register type *inputBufferPTR=inputBuffer;
register type *outputBufferPTR=outputBuffer;
for (register size_t j = 0; j < nbPixels; ++j, ++inputBufferPTR)
*(outputBufferPTR++)=(meanValue+(meanValue+X0)*(*(inputBufferPTR)-meanValue)/(_abs(*(inputBufferPTR)-meanValue)+X0));
}
// center and reduce the image: (image-mean)/std
template <class type>
void TemplateBuffer<type>::centerReductImageLuminance(type *inputOutputBuffer)
{
// if inputOutputBuffer is unassigned, process the built-in buffer in place
if (inputOutputBuffer==NULL)
inputOutputBuffer=Buffer();
type meanValue=0, stdValue=0;
// compute mean value
for (register size_t j = 0; j < _NBpixels; ++j)
meanValue+=inputOutputBuffer[j];
meanValue/=((type)_NBpixels);
// compute std value
register type *inputOutputBufferPTR=inputOutputBuffer;
for (size_t index=0;index<_NBpixels;++index)
{
type inputMinusMean=*(inputOutputBufferPTR++)-meanValue;
stdValue+=inputMinusMean*inputMinusMean;
}
stdValue=std::sqrt(stdValue/((type)_NBpixels));
// adjust luminance with respect to the mean and std values
inputOutputBufferPTR=inputOutputBuffer;
for (size_t index=0;index<_NBpixels;++index, ++inputOutputBufferPTR)
*inputOutputBufferPTR=(*(inputOutputBufferPTR)-meanValue)/stdValue;
}
template <class type>
type TemplateBuffer<type>::_abs(const type x)
{
if (x>0)
return x;
else
return -x;
}
template < >
inline int TemplateBuffer<int>::_abs(const int x)
{
return std::abs(x);
}
template < >
inline double TemplateBuffer<double>::_abs(const double x)
{
return std::fabs(x);
}
template < >
inline float TemplateBuffer<float>::_abs(const float x)
{
return std::fabs(x);
}
}// end of namespace bioinspired
}// end of namespace cv
#endif
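For orientation, here is a minimal usage sketch of the buffer utilities above. It is hypothetical: the header path and the default constructor are assumptions (only resizeBuffer and the processing helpers appear in this file), so treat it as an illustration rather than the module's API.
// Hypothetical sketch -- header path and default constructor are assumed.
#include <cstdio>
#include "templatebuffer.hpp"

int main()
{
    cv::bioinspired::TemplateBuffer<float> buf;    // assumed default constructor
    buf.resizeBuffer(4, 4);                        // 4x4 pixels, depth 1
    for (unsigned int i = 0; i < buf.getNBpixels(); ++i)
        buf.Buffer()[i] = (float)i;                // fill with a 0..15 ramp
    buf.clipHistogram(0.1, 0.9, 255.0);            // trim outliers, rescale to [0, 255]
    std::printf("mean=%f std=%f\n", buf.getMean(), buf.getStandardDeviation());
    return 0;
}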

View File

@@ -1,3 +0,0 @@
#include "test_precomp.hpp"
CV_TEST_MAIN("cv")

View File

@@ -1,16 +0,0 @@
#ifdef __GNUC__
# pragma GCC diagnostic ignored "-Wmissing-declarations"
# if defined __clang__ || defined __APPLE__
# pragma GCC diagnostic ignored "-Wmissing-prototypes"
# pragma GCC diagnostic ignored "-Wextra"
# endif
#endif
#ifndef __OPENCV_TEST_PRECOMP_HPP__
#define __OPENCV_TEST_PRECOMP_HPP__
#include "opencv2/ts.hpp"
#include "opencv2/bioinspired.hpp"
#include <iostream>
#endif

View File

@@ -1,164 +0,0 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2010-2013, Multicoreware, Inc., all rights reserved.
// Copyright (C) 2010-2013, Advanced Micro Devices, Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// @Authors
// Peng Xiao, pengxiao@multicorewareinc.com
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors as is and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "test_precomp.hpp"
#include "opencv2/opencv_modules.hpp"
#include "opencv2/bioinspired.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/core/ocl.hpp" // cv::ocl::haveOpenCL
#if defined(HAVE_OPENCV_OCL)
#include "opencv2/ocl.hpp"
#define RETINA_ITERATIONS 5
static double checkNear(const cv::Mat &m1, const cv::Mat &m2)
{
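// returns the maximum absolute per-pixel difference (infinity norm)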
return cv::norm(m1, m2, cv::NORM_INF);
}
#define PARAM_TEST_CASE(name, ...) struct name : testing::TestWithParam< std::tr1::tuple< __VA_ARGS__ > >
#define GET_PARAM(k) std::tr1::get< k >(GetParam())
static int oclInit = false;
static int oclAvailable = false;
PARAM_TEST_CASE(Retina_OCL, bool, int, bool, double, double)
{
bool colorMode;
int colorSamplingMethod;
bool useLogSampling;
double reductionFactor;
double samplingStrength;
virtual void SetUp()
{
colorMode = GET_PARAM(0);
colorSamplingMethod = GET_PARAM(1);
useLogSampling = GET_PARAM(2);
reductionFactor = GET_PARAM(3);
samplingStrength = GET_PARAM(4);
if (!oclInit)
{
if (cv::ocl::haveOpenCL())
{
try
{
const cv::ocl::DeviceInfo& dev = cv::ocl::Context::getContext()->getDeviceInfo();
std::cout << "Device name:" << dev.deviceName << std::endl;
oclAvailable = true;
}
catch (...)
{
std::cout << "Device name: N/A" << std::endl;
}
}
oclInit = true;
}
}
};
TEST_P(Retina_OCL, Accuracy)
{
if (!oclAvailable)
{
std::cout << "SKIP test" << std::endl;
return;
}
using namespace cv;
Mat input = imread(cvtest::TS::ptr()->get_data_path() + "shared/lena.png", colorMode);
CV_Assert(!input.empty());
ocl::oclMat ocl_input(input);
Ptr<bioinspired::Retina> ocl_retina = bioinspired::createRetina_OCL(
input.size(),
colorMode,
colorSamplingMethod,
useLogSampling,
reductionFactor,
samplingStrength);
Ptr<bioinspired::Retina> gold_retina = bioinspired::createRetina(
input.size(),
colorMode,
colorSamplingMethod,
useLogSampling,
reductionFactor,
samplingStrength);
Mat gold_parvo;
Mat gold_magno;
ocl::oclMat ocl_parvo;
ocl::oclMat ocl_magno;
for(int i = 0; i < RETINA_ITERATIONS; i ++)
{
ocl_retina->run(ocl_input);
gold_retina->run(input);
gold_retina->getParvo(gold_parvo);
gold_retina->getMagno(gold_magno);
ocl_retina->getParvo(ocl_parvo);
ocl_retina->getMagno(ocl_magno);
int eps = colorMode ? 2 : 1;
EXPECT_LE(checkNear(gold_parvo, (Mat)ocl_parvo), eps);
EXPECT_LE(checkNear(gold_magno, (Mat)ocl_magno), eps);
}
}
INSTANTIATE_TEST_CASE_P(Contrib, Retina_OCL, testing::Combine(
testing::Bool(),
testing::Values((int)cv::bioinspired::RETINA_COLOR_BAYER),
testing::Values(false/*,true*/),
testing::Values(1.0, 0.5),
testing::Values(10.0, 5.0)));
#endif

View File

@@ -756,6 +756,7 @@ They are
:math:`[R_1, -t]`,
:math:`[R_2, t]`,
:math:`[R_2, -t]`.
By decomposing ``E``, you can only get the direction of the translation, so the function returns unit ``t``.
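For illustration, a sketch of how the ambiguity is typically resolved (``points1``, ``points2``, ``focal`` and ``pp`` are hypothetical, pre-filled inputs; ``recoverPose`` is described below): ::

    Mat R1, R2, t;
    decomposeEssentialMat(E, R1, R2, t);  // four candidates: [R1, t], [R1, -t], [R2, t], [R2, -t]
    Mat R, t_unit;
    recoverPose(E, points1, points2, R, t_unit, focal, pp);  // keeps the pose placing points in front of both cameras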
recoverPose
@@ -1260,11 +1261,11 @@ stereoCalibrate
-------------------
Calibrates the stereo camera.
.. ocv:function:: double stereoCalibrate( InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2, InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1, InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2, Size imageSize, OutputArray R, OutputArray T, OutputArray E, OutputArray F, TermCriteria criteria=TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6), int flags=CALIB_FIX_INTRINSIC )
.. ocv:function:: double stereoCalibrate( InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2, InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1, InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2, Size imageSize, OutputArray R, OutputArray T, OutputArray E, OutputArray F, int flags=CALIB_FIX_INTRINSIC ,TermCriteria criteria=TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6))
.. ocv:pyfunction:: cv2.stereoCalibrate(objectPoints, imagePoints1, imagePoints2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imageSize[, R[, T[, E[, F[, criteria[, flags]]]]]]) -> retval, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F
.. ocv:pyfunction:: cv2.stereoCalibrate(objectPoints, imagePoints1, imagePoints2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imageSize[, R[, T[, E[, F[, flags[, criteria]]]]]]) -> retval, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F
.. ocv:cfunction:: double cvStereoCalibrate( const CvMat* object_points, const CvMat* image_points1, const CvMat* image_points2, const CvMat* npoints, CvMat* camera_matrix1, CvMat* dist_coeffs1, CvMat* camera_matrix2, CvMat* dist_coeffs2, CvSize image_size, CvMat* R, CvMat* T, CvMat* E=0, CvMat* F=0, CvTermCriteria term_crit=cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS,30,1e-6), int flags=CV_CALIB_FIX_INTRINSIC )
.. ocv:cfunction:: double cvStereoCalibrate( const CvMat* object_points, const CvMat* image_points1, const CvMat* image_points2, const CvMat* npoints, CvMat* camera_matrix1, CvMat* dist_coeffs1, CvMat* camera_matrix2, CvMat* dist_coeffs2, CvSize image_size, CvMat* R, CvMat* T, CvMat* E=0, CvMat* F=0, int flags=CV_CALIB_FIX_INTRINSIC, CvTermCriteria term_crit=cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS,30,1e-6) )
:param objectPoints: Vector of vectors of the calibration pattern points.
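With the reordered signature, a call now passes the flags before the termination criteria; a sketch with hypothetical, pre-filled arguments: ::

    double rms = stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                                 cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
                                 imageSize, R, T, E, F,
                                 CALIB_FIX_INTRINSIC,
                                 TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6));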

View File

@@ -203,8 +203,8 @@ CV_EXPORTS_W double stereoCalibrate( InputArrayOfArrays objectPoints,
InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1,
InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2,
Size imageSize, OutputArray R,OutputArray T, OutputArray E, OutputArray F,
TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6),
int flags = CALIB_FIX_INTRINSIC );
int flags = CALIB_FIX_INTRINSIC,
TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6) );
//! computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters

View File

@@ -276,9 +276,9 @@ CVAPI(double) cvStereoCalibrate( const CvMat* object_points, const CvMat* image_
CvMat* camera_matrix2, CvMat* dist_coeffs2,
CvSize image_size, CvMat* R, CvMat* T,
CvMat* E CV_DEFAULT(0), CvMat* F CV_DEFAULT(0),
int flags CV_DEFAULT(CV_CALIB_FIX_INTRINSIC),
CvTermCriteria term_crit CV_DEFAULT(cvTermCriteria(
CV_TERMCRIT_ITER+CV_TERMCRIT_EPS,30,1e-6)),
int flags CV_DEFAULT(CV_CALIB_FIX_INTRINSIC));
CV_TERMCRIT_ITER+CV_TERMCRIT_EPS,30,1e-6)) );
#define CV_CALIB_ZERO_DISPARITY 1024

View File

@@ -1998,7 +1998,7 @@ bool cv::findCirclesGrid( InputArray _image, Size patternSize,
{
isFound = boxFinder.findHoles();
}
catch (cv::Exception)
catch (const cv::Exception &)
{
}

View File

@@ -1635,8 +1635,8 @@ double cvStereoCalibrate( const CvMat* _objectPoints, const CvMat* _imagePoints1
CvMat* _cameraMatrix2, CvMat* _distCoeffs2,
CvSize imageSize, CvMat* matR, CvMat* matT,
CvMat* matE, CvMat* matF,
CvTermCriteria termCrit,
int flags )
int flags,
CvTermCriteria termCrit )
{
const int NINTRINSIC = 16;
Ptr<CvMat> npoints, err, J_LR, Je, Ji, imagePoints[2], objectPoints, RT0;
@@ -3278,8 +3278,8 @@ double cv::stereoCalibrate( InputArrayOfArrays _objectPoints,
InputOutputArray _cameraMatrix1, InputOutputArray _distCoeffs1,
InputOutputArray _cameraMatrix2, InputOutputArray _distCoeffs2,
Size imageSize, OutputArray _Rmat, OutputArray _Tmat,
OutputArray _Emat, OutputArray _Fmat, TermCriteria criteria,
int flags )
OutputArray _Emat, OutputArray _Fmat, int flags ,
TermCriteria criteria)
{
int rtype = CV_64F;
Mat cameraMatrix1 = _cameraMatrix1.getMat();
@@ -3322,7 +3322,7 @@ double cv::stereoCalibrate( InputArrayOfArrays _objectPoints,
double err = cvStereoCalibrate(&c_objPt, &c_imgPt, &c_imgPt2, &c_npoints, &c_cameraMatrix1,
&c_distCoeffs1, &c_cameraMatrix2, &c_distCoeffs2, imageSize,
&c_matR, &c_matT, p_matE, p_matF, criteria, flags );
&c_matR, &c_matT, p_matE, p_matF, flags, criteria );
cameraMatrix1.copyTo(_cameraMatrix1);
cameraMatrix2.copyTo(_cameraMatrix2);

View File

@@ -218,6 +218,7 @@ void CirclesGridClusterFinder::findCorners(const std::vector<cv::Point2f> &hull2
void CirclesGridClusterFinder::findOutsideCorners(const std::vector<cv::Point2f> &corners, std::vector<cv::Point2f> &outsideCorners)
{
CV_Assert(!corners.empty());
outsideCorners.clear();
//find the two pairs of nearest corners
int i, j, n = (int)corners.size();

View File

@@ -57,6 +57,7 @@ CvLevMarq::CvLevMarq()
criteria = cvTermCriteria(0,0,0);
iters = 0;
completeSymmFlag = false;
errNorm = prevErrNorm = DBL_MAX;
}
CvLevMarq::CvLevMarq( int nparams, int nerrs, CvTermCriteria criteria0, bool _completeSymmFlag )
@@ -101,7 +102,7 @@ void CvLevMarq::init( int nparams, int nerrs, CvTermCriteria criteria0, bool _co
J.reset(cvCreateMat( nerrs, nparams, CV_64F ));
err.reset(cvCreateMat( nerrs, 1, CV_64F ));
}
prevErrNorm = DBL_MAX;
errNorm = prevErrNorm = DBL_MAX;
lambdaLg10 = -3;
criteria = criteria0;
if( criteria.type & CV_TERMCRIT_ITER )

View File

@@ -74,7 +74,6 @@ class epnp {
int number_of_correspondences;
double cws[4][3], ccs[4][3];
double cws_determinant;
int max_nr;
double * A1, * A2;
};

View File

@@ -80,7 +80,7 @@ namespace cv
class LMSolverImpl : public LMSolver
{
public:
LMSolverImpl() : maxIters(100) { init(); };
LMSolverImpl() : maxIters(100) { init(); }
LMSolverImpl(const Ptr<LMSolver::Callback>& _cb, int _maxIters) : cb(_cb), maxIters(_maxIters) { init(); }
void init()
@@ -215,7 +215,7 @@ CV_INIT_ALGORITHM(LMSolverImpl, "LMSolver",
obj.info()->addParam(obj, "epsx", obj.epsx);
obj.info()->addParam(obj, "epsf", obj.epsf);
obj.info()->addParam(obj, "maxIters", obj.maxIters);
obj.info()->addParam(obj, "printInterval", obj.printInterval));
obj.info()->addParam(obj, "printInterval", obj.printInterval))
Ptr<LMSolver> createLMSolver(const Ptr<LMSolver::Callback>& cb, int maxIters)
{

View File

@@ -260,7 +260,6 @@ public:
Ptr<PointSetRegistrator::Callback> cb;
int modelPoints;
int maxBasicSolutions;
bool checkPartialSubsets;
double threshold;
double confidence;
@@ -386,11 +385,11 @@ public:
CV_INIT_ALGORITHM(RANSACPointSetRegistrator, "PointSetRegistrator.RANSAC",
obj.info()->addParam(obj, "threshold", obj.threshold);
obj.info()->addParam(obj, "confidence", obj.confidence);
obj.info()->addParam(obj, "maxIters", obj.maxIters));
obj.info()->addParam(obj, "maxIters", obj.maxIters))
CV_INIT_ALGORITHM(LMeDSPointSetRegistrator, "PointSetRegistrator.LMeDS",
obj.info()->addParam(obj, "confidence", obj.confidence);
obj.info()->addParam(obj, "maxIters", obj.maxIters));
obj.info()->addParam(obj, "maxIters", obj.maxIters))
Ptr<PointSetRegistrator> createRANSACPointSetRegistrator(const Ptr<PointSetRegistrator::Callback>& _cb,
int _modelPoints, double _threshold,

View File

@@ -252,7 +252,7 @@ static void findStereoCorrespondenceBM_SSE2( const Mat& left, const Mat& right,
int width1 = width - rofs - ndisp + 1;
int ftzero = state.preFilterCap;
int textureThreshold = state.textureThreshold;
int uniquenessRatio = state.uniquenessRatio*256/100;
int uniquenessRatio = state.uniquenessRatio;
short FILTERED = (short)((mindisp - 1) << DISPARITY_SHIFT);
ushort *sad, *hsad0, *hsad, *hsad_sub;
@@ -274,7 +274,7 @@ static void findStereoCorrespondenceBM_SSE2( const Mat& left, const Mat& right,
sad = (ushort*)alignPtr(buf + sizeof(sad[0]), ALIGN);
hsad0 = (ushort*)alignPtr(sad + ndisp + 1 + dy0*ndisp, ALIGN);
htext = (int*)alignPtr((int*)(hsad0 + (height+dy1)*ndisp) + wsz2 + 2, ALIGN);
cbuf0 = (uchar*)alignPtr(htext + height + wsz2 + 2 + dy0*ndisp, ALIGN);
cbuf0 = (uchar*)alignPtr((uchar*)(htext + height + wsz2 + 2) + dy0*ndisp, ALIGN);
for( x = 0; x < TABSZ; x++ )
tab[x] = (uchar)std::abs(x - ftzero);
@@ -427,28 +427,19 @@ static void findStereoCorrespondenceBM_SSE2( const Mat& left, const Mat& right,
continue;
}
__m128i minsad82 = _mm_unpackhi_epi64(minsad8, minsad8);
__m128i mind82 = _mm_unpackhi_epi64(mind8, mind8);
mask = _mm_cmpgt_epi16(minsad8, minsad82);
mind8 = _mm_xor_si128(mind8,_mm_and_si128(_mm_xor_si128(mind82,mind8),mask));
minsad8 = _mm_min_epi16(minsad8, minsad82);
minsad82 = _mm_shufflelo_epi16(minsad8, _MM_SHUFFLE(3,2,3,2));
mind82 = _mm_shufflelo_epi16(mind8, _MM_SHUFFLE(3,2,3,2));
mask = _mm_cmpgt_epi16(minsad8, minsad82);
mind8 = _mm_xor_si128(mind8,_mm_and_si128(_mm_xor_si128(mind82,mind8),mask));
minsad8 = _mm_min_epi16(minsad8, minsad82);
minsad82 = _mm_shufflelo_epi16(minsad8, 1);
mind82 = _mm_shufflelo_epi16(mind8, 1);
mask = _mm_cmpgt_epi16(minsad8, minsad82);
mind8 = _mm_xor_si128(mind8,_mm_and_si128(_mm_xor_si128(mind82,mind8),mask));
mind = (short)_mm_cvtsi128_si32(mind8);
minsad = sad[mind];
ushort CV_DECL_ALIGNED(16) minsad_buf[8], mind_buf[8];
_mm_store_si128((__m128i*)minsad_buf, minsad8);
_mm_store_si128((__m128i*)mind_buf, mind8);
for( d = 0; d < 8; d++ )
if(minsad > (int)minsad_buf[d] || (minsad == (int)minsad_buf[d] && mind > mind_buf[d]))
{
minsad = minsad_buf[d];
mind = mind_buf[d];
}
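// Note: the former SSE horizontal-min shuffle cascade is replaced by a plain
// store-and-scan; ties on SAD now resolve to the smaller disparity index,
// matching the scalar (C) code path.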
if( uniquenessRatio > 0 )
{
int thresh = minsad + ((minsad * uniquenessRatio) >> 8);
int thresh = minsad + (minsad * uniquenessRatio/100);
__m128i thresh8 = _mm_set1_epi16((short)(thresh + 1));
__m128i d1 = _mm_set1_epi16((short)(mind-1)), d2 = _mm_set1_epi16((short)(mind+1));
__m128i dd_16 = _mm_add_epi16(dd_8, dd_8);

View File

@@ -290,8 +290,8 @@ int CV_CameraCalibrationTest::compare(double* val, double* ref_val, int len,
void CV_CameraCalibrationTest::run( int start_from )
{
int code = cvtest::TS::OK;
char filepath[200];
char filename[200];
cv::String filepath;
cv::String filename;
CvSize imageSize;
CvSize etalonSize;
@@ -337,12 +337,12 @@ void CV_CameraCalibrationTest::run( int start_from )
int progress = 0;
int values_read = -1;
sprintf( filepath, "%scameracalibration/", ts->get_data_path().c_str() );
sprintf( filename, "%sdatafiles.txt", filepath );
datafile = fopen( filename, "r" );
filepath = cv::format("%scv/cameracalibration/", ts->get_data_path().c_str() );
filename = cv::format("%sdatafiles.txt", filepath.c_str() );
datafile = fopen( filename.c_str(), "r" );
if( datafile == 0 )
{
ts->printf( cvtest::TS::LOG, "Could not open file with list of test files: %s\n", filename );
ts->printf( cvtest::TS::LOG, "Could not open file with list of test files: %s\n", filename.c_str() );
code = cvtest::TS::FAIL_MISSING_TEST_DATA;
goto _exit_;
}
@@ -354,15 +354,15 @@ void CV_CameraCalibrationTest::run( int start_from )
{
values_read = fscanf(datafile,"%s",i_dat_file);
CV_Assert(values_read == 1);
sprintf(filename, "%s%s", filepath, i_dat_file);
file = fopen(filename,"r");
filename = cv::format("%s%s", filepath.c_str(), i_dat_file);
file = fopen(filename.c_str(),"r");
ts->update_context( this, currTest, true );
if( file == 0 )
{
ts->printf( cvtest::TS::LOG,
"Can't open current test file: %s\n",filename);
"Can't open current test file: %s\n",filename.c_str());
if( numTests == 1 )
{
code = cvtest::TS::FAIL_MISSING_TEST_DATA;
@@ -480,7 +480,7 @@ void CV_CameraCalibrationTest::run( int start_from )
values_read = fscanf(file,"%lf",goodDistortion+2); CV_Assert(values_read == 1);
values_read = fscanf(file,"%lf",goodDistortion+3); CV_Assert(values_read == 1);
/* Read good Rot matrixes */
/* Read good Rot matrices */
for( currImage = 0; currImage < numImages; currImage++ )
{
for( i = 0; i < 3; i++ )
@@ -1382,17 +1382,18 @@ void CV_StereoCalibrationTest::run( int )
for(int testcase = 1; testcase <= ntests; testcase++)
{
char filepath[1000];
cv::String filepath;
char buf[1000];
sprintf( filepath, "%sstereo/case%d/stereo_calib.txt", ts->get_data_path().c_str(), testcase );
f = fopen(filepath, "rt");
filepath = cv::format("%scv/stereo/case%d/stereo_calib.txt", ts->get_data_path().c_str(), testcase );
f = fopen(filepath.c_str(), "rt");
Size patternSize;
vector<string> imglist;
if( !f || !fgets(buf, sizeof(buf)-3, f) || sscanf(buf, "%d%d", &patternSize.width, &patternSize.height) != 2 )
{
ts->printf( cvtest::TS::LOG, "The file %s can not be opened or has invalid content\n", filepath );
ts->printf( cvtest::TS::LOG, "The file %s can not be opened or has invalid content\n", filepath.c_str() );
ts->set_failed_test_info( f ? cvtest::TS::FAIL_INVALID_TEST_DATA : cvtest::TS::FAIL_MISSING_TEST_DATA );
fclose(f);
return;
}
@@ -1405,7 +1406,7 @@ void CV_StereoCalibrationTest::run( int )
buf[--len] = '\0';
if( buf[0] == '#')
continue;
sprintf(filepath, "%sstereo/case%d/%s", ts->get_data_path().c_str(), testcase, buf );
filepath = cv::format("%scv/stereo/case%d/%s", ts->get_data_path().c_str(), testcase, buf );
imglist.push_back(string(filepath));
}
fclose(f);
@@ -1733,7 +1734,7 @@ double CV_StereoCalibrationTest_C::calibrateStereoCamera( const vector<vector<Po
return cvStereoCalibrate(&_objPt, &_imgPt, &_imgPt2, &_npoints, &_cameraMatrix1,
&_distCoeffs1, &_cameraMatrix2, &_distCoeffs2, imageSize,
&matR, &matT, &matE, &matF, criteria, flags );
&matR, &matT, &matE, &matF, flags, criteria );
}
void CV_StereoCalibrationTest_C::rectify( const Mat& cameraMatrix1, const Mat& distCoeffs1,
@@ -1830,7 +1831,7 @@ double CV_StereoCalibrationTest_CPP::calibrateStereoCamera( const vector<vector<
{
return stereoCalibrate( objectPoints, imagePoints1, imagePoints2,
cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
imageSize, R, T, E, F, criteria, flags );
imageSize, R, T, E, F, flags, criteria );
}
void CV_StereoCalibrationTest_CPP::rectify( const Mat& cameraMatrix1, const Mat& distCoeffs1,

View File

@@ -85,7 +85,8 @@ Mat calcRvec(const vector<Point3f>& points, const Size& cornerSize)
class CV_CalibrateCameraArtificialTest : public cvtest::BaseTest
{
public:
CV_CalibrateCameraArtificialTest()
CV_CalibrateCameraArtificialTest() :
r(0)
{
}
~CV_CalibrateCameraArtificialTest() {}

View File

@@ -55,7 +55,7 @@ public:
~CV_CameraCalibrationBadArgTest() {}
protected:
void run(int);
void run_func(void) {};
void run_func(void) {}
const static int M = 1;
@@ -334,7 +334,7 @@ public:
CV_Rodrigues2BadArgTest() {}
~CV_Rodrigues2BadArgTest() {}
protected:
void run_func(void) {};
void run_func(void) {}
struct C_Caller
{
@@ -459,10 +459,10 @@ public:
Size imsSize(800, 600);
camMat << 300.f, 0.f, imsSize.width/2.f, 0, 300.f, imsSize.height/2.f, 0.f, 0.f, 1.f;
distCoeffs << 1.2f, 0.2f, 0.f, 0.f, 0.f;
};
~CV_ProjectPoints2BadArgTest() {} ;
}
~CV_ProjectPoints2BadArgTest() {}
protected:
void run_func(void) {};
void run_func(void) {}
Mat_<float> camMat;
Mat_<float> distCoeffs;

View File

@@ -34,7 +34,7 @@ private:
Mat rvec, tvec;
};
};
}
#endif

View File

@@ -185,13 +185,13 @@ void CV_ChessboardDetectorTest::run_batch( const string& filename )
switch( pattern )
{
case CHESSBOARD:
folder = string(ts->get_data_path()) + "cameracalibration/";
folder = string(ts->get_data_path()) + "cv/cameracalibration/";
break;
case CIRCLES_GRID:
folder = string(ts->get_data_path()) + "cameracalibration/circles/";
folder = string(ts->get_data_path()) + "cv/cameracalibration/circles/";
break;
case ASYMMETRIC_CIRCLES_GRID:
folder = string(ts->get_data_path()) + "cameracalibration/asymmetric_circles/";
folder = string(ts->get_data_path()) + "cv/cameracalibration/asymmetric_circles/";
break;
}
@@ -309,8 +309,9 @@ void CV_ChessboardDetectorTest::run_batch( const string& filename )
progress = update_progress( progress, idx, max_idx, 0 );
}
sum_error /= count;
ts->printf(cvtest::TS::LOG, "Average error is %f\n", sum_error);
if (count != 0)
sum_error /= count;
ts->printf(cvtest::TS::LOG, "Average error is %f (%d patterns have been found)\n", sum_error, count);
}
double calcErrorMinError(const Size& cornSz, const vector<Point2f>& corners_found, const vector<Point2f>& corners_generated)

View File

@@ -89,7 +89,14 @@ protected:
}
};
CV_ChessboardDetectorBadArgTest::CV_ChessboardDetectorBadArgTest() {}
CV_ChessboardDetectorBadArgTest::CV_ChessboardDetectorBadArgTest()
{
cpp = false;
flags = 0;
out_corners = NULL;
out_corner_count = NULL;
drawCorners = was_found = false;
}
/* ///////////////////// chess_corner_test ///////////////////////// */
void CV_ChessboardDetectorBadArgTest::run( int /*start_from */)

View File

@@ -62,8 +62,8 @@ void CV_ChessboardDetectorTimingTest::run( int start_from )
int code = cvtest::TS::OK;
/* test parameters */
char filepath[1000];
char filename[1000];
std::string filepath;
std::string filename;
CvMat* _v = 0;
CvPoint2D32f* v;
@@ -75,9 +75,9 @@ void CV_ChessboardDetectorTimingTest::run( int start_from )
int idx, max_idx;
int progress = 0;
sprintf( filepath, "%scameracalibration/", ts->get_data_path().c_str() );
sprintf( filename, "%schessboard_timing_list.dat", filepath );
CvFileStorage* fs = cvOpenFileStorage( filename, 0, CV_STORAGE_READ );
filepath = cv::format("%scv/cameracalibration/", ts->get_data_path().c_str() );
filename = cv::format("%schessboard_timing_list.dat", filepath.c_str() );
CvFileStorage* fs = cvOpenFileStorage( filename.c_str(), 0, CV_STORAGE_READ );
CvFileNode* board_list = fs ? cvGetFileNodeByName( fs, 0, "boards" ) : 0;
if( !fs || !board_list || !CV_NODE_IS_SEQ(board_list->tag) ||
@@ -105,14 +105,14 @@ void CV_ChessboardDetectorTimingTest::run( int start_from )
ts->update_context( this, idx-1, true );
/* read the image */
sprintf( filename, "%s%s", filepath, imgname );
filename = cv::format("%s%s", filepath.c_str(), imgname );
cv::Mat img2 = cv::imread( filename );
img = img2;
if( img2.empty() )
{
ts->printf( cvtest::TS::LOG, "one of chessboard images can't be read: %s\n", filename );
ts->printf( cvtest::TS::LOG, "one of chessboard images can't be read: %s\n", filename.c_str() );
if( max_idx == 1 )
{
code = cvtest::TS::FAIL_MISSING_TEST_DATA;

View File

@@ -211,6 +211,7 @@ void CV_ChessboardSubpixelTest::run( int )
progress = update_progress( progress, i-1, runs_count, 0 );
}
ASSERT_NE(0, count);
sum_dist /= count;
ts->printf(cvtest::TS::LOG, "Average error after findCornerSubpix: %f\n", sum_dist);

View File

@@ -808,6 +808,7 @@ CV_FundamentalMatTest::CV_FundamentalMatTest()
method = 0;
img_size = 10;
cube_size = 10;
dims = 0;
min_f = 1;
max_f = 3;
sigma = 0;//0.1;
@@ -1086,7 +1087,6 @@ protected:
int img_size;
int cube_size;
int dims;
int e_result;
double min_f, max_f;
double sigma;
};
@@ -1124,9 +1124,10 @@ CV_EssentialMatTest::CV_EssentialMatTest()
method = 0;
img_size = 10;
cube_size = 10;
dims = 0;
min_f = 1;
max_f = 3;
sigma = 0;
}

View File

@@ -1,3 +1,3 @@
#include "test_precomp.hpp"
CV_TEST_MAIN("cv")
CV_TEST_MAIN("")

View File

@@ -398,7 +398,7 @@ protected:
void CV_StereoMatchingTest::run(int)
{
string dataPath = ts->get_data_path();
string dataPath = ts->get_data_path() + "cv/";
string algorithmName = name;
assert( !algorithmName.empty() );
if( dataPath.empty() )

View File

@@ -75,6 +75,9 @@ CV_DefaultNewCameraMatrixTest::CV_DefaultNewCameraMatrixTest()
test_array[INPUT].push_back(NULL);
test_array[OUTPUT].push_back(NULL);
test_array[REF_OUTPUT].push_back(NULL);
matrix_type = 0;
center_principal_point = false;
}
void CV_DefaultNewCameraMatrixTest::get_test_array_types_and_sizes( int test_case_idx, vector<vector<Size> >& sizes, vector<vector<int> >& types )
@@ -200,6 +203,9 @@ CV_UndistortPointsTest::CV_UndistortPointsTest()
test_array[OUTPUT].push_back(NULL); // distorted dst points
test_array[TEMP].push_back(NULL); // dst points
test_array[REF_OUTPUT].push_back(NULL);
useCPlus = useDstMat = false;
zero_new_cam = zero_distortion = zero_R = false;
}
void CV_UndistortPointsTest::get_test_array_types_and_sizes( int test_case_idx, vector<vector<Size> >& sizes, vector<vector<int> >& types )
@@ -605,6 +611,11 @@ CV_InitUndistortRectifyMapTest::CV_InitUndistortRectifyMapTest()
test_array[INPUT].push_back(NULL); // new camera matrix
test_array[OUTPUT].push_back(NULL); // distorted dst points
test_array[REF_OUTPUT].push_back(NULL);
useCPlus = false;
zero_distortion = zero_new_cam = zero_R = false;
_mapx = _mapy = NULL;
mat_type = 0;
}
void CV_InitUndistortRectifyMapTest::get_test_array_types_and_sizes( int test_case_idx, vector<vector<Size> >& sizes, vector<vector<int> >& types )

View File

@@ -78,6 +78,8 @@ private:
CV_UndistortPointsBadArgTest::CV_UndistortPointsBadArgTest ()
{
useCPlus = false;
_camera_mat = matR = matP = _distortion_coeffs = _src_points = _dst_points = NULL;
}
void CV_UndistortPointsBadArgTest::run_func()
@@ -311,6 +313,8 @@ private:
CV_InitUndistortRectifyMapBadArgTest::CV_InitUndistortRectifyMapBadArgTest ()
{
useCPlus = false;
_camera_mat = matR = _new_camera_mat = _distortion_coeffs = _mapx = _mapy = NULL;
}
void CV_InitUndistortRectifyMapBadArgTest::run_func()
@@ -431,6 +435,8 @@ private:
CV_UndistortBadArgTest::CV_UndistortBadArgTest ()
{
useCPlus = false;
_camera_mat = _new_camera_mat = _distortion_coeffs = _src = _dst = NULL;
}
void CV_UndistortBadArgTest::run_func()

View File

@@ -55,7 +55,7 @@ class CV_EXPORTS Octree
public:
struct Node
{
Node() {}
Node() { memset(this, 0, sizeof(Node)); }
int begin, end;
float x_min, x_max, y_min, y_max, z_min, z_max;
int maxLevels;
@@ -523,7 +523,7 @@ public:
// Initializes a LDA with num_components (default 0) and specifies how
// samples are aligned (default dataAsRow=true).
LDA(int num_components = 0) :
_num_components(num_components) {};
_num_components(num_components) { }
// Initializes and performs a Discriminant Analysis with Fisher's
// Optimization Criterion on given data in src and corresponding labels
@@ -561,7 +561,7 @@ public:
Mat reconstruct(InputArray src);
// Returns the eigenvectors of this LDA.
Mat eigenvectors() const { return _eigenvectors; };
Mat eigenvectors() const { return _eigenvectors; }
// Returns the eigenvalues of this LDA.
Mat eigenvalues() const { return _eigenvalues; }

View File

@@ -55,7 +55,7 @@ void CvAdaptiveSkinDetector::initData(IplImage *src, int widthDivider, int heigh
imgGrayFrame = cvCreateImage(imageSize, IPL_DEPTH_8U, 1);
imgLastGrayFrame = cvCreateImage(imageSize, IPL_DEPTH_8U, 1);
imgHSVFrame = cvCreateImage(imageSize, IPL_DEPTH_8U, 3);
};
}
CvAdaptiveSkinDetector::CvAdaptiveSkinDetector(int samplingDivider, int morphingMethod)
{
@@ -80,7 +80,7 @@ CvAdaptiveSkinDetector::CvAdaptiveSkinDetector(int samplingDivider, int morphing
imgLastGrayFrame = NULL;
imgSaturationFrame = NULL;
imgHSVFrame = NULL;
};
}
CvAdaptiveSkinDetector::~CvAdaptiveSkinDetector()
{
@@ -93,7 +93,7 @@ CvAdaptiveSkinDetector::~CvAdaptiveSkinDetector()
cvReleaseImage(&imgGrayFrame);
cvReleaseImage(&imgLastGrayFrame);
cvReleaseImage(&imgHSVFrame);
};
}
void CvAdaptiveSkinDetector::process(IplImage *inputBGRImage, IplImage *outputHueMask)
{
@@ -190,7 +190,7 @@ void CvAdaptiveSkinDetector::process(IplImage *inputBGRImage, IplImage *outputHu
if (outputHueMask != NULL)
cvCopy(imgFilteredFrame, outputHueMask);
};
}
//------------------------- Histogram for Adaptive Skin Detector -------------------------//
@@ -202,12 +202,12 @@ CvAdaptiveSkinDetector::Histogram::Histogram()
float *ranges[] = { range };
fHistogram = cvCreateHist(1, histogramSize, CV_HIST_ARRAY, ranges, 1);
cvClearHist(fHistogram);
};
}
CvAdaptiveSkinDetector::Histogram::~Histogram()
{
cvReleaseHist(&fHistogram);
};
}
int CvAdaptiveSkinDetector::Histogram::findCoverageIndex(double surfaceToCover, int defaultValue)
{
@@ -221,7 +221,7 @@ int CvAdaptiveSkinDetector::Histogram::findCoverageIndex(double surfaceToCover,
}
}
return defaultValue;
};
}
void CvAdaptiveSkinDetector::Histogram::findCurveThresholds(int &x1, int &x2, double percent)
{
@@ -244,7 +244,7 @@ void CvAdaptiveSkinDetector::Histogram::findCurveThresholds(int &x1, int &x2, do
x2 = GSD_HUE_UT;
else
x2 += GSD_HUE_LT;
};
}
void CvAdaptiveSkinDetector::Histogram::mergeWith(CvAdaptiveSkinDetector::Histogram *source, double weight)
{
@@ -285,4 +285,4 @@ void CvAdaptiveSkinDetector::Histogram::mergeWith(CvAdaptiveSkinDetector::Histog
}
}
}
};
}

View File

@@ -940,7 +940,7 @@ static void fjac(int /*i*/, int /*j*/, CvMat *point_params, CvMat* cam_params, C
#endif
};
}
static void func(int /*i*/, int /*j*/, CvMat *point_params, CvMat* cam_params, CvMat* estim, void* /*data*/) {
//just do projections
CvMat _Mi;
@@ -979,17 +979,17 @@ static void func(int /*i*/, int /*j*/, CvMat *point_params, CvMat* cam_params, C
cvTranspose( _mp2, estim );
cvReleaseMat( &_mp );
cvReleaseMat( &_mp2 );
};
}
static void fjac_new(int i, int j, Mat& point_params, Mat& cam_params, Mat& A, Mat& B, void* data) {
CvMat _point_params = point_params, _cam_params = cam_params, _Al = A, _Bl = B;
fjac(i,j, &_point_params, &_cam_params, &_Al, &_Bl, data);
};
}
static void func_new(int i, int j, Mat& point_params, Mat& cam_params, Mat& estim, void* data) {
CvMat _point_params = point_params, _cam_params = cam_params, _estim = estim;
func(i,j,&_point_params,&_cam_params,&_estim,data);
};
}
void LevMarqSparse::bundleAdjust( std::vector<Point3d>& points, //positions of points in global coordinate system (input and output)
const std::vector<std::vector<Point2d> >& imagePoints, //projections of 3d points for every camera

View File

@@ -833,7 +833,7 @@ void LBPH::predict(InputArray _src, int &minClass, double &minDist) const {
minDist = DBL_MAX;
minClass = -1;
for(size_t sampleIdx = 0; sampleIdx < _histograms.size(); sampleIdx++) {
double dist = compareHist(_histograms[sampleIdx], query, HISTCMP_CHISQR);
double dist = compareHist(_histograms[sampleIdx], query, HISTCMP_CHISQR_ALT);
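// HISTCMP_CHISQR_ALT is the symmetric "alternative" chi-square, 2*sum((a-b)^2/(a+b)),
// commonly preferred for texture/LBP histogram comparison.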
if((dist < minDist) && (dist < _threshold)) {
minDist = dist;
minClass = _labels.at<int>((int) sampleIdx);
@@ -872,7 +872,7 @@ CV_INIT_ALGORITHM(Eigenfaces, "FaceRecognizer.Eigenfaces",
obj.info()->addParam(obj, "labels", obj._labels, true);
obj.info()->addParam(obj, "eigenvectors", obj._eigenvectors, true);
obj.info()->addParam(obj, "eigenvalues", obj._eigenvalues, true);
obj.info()->addParam(obj, "mean", obj._mean, true));
obj.info()->addParam(obj, "mean", obj._mean, true))
CV_INIT_ALGORITHM(Fisherfaces, "FaceRecognizer.Fisherfaces",
obj.info()->addParam(obj, "ncomponents", obj._num_components);
@@ -881,7 +881,7 @@ CV_INIT_ALGORITHM(Fisherfaces, "FaceRecognizer.Fisherfaces",
obj.info()->addParam(obj, "labels", obj._labels, true);
obj.info()->addParam(obj, "eigenvectors", obj._eigenvectors, true);
obj.info()->addParam(obj, "eigenvalues", obj._eigenvalues, true);
obj.info()->addParam(obj, "mean", obj._mean, true));
obj.info()->addParam(obj, "mean", obj._mean, true))
CV_INIT_ALGORITHM(LBPH, "FaceRecognizer.LBPH",
obj.info()->addParam(obj, "radius", obj._radius);
@@ -890,7 +890,7 @@ CV_INIT_ALGORITHM(LBPH, "FaceRecognizer.LBPH",
obj.info()->addParam(obj, "grid_y", obj._grid_y);
obj.info()->addParam(obj, "threshold", obj._threshold);
obj.info()->addParam(obj, "histograms", obj._histograms, true);
obj.info()->addParam(obj, "labels", obj._labels, true));
obj.info()->addParam(obj, "labels", obj._labels, true))
bool initModule_contrib()
{

View File

@@ -41,7 +41,7 @@ CvFuzzyPoint::CvFuzzyPoint(double _x, double _y)
{
x = _x;
y = _y;
};
}
bool CvFuzzyCurve::between(double x, double x1, double x2)
{
@@ -51,37 +51,37 @@ bool CvFuzzyCurve::between(double x, double x1, double x2)
return true;
return false;
};
}
CvFuzzyCurve::CvFuzzyCurve()
{
value = 0;
};
}
CvFuzzyCurve::~CvFuzzyCurve()
{
// nothing to do
};
}
void CvFuzzyCurve::setCentre(double _centre)
{
centre = _centre;
};
}
double CvFuzzyCurve::getCentre()
{
return centre;
};
}
void CvFuzzyCurve::clear()
{
points.clear();
};
}
void CvFuzzyCurve::addPoint(double x, double y)
{
points.push_back(CvFuzzyPoint(x, y));
};
}
double CvFuzzyCurve::calcValue(double param)
{
@@ -102,41 +102,41 @@ double CvFuzzyCurve::calcValue(double param)
}
}
return 0;
};
}
double CvFuzzyCurve::getValue()
{
return value;
};
}
void CvFuzzyCurve::setValue(double _value)
{
value = _value;
};
}
CvFuzzyFunction::CvFuzzyFunction()
{
// nothing to do
};
}
CvFuzzyFunction::~CvFuzzyFunction()
{
curves.clear();
};
}
void CvFuzzyFunction::addCurve(CvFuzzyCurve *curve, double value)
{
curves.push_back(*curve);
curve->setValue(value);
};
}
void CvFuzzyFunction::resetValues()
{
int numCurves = (int)curves.size();
for (int i = 0; i < numCurves; i++)
curves[i].setValue(0);
};
}
double CvFuzzyFunction::calcValue()
{
@@ -153,7 +153,7 @@ double CvFuzzyFunction::calcValue()
return s1/s2;
else
return 0;
};
}
CvFuzzyCurve *CvFuzzyFunction::newCurve()
{
@@ -161,14 +161,14 @@ CvFuzzyCurve *CvFuzzyFunction::newCurve()
c = new CvFuzzyCurve();
addCurve(c);
return c;
};
}
CvFuzzyRule::CvFuzzyRule()
{
fuzzyInput1 = NULL;
fuzzyInput2 = NULL;
fuzzyOutput = NULL;
};
}
CvFuzzyRule::~CvFuzzyRule()
{
@@ -180,14 +180,14 @@ CvFuzzyRule::~CvFuzzyRule()
if (fuzzyOutput != NULL)
delete fuzzyOutput;
};
}
void CvFuzzyRule::setRule(CvFuzzyCurve *c1, CvFuzzyCurve *c2, CvFuzzyCurve *o1)
{
fuzzyInput1 = c1;
fuzzyInput2 = c2;
fuzzyOutput = o1;
};
}
double CvFuzzyRule::calcValue(double param1, double param2)
{
@@ -203,31 +203,31 @@ double CvFuzzyRule::calcValue(double param1, double param2)
}
else
return v1;
};
}
CvFuzzyCurve *CvFuzzyRule::getOutputCurve()
{
return fuzzyOutput;
};
}
CvFuzzyController::CvFuzzyController()
{
// nothing to do
};
}
CvFuzzyController::~CvFuzzyController()
{
int size = (int)rules.size();
for(int i = 0; i < size; i++)
delete rules[i];
};
}
void CvFuzzyController::addRule(CvFuzzyCurve *c1, CvFuzzyCurve *c2, CvFuzzyCurve *o1)
{
CvFuzzyRule *f = new CvFuzzyRule();
rules.push_back(f);
f->setRule(c1, c2, o1);
};
}
double CvFuzzyController::calcOutput(double param1, double param2)
{
@@ -243,7 +243,7 @@ double CvFuzzyController::calcOutput(double param1, double param2)
}
v = list.calcValue();
return v;
};
}
CvFuzzyMeanShiftTracker::FuzzyResizer::FuzzyResizer()
{
@@ -299,12 +299,12 @@ CvFuzzyMeanShiftTracker::FuzzyResizer::FuzzyResizer()
fuzzyController.addRule(i1L, NULL, oS);
fuzzyController.addRule(i1M, NULL, oZE);
fuzzyController.addRule(i1H, NULL, oE);
};
}
int CvFuzzyMeanShiftTracker::FuzzyResizer::calcOutput(double edgeDensity, double density)
{
return (int)fuzzyController.calcOutput(edgeDensity, density);
};
}
CvFuzzyMeanShiftTracker::SearchWindow::SearchWindow()
{
@@ -329,7 +329,7 @@ CvFuzzyMeanShiftTracker::SearchWindow::SearchWindow()
depthLow = 0;
depthHigh = 0;
fuzzyResizer = NULL;
};
}
CvFuzzyMeanShiftTracker::SearchWindow::~SearchWindow()
{
@@ -355,7 +355,7 @@ void CvFuzzyMeanShiftTracker::SearchWindow::setSize(int _x, int _y, int _width,
if (y + height > maxHeight)
height = maxHeight - y;
};
}
void CvFuzzyMeanShiftTracker::SearchWindow::initDepthValues(IplImage *maskImage, IplImage *depthMap)
{
@@ -409,7 +409,7 @@ void CvFuzzyMeanShiftTracker::SearchWindow::initDepthValues(IplImage *maskImage,
depthHigh = 32000;
depthLow = 0;
}
};
}
bool CvFuzzyMeanShiftTracker::SearchWindow::shift()
{
@@ -422,7 +422,7 @@ bool CvFuzzyMeanShiftTracker::SearchWindow::shift()
{
return false;
}
};
}
void CvFuzzyMeanShiftTracker::SearchWindow::extractInfo(IplImage *maskImage, IplImage *depthMap, bool initDepth)
{
@@ -528,7 +528,7 @@ void CvFuzzyMeanShiftTracker::SearchWindow::extractInfo(IplImage *maskImage, Ipl
ellipseAngle = 0;
density = 0;
}
};
}
void CvFuzzyMeanShiftTracker::SearchWindow::getResizeAttribsEdgeDensityLinear(int &resizeDx, int &resizeDy, int &resizeDw, int &resizeDh) {
int x1 = horizontalEdgeTop;
@@ -572,7 +572,7 @@ void CvFuzzyMeanShiftTracker::SearchWindow::getResizeAttribsEdgeDensityLinear(in
} else {
resizeDw = - resizeDx;
}
};
}
void CvFuzzyMeanShiftTracker::SearchWindow::getResizeAttribsInnerDensity(int &resizeDx, int &resizeDy, int &resizeDw, int &resizeDh)
{
@@ -588,7 +588,7 @@ void CvFuzzyMeanShiftTracker::SearchWindow::getResizeAttribsInnerDensity(int &re
resizeDy = (int)(py*dy);
resizeDw = (int)((1-px)*dx);
resizeDh = (int)((1-py)*dy);
};
}
void CvFuzzyMeanShiftTracker::SearchWindow::getResizeAttribsEdgeDensityFuzzy(int &resizeDx, int &resizeDy, int &resizeDw, int &resizeDh)
{
@@ -627,7 +627,7 @@ void CvFuzzyMeanShiftTracker::SearchWindow::getResizeAttribsEdgeDensityFuzzy(int
resizeDy = int(-dy1);
resizeDh = int(dy1+dy2);
}
};
}
bool CvFuzzyMeanShiftTracker::SearchWindow::meanShift(IplImage *maskImage, IplImage *depthMap, int maxIteration, bool initDepth)
{
@@ -640,7 +640,7 @@ bool CvFuzzyMeanShiftTracker::SearchWindow::meanShift(IplImage *maskImage, IplIm
} while (++numShifts < maxIteration);
return false;
};
}
void CvFuzzyMeanShiftTracker::findOptimumSearchWindow(SearchWindow &searchWindow, IplImage *maskImage, IplImage *depthMap, int maxIteration, int resizeMethod, bool initDepth)
{
@@ -680,17 +680,17 @@ void CvFuzzyMeanShiftTracker::findOptimumSearchWindow(SearchWindow &searchWindow
searchWindow.setSize(searchWindow.x + resizeDx, searchWindow.y + resizeDy, searchWindow.width + resizeDw, searchWindow.height + resizeDh);
}
};
}
CvFuzzyMeanShiftTracker::CvFuzzyMeanShiftTracker()
{
searchMode = tsSetWindow;
};
}
CvFuzzyMeanShiftTracker::~CvFuzzyMeanShiftTracker()
{
// nothing to do
};
}
void CvFuzzyMeanShiftTracker::track(IplImage *maskImage, IplImage *depthMap, int resizeMethod, bool resetSearch, int minKernelMass)
{
@@ -718,4 +718,4 @@ void CvFuzzyMeanShiftTracker::track(IplImage *maskImage, IplImage *depthMap, int
else
searchMode = tsTracking;
}
};
}

View File

@@ -114,7 +114,7 @@ void computeProjectiveMatrix( const Mat& ksi, Mat& Rt )
{
CV_Assert( ksi.size() == Size(1,6) && ksi.type() == CV_64FC1 );
#if defined(HAVE_EIGEN) && EIGEN_WORLD_VERSION == 3
#if defined(HAVE_EIGEN) && EIGEN_WORLD_VERSION == 3 && (!defined _MSC_VER || !defined _M_X64 || _MSC_VER > 1500)
const double* ksi_ptr = reinterpret_cast<const double*>(ksi.ptr(0));
Eigen::Matrix<double,4,4> twist, g;
twist << 0., -ksi_ptr[2], ksi_ptr[1], ksi_ptr[3],

View File

@@ -709,7 +709,7 @@ void cv::SpinImageModel::defaultParams()
T_GeometriccConsistency = 0.25f;
T_GroupingCorespondances = 0.25f;
};
}
Mat cv::SpinImageModel::packRandomScaledSpins(bool separateScale, size_t xCount, size_t yCount) const
{

View File

@@ -2,12 +2,15 @@ set(the_description "The Core Functionality")
ocv_add_module(core PRIVATE_REQUIRED ${ZLIB_LIBRARIES} "${OPENCL_LIBRARIES}" OPTIONAL opencv_cudev)
ocv_module_include_directories(${ZLIB_INCLUDE_DIRS})
if(HAVE_WINRT_CX)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /ZW")
endif()
if(HAVE_WINRT)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /ZW /GS /Gm- /AI\"${WINDOWS_SDK_PATH}/References/CommonConfiguration/Neutral\" /AI\"${VISUAL_STUDIO_PATH}/vcpackages\"")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /GS /Gm- /AI\"${WINDOWS_SDK_PATH}/References/CommonConfiguration/Neutral\" /AI\"${VISUAL_STUDIO_PATH}/vcpackages\"")
endif()
if(HAVE_CUDA)
ocv_warnings_disable(CMAKE_CXX_FLAGS -Wundef -Wenum-compare -Wunused-function)
ocv_warnings_disable(CMAKE_CXX_FLAGS -Wundef -Wenum-compare -Wunused-function -Wshadow)
endif()
file(GLOB lib_cuda_hdrs "include/opencv2/${name}/cuda/*.hpp" "include/opencv2/${name}/cuda/*.h")

View File

@@ -316,6 +316,7 @@ RotatedRect
RotatedRect();
RotatedRect(const Point2f& center, const Size2f& size, float angle);
RotatedRect(const CvBox2D& box);
RotatedRect(const Point2f& point1, const Point2f& point2, const Point2f& point3);
//! returns 4 vertices of the rectangle
void points(Point2f pts[]) const;
@@ -338,7 +339,11 @@ The class represents rotated (i.e. not up-right) rectangles on a plane. Each rec
:param size: Width and height of the rectangle.
:param angle: The rotation angle in a clockwise direction. When the angle is 0, 90, 180, 270 etc., the rectangle becomes an up-right rectangle.
:param box: The rotated rectangle parameters as the obsolete CvBox2D structure.
.. ocv:function:: RotatedRect::RotatedRect(const Point2f& point1, const Point2f& point2, const Point2f& point3)
:param point1:
:param point2:
:param point3: Any 3 vertices of the RotatedRect; they must be given in order (either clockwise or counter-clockwise).
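A short sketch of the three-point constructor (coordinates are made up): ::

    Point2f p1(100.f, 100.f), p2(200.f, 100.f), p3(200.f, 150.f);  // consecutive vertices
    RotatedRect rr(p1, p2, p3);  // the fourth vertex is implied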
.. ocv:function:: void RotatedRect::points( Point2f pts[] ) const
.. ocv:function:: Rect RotatedRect::boundingRect() const
@@ -1615,7 +1620,7 @@ The method copies the matrix data to another matrix. Before copying the data, th
so that the destination matrix is reallocated if needed. While ``m.copyTo(m);`` works flawlessly, the function does not handle the case of a partial overlap between the source and the destination matrices.
When the operation mask is specified, and the ``Mat::create`` call shown above reallocated the matrix, the newly allocated matrix is initialized with all zeros before copying the data.
When the operation mask is specified, if the ``Mat::create`` call shown above reallocates the matrix, the newly allocated matrix is initialized with all zeros before copying the data.
.. _Mat::convertTo:

View File

@@ -34,7 +34,7 @@ circle
----------
Draws a circle.
.. ocv:function:: void circle( Mat& img, Point center, int radius, const Scalar& color, int thickness=1, int lineType=LINE_8, int shift=0 )
.. ocv:function:: void circle( InputOutputArray img, Point center, int radius, const Scalar& color, int thickness=1, int lineType=LINE_8, int shift=0 )
.. ocv:pyfunction:: cv2.circle(img, center, radius, color[, thickness[, lineType[, shift]]]) -> img
@@ -83,9 +83,9 @@ ellipse
-----------
Draws a simple or thick elliptic arc or fills an ellipse sector.
.. ocv:function:: void ellipse( Mat& img, Point center, Size axes, double angle, double startAngle, double endAngle, const Scalar& color, int thickness=1, int lineType=LINE_8, int shift=0 )
.. ocv:function:: void ellipse( InputOutputArray img, Point center, Size axes, double angle, double startAngle, double endAngle, const Scalar& color, int thickness=1, int lineType=LINE_8, int shift=0 )
.. ocv:function:: void ellipse( Mat& img, const RotatedRect& box, const Scalar& color, int thickness=1, int lineType=LINE_8 )
.. ocv:function:: void ellipse( InputOutputArray img, const RotatedRect& box, const Scalar& color, int thickness=1, int lineType=LINE_8 )
.. ocv:pyfunction:: cv2.ellipse(img, center, axes, angle, startAngle, endAngle, color[, thickness[, lineType[, shift]]]) -> img
@@ -331,7 +331,7 @@ line
--------
Draws a line segment connecting two points.
.. ocv:function:: void line( Mat& img, Point pt1, Point pt2, const Scalar& color, int thickness=1, int lineType=LINE_8, int shift=0 )
.. ocv:function:: void line( InputOutputArray img, Point pt1, Point pt2, const Scalar& color, int thickness=1, int lineType=LINE_8, int shift=0 )
.. ocv:pyfunction:: cv2.line(img, pt1, pt2, color[, thickness[, lineType[, shift]]]) -> img
@@ -417,7 +417,7 @@ rectangle
-------------
Draws a simple, thick, or filled up-right rectangle.
.. ocv:function:: void rectangle( Mat& img, Point pt1, Point pt2, const Scalar& color, int thickness=1, int lineType=LINE_8, int shift=0 )
.. ocv:function:: void rectangle( InputOutputArray img, Point pt1, Point pt2, const Scalar& color, int thickness=1, int lineType=LINE_8, int shift=0 )
.. ocv:function:: void rectangle( Mat& img, Rect rec, const Scalar& color, int thickness=1, int lineType=LINE_8, int shift=0 )
@@ -570,7 +570,7 @@ putText
-----------
Draws a text string.
.. ocv:function:: void putText( Mat& img, const String& text, Point org, int fontFace, double fontScale, Scalar color, int thickness=1, int lineType=LINE_8, bool bottomLeftOrigin=false )
.. ocv:function:: void putText( InputOutputArray img, const String& text, Point org, int fontFace, double fontScale, Scalar color, int thickness=1, int lineType=LINE_8, bool bottomLeftOrigin=false )
.. ocv:pyfunction:: cv2.putText(img, text, org, fontFace, fontScale, color[, thickness[, lineType[, bottomLeftOrigin]]]) -> None

View File

@@ -903,7 +903,7 @@ So, the function chooses an operation mode depending on the flags and size of th
* When ``DFT_COMPLEX_OUTPUT`` is set, the output is a complex matrix of the same size as input.
* When ``DFT_COMPLEX_OUTPUT`` is not set, the output is a real matrix of the same size as input. In case of 2D transform, it uses the packed format as shown above. In case of a single 1D transform, it looks like the first row of the matrix above. In case of multiple 1D transforms (when using the ``DCT_ROWS`` flag), each row of the output matrix looks like the first row of the matrix above.
* When ``DFT_COMPLEX_OUTPUT`` is not set, the output is a real matrix of the same size as input. In case of 2D transform, it uses the packed format as shown above. In case of a single 1D transform, it looks like the first row of the matrix above. In case of multiple 1D transforms (when using the ``DFT_ROWS`` flag), each row of the output matrix looks like the first row of the matrix above.
* If the input array is complex and either ``DFT_INVERSE`` or ``DFT_REAL_OUTPUT`` are not set, the output is a complex array of the same size as input. The function performs a forward or inverse 1D or 2D transform of the whole input array or each row of the input array independently, depending on the flags ``DFT_INVERSE`` and ``DFT_ROWS``.
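For example, a sketch contrasting the two real-input output layouts (``src`` is any real single-channel matrix): ::

    Mat src(64, 64, CV_32F), packed, full;
    randu(src, 0, 1);
    dft(src, packed);                    // real output in packed (CCS) format, same size as src
    dft(src, full, DFT_COMPLEX_OUTPUT);  // full complex spectrum, same size as src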

View File

@@ -507,11 +507,11 @@ CV_EXPORTS_W void randn(InputOutputArray dst, InputArray mean, InputArray stddev
CV_EXPORTS_W void randShuffle(InputOutputArray dst, double iterFactor = 1., RNG* rng = 0);
//! draws the line segment (pt1, pt2) in the image
CV_EXPORTS_W void line(CV_IN_OUT Mat& img, Point pt1, Point pt2, const Scalar& color,
CV_EXPORTS_W void line(InputOutputArray img, Point pt1, Point pt2, const Scalar& color,
int thickness = 1, int lineType = LINE_8, int shift = 0);
//! draws the rectangle outline or a solid rectangle with the opposite corners pt1 and pt2 in the image
CV_EXPORTS_W void rectangle(CV_IN_OUT Mat& img, Point pt1, Point pt2,
CV_EXPORTS_W void rectangle(InputOutputArray img, Point pt1, Point pt2,
const Scalar& color, int thickness = 1,
int lineType = LINE_8, int shift = 0);
@@ -521,18 +521,18 @@ CV_EXPORTS void rectangle(CV_IN_OUT Mat& img, Rect rec,
int lineType = LINE_8, int shift = 0);
//! draws the circle outline or a solid circle in the image
CV_EXPORTS_W void circle(CV_IN_OUT Mat& img, Point center, int radius,
CV_EXPORTS_W void circle(InputOutputArray img, Point center, int radius,
const Scalar& color, int thickness = 1,
int lineType = LINE_8, int shift = 0);
//! draws an elliptic arc, ellipse sector or a rotated ellipse in the image
CV_EXPORTS_W void ellipse(CV_IN_OUT Mat& img, Point center, Size axes,
CV_EXPORTS_W void ellipse(InputOutputArray img, Point center, Size axes,
double angle, double startAngle, double endAngle,
const Scalar& color, int thickness = 1,
int lineType = LINE_8, int shift = 0);
//! draws a rotated ellipse in the image
CV_EXPORTS_W void ellipse(CV_IN_OUT Mat& img, const RotatedRect& box, const Scalar& color,
CV_EXPORTS_W void ellipse(InputOutputArray img, const RotatedRect& box, const Scalar& color,
int thickness = 1, int lineType = LINE_8);
//! draws a filled convex polygon in the image
@@ -582,7 +582,7 @@ CV_EXPORTS_W void ellipse2Poly( Point center, Size axes, int angle,
CV_OUT std::vector<Point>& pts );
//! renders text string in the image
CV_EXPORTS_W void putText( Mat& img, const String& text, Point org,
CV_EXPORTS_W void putText( InputOutputArray img, const String& text, Point org,
int fontFace, double fontScale, Scalar color,
int thickness = 1, int lineType = LINE_8,
bool bottomLeftOrigin = false );
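
Switching these primitives from Mat& to InputOutputArray lets them draw directly into a UMat as well; a minimal sketch, assuming this branch's core headers:

// Sketch (not part of this commit): drawing into a UMat through InputOutputArray.
#include <opencv2/core.hpp>

int main()
{
    cv::UMat canvas(240, 320, CV_8UC3, cv::Scalar::all(255));
    cv::line(canvas, cv::Point(0, 0), cv::Point(319, 239), cv::Scalar(255, 0, 0), 2);
    cv::rectangle(canvas, cv::Point(40, 40), cv::Point(280, 200), cv::Scalar(0, 0, 255));
    cv::circle(canvas, cv::Point(160, 120), 50, cv::Scalar(0, 255, 0), cv::FILLED);
    return 0;
}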

View File

@@ -55,9 +55,9 @@ namespace cv
{
public:
typedef T float_type;
typedef cv::Matx<float_type, 3, 3> Mat3;
typedef cv::Matx<float_type, 4, 4> Mat4;
typedef cv::Vec<float_type, 3> Vec3;
typedef Matx<float_type, 3, 3> Mat3;
typedef Matx<float_type, 4, 4> Mat4;
typedef Vec<float_type, 3> Vec3;
Affine3();
@@ -70,11 +70,11 @@ namespace cv
//Rodrigues vector
Affine3(const Vec3& rvec, const Vec3& t = Vec3::all(0));
//Combines all constructors above. Supports 4x4, 3x3, 1x3, 3x1 sizes of data matrix
explicit Affine3(const cv::Mat& data, const Vec3& t = Vec3::all(0));
//Combines all constructors above. Supports 4x4, 4x3, 3x3, 1x3, 3x1 sizes of data matrix
explicit Affine3(const Mat& data, const Vec3& t = Vec3::all(0));
//Euler angles
Affine3(float_type alpha, float_type beta, float_type gamma, const Vec3& t = Vec3::all(0));
//From a 16-element array
explicit Affine3(const float_type* vals);
static Affine3 Identity();
@@ -87,9 +87,6 @@ namespace cv
//Combines rotation methods above. Supports 3x3, 1x3, 3x1 sizes of data matrix;
void rotation(const Mat& data);
//Euler angles
void rotation(float_type alpha, float_type beta, float_type gamma);
void linear(const Mat3& L);
void translation(const Vec3& t);
@@ -105,6 +102,9 @@ namespace cv
// a.rotate(R) is equivalent to Affine(R, 0) * a;
Affine3 rotate(const Mat3& R) const;
// a.rotate(R) is equivalent to Affine(rvec, 0) * a;
Affine3 rotate(const Vec3& rvec) const;
// a.translate(t) is equivalent to Affine(E, t) * a;
Affine3 translate(const Vec3& t) const;
@@ -113,6 +113,8 @@ namespace cv
template <typename Y> operator Affine3<Y>() const;
template <typename Y> Affine3<Y> cast() const;
Mat4 matrix;
#if defined EIGEN_WORLD_VERSION && defined EIGEN_GEOMETRY_MODULE_H
@@ -132,10 +134,26 @@ namespace cv
typedef Affine3<float> Affine3f;
typedef Affine3<double> Affine3d;
static cv::Vec3f operator*(const cv::Affine3f& affine, const cv::Vec3f& vector);
static cv::Vec3d operator*(const cv::Affine3d& affine, const cv::Vec3d& vector);
}
static Vec3f operator*(const Affine3f& affine, const Vec3f& vector);
static Vec3d operator*(const Affine3d& affine, const Vec3d& vector);
template<typename _Tp> class DataType< Affine3<_Tp> >
{
public:
typedef Affine3<_Tp> value_type;
typedef Affine3<typename DataType<_Tp>::work_type> work_type;
typedef _Tp channel_type;
enum { generic_type = 0,
depth = DataType<channel_type>::depth,
channels = 16,
fmt = DataType<channel_type>::fmt + ((channels - 1) << 8),
type = CV_MAKETYPE(depth, channels)
};
typedef Vec<channel_type, channels> vec_type;
};
}
///////////////////////////////////////////////////////////////////////////////////
@@ -179,6 +197,12 @@ cv::Affine3<T>::Affine3(const cv::Mat& data, const Vec3& t)
data.copyTo(matrix);
return;
}
else if (data.cols == 4 && data.rows == 3)
{
rotation(data(Rect(0, 0, 3, 3)));
translation(data(Rect(3, 0, 1, 3)));
return;
}
rotation(data);
translation(t);
@@ -187,13 +211,8 @@ cv::Affine3<T>::Affine3(const cv::Mat& data, const Vec3& t)
}
template<typename T> inline
cv::Affine3<T>::Affine3(float_type alpha, float_type beta, float_type gamma, const Vec3& t)
{
rotation(alpha, beta, gamma);
translation(t);
matrix.val[12] = matrix.val[13] = matrix.val[14] = 0;
matrix.val[15] = 1;
}
cv::Affine3<T>::Affine3(const float_type* vals) : matrix(vals)
{}
template<typename T> inline
cv::Affine3<T> cv::Affine3<T>::Identity()
@@ -261,12 +280,6 @@ void cv::Affine3<T>::rotation(const cv::Mat& data)
CV_Assert(!"Input marix can be 3x3, 1x3 or 3x1");
}
template<typename T> inline
void cv::Affine3<T>::rotation(float_type alpha, float_type beta, float_type gamma)
{
rotation(Vec3(alpha, beta, gamma));
}
template<typename T> inline
void cv::Affine3<T>::linear(const Mat3& L)
{
@@ -382,6 +395,12 @@ cv::Affine3<T> cv::Affine3<T>::rotate(const Mat3& R) const
return result;
}
template<typename T> inline
cv::Affine3<T> cv::Affine3<T>::rotate(const Vec3& _rvec) const
{
return rotate(Affine3f(_rvec).rotation());
}
template<typename T> inline
cv::Affine3<T> cv::Affine3<T>::translate(const Vec3& t) const
{
@@ -404,6 +423,12 @@ cv::Affine3<T>::operator Affine3<Y>() const
return Affine3<Y>(matrix);
}
template<typename T> template <typename Y> inline
cv::Affine3<Y> cv::Affine3<T>::cast() const
{
return Affine3<Y>(matrix);
}
template<typename T> inline
cv::Affine3<T> cv::operator*(const cv::Affine3<T>& affine1, const cv::Affine3<T>& affine2)
{
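
A sketch exercising the Affine3 additions above (rotation-vector construction, composition, point transform); the values are illustrative only:

// Sketch (not part of this commit), assuming this branch's affine.hpp.
#include <opencv2/core.hpp>
#include <opencv2/core/affine.hpp>

int main()
{
    cv::Vec3d rvec(0, 0, CV_PI / 2);              // 90-degree rotation about Z
    cv::Affine3d pose(rvec, cv::Vec3d(1, 0, 0));  // rotation + translation
    cv::Affine3d moved = pose.translate(cv::Vec3d(0, 1, 0));
    cv::Vec3d p = moved * cv::Vec3d(1, 0, 0);     // apply to a 3D point
    (void)p;
    return 0;
}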

View File

@@ -502,7 +502,6 @@ class CV_EXPORTS Mat;
class CV_EXPORTS MatExpr;
class CV_EXPORTS UMat;
class CV_EXPORTS UMatExpr;
class CV_EXPORTS SparseMat;
typedef Mat MatND;

View File

@@ -0,0 +1,26 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2014, Advanced Micro Devices, Inc., all rights reserved.
#ifndef __OPENCV_CORE_BUFFER_POOL_HPP__
#define __OPENCV_CORE_BUFFER_POOL_HPP__
namespace cv
{
class BufferPoolController
{
protected:
~BufferPoolController() { }
public:
virtual size_t getReservedSize() const = 0;
virtual size_t getMaxReservedSize() const = 0;
virtual void setMaxReservedSize(size_t size) = 0;
virtual void freeAllReservedBuffers() = 0;
};
}
#endif // __OPENCV_CORE_BUFFER_POOL_HPP__
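
The controller is reached through an allocator; a sketch capping the OpenCL pool, assuming the getOpenCLAllocator() hook and the MatAllocator::getBufferPoolController() method declared elsewhere in this commit:

// Sketch (not part of this commit): tune and flush the reserved-buffer pool.
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>

int main()
{
    cv::BufferPoolController* c =
        cv::ocl::getOpenCLAllocator()->getBufferPoolController();
    if (c)
    {
        c->setMaxReservedSize(64 * 1024 * 1024); // cap the pool at 64 MiB
        c->freeAllReservedBuffers();             // drop currently cached buffers
    }
    return 0;
}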

View File

@@ -444,7 +444,7 @@ CV_INLINE int cvIsInf( double value )
// atomic increment on the linux version of the Intel(tm) compiler
# define CV_XADD(addr, delta) (int)_InterlockedExchangeAdd(const_cast<void*>(reinterpret_cast<volatile void*>(addr)), delta)
#elif defined __GNUC__
# if defined __clang__ && __clang_major__ >= 3 && !defined __ANDROID__ && !defined __EMSCRIPTEN__
# if defined __clang__ && __clang_major__ >= 3 && !defined __ANDROID__ && !defined __EMSCRIPTEN__ && !defined(__CUDACC__)
# ifdef __ATOMIC_ACQ_REL
# define CV_XADD(addr, delta) __c11_atomic_fetch_add((_Atomic(int)*)(addr), delta, __ATOMIC_ACQ_REL)
# else
@@ -459,6 +459,7 @@ CV_INLINE int cvIsInf( double value )
# endif
# endif
#elif defined _MSC_VER && !defined RC_INVOKED
# include <intrin.h>
# define CV_XADD(addr, delta) (int)_InterlockedExchangeAdd((long volatile*)addr, delta)
#else
CV_INLINE CV_XADD(int* addr, int delta) { int tmp = *addr; *addr += delta; return tmp; }
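
Whichever branch above provides it, CV_XADD is an atomic fetch-and-add that returns the value held before the addition; a sketch:

// Sketch (not part of this commit): CV_XADD returns the pre-increment value.
#include <opencv2/core/cvdef.h>
#include <cassert>

int main()
{
    int counter = 5;
    int before = CV_XADD(&counter, 3);
    assert(before == 5 && counter == 8);
    return 0;
}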

View File

@@ -67,10 +67,10 @@ namespace ocl {
using namespace cv::ocl;
// TODO static functions in the Context class
CV_EXPORTS Context2& initializeContextFromD3D11Device(ID3D11Device* pD3D11Device);
CV_EXPORTS Context2& initializeContextFromD3D10Device(ID3D10Device* pD3D10Device);
CV_EXPORTS Context2& initializeContextFromDirect3DDevice9Ex(IDirect3DDevice9Ex* pDirect3DDevice9Ex);
CV_EXPORTS Context2& initializeContextFromDirect3DDevice9(IDirect3DDevice9* pDirect3DDevice9);
CV_EXPORTS Context& initializeContextFromD3D11Device(ID3D11Device* pD3D11Device);
CV_EXPORTS Context& initializeContextFromD3D10Device(ID3D10Device* pD3D10Device);
CV_EXPORTS Context& initializeContextFromDirect3DDevice9Ex(IDirect3DDevice9Ex* pDirect3DDevice9Ex);
CV_EXPORTS Context& initializeContextFromDirect3DDevice9(IDirect3DDevice9* pDirect3DDevice9);
} // namespace cv::directx::ocl

View File

@@ -51,6 +51,7 @@
#include "opencv2/core/matx.hpp"
#include "opencv2/core/types.hpp"
#include "opencv2/core/bufferpool.hpp"
namespace cv
{
@@ -84,10 +85,8 @@ public:
OPENGL_BUFFER = 7 << KIND_SHIFT,
CUDA_MEM = 8 << KIND_SHIFT,
GPU_MAT = 9 << KIND_SHIFT,
OCL_MAT =10 << KIND_SHIFT,
UMAT =11 << KIND_SHIFT,
STD_VECTOR_UMAT =12 << KIND_SHIFT,
UEXPR =13 << KIND_SHIFT
UMAT =10 << KIND_SHIFT,
STD_VECTOR_UMAT =11 << KIND_SHIFT
};
_InputArray();
@@ -108,11 +107,11 @@ public:
template<typename _Tp> _InputArray(const cudev::GpuMat_<_Tp>& m);
_InputArray(const UMat& um);
_InputArray(const std::vector<UMat>& umv);
_InputArray(const UMatExpr& uexpr);
virtual Mat getMat(int idx=-1) const;
virtual UMat getUMat(int idx=-1) const;
virtual void getMatVector(std::vector<Mat>& mv) const;
virtual void getUMatVector(std::vector<UMat>& umv) const;
virtual cuda::GpuMat getGpuMat() const;
virtual ogl::Buffer getOGlBuffer() const;
void* getObj() const;
@@ -127,13 +126,14 @@ public:
virtual int depth(int i=-1) const;
virtual int channels(int i=-1) const;
virtual bool isContinuous(int i=-1) const;
virtual bool isSubmatrix(int i=-1) const;
virtual bool empty() const;
virtual void copyTo(const _OutputArray& arr) const;
virtual size_t offset(int i=-1) const;
virtual size_t step(int i=-1) const;
bool isMat() const;
bool isUMat() const;
bool isMatVectot() const;
bool isMatVector() const;
bool isUMatVector() const;
bool isMatx();
@@ -205,6 +205,7 @@ public:
virtual bool fixedType() const;
virtual bool needed() const;
virtual Mat& getMatRef(int i=-1) const;
virtual UMat& getUMatRef(int i=-1) const;
virtual cuda::GpuMat& getGpuMatRef() const;
virtual ogl::Buffer& getOGlBufferRef() const;
virtual cuda::CudaMem& getCudaMemRef() const;
@@ -214,7 +215,7 @@ public:
virtual void createSameSize(const _InputArray& arr, int mtype) const;
virtual void release() const;
virtual void clear() const;
virtual void setTo(const _InputArray& value) const;
virtual void setTo(const _InputArray& value, const _InputArray & mask = _InputArray()) const;
};
@@ -265,6 +266,18 @@ CV_EXPORTS InputOutputArray noArray();
/////////////////////////////////// MatAllocator //////////////////////////////////////
//! Usage flags for allocator
enum UMatUsageFlags
{
USAGE_DEFAULT = 0,
// default allocation policy is platform and usage specific
USAGE_ALLOCATE_HOST_MEMORY = 1 << 0,
USAGE_ALLOCATE_DEVICE_MEMORY = 1 << 1,
__UMAT_USAGE_FLAGS_32BIT = 0x7fffffff // Binary compatibility hint
};
struct CV_EXPORTS UMatData;
/*!
@@ -282,8 +295,8 @@ public:
// uchar*& datastart, uchar*& data, size_t* step) = 0;
//virtual void deallocate(int* refcount, uchar* datastart, uchar* data) = 0;
virtual UMatData* allocate(int dims, const int* sizes, int type,
void* data, size_t* step, int flags) const = 0;
virtual bool allocate(UMatData* data, int accessflags) const = 0;
void* data, size_t* step, int flags, UMatUsageFlags usageFlags) const = 0;
virtual bool allocate(UMatData* data, int accessflags, UMatUsageFlags usageFlags) const = 0;
virtual void deallocate(UMatData* data) const = 0;
virtual void map(UMatData* data, int accessflags) const;
virtual void unmap(UMatData* data) const;
@@ -296,6 +309,9 @@ public:
virtual void copy(UMatData* srcdata, UMatData* dstdata, int dims, const size_t sz[],
const size_t srcofs[], const size_t srcstep[],
const size_t dstofs[], const size_t dststep[], bool sync) const;
// default implementation returns DummyBufferPoolController
virtual BufferPoolController* getBufferPoolController() const;
};
@@ -360,11 +376,12 @@ struct CV_EXPORTS UMatData
int refcount;
uchar* data;
uchar* origdata;
size_t size;
size_t size, capacity;
int flags;
void* handle;
void* userdata;
int allocatorFlags_;
};
@@ -667,7 +684,7 @@ public:
Mat& operator = (const MatExpr& expr);
//! retrieve UMat from Mat
UMat getUMat(int accessFlags) const;
UMat getUMat(int accessFlags, UMatUsageFlags usageFlags = USAGE_DEFAULT) const;
//! returns a new matrix header for the specified row
Mat row(int y) const;
@@ -1128,25 +1145,22 @@ typedef Mat_<Vec2d> Mat2d;
typedef Mat_<Vec3d> Mat3d;
typedef Mat_<Vec4d> Mat4d;
class CV_EXPORTS UMatExpr;
class CV_EXPORTS UMat
{
public:
//! default constructor
UMat();
UMat(UMatUsageFlags usageFlags = USAGE_DEFAULT);
//! constructs 2D matrix of the specified size and type
// (_type is CV_8UC1, CV_64FC3, CV_32SC(12) etc.)
UMat(int rows, int cols, int type);
UMat(Size size, int type);
UMat(int rows, int cols, int type, UMatUsageFlags usageFlags = USAGE_DEFAULT);
UMat(Size size, int type, UMatUsageFlags usageFlags = USAGE_DEFAULT);
//! constructs 2D matrix and fills it with the specified value _s.
UMat(int rows, int cols, int type, const Scalar& s);
UMat(Size size, int type, const Scalar& s);
UMat(int rows, int cols, int type, const Scalar& s, UMatUsageFlags usageFlags = USAGE_DEFAULT);
UMat(Size size, int type, const Scalar& s, UMatUsageFlags usageFlags = USAGE_DEFAULT);
//! constructs n-dimensional matrix
UMat(int ndims, const int* sizes, int type);
UMat(int ndims, const int* sizes, int type, const Scalar& s);
UMat(int ndims, const int* sizes, int type, UMatUsageFlags usageFlags = USAGE_DEFAULT);
UMat(int ndims, const int* sizes, int type, const Scalar& s, UMatUsageFlags usageFlags = USAGE_DEFAULT);
//! copy constructor
UMat(const UMat& m);
@@ -1172,7 +1186,6 @@ public:
~UMat();
//! assignment operators
UMat& operator = (const UMat& m);
UMat& operator = (const UMatExpr& expr);
Mat getMat(int flags) const;
@@ -1216,32 +1229,30 @@ public:
UMat reshape(int cn, int newndims, const int* newsz) const;
//! matrix transposition by means of matrix expressions
UMatExpr t() const;
UMat t() const;
//! matrix inversion by means of matrix expressions
UMatExpr inv(int method=DECOMP_LU) const;
UMat inv(int method=DECOMP_LU) const;
//! per-element matrix multiplication by means of matrix expressions
UMatExpr mul(InputArray m, double scale=1) const;
UMat mul(InputArray m, double scale=1) const;
//! computes cross-product of 2 3D vectors
UMat cross(InputArray m) const;
//! computes dot-product
double dot(InputArray m) const;
//! Matlab-style matrix initialization
static UMatExpr zeros(int rows, int cols, int type);
static UMatExpr zeros(Size size, int type);
static UMatExpr zeros(int ndims, const int* sz, int type);
static UMatExpr ones(int rows, int cols, int type);
static UMatExpr ones(Size size, int type);
static UMatExpr ones(int ndims, const int* sz, int type);
static UMatExpr eye(int rows, int cols, int type);
static UMatExpr eye(Size size, int type);
static UMat zeros(int rows, int cols, int type);
static UMat zeros(Size size, int type);
static UMat zeros(int ndims, const int* sz, int type);
static UMat ones(int rows, int cols, int type);
static UMat ones(Size size, int type);
static UMat ones(int ndims, const int* sz, int type);
static UMat eye(int rows, int cols, int type);
static UMat eye(Size size, int type);
//! allocates new matrix data unless the matrix already has specified size and type.
// previous data is unreferenced if needed.
void create(int rows, int cols, int type);
void create(Size size, int type);
void create(int ndims, const int* sizes, int type);
void create(int rows, int cols, int type, UMatUsageFlags usageFlags = USAGE_DEFAULT);
void create(Size size, int type, UMatUsageFlags usageFlags = USAGE_DEFAULT);
void create(int ndims, const int* sizes, int type, UMatUsageFlags usageFlags = USAGE_DEFAULT);
//! increases the reference counter; use with care to avoid memleaks
void addref();
@@ -1313,6 +1324,7 @@ public:
//! custom allocator
MatAllocator* allocator;
UMatUsageFlags usageFlags; // usage flags for allocator
//! and the standard allocator
static MatAllocator* getStdAllocator();
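
The usage flags thread through the UMat constructors and create() overloads above; a minimal sketch of the hinting, assuming this branch:

// Sketch (not part of this commit): allocator hints via UMatUsageFlags.
#include <opencv2/core.hpp>

int main()
{
    // Hint the allocator to back this matrix with host-side memory.
    cv::UMat u(480, 640, CV_8UC1, cv::USAGE_ALLOCATE_HOST_MEMORY);
    u = cv::Scalar::all(0);
    // Re-create with new geometry; USAGE_DEFAULT keeps the platform policy.
    u.create(240, 320, CV_8UC1, cv::USAGE_DEFAULT);
    return 0;
}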

View File

@@ -60,7 +60,7 @@ inline void _InputArray::init(int _flags, const void* _obj, Size _sz)
inline void* _InputArray::getObj() const { return obj; }
inline _InputArray::_InputArray() { init(0, 0); }
inline _InputArray::_InputArray() { init(NONE, 0); }
inline _InputArray::_InputArray(int _flags, void* _obj) { init(_flags, _obj); }
inline _InputArray::_InputArray(const Mat& m) { init(MAT+ACCESS_READ, &m); }
inline _InputArray::_InputArray(const std::vector<Mat>& vec) { init(STD_VECTOR_MAT+ACCESS_READ, &vec); }
@@ -110,7 +110,7 @@ inline _InputArray::~_InputArray() {}
inline bool _InputArray::isMat() const { return kind() == _InputArray::MAT; }
inline bool _InputArray::isUMat() const { return kind() == _InputArray::UMAT; }
inline bool _InputArray::isMatVectot() const { return kind() == _InputArray::STD_VECTOR_MAT; }
inline bool _InputArray::isMatVector() const { return kind() == _InputArray::STD_VECTOR_MAT; }
inline bool _InputArray::isUMatVector() const { return kind() == _InputArray::STD_VECTOR_UMAT; }
inline bool _InputArray::isMatx() { return kind() == _InputArray::MATX; }
@@ -186,6 +186,12 @@ inline _OutputArray::_OutputArray(const Mat& m)
inline _OutputArray::_OutputArray(const std::vector<Mat>& vec)
{ init(FIXED_SIZE + STD_VECTOR_MAT + ACCESS_WRITE, &vec); }
inline _OutputArray::_OutputArray(const UMat& m)
{ init(FIXED_TYPE + FIXED_SIZE + UMAT + ACCESS_WRITE, &m); }
inline _OutputArray::_OutputArray(const std::vector<UMat>& vec)
{ init(FIXED_SIZE + STD_VECTOR_UMAT + ACCESS_WRITE, &vec); }
inline _OutputArray::_OutputArray(const cuda::GpuMat& d_mat)
{ init(FIXED_TYPE + FIXED_SIZE + GPU_MAT + ACCESS_WRITE, &d_mat); }
@@ -267,6 +273,12 @@ inline _InputOutputArray::_InputOutputArray(const Mat& m)
inline _InputOutputArray::_InputOutputArray(const std::vector<Mat>& vec)
{ init(FIXED_SIZE + STD_VECTOR_MAT + ACCESS_RW, &vec); }
inline _InputOutputArray::_InputOutputArray(const UMat& m)
{ init(FIXED_TYPE + FIXED_SIZE + UMAT + ACCESS_RW, &m); }
inline _InputOutputArray::_InputOutputArray(const std::vector<UMat>& vec)
{ init(FIXED_SIZE + STD_VECTOR_UMAT + ACCESS_RW, &vec); }
inline _InputOutputArray::_InputOutputArray(const cuda::GpuMat& d_mat)
{ init(FIXED_TYPE + FIXED_SIZE + GPU_MAT + ACCESS_RW, &d_mat); }
@@ -360,7 +372,7 @@ Mat::Mat(int _rows, int _cols, int _type, void* _data, size_t _step)
data((uchar*)_data), datastart((uchar*)_data), dataend(0), datalimit(0),
allocator(0), u(0), size(&rows)
{
size_t esz = CV_ELEM_SIZE(_type);
size_t esz = CV_ELEM_SIZE(_type), esz1 = CV_ELEM_SIZE1(_type);
size_t minstep = cols * esz;
if( _step == AUTO_STEP )
{
@@ -371,6 +383,12 @@ Mat::Mat(int _rows, int _cols, int _type, void* _data, size_t _step)
{
if( rows == 1 ) _step = minstep;
CV_DbgAssert( _step >= minstep );
if (_step % esz1 != 0)
{
CV_Error(Error::BadStep, "Step must be a multiple of esz1");
}
flags |= _step == minstep ? CONTINUOUS_FLAG : 0;
}
step[0] = _step;
@@ -385,7 +403,7 @@ Mat::Mat(Size _sz, int _type, void* _data, size_t _step)
data((uchar*)_data), datastart((uchar*)_data), dataend(0), datalimit(0),
allocator(0), u(0), size(&rows)
{
size_t esz = CV_ELEM_SIZE(_type);
size_t esz = CV_ELEM_SIZE(_type), esz1 = CV_ELEM_SIZE1(_type);
size_t minstep = cols*esz;
if( _step == AUTO_STEP )
{
@@ -396,6 +414,12 @@ Mat::Mat(Size _sz, int _type, void* _data, size_t _step)
{
if( rows == 1 ) _step = minstep;
CV_DbgAssert( _step >= minstep );
if (_step % esz1 != 0)
{
CV_Error(Error::BadStep, "Step must be a multiple of esz1");
}
flags |= _step == minstep ? CONTINUOUS_FLAG : 0;
}
step[0] = _step;
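
The new check rejects external-data strides that are not a multiple of the per-channel element size (esz1); a sketch of what now throws:

// Sketch (not part of this commit): user-supplied step must align to esz1.
#include <opencv2/core.hpp>

int main()
{
    float buf[64] = {0};
    // OK: a 32-byte step is a multiple of sizeof(float).
    cv::Mat ok(4, 4, CV_32FC1, buf, 8 * sizeof(float));
    // Throws cv::Exception with Error::BadStep: 33 is not a multiple of 4.
    try { cv::Mat bad(4, 4, CV_32FC1, buf, 33); }
    catch (const cv::Exception&) { /* expected */ }
    return 0;
}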
@@ -1906,7 +1930,7 @@ SparseMat_<_Tp>::SparseMat_(const SparseMat& m)
if( m.type() == DataType<_Tp>::type )
*this = (const SparseMat_<_Tp>&)m;
else
m.convertTo(this, DataType<_Tp>::type);
m.convertTo(*this, DataType<_Tp>::type);
}
template<typename _Tp> inline
@@ -3046,50 +3070,50 @@ const Mat_<_Tp>& operator /= (const Mat_<_Tp>& a, const MatExpr& b)
//////////////////////////////// UMat ////////////////////////////////
inline
UMat::UMat()
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), u(0), offset(0), size(&rows)
UMat::UMat(UMatUsageFlags _usageFlags)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), usageFlags(_usageFlags), u(0), offset(0), size(&rows)
{}
inline
UMat::UMat(int _rows, int _cols, int _type)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), u(0), offset(0), size(&rows)
UMat::UMat(int _rows, int _cols, int _type, UMatUsageFlags _usageFlags)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), usageFlags(_usageFlags), u(0), offset(0), size(&rows)
{
create(_rows, _cols, _type);
}
inline
UMat::UMat(int _rows, int _cols, int _type, const Scalar& _s)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), u(0), offset(0), size(&rows)
UMat::UMat(int _rows, int _cols, int _type, const Scalar& _s, UMatUsageFlags _usageFlags)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), usageFlags(_usageFlags), u(0), offset(0), size(&rows)
{
create(_rows, _cols, _type);
*this = _s;
}
inline
UMat::UMat(Size _sz, int _type)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), u(0), offset(0), size(&rows)
UMat::UMat(Size _sz, int _type, UMatUsageFlags _usageFlags)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), usageFlags(_usageFlags), u(0), offset(0), size(&rows)
{
create( _sz.height, _sz.width, _type );
}
inline
UMat::UMat(Size _sz, int _type, const Scalar& _s)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), u(0), offset(0), size(&rows)
UMat::UMat(Size _sz, int _type, const Scalar& _s, UMatUsageFlags _usageFlags)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), usageFlags(_usageFlags), u(0), offset(0), size(&rows)
{
create(_sz.height, _sz.width, _type);
*this = _s;
}
inline
UMat::UMat(int _dims, const int* _sz, int _type)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), u(0), offset(0), size(&rows)
UMat::UMat(int _dims, const int* _sz, int _type, UMatUsageFlags _usageFlags)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), usageFlags(_usageFlags), u(0), offset(0), size(&rows)
{
create(_dims, _sz, _type);
}
inline
UMat::UMat(int _dims, const int* _sz, int _type, const Scalar& _s)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), u(0), offset(0), size(&rows)
UMat::UMat(int _dims, const int* _sz, int _type, const Scalar& _s, UMatUsageFlags _usageFlags)
: flags(MAGIC_VAL), dims(0), rows(0), cols(0), allocator(0), usageFlags(_usageFlags), u(0), offset(0), size(&rows)
{
create(_dims, _sz, _type);
*this = _s;
@@ -3098,10 +3122,9 @@ UMat::UMat(int _dims, const int* _sz, int _type, const Scalar& _s)
inline
UMat::UMat(const UMat& m)
: flags(m.flags), dims(m.dims), rows(m.rows), cols(m.cols), allocator(m.allocator),
u(m.u), offset(m.offset), size(&rows)
usageFlags(m.usageFlags), u(m.u), offset(m.offset), size(&rows)
{
if( u )
CV_XADD(&(u->urefcount), 1);
addref();
if( m.dims <= 2 )
{
step[0] = m.step[0]; step[1] = m.step[1];
@@ -3117,7 +3140,7 @@ u(m.u), offset(m.offset), size(&rows)
template<typename _Tp> inline
UMat::UMat(const std::vector<_Tp>& vec, bool copyData)
: flags(MAGIC_VAL | DataType<_Tp>::type | CV_MAT_CONT_FLAG), dims(2), rows((int)vec.size()),
cols(1), allocator(0), u(0), offset(0), size(&rows)
cols(1), allocator(0), usageFlags(USAGE_DEFAULT), u(0), offset(0), size(&rows)
{
if(vec.empty())
return;
@@ -3136,8 +3159,7 @@ UMat& UMat::operator = (const UMat& m)
{
if( this != &m )
{
if( m.u )
CV_XADD(&(m.u->urefcount), 1);
const_cast<UMat&>(m).addref();
release();
flags = m.flags;
if( dims <= 2 && m.dims <= 2 )
@@ -3151,6 +3173,8 @@ UMat& UMat::operator = (const UMat& m)
else
copySize(m);
allocator = m.allocator;
if (usageFlags == USAGE_DEFAULT)
usageFlags = m.usageFlags;
u = m.u;
offset = m.offset;
}
@@ -3211,19 +3235,19 @@ void UMat::assignTo( UMat& m, int _type ) const
}
inline
void UMat::create(int _rows, int _cols, int _type)
void UMat::create(int _rows, int _cols, int _type, UMatUsageFlags _usageFlags)
{
_type &= TYPE_MASK;
if( dims <= 2 && rows == _rows && cols == _cols && type() == _type && u )
return;
int sz[] = {_rows, _cols};
create(2, sz, _type);
create(2, sz, _type, _usageFlags);
}
inline
void UMat::create(Size _sz, int _type)
void UMat::create(Size _sz, int _type, UMatUsageFlags _usageFlags)
{
create(_sz.height, _sz.width, _type);
create(_sz.height, _sz.width, _type, _usageFlags);
}
inline

View File

@@ -51,14 +51,16 @@ CV_EXPORTS bool useOpenCL();
CV_EXPORTS bool haveAmdBlas();
CV_EXPORTS bool haveAmdFft();
CV_EXPORTS void setUseOpenCL(bool flag);
CV_EXPORTS void finish2();
CV_EXPORTS void finish();
class CV_EXPORTS Context2;
class CV_EXPORTS Context;
class CV_EXPORTS Device;
class CV_EXPORTS Kernel;
class CV_EXPORTS Program;
class CV_EXPORTS ProgramSource2;
class CV_EXPORTS ProgramSource;
class CV_EXPORTS Queue;
class CV_EXPORTS PlatformInfo;
class CV_EXPORTS Image2D;
class CV_EXPORTS Device
{
@@ -84,9 +86,12 @@ public:
String name() const;
String extensions() const;
String version() const;
String vendor() const;
String OpenCL_C_Version() const;
String OpenCLVersion() const;
int deviceVersionMajor() const;
int deviceVersionMinor() const;
String driverVersion() const;
void* ptr() const;
@@ -201,34 +206,31 @@ protected:
};
class CV_EXPORTS Context2
class CV_EXPORTS Context
{
public:
Context2();
explicit Context2(int dtype);
~Context2();
Context2(const Context2& c);
Context2& operator = (const Context2& c);
Context();
explicit Context(int dtype);
~Context();
Context(const Context& c);
Context& operator = (const Context& c);
bool create();
bool create(int dtype);
size_t ndevices() const;
const Device& device(size_t idx) const;
Program getProg(const ProgramSource2& prog,
Program getProg(const ProgramSource& prog,
const String& buildopt, String& errmsg);
static Context2& getDefault(bool initialize = true);
static Context& getDefault(bool initialize = true);
void* ptr() const;
struct Impl;
inline struct Impl* _getImpl() const { return p; };
friend void initializeContextFromHandle(Context& ctx, void* platform, void* context, void* device);
protected:
struct Impl;
Impl* p;
};
// TODO Move to internal header
void initializeContextFromHandle(Context2& ctx, void* platform, void* context, void* device);
class CV_EXPORTS Platform
{
public:
@@ -240,23 +242,25 @@ public:
void* ptr() const;
static Platform& getDefault();
struct Impl;
inline struct Impl* _getImpl() const { return p; };
friend void initializeContextFromHandle(Context& ctx, void* platform, void* context, void* device);
protected:
struct Impl;
Impl* p;
};
// TODO Move to internal header
void initializeContextFromHandle(Context& ctx, void* platform, void* context, void* device);
class CV_EXPORTS Queue
{
public:
Queue();
explicit Queue(const Context2& c, const Device& d=Device());
explicit Queue(const Context& c, const Device& d=Device());
~Queue();
Queue(const Queue& q);
Queue& operator = (const Queue& q);
bool create(const Context2& c=Context2(), const Device& d=Device());
bool create(const Context& c=Context(), const Device& d=Device());
void finish();
void* ptr() const;
static Queue& getDefault();
@@ -310,7 +314,7 @@ class CV_EXPORTS Kernel
public:
Kernel();
Kernel(const char* kname, const Program& prog);
Kernel(const char* kname, const ProgramSource2& prog,
Kernel(const char* kname, const ProgramSource& prog,
const String& buildopts = String(), String* errmsg=0);
~Kernel();
Kernel(const Kernel& k);
@@ -318,10 +322,11 @@ public:
bool empty() const;
bool create(const char* kname, const Program& prog);
bool create(const char* kname, const ProgramSource2& prog,
bool create(const char* kname, const ProgramSource& prog,
const String& buildopts, String* errmsg=0);
int set(int i, const void* value, size_t sz);
int set(int i, const Image2D& image2D);
int set(int i, const UMat& m);
int set(int i, const KernelArg& arg);
template<typename _Tp> int set(int i, const _Tp& value)
@@ -488,6 +493,7 @@ public:
bool runTask(bool sync, const Queue& q=Queue());
size_t workGroupSize() const;
size_t preferedWorkGroupSizeMultiple() const;
bool compileWorkGroupSize(size_t wsz[]) const;
size_t localMemSize() const;
@@ -502,7 +508,7 @@ class CV_EXPORTS Program
{
public:
Program();
Program(const ProgramSource2& src,
Program(const ProgramSource& src,
const String& buildflags, String& errmsg);
explicit Program(const String& buf);
Program(const Program& prog);
@@ -510,12 +516,12 @@ public:
Program& operator = (const Program& prog);
~Program();
bool create(const ProgramSource2& src,
bool create(const ProgramSource& src,
const String& buildflags, String& errmsg);
bool read(const String& buf, const String& buildflags);
bool write(String& buf) const;
const ProgramSource2& source() const;
const ProgramSource& source() const;
void* ptr() const;
String getPrefix() const;
@@ -527,17 +533,17 @@ protected:
};
class CV_EXPORTS ProgramSource2
class CV_EXPORTS ProgramSource
{
public:
typedef uint64 hash_t;
ProgramSource2();
explicit ProgramSource2(const String& prog);
explicit ProgramSource2(const char* prog);
~ProgramSource2();
ProgramSource2(const ProgramSource2& prog);
ProgramSource2& operator = (const ProgramSource2& prog);
ProgramSource();
explicit ProgramSource(const String& prog);
explicit ProgramSource(const char* prog);
~ProgramSource();
ProgramSource(const ProgramSource& prog);
ProgramSource& operator = (const ProgramSource& prog);
const String& source() const;
hash_t hash() const;
@@ -547,9 +553,51 @@ protected:
Impl* p;
};
class CV_EXPORTS PlatformInfo
{
public:
PlatformInfo();
explicit PlatformInfo(void* id);
~PlatformInfo();
PlatformInfo(const PlatformInfo& i);
PlatformInfo& operator =(const PlatformInfo& i);
String name() const;
String vendor() const;
String version() const;
int deviceNumber() const;
void getDevice(Device& device, int d) const;
protected:
struct Impl;
Impl* p;
};
CV_EXPORTS const char* convertTypeStr(int sdepth, int ddepth, int cn, char* buf);
CV_EXPORTS const char* typeToStr(int t);
CV_EXPORTS const char* memopTypeToStr(int t);
CV_EXPORTS String kernelToStr(InputArray _kernel, int ddepth = -1);
CV_EXPORTS void getPlatfomsInfo(std::vector<PlatformInfo>& platform_info);
class CV_EXPORTS Image2D
{
public:
Image2D();
explicit Image2D(const UMat &src);
Image2D(const Image2D & i);
~Image2D();
Image2D & operator = (const Image2D & i);
void* ptr() const;
protected:
struct Impl;
Impl* p;
};
CV_EXPORTS MatAllocator* getOpenCLAllocator();
}}
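
With the temporary "2" suffixes dropped, the OpenCL controls live under the plain names; a sketch querying the default context, assuming an OpenCL-capable build:

// Sketch (not part of this commit): inspect the default OpenCL device.
#include <opencv2/core/ocl.hpp>
#include <cstdio>

int main()
{
    if (!cv::ocl::haveOpenCL())
        return 0;
    cv::ocl::Context& ctx = cv::ocl::Context::getDefault();
    if (ctx.ndevices() == 0)
        return 0;
    const cv::ocl::Device& dev = ctx.device(0);
    std::printf("%s (CL %d.%d)\n", dev.name().c_str(),
                dev.deviceVersionMajor(), dev.deviceVersionMinor());
    return 0;
}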

View File

@@ -0,0 +1,46 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
// Copyright (C) 2014, Advanced Micro Devices, Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//#define CV_OPENCL_RUN_VERBOSE
#ifdef HAVE_OPENCL
#ifdef CV_OPENCL_RUN_VERBOSE
#define CV_OCL_RUN_(condition, func, ...) \
{ \
if (cv::ocl::useOpenCL() && (condition) && func) \
{ \
printf("%s: OpenCL implementation is running\n", CV_Func); \
fflush(stdout); \
return __VA_ARGS__; \
} \
else \
{ \
printf("%s: Plain implementation is running\n", CV_Func); \
fflush(stdout); \
} \
}
#elif defined CV_OPENCL_RUN_ASSERT
#define CV_OCL_RUN_(condition, func, ...) \
{ \
if (cv::ocl::useOpenCL() && (condition)) \
{ \
CV_Assert(func); \
return __VA_ARGS__; \
} \
}
#else
#define CV_OCL_RUN_(condition, func, ...) \
if (cv::ocl::useOpenCL() && (condition) && func) \
return __VA_ARGS__;
#endif
#else
#define CV_OCL_RUN_(condition, func, ...)
#endif
#define CV_OCL_RUN(condition, func) CV_OCL_RUN_(condition, func)
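
The macro implements the usual dual-path dispatch: try the OpenCL branch, fall through to the plain one when it declines. A sketch of the intended pattern; myFilter and ocl_myFilter are hypothetical names (not OpenCV functions), and the macro is assumed visible, as it normally is through each module's precompiled header:

// Sketch (not part of this commit): dual-path dispatch with CV_OCL_RUN.
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>

// Hypothetical OpenCL path; returns true only when it fully handled the call.
static bool ocl_myFilter(cv::InputArray src, cv::OutputArray dst)
{
    (void)src; (void)dst;
    return false; // decline, so the plain path below runs
}

void myFilter(cv::InputArray src, cv::OutputArray dst)
{
    CV_OCL_RUN(src.isUMat(), ocl_myFilter(src, dst))
    src.getMat().copyTo(dst); // plain (CPU) implementation
}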

View File

@@ -1,34 +0,0 @@
#ifndef __OPENCV_CORE_OCL_RUNTIME_HPP__
#define __OPENCV_CORE_OCL_RUNTIME_HPP__
#ifdef HAVE_OPENCL
#if defined(HAVE_OPENCL_STATIC)
#if defined __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif
#else // HAVE_OPENCL_STATIC
#include "ocl_runtime_opencl.hpp"
#endif // HAVE_OPENCL_STATIC
#ifndef CL_DEVICE_DOUBLE_FP_CONFIG
#define CL_DEVICE_DOUBLE_FP_CONFIG 0x1032
#endif
#ifndef CL_DEVICE_HALF_FP_CONFIG
#define CL_DEVICE_HALF_FP_CONFIG 0x1033
#endif
#ifndef CL_VERSION_1_2
#define CV_REQUIRE_OPENCL_1_2_ERROR CV_ErrorNoReturn(cv::Error::OpenCLApiCallError, "OpenCV compiled without OpenCL v1.2 support, so we can't use functionality from OpenCL v1.2")
#endif
#endif // HAVE_OPENCL
#endif // __OPENCV_CORE_OCL_RUNTIME_HPP__

View File

@@ -394,7 +394,9 @@ template<typename _Tp> static inline _Tp randu()
return (_Tp)theRNG();
}
///////////////////////////////// Formatted string generation /////////////////////////////////
CV_EXPORTS String format( const char* fmt, ... );
///////////////////////////////// Formatted output of cv::Mat /////////////////////////////////
@@ -421,6 +423,12 @@ int print(const Mat& mtx, FILE* stream = stdout)
return print(Formatter::get()->format(mtx), stream);
}
static inline
int print(const UMat& mtx, FILE* stream = stdout)
{
return print(Formatter::get()->format(mtx.getMat(ACCESS_READ)), stream);
}
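
The new overload downloads the data through getMat(ACCESS_READ) before formatting; a one-liner sketch:

// Sketch (not part of this commit): formatted output of a UMat.
#include <opencv2/core.hpp>

int main()
{
    cv::UMat u = cv::UMat::eye(3, 3, CV_32FC1);
    cv::print(u); // prints the 3x3 identity to stdout
    return 0;
}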
template<typename _Tp> static inline
int print(const std::vector<Point_<_Tp> >& vec, FILE* stream = stdout)
{

View File

@@ -392,6 +392,7 @@ public:
//! various constructors
RotatedRect();
RotatedRect(const Point2f& center, const Size2f& size, float angle);
RotatedRect(const Point2f& point1, const Point2f& point2, const Point2f& point3);
//! returns 4 vertices of the rectangle
void points(Point2f pts[]) const;
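
The new constructor takes three consecutive vertices of the rectangle; a sketch:

// Sketch (not part of this commit): build a RotatedRect from three corners.
#include <opencv2/core.hpp>

int main()
{
    // Three consecutive corners fully determine the rotated rectangle.
    cv::RotatedRect rr(cv::Point2f(0, 0), cv::Point2f(4, 0), cv::Point2f(4, 2));
    cv::Point2f pts[4];
    rr.points(pts); // recover all four vertices
    return 0;
}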

View File

@@ -158,7 +158,7 @@ enum {
CV_StsVecLengthErr= -28, /* incorrect vector length */
CV_StsFilterStructContentErr= -29, /* incorr. filter structure content */
CV_StsKernelStructContentErr= -30, /* incorr. transform kernel content */
CV_StsFilterOffsetErr= -31, /* incorrect filter ofset value */
CV_StsFilterOffsetErr= -31, /* incorrect filter offset value */
CV_StsBadSize= -201, /* the input/output structure size is incorrect */
CV_StsDivByZero= -202, /* division by zero */
CV_StsInplaceNotSupported= -203, /* in-place operation is not supported */

View File

@@ -85,7 +85,7 @@ template<typename _Tp, size_t fixed_size = 1024/sizeof(_Tp)+8> class AutoBuffer
public:
typedef _Tp value_type;
//! the default contructor
//! the default constructor
AutoBuffer();
//! constructor taking the real buffer size
AutoBuffer(size_t _size);
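
AutoBuffer keeps small allocations on the stack and falls back to the heap beyond fixed_size; a minimal sketch:

// Sketch (not part of this commit): stack-first scratch buffer.
#include <opencv2/core/utility.hpp>

int main()
{
    cv::AutoBuffer<float> buf(256); // stack storage if it fits, heap otherwise
    float* p = buf;                 // implicit conversion to the raw pointer
    p[0] = 1.f;
    return 0;
}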
@@ -340,6 +340,8 @@ class CV_EXPORTS CommandLineParser
CommandLineParser(const CommandLineParser& parser);
CommandLineParser& operator = (const CommandLineParser& parser);
~CommandLineParser();
String getPathToApplication() const;
template <typename T>

View File

@@ -47,13 +47,81 @@
namespace cvtest {
namespace ocl {
///////////// Lut ////////////////////////
typedef Size_MatType LUTFixture;
OCL_PERF_TEST_P(LUTFixture, LUT,
::testing::Combine(OCL_TEST_SIZES,
OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params), cn = CV_MAT_CN(type);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, CV_8UC(cn)), lut(1, 256, type);
int dstType = CV_MAKETYPE(lut.depth(), src.channels());
UMat dst(srcSize, dstType);
declare.in(src, lut, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::LUT(src, lut, dst);
SANITY_CHECK(dst);
}
///////////// Exp ////////////////////////
typedef Size_MatType ExpFixture;
OCL_PERF_TEST_P(ExpFixture, Exp, ::testing::Combine(
OCL_TEST_SIZES, OCL_PERF_ENUM(CV_32FC1, CV_32FC4)))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), dst(srcSize, type);
declare.in(src).out(dst);
randu(src, 5, 16);
OCL_TEST_CYCLE() cv::exp(src, dst);
SANITY_CHECK(dst, 1e-6, ERROR_RELATIVE);
}
///////////// Log ////////////////////////
typedef Size_MatType LogFixture;
OCL_PERF_TEST_P(LogFixture, Log, ::testing::Combine(
OCL_TEST_SIZES, OCL_PERF_ENUM(CV_32FC1, CV_32FC4)))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), dst(srcSize, type);
randu(src, 1, 10000);
declare.in(src).out(dst);
OCL_TEST_CYCLE() cv::log(src, dst);
SANITY_CHECK(dst, 1e-6, ERROR_RELATIVE);
}
///////////// Add ////////////////////////
typedef Size_MatType AddFixture;
OCL_PERF_TEST_P(AddFixture, Add,
::testing::Combine(OCL_TEST_SIZES,
OCL_TEST_TYPES))
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES_134))
{
const Size srcSize = GET_PARAM(0);
const int type = GET_PARAM(1);
@@ -61,15 +129,875 @@ OCL_PERF_TEST_P(AddFixture, Add,
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
randu(src1);
randu(src2);
declare.in(src1, src2).out(dst);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::add(src1, src2, dst);
SANITY_CHECK(dst);
}
///////////// Subtract ////////////////////////
typedef Size_MatType SubtractFixture;
OCL_PERF_TEST_P(SubtractFixture, Subtract,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES_134))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::subtract(src1, src2, dst);
SANITY_CHECK(dst);
}
///////////// Mul ////////////////////////
typedef Size_MatType MulFixture;
OCL_PERF_TEST_P(MulFixture, Multiply, ::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES_134))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::multiply(src1, src2, dst);
SANITY_CHECK(dst);
}
///////////// Div ////////////////////////
typedef Size_MatType DivFixture;
OCL_PERF_TEST_P(DivFixture, Divide,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES_134))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::divide(src1, src2, dst);
SANITY_CHECK(dst, 1e-6, ERROR_RELATIVE);
}
///////////// Absdiff ////////////////////////
typedef Size_MatType AbsDiffFixture;
OCL_PERF_TEST_P(AbsDiffFixture, Absdiff,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES_134))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::absdiff(src1, src2, dst);
SANITY_CHECK(dst);
}
///////////// CartToPolar ////////////////////////
typedef Size_MatType CartToPolarFixture;
OCL_PERF_TEST_P(CartToPolarFixture, CartToPolar, ::testing::Combine(
OCL_TEST_SIZES, OCL_PERF_ENUM(CV_32FC1, CV_32FC4)))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type),
dst1(srcSize, type), dst2(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst1, dst2);
OCL_TEST_CYCLE() cv::cartToPolar(src1, src2, dst1, dst2);
SANITY_CHECK(dst1, 8e-3);
SANITY_CHECK(dst2, 8e-3);
}
///////////// PolarToCart ////////////////////////
typedef Size_MatType PolarToCartFixture;
OCL_PERF_TEST_P(PolarToCartFixture, PolarToCart, ::testing::Combine(
OCL_TEST_SIZES, OCL_PERF_ENUM(CV_32FC1, CV_32FC4)))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type),
dst1(srcSize, type), dst2(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst1, dst2);
OCL_TEST_CYCLE() cv::polarToCart(src1, src2, dst1, dst2);
SANITY_CHECK(dst1, 5e-5);
SANITY_CHECK(dst2, 5e-5);
}
///////////// Magnitude ////////////////////////
typedef Size_MatType MagnitudeFixture;
OCL_PERF_TEST_P(MagnitudeFixture, Magnitude, ::testing::Combine(
OCL_TEST_SIZES, OCL_PERF_ENUM(CV_32FC1, CV_32FC4)))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type),
dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::magnitude(src1, src2, dst);
SANITY_CHECK(dst, 1e-6);
}
///////////// Transpose ////////////////////////
typedef Size_MatType TransposeFixture;
OCL_PERF_TEST_P(TransposeFixture, Transpose, ::testing::Combine(
OCL_TEST_SIZES, OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), dst(srcSize, type);
declare.in(src, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::transpose(src, dst);
SANITY_CHECK(dst);
}
///////////// Flip ////////////////////////
enum
{
FLIP_BOTH = 0, FLIP_ROWS, FLIP_COLS
};
CV_ENUM(FlipType, FLIP_BOTH, FLIP_ROWS, FLIP_COLS)
typedef std::tr1::tuple<Size, MatType, FlipType> FlipParams;
typedef TestBaseWithParam<FlipParams> FlipFixture;
OCL_PERF_TEST_P(FlipFixture, Flip,
::testing::Combine(OCL_TEST_SIZES,
OCL_TEST_TYPES, FlipType::all()))
{
const FlipParams params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
const int flipType = get<2>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), dst(srcSize, type);
declare.in(src, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::flip(src, dst, flipType - 1);
SANITY_CHECK(dst);
}
///////////// minMaxLoc ////////////////////////
typedef Size_MatType MinMaxLocFixture;
OCL_PERF_TEST_P(MinMaxLocFixture, MinMaxLoc,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
bool onecn = CV_MAT_CN(type) == 1;
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type);
declare.in(src, WARMUP_RNG);
double min_val = 0.0, max_val = 0.0;
Point min_loc, max_loc;
OCL_TEST_CYCLE() cv::minMaxLoc(src, &min_val, &max_val, onecn ? &min_loc : NULL,
onecn ? &max_loc : NULL);
ASSERT_GE(max_val, min_val);
SANITY_CHECK(min_val);
SANITY_CHECK(max_val);
int min_loc_x = min_loc.x, min_loc_y = min_loc.y, max_loc_x = max_loc.x,
max_loc_y = max_loc.y;
SANITY_CHECK(min_loc_x);
SANITY_CHECK(min_loc_y);
SANITY_CHECK(max_loc_x);
SANITY_CHECK(max_loc_y);
}
///////////// Sum ////////////////////////
typedef Size_MatType SumFixture;
OCL_PERF_TEST_P(SumFixture, Sum,
::testing::Combine(OCL_TEST_SIZES,
OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params), depth = CV_MAT_DEPTH(type);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type);
Scalar result;
randu(src, 0, 60);
declare.in(src);
OCL_TEST_CYCLE() result = cv::sum(src);
if (depth >= CV_32F)
SANITY_CHECK(result, 1e-6, ERROR_RELATIVE);
else
SANITY_CHECK(result);
}
///////////// countNonZero ////////////////////////
typedef Size_MatType CountNonZeroFixture;
OCL_PERF_TEST_P(CountNonZeroFixture, CountNonZero,
::testing::Combine(OCL_TEST_SIZES,
OCL_PERF_ENUM(CV_8UC1, CV_32FC1)))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type);
int result = 0;
randu(src, 0, 10);
declare.in(src);
OCL_TEST_CYCLE() result = cv::countNonZero(src);
SANITY_CHECK(result);
}
///////////// Phase ////////////////////////
typedef Size_MatType PhaseFixture;
OCL_PERF_TEST_P(PhaseFixture, Phase, ::testing::Combine(
OCL_TEST_SIZES, OCL_PERF_ENUM(CV_32FC1, CV_32FC4)))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type),
dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::phase(src1, src2, dst, 1);
SANITY_CHECK(dst, 1e-2);
}
///////////// bitwise_and////////////////////////
typedef Size_MatType BitwiseAndFixture;
OCL_PERF_TEST_P(BitwiseAndFixture, Bitwise_and,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES_134))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::bitwise_and(src1, src2, dst);
SANITY_CHECK(dst);
}
///////////// bitwise_xor ////////////////////////
typedef Size_MatType BitwiseXorFixture;
OCL_PERF_TEST_P(BitwiseXorFixture, Bitwise_xor,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES_134))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::bitwise_xor(src1, src2, dst);
SANITY_CHECK(dst);
}
///////////// bitwise_or ////////////////////////
typedef Size_MatType BitwiseOrFixture;
OCL_PERF_TEST_P(BitwiseOrFixture, Bitwise_or,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES_134))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::bitwise_or(src1, src2, dst);
SANITY_CHECK(dst);
}
///////////// bitwise_not ////////////////////////
typedef Size_MatType BitwiseNotFixture;
OCL_PERF_TEST_P(BitwiseNotFixture, Bitwise_not,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES_134))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), dst(srcSize, type);
declare.in(src, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::bitwise_not(src, dst);
SANITY_CHECK(dst);
}
///////////// compare////////////////////////
CV_ENUM(CmpCode, CMP_LT, CMP_LE, CMP_EQ, CMP_NE, CMP_GE, CMP_GT)
typedef std::tr1::tuple<Size, MatType, CmpCode> CompareParams;
typedef TestBaseWithParam<CompareParams> CompareFixture;
OCL_PERF_TEST_P(CompareFixture, Compare,
::testing::Combine(OCL_TEST_SIZES,
OCL_TEST_TYPES, CmpCode::all()))
{
const CompareParams params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
const int cmpCode = get<2>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, CV_8UC1);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::compare(src1, src2, dst, cmpCode);
SANITY_CHECK(dst);
}
///////////// pow ////////////////////////
typedef Size_MatType PowFixture;
OCL_PERF_TEST_P(PowFixture, Pow, ::testing::Combine(
OCL_TEST_SIZES, OCL_PERF_ENUM(CV_32FC1, CV_32FC4)))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), dst(srcSize, type);
randu(src, -100, 100);
declare.in(src).out(dst);
OCL_TEST_CYCLE() cv::pow(src, -2.0, dst);
SANITY_CHECK(dst, 1e-6, ERROR_RELATIVE);
}
///////////// AddWeighted////////////////////////
typedef Size_MatType AddWeightedFixture;
OCL_PERF_TEST_P(AddWeightedFixture, AddWeighted,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES_134))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params), depth = CV_MAT_DEPTH(type);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
double alpha = 2.0, beta = 1.0, gamma = 3.0;
OCL_TEST_CYCLE() cv::addWeighted(src1, alpha, src2, beta, gamma, dst);
if (depth >= CV_32F)
SANITY_CHECK(dst, 1e-6, ERROR_RELATIVE);
else
SANITY_CHECK(dst);
}
///////////// Sqrt ///////////////////////
typedef Size_MatType SqrtFixture;
OCL_PERF_TEST_P(SqrtFixture, Sqrt, ::testing::Combine(
OCL_TEST_SIZES, OCL_PERF_ENUM(CV_32FC1, CV_32FC4)))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), dst(srcSize, type);
randu(src, 0, 1000);
declare.in(src).out(dst);
OCL_TEST_CYCLE() cv::sqrt(src, dst);
SANITY_CHECK(dst, 1e-6, ERROR_RELATIVE);
}
///////////// SetIdentity ////////////////////////
typedef Size_MatType SetIdentityFixture;
OCL_PERF_TEST_P(SetIdentityFixture, SetIdentity,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat dst(srcSize, type);
declare.out(dst);
OCL_TEST_CYCLE() cv::setIdentity(dst, cv::Scalar::all(181));
SANITY_CHECK(dst);
}
///////////// MeanStdDev ////////////////////////
typedef Size_MatType MeanStdDevFixture;
OCL_PERF_TEST_P(MeanStdDevFixture, MeanStdDev,
::testing::Combine(OCL_PERF_ENUM(OCL_SIZE_1, OCL_SIZE_2, OCL_SIZE_3), OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
const double eps = 2e-5;
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type);
Scalar mean, stddev;
declare.in(src, WARMUP_RNG);
OCL_TEST_CYCLE() cv::meanStdDev(src, mean, stddev);
double mean0 = mean[0], mean1 = mean[1], mean2 = mean[2], mean3 = mean[3];
double stddev0 = stddev[0], stddev1 = stddev[1], stddev2 = stddev[2], stddev3 = stddev[3];
SANITY_CHECK(mean0, eps, ERROR_RELATIVE);
SANITY_CHECK(mean1, eps, ERROR_RELATIVE);
SANITY_CHECK(mean2, eps, ERROR_RELATIVE);
SANITY_CHECK(mean3, eps, ERROR_RELATIVE);
SANITY_CHECK(stddev0, eps, ERROR_RELATIVE);
SANITY_CHECK(stddev1, eps, ERROR_RELATIVE);
SANITY_CHECK(stddev2, eps, ERROR_RELATIVE);
SANITY_CHECK(stddev3, eps, ERROR_RELATIVE);
}
///////////// Norm ////////////////////////
CV_ENUM(NormType, NORM_INF, NORM_L1, NORM_L2)
typedef std::tr1::tuple<Size, MatType, NormType> NormParams;
typedef TestBaseWithParam<NormParams> NormFixture;
OCL_PERF_TEST_P(NormFixture, Norm,
::testing::Combine(OCL_PERF_ENUM(OCL_SIZE_1, OCL_SIZE_2, OCL_SIZE_3), OCL_TEST_TYPES, NormType::all()))
{
const NormParams params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
const int normType = get<2>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type);
double res;
declare.in(src1, src2, WARMUP_RNG);
OCL_TEST_CYCLE() res = cv::norm(src1, src2, normType);
SANITY_CHECK(res, 1e-5, ERROR_RELATIVE);
}
///////////// UMat::dot ////////////////////////
typedef Size_MatType UMatDotFixture;
OCL_PERF_TEST_P(UMatDotFixture, UMatDot,
::testing::Combine(OCL_PERF_ENUM(OCL_SIZE_1, OCL_SIZE_2, OCL_SIZE_3), OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
double r = 0.0;
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type);
declare.in(src1, src2, WARMUP_RNG);
OCL_TEST_CYCLE() r = src1.dot(src2);
SANITY_CHECK(r, 1e-5, ERROR_RELATIVE);
}
///////////// Repeat ////////////////////////
typedef Size_MatType RepeatFixture;
OCL_PERF_TEST_P(RepeatFixture, Repeat,
::testing::Combine(OCL_PERF_ENUM(OCL_SIZE_1, OCL_SIZE_2, OCL_SIZE_3), OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params), nx = 2, ny = 2;
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), dst(Size(srcSize.width * nx, srcSize.height * ny), type);
declare.in(src, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::repeat(src, nx, ny, dst);
SANITY_CHECK(dst);
}
///////////// Min ////////////////////////
typedef Size_MatType MinFixture;
OCL_PERF_TEST_P(MinFixture, Min,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::min(src1, src2, dst);
SANITY_CHECK(dst);
}
///////////// Max ////////////////////////
typedef Size_MatType MaxFixture;
OCL_PERF_TEST_P(MaxFixture, Max,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::max(src1, src2, dst);
SANITY_CHECK(dst);
}
///////////// InRange ////////////////////////
typedef Size_MatType InRangeFixture;
OCL_PERF_TEST_P(InRangeFixture, InRange,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), lb(srcSize, type), ub(srcSize, type), dst(srcSize, CV_8UC1);
declare.in(src, lb, ub, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::inRange(src, lb, ub, dst);
SANITY_CHECK(dst);
}
///////////// Normalize ////////////////////////
CV_ENUM(NormalizeModes, CV_MINMAX, CV_L2, CV_L1, CV_C)
typedef tuple<Size, MatType, NormalizeModes> NormalizeParams;
typedef TestBaseWithParam<NormalizeParams> NormalizeFixture;
OCL_PERF_TEST_P(NormalizeFixture, Normalize,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES, NormalizeModes::all()))
{
const NormalizeParams params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params), mode = get<2>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), dst(srcSize, type);
declare.in(src, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::normalize(src, dst, 10, 110, mode);
SANITY_CHECK(dst, 5e-2);
}
///////////// ConvertScaleAbs ////////////////////////
typedef Size_MatType ConvertScaleAbsFixture;
OCL_PERF_TEST_P(ConvertScaleAbsFixture, ConvertScaleAbs,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params), cn = CV_MAT_CN(type);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), dst(srcSize, CV_8UC(cn));
declare.in(src, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::convertScaleAbs(src, dst, 0.5, 2);
SANITY_CHECK(dst);
}
///////////// PatchNaNs ////////////////////////
typedef Size_MatType PatchNaNsFixture;
OCL_PERF_TEST_P(PatchNaNsFixture, PatchNaNs,
::testing::Combine(OCL_TEST_SIZES, OCL_PERF_ENUM(CV_32FC1, CV_32FC4)))
{
const Size_MatType_t params = GetParam();
Size srcSize = get<0>(params);
const int type = get<1>(params), cn = CV_MAT_CN(type);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type);
declare.in(src, WARMUP_RNG).out(src);
// generating NaNs
{
Mat src_ = src.getMat(ACCESS_RW);
srcSize.width *= cn;
for (int y = 0; y < srcSize.height; ++y)
{
float * const ptr = src_.ptr<float>(y);
for (int x = 0; x < srcSize.width; ++x)
ptr[x] = (x + y) % 2 == 0 ? std::numeric_limits<float>::quiet_NaN() : ptr[x];
}
}
OCL_TEST_CYCLE() cv::patchNaNs(src, 17.7);
SANITY_CHECK(src);
}
///////////// ScaleAdd ////////////////////////
typedef Size_MatType ScaleAddFixture;
OCL_PERF_TEST_P(ScaleAddFixture, ScaleAdd,
::testing::Combine(OCL_TEST_SIZES, OCL_TEST_TYPES))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src1(srcSize, type), src2(srcSize, type), dst(srcSize, type);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::scaleAdd(src1, 0.6, src2, dst);
SANITY_CHECK(dst, 1e-6);
}
///////////// PSNR ////////////////////////
typedef Size_MatType PSNRFixture;
OCL_PERF_TEST_P(PSNRFixture, PSNR,
::testing::Combine(OCL_TEST_SIZES, OCL_PERF_ENUM(CV_8UC1, CV_8UC4)))
{
const Size_MatType_t params = GetParam();
const Size srcSize = get<0>(params);
const int type = get<1>(params);
checkDeviceMaxMemoryAllocSize(srcSize, type);
double psnr = 0;
UMat src1(srcSize, type), src2(srcSize, type);
declare.in(src1, src2, WARMUP_RNG);
OCL_TEST_CYCLE() psnr = cv::PSNR(src1, src2);
SANITY_CHECK(psnr, 1e-4, ERROR_RELATIVE);
}
///////////// Reduce ////////////////////////
CV_ENUM(ReduceMinMaxOp, CV_REDUCE_MIN, CV_REDUCE_MAX)
typedef tuple<Size, std::pair<MatType, MatType>, int, ReduceMinMaxOp> ReduceMinMaxParams;
typedef TestBaseWithParam<ReduceMinMaxParams> ReduceMinMaxFixture;
OCL_PERF_TEST_P(ReduceMinMaxFixture, Reduce,
::testing::Combine(OCL_TEST_SIZES,
OCL_PERF_ENUM(std::make_pair<MatType, MatType>(CV_8UC1, CV_8UC1),
std::make_pair<MatType, MatType>(CV_32FC4, CV_32FC4)),
OCL_PERF_ENUM(0, 1),
ReduceMinMaxOp::all()))
{
const ReduceMinMaxParams params = GetParam();
const std::pair<MatType, MatType> types = get<1>(params);
const int stype = types.first, dtype = types.second,
dim = get<2>(params), op = get<3>(params);
const Size srcSize = get<0>(params),
dstSize(dim == 0 ? srcSize.width : 1, dim == 0 ? 1 : srcSize.height);
const double eps = CV_MAT_DEPTH(dtype) <= CV_32S ? 1 : 1e-5;
checkDeviceMaxMemoryAllocSize(srcSize, stype);
checkDeviceMaxMemoryAllocSize(srcSize, dtype);
UMat src(srcSize, stype), dst(dstSize, dtype);
declare.in(src, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::reduce(src, dst, dim, op, dtype);
SANITY_CHECK(dst, eps);
}
CV_ENUM(ReduceAccOp, CV_REDUCE_SUM, CV_REDUCE_AVG)
typedef tuple<Size, std::pair<MatType, MatType>, int, ReduceAccOp> ReduceAccParams;
typedef TestBaseWithParam<ReduceAccParams> ReduceAccFixture;
OCL_PERF_TEST_P(ReduceAccFixture, Reduce,
::testing::Combine(OCL_TEST_SIZES,
OCL_PERF_ENUM(std::make_pair<MatType, MatType>(CV_8UC4, CV_32SC4),
std::make_pair<MatType, MatType>(CV_32FC1, CV_32FC1)),
OCL_PERF_ENUM(0, 1),
ReduceAccOp::all()))
{
const ReduceAccParams params = GetParam();
const std::pair<MatType, MatType> types = get<1>(params);
const int stype = types.first, dtype = types.second,
dim = get<2>(params), op = get<3>(params);
const Size srcSize = get<0>(params),
dstSize(dim == 0 ? srcSize.width : 1, dim == 0 ? 1 : srcSize.height);
const double eps = CV_MAT_DEPTH(dtype) <= CV_32S ? 1 : 3e-4;
checkDeviceMaxMemoryAllocSize(srcSize, stype);
checkDeviceMaxMemoryAllocSize(srcSize, dtype);
UMat src(srcSize, stype), dst(dstSize, dtype);
declare.in(src, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::reduce(src, dst, dim, op, dtype);
SANITY_CHECK(dst, eps);
}
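// Small illustration (not part of the benchmark) of the reduce semantics
// parameterized above: dim == 0 collapses each column into a single row,
// dim == 1 collapses each row into a single column.
static void reduceExample()
{
    cv::Mat m = (cv::Mat_<float>(2, 3) << 1, 2, 3,
                                          4, 5, 6);
    cv::Mat rowOfSums;
    cv::reduce(m, rowOfSums, 0, CV_REDUCE_SUM, CV_32F);   // -> [5, 7, 9]
}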
} } // namespace cvtest::ocl
#endif // HAVE_OPENCL

View File

@@ -0,0 +1,132 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2014, Advanced Micro Devices, Inc., all rights reserved.
#include "perf_precomp.hpp"
#include "opencv2/ts/ocl_perf.hpp"
#ifdef HAVE_OPENCL
namespace cvtest {
namespace ocl {
struct BufferPoolState
{
BufferPoolController* controller_;
size_t oldMaxReservedSize_;
BufferPoolState(BufferPoolController* c, bool enable)
: controller_(c)
{
if (!cv::ocl::useOpenCL())
{
throw ::perf::TestBase::PerfSkipTestException();
}
oldMaxReservedSize_ = c->getMaxReservedSize();
if (oldMaxReservedSize_ == (size_t)-1)
{
throw ::perf::TestBase::PerfSkipTestException();
}
if (!enable)
{
c->setMaxReservedSize(0);
}
else
{
c->freeAllReservedBuffers();
}
}
~BufferPoolState()
{
controller_->setMaxReservedSize(oldMaxReservedSize_);
}
};
typedef TestBaseWithParam<bool> BufferPoolFixture;
OCL_PERF_TEST_P(BufferPoolFixture, BufferPool_UMatCreation100, Bool())
{
BufferPoolState s(cv::ocl::getOpenCLAllocator()->getBufferPoolController(), GetParam());
Size sz(1920, 1080);
OCL_TEST_CYCLE()
{
for (int i = 0; i < 100; i++)
{
UMat u(sz, CV_8UC1);
}
}
SANITY_CHECK_NOTHING();
}
OCL_PERF_TEST_P(BufferPoolFixture, BufferPool_UMatCountNonZero100, Bool())
{
BufferPoolState s(cv::ocl::getOpenCLAllocator()->getBufferPoolController(), GetParam());
Size sz(1920, 1080);
OCL_TEST_CYCLE()
{
for (int i = 0; i < 100; i++)
{
UMat u(sz, CV_8UC1);
countNonZero(u);
}
}
SANITY_CHECK_NOTHING();
}
OCL_PERF_TEST_P(BufferPoolFixture, BufferPool_UMatCanny10, Bool())
{
BufferPoolState s(cv::ocl::getOpenCLAllocator()->getBufferPoolController(), GetParam());
Size sz(1920, 1080);
int aperture = 3;
bool useL2 = false;
double thresh_low = 100;
double thresh_high = 120;
OCL_TEST_CYCLE()
{
for (int i = 0; i < 10; i++)
{
UMat src(sz, CV_8UC1);
UMat dst;
Canny(src, dst, thresh_low, thresh_high, aperture, useL2);
dst.getMat(ACCESS_READ); // complete async operations
}
}
SANITY_CHECK_NOTHING();
}
OCL_PERF_TEST_P(BufferPoolFixture, BufferPool_UMatIntegral10, Bool())
{
BufferPoolState s(cv::ocl::getOpenCLAllocator()->getBufferPoolController(), GetParam());
Size sz(1920, 1080);
OCL_TEST_CYCLE()
{
for (int i = 0; i < 10; i++)
{
UMat src(sz, CV_32FC1);
UMat dst;
integral(src, dst);
dst.getMat(ACCESS_READ); // complete async operations
}
}
SANITY_CHECK_NOTHING();
}
} } // namespace cvtest::ocl
#endif // HAVE_OPENCL
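For context, the controller API exercised by these fixtures can be driven the same way from application code. A minimal sketch, assuming the OpenCL allocator exposes a pool (the 64 MiB cap is an arbitrary illustration):

#include "opencv2/core.hpp"
#include "opencv2/core/ocl.hpp"
#include "opencv2/core/bufferpool.hpp"

void capOpenCLBufferPool()
{
    cv::BufferPoolController* c = cv::ocl::getOpenCLAllocator()->getBufferPoolController();
    if (!c || c->getMaxReservedSize() == (size_t)-1)
        return;                       // no pool available (cf. the skip logic above)
    c->setMaxReservedSize(64 << 20);  // keep at most ~64 MiB of reserved buffers
    c->freeAllReservedBuffers();      // drop buffers that are already cached
}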

View File

@@ -0,0 +1,203 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2010-2012, Multicoreware, Inc., all rights reserved.
// Copyright (C) 2010-2012, Advanced Micro Devices, Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// @Authors
// Fangfang Bai, fangfang@multicorewareinc.com
// Jin Ma, jin@multicorewareinc.com
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors as is and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "perf_precomp.hpp"
#include "opencv2/ts/ocl_perf.hpp"
#ifdef HAVE_OPENCL
namespace cvtest {
namespace ocl {
///////////// Merge////////////////////////
typedef tuple<Size, MatDepth, int> MergeParams;
typedef TestBaseWithParam<MergeParams> MergeFixture;
OCL_PERF_TEST_P(MergeFixture, Merge,
::testing::Combine(OCL_TEST_SIZES, OCL_PERF_ENUM(CV_8U, CV_32F), Values(2, 3)))
{
const MergeParams params = GetParam();
const Size srcSize = get<0>(params);
const int depth = get<1>(params), cn = get<2>(params), dtype = CV_MAKE_TYPE(depth, cn);
checkDeviceMaxMemoryAllocSize(srcSize, dtype);
UMat dst(srcSize, dtype);
vector<UMat> src(cn);
for (vector<UMat>::iterator i = src.begin(), end = src.end(); i != end; ++i)
{
i->create(srcSize, CV_MAKE_TYPE(depth, 1));
declare.in(*i, WARMUP_RNG);
}
declare.out(dst);
OCL_TEST_CYCLE() cv::merge(src, dst);
SANITY_CHECK(dst);
}
///////////// Split ////////////////////////
typedef MergeParams SplitParams;
typedef TestBaseWithParam<SplitParams> SplitFixture;
OCL_PERF_TEST_P(SplitFixture, Split,
::testing::Combine(OCL_TEST_SIZES, OCL_PERF_ENUM(CV_8U, CV_32F), Values(2, 3)))
{
const SplitParams params = GetParam();
const Size srcSize = get<0>(params);
const int depth = get<1>(params), cn = get<2>(params), type = CV_MAKE_TYPE(depth, cn);
ASSERT_TRUE(cn == 3 || cn == 2);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type);
std::vector<UMat> dst(cn, UMat(srcSize, CV_MAKE_TYPE(depth, 1)));
declare.in(src, WARMUP_RNG);
for (int i = 0; i < cn; ++i)
declare.in(dst[i]);
OCL_TEST_CYCLE() cv::split(src, dst);
ASSERT_EQ(cn, (int)dst.size());
if (cn == 2)
{
UMat & dst0 = dst[0], & dst1 = dst[1];
SANITY_CHECK(dst0);
SANITY_CHECK(dst1);
}
else
{
UMat & dst0 = dst[0], & dst1 = dst[1], & dst2 = dst[2];
SANITY_CHECK(dst0);
SANITY_CHECK(dst1);
SANITY_CHECK(dst2);
}
}
///////////// MixChannels ////////////////////////
typedef tuple<Size, MatDepth> MixChannelsParams;
typedef TestBaseWithParam<MixChannelsParams> MixChannelsFixture;
OCL_PERF_TEST_P(MixChannelsFixture, MixChannels,
::testing::Combine(Values(OCL_SIZE_1, OCL_SIZE_2, OCL_SIZE_3),
OCL_PERF_ENUM(CV_8U, CV_32F)))
{
const MixChannelsParams params = GetParam();
const Size srcSize = get<0>(params);
const int depth = get<1>(params), type = CV_MAKE_TYPE(depth, 2), n = 2;
checkDeviceMaxMemoryAllocSize(srcSize, type);
std::vector<UMat> src(n), dst(n);
for (int i = 0; i < n; ++i)
{
src[i] = UMat(srcSize, type);
dst[i] = UMat(srcSize, type);
declare.in(src[i], WARMUP_RNG).out(dst[i]);
}
int fromTo[] = { 1,2, 2,0, 0,3, 3,1 };
OCL_TEST_CYCLE() cv::mixChannels(src, dst, fromTo, 4);
UMat & dst0 = dst[0], & dst1 = dst[1];
SANITY_CHECK(dst0);
SANITY_CHECK(dst1);
}
///////////// InsertChannel ////////////////////////
typedef Size_MatDepth InsertChannelFixture;
OCL_PERF_TEST_P(InsertChannelFixture, InsertChannel,
::testing::Combine(Values(OCL_SIZE_1, OCL_SIZE_2, OCL_SIZE_3),
OCL_PERF_ENUM(CV_8U, CV_32F)))
{
const Size_MatDepth_t params = GetParam();
const Size srcSize = get<0>(params);
const int depth = get<1>(params), type = CV_MAKE_TYPE(depth, 3);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, depth), dst(srcSize, type, Scalar::all(17));
declare.in(src, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::insertChannel(src, dst, 1);
SANITY_CHECK(dst);
}
///////////// ExtractChannel ////////////////////////
typedef Size_MatDepth ExtractChannelFixture;
OCL_PERF_TEST_P(ExtractChannelFixture, ExtractChannel,
::testing::Combine(Values(OCL_SIZE_1, OCL_SIZE_2, OCL_SIZE_3),
OCL_PERF_ENUM(CV_8U, CV_32F)))
{
const Size_MatDepth_t params = GetParam();
const Size srcSize = get<0>(params);
const int depth = get<1>(params), type = CV_MAKE_TYPE(depth, 3);
checkDeviceMaxMemoryAllocSize(srcSize, type);
UMat src(srcSize, type), dst(srcSize, depth);
declare.in(src, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::extractChannel(src, dst, 1);
SANITY_CHECK(dst);
}
} } // namespace cvtest::ocl
#endif // HAVE_OPENCL
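For orientation, the channel operations benchmarked above compose as plain host calls; a short sketch (illustrative, not from this commit; assumes a 3-channel input):

#include <vector>
#include "opencv2/core.hpp"

void channelOps(const cv::Mat& bgr)
{
    std::vector<cv::Mat> planes;
    cv::split(bgr, planes);                // one single-channel Mat per channel

    cv::Mat rebuilt;
    cv::merge(planes, rebuilt);            // exact inverse of split

    cv::Mat green;
    cv::extractChannel(bgr, green, 1);     // equivalent to mixChannels with {1, 0}
    cv::insertChannel(green, rebuilt, 1);  // equivalent to mixChannels with {0, 1}
}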

View File

@@ -43,45 +43,57 @@
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "perf_precomp.hpp"
#include "opencv2/ts/ocl_perf.hpp"
#include "opencv2/objdetect/objdetect_c.h"
#ifdef HAVE_OPENCL
using namespace perf;
namespace cvtest {
namespace ocl {
///////////// Haar ////////////////////////
PERF_TEST(HaarFixture, Haar)
///////////// dft ////////////////////////
typedef tuple<Size, int> DftParams;
typedef TestBaseWithParam<DftParams> DftFixture;
OCL_PERF_TEST_P(DftFixture, Dft, ::testing::Combine(Values(OCL_SIZE_1, OCL_SIZE_2, OCL_SIZE_3),
Values((int)DFT_ROWS, (int)DFT_SCALE, (int)DFT_INVERSE,
(int)DFT_INVERSE | DFT_SCALE, (int)DFT_ROWS | DFT_INVERSE)))
{
vector<Rect> faces;
const DftParams params = GetParam();
const Size srcSize = get<0>(params);
const int flags = get<1>(params);
Mat img = imread(getDataPath("gpu/haarcascade/basketball1.png"), IMREAD_GRAYSCALE);
ASSERT_TRUE(!img.empty()) << "can't open basketball1.png";
declare.in(img);
UMat src(srcSize, CV_32FC2), dst(srcSize, CV_32FC2);
declare.in(src, WARMUP_RNG).out(dst);
if (RUN_PLAIN_IMPL)
{
CascadeClassifier faceCascade;
ASSERT_TRUE(faceCascade.load(getDataPath("gpu/haarcascade/haarcascade_frontalface_alt.xml")))
<< "can't load haarcascade_frontalface_alt.xml";
OCL_TEST_CYCLE() cv::dft(src, dst, flags | DFT_COMPLEX_OUTPUT);
TEST_CYCLE() faceCascade.detectMultiScale(img, faces,
1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(30, 30));
SANITY_CHECK(faces, 4 + 1e-4);
}
else if (RUN_OCL_IMPL)
{
ocl::OclCascadeClassifier faceCascade;
ocl::oclMat oclImg(img);
ASSERT_TRUE(faceCascade.load(getDataPath("gpu/haarcascade/haarcascade_frontalface_alt.xml")))
<< "can't load haarcascade_frontalface_alt.xml";
OCL_TEST_CYCLE() faceCascade.detectMultiScale(oclImg, faces,
1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(30, 30));
SANITY_CHECK(faces, 4 + 1e-4);
}
else
OCL_PERF_ELSE
SANITY_CHECK(dst, 1e-3);
}
///////////// MulSpectrums ////////////////////////
typedef tuple<Size, bool> MulSpectrumsParams;
typedef TestBaseWithParam<MulSpectrumsParams> MulSpectrumsFixture;
OCL_PERF_TEST_P(MulSpectrumsFixture, MulSpectrums,
::testing::Combine(Values(OCL_SIZE_1, OCL_SIZE_2, OCL_SIZE_3),
Bool()))
{
const MulSpectrumsParams params = GetParam();
const Size srcSize = get<0>(params);
const bool conj = get<1>(params);
UMat src1(srcSize, CV_32FC2), src2(srcSize, CV_32FC2), dst(srcSize, CV_32FC2);
declare.in(src1, src2, WARMUP_RNG).out(dst);
OCL_TEST_CYCLE() cv::mulSpectrums(src1, src2, dst, 0, conj);
SANITY_CHECK(dst, 1e-3);
}
} } // namespace cvtest::ocl
#endif // HAVE_OPENCL
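The two benchmarks above cover the usual frequency-domain correlation pipeline. A minimal sketch of how the calls combine (illustrative, not from this commit; inputs assumed single-channel CV_32F):

#include "opencv2/core.hpp"

void correlate(const cv::Mat& a, const cv::Mat& b, cv::Mat& corr)
{
    cv::Mat fa, fb, prod;
    cv::dft(a, fa, cv::DFT_COMPLEX_OUTPUT);
    cv::dft(b, fb, cv::DFT_COMPLEX_OUTPUT);
    cv::mulSpectrums(fa, fb, prod, 0, /*conjB=*/true);   // A .* conj(B)
    cv::dft(prod, corr, cv::DFT_INVERSE | cv::DFT_SCALE | cv::DFT_REAL_OUTPUT);
}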

View File

@@ -43,46 +43,40 @@
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "perf_precomp.hpp"
using namespace perf;
#include "perf_precomp.hpp"
#include "opencv2/ts/ocl_perf.hpp"
#ifdef HAVE_OPENCL
namespace cvtest {
namespace ocl {
///////////// gemm ////////////////////////
typedef TestBaseWithParam<Size> gemmFixture;
typedef tuple<Size, int> GemmParams;
typedef TestBaseWithParam<GemmParams> GemmFixture;
#ifdef HAVE_CLAMDBLAS
PERF_TEST_P(gemmFixture, gemm, ::testing::Values(OCL_SIZE_1000, OCL_SIZE_2000))
OCL_PERF_TEST_P(GemmFixture, Gemm, ::testing::Combine(
::testing::Values(Size(1000, 1000), Size(1500, 1500)),
Values((int)cv::GEMM_3_T, (int)cv::GEMM_3_T | (int)cv::GEMM_2_T)))
{
const Size srcSize = GetParam();
GemmParams params = GetParam();
const Size srcSize = get<0>(params);
const int flags = get<1>(params);
Mat src1(srcSize, CV_32FC1), src2(srcSize, CV_32FC1),
UMat src1(srcSize, CV_32FC1), src2(srcSize, CV_32FC1),
src3(srcSize, CV_32FC1), dst(srcSize, CV_32FC1);
declare.in(src1, src2, src3).out(dst).time(srcSize == OCL_SIZE_2000 ? 65 : 8);
declare.in(src1, src2, src3).out(dst);
randu(src1, -10.0f, 10.0f);
randu(src2, -10.0f, 10.0f);
randu(src3, -10.0f, 10.0f);
if (RUN_OCL_IMPL)
{
ocl::oclMat oclSrc1(src1), oclSrc2(src2),
oclSrc3(src3), oclDst(srcSize, CV_32FC1);
OCL_TEST_CYCLE() cv::gemm(src1, src2, 0.6, src3, 1.5, dst, flags);
OCL_TEST_CYCLE() cv::ocl::gemm(oclSrc1, oclSrc2, 1.0, oclSrc3, 1.0, oclDst);
oclDst.download(dst);
SANITY_CHECK(dst, 0.01);
}
else if (RUN_PLAIN_IMPL)
{
TEST_CYCLE() cv::gemm(src1, src2, 1.0, src3, 1.0, dst);
SANITY_CHECK(dst, 0.01);
}
else
OCL_PERF_ELSE
SANITY_CHECK(dst, 0.01);
}
#endif
} } // namespace cvtest::ocl
#endif // HAVE_OPENCL
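For reference, gemm computes dst = alpha*op(src1)*op(src2) + beta*op(src3), where the GEMM_2_T / GEMM_3_T flags parameterized above transpose src2 / src3. A small sketch (illustrative only):

#include "opencv2/core.hpp"

void gemmExample()
{
    cv::Mat A = cv::Mat::eye(3, 3, CV_32F);
    cv::Mat B = cv::Mat::ones(3, 3, CV_32F);
    cv::Mat C = cv::Mat::ones(3, 3, CV_32F);
    cv::Mat D;
    cv::gemm(A, B, 0.6, C, 1.5, D, cv::GEMM_3_T);   // D = 0.6*A*B + 1.5*C^T
}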

View File

@@ -0,0 +1,42 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2014, Advanced Micro Devices, Inc., all rights reserved.
#include "perf_precomp.hpp"
#include "opencv2/ts/ocl_perf.hpp"
#ifdef HAVE_OPENCL
namespace cvtest {
namespace ocl {
typedef TestBaseWithParam<std::tr1::tuple<cv::Size, bool> > UsageFlagsBoolFixture;
OCL_PERF_TEST_P(UsageFlagsBoolFixture, UsageFlags_AllocHostMem, ::testing::Combine(OCL_TEST_SIZES, Bool()))
{
Size sz = get<0>(GetParam());
bool allocHostMem = get<1>(GetParam());
UMat src(sz, CV_8UC1, Scalar::all(128));
OCL_TEST_CYCLE()
{
UMat dst(allocHostMem ? USAGE_ALLOCATE_HOST_MEMORY : USAGE_DEFAULT);
cv::add(src, Scalar::all(1), dst);
{
Mat canvas = dst.getMat(ACCESS_RW);
cv::putText(canvas, "Test", Point(20, 20), FONT_HERSHEY_PLAIN, 1, Scalar::all(255));
}
UMat final;
cv::subtract(dst, Scalar::all(1), final);
}
SANITY_CHECK_NOTHING();
}
} } // namespace cvtest::ocl
#endif // HAVE_OPENCL

View File

@@ -202,3 +202,43 @@ PERF_TEST_P(Size_MatType, subtractScalar, TYPICAL_MATS_CORE_ARITHM)
SANITY_CHECK(c, 1e-8);
}
PERF_TEST_P(Size_MatType, multiply, TYPICAL_MATS_CORE_ARITHM)
{
Size sz = get<0>(GetParam());
int type = get<1>(GetParam());
cv::Mat a(sz, type), b(sz, type), c(sz, type);
declare.in(a, b, WARMUP_RNG).out(c);
if (CV_MAT_DEPTH(type) == CV_32S)
{
// According to the docs, saturation is not applied when the result is a 32-bit integer
a /= (2 << 16);
b /= (2 << 16);
}
TEST_CYCLE() multiply(a, b, c);
SANITY_CHECK(c, 1e-8);
}
PERF_TEST_P(Size_MatType, multiplyScale, TYPICAL_MATS_CORE_ARITHM)
{
Size sz = get<0>(GetParam());
int type = get<1>(GetParam());
cv::Mat a(sz, type), b(sz, type), c(sz, type);
double scale = 0.5;
declare.in(a, b, WARMUP_RNG).out(c);
if (CV_MAT_DEPTH(type) == CV_32S)
{
// According to the docs, saturation is not applied when the result is a 32-bit integer
a /= (2 << 16);
b /= (2 << 16);
}
TEST_CYCLE() multiply(a, b, c, scale);
SANITY_CHECK(c, 1e-8);
}

View File

@@ -628,7 +628,7 @@ void AlgorithmInfo::set(Algorithm* algo, const char* parameter, int argType, con
|| argType == Param::FLOAT || argType == Param::UNSIGNED_INT || argType == Param::UINT64 || argType == Param::UCHAR)
{
if ( !( p->type == Param::INT || p->type == Param::REAL || p->type == Param::BOOLEAN
|| p->type == Param::UNSIGNED_INT || p->type == Param::UINT64 || p->type == Param::FLOAT || argType == Param::UCHAR) )
|| p->type == Param::UNSIGNED_INT || p->type == Param::UINT64 || p->type == Param::FLOAT || p->type == Param::UCHAR) )
{
String message = getErrorMessageForWrongArgumentInSetter(algo->name(), parameter, p->type, argType);
CV_Error(CV_StsBadArg, message);

View File

@@ -251,16 +251,16 @@ void vBinOp64(const T* src1, size_t step1, const T* src2, size_t step2,
template <> \
struct name<template_arg>{ \
typedef register_type reg_type; \
static reg_type load(const template_arg * p) { return load_body ((const reg_type *)p);}; \
static void store(template_arg * p, reg_type v) { store_body ((reg_type *)p, v);}; \
static reg_type load(const template_arg * p) { return load_body ((const reg_type *)p); } \
static void store(template_arg * p, reg_type v) { store_body ((reg_type *)p, v); } \
}
#define FUNCTOR_LOADSTORE(name, template_arg, register_type, load_body, store_body)\
template <> \
struct name<template_arg>{ \
typedef register_type reg_type; \
static reg_type load(const template_arg * p) { return load_body (p);}; \
static void store(template_arg * p, reg_type v) { store_body (p, v);}; \
static reg_type load(const template_arg * p) { return load_body (p); } \
static void store(template_arg * p, reg_type v) { store_body (p, v); } \
}
#define FUNCTOR_CLOSURE_2arg(name, template_arg, body)\
@@ -915,11 +915,14 @@ void convertAndUnrollScalar( const Mat& sc, int buftype, uchar* scbuf, size_t bl
enum { OCL_OP_ADD=0, OCL_OP_SUB=1, OCL_OP_RSUB=2, OCL_OP_ABSDIFF=3, OCL_OP_MUL=4,
OCL_OP_MUL_SCALE=5, OCL_OP_DIV_SCALE=6, OCL_OP_RECIP_SCALE=7, OCL_OP_ADDW=8,
OCL_OP_AND=9, OCL_OP_OR=10, OCL_OP_XOR=11, OCL_OP_NOT=12, OCL_OP_MIN=13, OCL_OP_MAX=14 };
OCL_OP_AND=9, OCL_OP_OR=10, OCL_OP_XOR=11, OCL_OP_NOT=12, OCL_OP_MIN=13, OCL_OP_MAX=14,
OCL_OP_RDIV_SCALE=15 };
#ifdef HAVE_OPENCL
static const char* oclop2str[] = { "OP_ADD", "OP_SUB", "OP_RSUB", "OP_ABSDIFF",
"OP_MUL", "OP_MUL_SCALE", "OP_DIV_SCALE", "OP_RECIP_SCALE",
"OP_ADDW", "OP_AND", "OP_OR", "OP_XOR", "OP_NOT", "OP_MIN", "OP_MAX", 0 };
"OP_ADDW", "OP_AND", "OP_OR", "OP_XOR", "OP_NOT", "OP_MIN", "OP_MAX", "OP_RDIV_SCALE", 0 };
static bool ocl_binary_op(InputArray _src1, InputArray _src2, OutputArray _dst,
InputArray _mask, bool bitwise, int oclop, bool haveScalar )
@@ -931,16 +934,23 @@ static bool ocl_binary_op(InputArray _src1, InputArray _src2, OutputArray _dst,
bool doubleSupport = ocl::Device::getDefault().doubleFPConfig() > 0;
if( oclop < 0 || ((haveMask || haveScalar) && (cn > 4 || cn == 3)) ||
if( oclop < 0 || ((haveMask || haveScalar) && cn > 4) ||
(!doubleSupport && srcdepth == CV_64F))
return false;
char opts[1024];
int kercn = haveMask || haveScalar ? cn : 1;
sprintf(opts, "-D %s%s -D %s -D dstT=%s%s",
int scalarcn = kercn == 3 ? 4 : kercn;
sprintf(opts, "-D %s%s -D %s -D dstT=%s%s -D dstT_C1=%s -D workST=%s -D cn=%d",
(haveMask ? "MASK_" : ""), (haveScalar ? "UNARY_OP" : "BINARY_OP"), oclop2str[oclop],
bitwise ? ocl::memopTypeToStr(CV_MAKETYPE(srcdepth, kercn)) :
ocl::typeToStr(CV_MAKETYPE(srcdepth, kercn)), doubleSupport ? " -D DOUBLE_SUPPORT" : "");
ocl::typeToStr(CV_MAKETYPE(srcdepth, kercn)), doubleSupport ? " -D DOUBLE_SUPPORT" : "",
bitwise ? ocl::memopTypeToStr(CV_MAKETYPE(srcdepth, 1)) :
ocl::typeToStr(CV_MAKETYPE(srcdepth, 1)),
bitwise ? ocl::memopTypeToStr(CV_MAKETYPE(srcdepth, scalarcn)) :
ocl::typeToStr(CV_MAKETYPE(srcdepth, scalarcn)),
kercn);
ocl::Kernel k("KF", ocl::core::arithm_oclsrc, opts);
if( k.empty() )
@@ -957,7 +967,7 @@ static bool ocl_binary_op(InputArray _src1, InputArray _src2, OutputArray _dst,
if( haveScalar )
{
size_t esz = CV_ELEM_SIZE(srctype);
size_t esz = CV_ELEM_SIZE1(srctype)*scalarcn;
double buf[4] = {0,0,0,0};
if( oclop != OCL_OP_NOT )
@@ -988,6 +998,7 @@ static bool ocl_binary_op(InputArray _src1, InputArray _src2, OutputArray _dst,
return k.run(2, globalsize, 0, false);
}
#endif
static void binary_op( InputArray _src1, InputArray _src2, OutputArray _dst,
InputArray _mask, const BinaryFunc* tab,
@@ -1000,16 +1011,19 @@ static void binary_op( InputArray _src1, InputArray _src2, OutputArray _dst,
int dims1 = psrc1->dims(), dims2 = psrc2->dims();
Size sz1 = dims1 <= 2 ? psrc1->size() : Size();
Size sz2 = dims2 <= 2 ? psrc2->size() : Size();
#ifdef HAVE_OPENCL
bool use_opencl = (kind1 == _InputArray::UMAT || kind2 == _InputArray::UMAT) &&
ocl::useOpenCL() && dims1 <= 2 && dims2 <= 2;
dims1 <= 2 && dims2 <= 2;
#endif
bool haveMask = !_mask.empty(), haveScalar = false;
BinaryFunc func;
if( dims1 <= 2 && dims2 <= 2 && kind1 == kind2 && sz1 == sz2 && type1 == type2 && !haveMask )
{
_dst.create(sz1, type1);
if( use_opencl && ocl_binary_op(*psrc1, *psrc2, _dst, _mask, bitwise, oclop, false) )
return;
CV_OCL_RUN(use_opencl,
ocl_binary_op(*psrc1, *psrc2, _dst, _mask, bitwise, oclop, false))
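// CV_OCL_RUN presumably folds the previous "if (use_opencl && ocl_...(...))
// return;" pattern into a single macro; a plausible expansion (an assumption,
// not shown in this commit) is:
//   #define CV_OCL_RUN(condition, func) \
//       if (cv::ocl::useOpenCL() && (condition) && (func)) return;
// which would also explain why use_opencl above no longer checks
// ocl::useOpenCL() itself.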
if( bitwise )
{
func = *tab;
@@ -1076,8 +1090,9 @@ static void binary_op( InputArray _src1, InputArray _src2, OutputArray _dst,
if( haveMask && reallocate )
_dst.setTo(0.);
if( use_opencl && ocl_binary_op(*psrc1, *psrc2, _dst, _mask, bitwise, oclop, haveScalar ))
return;
CV_OCL_RUN(use_opencl,
ocl_binary_op(*psrc1, *psrc2, _dst, _mask, bitwise, oclop, haveScalar))
Mat src1 = psrc1->getMat(), src2 = psrc2->getMat();
Mat dst = _dst.getMat(), mask = _mask.getMat();
@@ -1088,9 +1103,7 @@ static void binary_op( InputArray _src1, InputArray _src2, OutputArray _dst,
cn = (int)esz;
}
else
{
func = tab[depth1];
}
if( !haveScalar )
{
@@ -1277,6 +1290,7 @@ static int actualScalarDepth(const double* data, int len)
CV_32S;
}
#ifdef HAVE_OPENCL
static bool ocl_arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
InputArray _mask, int wtype,
@@ -1287,7 +1301,7 @@ static bool ocl_arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
int type1 = _src1.type(), depth1 = CV_MAT_DEPTH(type1), cn = CV_MAT_CN(type1);
bool haveMask = !_mask.empty();
if( ((haveMask || haveScalar) && (cn > 4 || cn == 3)) )
if( ((haveMask || haveScalar) && cn > 4) )
return false;
int dtype = _dst.type(), ddepth = CV_MAT_DEPTH(dtype), wdepth = std::max(CV_32S, CV_MAT_DEPTH(wtype));
@@ -1300,26 +1314,33 @@ static bool ocl_arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
return false;
int kercn = haveMask || haveScalar ? cn : 1;
int scalarcn = kercn == 3 ? 4 : kercn;
char cvtstr[3][32], opts[1024];
sprintf(opts, "-D %s%s -D %s -D srcT1=%s -D srcT2=%s "
"-D dstT=%s -D workT=%s -D convertToWT1=%s "
"-D convertToWT2=%s -D convertToDT=%s%s",
char cvtstr[4][32], opts[1024];
sprintf(opts, "-D %s%s -D %s -D srcT1=%s -D srcT1_C1=%s -D srcT2=%s -D srcT2_C1=%s "
"-D dstT=%s -D dstT_C1=%s -D workT=%s -D workST=%s -D scaleT=%s -D convertToWT1=%s "
"-D convertToWT2=%s -D convertToDT=%s%s -D cn=%d",
(haveMask ? "MASK_" : ""), (haveScalar ? "UNARY_OP" : "BINARY_OP"),
oclop2str[oclop], ocl::typeToStr(CV_MAKETYPE(depth1, kercn)),
ocl::typeToStr(CV_MAKETYPE(depth1, 1)),
ocl::typeToStr(CV_MAKETYPE(depth2, kercn)),
ocl::typeToStr(CV_MAKETYPE(depth2, 1)),
ocl::typeToStr(CV_MAKETYPE(ddepth, kercn)),
ocl::typeToStr(CV_MAKETYPE(ddepth, 1)),
ocl::typeToStr(CV_MAKETYPE(wdepth, kercn)),
ocl::typeToStr(CV_MAKETYPE(wdepth, scalarcn)),
ocl::typeToStr(CV_MAKETYPE(wdepth, 1)),
ocl::convertTypeStr(depth1, wdepth, kercn, cvtstr[0]),
ocl::convertTypeStr(depth2, wdepth, kercn, cvtstr[1]),
ocl::convertTypeStr(wdepth, ddepth, kercn, cvtstr[2]),
doubleSupport ? " -D DOUBLE_SUPPORT" : "");
doubleSupport ? " -D DOUBLE_SUPPORT" : "", kercn);
size_t usrdata_esz = CV_ELEM_SIZE(wdepth);
const uchar* usrdata_p = (const uchar*)usrdata;
const double* usrdata_d = (const double*)usrdata;
float usrdata_f[3];
int i, n = oclop == OCL_OP_MUL_SCALE || oclop == OCL_OP_DIV_SCALE ||
oclop == OCL_OP_RECIP_SCALE ? 1 : oclop == OCL_OP_ADDW ? 3 : 0;
oclop == OCL_OP_RDIV_SCALE || oclop == OCL_OP_RECIP_SCALE ? 1 : oclop == OCL_OP_ADDW ? 3 : 0;
if( n > 0 && wdepth == CV_32F )
{
for( i = 0; i < n; i++ )
@@ -1343,7 +1364,7 @@ static bool ocl_arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
if( haveScalar )
{
size_t esz = CV_ELEM_SIZE(wtype);
size_t esz = CV_ELEM_SIZE1(wtype)*scalarcn;
double buf[4]={0,0,0,0};
Mat src2sc = _src2.getMat();
@@ -1352,13 +1373,20 @@ static bool ocl_arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
ocl::KernelArg scalararg = ocl::KernelArg(0, 0, 0, buf, esz);
if( !haveMask )
k.args(src1arg, dstarg, scalararg);
{
if(n == 0)
k.args(src1arg, dstarg, scalararg);
else if(n == 1)
k.args(src1arg, dstarg, scalararg,
ocl::KernelArg(0, 0, 0, usrdata_p, usrdata_esz));
else
CV_Error(Error::StsNotImplemented, "unsupported number of extra parameters");
}
else
k.args(src1arg, maskarg, dstarg, scalararg);
}
else
{
size_t usrdata_esz = CV_ELEM_SIZE(wdepth);
src2 = _src2.getUMat();
ocl::KernelArg src2arg = ocl::KernelArg::ReadOnlyNoSize(src2, cscale);
@@ -1385,6 +1413,7 @@ static bool ocl_arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
return k.run(2, globalsize, NULL, false);
}
#endif
static void arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
InputArray _mask, int dtype, BinaryFunc* tab, bool muldiv=false,
@@ -1399,7 +1428,9 @@ static void arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
int wtype, dims1 = psrc1->dims(), dims2 = psrc2->dims();
Size sz1 = dims1 <= 2 ? psrc1->size() : Size();
Size sz2 = dims2 <= 2 ? psrc2->size() : Size();
bool use_opencl = _dst.kind() == _OutputArray::UMAT && ocl::useOpenCL() && dims1 <= 2 && dims2 <= 2;
#ifdef HAVE_OPENCL
bool use_opencl = _dst.isUMat() && dims1 <= 2 && dims2 <= 2;
#endif
bool src1Scalar = checkScalar(*psrc1, type2, kind1, kind2);
bool src2Scalar = checkScalar(*psrc2, type1, kind2, kind1);
@@ -1409,11 +1440,10 @@ static void arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
((src1Scalar && src2Scalar) || (!src1Scalar && !src2Scalar)) )
{
_dst.createSameSize(*psrc1, type1);
if( use_opencl &&
CV_OCL_RUN(use_opencl,
ocl_arithm_op(*psrc1, *psrc2, _dst, _mask,
(!usrdata ? type1 : std::max(depth1, CV_32F)),
usrdata, oclop, false))
return;
Mat src1 = psrc1->getMat(), src2 = psrc2->getMat(), dst = _dst.getMat();
Size sz = getContinuousSize(src1, src2, dst, src1.channels());
@@ -1424,8 +1454,8 @@ static void arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
bool haveScalar = false, swapped12 = false;
if( dims1 != dims2 || sz1 != sz2 || cn != cn2 ||
((kind1 == _InputArray::MATX || kind2 == _InputArray::MATX) &&
(sz1 == Size(1,4) || sz2 == Size(1,4))) )
(kind1 == _InputArray::MATX && (sz1 == Size(1,4) || sz1 == Size(1,1))) ||
(kind2 == _InputArray::MATX && (sz2 == Size(1,4) || sz2 == Size(1,1))) )
{
if( checkScalar(*psrc1, type2, kind1, kind2) )
{
@@ -1439,6 +1469,8 @@ static void arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
swapped12 = true;
if( oclop == OCL_OP_SUB )
oclop = OCL_OP_RSUB;
if ( oclop == OCL_OP_DIV_SCALE )
oclop = OCL_OP_RDIV_SCALE;
}
else if( !checkScalar(*psrc2, type1, kind2, kind1) )
CV_Error( CV_StsUnmatchedSizes,
@@ -1508,10 +1540,9 @@ static void arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
if( reallocate )
_dst.setTo(0.);
if( use_opencl &&
ocl_arithm_op(*psrc1, *psrc2, _dst, _mask, wtype,
usrdata, oclop, haveScalar))
return;
CV_OCL_RUN(use_opencl,
ocl_arithm_op(*psrc1, *psrc2, _dst, _mask, wtype,
usrdata, oclop, haveScalar))
BinaryFunc cvtsrc1 = type1 == wtype ? 0 : getConvertFunc(type1, wtype);
BinaryFunc cvtsrc2 = type2 == type1 ? cvtsrc1 : type2 == wtype ? 0 : getConvertFunc(type2, wtype);
@@ -2588,6 +2619,8 @@ static double getMaxVal(int depth)
return tab[depth];
}
#ifdef HAVE_OPENCL
static bool ocl_compare(InputArray _src1, InputArray _src2, OutputArray _dst, int op)
{
if ( !((_src1.isMat() || _src1.isUMat()) && (_src2.isMat() || _src2.isUMat())) )
@@ -2600,7 +2633,7 @@ static bool ocl_compare(InputArray _src1, InputArray _src2, OutputArray _dst, in
const char * const operationMap[] = { "==", ">", ">=", "<", "<=", "!=" };
ocl::Kernel k("KF", ocl::core::arithm_oclsrc,
format("-D BINARY_OP -D srcT1=%s -D workT=srcT1"
format("-D BINARY_OP -D srcT1=%s -D workT=srcT1 -D cn=1"
" -D OP_CMP -D CMP_OPERATOR=%s%s",
ocl::typeToStr(CV_MAKE_TYPE(depth, 1)),
operationMap[op],
@@ -2624,6 +2657,8 @@ static bool ocl_compare(InputArray _src1, InputArray _src2, OutputArray _dst, in
return k.run(2, globalsize, NULL, false);
}
#endif
}
void cv::compare(InputArray _src1, InputArray _src2, OutputArray _dst, int op)
@@ -2631,9 +2666,8 @@ void cv::compare(InputArray _src1, InputArray _src2, OutputArray _dst, int op)
CV_Assert( op == CMP_LT || op == CMP_LE || op == CMP_EQ ||
op == CMP_NE || op == CMP_GE || op == CMP_GT );
if (ocl::useOpenCL() && _src1.dims() <= 2 && _src2.dims() <= 2 && _dst.isUMat() &&
ocl_compare(_src1, _src2, _dst, op))
return;
CV_OCL_RUN(_src1.dims() <= 2 && _src2.dims() <= 2 && _dst.isUMat(),
ocl_compare(_src1, _src2, _dst, op))
int kind1 = _src1.kind(), kind2 = _src2.kind();
Mat src1 = _src1.getMat(), src2 = _src2.getMat();
@@ -2865,11 +2899,125 @@ static InRangeFunc getInRangeFunc(int depth)
return inRangeTab[depth];
}
#ifdef HAVE_OPENCL
static bool ocl_inRange( InputArray _src, InputArray _lowerb,
InputArray _upperb, OutputArray _dst )
{
int skind = _src.kind(), lkind = _lowerb.kind(), ukind = _upperb.kind();
Size ssize = _src.size(), lsize = _lowerb.size(), usize = _upperb.size();
int stype = _src.type(), ltype = _lowerb.type(), utype = _upperb.type();
int sdepth = CV_MAT_DEPTH(stype), ldepth = CV_MAT_DEPTH(ltype), udepth = CV_MAT_DEPTH(utype);
int cn = CV_MAT_CN(stype);
bool lbScalar = false, ubScalar = false;
if( (lkind == _InputArray::MATX && skind != _InputArray::MATX) ||
ssize != lsize || stype != ltype )
{
if( !checkScalar(_lowerb, stype, lkind, skind) )
CV_Error( CV_StsUnmatchedSizes,
"The lower bounary is neither an array of the same size and same type as src, nor a scalar");
lbScalar = true;
}
if( (ukind == _InputArray::MATX && skind != _InputArray::MATX) ||
ssize != usize || stype != utype )
{
if( !checkScalar(_upperb, stype, ukind, skind) )
CV_Error( CV_StsUnmatchedSizes,
"The upper bounary is neither an array of the same size and same type as src, nor a scalar");
ubScalar = true;
}
if (lbScalar != ubScalar)
return false;
bool doubleSupport = ocl::Device::getDefault().doubleFPConfig() > 0,
haveScalar = lbScalar && ubScalar;
if ( (!doubleSupport && sdepth == CV_64F) ||
(!haveScalar && (sdepth != ldepth || sdepth != udepth)) )
return false;
ocl::Kernel ker("inrange", ocl::core::inrange_oclsrc,
format("%s-D cn=%d -D T=%s%s", haveScalar ? "-D HAVE_SCALAR " : "",
cn, ocl::typeToStr(sdepth), doubleSupport ? " -D DOUBLE_SUPPORT" : ""));
if (ker.empty())
return false;
_dst.create(ssize, CV_8UC1);
UMat src = _src.getUMat(), dst = _dst.getUMat(), lscalaru, uscalaru;
Mat lscalar, uscalar;
if (lbScalar && ubScalar)
{
lscalar = _lowerb.getMat();
uscalar = _upperb.getMat();
size_t esz = src.elemSize();
size_t blocksize = 36;
AutoBuffer<uchar> _buf(blocksize*(((int)lbScalar + (int)ubScalar)*esz + cn) + 2*cn*sizeof(int) + 128);
uchar *buf = alignPtr(_buf + blocksize*cn, 16);
if( ldepth != sdepth && sdepth < CV_32S )
{
int* ilbuf = (int*)alignPtr(buf + blocksize*esz, 16);
int* iubuf = ilbuf + cn;
BinaryFunc sccvtfunc = getConvertFunc(ldepth, CV_32S);
sccvtfunc(lscalar.data, 0, 0, 0, (uchar*)ilbuf, 0, Size(cn, 1), 0);
sccvtfunc(uscalar.data, 0, 0, 0, (uchar*)iubuf, 0, Size(cn, 1), 0);
int minval = cvRound(getMinVal(sdepth)), maxval = cvRound(getMaxVal(sdepth));
for( int k = 0; k < cn; k++ )
{
if( ilbuf[k] > iubuf[k] || ilbuf[k] > maxval || iubuf[k] < minval )
ilbuf[k] = minval+1, iubuf[k] = minval;
}
lscalar = Mat(cn, 1, CV_32S, ilbuf);
uscalar = Mat(cn, 1, CV_32S, iubuf);
}
lscalar.convertTo(lscalar, stype);
uscalar.convertTo(uscalar, stype);
}
else
{
lscalaru = _lowerb.getUMat();
uscalaru = _upperb.getUMat();
}
ocl::KernelArg srcarg = ocl::KernelArg::ReadOnlyNoSize(src),
dstarg = ocl::KernelArg::WriteOnly(dst);
if (haveScalar)
{
lscalar.copyTo(lscalaru);
uscalar.copyTo(uscalaru);
ker.args(srcarg, dstarg, ocl::KernelArg::PtrReadOnly(lscalaru),
ocl::KernelArg::PtrReadOnly(uscalaru));
}
else
ker.args(srcarg, dstarg, ocl::KernelArg::ReadOnlyNoSize(lscalaru),
ocl::KernelArg::ReadOnlyNoSize(uscalaru));
size_t globalsize[2] = { ssize.width, ssize.height };
return ker.run(2, globalsize, NULL, false);
}
#endif
}
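// Illustrative usage (not part of the patch): inRange writes a CV_8UC1 mask
// holding 255 where every channel of src lies within [lowerb, upperb] and 0
// elsewhere, matching the _dst.create(..., CV_8UC1) calls in this hunk.
static void inRangeExample(const cv::Mat& hsv, cv::Mat& mask)
{
    cv::inRange(hsv, cv::Scalar(0, 48, 80), cv::Scalar(20, 255, 255), mask);
}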
void cv::inRange(InputArray _src, InputArray _lowerb,
InputArray _upperb, OutputArray _dst)
{
CV_OCL_RUN(_src.dims() <= 2 && _lowerb.dims() <= 2 &&
_upperb.dims() <= 2 && _dst.isUMat(),
ocl_inRange(_src, _lowerb, _upperb, _dst))
int skind = _src.kind(), lkind = _lowerb.kind(), ukind = _upperb.kind();
Mat src = _src.getMat(), lb = _lowerb.getMat(), ub = _upperb.getMat();
@@ -2893,14 +3041,14 @@ void cv::inRange(InputArray _src, InputArray _lowerb,
ubScalar = true;
}
CV_Assert( ((int)lbScalar ^ (int)ubScalar) == 0 );
CV_Assert(lbScalar == ubScalar);
int cn = src.channels(), depth = src.depth();
size_t esz = src.elemSize();
size_t blocksize0 = (size_t)(BLOCK_SIZE + esz-1)/esz;
_dst.create(src.dims, src.size, CV_8U);
_dst.create(src.dims, src.size, CV_8UC1);
Mat dst = _dst.getMat();
InRangeFunc func = getInRangeFunc(depth);

View File

@@ -0,0 +1,28 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2014, Advanced Micro Devices, Inc., all rights reserved.
#ifndef __OPENCV_CORE_BUFFER_POOL_IMPL_HPP__
#define __OPENCV_CORE_BUFFER_POOL_IMPL_HPP__
#include "opencv2/core/bufferpool.hpp"
namespace cv {
class DummyBufferPoolController : public BufferPoolController
{
public:
DummyBufferPoolController() { }
virtual ~DummyBufferPoolController() { }
virtual size_t getReservedSize() const { return (size_t)-1; }
virtual size_t getMaxReservedSize() const { return (size_t)-1; }
virtual void setMaxReservedSize(size_t size) { (void)size; }
virtual void freeAllReservedBuffers() { }
};
} // namespace
#endif // __OPENCV_CORE_BUFFER_POOL_IMPL_HPP__

View File

@@ -41,6 +41,8 @@ static String get_type_name(int type)
{
if( type == Param::INT )
return "int";
if( type == Param::BOOLEAN )
return "bool";
if( type == Param::UNSIGNED_INT )
return "unsigned";
if( type == Param::UINT64 )
@@ -59,6 +61,12 @@ static void from_str(const String& str, int type, void* dst)
std::stringstream ss(str.c_str());
if( type == Param::INT )
ss >> *(int*)dst;
else if( type == Param::BOOLEAN )
{
std::string temp;
ss >> temp;
*(bool*) dst = temp == "true";
}
else if( type == Param::UNSIGNED_INT )
ss >> *(unsigned*)dst;
else if( type == Param::UINT64 )
@@ -229,6 +237,11 @@ CommandLineParser::CommandLineParser(int argc, const char* const argv[], const S
impl->sort_params();
}
CommandLineParser::~CommandLineParser()
{
if (CV_XADD(&impl->refcount, -1) == 1)
delete impl;
}
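// Usage sketch for the new Param::BOOLEAN path (the flag name and default are
// illustrative only): a "true"/"false" token on the command line now parses
// directly into bool via from_str above.
static bool parseGrayFlag(int argc, const char* const argv[])
{
    const cv::String keys = "{ gray g | false | convert input to grayscale }";
    cv::CommandLineParser parser(argc, argv, keys);
    return parser.get<bool>("gray");
}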
CommandLineParser::CommandLineParser(const CommandLineParser& parser)
{

View File

@@ -264,6 +264,8 @@ void cv::split(const Mat& src, Mat* mv)
}
}
#ifdef HAVE_OPENCL
namespace cv {
static bool ocl_split( InputArray _m, OutputArrayOfArrays _mv )
@@ -287,10 +289,12 @@ static bool ocl_split( InputArray _m, OutputArrayOfArrays _mv )
return false;
Size size = _m.size();
std::vector<UMat> & dst = *(std::vector<UMat> *)_mv.getObj();
dst.resize(cn);
_mv.create(cn, 1, depth);
for (int i = 0; i < cn; ++i)
dst[i].create(size, depth);
_mv.create(size, depth, i);
std::vector<UMat> dst;
_mv.getUMatVector(dst);
int argidx = k.set(0, ocl::KernelArg::ReadOnly(_m.getUMat()));
for (int i = 0; i < cn; ++i)
@@ -302,11 +306,12 @@ static bool ocl_split( InputArray _m, OutputArrayOfArrays _mv )
}
#endif
void cv::split(InputArray _m, OutputArrayOfArrays _mv)
{
if (ocl::useOpenCL() && _m.dims() <= 2 && _mv.isUMatVector() &&
ocl_split(_m, _mv))
return;
CV_OCL_RUN(_m.dims() <= 2 && _mv.isUMatVector(),
ocl_split(_m, _mv))
Mat m = _m.getMat();
if( m.empty() )
@@ -314,10 +319,19 @@ void cv::split(InputArray _m, OutputArrayOfArrays _mv)
_mv.release();
return;
}
CV_Assert( !_mv.fixedType() || _mv.empty() || _mv.type() == m.depth() );
_mv.create(m.channels(), 1, m.depth());
Mat* dst = &_mv.getMatRef(0);
split(m, dst);
Size size = m.size();
int depth = m.depth(), cn = m.channels();
_mv.create(cn, 1, depth);
for (int i = 0; i < cn; ++i)
_mv.create(size, depth, i);
std::vector<Mat> dst;
_mv.getMatVector(dst);
split(m, &dst[0]);
}
void cv::merge(const Mat* mv, size_t n, OutputArray _dst)
@@ -395,11 +409,14 @@ void cv::merge(const Mat* mv, size_t n, OutputArray _dst)
}
}
#ifdef HAVE_OPENCL
namespace cv {
static bool ocl_merge( InputArrayOfArrays _mv, OutputArray _dst )
{
const std::vector<UMat> & src = *(const std::vector<UMat> *)(_mv.getObj());
std::vector<UMat> src;
_mv.getUMatVector(src);
CV_Assert(!src.empty());
int type = src[0].type(), depth = CV_MAT_DEPTH(type);
@@ -442,10 +459,12 @@ static bool ocl_merge( InputArrayOfArrays _mv, OutputArray _dst )
}
#endif
void cv::merge(InputArrayOfArrays _mv, OutputArray _dst)
{
if (ocl::useOpenCL() && _mv.isUMatVector() && _dst.isUMat() && ocl_merge(_mv, _dst))
return;
CV_OCL_RUN(_mv.isUMatVector() && _dst.isUMat(),
ocl_merge(_mv, _dst))
std::vector<Mat> mv;
_mv.getMatVector(mv);
@@ -612,16 +631,115 @@ void cv::mixChannels( const Mat* src, size_t nsrcs, Mat* dst, size_t ndsts, cons
}
}
#ifdef HAVE_OPENCL
namespace cv {
static void getUMatIndex(const std::vector<UMat> & um, int cn, int & idx, int & cnidx)
{
int totalChannels = 0;
for (size_t i = 0, size = um.size(); i < size; ++i)
{
int ccn = um[i].channels();
totalChannels += ccn;
if (totalChannels == cn)
{
idx = (int)(i + 1);
cnidx = 0;
return;
}
else if (totalChannels > cn)
{
idx = (int)i;
cnidx = i == 0 ? cn : (cn - totalChannels + ccn);
return;
}
}
idx = cnidx = -1;
}
static bool ocl_mixChannels(InputArrayOfArrays _src, InputOutputArrayOfArrays _dst,
const int* fromTo, size_t npairs)
{
std::vector<UMat> src, dst;
_src.getUMatVector(src);
_dst.getUMatVector(dst);
size_t nsrc = src.size(), ndst = dst.size();
CV_Assert(nsrc > 0 && ndst > 0);
Size size = src[0].size();
int depth = src[0].depth(), esz = CV_ELEM_SIZE(depth);
for (size_t i = 1, ssize = src.size(); i < ssize; ++i)
CV_Assert(src[i].size() == size && src[i].depth() == depth);
for (size_t i = 0, dsize = dst.size(); i < dsize; ++i)
CV_Assert(dst[i].size() == size && dst[i].depth() == depth);
String declsrc, decldst, declproc, declcn;
std::vector<UMat> srcargs(npairs), dstargs(npairs);
for (size_t i = 0; i < npairs; ++i)
{
int scn = fromTo[i<<1], dcn = fromTo[(i<<1) + 1];
int src_idx, src_cnidx, dst_idx, dst_cnidx;
getUMatIndex(src, scn, src_idx, src_cnidx);
getUMatIndex(dst, dcn, dst_idx, dst_cnidx);
CV_Assert(dst_idx >= 0 && src_idx >= 0);
srcargs[i] = src[src_idx];
srcargs[i].offset += src_cnidx * esz;
dstargs[i] = dst[dst_idx];
dstargs[i].offset += dst_cnidx * esz;
declsrc += format("DECLARE_INPUT_MAT(%d)", i);
decldst += format("DECLARE_OUTPUT_MAT(%d)", i);
declproc += format("PROCESS_ELEM(%d)", i);
declcn += format(" -D scn%d=%d -D dcn%d=%d", i, src[src_idx].channels(), i, dst[dst_idx].channels());
}
ocl::Kernel k("mixChannels", ocl::core::mixchannels_oclsrc,
format("-D T=%s -D DECLARE_INPUT_MATS=%s -D DECLARE_OUTPUT_MATS=%s"
" -D PROCESS_ELEMS=%s%s", ocl::memopTypeToStr(depth),
declsrc.c_str(), decldst.c_str(), declproc.c_str(), declcn.c_str()));
if (k.empty())
return false;
int argindex = 0;
for (size_t i = 0; i < npairs; ++i)
argindex = k.set(argindex, ocl::KernelArg::ReadOnlyNoSize(srcargs[i]));
for (size_t i = 0; i < npairs; ++i)
argindex = k.set(argindex, ocl::KernelArg::WriteOnlyNoSize(dstargs[i]));
k.set(k.set(argindex, size.height), size.width);
size_t globalsize[2] = { size.width, size.height };
return k.run(2, globalsize, NULL, false);
}
}
#endif
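// For reference (illustrative, not from the commit): fromTo holds npairs of
// (source channel index, destination channel index) pairs, with channels
// counted globally across the array vectors -- e.g. a BGR -> RGB swap:
static void bgr2rgbExample(const cv::Mat& bgr, cv::Mat& rgb)
{
    rgb.create(bgr.size(), bgr.type());
    const int fromTo[] = { 0,2, 1,1, 2,0 };           // B->R, G->G, R->B
    cv::mixChannels(&bgr, 1, &rgb, 1, fromTo, 3);
}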
void cv::mixChannels(InputArrayOfArrays src, InputOutputArrayOfArrays dst,
const int* fromTo, size_t npairs)
{
if(npairs == 0)
if (npairs == 0 || fromTo == NULL)
return;
CV_OCL_RUN(dst.isUMatVector(),
ocl_mixChannels(src, dst, fromTo, npairs))
bool src_is_mat = src.kind() != _InputArray::STD_VECTOR_MAT &&
src.kind() != _InputArray::STD_VECTOR_VECTOR;
src.kind() != _InputArray::STD_VECTOR_VECTOR &&
src.kind() != _InputArray::STD_VECTOR_UMAT;
bool dst_is_mat = dst.kind() != _InputArray::STD_VECTOR_MAT &&
dst.kind() != _InputArray::STD_VECTOR_VECTOR;
dst.kind() != _InputArray::STD_VECTOR_VECTOR &&
dst.kind() != _InputArray::STD_VECTOR_UMAT;
int i;
int nsrc = src_is_mat ? 1 : (int)src.total();
int ndst = dst_is_mat ? 1 : (int)dst.total();
@@ -639,12 +757,18 @@ void cv::mixChannels(InputArrayOfArrays src, InputOutputArrayOfArrays dst,
void cv::mixChannels(InputArrayOfArrays src, InputOutputArrayOfArrays dst,
const std::vector<int>& fromTo)
{
if(fromTo.empty())
if (fromTo.empty())
return;
CV_OCL_RUN(dst.isUMatVector(),
ocl_mixChannels(src, dst, &fromTo[0], fromTo.size()>>1))
bool src_is_mat = src.kind() != _InputArray::STD_VECTOR_MAT &&
src.kind() != _InputArray::STD_VECTOR_VECTOR;
src.kind() != _InputArray::STD_VECTOR_VECTOR &&
src.kind() != _InputArray::STD_VECTOR_UMAT;
bool dst_is_mat = dst.kind() != _InputArray::STD_VECTOR_MAT &&
dst.kind() != _InputArray::STD_VECTOR_VECTOR;
dst.kind() != _InputArray::STD_VECTOR_VECTOR &&
dst.kind() != _InputArray::STD_VECTOR_UMAT;
int i;
int nsrc = src_is_mat ? 1 : (int)src.total();
int ndst = dst_is_mat ? 1 : (int)dst.total();
@@ -661,20 +785,41 @@ void cv::mixChannels(InputArrayOfArrays src, InputOutputArrayOfArrays dst,
void cv::extractChannel(InputArray _src, OutputArray _dst, int coi)
{
Mat src = _src.getMat();
CV_Assert( 0 <= coi && coi < src.channels() );
_dst.create(src.dims, &src.size[0], src.depth());
Mat dst = _dst.getMat();
int type = _src.type(), depth = CV_MAT_DEPTH(type), cn = CV_MAT_CN(type);
CV_Assert( 0 <= coi && coi < cn );
int ch[] = { coi, 0 };
if (ocl::useOpenCL() && _src.dims() <= 2 && _dst.isUMat())
{
UMat src = _src.getUMat();
_dst.create(src.dims, &src.size[0], depth);
UMat dst = _dst.getUMat();
mixChannels(std::vector<UMat>(1, src), std::vector<UMat>(1, dst), ch, 1);
return;
}
Mat src = _src.getMat();
_dst.create(src.dims, &src.size[0], depth);
Mat dst = _dst.getMat();
mixChannels(&src, 1, &dst, 1, ch, 1);
}
void cv::insertChannel(InputArray _src, InputOutputArray _dst, int coi)
{
Mat src = _src.getMat(), dst = _dst.getMat();
CV_Assert( src.size == dst.size && src.depth() == dst.depth() );
CV_Assert( 0 <= coi && coi < dst.channels() && src.channels() == 1 );
int stype = _src.type(), sdepth = CV_MAT_DEPTH(stype), scn = CV_MAT_CN(stype);
int dtype = _dst.type(), ddepth = CV_MAT_DEPTH(dtype), dcn = CV_MAT_CN(dtype);
CV_Assert( _src.sameSize(_dst) && sdepth == ddepth );
CV_Assert( 0 <= coi && coi < dcn && scn == 1 );
int ch[] = { 0, coi };
if (ocl::useOpenCL() && _src.dims() <= 2 && _dst.isUMat())
{
UMat src = _src.getUMat(), dst = _dst.getUMat();
mixChannels(std::vector<UMat>(1, src), std::vector<UMat>(1, dst), ch, 1);
return;
}
Mat src = _src.getMat(), dst = _dst.getMat();
mixChannels(&src, 1, &dst, 1, ch, 1);
}
@@ -938,122 +1083,122 @@ stype* dst, size_t dstep, Size size, double*) \
}
DEF_CVT_SCALE_ABS_FUNC(8u, cvtScaleAbs_, uchar, uchar, float);
DEF_CVT_SCALE_ABS_FUNC(8s8u, cvtScaleAbs_, schar, uchar, float);
DEF_CVT_SCALE_ABS_FUNC(16u8u, cvtScaleAbs_, ushort, uchar, float);
DEF_CVT_SCALE_ABS_FUNC(16s8u, cvtScaleAbs_, short, uchar, float);
DEF_CVT_SCALE_ABS_FUNC(32s8u, cvtScaleAbs_, int, uchar, float);
DEF_CVT_SCALE_ABS_FUNC(32f8u, cvtScaleAbs_, float, uchar, float);
DEF_CVT_SCALE_ABS_FUNC(64f8u, cvtScaleAbs_, double, uchar, float);
DEF_CVT_SCALE_ABS_FUNC(8u, cvtScaleAbs_, uchar, uchar, float)
DEF_CVT_SCALE_ABS_FUNC(8s8u, cvtScaleAbs_, schar, uchar, float)
DEF_CVT_SCALE_ABS_FUNC(16u8u, cvtScaleAbs_, ushort, uchar, float)
DEF_CVT_SCALE_ABS_FUNC(16s8u, cvtScaleAbs_, short, uchar, float)
DEF_CVT_SCALE_ABS_FUNC(32s8u, cvtScaleAbs_, int, uchar, float)
DEF_CVT_SCALE_ABS_FUNC(32f8u, cvtScaleAbs_, float, uchar, float)
DEF_CVT_SCALE_ABS_FUNC(64f8u, cvtScaleAbs_, double, uchar, float)
DEF_CVT_SCALE_FUNC(8u, uchar, uchar, float);
DEF_CVT_SCALE_FUNC(8s8u, schar, uchar, float);
DEF_CVT_SCALE_FUNC(16u8u, ushort, uchar, float);
DEF_CVT_SCALE_FUNC(16s8u, short, uchar, float);
DEF_CVT_SCALE_FUNC(32s8u, int, uchar, float);
DEF_CVT_SCALE_FUNC(32f8u, float, uchar, float);
DEF_CVT_SCALE_FUNC(64f8u, double, uchar, float);
DEF_CVT_SCALE_FUNC(8u, uchar, uchar, float)
DEF_CVT_SCALE_FUNC(8s8u, schar, uchar, float)
DEF_CVT_SCALE_FUNC(16u8u, ushort, uchar, float)
DEF_CVT_SCALE_FUNC(16s8u, short, uchar, float)
DEF_CVT_SCALE_FUNC(32s8u, int, uchar, float)
DEF_CVT_SCALE_FUNC(32f8u, float, uchar, float)
DEF_CVT_SCALE_FUNC(64f8u, double, uchar, float)
DEF_CVT_SCALE_FUNC(8u8s, uchar, schar, float);
DEF_CVT_SCALE_FUNC(8s, schar, schar, float);
DEF_CVT_SCALE_FUNC(16u8s, ushort, schar, float);
DEF_CVT_SCALE_FUNC(16s8s, short, schar, float);
DEF_CVT_SCALE_FUNC(32s8s, int, schar, float);
DEF_CVT_SCALE_FUNC(32f8s, float, schar, float);
DEF_CVT_SCALE_FUNC(64f8s, double, schar, float);
DEF_CVT_SCALE_FUNC(8u8s, uchar, schar, float)
DEF_CVT_SCALE_FUNC(8s, schar, schar, float)
DEF_CVT_SCALE_FUNC(16u8s, ushort, schar, float)
DEF_CVT_SCALE_FUNC(16s8s, short, schar, float)
DEF_CVT_SCALE_FUNC(32s8s, int, schar, float)
DEF_CVT_SCALE_FUNC(32f8s, float, schar, float)
DEF_CVT_SCALE_FUNC(64f8s, double, schar, float)
DEF_CVT_SCALE_FUNC(8u16u, uchar, ushort, float);
DEF_CVT_SCALE_FUNC(8s16u, schar, ushort, float);
DEF_CVT_SCALE_FUNC(16u, ushort, ushort, float);
DEF_CVT_SCALE_FUNC(16s16u, short, ushort, float);
DEF_CVT_SCALE_FUNC(32s16u, int, ushort, float);
DEF_CVT_SCALE_FUNC(32f16u, float, ushort, float);
DEF_CVT_SCALE_FUNC(64f16u, double, ushort, float);
DEF_CVT_SCALE_FUNC(8u16u, uchar, ushort, float)
DEF_CVT_SCALE_FUNC(8s16u, schar, ushort, float)
DEF_CVT_SCALE_FUNC(16u, ushort, ushort, float)
DEF_CVT_SCALE_FUNC(16s16u, short, ushort, float)
DEF_CVT_SCALE_FUNC(32s16u, int, ushort, float)
DEF_CVT_SCALE_FUNC(32f16u, float, ushort, float)
DEF_CVT_SCALE_FUNC(64f16u, double, ushort, float)
DEF_CVT_SCALE_FUNC(8u16s, uchar, short, float);
DEF_CVT_SCALE_FUNC(8s16s, schar, short, float);
DEF_CVT_SCALE_FUNC(16u16s, ushort, short, float);
DEF_CVT_SCALE_FUNC(16s, short, short, float);
DEF_CVT_SCALE_FUNC(32s16s, int, short, float);
DEF_CVT_SCALE_FUNC(32f16s, float, short, float);
DEF_CVT_SCALE_FUNC(64f16s, double, short, float);
DEF_CVT_SCALE_FUNC(8u16s, uchar, short, float)
DEF_CVT_SCALE_FUNC(8s16s, schar, short, float)
DEF_CVT_SCALE_FUNC(16u16s, ushort, short, float)
DEF_CVT_SCALE_FUNC(16s, short, short, float)
DEF_CVT_SCALE_FUNC(32s16s, int, short, float)
DEF_CVT_SCALE_FUNC(32f16s, float, short, float)
DEF_CVT_SCALE_FUNC(64f16s, double, short, float)
DEF_CVT_SCALE_FUNC(8u32s, uchar, int, float);
DEF_CVT_SCALE_FUNC(8s32s, schar, int, float);
DEF_CVT_SCALE_FUNC(16u32s, ushort, int, float);
DEF_CVT_SCALE_FUNC(16s32s, short, int, float);
DEF_CVT_SCALE_FUNC(32s, int, int, double);
DEF_CVT_SCALE_FUNC(32f32s, float, int, float);
DEF_CVT_SCALE_FUNC(64f32s, double, int, double);
DEF_CVT_SCALE_FUNC(8u32s, uchar, int, float)
DEF_CVT_SCALE_FUNC(8s32s, schar, int, float)
DEF_CVT_SCALE_FUNC(16u32s, ushort, int, float)
DEF_CVT_SCALE_FUNC(16s32s, short, int, float)
DEF_CVT_SCALE_FUNC(32s, int, int, double)
DEF_CVT_SCALE_FUNC(32f32s, float, int, float)
DEF_CVT_SCALE_FUNC(64f32s, double, int, double)
DEF_CVT_SCALE_FUNC(8u32f, uchar, float, float);
DEF_CVT_SCALE_FUNC(8s32f, schar, float, float);
DEF_CVT_SCALE_FUNC(16u32f, ushort, float, float);
DEF_CVT_SCALE_FUNC(16s32f, short, float, float);
DEF_CVT_SCALE_FUNC(32s32f, int, float, double);
DEF_CVT_SCALE_FUNC(32f, float, float, float);
DEF_CVT_SCALE_FUNC(64f32f, double, float, double);
DEF_CVT_SCALE_FUNC(8u32f, uchar, float, float)
DEF_CVT_SCALE_FUNC(8s32f, schar, float, float)
DEF_CVT_SCALE_FUNC(16u32f, ushort, float, float)
DEF_CVT_SCALE_FUNC(16s32f, short, float, float)
DEF_CVT_SCALE_FUNC(32s32f, int, float, double)
DEF_CVT_SCALE_FUNC(32f, float, float, float)
DEF_CVT_SCALE_FUNC(64f32f, double, float, double)
DEF_CVT_SCALE_FUNC(8u64f, uchar, double, double);
DEF_CVT_SCALE_FUNC(8s64f, schar, double, double);
DEF_CVT_SCALE_FUNC(16u64f, ushort, double, double);
DEF_CVT_SCALE_FUNC(16s64f, short, double, double);
DEF_CVT_SCALE_FUNC(32s64f, int, double, double);
DEF_CVT_SCALE_FUNC(32f64f, float, double, double);
DEF_CVT_SCALE_FUNC(64f, double, double, double);
DEF_CVT_SCALE_FUNC(8u64f, uchar, double, double)
DEF_CVT_SCALE_FUNC(8s64f, schar, double, double)
DEF_CVT_SCALE_FUNC(16u64f, ushort, double, double)
DEF_CVT_SCALE_FUNC(16s64f, short, double, double)
DEF_CVT_SCALE_FUNC(32s64f, int, double, double)
DEF_CVT_SCALE_FUNC(32f64f, float, double, double)
DEF_CVT_SCALE_FUNC(64f, double, double, double)
DEF_CPY_FUNC(8u, uchar);
DEF_CVT_FUNC(8s8u, schar, uchar);
DEF_CVT_FUNC(16u8u, ushort, uchar);
DEF_CVT_FUNC(16s8u, short, uchar);
DEF_CVT_FUNC(32s8u, int, uchar);
DEF_CVT_FUNC(32f8u, float, uchar);
DEF_CVT_FUNC(64f8u, double, uchar);
DEF_CPY_FUNC(8u, uchar)
DEF_CVT_FUNC(8s8u, schar, uchar)
DEF_CVT_FUNC(16u8u, ushort, uchar)
DEF_CVT_FUNC(16s8u, short, uchar)
DEF_CVT_FUNC(32s8u, int, uchar)
DEF_CVT_FUNC(32f8u, float, uchar)
DEF_CVT_FUNC(64f8u, double, uchar)
DEF_CVT_FUNC(8u8s, uchar, schar);
DEF_CVT_FUNC(16u8s, ushort, schar);
DEF_CVT_FUNC(16s8s, short, schar);
DEF_CVT_FUNC(32s8s, int, schar);
DEF_CVT_FUNC(32f8s, float, schar);
DEF_CVT_FUNC(64f8s, double, schar);
DEF_CVT_FUNC(8u8s, uchar, schar)
DEF_CVT_FUNC(16u8s, ushort, schar)
DEF_CVT_FUNC(16s8s, short, schar)
DEF_CVT_FUNC(32s8s, int, schar)
DEF_CVT_FUNC(32f8s, float, schar)
DEF_CVT_FUNC(64f8s, double, schar)
DEF_CVT_FUNC(8u16u, uchar, ushort);
DEF_CVT_FUNC(8s16u, schar, ushort);
DEF_CPY_FUNC(16u, ushort);
DEF_CVT_FUNC(16s16u, short, ushort);
DEF_CVT_FUNC(32s16u, int, ushort);
DEF_CVT_FUNC(32f16u, float, ushort);
DEF_CVT_FUNC(64f16u, double, ushort);
DEF_CVT_FUNC(8u16u, uchar, ushort)
DEF_CVT_FUNC(8s16u, schar, ushort)
DEF_CPY_FUNC(16u, ushort)
DEF_CVT_FUNC(16s16u, short, ushort)
DEF_CVT_FUNC(32s16u, int, ushort)
DEF_CVT_FUNC(32f16u, float, ushort)
DEF_CVT_FUNC(64f16u, double, ushort)
DEF_CVT_FUNC(8u16s, uchar, short);
DEF_CVT_FUNC(8s16s, schar, short);
DEF_CVT_FUNC(16u16s, ushort, short);
DEF_CVT_FUNC(32s16s, int, short);
DEF_CVT_FUNC(32f16s, float, short);
DEF_CVT_FUNC(64f16s, double, short);
DEF_CVT_FUNC(8u16s, uchar, short)
DEF_CVT_FUNC(8s16s, schar, short)
DEF_CVT_FUNC(16u16s, ushort, short)
DEF_CVT_FUNC(32s16s, int, short)
DEF_CVT_FUNC(32f16s, float, short)
DEF_CVT_FUNC(64f16s, double, short)
DEF_CVT_FUNC(8u32s, uchar, int);
DEF_CVT_FUNC(8s32s, schar, int);
DEF_CVT_FUNC(16u32s, ushort, int);
DEF_CVT_FUNC(16s32s, short, int);
DEF_CPY_FUNC(32s, int);
DEF_CVT_FUNC(32f32s, float, int);
DEF_CVT_FUNC(64f32s, double, int);
DEF_CVT_FUNC(8u32s, uchar, int)
DEF_CVT_FUNC(8s32s, schar, int)
DEF_CVT_FUNC(16u32s, ushort, int)
DEF_CVT_FUNC(16s32s, short, int)
DEF_CPY_FUNC(32s, int)
DEF_CVT_FUNC(32f32s, float, int)
DEF_CVT_FUNC(64f32s, double, int)
DEF_CVT_FUNC(8u32f, uchar, float);
DEF_CVT_FUNC(8s32f, schar, float);
DEF_CVT_FUNC(16u32f, ushort, float);
DEF_CVT_FUNC(16s32f, short, float);
DEF_CVT_FUNC(32s32f, int, float);
DEF_CVT_FUNC(64f32f, double, float);
DEF_CVT_FUNC(8u32f, uchar, float)
DEF_CVT_FUNC(8s32f, schar, float)
DEF_CVT_FUNC(16u32f, ushort, float)
DEF_CVT_FUNC(16s32f, short, float)
DEF_CVT_FUNC(32s32f, int, float)
DEF_CVT_FUNC(64f32f, double, float)
DEF_CVT_FUNC(8u64f, uchar, double);
DEF_CVT_FUNC(8s64f, schar, double);
DEF_CVT_FUNC(16u64f, ushort, double);
DEF_CVT_FUNC(16s64f, short, double);
DEF_CVT_FUNC(32s64f, int, double);
DEF_CVT_FUNC(32f64f, float, double);
DEF_CPY_FUNC(64s, int64);
DEF_CVT_FUNC(8u64f, uchar, double)
DEF_CVT_FUNC(8s64f, schar, double)
DEF_CVT_FUNC(16u64f, ushort, double)
DEF_CVT_FUNC(16s64f, short, double)
DEF_CVT_FUNC(32s64f, int, double)
DEF_CVT_FUNC(32f64f, float, double)
DEF_CPY_FUNC(64s, int64)
static BinaryFunc getCvtScaleAbsFunc(int depth)
{
@@ -1161,10 +1306,52 @@ static BinaryFunc getConvertScaleFunc(int sdepth, int ddepth)
return cvtScaleTab[CV_MAT_DEPTH(ddepth)][CV_MAT_DEPTH(sdepth)];
}
#ifdef HAVE_OPENCL
static bool ocl_convertScaleAbs( InputArray _src, OutputArray _dst, double alpha, double beta )
{
int type = _src.type(), depth = CV_MAT_DEPTH(type), cn = CV_MAT_CN(type);
bool doubleSupport = ocl::Device::getDefault().doubleFPConfig() > 0;
if (!doubleSupport && depth == CV_64F)
return false;
char cvt[2][50];
int wdepth = std::max(depth, CV_32F);
ocl::Kernel k("KF", ocl::core::arithm_oclsrc,
format("-D OP_CONVERT_SCALE_ABS -D UNARY_OP -D dstT=uchar -D srcT1=%s"
" -D workT=%s -D convertToWT1=%s -D convertToDT=%s%s",
ocl::typeToStr(depth), ocl::typeToStr(wdepth),
ocl::convertTypeStr(depth, wdepth, 1, cvt[0]),
ocl::convertTypeStr(wdepth, CV_8U, 1, cvt[1]),
doubleSupport ? " -D DOUBLE_SUPPORT" : ""));
if (k.empty())
return false;
_dst.createSameSize(_src, CV_8UC(cn));
UMat src = _src.getUMat(), dst = _dst.getUMat();
ocl::KernelArg srcarg = ocl::KernelArg::ReadOnlyNoSize(src),
dstarg = ocl::KernelArg::WriteOnly(dst, cn);
if (wdepth == CV_32F)
k.args(srcarg, dstarg, (float)alpha, (float)beta);
else if (wdepth == CV_64F)
k.args(srcarg, dstarg, alpha, beta);
size_t globalsize[2] = { src.cols * cn, src.rows };
return k.run(2, globalsize, NULL, false);
}
#endif
}
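// Typical call (illustrative, not from the commit): convertScaleAbs computes
// saturate_cast<uchar>(|alpha*src + beta|) per element, e.g. to display a
// signed 16-bit Sobel response as an 8-bit image:
static void convertScaleAbsExample(const cv::Mat& sobel16s, cv::Mat& vis8u)
{
    cv::convertScaleAbs(sobel16s, vis8u, 0.5, 0.0);
}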
void cv::convertScaleAbs( InputArray _src, OutputArray _dst, double alpha, double beta )
{
CV_OCL_RUN(_src.dims() <= 2 && _dst.isUMat(),
ocl_convertScaleAbs(_src, _dst, alpha, beta))
Mat src = _src.getMat();
int cn = src.channels();
double scale[] = {alpha, beta};
@@ -1300,9 +1487,7 @@ static LUTFunc lutTab[] =
(LUTFunc)LUT8u_32s, (LUTFunc)LUT8u_32f, (LUTFunc)LUT8u_64f, 0
};
}
namespace cv {
#ifdef HAVE_OPENCL
static bool ocl_LUT(InputArray _src, InputArray _lut, OutputArray _dst)
{
@@ -1320,6 +1505,9 @@ static bool ocl_LUT(InputArray _src, InputArray _lut, OutputArray _dst)
format("-D dcn=%d -D lcn=%d -D srcT=%s -D dstT=%s%s", dcn, lcn,
ocl::typeToStr(src.depth()), ocl::typeToStr(ddepth),
doubleSupport ? " -D DOUBLE_SUPPORT" : ""));
if (k.empty())
return false;
k.args(ocl::KernelArg::ReadOnlyNoSize(src), ocl::KernelArg::ReadOnlyNoSize(lut),
ocl::KernelArg::WriteOnly(dst));
@@ -1327,7 +1515,9 @@ static bool ocl_LUT(InputArray _src, InputArray _lut, OutputArray _dst)
return k.run(2, globalSize, NULL, false);
}
} // cv
#endif
}
void cv::LUT( InputArray _src, InputArray _lut, OutputArray _dst )
{
@@ -1338,8 +1528,8 @@ void cv::LUT( InputArray _src, InputArray _lut, OutputArray _dst )
_lut.total() == 256 && _lut.isContinuous() &&
(depth == CV_8U || depth == CV_8S) );
if (ocl::useOpenCL() && _dst.isUMat() && ocl_LUT(_src, _lut, _dst))
return;
CV_OCL_RUN(_dst.isUMat(),
ocl_LUT(_src, _lut, _dst))
Mat src = _src.getMat(), lut = _lut.getMat();
_dst.create(src.dims, src.size, CV_MAKETYPE(_lut.depth(), cn));
@@ -1357,43 +1547,68 @@ void cv::LUT( InputArray _src, InputArray _lut, OutputArray _dst )
func(ptrs[0], lut.data, ptrs[1], len, cn, lutcn);
}
namespace cv {
#ifdef HAVE_OPENCL
static bool ocl_normalize( InputArray _src, OutputArray _dst, InputArray _mask, int rtype,
double scale, double shift )
{
UMat src = _src.getUMat(), dst = _dst.getUMat();
if( _mask.empty() )
src.convertTo( dst, rtype, scale, shift );
else
{
UMat temp;
src.convertTo( temp, rtype, scale, shift );
temp.copyTo( dst, _mask );
}
return true;
}
#endif
}
void cv::normalize( InputArray _src, OutputArray _dst, double a, double b,
int norm_type, int rtype, InputArray _mask )
{
Mat src = _src.getMat(), mask = _mask.getMat();
double scale = 1, shift = 0;
if( norm_type == CV_MINMAX )
{
double smin = 0, smax = 0;
double dmin = MIN( a, b ), dmax = MAX( a, b );
minMaxLoc( _src, &smin, &smax, 0, 0, mask );
minMaxLoc( _src, &smin, &smax, 0, 0, _mask );
scale = (dmax - dmin)*(smax - smin > DBL_EPSILON ? 1./(smax - smin) : 0);
shift = dmin - smin*scale;
}
else if( norm_type == CV_L2 || norm_type == CV_L1 || norm_type == CV_C )
{
scale = norm( src, norm_type, mask );
scale = norm( _src, norm_type, _mask );
scale = scale > DBL_EPSILON ? a/scale : 0.;
shift = 0;
}
else
CV_Error( CV_StsBadArg, "Unknown/unsupported norm type" );
int type = _src.type(), depth = CV_MAT_DEPTH(type), cn = CV_MAT_CN(type);
if( rtype < 0 )
rtype = _dst.fixedType() ? _dst.depth() : src.depth();
rtype = _dst.fixedType() ? _dst.depth() : depth;
_dst.createSameSize(_src, CV_MAKETYPE(rtype, cn));
_dst.create(src.dims, src.size, CV_MAKETYPE(rtype, src.channels()));
Mat dst = _dst.getMat();
CV_OCL_RUN(_dst.isUMat(),
ocl_normalize(_src, _dst, _mask, rtype, scale, shift))
if( !mask.data )
Mat src = _src.getMat(), dst = _dst.getMat();
if( _mask.empty() )
src.convertTo( dst, rtype, scale, shift );
else
{
Mat temp;
src.convertTo( temp, rtype, scale, shift );
temp.copyTo( dst, mask );
temp.copyTo( dst, _mask );
}
}
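The rewritten normalize computes an affine map onto [dmin, dmax]: scale = (dmax - dmin)/(smax - smin) and shift = dmin - smin*scale, then defers to convertTo, now via ocl_normalize when the destination is a UMat. A worked sketch of the CV_MINMAX case (NORM_MINMAX is the C++ spelling of the same constant):

    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        cv::Mat src = (cv::Mat_<float>(1, 3) << 2.f, 4.f, 10.f);
        cv::Mat dst;
        // Map [smin, smax] = [2, 10] onto [0, 1]:
        // scale = (1 - 0)/(10 - 2) = 0.125, shift = 0 - 2*0.125 = -0.25
        cv::normalize(src, dst, 0.0, 1.0, cv::NORM_MINMAX);
        std::cout << dst << std::endl;      // expected: [0, 0.25, 1]
        return 0;
    }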

View File

@@ -166,16 +166,16 @@ static void copyMask##suffix(const uchar* src, size_t sstep, const uchar* mask,
}
DEF_COPY_MASK(8u, uchar);
DEF_COPY_MASK(16u, ushort);
DEF_COPY_MASK(8uC3, Vec3b);
DEF_COPY_MASK(32s, int);
DEF_COPY_MASK(16uC3, Vec3s);
DEF_COPY_MASK(32sC2, Vec2i);
DEF_COPY_MASK(32sC3, Vec3i);
DEF_COPY_MASK(32sC4, Vec4i);
DEF_COPY_MASK(32sC6, Vec6i);
DEF_COPY_MASK(32sC8, Vec8i);
DEF_COPY_MASK(8u, uchar)
DEF_COPY_MASK(16u, ushort)
DEF_COPY_MASK(8uC3, Vec3b)
DEF_COPY_MASK(32s, int)
DEF_COPY_MASK(16uC3, Vec3s)
DEF_COPY_MASK(32sC2, Vec2i)
DEF_COPY_MASK(32sC3, Vec3i)
DEF_COPY_MASK(32sC4, Vec4i)
DEF_COPY_MASK(32sC6, Vec6i)
DEF_COPY_MASK(32sC8, Vec8i)
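The only change in this hunk is dropping the trailing semicolons: each DEF_COPY_MASK invocation already expands to a complete function definition, so the extra ';' was a stray empty declaration (a pedantic-mode warning on some compilers). A hypothetical macro reduced to the same shape:

    // Hypothetical reduction of the DEF_COPY_MASK pattern: the macro emits a
    // full function definition, so no semicolon is needed at the call site.
    #define DEF_SQUARE(suffix, T) \
        static T square_##suffix(T x) { return x * x; }

    DEF_SQUARE(int, int)        // OK: expands to a complete definition
    DEF_SQUARE(float, float)    // a trailing ';' here would be an empty declaration

    int main() { return square_int(3) - 9; }  // returns 0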
BinaryFunc copyMaskTab[] =
{
@@ -247,10 +247,7 @@ void Mat::copyTo( OutputArray _dst ) const
const uchar* sptr = data;
uchar* dptr = dst.data;
// to handle the copying 1xn matrix => nx1 std vector.
Size sz = size() == dst.size() ?
getContinuousSize(*this, dst) :
getContinuousSize(*this);
Size sz = getContinuousSize(*this, dst);
size_t len = sz.width*elemSize();
for( ; sz.height--; sptr += step, dptr += dst.step )
@@ -301,6 +298,7 @@ void Mat::copyTo( OutputArray _dst, InputArray _mask ) const
if( dims <= 2 )
{
CV_Assert( size() == mask.size() );
Size sz = getContinuousSize(*this, dst, mask, mcn);
copymask(data, step, mask.data, mask.step, dst.data, dst.step, sz, &esz);
return;
@@ -355,7 +353,7 @@ Mat& Mat::operator = (const Scalar& s)
Mat& Mat::setTo(InputArray _value, InputArray _mask)
{
if( !data )
if( empty() )
return *this;
Mat value = _value.getMat(), mask = _mask.getMat();
@@ -477,6 +475,8 @@ flipVert( const uchar* src0, size_t sstep, uchar* dst0, size_t dstep, Size size,
}
}
#ifdef HAVE_OPENCL
enum { FLIP_COLS = 1 << 0, FLIP_ROWS = 1 << 1, FLIP_BOTH = FLIP_ROWS | FLIP_COLS };
static bool ocl_flip(InputArray _src, OutputArray _dst, int flipCode )
@@ -521,13 +521,13 @@ static bool ocl_flip(InputArray _src, OutputArray _dst, int flipCode )
return k.args(ocl::KernelArg::ReadOnlyNoSize(src), ocl::KernelArg::WriteOnly(dst), rows, cols).run(2, globalsize, NULL, false);
}
#endif
void flip( InputArray _src, OutputArray _dst, int flip_mode )
{
CV_Assert( _src.dims() <= 2 );
bool use_opencl = ocl::useOpenCL() && _dst.isUMat();
if ( use_opencl && ocl_flip(_src,_dst, flip_mode))
return;
CV_OCL_RUN( _dst.isUMat(), ocl_flip(_src,_dst, flip_mode))
Mat src = _src.getMat();
_dst.create( src.size(), src.type() );
@@ -543,6 +543,7 @@ void flip( InputArray _src, OutputArray _dst, int flip_mode )
flipHoriz( dst.data, dst.step, dst.data, dst.step, dst.size(), esz );
}
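flip() now dispatches through CV_OCL_RUN as well; observable behaviour should be unchanged. A quick sketch of the flip-code convention (0 = around the x-axis, positive = around the y-axis, negative = both):

    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        cv::Mat m = (cv::Mat_<uchar>(2, 2) << 1, 2,
                                              3, 4);
        cv::Mat d;
        cv::flip(m, d, 0);   // vertical flip:   [3, 4; 1, 2]
        cv::flip(m, d, 1);   // horizontal flip: [2, 1; 4, 3]
        cv::flip(m, d, -1);  // both:            [4, 3; 2, 1]
        std::cout << d << std::endl;
        return 0;
    }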
#ifdef HAVE_OPENCL
static bool ocl_repeat(InputArray _src, int ny, int nx, OutputArray _dst)
{
@@ -558,6 +559,8 @@ static bool ocl_repeat(InputArray _src, int ny, int nx, OutputArray _dst)
return true;
}
#endif
void repeat(InputArray _src, int ny, int nx, OutputArray _dst)
{
CV_Assert( _src.dims() <= 2 );
@@ -566,11 +569,8 @@ void repeat(InputArray _src, int ny, int nx, OutputArray _dst)
Size ssize = _src.size();
_dst.create(ssize.height*ny, ssize.width*nx, _src.type());
if (ocl::useOpenCL() && _src.isUMat())
{
CV_Assert(ocl_repeat(_src, ny, nx, _dst));
return;
}
CV_OCL_RUN(_dst.isUMat(),
ocl_repeat(_src, ny, nx, _dst))
Mat src = _src.getMat(), dst = _dst.getMat();
Size dsize = dst.size();
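repeat() gets the same treatment, and dropping the CV_Assert around ocl_repeat means a declined OpenCL run now falls through to the CPU tiling below instead of aborting. Usage sketch:

    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        cv::Mat tile = (cv::Mat_<uchar>(1, 2) << 1, 2);
        cv::Mat out;
        cv::repeat(tile, 2, 3, out);    // tile 2x vertically, 3x horizontally
        std::cout << out << std::endl;  // 2x6: [1,2,1,2,1,2; 1,2,1,2,1,2]
        return 0;
    }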
@@ -632,6 +632,7 @@ int cv::borderInterpolate( int p, int len, int borderType )
}
else if( borderType == BORDER_WRAP )
{
CV_Assert(len > 0);
if( p < 0 )
p -= ((p-len+1)/len)*len;
if( p >= len )
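The new CV_Assert guards the wrap arithmetic against len == 0, which would otherwise divide by zero. The two branches together reduce BORDER_WRAP to p mod len, e.g.:

    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        // (-1 mod 5) -> 4 and (7 mod 5) -> 2, matching the arithmetic above.
        std::cout << cv::borderInterpolate(-1, 5, cv::BORDER_WRAP) << "\n";  // 4
        std::cout << cv::borderInterpolate( 7, 5, cv::BORDER_WRAP) << "\n";  // 2
        return 0;
    }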
@@ -770,6 +771,8 @@ void copyMakeConstBorder_8u( const uchar* src, size_t srcstep, cv::Size srcroi,
}
#ifdef HAVE_OPENCL
namespace cv {
static bool ocl_copyMakeBorder( InputArray _src, OutputArray _dst, int top, int bottom,
@@ -826,14 +829,15 @@ static bool ocl_copyMakeBorder( InputArray _src, OutputArray _dst, int top, int
}
#endif
void cv::copyMakeBorder( InputArray _src, OutputArray _dst, int top, int bottom,
int left, int right, int borderType, const Scalar& value )
{
CV_Assert( top >= 0 && bottom >= 0 && left >= 0 && right >= 0 );
if (ocl::useOpenCL() && _dst.isUMat() && _src.dims() <= 2 &&
ocl_copyMakeBorder(_src, _dst, top, bottom, left, right, borderType, value))
return;
CV_OCL_RUN(_dst.isUMat() && _src.dims() <= 2,
ocl_copyMakeBorder(_src, _dst, top, bottom, left, right, borderType, value))
Mat src = _src.getMat();
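Same macro conversion for copyMakeBorder. A minimal sketch of the public function, assuming a standard build:

    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        cv::Mat src = (cv::Mat_<uchar>(1, 3) << 1, 2, 3);
        cv::Mat dst;
        // Pad one column on each side; BORDER_CONSTANT fills with the given value.
        cv::copyMakeBorder(src, dst, 0, 0, 1, 1, cv::BORDER_CONSTANT, cv::Scalar(9));
        std::cout << dst << std::endl;   // expected: [9, 1, 2, 3, 9]
        return 0;
    }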

View File

@@ -272,9 +272,15 @@ void cv::cuda::GpuMat::copyTo(OutputArray _dst, InputArray _mask, Stream& stream
GpuMat mask = _mask.getGpuMat();
CV_DbgAssert( size() == mask.size() && mask.depth() == CV_8U && (mask.channels() == 1 || mask.channels() == channels()) );
uchar* data0 = _dst.getGpuMat().data;
_dst.create(size(), type());
GpuMat dst = _dst.getGpuMat();
// do not leave dst uninitialized
if (dst.data != data0)
dst.setTo(Scalar::all(0), stream);
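The fix captures the destination pointer before create() and zeroes the buffer only when it was freshly allocated, so pixels excluded by the mask are not left uninitialized; a reused buffer keeps its previous contents. The CPU-side masked-copy semantics it mirrors, as a sketch:

    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        cv::Mat src  = cv::Mat::ones(2, 2, CV_8U) * 7;
        cv::Mat mask = (cv::Mat_<uchar>(2, 2) << 255, 0,
                                                 0, 255);
        cv::Mat dst  = cv::Mat::zeros(2, 2, CV_8U);  // pre-initialized buffer
        src.copyTo(dst, mask);           // only mask != 0 pixels are overwritten
        std::cout << dst << std::endl;   // [7, 0; 0, 7]
        return 0;
    }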
typedef void (*func_t)(const GpuMat& src, const GpuMat& dst, const GpuMat& mask, Stream& stream);
static const func_t funcs[9][4] =
{

View File

@@ -46,6 +46,7 @@
using namespace cv;
using namespace cv::cuda;
#ifdef HAVE_CUDA
namespace
{
size_t alignUpStep(size_t what, size_t alignment)
@@ -56,6 +57,7 @@ namespace
return res;
}
}
#endif
void cv::cuda::CudaMem::create(int rows_, int cols_, int type_)
{

View File

@@ -236,7 +236,7 @@ namespace ocl {
static bool g_isDirect3DDevice9Ex = false; // Direct3DDevice9Ex or Direct3DDevice9 was used
#endif
Context2& initializeContextFromD3D11Device(ID3D11Device* pD3D11Device)
Context& initializeContextFromD3D11Device(ID3D11Device* pD3D11Device)
{
(void)pD3D11Device;
#if !defined(HAVE_DIRECTX)
@@ -338,13 +338,13 @@ Context2& initializeContextFromD3D11Device(ID3D11Device* pD3D11Device)
}
Context2& ctx = Context2::getDefault(false);
Context& ctx = Context::getDefault(false);
initializeContextFromHandle(ctx, platforms[found], context, device);
return ctx;
#endif
}
Context2& initializeContextFromD3D10Device(ID3D10Device* pD3D10Device)
Context& initializeContextFromD3D10Device(ID3D10Device* pD3D10Device)
{
(void)pD3D10Device;
#if !defined(HAVE_DIRECTX)
@@ -446,13 +446,13 @@ Context2& initializeContextFromD3D10Device(ID3D10Device* pD3D10Device)
}
Context2& ctx = Context2::getDefault(false);
Context& ctx = Context::getDefault(false);
initializeContextFromHandle(ctx, platforms[found], context, device);
return ctx;
#endif
}
Context2& initializeContextFromDirect3DDevice9Ex(IDirect3DDevice9Ex* pDirect3DDevice9Ex)
Context& initializeContextFromDirect3DDevice9Ex(IDirect3DDevice9Ex* pDirect3DDevice9Ex)
{
(void)pDirect3DDevice9Ex;
#if !defined(HAVE_DIRECTX)
@@ -555,14 +555,14 @@ Context2& initializeContextFromDirect3DDevice9Ex(IDirect3DDevice9Ex* pDirect3DDe
CV_Error(cv::Error::OpenCLInitError, "OpenCL: Can't create context for DirectX interop");
}
Context2& ctx = Context2::getDefault(false);
Context& ctx = Context::getDefault(false);
initializeContextFromHandle(ctx, platforms[found], context, device);
g_isDirect3DDevice9Ex = true;
return ctx;
#endif
}
Context2& initializeContextFromDirect3DDevice9(IDirect3DDevice9* pDirect3DDevice9)
Context& initializeContextFromDirect3DDevice9(IDirect3DDevice9* pDirect3DDevice9)
{
(void)pDirect3DDevice9;
#if !defined(HAVE_DIRECTX)
@@ -665,7 +665,7 @@ Context2& initializeContextFromDirect3DDevice9(IDirect3DDevice9* pDirect3DDevice
CV_Error(cv::Error::OpenCLInitError, "OpenCL: Can't create context for DirectX interop");
}
Context2& ctx = Context2::getDefault(false);
Context& ctx = Context::getDefault(false);
initializeContextFromHandle(ctx, platforms[found], context, device);
g_isDirect3DDevice9Ex = false;
return ctx;
@@ -720,7 +720,7 @@ void convertToD3D11Texture2D(InputArray src, ID3D11Texture2D* pD3D11Texture2D)
CV_Assert(srcSize.width == (int)desc.Width && srcSize.height == (int)desc.Height);
using namespace cv::ocl;
Context2& ctx = Context2::getDefault();
Context& ctx = Context::getDefault();
cl_context context = (cl_context)ctx.ptr();
UMat u = src.getUMat();
@@ -777,7 +777,7 @@ void convertFromD3D11Texture2D(ID3D11Texture2D* pD3D11Texture2D, OutputArray dst
CV_Assert(textureType >= 0);
using namespace cv::ocl;
Context2& ctx = Context2::getDefault();
Context& ctx = Context::getDefault();
cl_context context = (cl_context)ctx.ptr();
// TODO Need to specify ACCESS_WRITE here somehow to prevent useless data copying!
@@ -868,7 +868,7 @@ void convertToD3D10Texture2D(InputArray src, ID3D10Texture2D* pD3D10Texture2D)
CV_Assert(srcSize.width == (int)desc.Width && srcSize.height == (int)desc.Height);
using namespace cv::ocl;
Context2& ctx = Context2::getDefault();
Context& ctx = Context::getDefault();
cl_context context = (cl_context)ctx.ptr();
UMat u = src.getUMat();
@@ -925,7 +925,7 @@ void convertFromD3D10Texture2D(ID3D10Texture2D* pD3D10Texture2D, OutputArray dst
CV_Assert(textureType >= 0);
using namespace cv::ocl;
Context2& ctx = Context2::getDefault();
Context& ctx = Context::getDefault();
cl_context context = (cl_context)ctx.ptr();
// TODO Need to specify ACCESS_WRITE here somehow to prevent useless data copying!
@@ -1019,7 +1019,7 @@ void convertToDirect3DSurface9(InputArray src, IDirect3DSurface9* pDirect3DSurfa
CV_Assert(srcSize.width == (int)desc.Width && srcSize.height == (int)desc.Height);
using namespace cv::ocl;
Context2& ctx = Context2::getDefault();
Context& ctx = Context::getDefault();
cl_context context = (cl_context)ctx.ptr();
UMat u = src.getUMat();
@@ -1083,7 +1083,7 @@ void convertFromDirect3DSurface9(IDirect3DSurface9* pDirect3DSurface9, OutputArr
CV_Assert(surfaceType >= 0);
using namespace cv::ocl;
Context2& ctx = Context2::getDefault();
Context& ctx = Context::getDefault();
cl_context context = (cl_context)ctx.ptr();
// TODO Need to specify ACCESS_WRITE here somehow to prevent useless data copying!
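This file is a mechanical rename: the experimental ocl::Context2 becomes plain ocl::Context throughout the DirectX interop paths. Every call site keeps the same shape, roughly:

    #include <opencv2/core/ocl.hpp>

    void useDefaultContext()
    {
        // Formerly Context2::getDefault(); assumes an OpenCL-enabled build.
        cv::ocl::Context& ctx = cv::ocl::Context::getDefault();
        void* handle = ctx.ptr();   // raw cl_context handle for interop APIs
        (void)handle;
    }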

View File

@@ -1568,9 +1568,11 @@ PolyLine( Mat& img, const Point* v, int count, bool is_closed,
* External functions *
\****************************************************************************************/
void line( Mat& img, Point pt1, Point pt2, const Scalar& color,
void line( InputOutputArray _img, Point pt1, Point pt2, const Scalar& color,
int thickness, int line_type, int shift )
{
Mat img = _img.getMat();
if( line_type == CV_AA && img.depth() != CV_8U )
line_type = 8;
@@ -1582,10 +1584,12 @@ void line( Mat& img, Point pt1, Point pt2, const Scalar& color,
ThickLine( img, pt1, pt2, buf, thickness, line_type, 3, shift );
}
void rectangle( Mat& img, Point pt1, Point pt2,
void rectangle( InputOutputArray _img, Point pt1, Point pt2,
const Scalar& color, int thickness,
int lineType, int shift )
{
Mat img = _img.getMat();
if( lineType == CV_AA && img.depth() != CV_8U )
lineType = 8;
@@ -1622,9 +1626,11 @@ void rectangle( Mat& img, Rect rec,
}
void circle( Mat& img, Point center, int radius,
void circle( InputOutputArray _img, Point center, int radius,
const Scalar& color, int thickness, int line_type, int shift )
{
Mat img = _img.getMat();
if( line_type == CV_AA && img.depth() != CV_8U )
line_type = 8;
@@ -1647,10 +1653,12 @@ void circle( Mat& img, Point center, int radius,
}
void ellipse( Mat& img, Point center, Size axes,
void ellipse( InputOutputArray _img, Point center, Size axes,
double angle, double start_angle, double end_angle,
const Scalar& color, int thickness, int line_type, int shift )
{
Mat img = _img.getMat();
if( line_type == CV_AA && img.depth() != CV_8U )
line_type = 8;
@@ -1672,9 +1680,11 @@ void ellipse( Mat& img, Point center, Size axes,
_end_angle, buf, thickness, line_type );
}
void ellipse(Mat& img, const RotatedRect& box, const Scalar& color,
void ellipse(InputOutputArray _img, const RotatedRect& box, const Scalar& color,
int thickness, int lineType)
{
Mat img = _img.getMat();
if( lineType == CV_AA && img.depth() != CV_8U )
lineType = 8;
@@ -1918,11 +1928,12 @@ static const int* getFontData(int fontFace)
}
void putText( Mat& img, const String& text, Point org,
void putText( InputOutputArray _img, const String& text, Point org,
int fontFace, double fontScale, Scalar color,
int thickness, int line_type, bool bottomLeftOrigin )
{
Mat img = _img.getMat();
const int* ascii = getFontData(fontFace);
double buf[4];
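All of the drawing primitives above move from Mat& to InputOutputArray, so they now accept UMat (and other array kinds) without an explicit getMat() at the call site. A sketch of what that enables, assuming imgproc is available:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    int main()
    {
        // Drawing onto a UMat compiles now that line()/circle() take InputOutputArray.
        cv::UMat canvas(100, 100, CV_8UC3, cv::Scalar::all(0));
        cv::line(canvas, cv::Point(0, 0), cv::Point(99, 99),
                 cv::Scalar(0, 255, 0), 2);
        cv::circle(canvas, cv::Point(50, 50), 20, cv::Scalar(255, 0, 0), 1);
        return 0;
    }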

Some files were not shown because too many files have changed in this diff.