Merge remote-tracking branch 'origin/2.4' into merge-2.4

Conflicts:
	doc/tutorials/definitions/tocDefinitions.rst
	modules/core/include/opencv2/core/core.hpp
	modules/core/src/system.cpp
	modules/features2d/src/freak.cpp
	modules/ocl/include/opencv2/ocl/ocl.hpp
	modules/ocl/src/cl_context.cpp
	modules/ocl/test/test_api.cpp
This commit is contained in:
Roman Donchenko 2013-12-16 15:02:12 +04:00
commit 9d8d70d6ca
21 changed files with 1577 additions and 266 deletions

View File

@ -11,5 +11,6 @@
.. |Author_EricCh| unicode:: Eric U+0020 Christiansen
.. |Author_AndreyP| unicode:: Andrey U+0020 Pavlenko
.. |Author_AlexS| unicode:: Alexander U+0020 Smorkalov
.. |Author_MimmoC| unicode:: Mimmo U+0020 Cosenza
.. |Author_BarisD| unicode:: Bar U+0131 U+015F U+0020 Evrim U+0020 Demir U+00F6 z
.. |Author_DomenicoB| unicode:: Domenico U+0020 Daniele U+0020 Bloisi

View File

@ -0,0 +1,728 @@
.. _clojure_dev_intro:
Introduction to OpenCV Development with Clojure
***********************************************
As of OpenCV 2.4.4, OpenCV supports desktop Java development using
nearly the same interface as for Android development.
`Clojure <http://clojure.org/>`_ is a contemporary LISP dialect hosted
by the Java Virtual Machine, and it offers complete interoperability
with the underlying JVM. This means that we should even be able to use
the Clojure REPL (Read Eval Print Loop) as an interactive programmable
interface to the underlying OpenCV engine.
What we'll do in this tutorial
==============================
This tutorial will help you set up a basic Clojure environment
for interactively learning OpenCV within the fully programmable
Clojure REPL.
Tutorial source code
--------------------
You can find the runnable source code of the sample in the
:file:`samples/java/clojure/simple-sample` folder of the OpenCV
repository. After having installed OpenCV and Clojure as explained in
the tutorial, issue the following command to run the sample from the
command line.
.. code:: bash
cd path/to/samples/java/clojure/simple-sample
lein run
Preamble
========
For detailed instructions on installing OpenCV with desktop Java support
refer to the `corresponding tutorial <http://docs.opencv.org/2.4.4-beta/doc/tutorials/introduction/desktop_java/java_dev_intro.html>`_.
If you are in a hurry, here is a minimal quick start guide to install
OpenCV on Mac OS X:
NOTE 1: I'm assuming you have already installed
`Xcode <https://developer.apple.com/xcode/>`_, the
`JDK <http://www.oracle.com/technetwork/java/javase/downloads/index.html>`_
and `CMake <http://www.cmake.org/cmake/resources/software.html>`_.
.. code:: bash
cd ~/
mkdir opt
git clone https://github.com/Itseez/opencv.git
cd opencv
git checkout 2.4
mkdir build
cd build
cmake -DBUILD_SHARED_LIBS=OFF ..
...
...
make -j8
# optional
# make install
Install Leiningen
=================
Once you have installed OpenCV with desktop Java support, the only other
requirement is to install
`Leiningen <https://github.com/technomancy/leiningen>`_, which allows
you to manage the entire life cycle of your CLJ projects.
The available `installation guide <https://github.com/technomancy/leiningen#installation>`_ is very easy to follow:
1. `Download the script <https://raw.github.com/technomancy/leiningen/stable/bin/lein>`_
2. Place it on your ``$PATH`` (``~/bin`` is a good choice, provided it is
on your ``$PATH``).
3. Set the script to be executable (e.g. ``chmod 755 ~/bin/lein``).
If you work on Windows, follow `these instructions <https://github.com/technomancy/leiningen#windows>`_ instead.
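On OS X or GNU/Linux, the whole installation boils down to something like the following sketch (assuming ``~/bin`` is already on your ``$PATH`` and that ``wget`` is available):
.. code:: bash
mkdir -p ~/bin
cd ~/bin
wget https://raw.github.com/technomancy/leiningen/stable/bin/lein
chmod 755 lein
# the very first run downloads Leiningen's own jars
lein version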
You now have both the OpenCV library and a fully installed basic Clojure
environment. What is now needed is to configure the Clojure environment
to interact with the OpenCV library.
Install the localrepo Leiningen plugin
=======================================
The set of commands (tasks in Leiningen parlance) natively supported by
Leiningen can be very easily extended by various plugins. One of them is
the `lein-localrepo <https://github.com/kumarshantanu/lein-localrepo>`_
plugin, which allows you to install any jar lib as an artifact in the local
maven repository of your machine (typically in the ``~/.m2/repository``
directory under your home).
We're going to use this ``lein`` plugin to add to the local maven
repository the opencv components needed by Java and Clojure to use the
opencv lib.
Generally speaking, if you want to use a plugin for a single project only,
it can be added directly to the CLJ project created by ``lein``.
Instead, when you want a plugin to be available to any CLJ project in
your user space, you can add it to the ``profiles.clj`` file in the
``~/.lein/`` directory.
The ``lein-localrepo`` plugin will be useful to me in other CLJ
projects where I need to call native libs wrapped by a Java interface,
so I decided to make it available to any CLJ project:
.. code:: bash
mkdir ~/.lein
Create a file named ``profiles.clj`` in the ``~/.lein`` directory and
copy into it the following content:
.. code:: clojure
{:user {:plugins [[lein-localrepo "0.5.2"]]}}
Here we're saying that version ``"0.5.2"`` of the
``lein-localrepo`` plugin will be available to the ``:user`` profile for
any CLJ project created by ``lein``.
You do not need to do anything else to install the plugin because it
will be automatically downloaded from a remote repository the very first
time you issue any ``lein`` task.
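For completeness, if you instead wanted the plugin on a per-project basis only, you could declare it directly in that project's ``project.clj`` (a hypothetical sketch; ``my-project`` is just a placeholder name):
.. code:: clojure
(defproject my-project "0.1.0-SNAPSHOT"
  ;; the plugin is visible to this project only
  :plugins [[lein-localrepo "0.5.2"]])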
Install the java specific libs as local repository
==================================================
If you followed the standard documentation for installing OpenCV on your
computer, you should find the following two libs under the directory
where you built OpenCV:
- the ``build/bin/opencv-247.jar`` java lib
- the ``build/lib/libopencv_java247.dylib`` native lib (or ``.so`` if
you built OpenCV on a GNU/Linux OS)
They are the only opencv libs needed by the JVM to interact with OpenCV.
Take apart the needed opencv libs
---------------------------------
Create a new directory in which to store the above two libs. Start by
copying the ``opencv-247.jar`` lib into it.
.. code:: bash
cd ~/opt
mkdir clj-opencv
cd clj-opencv
cp ~/opt/opencv/build/bin/opencv-247.jar .
First lib done.
Now, to be able to add the ``libopencv_java247.dylib`` shared native lib
to the local maven repository, we first need to package it as a jar
file.
The native lib has to be copied into a directory layout that mimics
the names of your operating system and architecture. I'm using Mac OS
X with an x86 64 bit architecture, so my layout will be the following:
.. code:: bash
mkdir -p native/macosx/x86_64
Copy into the ``x86_64`` directory the ``libopencv_java247.dylib`` lib.
.. code:: bash
cp ~/opt/opencv/build/lib/libopencv_java247.dylib native/macosx/x86_64/
If you're running OpenCV on a different OS/architecture pair, here
is a summary of the mapping you can choose from.
.. code:: bash
OS
Mac OS X -> macosx
Windows -> windows
Linux -> linux
SunOS -> solaris
Architectures
amd64 -> x86_64
x86_64 -> x86_64
x86 -> x86
i386 -> x86
arm -> arm
sparc -> sparc
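For example, on a 64 bit GNU/Linux box the corresponding layout and copy steps would look like this (a sketch; the shared lib is the ``.so`` produced by your own build):
.. code:: bash
mkdir -p native/linux/x86_64
cp ~/opt/opencv/build/lib/libopencv_java247.so native/linux/x86_64/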
Package the native lib as a jar
-------------------------------
Next you need to package the native lib in a jar file by using the
``jar`` command to create a new jar file from a directory.
.. code:: bash
jar -cMf opencv-native-247.jar native
Note that the ``M`` option instructs the ``jar`` command not to create
a MANIFEST file for the artifact.
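If you want to double check the content of the generated jar, the ``t`` option of the ``jar`` command lists the archive entries; the output should look roughly like this:
.. code:: bash
jar -tf opencv-native-247.jar
# native/
# native/macosx/
# native/macosx/x86_64/
# native/macosx/x86_64/libopencv_java247.dylib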
Your directory layout should look like the following:
.. code:: bash
tree
.
|__ native
|   |__ macosx
|       |__ x86_64
|           |__ libopencv_java247.dylib
|
|__ opencv-247.jar
|__ opencv-native-247.jar
3 directories, 3 files
Locally install the jars
------------------------
We are now ready to add the two jars as artifacts to the local maven
repository with the help of the ``lein-localrepo`` plugin.
.. code:: bash
lein localrepo install opencv-247.jar opencv/opencv 2.4.7
Here the ``localrepo install`` task creates the ``2.4.7`` release of
the ``opencv/opencv`` maven artifact from the ``opencv-247.jar`` lib and
then installs it into the local maven repository. The ``opencv/opencv``
artifact will then be available to any maven compliant project
(Leiningen is internally based on maven).
Do the same thing with the native lib previously wrapped in a new jar
file.
.. code:: bash
lein localrepo install opencv-native-247.jar opencv/opencv-native 2.4.7
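If you want to verify that both artifacts really landed in the local maven repository, just inspect it: maven stores artifacts under a ``groupId/artifactId/version`` layout, so you should find something like the following:
.. code:: bash
tree ~/.m2/repository/opencv
# you should see the two installed artifacts:
#   opencv/opencv/2.4.7/opencv-2.4.7.jar
#   opencv/opencv-native/2.4.7/opencv-native-2.4.7.jar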
Note that the groupId, ``opencv``, of the two artifacts is the same. We
are now ready to create a new CLJ project to start interacting with
OpenCV.
Create a project
----------------
Create a new CLJ project by using the ``lein new`` task from the
terminal.
.. code:: bash
# cd in the directory where you work with your development projects (e.g. ~/devel)
lein new simple-sample
Generating a project called simple-sample based on the 'default' template.
To see other templates (app, lein plugin, etc), try `lein help new`.
The above task creates the following ``simple-sample`` directory
layout:
.. code:: bash
tree simple-sample/
simple-sample/
|__ LICENSE
|__ README.md
|__ doc
|   |__ intro.md
|
|__ project.clj
|__ resources
|__ src
|   |__ simple_sample
|       |__ core.clj
|__ test
    |__ simple_sample
        |__ core_test.clj
6 directories, 6 files
We need to add the two ``opencv`` artifacts as dependencies of the newly
created project. Open the ``project.clj`` and modify its dependencies
section as follows:
.. code:: clojure
(defproject simple-sample "0.1.0-SNAPSHOT"
:description "FIXME: write description"
:url "http://example.com/FIXME"
:license {:name "Eclipse Public License"
:url "http://www.eclipse.org/legal/epl-v10.html"}
:dependencies [[org.clojure/clojure "1.5.1"]
[opencv/opencv "2.4.7"] ; added line
[opencv/opencv-native "2.4.7"]]) ;added line
Note that the Clojure Programming Language is a jar artifact too. This
is why Clojure is called a hosted language.
To verify that everything went right, issue the ``lein deps`` task. The
very first time you run a ``lein`` task it will take some time to
download all the required dependencies before executing the task
itself.
.. code:: bash
cd simple-sample
lein deps
...
The ``deps`` task reads and merges all the dependencies of the
``simple-sample`` project from the ``project.clj`` and
``~/.lein/profiles.clj`` files and verifies whether they have already been
cached in the local maven repository. If the task returns without
messages about not being able to retrieve the two new artifacts, your
installation is correct; otherwise go back and double check that you
did everything right.
REPLing with OpenCV
-------------------
Now ``cd`` in the ``simple-sample`` directory and issue the following
``lein`` task:
.. code:: bash
cd simple-sample
lein repl
...
...
nREPL server started on port 50907 on host 127.0.0.1
REPL-y 0.3.0
Clojure 1.5.1
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Exit: Control+D or (exit) or (quit)
Results: Stored in vars *1, *2, *3, an exception in *e
user=>
You can immediately interact with the REPL by issuing any CLJ expression
to be evaluated.
.. code:: clojure
user=> (+ 41 1)
42
user=> (println "Hello, OpenCV!")
Hello, OpenCV!
nil
user=> (defn foo [] (str "bar"))
#'user/foo
user=> (foo)
"bar"
When run from the home directory of a lein based project, the
``lein repl`` task automatically loads all the project dependencies;
however, you still need to load the opencv native library to be able to
interact with OpenCV.
.. code:: clojure
user=> (clojure.lang.RT/loadLibrary org.opencv.core.Core/NATIVE_LIBRARY_NAME)
nil
Then you can start interacting with OpenCV by just referencing the fully
qualified names of its classes.
NOTE 2: `Here <http://docs.opencv.org/java/>`_ you can find the
full OpenCV Java API.
.. code:: clojure
user=> (org.opencv.core.Point. 0 0)
#<Point {0.0, 0.0}>
Here we created a two-dimensional opencv ``Point`` instance. Even though
all the java packages included within the java interface to OpenCV are
immediately available from the CLJ REPL, it's very annoying to prefix
the ``Point.`` instance constructor with the fully qualified package
name.
Fortunately CLJ offers a very easy way to overcome this annoyance by
directly importing the ``Point`` class.
.. code:: clojure
user=> (import 'org.opencv.core.Point)
org.opencv.core.Point
user=> (def p1 (Point. 0 0))
#'user/p1
user=> p1
#<Point {0.0, 0.0}>
user=> (def p2 (Point. 100 100))
#'user/p2
We can even inspect the class of an instance and verify whether the value
of a symbol is an instance of the ``Point`` java class.
.. code:: clojure
user=> (class p1)
org.opencv.core.Point
user=> (instance? org.opencv.core.Point p1)
true
If we now want to use the opencv ``Rect`` class to create a rectangle,
we again have to fully qualify its constructor, even though it lives in
the same ``org.opencv.core`` package as the ``Point`` class.
.. code:: clojure
user=> (org.opencv.core.Rect. p1 p2)
#<Rect {0, 0, 100x100}>
Again, the CLJ importing facilities are very handy and let you map
more symbols in one shot.
.. code:: clojure
user=> (import '[org.opencv.core Point Rect Size])
org.opencv.core.Size
user=> (def r1 (Rect. p1 p2))
#'user/r1
user=> r1
#<Rect {0, 0, 100x100}>
user=> (class r1)
org.opencv.core.Rect
user=> (instance? org.opencv.core.Rect r1)
true
user=> (Size. 100 100)
#<Size 100x100>
user=> (def sq-100 (Size. 100 100))
#'user/sq-100
user=> (class sq-100)
org.opencv.core.Size
user=> (instance? org.opencv.core.Size sq-100)
true
Obviously you can call methods on instances as well.
.. code:: clojure
user=> (.area r1)
10000.0
user=> (.area sq-100)
10000.0
Or modify the value of a member field.
.. code:: clojure
user=> (set! (.x p1) 10)
10
user=> p1
#<Point {10.0, 0.0}>
user=> (set! (.width sq-100) 10)
10
user=> (set! (.height sq-100) 10)
10
user=> (.area sq-100)
100.0
If you find yourself not remembering an OpenCV class's behavior, the
REPL gives you the opportunity to easily search the corresponding
javadoc documentation:
.. code:: clojure
user=> (javadoc Rect)
"http://www.google.com/search?btnI=I%27m%20Feeling%20Lucky&q=allinurl:org/opencv/core/Rect.html"
Mimic the OpenCV Java Tutorial Sample in the REPL
-------------------------------------------------
Let's now try to port to Clojure the `opencv java tutorial sample <http://docs.opencv.org/2.4.4-beta/doc/tutorials/introduction/desktop_java/java_dev_intro.html>`_.
Instead of writing it in a source file we're going to evaluate it at the
REPL.
Following is the original Java source code of the cited sample.
.. code:: java
import org.opencv.core.Mat;
import org.opencv.core.CvType;
import org.opencv.core.Scalar;
class SimpleSample {
static{ System.loadLibrary("opencv_java244"); }
public static void main(String[] args) {
Mat m = new Mat(5, 10, CvType.CV_8UC1, new Scalar(0));
System.out.println("OpenCV Mat: " + m);
Mat mr1 = m.row(1);
mr1.setTo(new Scalar(1));
Mat mc5 = m.col(5);
mc5.setTo(new Scalar(5));
System.out.println("OpenCV Mat data:\n" + m.dump());
}
}
Add injections to the project
-----------------------------
Before we start coding, we'd like to eliminate the boring need to
interactively load the native opencv lib every time we start a new REPL
to interact with it.
First, stop the REPL by evaluating the ``(exit)`` expression at the REPL
prompt.
.. code:: clojure
user=> (exit)
Bye for now!
Then open your ``project.clj`` file and edit it as follows:
.. code:: clojure
(defproject simple-sample "0.1.0-SNAPSHOT"
...
:injections [(clojure.lang.RT/loadLibrary org.opencv.core.Core/NATIVE_LIBRARY_NAME)])
Here we're saying to load the opencv native lib any time we run the REPL,
so that we no longer have to remember to do it manually.
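Putting the two edits together (the dependencies added earlier plus the new ``:injections`` key), your complete ``project.clj`` should now look roughly like this:
.. code:: clojure
(defproject simple-sample "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [opencv/opencv "2.4.7"]
                 [opencv/opencv-native "2.4.7"]]
  :injections [(clojure.lang.RT/loadLibrary org.opencv.core.Core/NATIVE_LIBRARY_NAME)])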
Rerun the ``lein repl`` task:
.. code:: bash
lein repl
nREPL server started on port 51645 on host 127.0.0.1
REPL-y 0.3.0
Clojure 1.5.1
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Exit: Control+D or (exit) or (quit)
Results: Stored in vars *1, *2, *3, an exception in *e
user=>
Import the OpenCV java interfaces we are interested in.
.. code:: clojure
user=> (import '[org.opencv.core Mat CvType Scalar])
org.opencv.core.Scalar
We're going to mimic almost verbatim the original OpenCV java tutorial
to:
- create a 5x10 matrix with all its elements initialized to 0
- change the value of every element of the second row to 1
- change the value of every element of the 6th column to 5
- print the content of the obtained matrix
.. code:: clojure
user=> (def m (Mat. 5 10 CvType/CV_8UC1 (Scalar. 0 0)))
#'user/m
user=> (def mr1 (.row m 1))
#'user/mr1
user=> (.setTo mr1 (Scalar. 1 0))
#<Mat Mat [ 1*10*CV_8UC1, isCont=true, isSubmat=true, nativeObj=0x7fc9dac49880, dataAddr=0x7fc9d9c98d5a ]>
user=> (def mc5 (.col m 5))
#'user/mc5
user=> (.setTo mc5 (Scalar. 5 0))
#<Mat Mat [ 5*1*CV_8UC1, isCont=false, isSubmat=true, nativeObj=0x7fc9d9c995a0, dataAddr=0x7fc9d9c98d55 ]>
user=> (println (.dump m))
[0, 0, 0, 0, 0, 5, 0, 0, 0, 0;
1, 1, 1, 1, 1, 5, 1, 1, 1, 1;
0, 0, 0, 0, 0, 5, 0, 0, 0, 0;
0, 0, 0, 0, 0, 5, 0, 0, 0, 0;
0, 0, 0, 0, 0, 5, 0, 0, 0, 0]
nil
If you are accustomed to a functional language, all those abused and
mutating nouns are going to irritate your preference for verbs. Even
though the CLJ interop syntax is very handy and complete, there is still
an impedance mismatch between any OOP language and any FP language
(Scala being a mixed-paradigm programming language).
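Just to give a taste of a more expression oriented style, the whole matrix manipulation above can also be written as a single form with Clojure's ``doto`` macro (a sketch equivalent to the interactive session, using the same ``Mat``, ``CvType`` and ``Scalar`` imports):
.. code:: clojure
(def m
  (doto (Mat. 5 10 CvType/CV_8UC1 (Scalar. 0 0))
    (-> (.row 1) (.setTo (Scalar. 1 0)))   ; second row -> 1
    (-> (.col 5) (.setTo (Scalar. 5 0))))) ; 6th column -> 5

(println (.dump m))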
To exit the REPL type ``(exit)``, ``Ctrl-D`` or ``(quit)`` at the REPL
prompt.
.. code:: clojure
user=> (exit)
Bye for now!
Interactively load and blur an image
------------------------------------
In the next sample you will learn how to interactively load and blur an
image from the REPL by using the following OpenCV methods:
- the ``imread`` static method from the ``Highgui`` class to read an
image from a file
- the ``imwrite`` static method from the ``Highgui`` class to write an
image to a file
- the ``GaussianBlur`` static method from the ``Imgproc`` class to
blur the original image
We're also going to use the ``Mat`` class, which is returned from the
``imread`` method and accepted as the main argument to both the
``GaussianBlur`` and the ``imwrite`` methods.
Add an image to the project
---------------------------
First we want to add an image file to a newly created directory for
storing static resources of the project.
.. image:: images/lena.png
:alt: Original Image
:align: center
.. code:: bash
mkdir -p resources/images
cp ~/opt/opencv/doc/tutorials/introduction/desktop_java/images/lena.png resources/images/
Read the image
--------------
Now launch the REPL as usual and start by importing all the OpenCV
classes we're going to use:
.. code:: clojure
lein repl
nREPL server started on port 50624 on host 127.0.0.1
REPL-y 0.3.0
Clojure 1.5.1
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Exit: Control+D or (exit) or (quit)
Results: Stored in vars *1, *2, *3, an exception in *e
user=> (import '[org.opencv.core Mat Size CvType]
'[org.opencv.highgui Highgui]
'[org.opencv.imgproc Imgproc])
org.opencv.imgproc.Imgproc
Now read the image from the ``resources/images/lena.png`` file.
.. code:: clojure
user=> (def lena (Highgui/imread "resources/images/lena.png"))
#'user/lena
user=> lena
#<Mat Mat [ 512*512*CV_8UC3, isCont=true, isSubmat=false, nativeObj=0x7f9ab3054c40, dataAddr=0x19fea9010 ]>
As you see, by simply evaluating the ``lena`` symbol we know that
``lena.png`` is a ``512x512`` matrix of ``CV_8UC3`` elements. Let's
create a new ``Mat`` instance of the same dimensions and element type.
.. code:: clojure
user=> (def blurred (Mat. 512 512 CvType/CV_8UC3))
#'user/blurred
user=>
Now apply a ``GaussianBlur`` filter using ``lena`` as the source matrix
and ``blurred`` as the destination matrix.
.. code:: clojure
user=> (Imgproc/GaussianBlur lena blurred (Size. 5 5) 3 3)
nil
As a last step just save the ``blurred`` matrix in a new image file.
.. code:: clojure
user=> (Highgui/imwrite "resources/images/blurred.png" blurred)
true
user=> (exit)
Bye for now!
Following is the new blurred image of Lena.
.. image:: images/blurred.png
:alt: Blurred Image
:align: center
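If you want to keep the recipe around instead of retyping it, the whole session can be condensed into a small function (a sketch reusing the classes imported above; ``blur-image`` is just a name made up for this example):
.. code:: clojure
(defn blur-image
  "Reads the image at src-path, blurs it with a 5x5 Gaussian kernel and
  writes the result to dst-path. Returns true on a successful write."
  [src-path dst-path]
  (let [src (Highgui/imread src-path)
        dst (Mat. (.rows src) (.cols src) (.type src))]
    (Imgproc/GaussianBlur src dst (Size. 5 5) 3 3)
    (Highgui/imwrite dst-path dst)))

(blur-image "resources/images/lena.png" "resources/images/blurred.png")
;; => true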
Next Steps
==========
This tutorial only introduces the very basic environment setup needed to
interact with OpenCV in a CLJ REPL.
I recommend that any Clojure newbie read the `Clojure Java Interop chapter <http://clojure.org/java_interop>`_ to learn everything needed
to interoperate with any plain java lib that has not been wrapped in
Clojure to make it usable in a more idiomatic and functional way within
Clojure.
The OpenCV Java API does not wrap the ``highgui`` module
functionalities that depend on ``Qt`` (e.g. ``namedWindow`` and
``imshow``). If you want to create windows and show images in them
while interacting with OpenCV from the REPL, at the moment you're on
your own. You could use Java Swing to fill the gap.
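As a hint in that direction, here is a rough sketch of how a Swing window could display a ``Mat`` from the REPL: the matrix is encoded to PNG in memory with ``Highgui/imencode`` and then fed to ``ImageIO`` (``show-mat`` is just an illustrative name, not part of OpenCV):
.. code:: clojure
(import '[org.opencv.core MatOfByte]
        '[org.opencv.highgui Highgui]
        '[java.io ByteArrayInputStream]
        '[javax.imageio ImageIO]
        '[javax.swing ImageIcon JFrame JLabel])

(defn show-mat
  "Displays the given Mat in a simple Swing frame."
  [mat title]
  (let [buf (MatOfByte.)]
    ;; encode the matrix to an in-memory PNG, then decode it as a BufferedImage
    (Highgui/imencode ".png" mat buf)
    (let [image (ImageIO/read (ByteArrayInputStream. (.toArray buf)))]
      (doto (JFrame. title)
        (.add (JLabel. (ImageIcon. image)))
        (.pack)
        (.setVisible true)))))

;; e.g. (show-mat lena "Lena")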
License
-------
Copyright © 2013 Giacomo (Mimmo) Cosenza aka Magomimmo
Distributed under the BSD 3-clause License, the same as OpenCV.

Binary file not shown (added image, 351 KiB).

Binary file not shown (added image, 606 KiB).

Binary file not shown (added image, 7.7 KiB).

View File

@ -156,6 +156,21 @@ world of the OpenCV.
:height: 90pt
:width: 90pt
================ =================================================
|ClojureLogo| **Title:** :ref:`clojure_dev_intro`
*Compatibility:* > OpenCV 2.4.4
*Author:* |Author_MimmoC|
A tutorial on how to interactively use OpenCV from the Clojure REPL.
================ =================================================
.. |ClojureLogo| image:: images/clojure-logo.png
:height: 90pt
:width: 90pt
* **Android**
.. tabularcolumns:: m{100pt} m{300pt}
@ -314,6 +329,7 @@ world of the OpenCV.
../windows_visual_studio_image_watch/windows_visual_studio_image_watch
../desktop_java/java_dev_intro
../java_eclipse/java_eclipse
../clojure_dev_intro/clojure_dev_intro
../android_binary_package/android_dev_intro
../android_binary_package/O4A_SDK
../android_binary_package/dev_with_OCV_on_Android

View File

@ -305,6 +305,32 @@ private:
AutoLock& operator = (const AutoLock&);
};
class TLSDataContainer
{
private:
int key_;
protected:
CV_EXPORTS TLSDataContainer();
CV_EXPORTS ~TLSDataContainer(); // virtual is not required
public:
virtual void* createDataInstance() const = 0;
virtual void deleteDataInstance(void* data) const = 0;
CV_EXPORTS void* getData() const;
};
template <typename T>
class TLSData : protected TLSDataContainer
{
public:
inline TLSData() {}
inline ~TLSData() {}
inline T* get() const { return (T*)getData(); }
private:
virtual void* createDataInstance() const { return new T; }
virtual void deleteDataInstance(void* data) const { delete (T*)data; }
};
// The CommandLineParser class is designed for command line arguments parsing
class CV_EXPORTS CommandLineParser

View File

@ -738,26 +738,6 @@ namespace cv {
bool __termination = false;
}
#if defined CVAPI_EXPORTS && defined WIN32 && !defined WINCE
#ifdef HAVE_WINRT
#pragma warning(disable:4447) // Disable warning 'main' signature found without threading model
#endif
BOOL WINAPI DllMain( HINSTANCE, DWORD, LPVOID );
BOOL WINAPI DllMain( HINSTANCE, DWORD fdwReason, LPVOID lpReserved )
{
if( fdwReason == DLL_THREAD_DETACH || fdwReason == DLL_PROCESS_DETACH )
{
if (lpReserved != NULL) // called after ExitProcess() call
cv::__termination = true;
cv::deleteThreadAllocData();
cv::deleteThreadData();
}
return TRUE;
}
#endif
namespace cv
{
@ -841,7 +821,224 @@ void Mutex::lock() { impl->lock(); }
void Mutex::unlock() { impl->unlock(); }
bool Mutex::trylock() { return impl->trylock(); }
//////////////////////////////// thread-local storage ////////////////////////////////
class TLSStorage
{
std::vector<void*> tlsData_;
public:
TLSStorage() { tlsData_.reserve(16); }
~TLSStorage();
inline void* getData(int key) const
{
CV_DbgAssert(key >= 0);
return (key < (int)tlsData_.size()) ? tlsData_[key] : NULL;
}
inline void setData(int key, void* data)
{
CV_DbgAssert(key >= 0);
if (key >= (int)tlsData_.size())
{
tlsData_.resize(key + 1, NULL);
}
tlsData_[key] = data;
}
inline static TLSStorage* get();
};
#ifdef WIN32
#pragma warning(disable:4505) // unreferenced local function has been removed
#ifdef HAVE_WINRT
// using C++11 thread attribute for local thread data
static __declspec( thread ) TLSStorage* g_tlsdata = NULL;
static void deleteThreadData()
{
if (g_tlsdata)
{
delete g_tlsdata;
g_tlsdata = NULL;
}
}
inline TLSStorage* TLSStorage::get()
{
if (!g_tlsdata)
{
g_tlsdata = new TLSStorage;
}
return g_tlsdata;
}
#else
#ifdef WINCE
# define TLS_OUT_OF_INDEXES ((DWORD)0xFFFFFFFF)
#endif
static DWORD tlsKey = TLS_OUT_OF_INDEXES;
static void deleteThreadData()
{
if(tlsKey != TLS_OUT_OF_INDEXES)
{
delete (TLSStorage*)TlsGetValue(tlsKey);
TlsSetValue(tlsKey, NULL);
}
}
inline TLSStorage* TLSStorage::get()
{
if (tlsKey == TLS_OUT_OF_INDEXES)
{
tlsKey = TlsAlloc();
CV_Assert(tlsKey != TLS_OUT_OF_INDEXES);
}
TLSStorage* d = (TLSStorage*)TlsGetValue(tlsKey);
if (!d)
{
d = new TLSStorage;
TlsSetValue(tlsKey, d);
}
return d;
}
#endif //HAVE_WINRT
#if defined CVAPI_EXPORTS && defined WIN32 && !defined WINCE
#ifdef HAVE_WINRT
#pragma warning(disable:4447) // Disable warning 'main' signature found without threading model
#endif
BOOL WINAPI DllMain(HINSTANCE, DWORD fdwReason, LPVOID);
BOOL WINAPI DllMain(HINSTANCE, DWORD fdwReason, LPVOID)
{
if (fdwReason == DLL_THREAD_DETACH || fdwReason == DLL_PROCESS_DETACH)
{
cv::deleteThreadAllocData();
cv::deleteThreadRNGData();
cv::deleteThreadData();
}
return TRUE;
}
#endif
#else
static pthread_key_t tlsKey = 0;
static pthread_once_t tlsKeyOnce = PTHREAD_ONCE_INIT;
static void deleteTLSStorage(void* data)
{
delete (TLSStorage*)data;
}
static void makeKey()
{
int errcode = pthread_key_create(&tlsKey, deleteTLSStorage);
CV_Assert(errcode == 0);
}
inline TLSStorage* TLSStorage::get()
{
pthread_once(&tlsKeyOnce, makeKey);
TLSStorage* d = (TLSStorage*)pthread_getspecific(tlsKey);
if( !d )
{
d = new TLSStorage;
pthread_setspecific(tlsKey, d);
}
return d;
}
#endif
class TLSContainerStorage
{
cv::Mutex mutex_;
std::vector<TLSDataContainer*> tlsContainers_;
public:
TLSContainerStorage() { }
~TLSContainerStorage()
{
for (size_t i = 0; i < tlsContainers_.size(); i++)
{
CV_DbgAssert(tlsContainers_[i] == NULL); // not all keys released
tlsContainers_[i] = NULL;
}
}
int allocateKey(TLSDataContainer* pContainer)
{
cv::AutoLock lock(mutex_);
tlsContainers_.push_back(pContainer);
return (int)tlsContainers_.size() - 1;
}
void releaseKey(int id, TLSDataContainer* pContainer)
{
cv::AutoLock lock(mutex_);
CV_Assert(tlsContainers_[id] == pContainer);
tlsContainers_[id] = NULL;
// currently, we don't go into thread's TLSData and release data for this key
}
void destroyData(int key, void* data)
{
cv::AutoLock lock(mutex_);
TLSDataContainer* k = tlsContainers_[key];
if (!k)
return;
try
{
k->deleteDataInstance(data);
}
catch (...)
{
CV_DbgAssert(k == NULL); // Debug this!
}
}
};
static TLSContainerStorage tlsContainerStorage;
TLSDataContainer::TLSDataContainer()
: key_(-1)
{
key_ = tlsContainerStorage.allocateKey(this);
}
TLSDataContainer::~TLSDataContainer()
{
tlsContainerStorage.releaseKey(key_, this);
key_ = -1;
}
void* TLSDataContainer::getData() const
{
CV_Assert(key_ >= 0);
TLSStorage* tlsData = TLSStorage::get();
void* data = tlsData->getData(key_);
if (!data)
{
data = this->createDataInstance();
CV_DbgAssert(data != NULL);
tlsData->setData(key_, data);
}
return data;
}
TLSStorage::~TLSStorage()
{
for (int i = 0; i < (int)tlsData_.size(); i++)
{
void*& data = tlsData_[i];
if (data)
{
tlsContainerStorage.destroyData(i, data);
data = NULL;
}
}
tlsData_.clear();
}
} // namespace cv
//////////////////////////////// thread-local storage ////////////////////////////////

View File

@ -54,8 +54,9 @@ static const int FREAK_NB_SCALES = FREAK::NB_SCALES;
static const int FREAK_NB_PAIRS = FREAK::NB_PAIRS;
static const int FREAK_NB_ORIENPAIRS = FREAK::NB_ORIENPAIRS;
// default pairs
static const int FREAK_DEF_PAIRS[FREAK::NB_PAIRS] =
{ // default pairs
{
404,431,818,511,181,52,311,874,774,543,719,230,417,205,11,
560,149,265,39,306,165,857,250,8,61,15,55,717,44,412,
592,134,761,695,660,782,625,487,549,516,271,665,762,392,178,
@ -92,15 +93,17 @@ static const int FREAK_DEF_PAIRS[FREAK::NB_PAIRS] =
670,249,36,581,389,605,331,518,442,822
};
// used to sort pairs during pairs selection
struct PairStat
{ // used to sort pairs during pairs selection
{
double mean;
int idx;
};
struct sortMean
{
bool operator()( const PairStat& a, const PairStat& b ) const {
bool operator()( const PairStat& a, const PairStat& b ) const
{
return a.mean < b.mean;
}
};
@ -130,17 +133,21 @@ void FREAK::buildPattern()
radius[6]/2.0, radius[6]/2.0
};
// fill the lookup table
for( int scaleIdx=0; scaleIdx < FREAK_NB_SCALES; ++scaleIdx ) {
for( int scaleIdx=0; scaleIdx < FREAK_NB_SCALES; ++scaleIdx )
{
patternSizes[scaleIdx] = 0; // proper initialization
scalingFactor = std::pow(scaleStep,scaleIdx); //scale of the pattern, scaleStep ^ scaleIdx
for( int orientationIdx = 0; orientationIdx < FREAK_NB_ORIENTATION; ++orientationIdx ) {
for( int orientationIdx = 0; orientationIdx < FREAK_NB_ORIENTATION; ++orientationIdx )
{
theta = double(orientationIdx)* 2*CV_PI/double(FREAK_NB_ORIENTATION); // orientation of the pattern
int pointIdx = 0;
PatternPoint* patternLookupPtr = &patternLookup[0];
for( size_t i = 0; i < 8; ++i ) {
for( int k = 0 ; k < n[i]; ++k ) {
for( size_t i = 0; i < 8; ++i )
{
for( int k = 0 ; k < n[i]; ++k )
{
beta = CV_PI/n[i] * (i%2); // orientation offset so that groups of points on each circles are staggered
alpha = double(k)* 2*CV_PI/double(n[i])+beta+theta;
@ -182,7 +189,8 @@ void FREAK::buildPattern()
orientationPairs[39].i=30; orientationPairs[39].j=33; orientationPairs[40].i=31; orientationPairs[40].j=34; orientationPairs[41].i=32; orientationPairs[41].j=35;
orientationPairs[42].i=36; orientationPairs[42].j=39; orientationPairs[43].i=37; orientationPairs[43].j=40; orientationPairs[44].i=38; orientationPairs[44].j=41;
for( unsigned m = FREAK_NB_ORIENPAIRS; m--; ) {
for( unsigned m = FREAK_NB_ORIENPAIRS; m--; )
{
const float dx = patternLookup[orientationPairs[m].i].x-patternLookup[orientationPairs[m].j].x;
const float dy = patternLookup[orientationPairs[m].i].y-patternLookup[orientationPairs[m].j].y;
const float norm_sq = (dx*dx+dy*dy);
@ -192,30 +200,37 @@ void FREAK::buildPattern()
// build the list of description pairs
std::vector<DescriptionPair> allPairs;
for( unsigned int i = 1; i < (unsigned int)FREAK_NB_POINTS; ++i ) {
for( unsigned int i = 1; i < (unsigned int)FREAK_NB_POINTS; ++i )
{
// (generate all the pairs)
for( unsigned int j = 0; (unsigned int)j < i; ++j ) {
for( unsigned int j = 0; (unsigned int)j < i; ++j )
{
DescriptionPair pair = {(uchar)i,(uchar)j};
allPairs.push_back(pair);
}
}
// Input vector provided
if( !selectedPairs0.empty() ) {
if( (int)selectedPairs0.size() == FREAK_NB_PAIRS ) {
if( !selectedPairs0.empty() )
{
if( (int)selectedPairs0.size() == FREAK_NB_PAIRS )
{
for( int i = 0; i < FREAK_NB_PAIRS; ++i )
descriptionPairs[i] = allPairs[selectedPairs0.at(i)];
}
else {
else
{
CV_Error(Error::StsVecLengthErr, "Input vector does not match the required size");
}
}
else { // default selected pairs
else // default selected pairs
{
for( int i = 0; i < FREAK_NB_PAIRS; ++i )
descriptionPairs[i] = allPairs[FREAK_DEF_PAIRS[i]];
}
}
void FREAK::computeImpl( const Mat& image, std::vector<KeyPoint>& keypoints, Mat& descriptors ) const {
void FREAK::computeImpl( const Mat& image, std::vector<KeyPoint>& keypoints, Mat& descriptors ) const
{
if( image.empty() )
return;
@ -236,8 +251,10 @@ void FREAK::computeImpl( const Mat& image, std::vector<KeyPoint>& keypoints, Mat
int direction1;
// compute the scale index corresponding to the keypoint size and remove keypoints close to the border
if( scaleNormalized ) {
for( size_t k = keypoints.size(); k--; ) {
if( scaleNormalized )
{
for( size_t k = keypoints.size(); k--; )
{
//Is k non-zero? If so, decrement it and continue"
kpScaleIdx[k] = std::max( (int)(std::log(keypoints[k].size/FREAK_SMALLEST_KP_SIZE)*sizeCst+0.5) ,0);
if( kpScaleIdx[k] >= FREAK_NB_SCALES )
@ -247,24 +264,29 @@ void FREAK::computeImpl( const Mat& image, std::vector<KeyPoint>& keypoints, Mat
keypoints[k].pt.y <= patternSizes[kpScaleIdx[k]] ||
keypoints[k].pt.x >= image.cols-patternSizes[kpScaleIdx[k]] ||
keypoints[k].pt.y >= image.rows-patternSizes[kpScaleIdx[k]]
) {
)
{
keypoints.erase(kpBegin+k);
kpScaleIdx.erase(ScaleIdxBegin+k);
}
}
}
else {
else
{
const int scIdx = std::max( (int)(1.0986122886681*sizeCst+0.5) ,0);
for( size_t k = keypoints.size(); k--; ) {
for( size_t k = keypoints.size(); k--; )
{
kpScaleIdx[k] = scIdx; // equivalent to the formule when the scale is normalized with a constant size of keypoints[k].size=3*SMALLEST_KP_SIZE
if( kpScaleIdx[k] >= FREAK_NB_SCALES ) {
if( kpScaleIdx[k] >= FREAK_NB_SCALES )
{
kpScaleIdx[k] = FREAK_NB_SCALES-1;
}
if( keypoints[k].pt.x <= patternSizes[kpScaleIdx[k]] ||
keypoints[k].pt.y <= patternSizes[kpScaleIdx[k]] ||
keypoints[k].pt.x >= image.cols-patternSizes[kpScaleIdx[k]] ||
keypoints[k].pt.y >= image.rows-patternSizes[kpScaleIdx[k]]
) {
)
{
keypoints.erase(kpBegin+k);
kpScaleIdx.erase(ScaleIdxBegin+k);
}
@ -272,7 +294,8 @@ void FREAK::computeImpl( const Mat& image, std::vector<KeyPoint>& keypoints, Mat
}
// allocate descriptor memory, estimate orientations, extract descriptors
if( !extAll ) {
if( !extAll )
{
// extract the best comparisons only
descriptors = cv::Mat::zeros((int)keypoints.size(), FREAK_NB_PAIRS/8, CV_8U);
#if CV_SSE2
@ -280,20 +303,25 @@ void FREAK::computeImpl( const Mat& image, std::vector<KeyPoint>& keypoints, Mat
#else
std::bitset<FREAK_NB_PAIRS>* ptr = (std::bitset<FREAK_NB_PAIRS>*) (descriptors.data+(keypoints.size()-1)*descriptors.step[0]);
#endif
for( size_t k = keypoints.size(); k--; ) {
for( size_t k = keypoints.size(); k--; )
{
// estimate orientation (gradient)
if( !orientationNormalized ) {
if( !orientationNormalized )
{
thetaIdx = 0; // assign 0° to all keypoints
keypoints[k].angle = 0.0;
}
else {
else
{
// get the points intensity value in the un-rotated pattern
for( int i = FREAK_NB_POINTS; i--; ) {
for( int i = FREAK_NB_POINTS; i--; )
{
pointsValue[i] = meanIntensity(image, imgIntegral, keypoints[k].pt.x,keypoints[k].pt.y, kpScaleIdx[k], 0, i);
}
direction0 = 0;
direction1 = 0;
for( int m = 45; m--; ) {
for( int m = 45; m--; )
{
//iterate through the orientation pairs
const int delta = (pointsValue[ orientationPairs[m].i ]-pointsValue[ orientationPairs[m].j ]);
direction0 += delta*(orientationPairs[m].weight_dx)/2048;
@ -309,7 +337,8 @@ void FREAK::computeImpl( const Mat& image, std::vector<KeyPoint>& keypoints, Mat
thetaIdx -= FREAK_NB_ORIENTATION;
}
// extract descriptor at the computed orientation
for( int i = FREAK_NB_POINTS; i--; ) {
for( int i = FREAK_NB_POINTS; i--; )
{
pointsValue[i] = meanIntensity(image, imgIntegral, keypoints[k].pt.x,keypoints[k].pt.y, kpScaleIdx[k], thetaIdx, i);
}
#if CV_SSE2
@ -384,24 +413,29 @@ void FREAK::computeImpl( const Mat& image, std::vector<KeyPoint>& keypoints, Mat
#endif
}
}
else { // extract all possible comparisons for selection
else // extract all possible comparisons for selection
{
descriptors = cv::Mat::zeros((int)keypoints.size(), 128, CV_8U);
std::bitset<1024>* ptr = (std::bitset<1024>*) (descriptors.data+(keypoints.size()-1)*descriptors.step[0]);
for( size_t k = keypoints.size(); k--; ) {
for( size_t k = keypoints.size(); k--; )
{
//estimate orientation (gradient)
if( !orientationNormalized ) {
if( !orientationNormalized )
{
thetaIdx = 0;//assign 0° to all keypoints
keypoints[k].angle = 0.0;
}
else {
else
{
//get the points intensity value in the un-rotated pattern
for( int i = FREAK_NB_POINTS;i--; )
pointsValue[i] = meanIntensity(image, imgIntegral, keypoints[k].pt.x,keypoints[k].pt.y, kpScaleIdx[k], 0, i);
direction0 = 0;
direction1 = 0;
for( int m = 45; m--; ) {
for( int m = 45; m--; )
{
//iterate through the orientation pairs
const int delta = (pointsValue[ orientationPairs[m].i ]-pointsValue[ orientationPairs[m].j ]);
direction0 += delta*(orientationPairs[m].weight_dx)/2048;
@ -418,15 +452,18 @@ void FREAK::computeImpl( const Mat& image, std::vector<KeyPoint>& keypoints, Mat
thetaIdx -= FREAK_NB_ORIENTATION;
}
// get the points intensity value in the rotated pattern
for( int i = FREAK_NB_POINTS; i--; ) {
for( int i = FREAK_NB_POINTS; i--; )
{
pointsValue[i] = meanIntensity(image, imgIntegral, keypoints[k].pt.x,
keypoints[k].pt.y, kpScaleIdx[k], thetaIdx, i);
}
int cnt(0);
for( int i = 1; i < FREAK_NB_POINTS; ++i ) {
for( int i = 1; i < FREAK_NB_POINTS; ++i )
{
//(generate all the pairs)
for( int j = 0; j < i; ++j ) {
for( int j = 0; j < i; ++j )
{
ptr->set(cnt, pointsValue[i] >= pointsValue[j] );
++cnt;
}
@ -442,7 +479,8 @@ uchar FREAK::meanIntensity( const cv::Mat& image, const cv::Mat& integral,
const float kp_y,
const unsigned int scale,
const unsigned int rot,
const unsigned int point) const {
const unsigned int point) const
{
// get point position in image
const PatternPoint& FreakPoint = patternLookup[scale*FREAK_NB_ORIENTATION*FREAK_NB_POINTS + rot*FREAK_NB_POINTS + point];
const float xf = FreakPoint.x+kp_x;
@ -455,7 +493,8 @@ uchar FREAK::meanIntensity( const cv::Mat& image, const cv::Mat& integral,
const float radius = FreakPoint.sigma;
// calculate output:
if( radius < 0.5 ) {
if( radius < 0.5 )
{
// interpolation multipliers:
const int r_x = static_cast<int>((xf-x)*1024);
const int r_y = static_cast<int>((yf-y)*1024);
@ -507,7 +546,8 @@ std::vector<int> FREAK::selectPairs(const std::vector<Mat>& images
if( verbose )
std::cout << "Number of images: " << images.size() << std::endl;
for( size_t i = 0;i < images.size(); ++i ) {
for( size_t i = 0;i < images.size(); ++i )
{
Mat descriptorsTmp;
computeImpl(images[i],keypoints[i],descriptorsTmp);
descriptors.push_back(descriptorsTmp);
@ -520,8 +560,10 @@ std::vector<int> FREAK::selectPairs(const std::vector<Mat>& images
Mat descriptorsFloat = Mat::zeros(descriptors.rows, 903, CV_32F);
std::bitset<1024>* ptr = (std::bitset<1024>*) (descriptors.data+(descriptors.rows-1)*descriptors.step[0]);
for( int m = descriptors.rows; m--; ) {
for( int n = 903; n--; ) {
for( int m = descriptors.rows; m--; )
{
for( int n = 903; n--; )
{
if( ptr->test(n) == true )
descriptorsFloat.at<float>(m,n)=1.0f;
}
@ -529,7 +571,8 @@ std::vector<int> FREAK::selectPairs(const std::vector<Mat>& images
}
std::vector<PairStat> pairStat;
for( int n = 903; n--; ) {
for( int n = 903; n--; )
{
// the higher the variance, the better --> mean = 0.5
PairStat tmp = { fabs( mean(descriptorsFloat.col(n))[0]-0.5 ) ,n};
pairStat.push_back(tmp);
@ -538,19 +581,22 @@ std::vector<int> FREAK::selectPairs(const std::vector<Mat>& images
std::sort( pairStat.begin(),pairStat.end(), sortMean() );
std::vector<PairStat> bestPairs;
for( int m = 0; m < 903; ++m ) {
for( int m = 0; m < 903; ++m )
{
if( verbose )
std::cout << m << ":" << bestPairs.size() << " " << std::flush;
double corrMax(0);
for( size_t n = 0; n < bestPairs.size(); ++n ) {
for( size_t n = 0; n < bestPairs.size(); ++n )
{
int idxA = bestPairs[n].idx;
int idxB = pairStat[m].idx;
double corr(0);
// compute correlation between 2 pairs
corr = fabs(compareHist(descriptorsFloat.col(idxA), descriptorsFloat.col(idxB), HISTCMP_CORREL));
if( corr > corrMax ) {
if( corr > corrMax )
{
corrMax = corr;
if( corrMax >= corrTresh )
break;
@ -560,7 +606,8 @@ std::vector<int> FREAK::selectPairs(const std::vector<Mat>& images
if( corrMax < corrTresh/*0.7*/ )
bestPairs.push_back(pairStat[m]);
if( bestPairs.size() >= 512 ) {
if( bestPairs.size() >= 512 )
{
if( verbose )
std::cout << m << std::endl;
break;
@ -568,11 +615,13 @@ std::vector<int> FREAK::selectPairs(const std::vector<Mat>& images
}
std::vector<int> idxBestPairs;
if( (int)bestPairs.size() >= FREAK_NB_PAIRS ) {
if( (int)bestPairs.size() >= FREAK_NB_PAIRS )
{
for( int i = 0; i < FREAK_NB_PAIRS; ++i )
idxBestPairs.push_back(bestPairs[i].idx);
}
else {
else
{
if( verbose )
std::cout << "correlation threshold too small (restrictive)" << std::endl;
CV_Error(Error::StsError, "correlation threshold too small (restrictive)");
@ -583,11 +632,13 @@ std::vector<int> FREAK::selectPairs(const std::vector<Mat>& images
/*
// create an image showing the brisk pattern
void FREAKImpl::drawPattern()
{ // create an image showing the brisk pattern
{
Mat pattern = Mat::zeros(1000, 1000, CV_8UC3) + Scalar(255,255,255);
int sFac = 500 / patternScale;
for( int n = 0; n < kNB_POINTS; ++n ) {
for( int n = 0; n < kNB_POINTS; ++n )
{
PatternPoint& pt = patternLookup[n];
circle(pattern, Point( pt.x*sFac,pt.y*sFac)+Point(500,500), pt.sigma*sFac, Scalar(0,0,255),2);
// rectangle(pattern, Point( (pt.x-pt.sigma)*sFac,(pt.y-pt.sigma)*sFac)+Point(500,500), Point( (pt.x+pt.sigma)*sFac,(pt.y+pt.sigma)*sFac)+Point(500,500), Scalar(0,0,255),2);
@ -615,11 +666,13 @@ FREAK::~FREAK()
{
}
int FREAK::descriptorSize() const {
int FREAK::descriptorSize() const
{
return FREAK_NB_PAIRS / 8; // descriptor length in bytes
}
int FREAK::descriptorType() const {
int FREAK::descriptorType() const
{
return CV_8U;
}

View File

@ -5,4 +5,7 @@ endif()
set(the_description "OpenCL-accelerated Computer Vision")
ocv_define_module(ocl opencv_core opencv_imgproc opencv_features2d opencv_objdetect opencv_video opencv_calib3d opencv_ml "${OPENCL_LIBRARIES}")
if(TARGET opencv_test_ocl)
target_link_libraries(opencv_test_ocl "${OPENCL_LIBRARIES}")
endif()
ocv_warnings_disable(CMAKE_CXX_FLAGS -Wshadow)

View File

@ -25,12 +25,26 @@ Returns the list of devices
ocl::setDevice
--------------
Returns void
Initialize OpenCL computation context
.. ocv:function:: void ocl::setDevice( const DeviceInfo* info )
:param info: device info
ocl::initializeContext
--------------------------------
Alternative way to initialize OpenCL computation context.
.. ocv:function:: void ocl::initializeContext(void* pClPlatform, void* pClContext, void* pClDevice)
:param pClPlatform: selected ``platform_id`` (via pointer, parameter type is ``cl_platform_id*``)
:param pClContext: selected ``cl_context`` (via pointer, parameter type is ``cl_context*``)
:param pClDevice: selected ``cl_device_id`` (via pointer, parameter type is ``cl_device_id*``)
This function can be used for context initialization with D3D/OpenGL interoperability.
ocl::setBinaryPath
------------------
Returns void

View File

@ -118,6 +118,7 @@ namespace cv
const PlatformInfo* platform;
DeviceInfo();
~DeviceInfo();
};
struct PlatformInfo
@ -136,6 +137,7 @@ namespace cv
std::vector<const DeviceInfo*> devices;
PlatformInfo();
~PlatformInfo();
};
//////////////////////////////// Initialization & Info ////////////////////////
@ -151,6 +153,10 @@ namespace cv
// set device you want to use
CV_EXPORTS void setDevice(const DeviceInfo* info);
// Initialize from OpenCL handles directly.
// Argument types is (pointers): cl_platform_id*, cl_context*, cl_device_id*
CV_EXPORTS void initializeContext(void* pClPlatform, void* pClContext, void* pClDevice);
enum FEATURE_TYPE
{
FEATURE_CL_DOUBLE = 1,

View File

@ -175,6 +175,7 @@ namespace cv
data = m.data;
datastart = m.datastart;
dataend = m.dataend;
clCxt = m.clCxt;
wholerows = m.wholerows;
wholecols = m.wholecols;
offset = m.offset;

View File

@ -57,6 +57,12 @@
namespace cv {
namespace ocl {
using namespace cl_utils;
#if defined(WIN32)
static bool __termination = false;
#endif
struct __Module
{
__Module();
@ -71,36 +77,10 @@ cv::Mutex& getInitializationMutex()
return __module.initializationMutex;
}
struct PlatformInfoImpl
static cv::Mutex& getCurrentContextMutex()
{
cl_platform_id platform_id;
std::vector<int> deviceIDs;
PlatformInfo info;
PlatformInfoImpl()
: platform_id(NULL)
{
}
};
struct DeviceInfoImpl
{
cl_platform_id platform_id;
cl_device_id device_id;
DeviceInfo info;
DeviceInfoImpl()
: platform_id(NULL), device_id(NULL)
{
}
};
static std::vector<PlatformInfoImpl> global_platforms;
static std::vector<DeviceInfoImpl> global_devices;
return __module.currentContextMutex;
}
static bool parseOpenCLVersion(const std::string& versionStr, int& major, int& minor)
{
@ -131,6 +111,141 @@ static bool parseOpenCLVersion(const std::string& versionStr, int& major, int& m
return true;
}
struct PlatformInfoImpl : public PlatformInfo
{
cl_platform_id platform_id;
std::vector<int> deviceIDs;
PlatformInfoImpl()
: platform_id(NULL)
{
}
void init(int id, cl_platform_id platform)
{
CV_Assert(platform_id == NULL);
this->_id = id;
platform_id = platform;
openCLSafeCall(getStringInfo(clGetPlatformInfo, platform, CL_PLATFORM_PROFILE, this->platformProfile));
openCLSafeCall(getStringInfo(clGetPlatformInfo, platform, CL_PLATFORM_VERSION, this->platformVersion));
openCLSafeCall(getStringInfo(clGetPlatformInfo, platform, CL_PLATFORM_NAME, this->platformName));
openCLSafeCall(getStringInfo(clGetPlatformInfo, platform, CL_PLATFORM_VENDOR, this->platformVendor));
openCLSafeCall(getStringInfo(clGetPlatformInfo, platform, CL_PLATFORM_EXTENSIONS, this->platformExtensons));
parseOpenCLVersion(this->platformVersion,
this->platformVersionMajor, this->platformVersionMinor);
}
};
struct DeviceInfoImpl: public DeviceInfo
{
cl_platform_id platform_id;
cl_device_id device_id;
DeviceInfoImpl()
: platform_id(NULL), device_id(NULL)
{
}
void init(int id, PlatformInfoImpl& platformInfoImpl, cl_device_id device)
{
CV_Assert(device_id == NULL);
this->_id = id;
platform_id = platformInfoImpl.platform_id;
device_id = device;
this->platform = &platformInfoImpl;
cl_device_type type = cl_device_type(-1);
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_TYPE, type));
this->deviceType = DeviceType(type);
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_PROFILE, this->deviceProfile));
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_VERSION, this->deviceVersion));
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_NAME, this->deviceName));
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_VENDOR, this->deviceVendor));
cl_uint vendorID = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_VENDOR_ID, vendorID));
this->deviceVendorId = vendorID;
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DRIVER_VERSION, this->deviceDriverVersion));
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_EXTENSIONS, this->deviceExtensions));
parseOpenCLVersion(this->deviceVersion,
this->deviceVersionMajor, this->deviceVersionMinor);
size_t maxWorkGroupSize = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_MAX_WORK_GROUP_SIZE, maxWorkGroupSize));
this->maxWorkGroupSize = maxWorkGroupSize;
cl_uint maxDimensions = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS, maxDimensions));
std::vector<size_t> maxWorkItemSizes(maxDimensions);
openCLSafeCall(clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_ITEM_SIZES, sizeof(size_t) * maxDimensions,
(void *)&maxWorkItemSizes[0], 0));
this->maxWorkItemSizes = maxWorkItemSizes;
cl_uint maxComputeUnits = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_MAX_COMPUTE_UNITS, maxComputeUnits));
this->maxComputeUnits = maxComputeUnits;
cl_ulong localMemorySize = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_LOCAL_MEM_SIZE, localMemorySize));
this->localMemorySize = (size_t)localMemorySize;
cl_ulong maxMemAllocSize = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_MAX_MEM_ALLOC_SIZE, maxMemAllocSize));
this->maxMemAllocSize = (size_t)maxMemAllocSize;
cl_bool unifiedMemory = false;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_HOST_UNIFIED_MEMORY, unifiedMemory));
this->isUnifiedMemory = unifiedMemory != 0;
//initialize extra options for compilation. Currently only fp64 is included.
//Assume 4KB is enough to store all possible extensions.
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_EXTENSIONS, this->deviceExtensions));
size_t fp64_khr = this->deviceExtensions.find("cl_khr_fp64");
if(fp64_khr != std::string::npos)
{
this->compilationExtraOptions += "-D DOUBLE_SUPPORT";
this->haveDoubleSupport = true;
}
else
{
this->haveDoubleSupport = false;
}
size_t intel_platform = platformInfoImpl.platformVendor.find("Intel");
if(intel_platform != std::string::npos)
{
this->compilationExtraOptions += " -D INTEL_DEVICE";
this->isIntelDevice = true;
}
else
{
this->isIntelDevice = false;
}
if (id < 0)
{
#ifdef CL_VERSION_1_2
if (this->deviceVersionMajor > 1 || (this->deviceVersionMajor == 1 && this->deviceVersionMinor >= 2))
{
::clRetainDevice(device);
}
#endif
}
}
};
static std::vector<PlatformInfoImpl> global_platforms;
static std::vector<DeviceInfoImpl> global_devices;
static void split(const std::string &s, char delim, std::vector<std::string> &elems) {
std::stringstream ss(s);
std::string item;
@ -329,8 +444,6 @@ not_found:
static bool __initialized = false;
static int initializeOpenCLDevices()
{
using namespace cl_utils;
assert(!__initialized);
__initialized = true;
@ -351,19 +464,9 @@ static int initializeOpenCLDevices()
for (size_t i = 0; i < platforms.size(); ++i)
{
PlatformInfoImpl& platformInfo = global_platforms[i];
platformInfo.info._id = i;
cl_platform_id platform = platforms[i];
platformInfo.platform_id = platform;
openCLSafeCall(getStringInfo(clGetPlatformInfo, platform, CL_PLATFORM_PROFILE, platformInfo.info.platformProfile));
openCLSafeCall(getStringInfo(clGetPlatformInfo, platform, CL_PLATFORM_VERSION, platformInfo.info.platformVersion));
openCLSafeCall(getStringInfo(clGetPlatformInfo, platform, CL_PLATFORM_NAME, platformInfo.info.platformName));
openCLSafeCall(getStringInfo(clGetPlatformInfo, platform, CL_PLATFORM_VENDOR, platformInfo.info.platformVendor));
openCLSafeCall(getStringInfo(clGetPlatformInfo, platform, CL_PLATFORM_EXTENSIONS, platformInfo.info.platformExtensons));
parseOpenCLVersion(platformInfo.info.platformVersion,
platformInfo.info.platformVersionMajor, platformInfo.info.platformVersionMinor);
platformInfo.init(i, platform);
std::vector<cl_device_id> devices;
cl_int status = getDevices(platform, CL_DEVICE_TYPE_ALL, devices);
@ -375,89 +478,15 @@ static int initializeOpenCLDevices()
int baseIndx = global_devices.size();
global_devices.resize(baseIndx + devices.size());
platformInfo.deviceIDs.resize(devices.size());
platformInfo.info.devices.resize(devices.size());
platformInfo.devices.resize(devices.size());
for(size_t j = 0; j < devices.size(); ++j)
{
cl_device_id device = devices[j];
DeviceInfoImpl& deviceInfo = global_devices[baseIndx + j];
deviceInfo.info._id = baseIndx + j;
deviceInfo.platform_id = platform;
deviceInfo.device_id = device;
deviceInfo.info.platform = &platformInfo.info;
platformInfo.deviceIDs[j] = deviceInfo.info._id;
cl_device_type type = cl_device_type(-1);
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_TYPE, type));
deviceInfo.info.deviceType = DeviceType(type);
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_PROFILE, deviceInfo.info.deviceProfile));
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_VERSION, deviceInfo.info.deviceVersion));
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_NAME, deviceInfo.info.deviceName));
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_VENDOR, deviceInfo.info.deviceVendor));
cl_uint vendorID = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_VENDOR_ID, vendorID));
deviceInfo.info.deviceVendorId = vendorID;
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DRIVER_VERSION, deviceInfo.info.deviceDriverVersion));
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_EXTENSIONS, deviceInfo.info.deviceExtensions));
parseOpenCLVersion(deviceInfo.info.deviceVersion,
deviceInfo.info.deviceVersionMajor, deviceInfo.info.deviceVersionMinor);
size_t maxWorkGroupSize = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_MAX_WORK_GROUP_SIZE, maxWorkGroupSize));
deviceInfo.info.maxWorkGroupSize = maxWorkGroupSize;
cl_uint maxDimensions = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS, maxDimensions));
std::vector<size_t> maxWorkItemSizes(maxDimensions);
openCLSafeCall(clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_ITEM_SIZES, sizeof(size_t) * maxDimensions,
(void *)&maxWorkItemSizes[0], 0));
deviceInfo.info.maxWorkItemSizes = maxWorkItemSizes;
cl_uint maxComputeUnits = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_MAX_COMPUTE_UNITS, maxComputeUnits));
deviceInfo.info.maxComputeUnits = maxComputeUnits;
cl_ulong localMemorySize = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_LOCAL_MEM_SIZE, localMemorySize));
deviceInfo.info.localMemorySize = (size_t)localMemorySize;
cl_ulong maxMemAllocSize = 0;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_MAX_MEM_ALLOC_SIZE, maxMemAllocSize));
deviceInfo.info.maxMemAllocSize = (size_t)maxMemAllocSize;
cl_bool unifiedMemory = false;
openCLSafeCall(getScalarInfo(clGetDeviceInfo, device, CL_DEVICE_HOST_UNIFIED_MEMORY, unifiedMemory));
deviceInfo.info.isUnifiedMemory = unifiedMemory != 0;
//initialize extra options for compilation. Currently only fp64 is included.
//Assume 4KB is enough to store all possible extensions.
openCLSafeCall(getStringInfo(clGetDeviceInfo, device, CL_DEVICE_EXTENSIONS, deviceInfo.info.deviceExtensions));
size_t fp64_khr = deviceInfo.info.deviceExtensions.find("cl_khr_fp64");
if(fp64_khr != std::string::npos)
{
deviceInfo.info.compilationExtraOptions += "-D DOUBLE_SUPPORT";
deviceInfo.info.haveDoubleSupport = true;
}
else
{
deviceInfo.info.haveDoubleSupport = false;
}
size_t intel_platform = platformInfo.info.platformVendor.find("Intel");
if(intel_platform != std::string::npos)
{
deviceInfo.info.compilationExtraOptions += " -D INTEL_DEVICE";
deviceInfo.info.isIntelDevice = true;
}
else
{
deviceInfo.info.isIntelDevice = false;
}
platformInfo.deviceIDs[j] = baseIndx + j;
deviceInfo.init(baseIndx + j, platformInfo, device);
}
}
}
@ -468,7 +497,7 @@ static int initializeOpenCLDevices()
for(size_t j = 0; j < platformInfo.deviceIDs.size(); ++j)
{
DeviceInfoImpl& deviceInfo = global_devices[platformInfo.deviceIDs[j]];
platformInfo.info.devices[j] = &deviceInfo.info;
platformInfo.devices[j] = &deviceInfo;
}
}
@ -487,6 +516,8 @@ DeviceInfo::DeviceInfo()
// nothing
}
DeviceInfo::~DeviceInfo() { }
PlatformInfo::PlatformInfo()
: _id(-1),
platformVersionMajor(0), platformVersionMinor(0)
@ -494,40 +525,135 @@ PlatformInfo::PlatformInfo()
// nothing
}
PlatformInfo::~PlatformInfo() { }
class ContextImpl;
struct CommandQueue
{
ContextImpl* context_;
cl_command_queue clQueue_;
CommandQueue() : context_(NULL), clQueue_(NULL) { }
~CommandQueue() { release(); }
void create(ContextImpl* context_);
void release()
{
#ifdef WIN32
// if process is on termination stage (ExitProcess was called and other threads were terminated)
// then disable command queue release because it may cause program hang
if (!__termination)
#endif
{
if(clQueue_)
{
openCLSafeCall(clReleaseCommandQueue(clQueue_)); // some cleanup problems are here
}
}
clQueue_ = NULL;
context_ = NULL;
}
};
cv::TLSData<CommandQueue> commandQueueTLSData;
//////////////////////////////// OpenCL context ////////////////////////
//This is a global singleton class used to represent a OpenCL context.
class ContextImpl : public Context
{
public:
const cl_device_id clDeviceID;
cl_device_id clDeviceID;
cl_context clContext;
cl_command_queue clCmdQueue;
const DeviceInfo& deviceInfo;
const DeviceInfoImpl& deviceInfoImpl;
protected:
ContextImpl(const DeviceInfo& deviceInfo, cl_device_id clDeviceID)
: clDeviceID(clDeviceID), clContext(NULL), clCmdQueue(NULL), deviceInfo(deviceInfo)
ContextImpl(const DeviceInfoImpl& _deviceInfoImpl, cl_context context)
: clDeviceID(_deviceInfoImpl.device_id), clContext(context), deviceInfoImpl(_deviceInfoImpl)
{
// nothing
#ifdef CL_VERSION_1_2
if (supportsFeature(FEATURE_CL_VER_1_2))
{
openCLSafeCall(clRetainDevice(clDeviceID));
}
#endif
openCLSafeCall(clRetainContext(clContext));
ContextImpl* old = NULL;
{
cv::AutoLock lock(getCurrentContextMutex());
old = currentContext;
currentContext = this;
}
if (old != NULL)
{
delete old;
}
}
~ContextImpl()
{
CV_Assert(this != currentContext);
#ifdef CL_VERSION_1_2
if (supportsFeature(FEATURE_CL_VER_1_2))
{
openCLSafeCall(clReleaseDevice(clDeviceID));
}
#endif
if (deviceInfoImpl._id < 0) // not in the global registry, so we should cleanup it
{
#ifdef CL_VERSION_1_2
if (supportsFeature(FEATURE_CL_VER_1_2))
{
openCLSafeCall(clReleaseDevice(deviceInfoImpl.device_id));
}
#endif
PlatformInfoImpl* platformImpl = (PlatformInfoImpl*)(deviceInfoImpl.platform);
delete platformImpl;
delete const_cast<DeviceInfoImpl*>(&deviceInfoImpl);
}
clDeviceID = NULL;
#ifdef WIN32
// if the process is in its termination stage (ExitProcess was called and other threads were terminated),
// then skip releasing the context because it may cause the program to hang
if (!__termination)
#endif
{
if(clContext)
{
openCLSafeCall(clReleaseContext(clContext));
}
}
clContext = NULL;
}
~ContextImpl();
public:
static void setContext(const DeviceInfo* deviceInfo);
static void initializeContext(void* pClPlatform, void* pClContext, void* pClDevice);
bool supportsFeature(FEATURE_TYPE featureType) const;
static void cleanupContext(void);
static ContextImpl* getContext();
private:
ContextImpl(const ContextImpl&); // disabled
ContextImpl& operator=(const ContextImpl&); // disabled
static ContextImpl* currentContext;
};
static ContextImpl* currentContext = NULL;
ContextImpl* ContextImpl::currentContext = NULL;
static bool __deviceSelected = false;
Context* Context::getContext()
{
return ContextImpl::getContext();
}
ContextImpl* ContextImpl::getContext()
{
if (currentContext == NULL)
{
@ -571,7 +697,7 @@ bool Context::supportsFeature(FEATURE_TYPE featureType) const
const DeviceInfo& Context::getDeviceInfo() const
{
return ((ContextImpl*)this)->deviceInfo;
return ((ContextImpl*)this)->deviceInfoImpl;
}
const void* Context::getOpenCLContextPtr() const
@ -581,7 +707,13 @@ const void* Context::getOpenCLContextPtr() const
const void* Context::getOpenCLCommandQueuePtr() const
{
return &(((ContextImpl*)this)->clCmdQueue);
ContextImpl* pThis = (ContextImpl*)this;
CommandQueue* commandQueue = commandQueueTLSData.get();
if (commandQueue->context_ != pThis)
{
commandQueue->create(pThis);
}
return &commandQueue->clQueue_;
}
const void* Context::getOpenCLDeviceIDPtr() const
@ -595,44 +727,18 @@ bool ContextImpl::supportsFeature(FEATURE_TYPE featureType) const
switch (featureType)
{
case FEATURE_CL_INTEL_DEVICE:
return deviceInfo.isIntelDevice;
return deviceInfoImpl.isIntelDevice;
case FEATURE_CL_DOUBLE:
return deviceInfo.haveDoubleSupport;
return deviceInfoImpl.haveDoubleSupport;
case FEATURE_CL_UNIFIED_MEM:
return deviceInfo.isUnifiedMemory;
return deviceInfoImpl.isUnifiedMemory;
case FEATURE_CL_VER_1_2:
return deviceInfo.deviceVersionMajor > 1 || (deviceInfo.deviceVersionMajor == 1 && deviceInfo.deviceVersionMinor >= 2);
return deviceInfoImpl.deviceVersionMajor > 1 || (deviceInfoImpl.deviceVersionMajor == 1 && deviceInfoImpl.deviceVersionMinor >= 2);
}
CV_Error(CV_StsBadArg, "Invalid feature type");
return false;
}
#if defined(WIN32)
static bool __termination = false;
#endif
ContextImpl::~ContextImpl()
{
#ifdef WIN32
// if process is on termination stage (ExitProcess was called and other threads were terminated)
// then disable command queue release because it may cause program hang
if (!__termination)
#endif
{
if(clCmdQueue)
{
openCLSafeCall(clReleaseCommandQueue(clCmdQueue)); // some cleanup problems are here
}
if(clContext)
{
openCLSafeCall(clReleaseContext(clContext));
}
}
clCmdQueue = NULL;
clContext = NULL;
}
void fft_teardown();
void clBlasTeardown();
@ -641,53 +747,69 @@ void ContextImpl::cleanupContext(void)
fft_teardown();
clBlasTeardown();
cv::AutoLock lock(__module.currentContextMutex);
cv::AutoLock lock(getCurrentContextMutex());
if (currentContext)
delete currentContext;
currentContext = NULL;
{
ContextImpl* ctx = currentContext;
currentContext = NULL;
delete ctx;
}
}
void ContextImpl::setContext(const DeviceInfo* deviceInfo)
{
CV_Assert(deviceInfo->_id >= 0 && deviceInfo->_id < (int)global_devices.size());
CV_Assert(deviceInfo->_id >= 0); // we can't specify custom devices
CV_Assert(deviceInfo->_id < (int)global_devices.size());
{
cv::AutoLock lock(__module.currentContextMutex);
cv::AutoLock lock(getCurrentContextMutex());
if (currentContext)
{
if (currentContext->deviceInfo._id == deviceInfo->_id)
if (currentContext->deviceInfoImpl._id == deviceInfo->_id)
return;
}
}
DeviceInfoImpl& infoImpl = global_devices[deviceInfo->_id];
CV_Assert(deviceInfo == &infoImpl.info);
CV_Assert(deviceInfo == &infoImpl);
cl_int status = 0;
cl_context_properties cps[3] = { CL_CONTEXT_PLATFORM, (cl_context_properties)(infoImpl.platform_id), 0 };
cl_context clContext = clCreateContext(cps, 1, &infoImpl.device_id, NULL, NULL, &status);
openCLVerifyCall(status);
#ifdef PRINT_KERNEL_RUN_TIME
cl_command_queue clCmdQueue = clCreateCommandQueue(clContext, infoImpl.device_id, CL_QUEUE_PROFILING_ENABLE, &status);
#else /*PRINT_KERNEL_RUN_TIME*/
cl_command_queue clCmdQueue = clCreateCommandQueue(clContext, infoImpl.device_id, 0, &status);
#endif /*PRINT_KERNEL_RUN_TIME*/
ContextImpl* ctx = new ContextImpl(infoImpl, clContext);
clReleaseContext(clContext);
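// the ContextImpl constructor retained its own reference, so the local one can be dropped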
(void)ctx;
}
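// Wraps externally created OpenCL handles: builds PlatformInfoImpl/DeviceInfoImpl entries with
// _id == -1 (i.e. not in the global registry) and installs a new current context around them.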
void ContextImpl::initializeContext(void* pClPlatform, void* pClContext, void* pClDevice)
{
CV_Assert(pClPlatform != NULL);
CV_Assert(pClContext != NULL);
CV_Assert(pClDevice != NULL);
cl_platform_id platform = *(cl_platform_id*)pClPlatform;
cl_context context = *(cl_context*)pClContext;
cl_device_id device = *(cl_device_id*)pClDevice;
PlatformInfoImpl* platformInfoImpl = new PlatformInfoImpl();
platformInfoImpl->init(-1, platform);
DeviceInfoImpl* deviceInfoImpl = new DeviceInfoImpl();
deviceInfoImpl->init(-1, *platformInfoImpl, device);
ContextImpl* ctx = new ContextImpl(*deviceInfoImpl, context);
(void)ctx;
}
void CommandQueue::create(ContextImpl* context)
{
release();
cl_int status = 0;
// TODO add CL_QUEUE_PROFILING_ENABLE
cl_command_queue clCmdQueue = clCreateCommandQueue(context->clContext, context->clDeviceID, 0, &status);
openCLVerifyCall(status);
ContextImpl* ctx = new ContextImpl(infoImpl.info, infoImpl.device_id);
ctx->clCmdQueue = clCmdQueue;
ctx->clContext = clContext;
ContextImpl* old = NULL;
{
cv::AutoLock lock(__module.currentContextMutex);
old = currentContext;
currentContext = ctx;
}
if (old != NULL)
{
delete old;
}
context_ = context;
clQueue_ = clCmdQueue;
}
int getOpenCLPlatforms(PlatformsInfo& platforms)
@ -700,7 +822,7 @@ int getOpenCLPlatforms(PlatformsInfo& platforms)
for (size_t id = 0; id < global_platforms.size(); ++id)
{
PlatformInfoImpl& impl = global_platforms[id];
platforms.push_back(&impl.info);
platforms.push_back(&impl);
}
return platforms.size();
@ -730,9 +852,9 @@ int getOpenCLDevices(std::vector<const DeviceInfo*> &devices, int deviceType, co
for (size_t id = 0; id < global_devices.size(); ++id)
{
DeviceInfoImpl& deviceInfo = global_devices[id];
if (((int)deviceInfo.info.deviceType & deviceType) != 0)
if (((int)deviceInfo.deviceType & deviceType) != 0)
{
devices.push_back(&deviceInfo.info);
devices.push_back(&deviceInfo);
}
}
}
@ -765,6 +887,20 @@ void setDevice(const DeviceInfo* info)
}
}
void initializeContext(void* pClPlatform, void* pClContext, void* pClDevice)
{
try
{
ContextImpl::initializeContext(pClPlatform, pClContext, pClDevice);
__deviceSelected = true;
}
catch (...)
{
__deviceSelected = true;
throw;
}
}
bool supportsFeature(FEATURE_TYPE featureType)
{
return Context::getContext()->supportsFeature(featureType);

@ -40,7 +40,7 @@
//M*/
#include "test_precomp.hpp"
#include "opencv2/core/opencl/runtime/opencl_core.hpp" // for OpenCL types: cl_mem
#include "opencv2/core/opencl/runtime/opencl_core.hpp" // for OpenCL types & functions
#include "opencv2/core/ocl.hpp"
TEST(TestAPI, openCLExecuteKernelInterop)
@ -127,3 +127,87 @@ TEST(OCL_TestTAPI, performance)
t = (double)cv::getTickCount() - t;
printf("cpu exec time = %gms per iter\n", t*1000./niters/cv::getTickFrequency());
}
// This test must be DISABLED by default!
// (We can't restore the original context for the other tests)
TEST(TestAPI, DISABLED_InitializationFromHandles)
{
#define MAX_PLATFORMS 16
cl_platform_id platforms[MAX_PLATFORMS] = { NULL };
cl_uint numPlatforms = 0;
cl_int status = ::clGetPlatformIDs(MAX_PLATFORMS, &platforms[0], &numPlatforms);
ASSERT_EQ(CL_SUCCESS, status) << "clGetPlatformIDs";
ASSERT_NE(0, (int)numPlatforms);
int selectedPlatform = 0;
cl_platform_id platform = platforms[selectedPlatform];
ASSERT_NE((void*)NULL, platform);
cl_device_id device = NULL;
status = ::clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL);
ASSERT_EQ(CL_SUCCESS, status) << "clGetDeviceIDs";
ASSERT_NE((void*)NULL, device);
cl_context_properties cps[3] = { CL_CONTEXT_PLATFORM, (cl_context_properties)(platform), 0 };
cl_context context = ::clCreateContext(cps, 1, &device, NULL, NULL, &status);
ASSERT_EQ(CL_SUCCESS, status) << "clCreateContext";
ASSERT_NE((void*)NULL, context);
ASSERT_NO_THROW(cv::ocl::initializeContext(&platform, &context, &device));
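// cv::ocl retains its own reference to the context, so the test's reference can be released now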
status = ::clReleaseContext(context);
ASSERT_EQ(CL_SUCCESS, status) << "clReleaseContext";
#ifdef CL_VERSION_1_2
#if 1
{
cv::ocl::Context* ctx = cv::ocl::Context::getContext();
ASSERT_NE((void*)NULL, ctx);
if (ctx->supportsFeature(cv::ocl::FEATURE_CL_VER_1_2)) // device supports OpenCL 1.2+
{
status = ::clReleaseDevice(device);
ASSERT_EQ(CL_SUCCESS, status) << "clReleaseDevice";
}
}
#else // the code below doesn't work on Linux (SEGFAULTs on devices older than OpenCL 1.2 are not handled via exceptions)
try
{
status = ::clReleaseDevice(device); // NOTE: this works only with !DEVICES! that support OpenCL 1.2
(void)status; // no check
}
catch (...)
{
// nothing, there is no problem
}
#endif
#endif
// print the name of the current device
cv::ocl::Context* ctx = cv::ocl::Context::getContext();
ASSERT_NE((void*)NULL, ctx);
const cv::ocl::DeviceInfo& deviceInfo = ctx->getDeviceInfo();
std::cout << "Device name: " << deviceInfo.deviceName << std::endl;
std::cout << "Platform name: " << deviceInfo.platform->platformName << std::endl;
ASSERT_EQ(context, *(cl_context*)ctx->getOpenCLContextPtr());
ASSERT_EQ(device, *(cl_device_id*)ctx->getOpenCLDeviceIDPtr());
// do some calculations and check results
cv::RNG rng;
Size sz(100, 100);
cv::Mat srcMat = cvtest::randomMat(rng, sz, CV_32FC4, -10, 10, false);
cv::Mat dstMat;
cv::ocl::oclMat srcGpuMat(srcMat);
cv::ocl::oclMat dstGpuMat;
cv::Scalar v = cv::Scalar::all(1);
cv::add(srcMat, v, dstMat);
cv::ocl::add(srcGpuMat, v, dstGpuMat);
cv::Mat dstGpuMatMap;
dstGpuMat.download(dstGpuMatMap);
EXPECT_LE(checkNorm(dstMat, dstGpuMatMap), 1e-3);
}

@ -46,7 +46,7 @@ int main(int argc, char** argv)
const char* algorithm_opt = "--algorithm=";
const char* maxdisp_opt = "--max-disparity=";
const char* blocksize_opt = "--blocksize=";
const char* nodisplay_opt = "--no-display=";
const char* nodisplay_opt = "--no-display";
const char* scale_opt = "--scale=";
if(argc < 3)

@ -0,0 +1,9 @@
/target
/classes
/checkouts
pom.xml
pom.xml.asc
*.jar
*.class
/.lein-*
/.nrepl-port

@ -0,0 +1,14 @@
(defproject simple-sample "0.1.0-SNAPSHOT"
:pom-addition [:developers [:developer {:id "magomimmo"}
[:name "Mimmo Cosenza"]
[:url "https://github.com/magomimmoo"]]]
:description "A simple project to start REPLing with OpenCV"
:url "http://example.com/FIXME"
:license {:name "BSD 3-Clause License"
:url "http://opensource.org/licenses/BSD-3-Clause"}
:dependencies [[org.clojure/clojure "1.5.1"]
[opencv/opencv "2.4.7"]
[opencv/opencv-native "2.4.7"]]
:main simple-sample.core
:injections [(clojure.lang.RT/loadLibrary org.opencv.core.Core/NATIVE_LIBRARY_NAME)])
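The :injections entry above loads the OpenCV native library when the project JVM starts, so OpenCV classes can be used straight away from an interactive session. A minimal sketch, assuming the project above is in place and its dependencies resolve, evaluated at a "lein repl" prompt:
;; sketch only: evaluated at a `lein repl` prompt inside the simple-sample project
(import '[org.opencv.core Mat CvType Scalar])
(def m (Mat. 3 3 CvType/CV_8UC1 (Scalar. 0.0)))
(println (.dump m)) ; prints a 3x3 matrix of zeros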

Binary file not shown (new image file, 606 KiB).
@ -0,0 +1,16 @@
;;; To run this code from the terminal: "$ lein run". It will save a
;;; blurred version of resources/images/lena.png as
;;; resources/images/blurred.png
(ns simple-sample.core
(:import [org.opencv.core Point Rect Mat CvType Size Scalar]
org.opencv.highgui.Highgui
org.opencv.imgproc.Imgproc))
(defn -main [& args]
(let [lena (Highgui/imread "resources/images/lena.png")
blurred (Mat. 512 512 CvType/CV_8UC3)]
(print "Blurring...")
(Imgproc/GaussianBlur lena blurred (Size. 5 5) 3 3)
(Highgui/imwrite "resources/images/blurred.png" blurred)
(println "done!")))

@ -0,0 +1,7 @@
(ns simple-sample.core-test
(:require [clojure.test :refer :all]
[simple-sample.core :refer :all]))
(deftest a-test
(testing "FIXME, I fail."
(is (= 0 1))))