[/
 / Copyright (c) 2009 Helge Bahmann
 / Copyright (c) 2014, 2017 Andrey Semashev
 /
 / Distributed under the Boost Software License, Version 1.0. (See accompanying
 / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 /]

[library Boost.Atomic
    [quickbook 1.4]
    [authors [Bahmann, Helge][Semashev, Andrey]]
    [copyright 2011 Helge Bahmann]
    [copyright 2012 Tim Blechmann]
    [copyright 2013, 2017 Andrey Semashev]
    [id atomic]
    [dirname atomic]
    [purpose Atomic operations]
    [license
        Distributed under the Boost Software License, Version 1.0.
        (See accompanying file LICENSE_1_0.txt or copy at
        [@http://www.boost.org/LICENSE_1_0.txt])
    ]
]

[section:introduction Introduction]

[section:introduction_presenting Presenting Boost.Atomic]

[*Boost.Atomic] is a library that provides [^atomic]
data types and operations on these data types, as well as memory
ordering constraints required for coordinating multiple threads through
atomic variables. It implements the interface as defined by the C++11
standard, but makes this feature available for platforms lacking
system/compiler support for this particular C++11 feature.

Users of this library should already be familiar with concurrency
in general, as well as elementary concepts such as "mutual exclusion".

The implementation makes use of processor-specific instructions where
possible (via inline assembler, platform libraries or compiler
intrinsics), and falls back to "emulating" atomic operations through
locking.

[endsect]

[section:introduction_purpose Purpose]

Operations on "ordinary" variables are not guaranteed to be atomic.
This means that with [^int n=0] initially, two threads concurrently
executing

[c++]

    void function()
    {
        n ++;
    }

might result in [^n==1] instead of 2: Each thread will read the
old value into a processor register, increment it and write the result
back. Both threads may therefore write [^1], unaware that the other thread
is doing likewise.

Declaring [^atomic<int> n=0] instead, the same operation on
this variable will always result in [^n==2] as each operation on this
variable is ['atomic]: This means that each operation behaves as if it
were strictly sequentialized with respect to the other.
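
For illustration, the atomic version of the counter could look like the
following sketch (assuming `boost/atomic.hpp` is included; the increment is
written as an explicit [^fetch_add] to make the atomicity visible):

[c++]

    #include <boost/atomic.hpp>

    boost::atomic<int> n(0);

    void function()
    {
        // an atomic read-modify-write: two concurrent calls always leave n == 2
        n.fetch_add(1);
    }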

Atomic variables are useful for two purposes:

* as a means for coordinating multiple threads via custom
  coordination protocols
* as faster alternatives to "locked" access to simple variables

Take a look at the [link atomic.usage_examples examples] section
for common patterns.

[endsect]

[endsect]

[section:thread_coordination Thread coordination using Boost.Atomic]

The most common use of [*Boost.Atomic] is to realize custom
thread synchronization protocols: The goal is to coordinate
accesses of threads to shared variables in order to avoid
"conflicts". The programmer must be aware of the fact that
compilers, CPUs and the cache hierarchies may generally reorder
memory references at will. As a consequence a program such as:

[c++]

    int x = 0, y = 0;

    thread1:
        x = 1;
        y = 1;

    thread2:
        if (y == 1) {
            assert(x == 1);
        }

might indeed fail as there is no guarantee that the read of `x`
by thread2 "sees" the write by thread1.

[*Boost.Atomic] uses a synchronization concept based on the
['happens-before] relation to describe the guarantees under
which situations such as the above one cannot occur.

The remainder of this section will discuss ['happens-before] in
a "hands-on" way instead of giving a fully formalized definition.
The reader is encouraged to additionally have a
look at the discussion of the correctness of a few of the
[link atomic.usage_examples examples] afterwards.

[section:mutex Enforcing ['happens-before] through mutual exclusion]

As an introductory example to understand how arguing using
['happens-before] works, consider two threads synchronizing
using a common mutex:

[c++]

    mutex m;

    thread1:
        m.lock();
        ... /* A */
        m.unlock();

    thread2:
        m.lock();
        ... /* B */
        m.unlock();

The "lockset-based intuition" would be to argue that A and B
cannot be executed concurrently as the code paths require a
common lock to be held.

One can however also arrive at the same conclusion using
['happens-before]: Either thread1 or thread2 will succeed first
at [^m.lock()]. If this is thread1, then as a consequence,
thread2 cannot succeed at [^m.lock()] before thread1 has executed
[^m.unlock()], consequently A ['happens-before] B in this case.
By symmetry, if thread2 succeeds at [^m.lock()] first, we can
conclude B ['happens-before] A.

Since this already exhausts all options, we can conclude that
either A ['happens-before] B or B ['happens-before] A must
always hold. Obviously we cannot state ['which] of the two relationships
holds, but either one is sufficient to conclude that A and B
cannot conflict.

Compare the [link boost_atomic.usage_examples.example_spinlock spinlock]
implementation to see how the mutual exclusion concept can be
mapped to [*Boost.Atomic].

[endsect]

[section:release_acquire ['happens-before] through [^release] and [^acquire]]

The most basic pattern for coordinating threads via [*Boost.Atomic]
uses [^release] and [^acquire] on an atomic variable for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently writes (or atomically
  modifies) an atomic variable with [^release] semantic,
* ... thread2 reads (or atomically reads-and-modifies)
  this value from the same atomic variable with
  [^acquire] semantic and
* ... thread2 subsequently performs an operation B,

... then A ['happens-before] B.

Consider the following example:

[c++]

    atomic<int> a(0);

    thread1:
        ... /* A */
        a.fetch_add(1, memory_order_release);

    thread2:
        int tmp = a.load(memory_order_acquire);
        if (tmp == 1) {
            ... /* B */
        } else {
            ... /* C */
        }

In this example, two avenues for execution are possible:

* The [^store] operation by thread1 precedes the [^load] by thread2:
  In this case thread2 will execute B and "A ['happens-before] B"
  holds as all of the criteria above are satisfied.
* The [^load] operation by thread2 precedes the [^store] by thread1:
  In this case, thread2 will execute C, but "A ['happens-before] C"
  does ['not] hold: thread2 does not read the value written by
  thread1 through [^a].

Therefore, A and B cannot conflict, but A and C ['can] conflict.

[endsect]

[section:fences Fences]

Ordering constraints are generally specified together with an access to
an atomic variable. It is however also possible to issue "fence"
operations in isolation; in this case the fence operates in
conjunction with preceding (for `acquire`, `consume` or `seq_cst`
operations) or succeeding (for `release` or `seq_cst`) atomic
operations.

The example from the previous section could also be written in
the following way:

[c++]

    atomic<int> a(0);

    thread1:
        ... /* A */
        atomic_thread_fence(memory_order_release);
        a.fetch_add(1, memory_order_relaxed);

    thread2:
        int tmp = a.load(memory_order_relaxed);
        if (tmp == 1) {
            atomic_thread_fence(memory_order_acquire);
            ... /* B */
        } else {
            ... /* C */
        }

This provides the same ordering guarantees as previously, but
elides a (possibly expensive) memory ordering operation in
case C is executed.

[endsect]

[section:release_consume ['happens-before] through [^release] and [^consume]]

The second pattern for coordinating threads via [*Boost.Atomic]
uses [^release] and [^consume] on an atomic variable for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently writes (or atomically modifies) an
  atomic variable with [^release] semantic,
* ... thread2 reads (or atomically reads-and-modifies)
  this value from the same atomic variable with [^consume] semantic and
* ... thread2 subsequently performs an operation B that is ['computationally
  dependent on the value of the atomic variable],

... then A ['happens-before] B.

Consider the following example:

[c++]

    atomic<int> a(0);
    complex_data_structure data[2];

    thread1:
        data[1] = ...; /* A */
        a.store(1, memory_order_release);

    thread2:
        int index = a.load(memory_order_consume);
        complex_data_structure tmp = data[index]; /* B */

In this example, two avenues for execution are possible:

* The [^store] operation by thread1 precedes the [^load] by thread2:
  In this case thread2 will read [^data\[1\]] and "A ['happens-before] B"
  holds as all of the criteria above are satisfied.
* The [^load] operation by thread2 precedes the [^store] by thread1:
  In this case thread2 will read [^data\[0\]] and "A ['happens-before] B"
  does ['not] hold: thread2 does not read the value written by
  thread1 through [^a].

Here, the ['happens-before] relationship helps ensure that any
accesses (presumably writes) to [^data\[1\]] by thread1 happen before
the accesses (presumably reads) to [^data\[1\]] by thread2:
Lacking this relationship, thread2 might see stale/inconsistent
data.

Note that in this example it is crucial that operation B is computationally
dependent on the value of the atomic variable; therefore the following program would
be erroneous:

[c++]

    atomic<int> a(0);
    complex_data_structure data[2];

    thread1:
        data[1] = ...; /* A */
        a.store(1, memory_order_release);

    thread2:
        int index = a.load(memory_order_consume);
        complex_data_structure tmp;
        if (index == 0)
            tmp = data[0];
        else
            tmp = data[1];

[^consume] is most commonly (and most safely! see
[link atomic.limitations limitations]) used with
pointers, compare for example the
[link boost_atomic.usage_examples.singleton singleton with double-checked locking].

[endsect]

[section:seq_cst Sequential consistency]

The third pattern for coordinating threads via [*Boost.Atomic]
uses [^seq_cst] for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently performs any operation with [^seq_cst],
* ... thread1 subsequently performs an operation B,
* ... thread2 performs an operation C,
* ... thread2 subsequently performs any operation with [^seq_cst],
* ... thread2 subsequently performs an operation D,

then either "A ['happens-before] D" or "C ['happens-before] B" holds.

In this case it does not matter whether thread1 and thread2 operate
on the same or different atomic variables, or use a "stand-alone"
[^atomic_thread_fence] operation.
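
As an illustration, consider the following sketch in the style of the previous
examples (the variable names are chosen for illustration only): each thread
sets its own flag and then checks the other thread's flag with [^seq_cst]
operations.

[c++]

    atomic<int> x(0), y(0);

    thread1:
        ... /* A */
        x.store(1, memory_order_seq_cst);
        int r1 = y.load(memory_order_seq_cst);
        ... /* B */

    thread2:
        ... /* C */
        y.store(1, memory_order_seq_cst);
        int r2 = x.load(memory_order_seq_cst);
        ... /* D */

Because all four accesses participate in the single total order implied by
[^seq_cst], at least one of the loads observes the other thread's store, so
the outcome [^r1 == 0 && r2 == 0] is impossible. Neither [^release]\/[^acquire]
pairs nor the weaker orderings rule out that outcome.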

[endsect]

[endsect]

[section:interface Programming interfaces]

[section:configuration Configuration and building]

The library contains header-only and compiled parts. The library is
header-only for lock-free cases but requires a separate binary to
implement the lock-based emulation. Users are able to detect whether
linking to the compiled part is required by checking the
[link atomic.interface.feature_macros feature macros].

The following macros affect library behavior:

[table
    [[Macro] [Description]]
    [[`BOOST_ATOMIC_NO_CMPXCHG8B`] [Affects 32-bit x86 Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `cmpxchg8b` instruction used
      to support 64-bit atomic operations. This is the case with very old CPUs (pre-Pentium).
      The library does not perform runtime detection of this instruction, so running the code
      that uses 64-bit atomics on such CPUs will result in crashes, unless this macro is defined.
      Note that the macro does not affect MSVC, GCC and compatible compilers because the library infers
      this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_CMPXCHG16B`] [Affects 64-bit x86 MSVC and Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `cmpxchg16b` instruction used
      to support 128-bit atomic operations. This is the case with some early 64-bit AMD CPUs;
      all Intel CPUs and current AMD CPUs support this instruction. The library does not
      perform runtime detection of this instruction, so running the code that uses 128-bit
      atomics on such CPUs will result in crashes, unless this macro is defined. Note that
      the macro does not affect GCC and compatible compilers because the library infers
      this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_MFENCE`] [Affects 32-bit x86 Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `mfence` instruction used
      to implement thread fences. This instruction was added with the SSE2 instruction set extension,
      which has been available in CPUs since Intel Pentium 4. The library does not perform runtime detection
      of this instruction, so running the library code on older CPUs will result in crashes, unless
      this macro is defined. Note that the macro does not affect MSVC, GCC and compatible compilers
      because the library infers this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_FORCE_FALLBACK`] [When defined, all operations are implemented with locks.
      This is mostly used for testing and should not be used in real world projects.]]
    [[`BOOST_ATOMIC_DYN_LINK` and `BOOST_ALL_DYN_LINK`] [Control library linking. If defined,
      the library assumes dynamic linking, otherwise static. The latter macro affects all Boost
      libraries, not just [*Boost.Atomic].]]
    [[`BOOST_ATOMIC_NO_LIB` and `BOOST_ALL_NO_LIB`] [Control library auto-linking on Windows.
      When defined, disables auto-linking. The latter macro affects all Boost libraries,
      not just [*Boost.Atomic].]]
]

Besides macros, it is important to specify the correct compiler options for the target CPU.
With GCC and compatible compilers this affects whether particular atomic operations are
lock-free or not.

The Boost building process is described in the [@http://www.boost.org/doc/libs/release/more/getting_started/ Getting Started guide].
For example, you can build [*Boost.Atomic] with the following command line:

[pre
    bjam --with-atomic variant=release instruction-set=core2 stage
]

[endsect]

[section:interface_memory_order Memory order]

    #include <boost/memory_order.hpp>

The enumeration [^boost::memory_order] defines the following
values to represent memory ordering constraints:

[table
    [[Constant] [Description]]
    [[`memory_order_relaxed`] [No ordering constraint.
      Informally speaking, following operations may be reordered before,
      preceding operations may be reordered after the atomic
      operation. This constraint is suitable only when
      either a) further operations do not depend on the outcome
      of the atomic operation or b) ordering is enforced through
      stand-alone `atomic_thread_fence` operations. The operation on
      the atomic value itself is still atomic though.
    ]]
    [[`memory_order_release`] [
      Perform `release` operation. Informally speaking,
      prevents all preceding memory operations from being reordered
      past this point.
    ]]
    [[`memory_order_acquire`] [
      Perform `acquire` operation. Informally speaking,
      prevents succeeding memory operations from being reordered
      before this point.
    ]]
    [[`memory_order_consume`] [
      Perform `consume` operation. More relaxed (and
      on some architectures more efficient) than `memory_order_acquire`
      as it only affects succeeding operations that are
      computationally-dependent on the value retrieved from
      an atomic variable.
    ]]
    [[`memory_order_acq_rel`] [Perform both `release` and `acquire` operations]]
    [[`memory_order_seq_cst`] [
      Enforce sequential consistency. Implies `memory_order_acq_rel`, but
      additionally enforces a total order for all operations so qualified.
    ]]
]

See section [link atomic.thread_coordination ['happens-before]] for an explanation
of the various ordering constraints.

[endsect]

[section:interface_atomic_flag Atomic flags]

    #include <boost/atomic/atomic_flag.hpp>

The `boost::atomic_flag` type provides the most basic set of atomic operations
suitable for implementing mutually exclusive access to thread-shared data. The flag
can have one of the two possible states: set and clear. The class implements the
following operations:

[table
    [[Syntax] [Description]]
    [
        [`atomic_flag()`]
        [Initialize to the clear state. See the discussion below.]
    ]
    [
        [`bool test_and_set(memory_order order)`]
        [Sets the atomic flag to the set state; returns `true` if the flag had been set prior to the operation]
    ]
    [
        [`void clear(memory_order order)`]
        [Sets the atomic flag to the clear state]
    ]
]

`order` always has `memory_order_seq_cst` as default parameter.
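
As an illustration, these two operations are already sufficient to build a
simple spinlock (a minimal sketch only; compare the
[link boost_atomic.usage_examples.example_spinlock spinlock] example for the
full treatment):

[c++]

    #include <boost/atomic.hpp>

    class spin_lock
    {
        boost::atomic_flag flag_;  // default-constructed to the clear state

    public:
        void lock()
        {
            // spin until the previous state was "clear"
            while (flag_.test_and_set(boost::memory_order_acquire))
            {
            }
        }

        void unlock()
        {
            flag_.clear(boost::memory_order_release);
        }
    };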

Note that the default constructor `atomic_flag()` is unlike `std::atomic_flag`, which
leaves the default-constructed object uninitialized. This potentially requires dynamic
initialization during the program startup to perform the object initialization, which
makes it unsafe to create global `boost::atomic_flag` objects that can be used before
entering `main()`. Some compilers though (especially those supporting C++11 `constexpr`)
may be smart enough to perform flag initialization statically (which is, in C++11 terms,
a constant initialization).

This difference is deliberate and is done to support C++03 compilers. C++11 defines the
`ATOMIC_FLAG_INIT` macro which can be used to statically initialize `std::atomic_flag`
to a clear state like this:

    std::atomic_flag flag = ATOMIC_FLAG_INIT; // constant initialization

This macro cannot be implemented in C++03 because for that `atomic_flag` would have to be
an aggregate type, which it cannot be because it has to prohibit copying and consequently
define the default constructor. Thus the closest equivalent C++03 code using [*Boost.Atomic]
would be:

    boost::atomic_flag flag; // possibly, dynamic initialization in C++03;
                             // constant initialization in C++11

The same code is also valid in C++11, so this code can be used universally. However, for
interface parity with `std::atomic_flag`, if possible, the library also defines the
`BOOST_ATOMIC_FLAG_INIT` macro, which is equivalent to `ATOMIC_FLAG_INIT`:

    boost::atomic_flag flag = BOOST_ATOMIC_FLAG_INIT; // constant initialization

This macro will only be implemented on a C++11 compiler. When this macro is not available,
the library defines `BOOST_ATOMIC_NO_ATOMIC_FLAG_INIT`.

[endsect]

[section:interface_atomic_object Atomic objects]

    #include <boost/atomic/atomic.hpp>

[^boost::atomic<['T]>] provides methods for atomically accessing
variables of a suitable type [^['T]]. The type is suitable if
it is /trivially copyable/ (3.9/9 \[basic.types\]). Following are
examples of the types compatible with this requirement:

* a scalar type (e.g. integer, boolean, enum or pointer type)
* a [^class] or [^struct] that has no non-trivial copy or move
  constructors or assignment operators, has a trivial destructor,
  and that is comparable via [^memcmp].

Note that classes with virtual functions or virtual base classes
do not satisfy the requirements. Also be warned
that structures with "padding" between data members may compare
non-equal via [^memcmp] even though all members are equal.
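
For example, the following user-defined structure would be a suitable
[^['T]] (a purely illustrative sketch; the type and functions are not part of
the library):

[c++]

    #include <boost/atomic.hpp>

    // trivially copyable, no padding between the two shorts,
    // so objects can be compared via memcmp
    struct position
    {
        short x;
        short y;
    };

    position initial = { 0, 0 };
    boost::atomic<position> pos(initial);

    void move_to(short x, short y)
    {
        position p = { x, y };
        pos.store(p, boost::memory_order_release);  // the whole struct is updated atomically
    }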

[section:interface_atomic_generic [^boost::atomic<['T]>] template class]

All atomic objects support the following operations and properties:

[table
    [[Syntax] [Description]]
    [
        [`atomic()`]
        [Initialize to an unspecified value]
    ]
    [
        [`atomic(T initial_value)`]
        [Initialize to [^initial_value]]
    ]
    [
        [`bool is_lock_free()`]
        [Checks if the atomic object is lock-free; the returned value is consistent with the `is_always_lock_free` static constant, see below]
    ]
    [
        [`T load(memory_order order)`]
        [Return current value]
    ]
    [
        [`void store(T value, memory_order order)`]
        [Write new value to atomic variable]
    ]
    [
        [`T exchange(T new_value, memory_order order)`]
        [Exchange current value with `new_value`, returning current value]
    ]
    [
        [`bool compare_exchange_weak(T & expected, T desired, memory_order order)`]
        [Compare current value with `expected`, change it to `desired` if matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back in `expected`. May fail spuriously, so must generally be
         retried in a loop.]
    ]
    [
        [`bool compare_exchange_weak(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
        [Compare current value with `expected`, change it to `desired` if matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back in `expected`. May fail spuriously, so must generally be
         retried in a loop.]
    ]
    [
        [`bool compare_exchange_strong(T & expected, T desired, memory_order order)`]
        [Compare current value with `expected`, change it to `desired` if matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back in `expected`.]
    ]
    [
        [`bool compare_exchange_strong(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
        [Compare current value with `expected`, change it to `desired` if matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back in `expected`.]
    ]
    [
        [`static bool is_always_lock_free`]
        [This static boolean constant indicates if any atomic object of this type is lock-free]
    ]
]

`order` always has `memory_order_seq_cst` as default parameter.

The `compare_exchange_weak`/`compare_exchange_strong` variants
taking four parameters differ from the three parameter variants
in that they allow a different memory ordering constraint to
be specified in case the operation fails.
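
A typical use of the weak variant is a retry loop; the following sketch
atomically doubles a counter (an illustrative example, not part of the
library interface):

[c++]

    #include <boost/atomic.hpp>

    boost::atomic<int> value(0);

    int fetch_double()
    {
        int expected = value.load(boost::memory_order_relaxed);
        // compare_exchange_weak may fail spuriously; on failure it reloads
        // the current value into "expected", so the loop simply retries
        while (!value.compare_exchange_weak(expected, expected * 2,
                                            boost::memory_order_acq_rel,
                                            boost::memory_order_relaxed))
        {
        }
        return expected;  // the value observed before doubling
    }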

In addition to these explicit operations, each
[^atomic<['T]>] object also supports
implicit [^store] and [^load] through the use of "assignment"
and "conversion to [^T]" operators. Avoid using these operators,
as they do not allow explicit specification of a memory ordering
constraint.

[endsect]

[section:interface_atomic_integral [^boost::atomic<['integral]>] template class]

In addition to the operations listed in the previous section,
[^boost::atomic<['I]>] for integral
types [^['I]], except `bool`, supports the following operations,
which correspond to [^std::atomic<['I]>]:

[table
    [[Syntax] [Description]]
    [
        [`I fetch_add(I v, memory_order order)`]
        [Add `v` to variable, returning previous value]
    ]
    [
        [`I fetch_sub(I v, memory_order order)`]
        [Subtract `v` from variable, returning previous value]
    ]
    [
        [`I fetch_and(I v, memory_order order)`]
        [Apply bit-wise "and" with `v` to variable, returning previous value]
    ]
    [
        [`I fetch_or(I v, memory_order order)`]
        [Apply bit-wise "or" with `v` to variable, returning previous value]
    ]
    [
        [`I fetch_xor(I v, memory_order order)`]
        [Apply bit-wise "xor" with `v` to variable, returning previous value]
    ]
]

Additionally, as a [*Boost.Atomic] extension, the following operations are also provided:

[table
    [[Syntax] [Description]]
    [
        [`I fetch_negate(memory_order order)`]
        [Change the sign of the value stored in the variable, returning previous value]
    ]
    [
        [`I fetch_complement(memory_order order)`]
        [Set the variable to the one\'s complement of the current value, returning previous value]
    ]
    [
        [`void opaque_negate(memory_order order)`]
        [Change the sign of the value stored in the variable, returning nothing]
    ]
    [
        [`void opaque_complement(memory_order order)`]
        [Set the variable to the one\'s complement of the current value, returning nothing]
    ]
    [
        [`void opaque_add(I v, memory_order order)`]
        [Add `v` to variable, returning nothing]
    ]
    [
        [`void opaque_sub(I v, memory_order order)`]
        [Subtract `v` from variable, returning nothing]
    ]
    [
        [`void opaque_and(I v, memory_order order)`]
        [Apply bit-wise "and" with `v` to variable, returning nothing]
    ]
    [
        [`void opaque_or(I v, memory_order order)`]
        [Apply bit-wise "or" with `v` to variable, returning nothing]
    ]
    [
        [`void opaque_xor(I v, memory_order order)`]
        [Apply bit-wise "xor" with `v` to variable, returning nothing]
    ]
    [
        [`bool add_and_test(I v, memory_order order)`]
        [Add `v` to variable, returning `true` if the result is zero and `false` otherwise]
    ]
    [
        [`bool sub_and_test(I v, memory_order order)`]
        [Subtract `v` from variable, returning `true` if the result is zero and `false` otherwise]
    ]
    [
        [`bool and_and_test(I v, memory_order order)`]
        [Apply bit-wise "and" with `v` to variable, returning `true` if the result is zero and `false` otherwise]
    ]
    [
        [`bool or_and_test(I v, memory_order order)`]
        [Apply bit-wise "or" with `v` to variable, returning `true` if the result is zero and `false` otherwise]
    ]
    [
        [`bool xor_and_test(I v, memory_order order)`]
        [Apply bit-wise "xor" with `v` to variable, returning `true` if the result is zero and `false` otherwise]
    ]
    [
        [`bool bit_test_and_set(unsigned int n, memory_order order)`]
        [Set bit number `n` in the variable to 1, returning `true` if the bit was previously set to 1 and `false` otherwise]
    ]
    [
        [`bool bit_test_and_reset(unsigned int n, memory_order order)`]
        [Set bit number `n` in the variable to 0, returning `true` if the bit was previously set to 1 and `false` otherwise]
    ]
    [
        [`bool bit_test_and_complement(unsigned int n, memory_order order)`]
        [Change bit number `n` in the variable to the opposite value, returning `true` if the bit was previously set to 1 and `false` otherwise]
    ]
]

`order` always has `memory_order_seq_cst` as default parameter.

The [^opaque_['op]] and [^['op]_and_test] variants of the operations
may result in more efficient code on some architectures because
the original value of the atomic variable is not preserved. In the
[^bit_test_and_['op]] operations, the bit number `n` starts from 0, which
means the least significant bit, and must not exceed
[^std::numeric_limits<['I]>::digits - 1].
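
For example, a reference counter could use these extensions as in the
following sketch (an illustration only; the function names are not part of
the library):

[c++]

    #include <boost/atomic.hpp>

    boost::atomic<unsigned int> ref_count(1);

    void add_ref()
    {
        // the previous value is not needed, so the opaque_ form may
        // generate better code on some architectures
        ref_count.opaque_add(1, boost::memory_order_relaxed);
    }

    void release_ref()
    {
        // sub_and_test returns true when the counter reaches zero
        if (ref_count.sub_and_test(1, boost::memory_order_acq_rel))
        {
            // last reference released; destroy the shared object here
        }
    }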

In addition to these explicit operations, each
[^boost::atomic<['I]>] object also
supports implicit pre-/post- increment/decrement, as well
as the operators `+=`, `-=`, `&=`, `|=` and `^=`.
Avoid using these operators,
as they do not allow explicit specification of a memory ordering
constraint, which always defaults to `memory_order_seq_cst`.

[endsect]

[section:interface_atomic_pointer [^boost::atomic<['pointer]>] template class]

In addition to the operations applicable to all atomic objects,
[^boost::atomic<['P]>] for pointer
types [^['P]] (other than pointers to [^void], function or member pointers) supports
the following operations, which correspond to [^std::atomic<['P]>]:

[table
    [[Syntax] [Description]]
    [
        [`T fetch_add(ptrdiff_t v, memory_order order)`]
        [Add `v` to variable, returning previous value]
    ]
    [
        [`T fetch_sub(ptrdiff_t v, memory_order order)`]
        [Subtract `v` from variable, returning previous value]
    ]
]

Similarly to integers, the following [*Boost.Atomic] extensions are also provided:

[table
    [[Syntax] [Description]]
    [
        [`void opaque_add(ptrdiff_t v, memory_order order)`]
        [Add `v` to variable, returning nothing]
    ]
    [
        [`void opaque_sub(ptrdiff_t v, memory_order order)`]
        [Subtract `v` from variable, returning nothing]
    ]
]

`order` always has `memory_order_seq_cst` as default parameter.
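
Pointer arithmetic is performed in units of the pointed-to type, as for
ordinary pointers. The following sketch hands out consecutive slots of a
buffer (illustrative only; bounds checking is omitted):

[c++]

    #include <boost/atomic.hpp>

    int buffer[64];
    boost::atomic<int*> cursor(buffer);

    int* claim_slot()
    {
        // advances the pointer by one element (sizeof(int) bytes) and
        // returns the previously stored pointer
        return cursor.fetch_add(1, boost::memory_order_relaxed);
    }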

In addition to these explicit operations, each
[^boost::atomic<['P]>] object also
supports implicit pre-/post- increment/decrement, as well
as the operators `+=`, `-=`. Avoid using these operators,
as they do not allow explicit specification of a memory ordering
constraint, which always defaults to `memory_order_seq_cst`.

[endsect]

[section:interface_atomic_convenience_typedefs [^boost::atomic<['T]>] convenience typedefs]

For convenience, several shorthand typedefs of [^boost::atomic<['T]>] are provided:

[c++]

    typedef atomic< char > atomic_char;
    typedef atomic< unsigned char > atomic_uchar;
    typedef atomic< signed char > atomic_schar;
    typedef atomic< unsigned short > atomic_ushort;
    typedef atomic< short > atomic_short;
    typedef atomic< unsigned int > atomic_uint;
    typedef atomic< int > atomic_int;
    typedef atomic< unsigned long > atomic_ulong;
    typedef atomic< long > atomic_long;
    typedef atomic< unsigned long long > atomic_ullong;
    typedef atomic< long long > atomic_llong;

    typedef atomic< void* > atomic_address;
    typedef atomic< bool > atomic_bool;
    typedef atomic< wchar_t > atomic_wchar_t;
    typedef atomic< char16_t > atomic_char16_t;
    typedef atomic< char32_t > atomic_char32_t;

    typedef atomic< uint8_t > atomic_uint8_t;
    typedef atomic< int8_t > atomic_int8_t;
    typedef atomic< uint16_t > atomic_uint16_t;
    typedef atomic< int16_t > atomic_int16_t;
    typedef atomic< uint32_t > atomic_uint32_t;
    typedef atomic< int32_t > atomic_int32_t;
    typedef atomic< uint64_t > atomic_uint64_t;
    typedef atomic< int64_t > atomic_int64_t;

    typedef atomic< int_least8_t > atomic_int_least8_t;
    typedef atomic< uint_least8_t > atomic_uint_least8_t;
    typedef atomic< int_least16_t > atomic_int_least16_t;
    typedef atomic< uint_least16_t > atomic_uint_least16_t;
    typedef atomic< int_least32_t > atomic_int_least32_t;
    typedef atomic< uint_least32_t > atomic_uint_least32_t;
    typedef atomic< int_least64_t > atomic_int_least64_t;
    typedef atomic< uint_least64_t > atomic_uint_least64_t;
    typedef atomic< int_fast8_t > atomic_int_fast8_t;
    typedef atomic< uint_fast8_t > atomic_uint_fast8_t;
    typedef atomic< int_fast16_t > atomic_int_fast16_t;
    typedef atomic< uint_fast16_t > atomic_uint_fast16_t;
    typedef atomic< int_fast32_t > atomic_int_fast32_t;
    typedef atomic< uint_fast32_t > atomic_uint_fast32_t;
    typedef atomic< int_fast64_t > atomic_int_fast64_t;
    typedef atomic< uint_fast64_t > atomic_uint_fast64_t;
    typedef atomic< intmax_t > atomic_intmax_t;
    typedef atomic< uintmax_t > atomic_uintmax_t;

    typedef atomic< std::size_t > atomic_size_t;
    typedef atomic< std::ptrdiff_t > atomic_ptrdiff_t;

    typedef atomic< intptr_t > atomic_intptr_t;
    typedef atomic< uintptr_t > atomic_uintptr_t;

The typedefs are provided only if the corresponding type is available.

[endsect]

[endsect]

[section:interface_fences Fences]

    #include <boost/atomic/fences.hpp>

[table
    [[Syntax] [Description]]
    [
        [`void atomic_thread_fence(memory_order order)`]
        [Issue fence for coordination with other threads.]
    ]
    [
        [`void atomic_signal_fence(memory_order order)`]
        [Issue fence for coordination with signal handler (only in same thread).]
    ]
]
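
`atomic_thread_fence` is used as shown earlier in the ['Fences] section on
thread coordination. `atomic_signal_fence` only needs to restrain the compiler,
which is sufficient when the "other side" is a signal handler running in the
same thread, as in the following sketch (illustrative only; it assumes the
flag is lock-free so that it is safe to access from a signal handler):

[c++]

    #include <boost/atomic.hpp>
    #include <csignal>

    int data = 0;
    boost::atomic<int> data_ready(0);

    void handler(int)
    {
        if (data_ready.load(boost::memory_order_relaxed) == 1)
        {
            // compiler-only fence, pairs with the release fence below,
            // which runs in the same thread as this handler
            boost::atomic_signal_fence(boost::memory_order_acquire);
            // it is now safe to read "data"
        }
    }

    int main()
    {
        std::signal(SIGINT, handler);
        data = 42;
        boost::atomic_signal_fence(boost::memory_order_release);
        data_ready.store(1, boost::memory_order_relaxed);
    }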

[endsect]

[section:feature_macros Feature testing macros]

    #include <boost/atomic/capabilities.hpp>

[*Boost.Atomic] defines a number of macros to allow compile-time
detection of whether an atomic data type is implemented using
"true" atomic operations, or whether an internal "lock" is
used to provide atomicity. The following macros will be
defined to `0` if operations on the data type always
require a lock, to `1` if operations on the data type may
sometimes require a lock, and to `2` if they are always lock-free:

[table
    [[Macro] [Description]]
    [
        [`BOOST_ATOMIC_FLAG_LOCK_FREE`]
        [Indicate whether `atomic_flag` is lock-free]
    ]
    [
        [`BOOST_ATOMIC_BOOL_LOCK_FREE`]
        [Indicate whether `atomic<bool>` is lock-free]
    ]
    [
        [`BOOST_ATOMIC_CHAR_LOCK_FREE`]
        [Indicate whether `atomic<char>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_CHAR16_T_LOCK_FREE`]
        [Indicate whether `atomic<char16_t>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_CHAR32_T_LOCK_FREE`]
        [Indicate whether `atomic<char32_t>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_WCHAR_T_LOCK_FREE`]
        [Indicate whether `atomic<wchar_t>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_SHORT_LOCK_FREE`]
        [Indicate whether `atomic<short>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_INT_LOCK_FREE`]
        [Indicate whether `atomic<int>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_LONG_LOCK_FREE`]
        [Indicate whether `atomic<long>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_LLONG_LOCK_FREE`]
        [Indicate whether `atomic<long long>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_ADDRESS_LOCK_FREE` or `BOOST_ATOMIC_POINTER_LOCK_FREE`]
        [Indicate whether `atomic<T *>` is lock-free]
    ]
    [
        [`BOOST_ATOMIC_THREAD_FENCE`]
        [Indicate whether `atomic_thread_fence` function is lock-free]
    ]
    [
        [`BOOST_ATOMIC_SIGNAL_FENCE`]
        [Indicate whether `atomic_signal_fence` function is lock-free]
    ]
]

In addition to these standard macros, [*Boost.Atomic] also defines a number of extension macros,
which can also be useful. Like the standard ones, these macros are defined to values `0`, `1` and `2`
to indicate whether the corresponding operations are lock-free or not.

[table
    [[Macro] [Description]]
    [
        [`BOOST_ATOMIC_INT8_LOCK_FREE`]
        [Indicate whether `atomic<int8_type>` is lock-free.]
    ]
    [
        [`BOOST_ATOMIC_INT16_LOCK_FREE`]
        [Indicate whether `atomic<int16_type>` is lock-free.]
    ]
    [
        [`BOOST_ATOMIC_INT32_LOCK_FREE`]
        [Indicate whether `atomic<int32_type>` is lock-free.]
    ]
    [
        [`BOOST_ATOMIC_INT64_LOCK_FREE`]
        [Indicate whether `atomic<int64_type>` is lock-free.]
    ]
    [
        [`BOOST_ATOMIC_INT128_LOCK_FREE`]
        [Indicate whether `atomic<int128_type>` is lock-free.]
    ]
    [
        [`BOOST_ATOMIC_NO_ATOMIC_FLAG_INIT`]
        [Defined after including `atomic_flag.hpp`, if the implementation
         does not support the `BOOST_ATOMIC_FLAG_INIT` macro for static
         initialization of `atomic_flag`. This macro is typically defined
         for pre-C++11 compilers.]
    ]
]

In the table above, `intN_type` is a type that fits storage of contiguous `N` bits, suitably aligned for atomic operations.
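
These macros can be used to select an implementation strategy at compile time,
for example (a hedged sketch; the fallback branch is only indicated):

[c++]

    #include <boost/atomic.hpp>
    #include <boost/atomic/capabilities.hpp>

    #if BOOST_ATOMIC_INT_LOCK_FREE == 2

    // operations on atomic<int> never take a lock on this platform
    boost::atomic<int> counter(0);

    void increment()
    {
        counter.fetch_add(1, boost::memory_order_relaxed);
    }

    #else

    // fall back to, e.g., a mutex-protected counter (not shown)

    #endif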

[endsect]

[endsect]

[section:usage_examples Usage examples]

[include examples.qbk]

[endsect]

[/
[section:platform_support Implementing support for additional platforms]

[include platform.qbk]

[endsect]
]

[/ [xinclude autodoc.xml] ]

[section:limitations Limitations]

While [*Boost.Atomic] strives to implement the atomic operations
from C++11 and later as faithfully as possible, there are a few
limitations that cannot be lifted without compiler support:

* [*Using non-POD-classes as template parameter to `atomic<T>` results
  in undefined behavior]: This means that any class containing a
  constructor, destructor, virtual methods or access control
  specifications is not a valid argument in C++98. C++11 relaxes
  this slightly by allowing "trivial" classes containing only
  empty constructors. [*Advice]: Use only POD types.
* [*C++98 compilers may transform computation- to control-dependency]:
  Crucially, `memory_order_consume` only affects computationally-dependent
  operations, but in general there is nothing preventing a compiler
  from transforming a computation dependency into a control dependency.
  A fully compliant C++11 compiler would be forbidden from such a transformation,
  but in practice most if not all compilers have chosen to promote
  `memory_order_consume` to `memory_order_acquire` instead
  (see [@https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59448 this] gcc bug
  for example). In the current implementation [*Boost.Atomic] follows that trend,
  but this may change in the future.
  [*Advice]: In general, avoid `memory_order_consume` and use `memory_order_acquire`
  instead. Use `memory_order_consume` only in conjunction with
  pointer values, and only if you can ensure that the compiler cannot
  speculate and transform these into control dependencies.
* [*Fence operations enforce "too strong" compiler ordering]:
  Semantically, `memory_order_acquire`/`memory_order_consume`
  and `memory_order_release` need to restrain reordering of
  memory operations only in one direction. Since there is no
  way to express this constraint to the compiler, these act
  as "full compiler barriers" in this implementation. In corner
  cases this may result in less efficient code than a C++11 compiler
  could generate.
* [*No interprocess fallback]: using `atomic<T>` in shared memory only works
  correctly if `atomic<T>::is_lock_free() == true`.

[endsect]

[section:porting Porting]

[section:unit_tests Unit tests]

[*Boost.Atomic] provides a unit test suite to verify that the
implementation behaves as expected:

* [*fallback_api.cpp] verifies that the fallback-to-locking aspect
  of [*Boost.Atomic] compiles and has correct value semantics.
* [*native_api.cpp] verifies that all atomic operations have correct
  value semantics (e.g. "fetch_add" really adds the desired value,
  returning the previous). It is a rough "smoke-test" to help weed
  out the most obvious mistakes (for example width overflow,
  signed/unsigned extension, ...).
* [*lockfree.cpp] verifies that the [*BOOST_ATOMIC_*_LOCKFREE] macros
  are set properly according to the expectations for a given
  platform, and that they match up with the [*is_always_lock_free] and
  [*is_lock_free] members of the [*atomic] object instances.
* [*atomicity.cpp] lets two threads race against each other modifying
  a shared variable, verifying that the operations behave atomically
  as appropriate. By nature, this test is necessarily stochastic, and
  the test self-calibrates to yield 99% confidence that a
  positive result indicates absence of an error. This test is
  already very useful on uni-processor systems with preemption.
* [*ordering.cpp] lets two threads race against each other accessing
  multiple shared variables, verifying that the operations
  exhibit the expected ordering behavior. By nature, this test is
  necessarily stochastic, and the test attempts to self-calibrate to
  yield 99% confidence that a positive result indicates absence
  of an error. This only works on true multi-processor (or multi-core)
  systems. It does not yield any result on uni-processor systems
  or emulators (due to there being no observable reordering even
  in the order=relaxed case) and will report that fact.

[endsect]

[section:tested_compilers Tested compilers]

[*Boost.Atomic] has been tested on and is known to work on
the following compilers/platforms:

* gcc 4.x: i386, x86_64, ppc32, ppc64, sparcv9, armv6, alpha
* Visual Studio Express 2008/Windows XP, x86, x64, ARM

[endsect]

[section:acknowledgements Acknowledgements]

* Adam Wulkiewicz created the logo used on the [@https://github.com/boostorg/atomic GitHub project page]. The logo was taken from his [@https://github.com/awulkiew/boost-logos collection] of Boost logos.

[endsect]

[endsect]