It looks like we can probably just use the generic GCC built-ins instead;
the generated code looks pretty similar. We should come back to that.
These routines are only used by the pthread implementation, and
__bionic_atomic_inc isn't used, so we can remove it.
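For reference, the "generic GCC built-ins" would presumably be the __sync
family; a minimal sketch of what such a replacement might look like (the
function name and the return-old-value contract here are assumptions for
illustration, not the real bionic headers):

    #include <stdint.h>

    /* Let GCC's built-in generate the LDREX/STREX (or LOCK-prefixed)
     * sequence instead of hand-written assembly. */
    static inline int32_t generic_atomic_inc(volatile int32_t* addr) {
        /* Returns the old value of *addr. */
        return __sync_fetch_and_add(addr, 1);
    }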
Change-Id: I8b5b8cb30a1b159f0e85c3675aee06ddef39b429
In particular, this affects assert(3) and __cxa_pure_virtual, both of
which have managed to confuse people this week by apparently aborting
without reason. (Normally, stderr goes nowhere, so the message is never
seen.)
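A minimal sketch of the shape of the fix, assuming the messages are routed
to the Android log; __android_log_print from <android/log.h> stands in here
for whatever bionic-internal logging call the real change uses:

    #include <android/log.h>
    #include <stdlib.h>

    /* Send the message somewhere visible (logcat) before aborting, instead
     * of writing to stderr, which normally goes nowhere on a device. */
    void __cxa_pure_virtual(void) {
        __android_log_print(ANDROID_LOG_FATAL, "libc",
                            "Pure virtual function called -- aborting.");
        abort();
    }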
Bug: 6852995
Bug: 6840813
Change-Id: I7f5d17d5ddda439e217b7932096702dc013b9142
We're going to modify the __atomic_xxx implementation to provide
full memory barriers, to avoid problems for NDK machine code that
links against these functions.
The first step is to remove their usage from our platform code.
We now use inlined versions of the same functions for a slight
performance boost.
+ remove obsolete atomics_x86.c (was never compiled)
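The inlined versions are along these lines; a minimal sketch using GCC
__sync built-ins in place of the per-architecture assembly the real
headers use, with the 0-on-success cmpxchg contract assumed rather than
quoted:

    #include <stdint.h>

    /* Private, inlined forms for platform code: no function-call overhead.
     * The exported __atomic_xxx symbols can then gain full barriers for
     * NDK callers without slowing platform code down. */
    static inline int __bionic_cmpxchg(int32_t old_value, int32_t new_value,
                                       volatile int32_t* ptr) {
        /* 0 on success, non-zero on failure (assumed contract). */
        return !__sync_bool_compare_and_swap(ptr, old_value, new_value);
    }

    static inline int32_t __bionic_atomic_dec(volatile int32_t* ptr) {
        return __sync_fetch_and_sub(ptr, 1);  /* returns the old value */
    }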
NOTE: This improvement was benchmarked on various devices.
Timing a pthread mutex lock + atomic increment + unlock sequence
(a sketch of the measured loop follows the numbers below), we get:
- ARMv7 emulator, running on a 2.4 GHz Xeon:
before: 396 ns after: 288 ns
- x86 emulator in KVM mode on same machine:
before: 27 ns after: 27 ns
- Google Nexus S, in ARMv7 mode (single-core):
before: 82 ns after: 76 ns
- Motorola Xoom, in ARMv7 mode (multi-core):
before: 121 ns after: 120 ns
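The measured sequence is roughly the following; the timing harness, the
iteration count, and the exact atomic primitive used are assumptions:

    #include <pthread.h>
    #include <stdint.h>

    static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
    static volatile int32_t g_counter = 0;

    /* One iteration of the benchmarked operation:
     * mutex lock + atomic increment + unlock. */
    static void one_iteration(void) {
        pthread_mutex_lock(&g_lock);
        __sync_fetch_and_add(&g_counter, 1);
        pthread_mutex_unlock(&g_lock);
    }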
The code has also been rebuilt in ARMv5TE mode to check that it is still
correct there.
Change-Id: Ic1dc72b173d59b2e7af901dd70d6a72fb2f64b17
We simply copy the definitions we need from the cutils headers.
A future patch will change cutils to include the private <bionic_atomic_inline.h>.
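A minimal sketch of the kind of thing the private <bionic_atomic_inline.h>
carries over, assuming an ARMv7 SMP build and the ANDROID_MEMBAR_FULL
naming used by cutils; the exact contents are an assumption:

    /* Full memory barrier: a real dmb on SMP builds, just a compiler
     * barrier on uniprocessor builds (assumed layout). */
    #if defined(ANDROID_SMP) && ANDROID_SMP == 1
    #define ANDROID_MEMBAR_FULL() __asm__ __volatile__ ("dmb" ::: "memory")
    #else
    #define ANDROID_MEMBAR_FULL() __asm__ __volatile__ ("" ::: "memory")
    #endif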
Change-Id: Ib6fd9a03bc9e337ce867bd606dc94c2b4438480a