The problem with the original patch was that using syscall(3) means that
errno can be set, but pthread_create(3) was abusing the TLS errno slot as
a pthread_mutex_t for the thread startup handshake.
There was also a mistake in the check for syscall failures: it should have
checked against -1 instead of 0, not just because that's the usual idiom, but
also because futex(2) can legitimately return values > 0.
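For reference, the corrected idiom looks roughly like this (an illustrative
sketch, not the code in the patch):

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    // FUTEX_WAKE returns the number of waiters woken, so a successful call can
    // legitimately return a value > 0; only -1 indicates failure (errno is set).
    static int futex_wake(volatile int* ftx, int count) {
      int rc = syscall(__NR_futex, ftx, FUTEX_WAKE, count, NULL, NULL, 0);
      if (rc == -1) {
        // Real failure; inspect errno.
      }
      return rc;
    }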
This patch stops abusing the TLS errno slot and adds a pthread_mutex_t to
pthread_internal_t instead. (Note that for LP64 sizeof(pthread_mutex_t) >
sizeof(uintptr_t), so we could potentially clobber other TLS slots too.)
I've also rewritten the LP32 compatibility stubs to directly reuse the
code from the .h file.
This reverts commit 75c55ff84e.
Bug: 15195455
Change-Id: I6ffb13e5cf6a35d8f59f692d94192aae9ab4593d
This reverts commit ced906c849.
Causes failures on art / dalvik due to a broken return value
check and other undiagnosed issues.
Bug: 15195455
Change-Id: I5d6bbb389ecefb0e33a5237421a9d56d32a9317c
Also hide part of the system properties compatibility code, since
we needed to touch that to keep it building.
I'll remove __futex_syscall4 and futex in a later patch.
Bug: 11156955
Change-Id: Ibbf42414c5bb07fb9f1c4a169922844778e4eeae
To use jemalloc, add MALLOC_IMPL = jemalloc in a board config file
and you get the new version automatically.
Update the pthread_key_create tests since jemalloc uses a few keys.
Add a new test to verify memalign works as expected.
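A minimal sketch of the kind of check the new test makes (illustrative only,
not the actual test code):

    #include <assert.h>
    #include <malloc.h>
    #include <stdint.h>
    #include <stdlib.h>

    // memalign(alignment, size) should return a non-NULL pointer that is a
    // multiple of the requested alignment.
    void check_memalign(void) {
      for (size_t alignment = 16; alignment <= 4096; alignment <<= 1) {
        void* p = memalign(alignment, 128);
        assert(p != NULL);
        assert(((uintptr_t) p % alignment) == 0);
        free(p);
      }
    }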
Bug: 981363
Change-Id: I16eb152b291a95bd2499e90492fc6b4bd7053836
If libnetd_client can't be found, operate as before and use the default netId
potentially overridden by a more specific netId passed in to
android_get*fornet().
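The fallback works along these lines (a sketch; the symbol name and function
shape here are illustrative assumptions, not the real code):

    #include <dlfcn.h>
    #include <sys/socket.h>

    typedef int (*ConnectFunctionType)(int, const struct sockaddr*, socklen_t);
    static ConnectFunctionType netdClientConnect = NULL;

    static void netdClientInit(void) {
      // If libnetd_client can't be found, leave the hook unset and operate as
      // before.
      void* handle = dlopen("libnetd_client.so", RTLD_NOW);
      if (handle == NULL) {
        return;
      }
      // Hypothetical symbol name, for illustration only.
      netdClientConnect = (ConnectFunctionType) dlsym(handle, "netdClientConnect");
    }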
(cherry picked from commit 559c7842cc)
Change-Id: I42ef3293172651870fb46d2de22464c4f03e8e0b
+ Name the dispatch header correctly (NetdClientDispatch.h).
+ Hide the global dispatch variable (__netdClientDispatch).
+ Explain why it's okay to read the variable without locking.
+ Use quotes instead of angle-brackets for non-system includes.
+ Add necessary declarations for C compiles (and not just C++).
Change-Id: Id0932165e71d81da5fce77a684f40c2263f58e61
The library exists outside bionic. It is dynamically loaded to replace selected
standard socket syscalls with versions that talk to netd.
Change connect() to use the library if available.
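With the dispatch table in place, connect(2) becomes a thin wrapper, roughly
(a sketch; the real table's default entry points at the plain connect syscall
so behaviour is unchanged when the library isn't loaded):

    #include <sys/socket.h>

    // Assumed shape of the dispatch table; only connect is shown here.
    struct NetdClientDispatch {
      int (*connect)(int, const struct sockaddr*, socklen_t);
    };
    extern struct NetdClientDispatch __netdClientDispatch;

    int connect(int fd, const struct sockaddr* addr, socklen_t addr_length) {
      return __netdClientDispatch.connect(fd, addr, addr_length);
    }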
(cherry picked from commit 3a6b627a14df8111b03e452f2df4b5f4938e0e49)
Change-Id: Ib6198e19dbc306521a26fcecfdf6e8424d163fc9
This more general interface lets liblog give us any fatal log message,
regardless of source. This means we can remove the special case for
LOG_ALWAYS_FATAL with a simpler scheme that automatically works for
the VM too.
Change-Id: Ia6dbf7c3dbabf223081bd5159294835d954bb067
pthread_once is nice for decoupling, but it makes resource availability less
predictable, which is a bad thing.
This fixes a test failure if uselocale(3) is called before
pthread.pthread_key_create_lots runs.
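The trade-off, roughly (an illustrative sketch of the two approaches, not the
patched code):

    #include <pthread.h>

    static pthread_key_t g_key;

    // Lazy (pthread_once): the key only comes into existence when the first
    // caller arrives, so how many keys are left free depends on call order.
    static pthread_once_t g_once = PTHREAD_ONCE_INIT;
    static void g_key_init(void) { pthread_key_create(&g_key, NULL); }
    void lazy_init(void) { pthread_once(&g_once, g_key_init); }

    // Eager: allocate the key at a fixed point instead, so resource usage is
    // predictable regardless of who calls what first. (These are alternatives;
    // a real implementation picks one.)
    void eager_init(void) { pthread_key_create(&g_key, NULL); }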
Change-Id: Ie2634f986a50e7965582d4bd6e5aaf48cf0d55c8
Add tests for the above.
Add the fortify implementations of __stpcpy_chk and __stpncpy_chk.
Modify the strncpy test to cover more cases and use this template for
stpncpy.
Add all of the fortify test cases.
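The _chk functions follow the usual FORTIFY shape, roughly (a simplified
sketch of the pattern, not the exact bionic code):

    #include <stdlib.h>
    #include <string.h>

    // dst_len is the compiler-computed size of the destination buffer
    // (__builtin_object_size); overflowing it is a fatal error.
    char* stpcpy_chk_sketch(char* dst, const char* src, size_t dst_len) {
      size_t src_len = strlen(src) + 1;  // include the terminating NUL
      if (src_len > dst_len) {
        abort();  // the real code calls bionic's fortify failure handler
      }
      return stpcpy(dst, src);  // returns a pointer to dst's terminating NUL
    }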
Bug: 13746695
Change-Id: I8c0f0d4991a878b8e8734fff12c8b73b07fdd344
Also neuter __isthreaded.
We should come back to try to hide struct FILE's internals for LP64.
Bug: 3453512
Bug: 3453550
Change-Id: I7e115329fb4579246a72fea367b9fc8cb6055d18
This is part of the upstream sync (Net/Open/Free BSDs expose the
nameser.h in their public headers).
Change-Id: Ib063d4e50586748cc70201a8296cd90d2e48bbcf
* libc (fatal) logging now makes socket connection to the
user-space logging service.
* Add a TARGET_USES_LOGD make flag for BoardConfig.mk to manage
whether logd is enabled for use or not.
Change-Id: I96ab598c76d6eec86f9d0bc81094c1fb3fb0d9b4
Our <machine/asm.h> files were modified from upstream, to the extent
that no architecture was actually using the upstream ENTRY or END macros,
assuming that architecture even had such a macro upstream. This patch moves
everyone to the same macros, with just a few tweaks remaining in the
<machine/asm.h> files, which no one should now use directly.
I've removed most of the unused cruft from the <machine/asm.h> files, though
there's still rather a lot in the mips/mips64 ones.
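The shared macros have roughly this shape (simplified sketch; details such as
the symbol type annotation and alignment vary by architecture):

    /* For use in assembler (.S) sources. */
    #define ENTRY(f)        \
      .text;                \
      .globl f;             \
      .align 4;             \
      .type f, @function;   \
      f:                    \
      .cfi_startproc

    #define END(f)          \
      .cfi_endproc;         \
      .size f, . - f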
Bug: 12229603
Change-Id: I2fff287dc571ac1087abe9070362fb9420d85d6d
Remove the linker's reliance on BSD cruft and use the glibc-style
ElfW macro. (Other code too, but the linker contains the majority
of the code that needs to work for Elf32 and Elf64.)
All platforms need dl_iterate_phdr_static, so it doesn't make sense
to have that part of the per-architecture configuration.
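The ElfW macro is the usual glibc idiom: it expands a type name to its Elf32_
or Elf64_ variant depending on the target word size, roughly:

    #include <elf.h>

    #if defined(__LP64__)
    #define ElfW(type) Elf64_ ## type
    #else
    #define ElfW(type) Elf32_ ## type
    #endif

    /* e.g. ElfW(Addr) is Elf32_Addr on 32-bit targets, Elf64_Addr on 64-bit. */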
Bug: 12476126
Change-Id: I1d7f918f1303a392794a6cd8b3512ff56bd6e487
Also make the other architectures more similar to one another,
use NULL instead of 0 in calling code, and remove an unused #define.
Change-Id: I52b874afb6a351c802f201a0625e484df6d093bb
This patch changes the domain that the memory barrier operates on. It assumes
that the scope of bionic_atomic_barrier() does not include device memory,
memory shared with the GPU, or any other memory external to the processor
cluster.
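On ARM this amounts to using a barrier scoped to the inner shareable domain
rather than the full system, roughly (sketch; the real function name and
per-architecture guards differ):

    // "dmb ish" only orders accesses within the inner shareable domain (the
    // processor cluster), whereas the stronger "dmb sy" covers the whole system.
    static inline void memory_barrier_sketch(void) {
    #if defined(__aarch64__) || defined(__ARM_ARCH_7A__)
      __asm__ __volatile__("dmb ish" : : : "memory");
    #else
      __sync_synchronize();  // fallback for other architectures in this sketch
    #endif
    }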
Change-Id: I291e741c98a64c86f3a3cf99811bbf1e714ac9aa
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
The bionic_atomic_cmpxchg() API states that the cmpxchg() will be done without
explicit memory barriers. LDAXR/STLXR semantics involve half barriers for
load/store.
This patch optimises cmpxchg() by using LDXR/STXR and avoiding unnecessary half
barriers. It also fixes the clobber list for all the bionic_atomic_*() functions.
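In C11 terms the documented semantics are those of a relaxed
compare-and-exchange; a sketch of the contract (the real code is LDXR/STXR
inline assembly, not this):

    #include <stdatomic.h>
    #include <stdint.h>

    // Returns 0 on success (*ptr was old_value and has been replaced by
    // new_value), non-zero otherwise. No explicit memory barriers.
    static inline int cmpxchg_relaxed(int32_t old_value, int32_t new_value,
                                      volatile _Atomic int32_t* ptr) {
      return !atomic_compare_exchange_strong_explicit(
          ptr, &old_value, new_value, memory_order_relaxed, memory_order_relaxed);
    }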
Change-Id: Iae9468965785cfeeec791d52f1e8cbc524adb682
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
This is the first patch out of a series of patches that add support for
AArch64, the new 64bit execution state of the ARMv8 Architecture. The
patches add support for the LP64 programming model.
The patch adds:
* "arch-aarch64" to the architecture directories.
* "arch-aarch64/include" - headers used by libc
* "arch-aarch64/bionic":
- crtbegin, crtend support;
- aarch64 specific syscall stubs;
- setjmp, clone, vfork assembly files.
Change-Id: If72b859f81928d03ad05d4ccfcb54c2f5dbf99a5
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
This patch adds support for AArch64 atomic operations. Some
of the stubs use the lightweight store/load exclusive.
Change-Id: Iaf704d048b2dc15bf08cf8e4f0c3ea9f2052fe13
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
It looks like we can probably just use the generic GCC stuff instead;
the generated code looks pretty similar. We should come back to that.
These routines are only used by the pthread implementation, and
__bionic_atomic_inc isn't used, so we can remove it.
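The "generic GCC stuff" presumably means the __sync/__atomic builtins; for
example, an atomic decrement would just be something like:

    #include <stdint.h>

    // __sync_fetch_and_sub is a full-barrier atomic; on ARM the compiler emits
    // an ldrex/strex loop much like the hand-written assembly it would replace.
    static inline int32_t atomic_dec_sketch(volatile int32_t* ptr) {
      return __sync_fetch_and_sub(ptr, 1);
    }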
Change-Id: I8b5b8cb30a1b159f0e85c3675aee06ddef39b429
I fixed this bug a while back, but didn't remove it from the list,
could have added a better test, and could have written clearer code
that didn't require a comment.
Change-Id: Iebdf0f9a54537a7d5cbca254a5967b1543061f3d
Let the kernel keep pthread_internal_t::tid updated, including
across forks and for the main thread. This then lets us fix
pthread_join to only return after the thread has really exited.
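The mechanism, roughly (a sketch of the standard CLONE_CHILD_CLEARTID
approach; the actual code differs in detail):

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    // With CLONE_CHILD_CLEARTID, the kernel writes 0 to the tid field and does
    // a futex wake when the thread has really exited, so a joiner can wait
    // until that happens.
    static void wait_for_thread_exit(volatile pid_t* tid_ptr) {
      pid_t tid;
      while ((tid = *tid_ptr) != 0) {
        syscall(__NR_futex, tid_ptr, FUTEX_WAIT, tid, NULL, NULL, 0);
      }
    }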
Also fix the thread attributes of the main thread so we don't
unmap the main thread's stack (which is really owned by the
dynamic linker and contains things like environment variables),
which fixes crashes when joining with an exited main thread
and also fixes problems reported publicly with accessing environment
variables after the main thread exits (for which I've added a new
unit test).
In passing I also fixed a bug where if the clone(2) inside
pthread_create(3) fails, we'd unmap the child's stack and TLS (which
contains the mutex) and then try to unlock the mutex. Boom! It wasn't
until after I'd uploaded the fix for this that I came across a new
public bug reporting this exact failure.
Bug: 8206355
Bug: 11693195
Bug: https://code.google.com/p/android/issues/detail?id=57421
Bug: https://code.google.com/p/android/issues/detail?id=62392
Change-Id: I2af9cf6e8ae510a67256ad93cad891794ed0580b
We only need it for MAX_ERRNO, and it's time we had somewhere to put
the little assembler utility macros we've been putting off writing.
Change-Id: I9354d2e0dc47c689296a34b5b229fc9ba75f1a83
<pthread.h> was missing nonnull attributes, noreturn on pthread_exit,
and had incorrect cv qualifiers for several standard functions.
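For example, declarations of roughly this shape (illustrative, not the exact
bionic prototypes):

    #include <pthread.h>
    #include <stddef.h>

    /* noreturn: pthread_exit never returns to its caller. */
    void pthread_exit(void* return_value) __attribute__((noreturn));

    /* nonnull: passing a NULL mutex is always an error. */
    int pthread_mutex_lock(pthread_mutex_t* mutex) __attribute__((nonnull(1)));

    /* const where a parameter is only read. */
    int pthread_attr_getstacksize(const pthread_attr_t* attr, size_t* stack_size)
        __attribute__((nonnull(1, 2)));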
I've also marked the non-standard stuff (where I count glibc rather
than POSIX as "standard") so we can revisit this cruft for LP64 and
try to ensure we're compatible with glibc.
I've also broken out the pthread_cond* functions into a new file.
I've made the remaining pthread files (plus ptrace) part of the bionic code
and fixed all the warnings.
I've added a few more smoke tests for chunks of untested pthread functionality.
We no longer need the libc_static_common_src_files hack for any of the
pthread implementation because we stripped out the rest of the armv5 support
long ago, and that hack only existed to ensure that __get_tls in libc.a went
via the kernel if necessary.
This patch also finishes the job of breaking up the pthread.c monolith, and
adds a handful of new tests.
Change-Id: Idc0ae7f5d8aa65989598acd4c01a874fe21582c7
The x86_64 build was failing because clone.S had a call to __thread_entry which
was being added to a different intermediate .a on the way to making libc.so,
and the linker couldn't guarantee statically that such a relocation would be
possible.
ld: error: out/target/product/generic_x86_64/obj/STATIC_LIBRARIES/libc_common_intermediates/libc_common.a(clone.o): requires dynamic R_X86_64_PC32 reloc against '__thread_entry' which may overflow at runtime; recompile with -fPIC
This patch addresses that by ensuring that the caller and callee end up in the
same intermediate .a. While I'm here, I've tried to clean up some of the mess
that led to this situation too. In particular, this removes libc/private/ from
the default include path (except for the DNS code), and splits out the DNS
code into its own library (since it's a weird special case of upstream NetBSD
code that's diverged so heavily it's unlikely ever to get back in sync).
There's more cleanup of the DNS situation possible, but this is definitely a
step in the right direction, and it's more than enough to get x86_64 building
cleanly.
Change-Id: I00425a7245b7a2573df16cc38798187d0729e7c4
If __get_tls has the right type, a lot of confusing casting can disappear.
It was probably a mistake that __get_tls was exposed as a function for mips
and x86 (but not arm), so let's (a) ensure that the __get_tls function
always matches the macro, (b) that we have the function for arm too, and
(c) that we don't have the function for any 64-bit architecture.
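Concretely, something like this for x86, with the function being nothing more
than a wrapper around the macro (a sketch; other architectures read their own
TLS register instead):

    #if defined(__i386__)
    /* The macro, with the right type so callers don't need to cast. */
    #define __get_tls_sketch() \
      ({ void** __val; __asm__("movl %%gs:0, %0" : "=r"(__val)); __val; })

    /* The LP32 compatibility function just wraps it. */
    void** __get_tls_fn(void) {
      return __get_tls_sketch();
    }
    #endif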
Change-Id: Ie9cb989b66e2006524ad7733eb6e1a65055463be
Although 'register' is deprecated, we need to use v1, and there's
no way to do that through register constraints on the assembler
fragment itself.
Change-Id: Ib5b12c4c3652513d10cc61d4a4b11314ece25663