In practice, thanks to all the registers, the stubs don't actually change,
but it's confusing to have an incorrect declaration.
I suspect that fcntl remains broken for aarch64; it happens to work for
x86_64 because the first vararg argument gets placed in the right register
anyway, but I have no reason to believe that's true for aarch64.
This patch adds a unit test, though, so we'll be able to tell when we get
as far as running the unit tests.
Change-Id: I58dd0054fe99d7d51d04c22781d8965dff1afbf3
Unlike on 32-bit systems, where off_t is 32-bit, on an LP64 system we don't
want to throw away the top 32 bits of the 64-bit off_t.
Change-Id: Ib2e0daeb4fc0b8ab3d1b983d0b371d8f81033b50
<pthread.h> was missing nonnull attributes, noreturn on pthread_exit,
and had incorrect cv qualifiers for several standard functions.
I've also marked the non-standard stuff (where I count glibc rather
than POSIX as "standard") so we can revisit this cruft for LP64 and
try to ensure we're compatible with glibc.
I've also broken out the pthread_cond* functions into a new file.
I've made the remaining pthread files (plus ptrace) part of the bionic code
and fixed all the warnings.
I've added a few more smoke tests for chunks of untested pthread functionality.
We no longer need the libc_static_common_src_files hack for any of the
pthread implementation because we long since stripped out the rest of
the armv5 support, and this hack was just to ensure that __get_tls in libc.a
went via the kernel if necessary.
This patch also finishes the job of breaking up the pthread.c monolith, and
adds a handful of new tests.
Change-Id: Idc0ae7f5d8aa65989598acd4c01a874fe21582c7
Experiment shows that the claim in the makefile was false: gdb works fine
setting breakpoints in these functions when compiled without special treatment.
Change-Id: Ibdf4dd5a14d171c954b8c2089daaf28e1c310be9
I really don't want to add yet another copy for aarch64.
Also sort arm, mips, and x86.
Also silence the "TARGET_ARCH_VARIANT" warning for non-ARM; Intel and MIPS
have both complained about it.
Change-Id: I32c592a90c0cf0cdae250d84035b3e4655543781
(aarch64 kernels only have the newer system calls.)
Also expose the new functionality that's exposed by glibc in our header files.
Change-Id: I45d2d168a03f88723d1f7fbf634701006a4843c5
Modern architectures only get the *at(2) system calls. For example,
aarch64 doesn't have open(2), and expects userspace to use openat(2)
instead.
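As a purely illustrative sketch (the wrapper name is made up, not bionic's),
open() can be layered on openat(2) like this:

  #include <fcntl.h>
  #include <stdarg.h>

  /* Sketch only: AT_FDCWD makes relative paths resolve against the current
   * working directory, so the call behaves like plain open(2). */
  static int my_open(const char* path, int flags, ...) {
    mode_t mode = 0;
    if (flags & O_CREAT) {          /* mode is only meaningful with O_CREAT */
      va_list args;
      va_start(args, flags);
      mode = va_arg(args, mode_t);
      va_end(args);
    }
    return openat(AT_FDCWD, path, flags, mode);
  }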
Change-Id: I87b4ed79790cb8a80844f5544ac1a13fda26c7b5
Also clean up <signal.h> and revert the hacks that were necessary
for 64-bit in linker/debugger.cpp until now.
Change-Id: I3b0554ca8a49ee1c97cda086ce2c1954ebc11892
Let's have both use rt_sigprocmask, like in glibc. The 64-bit ABIs
can share the same code as the 32-bit ABIs.
Also, let's test the return side of these calls, not just the
setting.
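A minimal sketch of the shared path, assuming the usual Linux convention that
rt_sigprocmask takes the kernel's sigset size (8 bytes, i.e. _NSIG/8) as its
last argument; the function name is illustrative:

  #include <signal.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Sketch only: both 32- and 64-bit ABIs can funnel sigprocmask() through
   * rt_sigprocmask; the kernel wants its own sigset size, not the C
   * library's sizeof(sigset_t). */
  static int my_sigprocmask(int how, const sigset_t* new_set, sigset_t* old_set) {
    return (int) syscall(__NR_rt_sigprocmask, how, new_set, old_set, 8L);
  }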
Bug: 11069919
Change-Id: I11da99f85b5b481870943c520d05ec929b15eddb
The x86_64 build was failing because clone.S had a call to __thread_entry which
was being added to a different intermediate .a on the way to making libc.so,
and the linker couldn't guarantee statically that such a relocation would be
possible.
ld: error: out/target/product/generic_x86_64/obj/STATIC_LIBRARIES/libc_common_intermediates/libc_common.a(clone.o): requires dynamic R_X86_64_PC32 reloc against '__thread_entry' which may overflow at runtime; recompile with -fPIC
This patch addresses that by ensuring that the caller and callee end up in the
same intermediate .a. While I'm here, I've tried to clean up some of the mess
that led to this situation too. In particular, this removes libc/private/ from
the default include path (except for the DNS code), and splits out the DNS
code into its own library (since it's a weird special case of upstream NetBSD
code that's diverged so heavily it's unlikely ever to get back in sync).
There's more cleanup of the DNS situation possible, but this is definitely a
step in the right direction, and it's more than enough to get x86_64 building
cleanly.
Change-Id: I00425a7245b7a2573df16cc38798187d0729e7c4
If __get_tls has the right type, a lot of confusing casting can disappear.
It was probably a mistake that __get_tls was exposed as a function for mips
and x86 (but not arm), so let's (a) ensure that the __get_tls function
always matches the macro, (b) that we have the function for arm too, and
(c) that we don't have the function for any 64-bit architecture.
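A self-contained sketch of the idea (the stub and slot index below are purely
illustrative, not bionic's real TLS layout):

  #include <stdio.h>

  static void* fake_tls[64];                 /* stand-in for the real TLS area */
  static void** my_get_tls(void) { return fake_tls; }

  #define TLS_SLOT_EXAMPLE 1                 /* illustrative slot index */

  int main(void) {
    /* Because my_get_tls() returns void**, slots can be read and written
     * directly, with no casting at the call sites. */
    my_get_tls()[TLS_SLOT_EXAMPLE] = &fake_tls;
    printf("%p\n", my_get_tls()[TLS_SLOT_EXAMPLE]);
    return 0;
  }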
Change-Id: Ie9cb989b66e2006524ad7733eb6e1a65055463be
Normally we don't have -Werror for upstream code, but for those warnings
that probably point to 32-bit assumptions about pointers, we want those
warnings to always be errors.
Change-Id: Ibece9caf09b2f7989ca600ef448d07868669a8fb
The NDK ABI requires that you support SSE2, and the build system won't let you
build with ARCH_X86_HAVE_SSE2 set to false. So let's stop pretending this
constant is actually a variable, and let's remove the corresponding dead code.
Also, the USE_SSE2 and USE_SSE3 macros are unused, so let's not bother
setting them.
Change-Id: I40b501d998530d22518ce1c4d14575513a8125bb
Clang and gcc default to different standards, so we should be explicit
about the versions we want to compile for.
Change-Id: I65495a2392dd29f36373b94c616c2506173e6033
Use basic .c versions of all functions for x86_64 until they are
manually optimized and .s versions are released.
Change-Id: I59bba08931e894822db485c8803c2665c226234a
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
Fortify calls to recv() and recvfrom().
We use __bos0 to match glibc's behavior, and because I haven't
tested using __bos.
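The shape of the check, as a hedged sketch rather than bionic's actual code
(the helper and macro names are made up):

  #include <stdlib.h>
  #include <sys/socket.h>

  /* Sketch only: the macro captures __builtin_object_size(buf, 0) -- i.e.
   * __bos0 -- at the call site; the helper aborts if the read could overflow. */
  static ssize_t recv_chk(int fd, void* buf, size_t len, size_t buf_size, int flags) {
    if (buf_size != (size_t) -1 && len > buf_size) {
      abort();                      /* a real fortify failure logs first */
    }
    return recv(fd, buf, len, flags);
  }

  #define my_recv(fd, buf, len, flags) \
    recv_chk(fd, buf, len, __builtin_object_size(buf, 0), flags)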
Change-Id: Iad6ae96551a89af17a9c347b80cdefcf2020c505
Found by adapting the simple unit tests for libc logging to test
snprintf too. Fix taken from upstream OpenBSD without updating
the rest of stdio.
Change-Id: Ie339a8e9393a36080147aae4d6665118e5d93647
Required for x86 build with multilib compiler.
Change-Id: Iac71cdc3461df6fb48cb2a7b713324ca368e6704
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
I've mailed the tz list about this, and will switch to whatever upstream
fix comes along as soon as it's available.
Bug: 10310929
Change-Id: I36bf3fcf11f5ac9b88137597bac3487a7bb81b0f
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset, instead allowing a simple
fall-through from the chk code into the body of the real
implementation.
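In C terms, the check being folded into the assembler looks roughly like this
(a sketch, not the actual assembler; the name is illustrative):

  #include <stdlib.h>
  #include <string.h>

  /* Sketch only: compare the copy length against the known destination size
   * and only fall through to the real memcpy when the copy fits. The real
   * code reports a fortify error (see the logcat checks below) before dying. */
  static void* my_memcpy_chk(void* dst, const void* src, size_t count, size_t dst_len) {
    if (count > dst_len) {
      abort();
    }
    return memcpy(dst, src, count);
  }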
Testing:
- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and
three calls to __memset_chk. First call dest_len is length + 1. Second
call dest_len is length. Third call dest_len is length - 1.
Verified that the first two calls pass, and the third fails. Examined
the logcat output on all nexus devices to verify that the fortify
error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
__memcpy_chk and large copies, the savings is relatively small (about 1%).
For small copies, the savings is large on cortex-a15/krait devices
(between 5% and 30%).
For cortex-a9 and small copies, the speed-up is present, but relatively
small (about 3% to 5%).
For __memset_chk and large copies, the savings is also small (about 1%).
However, all processors show larger speed-ups on small copies (about 30% to
100%).
Bug: 9293744
Merge from internal master.
(cherry-picked from 7c860db074)
Change-Id: I916ad305e4001269460ca6ebd38aaa0be8ac7f52
Create one version of strcat/strcpy/strlen for cortex-a15/krait and another
version for cortex-a9.
Tested with the libc_test strcat/strcpy/strlen tests.
Including new tests that verify that the src for strcat/strcpy does not
overread across page boundaries.
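The overread tests follow roughly this pattern (a simplified sketch, not the
actual libc_test code): put the source string at the very end of a page and
follow it with a PROT_NONE guard page, so any read past the terminating NUL
faults.

  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void) {
    size_t page = (size_t) sysconf(_SC_PAGESIZE);
    char* map = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (map == MAP_FAILED) return 1;
    mprotect(map + page, page, PROT_NONE);        /* guard page after the source */

    const char* text = "hello";
    char* src = map + page - (strlen(text) + 1);  /* NUL lands on the page's last byte */
    strcpy(src, text);

    char dst[16];
    strcpy(dst, src);              /* must not read past the NUL into the guard page */
    munmap(map, 2 * page);
    return dst[0] == 'h' ? 0 : 1;
  }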
NOTE: The handling of unaligned strcpy (same code in strcat) could probably
be optimized further such that the src is read 64 bits at a time instead of
the partial reads occurring now.
strlen improves slightly since it was recently optimized.
Performance improvements for strcpy and strcat (using an empty dest string):
cortex-a9
- Small copies vary from about 5% to 20% as the size gets above 10 bytes.
- Copies >= 1024, about a 60% improvement.
- Unaligned copies, about a 40% improvement.
cortex-a15
- Most small copies exhibit a 100% improvement, a few copies only
improve by 20%.
- Copies >= 1024, about 150% improvement.
- Unaligned copies, about 100% improvement.
krait
- Most small copies vary widely, but there is about a 20% improvement on
average; then the performance gets better, hitting about a 100% improvement
when copying 64 bytes of data.
- Copies >= 1024, about 100% improvement.
- When copying MBs of data, about 50% improvement.
- Unaligned copies, about 90% improvement.
As strcat destination strings get larger in size:
cortex-a9
- about 40% improvement for small dst strings (>= 32).
- about 250% improvement for dst strings >= 1024.
cortex-a15
- about 200% improvement for small dst strings (>=32).
- about 250% improvement for dst strings >= 1024.
krait
- about 25% improvement for small dst strings (>=32).
- about 100% improvement for dst strings >=1024.
Merge from internal master.
(cherry-picked from d119b7b6f4)
Change-Id: I296463b251ef9fab004ee4dded2793feca5b547a
Only works on some kernels, and only on page-aligned regions of
anonymous memory. It will show up in /proc/pid/maps as
[anon:<name>] and in /proc/pid/smaps as Name: <name>
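Usage looks like this (a sketch; the PR_SET_VMA constants below are defined
locally in case the toolchain's <sys/prctl.h> doesn't have them yet):

  #include <sys/mman.h>
  #include <sys/prctl.h>
  #include <unistd.h>

  #ifndef PR_SET_VMA
  #define PR_SET_VMA 0x53564d41
  #define PR_SET_VMA_ANON_NAME 0
  #endif

  int main(void) {
    size_t page = (size_t) sysconf(_SC_PAGESIZE);
    void* p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;
    /* On supporting kernels this region now shows up as "[anon:example]". */
    prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME, (unsigned long) p, page, "example");
    return 0;
  }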
Change-Id: If31667cf45ff41cc2a79a140ff68707526def80e
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset, instead allowing a simple
fall-through from the chk code into the body of the real
implementation.
Testing:
- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and
three calls to __memset_chk. First call dest_len is length + 1. Second
call dest_len is length. Third call dest_len is length - 1.
Verified that the first two calls pass, and the third fails. Examined
the logcat output on all nexus devices to verify that the fortify
error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
__memcpy_chk and large copies, the savings is relatively small (about 1%).
For small copies, the savings is large on cortex-a15/krait devices
(between 5% and 30%).
For cortex-a9 and small copies, the speed-up is present, but relatively
small (about 3% to 5%).
For __memset_chk and large copies, the savings is also small (about 1%).
However, all processors show larger speed-ups on small copies (about 30% to
100%).
Bug: 9293744
Change-Id: I8926d59fe2673e36e8a27629e02a7b7059ebbc98
Create one version of strcat/strcpy/strlen for cortex-a15/krait and another
version for cortex-a9.
Tested with the libc_test strcat/strcpy/strlen tests.
Including new tests that verify that the src for strcat/strcpy does not
overread across page boundaries.
NOTE: The handling of unaligned strcpy (same code in strcat) could probably
be optimized further such that the src is read 64 bits at a time instead of
the partial reads occurring now.
strlen improves slightly since it was recently optimized.
Performance improvements for strcpy and strcat (using an empty dest string):
cortex-a9
- Small copies vary from about 5% to 20% as the size gets above 10 bytes.
- Copies >= 1024, about a 60% improvement.
- Unaligned copies, about a 40% improvement.
cortex-a15
- Most small copies exhibit a 100% improvement, a few copies only
improve by 20%.
- Copies >= 1024, about 150% improvement.
- Unaligned copies, about 100% improvement.
krait
- Most small copies vary widely, but there is about a 20% improvement on
average; then the performance gets better, hitting about a 100% improvement
when copying 64 bytes of data.
- Copies >= 1024, about 100% improvement.
- When copying MBs of data, about 50% improvement.
- Unaligned copies, about 90% improvement.
As strcat destination strings get larger in size:
cortex-a9
- about 40% improvement for small dst strings (>= 32).
- about 250% improvement for dst strings >= 1024.
cortex-a15
- about 200% improvement for small dst strings (>=32).
- about 250% improvement for dst strings >= 1024.
krait
- about 25% improvement for small dst strings (>=32).
- about 100% improvement for dst strings >=1024.
Change-Id: Ifd091ebdbce70fe35a7c5d8f71d5914255f3af35
Yet another archaic relic containing bugs that had been fixed years before the
Android project even started...
Bug: 9935113
Change-Id: I3c9d019a216efd609ee568cf8c70bc360f357403
This updates the MIPS arch to be much more in
sync with the commit Nick Kralevich made last
June; see 9d40326830.
Rewrite
crtbegin.S -> crtbegin.c
crtbegin_so.S -> crtbegin_so.c
__dso_handle.S -> __dso_handle.c
__dso_handle_so.S -> __dso_handle_so.c
atexit.S -> atexit.c
Previously __do_global_dtors_aux was in the task's __FINI_ARRAY__ (linked in
via crtbegin.S); it is now being removed, as there is no need to call a
destructor just before terminating a process.
Shared libraries, on the other hand, are linked with
crtbegin_so.c and have a hidden destructor declared
to allow the bionic linker to call __on_dlclose().
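The pattern looks roughly like this (a hypothetical sketch; the hook below is
a stand-in for the real __on_dlclose() glue provided by the linker):

  #include <stdio.h>

  static void on_dlclose_hook(void) {        /* stand-in for __on_dlclose() */
    puts("shared library unloading");
  }

  /* A hidden destructor in every shared object gives the dynamic linker a
   * well-defined hook to run when the library is unloaded via dlclose(). */
  __attribute__((visibility("hidden"), destructor))
  void crt_so_destructor(void) {
    on_dlclose_hook();
  }

  int main(void) { return 0; }               /* lets the sketch build standalone */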
Change-Id: Ieb4da5199b54573de05743990e309db381a11cb8
Signed-off-by: Pete Delaney <piet.delaney@imgtec.com>
Signed-off-by: Chao-Ying Fu <chao-ying.fu@imgtec.com>
Signed-off-by: Chris Dearman <chris.dearman@imgtec.com>
Well, kinda... localtime.c still contains a bunch of Android-specific
hacks, as does strftime.c. But the other files are now exactly the same
as upstream.
This catches up with several years of bug fixes, and fixes most of the
compiler warnings that were in this code. (Just two remain.)
Bug: 1744909
Change-Id: I2ddfecb6fd408c847397c17afb0fff859e27feef
* commit 'fbec57d46c42460b2381484d1610ff21922d162e':
bionic: add compatibility mode for properties
bionic: use the size of the file to determine property area size
Allow a new bionic to work with an old init property area by supporting
the old format.
(cherry picked from commit ad76c85b9c)
Change-Id: Ib496e818a62a5834d40c71eb4745783d998be893
* A dlmalloc usage error shouldn't call abort(3), because we want to
cause a SIGSEGV by writing the address dlmalloc didn't like to an
address the kernel won't like, so that debuggerd will dump the
memory around the address that upset dlmalloc (see the sketch after
this list).
* Switch to the simpler FreeBSD/NetBSD style of registering stdio
cleanup. Hopefully this will let us simplify more of the stdio
implementation.
* Clear the stdio cleanup handler before we abort because of a dlmalloc
corruption error. This fixes the reported bug, where we'd hang inside
dlmalloc because the stdio cleanup reentered dlmalloc.
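The first point above amounts to something like this sketch (the magic
address is illustrative; compiling it is harmless, running it is meant to
crash):

  #include <stdint.h>

  /* Sketch only: store the pointer dlmalloc complained about at an address
   * the kernel won't have mapped, so debuggerd catches a SIGSEGV whose fault
   * context points back at the bad value instead of a plain abort(). */
  static void report_usage_error(void* bad_address) {
    *(volatile uintptr_t*) (uintptr_t) 0xdeadbaad = (uintptr_t) bad_address;
  }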
Bug: 9301265
Change-Id: Ief31b389455d6876e5a68f0f5429567d37277dbc
- eventfd.cpp and eventfd.s are output to the same object file when building libc.a:
out/target/product/*/obj/STATIC_LIBRARIES/libc_intermediates/WHOLE/libc_common_objs/eventfd.o
- And then `eventfd` will be undefined when statically linking against libc.
Also add a unit test.
(cherry-pick of 8baa929d5d3bcf63381cf78ba76168c80c303f5e.)
Change-Id: Icd0eb0f4ce0511fb9ec00a504d491afd47d744d3
- eventfd.cpp and eventfd.s are output to the same object file when building libc.a:
out/target/product/*/obj/STATIC_LIBRARIES/libc_intermediates/WHOLE/libc_common_objs/eventfd.o
- And then `eventfd` will be undefined when statically linking against libc.
Also add a unit test.
Change-Id: Ib310ade3256712ca617a90539e8eb07459c98505
For some reason, socketcalls.c was only being compiled for ARM, where
it makes no sense. For x86 we generate stubs for the socket functions
that use __NR_socketcall directly.
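Roughly how the multiplexing works on 32-bit x86, as a hedged sketch (guarded
so it only compiles where __NR_socketcall exists):

  #include <linux/net.h>        /* SYS_SOCKET and the other call numbers */
  #include <sys/syscall.h>
  #include <unistd.h>

  #ifdef __NR_socketcall
  /* Sketch only: one __NR_socketcall entry point, with a call number and an
   * argument array selecting the actual socket operation. */
  static int socket_via_socketcall(int domain, int type, int protocol) {
    unsigned long args[3] = { (unsigned long) domain,
                              (unsigned long) type,
                              (unsigned long) protocol };
    return (int) syscall(__NR_socketcall, SYS_SOCKET, args);
  }
  #endif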
Change-Id: I84181e6183fae2314ae3ed862276eba82ad21e8e
We only need one logging API, and I prefer the one that does no
allocation and is thus safe to use in any context.
Also use O_CLOEXEC when opening the /dev/log files.
Move everything logging-related into one header file.
Change-Id: Ic1e3ea8e9b910dc29df351bff6c0aa4db26fbb58
The defines HAVE_32_BYTE_CACHE_LINES and ARCH_ARM_USE_NON_NEON_MEMCPY
are not used by any code. The previous memcpy code that used these
has been split into different architecture versions to avoid the need
for them.
Bug: 8005082
Merge from internal master.
(cherry-picked from commit 6e1a5cf31b)
Change-Id: Ib18fc3f4131b21cdbd19b9dde7697ac25d066fcf
Move arch specific code for arm, mips, x86 into separate
makefiles.
In addition, add different arm cpu versions of memcpy/memset.
Bug: 8005082
Merge from internal master (acdde8c1cf).
Change-Id: I04f3d0715104fab618e1abf7cf8f7eec9bec79df
This gets us back to using vfork now that our ARM vfork assembler stub is
fixed, and adds the missing thread safety for the 'pidlist'.
Bug: 5335385
Change-Id: Ib08bfa65b2cb9fa695717aae629ea14816bf988d
This is actually a slightly newer upstream version than the one I
originally pulled. Hopefully now that it's in upstream-freebsd it will
be easier to track upstream, though I still need to sit down and
write the necessary scripts at some point.
Bug: 5110679
Change-Id: I87e563f0f95aa8e68b45578e2a8f448bbf827a33
The defines HAVE_32_BYTE_CACHE_LINES and ARCH_ARM_USE_NON_NEON_MEMCPY
are not used by any code. The previous memcpy code that used these
has been split into different architecture versions to avoid the need
for them.
Bug: 8005082
(cherry picked from commit 6e1a5cf31b)
Change-Id: I69654d47db1458136782b5504290f620e924ee75
Move arch specific code for arm, mips, x86 into separate
makefiles.
In addition, add different arm cpu versions of memcpy/memset.
Bug: 8005082
(cherry picked from commit acdde8c1cf)
Change-Id: I0108d432af9f6283ae99adfc92a3399e5ab3e31d
The old scandir implementation didn't take into account the varying
size of directory entries, and didn't correctly clean up on its
error exits.
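The shape of the fix, as a simplified sketch (not bionic's actual
implementation, which also sorts and filters): size each saved entry from the
name it actually holds, and free everything already collected on every error
path.

  #include <dirent.h>
  #include <stddef.h>
  #include <stdlib.h>
  #include <string.h>

  static int my_scandir(const char* path, struct dirent*** out) {
    DIR* dir = opendir(path);
    if (dir == NULL) return -1;

    struct dirent** names = NULL;
    size_t count = 0, capacity = 0;
    struct dirent* entry;
    while ((entry = readdir(dir)) != NULL) {
      if (count == capacity) {
        capacity = capacity ? capacity * 2 : 16;
        struct dirent** bigger = realloc(names, capacity * sizeof(*names));
        if (bigger == NULL) goto fail;
        names = bigger;
      }
      /* Copy only as many bytes as this entry actually occupies. */
      size_t size = offsetof(struct dirent, d_name) + strlen(entry->d_name) + 1;
      struct dirent* copy = malloc(size);
      if (copy == NULL) goto fail;
      memcpy(copy, entry, size);
      names[count++] = copy;
    }
    closedir(dir);
    *out = names;
    return (int) count;

  fail:
    for (size_t i = 0; i < count; ++i) free(names[i]);
    free(names);
    closedir(dir);
    return -1;
  }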
Bug: 7339844
Change-Id: Ib40e3564709752241a3119a496cbb2192e3f9abe
imgtec pointed out that pthread_kill(3) was broken, but most of the
other functions that ought to return ESRCH for invalid/exited threads
were equally broken.
Change-Id: I96347f6195549aee0c72dc39063e6c5d06d2e01f
# Via Elliott Hughes (1) and Gerrit Code Review (1)
* commit '0a2cb815974ea96af664fa966079966a08916722':
Simplify __stack_chk_fail, and fix it so we get debuggerd stack traces.
libc_bionic.a is already compiled -Werror, but this one file gets
compiled into its own library because it needs to be compiled with
-fno-stack-protector.
Change-Id: I273c535ab5c73ccaccbcf793fda1f788a2589abe
This reverts commit 6f94de3ca4
(Doesn't try to increase the number of TLS slots; that leads to
an inability to boot. Adds more tests.)
Change-Id: Ia7d25ba3995219ed6e686463dbba80c95cc831ca
# Via Android Git Automerger (1) and others
* commit '6b73d13fa414afeecba6718bf724e8ac922bac39':
Revert "Revert "Pull the pthread_key_t functions out of pthread.c.""
# Via Gerrit Code Review (2) and Android Git Automerger (1)
* commit 'e4b08318c13fac774b233a5459427563d2983f79':
Revert "Pull the pthread_key_t functions out of pthread.c."
POSIX says pthread_create returns EAGAIN, not ENOMEM.
Also pull pthread_attr_t functions into their own file.
Also pull pthread_setname_np into its own file.
Also remove unnecessary #includes from pthread_key.cpp.
Also account for those pthread keys used internally by bionic,
so they don't count against the number of keys available to user
code. (They do with glibc, but glibc's limit is the much more
generous 1024.)
Also factor out the common errno-restoring idiom to reduce gotos.
Bug: 6702535
Change-Id: I555e66efffcf2c1b5a2873569e91489156efca42
This was originally motivated by noticing that we were setting the
wrong bits for the well-known tls entries. That was a harmless bug
because none of the well-known tls entries has a destructor, but
it's best not to leave land mines lying around.
Also add some missing POSIX constants, a new test, and fix
pthread_key_create's return value when we hit the limit.
Change-Id: Ife26ea2f4b40865308e8410ec803b20bcc3e0ed1
There's now only one place where we deal with this stuff, it only needs to
be parsed once by the dynamic linker (rather than by each recipient), and it's
now easier for us to get hold of auxv data early on.
Change-Id: I6314224257c736547aac2e2a650e66f2ea53bef5
We had two copies of the backtrace code, and two copies of the
libcorkscrew /proc/pid/maps code. This patch gets us down to one.
We also had hacks so we could log in the malloc debugging code.
This patch pulls the non-allocating "printf" code out of the
dynamic linker so everyone can share.
This patch also makes the leak diagnostics easier to read, and
makes it possible to paste them directly into the 'stack' tool (by
using relative PCs).
This patch also fixes the stdio standard stream leak that was
causing a leak warning every time tf_daemon ran.
Bug: 7291287
Change-Id: I66e4083ac2c5606c8d2737cb45c8ac8a32c7cfe8
Add signalfd() call to bionic.
Adding the signalfd call was done in 3 steps:
- add signalfd4 system call (function name and syscall
number) to libc/SYSCALLS.TXT
- generate all necessary headers by calling
libc/tools/gensyscalls.py. This patch is adding
the generated files since the build system
does not call gensyscalls.py.
- create the signalfd wrapper in signalfd.cpp (sketched below) and add
the function prototype to sys/signalfd.h
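The wrapper in the third step looks roughly like this (a sketch; the name is
illustrative, and the real code goes through the generated signalfd4 stub
rather than raw syscall()):

  #include <signal.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Sketch only: signalfd() is signalfd4 with the caller's flags; the size
   * argument is the kernel's sigset size (8 bytes, i.e. _NSIG/8). */
  static int my_signalfd(int fd, const sigset_t* mask, int flags) {
    return (int) syscall(__NR_signalfd4, fd, mask, 8L, flags);
  }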
(cherry-pick of 0c11611c11, modified to
work with older versions of GCC still in use on some branches.)
Change-Id: I4c6c3f12199559af8be63f93a5336851b7e63355
Add signalfd() call to bionic.
Adding the signalfd call was done in 3 steps:
- add signalfd4 system call (function name and syscall
number) to libc/SYSCALLS.TXT
- generate all necessary headers by calling
libc/tools/gensyscalls.py. This patch is adding
the generated files since the build system
does not call gensyscalls.py.
- create the signalfd wrapper in signalfd.cpp and add
the function prototype to sys/signalfd.h
Change-Id: I7ee1d3e60d5d3e1c73d9820e07d23b9ce6e1a5ab
raise() should use pthread_kill() in a pthreads environment.
For bionic this means it should always be used.
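A minimal sketch of the change (the name is illustrative; a real
implementation also has to translate pthread_kill()'s error-number return
into errno):

  #include <pthread.h>
  #include <signal.h>

  /* Sketch only: in a pthreads environment raise() must target the calling
   * thread, not the whole process, which is what pthread_kill() provides. */
  static int my_raise(int sig) {
    return pthread_kill(pthread_self(), sig);
  }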
Change-Id: Ic679272b664d2b8a7068b628fb83a9f7395c441f
This patch replaces the .S versions of the x86 crt files with .c versions,
which are much easier to support. Some of the files now match the .c versions
of the ARM crt files. The x86 files required some cleanup anyway, and this
cleanup actually led to them matching the ARM files.
I didn't change anything to share the same crt*.c files between x86 and ARM. I
prefer to keep them separate for a while in case a change is required for only
one of the architectures, but sharing them is a good thing to do in follow-up
patches.
Change-Id: Ibcf033f8d15aa5b10c05c879fd4b79a64dfc70f3
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
Most of these tests were in system/extras, but I've added more to cover other
cases explicitly mentioned by POSIX.
Change-Id: I5e8d77e4179028d77306935cceadbb505515dcde
The declaration for alphasort() in <dirent.h> used the deprecated:
int alphasort(const void*, const void*);
while both POSIX and glibc instead use:
int alphasort(const struct dirent** a, const struct dirent** b);
See: http://pubs.opengroup.org/onlinepubs/9699919799/functions/alphasort.html
This patch does the following:
- Update the declaration to match POSIX/glibc
- Get rid of the upstream BSD code which isn't compatible with the new
signature.
- Implement a new trivial alphasort() with the right signature, and
ensure that it uses strcoll() instead of strcmp().
- Remove Bionic-specific #ifdef .. #else .. #endif block in
dirent_test.cpp which uses alphasort().
Even though strcoll() currently uses strcmp(), this will do the right
thing if we later decide to update strcoll() to properly implement
locale-specific ordered comparison.
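The new trivial implementation amounts to this (a sketch with an illustrative
name so it can sit alongside the libc symbol):

  #include <dirent.h>
  #include <string.h>

  /* POSIX-signature alphasort, comparing with strcoll() as described above. */
  static int my_alphasort(const struct dirent** a, const struct dirent** b) {
    return strcoll((*a)->d_name, (*b)->d_name);
  }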
Change-Id: I4fd45604d8a940aaf2eb0ecd7d73e2f11c9bca96
Based on a pair of patches from Intel:
https://android-review.googlesource.com/#/c/43909/
https://android-review.googlesource.com/#/c/44903/
For x86, this patch supports _both_ the global that ARM/MIPS use
and the per-thread TLS entry (%gs:20) that GCC uses by default. This
lets us support binaries built with any x86 toolchain (right now,
the NDK is emitting x86 code that uses the global).
I've also extended the original tests to cover ARM/MIPS too, and
be a little more thorough for x86.
Change-Id: I02f279a80c6b626aecad449771dec91df235ad01