Normally we don't use -Werror for upstream code, but warnings that
probably point to 32-bit assumptions about pointers should always be
errors.
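For context, a hypothetical example of the kind of 32-bit assumption such
warnings catch; the cast truncates the pointer on LP64 targets:

    #include <stdio.h>

    int main(void) {
        int x = 42;
        void* p = &x;
        /* int is 32 bits but pointers are 64 bits on LP64, so this
         * cast truncates; gcc reports it via -Wpointer-to-int-cast,
         * which this change promotes to an error. */
        int truncated = (int) p;
        printf("%d\n", truncated);
        return 0;
    }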
Change-Id: Ibece9caf09b2f7989ca600ef448d07868669a8fb
The NDK ABI requires that you support SSE2, and the build system won't let you
build with ARCH_X86_HAVE_SSE2 set to false. So let's stop pretending this
constant is actually a variable, and let's remove the corresponding dead code.
Also, the USE_SSE2 and USE_SSE3 macros are unused, so let's not bother
setting them.
Change-Id: I40b501d998530d22518ce1c4d14575513a8125bb
Clang and gcc default to different standards, so we should be explicit
about the versions we want to compile for.
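As a hypothetical illustration, code like this is accepted or rejected
depending on the compiler's default standard unless -std is passed
explicitly:

    #include <stdio.h>

    int main(void) {
        /* Declaring the loop variable in the for statement is C99; a
         * compiler defaulting to C89/gnu89 rejects it, while one
         * defaulting to C99 or later accepts it. */
        for (int i = 0; i < 3; ++i) {
            printf("%d\n", i);
        }
        return 0;
    }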
Change-Id: I65495a2392dd29f36373b94c616c2506173e6033
Use basic .c versions of all functions for x86_64 until they are
manually optimized and .s versions are released.
Change-Id: I59bba08931e894822db485c8803c2665c226234a
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
Fortify calls to recv() and recvfrom().
We use __bos0 to match glibc's behavior, and because I haven't
tested using __bos.
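A minimal sketch of the fortify pattern being added, assuming the usual
_chk convention (the exact bionic declarations may differ):

    #include <stddef.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* __bos0 abbreviates __builtin_object_size(s, 0), the looser
     * estimate, matching glibc's choice for these calls. */
    #define __bos0(s) __builtin_object_size((s), 0)

    /* Checked entry point that aborts when len exceeds the real buffer
     * size; the name follows the fortify convention. */
    extern ssize_t __recvfrom_chk(int fd, void* buf, size_t len,
                                  size_t buf_size, int flags,
                                  struct sockaddr* src_addr,
                                  socklen_t* addr_len);

    /* recv is fortified by forwarding to the recvfrom check with the
     * compile-time object size passed alongside the caller's length. */
    static inline ssize_t recv_fortified(int fd, void* buf, size_t len,
                                         int flags) {
        return __recvfrom_chk(fd, buf, len, __bos0(buf), flags, NULL, NULL);
    }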
Change-Id: Iad6ae96551a89af17a9c347b80cdefcf2020c505
Found by adapting the simple unit tests for libc logging to test
snprintf too. Fix taken from upstream OpenBSD without updating
the rest of stdio.
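A hypothetical test in the same spirit, checking the standard snprintf
truncation contract (illustrative; not necessarily the exact bug fixed):

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf[4];
        /* snprintf must NUL-terminate within the given size and return
         * the length the untruncated output would have had. */
        int n = snprintf(buf, sizeof(buf), "%s", "hello");
        assert(n == 5);
        assert(strcmp(buf, "hel") == 0);
        return 0;
    }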
Change-Id: Ie339a8e9393a36080147aae4d6665118e5d93647
Required for the x86 build with a multilib compiler.
Change-Id: Iac71cdc3461df6fb48cb2a7b713324ca368e6704
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
I've mailed the tz list about this, and will switch to whatever upstream
fix comes along as soon as it's available.
Bug: 10310929
Change-Id: I36bf3fcf11f5ac9b88137597bac3487a7bb81b0f
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This avoids
an extra call to memcpy/memset, instead allowing a simple fall-through
from the chk code into the body of the real implementation.
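In C terms, the assembler fall-through computes roughly the following,
except that the check and the copy live in one routine so no extra call
is made (names here are illustrative; the real code is in the .S files):

    #include <stdlib.h>
    #include <string.h>

    /* C-level sketch of the checked entry point: validate dest_len,
     * then fall straight through into the memcpy body. */
    void* memcpy_chk_sketch(void* dst, const void* src, size_t count,
                            size_t dest_len) {
        if (__builtin_expect(count > dest_len, 0)) {
            /* The real code logs a fortify error to the libc log and
             * aborts. */
            abort();
        }
        return memcpy(dst, src, count);
    }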
Testing:
- Ran the libc_test tests for __memcpy_chk/__memset_chk on all Nexus
  devices.
- Wrote a small test executable with three calls to __memcpy_chk and
  three calls to __memset_chk: in the first call dest_len is length + 1,
  in the second dest_len is length, and in the third dest_len is
  length - 1. Verified that the first two calls pass and the third
  fails, and examined the logcat output on all Nexus devices to verify
  that the fortify error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
  __memcpy_chk and large copies, the savings are relatively small (about 1%).
  For small copies, the savings are large on cortex-a15/krait devices
  (between 5% and 30%).
  For cortex-a9 and small copies, the speed-up is present but relatively
  small (about 3% to 5%).
  For __memset_chk and large copies, the savings are also small (about 1%).
  However, all processors show larger speed-ups on small copies (about
  30% to 100%).
Bug: 9293744
Merge from internal master.
(cherry-picked from 7c860db074)
Change-Id: I916ad305e4001269460ca6ebd38aaa0be8ac7f52
Create one version of strcat/strcpy/strlen for cortex-a15/krait and another
version for cortex-a9.
Tested with the libc_test strcat/strcpy/strlen tests, including new
tests that verify that the src for strcat/strcpy does not overread
across page boundaries.
NOTE: The handling of unaligned strcpy (the same code is used in strcat)
could probably be optimized further so that the src is read 64 bits at a
time instead of the partial reads that occur now (see the sketch below).
strlen improves only slightly, since it was already optimized recently.
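For reference, the trick behind such word-at-a-time reads is the classic
zero-in-word test. A minimal C sketch (a simplified illustration, not the
bionic assembler; assumes 8-byte-aligned input):

    #include <stdint.h>
    #include <stddef.h>

    /* Nonzero iff some byte of v is zero: the classic
     * (v - 0x01..01) & ~v & 0x80..80 test, which lets strlen/strcpy
     * scan a whole 64-bit word per iteration instead of byte by byte. */
    static int has_zero_byte(uint64_t v) {
        return ((v - 0x0101010101010101ULL) & ~v
                & 0x8080808080808080ULL) != 0;
    }

    /* Word-at-a-time strlen sketch; assumes s is 8-byte aligned. The
     * real implementations do this in assembler, where reading past
     * the terminator within an aligned word is safe. */
    static size_t strlen_words(const char* s) {
        const uint64_t* w = (const uint64_t*) s;
        while (!has_zero_byte(*w)) ++w;
        const char* p = (const char*) w;
        while (*p) ++p;
        return (size_t)(p - s);
    }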
Performance improvements for strcpy and strcat (using an empty dest string):
cortex-a9
- Small copies improve by about 5% to 20% once the size gets above 10 bytes.
- Copies >= 1024 bytes, about a 60% improvement.
- Unaligned copies, about a 40% improvement.
cortex-a15
- Most small copies show a 100% improvement; a few improve by only 20%.
- Copies >= 1024 bytes, about a 150% improvement.
- Unaligned copies, about a 100% improvement.
krait
- Small copies vary widely but average about a 20% improvement; the
  numbers improve with size, reaching about a 100% improvement when
  copying 64 bytes of data.
- Copies >= 1024 bytes, about a 100% improvement.
- When copying MBs of data, about a 50% improvement.
- Unaligned copies, about a 90% improvement.
As strcat destination strings get larger:
cortex-a9
- about a 40% improvement for small dst strings (>= 32 bytes).
- about a 250% improvement for dst strings >= 1024 bytes.
cortex-a15
- about a 200% improvement for small dst strings (>= 32 bytes).
- about a 250% improvement for dst strings >= 1024 bytes.
krait
- about a 25% improvement for small dst strings (>= 32 bytes).
- about a 100% improvement for dst strings >= 1024 bytes.
Merge from internal master.
(cherry-picked from d119b7b6f4)
Change-Id: I296463b251ef9fab004ee4dded2793feca5b547a
This only works on some kernels, and only on page-aligned regions of
anonymous memory. The name will show up in /proc/pid/maps as
[anon:<name>] and in /proc/pid/smaps as Name: <name>.
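A sketch of how a caller might use this, assuming the Android kernel's
PR_SET_VMA extension (constants spelled out in case the headers lack
them):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_VMA
    #define PR_SET_VMA 0x53564d41        /* Android kernel extension */
    #define PR_SET_VMA_ANON_NAME 0
    #endif

    int main(void) {
        /* Map a page-aligned anonymous region and name it; on kernels
         * with this feature it shows as [anon:my-arena] in the maps. */
        void* p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;
        if (prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME, (unsigned long) p,
                  4096, (unsigned long) "my-arena") == -1) {
            perror("prctl");             /* kernel without the feature */
        }
        return 0;
    }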
Change-Id: If31667cf45ff41cc2a79a140ff68707526def80e
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This avoids
an extra call to memcpy/memset, instead allowing a simple fall-through
from the chk code into the body of the real implementation.
Testing:
- Ran the libc_test tests for __memcpy_chk/__memset_chk on all Nexus
  devices.
- Wrote a small test executable with three calls to __memcpy_chk and
  three calls to __memset_chk: in the first call dest_len is length + 1,
  in the second dest_len is length, and in the third dest_len is
  length - 1. Verified that the first two calls pass and the third
  fails, and examined the logcat output on all Nexus devices to verify
  that the fortify error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
  __memcpy_chk and large copies, the savings are relatively small (about 1%).
  For small copies, the savings are large on cortex-a15/krait devices
  (between 5% and 30%).
  For cortex-a9 and small copies, the speed-up is present but relatively
  small (about 3% to 5%).
  For __memset_chk and large copies, the savings are also small (about 1%).
  However, all processors show larger speed-ups on small copies (about
  30% to 100%).
Bug: 9293744
Change-Id: I8926d59fe2673e36e8a27629e02a7b7059ebbc98
Create one version of strcat/strcpy/strlen for cortex-a15/krait and another
version for cortex-a9.
Tested with the libc_test strcat/strcpy/strlen tests, including new
tests that verify that the src for strcat/strcpy does not overread
across page boundaries.
NOTE: The handling of unaligned strcpy (the same code is used in strcat)
could probably be optimized further so that the src is read 64 bits at a
time instead of the partial reads that occur now.
strlen improves only slightly, since it was already optimized recently.
Performance improvements for strcpy and strcat (using an empty dest string):
cortex-a9
- Small copies improve by about 5% to 20% once the size gets above 10 bytes.
- Copies >= 1024 bytes, about a 60% improvement.
- Unaligned copies, about a 40% improvement.
cortex-a15
- Most small copies show a 100% improvement; a few improve by only 20%.
- Copies >= 1024 bytes, about a 150% improvement.
- Unaligned copies, about a 100% improvement.
krait
- Small copies vary widely but average about a 20% improvement; the
  numbers improve with size, reaching about a 100% improvement when
  copying 64 bytes of data.
- Copies >= 1024 bytes, about a 100% improvement.
- When copying MBs of data, about a 50% improvement.
- Unaligned copies, about a 90% improvement.
As strcat destination strings get larger:
cortex-a9
- about a 40% improvement for small dst strings (>= 32 bytes).
- about a 250% improvement for dst strings >= 1024 bytes.
cortex-a15
- about a 200% improvement for small dst strings (>= 32 bytes).
- about a 250% improvement for dst strings >= 1024 bytes.
krait
- about a 25% improvement for small dst strings (>= 32 bytes).
- about a 100% improvement for dst strings >= 1024 bytes.
Change-Id: Ifd091ebdbce70fe35a7c5d8f71d5914255f3af35
Yet another archaic relic containing bugs that had been fixed years before the
Android project even started...
Bug: 9935113
Change-Id: I3c9d019a216efd609ee568cf8c70bc360f357403