I originally modified the krait mainloop prefetch from cacheline * 8 to * 2.
This causes a perf degradation for copies too big to fit in the cache.
Fixing this back to the original * 8. I tried other multiples, but * 8 is the
sweet spot on krait.
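For illustration, the main-loop prefetch has roughly this shape (a sketch
assuming a 64-byte cache line; register use is illustrative and r4-r7 would
be saved by the real prologue, this is not the actual bionic code):

    .L_mainloop:
            pld     [r1, #(64 * 8)]     @ prefetch 8 cache lines ahead of src;
                                        @ at * 2 the data no longer arrives in
                                        @ time once copies outgrow the cache
            ldmia   r1!, {r4-r7}        @ copy 16 bytes per iteration
            stmia   r0!, {r4-r7}
            subs    r2, r2, #16
            bhs     .L_mainloop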
Bug: 11221806
(cherry picked from commit c3c58fb560)
Change-Id: I369f81d91ba97a3fcecac84ac57dec921b4758c8
For some reason the new cortex-a15 memcpy code from ARM is really bad
for really large copies. This change forces us to go down the old path
for all copies.
All of my benchmarks show the new version is faster for large copies, but
something is going on that I don't understand.
Bug: 10838353
Change-Id: I01c16d4a2575e76f4c69862c6f78fd9024eb3fb8
I accidentally did a signed comparison of the size_t values passed in
for three of the _chk functions. Changing them to unsigned compares.
Add three new tests to verify this failure is fixed.
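The bug and the fix, sketched (register assignments are illustrative):

            @ r2 = bytes to write, r3 = dst_len, both size_t (unsigned)
            cmp     r2, r3
            @ bgt   .L_fortify_fail    @ before: signed compare, so a size
                                       @ with the high bit set looked
                                       @ negative and never failed
            bhi     .L_fortify_fail    @ after: unsigned "higher" is correct
                                       @ for all size_t values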
Bug: 10691831
Change-Id: Ia831071f7dffd5972a748d888dd506c7cc7ddba3
The backtrace when a fortify check failed was not correct. This change
adds all of the necessary directives to get a correct backtrace.
Fix the strcmp directives and change all labels to local labels.
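The resulting pattern looks roughly like this (a sketch, not the actual
bionic source):

    __strcpy_chk:
            .cfi_startproc
            push    {r0, lr}
            .cfi_def_cfa_offset 8       @ tell the unwinder the stack grew by 8
            .cfi_rel_offset r0, 0       @ ...and where each register was saved
            .cfi_rel_offset lr, 4
            @ ... body ...
    .L_fortify_fail:                    @ .L labels stay out of the symbol
            bl      __fortify_chk_fail  @ table, so gdb stops mistaking them
            .cfi_endproc                @ for function entry points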
Testing:
- Verify that the runtime can decode the stack for __memcpy_chk, __memset_chk,
__strcpy_chk, __strcat_chk fortify failures.
- Verify that gdb can decode the stack properly when hitting a fortify check.
- Verify that the runtime can decode the stack for a seg fault for all of the
_chk functions and for memcpy/memset.
- Verify that gdb can decode the stack for a seg fault for all of the _chk
functions and for memcpy/memset.
- Verify that the runtime can decode the stack for a seg fault for strcmp.
- Verify that gdb can decode the stack for a seg fault in strcmp.
Bug: 10342460
Bug: 10345269
Change-Id: I1dedadfee207dce4a285e17a21e8952bbc63786a
The libcorkscrew stack unwinder does not understand cfi directives,
so add .save directives so that it can function properly.
Also add the directives to strcmp.S and fix a missing set of
directives in cortex-a9/memcpy_base.S.
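Side by side, the two sets of directives describe the same push (a sketch;
assume a simple save of r0 and lr):

            .fnstart                    @ begin ARM EHABI unwind info
            .cfi_startproc              @ begin DWARF CFI unwind info
            push    {r0, lr}
            .save   {r0, lr}            @ EHABI: what libcorkscrew consumes
            .cfi_def_cfa_offset 8       @ DWARF: the same fact for cfi-aware
            .cfi_rel_offset r0, 0       @ unwinders such as gdb
            .cfi_rel_offset lr, 4
            @ ... function body ...
            .cfi_endproc
            .fnend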
Bug: 10345269
Change-Id: I043f493e0bb6c45bd3f4906fbe1d9f628815b015
This change pulls the memcpy code out into a new file so that the
__strcpy_chk and __strcat_chk functions can use it with an include.
The new versions of the two chk functions use assembly versions
of strlen and memcpy to implement this check. This allows near
parity with the assembly versions of strcpy/strcat. It also means that
as memcpy implementations get faster, so do the chk functions.
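Roughly, the new __strcpy_chk has this shape (a sketch; register choices
are illustrative and the strlen loop is elided):

    __strcpy_chk:                       @ r0 = dst, r1 = src, r2 = dst_len
            push    {r0, lr}
            @ ... assembly strlen over r1, leaving the length in r3 ...
            cmp     r3, r2
            bhs     .L_fortify_fail     @ need strlen(src) < dst_len for the NUL
            add     r2, r3, #1          @ bytes to copy, including terminator
            pop     {r0, lr}
            @ execution then falls into the copy loop pulled in here:
    #include "memcpy_base.S"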
Other included changes:
- Change all of the assembly labels to local labels. The other labels
confuse gdb and mess up backtracing.
- Add .cfi_startproc and .cfi_endproc directives so that gdb is not
confused when falling through from one function to another.
- Change all functions to use cfi directives since they are more powerful.
- Move the memcpy_chk fail code outside of the memcpy function definition
so that backtraces work properly.
- Preserve lr before the calls to __fortify_chk_fail so that the backtrace
actually works.
Testing:
- Ran the bionic unit tests. Verified all error messages in logs are set
correctly.
- Ran libc_test, replacing strcpy with __strcpy_chk and replacing
strcat with __strcat_chk.
- Ran the debugger on nexus10, nexus4, and old nexus7. Verified that the
backtrace is correct for all fortify check failures. Also verified that
when falling through from __memcpy_chk to memcpy that the backtrace is
still correct. Also verified the same for __memset_chk and bzero.
Verified the two different paths in the cortex-a9 memset routine that
save variables to the stack still show the backtrace properly.
Bug: 9293744
Change-Id: Id5aec8c3cb14101d91bd125eaf3770c9c8aa3f57
(cherry picked from commit 2be91915dc)
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset by letting the chk code fall
through directly into the body of the real implementation.
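The fall-through arrangement looks roughly like this (a sketch of the
layout, not the actual source):

    __memcpy_chk:                       @ r0 = dst, r1 = src, r2 = n, r3 = dst_len
            cmp     r2, r3
            bhi     .L_memcpy_chk_fail  @ n > dst_len: overflow detected
            @ on success no branch is taken: execution falls straight
            @ through into the real memcpy below
    memcpy:
            @ ... the usual copy loop ...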
Testing:
- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and
three calls to __memset_chk. First call dest_len is length + 1. Second
call dest_len is length. Third call dest_len is length - 1.
Verified that the first two calls pass, and the third fails. Examined
the logcat output on all nexus devices to verify that the fortify
error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
__memcpy_chk and large copies, the savings is relatively small (about 1%).
For small copies, the savings is large on cortex-a15/krait devices
(between 5% and 30%).
For cortex-a9 and small copies, the speed up is present, but relatively
small (about 3% to 5%).
For __memset_chk and large copies, the savings is also small (about 1%).
However, all processors show larger speed-ups on small copies (about 30% to
100%).
Bug: 9293744
Change-Id: I8926d59fe2673e36e8a27629e02a7b7059ebbc98
Create one version of strcat/strcpy/strlen for cortex-a15/krait and another
version for cortex-a9.
Tested with the libc_test strcat/strcpy/strlen tests.
Including new tests that verify that reads of the src for strcat/strcpy
do not cross page boundaries.
NOTE: The handling of unaligned strcpy (same code in strcat) could probably
be optimized further such that the src is read 64 bits at a time instead of
the partial reads occurring now.
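One possible shape for that is the standard little-endian shift-and-merge
trick, sketched under the assumption that r1 has been rounded down to a
word boundary, r4 = 8 * offset, and r5 = 32 - r4 (not what the code does
today):

    .L_unaligned_loop:
            ldr     r3, [r1], #4        @ next aligned word of src; aligned
                                        @ loads cannot cross a page boundary
            orr     r2, r2, r3, lsl r5  @ merge leftover low bytes with the
                                        @ new word's contribution
            @ ... check r2 for a zero byte and store it to dst ...
            mov     r2, r3, lsr r4      @ carry the spill-over bytes forward
            b       .L_unaligned_loop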
strlen improves slightly since it was recently optimized.
Performance improvements for strcpy and strcat (using an empty dest string):
cortex-a9
- Small copies improve by about 5% to 20% as the size gets above 10 bytes.
- Copies >= 1024, about a 60% improvement.
- Unaligned copies, about a 40% improvement.
cortex-a15
- Most small copies exhibit a 100% improvement; a few copies only
improve by 20%.
- Copies >= 1024, about 150% improvement.
- Unaligned copies, about 100% improvement.
krait
- Most small copies vary widely, but average about a 20% improvement; the
performance then gets better, hitting about a 100% improvement when
copying 64 bytes of data.
- Copies >= 1024, about 100% improvement.
- When copying MBs of data, about 50% improvement.
- Unaligned copies, about 90% improvement.
As strcat destination strings get larger in size:
cortex-a9
- about 40% improvement for small dst strings (>= 32).
- about 250% improvement for dst strings >= 1024.
cortex-a15
- about 200% improvement for small dst strings (>= 32).
- about 250% improvement for dst strings >= 1024.
krait
- about 25% improvement for small dst strings (>= 32).
- about 100% improvement for dst strings >= 1024.
Change-Id: Ifd091ebdbce70fe35a7c5d8f71d5914255f3af35
This is needed for prebuilt library compatibility when passing
-mcpu=cortex-a9 or higher on a modern toolchain.
Change-Id: I73eb2393377914ae26216a8c2828ad973d1c1225
Tested using a static version of the strlen libc_test program
on a nexus7 that uses the generic code.
Merge from internal master.
(cherry-picked from d8d10a8994)
Change-Id: I88f7dc01dc5b5c3ac2d5580d92153bc1bc36c564
This optimized version is primarily targeted at cortex-a15.
Tested on all nexus devices using the system/extras/libc_test strlen test.
Tested alignments from 1 to 32 that are powers of 2.
Tested that strlen does not cross page boundaries at all alignments.
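The core of the technique is the classic word-at-a-time zero-byte test,
sketched here for 32-bit words (the byte-by-byte prologue that aligns r0
is elided):

            ldr     ip, =0x01010101     @ magic constants for the test
            ldr     r1, =0x80808080
    .L_mainloop:
            ldr     r2, [r0], #4        @ aligned word load: can never cross
                                        @ a page boundary
            sub     r3, r2, ip          @ word - 0x01010101
            bic     r3, r3, r2          @ ... & ~word
            ands    r3, r3, r1          @ ... & 0x80808080: nonzero iff some
            beq     .L_mainloop         @ byte of the word was zero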
Speed improvements listed below:
cortex-a15
- Sizes >= 32 bytes, ~75% improvement.
- Sizes >= 1024 bytes, ~250% improvement.
cortex-a9
- Sizes >= 32 bytes, ~75% improvement.
- Sizes >= 1024 bytes, ~85% improvement.
krait
- Sizes >= 32 bytes, ~95% improvement.
- Sizes >= 1024 bytes, ~160% improvement.
Merge from internal master.
(cherry-picked from 2fc0717977)
Change-Id: I1ceceb4e745fd68e9d946f96d1d42e0cdaff6ccf
We cleaned up the auto-generated ones a while back to not touch
the stack unnecessarily if they have <= 4 arguments. This patch
cleans up some hand-crafted ones.
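With four or fewer arguments the AAPCS already has everything in r0-r3, so
a stub needs no stack traffic at all (a sketch; __NR_xxx stands in for the
real syscall number):

            mov     ip, r7              @ preserve the caller's r7 in a
                                        @ scratch register, not on the stack
            ldr     r7, =__NR_xxx       @ syscall number for ARM EABI
            swi     #0
            mov     r7, ip
            bx      lr                  @ (error handling elided)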
Also improve comments in clone.S.
Change-Id: I8850bf98f2b26829385315304472a760e6880ed8
Tested using a static version of the strlen libc_test program
on a nexus7 that uses the generic code.
Change-Id: If04d15dcb6c0b18f27f2fefadca5510ed49016c5
This memcpy code uses NEON/VFP to achieve very good performance
on ARMv7-A processors. It is specifically tuned for A15 but should
provide good performance on A9 also. It is equivalent to the code
in cortex-strings rev 116.
This patch is a follow-up to the existing gerrit change:
I7f6f77995f3ca903ad9c66d14261441667a2a935
This version includes a tweak for performance on misaligned
buffers and splits the header comment into license and
documentation sections.
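The flavor of the inner loop (a sketch; alignment handling and the pld
schedule are elided):

    .L_neon_loop:
            vld1.8  {d0 - d3}, [r1]!    @ load 32 bytes of src into NEON regs
            vst1.8  {d0 - d3}, [r0]!    @ store them to dst
            subs    r2, r2, #32
            bhs     .L_neon_loop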
Change-Id: Ibd2e23c8d8e01357ba0247be1d05192de3ceba69
Signed-off-by: Will Newton <will.newton@linaro.org>
This memcpy code uses NEON/VFP to achieve very good performance
on ARMv7-A processors. It is specifically tuned for A15 but should
provide good performance on A9 also. It is equivalent to the code
in cortex-strings rev 116.
This patch is a follow-up to the existing gerrit change:
I7f6f77995f3ca903ad9c66d14261441667a2a935
But this version includes a tweak for performance on misaligned
buffers.
Change-Id: I285abac0068f8ae29a1cbf7862ea8590aadaf0a7
Signed-off-by: Will Newton <will.newton@linaro.org>
Streamline the memcpy a bit, removing some unnecessary instructions.
The biggest speed improvement comes from changing the size of
the preload. On krait, the sweet spot for the preload in the main
loop is twice the L1 cache line size.
In most cases, these small tweaks yield speed-ups of more than 1000MB/s. As
the size of the memcpy approaches about 1MB, the speed improvement
disappears.
Change-Id: Ief79694d65324e2db41bee4707dae19b8c24be62