I have no idea what _BITSIZE was supposed to be: glibc doesn't have it,
the BSDs don't have it, and no code currently uses it. But having
it set unconditionally to 32 sounds like a bad idea.
Change-Id: I900235c1489afba891fff0bc3b43e9d593249a4f
Clang (prior to 3.4) does not actually provide a declaration (or definition)
of _Unwind_GetIP() for ARM. We can work around this by writing our own
basic implementation using the available primitive operations.
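A minimal sketch of such a workaround, built on the ARM EHABI _Unwind_VRS_Get()
primitive that Clang does provide (the function name below is illustrative, not
necessarily the exact symbol bionic defines):
#include <stdint.h>
#include <unwind.h>
/* ARM-only sketch: read the program counter (core register 15) through the
 * EHABI virtual register set API and strip the Thumb bit. */
static inline uintptr_t get_unwind_ip(struct _Unwind_Context* ctx) {
  uint32_t pc;
  _Unwind_VRS_Get(ctx, _UVRSC_CORE, 15 /* r15/pc */, _UVRSD_UINT32, &pc);
  return pc & ~(uintptr_t)1;
}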
Change-Id: If6c66846952d8545849ad32d2b55daa4599cfe2c
Use basic .c versions of all functions for x86_64 until hand-optimized
.s versions are released.
Change-Id: I59bba08931e894822db485c8803c2665c226234a
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
This was causing conflicting declarations for the library definitions of
common functions like sprintf(), snprintf(), and strchr().
Change-Id: I5daaa8a58183aa0d4d0fae8a7cb799671810f576
This is used to set and get TLS on x86_64. There's no public declaration
of this because, as in glibc, it's not meant to be used outside the C
library (though we don't currently have any visibility controls to
ensure this).
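For context, a hedged sketch of how TLS is set on x86_64 Linux, assuming the
function in question wraps the arch_prctl system call (the wrapper name below
is illustrative):
#include <asm/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>
/* Set the %fs base, which x86_64 uses as the TLS pointer, for the current
 * thread; the C library does this when setting up a thread's TLS area. */
static int set_tls_base(void* tls) {
  return syscall(SYS_arch_prctl, ARCH_SET_FS, tls);
}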
Change-Id: I5fc0a5e3ffc3f4cd597d92ee685ab19568ea18f7
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
This touches the x86 stubs too because arm, x86, and x86_64 now
all share the same header (at a source level), which causes a
reordering of the #include lines.
Change-Id: If9a1e2b2718bd41d8399fea748bce672c513ef84
* Tune syscall stubs generator for 4th target: x86_64
* Update SYSCALLS.TXT with x86_64 syscalls:
- Most of the x86 syscalls are equally supported
- *32 syscalls are not supported on 64-bit
- *64 syscalls are replaced accordingly without 64 suffix
- Some syscalls are not supported and are replaced with their x86_64 analogs
Syscalls are regenerated in a separate patch for review convenience.
Change-Id: I4ea2e0f13759b0aa61f05208ca68da8d6bc7c048
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
Copyright headers shouldn't contain the filename (and especially
shouldn't contain a different file's filename).
Change-Id: I82690a3bf371265402bc16f5d2fbb9299c3a1926
Manual changes:
cpp.py: cope with macros that refer to other macros.
defaults.py: x86 no longer always implies __i386__; use __i386__ to replace
the kernel CONFIG_X86_32 flag.
asm/page.h: the upstream page.h isn't a uapi header and no longer includes
the stuff we were using it for. Let's just have our own static file, since
it's the same for all our architectures (both 32- and 64-bit).
sys/select.h: we used to use the various FD_SET-related macros from the
kernel header files, but they've gone. Adjust by adding trivial equivalent
definitions.
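Roughly what the trivial equivalents in sys/select.h look like, assuming
fd_set holds an fds_bits array and NFDBITS is the number of bits per array
element (treat the exact member and macro names as assumptions):
#include <string.h>  /* memset() for FD_ZERO */
#define __FD_ELT(fd)      ((fd) / NFDBITS)
#define __FD_MASK(fd)     (1UL << ((fd) % NFDBITS))
#define FD_ZERO(set)      (memset((set), 0, sizeof(fd_set)))
#define FD_SET(fd, set)   ((set)->fds_bits[__FD_ELT(fd)] |= __FD_MASK(fd))
#define FD_CLR(fd, set)   ((set)->fds_bits[__FD_ELT(fd)] &= ~__FD_MASK(fd))
#define FD_ISSET(fd, set) (((set)->fds_bits[__FD_ELT(fd)] & __FD_MASK(fd)) != 0)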
Automated changes:
libc/kernel/arch-x86, libc/kernel/common: regenerated from
external/kernel-headers.
Change-Id: I84fc0ed52dc742e043b4ae300fd3b58ee99b7fcd
If "n" is smaller than the size of "src", then we'll
never read off the end of src. It makes no sense to call
__strncpy_chk2 in those circumstances.
For example, consider the following code:
#include <stdio.h>
#include <string.h>

int main() {
  char src[10];
  char dst[5];
  /* src is deliberately not NUL-terminated. */
  memcpy(src, "0123456789", sizeof(src));
  strncpy(dst, src, sizeof(dst));  /* reads at most sizeof(dst) == 5 bytes of src */
  dst[4] = '\0';
  printf("%s\n", dst);
  return 0;
}
In this code, it's clear that the strncpy will never read off
the end of src.
Change-Id: I9cf58857a0c5216b4576d21d3c1625e2913ccc03
localtime.c and strftime.c are still quite different from upstream because of
our extensions, but the other files continue to be identical, and the two
exceptions should be otherwise identical.
From the tzcode2013e release notes:
Changes affecting Godthab time stamps after 2037 if version mismatch
Allow POSIX-like TZ strings where the transition time's hour can
range from -167 through 167, instead of the POSIX-required 0
through 24. E.g., TZ='FJT-12FJST,M10.3.1/146,M1.3.4/75' for the
new Fiji rules. This is a more-compact way to represent
far-future time stamps for America/Godthab, America/Santiago,
Antarctica/Palmer, Asia/Gaza, Asia/Hebron, Asia/Jerusalem,
Pacific/Easter, and Pacific/Fiji. Other zones are unaffected by
this change. (Derived from a suggestion by Arthur David Olson.)
Allow POSIX-like TZ strings where daylight saving time is in
effect all year. E.g., TZ='WART4WARST,J1/0,J365/25' for Western
Argentina Summer Time all year. This supports a more-compact way
to represent the 2013d data for America/Argentina/San_Luis.
Because of the change for San Luis noted above this change does not
affect the current data. (Thanks to Andrew Main (Zefram) for
suggestions that improved this change.)
Where these two TZ changes take effect, there is a minor extension
to the tz file format in that it allows new values for the
embedded TZ-format string, and the tz file format version number
has therefore been increased from 2 to 3 as a precaution.
Version-2-based client code should continue to work as before for
all time stamps before 2038. Existing version-2-based client code
(tzcode, GNU/Linux, Solaris) has been tested on version-3-format
files, and typically works in practice even for time stamps after
2037; the only known exception is America/Godthab.
Changes affecting API
Support for floating-point time_t has been removed.
It was always dicey, and POSIX no longer requires it.
(Thanks to Eric Blake for suggesting to the POSIX committee to
remove it, and thanks to Alan Barrett, Clive D.W. Feather, Andy
Heninger, Arthur David Olson, and Alois Treindl, for reporting
bugs and elucidating some of the corners of the old floating-point
implementation.)
The signatures of 'offtime', 'timeoff', and 'gtime' have been
changed back to the old practice of using 'long' to represent UT
offsets. This had been inadvertently and mistakenly changed to
'int_fast32_t'. (Thanks to Christos Zoulas.)
The code avoids undefined behavior on integer overflow in some
more places, including gmtime, localtime, mktime and zdump.
Changes affecting code internals
Minor changes pacify GCC 4.7.3 and GCC 4.8.1.
Changes affecting documentation and commentary
Documentation and commentary is more careful to distinguish UT in
general from UTC in particular. (Thanks to Steve Allen.)
From the tzcode2013f release notes:
Changes affecting API
The types of the global variables 'timezone' and 'altzone' (if present)
have been changed back to 'long'. This is required for 'timezone'
by POSIX, and for 'altzone' by common practice, e.g., Solaris 11.
These variables were originally 'long' in the tz code, but were
mistakenly changed to 'time_t' in 1987; nobody reported the
incompatibility until now. The difference matters on x32, where
'long' is 32 bits and 'time_t' is 64. (Thanks to Elliott Hughes.)
Change-Id: I14937c42a391ddb865e4d89f0783961bcc6baa21
From the release notes:
Changes affecting near-future time stamps
Tocantins will very likely not observe DST starting this spring.
(Thanks to Steffen Thorsen.)
Jordan will likely stay at UTC+3 indefinitely, and will not fall
back this fall.
Palestine will fall back at 00:00, not 01:00. (Thanks to Steffen Thorsen.)
Change-Id: Iccee57578eef2ab51c519a23f151bc1963262ffe
From the release notes:
Changes affecting near-future time stamps
This year Fiji will start DST on October 27, not October 20.
(Thanks to David Wheeler for the heads-up.) For now, guess that
Fiji will continue to spring forward the Sunday before the fourth
Monday in October.
Changes affecting time stamps before 1970
Pacific/Johnston is now a link to Pacific/Honolulu. This corrects
some errors before 1947.
Some zones have been turned into links, when they differ from
existing zones only in older data that was likely invented or that
differs only in LMT or transition from LMT. These changes affect
only time stamps before 1943. The affected zones are:
Africa/Juba, America/Anguilla, America/Aruba, America/Dominica,
America/Grenada, America/Guadeloupe, America/Marigot,
America/Montserrat, America/St_Barthelemy, America/St_Kitts,
America/St_Lucia, America/St_Thomas, America/St_Vincent,
America/Tortola, and Europe/Vaduz. (Thanks to Alois Treindl for
confirming that the old Europe/Vaduz zone was wrong and the new
link is better for WWII-era times.)
Change Kingston Mean Time from -5:07:12 to -5:07:11. This affects
America/Cayman, America/Jamaica and America/Grand_Turk time stamps
from 1890 to 1912.
Change the UT offset of Bern Mean Time from 0:29:44 to 0:29:46.
This affects Europe/Zurich time stamps from 1853 to 1894. (Thanks
to Alois Treindl).
Change the date of the circa-1850 Zurich transition from 1849-09-12
to 1853-07-16, overriding Shanks with data from Messerli about
postal and telegraph time in Switzerland.
Data changes affecting behavior of tzselect and similar programs
Country code BQ is now called the more-common name "Caribbean Netherlands"
rather than the more-official "Bonaire, St Eustatius & Saba".
Remove from zone.tab the names America/Montreal, America/Shiprock,
and Antarctica/South_Pole, as they are equivalent to existing
same-country-code zones for post-1970 time stamps. The data for
these names are unchanged, so the names continue to work as before.
Change-Id: If78a517687532afcc0b22c7df664b5955f6e1564
Much of the per-architecture duplication can be removed, so let's do so
before we add the 64-bit architectures.
Change-Id: Ieb796503c8e5353ea38c3bab768bb9a690c9a767
Fortify calls to recv() and recvfrom().
We use __bos0 to match glibc's behavior, and because I haven't
tested using __bos.
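A sketch of the check this adds for recv(); the wrapper below is illustrative,
not bionic's actual __recv_chk machinery, and __bos0 is the alias for
__builtin_object_size((s), 0) described elsewhere in this log:
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>
#define __bos0(s) __builtin_object_size((s), 0)
/* When inlined into the caller, __bos0(buf) is the compiler's bound on the
 * whole object buf points to, or (size_t)-1 if it can't tell. */
static __inline__ __attribute__((always_inline))
ssize_t checked_recv(int fd, void* buf, size_t len, int flags) {
  size_t bos = __bos0(buf);
  if (bos != (size_t)-1 && len > bos) {
    abort();  /* bionic reports this through __fortify_chk_fail() instead */
  }
  return recv(fd, buf, len, flags);
}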
Change-Id: Iad6ae96551a89af17a9c347b80cdefcf2020c505
Found by adapting the simple unit tests for libc logging to test
snprintf too. Fix taken from upstream OpenBSD without updating
the rest of stdio.
Change-Id: Ie339a8e9393a36080147aae4d6665118e5d93647
I accidentally did a signed comparison of the size_t values passed in
for three of the _chk functions. Changing them to unsigned compares.
Add three new tests to verify this failure is fixed.
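A standalone illustration of the bug class (not the actual _chk code): once
the operands are converted to a signed type, any size above SSIZE_MAX,
including the conventional "unknown size" value (size_t)-1, is treated as
negative and the check gives the wrong answer.
#include <stddef.h>
#include <sys/types.h>
static int overflows_signed(size_t n, size_t dest_len) {
  return (ssize_t)n > (ssize_t)dest_len;  /* wrong: signed comparison */
}
static int overflows_unsigned(size_t n, size_t dest_len) {
  return n > dest_len;                    /* right: unsigned comparison */
}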
Bug: 10691831
Merge from internal master.
(cherry-picked from 883ef2499c)
Change-Id: Id9a96b549435f5d9b61dc132cf1082e0e30889f5
The backtrace when a fortify check failed was not correct. This change
adds all of the necessary directives to get a correct backtrace.
Fix the strcmp directives and change all labels to local labels.
Testing:
- Verify that the runtime can decode the stack for __memcpy_chk, __memset_chk,
__strcpy_chk, __strcat_chk fortify failures.
- Verify that gdb can decode the stack properly when hitting a fortify check.
- Verify that the runtime can decode the stack for a seg fault for all of the
_chk functions and for memcpy/memset.
- Verify that gdb can decode the stack for a seg fault for all of the _chk
functions and for memcpy/memset.
- Verify that the runtime can decode the stack for a seg fault for strcmp.
- Verify that gdb can decode the stack for a seg fault in strcmp.
Bug: 10342460
Bug: 10345269
Merge from internal master.
(cherry-picked from 05332f2ce7)
Change-Id: Ibc919b117cfe72b9ae97e35bd48185477177c5ca
The libcorkscrew stack unwinder does not understand cfi directives,
so add .save directives so that it can function properly.
Also add the directives to strcmp.S and fix a missing set of
directives in cortex-a9/memcpy_base.S.
Bug: 10345269
Merge from internal master.
(cherry-picked from 5f7ccea3ff)
Change-Id: If48a216203216a643807f5d61906015984987189
This adds mmap64() to bionic so that a large offset can be passed to
the kernel. However, the syscall mechanism only passes a 32-bit number
to the kernel, so effectively the largest offset that can be passed is
about 43 bits (the offset is signed, and the number passed to the
kernel is a count of pages; a 4K page size accounts for the other
12 bits).
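A hedged sketch of the shape of such an mmap64() on a 32-bit target, assuming
it sits on top of the mmap2 syscall, which takes its offset in 4096-byte pages
(the __mmap2 helper name is an assumption):
#include <errno.h>
#include <sys/mman.h>
#include <sys/types.h>
#define MMAP2_SHIFT 12  /* mmap2 takes the offset as a count of 4K pages */
extern void* __mmap2(void* addr, size_t size, int prot, int flags, int fd, size_t page_offset);
void* mmap64(void* addr, size_t size, int prot, int flags, int fd, off64_t offset) {
  if (offset < 0 || (offset & ((1 << MMAP2_SHIFT) - 1)) != 0) {
    errno = EINVAL;
    return MAP_FAILED;
  }
  /* The page count must still fit in the 32 bits the syscall mechanism
   * passes on, which is where the ~43-bit limit above comes from. */
  return __mmap2(addr, size, prot, flags, fd, (size_t)(offset >> MMAP2_SHIFT));
}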
Change-Id: Ib54f4e9b54acb6ef8b0324f3b89c9bc810b07281
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
__page_shift and __page_size were accidentally declared in unistd.h with
C linkage - their implementation needs to use the same linkage.
Going forward, though, let's stop the inlining madness and let's kill
the non-standard __getpageshift(). This patch takes getpagesize(3) out
of line and removes __getpageshift but fixes __page_shift and __page_size
for backwards binary compatibility.
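A rough sketch of the resulting compatibility surface, assuming a fixed 4K
page size (the exact types of the legacy symbols are an assumption):
/* Keep the old data symbols so already-built binaries still resolve them,
 * and make getpagesize(3) an ordinary out-of-line function. */
unsigned int __page_size = 4096;
unsigned int __page_shift = 12;
int getpagesize(void) {
  return __page_size;
}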
Change-Id: I35ed66a08989ced1db422eb03e4d154a5d6b5bda
Signed-off-by: Bernhard Rosenkraenzer <Bernhard.Rosenkranzer@linaro.org>
This file was generated using bionic/libc/kernel/tools/update_all.py
The only change is a new netlink.h file, from external/kernel-headers.
Please see the commit message there for details.
Change-Id: I83645b88f0baff838131197913ebd70be69abd3f
KernelArgumentBlock is defined as a class in KernelArgumentBlock.h, but
forward declarations refer to it as a struct.
While this is essentially the same, the mismatch causes a compiler
warning in clang (and may cause warnings in future versions of gcc) in
code that is supposed to be compiled with -Werror.
Change-Id: I4ba49d364c44d0a42c276aff3a8098300dbdcdf0
Signed-off-by: Bernhard Rosenkraenzer <Bernhard.Rosenkranzer@linaro.org>
I accidentally did a signed comparison of the size_t values passed in
for three of the _chk functions. Changing them to unsigned compares.
Add three new tests to verify this failure is fixed.
Bug: 10691831
Change-Id: Ia831071f7dffd5972a748d888dd506c7cc7ddba3
Fix source location. Move the declaration of __strchr_chk out of the
__BIONIC_FORTIFY #ifdef, since it needs to be available when compiling
strchr.cpp even when __BIONIC_FORTIFY is not defined.
Change-Id: I552a6e16656e59b276b322886cfbf57bbfb2e6a7
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
A null or constant dereference occurs if properties are not initialized.
This shouldn't happen on Android devices, but it can happen when testing
bionic's libc.so on a Linux host.
Change-Id: I8f047cbe17d0e7bcde40ace000a8aa53789c16cb
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
The backtrace when a fortify check failed was not correct. This change
adds all of the necessary directives to get a correct backtrace.
Fix the strcmp directives and change all labels to local labels.
Testing:
- Verify that the runtime can decode the stack for __memcpy_chk, __memset_chk,
__strcpy_chk, __strcat_chk fortify failures.
- Verify that gdb can decode the stack properly when hitting a fortify check.
- Verify that the runtime can decode the stack for a seg fault for all of the
_chk functions and for memcpy/memset.
- Verify that gdb can decode the stack for a seg fault for all of the _chk
functions and for memcpy/memset.
- Verify that the runtime can decode the stack for a seg fault for strcmp.
- Verify that gdb can decode the stack for a seg fault in strcmp.
Bug: 10342460
Bug: 10345269
Change-Id: I1dedadfee207dce4a285e17a21e8952bbc63786a
Introduce __bos0 as a #define for __builtin_object_size((s), 0).
This macro is intended to be used for places where the standard
__bos macro isn't appropriate.
memcpy, memmove, and memset deliberately use __bos0. This is done
for two reasons:
1) I haven't yet tested to see if __bos is safe to use.
2) glibc uses __bos0 for these methods.
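A small example of the difference between the two modes; this is standard
GCC/Clang __builtin_object_size behavior rather than bionic-specific code:
#include <string.h>
static struct { char a[8]; char b[8]; } x;
void example(void) {
  /* __builtin_object_size(x.a, 0) is 16: the whole enclosing object x,
   * the bound __bos0 hands to memcpy/memmove/memset.
   * __builtin_object_size(x.a, 1) is 8: just the subobject x.a,
   * the stricter bound the regular __bos would give.
   * So this 10-byte copy passes a __bos0-based check but would be
   * flagged by a __bos-based one. */
  memcpy(x.a, "0123456789", 10);
}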
Change-Id: Ifbe02efdb10a72fe3529dbcc47ff647bde6feeca
clock_gettime was returning EINVAL for the values
produced by pthread_getcpuclockid.
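The failing pattern, roughly (standard POSIX calls, shown only to illustrate
the bug):
#include <pthread.h>
#include <time.h>
void show_thread_cpu_time(void) {
  clockid_t cpu_clock;
  struct timespec ts;
  pthread_getcpuclockid(pthread_self(), &cpu_clock);
  clock_gettime(cpu_clock, &ts);  /* was returning EINVAL for this clock id */
}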
Bug: 10346183
(cherry picked from commit 9b06cc3c1b)
Change-Id: Ib81a7024c218a4502f256c3002b9030e2aaa278d
We used to just try any iface we'd been told about as a
fallback, but that will end up mistakenly using a secondary
network's dns when we really don't have a default connection.
It also messed up our detection of whether we were doing the
lookup on the default or not (we'd get back our secondary net
iface as the default, do the compare and think we were on default).
Remove the lies and let dns fail if we don't have an iface for it.
bug:10132565
Conflicts:
libc/netbsd/resolv/res_cache.c
Change-Id: I357a9c34dad83215f44c5e0dd41ce2a7d6fe8f3f
Required for x86 build with multilib compiler.
Change-Id: Iac71cdc3461df6fb48cb2a7b713324ca368e6704
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
We used to just try any iface we'd been told about as a
fallback, but that will end up mistakenly using a secondary
network's dns when we really don't have a default connection.
It also messed up our detection of whether we were doing the
lookup on the default or not (we'd get back our secondary net
iface as the default, do the compare and think we were on default).
Remove the lies and let dns fail if we don't have an iface for it.
bug:10132565
Change-Id: I5f0f2abacaaaaf23c5292b20fba9d8dcb6fb10c5
I've mailed the tz list about this, and will switch to whatever upstream
fix comes along as soon as it's available.
Bug: 10310929
(cherry picked from commit 7843d44a59)
Change-Id: I205e2440703444c50cecd91d3458d33613ddbc59
I've mailed the tz list about this, and will switch to whatever upstream
fix comes along as soon as it's available.
Bug: 10310929
Change-Id: I36bf3fcf11f5ac9b88137597bac3487a7bb81b0f
The libcorkscrew stack unwinder does not understand cfi directives,
so add .save directives so that it can function properly.
Also add the directives to strcmp.S and fix a missing set of
directives in cortex-a9/memcpy_base.S.
Bug: 10345269
Change-Id: I043f493e0bb6c45bd3f4906fbe1d9f628815b015
clock_gettime was returning EINVAL for the values
produced by pthread_getcpuclockid.
Bug: 10346183
Change-Id: Iabe643d7d46110bb311a0367aa0fc737f653208e
This change pulls the memcpy code out into a new file so that
__strcpy_chk and __strcat_chk can use it with an include.
The new versions of the two chk functions use assembly versions
of strlen and memcpy to implement this check. This allows near
parity with the assembly versions of strcpy/strcat. It also means that
as memcpy implementations get faster, so do the chk functions.
Other included changes:
- Change all of the assembly labels to local labels. The other labels
confuse gdb and mess up backtracing.
- Add .cfi_startproc and .cfi_endproc directives so that gdb is not
confused when falling through from one function to another.
- Change all functions to use cfi directives since they are more powerful.
- Move the memcpy_chk fail code outside of the memcpy function definition
so that backtraces work properly.
- Preserve lr before the calls to __fortify_chk_fail so that the backtrace
actually works.
Testing:
- Ran the bionic unit tests. Verified all error messages in logs are set
correctly.
- Ran libc_test, replacing strcpy with __strcpy_chk and replacing
strcat with __strcat_chk.
- Ran the debugger on nexus10, nexus4, and old nexus7. Verified that the
backtrace is correct for all fortify check failures. Also verified that
when falling through from __memcpy_chk to memcpy the backtrace is
still correct. Also verified the same for __memset_chk and bzero.
Verified the two different paths in the cortex-a9 memset routine that
save variables to the stack still show the backtrace properly.
Bug: 9293744
(cherry-picked from 2be91915dc)
Change-Id: Ia407b74d3287d0b6af0139a90b6eb3bfaebf2155
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset, instead allowing a simple fall
through to occur from the chk code into the body of the real
implementation.
Testing:
- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and
three calls to __memset_chk. First call dest_len is length + 1. Second
call dest_len is length. Third call dest_len is length - 1.
Verified that the first two calls pass, and the third fails. Examined
the logcat output on all nexus devices to verify that the fortify
error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
__memcpy_chk and large copies, the savings is relatively small (about 1%).
For small copies, the savings is large on cortex-a15/krait devices
(between 5% to 30%).
For cortex-a9 and small copies, the speed up is present, but relatively
small (about 3% to 5%).
For __memset_chk and large copies, the savings is also small (about 1%).
However, all processors show larger speed-ups on small copies (about 30% to
100%).
Bug: 9293744
Merge from internal master.
(cherry-picked from 7c860db074)
Change-Id: I916ad305e4001269460ca6ebd38aaa0be8ac7f52
This change pulls the memcpy code out into a new file so that
__strcpy_chk and __strcat_chk can use it with an include.
The new versions of the two chk functions use assembly versions
of strlen and memcpy to implement this check. This allows near
parity with the assembly versions of strcpy/strcat. It also means that
as memcpy implementations get faster, so do the chk functions.
Other included changes:
- Change all of the assembly labels to local labels. The other labels
confuse gdb and mess up backtracing.
- Add .cfi_startproc and .cfi_endproc directives so that gdb is not
confused when falling through from one function to another.
- Change all functions to use cfi directives since they are more powerful.
- Move the memcpy_chk fail code outside of the memcpy function definition
so that backtraces work properly.
- Preserve lr before the calls to __fortify_chk_fail so that the backtrace
actually works.
Testing:
- Ran the bionic unit tests. Verified all error messages in logs are set
correctly.
- Ran libc_test, replacing strcpy with __strcpy_chk and replacing
strcat with __strcat_chk.
- Ran the debugger on nexus10, nexus4, and old nexus7. Verified that the
backtrace is correct for all fortify check failures. Also verified that
when falling through from __memcpy_chk to memcpy the backtrace is
still correct. Also verified the same for __memset_chk and bzero.
Verified the two different paths in the cortex-a9 memset routine that
save variables to the stack still show the backtrace properly.
Bug: 9293744
Change-Id: Id5aec8c3cb14101d91bd125eaf3770c9c8aa3f57
(cherry picked from commit 2be91915dc)
Create one version of strcat/strcpy/strlen for cortex-a15/krait and another
version for cortex-a9.
Tested with the libc_test strcat/strcpy/strlen tests.
This includes new tests that verify that the src for strcat/strcpy does
not overread across page boundaries.
NOTE: The handling of unaligned strcpy (same code in strcat) could probably
be optimized further such that the src is read 64 bits at a time instead of
the partial reads occurring now.
strlen improves slightly since it was recently optimized.
Performance improvements for strcpy and strcat (using an empty dest string):
cortex-a9
- Small copies vary from about 5% to 20% as the size gets above 10 bytes.
- Copies >= 1024, about a 60% improvement.
- Unaligned copies, from about 40% improvement.
cortex-a15
- Most small copies exhibit a 100% improvement, a few copies only
improve by 20%.
- Copies >= 1024, about 150% improvement.
- Unaligned copies, about 100% improvement.
krait
- Most small copies vary widely, but show about a 20% improvement on
average; the performance gets better as sizes grow, hitting about a
100% improvement when copying 64 bytes of data.
- Copies >= 1024, about 100% improvement.
- When copying MBs of data, about 50% improvement.
- Unaligned copies, about 90% improvement.
As strcat destination strings get larger in size:
cortex-a9
- about 40% improvement for small dst strings (>= 32).
- about 250% improvement for dst strings >= 1024.
cortex-a15
- about 200% improvement for small dst strings (>=32).
- about 250% improvement for dst strings >= 1024.
krait
- about 25% improvement for small dst strings (>=32).
- about 100% improvement for dst strings >=1024.
Merge from internal master.
(cherry-picked from d119b7b6f4)
Change-Id: I296463b251ef9fab004ee4dded2793feca5b547a
Use the new __bionic_name_mem function to name malloc'd memory as
"libc_malloc" on kernels that support it.
Change-Id: I7235eae6918fa107010039b9ab8b7cb362212272
Only works on some kernels, and only on page-aligned regions of
anonymous memory. It will show up in /proc/pid/maps as
[anon:<name>] and in /proc/pid/smaps as Name: <name>.
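A hedged sketch of the underlying mechanism, assuming it is the Android
kernel's PR_SET_VMA prctl (the wrapper name is illustrative; the constants are
the Android ones and are missing from mainline kernels of this era, hence
"only works on some kernels"):
#include <stddef.h>
#include <sys/prctl.h>
#ifndef PR_SET_VMA
#define PR_SET_VMA           0x53564d41
#define PR_SET_VMA_ANON_NAME 0
#endif
/* Name a page-aligned anonymous mapping; the kernel then reports it in
 * /proc/pid/maps as [anon:<name>]. */
static int name_anonymous_region(void* addr, size_t len, const char* name) {
  return prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME, (unsigned long)addr, len,
               (unsigned long)name);
}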
Change-Id: If31667cf45ff41cc2a79a140ff68707526def80e
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset, instead allowing a simple fall
through to occur from the chk code into the body of the real
implementation.
Testing:
- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and
three calls to __memset_chk. First call dest_len is length + 1. Second
call dest_len is length. Third call dest_len is length - 1.
Verified that the first two calls pass, and the third fails. Examined
the logcat output on all nexus devices to verify that the fortify
error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
__memcpy_chk and large copies, the savings is relatively small (about 1%).
For small copies, the savings is large on cortex-a15/krait devices
(between 5% and 30%).
For cortex-a9 and small copies, the speed up is present, but relatively
small (about 3% to 5%).
For __memset_chk and large copies, the savings is also small (about 1%).
However, all processors show larger speed-ups on small copies (about 30% to
100%).
Bug: 9293744
Change-Id: I8926d59fe2673e36e8a27629e02a7b7059ebbc98