Issue:
The kernel will pad the entry->d_reclen in a getdents64 call to a
long-word boundary. For very long records, this could exceed the
size of a struct dirent. The size mismatch was causing paranoid
error-checking code in bionic to fail, resulting in an early "end"
when reading the dirent structures from the kernel buffer.
Test:
ls
mkdir abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstu
ls
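A minimal sketch, not the actual bionic code, of the kind of
record-sanity check involved; the function name and the exact bound
are illustrative, assuming the kernel pads each record up to the
next long-word boundary:

    #include <dirent.h>
    #include <stddef.h>

    static int record_looks_valid(const struct dirent *entry)
    {
        /* Too strict: rejects records the kernel legitimately padded
         * past sizeof(struct dirent). */
        /* return entry->d_reclen <= sizeof(struct dirent); */

        /* Allow for padding up to the next long-word boundary. */
        return entry->d_reclen <= sizeof(struct dirent) + sizeof(long) - 1;
    }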
Change-Id: I75d1f8e45e1655fdd7bac4a08a481d086f28073a
Author: Bruce Beare <bruce.j.beare@intel.com>
The posix_memalign(3) function is very similar to the traditional
memalign(3) function, but with better error reporting and a guarantee
that the memory it allocates can be freed. In bionic,
memalign(3)-allocated memory can be freed, so posix_memalign(3) is
just a wrapper around memalign(3).
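A sketch of such a wrapper, assuming (as stated above) that
memalign(3)-allocated memory may be passed to free(); the function is
renamed here to keep the example self-contained:

    #include <errno.h>
    #include <malloc.h>   /* memalign() */

    int posix_memalign_sketch(void **memptr, size_t alignment, size_t size)
    {
        /* POSIX: alignment must be a power of two and a multiple of
         * sizeof(void *). */
        if (alignment < sizeof(void *) ||
            (alignment & (alignment - 1)) != 0) {
            return EINVAL;
        }
        void *ptr = memalign(alignment, size);
        if (ptr == NULL) {
            return ENOMEM;
        }
        *memptr = ptr;
        return 0;
    }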
Change-Id: I62ee908aa5ba6b887d8446a00d8298d080a6a299
This patch uses the new hardware feature macros for x86 to define
various compile-time macros that make the C library use SSE2- and/or
SSSE3-optimized memory functions for target CPUs that support these
features.
Note that previously, we relied on the macros being defined by
build/core/combo/TARGET_linux-x86.mk, but this is no longer the
case.
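An illustrative sketch of this kind of compile-time dispatch; the
macro and symbol names below are assumptions, not the exact ones
introduced by this patch:

    #if defined(ARCH_X86_HAVE_SSSE3)       /* assumed feature macro */
    #  define MEMCPY_IMPL memcpy_ssse3     /* hypothetical optimized symbol */
    #elif defined(ARCH_X86_HAVE_SSE2)      /* assumed feature macro */
    #  define MEMCPY_IMPL memcpy_sse2
    #else
    #  define MEMCPY_IMPL memcpy_generic
    #endif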
Change-Id: Ieae5ff5284c0c839bc920953fb6b91d2f2633afc
This works by building a directed graph of acquired pthread mutexes
and making sure there are no loops in that graph.
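A minimal sketch of the cycle check on such a lock-order graph; the
types, sizes and names here are illustrative, not bionic's actual
pthread debugging code:

    #include <stdbool.h>

    #define MAX_MUTEXES 64

    typedef struct {
        /* edge[a][b] means mutex a was held while acquiring mutex b. */
        bool edge[MAX_MUTEXES][MAX_MUTEXES];
    } lock_graph_t;

    /* Depth-first search: is 'target' reachable from 'from'? */
    static bool reachable(const lock_graph_t *g, int from, int target,
                          bool *visited)
    {
        if (from == target)
            return true;
        if (visited[from])
            return false;
        visited[from] = true;
        for (int next = 0; next < MAX_MUTEXES; ++next) {
            if (g->edge[from][next] && reachable(g, next, target, visited))
                return true;
        }
        return false;
    }

    /* Adding the edge held -> acquiring closes a loop (a potential
     * deadlock) if 'held' is already reachable from 'acquiring'. */
    static bool would_deadlock(const lock_graph_t *g, int held, int acquiring)
    {
        bool visited[MAX_MUTEXES] = { false };
        return reachable(g, acquiring, held, visited);
    }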
This feature is enabled with:
setprop debug.libc.pthread 1
When a potential deadlock is detected, a large warning is output to
the log with appropriate backtraces.
Currently disabled at compile time. Set PTHREAD_DEBUG_ENABLED=1 to
enable.
Change-Id: I916eed2319599e8aaf8f229d3f18a8ddbec3aa8a
This patch provides several small optimizations to the
implementation of mutex locking and unlocking. Note that a follow-up
patch will get rid of the global recursion lock and provide a few
more aggressive changes; I thought it'd be simpler to split this
change into two parts.
+ New behaviour: pthread_mutex_lock et al. now detect
recursive mutex overflows and will return EAGAIN in
this case, as suggested by POSIX. Before, the counter
would just wrap to 0. (See the sketch after this list.)
- Remove unnecessary reloads of the mutex value from memory
by storing it in a local variable (mvalue)
- Remove an unnecessary reload of the mutex value by passing
the 'shared' local variable to _normal_lock / _normal_unlock
- Remove an unnecessary reload of the mutex value by using a
new macro (MUTEX_VALUE_OWNER()) to compare the thread id
for recursive/errorcheck mutexes
- Use a common inlined function to increment the counter
of a recursive mutex. Also do not use the global
recursion lock in this case to speed it up.
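A minimal sketch of the overflow check described in the first item
above; the field and constant names are illustrative, not bionic's
actual mutex representation:

    #include <errno.h>
    #include <stdint.h>

    #define MUTEX_COUNTER_MAX 0xFFFF   /* assumed counter width */

    typedef struct {
        int owner_tid;      /* thread id of the current owner */
        uint32_t counter;   /* recursion depth, 0 on first acquisition */
    } recursive_mutex_sketch_t;

    static int recursive_increment(recursive_mutex_sketch_t *m)
    {
        /* Report overflow instead of silently wrapping the counter. */
        if (m->counter == MUTEX_COUNTER_MAX)
            return EAGAIN;
        m->counter += 1;
        return 0;
    }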
Change-Id: I106934ec3a8718f8f852ef547f3f0e9d9435c816
This patch changes the implementation of pthread_once()
to avoid the use of a single global recursive mutex. This
should also slightly speed up the uncommon case where
we have to call the init function, or wait for another
thread to finish the call.
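A minimal sketch of a pthread_once() built on an atomic state in the
once control itself rather than a global recursive mutex; the state
values and the use of C11 atomics (plus a yield loop instead of a
futex wait) are illustrative:

    #include <sched.h>
    #include <stdatomic.h>

    enum { ONCE_NEW = 0, ONCE_RUNNING = 1, ONCE_DONE = 2 };

    int pthread_once_sketch(atomic_int *once, void (*init_routine)(void))
    {
        int expected = ONCE_NEW;
        if (atomic_compare_exchange_strong(once, &expected, ONCE_RUNNING)) {
            init_routine();
            atomic_store(once, ONCE_DONE);
            return 0;
        }
        /* Another thread won the race: wait until the init function
         * has completed (a real implementation would block on a futex). */
        while (atomic_load(once) != ONCE_DONE)
            sched_yield();
        return 0;
    }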
Change-Id: I8a93f4386c56fb89b5d0eb716689c2ce43bdcad9
Fix infinite loops in ./bionic/libc/unistd/pathconf.c
Change-Id: I7a1e6bcd9879c96bacfd376b88a1f899793295c8
Author: Jin Wei <wei.a.jin@intel.com>
Signed-off-by: Bruce Beare <bruce.j.beare@intel.com>
When forking a new process in bionic, it is critical that it does
not allocate any memory, according to the comment in
java_lang_ProcessManager.c:
"Note: We cannot malloc() or free() after this point!
A no-longer-running thread may be holding on to the heap lock, and
an attempt to malloc() or free() would result in deadlock."
However, tracing fork a bit shows that it uses standard library
calls, which might allocate memory and thus cause the deadlock.
This is a rewrite so that cpuacct_add, the function fork calls, uses
system calls instead of standard library calls.
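A sketch of the idea: write the accounting record with raw system
calls (open/write/close) instead of stdio, so nothing on the fork
path can touch the heap. The path and record format below are
hypothetical, not the actual cpuacct_add() implementation:

    #include <fcntl.h>
    #include <stdio.h>       /* snprintf only; it does not allocate */
    #include <sys/types.h>
    #include <unistd.h>

    static int cpuacct_add_sketch(uid_t uid)
    {
        char path[64];
        char buf[16];

        /* Hypothetical file layout. */
        snprintf(path, sizeof(path), "/acct/uid_%u/tasks", (unsigned)uid);

        int fd = open(path, O_WRONLY);   /* open(2), not fopen(3): no malloc */
        if (fd < 0)
            return -1;

        int len = snprintf(buf, sizeof(buf), "%d\n", (int)getpid());
        int rc = (write(fd, buf, (size_t)len) == (ssize_t)len) ? 0 : -1;
        close(fd);
        return rc;
    }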
Signed-off-by: christian bejram <christian.bejram@stericsson.com>
Change-Id: Iff22ea6b424ce9f9bf0ac8e9c76593f689e0cc86
Pass the kernel-space sigset_t size to __rt_sigprocmask to work
around the mismatch between the kernel's and bionic's NSIG/sigset_t
definitions.
Note: Patch originally from Google...
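A sketch of the idea using the generic syscall(2) interface; the
assumed kernel sigset size (kernel _NSIG == 64, i.e. 8 bytes) and
the copy-through-buffers approach are illustrative, not the actual
bionic change:

    #include <signal.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Assumed: the kernel expects sigsetsize == _NSIG / 8 == 8 bytes. */
    #define KERNEL_SIGSET_SIZE 8

    int sigprocmask_sketch(int how, const sigset_t *set, sigset_t *oldset)
    {
        unsigned char kset[KERNEL_SIGSET_SIZE] = { 0 };
        unsigned char koldset[KERNEL_SIGSET_SIZE] = { 0 };
        size_t copy = sizeof(sigset_t) < KERNEL_SIGSET_SIZE
                          ? sizeof(sigset_t) : KERNEL_SIGSET_SIZE;

        if (set != NULL)
            memcpy(kset, set, copy);

        /* The key point: pass the kernel-space size, not sizeof(sigset_t). */
        long rc = syscall(SYS_rt_sigprocmask, how,
                          set ? (void *)kset : NULL,
                          oldset ? (void *)koldset : NULL,
                          (size_t)KERNEL_SIGSET_SIZE);
        if (rc == 0 && oldset != NULL)
            memcpy(oldset, koldset, copy);
        return (int)rc;
    }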
Change-Id: I4840fdc56d0b90d7ce2334250f04a84caffcba2a
Signed-off-by: Chenyang Du <chenyang.du@intel.com>
Signed-off-by: Bruce Beare <bruce.j.beare@intel.com>
Chars are signed on x86 -- correct the comparison semantics.
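An illustrative example of this class of bug, not the specific code
touched by this patch: with a signed char (the default on x86), byte
values >= 0x80 compare as negative, so comparisons must go through
unsigned char:

    /* Wrong when char is signed: (c >= 0x80) is never true. */
    /* static int is_high_byte(char c) { return c >= 0x80; } */

    /* Correct: compare through unsigned char. */
    static int is_high_byte(char c)
    {
        return (unsigned char)c >= 0x80;
    }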
Change-Id: I2049e98eb063c0b4e83ea973d3fcae49c6817dde
Author: Liubov Dmitrieva <liubov.dmitrieva@intel.com>
Signed-off-by: Bruce Beare <bruce.j.beare@intel.com>
Fix a compile warning so that libc.debug.malloc=10 works correctly.
An unsuitable value comparison caused the compiler to optimize away
the code comparing the two values.
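An illustrative example only, since the commit text does not name
the exact comparison: the general pattern is a check the compiler
warns about and removes as always true, leaving the debug path
ineffective:

    static int level_in_range(unsigned int level)
    {
        /* Always true for an unsigned value, so the compiler warns
         * and drops it: */
        /* return level >= 0 && level <= 10; */

        /* Keep only the meaningful half of the check. */
        return level <= 10;
    }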
Change-Id: I0bedd596c9ca2ba308fb008da20ecb328d8548f5
Signed-off-by: Bruce Beare <bruce.j.beare@intel.com>
Author: liu chuansheng <chuansheng.liu@intel.com>