Cleanup thp_init, changing it so that the DEFAULT_THP_PAGESIZE
setting can be overridden with the glibc.malloc.hugetlb=0 tunable.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Now that the fastbins are gone, set the default per-size-class length
of the tcache to 16. We observe that doing this retains the original
performance of malloc.
Reviewed-by: DJ Delorie <dj@redhat.com>
do_check_remalloced_chunk checks properties of fastbin chunks, but it
is also used to check properties of other chunks. Hence, remove it and
merge the body of the function into do_check_malloced_chunk.
Reviewed-by: DJ Delorie <dj@redhat.com>
Linux supports multi-sized Transparent Huge Pages (mTHP). For the
purpose of this patch description, we call the block size mapped by a
non-last-level pagetable entry the traditional THP size (2M for a 4K
basepage, 512M for a 64K basepage). Linux now also supports
intermediate THP sizes mapped by the last pagetable level - we call
that the mTHP size.
Support for mTHP in Linux has matured and stabilized over time -
applications can benefit from reduced page faults and reduced kernel
memory management overhead, albeit at the cost of internal
fragmentation.
We have observed consistent performance boosts with mTHP with little
variance.
As a result, enable 2M THP by default on AArch64. This enables THP
even if the user hasn't passed glibc.malloc.hugetlb=1. If the user has
passed it, we avoid making the system call to check the hugepage size
from sysfs and override it with the hardcoded 2MB.
There are two additional benefits of this patch, if the transparent
hugepage sysctl is set to madvise or always:
1) The THP size is now hardcoded to 2MB for AArch64. This avoids a
syscall for fetching the THP size from sysfs.
2) On 64K basepage size systems, the traditional THP size is 512M,
which is unusable and impractical. We can instead benefit from the
mTHP size of 2M. Apart from the usual benefits of THPs/mTHPs described
above, AArch64 systems see reduced TLB pressure at this mTHP size,
commonly known as the "contpte" size. If the application takes a
pagefault, and either the THP sysctl setting is "always", or the
virtual memory area has been madvise(MADV_HUGEPAGE)'d with the sysctl
set to "madvise", then Linux will fault in a 2M mTHP, mapping
contiguous pages into the pagetable and painting the pagetable entries
with the cont-bit. This bit is a hint to the hardware that the
pagetable entry maps a page which is part of a set of contiguous pages
- the TLB then only remembers a single entry for this set of
2M/64K = 32 pages, because the physical address of any other page in
the set can be computed from the TLB-cached physical address via a
linear offset. Hence, what was only possible with the traditional THP
size is now possible with the mTHP size.
We see a 6.25% performance improvement on SPEC.
If the sysctl is set to never, no transparent hugepages will be created
by the kernel. But this patch still sets thp_pagesize = 2MB. The
benefit is that on MORECORE() invocation, we extend the heap by 2MB
instead of 4KB, potentially reducing the frequency of this syscall's
invocation by 512x. Note that there is no difference in cost between an
sbrk(2M) and an sbrk(4K); the kernel only does a virtual reservation
and does not touch user physical memory.
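As a rough standalone illustration of that granularity (a sketch
assuming a 2M thp_pagesize; it is not the MORECORE code itself):

  #define _DEFAULT_SOURCE
  #include <unistd.h>

  /* With thp_pagesize = 2M, a single break extension covers what would
     otherwise take 512 page-sized extensions.  The kernel only adjusts
     the break; no physical memory is touched until the new pages are
     faulted in.  */
  #define THP_PAGESIZE (2UL * 1024 * 1024)

  static void *
  grow_heap_by_thp (void)
  {
    return sbrk (THP_PAGESIZE);   /* vs. 512 separate sbrk (4096) calls.  */
  }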
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Currently, if the initial program break is not aligned to the system page
size, then we align the pointer down to the page size. If there is a gap
before the heap VMA, then such an adjustment means that the madvise() range
now contains a gap. The behaviour in the upstream kernel is currently this:
madvise() will return -ENOMEM, even though the operation will still succeed
in the sense that the VM_HUGEPAGE flag will be set on the heap VMA. We
*must not* depend on this behaviour - it is an internal kernel
implementation detail, and earlier kernels may abort the operation
altogether.
The other case is that there is no gap, and as a result we may end up
setting the VM_HUGEPAGE flag on the adjacent VMA too, which is an
unnecessary side effect.
Let us fix this by aligning the pointer up to the page size. We must
also subtract the alignment adjustment from the size: since the
pointer is now aligned up, an unadjusted size may extend past the heap
VMA, leading to the same problem at the other end.
There is no need to check this new size against mp_.thp_pagesize to decide
whether to make the madvise() call. The reason we make this check at the
start of madvise_thp() is to check whether the size of the VMA is enough
to map THPs into it. Since that check has passed, all that we need to
ensure now is that q + size does not cross the heap VMA.
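A minimal sketch of the adjustment, with an illustrative helper name
rather than the actual glibc code:

  #define _DEFAULT_SOURCE
  #include <stdint.h>
  #include <sys/mman.h>

  static void
  advise_thp_range (void *q, size_t size, size_t psize)
  {
    uintptr_t start = (uintptr_t) q;
    /* Align the start up so the range cannot reach into a gap or a
       neighbouring VMA below the heap...  */
    uintptr_t aligned = (start + psize - 1) & ~(psize - 1);
    /* ...and shrink the size by the same amount so the end of the
       range does not move past the heap VMA.  */
    size -= aligned - start;
    (void) madvise ((void *) aligned, size, MADV_HUGEPAGE);
  }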
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
The allocation_index was being incremented before checking if mmap()
succeeds. If mmap() fails, allocation_index would still be incremented,
creating a gap in the allocations tracking array and making
allocation_index inconsistent with the actual number of successful
allocations.
This fix moves the allocation_index increment to after the mmap()
success check, ensuring it only increments when an allocation actually
succeeds. This maintains proper tracking for leak detection and
prevents gaps in the allocations array.
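The pattern after the fix looks roughly like this; allocation_index
and allocations are the names from the description, while the
surrounding code is a made-up standalone illustration:

  #define _DEFAULT_SOURCE
  #include <stddef.h>
  #include <sys/mman.h>

  #define MAX_ALLOCATIONS 1024

  static void *allocations[MAX_ALLOCATIONS];
  static size_t allocation_index;

  static void *
  tracked_mmap (size_t size)
  {
    if (allocation_index >= MAX_ALLOCATIONS)
      return NULL;
    void *p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
      return NULL;                        /* No slot consumed on failure.  */
    allocations[allocation_index++] = p;  /* Increment only on success.  */
    return p;
  }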
Signed-off-by: Osama Abdelkader <osama.abdelkader@gmail.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Single-threaded malloc tests exercise only the SINGLE_THREAD_P paths in
the malloc implementation. This commit runs variants of these tests in
a multi-threaded environment in order to exercise the alternate code
paths in the same test scenarios, thus potentially improving coverage.
$(test)-threaded-main and $(test)-threaded-worker variants are
introduced for most single-threaded malloc tests (with a small number of
exceptions). The -main variants run the base test in a main thread
while the test environment has an alternate thread running, whereas the
-worker variants run the test in an alternate thread while the main
thread waits on it.
The tests themselves are unmodified, and the change is accomplished by
using -DTEST_IN_THREAD at compile time, which instructs support/
infrastructure to run the test while an alternate thread waits on it.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Directly call _int_free_chunk during tcache shutdown to avoid recursion.
Calling __libc_free on a block from tcache gets flagged as a double free,
and tcache_double_free_verify checks every tcache chunk (quadratic
overhead).
Reviewed-by: Arjun Shankar <arjun@redhat.com>
The Linux-specific test case in tst-free-errno was backing up malloc
metadata for a large mmap'd block, overwriting the block with its own
mmap, then restoring the malloc metadata and calling free to force an
munmap failure. However, the backed-up pages containing metadata can
occasionally be overlapped by the overwriting mmap, leading to
metadata corruption.
This commit replaces the Linux-specific test case with a simpler,
generic, three-block allocation, expecting the kernel to coalesce the
VMAs, and then causes fragmentation to trigger the same failure.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
clang warns about missing fall-through by default, and it does not
support all of the comment-like annotations that gcc does. Use the
proper C23 [[fallthrough]] attribute instead.
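For reference, a small standalone example of the attribute form
(made-up names, compiled in C23 mode; not taken from the patch):

  enum op { OP_ADD, OP_STORE };

  static void
  apply (enum op op, int *out, int x, int y)
  {
    switch (op)
      {
      case OP_ADD:
        x += y;
        [[fallthrough]];   /* Replaces a gcc-only comment annotation.  */
      case OP_STORE:
        *out = x;
        break;
      }
  }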
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
The tcache is used for allocation only if an exact match is found. In the
large tcache code added in commit cbfd798810, we currently extract a
chunk of size greater than or equal to the size we need, but don't check
strict equality. This patch fixes that behaviour.
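A sketch of the intended behaviour, using made-up types rather than
the glibc data structures:

  #include <stddef.h>

  struct chunk { size_t size; struct chunk *next; };
  struct bin { struct chunk *head; };

  /* Return a cached chunk only on an exact size match; a merely
     large-enough chunk stays in the bin.  */
  static struct chunk *
  bin_get_exact (struct bin *bin, size_t nb)
  {
    struct chunk *c = bin->head;
    if (c != NULL && c->size == nb)   /* Strict equality, not >=.  */
      {
        bin->head = c->next;
        return c;
      }
    return NULL;
  }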
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Avoid needing to check for tcache == NULL by initializing it
to a dummy read-only tcache structure. This dummy is all zeros,
so logically it is both full (when you want to put) and empty (when
you want to get). Also, there are two dummies, one used for
"not yet initialized" and one for "tunables say we shouldn't have
a tcache".
The net result is twofold:
1. Checks for tcache == NULL may be removed from the fast path.
Whether this makes the fast path faster when tcache is
disabled is TBD, but the normal case is tcache enabled.
2. No memory is allocated for the tcache if tunables disable caching.
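A sketch of the idea with a hypothetical bin layout (not the actual
tcache structure):

  #include <stdbool.h>
  #include <stddef.h>

  #define TCACHE_BINS 64

  typedef struct
  {
    unsigned short count[TCACHE_BINS];       /* Chunks currently cached.  */
    unsigned short free_slots[TCACHE_BINS];  /* Room left in each bin.  */
  } tcache_sketch;

  /* All-zero, read-only dummies: zero count means "nothing to get" and
     zero free_slots means "no room to put", so neither fast path needs
     a NULL check.  */
  static const tcache_sketch tcache_uninitialized;  /* Not yet set up.  */
  static const tcache_sketch tcache_disabled;       /* Tunables say no tcache.  */

  static bool
  can_get (const tcache_sketch *tc, size_t idx)
  {
    return tc->count[idx] != 0;
  }

  static bool
  can_put (const tcache_sketch *tc, size_t idx)
  {
    return tc->free_slots[idx] != 0;
  }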
Co-authored-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Linux handles virtual memory in Virtual Memory Areas (VMAs). The
madvise(MADV_HUGEPAGE) call works at VMA granularity, setting the
VM_HUGEPAGE flag on the VMA. This flag is unaffected by the mprotect()
syscall used to grow the secondary heaps. Therefore, we need to call
madvise() only when we are sure that VM_HUGEPAGE was not previously
set, which is only the case when h->size < mp_.thp_pagesize.
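A sketch of that growth path, with illustrative names:

  #define _DEFAULT_SOURCE
  #include <stddef.h>
  #include <sys/mman.h>

  /* madvise is needed only when the heap was previously smaller than
     the THP page size, i.e. the VMA cannot already carry VM_HUGEPAGE.  */
  static void
  heap_grow_advise (void *heap_start, size_t old_size, size_t new_size,
                    size_t thp_pagesize)
  {
    if (thp_pagesize != 0 && old_size < thp_pagesize)
      madvise (heap_start, new_size, MADV_HUGEPAGE);
  }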
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
clang does not support the __builtin_*_overflow_p builtins; on gcc,
the macros will call __builtin_*_overflow_p.
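A standalone illustration of the pattern; the guard and helper name
are assumptions, not the glibc macro:

  #include <stdbool.h>
  #include <stddef.h>

  /* __builtin_mul_overflow_p is gcc-only; __builtin_mul_overflow is
     supported by both gcc and clang and can serve as a replacement
     (the result operand is simply discarded).  */
  static bool
  mul_would_overflow (size_t a, size_t b)
  {
  #if defined __GNUC__ && !defined __clang__
    return __builtin_mul_overflow_p (a, b, (size_t) 0);
  #else
    size_t unused;
    return __builtin_mul_overflow (a, b, &unused);
  #endif
  }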
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
Cleanup _int_memalign. Simplify the logic. Add a separate check
for mmap. Only release the tail chunk if it is at least MINSIZE.
Use the new mmap abstractions.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Linux handles virtual memory in Virtual Memory Areas (VMAs). The
madvise(MADV_HUGEPAGE) call works at VMA granularity, setting the
VM_HUGEPAGE flag on the VMA. If this VMA or a portion of it is
mremapped
to a different location, Linux will create a new VMA, which will have
the same flags as the old one. This implies that the VM_HUGEPAGE flag
will be retained. Therefore, if we can guarantee that the old VMA was
marked with VM_HUGEPAGE, then there is no need to call madvise_thp() in
mremap_chunk().
The old chunk comes from a heap or non-heap allocation, both of which
have already been enlightened for THP. This implies that, if THP is on,
and the size of the old chunk is greater than or equal to
thp_pagesize, the VMA to which this chunk belongs has the VM_HUGEPAGE
flag set. Hence, in this case we can avoid invoking the madvise()
syscall.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Add mmap_set_chunk() to create a new chunk from an mmap block.
Remove set_mmap_is_hp() since it is done inside mmap_set_chunk().
Rename prev_size_mmap() to mmap_base_offset(). Cleanup comments.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Remove the odd atomic_forced_read which is neither atomic nor forced.
Some uses are completely redundant, so simply remove them. In other cases
the intended use is to force a memory ordering, so use acquire load for those.
In yet other cases their purpose is unclear, for example __nscd_cache_search
appears to allow concurrent accesses to the cache while it is being garbage
collected by another thread! Use relaxed atomic loads here to block spills
from accidentally reloading memory that is being changed.
Refactor malloc.c to remove dead code, create macros to abstract duplicated
code, and cleanup sysmalloc_mmap_fallback to remove logic not related to the
mmap call.
Change the return type of mmap_base to uintptr_t since this allows using
operations on the return value, and avoids casting in both calls in
mremap_chunk and munmap_chunk.
Cleanup sysmalloc_mmap_fallback. Remove the unused parameters nb,
oldsize and av. Remove the redundant overflow check and instead use
size_t for all parameters except extra_flags to prevent overflows.
Move logic not concerned with the mmap call itself out of the function
and run it after both calls to sysmalloc_mmap_fallback are made; that
is, the code that names the VMA and marks the arena being extended as
non-contiguous moves to the calling code and is only executed when the
mmap is successful. Calculate the fallback size from nb to avoid
modifying size after it has been set for MORECORE. Remove the unused
noncontiguous macro.
Remove redundant assert for checking unreachable option for global_max_fast.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Remove support for obsolete dumped heaps. Dumping heaps was discontinued
8 years ago, however loading a dumped heap is still supported. This blocks
changes and improvements of the malloc data structures - hence it is time
to remove this. Ancient binaries that still call malloc_set_state will now
get the -1 error code. Update tst-mallocstate.c to just check for this.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
We currently unlock the arena mutex in arena_get_retry() unconditionally.
Therefore, hoist the unlock out of the if-else block.
Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
Minor cleanup of libc_realloc: remove unnecessary special cases for
mmap, move the ar_ptr initialization, and check for oldmem == NULL
first.
Reviewed-by: DJ Delorie <dj@redhat.com>
Remove all unused atomics. Replace uses of catomic_increment and
catomic_decrement with atomic_fetch_add_relaxed which maps to a standard
compiler builtin. Relaxed memory ordering is correct for simple counters
since they only need atomicity.
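For reference, the equivalent standalone C11 pattern (the counter
name is made up):

  #include <stdatomic.h>

  static atomic_size_t n_mmaps;

  /* A simple event counter only needs atomicity, not ordering, so a
     relaxed fetch-add - what glibc's atomic_fetch_add_relaxed expands
     to - is sufficient.  */
  static void
  count_mmap (void)
  {
    atomic_fetch_add_explicit (&n_mmaps, 1, memory_order_relaxed);
  }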
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
I have not checked with all versions for all ABIs, but I saw failures
with gcc-14 on arm, alpha, hppa, i686, sparc, sh4, and microblaze.
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
Add mremap_chunk support for mmap()ed chunks that use hugepages by
accounting for their alignment, preventing the mremap call from
failing in most cases where the size passed is not a multiple of the
hugepage size. This also makes reallocation of hugepage-backed memory
more robust: since mremap is much less likely to fail, the fallback
path of allocating a larger block and copying the old contents - which
can run out of memory - is needed less often.
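One plausible reading of "accounting for their alignment", shown here
as a sketch under that assumption rather than the actual patch, is
rounding the requested length up to a hugepage multiple before calling
mremap:

  #define _GNU_SOURCE
  #include <stddef.h>
  #include <sys/mman.h>

  /* When the chunk's mapping is hugepage-backed, ask mremap for a
     hugepage-multiple length so the kernel is not forced to split the
     mapping.  */
  static void *
  remap_hp (void *base, size_t old_len, size_t new_len, size_t hp_size)
  {
    size_t aligned_len = (new_len + hp_size - 1) & ~(hp_size - 1);
    return mremap (base, old_len, aligned_len, MREMAP_MAYMOVE);
  }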
To track whether an mmap()ed chunk uses hugepages, have a flag in the lowest
bit of the mchunk_prev_size field which is set after a call to sysmalloc_mmap,
and accessed later in mremap_chunk. Create macros for getting and setting this
bit, and for masking the bit off when accessing the field for mmap()ed chunks.
Since the alignment cannot be lower than 8 bytes, this flag cannot affect the
alignment data.
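A hypothetical sketch of such macros; the names and exact encoding
are assumptions, not the patch itself:

  #define MMAP_HP_BIT 0x1UL

  /* Chunk alignment keeps the low bits of the stored offset free, so
     bit 0 can record "this mmap'd chunk is hugepage-backed" without
     disturbing the offset value.  */
  #define set_mmap_hp(field)   ((field) |= MMAP_HP_BIT)
  #define mmap_is_hp(field)    (((field) & MMAP_HP_BIT) != 0)
  #define mmap_offset(field)   ((field) & ~MMAP_HP_BIT)  /* Mask the bit off.  */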
Add malloc/tst-tcfree4-malloc-check to the tests-exclude-malloc-check list as
malloc-check prevents the tcache from being used to store chunks. This
test caused failures due to a bug in mem2chunk_check that will be
fixed in a later patch.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Change the mmap chunk layout to be identical to a normal chunk. This makes it
safe for tcache to hold mmap chunks and simplifies size calculations in
memsize and musable. Add mmap_base() and mmap_size() macros to simplify code.
Reviewed-by: Cupertino Miranda <cupertino.miranda@oracle.com>
When transparent hugepages (THP) are configured to 32MB on x86/loongarch
systems, the current big_size value may not be sufficiently large to
guarantee that free(ptr) [1] will call munmap(ptr_aligned, big_size).
Tested on x86_64 and loongarch64.
PS: Without this patch and using 32M THP, there is about a 50% chance
that malloc/tst-free-errno-malloc-hugetlb1 will fail on both x86_64
and loongarch64.
[1] malloc/tst-free-errno.c:
  ...
  errno = 1789;
  /* This call to free() is supposed to call
       munmap (ptr_aligned, big_size);
     which increases the number of VMAs by 1, which is supposed
     to fail.  */
-> free (ptr);
  TEST_VERIFY (get_errno () == 1789);
  }
  ...
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
We want tcache_key not to be a commonly-occurring value in memory, so
ensure it has a minimum number of both one and zero bits.
We also need it to be non-zero: if it were zero, then even though
tcache_double_free_verify sets e->key to 0 before calling __libc_free,
the key would still match and tcache_double_free_verify would be
called again by __libc_free, looping indefinitely.
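A sketch of the constraint only; the thresholds and the use of
getrandom are assumptions, not the glibc code:

  #define _GNU_SOURCE
  #include <stdint.h>
  #include <sys/random.h>

  /* Draw random keys until the value has a healthy mix of one and zero
     bits, which also guarantees it is non-zero.  */
  static uintptr_t
  make_key (void)
  {
    for (;;)
      {
        uintptr_t key = 0;
        if (getrandom (&key, sizeof key, 0) != (ssize_t) sizeof key)
          continue;
        int ones = __builtin_popcountll ((unsigned long long) key);
        int bits = (int) (8 * sizeof key);
        if (ones >= bits / 4 && ones <= bits - bits / 4)
          return key;   /* Enough ones and enough zeros.  */
      }
  }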
Fixes: c968fe5062 ("malloc: Use tailcalls in __libc_free")
MALLOC_DEBUG only works on locked arenas, so move the call to
check_inuse_chunk from __libc_free() to _int_free_chunk().
The regression tests now pass if MALLOC_DEBUG is enabled.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Arenas support huge pages but not transparent huge pages. Add this by
also checking mp_.thp_pagesize when creating a new arena and using
madvise.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Remove an odd use of __curbrk and use MORECORE (0) instead.
This fixes the Hurd build, since Hurd does not define this symbol.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>