1118 Commits

Wilco Dijkstra
1c588a2187 malloc: Improve thp_init
Clean up thp_init and change it so that the DEFAULT_THP_PAGESIZE
setting can be overridden with the glibc.malloc.hugetlb=0 tunable.
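
The tunable is set through the GLIBC_TUNABLES environment variable; an
illustrative invocation:

    GLIBC_TUNABLES=glibc.malloc.hugetlb=0 ./app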

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-12-17 19:11:01 +00:00
Dev Jain
0b9210bd76 malloc: set default tcache fill count to 16
Now that the fastbins are gone, set the default per-size-class length of
the tcache to 16. We observe that this retains the original performance
of malloc.
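
The per-bin count remains overridable at run time via the existing
glibc.malloc.tcache_count tunable, e.g. (illustrative value):

    GLIBC_TUNABLES=glibc.malloc.tcache_count=32 ./app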

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-12-17 15:32:53 +00:00
Dev Jain
dacc2ade92 malloc: Remove fastbin comments
Now that all the fastbin code is gone, remove the remaining comments
referencing fastbins.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-12-17 15:32:53 +00:00
Dev Jain
bb5a4f5295 malloc: Remove fastbin infrastructure
Now that all users of the fastbin code are gone, remove the fastbin
infrastructure.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-12-17 15:32:53 +00:00
Dev Jain
73245de202 malloc: Remove do_check_remalloced_chunk
do_check_remalloced_chunk checks properties of fastbin chunks, but it is
also used to check properties of other chunks. Hence, remove it and merge
the body of the function into do_check_malloced_chunk.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-12-17 15:32:53 +00:00
Dev Jain
7447efa962 malloc: remove fastbin code from malloc_info
In preparation for removal of fastbins, remove all fastbin code from
malloc_info.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-12-17 15:32:53 +00:00
Dev Jain
433ee9c02f malloc: remove fastbin code from do_check_malloc_state
In preparation for removal of fastbins, remove all fastbin code from
do_check_malloc_state.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-12-17 15:32:53 +00:00
Dev Jain
80ee32910f malloc: remove mallopt fastbin stats
In preparation for removal of fastbins, remove all fastbin code from
mallopt.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-12-17 15:32:53 +00:00
Dev Jain
bf1015fb2d malloc: remove allocation from fastbin, and trim_fastbins
In preparation for removal of fastbins, remove the fastbin allocation
path, and remove the TRIM_FASTBINS code.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-12-17 15:32:53 +00:00
Dev Jain
e3062b06c5 malloc: remove malloc_consolidate
In preparation for removal of fastbins, remove the consolidation
infrastructure of fastbins.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-12-17 15:32:53 +00:00
Dev Jain
7632ba6018 malloc: remove fastbin tests
Remove all the fastbin tests in preparation for removing the fastbins.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-12-17 15:32:53 +00:00
Dev Jain
321e1fc73f malloc: Enable 2MB THP by default on AArch64
Linux supports multi-sized Transparent Huge Pages (mTHP). For the purpose
of this patch description, we call the block size mapped by a non-last
pagetable level the traditional THP size (2M for a 4K basepage, 512M for
a 64K basepage). Linux now also supports intermediate THP sizes mapped by
the last pagetable level - we call that the mTHP size.

The support for mTHP in Linux has become better and more stable over time -
applications can benefit from reduced page faults and reduced kernel
memory management overhead, albeit at the cost of internal fragmentation.
We have observed consistent performance boosts with mTHP with little
variance.

As a result, enable 2M THP by default on AArch64. This enables THP even if
the user hasn't passed glibc.malloc.hugetlb=1. If the user has passed it, we
avoid making the system call to check the hugepage size from sysfs, and
override it with the hardcoded 2MB.

There are two additional benefits of this patch, if the transparent
hugepage sysctl is set to madvise or always:

1) The THP size is now hardcoded to 2MB for AArch64, which avoids a
syscall to fetch the THP size from sysfs.

2) On 64K basepage size systems, the traditional THP size is 512M, which
is unusable and impractical. We can instead benefit from the mTHP size of
2M. Apart from the usual benefits of THPs/mTHPs described above, AArch64
systems see reduced TLB pressure at this mTHP size, commonly known as the
"contpte" size. If the application takes a pagefault, and either the THP
sysctl setting is "always", or the virtual memory area has been
madvise(MADV_HUGEPAGE)'d with the sysctl set to "madvise", then Linux
will fault in a 2M mTHP, mapping contiguous pages into the pagetable and
painting the pagetable entries with the cont-bit. This bit is a hint to
the hardware that the pagetable entry in question maps a page which is
part of a set of contiguous pages - the TLB then remembers only a single
entry for this set of 2M/64K = 32 pages, because the physical address of
any other page in the set is computable from the TLB-cached physical
address via a linear offset. Hence, what was only possible with the
traditional THP size is now possible with the mTHP size.

We see a 6.25% performance improvement on SPEC.

If the sysctl is set to never, the kernel will create no transparent
hugepages. But this patch still sets thp_pagesize = 2MB. The benefit is
that on a MORECORE() invocation we extend the heap by 2MB instead of 4KB,
potentially reducing the frequency of this syscall by 512x. Note that
there is no difference in cost between an sbrk(2M) and an sbrk(4K); the
kernel only makes a virtual reservation and does not touch user physical
memory.
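
A rough C sketch of the MORECORE effect (illustrative, not the actual
glibc code; error handling elided):

    #include <sys/mman.h>
    #include <unistd.h>

    /* Grow the heap in 2M steps instead of 4K steps, then advise the
       kernel that the new range may use (m)THP.  */
    static void *
    grow_heap_2m (void)
    {
      size_t thp_pagesize = 2 * 1024 * 1024;
      void *p = sbrk (thp_pagesize);  /* one call instead of 512 sbrk(4K)s */
      if (p != (void *) -1)
        madvise (p, thp_pagesize, MADV_HUGEPAGE);
      return p;
    }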

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-12-10 12:18:16 +00:00
Dev Jain
26e6e4d51e malloc: Do not make out-of-bounds madvise call on non-aligned heap
Currently, if the initial program break is not aligned to the system page
size, then we align the pointer down to the page size. If there is a gap
before the heap VMA, then such an adjustment means that the madvise() range
now contains a gap. The current behaviour in the upstream kernel is this:
madvise() will return -ENOMEM, even though the operation still succeeds in
the sense that the VM_HUGEPAGE flag is set on the heap VMA. We *must not*
depend on this behaviour - it is an internal kernel implementation detail,
and earlier kernels may abort the operation altogether.

The other case is that there is no gap, and as a result we may end up
setting the VM_HUGEPAGE flag on that other VMA too, which is an
unnecessary side effect.

Let us fix this by aligning the pointer up to the page size. We should
also subtract the pointer difference from the size: since the pointer is
now aligned up, the size may otherwise cross the heap VMA, leading to the
same problem at the other end.

There is no need to check this new size against mp_.thp_pagesize to decide
whether to make the madvise() call. The reason we make this check at the
start of madvise_thp() is to check whether the size of the VMA is enough
to map THPs into it. Since that check has passed, all that we need to
ensure now is that q + size does not cross the heap VMA.
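
A sketch of the resulting computation in madvise_thp() (identifier names
illustrative, following the description above; ALIGN_UP is glibc's
round-up macro):

    /* Align the start up, and shrink the length by the distance moved,
       so that [q, q + size) stays inside the heap VMA.  */
    uintptr_t q = ALIGN_UP ((uintptr_t) p, pagesize);
    size -= q - (uintptr_t) p;
    madvise ((void *) q, size, MADV_HUGEPAGE);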

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-12-10 12:18:16 +00:00
Adhemerval Zanella
d89e3a77c4 malloc: Extend malloc function hiding to tst-reallocarray (BZ #32366)
clang 20 optimizes out reallocarray.

Reviewed-by: Sam James <sam@gentoo.org>
2025-12-09 11:14:14 -03:00
Adhemerval Zanella
54516bb385 malloc: Extend malloc function hiding to tst-pvalloc (BZ #32366)
clang 21 optimizes out pvalloc.

Reviewed-by: Sam James <sam@gentoo.org>
2025-12-09 11:14:12 -03:00
Osama Abdelkader
57ce2d8243 Fix allocation_index increment in malloc_internal
The allocation_index was being incremented before checking whether mmap()
succeeded.  If mmap() failed, allocation_index would still be incremented,
creating a gap in the allocations tracking array and making
allocation_index inconsistent with the actual number of successful
allocations.

This fix moves the allocation_index increment to after the mmap()
success check, ensuring it only increments when an allocation actually
succeeds.  This maintains proper tracking for leak detection and
prevents gaps in the allocations array.
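
In essence, a sketch of the corrected ordering (not the verbatim code):

    void *ptr = mmap (NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ptr == MAP_FAILED)
      return NULL;
    /* Record the allocation only after mmap has succeeded.  */
    allocations[allocation_index++] = ptr;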

Signed-off-by: Osama Abdelkader <osama.abdelkader@gmail.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-12-01 13:35:36 +01:00
Arjun Shankar
244c404ae8 malloc: Add threaded variants of single-threaded malloc tests
Single-threaded malloc tests exercise only the SINGLE_THREAD_P paths in
the malloc implementation.  This commit runs variants of these tests in
a multi-threaded environment in order to exercise the alternate code
paths in the same test scenarios, thus potentially improving coverage.

$(test)-threaded-main and $(test)-threaded-worker variants are
introduced for most single-threaded malloc tests (with a small number of
exceptions).  The -main variants run the base test in a main thread
while the test environment has an alternate thread running, whereas the
-worker variants run the test in an alternate thread while the main
thread waits on it.

The tests themselves are unmodified, and the change is accomplished by
using -DTEST_IN_THREAD at compile time, which instructs support/
infrastructure to run the test while an alternate thread waits on it.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-11-24 16:47:52 +01:00
Wilco Dijkstra
7f670284d8 malloc: Use _int_free_chunk in tcache_thread_shutdown
Directly call _int_free_chunk during tcache shutdown to avoid recursion.
Calling __libc_free on a block from tcache gets flagged as a double free,
and tcache_double_free_verify checks every tcache chunk (quadratic
overhead).

Reviewed-by: Arjun Shankar <arjun@redhat.com>
2025-11-20 12:28:46 +00:00
Justin King
56549264d1 malloc: add free_sized and free_aligned_sized from C23
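
A usage example of the two C23 interfaces (the size and alignment
arguments must match the original allocation):

    #include <stdlib.h>

    int
    main (void)
    {
      void *p = malloc (64);
      if (p != NULL)
        free_sized (p, 64);               /* size as originally requested */

      void *q = aligned_alloc (64, 256);
      if (q != NULL)
        free_aligned_sized (q, 64, 256);  /* alignment, then size */
      return 0;
    }
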
Signed-off-by: Justin King <jcking@google.com>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-11-19 13:47:53 -03:00
Arjun Shankar
e53d85947f malloc: Simplify tst-free-errno munmap failure test
The Linux-specific test case in tst-free-errno was backing up malloc
metadata for a large mmap'd block, overwriting the block with its own
mmap, then restoring malloc metadata and calling free to force an munmap
failure.  However, the backed up pages containing metadata can
occasionally be overlapped by the overwriting mmap, leading to a
metadata corruption.

This commit replaces this Linux-specific test case with a simpler, generic
one: allocate three blocks, expect the kernel to coalesce the VMAs, then
cause fragmentation to trigger the same failure.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-11-18 14:28:42 +01:00
Collin Funk
3fe3f62833 Cleanup some recently added whitespace.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2025-10-30 18:56:58 -07:00
Adhemerval Zanella
970364dac0 Annotate switch fall-through
clang defaults to warning for missing fall-through annotations, and it
does not support all of the comment-style annotations that gcc does.
Use the C23 [[fallthrough]] attribute instead.

Reviewed-by: Collin Funk <collin.funk1@gmail.com>
2025-10-29 12:54:01 -03:00
Adhemerval Zanella
f91abbde02 malloc: Remove unused tcache_set_inactive
clang warns that this function is not used.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2025-10-29 12:53:53 -03:00
Dev Jain
b2b4b46a52 malloc: fix large tcache code to check for exact size match
The tcache is used for allocation only if an exact match is found. In the
large tcache code added in commit cbfd798810, we currently extract a
chunk of size greater than or equal to the size we need, but don't check
strict equality. This patch fixes that behaviour.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-10-24 16:55:02 +00:00
DJ Delorie
2bf2188fae malloc: avoid need for tcache == NULL checks
Avoid needing to check for tcache == NULL by initializing it
to a dummy read-only tcache structure.  This dummy is all zeros,
so logically it is both full (when you want to put) and empty (when
you want to get).  Also, there are two dummies, one used for
"not yet initialized" and one for "tunables say we shouldn't have
a tcache".

The net result is twofold:

1. Checks for tcache == NULL may be removed from the fast path.
    Whether this makes the fast path faster when tcache is
    disabled is TBD, but the normal case is tcache enabled.

2. No memory for the tcache is allocated if tunables disable caching.
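
A sketch of the idea (structure and names are illustrative, not glibc's
actual internals):

    /* All-zero and read-only: every bin reads as both at-capacity for
       puts and empty for gets, so no NULL check is needed.  */
    struct tcache_sketch { char counts[64]; void *entries[64]; };
    static const struct tcache_sketch tcache_uninitialized; /* zero-filled */
    static const struct tcache_sketch tcache_disabled;
    static __thread struct tcache_sketch *tcache =
      (struct tcache_sketch *) &tcache_uninitialized;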

Co-authored-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-10-21 16:51:03 -04:00
Adhemerval Zanella
76dfd91275 Suppress -Wmaybe-uninitialized only for gcc
The warning is not supported by clang.

Reviewed-by: Sam James <sam@gentoo.org>
2025-10-21 09:24:05 -03:00
Dev Jain
6e8f32d39a malloc: Do not call madvise if heap's oldsize >= THP size
Linux handles virtual memory in Virtual Memory Areas (VMAs). The
madvise(MADV_HUGEPAGE) call works on a VMA granularity, which sets the
VM_HUGEPAGE flag on the VMA. This flag is unaffected by the mprotect()
syscall which is used to grow the secondary heaps. Therefore, we
need to call madvise() only when we are sure that VM_HUGEPAGE was not
previously set, which is only in the case when h->size < mp_.thp_pagesize.
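
In code form, the guard is essentially (a sketch following the names
used above; new_size is illustrative):

    /* Only the first time the heap grows past the THP size can the
       VMA still lack VM_HUGEPAGE, so only then is madvise needed.  */
    if (mp_.thp_pagesize != 0 && h->size < mp_.thp_pagesize)
      madvise (h, new_size, MADV_HUGEPAGE);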

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-10-20 11:33:54 -03:00
Adhemerval Zanella
41e27c400d malloc: Use INT_ADD_OVERFLOW instead of __builtin_add_overflow_p
clang does not support the __builtin_*_overflow_p builtins; on gcc the
macros still call __builtin_*_overflow_p.
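
Illustrative use of the macro (from the vendored gnulib intprops.h;
variable names are made up):

    #include <intprops.h>

    /* True if bytes + extra would overflow; expands to a gcc builtin
       where available and to portable arithmetic elsewhere (clang).  */
    if (INT_ADD_OVERFLOW (bytes, extra))
      return NULL;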

Reviewed-by: Collin Funk <collin.funk1@gmail.com>
2025-10-20 11:33:54 -03:00
Wilco Dijkstra
e974b1b7eb malloc: Cleanup _int_memalign
Clean up _int_memalign. Simplify the logic. Add a separate check
for mmap. Only release the tail chunk if it is at least MINSIZE.
Use the new mmap abstractions.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-10-17 17:03:54 +00:00
Dev Jain
fa5d1b5419 malloc: Do not call madvise if oldsize >= THP size
Linux handles virtual memory in Virtual Memory Areas (VMAs). The
madvise(MADV_HUGEPAGE) call works on a VMA granularity, which sets the
VM_HUGEPAGE flag on the VMA. If this VMA or a portion of it is mremapped
to a different location, Linux will create a new VMA, which will have
the same flags as the old one. This implies that the VM_HUGEPAGE flag
will be retained. Therefore, if we can guarantee that the old VMA was
marked with VM_HUGEPAGE, then there is no need to call madvise_thp() in
mremap_chunk().

The old chunk comes from a heap or non-heap allocation, both of which
have already been enlightened for THP. This implies that, if THP is on,
and the size of the old chunk is greater than or equal to thp_pagesize,
the VMA to which this chunk belongs has the VM_HUGEPAGE flag set.
Hence in this case we can avoid invoking the madvise() syscall.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-10-08 12:59:30 +00:00
Wilco Dijkstra
88de32a070 malloc: Improve mmap interface
Add mmap_set_chunk() to create a new chunk from an mmap block.
Remove set_mmap_is_hp() since it is done inside mmap_set_chunk().
Rename prev_size_mmap() to mmap_base_offset().  Cleanup comments.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-10-08 12:59:30 +00:00
Wilco Dijkstra
adbd3ba137 atomic: Remove atomic_forced_read
Remove the odd atomic_forced_read which is neither atomic nor forced.
Some uses are completely redundant, so simply remove them.  In other cases
the intended use is to force a memory ordering, so use acquire load for those.
In yet other cases their purpose is unclear, for example __nscd_cache_search
appears to allow concurrent accesses to the cache while it is being garbage
collected by another thread!  Use relaxed atomic loads here to block spills
from accidentally reloading memory that is being changed.
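
The replacement patterns look like this (glibc's internal atomic macros;
val and mem are illustrative):

    /* Ordering is intended: pairs with a release store elsewhere.  */
    val = atomic_load_acquire (&mem);

    /* No ordering required: a relaxed load still stops the compiler
       from re-reading (spilling and reloading) memory that may change
       concurrently.  */
    val = atomic_load_relaxed (&mem);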

2025-10-08 12:59:30 +00:00
William Hunt
849a274531 malloc: Cleanup macros, asserts and sysmalloc_mmap_fallback
Refactor malloc.c to remove dead code, create macros to abstract duplicated
code, and clean up sysmalloc_mmap_fallback to remove logic not related to
the mmap call.

Change the return type of mmap_base to uintptr_t since this allows using
operations on the return value, and avoids casting in both calls in
mremap_chunk and munmap_chunk.

Clean up sysmalloc_mmap_fallback. Remove the unused parameters nb, oldsize
and av. Remove the redundant overflow check and instead use size_t for all
parameters except extra_flags to prevent overflows. Move logic not concerned
with the mmap call itself outside the function, after both calls to
sysmalloc_mmap_fallback are made; that is, move the code for naming the VMA
and for marking the arena being extended as non-contiguous into the calling
code, to be handled when the mmap is successful. Calculate the fallback
size from nb to avoid modifying size after it has been set for MORECORE.

Remove unused noncontiguous macro.

Remove redundant assert for checking unreachable option for global_max_fast.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-10-03 16:34:10 +00:00
Wilco Dijkstra
85c5b504aa malloc: Remove dumped heap support
Remove support for obsolete dumped heaps.  Dumping heaps was discontinued
8 years ago; however, loading a dumped heap is still supported. This blocks
changes and improvements to the malloc data structures - hence it is time
to remove it.  Ancient binaries that still call malloc_set_state will now
get the -1 error code.  Update tst-mallocstate.c to just check for this.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-09-19 13:11:56 +00:00
Dev Jain
f807e85c31 malloc: Hoist common unlock out of if-else control block
We currently unlock the arena mutex in arena_get_retry() unconditionally.
Therefore, hoist the unlock out of the if-else control block.

Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
2025-09-18 15:50:15 -04:00
Wilco Dijkstra
19442c052c malloc: Cleanup libc_realloc
Minor cleanup of libc_realloc: remove unnecessary special cases for mmap,
move the ar_ptr initialization, and check for oldmem == NULL first.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-09-10 09:18:06 +00:00
Wilco Dijkstra
210ee29503 atomics: Remove unused atomics
Remove all unused atomics.  Replace uses of catomic_increment and
catomic_decrement with atomic_fetch_add_relaxed which maps to a standard
compiler builtin. Relaxed memory ordering is correct for simple counters
since they only need atomicity.
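
For example (a sketch; the counter name is illustrative):

    /* Before (glibc-specific): catomic_increment (&counter);
       After (maps to a standard compiler builtin):  */
    atomic_fetch_add_relaxed (&counter, 1);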

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-09-10 09:18:06 +00:00
Samuel Thibault
245ea60b0e malloc: check "negative" tcache_key values by hand
instead of relying on the undefined cases of casting uintptr_t to intptr_t.
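
A sketch of such a by-hand check using only unsigned arithmetic (the
threshold name is illustrative):

    /* Keys within MIN_DISTANCE of 0 or of UINTPTR_MAX (values that
       would look like small negatives after a cast) are rejected
       without any signed conversion.  */
    if (key < MIN_DISTANCE || key > UINTPTR_MAX - MIN_DISTANCE)
      /* re-draw the key */;
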
2025-09-09 23:05:00 +02:00
Adhemerval Zanella
b9fe06a8a8 malloc: Fix Os build on some ABIs
I have not checked with all versions for all ABIs, but I saw failures
with gcc-14 on arm, alpha, hppa, i686, sparc, sh4, and microblaze.

Reviewed-by: Collin Funk <collin.funk1@gmail.com>
2025-09-08 08:21:48 -03:00
DJ Delorie
320cf1e1b5 malloc: add tst-mxfast to hugetlb exclusion list
tst-mxfast needs GLIBC_TUNABLES to be set to its own value.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-08-29 13:22:19 -04:00
Wilco Dijkstra
921e251e8f malloc: Support hugepages in mremap_chunk
Add mremap_chunk support for mmap()ed chunks using hugepages by accounting
for their alignment, to prevent the mremap call from failing in most cases
where the size passed is not a multiple of the hugepage size. This also
improves robustness when reallocating hugepages: since mremap is much less
likely to fail, it is also less likely that we run out of memory when
reallocating to a larger size and have to copy the old contents after a
failed mremap.

To track whether an mmap()ed chunk uses hugepages, have a flag in the lowest
bit of the mchunk_prev_size field which is set after a call to sysmalloc_mmap,
and accessed later in mremap_chunk. Create macros for getting and setting this
bit, and for mapping the bit off when accessing the field for mmap()ed chunks.
Since the alignment cannot be lower than 8 bytes, this flag cannot affect the
alignment data.
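
A sketch of the encoding (macro names illustrative; mchunk_prev_size is
the real field name):

    /* Chunk alignment is at least 8 bytes, so bit 0 of the
       mchunk_prev_size field is unused and can mark hugepage mmaps.  */
    #define MMAP_IS_HUGEPAGE(p)  ((p)->mchunk_prev_size & 1)
    #define SET_MMAP_HUGEPAGE(p) ((p)->mchunk_prev_size |= 1)
    #define MMAP_PREV_SIZE(p)    ((p)->mchunk_prev_size & ~(size_t) 1)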

Add malloc/tst-tcfree4-malloc-check to the tests-exclude-malloc-check list as
malloc-check prevents the tcache from being used to store chunks. This test
caused failures due to a bug in mem2chunk_check to be fixed in a later patch.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-08-27 13:07:09 +00:00
Wilco Dijkstra
614cfd0f8a malloc: Change mmap chunk layout
Change the mmap chunk layout to be identical to a normal chunk.  This makes it
safe for tcache to hold mmap chunks and simplifies size calculations in
memsize and musable.  Add mmap_base() and mmap_size() macros to simplify code.

Reviewed-by: Cupertino Miranda <cupertino.miranda@oracle.com>
2025-08-27 11:41:58 +00:00
caiyinyu
d4ccda8e69 malloc: Fix tst bug in malloc/tst-free-errno-malloc-hugetlb1.
When transparent hugepages (THP) are configured to 32MB on x86/loongarch
systems, the current big_size value may not be sufficiently large to
guarantee that free(ptr) [1] will call munmap(ptr_aligned, big_size).

Tested on x86_64 and loongarch64.

PS: Without this patch and using 32M THP, there is about a 50% chance
that malloc/tst-free-errno-malloc-hugetlb1 will fail on both x86_64 and
loongarch64.

[1] malloc/tst-free-errno.c:
...
       errno = 1789;
       /* This call to free() is supposed to call
            munmap (ptr_aligned, big_size);
          which increases the number of VMAs by 1, which is supposed
          to fail.  */
->     free (ptr);
       TEST_VERIFY (get_errno () == 1789);
     }
...

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-08-19 09:05:32 +08:00
Samuel Thibault
8543577b04 malloc: Fix checking for small negative values of tcache_key
tcache_key is unsigned, so we should convert it explicitly to signed
before taking its absolute value.
2025-08-10 23:45:35 +02:00
Samuel Thibault
2536c4f858 malloc: Make sure tcache_key is odd enough
We want tcache_key not to be a commonly-occurring value in memory, so
ensure a minimum number of one and zero bits.

And we need it non-zero, otherwise even if tcache_double_free_verify sets
e->key to 0 before calling __libc_free, it gets called again by __libc_free,
thus looping indefinitely.
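
A sketch of the selection loop (the random-source helper and threshold
are hypothetical):

    uint64_t key;
    do
      key = get_random_key ();
    while (key == 0
           || __builtin_popcountll (key) < MIN_BITS
           || __builtin_popcountll (~key) < MIN_BITS);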

Fixes: c968fe5062 ("malloc: Use tailcalls in __libc_free")
2025-08-10 09:44:08 +02:00
Wilco Dijkstra
a5e9269f51 malloc: Fix MALLOC_DEBUG
MALLOC_DEBUG only works on locked arenas, so move the call to
check_inuse_chunk from __libc_free() to _int_free_chunk().
Regress now passes if MALLOC_DEBUG is enabled.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-08-08 14:00:43 +00:00
Wilco Dijkstra
05a14648e9 malloc: Support THP in arenas
Arenas support huge pages but not transparent huge pages.  Add this by
also checking mp_.thp_pagesize when creating a new arena, and use madvise.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-08-08 14:00:11 +00:00
Wilco Dijkstra
94ebcfc4f2 malloc: Remove use of __curbrk
Remove an odd use of __curbrk and use MORECORE (0) instead.
This fixes the Hurd build, since Hurd does not define this symbol.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-08-08 13:59:31 +00:00
Wilco Dijkstra
7ab623afb9 Revert "Remove use of __curbrk."
This reverts commit 1ee0b771a9.
2025-08-04 17:31:56 +00:00
Wilco Dijkstra
91a7726374 Revert "Improve MALLOC_DEBUG"
This reverts commit 4b3e65682d.
2025-08-04 17:31:54 +00:00