Narrow bounds on kernel malloc allocations. #2261
base: dev
Conversation
The uma_zgrow_bounds function allows nested allocators that rely on UMA to recover the full bounds of an item.
This prevents possible overflows into the slab header.
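For illustration, a minimal sketch of how a nested allocator layered on UMA might use this; the uma_zgrow_bounds() prototype shown here is assumed rather than taken from the patch, and the trailer structure is hypothetical:

```c
#include <sys/param.h>
#include <vm/uma.h>

/* Assumed prototype; the real declaration lives in the UMA headers. */
void	*uma_zgrow_bounds(uma_zone_t zone, void *mem);

/* Hypothetical per-item trailer kept by a nested allocator. */
struct nested_trailer {
	uint32_t	nt_magic;
};

static struct nested_trailer *
nested_trailer_get(uma_zone_t zone, void *ptr, size_t item_size)
{
	char *item;

	/*
	 * 'ptr' may carry narrowed bounds covering only the caller-visible
	 * part of the item; recover the full item bounds from UMA so the
	 * trailer at the end of the item is reachable again.
	 */
	item = uma_zgrow_bounds(zone, ptr);
	return ((struct nested_trailer *)(item + item_size -
	    sizeof(struct nested_trailer)));
}
```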
- Narrow bounds on kmalloc allocations backed by UMA zones; this prevents the extra space in items from being accessible.
- When the requested size is not representable, imply M_ZERO for UMA allocations. This ensures that no data is accidentally leaked through the extra space in the item.

Note that the current malloc KPI does not promise to protect against memory leaks due to improper initialization when M_ZERO is not set; it might therefore be sensible to only zero-initialize the padding space beyond the requested allocation size, which will likely not be initialized by the client code.

Patch co-authored by: Alfredo Mazzinghi <[email protected]>
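As an illustration of the intended policy, a minimal sketch using the CHERI compiler builtins; the helper name and surrounding plumbing are assumptions, not the patch's actual code:

```c
#include <sys/param.h>
#include <sys/malloc.h>

/*
 * Illustrative helper: narrow the returned capability and decide whether
 * zeroing must be forced because the requested size is unrepresentable.
 */
static void *
kmalloc_narrow_bounds(void *item, size_t size, int *flagsp)
{
#ifdef __CHERI_PURE_CAPABILITY__
	size_t rounded;

	rounded = __builtin_cheri_round_representable_length(size);
	if (rounded != size) {
		/*
		 * The capability must cover 'rounded' bytes, so the caller
		 * can reach padding beyond 'size'; imply M_ZERO so nothing
		 * leaks through that padding.
		 */
		*flagsp |= M_ZERO;
	}
	/* Narrow the capability as far as representability allows. */
	item = __builtin_cheri_bounds_set(item, rounded);
#endif
	return (item);
}
```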
This handles malloc_large allocations in the same way as UMA-backed allocations. When the capability would not be representable, imply M_ZERO to ensure no data leaks through the padding. Patch co-authored by: Alfredo Mazzinghi <[email protected]>
This only affects memstat, which is the only consumer of the private UMA structure definitions.
Add an assertion to check this.
Only zero the padding introduced by representable-length rounding of the requested allocation size. This should generally be cheaper than zeroing the whole allocation.
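A minimal sketch of this approach (helper name assumed, not the patch's code):

```c
#include <sys/param.h>
#include <sys/systm.h>

/*
 * Illustrative helper: zero only the bytes that become reachable because
 * the capability length was rounded up, rather than the whole item.
 */
static void
kmalloc_zero_padding(void *item, size_t size)
{
#ifdef __CHERI_PURE_CAPABILITY__
	size_t rounded;

	rounded = __builtin_cheri_round_representable_length(size);
	if (rounded > size)
		memset((char *)item + size, 0, rounded - size);
#endif
}
```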
When growing an existing allocation with an unrepresentable length, we need to initialize any padding space to ensure there are no data leaks. When copying data to a new allocation, we must ensure that no data past the original capability length is copied.
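A minimal sketch of the copy rule described above (names assumed, not the patch's code):

```c
#include <sys/param.h>
#include <sys/systm.h>

/*
 * Illustrative realloc-style copy: never read past the original
 * capability's length, even if the new allocation is larger.
 */
static void
kmalloc_copy_on_grow(void *new_item, const void *old_item, size_t new_size)
{
	size_t copy_len = new_size;

#ifdef __CHERI_PURE_CAPABILITY__
	size_t old_len = __builtin_cheri_length_get(old_item);

	if (old_len < copy_len)
		copy_len = old_len;
#endif
	memcpy(new_item, old_item, copy_len);
}
```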
@@ -680,7 +680,7 @@ void *
 	int indx;
 	caddr_t va;
 	uma_zone_t zone;
-#if defined(DEBUG_REDZONE) || defined(KASAN)
+#if defined(DEBUG_REDZONE) || defined(KASAN) || defined(__CHERI_PURE_CAPABILITY__)
 	unsigned long osize = size;
I admit a temptation to slap a __diagused on at least the local variables and make them unconditional.
Hmm, note that in userspace we have recently switched to not forcing tight bounds but in effect setting the bounds to what
This was my original reasoning as well; however, there is an interesting interaction with the caller's responsibility for initialization. This patch is not really about bounds and more about memory leaks through uninitialized memory. I will explain better:

However, in the CHERI world (and arguably also in the non-CHERI world), the caller can access memory outside of the requested bounds if the allocator committed a larger chunk of memory (e.g. a 64-byte UMA block for a 48-byte requested size). This becomes increasingly tricky with unrepresentable sizes, because you cannot guarantee that bounds can be narrowed exactly in all cases, so you still need zero-initialization to protect against data/capability leaks across kmalloc callers. This patch's use of bounds is really about reducing the need for zero-initialization. I think there are essentially 3 options:
The root cause is the use of shared UMA zones across different C types, but that is a deeper issue. Anyway, I agree that having a knob to turn this off is useful, and I will add one.
Hmmm, I guess part of what threw me off is that in the original PR comment, bullet point 4 seems to be option 2), so it read like we were doing both 1) and 2). Of the available options, I do prefer something along the lines of option 2) for allocations <= PAGE_SIZE (which avoids the extra complication of re-deriving bounds for allocations using the small UMA buckets, which I think are the most likely potential hot paths) and option 3) for allocations > PAGE_SIZE. In the case of option 3), one thing you might consider is seeing if you can arrange for the layer that allocates VM pages to know about the padding area and try to allocate zeroed pages for the padding in the
I could have phrased it a bit better, probably. I think allocating zeroed pages though
This is required to prevent memory leaks through the extra space in the allocations.
In general, kernel malloc uses a number of fixed-size UMA zones to back small allocations (<= 64K) and a dedicated malloc_large for large allocations that grabs memory directly from kmem.
In both cases, the requested allocation size may be smaller than the actual committed size if the requested size is not representable. Representability is platform-dependent and we cannot make assumptions about the minimum representable size (although I think it will always be >= PAGE_SIZE).
With CHERI this is problematic because the caller will receive a capability that might be larger than the requested size, and some of the additional committed space is reachable but uninitialized (unless M_ZERO is set).
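To make this concrete, a small sketch (illustrative only; the rounding is target-specific and reported by the compiler builtin):

```c
#include <sys/param.h>

/*
 * Amount of committed space beyond the requested size that a
 * representable capability must still cover.
 */
static inline size_t
representability_padding(size_t requested)
{
#ifdef __CHERI_PURE_CAPABILITY__
	return (__builtin_cheri_round_representable_length(requested) -
	    requested);
#else
	return (0);
#endif
}
```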
This patch does the following: