For a fair amount of time, null-deref bugs were a highly exploitable kernel bug class. Back when the kernel was able to access userland memory without restriction, and userland programs were still able to map the zero page, there were many easy techniques for exploiting null-deref bugs. However with the introduction of modern exploit mitigations such as SMEP and SMAP, as well as mmap_min_addr preventing unprivileged programs from mmap’ing low addresses, null-deref bugs are generally not considered a security issue in modern kernel versions. This blog post provides an exploit technique demonstrating that treating these bugs as universally innocuous often leads to faulty evaluations of their relevance to security.
Kernel oops overview
At present, when the Linux kernel triggers a null-deref from within a process context, it generates an oops, which is distinct from a kernel panic. A panic occurs when the kernel determines that there is no safe way to continue execution, and that therefore all execution must cease. However, the kernel does not stop all execution during an oops — instead the kernel tries to recover as best as it can and continue execution. In the case of a task, that involves throwing out the existing kernel stack and going directly to make_task_dead which calls do_exit. The kernel will also publish in dmesg a “crash” log and kernel backtrace depicting what state the kernel was in when the oops occurred. This may seem like an odd choice to make when memory corruption has clearly occurred — however the intention is to allow kernel bugs to more easily be detectable and loggable under the philosophy that a working system is much easier to debug than a dead one.
The unfortunate side effect of the oops recovery path is that the kernel is not able to perform any associated cleanup that it would normally perform on a typical syscall error recovery path. This means that any locks that were locked at the moment of the oops stay locked, any refcounts remain taken, any memory otherwise temporarily allocated remains allocated, etc. However, the process that generated the oops, its associated kernel stack, task struct and derivative members etc. can and often will be freed, meaning that depending on the precise circumstances of the oops, it’s possible that no memory is actually leaked. This becomes particularly important in regards to exploitation later.
Reference count mismanagement overview
Refcount mismanagement is a fairly well-known and exploitable issue. In the case where software improperly decrements a refcount, this can lead to a classic UAF primitive. The case where software improperly doesn’t decrement a refcount (leaking a reference) is also often exploitable. If the attacker can cause a refcount to be repeatedly improperly incremented, it is possible that given enough effort the refcount may overflow, at which point the software no longer has any remotely sensible idea of how many refcounts are taken on an object. In such a case, it is possible for an attacker to destroy the object by incrementing and decrementing the refcount back to zero after overflowing, while still holding reachable references to the associated memory. 32-bit refcounts are particularly vulnerable to this sort of overflow. It is important, however, that each increment of the refcount allocates little or no physical memory. Even a single byte allocation is quite expensive if it must be performed 2^32 times.
Example null-deref bug
When a kernel oops unceremoniously ends a task, any refcounts that the task was holding remain held, even though all memory associated with the task may be freed when the task exits. Let’s look at an example — an otherwise unrelated bug I coincidentally discovered in the very recent past:
static int show_smaps_rollup(struct seq_file *m, void *v)
{
	struct proc_maps_private *priv = m->private;
	struct mem_size_stats mss;
	struct mm_struct *mm;
	struct vm_area_struct *vma;
	unsigned long last_vma_end = 0;
	int ret = 0;

	priv->task = get_proc_task(priv->inode); // task reference taken
	if (!priv->task)
		return -ESRCH;

	mm = priv->mm; // with no VMAs, mm->mmap is NULL
	if (!mm || !mmget_not_zero(mm)) { // mm reference taken
		ret = -ESRCH;
		goto out_put_task;
	}

	memset(&mss, 0, sizeof(mss));

	ret = mmap_read_lock_killable(mm); // mmap read lock taken
	if (ret)
		goto out_put_mm;

	hold_task_mempolicy(priv);

	for (vma = priv->mm->mmap; vma; vma = vma->vm_next) {
		smap_gather_stats(vma, &mss);
		last_vma_end = vma->vm_end;
	}

	show_vma_header_prefix(m, priv->mm->mmap->vm_start,
			       last_vma_end, 0, 0, 0, 0); // the deref of mmap causes a kernel oops here

	seq_pad(m, ' ');
	seq_puts(m, "[rollup]\n");

	__show_smap(m, &mss, true);

	release_task_mempolicy(priv);
	mmap_read_unlock(mm);

out_put_mm:
	mmput(mm);
out_put_task:
	put_task_struct(priv->task);
	priv->task = NULL;

	return ret;
}
This file is intended simply to print a set of memory usage statistics for the respective process. Regardless, this bug report reveals a classic and otherwise innocuous null-deref bug within this function. In the case of a task that has no VMA’s mapped at all, the task’s mm_struct mmap member will be equal to NULL. Thus the priv->mm->mmap->vm_start access causes a null dereference and consequently a kernel oops. This bug can be triggered by simply read’ing /proc/[pid]/smaps_rollup on a task with no VMA’s (which itself can be stably created via ptrace):
This kernel oops will mean that the following events occur:
1. The associated struct file will have a refcount leaked if fdget took a refcount (we’ll try and make sure this doesn’t happen later).
2. The associated seq_file within the struct file has a mutex that will forever be locked (any future reads/writes/lseeks etc. will hang forever).
3. The task struct associated with the smaps_rollup file will have a refcount leaked.
4. The mm_struct’s mm_users refcount associated with the task will be leaked.
5. The mm_struct’s mmap lock will be permanently readlocked (any future write-lock attempts will hang forever).
Each of these conditions is an unintentional side-effect that leads to buggy behaviors, but not all of those behaviors are useful to an attacker. The permanent locks of conditions 2 and 5 only make exploitation more difficult. Condition 1 is unexploitable because we cannot leak the struct file refcount again without taking a mutex that will never be unlocked. Condition 3 is unexploitable because a task struct uses a safe saturating kernel refcount_t, which prevents the overflow condition. This leaves condition 4.
The mm_users refcount still uses an overflow-unsafe atomic_t and since we can take a readlock an indefinite number of times, the associated mmap_read_lock does not prevent us from incrementing the refcount again. There are a couple important roadblocks we need to avoid in order to repeatedly leak this refcount:
1. We cannot call this syscall from the task with the empty VMA list itself — in other words, we can’t call read from /proc/self/smaps_rollup. Such a process cannot easily make repeated syscalls since it has no virtual memory mapped. We avoid this by reading smaps_rollup from another process.
2. We must re-open the smaps_rollup file every time, because any future reads we perform on a smaps_rollup instance we already triggered the oops on will deadlock on the local seq_file mutex, which is locked forever. We also need to destroy the resulting struct file (via close) after we generate the oops in order to prevent untenable memory usage.
3. If we access the mm through the same PID every time, we will run into the task struct max refcount before we overflow the mm_users refcount. Thus we need to create two separate tasks that share the same mm and balance the oopses we generate across both tasks so the task refcounts grow half as quickly as the mm_users refcount. We do this via the clone flag CLONE_VM.
4. We must avoid opening/reading the smaps_rollup file from a task that has a shared file descriptor table, as otherwise a refcount will be leaked on the struct file itself. This isn’t difficult, just don’t read the file from a multi-threaded process.
Our final refcount leaking overflow strategy is as follows:
1. Process A forks a process B.
2. Process B issues PTRACE_TRACEME so that when it segfaults upon return from munmap it won’t go away (but rather will enter tracing stop).
3. Process B clones with CLONE_VM | CLONE_PTRACE another process C.
4. Process B munmaps its entire virtual memory address space — this also unmaps process C’s virtual memory address space.
5. Process A forks new children D and E which will access (B|C)’s smaps_rollup file respectively.
6. (D|E) opens (B|C)’s smaps_rollup file and performs a read which will oops, causing (D|E) to die. mm_users will be refcount leaked/incremented once per oops.
7. Process A goes back to step 5 ~2^32 times.
The above strategy can be rearchitected to run in parallel (across processes not threads, because of roadblock 4) and improve performance. On server setups that print kernel logging to a serial console, generating 2^32 kernel oopses takes over 2 years. However on a vanilla Kali Linux box using a graphical interface, a demonstrative proof-of-concept takes only about 8 days to complete! At the completion of execution, the mm_users refcount will have overflowed and be set to zero, even though this mm is currently in use by multiple processes and can still be referenced via the proc filesystem.
Exploitation
Once the mm_users refcount has been set to zero, triggering undefined behavior and memory corruption should be fairly easy. By triggering an mmget and an mmput (which we can very easily do by opening the smaps_rollup file once more) we should be able to free the entire mm and cause a UAF condition:
Unfortunately, since 64591e8605 (“mm: protect free_pgtables with mmap_lock write lock in exit_mmap”), exit_mmap unconditionally takes the mmap lock in write mode. Since this mm’s mmap_lock is permanently readlocked many times, any calls to __mmput will manifest as a permanent deadlock inside of exit_mmap.
However, before the call permanently deadlocks, it will call several other functions:
uprobe_clear_state
exit_aio
ksm_exit
khugepaged_exit
Additionally, we can call __mmput on this mm from several tasks simultaneously by having each of them trigger an mmget/mmput on the mm, generating irregular race conditions. Under normal execution, it should not be possible to trigger multiple __mmput’s on the same mm (much less concurrent ones) as __mmput should only be called on the last and only refcount decrement which sets the refcount to zero. However, after the refcount overflow, all mmget/mmput’s on the still-referenced mm will trigger an __mmput. This is because each mmput that decrements the refcount to zero (despite the corresponding mmget being why the refcount was above zero in the first place) believes that it is solely responsible for freeing the associated mm.
This racy __mmput primitive extends to its callees as well. exit_aio is a good candidate for taking advantage of this:
void exit_aio(struct mm_struct *mm)
{
	struct kioctx_table *table = rcu_dereference_raw(mm->ioctx_table);
	struct ctx_rq_wait wait;
	int i, skipped;

	if (!table)
		return;

	atomic_set(&wait.count, table->nr);
	init_completion(&wait.comp);

	skipped = 0;
	for (i = 0; i < table->nr; ++i) {
		struct kioctx *ctx =
			rcu_dereference_protected(table->table[i], true);

		if (!ctx) {
			skipped++;
			continue;
		}

		ctx->mmap_size = 0;
		kill_ioctx(mm, ctx, &wait);
	}

	if (!atomic_sub_and_test(skipped, &wait.count)) {
		/* Wait until all IO for the context are done. */
		wait_for_completion(&wait.comp);
	}

	RCU_INIT_POINTER(mm->ioctx_table, NULL);
	kfree(table);
}
While the callee function kill_ioctx is written in such a way to prevent concurrent execution from causing memory corruption (part of the contract of aio allows for kill_ioctx to be called in a concurrent way), exit_aio itself makes no such guarantees. Two concurrent calls of exit_aio on the same mm struct can consequently induce a double free of the mm->ioctx_table object, which is fetched at the beginning of the function, while only being freed at the very end. This race window can be widened substantially by creating many aio contexts in order to slow down exit_aio’s internal context freeing loop. Successful exploitation will trigger the following kernel BUG indicating that a double free has occurred:
Note that as this exit_aio path is hit from __mmput, triggering this race will produce at least two permanently deadlocked processes when those processes later try to take the mmap write lock. However, from an exploitation perspective, this is irrelevant as the memory corruption primitive has already occurred before the deadlock occurs. Exploiting the resultant primitive would probably involve racing a reclaiming allocation in between the two frees of the mm->ioctx_table object, then taking advantage of the resulting UAF condition of the reclaimed allocation. It is undoubtedly possible, although I didn’t take this all the way to a completed PoC.
Conclusion
While the null-dereference bug itself was fixed in October 2022, the more important fix was the introduction of an oops limit which causes the kernel to panic if too many oopses occur. While this patch is already upstream, it is important that distributed kernels also inherit this oops limit and backport it to LTS releases if we want to avoid treating such null-dereference bugs as full-fledged security issues in the future. Even in that best-case scenario, it is nevertheless highly beneficial for security researchers to carefully evaluate the side-effects of bugs discovered in the future that are similarly “harmless” and ensure that the abrupt halt of kernel code execution caused by a kernel oops does not lead to other security-relevant primitives.
It’s been a while since our last technical blog post, so here’s one right on time for the Christmas holidays. We describe a method to exploit a use-after-free in the Linux kernel when objects are allocated in a specific slab cache, namely the kmalloc-cg series of SLUB caches used for cgroups. This vulnerability is assigned CVE-2022-32250 and exists in Linux kernel versions 5.18.1 and prior.
The use-after-free vulnerability in the Linux kernel netfilter subsystem was discovered by NCC Group’s Exploit Development Group (EDG). They published a very detailed write-up with an in-depth analysis of the vulnerability and an exploitation strategy that targeted Linux kernel version 5.13. Additionally, Theori published their own analysis and exploitation strategy, this time targeting Linux kernel version 5.15. We strongly recommend having a thorough read of both articles to better understand the vulnerability prior to reading this post, which almost exclusively focuses on an exploitation strategy that works on the latest vulnerable version of the Linux kernel, version 5.18.1.
The aforementioned exploitation strategies are different from each other and from the one detailed here since the targeted kernel versions have different peculiarities. In version 5.13, allocations performed with either the GFP_KERNEL flag or the GFP_KERNEL_ACCOUNT flag are served by the kmalloc-* slab caches. In version 5.15, allocations performed with the GFP_KERNEL_ACCOUNT flag are served by the kmalloc-cg-* slab caches. While in both 5.13 and 5.15 the affected object, nft_expr, is allocated using GFP_KERNEL, the difference in exploitation between them arises because a commonly used heap spraying object, the System V message structure (struct msg_msg), is served from kmalloc-* in 5.13 but from kmalloc-cg-* in 5.15. Therefore, in 5.15, struct msg_msg cannot be used to exploit this vulnerability.
In 5.18.1, the object involved in the use-after-free vulnerability, nft_expr, is itself allocated with GFP_KERNEL_ACCOUNT in the kmalloc-cg-* slab caches. Since the exploitation strategies presented by the NCC Group and Theori rely on objects allocated with GFP_KERNEL, they do not work against the latest vulnerable version of the Linux kernel. The subject of this blog post is to present a strategy that works on the latest vulnerable version of the Linux kernel.
Vulnerability
Netfilter sets can be created with a maximum of two associated expressions that have the NFT_EXPR_STATEFUL flag. The vulnerability occurs when a set is created with an associated expression that does not have the NFT_EXPR_STATEFUL flag, such as the dynset and lookup expressions. These two expressions have a reference to another set for updating and performing lookups, respectively. Additionally, to enable tracking, each set has a bindings list that specifies the objects that have a reference to it.
During the allocation of the associated dynset or lookup expression objects, references to the objects are added to the bindings list of the referenced set. However, when the expression associated with the set does not have the NFT_EXPR_STATEFUL flag, the creation is aborted and the allocated expression is destroyed. The problem occurs during the destruction process, where the bindings list of the referenced set is not updated to remove the reference, effectively leaving a dangling pointer to the freed expression object. Whenever the set containing the dangling pointer in its bindings list is referenced again and its bindings list has to be updated, a use-after-free condition occurs.
Exploitation
Before jumping straight into exploitation details, first let’s see the definition of the structures involved in the vulnerability:
The nft_set structure represents an nftables set, a built-in generic infrastructure of nftables that allows using any supported selector to build sets, which makes possible the representation of maps and verdict maps (check the corresponding nftables wiki entry for more details).
The nft_lookup and nft_dynset expressions have to be bound to a given set on which the add, delete, or update operations will be performed.
When a given nft_set has expressions bound to it, they are added to the nft_set.bindings doubly linked list. A visual representation of an nft_set with 2 expressions is shown in the diagram below.
The binding member of the nft_lookup and nft_dynset expressions is defined as follows:
// Source: https://elixir.bootlin.com/linux/v5.18.1/source/include/net/netfilter/nf_tables.h#L576
/**
* struct nft_set_binding - nf_tables set binding
*
* @list: set bindings list node
* @chain: chain containing the rule bound to the set
* @flags: set action flags
*
* A set binding contains all information necessary for validation
* of new elements added to a bound set.
*/
struct nft_set_binding {
	struct list_head list;
	const struct nft_chain *chain;
	u32 flags;
};
The important member in our case is the list member. It is of type struct list_head, the same as the nft_lookup.binding and nft_dynset.binding members. These are the foundation for building a doubly linked list in the kernel. For more details on how linked lists in the Linux kernel are implemented, refer to this article.
With this information, let’s see what the vulnerability allows us to do. Since the UAF occurs within a doubly linked list, let’s review the common operations on them and what that implies in our scenario. Instead of showing a generic example, we are going to use the linked list that is built with the nft_set and the expressions that can be bound to it.
In the diagram shown above, the simplified pseudo-code for removing an entry boils down to updating the next pointer of its predecessor and the prev pointer of its successor to point to each other, meaning the unlink writes addresses into the neighbouring entries. Since the binding member is located at different offsets within the nft_lookup and nft_dynset expressions, the write operation is done at different offsets.
With this out of the way we can now list the write primitives that this vulnerability allows, depending on which expression is the vulnerable one:
nft_lookup: Write an 8-byte address at offset 24 (binding.list->next) or offset 32 (binding.list->prev) of a freed nft_lookup object.
nft_dynset: Write an 8-byte address at offset 64 (binding.list->next) or offset 72 (binding.list->prev) of a freed nft_dynset object.
The offsets mentioned above take into account the fact that nft_lookup and nft_dynset expressions are bundled in the data member of an nft_expr object (the data member is at offset 8).
In order to do something useful with the limited write primitives that the vulnerability offers, we need to find objects allocated within the same slab caches as the nft_lookup and nft_dynset expression objects that have an interesting member at the listed offsets. Therefore, the objects suitable for exploitation will be different from those of the publicly available exploits targeting versions 5.13 and 5.15.
Exploit Strategy
The ultimate primitives we need to exploit this vulnerability are the following:
Memory leak primitive: Mainly to defeat KASLR.
RIP control primitive: To achieve kernel code execution and escalate privileges.
However, neither of these can be achieved by only using the 8-byte write primitive that the vulnerability offers. The 8-byte write primitive on a freed object can be used to corrupt the object replacing the freed allocation. This can be leveraged to force a partial free on either the nft_set, nft_lookup, or nft_dynset objects.
Partially freeing nft_lookup and nft_dynset objects can help with leaking pointers, while partially freeing an nft_set object can be pretty useful to craft a partial fake nft_set to achieve RIP control, since it has an ops member that points to a function table.
Therefore, the high-level exploitation strategy would be the following:
1. Leak the kernel image base address.
2. Leak a pointer to an nft_set object.
3. Obtain RIP control.
4. Escalate privileges by overwriting the kernel’s MODPROBE_PATH global variable.
5. Return execution to userland and drop a root shell.
The following sub-sections describe how this can be achieved.
Partial Object Free Primitive
A partial object free primitive can be built by looking for a kernel object allocated with
GFP_KERNEL_ACCOUNT
within kmalloc-cg-64 or kmalloc-cg-96, with a pointer at offsets 24 or 32 for kmalloc-cg-64 or at offsets 64 and 72 for kmalloc-cg-96. Afterwards, when the object of interest is destroyed,
kfree()
has to be called on that pointer in order to partially free the targeted object.
One of such objects is the
fdtable
object, which is meant to hold the file descriptor table for a given process. Its definition is shown below.
// Source: https://elixir.bootlin.com/linux/v5.18.1/source/include/linux/fdtable.h#L27
struct fdtable {
	unsigned int max_fds;                                     /* 0  4  */
	/* XXX 4 bytes hole, try to pack */
	struct file **fd;                                         /* 8  8  */
	long unsigned int *close_on_exec;                         /* 16 8  */
	long unsigned int *open_fds;                              /* 24 8  */
	long unsigned int *full_fds_bits;                         /* 32 8  */
	struct callback_head rcu __attribute__((__aligned__(8))); /* 40 16 */

	/* size: 56, cachelines: 1, members: 6 */
	/* sum members: 52, holes: 1, sum holes: 4 */
	/* forced alignments: 1 */
	/* last cacheline: 56 bytes */
} __attribute__((__aligned__(8)));
The size of an fdtable object is 56 bytes, so it is allocated in the kmalloc-cg-64 slab cache and thus can be used to replace nft_lookup objects. It has a member of interest at offset 24 (open_fds), which is a pointer to an unsigned long integer array. The allocation of fdtable objects is done by the kernel function alloc_fdtable(), which can be reached with the following call stack.
The kfree() call on the open_fds pointer can be triggered by simply terminating the child process that allocated the fdtable object.
Leaking Pointers
The exploit primitive provided by this vulnerability can be used to build a leaking primitive by overwriting the vulnerable object with an object that has an area that will be copied back to userland. One such object is the System V message, represented by the msg_msg structure, which is allocated in kmalloc-cg-* slab caches starting from kernel version 5.14.
The msg_msg structure acts as a header for System V messages that can be created via the userland msgsnd() function. The content of the message can be found right after the header within the same allocation. System V messages are a widely used exploit primitive for heap spraying.
Since the size of the allocation for a System V message can be controlled, it is possible to allocate it in both kmalloc-cg-64 and kmalloc-cg-96 slab caches.
It is important to note that any data to be leaked must be written past the first 48 bytes of the message allocation, as it would otherwise overwrite the msg_msg header. This restriction rules out the nft_lookup object as a candidate for this technique, since it is only possible to write the pointer at either offset 24 or offset 32 within the object. The ability to overwrite the msg_msg.m_ts member, which defines the size of the message, helps build a strong out-of-bounds read primitive if the value is large enough. However, there is a check in the code to ensure that the m_ts member is not negative when interpreted as a signed long integer, and heap addresses start with 0xffff, making them negative long integers.
Leaking an nft_set Pointer
Leaking a pointer to an nft_set object is quite simple with the memory leak primitive described above. The steps to achieve it are the following:
1. Create a target set where the expressions will be bound to.
2. Create a rule with a lookup expression bound to the target set from step 1.
3. Create a set with an embedded nft_dynset expression bound to the target set. Since this is considered an invalid expression to be embedded in a set, the nft_dynset object will be freed but not removed from the target set’s bindings list, causing a UAF.
4. Spray System V messages in the kmalloc-cg-96 slab cache in order to replace the freed nft_dynset object (via the msgsnd() function). Tag all the messages at offset 24 so the one corrupted with the nft_set pointer can later be identified.
5. Remove the rule created in step 2, which will remove the entry of the nft_lookup expression from the target set’s bindings list. Removing this from the list effectively writes a pointer to the target nft_set object where the original binding.list.prev member was (offset 72). Since the freed nft_dynset object was replaced by a System V message, the pointer to the nft_set will be written at offset 24 within the message data.
6. Use the userland msgrcv() function to read the messages and check which one does not have the tag anymore, as it will have been replaced by the pointer to the nft_set.
Leaking a Kernel Function Pointer
Leaking a kernel pointer requires a bit more work than leaking a pointer to an nft_set object. It requires being able to partially free objects within the target set bindings list as a means of crafting use-after-free conditions. This can be done with the partial object free primitive based on fdtable objects already described. The steps followed to leak a pointer to a kernel function are the following.
1. Increase the number of open file descriptors by calling dup() on stdout 65 times.
2. Create a target set where the expressions will be bound to (different from the one used in the nft_set address leak).
3. Create a set with an embedded nft_lookup expression bound to the target set. Since this is considered an invalid expression to be embedded into a set, the nft_lookup object will be freed but not removed from the target set bindings list, causing a UAF.
4. Spray fdtable objects in order to replace the freed nft_lookup object from step 3.
5. Create a set with an embedded nft_dynset expression bound to the target set. Since this is considered an invalid expression to be embedded into a set, the nft_dynset object will be freed but not removed from the target set bindings list, causing a UAF. This addition to the bindings list will write the pointer to its binding member into the open_fds member of the fdtable object (allocated in step 4) that replaced the nft_lookup object.
6. Spray System V messages in the kmalloc-cg-96 slab cache in order to replace the freed nft_dynset object (via the msgsnd() function). Tag all the messages at offset 8 so the corrupted one can be identified.
7. Kill all the child processes created in step 4 in order to trigger the partial free of the System V message that replaced the nft_dynset object, effectively causing a UAF to a part of a System V message.
8. Spray time_namespace objects in order to replace the System V message partially freed in step 7. The reason for using the time_namespace objects is explained later.
9. Since the System V message header was not corrupted, find the System V message whose tag has been overwritten. Use msgrcv() to read the data from it, which overlaps with the newly allocated time_namespace object. Offset 40 of the data portion of the System V message corresponds to the time_namespace.ns->ops member, which is a function table of functions defined within the kernel core. Armed with this information and the knowledge of the offset from the kernel image base to this function table, it is possible to calculate the kernel image base address.
10. Clean up the child processes used to spray the time_namespace objects.
time_namespace objects are interesting because they contain an ns_common structure embedded in them, which in turn contains an ops member that points to a function table with functions defined within the kernel core.
RIP Control
The delete member of the nft_set ops function table is executed when an item has to be removed from the set. The item removal can be done from a rule that removes an element from a set when certain criteria is matched. Using the nft command line tool, such a table, chain, set, and rule can be created as shown in the snippet below.
The snippet above shows the creation of a table, a chain, and a set that contains elements of type ipv4_addr (i.e. IPv4 addresses). Then a rule is added, which deletes the item 127.0.0.1 from the set my_set when an incoming packet has the source IPv4 address 127.0.0.1. Whenever a packet matching that criteria is processed via nftables, the delete function pointer of the specified set is called.
Therefore, RIP control can be achieved with the following steps. Consider the target set to be the nft_set object whose address was already obtained.
1. Add a rule to the table being used for exploitation in which an item is removed from the target set when the source IP of incoming packets is 127.0.0.1.
2. Partially free the nft_set object from which the address was obtained.
3. Spray System V messages containing a partially fake nft_set object containing a fake ops table, with a given value for the ops->delete member.
4. Trigger the call of nft_set->ops->delete by locally sending a network packet to 127.0.0.1. This can be done by simply opening a TCP socket to 127.0.0.1 at any port and issuing a connect() call.
Escalating Privileges
Once control of the RIP register is achieved and thus code execution can be redirected, the last step is to escalate privileges of the current process and drop to an interactive shell with root privileges.
A way of achieving this is as follows:
1. Pivot the stack to a memory area under control. When the delete function is called, the RSI register contains the address of the memory region where the nftables register values are stored. The values of such registers can be controlled by adding an immediate expression to the rule created to achieve RIP control.
2. Since the nftables register memory area is not big enough to fit a ROP chain that overwrites the MODPROBE_PATH global variable, the stack is then pivoted again to the end of the fake nft_set object.
The stack pivot gadgets and ROP chain used can be found below.
// ROP gadget to pivot the stack to the nftables registers memory area
0xffffffff8169361f: push rsi ; add byte [rbp+0x310775C0], al ; rcr byte [rbx+0x5D], 0x41 ; pop rsp ; ret ;
// ROP gadget to pivot the stack to the memory allocation holding the target nft_set
0xffffffff810b08f1: pop rsp ; ret ;
When the execution flow is redirected, the RSI register contains the address of the nftables registers memory area. This memory can be controlled and thus is used as a temporary stack, given that the area is not big enough to hold the entire ROP chain. Afterwards, using the second gadget shown above, the stack is pivoted towards the end of the fake nft_set object.
// ROP chain used to overwrite the MODPROBE_PATH global variable
0xffffffff8148606b: pop rax ; ret ;
0xffffffff8120f2fc: pop rdx ; ret ;
0xffffffff8132ab39: mov qword [rax], rdx ; ret ;
It is important to mention that the stack pivoting gadget that was used performs memory dereferences, requiring the address to be mapped. While experimentally the address was usually mapped, it negatively impacts the exploit reliability.
Wrapping Up
We hope you enjoyed this reading and could learn something new. If you are hungry for more make sure to check our other blog posts.
We wish y’all great Christmas holidays and a happy new year! Here’s to a 2023 with more bugs, exploits, and write-ups!