Public password dumps in ELK

Original text by Marc Smeets

Passwords, passwords, passwords: end users and defenders hate them, attackers love them. Despite the recent focus by defenders on stronger forms of authentication, passwords are still the predominant way to get access to systems. And due to end users’ habit of reusing passwords, and the multitude of public leaks in the last few years, they serve as an important attack vector in the red teamer’s arsenal. Find accounts of target X in the many publicly available dumps, try these passwords or logical iterations of them (Summer2014! might very well be Winter2018! at a later moment) on a webmail or other externally accessible portal, and you may have gained initial access to your target’s systems. Can’t find any accounts of your target in the dumps? No worries: your intel and recon may give you private email addresses that may very well share a password with their counterparts at the target.

Major public password dumps

Recently, two major password dumps were leaked publicly: Exploit.in and Leakbase (also known as BreachCompilation). This resulted in many millions of username-password combinations being leaked. The leaks come in the form of multiple text files, neatly indexed in alphabetical order for ‘quick’ lookup. But the lookup remains extremely slow, especially since the index is on the username instead of the domain-name part of the email address. So, I wanted to re-index them, store them in a way that allows for quick lookup, and make the lookup interface easily accessible. I went with ELK as it ticks all the boxes. I’ll explain in this blog how you can do the same. All code can be found on our GitHub.

A few downsides of this approach

Before continuing I want to address a few shortcomings of this approach:

  • Old data: one can argue that many of the accounts in the dumps are many years old and therefore not directly useful. Why go through all the trouble? Well, I’d rather have the knowledge of old passwords and then decide whether I want to use them, than not know them at all.
  • Parsing errors: the input files of the dumps are not nicely formatted. They contain lots of errors in the form of different encodings, control characters, inconsistent structure, etc. I want to ‘sanitize’ the input to a certain degree to filter out the most common errors in the dumps. But that introduces the risk of filtering out too much. It’s about finding a balance. Overall, I’m OK with losing some input data.
  • ELK performance: Elasticsearch may not be the best solution for this. A regular SQL database may actually be better suited to store the data, as we generally know how the data is formatted, and it could also be faster at lookups. I went with ELK for this project because I wanted some more mileage with the ELK stack under my belt. Also, the performance is still good enough for me.

Overall process flow

Our goal is to search for passwords in our new Kibana web interface. To get there we need to do the following:

  1. Download the public password dumps, or find another way to have the input files on your computer.
  2. Create a new system/virtual machine that will serve as the ELK host.
  3. Setup and configure ELK.
  4. Create scripts to sanitize the input files and import the data.
  5. Do some post import tuning of Elasticsearch and Kibana.

Let’s walk through each step in more detail.

Getting the password dumps

Pointing you to the downloads is not going to work as the links quickly become obsolete, while new links appear. Search and you will find. As said earlier, both dumps from Exploit.in and Leakbase became public recently. But you may also be interested in dumps from ‘Anti Public’, LinkedIn (if you do the cracking yourself) and smaller leaks less broadly discussed in the news. Do note that there is overlap in the data from different dumps: not all dumps have unique data.

Whatever you download, you want to end up with (a collection of) files that have their data stored as username:password. Most of the dumps have it that way by default.
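If you want a quick sanity check that a downloaded file is roughly in that shape, a small sketch like the following can count the usable lines (GNU grep assumed; the regex is a deliberately loose approximation):

```shell
# Count lines that already look like email:password (GNU grep;
# -a treats binary-looking files as text, -c counts matching lines)
count_valid() {
  grep -acE '^[^:@]+@[^:@]+:.' "$1"
}

printf 'alice@example.com:Secret1\nnot a credential line\n' > /tmp/sample.txt
count_valid /tmp/sample.txt   # -> 1
```

Comparing this count against the total line count of a file gives you a first feel for how dirty a given dump is.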

Creating the new system

More is better when talking about hardware, in CPU, memory and disk alike. I went with a virtual machine with 8 cores, 24GB RAM and about 1TB of disk. The cores and memory are really important during the import of the data. The disk space required depends on the size of the dumps you want to store. To give you an idea: storing Leakbase using my scripts requires about 308GB in Elasticsearch, Exploit.In about 160GB.

Operating-system wise I went with a rather default Linux server. A small note of convenience here: set up the disk using LVM so you can easily grow it if you require more space later on.

Setup and configure ELK

There are many manuals for installation of ELK. I can recommend @Cyb3rWard0g’s HELK project on GitHub. It’s a great way to have the basics up and running in a matter of minutes.

git clone https://github.com/Cyb3rWard0g/HELK.git
./HELK/scripts/helk_install.sh

There are a few things we want to tune that will greatly improve the performance:

  • Disable swap, as it can really kill Elasticsearch’s performance:
    sudo swapoff -a
    Also remove any swap mounts from /etc/fstab.
  • Increase the JVM’s memory usage to about 50% of available system memory:
    Check the JVM options files at /etc/elasticsearch/jvm.options and /etc/logstash/jvm.options, and change the values of -Xms and -Xmx to half of your system memory. In my case, for Elasticsearch:
    -Xms12g -Xmx12g
    Do note that the Elasticsearch and Logstash JVMs work independently, and therefore their combined values should not exceed your system memory.
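As a sketch of that 50% rule, here is a hedged helper that suggests matching -Xms/-Xmx values from a given MemTotal (in kB, as reported by /proc/meminfo on Linux); the 31g cap follows the common Elasticsearch advice to stay below the compressed-oops threshold:

```shell
# Hedged helper: suggest -Xms/-Xmx at ~50% of system RAM, capped at 31g
# (common Elasticsearch advice to stay below the compressed-oops limit).
half_ram_heap() {
  total_kb="$1"   # e.g. "$(awk '/MemTotal/ {print $2}' /proc/meminfo)"
  gb=$(( total_kb / 1024 / 1024 / 2 ))
  if [ "$gb" -gt 31 ]; then gb=31; fi
  if [ "$gb" -lt 1 ]; then gb=1; fi
  echo "-Xms${gb}g -Xmx${gb}g"
}

half_ram_heap 25165824   # 24 GB host -> -Xms12g -Xmx12g
```

The values it prints are what you would place in the jvm.options files mentioned above, one file per JVM.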

We also need to instruct Logstash about how to interpret the data. I’m using the following configuration file:

root@office-elk:~# cat /etc/logstash/conf.d/passworddump2elk.conf
input {
    tcp {
        port   => 3515
        codec  => line
    }
}

filter{
   dissect {
     mapping => { "message" => "%{DumpName} %{Email} %{Password} %{Domain}" }
   }
   mutate{
      remove_field => [ "host", "port" ]
   }
}

output {
    if "_dissectfailure" in [tags] {
       file {
          path => "/var/log/logstash/logstash-import-failure-%{+YYYY-MM-dd}.log"
          codec => rubydebug
       }
   } else {
      elasticsearch{
         hosts => [ "127.0.0.1:9200" ]
         index => "passworddump-%{+YYYY.MM.dd}"
      }
   }
}

As you can see in the dissect filter we expect the data to appear in the following format:

DumpName EmailAddress Password Domain

There is a very good reason why we also want the Domain part of the email address as a separate indexable field: as red teamers you tend to search for accounts tied to specific domains/companies. Searching for <anything>@domainname is an extremely CPU-expensive search to do. So, we spend some more CPU power during the import to get quicker lookups in the end. We also store the DumpName, as this might come in handy in some cases. And for troubleshooting we store all lines that didn’t parse correctly, using the “_dissectfailure” in [tags] conditional.

Note: using a dissect filter instead of grok gives us a small performance increase, but if you want you can accomplish the same with grok.

Don’t forget to restart the services for the config to become active.

Sanitize and import the data

The raw data that we get from the dumps is, generally speaking, organized. But it does contain errors, e.g. fragments of HTML code as passwords, weirdly long lines, runs of multiple spaces, passwords in languages we are not interested in, and non-printable control characters.

I’ve chosen to do the following cleaning actions in my script. It’s done with simple cut, grep and awk commands, so you can easily tune it to your preference:

  • Remove spaces from the entire line
    This carries the risk of losing passwords that contain a space. In my testing I concluded that the vast majority of spaces in the input files stem from initial parsing or saving errors, and only a tiny fraction could perhaps be a (smart) user who had a space in their password.
  • Convert to all ASCII
    You may find the most interesting character sets in use in the dumps. I’m solely interested in pure ASCII. This is a rather bold way to sanitize that works for me; you may want to do it differently.
  • Remove non-printable characters
    Again, you may find the most interesting characters in the dumps. During my testing I kept encountering control characters I had never even heard of, so I decided to remove all non-printable characters altogether. But whatever you decide to keep, you really want to get rid of all the control characters.
  • Remove lines without proper email address format
    This turns out to be a rather efficient way of cleaning. Do note that this means that the occasional username@gmail..com will also be purged.
  • Remove lines without a colon, empty username or empty password
    In my testing this turned out to be rather effective.
  • Remove really long lines (60+ characters)
    You will purge the occasional extremely long email address or password, but in my testing this number appeared to be near zero. Especially for corporate email addresses, where most of the time a strict naming policy is in place.
  • Remove Russian email addresses (.ru)
    The amount of .ru email addresses in the dumps, and the rather unlikely case that they would be interesting, made me purge them altogether. This saves import time, lookup time and a considerable amount of disk space.

After purging the irrelevant data, we still need to reformat the data in a way that Logstash can understand. I use an AWK one-liner for this, which is explained in the comments of the script. Finally, we send it to the Logstash daemon, which does the formatting and sends it along to Elasticsearch.
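The cleaning and reformatting steps described above can be sketched as a single shell pipeline. This is an illustration, not the actual sanitizePasswordDump.sh from the repository; the regular expressions and field handling here are my assumptions:

```shell
# Sketch of the cleaning and reformatting described above. This is NOT the
# actual sanitizePasswordDump.sh from the repository; the regexes and field
# handling here are illustrative assumptions.
sanitize_and_format() {
  dump="$1"   # dump name, e.g. LeakBase
  # 1) strip spaces  2) drop non-printables  3) keep email:password lines
  # 4) drop .ru addresses  5) drop 60+ char lines, then emit:
  #    DumpName Email Password Domain
  tr -d ' ' \
    | tr -cd '[:print:]\n' \
    | grep -aE '^[^:@]+@[^:@]+\.[^:@]+:.' \
    | grep -av '\.ru:' \
    | awk -F: -v dump="$dump" 'length($0) < 60 {
        split($1, a, "@")           # a[2] holds the domain part
        print dump, $1, $2, a[2]
      }'
}

printf 'user@example.com:Secret1\nbroken line without colon\n' \
  | sanitize_and_format LeakBase
# -> LeakBase user@example.com Secret1 example.com
```

Note that the awk step only takes the text up to the first colon as the password; passwords containing colons are one of the edge cases the real script has to make a call on.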

It’s important to note that even though we scrutinized the input data rather heavily earlier on, the AWK reformatting and the Logstash import can still filter out lines that contain errors.

Overall, we lose some data that might actually have been usable with different sanitizing. When using my scripts on the Leakbase dump, you end up with 1.1 billion records in Elasticsearch, while the import data contains roughly 1.4 billion records. For the Exploit.In dump it’s about 600 out of 800 million. It’s about finding a balance that works for you. I’m looking forward to your pull requests for better cleaning of the import data.

To kick off the whole importing you can run a command like:

for i in $(find ./BreachCompilation/data/*.txt -type f); do ./sanitizePasswordDump.sh $i LeakBase; done

The 2nd parameter (LeakBase) is the name I give to this dump.

Don’t be surprised if this command takes the better part of a day to complete; run it in a screen session.

Post import tuning

Now that we are importing data into Elasticsearch, there is a small performance tweak we can make: disable indexing of the ‘message’ field. The message field contains the entire message that was received via Logstash. An index requires CPU power and uses (some) disk space, and as we already index the sub-fields, indexing the whole message is rather useless. You could also choose to not store it at all, but the general advice is to keep it around; you never know when it becomes useful. With the following command we keep the field but remove its indexing.

curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_template/passworddump_template -d '
{
 "template" : "passworddump-*",
 "mappings" : {
  "logs": {
   "properties" : {
    "message" : {
     "type": "text", "store": true, "index": false, "fields":{"keyword":{"type":"keyword","ignore_above":256}}
    }
   }
  }
 }
}'

Now the only thing left to do is create the actual index pattern. We do this via the Kibana web interface:

  • click ‘Management’ and fill in the Index pattern: in our case passworddump-*.
  • Select the option ‘I don’t want to use the Time Filter’ as we don’t have any use for searching on specific time periods of the import.

Important note: if you create the index before you have altered the indexing options (previous step), your indexing preferences are not stored; they are only set at index creation time. You can verify this by checking whether the ‘message’ field is searchable; it should not be. If it is, remove the index, store the template again and recreate the index.

There are a few more things that can make your life easier when querying:

  1. Change the advanced setting discover:sampleSize to a higher value to have more results presented.
  2. Create a view with all the relevant data in one shot:
    By default Kibana shows the raw message on screen, which isn’t very helpful. Go to ‘Discover’, expand one of the results and hit the ‘Toggle column in table’ button next to the fields you want displayed (e.g. DumpName, Email, Password and Domain).
  3. Make this a repeatable view
    Now hit ‘Save’ in the top bar, give it a name, and hit save. This search is now saved and you can always return to this easy view.
  4. Large data search script
    For domains that have more than a few hits, or for cases where you want to redirect output to a file for easy import into another tool, Kibana is not the easiest interface. I’ve created a little script that can help you. It requires 1 parameter: the exact Domain search string you would give to Elasticsearch when querying it directly. It returns a list of username:password pairs for your query.

Good luck searching!

Automated AD and Windows test lab deployments with Invoke-ADLabDeployer

Original text by Marc Smeets

We are happy to introduce Invoke-ADLabDeployer: a PowerShell project that helps you to quickly deploy a virtual test environment with Windows servers, Windows desktops, Office, Active Directory and a networking setup with multiple broadcast segments, all running on your local Hyper-V environment.

It is an in-house developed tool that we use heavily during our red teaming engagements. We use it to quickly spin up lab environments that resemble the target environment. This way we can better develop and test our attacks, as well as investigate what artefacts we might leave behind.

We feel it is time to give back to the community and allow other red and blue teams to benefit from this. You can read more about its functionality and why we developed it below. Or you can go straight to the code and manual on our GitHub.

Background and goal

During our red teaming engagements, we encounter many different setups. To test our attacks and to know what trails we leave behind, we need to test in setups that mimic the client’s environment as much as possible. But we never know what exact environment we will encounter. It could be all Win10 desktops with Office 2016 and 2016 servers, it could be Win7 with Office 2010 and 2008R2 servers, or it could be any combination of those. Also, x86 vs x64 can make a big difference for our payloads.

We prefer more time for hacking and less time for deploying test labs. The old method of manually cloning prepared virtual machines, manually going through the networking and AD setup, manually installing Office and other client-side tools… sigh… it’s time-consuming and error-prone. And if you become sloppy by using older test machines, you might expose yourself with poor OPSEC.

This is where Invoke-ADLabDeployer comes in. It does the heavy lifting for you. In a matter of minutes, you have a freshly deployed lab with multiple Windows systems, ranging from Win7 to Win10, Server 2008R2 to 2016, all domain joined, with Office version 2010, 2013 or 2016 installed, alongside other client-side tools you want installed, as well as a networking setup with multiple broadcast segments. It’s not a fixed test lab; you define the exact lab setup in a config file.

Differences with other tooling

There are other projects out there that do somewhat similar things. We reviewed those with our specific demands in mind. We need something that:

  • Allows us to mimic the client’s environment as much as possible.
  • Is relatively lightweight in code base, so we can easily add functionality to it if required.
  • Has support for all Windows OS versions currently encountered at clients, specifically support for as old as Windows 7 and Server 2008R2.
  • Can deploy Active Directory, and have systems join it.
  • Can install Office 2010, 2013 and 2016.
  • Can install other client-side software.
  • Can deploy multiple subnets so we can test network level attacks while not putting everything in the same broadcast segment (keeping it like real target networks).
  • Keeps resource usage low.
  • Is able to deploy a large number of systems, e.g. from 1 to 25.
  • Is cost efficient.

We searched for other solutions, but found none that fitted our exact requirements. There are a few other PowerShell projects for test lab deployment, like AutomatedLab, ws2016Lab, and Lability. And they are awesome for the things they were developed for. But for our goal they either have a lot of dependencies, huge code bases, no support for as low as Win7 and 2008R2, or require all labs to be deployed via DSC, which doesn’t provide the flexibility we want.

There are configuration management tools like Ansible and Puppet, but they don’t deploy operating systems and networks, or they do so in an unfriendly way. There are image deployment tools like WIM and PXE boot, but they lack options for dynamic post-deployment configuration. We could take snapshots of virtual machines, but this requires many manual steps, which is time-consuming and error-prone.

Of course there is also cloud computing, which some think would be excellent for this use case. Well, we think the downsides are too big. We could only find Azure and AWS doing any mature form of Windows deployments. But they don’t really do client Windows versions (Azure can, with an expensive subscription), don’t really allow you to pick the exact patch level of your systems, do very icky things at the network level (multicast and broadcast are non-existent in the cloud), and can become very(!) expensive if you deploy a larger network.

So, whatever solution we chose, we would still need to either script on the base OS deployment phase, or script on the post deployment phase. We could combine solutions. But after a quick try we gave up crying as the number of extra tools required and overall overhead was just insane.

It seemed we needed to develop something easy and lightweight ourselves. And when you think of it, it’s not that hard to do what we want to do. Especially when you are using modern Hyper-V with its many smart tricks to keep resource usage low, like differencing disks and dynamic memory.

Technical requirements

Invoke-ADLabDeployer relies heavily on Hyper-V, sysprep and (remote) Powershell for the deployment and configuration. It runs on a local Windows host with Hyper-V. In our lab we are using an Intel Skull NUC with maximum RAM running Server 2016. It does the job perfectly for our usage.

You may try running it on older Windows versions; just make sure to install WMF5 or later on your host node. For the guests you don’t need PowerShell 5: an important design decision for us was to support as low as PowerShell version 2 on the deployed machines. In theory this would make it able to deploy 2003 and XP machines as well. We have not tested this yet.

Base image preparation and usage

Before you can deploy your first network, you need to have base images prepared. The script does not do this for you. Some other tools do this for you, and I may include this in future versions. But at this moment you need to create the base images yourself.

This base image preparation means you create a new virtual machine, install the Windows version you want on it, update it to the level you want, enable remote PowerShell, do any other customisation you want all your future deployments to have, and power it off using a sysprep command. You do this for every base OS you want in your library. This task might take you a few evenings to complete, of which the installation of Windows updates takes the most time. But as I’ve included the correct unattend.xml files in the repository, you can at least skip the endless hours of debugging sysprep 🙂 Check the readme on our GitHub for a detailed walkthrough of this very important preparation.

Running Invoke-ADLabDeployer

When the base image preparation is done, all you need to do is:

  • Define the lab in an XML config file
  • Import and run Invoke-ADLabDeployer
  • Wait a few minutes

The XML config file defines the details of the networks, the systems and the setup of the Active Directory domain. The settings for the network and ADDS are pretty straightforward.

Invoke-ADLabDeployer config of network and AD

In the config file you also define which systems you deploy and how they are configured. Here is an example of 2 systems: “server1” is a 2012R2 machine that will be running the Domain Controller (as defined in the ADDS section), and “server2” is a non-domain-joined system. Both install Chrome, 7zip and Notepad++, but server2 will also have a file copied (the installer of Microsoft ATA).

Config of 2 servers: ‘server1’ and ‘server2’
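Since the screenshot above is only illustrative, here is a rough idea of what such a config could look like in XML. The tag names below are assumptions on my part, picked to match the options discussed in this post; the authoritative schema and full option list are in the GitHub repository:

```xml
<!-- Illustrative sketch only; tag names are assumed, see the GitHub
     repository for the real schema and full option list -->
<Systems>
  <System>
    <Name>server1</Name>
    <OS>Windows2012R2</OS>
    <SoftwareInstall>chrome;7zip;notepadpp</SoftwareInstall>
  </System>
  <System>
    <Name>server2</Name>
    <OS>Windows2012R2</OS>
    <SoftwareInstall>chrome;7zip;notepadpp</SoftwareInstall>
    <FileCopy>ATA_installer.exe</FileCopy>
  </System>
</Systems>
```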

There are many more options you can use in the XML config file to automate the tuning of systems. You can find all options on GitHub. But I would like to point out 2 more to give you an idea: <SkipDeploy> and the installation of Office.

<SkipDeploy> is a tag that you can insert per system allowing you to keep the system config in the file, but not have it deployed. This makes it easier to use the same config when you only want to deploy a subset of systems.

The automated installation of Microsoft Office is done using the <OfficeInstaller> and <OfficeConfig> tags. OfficeInstaller points to the actual installer EXE copied from the contents of the Office ISO. OfficeConfig points to the config file required for automated Office installs. I’ve included these Office config files in the repository.

Config options SkipDeploy and for the installation of Office

You can test the config file by running with the -CheckConfigOnly parameter. It either tells you the errors in your config, or presents you with an output similar to this.

Checking the config

You can catch the output in hashtables and go through all the details of the config if you want.

Checking the config with local hashtables

When running the script you will get output similar to the following screenshots. First it reads the config and creates the networks and systems.

Read config, create network and systems, and boot

Then it will install the Active Directory domain controller, check that it was successful, and have clients join the domain.

Domain actions

Finally it will iterate through all the systems and perform software installation (including Office) and final tuning based on smart things and config file parameters.

Software installation and final configuration

The total deployment speed depends heavily on your hardware specs. But on our hardware a single system is deployed in a few minutes, while a 10-host network, with AD, and all hosts with Office and other tools installed, can take up to 40 minutes. Currently the code does not do any parallelisation. I might implement this in the future, but to be honest deployment time is of lesser concern for us.

You can find the code on the project’s GitHub. You can expect more updates in the near future. I very much welcome your code submissions. Or you can let me know your ideas via Twitter.

Thanks to University of Amsterdam students Fons Mijnen and Vincent van Dongen for following up on our initial idea with their research and PoC.

Identifying Clear Text LDAP binds to your DC’s

Original text by RUS

If I told you that there was a 90%-plus chance that your Domain Controllers allowed receiving credentials in clear text over your network, you probably wouldn’t believe me. If I went a step further and told you that nearly half of the customers I visit for AD security assessments not only allowed them, but had extremely privileged accounts, such as Domain Admins, with credentials traversing the network in clear text, you would probably think “that wouldn’t happen on my network”. Well, that’s what they all told me too 🙂

With a little bit of effort, you can reveal this common scenario very easily.

[Update] If you have 2008 R2, you need this hotfix to ensure the events have the flag which identifies Simple vs Unsigned binds.

A Simple Check

The first step to understanding if you’re affected is to look for Event IDs 2886 and 2887 in your Directory Service log. If any of your Domain Controllers have the 2886 event present, it indicates that LDAP signing is not being enforced by your DC and it is possible to perform a simple (clear text) LDAP bind over a non-encrypted connection. This configuration is controlled by the security option “Domain controller: LDAP server signing requirements”. I nearly always find this setting set to “None”.

Our next port of call is the 2887 event. This event occurs every 24 hours and reports how many unsigned and clear text binds have occurred against this DC. If you have any numbers greater than zero, you have some homework to do. Luckily, the rest of this post will help you do just that.

The core of the issue is this: when an application performs a simple LDAP bind, the username and password are transmitted in clear text in the very first packet. The DC doesn’t even get a chance to prevent this exposure from occurring. If this connection is not encrypted at a lower layer such as TLS or IPsec, it may be intercepted, and a bad day may soon follow.

OK, how badly am I affected?

Thankfully, we have the ability to increase the “LDAP Interface Events” diagnostic level on a DC to report when these insecure binds occur. The commands for enabling and disabling it are below. Warning: this setting can generate a large number of events in the Directory Service event log. It will also log a number of other LDAP interface errors which may seem extremely alarming; these are normal, so don’t freak out too much when you see them. I highly recommend enabling this diagnostic level for a very short period of time (like 10 minutes) during your initial discovery phase. Once you have identified (and remediated) the noisiest offenders, you can begin enabling it for longer periods each time.

 # Enable Simple LDAP Bind Logging 
Reg Add HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics /v "16 LDAP Interface Events" /t REG_DWORD /d 2


 # Disable Simple LDAP Bind Logging.
Reg Add HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics /v "16 LDAP Interface Events" /t REG_DWORD /d 0

Note: you may need to replace the double quotes after copy-paste.

Once we have 2889 events logged, we can examine them and determine which IP addresses and user accounts are binding insecurely. Looking through them all one by one may take a while; the following are some resources to help you along.

Custom Event View

Download here -> LDAP Signing Events Custom View.xml

This is a custom event view that can help you easily isolate “LDAP Signing Events” on your DCs. Once imported, it will create a nice filtered view of all of the relevant LDAP signing events (2886 through 2889). Follow these steps to import it.

  1. Open Event Viewer.
  2. Right-click “Custom Views”, then select “Import Custom View”.
  3. Provide a path to the downloaded .xml file. Feel free to rename it/change the description and location as you see fit.
  4. If you are doing this on a management server (and you should be) you will get an error about the service channel being missing. Once you bind to the appropriate Domain Controller, it will show the appropriate events.

If there are no events, awesome! But as per the above, if we have 2886 and 2887 events, we need to investigate. With diagnostics enabled, the 2889 events will also show up (possibly in large quantities) in this view.

Insecure LDAP Binds Query Script

Download here -> Query-InsecureLDAPBinds.ps1

The second resource is a simple PowerShell script that will parse the logged 2889 events on your DC and extract the relevant data into a nice .csv. By default, it will only query for 2889 events that occurred in the past 24 hours. You can override this if you wish to go back further (or shorter, if there is lots of noise).

 .\Query-InsecureLDAPBinds.ps1 -ComputerName dc1.contoso.com -Hours 24

The output .CSV will include IP addresses, ports, usernames and the binding type. This can help quickly narrow down the affected user accounts and begin your remediation. It should look like the following:

 "IPAddress","Port","User","BindType"
"10.0.0.3","60901","CONTOSO\Administrator","Simple"
"[::1]","65445","CONTOSO\Administrator","Simple"
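As a side note, if you want a quick deduplicated list of the accounts from such a CSV on a Linux box, something like this works (a sketch; it assumes the exact four-column, fully quoted layout shown above):

```shell
# Sketch: list the distinct accounts found in the script's CSV output.
# Assumes the exact four-column, fully quoted layout shown above.
distinct_users() {
  tail -n +2 "$1" \
    | awk -F'","' '{gsub(/"/, "", $3); print $3}' \
    | sort -u
}

cat > /tmp/binds.csv <<'EOF'
"IPAddress","Port","User","BindType"
"10.0.0.3","60901","CONTOSO\Administrator","Simple"
"[::1]","65445","CONTOSO\Administrator","Simple"
EOF
distinct_users /tmp/binds.csv   # -> CONTOSO\Administrator
```

Sorting the same data by count instead of deduplicating quickly shows you the noisiest offenders to remediate first.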

Fixing Your Applications

Sometimes fixing the offending applications is as simple as ticking a “Secure Connection” or “Secure Bind” checkbox inside the application’s config. Sometimes it may require certificates on your DCs to allow TLS binds to them over TCP 636.

If the application has no way of binding securely, throw it out. OK, sometimes that’s not possible; but if your application vendor won’t provide a secure way to do LDAP binds, you will need to get a little extreme and encrypt the whole TCP stream between the application and the DC using IPsec with ESP. Thankfully most modern applications have some ability to perform secure LDAP connections, and we don’t need to go this far.

Once you have cleaned up the main offenders (both by privilege and by total count), you can repeat the diagnostics process for a little longer each time to track down the overnight scheduled tasks, processes and other connections that happen only periodically. It’s a slow process, but required before you block these binds entirely.

Conclusion

Hopefully you stopped reading this post early because your DCs already “Require Signing” in their LDAP configuration. If not, you should now be well aware of the culprits in your environment and on your way to remediating them. In a future post I will explain the process of actually preventing these weak LDAP binds, and also monitoring for applications still attempting them (and exposing their credentials in the process).

Abusing SeLoadDriverPrivilege for privilege escalation

Original text by OSCAR MALLO

0x01 – Preamble

In Windows operating systems, it is well known that assigning certain privileges to user accounts without administration permissions can result in local privilege escalation attacks. Although Microsoft’s documentation is quite clear about it, throughout several pentests we have found privilege assignment policies applied to ordinary users that we have been able to exploit to take full control over a system.

Today, we will analyze the impact associated to the assignment of the “Load and unload device drivers” policy, which specifies the users who are allowed to dynamically load device drivers. The activation of this policy in the context of non-privileged users implies a significant risk due to the possibility of executing code in kernel space.

Although this is a well-known technique, it cannot be found in post-exploitation and privilege escalation tools, and there are no tools which carry out automatic exploitation either.

0x02 – SeLoadDriverPrivilege and Access Tokens

The “Load and unload device drivers” policy is accessible from the local group policy editor (gpedit.msc) under the following path: “Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> User Rights Assignment”

Given its implications, the default values of this policy include only the “Administrators” and “Print Operators” groups. The following table shows the default values, according to the documentation:

Effective default policy values

NOTE: the Print Operators group may seem quite innocuous to the naked eye; however, it has the ability to load device drivers on domain controllers, as well as manage printer-type objects in Active Directory. Additionally, this group has the capability to authenticate to a domain controller, so it is of special interest to verify users’ membership in this group.

Assigning this policy enables “SeLoadDriverPrivilege” in the user’s access tokens, consequently allowing device drivers to be loaded.

An access token is an object that describes the security context of a process or thread, and is assigned to every process created in the system. Among other things, it specifies the SID (security identifier) that identifies the user account, the SIDs of the groups the user is a member of, and the list of privileges assigned to the user or to those groups. This information is essential to the operating system’s access control model and is verified every time an attempt is made to access any securable object in the system.

To understand the exploitation procedure (explained later), keep in mind that starting with Windows Vista, the operating system implements a privilege separation technique called User Account Control, better known as UAC. In summary, this security measure is based on the principle of least privilege, limiting the privileges of certain user processes through the use of ‘restricted access tokens’, which omit certain privileges assigned to the user.

With this in mind, we will analyze the process of exploiting this privilege to load a driver from a user account without administration permissions.

0x03 – Exploitation procedure

Obtaining a shell with an unrestricted access token

In order to obtain an unrestricted access token we have the following options:

  • Use the “Run as administrator” functionality to elevate any process started by the user. Using this mechanism in the context of a non-administrator user still yields that user’s unrestricted token.
  • Use the elevate tool, which allows starting an elevated process:

elevate.exe -c cmd.exe

  • Compile an application that includes a manifest indicating the use of an unrestricted token, which will trigger the UAC prompt when started.
  • Use some technique to bypass UAC.

SeLoadDriverPrivilege privilege activation

Once we have an unrestricted token, we can see that by default SeLoadDriverPrivilege is present in the privilege list of the user’s access token, but disabled. To make use of the privilege it is necessary to activate it explicitly. To accomplish this, we first acquire a reference to the privilege using the LookupPrivilegeValue() API, and then activate it with AdjustTokenPrivileges().

TOKEN_PRIVILEGES tp;
LUID luid;

if (!LookupPrivilegeValue(
        NULL,          // lookup privilege on local system
        lpszPrivilege, // privilege to look up
        &luid))        // receives LUID of privilege
{
    printf("LookupPrivilegeValue error: %u\n", GetLastError());
    return FALSE;
}

tp.PrivilegeCount = 1;
tp.Privileges[0].Luid = luid;
if (bEnablePrivilege)
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
else
    tp.Privileges[0].Attributes = 0;

// Enable the privilege (or disable all privileges).
if (!AdjustTokenPrivileges(
        hToken,
        FALSE,
        &tp,
        sizeof(TOKEN_PRIVILEGES),
        (PTOKEN_PRIVILEGES)NULL,
        (PDWORD)NULL))
{
    printf("AdjustTokenPrivileges error: %u\n", GetLastError());
    return FALSE;
}

Driver Load

Loading drivers from user space can be done using the native NtLoadDriver API, exported by ntdll.dll; its prototype is detailed below:

NTSTATUS NtLoadDriver(
    _In_ PUNICODE_STRING DriverServiceName
);

This function takes a single input parameter, DriverServiceName: a pointer to a string in UNICODE format that specifies the registry key defining the driver’s configuration:

\Registry\Machine\System\CurrentControlSet\Services\DriverName

Under the DriverName key, different configuration parameters can be defined. The most relevant are:

  • ImagePath: a REG_EXPAND_SZ value specifying the driver’s path. In this context, the path should be a directory the non-privileged user can modify.
  • Type: a REG_DWORD value indicating the type of the service. For our purpose, it should be set to SERVICE_KERNEL_DRIVER (0x00000001).

One thing to keep in mind is that the registry key passed to NtLoadDriver is by default located under HKLM (HKEY_LOCAL_MACHINE), which grants modification permissions only to the Administrators group. Although the documentation indicates the use of the key “\Registry\Machine\System\CurrentControlSet\Services”, the NtLoadDriver API does not restrict paths under HKCU (HKEY_CURRENT_USER), which can be modified by non-privileged users.

Taking this into account, when invoking the NtLoadDriver API it is possible to use a registry key under HKCU (HKEY_CURRENT_USER), specified with a path of the following format:

\Registry\User\{NON_PRIVILEGED_USER_SID}\{KeyName}
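As an illustration of this path format only, here is a tiny Python sketch (the helper name, SID, and key name are made up for the example, not part of any tool mentioned here) that assembles the native registry path for a per-user driver key:

```python
def build_driver_key_path(sid, key_name):
    """Build the native-format registry path that NtLoadDriver expects
    when the driver configuration lives under the user's hive (HKCU)."""
    return "\\Registry\\User\\{}\\{}".format(sid, key_name)

# hypothetical SID and key name, for illustration only
print(build_driver_key_path("S-1-5-21-111-222-333-1001", "CapcomSvc"))
# → \Registry\User\S-1-5-21-111-222-333-1001\CapcomSvc
```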

The account’s SID value can be obtained programmatically using the GetTokenInformation API, which retrieves information about an access token. Alternatively, the SID can be obtained with the “whoami /all” command, or through the following PowerShell instructions:

(New-Object System.Security.Principal.NTAccount("USER_ACCOUNT_NAME")).Translate([System.Security.Principal.SecurityIdentifier]).Value
# In the context of a domain user:
Get-ADUser -Identity 'USER_ACCOUNT_NAME' | Select-Object SID

0x04 – Proof of concept

To abuse the driver loading privilege, a PoC application has been created in order to automate the procedure described above.

The starting point is a non-privileged user (“test”) to whom the “Load and unload device drivers” privilege has been assigned.

Load and unload driver policy assigned to a non privileged user

As discussed earlier, initially the user will be assigned a restricted token, which does not include the SeLoadDriverPrivilege privilege.

Initial verification of privileges assigned to the user test (restricted token)

If you have an interactive session, you can elevate the token by accepting the UAC prompt; otherwise, you will have to use some UAC bypass technique.

In this specific case we assume that there is an interactive session in the system. By using the tool elevate, a new terminal with an associated unrestricted token can be spawned, after accepting the UAC prompt.

As you can see, the “SeLoadDriverPrivilege” is present in the user’s access token, but disabled.

Obtaining an unrestricted token in a non privileged account

At this point we can use the PoC tool EOPLOADDRIVER (https://github.com/TarlogicSecurity/EoPLoadDriver/), which will allow us to:

  • Enable the SeLoadDriverPrivilege privilege
  • Create the registry key under HKEY_CURRENT_USER (HKCU) and set driver configuration settings
  • Execute the NTLoadDriver function, specifying the registry key previously created

The tool can be invoked as shown below:

EOPLOADDRIVER.exe RegistryKey DriverImagePath

The RegistryKey parameter specifies the registry key created under HKCU (“\Registry\User\{NON_PRIVILEGED_USER_SID}\{KeyName}”), while DriverImagePath specifies the location of the driver in the file system.

EOPLOADDRIVER tool execution

It is possible to verify the driver was loaded successfully using the DriverView tool.

Driver load verification with DriverView

0x05 – Exploitation

Once we are able to load a driver from an unprivileged user account, the next step is to identify a signed driver with a vulnerability that allows privilege elevation.

For this example, we have selected the Capcom.sys driver (SHA1: c1d5cf8c43e7679b782630e93f5e6420ca1749a7), which has a ‘functionality’ that allows executing code in kernel space from a function defined in user space.

This driver has different public exploits:

  • ExploitCapcom from Tandasat – https://github.com/tandasat/ExploitCapcom – This exploit allows you to obtain a Shell as SYSTEM
  • PuppetStrings by zerosum0x0 – https://github.com/zerosum0x0/puppetstrings – Allows hiding running processes

Following the procedure described above, we need to spawn an elevated terminal in order to obtain an unrestricted token. After this, we can execute the PoC tool (EOPLOADDRIVER) to enable the SeLoadDriverPrivilege and load the selected driver as shown below:

Elevation of command prompt and execution of EOPLOADDRIVER

Once the driver is loaded it is possible to run any desired exploit. The following image shows the use of Tandasat’s exploit “ExploitCapcom” to obtain a terminal as SYSTEM.

Tandasat’s ExploitCapcom execution for privilege escalation as SYSTEM

0x06 – Conclusions

We have been able to verify how the assignment of the “Load and unload device drivers” privilege enables the dynamic loading of drivers in the kernel. Although by default Windows refuses to load unsigned drivers, multiple weaknesses have been identified in signed drivers that can be exploited to completely compromise a system.

All tests have been performed in a Windows 10 Version 1709 environment.

As of Windows 10 Version 1803, NTLoadDriver seems to forbid references to registry keys under HKEY_CURRENT_USER.

Remapping Python Opcodes

Original text by Chris Lyne

In my previous blog post, I talked about compiled Python (.pyc) files that I couldn’t manage to decompile. Because of this, my ability to audit the source code of Druva inSync was limited, and I felt compelled to dig deeper. What I found was that all of the operation codes (opcodes) had been shuffled around. This is a known technique for obfuscating Python bytecode. I’ll take you step by step through how I fixed these opcodes to successfully recover the source code. This technique is not limited to Druva and can generally be used to remap any Python opcodes.

Let’s first look at the problem.

It won’t decompile

In the installation directory of Druva inSync, you’ll notice that there is a Python27.dll and library.zip. When inSync.exe boots up, these files are immediately read. I’ve filtered the procmon output for brevity. This behavior is indicative of an application built with py2exe.

When you unzip library.zip, it contains a bunch of compiled Python modules — custom and standard. Shown below is a subset of the whole.

.pyc’s can often be decompiled pretty quickly by tools like uncompyle6. For example, struct.pyc is part of the Python standard library, and it decompiles just fine — only a few imports.

Decompiling normal struct.pyc

Now, doing the same against the struct.pyc packaged in library.zip, here’s the output:

Decompiling Druva struct.pyc

Unknown magic number 62216.

But why?

If you were to look at a .pyc file in a hex editor, the magic number is in the first 2 bytes of the file, and it will be different depending on the Python version that compiled it. For example, here is 62216:

Druva magic number

62216 is not a documented magic number. But 62211 is close to it, and that corresponds to version 2.7.
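The relationship between these numbers and the bytes on disk is easy to verify; a quick sketch (Python 3 syntax, using the two values above):

```python
import struct

# .pyc magic numbers are stored as a little-endian unsigned short
normal = struct.pack("<H", 62211)  # stock CPython 2.7
druva = struct.pack("<H", 62216)   # the Druva build
print(normal.hex(), druva.hex())
# → 03f3 08f3
```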

What’s strange is that the python27.dll distributed in the Druva inSync installation is version 2.7, hence the DLL name. Yet, its magic number is different.

Druva python27.dll

This looks nearly identical to the normal 2.7.15150.1013 DLL.

Normal python27.dll

The size and modified date are a little different, but the version is the same. If the version is the same, why is the magic number not 62211?

Digging into the OpCodes

My next idea was to load up a Python interpreter using the Druva inSync libraries. I did this by first dropping the Druva python27.dll into c:\Python27. I also had to ensure that the search path pointed to the Python modules distributed with Druva inSync.

Python interpreter with Druva libraries loaded

At this point, I could load the ‘opcode’ module to view the map of opcodes.

Druva opcode map

Below is the normal Python 2.7 opcode map:

Normal opcode map

Notice that the opcodes are completely different. For example, ‘CALL_FUNCTION’ maps to 131 normally, and its opcode is 111 in the Druva distribution. As far as I can tell, this is true for every operation.

Remapping the OpCodes

In order to decompile these obfuscated .pyc files, the opcodes need to be remapped back to the original Python 2.7 mapping. Easy enough, right? It’s slightly more complicated than it appears on the surface. In order to accomplish this, one needs to understand the .pyc file format. Let’s take a look at that.

Structure of a .pyc

Let’s turn to the code to make sense of the .pyc file structure. We are looking at the py_compile module’s compile function. This function will convert a .py file into a .pyc.

Starting at line 106, timestamp is first populated with the .py file’s last modification time (st_mtime). And on line 111, the source code is read into codestring.

Next, the source code is compiled into a code object using the builtin compile function. Since the .py likely contains a sequence of statements, the ‘exec’ mode is specified.

Assuming no errors occur, the new filename is created (cfile). If basic optimizations were turned on via the -o flag (__debug__ tells us this), the extension will be ‘.pyo’, otherwise it will be ‘.pyc’.

Finally, the file is written. The first 4 bytes are the magic string value, returned by imp.get_magic(). Next, the timestamp is written. And finally, the code object is serialized using the marshal module.
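That write path can be sketched in a few lines. This is a Python 3 approximation, not py_compile itself: on 3.7+ the header also carries a flags word and a source-size field, whereas the 2.7 files discussed here have only magic + timestamp before the marshalled code object:

```python
import importlib.util
import marshal
import struct
import time

def write_pyc(source, path, filename="<string>"):
    # compile the source to a code object, as py_compile does
    code = compile(source, filename, "exec")
    with open(path, "wb") as f:
        f.write(importlib.util.MAGIC_NUMBER)          # 4-byte magic string
        # Python 3.7+ header: flags, mtime, source size (2.7 had just mtime)
        f.write(struct.pack("<III", 0, int(time.time()), len(source)))
        f.write(marshal.dumps(code))                  # serialized code object

write_pyc("print('Hello world!')", "hello.pyc")
```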

Let’s look at an example by compiling our own module.

Example: Hello world

Here’s our friend, hello.py. It’s just a print statement.

If we compile it, it spits out a hello.pyc file.

Here is a hexdump of hello.pyc

If we were to load this up, we can actually parse out the individual components. First we read the file and store the contents in bytes:

The magic string is 03f30d0a; however, the magic number is 03f3. It’s always followed by 0d0a.

If we unpack this unsigned short, the magic number is 62211. We now know the .pyc was compiled by version 2.7. Let’s look at the timestamp now. It is 4 bytes long, starting at offset 4.


This makes sense because I created the .py file at 2:26 PM on April 30th.
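Decoding such a timestamp is a one-liner; a sketch with made-up bytes (not the actual ones from hello.pyc, which are only shown in the screenshot):

```python
import struct
from datetime import datetime, timezone

# four little-endian bytes taken from offset 4 of a hypothetical .pyc
ts = struct.unpack("<I", b"\x38\x6f\xc8\x5c")[0]
print(ts, datetime.fromtimestamp(ts, timezone.utc))
```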

And finally, the serialized code object remains to be read. It can be deserialized with the marshal module, and the object is executable. Hello world!

Let’s frame up the problem to be solved. The main goal is to decompile a .pyc file, and fix its opcodes. During decompilation, an intermediate step is to disassemble the .pyc code objects into opcodes.

Disassembling Code Objects

Let’s use the dis module to disassemble the code object in hello.pyc.

All of these instructions are required to print ‘Hello world!’. In the first instruction, we can see “0 LOAD_CONST 0 (‘Hello world!’)”. “0 LOAD_CONST” means a LOAD_CONST operation starts at offset 0 in the bytecode. And “0 (‘Hello world!’)” means that the constant at index 0 is loaded (the string is just shown in the disassembly output for clarity). Technically speaking, LOAD_CONST pushes a constant onto the stack.
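This disassembly can be reproduced with the dis module directly. Under Python 3 the exact instructions differ (print is a function there), but the LOAD_CONST of the string is still visible:

```python
import dis
import io

code = compile("print('Hello world!')", "<hello>", "exec")
buf = io.StringIO()
dis.dis(code, file=buf)   # write the disassembly to a buffer
print(buf.getvalue())
```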

Looking at the code object, the bytecode (co_code) and constants (co_consts) are accessible (and variables, etc).

Here is the raw bytecode:

Here the opcode at offset 0 is ‘d’, which is decimal 100 in ASCII. This can be looked up in the opname sequence.

The next two bytes, “\x00\x00” represent the index of the ‘Hello world!’ constant (operand).
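The lookup tables involved live in the dis module: opmap goes from name to number, opname from number back to name. In stock CPython 2.7, LOAD_CONST is 100 (ASCII ‘d’); running this under a different build or version may print a different number:

```python
import dis

op = dis.opmap["LOAD_CONST"]   # name -> opcode number
print(op, dis.opname[op])      # opcode number -> name
```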

We’ve now established that code objects can be disassembled with the dis module. The disassembly displays instructions consisting of operation names and operands. We can also inspect the raw bytecode (co_code) and constants (co_consts) stored in code objects (other stuff as well). It gets tricky when code objects contain nested code objects.

Since we have the opcode mappings for both Druva and the normal Python 2.7, we can develop a basic strategy for opcode conversion. My strategy was to disassemble the code object in a .pyc file, convert each operation, and stuff all of this into a new code object. No need to remap operands. However, it’s just a bit more complicated than that. Let’s look at nested code objects.

Nested Code Objects

Most of the modules you encounter will be more complex than the hello world. They will contain functions and classes as well.

Here is a more advanced hello world example:

Breaking it down, we have a class named “HelloClass”. The class contains functions named “__init__” and “sayHello.” Let’s disassemble the code object after reading the .pyc.

Notice the LOAD_CONST instruction at offset 9. A HelloClass code object is loaded. This HelloClass code object is stored at index 1 in co_consts.

Let’s disassemble that too.

More code objects? Yep. The __init__ and sayHello functions are code objects as well. A code object can have many layers of nested code objects. This requires the opcode remapping algorithm to be recursive!

The Algorithm

For reference, here are the opcode mappings again:

Druva opcode mapping
Normal Python 2.7 opcode mapping

Here’s my general algorithm.

Starting with the outer code object in the .pyc file (code_obj_in), convert all of the opcodes using the mappings above and store into new_co_code. For example, if a CALL_FUNCTION is encountered, the opcode will be converted from 111 to 131. We will then inspect the co_consts sequence and recursively remap any code objects found in there. new_co_consts will be added into the output code object.
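The recursion over co_consts can be sketched as follows. This is a simplified Python 3 model of the idea, not the actual conversion script: 3.x bytecode is fixed-width two-byte “wordcode” and code objects have a replace() method (3.8+), whereas the real 2.7 converter has to walk variable-length instructions. The identity mapping at the end just demonstrates the mechanics:

```python
import types

def remap_code(code, op_translate):
    """Rewrite every opcode in a code object, recursing into any code
    objects nested in co_consts (functions, classes, comprehensions)."""
    raw = bytearray(code.co_code)
    for i in range(0, len(raw), 2):   # (opcode, arg) byte pairs in 3.6+
        raw[i] = op_translate[raw[i]]
    consts = tuple(
        remap_code(c, op_translate) if isinstance(c, types.CodeType) else c
        for c in code.co_consts
    )
    return code.replace(co_code=bytes(raw), co_consts=consts)

# demonstrate with an identity mapping; a real run would use the
# Druva -> CPython 2.7 table instead
identity = {i: i for i in range(256)}
ns = {}
exec(remap_code(compile("def f(x):\n    return x + 1\n", "<m>", "exec"), identity), ns)
print(ns["f"](41))
# → 42
```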

When the new .pyc file is created (not shown), it will have a magic number of 62211, and all code objects will be populated with remapped opcodes. Let’s see the script in action.

Running the process converts a total of 1773 .pyc files. Notice I copied the Druva python27.dll into C:\Python27. Bytecode was disassembled using the Druva opcode mappings, and then converted.

Converting opcodes

And after conversion, we can successfully decompile the .pyc’s in the inSyncClient folder! Prior to opcode conversion, this was not possible.

Decompilation is successful

Closing Thoughts

I hope this serves as a useful introduction to how Python opcodes might be obfuscated. There are other tools (e.g. pyREtic) out there that do the same kind of remapping process we’ve discussed here. In fact, after writing this code, I found out that this logic had already been implemented specifically for Druva inSync in the dedrop repository.

I’m sure there are more elegant approaches to opcode conversion, but the script definitely got the job done. If you’re interested in checking out the full source code, I’ve dropped it on our GitHub. Thanks for reading, and check out the Tenable TechBlog for more technical blogs and vulnerability write-ups. Give me a shout on Twitter as well!

-Chris Lyne (@lynerc)

There-and-Back Engineering. More fun than a b*ttpl*g, more useful than an STM

Original text by @N3M351DA

Not recommended reading for those with nervous system problems.

When I submitted this article for approval, I was asked: “Where is the infosec here? This is circuit design!” I want to note that information security, beyond the IT sphere, extends not only to standardization, software security, and protection against information leaks through side channels (which includes the study of the propagation of radio waves and electromagnetic signals), but also to the hardware level.

Wikipedia:

  • Reverse engineering is the study of a finished device or program, as well as its documentation, in order to understand how it works; for example, to discover undocumented features (including software backdoors), to make changes, or to reproduce a device, program, or other object with similar functions without directly copying it.
  • Circuit engineering is a scientific and technical field concerned with the design, creation, and debugging (synthesis and analysis) of electronic circuits and devices for various purposes.

Unfortunately, in this article we will not be designing or creating anything, nor doing the kind of debugging (synthesis and analysis) that is performed when devices are designed, that is, before they go into production.

In this article we will look at how, by trying combinations of logic values, one can learn to control a proprietary device in order to violate the integrity of its information. We will be using undocumented capabilities of the device, that is, ones not provided for by the manual.

Once (a thousand years ago), a most interesting device fell into my hands; I could not resist and took it apart. I will try to describe the most interesting things I saw inside and nudge toward certain conclusions those who want to develop this topic further.

For ethical reasons, I will not specify what device we are examining.

So, before us is a comparatively small board:

The device in question contains a display with six pins. The images the display can show are fixed, that is, they are created in advance and appear when a particular set of signals from the controlling chip arrives at the display pins.

On the other side we can see:

– a battery compartment (1),

– passive components,

– a chip potted in compound, to protect it from external influences and from reversers (2),

– the sensor contacts (3),

– a piece of plastic with slots.

After removing the slotted piece of plastic, we can see four LEDs and one photodiode (the group of components on the right). This is surprising, since the case has no openings for either. One might assume that the photodiode detects the moment the case lid closes, but again, there is no opening big enough to let light in.

Another consequence of the photodiode’s operation may be that after the case was opened and light hit it, the firmware stopped behaving statically and the chip began sending a square wave to the display contacts, which made the display blink.

A square wave is a train of rectangular pulses with equal pulse and pause lengths; in logic devices it can be read as 0-1-0-1-0-1-0 and so on.

To quickly determine which contacts receive the square-wave signal, an LED was used. By moving the LED’s legs between contacts (one on ground, the other on a signal contact), you can easily determine which contacts “flicker”. The same can be done with a multimeter in DC current measurement mode.

Since scraping off the compound is a long and thankless task, and urgently modifying this device’s firmware is unlikely to be popular, a simple and effective way of changing the displayed value to the one we want is proposed below.

It was established in practice that by applying a constant signal from the device’s power source, you can freeze the image on the screen. The experiment was carried out with two contacts, but you can solder to the others and explore the display’s operating modes by changing a larger number of signal values.

We found that shorting contacts 3 and 5 produces a specific static message. To do this, you need to:

  • Open the case (it comes apart very easily by hand).
  • Run at least two jumpers from the battery to contacts 3 and 5.

If you do this carefully, you can easily manage it in 60 seconds. Given the specific circumstances, out of the few minutes available there will even be time left to sit in a messenger.

A negative result is achieved by shorting contacts 3 and 6, respectively, but here we are treacherously betrayed by an incorrect caption under the result, logically incompatible with it.

TODO:

[+] By soldering to the remaining contacts, achieve a fully static message on the screen.

This will require building a table of values.

An example of filling it in:

Pin number:       1  2  3  4  5  6   Displayed image
Pin states (1):   ?  1  ?  ?  1  ?   “Pregnant” (static); clock and book icons (blinking)
Pin states (2):   (to be filled in)
[-] The results obtained may complicate the falsification process, which is unacceptable.

Also, to set the direction for further developments, I will say: in this article we examined the device and learned to falsify the reading of a pregnancy test using two jumpers.

As practice has shown, few people are aware of what the screen of a positive test should look like. So, in principle, the proposed falsification option may be sufficient.