How does the $LogFile work?

Original text by MSUHANOV

In the official NTFS implementation, all metadata changes to a file system are logged to ensure the consistent recovery of critical file system structures after a system crash. This is called write-ahead logging.

The logged metadata is stored in a file called “$LogFile”, which is found in the root directory of an NTFS file system.

Currently, not much documentation is available for this file. Most sources are either too high-level (describing the logging and recovery processes in general) or just contain the layout of key structures without further description.

The process of metadata logging is based on two components: the log file service (LFS) and the NTFS client of the LFS (both are implemented as a part of the NTFS driver).

The LFS provides an interface for its clients to store a buffer in a circular (“infinite”) area of a log file and to read such buffers from that log file. In particular, the following simplified types of actions are supported:

  • store a buffer (client data) as a log record, return its log sequence number (LSN);
  • store a buffer (client data) as a restart area, return its LSN;
  • if a log file is full, raise an exception for a client;
  • mark previously stored data as unused;
  • given an LSN, locate a stored buffer (client data) and return it;
  • given an LSN, find the next LSN for the same client and return it (forward search);
  • given an LSN, find the previous LSN for the same client and return it (backward search).

As you can see, the LFS is the data management layer for the NTFS logging component; the LFS doesn’t do the actual logging of metadata operations itself. Each buffer received from a client is opaque to the LFS (the LFS is only aware of the type of this buffer: whether it’s a log record or a client restart area).

The actual logging (and recovery) is implemented as a part of the NTFS client of the LFS. Each buffer sent from this component to the LFS contains something related to a transaction. Here, a transaction is a set of metadata changes necessary to complete a specific high-level operation.

For example, the following metadata changes are combined as a transaction when a file is renamed:

  1. delete an index entry (with an old file name) for a target file from a file name index within a parent directory;
  2. delete the $FILE_NAME attribute (with an old file name) from a target file record;
  3. create the $FILE_NAME attribute (with a new file name) in a target file record;
  4. add an index entry (with a new file name) for a target file in a file name index within a parent directory.

If all of these changes were applied to a volume successfully, then the transaction is marked as forgotten.

But before we get to the format of metadata changes used by the NTFS client, we need to dissect the on-disk structures of the LFS.

First of all, since each client buffer stored in a log file is identified by an LSN, it’s important to understand how these LSNs are generated by the LFS.

Each LSN is a 64-bit number containing two components: a sequence number and an offset. The offset is stored in the lower part of an LSN; its value is the number of 8-byte increments from the beginning of a log file. This offset points to an LFS structure containing a client buffer and related metadata; this structure is called an LFS record. The sequence number is stored in the higher part; it’s a value from a counter which is incremented when a log file wraps (when a new structure is written to the beginning of the circular area, not to the end of this area).

The number of bits reserved for the sequence number part of an LSN is variable; it depends on the size of a log file (and it’s recorded in the log file).

For example, if 44 bits are reserved for the sequence number part and the LSN is 2124332, then the sequence number is 2 and the offset is 27180 8-byte increments (217440 bytes).
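
To make the split concrete, here is a minimal C sketch that decodes an LSN into its sequence number and byte offset, assuming the number of sequence-number bits has already been read from the log file (the function and variable names are mine, not from the NTFS driver):

#include <stdint.h>
#include <stdio.h>

/* Decode an LSN into its sequence number and its byte offset within the log file.
   seq_number_bits is the value recorded in the LFS restart area. */
static void decode_lsn(uint64_t lsn, unsigned seq_number_bits,
                       uint64_t *seq_number, uint64_t *byte_offset)
{
    unsigned offset_bits = 64 - seq_number_bits;

    *seq_number  = lsn >> offset_bits;                      /* high part */
    *byte_offset = (lsn & ((1ULL << offset_bits) - 1)) * 8; /* low part, in 8-byte units */
}

int main(void)
{
    uint64_t seq, off;

    decode_lsn(2124332, 44, &seq, &off); /* the example from the text */
    printf("sequence number: %llu, offset: %llu bytes\n",
           (unsigned long long)seq, (unsigned long long)off);
    /* prints: sequence number: 2, offset: 217440 bytes */
    return 0;
}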

The LSNs have an important property: they are always increasing. An LSN for a new entry is always greater than an LSN for an older entry (technically, these numbers can overflow, but they won’t, because it’s practically impossible to reach the 64-bit limit).

An LFS record is a structure containing a header and client data. The following data is stored in the LFS record header: the LSN of this record, the previous LSN for the same client, the LSN to use for the undo operation for the same client, a client ID, a transaction ID, a record type (a log record or a client restart area), the length of the client data, and various flags. Many of these values are specified by the client.
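
As a rough illustration, the header can be pictured as a C structure like the one below. The field names, order, and widths are my own guesses based on the description above and on publicly available reverse-engineering work, not an authoritative on-disk layout:

#include <stdint.h>

/* Conceptual sketch of an LFS record header (not a verified on-disk layout). */
struct lfs_record_header {
    uint64_t this_lsn;              /* LSN assigned to this record */
    uint64_t client_previous_lsn;   /* previous LSN for the same client */
    uint64_t client_undo_next_lsn;  /* LSN to use when undoing, for the same client */
    uint32_t client_data_length;    /* length of the client data that follows */
    uint32_t client_id;             /* identifies the client (e.g. "NTFS") */
    uint32_t record_type;           /* log record or client restart area */
    uint32_t transaction_id;        /* transaction this record belongs to */
    uint16_t flags;                 /* e.g. "record continues in the next page" */
    /* client data follows the header */
};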

LFS records are written to LFS record pages. Each LFS record page is 4096 bytes in size (it’s equal to the page size), it contains a header (the first four bytes are “RCRD”) and one or more LFS records. Since client data can be large, two or more adjacent LFS record pages may be required to store one LFS record (thus, an LFS record can be larger than an LFS record page; only the first segment has the LFS record header).

Each LFS record page is protected by an update sequence array, which is used to detect failed (torn) writes. Here is a description of the protection process (source):

The update sequence array consists of an array of n USHORT values, where n is the size of the structure being protected divided by the sequence number stride. The first word contains the update sequence number, which is a cyclical counter of the number of times the containing structure has been written to disk. Next are the n saved USHORT values that were overwritten by the update sequence number the last time the containing structure was written to disk.

Each time the protected structure is about to be written to disk, the last word in each sequence number stride is saved to its respective position in the sequence number array, then it is overwritten with the next update sequence number. After the write, or whenever the structure is read, the saved word from the sequence number array is restored to its actual position in the structure. Before restoring the saved words on reads, all the sequence numbers at the end of each stride are compared with the actual sequence number at the start of the array. If any of these comparisons are not equal, then a failed multisector transfer has been detected.

(It should be noted that the stride is 512 bytes, even if an underlying drive has a larger sector size. Also, the size of an update sequence array isn’t n, but n+1.)
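
Here is a minimal C sketch of the read-side check described above. It verifies the sequence number at the end of every 512-byte stride and then restores the saved words; the offsets of the update sequence array within the page header are assumed to be known to the caller, and error handling is reduced to a boolean result:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Verify and undo the update sequence array protection of one page.
   page       - the raw page as read from disk (e.g. 4096 bytes)
   usa_offset - byte offset of the update sequence array within the page header
   usa_count  - number of entries in the array (n + 1)
   Returns false if a failed multisector (torn) write is detected. */
static bool apply_usa_fixups(uint8_t *page, size_t page_size,
                             uint16_t usa_offset, uint16_t usa_count)
{
    const size_t stride = 512;
    uint16_t usn;
    size_t i;

    if (usa_count < 2 || (size_t)(usa_count - 1) * stride != page_size)
        return false;                       /* array doesn't cover the page */

    memcpy(&usn, page + usa_offset, sizeof(usn));   /* update sequence number */

    for (i = 1; i < usa_count; i++) {
        uint8_t *last_word = page + i * stride - sizeof(uint16_t);
        uint16_t value;

        memcpy(&value, last_word, sizeof(value));
        if (value != usn)
            return false;                   /* torn write detected */

        /* restore the saved word over the sequence number */
        memcpy(last_word, page + usa_offset + i * sizeof(uint16_t),
               sizeof(uint16_t));
    }
    return true;
}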

Here is the layout of a typical LFS record page:

lfs-record-page

Here is the layout of two LFS record pages containing a large LFS record:

lfs-record-pages

Finally, the circular (“infinite”) area of a log file consists of many LFS record pages. As described before, LFS records written to a log file can wrap, so a large LFS record starting in the last LFS record page also hits the first LFS record page of the circular area.

lfs-infinite.png

When writing a new LFS record into a current LFS record page, existing LFS records in this page can be lost because of a torn write or a system crash. Thus, data that was successfully stored before can be lost because of a new write.

In order to protect against such scenarios, a special area exists in a log file. It’s located before the circular area.

In version 1.1 of the LFS, the special area consists of two pages, which are used to store two copies of the current LFS record page. Before putting a new LFS record into the current LFS record page, this page is stored in the special area (the first copy). After putting a new LFS record into the current LFS record page, the modified page is also written to the special area (the second copy; the first copy isn’t overwritten by the second one).

If a torn write or a system crash occurs when writing the second copy, the first copy (without the new LFS record) will be available for recovery. If everything is okay and the LFS needs the special area for a new update, then the second copy is written to the circular area of the log file (and the special area becomes available for a new update).

These two copies of the current LFS record page are called tail copies (because they always represent the latest LFS record page to be written to the circular area). The latest tail copy isn’t moved to the circular area immediately. So, in order to get a full set of LFS record pages during recovery, the LFS should apply the latest tail copy (or the valid one, if the other tail copy is invalid) to the circular area.

In version 2.0 of the LFS, the special area consists of 32 pages. When the LFS needs to put a new LFS record into the current LFS record page, or when the LFS prepares a new LFS record page with a single LFS record, the updated page (containing the new LFS record) or the new page is simply written to the special area (to an unused page).

If a torn write or a system crash occurs when writing that page, an older version of the same page from the special area is used. Occasionally, LFS record pages with the latest data are moved to the circular area (and the corresponding pages in the special area are marked as unused).

I don’t know what the LFS record pages in this special area are called; I call them fast pages.

The new version of the LFS requires fewer writes by reducing the number of page transfers to the circular area. It should be noted that the version of a log file is downgraded to 1.1 during a clean shutdown by default (so the NTFS file system can be mounted using a previous version of Windows).

Also, Microsoft is going to release version 3.0 of the LFS. This version will be used on DAX volumes. When a log file is mapped in DAX mode, the integrity of its pages is going to be protected using a CRC32 checksum (and there will be no update sequence arrays, because they won’t work well with byte-addressable memory). This will make things faster (no paging writes).

Finally, a log file begins with two restart pages, each one 4096 bytes in size (again, the page size; the first four bytes of each page are “RSTR”). These pages are also protected with update sequence arrays.

A restart page contains the LFS version number, a page size, and a restart area (not to be confused with a client restart area).

A restart area is a structure containing the latest LSN used (at the time when this structure was written), the number of clients of the LFS, the list of clients of the LFS, and the number of bits used for the sequence number part of every LSN. It also contains some data for sanity checks and forward compatibility: the offset of the first LFS record within an LFS record page (which is also the offset at which client data continuing from a previous LFS record page resumes) and the size of an LFS record header; both values allow unsupported fields to be ignored in LFS record pages and in LFS records.

The list of clients is composed of client records. A client record contains the oldest LSN required by this client, the LSN of the latest client restart area, and the name of this client (as well as other information about this client). Currently, the only client is called “NTFS”.
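
The restart area and client record described in the last two paragraphs can be sketched in C roughly as follows; this is a conceptual summary of the fields mentioned above (my own names and types), not a byte-exact on-disk layout:

#include <stdint.h>

/* Conceptual sketch of an LFS restart area (fields mentioned in the text only). */
struct lfs_restart_area {
    uint64_t current_lsn;          /* latest LSN at the time this area was written */
    uint16_t log_clients;          /* number of clients */
    uint16_t seq_number_bits;      /* bits used for the sequence number part of LSNs */
    uint16_t record_header_length; /* size of an LFS record header (sanity/compat) */
    uint16_t log_page_data_offset; /* offset of the first LFS record in a record page */
    /* ... client records follow ... */
};

/* Conceptual sketch of one client record. */
struct lfs_client_record {
    uint64_t oldest_lsn;           /* oldest LSN still required by this client */
    uint64_t client_restart_lsn;   /* LSN of the latest client restart area */
    uint16_t client_name_length;   /* in bytes */
    uint16_t client_name[32];      /* UTF-16 name; "NTFS" for the only known client */
};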

Two restart pages provide reliability against a possible failure (a torn write or a system crash). These pages aren’t necessarily synchronized.

Here is the generic layout of a log file:

lfs-layout.png

When the LFS is asked to provide initial data for its client, it will read and return the latest client restart area according to an LSN recorded in the appropriate client record. (Later, during the logging operation, the LFS won’t touch the oldest LFS record required by each client.)

A client receives its latest restart area, interprets it (remember that the LFS is unaware of the client data format), and decides what actions (if any) must be taken. If a log record is needed, then a client asks the LFS to provide this record (as a buffer) by its LSN.

The NTFS client tells the LFS to write a client restart area at the end of the checkpoint operation. During a checkpoint, the NTFS client writes a set of log records containing data about current transactions, followed by a restart area, which points to every piece of that data (using LSNs). During recovery, the NTFS client uses this data to decide which transactions are committed and which aren’t: committed transactions must be performed again using their redo data (there is a chance that this data didn’t hit the volume), while uncommitted transactions must be rolled back using their undo data.

And now we can take a look at the format of client data!

There are three versions of the NTFS client data format: 0.0, 1.0, and 2.0.

The last one seems to be under development, because it’s not enabled yet. This new version removes redundant open attribute table dumps and attribute names dumps, which were previously made during a checkpoint (the same data can be reconstructed from log records, so there is no reason to waste the space and link these dumps to a client restart area).

Currently, only the first two versions are used: 0.0 and 1.0. There are no significant differences between them. The most notable difference, although not a really significant one, is the format of open attribute entries.

A client restart area contains the major and minor version numbers of the NTFS client data format used and an LSN to be used as a starting point for the analysis pass (when the NTFS driver builds a table of transactions and a table of dirty data ranges). Also, a client restart area contains LSNs for a transaction table dumped to the log file from memory (this table can be absent), an open attribute table dumped to the log file from memory, a list of attribute names dumped to the log file from memory, and a dirty page table dumped to the log file from memory (which is used to track dirty data ranges).

An open attribute table and a list of attribute names reference nonresident attributes opened for log operations. An entry in the open attribute table contains the $MFT reference number of the file record whose nonresident attribute has been opened and the type code of this attribute (e.g., $DATA). An entry in the list of attribute names contains the Unicode name of an opened nonresident attribute along with the index of the corresponding entry in the open attribute table.
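
Conceptually, the two kinds of entries carry the following information (a C sketch of the fields named above with my own names; the exact on-disk format is not reproduced here and also differs between client versions 0.0 and 1.0):

#include <stdint.h>

/* Conceptual sketch of an open attribute table entry. */
struct open_attribute_entry {
    uint64_t mft_reference;   /* file record whose nonresident attribute was opened */
    uint32_t attribute_type;  /* e.g. $DATA (0x80), $INDEX_ALLOCATION (0xA0) */
};

/* Conceptual sketch of an attribute names list entry. */
struct attribute_name_entry {
    uint16_t open_attribute_index; /* index into the open attribute table */
    uint16_t name_length;          /* in characters */
    /* UTF-16 attribute name follows */
};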

And a log record written during an operation on a nonresident attribute contains an index of a target attribute in the open attribute table. Based on this information (an $MFT file reference, an attribute code, and an attribute name), it’s possible to locate a target attribute. Also, a log record contains an offset within a target attribute at which new data is going to be written.

It should be noted that no table referenced by a client restart area is in an up-to-date state. New items from log records written after the client restart area must be accounted for in these tables.

A log record is an actual descriptor of a logged operation. A log record contains a redo type and data (can be empty), an undo type and data (can be empty too), a number of a target $MFT file record segment (for operations on resident attributes and on $MFT data in general), an index of a target attribute within the open attribute table (for operations on nonresident attributes), and several fields used to calculate an offset within a target.

Redo data is written when a transaction is committed; undo data is written when a transaction is rolled back (to bring things back to their previous state). There are some exceptions, however: when a nonresident attribute is opened, its open attribute record is stored as redo data and its Unicode name is stored as undo data.
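
Putting the fields above together, a log record can be pictured roughly as follows. This is a conceptual C sketch with my own field names, not a verified on-disk layout; the redo/undo operation types are the ones listed right below:

#include <stdint.h>

/* Conceptual sketch of an NTFS client log record (not a verified layout). */
struct ntfs_log_record {
    uint16_t redo_operation;    /* type of the redo operation (see the list below) */
    uint16_t undo_operation;    /* type of the undo operation */
    uint16_t redo_length;       /* redo data length (may be zero) */
    uint16_t undo_length;       /* undo data length (may be zero) */
    uint16_t target_attribute;  /* index into the open attribute table (nonresident) */
    uint64_t target_vcn;        /* together with the offset fields below, these      */
    uint16_t record_offset;     /* identify the target $MFT file record segment and  */
    uint16_t attribute_offset;  /* the offset within the target to modify            */
    /* redo data and undo data follow */
};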

Here is a full list of log operation types (as of Windows 10, build 18323):

Noop
CompensationLogRecord
InitializeFileRecordSegment
DeallocateFileRecordSegment
WriteEndOfFileRecordSegment
CreateAttribute
DeleteAttribute
UpdateResidentValue
UpdateNonresidentValue
UpdateMappingPairs
DeleteDirtyClusters
SetNewAttributeSizes
AddIndexEntryRoot
DeleteIndexEntryRoot
AddIndexEntryAllocation
DeleteIndexEntryAllocation
WriteEndOfIndexBuffer
SetIndexEntryVcnRoot
SetIndexEntryVcnAllocation
UpdateFileNameRoot
UpdateFileNameAllocation
SetBitsInNonresidentBitMap
ClearBitsInNonresidentBitMap
HotFix
EndTopLevelAction
PrepareTransaction
CommitTransaction
ForgetTransaction
OpenNonresidentAttribute
OpenAttributeTableDump
AttributeNamesDump
DirtyPageTableDump
TransactionTableDump
UpdateRecordDataRoot
UpdateRecordDataAllocation
UpdateRelativeDataIndex
UpdateRelativeDataAllocation
ZeroEndOfFileRecord

Here is a decoded transaction used to rename a file (from “aaa.txt” to “bbb.txt”).

It should be noted that updates to some attributes can be recorded partially. For example, an update to the $STANDARD_INFORMATION attribute can record data starting from the M timestamp (and the C timestamp, which is stored before the M timestamp, will be absent in the redo/undo data).

The only thing left is the meaning of every log operation. Not today!


Update (2019-02-17):

How long does it take for old data to become overwritten with new data?

In one of my tests with Windows 10, it took 16 minutes. In another test with Windows 10, it took 5 hours and 20 minutes. In both tests, mouse movements were the only user activity.


Hypervisor From Scratch – Part 1: Basic Concepts & Configure Testing Environment

Original text by Sinaei

Hello everyone!

Welcome to the first part of a multi-part series of tutorials called “Hypervisor From Scratch”. As the name implies, this course contains the technical details needed to create a basic virtual machine based on hardware virtualization. If you follow the course, you’ll be able to create your own virtual environment and you’ll get an understanding of how VMWare, VirtualBox, KVM, and other virtualization software use processor facilities to create a virtual environment.

Introduction

Both Intel and AMD support virtualization in their modern CPUs. Intel introduced VT-x technology (previously codenamed “Vanderpool”) on November 13, 2005, in the Pentium 4 series. The CPU flag for VT-x capability is “vmx”, which stands for Virtual Machine eXtensions.

AMD, on the other hand, developed its first generation of virtualization extensions under the codename “Pacifica“, and initially published them as AMD Secure Virtual Machine (SVM), but later marketed them under the trademark AMD Virtualization, abbreviated AMD-V.
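
Before going any further, you can check these flags programmatically. A small user-mode C snippet using the MSVC __cpuid intrinsic (CPUID leaf 1, ECX bit 5 for Intel VMX; CPUID leaf 0x80000001, ECX bit 2 for AMD SVM) could look like this:

#include <intrin.h>
#include <stdio.h>

int main(void)
{
    int regs[4]; /* EAX, EBX, ECX, EDX */

    __cpuid(regs, 1);                  /* CPUID leaf 1: standard feature bits */
    if (regs[2] & (1 << 5))            /* ECX bit 5 = VMX (Intel VT-x) */
        printf("Intel VT-x (VMX) is supported.\n");

    __cpuid(regs, 0x80000001);         /* extended feature bits */
    if (regs[2] & (1 << 2))            /* ECX bit 2 = SVM (AMD-V) */
        printf("AMD-V (SVM) is supported.\n");

    return 0;
}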

There are two types of hypervisors. A type 1 hypervisor is called a “bare metal” or “native” hypervisor because it runs directly on a bare-metal physical server; a type 1 hypervisor has direct access to the hardware. With a type 1 hypervisor, there is no separate operating system to load underneath the hypervisor.

Contrary to a type 1 hypervisor, a type 2 hypervisor loads inside an operating system, just like any other application. Because the type 2 hypervisor has to go through the operating system and is managed by the OS, the type 2 hypervisor (and its virtual machines) will run less efficiently (slower) than a type 1 hypervisor.

Most virtualization concepts are the same for both, but VT-x and AMD-V require different considerations. The rest of these tutorials mainly focuses on VT-x because Intel CPUs are more popular and more widely used. In my opinion, AMD describes virtualization more clearly in its manuals, while Intel’s virtualization documentation somehow leaves readers confused.

Hypervisor and Platform 

These concepts are platform independent: you can run the same code on both Linux and Windows and expect the same behavior from the CPU, but I prefer to use Windows because it’s easier for me to debug. I’ll try to give some examples for Linux systems whenever needed. The Linux kernel handles faults like #GP and other exceptions and tries to avoid a kernel panic and keep the system up, so it’s better for testing something like a hypervisor or any other CPU-related work. Windows, on the other hand, doesn’t try to recover from unexpected kernel exceptions and shows a blue screen of death whenever you don’t handle them, so you might get lots of BSODs. In any case, you’d better test your code on both platforms (and other platforms too).

Finally, I will certainly make mistakes, such as incorrect implementations or misinformation, or forget to mention some important detail, so I apologize in advance, and I’ll be glad for every comment that points out errors in the technical information.

That’s enough; let’s get started!

The Tools you’ll need

You should have Visual Studio with the WDK installed. You can get the Windows Driver Kit (WDK) here.

The best way to debug Windows and any kernel-mode code is WinDbg, which is available in the Windows SDK here. (If you installed the WDK with the default options, then you probably installed the WDK and SDK together and can skip this step.)

You should be able to debug your OS (in this case Windows) using WinDbg; more information here.

Hex-rays IDA Pro is an important part of this tutorial.

OSR Driver Loader, which can be downloaded here; we use this tool to load our drivers into the Windows machine.

SysInternals DebugView for printing the DbgPrint() results.

Chameleon

Creating a Test Environment

Almost all of the code in this tutorial has to run at kernel level, so you must set up either a Linux kernel module or a Windows Driver Kit (WDK) project. As configuring a VMM involves lots of assembly code, you should know how to run assembly within your kernel project. On Linux you don’t need to do anything special, but on Windows the WDK no longer supports inline assembly in an x64 environment, so if you haven’t dealt with this problem before, you might struggle to create a simple x64 project with assembly. Don’t worry: in one of my posts I explained it step by step, so I highly recommend reading that topic to solve the problem before continuing with the rest of this part.

Now it’s time to create a driver!

There is a good article here if you want to start with Windows Driver Kit (WDK).

The whole driver is this:

#include <ntddk.h>
#include <wdf.h>
#include <wdm.h>

extern void inline AssemblyFunc1(void);
extern void inline AssemblyFunc2(void);

VOID DrvUnload(PDRIVER_OBJECT DriverObject);
NTSTATUS DriverEntry(PDRIVER_OBJECT pDriverObject, PUNICODE_STRING pRegistryPath);

#pragma alloc_text(INIT, DriverEntry)
#pragma alloc_text(PAGE, DrvUnload)

NTSTATUS DriverEntry(PDRIVER_OBJECT pDriverObject, PUNICODE_STRING pRegistryPath)
{
    NTSTATUS NtStatus = STATUS_SUCCESS;
    UINT64 uiIndex = 0;
    PDEVICE_OBJECT pDeviceObject = NULL;
    UNICODE_STRING usDriverName, usDosDeviceName;

    DbgPrint("DriverEntry Called.");

    RtlInitUnicodeString(&usDriverName, L"\\Device\\MyHypervisor");
    RtlInitUnicodeString(&usDosDeviceName, L"\\DosDevices\\MyHypervisor");

    NtStatus = IoCreateDevice(pDriverObject, 0, &usDriverName, FILE_DEVICE_UNKNOWN, FILE_DEVICE_SECURE_OPEN, FALSE, &pDeviceObject);

    if (NtStatus == STATUS_SUCCESS)
    {
        pDriverObject->DriverUnload = DrvUnload;
        pDeviceObject->Flags |= IO_TYPE_DEVICE;
        pDeviceObject->Flags &= (~DO_DEVICE_INITIALIZING);
        IoCreateSymbolicLink(&usDosDeviceName, &usDriverName);
    }
    return NtStatus;
}

VOID DrvUnload(PDRIVER_OBJECT DriverObject)
{
    UNICODE_STRING usDosDeviceName;

    DbgPrint("DrvUnload Called \r\n");

    RtlInitUnicodeString(&usDosDeviceName, L"\\DosDevices\\MyHypervisor");
    IoDeleteSymbolicLink(&usDosDeviceName);
    IoDeleteDevice(DriverObject->DeviceObject);
}

AssemblyFunc1 and AssemblyFunc2 are two external functions that are defined as x64 assembly code.

Our driver needs to register a device so that we can communicate with our virtual environment from user-mode code. On the other hand, I defined DrvUnload, which uses the plug-and-play Windows driver feature, so you can easily unload your driver, remove the device, and then reload it and create a new device.

The following code is responsible for creating a new device:

RtlInitUnicodeString(&usDriverName, L"\\Device\\MyHypervisor");
RtlInitUnicodeString(&usDosDeviceName, L"\\DosDevices\\MyHypervisor");

NtStatus = IoCreateDevice(pDriverObject, 0, &usDriverName, FILE_DEVICE_UNKNOWN, FILE_DEVICE_SECURE_OPEN, FALSE, &pDeviceObject);

if (NtStatus == STATUS_SUCCESS)
{
    pDriverObject->DriverUnload = DrvUnload;
    pDeviceObject->Flags |= IO_TYPE_DEVICE;
    pDeviceObject->Flags &= (~DO_DEVICE_INITIALIZING);
    IoCreateSymbolicLink(&usDosDeviceName, &usDriverName);
}

If you use Windows, then you should disable Driver Signature Enforcement to load your driver; that’s because Microsoft prevents any unverified code from running in the Windows kernel (Ring 0).

To do this, press and hold the shift key and restart your computer. You should see a new Window, then

  1. Click Advanced options.
  2. On the new window, click Startup Settings.
  3. Click on Restart.
  4. On the Startup Settings screen press 7 or F7 to disable driver signature enforcement.

The last thing is enabling Windows debugging messages through the registry; this way you can get DbgPrint() results through SysInternals DebugView.

Just perform the following steps:

In regedit, add a key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Debug Print Filter

Under that key, add a DWORD value named IHVDRIVER with a value of 0xFFFF.

Reboot the machine and you’ll be good to go.
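
If you prefer to automate this instead of using regedit, the same two steps can be performed with a small user-mode C program (run it as administrator; this is just a convenience sketch, not part of the driver):

#include <windows.h>
#include <stdio.h>

#pragma comment(lib, "advapi32.lib")

int main(void)
{
    HKEY key;
    DWORD value = 0xFFFF;
    LSTATUS status;

    /* Create (or open) the Debug Print Filter key ... */
    status = RegCreateKeyExW(HKEY_LOCAL_MACHINE,
                             L"SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Debug Print Filter",
                             0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
    if (status != ERROR_SUCCESS) {
        printf("RegCreateKeyExW failed: %ld\n", status);
        return 1;
    }

    /* ... and set IHVDRIVER = 0xFFFF so DbgPrint() output becomes visible. */
    status = RegSetValueExW(key, L"IHVDRIVER", 0, REG_DWORD,
                            (const BYTE *)&value, sizeof(value));
    RegCloseKey(key);

    if (status != ERROR_SUCCESS) {
        printf("RegSetValueExW failed: %ld\n", status);
        return 1;
    }
    printf("Debug Print Filter configured; reboot to apply.\n");
    return 0;
}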

Some thoughts before the start

There are some keywords that will be frequently used in the rest of this series, and you should know about them (most of the definitions are derived from the Intel software developer’s manual, volume 3C).

Virtual Machine Monitor (VMM): VMM acts as a host and has full control of the processor(s) and other platform hardware. A VMM is able to retain selective control of processor resources, physical memory, interrupt management, and I/O.

Guest Software: Each virtual machine (VM) is a guest software environment.

VMX Root Operation and VMX Non-root Operation: A VMM will run in VMX root operation and guest software will run in VMX non-root operation.

VMX transitions: Transitions between VMX root operation and VMX non-root operation.

VM entries: Transitions into VMX non-root operation.

Extended Page Table (EPT): A modern mechanism which uses a second layer for converting the guest physical address to host physical address.

VM exits: Transitions from VMX non-root operation to VMX root operation.

Virtual machine control structure (VMCS): A data structure in memory that exists exactly once per VM and is managed by the VMM. With every change of the execution context between different VMs, the VMCS is restored for the current VM, defining the state of the VM’s virtual processor; the VMM controls guest software using the VMCS.

The VMCS consists of six logical groups:

  •  Guest-state area: Processor state saved into the guest state area on VM exits and loaded on VM entries.
  •  Host-state area: Processor state loaded from the host state area on VM exits.
  •  VM-execution control fields: Fields controlling processor operation in VMX non-root operation.
  •  VM-exit control fields: Fields that control VM exits.
  •  VM-entry control fields: Fields that control VM entries.
  •  VM-exit information fields: Read-only fields to receive information on VM exits describing the cause and the nature of the VM exit.

I found a great work which illustrates the VMCS; the PDF version is also available here.

VMCS

Don’t worry about the fields; I’ll explain most of them clearly in later parts. Just remember that the VMCS structure varies between different processor versions.

VMX Instructions 

VMX introduces the following new instructions.

Intel/AMD Mnemonic   Description
INVEPT               Invalidate Translations Derived from EPT
INVVPID              Invalidate Translations Based on VPID
VMCALL               Call to VM Monitor
VMCLEAR              Clear Virtual-Machine Control Structure
VMFUNC               Invoke VM function
VMLAUNCH             Launch Virtual Machine
VMRESUME             Resume Virtual Machine
VMPTRLD              Load Pointer to Virtual-Machine Control Structure
VMPTRST              Store Pointer to Virtual-Machine Control Structure
VMREAD               Read Field from Virtual-Machine Control Structure
VMWRITE              Write Field to Virtual-Machine Control Structure
VMXOFF               Leave VMX Operation
VMXON                Enter VMX Operation

Life Cycle of VMM Software

The following items summarize the life cycle of a VMM and its guest software, as well as the interactions between them:

  • Software enters VMX operation by executing a VMXON instruction.
  • Using VM entries, a VMM can then turn guests into VMs (one at a time). The VMM effects a VM entry using the instructions VMLAUNCH and VMRESUME; it regains control using VM exits.
  • VM exits transfer control to an entry point specified by the VMM. The VMM can take action appropriate to the cause of the VM exit and can then return to the VM using a VM entry.
  • Eventually, the VMM may decide to shut itself down and leave VMX operation. It does so by executing the VMXOFF instruction.

That’s enough for now!

In this part, I explained the general keywords that you should be aware of, and we created a simple lab for our future tests. In the next part, I will explain how to enable VMX on your machine using the facilities we created above, and then we’ll survey the rest of the virtualization topics, so make sure to check the blog for the next part.

References

[1] Intel® 64 and IA-32 Architectures Software Developer’s Manual, Combined Volumes 3 (https://software.intel.com/en-us/articles/intel-sdm)

[2] Hardware-assisted Virtualization (http://www.cs.cmu.edu/~412/lectures/L04_VTx.pdf)

[3] Writing Windows Kernel Driver (https://resources.infosecinstitute.com/writing-a-windows-kernel-driver/)

[4] What Is a Type 1 Hypervisor? (http://www.virtualizationsoftware.com/type-1-hypervisors/)

[5] Intel / AMD CPU Internals (https://github.com/LordNoteworthy/cpu-internals)

[6] Windows 10: Disable Signed Driver Enforcement (https://ph.answers.acer.com/app/answers/detail/a_id/38288/~/windows-10%3A-disable-signed-driver-enforcement)

[7] Instruction Set Mapping » VMX Instructions (https://docs.oracle.com/cd/E36784_01/html/E36859/gntbx.html)

Library to reflectively load a driver and bypass Windows driver signing enforcement.


About

Reflective kernel driver injection is an injection technique based on Reflective DLL injection by Stephen Fewer. The technique bypasses Windows driver signing enforcement (KMCS). Reflective programming is employed to load a driver from memory into the kernel. As such, the driver is responsible for loading itself by implementing a minimal Portable Executable (PE) file loader. Injection works on Windows Vista up to Windows 10, running on x64.

An exploit for the Capcom driver is also included as a simple usage example.

Overview

The process of injecting a driver into the kernel is twofold. Firstly, the driver you wish to inject must be written into the kernel address space. Secondly, the driver must be loaded into the kernel in such a way that the driver’s run-time expectations are met, such as resolving its imports or relocating it to a suitable location in memory.

Assuming we have ring0 code execution and the driver we wish to inject has been written into an arbitrary location of kernel memory, reflective driver injection works as follows.

  • Execution is passed, either via PsCreateSystemThread() or a tiny bootstrap shellcode, to the driver’s ReflectiveLoader function, which is located at the beginning of the driver’s code section (typically offset 0x400).
  • As the driver’s image will currently exist in an arbitrary location in memory, the ReflectiveLoader first calculates its own image’s current location in memory so as to be able to parse its own headers for use later on.
  • The ReflectiveLoader will then use MmGetSystemRoutineAddress (assumed to be passed in as arg0) to calculate the addresses of six functions required by the loader, namely ExAllocatePoolWithTag, ExFreePoolWithTag, IoCreateDriver, RtlImageDirectoryEntryToData, RtlImageNtHeader, and RtlQueryModuleInformation.
  • The ReflectiveLoader will now allocate a contiguous region of memory into which it will proceed to load its own image. The location is not important, as the loader will correctly relocate the image later on.
  • The driver’s headers and sections are loaded into their new locations in memory.
  • The ReflectiveLoader will then process the newly loaded copy of its image’s relocation table.
  • The ReflectiveLoader will then process the newly loaded copy of its image’s import table, resolving any module dependencies (assuming they are already loaded into the kernel) and their respective imported function addresses.
  • The ReflectiveLoader will then call IoCreateDriver passing the driver’s DriverEntry exported function as the second parameter. The driver has now been successfully loaded into memory.
  • Finally, the ReflectiveLoader returns execution to the initial bootstrap shellcode that called it, or, if it was called via PsCreateSystemThread, the thread terminates.

Build

Open the ‘Reflective Driver Loading.sln’ file in Visual Studio C++ and build the solution in Release mode to make Hadouken.exe and reflective_driver.sys.

Usage

To test, load Capcom.sys into the kernel, then use Hadouken.exe to inject reflective_driver.sys into the kernel, e.g.:

Hadouken reflective_driver.sys
