How does the $LogFile work?

Original text by MSUHANOV

In the official NTFS implementation, all metadata changes to a file system are logged to ensure the consistent recovery of critical file system structures after a system crash. This is called write-ahead logging.

The logged metadata is stored in a file called “$LogFile”, which is found in the root directory of an NTFS file system.

Currently, not much documentation for this file is available. Most sources are either too high-level (describing the logging and recovery processes in general) or just contain the layout of key structures without further description.

The process of metadata logging is based on two components: the log file service (LFS) and the NTFS client of the LFS (both are implemented as a part of the NTFS driver).

The LFS provides an interface for its clients to store a buffer in a circular (“infinite”) area of a log file and to read such buffers back from that log file. In particular, the following simplified types of actions are supported (a hypothetical C sketch of this interface follows the list):

  • store a buffer (client data) as a log record, return its log sequence number (LSN);
  • store a buffer (client data) as a restart area, return its LSN;
  • if a log file is full, raise an exception for a client;
  • mark previously stored data as unused;
  • given an LSN, locate a stored buffer (client data) and return it;
  • given an LSN, find the next LSN for the same client and return it (forward search);
  • given an LSN, find the previous LSN for the same client and return it (backward search).
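To make the shape of this interface concrete, here is a hypothetical C rendering of it. The names and signatures are illustrative only; they are not the actual LFS routines exported inside the NTFS driver:

    typedef unsigned long long LFS_LSN;

    /* Store a client buffer; returns the LSN assigned to it. */
    LFS_LSN LfsWriteLogRecord(const void *client_data, size_t length);
    LFS_LSN LfsWriteRestartArea(const void *client_data, size_t length);

    /* Mark everything older than the given LSN as unused. */
    void LfsSetBaseLsn(LFS_LSN oldest_needed);

    /* Locate a stored buffer by its LSN. */
    int LfsReadRecord(LFS_LSN lsn, void **client_data, size_t *length);

    /* Forward and backward searches for the same client. */
    int LfsFindNextLsn(LFS_LSN lsn, LFS_LSN *next);
    int LfsFindPreviousLsn(LFS_LSN lsn, LFS_LSN *previous);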

As you can see, the LFS is the data management layer for the NTFS logging component; the LFS doesn’t do the actual logging of metadata operations. Each buffer received from a client is opaque to the LFS (the LFS is only aware of the type of the buffer: whether it’s a log record or a client restart area).

The actual logging (and recovery) is implemented as a part of the NTFS client of the LFS. Each buffer sent from this component to the LFS contains something related to a transaction. Here, a transaction is a set of metadata changes necessary to complete a specific high-level operation.

For example, the following metadata changes are combined as a transaction when a file is renamed:

  1. delete an index entry (with an old file name) for a target file from a file name index within a parent directory;
  2. delete the $FILE_NAME attribute (with an old file name) from a target file record;
  3. create the $FILE_NAME attribute (with a new file name) in a target file record;
  4. add an index entry (with a new file name) for a target file in a file name index within a parent directory.

If all of these changes were applied to a volume successfully, then the transaction is marked as forgotten.

But before we get to the format of metadata changes used by the NTFS client, we need to dissect the on-disk structures of the LFS.

First of all, since each client buffer stored in a log file is identified by an LSN, it’s important to understand how these LSNs are generated by the LFS.

Each LSN is a 64-bit number containing two components: a sequence number and an offset. The offset is stored in the lower part of an LSN; its value is the number of 8-byte increments from the beginning of a log file. This offset points to an LFS structure containing a client buffer and related metadata; this structure is called an LFS record. The sequence number is stored in the higher part; it’s a value from a counter which is incremented when a log file wraps (when a new structure is written to the beginning of the circular area, not to the end of it).

The number of bits reserved for the sequence number part of an LSN is variable; it depends on the size of a log file (and it’s recorded in the log file itself).

For example, if 44 bits are reserved for the sequence number part (leaving 20 bits for the offset) and the LSN is 2124332, then the sequence number is 2 and the offset is 27180 8-byte increments (217440 bytes).
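This decomposition is easy to express in code. A minimal sketch, assuming the layout described above (sequence number in the high bits, offset counted in 8-byte units in the low bits, and a nonzero number of sequence bits):

    #include <stdint.h>
    #include <stdio.h>

    /* Split an LSN into its sequence number and byte offset, given the
       number of bits reserved for the sequence number part. */
    static void decode_lsn(uint64_t lsn, unsigned seq_bits,
                           uint64_t *seq, uint64_t *byte_offset)
    {
        unsigned offset_bits = 64 - seq_bits;
        uint64_t offset_mask = (1ULL << offset_bits) - 1;
        *seq = lsn >> offset_bits;
        *byte_offset = (lsn & offset_mask) * 8; /* offset is in 8-byte units */
    }

    int main(void)
    {
        uint64_t seq, off;
        decode_lsn(2124332, 44, &seq, &off);
        /* Prints: sequence=2 offset=217440 */
        printf("sequence=%llu offset=%llu\n",
               (unsigned long long)seq, (unsigned long long)off);
        return 0;
    }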

The LSNs have an important property: they are always increasing. An LSN for a new entry is always greater than an LSN for an older entry (technically, these numbers can overflow, but they won’t, because it’s practically impossible to reach the 64-bit limit).

An LFS record is a structure containing a header and client data. The following data is stored in the LFS record header: the LSN for this record, the previous LSN for the same client, the LSN of the undo operation for the same client, a client ID, a transaction ID, a record type (a log record or a client restart area), the length of the client data, and various flags. Many of these values are specified by the client.
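As a reading aid, here is an illustrative C layout of the header fields just listed. The field names and ordering are mine and simplified, not a verbatim copy of the driver’s structure:

    #include <stdint.h>

    typedef struct {
        uint64_t this_lsn;             /* LSN of this record */
        uint64_t client_previous_lsn;  /* previous LSN for the same client */
        uint64_t client_undo_next_lsn; /* LSN to process when undoing */
        uint32_t client_data_length;   /* length of the client buffer */
        uint32_t client_id;            /* which LFS client owns the record */
        uint32_t record_type;          /* log record or client restart area */
        uint32_t transaction_id;       /* client-specified transaction */
        uint16_t flags;                /* e.g. record crosses a page boundary */
    } LFS_RECORD_HEADER_SKETCH;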

LFS records are written to LFS record pages. Each LFS record page is 4096 bytes in size (equal to the memory page size) and contains a header (the first four bytes are “RCRD”) and one or more LFS records. Since client data can be large, two or more adjacent LFS record pages may be required to store a single LFS record (thus, an LFS record can be larger than an LFS record page; only the first segment has the LFS record header).

Each LFS record page is protected by an update sequence array, which is used to detect failed (torn) writes. Here is a description of the protection process (source):

The update sequence array consists of an array of n USHORT values, where n is the size of the structure being protected divided by the sequence number stride. The first word contains the update sequence number, which is a cyclical counter of the number of times the containing structure has been written to disk. Next are the n saved USHORT values that were overwritten by the update sequence number the last time the containing structure was written to disk.

Each time the protected structure is about to be written to disk, the last word in each sequence number stride is saved to its respective position in the sequence number array, then it is overwritten with the next update sequence number. After the write, or whenever the structure is read, the saved word from the sequence number array is restored to its actual position in the structure. Before restoring the saved words on reads, all the sequence numbers at the end of each stride are compared with the actual sequence number at the start of the array. If any of these comparisons are not equal, then a failed multisector transfer has been detected.

(It should be noted that the stride is 512 bytes, even if an underlying drive has a larger sector size. Also, the size of an update sequence array isn’t n, but n+1.)
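Putting the quoted description together with the note above, verification looks roughly like this. A sketch, assuming a 4096-byte page and the 512-byte stride; usa_offset and usa_count come from the page header (“RCRD” or “RSTR”):

    #include <stddef.h>
    #include <stdint.h>

    /* Check the update sequence number at the end of each 512-byte stride
       and restore the saved words. Returns 0 on success, -1 on a torn write. */
    int apply_fixups(uint8_t *page, size_t page_size,
                     uint16_t usa_offset, uint16_t usa_count)
    {
        const size_t stride = 512;
        size_t n = page_size / stride;       /* 8 strides for a 4096-byte page */
        uint16_t *usa = (uint16_t *)(page + usa_offset);
        uint16_t usn = usa[0];               /* the update sequence number */

        if (usa_count != n + 1)              /* the array size is n + 1, not n */
            return -1;

        for (size_t i = 0; i < n; i++) {
            uint16_t *last = (uint16_t *)(page + (i + 1) * stride - sizeof(uint16_t));
            if (*last != usn)
                return -1;                   /* failed multisector transfer */
            *last = usa[i + 1];              /* restore the original word */
        }
        return 0;
    }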

Here is the layout of a typical LFS record page:

[Figure: lfs-record-page]

Here is the layout of two LFS record pages containing a large LFS record:

[Figure: lfs-record-pages]

Finally, the circular (“infinite”) area of a log file consists of many LFS record pages. As described before, LFS records written to a log file can wrap, so a large LFS record starting in the last LFS record page of the circular area continues in the first LFS record page of that area.

[Figure: lfs-infinite]

When a new LFS record is written into the current LFS record page, existing LFS records in this page can be lost because of a torn write or a system crash. Thus, data that was successfully stored before can be lost because of a new write.

In order to protect against such scenarios, a special area exists in a log file. It’s located before the circular area.

In version 1.1 of the LFS, the special area consists of two pages, which are used to store two copies of the current LFS record page. Before putting a new LFS record into the current LFS record page, this page is stored in the special area (the first copy). After putting a new LFS record into the current LFS record page, the modified page is also written to the special area (the second copy; the first copy isn’t overwritten by the second one).

If a torn write or a system crash occurs when writing the second copy, the first copy (without the new LFS record) will be available for recovery. If everything is okay and the LFS needs the special area for a new update, the second copy is written to the circular area of the log file (and the special area becomes available for a new update).

These two copies of the current LFS record page are called tail copies (because they always represent the latest LFS record page to be written to the circular area). The latest tail copy isn’t moved to the circular area immediately. So, in order to get a full set of LFS record pages during recovery, the LFS should apply the latest tail copy (or the valid one, if the other tail copy is invalid) to the circular area.

In version 2.0 of the LFS, the special area consists of 32 pages. When the LFS needs to put a new LFS record into the current LFS record page, or when it prepares a new LFS record page with a single LFS record, the updated page (containing the new LFS record) or the new page is simply written to an unused page of the special area.

If a torn write or a system crash occurs when writing that page, an older version of the same page from the special area is used. Occasionally, LFS record pages with the latest data are moved to the circular area (and the corresponding pages in the special area are marked as unused).

I don’t know what LFS record pages in this special area are officially called; I call them fast pages.

The new version of the LFS requires fewer writes by reducing the number of page transfers to the circular area. It should be noted that the version of a log file is downgraded to 1.1 during a clean shutdown by default (so an NTFS file system can be mounted using a previous version of Windows).

Also, Microsoft is going to release version 3.0 of the LFS. This version will be used on DAX volumes. When a log file is mapped in DAX mode, the integrity of its pages is going to be protected with a CRC32 checksum (and there will be no update sequence arrays, because they won’t work well with byte-addressable memory). This will make things faster (no paging writes).

Finally, a log file begins with two restart pages, each 4096 bytes in size (again, this is the page size; the first four bytes of each page are “RSTR”). These pages are also protected with update sequence arrays.

A restart page contains the LFS version number, a page size, and a restart area (not to be confused with a client restart area).

A restart area is a structure containing the latest LSN used (at the time this structure was written), the number of clients of the LFS, the list of those clients, and the number of bits used for the sequence number part of every LSN, as well as some data for sanity checks and forward compatibility (the offset of the first LFS record within an LFS record page, which is also the offset of the continuation of client data from a previous LFS record page, and the size of an LFS record header; both offsets allow unsupported fields to be ignored in LFS record pages and in LFS records).

The list of clients is composed of client records. A client record contains the oldest LSN required by this client, the LSN of the latest client restart area, and the name of this client (as well as other information about it). Currently, the only client is called “NTFS”.
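Again purely as an illustration, the restart area and client record described in the last two paragraphs could be sketched like this in C (the field names are mine; real structures carry more bookkeeping and padding):

    #include <stdint.h>

    typedef struct {
        uint64_t current_lsn;          /* latest LSN at the time of writing */
        uint16_t log_clients;          /* number of clients */
        uint16_t seq_number_bits;      /* bits used for the sequence number part */
        uint16_t record_header_size;   /* size of an LFS record header */
        uint16_t log_page_data_offset; /* offset of the first LFS record in a page */
        /* ... followed by the array of client records ... */
    } LFS_RESTART_AREA_SKETCH;

    typedef struct {
        uint64_t oldest_lsn;         /* oldest LSN required by this client */
        uint64_t client_restart_lsn; /* LSN of the latest client restart area */
        uint16_t name_length;
        uint16_t name[64];           /* UTF-16 client name, e.g. "NTFS" */
    } LFS_CLIENT_RECORD_SKETCH;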

Two restart pages provide reliability against a possible failure (a torn write or a system crash). These pages aren’t necessarily synchronized.

Here is the generic layout of a log file:

[Figure: lfs-layout]

When the LFS is asked to provide initial data for its client, it will read and return the latest client restart area according to an LSN recorded in the appropriate client record. (Later, during the logging operation, the LFS won’t touch the oldest LFS record required by each client.)

A client receives its latest restart area, interprets it (remember that the LFS is unaware of the client data format), and decides what actions (if any) must be taken. If a log record is needed, then a client asks the LFS to provide this record (as a buffer) by its LSN.

The NTFS client tells the LFS to write a client restart area at the end of the checkpoint operation. During a checkpoint, the NTFS client writes a set of log records containing data about current transactions, followed by a restart area which points to every piece of that data (using LSNs). During recovery, the NTFS client uses this data to decide which transactions are committed and which aren’t: committed transactions must be performed again using their redo data (there is a chance that this data didn’t hit the volume), while uncommitted transactions must be rolled back using their undo data.

And now we can take a look at the format of client data!

There are three versions of the NTFS client data format: 0.0, 1.0, and 2.0.

The last one seems to be under development, because it isn’t enabled yet. This new version removes redundant open attribute table dumps and attribute name dumps, which were previously made during a checkpoint (the same data can be reconstructed from log records, so there is no reason to waste space and link these dumps to a client restart area).

Currently, only the first two versions are used: 0.0 and 1.0. There are no significant differences between them; the most notable one is the format of open attribute entries.

A client restart area contains the major and minor version numbers of the NTFS client data format used and an LSN to be used as the starting point for the analysis pass (when the NTFS driver builds a table of transactions and a table of dirty data ranges). A client restart area also contains LSNs for a transaction table dumped to the log file from memory (this table can be absent), an open attribute table dumped to the log file from memory, a list of attribute names dumped to the log file from memory, and a dirty page table dumped to the log file from memory (which is used to track dirty data ranges).
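An illustrative C sketch of these fields (names are mine; the real structure also stores the lengths of the dumped tables and other details):

    #include <stdint.h>

    typedef struct {
        uint16_t major_version;            /* 0 or 1 (2 in the future) */
        uint16_t minor_version;
        uint64_t start_of_checkpoint_lsn;  /* starting point of the analysis pass */
        uint64_t transaction_table_lsn;    /* 0 if the dump is absent */
        uint64_t open_attribute_table_lsn;
        uint64_t attribute_names_lsn;
        uint64_t dirty_page_table_lsn;
    } NTFS_CLIENT_RESTART_SKETCH;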

An open attribute table and a list of attribute names reference nonresident attributes opened for log operations. An entry from an open attribute table contains the $MFT reference number of the file record whose nonresident attribute has been opened and the type code of this attribute (e.g., $DATA). An entry from a list of attribute names contains the Unicode name of an opened nonresident attribute along with the index of the corresponding entry in the open attribute table.

A log record written during an operation on a nonresident attribute contains the index of the target attribute in the open attribute table. Based on this information (an $MFT file reference, an attribute type code, and an attribute name), it’s possible to locate the target attribute. A log record also contains the offset within the target attribute at which new data is going to be written.

It should be noted that no table referenced by a client restart area is in an up-to-date state. New items from log records written after the client restart area must be accounted for in these tables.

A log record is the actual descriptor of a logged operation. A log record contains a redo type and redo data (which can be empty), an undo type and undo data (which can be empty too), the number of the target $MFT file record segment (for operations on resident attributes and on $MFT data in general), the index of the target attribute within the open attribute table (for operations on nonresident attributes), and several fields used to calculate an offset within the target.
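A simplified C view of such a log record, to make the field roles concrete (again, names and layout are illustrative, not the driver’s exact structure):

    #include <stdint.h>

    typedef struct {
        uint16_t redo_operation;   /* one of the log operation types listed below */
        uint16_t undo_operation;
        uint16_t redo_offset;      /* where the redo data starts */
        uint16_t redo_length;      /* can be 0 */
        uint16_t undo_offset;
        uint16_t undo_length;      /* can be 0 */
        uint16_t target_attribute; /* index into the open attribute table */
        uint64_t target_vcn;       /* fields like these locate the byte */
        uint16_t record_offset;    /* to modify within the target */
        uint16_t attribute_offset;
    } NTFS_LOG_RECORD_SKETCH;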

Redo data is applied when a transaction is committed, and undo data is applied when a transaction is rolled back (to bring things back to their previous state). There are some exceptions, however: when a nonresident attribute is opened, its open attribute record is stored as redo data and its Unicode name is stored as undo data.

Here is a full list of log operation types (as of Windows 10, build 18323):

Noop
CompensationLogRecord
InitializeFileRecordSegment
DeallocateFileRecordSegment
WriteEndOfFileRecordSegment
CreateAttribute
DeleteAttribute
UpdateResidentValue
UpdateNonresidentValue
UpdateMappingPairs
DeleteDirtyClusters
SetNewAttributeSizes
AddIndexEntryRoot
DeleteIndexEntryRoot
AddIndexEntryAllocation
DeleteIndexEntryAllocation
WriteEndOfIndexBuffer
SetIndexEntryVcnRoot
SetIndexEntryVcnAllocation
UpdateFileNameRoot
UpdateFileNameAllocation
SetBitsInNonresidentBitMap
ClearBitsInNonresidentBitMap
HotFix
EndTopLevelAction
PrepareTransaction
CommitTransaction
ForgetTransaction
OpenNonresidentAttribute
OpenAttributeTableDump
AttributeNamesDump
DirtyPageTableDump
TransactionTableDump
UpdateRecordDataRoot
UpdateRecordDataAllocation
UpdateRelativeDataIndex
UpdateRelativeDataAllocation
ZeroEndOfFileRecord

Here is a decoded transaction used to rename a file (from “aaa.txt” to “bbb.txt”).

It should be noted that updates to some attributes can be recorded partially. For example, an update to the $STANDARD_INFORMATION attribute can record data starting from the M timestamp (and the C timestamp, which is stored before the M timestamp, will be absent in the redo/undo data).

The only thing left is the meaning of every log operation. Not today!


Update (2019-02-17):

How long does it take for old data to become overwritten with new data?

In one of my tests with Windows 10, it took 16 minutes. In another test with Windows 10, it took 5 hours and 20 minutes. In both tests, mouse movements were the only user activity.


NTFS Case Sensitivity on Windows

Original text by Tyranid’s Lair

Back in February 2018 Microsoft released an interesting blog post (link) which introduced per-directory case-sensitive NTFS support. MS have been working on making support for WSL more robust, and interop between the Linux and Windows sides of things started off a bit rocky. Of special concern was the different semantics between traditional Unix-like file systems and Windows NTFS.

I always keep an eye out for new Windows features which might have security implications, and per-directory case sensitivity certainly caught my attention. With 1903 not too far off I thought it was time I actually did a short blog post about per-directory case-sensitivity and mulled over some of the security implications. While I’m at it, why not go on a whistle-stop tour of case sensitivity in Windows NT over the years.

Disclaimer. I don’t currently work for Microsoft and never have, so much of what I’m going to discuss is informed speculation.

The Early Years

The Windows NT operating system has had the ability to have case-sensitive files since the very first version. This is because of the OS’s well known, but little used, POSIX subsystem. If you look at the documentation for CreateFile you’ll notice a flag, FILE_FLAG_POSIX_SEMANTICS, which is used for the following purpose: «Access will occur according to POSIX rules. This includes allowing multiple files with names, differing only in case, for file systems that support that naming.»
It makes sense therefore that all you’d need to do to get a case-sensitive file system is use this flag exclusively. Of course, being an optional flag, it’s unlikely that the majority of Windows software will use it correctly. You might wonder what the flag is actually doing, as CreateFile is not a system call. If we dig into the code inside KERNEL32 we’ll find the following:
BOOL CreateFileInternal(LPCWSTR lpFileName, …, DWORD dwFlagsAndAttributes)
{
    // …
    OBJECT_ATTRIBUTES ObjectAttributes;
    if (dwFlagsAndAttributes & FILE_FLAG_POSIX_SEMANTICS) {
        ObjectAttributes.Attributes = 0;
    } else {
        ObjectAttributes.Attributes = OBJ_CASE_INSENSITIVE;
    }
    NtCreateFile(…, &ObjectAttributes, …);
}
This code shows that if the FILE_FLAG_POSIX_SEMANTICS flag is set, the Attributes member of the OBJECT_ATTRIBUTES structure passed to NtCreateFile is initialized to 0. Otherwise it’s initialized with the flag OBJ_CASE_INSENSITIVE. OBJ_CASE_INSENSITIVE instructs the Object Manager to do a case-insensitive lookup for a named kernel object. However, files do not get parsed directly by the Object Manager, so the IO manager converts this flag to the IO_STACK_LOCATION flag SL_CASE_SENSITIVE before handing it off to the file system driver in an IRP_MJ_CREATE IRP. The file system driver can then honour that flag or not; in the case of NTFS it honours it and performs a case-sensitive file search instead of the default case-insensitive search.
Aside. Specifying FILE_FLAG_POSIX_SEMANTICS supports one other additional feature of CreateFile that I can see. By specifying FILE_FLAG_BACKUP_SEMANTICS, FILE_FLAG_POSIX_SEMANTICS and FILE_ATTRIBUTE_DIRECTORY in the dwFlagsAndAttributes parameter and CREATE_NEW as the dwCreationDisposition parameter, the API will create a new directory and return a handle to it. This would normally require calling CreateDirectory followed by a second call to open the directory, or using the native NtCreateFile system call.
NTFS has always supported case-preserving operations, so creating the file AbC.txt will leave the case intact. However, when it does an initial check to make sure the file doesn’t already exist, a request for abc.TXT would find the existing file during a case-insensitive search. If the create is done case-sensitively then NTFS won’t find the file and you can now create the second file. This allows NTFS to support full case-sensitivity.
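Here’s a minimal C sketch of that create pattern using FILE_FLAG_POSIX_SEMANTICS. Note the second create is expected to fail on a default install, for reasons we’ll get to shortly:

    #include <windows.h>
    #include <stdio.h>

    int wmain(void)
    {
        /* Create AbC.txt, then try to create abc.TXT alongside it. */
        HANDLE h1 = CreateFileW(L"AbC.txt", GENERIC_WRITE, 0, NULL, CREATE_NEW,
                                FILE_ATTRIBUTE_NORMAL | FILE_FLAG_POSIX_SEMANTICS,
                                NULL);
        HANDLE h2 = CreateFileW(L"abc.TXT", GENERIC_WRITE, 0, NULL, CREATE_NEW,
                                FILE_ATTRIBUTE_NORMAL | FILE_FLAG_POSIX_SEMANTICS,
                                NULL);
        printf("AbC.txt: %s\n", h1 != INVALID_HANDLE_VALUE ? "created" : "failed");
        printf("abc.TXT: %s\n", h2 != INVALID_HANDLE_VALUE ? "created" : "failed");
        if (h1 != INVALID_HANDLE_VALUE) CloseHandle(h1);
        if (h2 != INVALID_HANDLE_VALUE) CloseHandle(h2);
        return 0;
    }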
It seems too simple to create files in a case-sensitive manner: just use the FILE_FLAG_POSIX_SEMANTICS flag or don’t pass OBJ_CASE_INSENSITIVE to NtCreateFile. Let’s try that using PowerShell on a default installation of Windows 10 1809 to see if that’s really the case.

Opening the file AbC.txt with OBJ_CASE_INSENSITIVE and without.

First we create a file with the name AbC.txt; as NTFS is case-preserving, this will be the name assigned to it in the file system. We then open the file, first with the OBJ_CASE_INSENSITIVE attribute flag set, specifying the name all in lowercase. As expected we open the file, and displaying the name shows the case-preserved form. Next we do the same operation without the OBJ_CASE_INSENSITIVE flag; unexpectedly, it still works. It seems the kernel is just ignoring the missing flag and doing the open case-insensitively.
It turns out this is by design: as case-insensitive operation is defined as opt-in, no one would ever correctly set the flag, and the whole edifice of the Windows subsystem would probably quickly fall apart. Therefore support for case-sensitive operation is gated behind a Session Manager Kernel registry value, ObCaseInsensitive. This registry value is reflected in the global kernel variable ObpCaseInsensitive, which is set to TRUE by default. There’s only one place this variable is used, ObpLookupObjectName, which looks like the following:
NTSTATUS ObpLookupObjectName(POBJECT_ATTRIBUTES ObjectAttributes, …)
{
    // …
    DWORD Attributes = ObjectAttributes->Attributes;
    if (ObpCaseInsensitive) {
        Attributes |= OBJ_CASE_INSENSITIVE;
    }
    // Continue lookup.
}
From this code we can see that if ObpCaseInsensitive is set to TRUE then, regardless of the Attributes flags passed to the lookup operation, OBJ_CASE_INSENSITIVE is always set. What this means is that no matter what you do, you can’t perform a case-sensitive lookup operation on a default install of Windows. Of course if you installed the POSIX subsystem you’ll typically find the kernel variable set to FALSE, which would enable case-sensitive operation for everyone, at least if they forget to set the flags.
Let’s try the same test again with PowerShell but make sure ObpCaseInsensitive is FALSE to see if we now get the expected operation.

Running the same tests but with ObpCaseInsensitive set to FALSE. With OBJ_CASE_INSENSITIVE the file open succeeds, without the flag it fails with an error.

With the OBJ_CASE_INSENSITIVE flag set we can still open the file AbC.txt with the lowercase name. However, without specifying the flag we get STATUS_OBJECT_NAME_NOT_FOUND, which indicates the lookup operation failed.

Windows Subsystem for Linux

Let’s fast forward to the introduction of WSL in Windows 10 1607. WSL needed some way of representing a typical case-sensitive Linux file system. In theory the developers could have implemented it on top of a case-insensitive file system, but that’d likely introduce too many compatibility issues. However, just disabling ObCaseInsensitive globally would likely introduce its own set of compatibility issues on the Windows side. A compromise was needed to support case-sensitive files on an existing volume.
Aside. It could be argued that Unix-like operating systems (including Linux) don’t have a case-sensitive file system at all, but a case-blind file system. Most Unix-like file systems just treat file names on disk as strings of opaque bytes; either the file name matches a sequence of bytes or it doesn’t. The file system doesn’t really care whether any particular byte is a lower or upper case character. This of course leads to interesting problems, such as two file names which look identical to a user having different byte representations, resulting in unexpected failures to open files. Some file systems such as macOS’s HFS+ use Unicode Normalization Forms to give file names a canonical byte representation to make this easier, but this leads to massive additional complexity, and was infamously removed in the successor APFS.
This compromise can be found back in ObpLookupObjectName as shown below:
NTSTATUS ObpLookupObjectName(POBJECT_ATTRIBUTES ObjectAttributes, …)
{
    // …
    DWORD Attributes = ObjectAttributes->Attributes;
    if (ObpCaseInsensitive &&
        KeGetCurrentThread()->CrossThreadFlags.ExplicitCaseSensitivity == FALSE) {
        Attributes |= OBJ_CASE_INSENSITIVE;
    }
    // Continue lookup.
}
In the code we now find that the existing check for ObpCaseInsensitive is augmented with an additional check on the current thread’s CrossThreadFlags for the ExplicitCaseSensitivity bit flag. Only if the flag is not set will case-insensitive lookup be forced. This looks like a quick hack to get case-sensitive files without having to change the global behavior. We can find the code which sets this flag in NtSetInformationThread:
NTSTATUS NtSetInformationThread(HANDLE ThreadHandle,
                                THREADINFOCLASS ThreadInformationClass,
                                PVOID ThreadInformation,
                                ULONG ThreadInformationLength)
{
    switch (ThreadInformationClass) {
    case ThreadExplicitCaseSensitivity:
        if (ThreadInformationLength != sizeof(DWORD))
            return STATUS_INFO_LENGTH_MISMATCH;
        DWORD value = *((DWORD*)ThreadInformation);
        if (value) {
            if (!SeSinglePrivilegeCheck(SeDebugPrivilege, PreviousMode))
                return STATUS_PRIVILEGE_NOT_HELD;
            if (!RtlTestProtectedAccess(Process, 0x51))
                return STATUS_ACCESS_DENIED;
        }
        if (value)
            Thread->CrossThreadFlags.ExplicitCaseSensitivity = TRUE;
        else
            Thread->CrossThreadFlags.ExplicitCaseSensitivity = FALSE;
        break;
    }
    // …
}
Notice that in the code to set the ExplicitCaseSensitivity flag we need both SeDebugPrivilege and to be a protected process at level 0x51, which is PPL at Windows signing level. This code is from Windows 10 1809; I’m not sure it was this restrictive previously. However, for the purposes of WSL it doesn’t matter, as all processes are gated by a system service and kernel driver so these checks can be easily bypassed. As any new thread for a WSL process must go via the Pico process driver, this flag can be automatically set and everything just works.

Per-Directory Case-Sensitivity

A per-thread opt-out from case-insensitivity solved the immediate problem, allowing WSL to create case-sensitive files on an existing volume, but it didn’t help Windows applications inter-operating with files created by WSL. I’m guessing NTFS makes no guarantees about which file will get opened when performing a case-insensitive lookup where multiple files have the same name differing only in case. A Windows application could easily get into difficulty trying to open a file and always getting the wrong one. Further work was clearly needed, so introduced in 1803 was the topic at the start of this blog, Per-Directory Case Sensitivity.
The NTFS driver already handled the case-sensitive lookup operation, so why not move the responsibility for enabling case-sensitive operation into NTFS? There’s plenty of spare capacity for a simple bit flag. The blog post I referenced at the start suggests using the fsutil command to set case-sensitivity; however, of course I want to know how it’s done under the hood, so I put fsutil from a Windows Insider build into IDA to find out what it was doing. Fortunately changing case-sensitivity is now documented: you pass the FILE_CASE_SENSITIVE_INFORMATION structure with the FILE_CS_FLAG_CASE_SENSITIVE_DIR flag set via NtSetInformationFile to a directory, using the FileCaseSensitiveInformation information class. We can see the implementation for this in the NTFS driver.
NTSTATUS NtfsSetCaseSensitiveInfo(PIRP Irp, PNTFS_FILE_OBJECT FileObject)
{
    if (FileObject->Type != FILE_DIRECTORY) {
        return STATUS_INVALID_PARAMETER;
    }

    NTSTATUS status = NtfsCaseSensitiveInfoAccessCheck(Irp, FileObject);
    if (NT_ERROR(status))
        return status;

    PFILE_CASE_SENSITIVE_INFORMATION info =
        (PFILE_CASE_SENSITIVE_INFORMATION)Irp->AssociatedIrp.SystemBuffer;

    if (info->Flags & FILE_CS_FLAG_CASE_SENSITIVE_DIR) {
        if ((g_NtfsEnableDirCaseSensitivity & 1) == 0)
            return STATUS_NOT_SUPPORTED;
        if ((g_NtfsEnableDirCaseSensitivity & 2) &&
            !NtfsIsFileDeleteable(FileObject)) {
            return STATUS_DIRECTORY_NOT_EMPTY;
        }
        FileObject->Flags |= 0x400;
    } else {
        if (NtfsDoesDirHaveCaseDifferingNames(FileObject)) {
            return STATUS_CASE_DIFFERING_NAMES_IN_DIR;
        }
        FileObject->Flags &= ~0x400;
    }
    return STATUS_SUCCESS;
}
There’s a bit to unpack here. Firstly you can only apply this to a directory, which makes some sense based on the description of the feature. You also need to pass an access check with the call NtfsCaseSensitiveInfoAccessCheck. We’ll skip over that for a second. 
Next we go into the actual setting or unsetting of the flag. Support for Per-Directory Case-Sensitivity is not enabled unless bit 0 is set in the global g_NtfsEnableDirCaseSensitivity variable. This value is loaded from the value NtfsEnableDirCaseSensitivity in HKLM\SYSTEM\CurrentControlSet\Control\FileSystem, which is set to 0 by default. This means that this feature is not available on a fresh install of Windows 10; almost certainly this value is set when WSL is installed, but I’ve also found it on the Microsoft app-development VM which I don’t believe has WSL installed, so you might find it enabled in unexpected places. The g_NtfsEnableDirCaseSensitivity variable can also have bit 1 set, which indicates that the directory must be empty before changing the case-sensitivity flag (checked with NtfsIsFileDeleteable), however I’ve not seen that enabled. If those checks pass then the flag 0x400 is set in the NTFS file object.
If the flag is being unset, the only check made is whether the directory contains any existing colliding file names. This seems to have been added recently, as when I originally tested this feature in an Insider Preview you could disable the flag with conflicting filenames present, which isn’t necessarily sensible behavior.
Going back to the access check, the code for NtfsCaseSensitiveInfoAccessCheck looks like the following:
NTSTATUS NtfsCaseSensitiveInfoAccessCheck(PIRP Irp, PNTFS_FILE_OBJECT FileObject)
{
    if (NtfsEffectiveMode(Irp) || FileObject->Access & FILE_WRITE_ATTRIBUTES) {
        PSECURITY_DESCRIPTOR SecurityDescriptor;
        SECURITY_SUBJECT_CONTEXT SubjectContext;
        SeCaptureSubjectContext(&SubjectContext);
        NtfsLoadSecurityDescriptor(FileObject, &SecurityDescriptor);
        if (SeAccessCheck(SecurityDescriptor, &SubjectContext,
                          FILE_ADD_FILE | FILE_ADD_SUBDIRECTORY |
                          FILE_DELETE_CHILD)) {
            return STATUS_SUCCESS;
        }
    }
    return STATUS_ACCESS_DENIED;
}
The first check ensures the file handle is opened with FILE_WRITE_ATTRIBUTES access; however, that isn’t sufficient to enable the flag. The check also ensures that if an access check is performed against the directory’s security descriptor, the caller would be granted the FILE_ADD_FILE, FILE_ADD_SUBDIRECTORY and FILE_DELETE_CHILD access rights. Presumably this secondary check is to prevent situations where a file handle was shared to another process with fewer privileges but with FILE_WRITE_ATTRIBUTES rights.
If the security check passes and the feature is enabled, you can now change the case-sensitivity behavior, and it’s even honored by arbitrary Windows applications such as PowerShell or Notepad without any changes. Also note that the case-sensitivity flag is inherited by any new directory created under the original.

Setting case sensitivity on a directory, then using Set-Content and Get-Content to interact with the files.
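For completeness, here is a sketch of setting the flag directly from user mode, as fsutil does. The target path is a placeholder, the FILE_CASE_SENSITIVE_INFORMATION declarations normally live in kernel-mode headers so they’re declared inline here, and the program must be linked against ntdll.lib:

    #include <windows.h>
    #include <winternl.h>

    /* These normally come from ntifs.h; declared here for a user-mode build. */
    #define FILE_CS_FLAG_CASE_SENSITIVE_DIR 0x00000001
    typedef struct _FILE_CASE_SENSITIVE_INFORMATION {
        ULONG Flags;
    } FILE_CASE_SENSITIVE_INFORMATION;

    NTSTATUS NTAPI NtSetInformationFile(HANDLE FileHandle,
                                        PIO_STATUS_BLOCK IoStatusBlock,
                                        PVOID FileInformation, ULONG Length,
                                        FILE_INFORMATION_CLASS FileInformationClass);

    int wmain(void)
    {
        /* The directory must be opened with FILE_WRITE_ATTRIBUTES access. */
        HANDLE dir = CreateFileW(L"C:\\test\\dir", FILE_WRITE_ATTRIBUTES,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                 OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
        if (dir == INVALID_HANDLE_VALUE)
            return 1;

        IO_STATUS_BLOCK iosb;
        FILE_CASE_SENSITIVE_INFORMATION info = { FILE_CS_FLAG_CASE_SENSITIVE_DIR };
        /* FileCaseSensitiveInformation is information class 71. */
        NTSTATUS status = NtSetInformationFile(dir, &iosb, &info, sizeof(info),
                                               (FILE_INFORMATION_CLASS)71);
        CloseHandle(dir);
        return status >= 0 ? 0 : 1; /* NT_SUCCESS is just a >= 0 check */
    }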

Security Implications of Per-Directory Case-Sensitivity

Let’s get on to the thing which interests me most: what are the security implications of this feature? You might not immediately see a problem with this behavior. What it does do is subvert the expectations of normal Windows applications when it comes to the behavior of file name lookup, with no way of detecting its use or mitigating against it. At least with the FILE_FLAG_POSIX_SEMANTICS flag you were only introducing unexpected case-sensitivity if you opted in, but this feature means the NTFS driver doesn’t pay any attention to the state of OBJ_CASE_INSENSITIVE when making its lookup decisions. That’s great from an interop perspective, but less great from a correctness perspective.
Some of the cases where I could see this being a problem are as follows:

  • TOCTOU, where the file name used to open a file has its case modified between a security check and the final operation, resulting in the check opening a different file to the final one.
  • Overriding file lookup in a shared location if the create request’s case doesn’t match the actual case of the file on disk. This would be mitigated if the flag restricting case-sensitivity changes to empty directories was enabled by default.
  • Directory tee’ing, where you replace lookup of an earlier directory in a path based on the state of the case-sensitive flag. This at least is partially mitigated by the check for conflicting file names in a directory, however I’ve no idea how robust that is.

I found it interesting that this feature also doesn’t use RtlIsSandboxToken to check whether the caller is in a sandbox. As long as you meet the access check requirements it looks like you can do this from an AppContainer, but it’s possible I missed something. On the plus side this feature isn’t enabled by default, but I could imagine it getting set accidentally through enterprise imaging, or some future application deciding it must be on, such as Visual Studio. It’s a lot better from a security perspective than turning on case-sensitivity globally. Also, despite my initial interest, I’ve yet to actually find a good use for this behavior, but IMO it’s only a matter of time 🙂

Abusing Mount Points over the SMB Protocol

Original text by Tyranid’s Lair

This blog post is a quick writeup on an interesting feature of SMBv2 which might have uses for lateral movement and red-teamers. When I last spent significant time looking at symbolic link attacks on Windows I took a close look at the SMB server. Since version 2 the SMB protocol has supported symbolic links, specifically the NTFS Reparse Point format. If the SMB server encounters an NTFS symbolic link within a share, it’ll extract the REPARSE_DATA_BUFFER and return it to the client, based on the SMBv2 protocol specification §2.2.2.2.1.

Screenshot of symbolic link error response from SMB specifications.

The client OS is responsible for parsing the REPARSE_DATA_BUFFER and following it locally. This means that only files the client can already access can be referenced by symbolic links. In fact, even resolving symbolic links locally isn’t enabled by default, although I did find a bypass which allowed a malicious server to circumvent the client policy, allowing symbolic links to be resolved locally. Microsoft declined to fix the bypass at the time; it’s issue 138 if you’re interested.

What I found interesting is that while IO_REPARSE_TAG_SYMLINK is handled specially on the client, if the server encounters the IO_REPARSE_TAG_MOUNT_POINT reparse point it will follow it on the server. Therefore, if you could introduce a mount point within a share you could access any fixed disk on the server, even if it’s not shared directly. That could have many uses for lateral movement, but the question becomes: how could we add a mount point without already having local access to the disk?

The first thing to try is to just create a mount point via a UNC path and see what happens. Using the MKLINK CMD built-in you get the following:

Using mklink on \\localhost\c$\abc returns the error "Local NTFS volumes are required to complete the operation."

The error would indicate that setting mount points on remote servers isn’t supported. This would make some sense: setting a mount point on a remote drive could result in unexpected consequences. You’d assume the protocol either doesn’t support setting reparse points at all, or at least restricts them to only allowing symbolic links. We can get a rough idea of what the protocol expects by looking up the details in the protocol specification. Setting a reparse point requires sending the FSCTL_SET_REPARSE_POINT IO control code to a file, therefore we can look up the section on the SMB2 IOCTL command to see if there’s any information about the control code.

After a bit of digging you’ll find that FSCTL_SET_REPARSE_POINT is indeed supported and there’s a note in §3.3.5.15.13 which I’ve reproduced below.

«When the server receives a request that contains an SMB2 header with a Command value equal to SMB2 IOCTL and a CtlCode of FSCTL_SET_REPARSE_POINT, message handling proceeds as follows:

If the ReparseTag field in FSCTL_SET_REPARSE_POINT, as specified in [MS-FSCC] section 2.3.65, is not IO_REPARSE_TAG_SYMLINK, the server SHOULD verify that the caller has the required permissions to execute this FSCTL.<330> If the caller does not have the required permissions, the server MUST fail the call with an error code of STATUS_ACCESS_DENIED.»

The text in the specification seems to imply the server only needs to check explicitly for IO_REPARSE_TAG_SYMLINK, and if the tag is something different it should do some sort of check to see if it’s allowed, but it doesn’t say anything about setting a different tag being explicitly banned. Perhaps it’s just the MKLINK built-in which doesn’t handle this scenario? Let’s try the CreateMountPoint tool from my symboliclink-testing-tools project and see if that helps.

Using CreateMountPoint on \\localhost\c$\abc gives access denied.

CreateMountPoint doesn’t show an error about only supporting local NTFS volumes, but it does return an access denied error. This ties in with the description in §3.3.5.15.13: if the implied check fails, the code should return access denied. Of course the protocol specification doesn’t actually say what check should be performed; I guess it’s time to break out the disassembler and look at the implementation in the SMBv2 driver, srv2.sys.

I used IDA to look for immediate values matching IO_REPARSE_TAG_SYMLINK, which is 0xA000000C. It seemed likely that any check would first look for that value along with any checking for the other tags. In the driver from Windows 10 1809 there was only one hit, in Smb2ValidateIoctl. The code is roughly as follows:

NTSTATUS Smb2ValidateIoctl(SmbIoctlRequest* request)
{
    // …
    switch (request->IoControlCode) {
    case FSCTL_SET_REPARSE_POINT:
        REPARSE_DATA_BUFFER* reparse = (REPARSE_DATA_BUFFER*)request->Buffer;
        // Validate length etc.
        if (reparse->ReparseTag != IO_REPARSE_TAG_SYMLINK &&
            !request->SomeOffset->SomeByteValue) {
            return STATUS_ACCESS_DENIED;
        }
        // Complete FSCTL_SET_REPARSE_POINT request.
    }
}

The code extracts the data from the IOCTL request and fails with STATUS_ACCESS_DENIED if the tag is not IO_REPARSE_TAG_SYMLINK and some byte value referenced from the request data is 0. Tracking down who sets this value can be tricky sometimes; however, I usually have good results by just searching for the variable’s offset as an immediate value in IDA, in this case 0x200, and going through the results looking for likely MOV instructions. I found an instruction «MOV [RCX+0x200], AL» inside Smb2ExecuteSessionSetupReal which looked to be the one. The variable is set with the result of a call to Smb2IsAdmin, which just checks whether the caller has the BUILTIN\Administrators group in their token. It seems we can set arbitrary reparse points on a remote share, as long as we’re an administrator on the machine. We should still test that that’s really the case:

Using CreateMountPoint on \\localhost\c$\abc is successful and listing the directory showing the windows folder.

Testing from an administrator account allows us to create the mount point, and when listing the directory from a UNC path the Windows folder is shown. While I’ve demonstrated this on local admin shares, this will work on any share, and the mount point is followed on the remote server.
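To show roughly what a tool like CreateMountPoint does under the hood, here is a condensed sketch: open the directory over the UNC path and issue FSCTL_SET_REPARSE_POINT with an IO_REPARSE_TAG_MOUNT_POINT buffer. The server and share names are placeholders, and the REPARSE_DATA_BUFFER layout is declared inline (flattened to the mount-point variant) since it lives in kernel headers:

    #include <windows.h>
    #include <winioctl.h>
    #include <wchar.h>
    #include <stdio.h>

    /* Flattened mount-point variant of REPARSE_DATA_BUFFER (from ntifs.h). */
    typedef struct {
        ULONG  ReparseTag;
        USHORT ReparseDataLength;
        USHORT Reserved;
        USHORT SubstituteNameOffset;
        USHORT SubstituteNameLength;
        USHORT PrintNameOffset;
        USHORT PrintNameLength;
        WCHAR  PathBuffer[64];
    } MOUNT_POINT_BUFFER;

    int wmain(void)
    {
        const WCHAR *dir = L"\\\\server\\share\\abc"; /* placeholder UNC path */
        const WCHAR *target = L"\\??\\C:\\";           /* NT path of the target */

        HANDLE h = CreateFileW(dir, GENERIC_WRITE, 0, NULL, OPEN_EXISTING,
                               FILE_FLAG_BACKUP_SEMANTICS |
                               FILE_FLAG_OPEN_REPARSE_POINT, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        MOUNT_POINT_BUFFER rb = {0};
        USHORT sub_len = (USHORT)(wcslen(target) * sizeof(WCHAR));
        rb.ReparseTag = IO_REPARSE_TAG_MOUNT_POINT;
        rb.SubstituteNameOffset = 0;
        rb.SubstituteNameLength = sub_len;
        rb.PrintNameOffset = (USHORT)(sub_len + sizeof(WCHAR)); /* after the null */
        rb.PrintNameLength = 0;                                 /* empty print name */
        wcscpy(rb.PathBuffer, target);
        /* Everything after Reserved: 4 USHORTs plus both strings + terminators. */
        rb.ReparseDataLength = (USHORT)(8 + sub_len + 2 * sizeof(WCHAR));

        DWORD bytes;
        BOOL ok = DeviceIoControl(h, FSCTL_SET_REPARSE_POINT, &rb,
                                  FIELD_OFFSET(MOUNT_POINT_BUFFER, PathBuffer) +
                                      sub_len + 2 * sizeof(WCHAR),
                                  NULL, 0, &bytes, NULL);
        printf("FSCTL_SET_REPARSE_POINT: %s\n", ok ? "succeeded" : "failed");
        CloseHandle(h);
        return ok ? 0 : 1;
    }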

Is this trick useful? Requiring administrator access does mean it’s not something you could abuse for local privilege escalation, and if you have administrator access remotely there are almost certainly nastier things you could do. Still, it could be useful if the target machine has the admin shares disabled, or there’s monitoring in place which would detect the use of ADMIN$ or C$ in lateral movement: if there’s any other writable share, you could add a new directory giving full control over any other fixed drive.

I can’t find anyone documenting this before, but I could have missed it, as the search results are heavily biased towards SAMBA configurations when you search for SMB and mount points (for obvious reasons). This trick is another example of why you should test any assumptions about the security behavior of a system, as the actual behavior is probably not documented. Even though a tool such as MKLINK claims a lack of support for setting remote mount points, by digging into the available specification and looking at the code itself you can find some interesting stuff.