A few months ago, Microsoft let slip a forthcoming Windows 10 feature that was, at the time, called InPrivate Desktop: a lightweight virtual machine for running untrusted applications in an isolated environment. That feature has now been officially announced with a new name, Windows Sandbox.
Windows 10 already uses virtual machines to increase isolation between certain components and protect the operating system. These VMs have been used in a few different ways. Since its initial release, for example, suitably configured systems have used a small virtual machine running alongside the main operating system to host portions of LSASS. LSASS is a critical Windows subsystem that, among other things, knows various secrets, such as password hashes, encryption keys, and Kerberos tickets. Here, the VM is used to protect LSASS from hacking tools such that even if the base operating system is compromised, these critical secrets might be kept safe.
In the other direction, Microsoft added the ability to run Edge tabs within a virtual machine to reduce the risk of compromise when visiting a hostile website. The goal here is the opposite of the LSASS virtual machine—it’s designed to stop anything nasty from breaking out of the virtual machine and contaminating the main operating system, rather than preventing an already contaminated main operating system from breaking into the virtual machine.
Windows Sandbox is similar to the Edge virtual machine but designed for arbitrary applications. Running software in a virtual machine and then integrating that software into the main operating system is not new—VMware has done this on Windows for two decades now—but Windows Sandbox is using a number of techniques to reduce the overhead of the virtual machine while also maximizing the performance of software running within the VM, without compromising the isolation it offers.
The sandbox depends on operating system files residing in the host. (Image: Microsoft)
Traditional virtual machines have their own operating system installation stored on a virtual disk image, and that operating system must be updated and maintained separately from the host operating system. The disk image used by Windows Sandbox, by contrast, shares the majority of its files with the host operating system; it contains a small amount of mutable data, the rest being immutable references to host OS files. This means that it’s always running the same version of Windows as the host and that, as the host is updated and patched, the sandbox OS is likewise updated and patched.
Sharing is used for memory, too; operating system executables and libraries loaded within the VM use the same physical memory as those same executables and libraries loaded into the host OS.
That sharing of the host’s operating system files even occurs when the files are loaded into memory. (Image: Microsoft)
Standard virtual machines running a complete operating system include their own process scheduler that carves up processor time between all the running threads and processes. For regular VMs, this scheduler is opaque; the host just knows that the guest OS is running, and it has no insight into the processors and threads within that guest. The sandbox virtual machine is different; its processes and threads are directly exposed to the host OS’ scheduler, and they are scheduled just like any other threads on the machine. This means that if the sandbox has a low priority thread, it can be displaced by a higher priority thread from the host. The result is that the host is generally more responsive, and the sandbox behaves like a regular application, not a black-box virtual machine.
On top of this, video cards with WDDM 2.5 drivers can offer hardware-accelerated graphics to software running within the sandbox. With older drivers, the sandbox will run with the kind of software-emulated graphics that are typical of virtual machines.
Taken together, Windows Sandbox combines elements of virtual machines and containers. The security boundary between the sandbox and the host operating system is a hardware-enforced boundary, as is the case with virtual machines, and the sandbox has virtualized hardware much like a VM. At the same time, other aspects—such as sharing executables both on-disk and in-memory with the host as well as running an identical operating system version as the host—use technology from Windows Containers.
At least for now, the Sandbox appears to be entirely ephemeral. It gets destroyed and reset whenever it’s closed, so no changes can persist between runs. The Edge virtual machines worked similarly in their first incarnation; in subsequent releases, Microsoft added support for transferring files from the virtual machine to the host so that they could be stored persistently. We’d expect a similar kind of evolution for the Sandbox.
Windows Sandbox will be available in Insider builds of Windows 10 Pro and Enterprise starting with build 18305. At the time of writing, that build hasn’t shipped to insiders, but we expect it to be coming soon.
Hello and welcome back to the fifth part of the “Hypervisor From Scratch” tutorial series. Today we will be configuring our previously allocated Virtual Machine Control Structure (VMCS) and, at the end, we execute VMLAUNCH and enter our hardware-virtualized world! Before reading the rest of this part, you should read the previous parts, as they depend heavily on each other.
The full source code of this tutorial is available on GitHub :
Most of this topic is derived from Chapter 24 (VIRTUAL MACHINE CONTROL STRUCTURES) and Chapter 26 (VM ENTRIES) of the Intel 64 and IA-32 Architectures Software Developer’s Manual, Combined Volume 3. Of course, for more information, you can read the manual as well.
Table of contents
Introduction
Table of contents
VMX Instructions
VMPTRST
VMCLEAR
VMPTRLD
Enhancing VM State Structure
Preparing to launch VM
VMX Configurations
Saving a return point
Returning to the previous state
VMLAUNCH
VMX Controls
VM-Execution Controls
VM-entry Control Bits
VM-exit Control Bits
PIN-Based Execution Control
Interruptibility State
Configuring VMCS
Gathering Machine state for VMCS
Setting up VMCS
Checking VMCS Layout
VM-Exit Handler
Resume to next instruction
VMRESUME
Let’s Test it!
Conclusion
References
This part is highly inspired by Hypervisor For Beginners, and some of the methods are implemented exactly as they are in that project.
VMX Instructions
In part 3, we implemented the VMXOFF function; now let’s implement the functions for the other VMX instructions. I also made some changes to how the VMXON and VMPTRLD functions are called to make them more modular.
VMPTRST
VMPTRST stores the current-VMCS pointer into a specified memory address. The operand of this instruction is always 64 bits and it’s always a location in memory.
The following function is the implementation of VMPTRST:
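The body of that function didn’t survive here, so the following is a minimal sketch of what it might look like, using the __vmx_vmptrst intrinsic (the DbgPrint formatting is an assumption of this sketch):

UINT64 VMPTRST()
{
	PHYSICAL_ADDRESS vmcspa;
	vmcspa.QuadPart = 0;

	// Store the current-VMCS pointer into vmcspa
	__vmx_vmptrst((unsigned __int64 *)&vmcspa);

	DbgPrint("[*] VMPTRST %llx\n", vmcspa.QuadPart);
	return 0;
}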
VMCLEAR

This instruction applies to the VMCS whose VMCS region resides at the physical address contained in the instruction operand. The instruction ensures that VMCS data for that VMCS (some of which may currently be maintained on the processor) are copied to the VMCS region in memory. It also initializes some parts of the VMCS region (for example, it sets the launch state of that VMCS to clear).
BOOLEAN Clear_VMCS_State(IN PVirtualMachineState vmState) {

	// Clear the state of the VMCS to inactive
	int status = __vmx_vmclear(&vmState->VMCS_REGION);

	DbgPrint("[*] VMCS VMCLEAR Status is : %d\n", status);
	if (status)
	{
		// Otherwise terminate the VMX
		DbgPrint("[*] VMCS failed to clear with status %d\n", status);
		__vmx_off();
		return FALSE;
	}
	return TRUE;
}
VMPTRLD
It marks the current-VMCS pointer valid and loads it with the physical address in the instruction operand. The instruction fails if its operand is not properly aligned, sets unsupported physical-address bits, or is equal to the VMXON pointer. In addition, the instruction fails if the 32 bits in memory referenced by the operand do not match the VMCS revision identifier supported by this processor.
BOOLEAN Load_VMCS(IN PVirtualMachineState vmState) {

	int status = __vmx_vmptrld(&vmState->VMCS_REGION);
	if (status)
	{
		DbgPrint("[*] VMCS failed with status %d\n", status);
		return FALSE;
	}
	return TRUE;
}
In order to implement VMRESUME, you need to know about some VMCS fields first, so the implementation of VMRESUME comes after we implement VMLAUNCH (later in this topic).
Enhancing VM State Structure
As I told you in earlier parts, we need a structure to save the state of our virtual machine in each core separately. The following structure is used in the newest version of our hypervisor, each field will be described in the rest of this topic.
typedef struct _VirtualMachineState
{
	UINT64 VMXON_REGION;        // VMXON region
	UINT64 VMCS_REGION;         // VMCS region
	UINT64 EPTP;                // Extended-Page-Table Pointer
	UINT64 VMM_Stack;           // Stack for VMM in VM-Exit State
	UINT64 MSRBitMap;           // MSRBitMap Virtual Address
	UINT64 MSRBitMapPhysical;   // MSRBitMap Physical Address
} VirtualMachineState, *PVirtualMachineState;
Note that it’s not the final _VirtualMachineState structure; we’ll enhance it in future parts.
Preparing to launch VM
In this part, we’re just trying to test our hypervisor in our driver, in the future parts we add some user-mode interactions with our driver so let’s start with modifying our DriverEntry as it’s the first function that executes when our driver is loaded.
Below all the preparation from Part 2, we add the following lines to use our Part 4 (EPT) structures:
	// Initiating EPTP and VMX
	PEPTP EPTP = Initialize_EPTP();
	Initiate_VMX();
I also exported a global variable called “VirtualGuestMemoryAddress” that holds the address where our guest code starts.
Now let’s fill our allocated pages with \xf4, the opcode for the HLT instruction. I chose HLT because, with some special configuration (described below), it causes a VM-Exit and returns control to the host handler.
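As a rough sketch (assuming the guest pages were allocated during the EPT setup in Part 4 and VirtualGuestMemoryAddress points at them), the fill could simply be:

	// Fill the guest's executable page with 0xF4 (HLT) so the very first
	// fetched instruction triggers a VM-Exit once HLT exiting is enabled.
	memset((void *)VirtualGuestMemoryAddress, 0xF4, PAGE_SIZE);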
Let’s create a function which is responsible for running our virtual machine on a specific core.
void LaunchVM(int ProcessorID, PEPTP EPTP);
I set the ProcessorID to 0, so we’re in the 0th logical processor.
Keep in mind that every logical core has its own VMCS, and if you want your guest code to run on other logical processors, you should configure them separately.
Now we should set the affinity to the specific logical processor using the Windows KeSetSystemAffinityThread function, and make sure to choose the specific core’s vmState, as each core has its own separate VMXON and VMCS region.
	KAFFINITY kAffinityMask;

	kAffinityMask = ipow(2, ProcessorID);
	KeSetSystemAffinityThread(kAffinityMask);

	DbgPrint("[*]\t\tCurrent thread is executing in %d th logical processor.\n", ProcessorID);

	PAGED_CODE();
Now we should allocate a dedicated stack so that every time a VM-Exit occurs, we can save the registers and call other host functions.
I prefer to allocate a separate location for this stack instead of using the driver’s current RSP, but you can use the current stack (RSP) too.
The following lines are for allocating and zeroing the stack of our VM-Exit handler.
	// Allocate stack for the VM Exit Handler.
	UINT64 VMM_STACK_VA = ExAllocatePoolWithTag(NonPagedPool, VMM_STACK_SIZE, POOLTAG);
	vmState[ProcessorID].VMM_Stack = VMM_STACK_VA;

	if (vmState[ProcessorID].VMM_Stack == NULL)
	{
		DbgPrint("[*] Error in allocating VMM Stack.\n");
		return;
	}
	RtlZeroMemory(vmState[ProcessorID].VMM_Stack, VMM_STACK_SIZE);
Same as above, we allocate a page for the MSR Bitmap and add it to vmState; I’ll describe it later in this topic.
	// Allocate memory for MSRBitMap
	vmState[ProcessorID].MSRBitMap = MmAllocateNonCachedMemory(PAGE_SIZE);   // should be aligned
	if (vmState[ProcessorID].MSRBitMap == NULL)
	{
		DbgPrint("[*] Error in allocating MSRBitMap.\n");
		return;
	}
	RtlZeroMemory(vmState[ProcessorID].MSRBitMap, PAGE_SIZE);
	vmState[ProcessorID].MSRBitMapPhysical = VirtualAddress_to_PhysicalAddress(vmState[ProcessorID].MSRBitMap);
Now it’s time to clear our VMCS state and load it as the current VMCS in the specific processor (in our case the 0th logical processor).
The Clear_VMCS_State and Load_VMCS are described above :
	// Clear the VMCS State
	if (!Clear_VMCS_State(&vmState[ProcessorID])) {
		goto ErrorReturn;
	}

	// Load VMCS (Set the Current VMCS)
	if (!Load_VMCS(&vmState[ProcessorID])) {
		goto ErrorReturn;
	}
Now it’s time to set up the VMCS. A detailed explanation of the VMCS setup is available later in this topic.
	DbgPrint("[*] Setting up VMCS.\n");
	Setup_VMCS(&vmState[ProcessorID], EPTP);
The last step is to execute VMLAUNCH, but we shouldn’t forget to save the current state of the stack (RSP & RBP), because during the execution of guest code and after returning from a VM-Exit, we have to know that state and return to it. If you leave the driver with the wrong RSP & RBP, you will definitely see a BSOD.
	Save_VMXOFF_State();
Saving a return point
For Save_VMXOFF_State(), I declared two global variables called g_StackPointerForReturning and g_BasePointerForReturning. There is no need to save RIP, as the return address is always available on the stack. Just EXTERN them in the assembly file:
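The routine itself didn’t make it into this post, but a minimal sketch of what it might look like (matching the Restore_To_VMXOFF_State function below, which reads these globals) is:

EXTERN g_StackPointerForReturning:QWORD
EXTERN g_BasePointerForReturning:QWORD

.code _text

Save_VMXOFF_State PROC PUBLIC
	MOV g_StackPointerForReturning, rsp
	MOV g_BasePointerForReturning, rbp
	RET
Save_VMXOFF_State ENDP

end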
As we saved the current state, if we want to return to the previous state we have to restore RSP & RBP, clean up the stack, and eventually execute a RET instruction. (I also add a VMXOFF because it should be executed before returning.)
Restore_To_VMXOFF_State PROC PUBLIC

	VMXOFF                ; turn it off before exiting

	MOV rsp, g_StackPointerForReturning
	MOV rbp, g_BasePointerForReturning

	; make rsp point to a correct return point
	ADD rsp, 8

	; return true
	xor rax, rax
	mov rax, 1

	; return section
	mov rbx, [rsp+28h+8h]
	mov rsi, [rsp+28h+10h]
	add rsp, 020h
	pop rdi

	ret

Restore_To_VMXOFF_State ENDP
The “return section” is defined like this because I saw the return section of LaunchVM in IDA Pro.
LaunchVM Return Frame
One important thing that can’t easily be ignored in the above picture is that I have such a gorgeous, magnificent & super beautiful IDA Pro theme. I’m always proud of myself for choosing themes like this!
VMLAUNCH
Now it’s time to execute VMLAUNCH.
	__vmx_vmlaunch();

	// if VMLAUNCH succeeds we will never be here!
	ULONG64 ErrorCode = 0;
	__vmx_vmread(VM_INSTRUCTION_ERROR, &ErrorCode);
	__vmx_off();
	DbgPrint("[*] VMLAUNCH Error : 0x%llx\n", ErrorCode);
	DbgBreakPoint();
As the comment describes, if VMLAUNCH succeeds we’ll never execute the lines that follow. If there is an error in the state of the VMCS (which is a common problem), we have to run VMREAD and read the error code from the VM_INSTRUCTION_ERROR field of the VMCS, execute VMXOFF, and print the error. DbgBreakPoint is just a debug breakpoint (int 3) and it is only useful if you’re working with a remote kernel WinDbg debugger. Clearly you can’t test this on your own system, because executing a cc in the kernel will freeze your system as long as there is no debugger to catch it, so it’s highly recommended to set up a remote kernel-debugging machine and test your code there.
Also, it can’t be tested through remote VMware debugging (or other virtual-machine debugging tools) because nested VMX is not supported in current Intel processors.
Remember, we’re still in the LaunchVM function; __vmx_vmlaunch() is the intrinsic function for VMLAUNCH, and __vmx_vmread is for the VMREAD instruction.
Now it’s time to read some theories before configuring VMCS.
VMX Controls
VM-Execution Controls
In order to control our guest’s features, we have to set some fields in our VMCS. The following tables represent the Primary Processor-Based VM-Execution Controls and the Secondary Processor-Based VM-Execution Controls.
In earlier versions of VMX, there was nothing like the Secondary Processor-Based VM-Execution Controls. If you want to use the secondary table, you have to set the 31st bit of the first table; otherwise the secondary controls are treated as if they were all zeros.
The definition of the above table is as follows (we ignore some bits; you can define them if you want to use them in your hypervisor):
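As a hedged sketch, a few of the bits we actually use could be defined as follows (the names follow the project’s conventions; the bit positions are the standard ones from the Intel SDM):

// Primary Processor-Based VM-Execution Controls (a few bits only)
#define CPU_BASED_HLT_EXITING                  0x00000080   // bit 7  : VM-Exit on HLT
#define CPU_BASED_ACTIVATE_SECONDARY_CONTROLS  0x80000000   // bit 31 : use the secondary controls

// Secondary Processor-Based VM-Execution Controls (a few bits only)
#define CPU_BASED_CTL2_ENABLE_EPT              0x00000002   // bit 1  : enable EPT
#define CPU_BASED_CTL2_RDTSCP                  0x00000008   // bit 3  : enable RDTSCP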
The pin-based VM-execution controls constitute a 32-bit vector that governs the handling of asynchronous events (for example, interrupts). We’ll use it in future parts, but for now let’s define it in our hypervisor.
The guest-state area includes the following fields that characterize guest state but which do not correspond to processor registers.

Activity state (32 bits). This field identifies the logical processor’s activity state. When a logical processor is executing instructions normally, it is in the active state. Execution of certain instructions and the occurrence of certain events may cause a logical processor to transition to an inactive state in which it ceases to execute instructions. The following activity states are defined:

0: Active. The logical processor is executing instructions normally.
1: HLT. The logical processor is inactive because it executed the HLT instruction.
2: Shutdown. The logical processor is inactive because it incurred a triple fault or some other serious error.
3: Wait-for-SIPI. The logical processor is inactive because it is waiting for a startup-IPI (SIPI).

Interruptibility state (32 bits). The IA-32 architecture includes features that permit certain events to be blocked for a period of time. This field contains information about such blocking. Details and the format of this field are given in the table below.
Configuring VMCS
Gathering Machine state for VMCS
In order to configure our Guest-State & Host-State, we need details about the current system state, e.g. the Global Descriptor Table address, the Interrupt Descriptor Table address, and the values of all the segment registers.
These functions describe how all of these data can be gathered.
GDT Limit:

Get_GDT_Limit PROC
	LOCAL	gdtr[10]:BYTE
	sgdt	gdtr
	mov		ax, WORD PTR gdtr[0]
	ret
Get_GDT_Limit ENDP
IDT Limit:
Get_IDT_Limit PROC
	LOCAL	idtr[10]:BYTE
	sidt	idtr
	mov		ax, WORD PTR idtr[0]
	ret
Get_IDT_Limit ENDP
RFLAGS:
Get_RFLAGS PROC
	pushfq
	pop		rax
	ret
Get_RFLAGS ENDP
Setting up VMCS
Let’s get down to business (We have a long way to go).
This section starts with defining a function called Setup_VMCS.
BOOLEAN Setup_VMCS(IN PVirtualMachineState vmState, IN PEPTP EPTP);
This function is responsible for configuring all of the options related to VMCS and of course the Guest & Host state.
These tasks need a special instruction called “VMWRITE”.
VMWRITE, writes the contents of a primary source operand (register or memory) to a specified field in a VMCS. In VMX root operation, the instruction writes to the current VMCS. If executed in VMX non-root operation, the instruction writes to the VMCS referenced by the VMCS link pointer field in the current VMCS.
The VMCS field is specified by the VMCS-field encoding contained in the register secondary source operand.
The following enum contains most of the VMCS fields needed for the VMWRITE & VMREAD instructions (newer processors add newer fields).
Keep in mind that the fields that start with HOST_ describe the state the hypervisor loads whenever a VM-Exit occurs, and the fields that start with GUEST_ describe the state the hypervisor sets up for the guest when VMLAUNCH is executed.
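The full enum is too long to repeat here; the following truncated sketch shows its shape with the standard encodings used in this part (the values come from the Intel SDM; the remaining fields are omitted):

enum VMCS_FIELDS {
	VIRTUAL_PROCESSOR_ID        = 0x00000000,
	VMCS_LINK_POINTER           = 0x00002800,
	GUEST_IA32_DEBUGCTL         = 0x00002802,
	PIN_BASED_VM_EXEC_CONTROL   = 0x00004000,
	CPU_BASED_VM_EXEC_CONTROL   = 0x00004002,
	VM_EXIT_CONTROLS            = 0x0000400c,
	VM_ENTRY_CONTROLS           = 0x00004012,
	SECONDARY_VM_EXEC_CONTROL   = 0x0000401e,
	VM_INSTRUCTION_ERROR        = 0x00004400,
	VM_EXIT_REASON              = 0x00004402,
	VM_EXIT_INSTRUCTION_LEN     = 0x0000440c,
	GUEST_INTERRUPTIBILITY_INFO = 0x00004824,
	GUEST_ACTIVITY_STATE        = 0x00004826,
	EXIT_QUALIFICATION          = 0x00006400,
	GUEST_RSP                   = 0x0000681c,
	GUEST_RIP                   = 0x0000681e,
	HOST_RSP                    = 0x00006c14,
	HOST_RIP                    = 0x00006c16,
	// ... the GUEST_xxx / HOST_xxx selector, base, limit, CR and MSR fields follow the same pattern
};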
The purpose of & 0xF8 is that Intel requires the three least-significant bits of the host segment selectors to be cleared; otherwise VMLAUNCH fails with an Invalid Host State error.
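For example, the host selectors might be written like this (GetEs(), GetCs(), and friends are assumed to be small assembly helpers that simply return the corresponding segment register):

	__vmx_vmwrite(HOST_ES_SELECTOR, GetEs() & 0xF8);
	__vmx_vmwrite(HOST_CS_SELECTOR, GetCs() & 0xF8);
	__vmx_vmwrite(HOST_SS_SELECTOR, GetSs() & 0xF8);
	__vmx_vmwrite(HOST_DS_SELECTOR, GetDs() & 0xF8);
	__vmx_vmwrite(HOST_FS_SELECTOR, GetFs() & 0xF8);
	__vmx_vmwrite(HOST_GS_SELECTOR, GetGs() & 0xF8);
	__vmx_vmwrite(HOST_TR_SELECTOR, GetTr() & 0xF8);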
VMCS_LINK_POINTER should be 0xffffffffffffffff.
	// Setting the link pointer to the required value for 4KB VMCS.
	__vmx_vmwrite(VMCS_LINK_POINTER, ~0ULL);
The rest of this topic intends to run the VMX instructions in the current state of the machine, so most of the guest and host configurations should be the same. In future parts we’ll configure them with a separate guest layout.
Let’s configure GUEST_IA32_DEBUGCTL.
The IA32_DEBUGCTL MSR provides bit-field controls to enable debug trace interrupts, debug trace stores, and trace messages, to enable single stepping on branches and last branch record (LBR) recording, and to control freezing of the LBR stack.
In short: LBR is a mechanism that records the most recently taken branches in a set of processor registers.
We don’t use them, but let’s configure them with the current machine’s MSR_IA32_DEBUGCTL; you can see that __readmsr is the intrinsic function for RDMSR.
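A hedged sketch of that write (assuming MSR_IA32_DEBUGCTL is defined as 0x1D9 and that the companion _HIGH field is used for the upper half, as is common in such projects):

	__vmx_vmwrite(GUEST_IA32_DEBUGCTL,      __readmsr(MSR_IA32_DEBUGCTL) & 0xFFFFFFFF);
	__vmx_vmwrite(GUEST_IA32_DEBUGCTL_HIGH, __readmsr(MSR_IA32_DEBUGCTL) >> 32);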
BOOLEAN GetSegmentDescriptor(IN PSEGMENT_SELECTOR SegmentSelector, IN USHORT Selector, IN PUCHAR GdtBase)
{
	PSEGMENT_DESCRIPTOR SegDesc;

	if (!SegmentSelector)
		return FALSE;

	if (Selector & 0x4) {
		return FALSE;
	}

	SegDesc = (PSEGMENT_DESCRIPTOR)((PUCHAR)GdtBase + (Selector & ~0x7));

	SegmentSelector->SEL = Selector;
	SegmentSelector->BASE = SegDesc->BASE0 | SegDesc->BASE1 << 16 | SegDesc->BASE2 << 24;
	SegmentSelector->LIMIT = SegDesc->LIMIT0 | (SegDesc->LIMIT1ATTR1 & 0xf) << 16;
	SegmentSelector->ATTRIBUTES.UCHARs = SegDesc->ATTR0 | (SegDesc->LIMIT1ATTR1 & 0xf0) << 4;

	if (!(SegDesc->ATTR0 & 0x10)) { // LA_ACCESSED
		ULONG64 tmp;
		// this is a TSS or callgate etc, save the base high part
		tmp = (*(PULONG64)((PUCHAR)SegDesc + 8));
		SegmentSelector->BASE = (SegmentSelector->BASE & 0xffffffff) | (tmp << 32);
	}

	if (SegmentSelector->ATTRIBUTES.Fields.G) {
		// 4096-bit granularity is enabled for this segment, scale the limit
		SegmentSelector->LIMIT = (SegmentSelector->LIMIT << 12) + 0xfff;
	}

	return TRUE;
}
Also, there is another MSR called IA32_KERNEL_GS_BASE that is used to set the kernel GS base. Whenever you run an instruction like SYSCALL and enter ring 0, you need to change the current GS register, and that is done with SWAPGS. This instruction copies the content of IA32_KERNEL_GS_BASE into IA32_GS_BASE, and it is used in the kernel; when you want to re-enter user mode, you swap back to the user-mode GS base. MSR_FS_BASE, on the other hand, doesn’t have a kernel counterpart, because FS is used in 32-bit mode while you have a 64-bit (long-mode) kernel.
The GUEST_INTERRUPTIBILITY_INFO & GUEST_ACTIVITY_STATE.
	__vmx_vmwrite(GUEST_INTERRUPTIBILITY_INFO, 0);
	__vmx_vmwrite(GUEST_ACTIVITY_STATE, 0);   // Active state
Now we reach the most important part of our VMCS: the configuration of CPU_BASED_VM_EXEC_CONTROL and SECONDARY_VM_EXEC_CONTROL.
These fields enable and disable some important features of the guest; e.g., you can configure the VMCS to cause a VM-Exit whenever an execution of the HLT instruction is detected in the guest. Please check the VM-Execution Controls section above for a detailed description.
As you can see we set CPU_BASED_HLT_EXITING that will cause the VM-Exit on HLT and activate secondary controls using CPU_BASED_ACTIVATE_SECONDARY_CONTROLS.
In the secondary controls, we use CPU_BASED_CTL2_RDTSCP, and for now we comment out CPU_BASED_CTL2_ENABLE_EPT because we don’t need to deal with EPT in this part. In future parts, I describe using EPT, the Extended Page Table that we configured in the 4th part.
The descriptions of PIN_BASED_VM_EXEC_CONTROL, VM_EXIT_CONTROLS, and VM_ENTRY_CONTROLS are available above, but for now, let’s zero them. All of these controls are adjusted against their capability MSRs with the AdjustControls helper below.
ULONG AdjustControls(IN ULONG Ctl, IN ULONG Msr)
{
	MSR MsrValue = { 0 };

	MsrValue.Content = __readmsr(Msr);
	Ctl &= MsrValue.High; /* bit == 0 in high word ==> must be zero */
	Ctl |= MsrValue.Low;  /* bit == 1 in low word  ==> must be one  */
	return Ctl;
}
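Putting it together, a sketch of how the controls might be written using AdjustControls (the MSR_IA32_VMX_* names are the usual capability-MSR constants, 0x481–0x48B, and are assumptions of this sketch):

	__vmx_vmwrite(CPU_BASED_VM_EXEC_CONTROL,
	              AdjustControls(CPU_BASED_HLT_EXITING | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS,
	                             MSR_IA32_VMX_PROCBASED_CTLS));

	__vmx_vmwrite(SECONDARY_VM_EXEC_CONTROL,
	              AdjustControls(CPU_BASED_CTL2_RDTSCP /* | CPU_BASED_CTL2_ENABLE_EPT */,
	                             MSR_IA32_VMX_PROCBASED_CTLS2));

	// Zero the remaining controls for now (still adjusted against their capability MSRs)
	__vmx_vmwrite(PIN_BASED_VM_EXEC_CONTROL, AdjustControls(0, MSR_IA32_VMX_PINBASED_CTLS));
	__vmx_vmwrite(VM_EXIT_CONTROLS,          AdjustControls(0, MSR_IA32_VMX_EXIT_CTLS));
	__vmx_vmwrite(VM_ENTRY_CONTROLS,         AdjustControls(0, MSR_IA32_VMX_ENTRY_CTLS));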
The next step is setting the control registers for guest and host; we set them to the same values using intrinsic functions.
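A minimal sketch using the compiler intrinsics (__readcr0, __readcr3, __readcr4; the field names are assumed from the enum above):

	__vmx_vmwrite(GUEST_CR0, __readcr0());
	__vmx_vmwrite(GUEST_CR3, __readcr3());
	__vmx_vmwrite(GUEST_CR4, __readcr4());

	__vmx_vmwrite(HOST_CR0, __readcr0());
	__vmx_vmwrite(HOST_CR3, __readcr3());
	__vmx_vmwrite(HOST_CR4, __readcr4());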
If you want to use SYSENTER in your guest, then you should configure the following MSRs. Setting these values isn’t important on x64 Windows, because Windows doesn’t support SYSENTER in its x64 versions; it uses SYSCALL instead, and 32-bit processes first switch the execution mode to long mode (using the Heaven’s Gate technique). On 32-bit processors, however, these fields are mandatory, as sketched below.
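For completeness, a sketch of those writes (MSR_IA32_SYSENTER_CS/ESP/EIP are the architectural MSRs 0x174–0x176; the VMCS field names are assumed from the enum above):

	__vmx_vmwrite(GUEST_SYSENTER_CS,  __readmsr(MSR_IA32_SYSENTER_CS));
	__vmx_vmwrite(GUEST_SYSENTER_ESP, __readmsr(MSR_IA32_SYSENTER_ESP));
	__vmx_vmwrite(GUEST_SYSENTER_EIP, __readmsr(MSR_IA32_SYSENTER_EIP));

	__vmx_vmwrite(HOST_IA32_SYSENTER_CS,  __readmsr(MSR_IA32_SYSENTER_CS));
	__vmx_vmwrite(HOST_IA32_SYSENTER_ESP, __readmsr(MSR_IA32_SYSENTER_ESP));
	__vmx_vmwrite(HOST_IA32_SYSENTER_EIP, __readmsr(MSR_IA32_SYSENTER_EIP));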
The next important part is to set the guest’s RIP and RSP (when VMLAUNCH executes, the guest starts at the RIP you configure here) and the host’s RIP and RSP (used when a VM-Exit occurs). It’s pretty clear that the host RIP should point to a function that is responsible for managing VMX events based on the exit reason and deciding whether to execute VMRESUME or turn off the hypervisor using VMXOFF.
	// left here just for test
	__vmx_vmwrite(GUEST_RSP, 0);                                      // setup guest sp
	__vmx_vmwrite(GUEST_RIP, (ULONG64)VirtualGuestMemoryAddress);     // setup guest ip

	__vmx_vmwrite(HOST_RSP, ((ULONG64)vmState->VMM_Stack + VMM_STACK_SIZE - 1));
	__vmx_vmwrite(HOST_RIP, (ULONG64)VMExitHandler);
HOST_RSP points to the VMM_Stack that we allocated above, and HOST_RIP points to VMExitHandler (an assembly-written function described below). GUEST_RIP points to VirtualGuestMemoryAddress (the global variable we configured during EPT initialization), and GUEST_RSP is zero because we don’t execute any instruction that uses the stack; for a real-world example it should point to a separate writeable address.
Setting these fields to host addresses will not cause a problem as long as we have the same CR3 in our guest state, so all the addresses are mapped exactly the same as in the host.
Done ! Our VMCS is almost ready.
Checking VMCS Layout
Unfortunately, checking the VMCS layout is not as straightforward as the other parts; you have to go through all the checklists described in Chapter 26 (VM ENTRIES) of Intel’s 64 and IA-32 Architectures Software Developer’s Manual, including the following sections:
26.2 CHECKS ON VMX CONTROLS AND HOST-STATE AREA
26.3 CHECKING AND LOADING GUEST STATE
26.4 LOADING MSRS
26.5 EVENT INJECTION
26.6 SPECIAL FEATURES OF VM ENTRY
26.7 VM-ENTRY FAILURES DURING OR AFTER LOADING GUEST STATE
26.8 MACHINE-CHECK EVENTS DURING VM ENTRY
The hardest part of this process is when you have no idea which part of your VMCS layout is incorrect, or when you miss something that eventually causes the failure.
This is because Intel just gives an error number without any further details about what’s exactly wrong in your VMCS Layout.
The errors are shown below.
To solve this problem, I created a user-mode application called VmcsAuditor. As its name describes, if you have any error and don’t have any idea about solving the problem then it can be a choice.
Keep in mind that VmcsAuditor is a tool based on the Bochs emulator’s support for VMX, so all the checks come from Bochs. It’s not a 100% reliable tool that solves every problem, as we don’t know what exactly happens inside the processor, but it can be really useful and a time saver.
The source code and executable files are available on GitHub:
VM-Exit Handler

The VM-Exit handler should be a pure assembly function, because calling a compiled function needs some preparation and register modification, and the most important thing in a VM-Exit handler is saving the register state so that you can safely continue the guest later.
I created a sample function that saves the registers and restores the state; inside this function we call another C function.
PUBLIC VMExitHandler

EXTERN MainVMExitHandler:PROC
EXTERN VM_Resumer:PROC

.code _text

VMExitHandler PROC

	push r15
	push r14
	push r13
	push r12
	push r11
	push r10
	push r9
	push r8
	push rdi
	push rsi
	push rbp
	push rbp	; rsp
	push rbx
	push rdx
	push rcx
	push rax

	mov rcx, rsp		; GuestRegs
	sub rsp, 28h

	; rdtsc
	call MainVMExitHandler
	add rsp, 28h

	pop rax
	pop rcx
	pop rdx
	pop rbx
	pop rbp		; rsp
	pop rbp
	pop rsi
	pop rdi
	pop r8
	pop r9
	pop r10
	pop r11
	pop r12
	pop r13
	pop r14
	pop r15

	sub rsp, 0100h		; to avoid error in future functions
	JMP VM_Resumer

VMExitHandler ENDP

end
The main VM-Exit handler is a switch-case function that has different decisions over the VMCS VM_EXIT_REASON and EXIT_QUALIFICATION.
In this part, we’re just performing an action over EXIT_REASON_HLT and just print the result and restore the previous state.
From the following code, you can clearly see which event caused the VM-Exit. Just keep in mind that some reasons only lead to a VM-Exit if the VMCS’s execution control fields (described above) allow it. For instance, the execution of HLT in guest software will cause a VM-Exit if the 7th bit of the Primary Processor-Based VM-Execution Controls allows it.
VOID MainVMExitHandler(PGUEST_REGS GuestRegs)
{
	ULONG ExitReason = 0;
	__vmx_vmread(VM_EXIT_REASON, &ExitReason);

	ULONG ExitQualification = 0;
	__vmx_vmread(EXIT_QUALIFICATION, &ExitQualification);

	DbgPrint("\nVM_EXIT_REASON 0x%x\n", ExitReason & 0xffff);
	DbgPrint("\nEXIT_QUALIFICATION 0x%x\n", ExitQualification);

	switch (ExitReason)
	{
		//
		// 25.1.2 Instructions That Cause VM Exits Unconditionally
		// The following instructions cause VM exits when they are executed in VMX non-root operation: CPUID, GETSEC,
		// INVD, and XSETBV. This is also true of instructions introduced with VMX, which include: INVEPT, INVVPID,
		// VMCALL, VMCLEAR, VMLAUNCH, VMPTRLD, VMPTRST, VMRESUME, VMXOFF, and VMXON.
		//
	case EXIT_REASON_VMCLEAR:
	case EXIT_REASON_VMPTRLD:
	case EXIT_REASON_VMPTRST:
	case EXIT_REASON_VMREAD:
	case EXIT_REASON_VMRESUME:
	case EXIT_REASON_VMWRITE:
	case EXIT_REASON_VMXOFF:
	case EXIT_REASON_VMXON:
	case EXIT_REASON_VMLAUNCH:
	{
		break;
	}
	case EXIT_REASON_HLT:
	{
		DbgPrint("[*] Execution of HLT detected... \n");

		// DbgBreakPoint();

		// that's enough for now ;)
		Restore_To_VMXOFF_State();
		break;
	}
	case EXIT_REASON_EXCEPTION_NMI:
	{
		break;
	}
	case EXIT_REASON_CPUID:
	{
		break;
	}
	case EXIT_REASON_INVD:
	{
		break;
	}
	case EXIT_REASON_VMCALL:
	{
		break;
	}
	case EXIT_REASON_CR_ACCESS:
	{
		break;
	}
	case EXIT_REASON_MSR_READ:
	{
		break;
	}
	case EXIT_REASON_MSR_WRITE:
	{
		break;
	}
	case EXIT_REASON_EPT_VIOLATION:
	{
		break;
	}
	default:
	{
		// DbgBreakPoint();
		break;
	}
	}
}
Resume to next instruction
If a VM-Exit occurs (e.g. the guest executed a CPUID instruction), the guest RIP remains unchanged, and it’s up to you whether to advance it. If you don’t have a function for managing this situation and just execute VMRESUME, you end up in an infinite loop of CPUID and VMRESUME, because you never changed the RIP.
To solve this problem, you have to read a VMCS field called VM_EXIT_INSTRUCTION_LEN, which stores the length of the instruction that caused the VM-Exit. So you first read the current guest RIP, then read VM_EXIT_INSTRUCTION_LEN, and finally add the length to the guest RIP. Now your guest RIP points to the next instruction and you’re good to go.
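A sketch of such a helper (the function name is ours):

VOID ResumeToNextInstruction(VOID)
{
	ULONG64 CurrentRIP = 0;
	ULONG64 ExitInstructionLength = 0;

	__vmx_vmread(GUEST_RIP, &CurrentRIP);
	__vmx_vmread(VM_EXIT_INSTRUCTION_LEN, &ExitInstructionLength);

	// Skip over the instruction that caused the VM-Exit
	__vmx_vmwrite(GUEST_RIP, CurrentRIP + ExitInstructionLength);
}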
VMRESUME is like VMLAUNCH but it’s used in order to resume the Guest.
VMLAUNCH fails if the launch state of current VMCS is not “clear”. If the instruction is successful, it sets the launch state to “launched.”
VMRESUME fails if the launch state of the current VMCS is not “launched.”
So it’s clear that if you executed VMLAUNCH before, you can’t use it anymore to resume the guest code; in that situation, VMRESUME is used.
The following code is the implementation of VMRESUME.
VOID VM_Resumer(VOID)
{
	__vmx_vmresume();

	// if VMRESUME succeeds we will never be here!

	ULONG64 ErrorCode = 0;
	__vmx_vmread(VM_INSTRUCTION_ERROR, &ErrorCode);
	__vmx_off();
	DbgPrint("[*] VMRESUME Error : 0x%llx\n", ErrorCode);

	// It's such a bad error because we don't know where to go!
	// prefer to break
	DbgBreakPoint();
}
Let’s Test it !
Well, we are done with the configuration, and now it’s time to run our driver using the OSR Driver Loader. As always, first disable driver signature enforcement, then load the driver.
As you can see from the above picture (in launching VM area), first we set the current logical processor to 0, next we clear our VMCS status using VMCLEAR instruction then we set up our VMCS layout and finally execute a VMLAUNCH instruction.
Now our guest code is executed, and as we configured our VMCS to exit on the execution of HLT (CPU_BASED_HLT_EXITING), the HLT executes successfully and our VM-Exit handler function is called; it then calls the main VM-Exit handler, and since the VMCS exit reason is 0xc (EXIT_REASON_HLT), our VM-Exit handler detects an execution of HLT in the guest and captures it.
After that, our machine-state-saving mechanism executes, we successfully turn off the hypervisor using VMXOFF, and we return to the original caller with a successful status (RAX = 1).
That’s it ! Wasn’t it easy ?!
Conclusion
In this part, we got familiar with configuring the Virtual Machine Control Structure and finally ran our guest code. The future parts will be enhancements to this configuration, like entering protected mode, interrupt injection, page-modification logging, virtualizing the current machine, and so on, so make sure to visit the blog frequently for future parts, and if you have any question or problem you can use the comments section below.
Welcome to the fourth part of “Hypervisor From Scratch”. This part is primarily about translating guest addresses through the Extended Page Table (EPT) and its implementation. We also see how shadow page tables work, and other cool stuff.
First of all, make sure to read the earlier parts before reading this topic, as these parts are really dependent on each other. You should also have a basic understanding of the paging mechanism and how page tables work; a good article on page tables is linked here.
Most of this topic is derived from Chapter 28 (VMX SUPPORT FOR ADDRESS TRANSLATION) of the Intel 64 and IA-32 Architectures Software Developer’s Manual, Combined Volume 3.
The full source code of this tutorial is available on GitHub :
Before starting, I should give my thanks to Petr Beneš, as this part would never be completed without his help.
Introduction
Second Level Address Translation (SLAT), or nested paging, is an extra layer in the paging mechanism that is used to map hardware-virtualized guest addresses into physical memory.
AMD implemented SLAT through Rapid Virtualization Indexing (RVI) technology, known as Nested Page Tables (NPT), since the introduction of its third-generation Opteron processors (microarchitecture code name Barcelona). Intel also implemented SLAT in Intel VT-x technologies since the introduction of the microarchitecture code name Nehalem; it’s known as Extended Page Table (EPT) and is used in Core i9, Core i7, Core i5, and Core i3 processors.
ARM processors also have an equivalent implementation, known as Stage-2 page tables.
There are two methods: the first is Shadow Page Tables and the second is Extended Page Tables.
Software-assisted paging (Shadow Page Tables)
Shadow page tables are used by the hypervisor to keep track of the state of physical memory: the guest thinks it has direct access to physical memory, but in reality the hardware prevents it from accessing host memory directly; otherwise the guest would control the host, which is not what is intended.
In this case, VMM maintains shadow page tables that map guest-virtual pages directly to machine pages and any guest modifications to V->P tables synced to VMM V->M shadow page tables.
By the way, using shadow page tables is not recommended today, as they always lead to VMM traps (which result in a vast number of VM-Exits) and lose performance due to the TLB flush on every switch; another caveat is the memory overhead of the shadow copies of the guest page tables.
Hardware-assisted paging (Extended Page Table)
To reduce the complexity of shadow page tables, avoid the excessive VM-Exits, and reduce the number of TLB flushes, EPT was implemented as a hardware-assisted paging strategy to increase performance.
According to a VMware evaluation paper: “EPT provides performance gains of up to 48% for MMU-intensive benchmarks and up to 600% for MMU-intensive microbenchmarks”.
EPT adds one more page-table hierarchy on top of the guest’s own paging, so that guest-physical addresses are mapped to addresses that are valid in main memory.
In EPT,
One page table is maintained by guest OS, which is used to generate the guest-physical address.
The other page table is maintained by VMM, which is used to map guest physical address to host physical address.
So for each memory access operation, the EPT MMU directly gets the guest-physical address from the guest page table and then gets the host-physical address from the VMM mapping table automatically.
Extended Page Table vs Shadow Page Table
EPT:
Walk any requested address
Appropriate to programs that have a large amount of page table miss when executing
Less chance to exit VM (less context switch)
Two-layer EPT
Means each access needs to walk two tables
Easier to develop
Many particular registers
Hardware helps guest OS to notify the VMM
SPT:
Only walk when SPT entry miss
Appropriate to programs that would access only some addresses frequently
Every access might be intercepted by VMM (many traps)
One reference
Fast and convenient when page hit
Hard to develop
Two-layer structure
Complicated reverse map
Permission emulation
Detecting Support for EPT, NPT
If you want to see whether your system supports EPT on Intel processor or NPT on AMD processor without using assembly (CPUID), you can download coreinfo.exe from Sysinternals, then run it. The last line will show you if your processor supports EPT or NPT.
EPT Translation
EPT defines a layer of address translation that augments the translation of linear addresses.
The extended page-table mechanism (EPT) is a feature that can be used to support the virtualization of physical memory. When EPT is in use, certain addresses that would normally be treated as physical addresses (and used to access memory) are instead treated as guest-physical addresses. Guest-physical addresses are translated by traversing a set of EPT paging structures to produce physical addresses that are used to access memory.
EPT is used when the “enable EPT” VM-execution control is 1. It translates the guest-physical addresses used in VMX non-root operation and those used by VM entry for event injection.
EPT translation is exactly like regular paging translation but with some minor differences. In paging, the processor translates Virtual Address to Physical Address while in EPT translation you want to translate a Guest Virtual Address to Host Physical Address.
If you’re familiar with paging, the third control register (CR3) is the base address of the PML4 table (in an x64 processor; more generally it points to the root paging directory). In EPT, the guest is not aware of EPT translation, so it has a CR3 too, but that CR3 is used to convert a guest virtual address to a guest physical address. Whenever you get your target guest physical address, the EPT mechanism treats that guest physical address like a virtual address, and the EPTP plays the role of CR3.
Just think about the above sentence one more time!
So your target guest-physical address is divided into four 9-bit parts (plus the 12-bit page offset): the first 9 bits index the EPT PML4 (note that the PML4 base address is in the EPTP), the second 9 bits index the EPT PDPT entry (the base address of the PDPT comes from the EPT PML4E), the third 9 bits index the EPT PD entry (the base address of the PD comes from the EPT PDPTE), and the last 9 bits index an entry in the EPT PT (the base address of the PT comes from the EPT PDE). The EPT PTE then points to the host physical address of the corresponding page.
You might ask: since a simple virtual-to-physical address translation already involves accessing four physical addresses, what happens here?!
The answer is that the processor internally translates each table’s physical address one by one; that’s why paging and memory access in guest software are slower than regular address translation. The following picture illustrates the operations for translating a guest virtual address to a host physical address.
If you want to think about x86 EPT virtualization, assume, for example, that CR4.PAE = CR4.PSE = 0. The translation of a 32-bit linear address then operates as follows:
Bits 31:22 of the linear address select an entry in the guest page directory located at the guest-physical address in CR3. The guest-physical address of the guest page-directory entry (PDE) is translated through EPT to determine the guest PDE’s physical address.
Bits 21:12 of the linear address select an entry in the guest page table located at the guest-physical address in the guest PDE. The guest-physical address of the guest page-table entry (PTE) is translated through EPT to determine the guest PTE’s physical address.
Bits 11:0 of the linear address are the offset in the page frame located at the guest-physical address in the guest PTE. The guest-physical address determined by this offset is translated through EPT to determine the physical address to which the original linear address translates.
Note that PAE stands for Physical Address Extension which is a memory management feature for the x86 architecture that extends the address space and PSE stands for Page Size Extension that refers to a feature of x86 processors that allows for pages larger than the traditional 4 KiB size.
In addition to translating a guest-physical address to a host physical address, EPT specifies the privileges that software is allowed when accessing the address. Attempts at disallowed accesses are called EPT violations and cause VM-exits.
Keep in mind that an address is never translated through EPT when there is no access; your guest-physical address is never used until there is an access (read or write) to that location in memory.
Implementing Extended Page Table (EPT)
Now that we know some basics, let’s implement what we’ve learned. Based on the Intel manual, we should write (VMWRITE) the EPTP, or Extended-Page-Table Pointer, to the VMCS. The EPTP structure is described below.
The above tables can be described using the following structure :
// See Table 24-8. Format of Extended-Page-Table Pointer
typedef union _EPTP {
	ULONG64 All;
	struct {
		UINT64 MemoryType : 3;            // bit 2:0 (0 = Uncacheable (UC), 6 = Write-back (WB))
		UINT64 PageWalkLength : 3;        // bit 5:3 (This value is 1 less than the EPT page-walk length)
		UINT64 DirtyAndAceessEnabled : 1; // bit 6 (Setting this control to 1 enables accessed and dirty flags for EPT)
		UINT64 Reserved1 : 5;             // bit 11:7
		UINT64 PML4Address : 36;
		UINT64 Reserved2 : 16;
	} Fields;
} EPTP, *PEPTP;
Each entry in all EPT tables is 64 bits long. The EPT PML4E, EPT PDPTE, and EPT PDE have the same format, but the EPT PTE has some minor differences.
An EPT entry is something like this :
OK, now we should implement the tables, and the first table is the PML4. The following table shows the format of an EPT PML4 Entry (PML4E).
PML4E can be a structure like this :
// See Table 28-1.
typedef union _EPT_PML4E {
	ULONG64 All;
	struct {
		UINT64 Read : 1;               // bit 0
		UINT64 Write : 1;              // bit 1
		UINT64 Execute : 1;            // bit 2
		UINT64 Reserved1 : 5;          // bit 7:3 (Must be Zero)
		UINT64 Accessed : 1;           // bit 8
		UINT64 Ignored1 : 1;           // bit 9
		UINT64 ExecuteForUserMode : 1; // bit 10
		UINT64 Ignored2 : 1;           // bit 11
		UINT64 PhysicalAddress : 36;   // bit (N-1):12 or Page-Frame-Number
		UINT64 Reserved2 : 4;          // bit 51:N
		UINT64 Ignored3 : 12;          // bit 63:52
	} Fields;
} EPT_PML4E, *PEPT_PML4E;
As long as we want 4-level paging, the second table is the EPT Page-Directory-Pointer Table (PDPT). The following picture illustrates the format of a PDPTE:
PDPTE’s structure is like this :
// See Table 28-3
typedef union _EPT_PDPTE {
	ULONG64 All;
	struct {
		UINT64 Read : 1;               // bit 0
		UINT64 Write : 1;              // bit 1
		UINT64 Execute : 1;            // bit 2
		UINT64 Reserved1 : 5;          // bit 7:3 (Must be Zero)
		UINT64 Accessed : 1;           // bit 8
		UINT64 Ignored1 : 1;           // bit 9
		UINT64 ExecuteForUserMode : 1; // bit 10
		UINT64 Ignored2 : 1;           // bit 11
		UINT64 PhysicalAddress : 36;   // bit (N-1):12 or Page-Frame-Number
		UINT64 Reserved2 : 4;          // bit 51:N
		UINT64 Ignored3 : 12;          // bit 63:52
	} Fields;
} EPT_PDPTE, *PEPT_PDPTE;
For the third table of paging we should implement an EPT Page-Directory Entry (PDE) as described below:
PDE’s structure:
// See Table 28-5
typedef union _EPT_PDE {
	ULONG64 All;
	struct {
		UINT64 Read : 1;               // bit 0
		UINT64 Write : 1;              // bit 1
		UINT64 Execute : 1;            // bit 2
		UINT64 Reserved1 : 5;          // bit 7:3 (Must be Zero)
		UINT64 Accessed : 1;           // bit 8
		UINT64 Ignored1 : 1;           // bit 9
		UINT64 ExecuteForUserMode : 1; // bit 10
		UINT64 Ignored2 : 1;           // bit 11
		UINT64 PhysicalAddress : 36;   // bit (N-1):12 or Page-Frame-Number
		UINT64 Reserved2 : 4;          // bit 51:N
		UINT64 Ignored3 : 12;          // bit 63:52
	} Fields;
} EPT_PDE, *PEPT_PDE;
The last table is the EPT Page Table; its entry (PTE) is described below.
PTE will be :
Note that you have EPTMemoryType, IgnorePAT, DirtyFlag, and SuppressVE in addition to the fields of the entries above.
// See Table 28-6
typedef union _EPT_PTE {
	ULONG64 All;
	struct {
		UINT64 Read : 1;               // bit 0
		UINT64 Write : 1;              // bit 1
		UINT64 Execute : 1;            // bit 2
		UINT64 EPTMemoryType : 3;      // bit 5:3 (EPT Memory type)
		UINT64 IgnorePAT : 1;          // bit 6
		UINT64 Ignored1 : 1;           // bit 7
		UINT64 AccessedFlag : 1;       // bit 8
		UINT64 DirtyFlag : 1;          // bit 9
		UINT64 ExecuteForUserMode : 1; // bit 10
		UINT64 Ignored2 : 1;           // bit 11
		UINT64 PhysicalAddress : 36;   // bit (N-1):12 or Page-Frame-Number
		UINT64 Reserved : 4;           // bit 51:N
		UINT64 Ignored3 : 11;          // bit 62:52
		UINT64 SuppressVE : 1;         // bit 63
	} Fields;
} EPT_PTE, *PEPT_PTE;
There are other ways of implementing the page walk (2- or 3-level paging): if you set the 7th bit of the PDPTE (maps 1 GB) or the 7th bit of the PDE (maps 2 MB), then instead of implementing 4-level paging (as we do for the rest of this topic) the walk stops at that level, but keep in mind that the corresponding table formats are different. These tables are described in (Table 28-4. Format of an EPT Page-Directory Entry (PDE) that Maps a 2-MByte Page) and (Table 28-2. Format of an EPT Page-Directory-Pointer-Table Entry (PDPTE) that Maps a 1-GByte Page). Alex Ionescu’s SimpleVisor is an example of an implementation done this way.
An important note is that almost all the above structures have a 36-bit physical address, which means our hypervisor supports only 4-level paging. This is because every page table (and every EPT page table) consists of 512 entries, so you need 9 bits to select an entry, and as long as we have 4 levels of tables, we can’t use more than 36 (4 * 9) bits. A method with a wider address range is not yet implemented in major OSs like Windows or Linux. I’ll describe EPT PML5E briefly later in this topic, but we don’t implement it in our hypervisor as it’s not popular yet!
By the way, N is the physical-address width supported by the processor. CPUID with 80000008H in EAX gives you the supported width in EAX bits 7:0.
Let’s see the rest of the code. The following code is the Initialize_EPTP function, which is responsible for allocating and mapping the EPTP.
Note that the PAGED_CODE() macro ensures that the calling thread is running at an IRQL that is low enough to permit paging.
UINT64 Initialize_EPTP()
{
	PAGED_CODE();
	...
First of all, we allocate the EPTP and the EPT tables and zero them, as sketched below.
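A sketch of those allocations (error handling trimmed for brevity; POOLTAG and the EPT_* types come from the structures above):

	// Allocate the EPTP itself (one page is more than enough) and zero it
	PEPTP EPTPointer = (PEPTP)ExAllocatePoolWithTag(NonPagedPool, PAGE_SIZE, POOLTAG);
	if (!EPTPointer)
		return 0;   // allocation failed
	RtlZeroMemory(EPTPointer, PAGE_SIZE);

	// One zeroed page for each level of the EPT hierarchy
	PEPT_PML4E EPT_PML4 = (PEPT_PML4E)ExAllocatePoolWithTag(NonPagedPool, PAGE_SIZE, POOLTAG);
	RtlZeroMemory(EPT_PML4, PAGE_SIZE);

	PEPT_PDPTE EPT_PDPT = (PEPT_PDPTE)ExAllocatePoolWithTag(NonPagedPool, PAGE_SIZE, POOLTAG);
	RtlZeroMemory(EPT_PDPT, PAGE_SIZE);

	PEPT_PDE EPT_PD = (PEPT_PDE)ExAllocatePoolWithTag(NonPagedPool, PAGE_SIZE, POOLTAG);
	RtlZeroMemory(EPT_PD, PAGE_SIZE);

	PEPT_PTE EPT_PT = (PEPT_PTE)ExAllocatePoolWithTag(NonPagedPool, PAGE_SIZE, POOLTAG);
	RtlZeroMemory(EPT_PT, PAGE_SIZE);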
Now that we have all of our tables available, let’s allocate two pages (2 * 4096) contiguously, because we need one page for our RIP to start on and one page for our stack (RSP). After that, we need two EPT Page Table Entries (PTEs) with execute, read, and write permissions. The physical address should be divided by 4096 (PAGE_SIZE), because dividing a hex number by 4096 (0x1000) drops the low 12 bits, which only select a byte within the 4096-byte page.
By the way, we let the stack be executable too, because in a regular VM we should put RWX on all pages; it’s the responsibility of the internal (guest) page tables to set or clear the NX bit. We only change permissions in the EPT tables for special purposes (e.g. intercepting instruction fetches for a particular page). Such a change in the EPT tables leads to an EPT violation, and in this way we can intercept those events.
The actual need is two pages, but we also need to build page tables inside our guest software, so we allocate up to 10 pages.
I’ll explain about intercepting pages from EPT, later in these series.
	// Setup PT by allocating two pages continuously
	// We allocate two pages because we need 1 page for our RIP to start and 1 page for RSP (1 + 1), and other pages for paging

	const int PagesToAllocate = 10;
	UINT64 Guest_Memory = ExAllocatePoolWithTag(NonPagedPool, PagesToAllocate * PAGE_SIZE, POOLTAG);
	RtlZeroMemory(Guest_Memory, PagesToAllocate * PAGE_SIZE);

	for (size_t i = 0; i < PagesToAllocate; i++)
	{
		EPT_PT[i].Fields.AccessedFlag = 0;
		EPT_PT[i].Fields.DirtyFlag = 0;
		EPT_PT[i].Fields.EPTMemoryType = 6;
		EPT_PT[i].Fields.Execute = 1;
		EPT_PT[i].Fields.ExecuteForUserMode = 0;
		EPT_PT[i].Fields.IgnorePAT = 0;
		EPT_PT[i].Fields.PhysicalAddress = (VirtualAddress_to_PhysicalAddress(Guest_Memory + (i * PAGE_SIZE)) / PAGE_SIZE);
		EPT_PT[i].Fields.Read = 1;
		EPT_PT[i].Fields.SuppressVE = 0;
		EPT_PT[i].Fields.Write = 1;
	}
Note: EPTMemoryType can be either 0 (uncached memory) or 6 (write-back memory); since we want our memory to be cacheable, we put 6 in it.
The next table is the PD. The PDE should point to the PTE base address, so we just put the physical address of the first entry of the EPT PT as the physical address of the Page Directory Entry; the same chaining applies to the PDPT and PML4, as sketched below.
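A sketch of that chaining for the three upper levels, each one pointing at the page-frame number of the level below it:

	// EPT PD: the first PDE points to the EPT PT we just filled
	EPT_PD->Fields.PhysicalAddress = (VirtualAddress_to_PhysicalAddress(EPT_PT) / PAGE_SIZE);
	EPT_PD->Fields.Read = 1;
	EPT_PD->Fields.Write = 1;
	EPT_PD->Fields.Execute = 1;

	// EPT PDPT: the first PDPTE points to the EPT PD
	EPT_PDPT->Fields.PhysicalAddress = (VirtualAddress_to_PhysicalAddress(EPT_PD) / PAGE_SIZE);
	EPT_PDPT->Fields.Read = 1;
	EPT_PDPT->Fields.Write = 1;
	EPT_PDPT->Fields.Execute = 1;

	// EPT PML4: the first PML4E points to the EPT PDPT
	EPT_PML4->Fields.PhysicalAddress = (VirtualAddress_to_PhysicalAddress(EPT_PDPT) / PAGE_SIZE);
	EPT_PML4->Fields.Read = 1;
	EPT_PML4->Fields.Write = 1;
	EPT_PML4->Fields.Execute = 1;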
We’ve almost finished! Just set up the EPTP for our VMCS by putting 0x6 as the memory type (write-back); we walk four levels, so the page-walk length is 4-1=3, and the PML4 address is the physical address of the first entry in the PML4 table.
I’ll explain the DirtyAndAceessEnabled field later in this topic.
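A sketch of filling the EPTP itself (field names from the _EPTP union above):

	EPTPointer->Fields.DirtyAndAceessEnabled = 1;
	EPTPointer->Fields.MemoryType = 6;        // 6 = write-back
	EPTPointer->Fields.PageWalkLength = 3;    // 4-level walk, so 4 - 1 = 3
	EPTPointer->Fields.PML4Address = (VirtualAddress_to_PhysicalAddress(EPT_PML4) / PAGE_SIZE);
	EPTPointer->Fields.Reserved1 = 0;
	EPTPointer->Fields.Reserved2 = 0;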
	DbgPrint("[*] Extended Page Table Pointer allocated at %llx", EPTPointer);
	return EPTPointer;
All the above page tables should be aligned to 4-KByte boundaries, but as long as we allocate >= PAGE_SIZE (one PFN record), they are automatically 4-KB aligned.
Our implementation consists of 4 tables; therefore, the full layout is like this:
Accessed and Dirty Flags in EPTP
In the EPTP, you decide whether to enable accessed and dirty flags for EPT using the 6th bit of the extended-page-table pointer (EPTP). Setting this flag causes processor accesses to guest paging-structure entries to be treated as writes.
For any EPT paging-structure entry that is used during guest-physical-address translation, bit 8 is the accessed flag. For an EPT paging-structure entry that maps a page (as opposed to referencing another EPT paging structure), bit 9 is the dirty flag.
Whenever the processor uses an EPT paging-structure entry as part of the guest-physical-address translation, it sets the accessed flag in that entry (if it is not already set).
Whenever there is a write to a guest-physical address, the processor sets the dirty flag (if it is not already set) in the EPT paging-structure entry that identifies the final physical address for the guest-physical address (either an EPT PTE or an EPT paging-structure entry in which bit 7 is 1).
These flags are “sticky,” meaning that, once set, the processor does not clear them; only software can clear them.
5-Level EPT Translation
Intel introduced a new table in the translation hierarchy, called PML5, which extends EPT into a 5-level table; guest operating systems can then use up to 57 bits for virtual addresses, while the classic 4-level EPT is limited to translating 48-bit guest-physical addresses. None of the modern OSs use this feature yet.
PML5 also applies to both EPT and the regular paging mechanism.
Translation begins by identifying a 4-KByte naturally aligned EPT PML5 table. It is located at the physical address specified in bits 51:12 of EPTP. An EPT PML5 table comprises 512 64-bit entries (EPT PML5Es). An EPT PML5E is selected using the physical address defined as follows.
Bits 63:52 are all 0.
Bits 51:12 are from EPTP.
Bits 11:3 are bits 56:48 of the guest-physical address.
Bits 2:0 are all 0.
Because an EPT PML5E is identified using bits 56:48 of the guest-physical address, it controls access to a 256-TByte region of the linear address space.
The only difference is you should put PML5 physical address instead of the PML4 address in EPTP.
Well, Intel’s explanation of cache invalidation is really vague and I couldn’t understand it completely, so I asked Petr and he explained it to me this way:
VMX-specific TLB-management instructions:
INVEPT – Invalidate cached Extended Page Table (EPT) mappings in the processor to synchronize address translation in virtual machines with memory-resident EPT pages.
INVVPID – Invalidate cached mappings of address translation based on the Virtual Processor ID (VPID).
Imagine we access guest-physical address 0x1000 and it gets translated to host-physical address 0x5000. The next time we access 0x1000, the CPU won’t send the request to the memory bus but will use the cached translation instead; it’s faster. Now let’s say we change EPT_PDPT->PhysicalAddress to point to a different EPT PD, or change the attributes of one of the EPT tables; now we have to tell the processor that its cache is invalid, and that’s exactly what INVEPT does.
Now we have two terms here, Single-Context and All-Context.
Single-Context means, that you invalidate all EPT-derived translations based on a single EPTP (in short: for single VM).
All-Context means that you invalidate all EPT-derived translations. (for every-VM).
So if you don’t perform INVEPT after changing EPT structures, you risk the CPU reusing old translations.
Basically, any change to EPT structure needs INVEPT but switching EPT (or VMCS) doesn’t need INVEPT because that translation will be “tagged” with the changed EPTP in the cache.
The following assembly function is responsible for INVEPT.
INVEPT_Instruction PROC PUBLIC
	invept	rcx, oword ptr [rdx]
	jz @jz
	jc @jc
	xor		rax, rax
	ret

@jz:	mov		rax, VMX_ERROR_CODE_FAILED_WITH_STATUS
	ret

@jc:	mov		rax, VMX_ERROR_CODE_FAILED
	ret
INVEPT_Instruction ENDP
Note that VMX_ERROR_CODE_FAILED_WITH_STATUS and VMX_ERROR_CODE_FAILED are defined like this.
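A sketch of how they might be defined in the assembly file (the exact numeric values are arbitrary project constants; they simply mirror the VMsucceed / VMfailValid / VMfailInvalid convention):

VMX_ERROR_CODE_SUCCESS              EQU 0
VMX_ERROR_CODE_FAILED_WITH_STATUS   EQU 1
VMX_ERROR_CODE_FAILED               EQU 2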
Using the above function after any modification to the EPT structures tells the processor to invalidate its cached translations.
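As an illustration, here is a hedged C-side sketch of how the routine might be driven: the 128-bit descriptor layout and the type values (1 = single-context, 2 = all-context) come from the Intel SDM, while the wrapper names are our own.

typedef struct _INVEPT_DESC
{
	UINT64 EptPointer;   // the EPTP whose translations should be flushed (single-context)
	UINT64 Reserved;     // must be zero
} INVEPT_DESC, *PINVEPT_DESC;

// The assembly routine above: RCX = invalidation type, RDX = pointer to descriptor
UINT64 INVEPT_Instruction(UINT64 Type, PINVEPT_DESC Descriptor);

VOID InveptSingleContext(UINT64 EptPointer)
{
	INVEPT_DESC Descriptor = { EptPointer, 0 };
	INVEPT_Instruction(1, &Descriptor);   // 1 = single-context invalidation
}

VOID InveptAllContexts(VOID)
{
	INVEPT_DESC Descriptor = { 0, 0 };
	INVEPT_Instruction(2, &Descriptor);   // 2 = all-context invalidation
}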
Conclusion
In this part, we saw how to initialize the Extended Page Table, map guest-physical addresses to host-physical addresses, and build the EPTP based on the allocated addresses.
The next part will be about building the VMCS and implementing the other VMX instructions. Don’t forget to check the blog for future posts.
This is the third part of the tutorial “Hypervisor From Scratch”. You may have noticed that the previous parts have steadily been getting more complicated. This part should teach you how to get started with creating your own VMM: we demonstrate how to interact with the VMM from Windows user mode (the IOCTL dispatcher), then we solve the problems of affinity and running code on a specific core. Finally, we get familiar with initializing the VMXON and VMCS regions, load our hypervisor regions onto each core, and implement custom functions to work with the hypervisor instructions and many more things related to Virtual-Machine Control Data Structures (VMCS).
Some of the implementations are derived from HyperBone (a minimalistic VT-x hypervisor with hooks), HyperPlatform by Satoshi Tanda, and hvpp, which is great work by my friend Petr Beneš, the person who really helped me create this series.
The full source code of this tutorial is available on :
The most important IRP MJ function for us is DrvIOCTLDispatcher (IRP_MJ_DEVICE_CONTROL), because this function can be called from user mode with a special IOCTL number. That means you can define a special code in your driver and implement specific functionality for it; then, knowing that code, a user-mode application can ask your driver to perform the request, so you can imagine how useful this function is.
Now let’s implement our functions for dispatching IOCTL code and print it from our kernel-mode driver.
As far as I know, there are several methods by which you can dispatch an IOCTL, e.g. METHOD_BUFFERED, METHOD_NEITHER, METHOD_IN_DIRECT, and METHOD_OUT_DIRECT. These methods should be followed by the user-mode caller (the differences lie in where the buffers are transferred between user mode and kernel mode, or vice versa). I just copied the implementations, with some minor modifications, from Microsoft’s Windows Driver Samples; you can see the full code for user mode and kernel mode there.
Imagine we have the following IOCTL codes:
//
// Device type -- in the "User Defined" range.
//
#define SIOCTL_TYPE 40000

//
// The IOCTL function codes from 0x800 to 0xFFF are for customer use.
//
#define IOCTL_SIOCTL_METHOD_IN_DIRECT \
    CTL_CODE( SIOCTL_TYPE, 0x900, METHOD_IN_DIRECT, FILE_ANY_ACCESS )

#define IOCTL_SIOCTL_METHOD_OUT_DIRECT \
    CTL_CODE( SIOCTL_TYPE, 0x901, METHOD_OUT_DIRECT, FILE_ANY_ACCESS )

#define IOCTL_SIOCTL_METHOD_BUFFERED \
    CTL_CODE( SIOCTL_TYPE, 0x902, METHOD_BUFFERED, FILE_ANY_ACCESS )

#define IOCTL_SIOCTL_METHOD_NEITHER \
    CTL_CODE( SIOCTL_TYPE, 0x903, METHOD_NEITHER, FILE_ANY_ACCESS )
There is a convention for defining IOCTLs, as mentioned here:
The IOCTL is a 32-bit number. The first two low bits define the “transfer type” which can be METHOD_OUT_DIRECT, METHOD_IN_DIRECT, METHOD_BUFFERED or METHOD_NEITHER.
The next set of bits, from 2 to 13, defines the “function code”. The high bit is referred to as the “custom bit” and is used to distinguish user-defined IOCTLs from system-defined ones. This means that function codes 0x800 and greater are custom-defined, similar to how WM_USER works for Windows messages.
The next two bits define the access required to issue the IOCTL. This is how the I/O Manager can reject IOCTL requests if the handle has not been opened with the correct access. The access types are such as FILE_READ_DATA and FILE_WRITE_DATA for example.
The last bits represent the device type the IOCTLs are written for. The high bit again represents user-defined values.
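Putting the layout together, the CTL_CODE macro used above simply packs these four fields into one 32-bit value. Its standard definition from the Windows headers is shown here only to make the bit positions concrete:

// CTL_CODE packs the IOCTL fields described above into a single 32-bit value:
// device type (bits 31:16), access (bits 15:14), function (bits 13:2), method (bits 1:0)
#define CTL_CODE(DeviceType, Function, Method, Access) \
    (((DeviceType) << 16) | ((Access) << 14) | ((Function) << 2) | (Method))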
In the IOCTL dispatcher, the Parameters.DeviceIoControl.IoControlCode field of the IO_STACK_LOCATION contains the IOCTL code being invoked.
For METHOD_IN_DIRECT and METHOD_OUT_DIRECT, the difference between IN and OUT is that with IN you can use the output buffer to pass in data, while OUT is only used to return data.
With METHOD_BUFFERED, the I/O Manager allocates a system buffer that is the larger of the input and output buffer sizes and copies the input data into it. Before you return, you simply copy the return data into that same buffer. The number of bytes returned is put into the IO_STATUS_BLOCK, and the I/O Manager copies the data into the caller's output buffer. METHOD_NEITHER, by contrast, performs no such buffering and hands the raw user-mode pointers to the driver.
Ok, let’s see an example :
First, we declare all our needed variable.
Note that the PAGED_CODE macro ensures that the calling thread is running at an IRQL that is low enough to permit paging.
NTSTATUS DrvIOCTLDispatcher( PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    PIO_STACK_LOCATION  irpSp;            // Pointer to current stack location
    NTSTATUS            ntStatus = STATUS_SUCCESS;   // Assume success
    ULONG               inBufLength;      // Input buffer length
    ULONG               outBufLength;     // Output buffer length
    PCHAR               inBuf, outBuf;    // pointer to Input and output buffer
    PCHAR               data = "This String is from Device Driver !!!";
    size_t              datalen = strlen(data) + 1;  // Length of data including null
    PMDL                mdl = NULL;
    PCHAR               buffer = NULL;

    UNREFERENCED_PARAMETER(DeviceObject);

    PAGED_CODE();

    irpSp = IoGetCurrentIrpStackLocation(Irp);
    inBufLength = irpSp->Parameters.DeviceIoControl.InputBufferLength;
    outBufLength = irpSp->Parameters.DeviceIoControl.OutputBufferLength;

    if (!inBufLength || !outBufLength)
    {
        ntStatus = STATUS_INVALID_PARAMETER;
        goto End;
    }
    ...
Then we use a switch-case over the IOCTL codes (here we just copy the buffers and show them with DbgPrint()).
switch (irpSp->Parameters.DeviceIoControl.IoControlCode)
{
case IOCTL_SIOCTL_METHOD_BUFFERED:

    DbgPrint("Called IOCTL_SIOCTL_METHOD_BUFFERED\n");
    PrintIrpInfo(Irp);

    inBuf = Irp->AssociatedIrp.SystemBuffer;
    outBuf = Irp->AssociatedIrp.SystemBuffer;

    DbgPrint("\tData from User :");
    DbgPrint(inBuf);
    PrintChars(inBuf, inBufLength);

    RtlCopyBytes(outBuf, data, outBufLength);

    DbgPrint(("\tData to User : "));
    PrintChars(outBuf, datalen);

    Irp->IoStatus.Information = (outBufLength < datalen ? outBufLength : datalen);

    break;
...
You can see all the implementations on my GitHub, but that's enough for now; in the rest of the post we only use the IOCTL_SIOCTL_METHOD_BUFFERED method.
Now, from user mode: if you remember from the previous part, we created a handle (HANDLE) using CreateFile. Now we can use DeviceIoControl to call DrvIOCTLDispatcher (IRP_MJ_DEVICE_CONTROL) along with our parameters from user mode.
char OutputBuffer[1000];
char InputBuffer[1000];
ULONG bytesReturned;
BOOL Result;

StringCbCopy(InputBuffer, sizeof(InputBuffer), "This String is from User Application; using METHOD_BUFFERED");

printf("\nCalling DeviceIoControl METHOD_BUFFERED:\n");

memset(OutputBuffer, 0, sizeof(OutputBuffer));

Result = DeviceIoControl(handle,
                         (DWORD)IOCTL_SIOCTL_METHOD_BUFFERED,
                         &InputBuffer,
                         (DWORD)strlen(InputBuffer) + 1,
                         &OutputBuffer,
                         sizeof(OutputBuffer),
                         &bytesReturned,
                         NULL);

if (!Result)
{
    printf("Error in DeviceIoControl : %d", GetLastError());
    return 1;
}

printf(" OutBuffer (%d): %s\n", bytesReturned, OutputBuffer);
There is an old yet great topic here which describes the different types of IOCTL dispatching.
I think we're done with the WDK basics; it's time to see how we can use Windows to build our VMM.
Per Processor Configuration and Setting Affinity
Affinity to a specific logical processor is one of the main things that we should consider when working with a hypervisor.
Unfortunately, in Windows there is nothing like on_each_cpu (as there is in a Linux kernel module), so we have to change our affinity manually in order to run on each logical processor. My Intel Core i7 6820HQ has 4 physical cores, and each core can run 2 threads simultaneously (thanks to hyper-threading), so we have 8 logical processors and, of course, 8 sets of all the registers (including general-purpose registers and MSRs). Therefore we should configure our VMM to work on all 8 logical processors.
To get the count of logical processors you can use KeQueryActiveProcessorCount (note that KeQueryActiveProcessors returns an affinity bitmask, not a count); then we pass a KAFFINITY mask to KeSetSystemAffinityThread, which sets the system affinity of the current thread.
A KAFFINITY mask can be built using a simple power function:
int ipow(int base, int exp)
{
    int result = 1;
    for (;;)
    {
        if (exp & 1)
        {
            result *= base;
        }
        exp >>= 1;
        if (!exp)
        {
            break;
        }
        base *= base;
    }
    return result;
}
Then we use the following code to change the processor affinity and run our code on each of the logical cores separately:
KAFFINITY kAffinityMask;
for (size_t i = 0; i < KeQueryActiveProcessorCount(0); i++)
{
    kAffinityMask = ipow(2, i);
    KeSetSystemAffinityThread(kAffinityMask);

    DbgPrint("=====================================================");
    DbgPrint("Current thread is executing in %d th logical processor.", i);

    // Put your function here!
}
Conversion between the physical and virtual addresses
The VMXON Region and the VMCS Region (see below) use physical addresses as the operands to the VMXON and VMPTRLD instructions, so we should create a function to convert a virtual address to a physical address.
And because we can't directly use physical addresses for our modifications in protected mode, we also have to convert physical addresses back to virtual addresses.
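A minimal sketch of such helpers, assuming the function names used later in this part (PhysicalAddress_to_VirtualAddress and its counterpart), could be built on the kernel routines MmGetPhysicalAddress and MmGetVirtualForPhysical:

UINT64 VirtualAddress_to_PhysicalAddress(void* va)
{
    // MmGetPhysicalAddress translates a valid virtual address to a PHYSICAL_ADDRESS
    return MmGetPhysicalAddress(va).QuadPart;
}

UINT64 PhysicalAddress_to_VirtualAddress(UINT64 pa)
{
    PHYSICAL_ADDRESS PhysicalAddr;
    PhysicalAddr.QuadPart = pa;

    // MmGetVirtualForPhysical maps the physical address back to a kernel virtual address
    return (UINT64)MmGetVirtualForPhysical(PhysicalAddr);
}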
In the previous part, we queried for the presence of a hypervisor from user mode, but we should check for hypervisor support from kernel mode too. This reduces the possibility of kernel errors later, and something might have disabled the hypervisor using the lock bit. The following code checks the IA32_FEATURE_CONTROL MSR (MSR address 3AH) to see whether the lock bit is set or not.
BOOLEAN Is_VMX_Supported()
{
    CPUID data = { 0 };

    // VMX bit
    __cpuid((int*)&data, 1);
    if ((data.ecx & (1 << 5)) == 0)
        return FALSE;

    IA32_FEATURE_CONTROL_MSR Control = { 0 };
    Control.All = __readmsr(MSR_IA32_FEATURE_CONTROL);

    // BIOS lock check
    if (Control.Fields.Lock == 0)
    {
        Control.Fields.Lock = TRUE;
        Control.Fields.EnableVmxon = TRUE;
        __writemsr(MSR_IA32_FEATURE_CONTROL, Control.All);
    }
    else if (Control.Fields.EnableVmxon == FALSE)
    {
        DbgPrint("[*] VMX locked off in BIOS");
        return FALSE;
    }

    return TRUE;
}
The structures used in the above function are declared like this:
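A hedged sketch of these declarations follows; the field layout mirrors the IA32_FEATURE_CONTROL bit definitions in the Intel SDM, and the names of the reserved fields are only illustrative:

typedef union _IA32_FEATURE_CONTROL_MSR
{
    ULONG64 All;
    struct
    {
        ULONG64 Lock               : 1;   // bit 0: lock bit
        ULONG64 EnableSMX          : 1;   // bit 1: enable VMXON in SMX operation
        ULONG64 EnableVmxon        : 1;   // bit 2: enable VMXON outside SMX operation
        ULONG64 Reserved1          : 5;   // bits 3-7
        ULONG64 EnableLocalSENTER  : 7;   // bits 8-14: SENTER local function enables
        ULONG64 EnableGlobalSENTER : 1;   // bit 15: SENTER global enable
        ULONG64 Reserved2          : 16;  // bits 16-31
        ULONG64 Reserved3          : 32;  // bits 32-63
    } Fields;
} IA32_FEATURE_CONTROL_MSR, *PIA32_FEATURE_CONTROL_MSR;

typedef struct _CPUID
{
    int eax;
    int ebx;
    int ecx;
    int edx;
} CPUID, *PCPUID;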
Before executing VMXON, software should allocate a naturally aligned 4-KByte region of memory that a logical processor may use to support VMX operation. This region is called the VMXON region. The address of the VMXON region (the VMXON pointer) is provided in an operand to VMXON.
A VMM can (and should) use a different VMXON Region for each logical processor; otherwise the behavior is "undefined".
Note: The first processors to support VMX operation require that the following bits be 1 in VMX operation: CR0.PE, CR0.NE, CR0.PG, and CR4.VMXE. The restrictions on CR0.PE and CR0.PG imply that VMX operation is supported only in paged protected mode (including IA-32e mode). Therefore, the guest software cannot be run in unpaged protected mode or in real-address mode.
Now that we are configuring the hypervisor, we should have a global variable that describes the state of our virtual machine. I created the following structure for this purpose; currently it has just two fields (VMXON_REGION and VMCS_REGION), but we will add new fields to it in future parts.
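A sketch of that structure, using the field names referenced later in this part:

typedef struct _VirtualMachineState
{
    UINT64 VMXON_REGION;   // physical address of the VMXON region for this logical processor
    UINT64 VMCS_REGION;    // physical address of the VMCS region for this logical processor
} VirtualMachineState, *PVirtualMachineState;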
// at IRQL > DISPATCH_LEVEL memory allocation routines don't work
if (KeGetCurrentIrql() > DISPATCH_LEVEL)
    KeRaiseIrqlToDpcLevel();
This code changes the current IRQL to DISPATCH_LEVEL, but we can ignore it as long as we use MmAllocateContiguousMemory. If you want to use another type of memory for your VMXON region, you should use MmAllocateContiguousMemorySpecifyCache (shown commented); the other memory types you can use are described here.
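For illustration only, a cached allocation with that alternative routine might look like the sketch below (it is not what we actually use; VMXONSize refers to the size computed in the allocation code shown further down):

// hedged sketch: allocate the region as cached (write-back) contiguous memory instead
PHYSICAL_ADDRESS Lowest   = { 0 };
PHYSICAL_ADDRESS Highest  = { 0 };
PHYSICAL_ADDRESS Boundary = { 0 };
Highest.QuadPart = MAXULONG64;

BYTE* Buffer = MmAllocateContiguousMemorySpecifyCache(VMXONSize, Lowest, Highest,
                                                      Boundary, MmCached);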
Note that to ensure proper behavior in VMX operation, you should maintain the VMCS region and related structures in writeback cacheable memory. Alternatively, you may map any of these regions or structures with the UC memory type. Doing so is strongly discouraged unless necessary as it will cause the performance of transitions using those structures to suffer significantly.
Write-back is a caching method in which data is written into the cache every time a change occurs but is written into the corresponding location in main memory only at specified intervals or under certain conditions. Whether a page is cacheable or not can be determined from the cache-disable bit in the paging structures (PTE).
By the way, we should allocate 8192 bytes, because there is no guarantee that Windows returns aligned memory, so we can always find a 4096-byte-aligned piece within the 8192 bytes. (By aligned I mean that the physical address should be divisible by 4096 without any remainder.)
In my experience, MmAllocateContiguousMemory allocations are always aligned, probably because every page tracked by the PFN database is 4096 bytes, and since we need 4096 bytes, it comes back aligned.
PHYSICAL_ADDRESS PhysicalMax = { 0 };
PhysicalMax.QuadPart = MAXULONG64;

int VMXONSize = 2 * VMXON_SIZE;

// Allocating a 4-KByte Contiguous Memory region
BYTE* Buffer = MmAllocateContiguousMemory(VMXONSize, PhysicalMax);
if (Buffer == NULL)
{
    DbgPrint("[*] Error : Couldn't Allocate Buffer for VMXON Region.");
    return FALSE; // ntStatus = STATUS_INSUFFICIENT_RESOURCES;
}
Now we should convert the address of the allocated memory to its physical address and make sure it’s aligned.
Memory that MmAllocateContiguousMemory allocates is uninitialized, and a kernel-mode driver must first set it to zero, so we use RtlSecureZeroMemory here.
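A minimal sketch of those two steps; variable names such as alignedPhysicalBuffer are only illustrative, chosen to match the VMXON code shown later:

// zero the freshly allocated buffer, then round its physical address
// up to the next 4-KByte boundary
UINT64 PhysicalBuffer;
UINT64 alignedPhysicalBuffer;

RtlSecureZeroMemory(Buffer, VMXONSize);

PhysicalBuffer        = VirtualAddress_to_PhysicalAddress(Buffer);
alignedPhysicalBuffer = (PhysicalBuffer + 4096 - 1) & ~((UINT64)4096 - 1);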
Before executing VMXON, software should write the VMCS revision identifier to the VMXON region. (Specifically, it should write the 31-bit VMCS revision identifier to bits 30:0 of the first 4 bytes of the VMXON region; bit 31 should be cleared to 0.)
It need not initialize the VMXON region in any other way. Software should use a separate region for each logical processor and should not access or modify the VMXON region of a logical processor between the execution of VMXON and VMXOFF on that logical processor. Doing otherwise may lead to unpredictable behavior.
So let’s get the Revision Identifier from IA32_VMX_BASIC_MSR and write it to our VMXON Region.
The last part executes the VMXON instruction.
int status = __vmx_on(&alignedPhysicalBuffer);
if (status)
{
    DbgPrint("[*] VMXON failed with status %d\n", status);
    return FALSE;
}

vmState->VMXON_REGION = alignedPhysicalBuffer;

return TRUE;
__vmx_on is the intrinsic function for executing VMXON. The status code has the following meanings:
Value: Meaning
0: The operation succeeded.
1: The operation failed with extended status available in the VM-instruction error field of the current VMCS.
2: The operation failed without status available.
If we try to set up the VMXON Region with VMXON and it fails, status = 1; if there isn't any current VMCS, status = 2; and if the operation is successful, status = 0.
If you execute the above code twice without executing VMXOFF then you definitely get errors.
Now, our VMXON Region is ready and we’re good to go.
Virtual-Machine Control Data Structures (VMCS)
A logical processor uses virtual-machine control data structures (VMCSs) while it is in VMX operation. These manage transitions into and out of VMX non-root operation (VM entries and VM exits) as well as processor behavior in VMX non-root operation. This structure is manipulated by the new instructions VMCLEAR, VMPTRLD, VMREAD, and VMWRITE.
The above picture illustrates the life cycle of VMX operation on the VMCS Region.
Initializing VMCS Region
A VMM can (and should) use a different VMCS Region for each logical processor, so you need to set the logical processor affinity and run your initialization routine multiple times.
The location where the VMCS is stored is called the "VMCS Region".
The VMCS Region is:
4 KBytes in size (bits 11:0 of its address must be zero)
Aligned to a 4-KByte boundary
Referenced by a pointer that must not set bits beyond the processor's physical-address width (software can determine a processor's physical-address width by executing CPUID with 80000008H in EAX; the physical-address width is returned in bits 7:0 of EAX)
There might be several VMCSs simultaneously in a processor but just one of them is currently active and the VMLAUNCH, VMREAD, VMRESUME, and VMWRITE instructions operate only on the current VMCS.
Using VMPTRLD sets the current VMCS on a logical processor.
The memory operand of the VMCLEAR instruction is also the address of a VMCS. After execution of the instruction, that VMCS is neither active nor current on the logical processor. If the VMCS had been current on the logical processor, the logical processor no longer has a current VMCS.
VMPTRST stores the current VMCS pointer; it stores the value FFFFFFFFFFFFFFFFH if there is no current VMCS.
The launch state of a VMCS determines which VM-entry instruction should be used with that VMCS. The VMLAUNCH instruction requires a VMCS whose launch state is “clear”; the VMRESUME instruction requires a VMCS whose launch state is “launched”. A logical processor maintains a VMCS’s launch state in the corresponding VMCS region.
If the launch state of the current VMCS is “clear”, successful execution of the VMLAUNCH instruction changes the launch state to “launched”.
The memory operand of the VMCLEAR instruction is the address of a VMCS. After execution of the instruction, the launch state of that VMCS is “clear”.
There are no other ways to modify the launch state of a VMCS (it cannot be modified using VMWRITE) and there is no direct way to discover it (it cannot be read using VMREAD).
The following picture illustrates the contents of a VMCS Region.
The following code is responsible for allocating the VMCS Region:
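A condensed, hedged sketch of such a function, reusing the helper names from earlier in this part (VMCS_SIZE, MSR_IA32_VMX_BASIC and the conversion helpers are assumed to be defined; error handling is trimmed):

BOOLEAN Allocate_VMCS_Region(IN PVirtualMachineState vmState)
{
    PHYSICAL_ADDRESS PhysicalMax = { 0 };
    PhysicalMax.QuadPart = MAXULONG64;

    int   VMCSSize = 2 * VMCS_SIZE;
    BYTE* Buffer   = MmAllocateContiguousMemory(VMCSSize, PhysicalMax);
    if (Buffer == NULL)
    {
        DbgPrint("[*] Error : Couldn't Allocate Buffer for VMCS Region.");
        return FALSE;
    }
    RtlSecureZeroMemory(Buffer, VMCSSize);

    UINT64 PhysicalBuffer        = VirtualAddress_to_PhysicalAddress(Buffer);
    UINT64 alignedPhysicalBuffer = (PhysicalBuffer + 4096 - 1) & ~((UINT64)4096 - 1);
    UINT64 alignedVirtualBuffer  = ((UINT64)Buffer + 4096 - 1) & ~((UINT64)4096 - 1);

    // write the VMCS revision identifier (bits 30:0 of IA32_VMX_BASIC) into the region
    *(UINT32*)alignedVirtualBuffer = (UINT32)(__readmsr(MSR_IA32_VMX_BASIC) & 0x7FFFFFFF);

    // make this region the current VMCS on this logical processor
    int status = __vmx_vmptrld(&alignedPhysicalBuffer);
    if (status)
    {
        DbgPrint("[*] VMPTRLD failed with status %d\n", status);
        return FALSE;
    }

    vmState->VMCS_REGION = alignedPhysicalBuffer;
    return TRUE;
}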
The above code is exactly the same as for the VMXON Region, except that it uses __vmx_vmptrld instead of __vmx_on; __vmx_vmptrld is the intrinsic function for the VMPTRLD instruction.
For the VMCS as well, we should read the revision identifier from MSR_IA32_VMX_BASIC and write it to the VMCS Region before executing VMPTRLD.
After configuring the above regions, it's time to think about DrvClose, which runs when the handle to the driver is no longer held by the user-mode application. At that point, we should terminate VMX and free all the memory that we allocated earlier.
The following function is responsible for executing VMXOFF and then calling MmFreeContiguousMemory to free the allocated memory:
void Terminate_VMX(void)
{
    DbgPrint("\n[*] Terminating VMX...\n");

    KAFFINITY kAffinityMask;
    for (size_t i = 0; i < ProcessorCounts; i++)
    {
        kAffinityMask = ipow(2, i);
        KeSetSystemAffinityThread(kAffinityMask);
        DbgPrint("\t\tCurrent thread is executing in %d th logical processor.", i);

        __vmx_off();
        MmFreeContiguousMemory(PhysicalAddress_to_VirtualAddress(vmState[i].VMXON_REGION));
        MmFreeContiguousMemory(PhysicalAddress_to_VirtualAddress(vmState[i].VMCS_REGION));
    }

    DbgPrint("[*] VMX Operation turned off successfully. \n");
}
Keep in mind that we convert the VMXON and VMCS Region addresses back to virtual addresses because MmFreeContiguousMemory accepts a VA; otherwise, it leads to a BSOD.
Ok, It’s almost done!
Testing our VMM
Let’s create a test case for our code, first a function for Initiating VMXON and VMCS Regions through all logical processor.
PVirtualMachineState vmState;
int ProcessorCounts;

PVirtualMachineState Initiate_VMX(void)
{
    if (!Is_VMX_Supported())
    {
        DbgPrint("[*] VMX is not supported in this machine !");
        return NULL;
    }

    ProcessorCounts = KeQueryActiveProcessorCount(0);
    vmState = ExAllocatePoolWithTag(NonPagedPool, sizeof(VirtualMachineState) * ProcessorCounts, POOLTAG);

    DbgPrint("\n=====================================================\n");

    KAFFINITY kAffinityMask;
    for (size_t i = 0; i < ProcessorCounts; i++)
    {
        kAffinityMask = ipow(2, i);
        KeSetSystemAffinityThread(kAffinityMask);

        // do stuff here!
        DbgPrint("\t\tCurrent thread is executing in %d th logical processor.", i);

        Enable_VMX_Operation();    // Enabling VMX Operation
        DbgPrint("[*] VMX Operation Enabled Successfully !");

        Allocate_VMXON_Region(&vmState[i]);
        Allocate_VMCS_Region(&vmState[i]);

        DbgPrint("[*] VMCS Region is allocated at ===============> %llx", vmState[i].VMCS_REGION);
        DbgPrint("[*] VMXON Region is allocated at ===============> %llx", vmState[i].VMXON_REGION);

        DbgPrint("\n=====================================================\n");
    }

    return vmState;
}
The above function should be called from IRP_MJ_CREATE, so let's modify our DrvCreate to:
NTSTATUS DrvCreate(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp)
{
    DbgPrint("[*] DrvCreate Called !");

    if (Initiate_VMX())
    {
        DbgPrint("[*] VMX Initiated Successfully.");
    }

    Irp->IoStatus.Status = STATUS_SUCCESS;
    Irp->IoStatus.Information = 0;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);

    return STATUS_SUCCESS;
}
And modify DrvClose to :
NTSTATUS DrvClose(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp)
{
    DbgPrint("[*] DrvClose Called !");

    // executing VMXOFF on every logical processor
    Terminate_VMX();

    Irp->IoStatus.Status = STATUS_SUCCESS;
    Irp->IoStatus.Information = 0;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);

    return STATUS_SUCCESS;
}
Now, run the code. When the handle is created, you can see that our regions are allocated successfully.
And when we call CloseHandle from user mode:
Source code
The source code of this part of the tutorial is available on my GitHub.
Conclusion
In this part we learned about the different types of IOCTL dispatching, then we saw the different Windows functions we need to manage our hypervisor VMM, and we initialized the VMXON Regions and VMCS Regions and then terminated them.
In the next part, we'll focus on the VMCS and the different operations that can be performed on VMCS Regions in order to control our guest software.
It’s the second part of a multiple series of a tutorial called “Hypervisor From Scratch”, First I highly recommend to read the first part (Basic Concepts & Configure Testing Environment) before reading this part, as it contains the basic knowledge you need to know in order to understand the rest of this tutorial.
In this section, we will learn about Detecting Hypervisor Support for our processor, then we simply config the basic stuff to Enable VMX and Entering VMX Operation and a lot more thing about Window Driver Kit (WDK).
Configuring Our IRP Major Functions
Besides our kernel-mode driver ("MyHypervisorDriver"), I created a user-mode application called "MyHypervisorApp" (the source code is available on my GitHub). First of all, I should encourage you to write most of your code in user mode rather than kernel mode, because unhandled exceptions in kernel mode lead to BSODs; running less code in kernel mode also reduces the chance of introducing nasty kernel-mode bugs.
If you remember from the previous part, we created some Windows Driver Kit code; now we want to develop our project to support more IRP Major Functions.
IRP Major Functions are kept in a conventional Windows table that is created for every device. Once you register your device in Windows, you have to provide the functions that handle these IRP Major Functions. In other words, every device has a table of its Major Functions, and every time a user-mode application calls any of them, Windows finds the corresponding function (if the device driver supports that MJ Function) based on the device requested by the user, calls it, and passes an IRP pointer to the kernel driver.
Now it's the responsibility of the device function to check privileges, and so on.
Note that our device name is “\Device\MyHypervisorDevice“.
After that, we need to introduce our Major Functions for our device.
if (NtStatus == STATUS_SUCCESS && NtStatusSymLinkResult == STATUS_SUCCESS)
{
    for (uiIndex = 0; uiIndex < IRP_MJ_MAXIMUM_FUNCTION; uiIndex++)
        pDriverObject->MajorFunction[uiIndex] = DrvUnsupported;

    DbgPrint("[*] Setting Devices major functions.");

    pDriverObject->MajorFunction[IRP_MJ_CLOSE] = DrvClose;
    pDriverObject->MajorFunction[IRP_MJ_CREATE] = DrvCreate;
    pDriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = DrvIOCTLDispatcher;
    pDriverObject->MajorFunction[IRP_MJ_READ] = DrvRead;
    pDriverObject->MajorFunction[IRP_MJ_WRITE] = DrvWrite;

    pDriverObject->DriverUnload = DrvUnload;
}
else
{
    DbgPrint("[*] There was some errors in creating device.");
}
You can see that I assigned "DrvUnsupported" to all of the functions; this is a function that handles all MJ Functions and tells the user that the operation is not supported. The main body of this function looks like this:
NTSTATUS DrvUnsupported(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp)
{
    DbgPrint("[*] This function is not supported :( !");

    Irp->IoStatus.Status = STATUS_SUCCESS;
    Irp->IoStatus.Information = 0;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);

    return STATUS_SUCCESS;
}
We also introduce the other major functions that are essential for our device; we'll complete their implementation in the future, so for now let's just leave them alone.
Each major function is only triggered when we call its corresponding function from user mode. For instance, there is a user-mode function called CreateFile (and all its variants, like CreateFileA and CreateFileW for ASCII and Unicode), and every time we call CreateFile, the function registered as IRP_MJ_CREATE is called; likewise, ReadFile triggers IRP_MJ_READ and WriteFile triggers IRP_MJ_WRITE. You can see that Windows treats its devices like files, and everything we need to pass from user mode to kernel mode is available through the PIRP Irp passed to the function, as a buffer.
In this case, Windows is responsible for copying the user-mode buffer into kernel-mode memory.
Don't worry, we use these frequently in the rest of the project, but we only support IRP_MJ_CREATE in this part and leave the others unimplemented for future parts.
IRP Minor Functions
IRP Minor Functions are mainly used by the PnP manager to notify the driver of a special event. For example, the PnP manager sends IRP_MN_START_DEVICE after it has assigned hardware resources, if any, to the device, and it sends IRP_MN_STOP_DEVICE to stop a device so it can reconfigure the device's hardware resources.
We will need these minor functions later in these series.
For optimizing VMM, you can use Fast I/O which is a different way to initiate I/O operations that are faster than IRP. Fast I/O operations are always synchronous.
Fast I/O is specifically designed for rapid synchronous I/O on cached files. In fast I/O operations, data is transferred directly between user buffers and the system cache, bypassing the file system and the storage driver stack. (Storage drivers do not use fast I/O.) If all of the data to be read from a file is resident in the system cache when a fast I/O read or write request is received, the request is satisfied immediately.
When the I/O Manager receives a request for synchronous file I/O (other than paging I/O), it invokes the fast I/O routine first. If the fast I/O routine returns TRUE, the operation was serviced by the fast I/O routine. If the fast I/O routine returns FALSE, the I/O Manager creates and sends an IRP instead.
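For illustration only, wiring a driver into Fast I/O boils down to filling a FAST_IO_DISPATCH table and pointing the driver object at it; the routine name below is hypothetical, and this project does not actually use Fast I/O:

// hedged sketch: registering a Fast I/O dispatch table (MyFastIoDeviceControl is hypothetical)
static FAST_IO_DISPATCH g_FastIoDispatch = { 0 };

VOID SetupFastIo(PDRIVER_OBJECT DriverObject)
{
    g_FastIoDispatch.SizeOfFastIoDispatch = sizeof(FAST_IO_DISPATCH);
    // g_FastIoDispatch.FastIoDeviceControl = MyFastIoDeviceControl;

    DriverObject->FastIoDispatch = &g_FastIoDispatch;
}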
#pragma once
#include <ntddk.h>
#include <wdf.h>
#include <wdm.h>

extern void inline Breakpoint(void);
extern void inline Enable_VMX_Operation(void);

NTSTATUS DriverEntry(PDRIVER_OBJECT pDriverObject, PUNICODE_STRING pRegistryPath);
VOID DrvUnload(PDRIVER_OBJECT DriverObject);
NTSTATUS DrvCreate(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);
NTSTATUS DrvRead(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);
NTSTATUS DrvWrite(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);
NTSTATUS DrvClose(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);
NTSTATUS DrvUnsupported(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);
NTSTATUS DrvIOCTLDispatcher(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);

VOID PrintChars(_In_reads_(CountChars) PCHAR BufferAddress, _In_ size_t CountChars);
VOID PrintIrpInfo(PIRP Irp);

#pragma alloc_text(INIT, DriverEntry)
#pragma alloc_text(PAGE, DrvUnload)
#pragma alloc_text(PAGE, DrvCreate)
#pragma alloc_text(PAGE, DrvRead)
#pragma alloc_text(PAGE, DrvWrite)
#pragma alloc_text(PAGE, DrvClose)
#pragma alloc_text(PAGE, DrvUnsupported)
#pragma alloc_text(PAGE, DrvIOCTLDispatcher)

// IOCTL Codes and Its meanings
#define IOCTL_TEST 0x1 // In case of testing
Now just compile your driver.
Loading the Driver and Checking the Presence of the Device
In order to load our driver (MyHypervisorDriver), first download OSR Driver Loader, then run Sysinternals DbgView as administrator and make sure that DbgView captures kernel output (check Capture -> Capture Kernel).
After that, open OSR Driver Loader (go to OsrLoader -> kit -> WNET -> AMD64 -> FRE) and run OSRLOADER.exe (in an x64 environment). Once you've built your driver, find the .sys file (in MyHypervisorDriver\x64\Debug\ there should be a file named "MyHypervisorDriver.sys"). In OSR Driver Loader, click Browse, select MyHypervisorDriver.sys, and then click "Register Service"; after the message box confirms that your driver registered successfully, click "Start Service".
Please note that you need the WDK installed for Visual Studio in order to be able to build the project.
Now go back to DbgView; you should see that your driver loaded successfully, and the message "[*] DriverEntry Called." should appear.
If there is no problem, then you're good to go; otherwise, if you have a problem with DbgView, check the next step.
Keep in mind that now you registered your driver so you can use SysInternals WinObj in order to see whether “MyHypervisorDevice” is available or not.
The Problem with DbgView
Unfortunately, for some unknown reason, I wasn't able to see the results of DbgPrint(). If you can see them, you can skip this step; if you have the same problem, perform the following steps:
In the registry, under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Debug Print Filter, add a DWORD value named IHVDRIVER with a value of 0xFFFF.
Reboot the machine and you'll be good to go.
This always works for me, and I've tested it on many computers, but my MacBook seems to have a problem.
To solve this problem, you need to find a Windows kernel global variable called nt!Kd_DEFAULT_Mask. This variable controls which messages show up in DbgView; it is a mask whose exact meaning I'm not aware of, so I just put 0xffffffff in it to simply make it show everything!
To do this, you need Windows local kernel debugging using Windbg.
Open a Command Prompt window as Administrator. Enter bcdedit /debug on
If the computer is not already configured as the target of a debug transport, enter bcdedit /dbgsettings local
Reboot the computer.
After that, open Windbg with Administrator (UAC) privileges, go to File > Kernel Debug > Local > press OK, and in your local Windbg find nt!Kd_DEFAULT_Mask using the following command:
lkd> x nt!Kd_DEFAULT_Mask
fffff801`f5211808 nt!Kd_DEFAULT_Mask = <no type information>
Now change its value to 0xffffffff.
lkd> eb fffff801`f5211808 ff ff ff ff
After that, you should see the results, and you'll be good to go.
Remember, this is an essential step for the rest of the topic, because if we can't see any kernel output then we can't debug.
Detecting Hypervisor Support
Discovering support for VMX is the first thing you should do before enabling VT-x; this is covered in the Intel Software Developer's Manual, volume 3C, section 23.6, DISCOVERING SUPPORT FOR VMX.
You can detect the presence of VMX using CPUID: if CPUID.1:ECX.VMX[bit 5] = 1, then VMX operation is supported.
First of all, we need to know whether we're running on an Intel-based processor or not. This can be determined by executing the CPUID instruction and checking for the vendor string "GenuineIntel".
The following function returns the vendor string from the CPUID instruction.
string GetCpuID()
{
    // Initialize used variables
    char SysType[13]; // Array consisting of 13 single bytes/characters
    string CpuID;     // The string that will be used to add all the characters to

    // Start coding in assembly language
    _asm
    {
        // Execute CPUID with EAX = 0 to get the CPU producer
        XOR EAX, EAX
        CPUID

        // MOV EBX to EAX and get the characters one by one by using shift out right bitwise operation.
        MOV EAX, EBX
        MOV SysType[0], al
        MOV SysType[1], ah
        SHR EAX, 16
        MOV SysType[2], al
        MOV SysType[3], ah

        // Get the second part the same way but these values are stored in EDX
        MOV EAX, EDX
        MOV SysType[4], al
        MOV SysType[5], ah
        SHR EAX, 16
        MOV SysType[6], al
        MOV SysType[7], ah

        // Get the third part
        MOV EAX, ECX
        MOV SysType[8], al
        MOV SysType[9], ah
        SHR EAX, 16
        MOV SysType[10], al
        MOV SysType[11], ah

        MOV SysType[12], 00
    }

    CpuID.assign(SysType, 12);
    return CpuID;
}
The last step is checking for the presence of VMX; you can check it using the following code:
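The original implementation uses inline assembly; an equivalent, hedged sketch using the MSVC __cpuid intrinsic would look like this (the function name matches the one used in main() below):

#include <intrin.h>

bool VMX_Support_Detection()
{
    int regs[4] = { 0 };                 // EAX, EBX, ECX, EDX

    __cpuid(regs, 1);                    // CPUID leaf 1: feature information

    return (regs[2] & (1 << 5)) != 0;    // ECX bit 5 = VMX
}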
As you can see, it checks CPUID with EAX = 1, and if bit 5 of ECX (the 6th bit) is set, then VMX operation is supported. We can also do the same thing in our kernel driver.
All in all, our main code should be something like this:
int main()
{
    string CpuID;
    CpuID = GetCpuID();

    cout << "[*] The CPU Vendor is : " << CpuID << endl;

    if (CpuID == "GenuineIntel")
    {
        cout << "[*] The Processor virtualization technology is VT-x. \n";
    }
    else
    {
        cout << "[*] This program is not designed to run in a non-VT-x environment !\n";
        return 1;
    }

    if (VMX_Support_Detection())
    {
        cout << "[*] VMX Operation is supported by your processor .\n";
    }
    else
    {
        cout << "[*] VMX Operation is not supported by your processor .\n";
        return 1;
    }

    _getch();
    return 0;
}
The final result:
Enabling VMX Operation
If our processor supports VMX operation, then it's time to enable it. As I said above, IRP_MJ_CREATE is the first function that should be used to start the operation.
From the Intel Software Developer's Manual (23.7 ENABLING AND ENTERING VMX OPERATION):
Before system software can enter VMX operation, it enables VMX by setting CR4.VMXE[bit 13] = 1. VMX operation is then entered by executing the VMXON instruction. VMXON causes an invalid-opcode exception (#UD) if executed with CR4.VMXE = 0. Once in VMX operation, it is not possible to clear CR4.VMXE. System software leaves VMX operation by executing the VMXOFF instruction. CR4.VMXE can be cleared outside of VMX operation after execution of VMXOFF. VMXON is also controlled by the IA32_FEATURE_CONTROL MSR (MSR address 3AH). This MSR is cleared to zero when a logical processor is reset. The relevant bits of the MSR are:
Bit 0 is the lock bit. If this bit is clear, VMXON causes a general-protection exception. If the lock bit is set, WRMSR to this MSR causes a general-protection exception; the MSR cannot be modified until a power-up reset condition. System BIOS can use this bit to provide a setup option for BIOS to disable support for VMX. To enable VMX support in a platform, BIOS must set bit 1, bit 2, or both, as well as the lock bit.
Bit 1 enables VMXON in SMX operation. If this bit is clear, execution of VMXON in SMX operation causes a general-protection exception. Attempts to set this bit on logical processors that do not support both VMX operation and SMX operation cause general-protection exceptions.
Bit 2 enables VMXON outside SMX operation. If this bit is clear, execution of VMXON outside SMX operation causes a general-protection exception. Attempts to set this bit on logical processors that do not support VMX operation cause general-protection exceptions.
Now we should create a function to perform this operation in assembly.
First, declare your function in the header file (in my case, Source.h):
extern void inline Enable_VMX_Operation(void);
Then, in the assembly file (in my case SourceAsm.asm), add this function (which sets CR4.VMXE, bit 13 of CR4):
Enable_VMX_Operation PROC PUBLIC
    push rax            ; Save the state
    xor  rax, rax       ; Clear RAX
    mov  rax, cr4
    or   rax, 02000h    ; Set CR4.VMXE (bit 13)
    mov  cr4, rax
    pop  rax            ; Restore the state
    ret
Enable_VMX_Operation ENDP
Also, declare your function at the top of SourceAsm.asm.
If you see the following result, then you completed the second part successfully.
Important Note: Your .asm file should have a different name from your driver's main .c file. For example, if your driver file is "Source.c", then naming the assembly file "Source.asm" causes weird linking errors in Visual Studio; change the name of your .asm file to something like "SourceAsm.asm" to avoid these linker errors.
Conclusion
In this part, you learned the basics you need to know in order to create a Windows Driver Kit program, and then we set up our virtual environment, so we have built a cornerstone for the rest of the parts.
In the third part, we'll go deeper into Intel VT-x and make our driver even more advanced, so stay tuned, it'll be ready soon!
Welcome to the first part of a multi-part series of tutorials called "Hypervisor From Scratch". As the name implies, this course contains the technical details needed to create a basic virtual machine based on hardware virtualization. If you follow the course, you'll be able to create your own virtual environment and you'll get an understanding of how VMWare, VirtualBox, KVM and other virtualization software use the processor's facilities to create a virtual environment.
Introduction
Both Intel and AMD support virtualization in their modern CPUs. Intel introduced VT-x, previously codenamed "Vanderpool", on November 13, 2005, in the Pentium 4 series. The CPU flag for VT-x capability is "vmx", which stands for Virtual Machine eXtensions.
AMD, on the other hand, developed its first generation of virtualization extensions under the codename “Pacifica“, and initially published them as AMD Secure Virtual Machine (SVM), but later marketed them under the trademark AMD Virtualization, abbreviated AMD-V.
There are two types of hypervisor. A type 1 hypervisor is called a "bare metal" or "native" hypervisor because it runs directly on the physical server; a type 1 hypervisor has direct access to the hardware. With a type 1 hypervisor, there is no host operating system loaded beneath the hypervisor.
Contrary to a type 1 hypervisor, a type 2 hypervisor loads inside an operating system, just like any other application. Because a type 2 hypervisor has to go through the operating system and is managed by the OS, it (and its virtual machines) will run less efficiently (slower) than a type 1 hypervisor.
Most virtualization concepts are the same for both vendors, but VT-x and AMD-V require different considerations. The rest of these tutorials mainly focus on VT-x because Intel CPUs are more popular and more widely used. In my opinion, AMD describes virtualization more clearly in its manuals, while Intel's virtualization documentation somehow tends to confuse its readers.
Hypervisor and Platform
These concepts are platform independent: you can run the same code on both Linux and Windows and expect the same behavior from the CPU, but I prefer to use Windows as it's more easily debuggable (at least for me), though I'll try to give examples for Linux systems whenever needed. Personally, since the Linux kernel handles faults like #GP and other exceptions and tries to avoid a kernel panic and keep the system up, it is better suited for testing things like a hypervisor or any other CPU-related work. On the other hand, Windows never tries to handle an unexpected exception, and you get a blue screen of death whenever you fail to handle your exceptions, so you might see lots of BSODs. Either way, you'd better test on both platforms (and others too).
Finally, I will certainly make mistakes, such as wrong implementations, misinformation, or forgetting to mention important details, so I apologize in advance, and I'll be glad to receive any comment that points out mistakes in the technical information.
That’s enough, Let’s get started!
The Tools you’ll need
You should have Visual Studio with the WDK installed. You can get the Windows Driver Kit (WDK) here.
The best way to debug Windows and anything in kernel mode is with Windbg, which is available in the Windows SDK here. (If you installed the WDK with the default options, then you probably installed the WDK and SDK together, so you can skip this step.)
You should be able to debug your OS (in this case Windows) using Windbg, more information here.
Hex-rays IDA Pro is an important part of this tutorial.
OSR Driver Loader, which can be downloaded here; we use this tool to load our drivers into the Windows machine.
Almost all of the code in this tutorial has to run at kernel level, and you must set up either a Linux kernel module or a Windows Driver Kit (WDK) project. As configuring a VMM involves lots of assembly code, you should know how to run assembly within your kernel project. On Linux you don't need to do anything special, but in the case of Windows, the WDK no longer supports inline assembly in an x64 environment, so if you haven't dealt with this problem before you might struggle to create a simple x64 project with assembly. Don't worry: in one of my posts I explained it step by step, so I highly recommend reading that topic to solve the problem before continuing with the rest of this part.
Now it's time to create a driver!
There is a good article here if you want to start with Windows Driver Kit (WDK).
AssemblyFunc1 and AssemblyFunc2 are two external functions that are defined in x64 assembly code.
Our driver needs to register a device so that we can communicate with our virtual environment from user-mode code. On the other hand, I defined DrvUnload, which uses the PnP features of Windows drivers so you can easily unload your driver, remove the device, and then reload it and create a new device.
The following code is responsible for creating a new device:
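A hedged sketch of what such device creation typically looks like in DriverEntry, using the device name mentioned below ("\Device\MyHypervisorDevice") and the variable names referenced later (NtStatus, NtStatusSymLinkResult, pDriverObject); the symbolic link name is an assumption:

UNICODE_STRING DeviceName, DosDeviceName;
PDEVICE_OBJECT DeviceObject = NULL;

RtlInitUnicodeString(&DeviceName, L"\\Device\\MyHypervisorDevice");
RtlInitUnicodeString(&DosDeviceName, L"\\DosDevices\\MyHypervisorDevice");

// create the device object the user-mode application will open with CreateFile
NTSTATUS NtStatus = IoCreateDevice(pDriverObject, 0, &DeviceName,
                                   FILE_DEVICE_UNKNOWN, FILE_DEVICE_SECURE_OPEN,
                                   FALSE, &DeviceObject);

// create a symbolic link so the device is reachable from user mode
NTSTATUS NtStatusSymLinkResult = IoCreateSymbolicLink(&DosDeviceName, &DeviceName);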
If you use Windows, you should disable Driver Signature Enforcement in order to load your driver; that's because Microsoft prevents unverified code from running in the Windows kernel (Ring 0).
To do this, press and hold the shift key and restart your computer. You should see a new Window, then
Click Advanced options.
On the new Window Click Startup Settings.
Click on Restart.
On the Startup Settings screen press 7 or F7 to disable driver signature enforcement.
The last thing is enabling Windows debugging messages through the registry; this way you can get DbgPrint() results through Sysinternals DebugView.
In the registry, under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Debug Print Filter, add a DWORD value named IHVDRIVER with a value of 0xFFFF.
Reboot the machine and you'll be good to go.
Some thoughts before the start
There are some keywords that will be used frequently in the rest of this series, and you should know about them (most of the definitions are derived from the Intel Software Developer's Manual, volume 3C).
Virtual Machine Monitor (VMM): VMM acts as a host and has full control of the processor(s) and other platform hardware. A VMM is able to retain selective control of processor resources, physical memory, interrupt management, and I/O.
Guest Software: Each virtual machine (VM) is a guest software environment.
VMX Root Operation and VMX Non-root Operation: A VMM will run in VMX root operation and guest software will run in VMX non-root operation.
VMX transitions: Transitions between VMX root operation and VMX non-root operation.
VM entries: Transitions into VMX non-root operation.
Extended Page Table (EPT): A modern mechanism which uses a second layer for converting the guest physical address to host physical address.
VM exits: Transitions from VMX non-root operation to VMX root operation.
Virtual machine control structure (VMCS): a data structure in memory that exists exactly once per VM and is managed by the VMM. With every change of the execution context between different VMs, the VMCS is restored for the current VM, defining the state of the VM's virtual processor; the VMM controls the guest software using the VMCS.
The VMCS consists of six logical groups:
Guest-state area: Processor state saved into the guest state area on VM exits and loaded on VM entries.
Host-state area: Processor state loaded from the host state area on VM exits.
VM-execution control fields: Fields controlling processor operation in VMX non-root operation.
VM-exit control fields: Fields that control VM exits.
VM-entry control fields: Fields that control VM entries.
VM-exit information fields: Read-only fields to receive information on VM exits describing the cause and the nature of the VM exit.
I found a great piece of work that illustrates the VMCS; the PDF version is also available here.
Don't worry about the fields, I'll explain most of them clearly in later parts; just remember that the VMCS layout varies between different processor versions.
VMX Instructions
VMX introduces the following new instructions.
Intel/AMD Mnemonic: Description
INVEPT: Invalidate Translations Derived from EPT
INVVPID: Invalidate Translations Based on VPID
VMCALL: Call to VM Monitor
VMCLEAR: Clear Virtual-Machine Control Structure
VMFUNC: Invoke VM Function
VMLAUNCH: Launch Virtual Machine
VMRESUME: Resume Virtual Machine
VMPTRLD: Load Pointer to Virtual-Machine Control Structure
VMPTRST: Store Pointer to Virtual-Machine Control Structure
VMREAD: Read Field from Virtual-Machine Control Structure
VMWRITE: Write Field to Virtual-Machine Control Structure
VMXOFF: Leave VMX Operation
VMXON: Enter VMX Operation
Life Cycle of VMM Software
The following items summarize the life cycle of a VMM and its guest software as well as the interactions between them:
Software enters VMX operation by executing a VMXON instruction.
Using VM entries, a VMM can then turn guests into VMs (one at a time). The VMM effects a VM entry using instructions VMLAUNCH and VMRESUME; it regains control using VM exits.
VM exits transfer control to an entry point specified by the VMM. The VMM can take action appropriate to the cause of the VM exit and can then return to the VM using a VM entry.
Eventually, the VMM may decide to shut itself down and leave VMX operation. It does so by executing the VMXOFF instruction.
That’s enough for now!
In this part, I explained the general keywords that you should be aware of, and we created a simple lab for our future tests. In the next part, I will explain how to enable VMX on your machine using the facilities we created above, and then we'll survey the rest of the virtualization basics, so make sure to check the blog for the next part.
Let’s assume that you’re on a penetration test, where the Azure infrastructure is in scope (as it should be), and you have access to a domain account that happens to have “Contributor” rights on an Azure subscription. Contributor rights are typically harder to get, but we do see them frequently given out to developers, and if you’re lucky, an overly friendly admin may have added the domain users group as contributors for a subscription. Alternatively, we can assume that we started with a lesser privileged user and escalated up to the contributor account.
At this point, we could try to gather available credentials, dump configuration data, and attempt to further our access into other accounts (Owners/Domain Admins) in the subscription. For the purpose of this post, let’s assume that we’ve exhausted the read-only options and we’re still stuck with a somewhat privileged user that doesn’t allow us to pivot to other subscriptions (or the internal domain). At this point we may want to go after the virtual machines.
Attacking VMs
When attacking VMs, we could do some impactful testing and start pulling down snapshots of VHD files, but that’s noisy and nobody wants to download 100+ GB disk images. Since we like to tread lightly and work with the tools we have, let’s try for command execution on the VMs. In this example environment, let’s assume that none of the VMs are publicly exposed and you don’t want to open any firewall ports to allow for RDP or other remote management protocols.
You will want to run this command from an AzureRM session in PowerShell that is authenticated with a Contributor account. You can authenticate to Azure with the Login-AzureRmAccount command.
In this example, we've added an extra line (Invoke-Mimikatz) to the end of the Invoke-Mimikatz.ps1 file to run the function after it's been imported. Here is a sample run of the Invoke-Mimikatz.ps1 script on the VM (where no real accounts were logged in).
This is handy for running your favorite PS scripts on a couple of VMs (one at a time), but what if we want to scale this to an entire subscription?
Running Multiple Commands
I’ve added the Invoke-AzureRmVMBulkCMD function to MicroBurst to allow for execution of scripts against multiple VMs in a subscription. With this function, we can run commands against an entire subscription, a specific Resource Group, or just a list of individual hosts.
The GIF above has been sped up for demo purposes, but the total time to run Mimikatz on the 5 VMs in this subscription was 7 Minutes and 14 seconds. It’s not ideal (see below), but it’s functional. I haven’t taken the time to multi-thread this yet, but if anyone would like to help, feel free to send in a pull request here.
Other Ideas
For the purposes of this demo, we just ran Mimikatz on all of the VMs. That’s nice, but it may not always be your best choice. Additional PowerShell options that you may want to consider:
Spawning Cobalt Strike, Empire, or Metasploit sessions
Searching for Sensitive Files
Run domain information gathering scripts on one VM and use the output to target other specific VMs for code execution
Performance Issues
As a friendly reminder, this was all done in a demo environment. If you choose to make use of this in the real world, keep this in mind: Not all Azure regions or VM images will respond the same way. I have found that some regions and VMs are better suited for running these commands. I have run into issues (stalling, failing to execute) with non-US Azure regions and the usage of these commands.
Your mileage may vary, but for the most part, I have had luck with the US regions and standard Windows Server 2012 images. In my testing, the Invoke-Mimikatz.ps1 script would usually take around 30-60 seconds to run. Keep in mind that the script has to be uploaded to the VM for each round of execution, and some of your VMs may be underpowered.
Mitigations and Detection
For the defenders that are reading this, please be careful with your Owner and Contributor rights. If you have one take away from the post, let it be this – Contributor rights means SYSTEM rights on all the VMs.
If you want to cut down your contributor’s rights to execute these commands, create a new role for your contributors and limit the Microsoft.Compute/virtualMachines/runCommand/action permissions for your users.
Additionally, if you want to detect this, keep an eye out for the “Run Command on Virtual Machine” log entries. It’s easy to set up alerts for this, and unless the Invoke-AzureRmVMRunCommand is an integral part of your VM management process, it should be easy to detect when someone is using this command.
The following alert logic will let you know when anyone tries to use this command (Success or Failure). You can also extend the scope of this alert to All VMs in a subscription.
As always, if you have any issues, comments, or improvements for this script, feel free to reach out via the MicroBurst Github page.
This is a little nifty trick for detecting virtualization environments. At least, some of them.
Anytime you restore a snapshot of your virtual machine, the guest OS environment will usually run some initialization tasks first. In the case of VMware, these tasks are run by the vmtoolsd.exe process (assuming, of course, that you have VMware Tools installed).
Some of the tasks this process performs include clipboard initialization, which often places whatever is in the host's clipboard into the clipboard of the guest OS. This activity is bad 'opsec' on the part of the guest software.
By checking which process most recently modified the clipboard, we have a good chance of determining that our program is running inside a virtual machine. All you have to do is call the GetClipboardOwner API to determine the window that owns the clipboard at the time of the call, and from there get the process name via e.g. GetWindowThreadProcessId. Yup, it's that simple. While it may not work all the time, it is yet another way of testing the environment.
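A minimal Win32 sketch of this check (this is not the author's ClipboardOwnerDebug tool mentioned below, just a rough illustration of the two API calls):

#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    // window that currently owns the clipboard (i.e., last set its contents)
    HWND owner = GetClipboardOwner();
    if (owner == NULL)
    {
        printf("No clipboard owner window.\n");
        return 0;
    }

    DWORD pid = 0;
    GetWindowThreadProcessId(owner, &pid);   // process behind that window

    HANDLE process = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (process != NULL)
    {
        char  path[MAX_PATH];
        DWORD size = MAX_PATH;
        if (QueryFullProcessImageNameA(process, 0, path, &size))
        {
            printf("Clipboard owner: %s (PID %lu)\n", path, pid);

            // a case-insensitive comparison would be more robust in practice
            if (strstr(path, "vmtoolsd.exe") != NULL)
                printf("Clipboard was last touched by VMware Tools - likely a VM.\n");
        }
        CloseHandle(process);
    }
    return 0;
}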
If you want to check how and if it works on your VM snapshots you can use this little program: ClipboardOwnerDebug.exe
This is what I see on my win7 vm snapshot after I revert to its last state and run the ClipboardOwnerDebug.exe program:
Notably, I didn’t drag&drop/copy paste the ClipboardOwnerDebug.exe file to VM, I actually copied it via a network share to ensure my clipboard doesn’t change during this test; and, even if I did just CTRL+C (copy) the file on the host and CTRL+V (paste) it on the guest the result would be very similar anyway. The vmtoolsd.exe process just gets involved all the time.
The malware doesn’t need to rely on the first call to the GetClipboardOwner API. It could stall for a bit observing changes to the clipboard owner windows and testing if at any point there is a reference to a well-known virtualization process. Anytime the context of copying to clipboard changes between the host and the guest OS (very often when you do manual reversing), the clipboard window ownership will change, even if just temporarily.
The below is an example of the clipboard ownership changing during a simple VM session where things are copied to clipboard a few time, both on the host and on the guest and the context of the the clipboard changes. The context switch means that when the guest gets the mouse/keyboard focus, the changes to host clipboard are immediately reflected by the appearance of the vmtoolsd.exe process on the list:
VMFS is a clustered file system that, by default, prevents multiple virtual machines from opening and writing to the same virtual disk (vmdk file). This keeps more than one virtual machine from inadvertently accessing the same vmdk file and is the safety mechanism that avoids data corruption in cases where the applications in the virtual machines do not coordinate their writes to the shared disk. However, you might have a third-party cluster-aware application, where the multi-writer option allows VMFS-backed disks to be shared by multiple virtual machines, leveraging a third-party OS/application cluster solution to share a single VMDK disk on a VMFS filesystem. Because these third-party cluster-aware applications coordinate the writes coming from multiple virtual machines, they do not cause data loss. Examples of such third-party cluster-aware applications are Oracle RAC, Veritas Cluster Filesystem, etc.
There is a VMware KB, "Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag (1034165)", available at https://kb.vmware.com/kb/1034165. The KB describes how to enable or disable the simultaneous write protection provided by VMFS using the multi-writer flag. It is the official resource on how to use the multi-writer flag, but the operational procedure is a little bit obsolete, as vSphere 6.x supports configuration from the Web Client (Flash) or vSphere Client (HTML5) GUI, as highlighted in the screenshot below.
However, KB 1034165 contains several important limitations which should be considered and addressed in the solution design. The limitations of multi-writer mode are:
The virtual disk must be eager zeroed thick; it cannot be zeroed thick or thin provisioned.
Sharing is limited to 8 ESXi/ESX hosts with VMFS-3 (vSphere 4.x) and VMFS-5 (vSphere 5.x) and VMFS-6 in multi-writer mode.
Hot adding a virtual disk removes Multi-Writer Flag.
Let’s focus on 8 ESXi host limit. The above statement about scalability is a little bit unclear. That’s the reason why one of my customers has asked me what does it really mean. I did some research on internal VMware resources and fortunately enough I’ve found internal VMware discussion about this topic, so I think sharing the info about this topic will help to broader VMware community.
Here is 8 host limit explanation in other words …
“8 host limit implies how many ESXi hosts can simultaneously open the same virtual disk (aka VMDK file). If the cluster-aware application is not going to have more than 8 nodes, it works and it is supported. This limitation applies to a group of VMs sharing the same VMDK file for a particular instance of the cluster-aware application. In case, you need to consolidate multiple application clusters into a single vSphere cluster, you can safely do it and app nodes from one app cluster instance can run on other ESXi nodes than app nodes from another app cluster instance. It means that if you have more than one app cluster instance, all app cluster instances can leverage resources from more than 8 ESXi hosts in vSphere Cluster.”
The best way to fully understand a specific behavior is to test it. That's why I have a pretty decent home lab. However, I do not have 10 physical ESXi hosts, therefore I created a nested vSphere environment with a vSphere cluster of 9 ESXi hosts. You can see the vSphere cluster with two App Cluster Instances (App1, App2) in the screenshot below.
Application cluster instance App1 is composed of 9 nodes (9 VMs) and App2 of just 2 nodes. Each instance shares its own VMDK disk. The whole test infrastructure is conceptually depicted in the figures below.
Test Step 1: I have started 8 of 9 VMs of App1 cluster instance on 8 ESXi hosts (ESXi01-ESXi08). Such setup works perfectly fine as there is 1 to 1 mapping between VMs and ESX hosts within the limit of 8 ESXi hosts having shared VMDK1 opened.
Test Step 2: The next step is to test the power-on operation of App1-VM9 on ESXi09. This operation fails, which is the expected result, because a 9th ESXi host cannot open the VMDK1 file on the VMFS datastore.
The error message is visible on the screenshot below.
Test Step 3: The next step is to power on App1-VM9 on ESXi01. This operation is successful, because two app cluster nodes (virtual machines App1-VM1 and App1-VM9) are now running on a single ESXi host (ESXi01), so only 8 ESXi hosts have the VMDK1 file open and we are within the supported limits.
Test Step 4: Let's test vMotion of App1-VM9 from ESXi01 to ESXi09. This operation fails, for the same reason as the power-on operation: the App1 cluster instance would be stretched across 9 ESXi hosts, but a 9th ESXi host cannot open the VMDK1 file on the VMFS datastore.
The error message is a little bit different but the root cause is the same.
Test Step 5: Let’s test vMotion of App2-VM2 from ESXi08 to ESX09. Such operation works because App2 Cluster instance is still stretched across two ESXi hosts only so it is within supported 8 ESXi hosts limit.
Test step 6: The last test is the vMotion of App2-VM2 from vSphere Cluster (ESXi08) to standalone ESXi host outside of the vSphere cluster (ESX01). Such operation works because App2 Cluster instance is still stretched across two ESXi hosts only so it is within supported 8 ESXi hosts limit. vSphere cluster is not the boundary for multi-write VMDK mode.
FAQ
Q: What exactly does the limitation of 8 ESXi hosts mean?
A: 8 ESXi host limit implies how many ESXi hosts can simultaneously open the same virtual disk (aka VMDK file). If the cluster-aware application is not going to have more than 8 nodes, it works and it is supported. Details and various scenarios are described in this article.
Q: Where is the information about the locks from ESXi hosts stored?
A: The normal VMFS file locking mechanism is in use, so there are VMFS file locks, which can be displayed with the ESXi command: vmkfstools -D
The only difference is that multi-write VMDKs can have multiple locks as is shown in the screenshot below.
Q: Is it supported to use DRS rules for vmdk multi-write in case that is more than 8 ESXi hosts in the cluster where VMs with configured multi-write vmdks are running?
A: Yes. It is supported. DRS rules can be beneficial to keep all nodes of the particular App Cluster Instance on specified ESXi hosts. This is not necessary nor required from the technical point of view, but it can be beneficial from a licensing point of view.
Q: How can the ESXi lifecycle be handled with the limit of 8 ESXi hosts?
A: Let’s discuss specific VM operations and their supportability with the multi-writer VMDK configuration. The source for these answers is VMware KB https://kb.vmware.com/kb/1034165:
Power on, off, restart virtual machine – supported
Suspend VM – unsupported
Hot add virtual disks – supported, but only to existing adapters
Hot remove devices – supported
Hot extend virtual disk – unsupported
Connect and disconnect devices – supported
Snapshots – unsupported
Snapshots of VMs with independent-persistent disks – supported
Q: Is it possible to migrate VMs with multi-writer VMDKs to a different cluster while they are offline?
A: Yes. A VM can be Shut Down or Powered Off and then Powered On on any ESXi host outside of the vSphere cluster. The only requirement is that the same VMFS datastore is available on both the source and the target ESXi host. Keep in mind that the maximum supported number of ESXi hosts connected to a single VMFS datastore is 64.
VT-x is the name of Intel’s CPU virtualisation technology. KVM is a component of the Linux kernel that makes use of VT-x. And QEMU is a user-space application that allows users to create virtual machines; QEMU makes use of KVM to achieve efficient virtualisation. In this article we will talk about how these three technologies work together. Don’t expect an in-depth exposition of all aspects here; in the future I might follow this up with more focused posts about specific parts.
Something About Virtualisation First
Let’s first touch upon some theory before going into the main discussion. Related to virtualisation is the concept of emulation – in simple words, faking the hardware. When you use QEMU or VMware to create a virtual machine with an ARM processor while your host machine has an x86 processor, QEMU or VMware emulates, or fakes, the ARM processor. When we talk about virtualisation here, we mean hardware-assisted virtualisation, where the VM’s processor architecture matches the host computer’s processor. Often conflated with virtualisation is the distinct concept of containerisation. Containerisation is mostly a software concept; it builds on top of operating-system abstractions like process identifiers, file systems, and memory consumption limits. We won’t discuss containers any further in this post.
A typical VM setup looks like this:
At the lowest level is hardware that supports virtualisation. Above it sits the hypervisor, or virtual machine monitor (VMM). In the case of KVM, this is the Linux kernel with the KVM modules loaded into it; in other words, KVM is a set of kernel modules that, when loaded into the Linux kernel, turn the kernel into a hypervisor. Above the hypervisor, in user space, sit the virtualisation applications that end users directly interact with – QEMU, VMware, and so on. These applications create virtual machines that run their own operating systems, with cooperation from the hypervisor.
Finally, there is the “full” vs. “para” virtualisation dichotomy. Full virtualisation is when the OS running inside a VM is exactly the same as the one that would run on real hardware. Paravirtualisation is when the OS inside the VM is aware that it is being virtualised and therefore runs in a slightly modified way compared to how it would run on real hardware.
VT-x
VT-x is Intel’s CPU virtualisation technology for the Intel 64 and IA-32 architectures. For Intel’s Itanium, there is VT-i. For I/O virtualisation there is VT-d. AMD also has its own virtualisation technology, called AMD-V. We will only concern ourselves with VT-x.
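As a quick aside, from user space you can at least check whether the CPU advertises VT-x, because CPUID leaf 1 reports VMX support in ECX bit 5. The sketch below assumes GCC or Clang and their `<cpuid.h>` helper `__get_cpuid`; note that firmware can still disable VMX even when the bit is set, so a positive result means “possible”, not “guaranteed”.

```c
/* Check whether the CPU advertises VT-x: CPUID leaf 1 reports VMX support
 * in ECX bit 5 (per the Intel SDM). Firmware may still have VMX disabled
 * even when this bit is set. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;                        /* CPUID leaf 1 not available */

    int has_vmx = (ecx >> 5) & 1;        /* CPUID.01H:ECX.VMX[bit 5] */
    printf("VT-x (VMX) is %s by this CPU\n",
           has_vmx ? "advertised" : "not advertised");
    return 0;
}
```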
Under VT-x, a CPU operates in one of two modes: root and non-root. These modes are orthogonal to real, protected, long, and the other operating modes, and also orthogonal to the privilege rings (0-3); they form a new “plane”, so to speak. The hypervisor runs in root mode and VMs run in non-root mode. In non-root mode, CPU-bound code mostly executes the same way it would in root mode, which means that a VM’s CPU-bound operations run mostly at native speed. However, the VM doesn’t have full freedom.
Privileged instructions form a subset of all available instructions on a CPU: instructions that can only be executed when the CPU is in a higher-privileged state, e.g. current privilege level (CPL) 0 (where CPL 3 is the least privileged). A subset of these privileged instructions can be called “global state-changing” instructions – those that affect the overall state of the CPU. Examples are instructions that modify the clock or interrupt registers, or that write to control registers in a way that would change the operation of root mode. This smaller subset of sensitive instructions is what non-root mode cannot execute.
VMX and VMCS
Virtual Machine Extensions (VMX) are instructions that were added to facilitate VT-x. Let’s look at some of them to gain a better understanding of how VT-x works.
VMXON: Before this instruction is executed, there is no concept of root vs non-root modes. The CPU operates as if there was no virtualisation. VMXON must be executed in order to enter virtualisation. Immediately after VMXON, the CPU is in root mode.
VMXOFF: Converse of VMXON, VMXOFF exits virtualisation.
VMLAUNCH: Creates an instance of a VM and enters non-root mode. We will explain what we mean by an “instance of a VM” in a short while, when covering the VMCS. For now, think of it as a particular VM created inside QEMU or VMware.
VMRESUME: Enters non-root mode for an existing VM instance.
When a VM attempts to execute an instruction that is prohibited in non-root mode, the CPU immediately switches to root mode in a trap-like way. This is called a VM exit.
Let’s synthesise the above information. The CPU starts in normal mode, executes VMXON to start virtualisation in root mode, and executes VMLAUNCH to create and enter non-root mode for a VM instance. The VM instance runs its own code as if running natively until it attempts something that is prohibited, which causes a VM exit and a switch to root mode. Recall that the software running in root mode is the hypervisor. The hypervisor deals with the reason for the VM exit and then executes VMRESUME to re-enter non-root mode for that VM instance, letting the VM resume its operation. This interaction between root and non-root mode is the essence of hardware virtualisation support.
Of course, the above description leaves some gaps. For example, how does the hypervisor know why a VM exit happened? And what makes one VM instance different from another? This is where the VMCS comes in. VMCS stands for Virtual Machine Control Structure. It is basically a 4 KiB region of physical memory that contains the information needed for the above process to work: the reason for a VM exit as well as information unique to each VM instance, so that when the CPU is in non-root mode, it is the VMCS that determines which VM instance it is running.
As you may know, in QEMU or VMware we can decide how many CPUs a particular VM will have. Each such CPU is called a virtual CPU, or vCPU. For each vCPU there is one VMCS; in other words, the VMCS stores information at vCPU granularity, not VM granularity. To read and write a particular VMCS, the VMREAD and VMWRITE instructions are used. They effectively require root mode, so only the hypervisor can modify a VMCS. A non-root VM can perform VMWRITE, but not to the actual VMCS – only to a “shadow” VMCS, something that doesn’t concern us here.
There are also instructions that operate on a whole VMCS rather than on individual fields. These are used when switching between vCPUs, where a vCPU could belong to any VM instance. VMPTRLD is used to load the address of a VMCS and VMPTRST is used to store that address to a specified memory location. There can be many VMCSs, but only one is marked as current and active at any point. VMPTRLD marks a particular VMCS as active; then, when VMRESUME is executed, the non-root mode VM uses that active VMCS to know which particular VM and vCPU it is executing as.
It’s worth noting that all of the VMX instructions above require CPL 0, so they can only be executed from inside the Linux kernel (or another OS kernel).
VMCS basically stores two types of information:
Context information, which contains things like the CPU register values to save and restore during transitions between root and non-root mode.
Control information, which determines the behaviour of the VM in non-root mode.
More specifically, the VMCS is divided into six parts.
The guest-state area stores the vCPU state on VM exit. On VMRESUME, the vCPU state is restored from here.
The host-state area stores the host CPU state on VMLAUNCH and VMRESUME. On VM exit, the host CPU state is restored from here.
The VM-execution control fields determine the behaviour of the VM in non-root mode. For example, the hypervisor can set a bit in a VM-execution control field such that whenever the VM attempts to execute the RDTSC instruction to read the timestamp counter, it exits back to the hypervisor.
The VM-exit control fields determine the behaviour of VM exits. For example, when a certain bit in the VM-exit controls is set, the debug register DR7 is saved on every VM exit.
The VM-entry control fields determine the behaviour of VM entries; they are the counterpart of the VM-exit control fields. A symmetric example: setting a certain bit in these fields causes the DR7 debug register to always be loaded on VM entry.
The VM-exit information fields tell the hypervisor why the exit happened and provide additional information about it.
There are other aspects of hardware virtualisation support that we will conveniently gloss over in this post. Virtual-to-physical address translation inside a VM is done using a VT-x feature called Extended Page Tables (EPT). The Translation Lookaside Buffer (TLB) caches virtual-to-physical mappings to save page-table lookups, and its semantics also change to accommodate virtual machines. The Advanced Programmable Interrupt Controller (APIC) on a real machine is responsible for managing interrupts; in a VM this too is virtualised, and there are virtual interrupts that can be controlled through one of the control fields in the VMCS. I/O is a major part of any machine’s operation; virtualising I/O is not covered by VT-x and is usually emulated in user space or accelerated by VT-d.
KVM
Kernel-based Virtual Machine (KVM) is a set of Linux kernel modules that, when loaded, turn the Linux kernel into a hypervisor. Linux continues its normal operation as an OS but also provides hypervisor facilities to user space. The KVM modules come in two types: the core module and machine-specific modules. kvm.ko is the core module and is always needed; depending on the host CPU, a machine-specific module such as kvm-intel.ko or kvm-amd.ko is needed as well. As you can guess, kvm-intel.ko uses the functionality we described in the VT-x section above: it is KVM that executes VMLAUNCH/VMRESUME, sets up the VMCS, deals with VM exits, and so on. Let’s also mention that AMD’s virtualisation technology, AMD-V, has its own instructions, called Secure Virtual Machine (SVM). Under `arch/x86/kvm/` you will find files named `svm.c` and `vmx.c`, containing the code that deals with the virtualisation facilities of AMD and Intel respectively.
KVM interacts with user space – in our case QEMU – in two ways: through the device file `/dev/kvm` and through memory-mapped pages. The memory-mapped pages are used for bulk data transfer between QEMU and KVM. More specifically, there are two memory-mapped pages per vCPU, used for high-volume data transfer between QEMU and the VM in the kernel.
`/dev/kvm` is the main API exposed by KVM. It supports a set of `ioctl`s that allow QEMU to manage VMs and interact with them. The lowest unit of virtualisation in KVM is the vCPU; everything builds on top of it. The `/dev/kvm` API is a three-level hierarchy:
System level: Calls to this API manipulate the global state of the whole KVM subsystem. Among other things, this level is used to create VMs.
VM level: Calls to this API deal with a specific VM; vCPUs are created through calls at this level.
vCPU level: This is the lowest-granularity API and deals with a specific vCPU. Since QEMU dedicates one thread to each vCPU (see the QEMU section below), calls to this API are made from the same thread that was used to create the vCPU.
After creating a vCPU, QEMU continues to interact with it using these ioctls and the memory-mapped pages, as sketched below.
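To make the three levels concrete, here is a minimal sketch using the standard KVM ioctls (`KVM_GET_API_VERSION`, `KVM_CREATE_VM`, `KVM_CREATE_VCPU`, `KVM_GET_VCPU_MMAP_SIZE`). Error handling is trimmed, and a real user such as QEMU would also set up guest memory and registers before ever running the vCPU.

```c
/* Minimal walk through the three-level /dev/kvm hierarchy: system-level
 * calls on the /dev/kvm fd, VM-level calls on the fd returned by
 * KVM_CREATE_VM, and vCPU-level calls on the fd returned by KVM_CREATE_VCPU. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* System level: global KVM state. */
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0 || ioctl(kvm, KVM_GET_API_VERSION, 0) != KVM_API_VERSION)
        return 1;

    /* VM level: create one virtual machine (returns a new fd). */
    int vm = ioctl(kvm, KVM_CREATE_VM, 0);

    /* vCPU level: create vCPU 0 inside that VM (returns another fd). */
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

    /* The memory-mapped struct kvm_run is how KVM reports VM exits
     * (exit reason, I/O details, ...) back to user space. */
    long mmap_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);

    printf("vCPU 0 created, kvm_run mapped at %p\n", (void *)run);

    munmap(run, mmap_size);
    close(vcpu);
    close(vm);
    close(kvm);
    return 0;
}
```

These three file descriptors map naturally onto QEMU’s process and thread structure described in the next section: the VM-level fd belongs to the VM’s process, while each vCPU fd is driven from its own thread.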
QEMU
Quick Emulator (QEMU) is the only user-space component we are considering in our VT-x/KVM/QEMU stack. With QEMU, one can run a virtual machine with an ARM or MIPS core on an Intel host. How is this possible? Basically, QEMU has two modes: emulator and virtualiser. As an emulator, it can fake the hardware, making itself look like, say, a MIPS machine to the software running inside its VM. It does that through binary translation: QEMU comes with the Tiny Code Generator (TCG), which can be thought of as a sort of high-level-language VM, like the JVM. It takes, for instance, MIPS code, converts it to an intermediate representation, and translates that for execution on the host hardware.
The other mode of QEMU – as a virtualiser – is what achieves the type of virtualisation we are discussing here. As a virtualiser, it gets help from KVM and talks to it using the ioctls described above.
QEMU creates one process for every VM, and one thread per vCPU. These are regular threads and get scheduled by the OS like any other thread; as they get run time, QEMU creates the impression of multiple CPUs for the software running inside its VM. Given QEMU’s roots in emulation, it can also emulate I/O, which is something KVM may not fully support – take the example of a VM with a particular serial port on a host that doesn’t have one. When software inside the VM performs such I/O, the VM exits to KVM. KVM looks at the exit reason and passes control to QEMU along with a pointer to information about the I/O request. QEMU emulates the I/O device for that request – thus fulfilling it for the software inside the VM – and passes control back to KVM, which executes a VMRESUME to let that VM proceed.
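As a rough sketch of the user-space side of that loop (not QEMU’s actual code), the snippet below repeatedly calls `KVM_RUN` and dispatches on the exit reason reported in the memory-mapped `struct kvm_run`. The `handle_port_io()` helper is a hypothetical stand-in for a device model, and the vCPU fd and `kvm_run` mapping are assumed to come from a setup like the `/dev/kvm` example above.

```c
/* Sketch of an exit-handling loop over KVM_RUN; vcpu_fd and run are assumed
 * to come from a setup like the earlier /dev/kvm example. */
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>

/* Hypothetical stand-in for a device model: a real one would read or fill
 * the data buffer inside the kvm_run mapping; here we only log the access. */
static void handle_port_io(struct kvm_run *run)
{
    uint8_t *data = (uint8_t *)run + run->io.data_offset;
    printf("port 0x%x, %s, %u byte(s), count %u, first byte 0x%02x\n",
           (unsigned)run->io.port,
           run->io.direction == KVM_EXIT_IO_OUT ? "out" : "in",
           (unsigned)run->io.size, run->io.count, (unsigned)data[0]);
}

/* Run the guest until it halts; every return from KVM_RUN is a VM exit
 * that KVM could not (or chose not to) handle inside the kernel. */
static int run_vcpu(int vcpu_fd, struct kvm_run *run)
{
    for (;;) {
        if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)   /* enter the guest */
            return -1;

        switch (run->exit_reason) {
        case KVM_EXIT_IO:                     /* guest touched an I/O port */
            handle_port_io(run);
            break;                            /* emulated; back into KVM_RUN */
        case KVM_EXIT_HLT:                    /* guest executed HLT */
            return 0;
        default:                              /* anything we don't handle */
            return -1;
        }
    }
}
```

KVM handles many exits entirely inside the kernel and only falls back to user space for things it cannot or should not emulate itself, such as device I/O, which keeps the common path fast.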
Finally, let us summarise the overall picture in a diagram: