Hypervisor From Scratch – Part 5: Setting up VMCS & Running Guest Code

Original text by Sinaei

Introduction

Hello and welcome back to the fifth part of the “Hypervisor From Scratch” tutorial series. Today we will be configuring our previously allocated Virtual Machine Control Structure (VMCS) and, at the end, we execute VMLAUNCH and enter our hardware-virtualized world! Before reading the rest of this part, make sure to read the previous parts, as they are heavily dependent on each other.

The full source code of this tutorial is available on GitHub :

[https://github.com/SinaKarvandi/Hypervisor-From-Scratch]

Most of this topic is derived from Chapter 24 (Virtual Machine Control Structures) and Chapter 26 (VM Entries) of the Intel 64 and IA-32 Architectures Software Developer’s Manual, Combined Volumes 3. Of course, you can read the manual for more details.

Table of contents

  • Introduction
  • Table of contents
  • VMX Instructions
    • VMPTRST
    • VMCLEAR
    • VMPTRLD
  • Enhancing VM State Structure
  • Preparing to launch VM
  • VMX Configurations
  • Saving a return point
  • Returning to the previous state
  • VMLAUNCH
  • VMX Controls
    • VM-Execution Controls
    • VM-entry Control Bits
    • VM-exit Control Bits
    • PIN-Based Execution Control
    • Interruptibility State
  • Configuring VMCS
    • Gathering Machine state for VMCS
    • Setting up VMCS
    • Checking VMCS Layout
  • VM-Exit Handler
    • Resume to next instruction
  • VMRESUME
  • Let’s Test it!
  • Conclusion
  • References

This part is highly inspired by Hypervisor For Beginners, and some of the methods are implemented exactly as in that project.

VMX Instructions

In part 3, we implemented the VMXOFF function; now let’s implement the other VMX instruction wrappers. I also made some changes to how the VMXON and VMPTRLD functions are called to make them more modular.

VMPTRST

VMPTRST stores the current-VMCS pointer into a specified memory address. The operand of this instruction is always 64 bits and it’s always a location in memory.

The following function is the implementation of VMPTRST:

UINT64 VMPTRST()
{
    PHYSICAL_ADDRESS vmcspa;
    vmcspa.QuadPart = 0;
    __vmx_vmptrst((unsigned __int64 *)&vmcspa);

    DbgPrint("[*] VMPTRST %llx\n", vmcspa);

    return 0;
}

VMCLEAR

This instruction applies to the VMCS whose VMCS region resides at the physical address contained in the instruction operand. The instruction ensures that VMCS data for that VMCS (some of which may currently be maintained on the processor) are copied to the VMCS region in memory. It also initializes some parts of the VMCS region (for example, it sets the launch state of that VMCS to clear).

BOOLEAN Clear_VMCS_State(IN PVirtualMachineState vmState) {

    // Clear the state of the VMCS to inactive
    int status = __vmx_vmclear(&vmState->VMCS_REGION);

    DbgPrint("[*] VMCS VMCLEAR Status is : %d\n", status);
    if (status)
    {
        // Otherwise terminate the VMX
        DbgPrint("[*] VMCS failed to clear with status %d\n", status);
        __vmx_off();
        return FALSE;
    }
    return TRUE;
}

VMPTRLD

It marks the current-VMCS pointer valid and loads it with the physical address in the instruction operand. The instruction fails if its operand is not properly aligned, sets unsupported physical-address bits, or is equal to the VMXON pointer. In addition, the instruction fails if the 32 bits in memory referenced by the operand do not match the VMCS revision identifier supported by this processor.

BOOLEAN Load_VMCS(IN PVirtualMachineState vmState) {

    int status = __vmx_vmptrld(&vmState->VMCS_REGION);
    if (status)
    {
        DbgPrint("[*] VMCS failed with status %d\n", status);
        return FALSE;
    }
    return TRUE;
}

In order to implement VMRESUME, you need to know about some VMCS fields, so the implementation of VMRESUME comes after VMLAUNCH (later in this topic).

Enhancing VM State Structure

As I told you in earlier parts, we need a structure to save the state of our virtual machine on each core separately. The following structure is used in the newest version of our hypervisor; each field will be described in the rest of this topic.

typedef struct _VirtualMachineState
{
    UINT64 VMXON_REGION;                    // VMXON region
    UINT64 VMCS_REGION;                     // VMCS region
    UINT64 EPTP;                            // Extended-Page-Table Pointer
    UINT64 VMM_Stack;                       // Stack for VMM in VM-Exit State
    UINT64 MSRBitMap;                       // MSRBitMap Virtual Address
    UINT64 MSRBitMapPhysical;               // MSRBitMap Physical Address
} VirtualMachineState, *PVirtualMachineState;

Note that it’s not the final _VirtualMachineState structure; we’ll enhance it in future parts.

Preparing to launch VM

In this part, we’re just testing our hypervisor from within our driver; in future parts we’ll add some user-mode interaction with the driver. Let’s start by modifying our DriverEntry, as it’s the first function that executes when our driver is loaded.

Below all the preparations from Part 2, we add the following lines to use our Part 4 (EPT) structures:

    // Initiating EPTP and VMX
    PEPTP EPTP = Initialize_EPTP();
    Initiate_VMX();

I added an export to a global variable called “VirtualGuestMemoryAddress” that holds the address of where our guest code starts.

Now let’s fill our allocated pages with \xf4, the opcode of the HLT instruction. I chose HLT because, with some special configuration (described below), it causes a VM-Exit and returns control to the host handler.
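For illustration, a minimal sketch of that fill, assuming VirtualGuestMemoryAddress points to the page(s) allocated during the EPT initialization in part 4 and that at least one 4 KB page is filled (adjust the size to whatever you actually allocated):

    // Hypothetical fill loop: write the HLT opcode (0xF4) over the guest page.
    for (SIZE_T i = 0; i < PAGE_SIZE; i++)
    {
        ((PUCHAR)VirtualGuestMemoryAddress)[i] = 0xF4;   // HLT
    }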

Let’s create a function which is responsible for running our virtual machine on a specific core.

void LaunchVM(int ProcessorID, PEPTP EPTP);

I set the ProcessorID to 0, so we’re in the 0th logical processor.

Keep in mind that every logical core has its own VMCS, and if you want your guest code to run on other logical processors, you have to configure each of them separately.

Now we should set the affinity to the specific logical processor using the Windows KeSetSystemAffinityThread function, and make sure to choose the specific core’s vmState, as each core has its own separate VMXON and VMCS regions.

    KAFFINITY kAffinityMask;
    kAffinityMask = ipow(2, ProcessorID);
    KeSetSystemAffinityThread(kAffinityMask);

    DbgPrint("[*]\t\tCurrent thread is executing in %d th logical processor.\n", ProcessorID);

    PAGED_CODE();
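The ipow helper used above is not shown in this excerpt; a minimal integer-power sketch like the following is assumed (here it just computes 2^ProcessorID for the affinity mask):

UINT64 ipow(UINT64 base, UINT64 exp)
{
    UINT64 result = 1;
    while (exp-- > 0)
    {
        result *= base;
    }
    return result;
}

Equivalently, you could simply use (KAFFINITY)1 << ProcessorID.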

Now we should allocate a dedicated stack so that every time a VM-Exit occurs, we can save the registers and call other host functions.

I prefer to allocate a separate location for the stack instead of using the driver’s current RSP, but you can use the current stack (RSP) too.

The following lines are for allocating and zeroing the stack of our VM-Exit handler.

    // Allocate stack for the VM Exit Handler.
    UINT64 VMM_STACK_VA = ExAllocatePoolWithTag(NonPagedPool, VMM_STACK_SIZE, POOLTAG);
    vmState[ProcessorID].VMM_Stack = VMM_STACK_VA;

    if (vmState[ProcessorID].VMM_Stack == NULL)
    {
        DbgPrint("[*] Error in allocating VMM Stack.\n");
        return;
    }
    RtlZeroMemory(vmState[ProcessorID].VMM_Stack, VMM_STACK_SIZE);

Same as above, we allocate a page for the MSR Bitmap and add it to vmState; I’ll describe it later in this topic.

    // Allocate memory for MSRBitMap
    vmState[ProcessorID].MSRBitMap = MmAllocateNonCachedMemory(PAGE_SIZE);   // should be aligned
    if (vmState[ProcessorID].MSRBitMap == NULL)
    {
        DbgPrint("[*] Error in allocating MSRBitMap.\n");
        return;
    }
    RtlZeroMemory(vmState[ProcessorID].MSRBitMap, PAGE_SIZE);
    vmState[ProcessorID].MSRBitMapPhysical = VirtualAddress_to_PhysicalAddress(vmState[ProcessorID].MSRBitMap);

Now it’s time to clear our VMCS state and load it as the current VMCS in the specific processor (in our case the 0th logical processor).

The Clear_VMCS_State and Load_VMCS are described above :

    // Clear the VMCS State
    if (!Clear_VMCS_State(&vmState[ProcessorID]))
    {
        goto ErrorReturn;
    }

    // Load VMCS (Set the Current VMCS)
    if (!Load_VMCS(&vmState[ProcessorID]))
    {
        goto ErrorReturn;
    }

Now it’s time to set up the VMCS. A detailed explanation of the VMCS setup is available later in this topic.

    DbgPrint("[*] Setting up VMCS.\n");
    Setup_VMCS(&vmState[ProcessorID], EPTP);

The last step is to execute VMLAUNCH, but we shouldn’t forget to save the current state of the stack (RSP & RBP), because during the execution of guest code, and after returning from a VM-Exit, we have to know the previous state and return to it. If you leave the driver with the wrong RSP & RBP, you’ll definitely see a BSOD.

    Save_VMXOFF_State();

Saving a return point

For Save_VMXOFF_State(), I declared two global variables called g_StackPointerForReturning and g_BasePointerForReturning. There is no need to save RIP, as the return address is always available on the stack. Just EXTERN them in the assembly file:

EXTERN g_StackPointerForReturning:QWORD
EXTERN g_BasePointerForReturning:QWORD
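On the C side, the two globals are assumed to be plain 64-bit variables, for example:

// Assumed C-side definitions matching the EXTERN declarations above.
UINT64 g_StackPointerForReturning;
UINT64 g_BasePointerForReturning;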

The implementation of Save_VMXOFF_State :

Save_VMXOFF_State PROC PUBLIC
MOV g_StackPointerForReturning,rsp
MOV g_BasePointerForReturning,rbp
ret

Save_VMXOFF_State ENDP

Returning to the previous state

As we saved the current state, if we want to return to the previous state, we have to restore RSP & RBP, clean up the stack, and eventually execute a RET instruction. (I also added a VMXOFF because it should be executed before returning.)

Restore_To_VMXOFF_State PROC PUBLIC

VMXOFF  ; turn it off before exiting

MOV rsp, g_StackPointerForReturning
MOV rbp, g_BasePointerForReturning

; make rsp point to a correct return point
ADD rsp,8

; return True
xor rax,rax
mov rax,1

; return section

mov     rbx, [rsp+28h+8h]
mov     rsi, [rsp+28h+10h]
add     rsp, 020h
pop     rdi

ret

Restore_To_VMXOFF_State ENDP

The “return section” is written like this because I looked at the return section of LaunchVM in IDA Pro.

LaunchVM Return Frame
😉

One important thing that can’t be easily ignored from the above picture is that I have such a gorgeous, magnificent & super beautiful IDA PRO theme. I’m always proud of myself for choosing themes like this!

VMLAUNCH

Now it’s time to execute VMLAUNCH.

    __vmx_vmlaunch();

    // if VMLAUNCH succeeds we will never be here !
    ULONG64 ErrorCode = 0;
    __vmx_vmread(VM_INSTRUCTION_ERROR, &ErrorCode);
    __vmx_off();
    DbgPrint("[*] VMLAUNCH Error : 0x%llx\n", ErrorCode);
    DbgBreakPoint();

As the comment describes, if VMLAUNCH succeeds, we’ll never execute the lines after it. If there is an error in the state of the VMCS (which is a common problem), then we have to run VMREAD to read the error code from the VM_INSTRUCTION_ERROR field of the VMCS, execute VMXOFF, and print the error. DbgBreakPoint is just a debug breakpoint (int 3) and is only useful if you’re working with a remote kernel WinDbg debugger. It’s clear that you can’t test this on your own system, because executing a 0xCC in the kernel will freeze your system as long as there is no debugger to catch it, so it’s highly recommended to set up a remote kernel-debugging machine and test your code there.

Also, it can’t be tested under remote VMware debugging (or other virtual-machine debugging tools) because nested VMX is not supported in current Intel processors.

Remember, we’re still in the LaunchVM function; __vmx_vmlaunch() is the intrinsic for VMLAUNCH and __vmx_vmread is the intrinsic for the VMREAD instruction.

Now it’s time to cover some theory before configuring the VMCS.

VMX Controls

VM-Execution Controls

In order to control our guest features, we have to set some fields in our VMCS. The following tables represent the Primary Processor-Based VM-Execution Controls and Secondary Processor-Based VM-Execution Controls.

Primary-Processor-Based-VM-Execution-Controls

We define the above table like this:

#define CPU_BASED_VIRTUAL_INTR_PENDING        0x00000004
#define CPU_BASED_USE_TSC_OFFSETING           0x00000008
#define CPU_BASED_HLT_EXITING                 0x00000080
#define CPU_BASED_INVLPG_EXITING              0x00000200
#define CPU_BASED_MWAIT_EXITING               0x00000400
#define CPU_BASED_RDPMC_EXITING               0x00000800
#define CPU_BASED_RDTSC_EXITING               0x00001000
#define CPU_BASED_CR3_LOAD_EXITING            0x00008000
#define CPU_BASED_CR3_STORE_EXITING           0x00010000
#define CPU_BASED_CR8_LOAD_EXITING            0x00080000
#define CPU_BASED_CR8_STORE_EXITING           0x00100000
#define CPU_BASED_TPR_SHADOW                  0x00200000
#define CPU_BASED_VIRTUAL_NMI_PENDING         0x00400000
#define CPU_BASED_MOV_DR_EXITING              0x00800000
#define CPU_BASED_UNCOND_IO_EXITING           0x01000000
#define CPU_BASED_ACTIVATE_IO_BITMAP          0x02000000
#define CPU_BASED_MONITOR_TRAP_FLAG           0x08000000
#define CPU_BASED_ACTIVATE_MSR_BITMAP         0x10000000
#define CPU_BASED_MONITOR_EXITING             0x20000000
#define CPU_BASED_PAUSE_EXITING               0x40000000
#define CPU_BASED_ACTIVATE_SECONDARY_CONTROLS 0x80000000

In earlier versions of VMX, there was nothing like the Secondary Processor-Based VM-Execution Controls. Now, if you want to use the secondary table, you have to set the 31st bit of the primary table; otherwise, it behaves as if all secondary control bits were zero.

Secondary-Processor-Based-VM-Execution-Controls

The definition of the above table is this (we ignore some bits, you can define them if you want to use them in your hypervisor):

#define CPU_BASED_CTL2_ENABLE_EPT            0x2
#define CPU_BASED_CTL2_RDTSCP                0x8
#define CPU_BASED_CTL2_ENABLE_VPID           0x20
#define CPU_BASED_CTL2_UNRESTRICTED_GUEST    0x80
#define CPU_BASED_CTL2_ENABLE_VMFUNC         0x2000

VM-entry Control Bits

The VM-entry controls constitute a 32-bit vector that governs the basic operation of VM entries.

VM-Entry-Controls
// VM-entry Control Bits
#define VM_ENTRY_IA32E_MODE             0x00000200
#define VM_ENTRY_SMM                    0x00000400
#define VM_ENTRY_DEACT_DUAL_MONITOR     0x00000800
#define VM_ENTRY_LOAD_GUEST_PAT         0x00004000

VM-exit Control Bits

The VM-exit controls constitute a 32-bit vector that governs the basic operation of VM exits.

VM-Exit-Controls
// VM-exit Control Bits
#define VM_EXIT_IA32E_MODE              0x00000200
#define VM_EXIT_ACK_INTR_ON_EXIT        0x00008000
#define VM_EXIT_SAVE_GUEST_PAT          0x00040000
#define VM_EXIT_LOAD_HOST_PAT           0x00080000

PIN-Based Execution Control

The pin-based VM-execution controls constitute a 32-bit vector that governs the handling of asynchronous events (for example, interrupts). We’ll use it in future parts, but for now, let’s define it in our hypervisor.

Pin-Based-VM-Execution-Controls
// PIN-Based Execution
#define PIN_BASED_VM_EXECUTION_CONTROLS_EXTERNAL_INTERRUPT          0x00000001
#define PIN_BASED_VM_EXECUTION_CONTROLS_NMI_EXITING                 0x00000004
#define PIN_BASED_VM_EXECUTION_CONTROLS_VIRTUAL_NMI                 0x00000010
#define PIN_BASED_VM_EXECUTION_CONTROLS_ACTIVE_VMX_TIMER            0x00000020
#define PIN_BASED_VM_EXECUTION_CONTROLS_PROCESS_POSTED_INTERRUPTS   0x00000040

Interruptibility State

The guest-state area includes the following fields that characterize guest state but which do not correspond to processor registers:
Activity state (32 bits). This field identifies the logical processor’s activity state. When a logical processor is executing instructions normally, it is in the active state. Execution of certain instructions and the occurrence of certain events may cause a logical processor to transition to an inactive state in which it ceases to execute instructions.
The following activity states are defined:
— 0: Active. The logical processor is executing instructions normally.

— 1: HLT. The logical processor is inactive because it executed the HLT instruction.
— 2: Shutdown. The logical processor is inactive because it incurred a triple fault or some other serious error.
— 3: Wait-for-SIPI. The logical processor is inactive because it is waiting for a startup-IPI (SIPI).

• Interruptibility state (32 bits). The IA-32 architecture includes features that permit certain events to be blocked for a period of time. This field contains information about such blocking. Details and the format of this field are given in Table below.

Interruptibility-State
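Since that table is an image, here is a hedged summary of the low bits of the interruptibility-state field (per the SDM); these defines are only illustrative and are not used elsewhere in this part:

#define GUEST_INTR_STATE_STI       0x00000001   // bit 0: blocking by STI
#define GUEST_INTR_STATE_MOV_SS    0x00000002   // bit 1: blocking by MOV SS / POP SS
#define GUEST_INTR_STATE_SMI       0x00000004   // bit 2: blocking by SMI
#define GUEST_INTR_STATE_NMI       0x00000008   // bit 3: blocking by NMI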

Configuring VMCS

Gathering Machine state for VMCS

In order to configure our Guest-State & Host-State, we need details about the current system state, e.g., the Global Descriptor Table address, the Interrupt Descriptor Table address, and the values of all the segment registers.

The following functions show how all of this data can be gathered.

GDT Base :

Get_GDT_Base PROC
    LOCAL   gdtr[10]:BYTE
    sgdt    gdtr
    mov     rax, QWORD PTR gdtr[2]
    ret
Get_GDT_Base ENDP

CS segment register:

GetCs PROC
    mov     rax, cs
    ret
GetCs ENDP

DS segment register:

GetDs PROC
    mov     rax, ds
    ret
GetDs ENDP

ES segment register:

GetEs PROC
    mov     rax, es
    ret
GetEs ENDP

SS segment register:

GetSs PROC
    mov     rax, ss
    ret
GetSs ENDP

FS segment register:

GetFs PROC
    mov     rax, fs
    ret
GetFs ENDP

GS segment register:

GetGs PROC
    mov     rax, gs
    ret
GetGs ENDP

LDT:

GetLdtr PROC
    sldt    rax
    ret
GetLdtr ENDP

TR (task register):

GetTr PROC
    str     rax
    ret
GetTr ENDP

Interrupt Descriptor Table:

Get_IDT_Base PROC
    LOCAL   idtr[10]:BYTE
    sidt    idtr
    mov     rax, QWORD PTR idtr[2]
    ret
Get_IDT_Base ENDP

GDT Limit:

Get_GDT_Limit PROC
    LOCAL   gdtr[10]:BYTE
    sgdt    gdtr
    mov     ax, WORD PTR gdtr[0]
    ret
Get_GDT_Limit ENDP

IDT Limit:

Get_IDT_Limit PROC
    LOCAL   idtr[10]:BYTE
    sidt    idtr
    mov     ax, WORD PTR idtr[0]
    ret
Get_IDT_Limit ENDP

RFLAGS:

Get_RFLAGS PROC
    pushfq
    pop     rax
    ret
Get_RFLAGS ENDP

Setting up VMCS

Let’s get down to business (We have a long way to go).

This section starts with defining a function called Setup_VMCS.

BOOLEAN Setup_VMCS(IN PVirtualMachineState vmState, IN PEPTP EPTP);

This function is responsible for configuring all of the options related to VMCS and of course the Guest & Host state.

This task needs a special instruction called VMWRITE.

VMWRITE, writes the contents of a primary source operand (register or memory) to a specified field in a VMCS. In VMX root operation, the instruction writes to the current VMCS. If executed in VMX non-root operation, the instruction writes to the VMCS referenced by the VMCS link pointer field in the current VMCS.

The VMCS field is specified by the VMCS-field encoding contained in the register secondary source operand. 

The following enum contains most of the VMCS fields needed for the VMWRITE & VMREAD instructions (newer processors add more fields).

enum VMCS_FIELDS {
    GUEST_ES_SELECTOR = 0x00000800,
    GUEST_CS_SELECTOR = 0x00000802,
    GUEST_SS_SELECTOR = 0x00000804,
    GUEST_DS_SELECTOR = 0x00000806,
    GUEST_FS_SELECTOR = 0x00000808,
    GUEST_GS_SELECTOR = 0x0000080a,
    GUEST_LDTR_SELECTOR = 0x0000080c,
    GUEST_TR_SELECTOR = 0x0000080e,
    HOST_ES_SELECTOR = 0x00000c00,
    HOST_CS_SELECTOR = 0x00000c02,
    HOST_SS_SELECTOR = 0x00000c04,
    HOST_DS_SELECTOR = 0x00000c06,
    HOST_FS_SELECTOR = 0x00000c08,
    HOST_GS_SELECTOR = 0x00000c0a,
    HOST_TR_SELECTOR = 0x00000c0c,
    IO_BITMAP_A = 0x00002000,
    IO_BITMAP_A_HIGH = 0x00002001,
    IO_BITMAP_B = 0x00002002,
    IO_BITMAP_B_HIGH = 0x00002003,
    MSR_BITMAP = 0x00002004,
    MSR_BITMAP_HIGH = 0x00002005,
    VM_EXIT_MSR_STORE_ADDR = 0x00002006,
    VM_EXIT_MSR_STORE_ADDR_HIGH = 0x00002007,
    VM_EXIT_MSR_LOAD_ADDR = 0x00002008,
    VM_EXIT_MSR_LOAD_ADDR_HIGH = 0x00002009,
    VM_ENTRY_MSR_LOAD_ADDR = 0x0000200a,
    VM_ENTRY_MSR_LOAD_ADDR_HIGH = 0x0000200b,
    TSC_OFFSET = 0x00002010,
    TSC_OFFSET_HIGH = 0x00002011,
    VIRTUAL_APIC_PAGE_ADDR = 0x00002012,
    VIRTUAL_APIC_PAGE_ADDR_HIGH = 0x00002013,
    VMFUNC_CONTROLS = 0x00002018,
    VMFUNC_CONTROLS_HIGH = 0x00002019,
    EPT_POINTER = 0x0000201A,
    EPT_POINTER_HIGH = 0x0000201B,
    EPTP_LIST = 0x00002024,
    EPTP_LIST_HIGH = 0x00002025,
    GUEST_PHYSICAL_ADDRESS = 0x2400,
    GUEST_PHYSICAL_ADDRESS_HIGH = 0x2401,
    VMCS_LINK_POINTER = 0x00002800,
    VMCS_LINK_POINTER_HIGH = 0x00002801,
    GUEST_IA32_DEBUGCTL = 0x00002802,
    GUEST_IA32_DEBUGCTL_HIGH = 0x00002803,
    PIN_BASED_VM_EXEC_CONTROL = 0x00004000,
    CPU_BASED_VM_EXEC_CONTROL = 0x00004002,
    EXCEPTION_BITMAP = 0x00004004,
    PAGE_FAULT_ERROR_CODE_MASK = 0x00004006,
    PAGE_FAULT_ERROR_CODE_MATCH = 0x00004008,
    CR3_TARGET_COUNT = 0x0000400a,
    VM_EXIT_CONTROLS = 0x0000400c,
    VM_EXIT_MSR_STORE_COUNT = 0x0000400e,
    VM_EXIT_MSR_LOAD_COUNT = 0x00004010,
    VM_ENTRY_CONTROLS = 0x00004012,
    VM_ENTRY_MSR_LOAD_COUNT = 0x00004014,
    VM_ENTRY_INTR_INFO_FIELD = 0x00004016,
    VM_ENTRY_EXCEPTION_ERROR_CODE = 0x00004018,
    VM_ENTRY_INSTRUCTION_LEN = 0x0000401a,
    TPR_THRESHOLD = 0x0000401c,
    SECONDARY_VM_EXEC_CONTROL = 0x0000401e,
    VM_INSTRUCTION_ERROR = 0x00004400,
    VM_EXIT_REASON = 0x00004402,
    VM_EXIT_INTR_INFO = 0x00004404,
    VM_EXIT_INTR_ERROR_CODE = 0x00004406,
    IDT_VECTORING_INFO_FIELD = 0x00004408,
    IDT_VECTORING_ERROR_CODE = 0x0000440a,
    VM_EXIT_INSTRUCTION_LEN = 0x0000440c,
    VMX_INSTRUCTION_INFO = 0x0000440e,
    GUEST_ES_LIMIT = 0x00004800,
    GUEST_CS_LIMIT = 0x00004802,
    GUEST_SS_LIMIT = 0x00004804,
    GUEST_DS_LIMIT = 0x00004806,
    GUEST_FS_LIMIT = 0x00004808,
    GUEST_GS_LIMIT = 0x0000480a,
    GUEST_LDTR_LIMIT = 0x0000480c,
    GUEST_TR_LIMIT = 0x0000480e,
    GUEST_GDTR_LIMIT = 0x00004810,
    GUEST_IDTR_LIMIT = 0x00004812,
    GUEST_ES_AR_BYTES = 0x00004814,
    GUEST_CS_AR_BYTES = 0x00004816,
    GUEST_SS_AR_BYTES = 0x00004818,
    GUEST_DS_AR_BYTES = 0x0000481a,
    GUEST_FS_AR_BYTES = 0x0000481c,
    GUEST_GS_AR_BYTES = 0x0000481e,
    GUEST_LDTR_AR_BYTES = 0x00004820,
    GUEST_TR_AR_BYTES = 0x00004822,
    GUEST_INTERRUPTIBILITY_INFO = 0x00004824,
    GUEST_ACTIVITY_STATE = 0x00004826,
    GUEST_SM_BASE = 0x00004828,
    GUEST_SYSENTER_CS = 0x0000482A,
    HOST_IA32_SYSENTER_CS = 0x00004c00,
    CR0_GUEST_HOST_MASK = 0x00006000,
    CR4_GUEST_HOST_MASK = 0x00006002,
    CR0_READ_SHADOW = 0x00006004,
    CR4_READ_SHADOW = 0x00006006,
    CR3_TARGET_VALUE0 = 0x00006008,
    CR3_TARGET_VALUE1 = 0x0000600a,
    CR3_TARGET_VALUE2 = 0x0000600c,
    CR3_TARGET_VALUE3 = 0x0000600e,
    EXIT_QUALIFICATION = 0x00006400,
    GUEST_LINEAR_ADDRESS = 0x0000640a,
    GUEST_CR0 = 0x00006800,
    GUEST_CR3 = 0x00006802,
    GUEST_CR4 = 0x00006804,
    GUEST_ES_BASE = 0x00006806,
    GUEST_CS_BASE = 0x00006808,
    GUEST_SS_BASE = 0x0000680a,
    GUEST_DS_BASE = 0x0000680c,
    GUEST_FS_BASE = 0x0000680e,
    GUEST_GS_BASE = 0x00006810,
    GUEST_LDTR_BASE = 0x00006812,
    GUEST_TR_BASE = 0x00006814,
    GUEST_GDTR_BASE = 0x00006816,
    GUEST_IDTR_BASE = 0x00006818,
    GUEST_DR7 = 0x0000681a,
    GUEST_RSP = 0x0000681c,
    GUEST_RIP = 0x0000681e,
    GUEST_RFLAGS = 0x00006820,
    GUEST_PENDING_DBG_EXCEPTIONS = 0x00006822,
    GUEST_SYSENTER_ESP = 0x00006824,
    GUEST_SYSENTER_EIP = 0x00006826,
    HOST_CR0 = 0x00006c00,
    HOST_CR3 = 0x00006c02,
    HOST_CR4 = 0x00006c04,
    HOST_FS_BASE = 0x00006c06,
    HOST_GS_BASE = 0x00006c08,
    HOST_TR_BASE = 0x00006c0a,
    HOST_GDTR_BASE = 0x00006c0c,
    HOST_IDTR_BASE = 0x00006c0e,
    HOST_IA32_SYSENTER_ESP = 0x00006c10,
    HOST_IA32_SYSENTER_EIP = 0x00006c12,
    HOST_RSP = 0x00006c14,
    HOST_RIP = 0x00006c16,
};

Ok, let’s continue with our configuration.

The next step is configuring host Segment Registers.

    __vmx_vmwrite(HOST_ES_SELECTOR, GetEs() & 0xF8);
    __vmx_vmwrite(HOST_CS_SELECTOR, GetCs() & 0xF8);
    __vmx_vmwrite(HOST_SS_SELECTOR, GetSs() & 0xF8);
    __vmx_vmwrite(HOST_DS_SELECTOR, GetDs() & 0xF8);
    __vmx_vmwrite(HOST_FS_SELECTOR, GetFs() & 0xF8);
    __vmx_vmwrite(HOST_GS_SELECTOR, GetGs() & 0xF8);
    __vmx_vmwrite(HOST_TR_SELECTOR, GetTr() & 0xF8);

Keep in mind that the fields starting with HOST_ describe the state the processor loads whenever a VM-Exit occurs, and the fields starting with GUEST_ describe the state the processor loads for the guest when VMLAUNCH is executed.

The purpose of & 0xF8 is that Intel requires the three least significant bits of the host selectors to be cleared; otherwise, VMLAUNCH fails with an Invalid Host State error.
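For illustration (this helper is mine, not from the project): a segment selector is laid out as bits 1:0 = RPL, bit 2 = Table Indicator (TI), bits 15:3 = descriptor index, so & 0xF8 simply forces RPL = 0 and TI = 0.

// Hypothetical helper showing why the low three bits are masked off.
USHORT MakeHostSelector(USHORT Selector)
{
    return Selector & 0xF8;   // equivalent to Selector & ~0x7 (clear RPL and TI)
}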

VMCS_LINK_POINTER should be 0xffffffffffffffff.

    // Setting the link pointer to the required value for 4KB VMCS.
    __vmx_vmwrite(VMCS_LINK_POINTER, ~0ULL);

In the rest of this topic, we intend to virtualize the current state of the machine, so most of the guest and host configurations are the same. In future parts, we’ll configure them for a separate guest layout.

Let’s configure GUEST_IA32_DEBUGCTL.

The IA32_DEBUGCTL MSR provides bit-field controls to enable debug trace interrupts, debug trace stores, trace message enabling, single-stepping on branches, last branch record (LBR) recording, and to control freezing of the LBR stack.

In short: LBR (Last Branch Record) is a mechanism in which the processor records the source and destination of recently taken branches in a set of registers.

We don’t use it, but let’s configure it from the current machine’s MSR_IA32_DEBUGCTL; as you can see, __readmsr is the intrinsic function for RDMSR.

    __vmx_vmwrite(GUEST_IA32_DEBUGCTL, __readmsr(MSR_IA32_DEBUGCTL) & 0xFFFFFFFF);
    __vmx_vmwrite(GUEST_IA32_DEBUGCTL_HIGH, __readmsr(MSR_IA32_DEBUGCTL) >> 32);

For configuring the TSC you would modify the following values; I don’t have a precise explanation of them here, so let’s leave them as zero.

Note that the fields we set to zero can effectively be ignored; if you don’t modify them, it’s as if you had written zero to them.

    /* Time-stamp counter offset */
    __vmx_vmwrite(TSC_OFFSET, 0);
    __vmx_vmwrite(TSC_OFFSET_HIGH, 0);

    __vmx_vmwrite(PAGE_FAULT_ERROR_CODE_MASK, 0);
    __vmx_vmwrite(PAGE_FAULT_ERROR_CODE_MATCH, 0);

    __vmx_vmwrite(VM_EXIT_MSR_STORE_COUNT, 0);
    __vmx_vmwrite(VM_EXIT_MSR_LOAD_COUNT, 0);

    __vmx_vmwrite(VM_ENTRY_MSR_LOAD_COUNT, 0);
    __vmx_vmwrite(VM_ENTRY_INTR_INFO_FIELD, 0);

This time, we’ll configure the segment registers and other GDT-derived state for our guest (the state loaded when entering the guest).

    GdtBase = Get_GDT_Base();

    FillGuestSelectorData((PVOID)GdtBase, ES, GetEs());
    FillGuestSelectorData((PVOID)GdtBase, CS, GetCs());
    FillGuestSelectorData((PVOID)GdtBase, SS, GetSs());
    FillGuestSelectorData((PVOID)GdtBase, DS, GetDs());
    FillGuestSelectorData((PVOID)GdtBase, FS, GetFs());
    FillGuestSelectorData((PVOID)GdtBase, GS, GetGs());
    FillGuestSelectorData((PVOID)GdtBase, LDTR, GetLdtr());
    FillGuestSelectorData((PVOID)GdtBase, TR, GetTr());

Get_GDT_Base is defined above, in the process of gathering information for our VMCS.

FillGuestSelectorData is responsible for setting the guest selector, attributes, limit, and base in the VMCS. It is implemented as below:

void FillGuestSelectorData(
    __in PVOID GdtBase,
    __in ULONG Segreg,
    __in USHORT Selector
)
{
    SEGMENT_SELECTOR SegmentSelector = { 0 };
    ULONG            uAccessRights;

    GetSegmentDescriptor(&SegmentSelector, Selector, GdtBase);
    uAccessRights = ((PUCHAR)&SegmentSelector.ATTRIBUTES)[0] + (((PUCHAR)&SegmentSelector.ATTRIBUTES)[1] << 12);

    if (!Selector)
        uAccessRights |= 0x10000;

    __vmx_vmwrite(GUEST_ES_SELECTOR + Segreg * 2, Selector);
    __vmx_vmwrite(GUEST_ES_LIMIT + Segreg * 2, SegmentSelector.LIMIT);
    __vmx_vmwrite(GUEST_ES_AR_BYTES + Segreg * 2, uAccessRights);
    __vmx_vmwrite(GUEST_ES_BASE + Segreg * 2, SegmentSelector.BASE);
}

The function body for GetSegmentDescriptor :

BOOLEAN GetSegmentDescriptor(IN PSEGMENT_SELECTOR SegmentSelector, IN USHORT Selector, IN PUCHAR GdtBase)
{
    PSEGMENT_DESCRIPTOR SegDesc;

    if (!SegmentSelector)
        return FALSE;

    if (Selector & 0x4) {
        return FALSE;
    }

    SegDesc = (PSEGMENT_DESCRIPTOR)((PUCHAR)GdtBase + (Selector & ~0x7));

    SegmentSelector->SEL = Selector;
    SegmentSelector->BASE = SegDesc->BASE0 | SegDesc->BASE1 << 16 | SegDesc->BASE2 << 24;
    SegmentSelector->LIMIT = SegDesc->LIMIT0 | (SegDesc->LIMIT1ATTR1 & 0xf) << 16;
    SegmentSelector->ATTRIBUTES.UCHARs = SegDesc->ATTR0 | (SegDesc->LIMIT1ATTR1 & 0xf0) << 4;

    if (!(SegDesc->ATTR0 & 0x10)) { // LA_ACCESSED
        ULONG64 tmp;
        // this is a TSS or callgate etc, save the base high part
        tmp = (*(PULONG64)((PUCHAR)SegDesc + 8));
        SegmentSelector->BASE = (SegmentSelector->BASE & 0xffffffff) | (tmp << 32);
    }

    if (SegmentSelector->ATTRIBUTES.Fields.G) {
        // 4096-byte granularity is enabled for this segment, scale the limit
        SegmentSelector->LIMIT = (SegmentSelector->LIMIT << 12) + 0xfff;
    }

    return TRUE;
}

Also, there is another MSR called IA32_KERNEL_GS_BASE that holds the kernel’s GS base. Whenever you execute an instruction like SYSCALL and enter ring 0, you need to change the current GS base, and that is done using SWAPGS. This instruction swaps the content of IA32_KERNEL_GS_BASE with IA32_GS_BASE, so the kernel GS base is in use; when you want to re-enter user mode, SWAPGS restores the user-mode GS base. MSR_FS_BASE, on the other hand, doesn’t have a kernel counterpart because it is mainly used for 32-bit compatibility mode while you have a 64-bit (long-mode) kernel.
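For reference, the MSR indices involved here are listed below. They are not defined elsewhere in this excerpt, so treat these defines as assumptions taken from the SDM:

#define MSR_FS_BASE            0xC0000100   // IA32_FS_BASE
#define MSR_GS_BASE            0xC0000101   // IA32_GS_BASE
#define MSR_KERNEL_GS_BASE     0xC0000102   // IA32_KERNEL_GS_BASE (swapped by SWAPGS)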

Next, we set GUEST_INTERRUPTIBILITY_INFO & GUEST_ACTIVITY_STATE.

    __vmx_vmwrite(GUEST_INTERRUPTIBILITY_INFO, 0);
    __vmx_vmwrite(GUEST_ACTIVITY_STATE, 0);   // Active state

Now we reach the most important part of our VMCS: the configuration of CPU_BASED_VM_EXEC_CONTROL and SECONDARY_VM_EXEC_CONTROL.

These fields enable and disable some important features of the guest; e.g., you can configure the VMCS to cause a VM-Exit whenever execution of the HLT instruction is detected (in the guest). Please check the VM-Execution Controls section above for a detailed description.

    __vmx_vmwrite(CPU_BASED_VM_EXEC_CONTROL, AdjustControls(CPU_BASED_HLT_EXITING | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS, MSR_IA32_VMX_PROCBASED_CTLS));
    __vmx_vmwrite(SECONDARY_VM_EXEC_CONTROL, AdjustControls(CPU_BASED_CTL2_RDTSCP /* | CPU_BASED_CTL2_ENABLE_EPT*/, MSR_IA32_VMX_PROCBASED_CTLS2));

As you can see, we set CPU_BASED_HLT_EXITING, which causes a VM-Exit on HLT, and we activate the secondary controls using CPU_BASED_ACTIVATE_SECONDARY_CONTROLS.

In the secondary controls, we use CPU_BASED_CTL2_RDTSCP and, for now, comment out CPU_BASED_CTL2_ENABLE_EPT because we don’t need to deal with EPT in this part. In future parts, I’ll describe using the EPT (Extended Page Table) that we configured in the 4th part.

The description of PIN_BASED_VM_EXEC_CONTROL, VM_EXIT_CONTROLS, and VM_ENTRY_CONTROLS is available above; for now, we zero the pin-based controls and set only the IA-32e mode bits (plus interrupt acknowledgement on exit).

    __vmx_vmwrite(PIN_BASED_VM_EXEC_CONTROL, AdjustControls(0, MSR_IA32_VMX_PINBASED_CTLS));
    __vmx_vmwrite(VM_EXIT_CONTROLS, AdjustControls(VM_EXIT_IA32E_MODE | VM_EXIT_ACK_INTR_ON_EXIT, MSR_IA32_VMX_EXIT_CTLS));
    __vmx_vmwrite(VM_ENTRY_CONTROLS, AdjustControls(VM_ENTRY_IA32E_MODE, MSR_IA32_VMX_ENTRY_CTLS));

Also, the AdjustControls is defined like this:

ULONG AdjustControls(IN ULONG Ctl, IN ULONG Msr)
{
    MSR MsrValue = { 0 };

    MsrValue.Content = __readmsr(Msr);
    Ctl &= MsrValue.High;     /* bit == 0 in high word ==> must be zero */
    Ctl |= MsrValue.Low;      /* bit == 1 in low word  ==> must be one  */
    return Ctl;
}
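The MSR union used by AdjustControls is not shown in this excerpt; a minimal sketch consistent with the .Content/.Low/.High usage above would be:

// Assumed layout: low 32 bits = bits that must be 1, high 32 bits = bits allowed to be 1.
typedef union _MSR
{
    struct
    {
        ULONG Low;    // bits set here must be 1 in the control
        ULONG High;   // bits clear here must be 0 in the control
    };
    ULONG64 Content;
} MSR, *PMSR;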

The next step is setting the control registers for guest and host; we set them to the same values using intrinsic functions.

    __vmx_vmwrite(GUEST_CR0, __readcr0());
    __vmx_vmwrite(GUEST_CR3, __readcr3());
    __vmx_vmwrite(GUEST_CR4, __readcr4());

    __vmx_vmwrite(GUEST_DR7, 0x400);

    __vmx_vmwrite(HOST_CR0, __readcr0());
    __vmx_vmwrite(HOST_CR3, __readcr3());
    __vmx_vmwrite(HOST_CR4, __readcr4());

The next part is setting up the IDT and GDT base and limit for our guest.

    __vmx_vmwrite(GUEST_GDTR_BASE, Get_GDT_Base());
    __vmx_vmwrite(GUEST_IDTR_BASE, Get_IDT_Base());
    __vmx_vmwrite(GUEST_GDTR_LIMIT, Get_GDT_Limit());
    __vmx_vmwrite(GUEST_IDTR_LIMIT, Get_IDT_Limit());

Set the RFLAGS.

    __vmx_vmwrite(GUEST_RFLAGS, Get_RFLAGS());

If you want to use SYSENTER in your guest, then you should configure the following MSRs. Setting these values isn’t important on x64 Windows, because x64 Windows doesn’t use SYSENTER; it uses SYSCALL instead, and 32-bit processes first switch the execution mode to long mode (using the Heaven’s Gate technique). On 32-bit processors, however, these fields are mandatory.

    __vmx_vmwrite(GUEST_SYSENTER_CS, __readmsr(MSR_IA32_SYSENTER_CS));
    __vmx_vmwrite(GUEST_SYSENTER_EIP, __readmsr(MSR_IA32_SYSENTER_EIP));
    __vmx_vmwrite(GUEST_SYSENTER_ESP, __readmsr(MSR_IA32_SYSENTER_ESP));
    __vmx_vmwrite(HOST_IA32_SYSENTER_CS, __readmsr(MSR_IA32_SYSENTER_CS));
    __vmx_vmwrite(HOST_IA32_SYSENTER_EIP, __readmsr(MSR_IA32_SYSENTER_EIP));
    __vmx_vmwrite(HOST_IA32_SYSENTER_ESP, __readmsr(MSR_IA32_SYSENTER_ESP));

Don’t forget to configure HOST_FS_BASE, HOST_GS_BASE, HOST_GDTR_BASE, HOST_IDTR_BASE, and HOST_TR_BASE.

    GetSegmentDescriptor(&SegmentSelector, GetTr(), (PUCHAR)Get_GDT_Base());
    __vmx_vmwrite(HOST_TR_BASE, SegmentSelector.BASE);

    __vmx_vmwrite(HOST_FS_BASE, __readmsr(MSR_FS_BASE));
    __vmx_vmwrite(HOST_GS_BASE, __readmsr(MSR_GS_BASE));

    __vmx_vmwrite(HOST_GDTR_BASE, Get_GDT_Base());
    __vmx_vmwrite(HOST_IDTR_BASE, Get_IDT_Base());

The next important part is to set the guest’s RIP and RSP (when VMLAUNCH executes, the guest starts at the RIP you configure here) and the host’s RIP and RSP (used when a VM-Exit occurs). It’s pretty clear that the host RIP should point to a function responsible for managing VMX events based on the exit reason, deciding whether to execute VMRESUME or turn off the hypervisor using VMXOFF.

    // left here just for test
    __vmx_vmwrite(GUEST_RSP, (ULONG64)VirtualGuestMemoryAddress);     //setup guest sp
    __vmx_vmwrite(GUEST_RIP, (ULONG64)VirtualGuestMemoryAddress);     //setup guest ip

    __vmx_vmwrite(HOST_RSP, ((ULONG64)vmState->VMM_Stack + VMM_STACK_SIZE - 1));
    __vmx_vmwrite(HOST_RIP, (ULONG64)VMExitHandler);

HOST_RSP points to the VMM_Stack we allocated above, and HOST_RIP points to VMExitHandler (an assembly-written function described below). GUEST_RIP points to VirtualGuestMemoryAddress (the global variable that we configured during EPT initialization), and GUEST_RSP is set to the same test address only because our guest code doesn’t execute any instruction that uses the stack; in a real-world example, it should point to a separate writable address.

Setting these fields to host addresses will not cause a problem as long as we use the same CR3 in our guest state, so all addresses are mapped exactly the same as in the host.

Done ! Our VMCS is almost ready.

Checking VMCS Layout

Unfortunately, checking the VMCS layout is not as straightforward as the other parts; you have to go through all the checklists described in [CHAPTER 26] VM ENTRIES of Intel’s 64 and IA-32 Architectures Software Developer’s Manual, including the following sections:

  • 26.2 CHECKS ON VMX CONTROLS AND HOST-STATE AREA
  • 26.3 CHECKING AND LOADING GUEST STATE 
  • 26.4 LOADING MSRS
  • 26.5 EVENT INJECTION
  • 26.6 SPECIAL FEATURES OF VM ENTRY
  • 26.7 VM-ENTRY FAILURES DURING OR AFTER LOADING GUEST STATE
  • 26.8 MACHINE-CHECK EVENTS DURING VM ENTRY

The hardest part of this process is when you have no idea which part of your VMCS layout is incorrect, or when you miss something that eventually causes the failure.

This is because Intel just gives an error number without any further details about what exactly is wrong in your VMCS layout.

The errors are shown below.

VM Errors

To solve this problem, I created a user-mode application called VmcsAuditor. As its name suggests, if you get an error and have no idea how to solve the problem, it can be an option.

Keep in mind that VmcsAuditor is a tool based on the Bochs emulator’s support for VMX, so all the checks come from Bochs. It’s not a 100% reliable tool that solves every problem, as we don’t know exactly what is happening inside the processor, but it can be really useful and a time saver.

The source code and executable files are available on GitHub :

[https://github.com/SinaKarvandi/VMCS-Auditor]

Further description available here.

VM-Exit Handler

When our guest software exits and gives control back to the host, the VM-exit reason is one of the following definitions.

#define EXIT_REASON_EXCEPTION_NMI       0
#define EXIT_REASON_EXTERNAL_INTERRUPT  1
#define EXIT_REASON_TRIPLE_FAULT        2
#define EXIT_REASON_INIT                3
#define EXIT_REASON_SIPI                4
#define EXIT_REASON_IO_SMI              5
#define EXIT_REASON_OTHER_SMI           6
#define EXIT_REASON_PENDING_VIRT_INTR   7
#define EXIT_REASON_PENDING_VIRT_NMI    8
#define EXIT_REASON_TASK_SWITCH         9
#define EXIT_REASON_CPUID               10
#define EXIT_REASON_GETSEC              11
#define EXIT_REASON_HLT                 12
#define EXIT_REASON_INVD                13
#define EXIT_REASON_INVLPG              14
#define EXIT_REASON_RDPMC               15
#define EXIT_REASON_RDTSC               16
#define EXIT_REASON_RSM                 17
#define EXIT_REASON_VMCALL              18
#define EXIT_REASON_VMCLEAR             19
#define EXIT_REASON_VMLAUNCH            20
#define EXIT_REASON_VMPTRLD             21
#define EXIT_REASON_VMPTRST             22
#define EXIT_REASON_VMREAD              23
#define EXIT_REASON_VMRESUME            24
#define EXIT_REASON_VMWRITE             25
#define EXIT_REASON_VMXOFF              26
#define EXIT_REASON_VMXON               27
#define EXIT_REASON_CR_ACCESS           28
#define EXIT_REASON_DR_ACCESS           29
#define EXIT_REASON_IO_INSTRUCTION      30
#define EXIT_REASON_MSR_READ            31
#define EXIT_REASON_MSR_WRITE           32
#define EXIT_REASON_INVALID_GUEST_STATE 33
#define EXIT_REASON_MSR_LOADING         34
#define EXIT_REASON_MWAIT_INSTRUCTION   36
#define EXIT_REASON_MONITOR_TRAP_FLAG   37
#define EXIT_REASON_MONITOR_INSTRUCTION 39
#define EXIT_REASON_PAUSE_INSTRUCTION   40
#define EXIT_REASON_MCE_DURING_VMENTRY  41
#define EXIT_REASON_TPR_BELOW_THRESHOLD 43
#define EXIT_REASON_APIC_ACCESS         44
#define EXIT_REASON_ACCESS_GDTR_OR_IDTR 46
#define EXIT_REASON_ACCESS_LDTR_OR_TR   47
#define EXIT_REASON_EPT_VIOLATION       48
#define EXIT_REASON_EPT_MISCONFIG       49
#define EXIT_REASON_INVEPT              50
#define EXIT_REASON_RDTSCP              51
#define EXIT_REASON_VMX_PREEMPTION_TIMER_EXPIRED     52
#define EXIT_REASON_INVVPID             53
#define EXIT_REASON_WBINVD              54
#define EXIT_REASON_XSETBV              55
#define EXIT_REASON_APIC_WRITE          56
#define EXIT_REASON_RDRAND              57
#define EXIT_REASON_INVPCID             58
#define EXIT_REASON_RDSEED              61
#define EXIT_REASON_PML_FULL            62
#define EXIT_REASON_XSAVES              63
#define EXIT_REASON_XRSTORS             64
#define EXIT_REASON_PCOMMIT             65

The VMX exit handler should be a pure assembly function, because calling a compiled function requires some preparation and register modification, and the most important thing in a VMX handler is saving the register state so that you can continue execution later.

I created a sample function that saves the registers and restores the state; inside it, we call another C function.

PUBLIC VMExitHandler

EXTERN MainVMExitHandler:PROC
EXTERN VM_Resumer:PROC

.code _text

VMExitHandler PROC

    push r15
    push r14
    push r13
    push r12
    push r11
    push r10
    push r9
    push r8
    push rdi
    push rsi
    push rbp
    push rbp ; rsp
    push rbx
    push rdx
    push rcx
    push rax

    mov rcx, rsp ; GuestRegs
    sub rsp, 28h

    ; rdtsc
    call MainVMExitHandler
    add rsp, 28h

    pop rax
    pop rcx
    pop rdx
    pop rbx
    pop rbp ; rsp
    pop rbp
    pop rsi
    pop rdi
    pop r8
    pop r9
    pop r10
    pop r11
    pop r12
    pop r13
    pop r14
    pop r15

    sub rsp, 0100h ; to avoid error in future functions
    JMP VM_Resumer

VMExitHandler ENDP

end
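The PGUEST_REGS type passed to MainVMExitHandler isn’t shown in this excerpt; it is assumed to mirror the push order above (RAX is pushed last, so it sits at the lowest address, which RCX points to). A sketch:

typedef struct _GUEST_REGS
{
    ULONG64 rax;   // pushed last, lowest address (pointed to by rcx)
    ULONG64 rcx;
    ULONG64 rdx;
    ULONG64 rbx;
    ULONG64 rsp;   // placeholder slot (the second "push rbp" above)
    ULONG64 rbp;
    ULONG64 rsi;
    ULONG64 rdi;
    ULONG64 r8;
    ULONG64 r9;
    ULONG64 r10;
    ULONG64 r11;
    ULONG64 r12;
    ULONG64 r13;
    ULONG64 r14;
    ULONG64 r15;   // pushed first, highest address
} GUEST_REGS, *PGUEST_REGS;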

The main VM-Exit handler is a switch-case function that makes different decisions based on the VMCS VM_EXIT_REASON and EXIT_QUALIFICATION.

In this part, we only act on EXIT_REASON_HLT: we print the result and restore the previous state.

From the following code, you can clearly see what event caused the VM-exit. Just keep in mind that some reasons only lead to a VM-Exit if the VMCS’s VM-execution control fields (described above) allow it. For instance, the execution of HLT in guest software causes a VM-Exit only if the 7th bit of the Primary Processor-Based VM-Execution Controls allows it.

VOID MainVMExitHandler(PGUEST_REGS GuestRegs)
{
    ULONG ExitReason = 0;
    __vmx_vmread(VM_EXIT_REASON, &ExitReason);

    ULONG ExitQualification = 0;
    __vmx_vmread(EXIT_QUALIFICATION, &ExitQualification);

    DbgPrint("\nVM_EXIT_REASON 0x%x\n", ExitReason & 0xffff);
    DbgPrint("\nEXIT_QUALIFICATION 0x%x\n", ExitQualification);

    switch (ExitReason)
    {
        //
        // 25.1.2  Instructions That Cause VM Exits Unconditionally
        // The following instructions cause VM exits when they are executed in VMX non-root operation: CPUID, GETSEC,
        // INVD, and XSETBV. This is also true of instructions introduced with VMX, which include: INVEPT, INVVPID,
        // VMCALL, VMCLEAR, VMLAUNCH, VMPTRLD, VMPTRST, VMRESUME, VMXOFF, and VMXON.
        //

        case EXIT_REASON_VMCLEAR:
        case EXIT_REASON_VMPTRLD:
        case EXIT_REASON_VMPTRST:
        case EXIT_REASON_VMREAD:
        case EXIT_REASON_VMRESUME:
        case EXIT_REASON_VMWRITE:
        case EXIT_REASON_VMXOFF:
        case EXIT_REASON_VMXON:
        case EXIT_REASON_VMLAUNCH:
        {
            break;
        }
        case EXIT_REASON_HLT:
        {
            DbgPrint("[*] Execution of HLT detected... \n");

            // DbgBreakPoint();

            // that's enough for now ;)
            Restore_To_VMXOFF_State();

            break;
        }
        case EXIT_REASON_EXCEPTION_NMI:
        {
            break;
        }
        case EXIT_REASON_CPUID:
        {
            break;
        }
        case EXIT_REASON_INVD:
        {
            break;
        }
        case EXIT_REASON_VMCALL:
        {
            break;
        }
        case EXIT_REASON_CR_ACCESS:
        {
            break;
        }
        case EXIT_REASON_MSR_READ:
        {
            break;
        }
        case EXIT_REASON_MSR_WRITE:
        {
            break;
        }
        case EXIT_REASON_EPT_VIOLATION:
        {
            break;
        }
        default:
        {
            // DbgBreakPoint();
            break;
        }
    }
}

Resume to next instruction

If a VM-Exit occurs (e.g., the guest executed a CPUID instruction), the guest RIP remains unchanged, and it’s up to you to advance the guest RIP or not. If you don’t handle this situation, then executing VMRESUME results in an infinite loop of CPUID and VMRESUME, because you never changed the RIP.

In order to solve this problem, you have to read a VMCS field called VM_EXIT_INSTRUCTION_LEN that stores the length of the instruction that caused the VM-Exit. So you first read the current guest RIP, second read VM_EXIT_INSTRUCTION_LEN, and third add the length to the guest RIP. Now your guest RIP points to the next instruction and you’re good to go.

The following function is for this purpose.

VOID ResumeToNextInstruction(VOID)
{
    PVOID ResumeRIP = NULL;
    PVOID CurrentRIP = NULL;
    ULONG ExitInstructionLength = 0;

    __vmx_vmread(GUEST_RIP, &CurrentRIP);
    __vmx_vmread(VM_EXIT_INSTRUCTION_LEN, &ExitInstructionLength);

    ResumeRIP = (PCHAR)CurrentRIP + ExitInstructionLength;

    __vmx_vmwrite(GUEST_RIP, (ULONG64)ResumeRIP);
}

VMRESUME

VMRESUME is like VMLAUNCH but it’s used in order to resume the Guest.

  • VMLAUNCH fails if the launch state of current VMCS is not “clear”. If the instruction is successful, it sets the launch state to “launched.”
  • VMRESUME fails if the launch state of the current VMCS is not “launched.”

So it’s clear that if you have executed VMLAUNCH before, you can’t use it again to resume the guest code; in this situation, VMRESUME is used instead.

The following code is the implementation of VMRESUME.

VOID VM_Resumer(VOID)
{
    __vmx_vmresume();

    // if VMRESUME succeeds we will never be here !

    ULONG64 ErrorCode = 0;
    __vmx_vmread(VM_INSTRUCTION_ERROR, &ErrorCode);
    __vmx_off();
    DbgPrint("[*] VMRESUME Error : 0x%llx\n", ErrorCode);

    // It's such a bad error because we don't know where to go !
    // prefer to break
    DbgBreakPoint();
}

Let’s Test it !

Well, we are done with the configuration and now it’s time to load our driver using OSR Driver Loader; as always, first you should disable driver signature enforcement, then load your driver.

As you can see from the above picture (in the launching VM area), first we set the current logical processor to 0, next we clear our VMCS state using the VMCLEAR instruction, then we set up our VMCS layout, and finally we execute the VMLAUNCH instruction.

Now our guest code executes, and since we configured our VMCS to exit on the execution of HLT (CPU_BASED_HLT_EXITING), the HLT is trapped and our VM-Exit handler function is called. It then calls the main VM-Exit handler, and as the VMCS exit reason is 0xc (EXIT_REASON_HLT), our VM-Exit handler detects the execution of HLT in the guest and captures it.

After that, our machine-state restoring mechanism executes, and we successfully turn off the hypervisor using VMXOFF and return to the original caller with a successful (RAX = 1) status.

That’s it ! Wasn’t it easy ?!

:)

Conclusion

In this part, we got familiar with configuring the Virtual Machine Control Structure and finally ran our guest code. Future parts will enhance this configuration with things like entering protected mode, interrupt injection, page-modification logging, virtualizing the current machine, and so on, so make sure to visit the blog more frequently for future parts, and if you have any question or problem, you can use the comments section below.

Thanks for reading!

References

[1] Vol 3C – Chapter 24 – (VIRTUAL MACHINE CONTROL STRUCTURES) (https://software.intel.com/en-us/articles/intel-sdm)

[2] Vol 3C – Chapter 26 – (VM ENTRIES) (https://software.intel.com/en-us/articles/intel-sdm)

[3] Segmentation (https://wiki.osdev.org/Segmentation)

[4] x86 memory segmentation (https://en.wikipedia.org/wiki/X86_memory_segmentation)

[5] VmcsAuditor – A Bochs-Based Hypervisor Layout Checker (https://rayanfam.com/topics/vmcsauditor-a-bochs-based-hypervisor-layout-checker/)

[6] Rohaaan/Hypervisor For Beginners (https://github.com/rohaaan/hypervisor-for-beginners)

[7] SWAPGS — Swap GS Base Register (https://www.felixcloutier.com/x86/SWAPGS.html)

[8] Knockin’ on Heaven’s Gate – Dynamic Processor Mode Switching (http://rce.co/knockin-on-heavens-gate-dynamic-processor-mode-switching/)


Hypervisor From Scratch – Part 4: Address Translation Using Extended Page Table (EPT)

Original text by Sinaei

Welcome to the fourth part of “Hypervisor From Scratch”. This part is primarily about translating guest addresses through the Extended Page Table (EPT) and its implementation. We’ll also see how shadow page tables work, plus other cool stuff.

First of all, make sure to read the earlier parts before reading this topic, as these parts are really dependent on each other. You should also have a basic understanding of the paging mechanism and how page tables work; a good article on page tables is available here.

Most of this topic is derived from Chapter 28 (VMX Support for Address Translation) of the Intel 64 and IA-32 Architectures Software Developer’s Manual, Combined Volumes 3.

The full source code of this tutorial is available on GitHub :

[https://github.com/SinaKarvandi/Hypervisor-From-Scratch]

Before starting, I should give my thanks to Petr Beneš, as this part would never be completed without his help.

Introduction 

Second Level Address Translation (SLAT), or nested paging, is an additional layer in the paging mechanism used to map the guest addresses of hardware-based virtualization into host physical memory.

AMD implemented SLAT through the Rapid Virtualization Indexing (RVI) technology, known as Nested Page Tables (NPT), since the introduction of its third-generation Opteron processors and the microarchitecture code-named Barcelona. Intel also implemented SLAT in Intel VT-x technologies since the introduction of the microarchitecture code-named Nehalem; it’s known as Extended Page Table (EPT) and is used in Core i9, Core i7, Core i5, and Core i3 processors.

ARM processors also have a similar implementation, known as Stage-2 page tables.

There are two methods: the first one is Shadow Page Tables and the second one is Extended Page Tables.

Software-assisted paging (Shadow Page Tables)

Shadow page tables are used by the hypervisor to keep track of the state of physical memory: the guest thinks it has direct access to physical memory, but in reality the hardware prevents it from accessing host physical memory directly; otherwise, the guest could take control of the host, which is not what is intended.

In this case, the VMM maintains shadow page tables that map guest-virtual pages directly to machine pages, and any guest modifications to the V->P tables are synced into the VMM’s V->M shadow page tables.

By the way, using shadow page tables is not recommended today, as they always lead to VMM traps (which result in a vast number of VM-Exits) and performance loss due to TLB flushes on every switch; another caveat is the memory overhead due to the shadow copies of the guest page tables.

Hardware-assisted paging (Extended Page Table)

Nothing Special :)

To reduce the complexity of shadow page tables, avoid excessive VM-exits, and reduce the number of TLB flushes, EPT was implemented as a hardware-assisted paging strategy to increase performance.

According to a VMware evaluation paper: “EPT provides performance gains of up to 48% for MMU-intensive benchmarks and up to 600% for MMU-intensive microbenchmarks”.

EPT adds one more page-table hierarchy to map guest-physical addresses to host-physical addresses that are valid in main memory.

In EPT,

  • One page table is maintained by guest OS, which is used to generate the guest-physical address.
  • The other page table is maintained by VMM, which is used to map guest physical address to host physical address.

So for each memory access operation, the MMU gets the guest-physical address from the guest page table and then automatically gets the host-physical address from the VMM’s EPT mapping tables.

Extended Page Table vs Shadow Page Table 

EPT:

  • Walk any requested address
    • Appropriate to programs that have a large amount of page table miss when executing
    • Less chance to exit VM (less context switch)
  • Two-layer EPT
    • Means each access needs to walk two tables
  • Easier to develop
    • Many particular registers
    • Hardware helps guest OS to notify the VMM

SPT:

  • Only walk when SPT entry miss
    • Appropriate to programs that would access only some addresses frequently
    • Every access might be intercepted by VMM (many traps)
  • One reference
    • Fast and convenient when page hit
  • Hard to develop
    • Two-layer structure
    • Complicated reverse map
    • Permission emulation

Detecting Support for EPT, NPT

If you want to see whether your system supports EPT on an Intel processor or NPT on an AMD processor without using assembly (CPUID), you can download coreinfo.exe from Sysinternals and run it. The last lines will show you whether your processor supports EPT or NPT.

EPT Translation

EPT defines a layer of address translation that augments the translation of linear addresses.

The extended page-table mechanism (EPT) is a feature that can be used to support the virtualization of physical memory. When EPT is in use, certain addresses that would normally be treated as physical addresses (and used to access memory) are instead treated as guest-physical addresses. Guest-physical addresses are translated by traversing a set of EPT paging structures to produce physical addresses that are used to access memory.

EPT is used when the “enable EPT” VM-execution control is 1. It translates the guest-physical addresses used in VMX non-root operation and those used by VM entry for event injection.

EPT translation is exactly like regular paging translation but with some minor differences. In paging, the processor translates Virtual Address to Physical Address while in EPT translation you want to translate a Guest Virtual Address to Host Physical Address.

If you’re familiar with paging, the 3rd control register (CR3) holds the base address of the PML4 table (in an x64 processor; more generally, it points to the root paging directory). With EPT, the guest is not aware of EPT translation, so it has a CR3 too, but this CR3 is used to convert a guest-virtual address into a guest-physical address. Whenever you obtain the target guest-physical address, it’s the EPT mechanism that treats this guest-physical address like a virtual address, and the EPTP plays the role of CR3.

Just think about the above sentence one more time!

So your target guest-physical address is divided into four 9-bit parts: the first 9 bits select the EPT PML4E (note that the PML4 base address is in the EPTP), the second 9 bits select the EPT PDPT entry (the base address of the PDPT comes from the EPT PML4E), the third 9 bits select the EPT PD entry (the base address of the PD comes from the EPT PDPTE), and the last 9 bits of the guest-physical address select an entry in the EPT PT (the base address of the PT comes from the EPT PDE). The EPT PT entry then points to the host-physical address of the corresponding page.
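As a quick illustration (this sketch is not part of the driver), here is how a guest-physical address could be split into the four 9-bit table indices and the page offset used in the EPT walk:

// Illustrative only: decompose a guest-physical address for a 4-level EPT walk.
UINT64 GuestPA    = 0x0000000123456789;
UINT64 Pml4Index  = (GuestPA >> 39) & 0x1FF;   // bits 47:39 -> EPT PML4 entry
UINT64 PdptIndex  = (GuestPA >> 30) & 0x1FF;   // bits 38:30 -> EPT PDPT entry
UINT64 PdIndex    = (GuestPA >> 21) & 0x1FF;   // bits 29:21 -> EPT PD entry
UINT64 PtIndex    = (GuestPA >> 12) & 0x1FF;   // bits 20:12 -> EPT PT entry
UINT64 PageOffset = GuestPA & 0xFFF;           // bits 11:0  -> offset in the 4 KB page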

EPT Translation

You might ask: since a simple virtual-to-physical address translation already involves accessing four physical addresses, what happens here?!

The answer is that the processor internally translates each table’s physical address one by one; that’s why paging and memory access in guest software are slower than regular address translation. The following picture illustrates the operations for translating a guest-virtual address to a host-physical address.

If you want to think about x86 EPT virtualization,  assume, for example, that CR4.PAE = CR4.PSE = 0. The translation of a 32-bit linear address then operates as follows:

  • Bits 31:22 of the linear address select an entry in the guest page directory located at the guest-physical address in CR3. The guest-physical address of the guest page-directory entry (PDE) is translated through EPT to determine the guest PDE’s physical address.
  • Bits 21:12 of the linear address select an entry in the guest page table located at the guest-physical address in the guest PDE. The guest-physical address of the guest page-table entry (PTE) is translated through EPT to determine the guest PTE’s physical address.
  • Bits 11:0 of the linear address is the offset in the page frame located at the guest-physical address in the guest PTE. The guest physical address determined by this offset is translated through EPT to determine the physical address to which the original linear address translates.

Note that PAE stands for Physical Address Extension which is a memory management feature for the x86 architecture that extends the address space and PSE stands for Page Size Extension that refers to a feature of x86 processors that allows for pages larger than the traditional 4 KiB size.

In addition to translating a guest-physical address to a host physical address, EPT specifies the privileges that software is allowed when accessing the address. Attempts at disallowed accesses are called EPT violations and cause VM-exits.

Keep in mind that an address is never translated through EPT when there is no access; that is, your guest-physical address is never used until there is an access (read or write) to that location in memory.

Implementing Extended Page Table (EPT)

Now that we know some basics, let’s implement what we’ve learned. Based on the Intel manual, we should write (VMWRITE) the EPTP, or Extended-Page-Table Pointer, to the VMCS. The EPTP structure is described below.

Extended-Page-Table Pointer

The above tables can be described using the following structure :

// See Table 24-8. Format of Extended-Page-Table Pointer
typedef union _EPTP {
    ULONG64 All;
    struct {
        UINT64 MemoryType : 3;            // bit 2:0 (0 = Uncacheable (UC) - 6 = Write-back (WB))
        UINT64 PageWalkLength : 3;        // bit 5:3 (This value is 1 less than the EPT page-walk length)
        UINT64 DirtyAndAceessEnabled : 1; // bit 6   (Setting this control to 1 enables accessed and dirty flags for EPT)
        UINT64 Reserved1 : 5;             // bit 11:7
        UINT64 PML4Address : 36;
        UINT64 Reserved2 : 16;
    }Fields;
}EPTP, *PEPTP;

Each entry in all EPT tables is 64 bits long. EPT PML4E, EPT PDPTE, and EPT PDE have the same layout, but EPT PTE has some minor differences.

An EPT entry is something like this :

EPT Entries

OK, now we should implement the tables, and the first table is the PML4. The following table shows the format of an EPT PML4 Entry (PML4E).

EPT PML4E

PML4E can be a structure like this :

// See Table 28-1.
typedef union _EPT_PML4E {
    ULONG64 All;
    struct {
        UINT64 Read : 1;               // bit 0
        UINT64 Write : 1;              // bit 1
        UINT64 Execute : 1;            // bit 2
        UINT64 Reserved1 : 5;          // bit 7:3 (Must be Zero)
        UINT64 Accessed : 1;           // bit 8
        UINT64 Ignored1 : 1;           // bit 9
        UINT64 ExecuteForUserMode : 1; // bit 10
        UINT64 Ignored2 : 1;           // bit 11
        UINT64 PhysicalAddress : 36;   // bit (N-1):12 or Page-Frame-Number
        UINT64 Reserved2 : 4;          // bit 51:N
        UINT64 Ignored3 : 12;          // bit 63:52
    }Fields;
}EPT_PML4E, *PEPT_PML4E;

Since we want 4-level paging, the second table is the EPT Page-Directory-Pointer Table (PDPT); the following picture illustrates the format of a PDPTE:

EPT PDPTE

PDPTE’s structure is like this :

1234567891011121314151617// See Table 28-3typedef union _EPT_PDPTE { ULONG64 All; struct { UINT64 Read : 1; // bit 0 UINT64 Write : 1; // bit 1 UINT64 Execute : 1; // bit 2 UINT64 Reserved1 : 5; // bit 7:3 (Must be Zero) UINT64 Accessed : 1; // bit 8 UINT64 Ignored1 : 1; // bit 9 UINT64 ExecuteForUserMode : 1; // bit 10 UINT64 Ignored2 : 1; // bit 11 UINT64 PhysicalAddress : 36; // bit (N-1):12 or Page-Frame-Number UINT64 Reserved2 : 4; // bit 51:N UINT64 Ignored3 : 12; // bit 63:52 }Fields;}EPT_PDPTE, *PEPT_PDPTE;

For the third table of paging we should implement an EPT Page-Directory Entry (PDE) as described below:

EPT PDE

PDE’s structure:

1234567891011121314151617// See Table 28-5typedef union _EPT_PDE { ULONG64 All; struct { UINT64 Read : 1; // bit 0 UINT64 Write : 1; // bit 1 UINT64 Execute : 1; // bit 2 UINT64 Reserved1 : 5; // bit 7:3 (Must be Zero) UINT64 Accessed : 1; // bit 8 UINT64 Ignored1 : 1; // bit 9 UINT64 ExecuteForUserMode : 1; // bit 10 UINT64 Ignored2 : 1; // bit 11 UINT64 PhysicalAddress : 36; // bit (N-1):12 or Page-Frame-Number UINT64 Reserved2 : 4; // bit 51:N UINT64 Ignored3 : 12; // bit 63:52 }Fields;}EPT_PDE, *PEPT_PDE;

The last table is the EPT Page Table, whose entries (PTEs) are described below.

EPT PTE

The PTE will be:

Note that, in addition to the fields of the entries above, a PTE has EPTMemoryType, IgnorePAT, DirtyFlag, and SuppressVE.

1234567891011121314151617181920// See Table 28-6typedef union _EPT_PTE { ULONG64 All; struct { UINT64 Read : 1; // bit 0 UINT64 Write : 1; // bit 1 UINT64 Execute : 1; // bit 2 UINT64 EPTMemoryType : 3; // bit 5:3 (EPT Memory type) UINT64 IgnorePAT : 1; // bit 6 UINT64 Ignored1 : 1; // bit 7 UINT64 AccessedFlag : 1; // bit 8 UINT64 DirtyFlag : 1; // bit 9 UINT64 ExecuteForUserMode : 1; // bit 10 UINT64 Ignored2 : 1; // bit 11 UINT64 PhysicalAddress : 36; // bit (N-1):12 or Page-Frame-Number UINT64 Reserved : 4; // bit 51:N UINT64 Ignored3 : 11; // bit 62:52 UINT64 SuppressVE : 1; // bit 63 }Fields;}EPT_PTE, *PEPT_PTE;

There are other ways to implement the page walk (2- or 3-level paging): if you set bit 7 of the PDPTE (maps a 1-GByte page) or bit 7 of the PDE (maps a 2-MByte page), the walk stops at that level instead of going through all 4 levels (which is what we do for the rest of this topic), but keep in mind that the corresponding entry formats are different. These formats are described in (Table 28-4. Format of an EPT Page-Directory Entry (PDE) that Maps a 2-MByte Page) and (Table 28-2. Format of an EPT Page-Directory-Pointer-Table Entry (PDPTE) that Maps a 1-GByte Page). Alex Ionescu's SimpleVisor is an example of implementing EPT in this way.
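As a rough sketch of that alternative, a PDE that maps a 2-MByte page could be declared like the union below; treat the exact bit positions as an assumption to double-check against Table 28-4 before relying on it:

// Sketch of an EPT PDE that maps a 2-MByte page (bit 7 must be 1).
// Verify the layout against Table 28-4 of the Intel SDM before using it.
typedef union _EPT_PDE_2MB {
    ULONG64 All;
    struct {
        UINT64 Read : 1;                // bit 0
        UINT64 Write : 1;               // bit 1
        UINT64 Execute : 1;             // bit 2
        UINT64 EPTMemoryType : 3;       // bit 5:3
        UINT64 IgnorePAT : 1;           // bit 6
        UINT64 LargePage : 1;           // bit 7 (1 = this entry maps a 2-MByte page)
        UINT64 Accessed : 1;            // bit 8
        UINT64 Dirty : 1;               // bit 9
        UINT64 ExecuteForUserMode : 1;  // bit 10
        UINT64 Ignored1 : 1;            // bit 11
        UINT64 Reserved1 : 9;           // bit 20:12 (must be zero)
        UINT64 PhysicalAddress : 27;    // bit (N-1):21, physical address of the 2-MByte page
        UINT64 Reserved2 : 4;           // bit 51:N
        UINT64 Ignored2 : 11;           // bit 62:52
        UINT64 SuppressVE : 1;          // bit 63
    } Fields;
} EPT_PDE_2MB, *PEPT_PDE_2MB;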

An important note: almost all the above structures have a 36-bit Physical Address field, which means our hypervisor supports only 4-level paging. This is because every page table (and every EPT page table) consists of 512 entries, so you need 9 bits to select an entry, and with 4 levels of tables we can't use more than 36 (4 * 9) bits of page-frame number. A method with a wider address range is not yet implemented in all major OSs like Windows or Linux. I'll describe the EPT PML5E briefly later in this topic, but we don't implement it in our hypervisor as it's not widely used yet.

By the way, N is the physical-address width supported by the processor. Executing CPUID with 80000008H in EAX returns the supported width in bits 7:0 of EAX.
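For example, a small helper like the following sketch (the function name is mine, not from the project) reads that width using the __cpuid intrinsic:

#include <intrin.h>

// Returns MAXPHYADDR (N), the physical-address width supported by the processor.
UINT32 Get_Physical_Address_Width(void)
{
    int cpuInfo[4] = { 0 };
    __cpuid(cpuInfo, 0x80000008);   // CPUID leaf 80000008H
    return cpuInfo[0] & 0xFF;       // EAX bits 7:0 = physical-address width
}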

Let’s see the rest of the code, the following code is the Initialize_EPTP function which is responsible for allocating and mapping EPTP.

Note that the PAGED_CODE() macro ensures that the calling thread is running at an IRQL that is low enough to permit paging.

1234UINT64 Initialize_EPTP(){ PAGED_CODE();        …

First of all, we allocate the EPTP and zero it.

1234567 // Allocate EPTP PEPTP EPTPointer = ExAllocatePoolWithTag(NonPagedPool, PAGE_SIZE, POOLTAG);  if (!EPTPointer) { return NULL; } RtlZeroMemory(EPTPointer, PAGE_SIZE);

Now, we need a blank page for our EPT PML4 Table.

1234567 // Allocate EPT PML4 PEPT_PML4E EPT_PML4 = ExAllocatePoolWithTag(NonPagedPool, PAGE_SIZE, POOLTAG); if (!EPT_PML4) { ExFreePoolWithTag(EPTPointer, POOLTAG); return NULL; } RtlZeroMemory(EPT_PML4, PAGE_SIZE);

And another empty page for PDPT.

12345678// Allocate EPT Page-Directory-Pointer-Table PEPT_PDPTE EPT_PDPT = ExAllocatePoolWithTag(NonPagedPool, PAGE_SIZE, POOLTAG); if (!EPT_PDPT) { ExFreePoolWithTag(EPT_PML4, POOLTAG); ExFreePoolWithTag(EPTPointer, POOLTAG); return NULL; } RtlZeroMemory(EPT_PDPT, PAGE_SIZE);

The same goes for the Page-Directory Table.

12345678910 // Allocate EPT Page-Directory PEPT_PDE EPT_PD = ExAllocatePoolWithTag(NonPagedPool, PAGE_SIZE, POOLTAG);  if (!EPT_PD) { ExFreePoolWithTag(EPT_PDPT, POOLTAG); ExFreePoolWithTag(EPT_PML4, POOLTAG); ExFreePoolWithTag(EPTPointer, POOLTAG); return NULL; } RtlZeroMemory(EPT_PD, PAGE_SIZE);

The last table is a blank page for EPT Page Table.

1234567891011 // Allocate EPT Page-Table PEPT_PTE EPT_PT = ExAllocatePoolWithTag(NonPagedPool, PAGE_SIZE, POOLTAG);  if (!EPT_PT) { ExFreePoolWithTag(EPT_PD, POOLTAG); ExFreePoolWithTag(EPT_PDPT, POOLTAG); ExFreePoolWithTag(EPT_PML4, POOLTAG); ExFreePoolWithTag(EPTPointer, POOLTAG); return NULL; } RtlZeroMemory(EPT_PT, PAGE_SIZE);

Now that we have all of our tables available, let's allocate two pages (2 * 4096) contiguously, because we need one page for our guest's RIP to start from and one page for its stack (RSP). After that, we need two EPT Page-Table Entries (PTEs) with execute, read, and write permissions. The physical address has to be divided by 4096 (PAGE_SIZE) because dividing a page-aligned address by 0x1000 simply drops the low 12 bits (which are zero); those 12 bits select a byte within the 4096-byte page.

By the way, we make the stack executable too. That's because, in a regular VM, we should set RWX on all EPT pages; it is the responsibility of the guest's internal page tables to set or clear the NX bit. We only change permissions from the EPT tables for special purposes (e.g., intercepting instruction fetches on a particular page). Such a change leads to an EPT violation, and in this way we can intercept these events.

We actually only need two pages, but because we also want to build page tables inside our guest software, we allocate up to 10 pages.

I’ll explain about intercepting pages from EPT, later in these series.

123456789101112131415161718192021 // Setup PT by allocating two pages Continuously // We allocate two pages because we need 1 page for our RIP to start and 1 page for RSP 1 + 1 and other paages for paging  const int PagesToAllocate = 10; UINT64 Guest_Memory = ExAllocatePoolWithTag(NonPagedPool, PagesToAllocate * PAGE_SIZE, POOLTAG); RtlZeroMemory(Guest_Memory, PagesToAllocate * PAGE_SIZE);  for (size_t i = 0; i < PagesToAllocate; i++) { EPT_PT[i].Fields.AccessedFlag = 0; EPT_PT[i].Fields.DirtyFlag = 0; EPT_PT[i].Fields.EPTMemoryType = 6; EPT_PT[i].Fields.Execute = 1; EPT_PT[i].Fields.ExecuteForUserMode = 0; EPT_PT[i].Fields.IgnorePAT = 0; EPT_PT[i].Fields.PhysicalAddress = (VirtualAddress_to_PhysicalAddress( Guest_Memory + ( i * PAGE_SIZE ))/ PAGE_SIZE ); EPT_PT[i].Fields.Read = 1; EPT_PT[i].Fields.SuppressVE = 0; EPT_PT[i].Fields.Write = 1;  }

Note: EPTMemoryType can be either 0 (uncacheable) or 6 (write-back); as we want our memory to be cacheable, we set it to 6.

The next table is PDE. PDE should point to PTE base address so we just put the address of the first entry from the EPT PTE as the physical address for Page Directory Entry.

123456789101112// Setting up PDE EPT_PD->Fields.Accessed = 0; EPT_PD->Fields.Execute = 1; EPT_PD->Fields.ExecuteForUserMode = 0; EPT_PD->Fields.Ignored1 = 0; EPT_PD->Fields.Ignored2 = 0; EPT_PD->Fields.Ignored3 = 0; EPT_PD->Fields.PhysicalAddress = (VirtualAddress_to_PhysicalAddress(EPT_PT) / PAGE_SIZE); EPT_PD->Fields.Read = 1; EPT_PD->Fields.Reserved1 = 0; EPT_PD->Fields.Reserved2 = 0; EPT_PD->Fields.Write = 1;

Next step is mapping PDPT. PDPT Entry should point to the first entry of Page-Directory.

123456789101112 // Setting up PDPTE EPT_PDPT->Fields.Accessed = 0; EPT_PDPT->Fields.Execute = 1; EPT_PDPT->Fields.ExecuteForUserMode = 0; EPT_PDPT->Fields.Ignored1 = 0; EPT_PDPT->Fields.Ignored2 = 0; EPT_PDPT->Fields.Ignored3 = 0; EPT_PDPT->Fields.PhysicalAddress = (VirtualAddress_to_PhysicalAddress(EPT_PD) / PAGE_SIZE); EPT_PDPT->Fields.Read = 1; EPT_PDPT->Fields.Reserved1 = 0; EPT_PDPT->Fields.Reserved2 = 0; EPT_PDPT->Fields.Write = 1;

The last step is configuring the PML4E, which points to the first entry of the PDPT.

123456789101112 // Setting up PML4E EPT_PML4->Fields.Accessed = 0; EPT_PML4->Fields.Execute = 1; EPT_PML4->Fields.ExecuteForUserMode = 0; EPT_PML4->Fields.Ignored1 = 0; EPT_PML4->Fields.Ignored2 = 0; EPT_PML4->Fields.Ignored3 = 0; EPT_PML4->Fields.PhysicalAddress = (VirtualAddress_to_PhysicalAddress(EPT_PDPT) / PAGE_SIZE); EPT_PML4->Fields.Read = 1; EPT_PML4->Fields.Reserved1 = 0; EPT_PML4->Fields.Reserved2 = 0; EPT_PML4->Fields.Write = 1;

We’ve almost done! Just set up the EPTP for our VMCS by putting 0x6 as the memory type (which is write-back) and we walk 4 times so the page walk length is 4-1=3 and PML4 address is the physical address of the first entry in the PML4 table.

I’ll explain about DirtyAndAcessEnabled field later in this topic.

1234567 // Setting up EPTP EPTPointer->Fields.DirtyAndAceessEnabled = 1; EPTPointer->Fields.MemoryType = 6; // 6 = Write-back (WB) EPTPointer->Fields.PageWalkLength = 3;  // 4 (tables walked) — 1 = 3 EPTPointer->Fields.PML4Address = (VirtualAddress_to_PhysicalAddress(EPT_PML4) / PAGE_SIZE); EPTPointer->Fields.Reserved1 = 0; EPTPointer->Fields.Reserved2 = 0;

and the last step.

12 DbgPrint(«[*] Extended Page Table Pointer allocated at %llx»,EPTPointer); return EPTPointer;

All the above page tables must be aligned to a 4-KByte boundary, but since we allocate at least PAGE_SIZE bytes (one PFN record) per table, the allocations are automatically 4-KByte aligned.
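If you want to be defensive about that assumption, a quick run-time check like this sketch (reusing the names from Initialize_EPTP) verifies it:

UINT64 PhysicalPML4 = VirtualAddress_to_PhysicalAddress(EPT_PML4);

// An EPT table must sit on a 4-KByte boundary, i.e. the low 12 bits of its physical address must be zero.
if (PhysicalPML4 & (PAGE_SIZE - 1))
{
    DbgPrint("[*] EPT PML4 is not 4-KByte aligned!");
    return NULL;
}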

Our implementation consist of 4 tables, therefore, the full layout is like this:

EPT Layout

Accessed and Dirty Flags in EPTP

In EPTP, you’ll decide whether enable accessed and dirty flags for EPT or not using the 6th bit of the extended-page-table pointer (EPTP). Setting this flag causes processor accesses to guest paging structure entries to be treated as writes.

For any EPT paging-structure entry that is used during guest-physical-address translation, bit 8 is the accessed flag. For an EPT paging-structure entry that maps a page (as opposed to referencing another EPT paging structure), bit 9 is the dirty flag.

Whenever the processor uses an EPT paging-structure entry as part of the guest-physical-address translation, it sets the accessed flag in that entry (if it is not already set).

Whenever there is a write to a guest-physical address, the processor sets the dirty flag (if it is not already set) in the EPT paging-structure entry that identifies the final physical address for the guest-physical address (either an EPT PTE or an EPT paging-structure entry in which bit 7 is 1).

These flags are “sticky,” meaning that, once set, the processor does not clear them; only software can clear them.
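As an example of how software might consume them, here is a minimal sketch (the helper name is hypothetical) that harvests and clears the dirty flags of our 10 guest pages using the EPT_PT table built above and the INVEPT_SINGLE_CONTEXT helper defined later in this part:

// Sketch: report which guest pages were written, clear the sticky flags,
// then invalidate the cached translations derived from our EPTP.
VOID Dump_And_Clear_Dirty_Flags(PEPT_PTE EPT_PT, EPTP EptPointer)
{
    for (size_t i = 0; i < 10; i++)
    {
        if (EPT_PT[i].Fields.DirtyFlag)
        {
            DbgPrint("[*] Guest page %d was written to.", (int)i);
            EPT_PT[i].Fields.DirtyFlag = 0;
            EPT_PT[i].Fields.AccessedFlag = 0;
        }
    }

    // Software cleared the flags, so the stale EPT translations must be flushed.
    INVEPT_SINGLE_CONTEXT(EptPointer);
}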

5-Level EPT Translation

Intel has added a new table to the translation hierarchy, called PML5, which extends EPT into a 5-level table; with it, guest operating systems can use up to 57-bit addresses, while the classic 4-level EPT is limited to translating 48-bit guest-physical addresses. None of the modern OSs used this feature at the time of writing.

PML5 applies to both EPT and the regular paging mechanism.

Translation begins by identifying a 4-KByte naturally aligned EPT PML5 table. It is located at the physical address specified in bits 51:12 of EPTP. An EPT PML5 table comprises 512 64-bit entries (EPT PML5Es). An EPT PML5E is selected using the physical address defined as follows.

  • Bits 63:52 are all 0.
  • Bits 51:12 are from EPTP.
  • Bits 11:3 are bits 56:48 of the guest-physical address.
  • Bits 2:0 are all 0.
  • Because an EPT PML5E is identified using bits 56:48 of the guest-physical address, it controls access to a 256-TByte region of the linear address space.

The only difference is you should put PML5 physical address instead of the PML4 address in EPTP.

For more information about 5-layer paging take a look at this Intel documentation.

Invalidating Cache (INVEPT)

Well, Intel's explanation of cache invalidation is really vague and I couldn't understand it completely, but I asked Petr and he explained it to me this way:

  • VMX-specific TLB-management instructions:
    • INVEPT – Invalidate cached Extended Page Table (EPT) mappings in the processor to synchronize address translation in virtual machines with memory-resident EPT pages.
    • INVVPID – Invalidate cached mappings of address translation based on the Virtual Processor ID (VPID).

Imagine we access guest-physical address 0x1000 and it gets translated to host-physical address 0x5000. The next time we access 0x1000, the CPU won't send the request to the memory bus but will use the cached translation instead, which is faster. Now let's say we change EPT_PDPT->PhysicalAddress to point to a different EPT PD, or change the attributes of one of the EPT tables; we now have to tell the processor that its cache is invalid, and that is exactly what INVEPT does.

Now we have two terms here, Single-Context and All-Context.

Single-Context means, that you invalidate all EPT-derived translations based on a single EPTP (in short: for single VM).

All-Context means that you invalidate all EPT-derived translations. (for every-VM).

So if you don't perform INVEPT after changing EPT structures, you risk the CPU reusing stale translations.

Basically, any change to EPT structure needs INVEPT but switching EPT (or VMCS) doesn’t need INVEPT because that translation will be “tagged” with the changed EPTP in the cache.

The following assembly function is responsible for INVEPT.

12345678910111213INVEPT_Instruction PROC PUBLIC        invept  rcx, oword ptr [rdx]        jz @jz        jc @jc        xor     rax, rax        ret @jz:    mov     rax, VMX_ERROR_CODE_FAILED_WITH_STATUS        ret @jc:    mov     rax, VMX_ERROR_CODE_FAILED        retINVEPT_Instruction ENDP

Note that VMX_ERROR_CODE_FAILED_WITH_STATUS and VMX_ERROR_CODE_FAILED are defined like this:

123    VMX_ERROR_CODE_SUCCESS              = 0    VMX_ERROR_CODE_FAILED_WITH_STATUS   = 1    VMX_ERROR_CODE_FAILED               = 2

Now, we implement INVEPT.

12345678910unsigned char INVEPT(UINT32 type, INVEPT_DESC* descriptor){ if (!descriptor) { static INVEPT_DESC zero_descriptor = { 0 }; descriptor = &zero_descriptor; }  return INVEPT_Instruction(type, descriptor);}

To invalidate all the contexts use the following function.

1234unsigned char INVEPT_ALL_CONTEXTS(){ return INVEPT(all_contexts ,NULL);}

And the last step is for Single-Context INVEPT which needs an EPTP.

12345unsigned char INVEPT_SINGLE_CONTEXT(EPTP ept_pointer){ INVEPT_DESC descriptor = { ept_pointer, 0 }; return INVEPT(single_context, &descriptor);}

Use the above functions after modifying the EPT structures to tell the processor to invalidate its cached translations.
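As a sketch (the helper name is hypothetical), revoking write access on one of our guest pages in order to intercept writes via EPT violations, and then flushing the stale translation, might look like this:

// Sketch: make the i-th guest page non-writable, then invalidate the
// translations derived from our EPTP so the change takes effect immediately.
VOID Make_Guest_Page_ReadOnly(PEPT_PTE EPT_PT, EPTP EptPointer, size_t i)
{
    EPT_PT[i].Fields.Write = 0;         // subsequent guest writes now cause EPT violations
    INVEPT_SINGLE_CONTEXT(EptPointer);  // drop cached translations tagged with this EPTP
}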

Conclusion 

In this part, we saw how to initialize the Extended Page Table and map guest-physical addresses to host-physical addresses; then we built the EPTP based on the allocated tables.

The next part will be about building the VMCS and implementing other VMX instructions. Don't forget to check the blog for future posts.

Have a good time!

References

[1] Vol 3C – 28.2 THE EXTENDED PAGE TABLE MECHANISM (EPT) (https://software.intel.com/en-us/articles/intel-sdm)

[2] Performance Evaluation of Intel EPT Hardware Assist (https://www.vmware.com/pdf/Perf_ESX_Intel-EPT-eval.pdf)

[3] Second Level Address Translation (https://en.wikipedia.org/wiki/Second_Level_Address_Translation)  

[4] Memory Virtualization (http://www.cs.nthu.edu.tw/~ychung/slides/Virtualization/VM-Lecture-2-2-SystemVirtualizationMemory.pptx)

[5] Best Practices for Paravirtualization Enhancements from Intel® Virtualization Technology: EPT and VT-d (https://software.intel.com/en-us/articles/best-practices-for-paravirtualization-enhancements-from-intel-virtualization-technology-ept-and-vt-d)

[6] 5-Level Paging and 5-Level EPT (https://software.intel.com/sites/default/files/managed/2b/80/5-level_paging_white_paper.pdf)

[7] Xen Summit November 2007 – Jun Nakajima (http://www-archive.xenproject.org/files/xensummit_fall07/12_JunNakajima.pdf)

[8] Hypervisor against rootkits: how it works (http://developers-club.com/posts/133906/)

[9] Intel SGX Explained (https://www.semanticscholar.org/paper/Intel-SGX-Explained-Costan-Devadas/2d7f3f4ca3fbb15ae04533456e5031e0d0dc845a)

[10] Intel VT-x (https://github.com/tnballo/notebook/wiki/Intel-VTx)

[11] Introduction to IA-32e hardware paging (https://www.triplefault.io/2017/07/introduction-to-ia-32e-hardware-paging.html)

Hypervisor From Scratch – Part 3: Setting up Our First Virtual Machine

( Original text by Sinaei )

Introduction

This is the third part of the tutorial "Hypervisor From Scratch". You may have noticed that the previous parts have steadily been getting more complicated. This part should teach you how to get started with creating your own VMM: we demonstrate how to interact with the VMM from Windows user mode (the IOCTL dispatcher), then we solve the problems of affinity and running code on a specific core. Finally, we get familiar with initializing VMXON regions and VMCS regions, load our hypervisor regions onto each core, and implement our custom functions to work with the hypervisor instructions and many more things related to Virtual-Machine Control Data Structures (VMCS).

Some of the implementations are derived from HyperBone (a minimalistic VT-x hypervisor with hooks), HyperPlatform by Satoshi Tanda, and hvpp, which is great work by my friend Petr Beneš, the person who really helped me create this series.

The full source code of this tutorial is available on GitHub:

[https://github.com/SinaKarvandi/Hypervisor-From-Scratch]

Interacting with VMM Driver from User-Mode

The most important IRP MJ function for us is DrvIOCTLDispatcher (IRP_MJ_DEVICE_CONTROL), because it can be called from user mode with a special IOCTL number. This means you can define a special code in your driver and implement the corresponding functionality; then, knowing that code, a user-mode application can ask your driver to perform the request, so you can imagine how useful this function is.

Now let’s implement our functions for dispatching IOCTL code and print it from our kernel-mode driver.

As far as I know, there are several methods by which you can dispatch an IOCTL, e.g., METHOD_BUFFERED, METHOD_NEITHER, METHOD_IN_DIRECT, and METHOD_OUT_DIRECT. These methods must be respected by the user-mode caller (the difference is in where the buffers are transferred between user mode and kernel mode, or vice versa). I just copied the implementations, with some minor modifications, from Microsoft's Windows Driver Samples; you can see the full code for user mode and kernel mode there.

Imagine we have the following IOCTL codes:

12345678910111213141516171819//// Device type           — in the «User Defined» range.»//#define SIOCTL_TYPE 40000 //// The IOCTL function codes from 0x800 to 0xFFF are for customer use.//#define IOCTL_SIOCTL_METHOD_IN_DIRECT \    CTL_CODE( SIOCTL_TYPE, 0x900, METHOD_IN_DIRECT, FILE_ANY_ACCESS  ) #define IOCTL_SIOCTL_METHOD_OUT_DIRECT \    CTL_CODE( SIOCTL_TYPE, 0x901, METHOD_OUT_DIRECT , FILE_ANY_ACCESS  ) #define IOCTL_SIOCTL_METHOD_BUFFERED \    CTL_CODE( SIOCTL_TYPE, 0x902, METHOD_BUFFERED, FILE_ANY_ACCESS  ) #define IOCTL_SIOCTL_METHOD_NEITHER \    CTL_CODE( SIOCTL_TYPE, 0x903, METHOD_NEITHER , FILE_ANY_ACCESS  )

There is a convention for defining IOCTLs as it mentioned here,

The IOCTL is a 32-bit number. The first two low bits define the “transfer type” which can be METHOD_OUT_DIRECT, METHOD_IN_DIRECT, METHOD_BUFFERED or METHOD_NEITHER.

The next set of bits, 2 to 13, defines the "function code". Its high bit is referred to as the "custom bit" and is used to distinguish user-defined IOCTLs from system-defined ones. This means that function codes 0x800 and greater are custom-defined, similarly to how WM_USER works for Windows messages.

The next two bits define the access required to issue the IOCTL. This is how the I/O Manager can reject IOCTL requests if the handle has not been opened with the correct access. The access types are such as FILE_READ_DATA and FILE_WRITE_DATA for example.

The last bits represent the device type the IOCTLs are written for. The high bit again represents user-defined values.
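To make the layout concrete, the following sketch decodes an IOCTL value back into its four fields; the shift/mask values mirror the layout described above, but the macro names are mine, not from the WDK:

// Decode the four fields packed into a 32-bit IOCTL code.
#define IOCTL_METHOD(code)      ((code) & 0x3)            // bits 1:0   - transfer type
#define IOCTL_FUNCTION(code)    (((code) >> 2) & 0xFFF)   // bits 13:2  - function code
#define IOCTL_ACCESS(code)      (((code) >> 14) & 0x3)    // bits 15:14 - required access
#define IOCTL_DEVICE_TYPE(code) (((code) >> 16) & 0xFFFF) // bits 31:16 - device type

// Example: IOCTL_SIOCTL_METHOD_BUFFERED (defined above) decodes to
// device type 40000, function 0x902, METHOD_BUFFERED, FILE_ANY_ACCESS.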

In the IOCTL dispatcher, the "Parameters.DeviceIoControl.IoControlCode" field of the IO_STACK_LOCATION contains the IOCTL code being invoked.

For METHOD_IN_DIRECT and METHOD_OUT_DIRECT, the difference between IN and OUT is that with IN, you can use the output buffer to pass in data while the OUT is only used to return data.

With METHOD_BUFFERED, the data is copied through a single system buffer. The buffer is created with the larger of the two sizes (input or output), and the input data is copied into it. Before you return, you simply copy the return data into the same buffer; the returned length is put into the IO_STATUS_BLOCK and the I/O Manager copies the data into the output buffer. METHOD_NEITHER, in contrast, does not use the system buffer; the driver receives the user-mode buffer pointers directly.

Ok, let’s see an example :

First, we declare all our needed variable.

Note that the PAGED_CODE macro ensures that the calling thread is running at an IRQL that is low enough to permit paging.

123456789101112131415161718192021222324252627NTSTATUS DrvIOCTLDispatcher( PDEVICE_OBJECT DeviceObject, PIRP Irp){ PIO_STACK_LOCATION  irpSp;// Pointer to current stack location NTSTATUS            ntStatus = STATUS_SUCCESS;// Assume success ULONG               inBufLength; // Input buffer length ULONG               outBufLength; // Output buffer length PCHAR               inBuf, outBuf; // pointer to Input and output buffer PCHAR               data = «This String is from Device Driver !!!»; size_t              datalen = strlen(data) + 1;//Length of data including null PMDL                mdl = NULL; PCHAR               buffer = NULL;  UNREFERENCED_PARAMETER(DeviceObject);  PAGED_CODE();  irpSp = IoGetCurrentIrpStackLocation(Irp); inBufLength = irpSp->Parameters.DeviceIoControl.InputBufferLength; outBufLength = irpSp->Parameters.DeviceIoControl.OutputBufferLength;  if (!inBufLength || !outBufLength) { ntStatus = STATUS_INVALID_PARAMETER; goto End; } …

Then we have to use switch-case through the IOCTLs (Just copy buffers and show it from DbgPrint()).

123456789101112131415161718 switch (irpSp->Parameters.DeviceIoControl.IoControlCode) { case IOCTL_SIOCTL_METHOD_BUFFERED:  DbgPrint(«Called IOCTL_SIOCTL_METHOD_BUFFERED\n»); PrintIrpInfo(Irp); inBuf = Irp->AssociatedIrp.SystemBuffer; outBuf = Irp->AssociatedIrp.SystemBuffer; DbgPrint(«\tData from User :»); DbgPrint(inBuf); PrintChars(inBuf, inBufLength); RtlCopyBytes(outBuf, data, outBufLength); DbgPrint((«\tData to User : «)); PrintChars(outBuf, datalen); Irp->IoStatus.Information = (outBufLength < datalen ? outBufLength : datalen); break; …

The PrintIrpInfo is like this :

123456789101112131415161718VOID PrintIrpInfo(PIRP Irp){ PIO_STACK_LOCATION  irpSp; irpSp = IoGetCurrentIrpStackLocation(Irp);  PAGED_CODE();  DbgPrint(«\tIrp->AssociatedIrp.SystemBuffer = 0x%p\n», Irp->AssociatedIrp.SystemBuffer); DbgPrint(«\tIrp->UserBuffer = 0x%p\n», Irp->UserBuffer); DbgPrint(«\tirpSp->Parameters.DeviceIoControl.Type3InputBuffer = 0x%p\n», irpSp->Parameters.DeviceIoControl.Type3InputBuffer); DbgPrint(«\tirpSp->Parameters.DeviceIoControl.InputBufferLength = %d\n», irpSp->Parameters.DeviceIoControl.InputBufferLength); DbgPrint(«\tirpSp->Parameters.DeviceIoControl.OutputBufferLength = %d\n», irpSp->Parameters.DeviceIoControl.OutputBufferLength); return;}

You can see all the implementations on my GitHub, but that's enough for now; in the rest of the post we only use the IOCTL_SIOCTL_METHOD_BUFFERED method.

Now, from user mode, if you remember the previous part where we created a handle (HANDLE) using CreateFile, we can use DeviceIoControl to call DrvIOCTLDispatcher (IRP_MJ_DEVICE_CONTROL) with our parameters from user mode.

1234567891011121314151617181920212223242526272829 char OutputBuffer[1000]; char InputBuffer[1000]; ULONG bytesReturned; BOOL Result;  StringCbCopy(InputBuffer, sizeof(InputBuffer), «This String is from User Application; using METHOD_BUFFERED»);  printf(«\nCalling DeviceIoControl METHOD_BUFFERED:\n»);  memset(OutputBuffer, 0, sizeof(OutputBuffer));  Result = DeviceIoControl(handle, (DWORD)IOCTL_SIOCTL_METHOD_BUFFERED, &InputBuffer, (DWORD)strlen(InputBuffer) + 1, &OutputBuffer, sizeof(OutputBuffer), &bytesReturned, NULL );  if (!Result) { printf(«Error in DeviceIoControl : %d», GetLastError()); return 1;  } printf(»    OutBuffer (%d): %s\n», bytesReturned, OutputBuffer);

There is an old, yet great, topic here which describes the different types of IOCTL dispatching.

I think we’re done with WDK basics, its time to see how we can use Windows in order to build our VMM.


Per Processor Configuration and Setting Affinity

Affinity to a special logical processor is one of the main things that we should consider when working with the hypervisor.

Unfortunately, Windows has nothing like on_each_cpu (as in a Linux kernel module), so we have to change our affinity manually in order to run on each logical processor. My Intel Core i7 6820HQ has 4 physical cores, and each core can run 2 threads simultaneously (due to hyper-threading), so we have 8 logical processors and, of course, 8 sets of all the registers (including general-purpose registers and MSRs); therefore, we should configure our VMM to work on all 8 logical processors.

To get the count of logical processors, you can use KeQueryActiveProcessorCount(); then we pass a KAFFINITY mask to KeSetSystemAffinityThread, which sets the system affinity of the current thread.

KAFFINITY mask can be configured using a simple power function :

1234567891011121314151617int ipow(int base, int exp) { int result = 1; for (;;) { if ( exp & 1) { result *= base; } exp >>= 1; if (!exp) { break; } base *= base; } return result;}

then we should use the following code in order to change the affinity of the processor and run our code in all the logical cores separately:

12345678910 KAFFINITY kAffinityMask; for (size_t i = 0; i < KeQueryActiveProcessors(); i++) { kAffinityMask = ipow(2, i); KeSetSystemAffinityThread(kAffinityMask); DbgPrint(«=====================================================»); DbgPrint(«Current thread is executing in %d th logical processor.»,i); // Put you function here !  }

Conversion between the physical and virtual addresses

VMXON Regions and VMCS Regions (see below) use physical address as the operand to VMXON and VMPTRLD instruction so we should create functions to convert Virtual Address to Physical address:

1234UINT64 VirtualAddress_to_PhysicallAddress(void* va){ return MmGetPhysicalAddress(va).QuadPart;}

And since we can't directly use physical addresses for our modifications in protected mode, we also have to convert physical addresses back to virtual addresses.

1234567UINT64 PhysicalAddress_to_VirtualAddress(UINT64 pa){ PHYSICAL_ADDRESS PhysicalAddr; PhysicalAddr.QuadPart = pa;  return MmGetVirtualForPhysical(PhysicalAddr);}

Query about Hypervisor from the kernel

In the previous part, we queried the presence of a hypervisor from user mode, but we should also check for hypervisor support from kernel mode. This reduces the possibility of kernel errors later, and there might also be something that disables VMX using the lock bit. The following code checks the IA32_FEATURE_CONTROL MSR (MSR address 3AH) to see whether the lock bit is set or not.

123456789101112131415161718192021222324252627BOOLEAN Is_VMX_Supported(){ CPUID data = { 0 };  // VMX bit __cpuid((int*)&data, 1); if ((data.ecx & (1 << 5)) == 0) return FALSE;  IA32_FEATURE_CONTROL_MSR Control = { 0 }; Control.All = __readmsr(MSR_IA32_FEATURE_CONTROL);  // BIOS lock check if (Control.Fields.Lock == 0) { Control.Fields.Lock = TRUE; Control.Fields.EnableVmxon = TRUE; __writemsr(MSR_IA32_FEATURE_CONTROL, Control.All); } else if (Control.Fields.EnableVmxon == FALSE) { DbgPrint(«[*] VMX locked off in BIOS»); return FALSE; }  return TRUE;}

The structures used in the above function declared like this:

1234567891011121314151617181920212223typedef union _IA32_FEATURE_CONTROL_MSR{ ULONG64 All; struct { ULONG64 Lock : 1;                // [0] ULONG64 EnableSMX : 1;           // [1] ULONG64 EnableVmxon : 1;         // [2] ULONG64 Reserved2 : 5;           // [3-7] ULONG64 EnableLocalSENTER : 7;   // [8-14] ULONG64 EnableGlobalSENTER : 1;  // [15] ULONG64 Reserved3a : 16;         // ULONG64 Reserved3b : 32;         // [16-63] } Fields;} IA32_FEATURE_CONTROL_MSR, *PIA32_FEATURE_CONTROL_MSR; typedef struct _CPUID{ int eax; int ebx; int ecx; int edx;} CPUID, *PCPUID;

VMXON Region

Before executing VMXON, software should allocate a naturally aligned 4-KByte region of memory that a logical processor may use to support VMX operation. This region is called the VMXON region. The address of the VMXON region (the VMXON pointer) is provided in an operand to VMXON.

A VMM can (should) use different VMXON Regions for each logical processor otherwise the behavior is “undefined”.

Note: The first processors to support VMX operation require that the following bits be 1 in VMX operation: CR0.PE, CR0.NE, CR0.PG, and CR4.VMXE. The restrictions on CR0.PE and CR0.PG imply that VMX operation is supported only in paged protected mode (including IA-32e mode). Therefore, the guest software cannot be run in unpaged protected mode or in real-address mode. 

Now that we are configuring the hypervisor, we should have a global variable that describes the state of our virtual machine. I created the following structure for this purpose; currently, we have just two fields (VMXON_REGION and VMCS_REGION), but we will add new fields to this structure in future parts.

12345typedef struct _VirtualMachineState{ UINT64 VMXON_REGION;                        // VMXON region UINT64 VMCS_REGION;                         // VMCS region} VirtualMachineState, *PVirtualMachineState;

And of course a global variable:

1extern PVirtualMachineState vmState;

I create the following function (in memory.c) to allocate VMXON Region and execute VMXON instruction using the allocated region’s pointer.

1234567891011121314151617181920212223242526272829303132333435363738394041424344454647484950515253545556575859606162BOOLEAN Allocate_VMXON_Region(IN PVirtualMachineState vmState){ // at IRQL > DISPATCH_LEVEL memory allocation routines don’t work if (KeGetCurrentIrql() > DISPATCH_LEVEL) KeRaiseIrqlToDpcLevel();   PHYSICAL_ADDRESS PhysicalMax = { 0 }; PhysicalMax.QuadPart = MAXULONG64;   int VMXONSize = 2 * VMXON_SIZE; BYTE* Buffer = MmAllocateContiguousMemory(VMXONSize + ALIGNMENT_PAGE_SIZE, PhysicalMax);  // Allocating a 4-KByte Contigous Memory region  PHYSICAL_ADDRESS Highest = { 0 }, Lowest = { 0 }; Highest.QuadPart = ~0;  //BYTE* Buffer = MmAllocateContiguousMemorySpecifyCache(VMXONSize + ALIGNMENT_PAGE_SIZE, Lowest, Highest, Lowest, MmNonCached); if (Buffer == NULL) { DbgPrint(«[*] Error : Couldn’t Allocate Buffer for VMXON Region.»); return FALSE;// ntStatus = STATUS_INSUFFICIENT_RESOURCES; } UINT64 PhysicalBuffer = VirtualAddress_to_PhysicallAddress(Buffer);  // zero-out memory RtlSecureZeroMemory(Buffer, VMXONSize + ALIGNMENT_PAGE_SIZE); UINT64 alignedPhysicalBuffer = (BYTE*)((ULONG_PTR)(PhysicalBuffer + ALIGNMENT_PAGE_SIZE — 1) &~(ALIGNMENT_PAGE_SIZE — 1));  UINT64 alignedVirtualBuffer = (BYTE*)((ULONG_PTR)(Buffer + ALIGNMENT_PAGE_SIZE — 1) &~(ALIGNMENT_PAGE_SIZE — 1));  DbgPrint(«[*] Virtual allocated buffer for VMXON at %llx», Buffer); DbgPrint(«[*] Virtual aligned allocated buffer for VMXON at %llx», alignedVirtualBuffer); DbgPrint(«[*] Aligned physical buffer allocated for VMXON at %llx», alignedPhysicalBuffer);  // get IA32_VMX_BASIC_MSR RevisionId  IA32_VMX_BASIC_MSR basic = { 0 };   basic.All = __readmsr(MSR_IA32_VMX_BASIC);  DbgPrint(«[*] MSR_IA32_VMX_BASIC (MSR 0x480) Revision Identifier %llx», basic.Fields.RevisionIdentifier);   //* (UINT64 *)alignedVirtualBuffer  = 04;  //Changing Revision Identifier *(UINT64 *)alignedVirtualBuffer = basic.Fields.RevisionIdentifier;   int status = __vmx_on(&alignedPhysicalBuffer); if (status) { DbgPrint(«[*] VMXON failed with status %d\n», status); return FALSE; }  vmState->VMXON_REGION = alignedPhysicalBuffer;  return TRUE;}

Let’s explain the  above function,

123 // at IRQL > DISPATCH_LEVEL memory allocation routines don’t work if (KeGetCurrentIrql() > DISPATCH_LEVEL) KeRaiseIrqlToDpcLevel();

This code changes the current IRQL to DISPATCH_LEVEL, but we can ignore it as long as we use MmAllocateContiguousMemory. If you want to use another type of memory for your VMXON region, you should use MmAllocateContiguousMemorySpecifyCache (commented out above); the other memory caching types you can use are documented here.

Note that to ensure proper behavior in VMX operation, you should maintain the VMCS region and related structures in writeback cacheable memory. Alternatively, you may map any of these regions or structures with the UC memory type. Doing so is strongly discouraged unless necessary as it will cause the performance of transitions using those structures to suffer significantly.

Write-back is a storage method in which data is written into the cache every time a change occurs, but is written into the corresponding location in main memory only at specified intervals or under certain conditions. Being cachable or not cachable can be determined from the cache disable bit in paging structures (PTE).

By the way, we allocate 8192 bytes because there is no guarantee that Windows returns aligned memory, so we can always find a 4096-byte-aligned region within the 8192 bytes. (By aligned I mean the physical address should be divisible by 4096 without any remainder.)

In my experience, MmAllocateContiguousMemory allocations are always aligned; maybe that is because every PFN covers 4096 bytes, and as long as we request at least 4096 bytes, the allocation ends up page-aligned.

If you are interested in Page Frame Number (PFN) then you can read Inside Windows Page Frame Number (PFN) – Part 1 and Inside Windows Page Frame Number (PFN) – Part 2.

123456789 PHYSICAL_ADDRESS PhysicalMax = { 0 }; PhysicalMax.QuadPart = MAXULONG64;  int VMXONSize = 2 * VMXON_SIZE; BYTE* Buffer = MmAllocateContiguousMemory(VMXONSize, PhysicalMax);  // Allocating a 4-KByte Contigous Memory region if (Buffer == NULL) { DbgPrint(«[*] Error : Couldn’t Allocate Buffer for VMXON Region.»); return FALSE;// ntStatus = STATUS_INSUFFICIENT_RESOURCES; }

Now we should convert the address of the allocated memory to its physical address and make sure it’s aligned.

Memory that MmAllocateContiguousMemory allocates is uninitialized. A kernel-mode driver must first set this memory to zero. Now we should use RtlSecureZeroMemory for this case.

12345678910 UINT64 PhysicalBuffer = VirtualAddress_to_PhysicallAddress(Buffer);  // zero-out memory RtlSecureZeroMemory(Buffer, VMXONSize + ALIGNMENT_PAGE_SIZE); UINT64 alignedPhysicalBuffer = (BYTE*)((ULONG_PTR)(PhysicalBuffer + ALIGNMENT_PAGE_SIZE — 1) &~(ALIGNMENT_PAGE_SIZE — 1)); UINT64 alignedVirtualBuffer = (BYTE*)((ULONG_PTR)(Buffer + ALIGNMENT_PAGE_SIZE — 1) &~(ALIGNMENT_PAGE_SIZE — 1));  DbgPrint(«[*] Virtual allocated buffer for VMXON at %llx», Buffer); DbgPrint(«[*] Virtual aligned allocated buffer for VMXON at %llx», alignedVirtualBuffer); DbgPrint(«[*] Aligned physical buffer allocated for VMXON at %llx», alignedPhysicalBuffer);

From Intel’s manual (24.11.5 VMXON Region ):

Before executing VMXON, software should write the VMCS revision identifier to the VMXON region. (Specifically, it should write the 31-bit VMCS revision identifier to bits 30:0 of the first 4 bytes of the VMXON region; bit 31 should be cleared to 0.)

It need not initialize the VMXON region in any other way. Software should use a separate region for each logical processor and should not access or modify the VMXON region of a logical processor between the execution of VMXON and VMXOFF on that logical processor. Doing otherwise may lead to unpredictable behavior.

So let’s get the Revision Identifier from IA32_VMX_BASIC_MSR  and write it to our VMXON Region.

1234567891011 // get IA32_VMX_BASIC_MSR RevisionId  IA32_VMX_BASIC_MSR basic = { 0 };   basic.All = __readmsr(MSR_IA32_VMX_BASIC);  DbgPrint(«[*] MSR_IA32_VMX_BASIC (MSR 0x480) Revision Identifier %llx», basic.Fields.RevisionIdentifier);  //Changing Revision Identifier *(UINT64 *)alignedVirtualBuffer = basic.Fields.RevisionIdentifier;

The last part is used for executing VMXON instruction.

12345678910 int status = __vmx_on(&alignedPhysicalBuffer); if (status) { DbgPrint(«[*] VMXON failed with status %d\n», status); return FALSE; }  vmState->VMXON_REGION = alignedPhysicalBuffer;  return TRUE;

__vmx_on is the intrinsic function for executing VMXON. The status code it returns has the following meanings:

Value 0: The operation succeeded.
Value 1: The operation failed with extended status available in the VM-instruction error field of the current VMCS.
Value 2: The operation failed without status available.

If executing VMXON fails while there is a current VMCS to record the error, status = 1; if it fails without a current VMCS available, status = 2; and if the operation succeeds, status = 0.
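When a VMX instruction returns 1, the VM-instruction error field of the current VMCS tells you why. A sketch of reading it might look like this; the constant name is mine, but the field encoding 0x4400 is the VM-instruction error field from the SDM:

// Sketch: print the VM-instruction error number after a status-1 failure.
#define VM_INSTRUCTION_ERROR 0x4400

VOID Print_VM_Instruction_Error(void)
{
    size_t ErrorCode = 0;
    __vmx_vmread(VM_INSTRUCTION_ERROR, &ErrorCode);
    DbgPrint("[*] VM-instruction error: %llu", ErrorCode);
}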

If you execute the above code twice without executing VMXOFF then you definitely get errors.

Now, our VMXON Region is ready and we’re good to go.

Virtual-Machine Control Data Structures (VMCS)

A logical processor uses virtual-machine control data structures (VMCSs) while it is in VMX operation. These manage transitions into and out of VMX non-root operation (VM entries and VM exits) as well as processor behavior in VMX non-root operation. This structure is manipulated by the new instructions VMCLEAR, VMPTRLD, VMREAD, and VMWRITE.

VMX Life cycle

The above picture illustrates the lifecycle VMX operation on VMCS Region.

Initializing  VMCS Region

A VMM can (should) use a different VMCS region for each logical processor, so you need to set the logical processor affinity and run your initialization routine multiple times.

The location where the VMCS located is called “VMCS Region”.

The VMCS region is:

  • 4 KBytes in size (bits 11:0 of its address must be zero)
  • aligned to a 4-KByte boundary

This pointer must not set bits beyond the processor’s physical-address width (Software can determine a processor’s physical-address width by executing CPUID with 80000008H in EAX. The physical-address width is returned in bits 7:0 of EAX.)

There might be several VMCSs simultaneously in a processor but just one of them is currently active and the VMLAUNCH, VMREAD, VMRESUME, and VMWRITE instructions operate only on the current VMCS.

Using VMPTRLD sets the current VMCS on a logical processor.

The memory operand of the VMCLEAR instruction is also the address of a VMCS. After execution of the instruction, that VMCS is neither active nor current on the logical processor. If the VMCS had been current on the logical processor, the logical processor no longer has a current VMCS.

VMPTRST stores the current-VMCS pointer; it stores the value FFFFFFFFFFFFFFFFH if there is no current VMCS.

The launch state of a VMCS determines which VM-entry instruction should be used with that VMCS. The VMLAUNCH instruction requires a VMCS whose launch state is “clear”; the VMRESUME instruction requires a VMCS whose launch state is “launched”. A logical processor maintains a VMCS’s launch state in the corresponding VMCS region.

If the launch state of the current VMCS is “clear”, successful execution of the VMLAUNCH instruction changes the launch state to “launched”.

The memory operand of the VMCLEAR instruction is the address of a VMCS. After execution of the instruction, the launch state of that VMCS is “clear”.

There are no other ways to modify the launch state of a VMCS (it cannot be modified using VMWRITE) and there is no direct way to discover it (it cannot be read using VMREAD).
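Putting this together, the usual per-processor sequence is roughly the sketch below (using the compiler intrinsics directly; the VMWRITE setup and the actual launch are the subject of later parts, so treat this as an outline only):

// Sketch of the launch-state life cycle for one logical processor.
__vmx_vmclear(&vmState[i].VMCS_REGION);   // VMCLEAR: launch state becomes "clear"
__vmx_vmptrld(&vmState[i].VMCS_REGION);   // VMPTRLD: make this VMCS current and active

// ... __vmx_vmwrite(...) all the guest, host, and control fields here ...

__vmx_vmlaunch();                         // allowed only while the launch state is "clear"

// A later VM entry on the same VMCS must use __vmx_vmresume() instead.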

The following picture illustrates the contents of a VMCS Region.

VMCS Region

The following code is responsible for allocating VMCS Region :

12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455565758596061BOOLEAN Allocate_VMCS_Region(IN PVirtualMachineState vmState){ // at IRQL > DISPATCH_LEVEL memory allocation routines don’t work if (KeGetCurrentIrql() > DISPATCH_LEVEL) KeRaiseIrqlToDpcLevel();   PHYSICAL_ADDRESS PhysicalMax = { 0 }; PhysicalMax.QuadPart = MAXULONG64;   int VMCSSize = 2 * VMCS_SIZE; BYTE* Buffer = MmAllocateContiguousMemory(VMCSSize + ALIGNMENT_PAGE_SIZE, PhysicalMax);  // Allocating a 4-KByte Contigous Memory region  PHYSICAL_ADDRESS Highest = { 0 }, Lowest = { 0 }; Highest.QuadPart = ~0;  //BYTE* Buffer = MmAllocateContiguousMemorySpecifyCache(VMXONSize + ALIGNMENT_PAGE_SIZE, Lowest, Highest, Lowest, MmNonCached);  UINT64 PhysicalBuffer = VirtualAddress_to_PhysicallAddress(Buffer); if (Buffer == NULL) { DbgPrint(«[*] Error : Couldn’t Allocate Buffer for VMCS Region.»); return FALSE;// ntStatus = STATUS_INSUFFICIENT_RESOURCES; } // zero-out memory RtlSecureZeroMemory(Buffer, VMCSSize + ALIGNMENT_PAGE_SIZE); UINT64 alignedPhysicalBuffer = (BYTE*)((ULONG_PTR)(PhysicalBuffer + ALIGNMENT_PAGE_SIZE — 1) &~(ALIGNMENT_PAGE_SIZE — 1));  UINT64 alignedVirtualBuffer = (BYTE*)((ULONG_PTR)(Buffer + ALIGNMENT_PAGE_SIZE — 1) &~(ALIGNMENT_PAGE_SIZE — 1));    DbgPrint(«[*] Virtual allocated buffer for VMCS at %llx», Buffer); DbgPrint(«[*] Virtual aligned allocated buffer for VMCS at %llx», alignedVirtualBuffer); DbgPrint(«[*] Aligned physical buffer allocated for VMCS at %llx», alignedPhysicalBuffer);  // get IA32_VMX_BASIC_MSR RevisionId  IA32_VMX_BASIC_MSR basic = { 0 };   basic.All = __readmsr(MSR_IA32_VMX_BASIC);  DbgPrint(«[*] MSR_IA32_VMX_BASIC (MSR 0x480) Revision Identifier %llx», basic.Fields.RevisionIdentifier);   //Changing Revision Identifier *(UINT64 *)alignedVirtualBuffer = basic.Fields.RevisionIdentifier;   int status = __vmx_vmptrld(&alignedPhysicalBuffer); if (status) { DbgPrint(«[*] VMCS failed with status %d\n», status); return FALSE; }  vmState->VMCS_REGION = alignedPhysicalBuffer;  return TRUE;}

The above code is exactly the same as for the VMXON region, except that it uses __vmx_vmptrld instead of __vmx_on; __vmx_vmptrld is the intrinsic function for the VMPTRLD instruction.

For the VMCS as well, we should read the revision identifier from MSR_IA32_VMX_BASIC and write it into the VMCS region before executing VMPTRLD.

The MSR_IA32_VMX_BASIC  is defined as below.

123456789101112131415161718typedef union _IA32_VMX_BASIC_MSR{ ULONG64 All; struct { ULONG32 RevisionIdentifier : 31;   // [0-30] ULONG32 Reserved1 : 1;             // [31] ULONG32 RegionSize : 12;           // [32-43] ULONG32 RegionClear : 1;           // [44] ULONG32 Reserved2 : 3;             // [45-47] ULONG32 SupportedIA64 : 1;         // [48] ULONG32 SupportedDualMoniter : 1;  // [49] ULONG32 MemoryType : 4;            // [50-53] ULONG32 VmExitReport : 1;          // [54] ULONG32 VmxCapabilityHint : 1;     // [55] ULONG32 Reserved3 : 8;             // [56-63] } Fields;} IA32_VMX_BASIC_MSR, *PIA32_VMX_BASIC_MSR;

VMXOFF

After configuring the above regions, it's time to think about DrvClose, which runs when the handle to the driver is no longer maintained by the user-mode application. At this point, we should terminate VMX and free all the memory we allocated before.

The following function is responsible for executing VMXOFF and then calling MmFreeContiguousMemory in order to free the allocated memory:

123456789101112131415161718192021void Terminate_VMX(void) {  DbgPrint(«\n[*] Terminating VMX…\n»);  KAFFINITY kAffinityMask; for (size_t i = 0; i < ProcessorCounts; i++) { kAffinityMask = ipow(2, i); KeSetSystemAffinityThread(kAffinityMask); DbgPrint(«\t\tCurrent thread is executing in %d th logical processor.», i);   __vmx_off(); MmFreeContiguousMemory(PhysicalAddress_to_VirtualAddress(vmState[i].VMXON_REGION)); MmFreeContiguousMemory(PhysicalAddress_to_VirtualAddress(vmState[i].VMCS_REGION));  }  DbgPrint(«[*] VMX Operation turned off successfully. \n»); }

Keep in mind to convert the VMXON and VMCS regions back to virtual addresses, because MmFreeContiguousMemory accepts a VA; otherwise, it leads to a BSOD.

Ok, It’s almost done!

Testing our VMM

Let's create a test case for our code: first, a function for initiating the VMXON and VMCS regions on all logical processors.

1234567891011121314151617181920212223242526272829303132333435363738PVirtualMachineState vmState;int ProcessorCounts; PVirtualMachineState Initiate_VMX(void) {  if (!Is_VMX_Supported()) { DbgPrint(«[*] VMX is not supported in this machine !»); return NULL; }  ProcessorCounts = KeQueryActiveProcessorCount(0); vmState = ExAllocatePoolWithTag(NonPagedPool, sizeof(VirtualMachineState)* ProcessorCounts, POOLTAG);   DbgPrint(«\n=====================================================\n»);  KAFFINITY kAffinityMask; for (size_t i = 0; i < ProcessorCounts; i++) { kAffinityMask = ipow(2, i); KeSetSystemAffinityThread(kAffinityMask); // do st here ! DbgPrint(«\t\tCurrent thread is executing in %d th logical processor.», i);  Enable_VMX_Operation(); // Enabling VMX Operation DbgPrint(«[*] VMX Operation Enabled Successfully !»);  Allocate_VMXON_Region(&vmState[i]); Allocate_VMCS_Region(&vmState[i]);   DbgPrint(«[*] VMCS Region is allocated at  ===============> %llx», vmState[i].VMCS_REGION); DbgPrint(«[*] VMXON Region is allocated at ===============> %llx», vmState[i].VMXON_REGION);  DbgPrint(«\n=====================================================\n»); }}

The above function should be called from IRP MJ CREATE so let’s modify our DrvCreate to :

123456789101112131415NTSTATUS DrvCreate(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp){  DbgPrint(«[*] DrvCreate Called !»);  if (Initiate_VMX()) { DbgPrint(«[*] VMX Initiated Successfully.»); }  Irp->IoStatus.Status = STATUS_SUCCESS; Irp->IoStatus.Information = 0; IoCompleteRequest(Irp, IO_NO_INCREMENT);  return STATUS_SUCCESS;}

And modify DrvClose to :

12345678910111213NTSTATUS DrvClose(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp){ DbgPrint(«[*] DrvClose Called !»);  // executing VMXOFF on every logical processor Terminate_VMX();  Irp->IoStatus.Status = STATUS_SUCCESS; Irp->IoStatus.Information = 0; IoCompleteRequest(Irp, IO_NO_INCREMENT);  return STATUS_SUCCESS;}

Now, run the code. When the handle is created, you can see that our regions are allocated successfully.

VMX Regions

And when we call CloseHandle from user mode:

VMXOFF

Source code

The source code of this part of the tutorial is available on my GitHub.

Conclusion

In this part, we learned about different types of IOCTL dispatching, saw different Windows functions for managing our hypervisor (VMM), and initialized the VMXON and VMCS regions and then terminated them.

In the future part, we’ll focus on VMCS and different actions that can be performed in VMCS Regions in order to control our guest software.

References

[1] Intel® 64 and IA-32 architectures software developer’s manual combined volumes 3 (https://software.intel.com/en-us/articles/intel-sdm

[2] Windows Driver Samples (https://github.com/Microsoft/Windows-driver-samples)

[3] Driver Development Part 2: Introduction to Implementing IOCTLs (https://www.codeproject.com/Articles/9575/Driver-Development-Part-2-Introduction-to-Implemen)

[4] Hyperplatform (https://github.com/tandasat/HyperPlatform)

[5] PAGED_CODE macro (https://technet.microsoft.com/en-us/ff558773(v=vs.96))

[6] HVPP (https://github.com/wbenny/hvpp)

[7] HyperBone Project (https://github.com/DarthTon/HyperBone)

[8] Memory Caching Types (https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/wdm/ne-wdm-_memory_caching_type)

[9] What is write-back cache? (https://whatis.techtarget.com/definition/write-back)

Hypervisor From Scratch – Part 2: Entering VMX Operation

( Original text by Sinaei )

Hi guys,

This is the second part of a multi-part tutorial called "Hypervisor From Scratch". I highly recommend reading the first part (Basic Concepts & Configure Testing Environment) before this one, as it contains the basic knowledge you need in order to understand the rest of this tutorial.

In this section, we will learn about detecting hypervisor support for our processor, then we configure the basics to enable VMX and enter VMX operation, along with a lot more about the Windows Driver Kit (WDK).

Configuring Our IRP Major Functions

Besides our kernel-mode driver ("MyHypervisorDriver"), I created a user-mode application called "MyHypervisorApp" (the source code is available on my GitHub). First of all, I should encourage you to write most of your code in user mode rather than kernel mode: unhandled exceptions in kernel mode lead to BSODs, and running less code in kernel mode reduces the chance of introducing nasty kernel-mode bugs.

If you remember, in the previous part we wrote some Windows Driver Kit code; now we want to extend our project to support more IRP Major Functions.

IRP Major Functions are kept in a conventional Windows table that is created for every device. Once you register your device in Windows, you have to introduce the functions in which you handle these IRP Major Functions. In other words, every device has a table of its Major Functions, and every time a user-mode application calls one of the corresponding APIs, Windows finds the matching function (if the device driver supports that MJ function) based on the device requested by the user, calls it, and passes an IRP pointer to the kernel driver.

It is then the responsibility of the device's dispatch function to check privileges, parameters, and so on.

The following code creates the device :

12345678910111213 NTSTATUS NtStatus = STATUS_SUCCESS; UINT64 uiIndex = 0; PDEVICE_OBJECT pDeviceObject = NULL; UNICODE_STRING usDriverName, usDosDeviceName;  DbgPrint(«[*] DriverEntry Called.»);   RtlInitUnicodeString(&usDriverName, L»\\Device\\MyHypervisorDevice»); RtlInitUnicodeString(&usDosDeviceName, L»\\DosDevices\\MyHypervisorDevice»);  NtStatus = IoCreateDevice(pDriverObject, 0, &usDriverName, FILE_DEVICE_UNKNOWN, FILE_DEVICE_SECURE_OPEN, FALSE, &pDeviceObject); NTSTATUS NtStatusSymLinkResult = IoCreateSymbolicLink(&usDosDeviceName, &usDriverName);

Note that our device name is "\Device\MyHypervisorDevice".

After that, we need to introduce our Major Functions for our device.

1234567891011121314151617 if (NtStatus == STATUS_SUCCESS && NtStatusSymLinkResult == STATUS_SUCCESS) { for (uiIndex = 0; uiIndex < IRP_MJ_MAXIMUM_FUNCTION; uiIndex++) pDriverObject->MajorFunction[uiIndex] = DrvUnsupported;  DbgPrint(«[*] Setting Devices major functions.»); pDriverObject->MajorFunction[IRP_MJ_CLOSE] = DrvClose; pDriverObject->MajorFunction[IRP_MJ_CREATE] = DrvCreate; pDriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = DrvIOCTLDispatcher; pDriverObject->MajorFunction[IRP_MJ_READ] = DrvRead; pDriverObject->MajorFunction[IRP_MJ_WRITE] = DrvWrite;  pDriverObject->DriverUnload = DrvUnload; } else { DbgPrint(«[*] There was some errors in creating device.»); }

You can see that I assigned "DrvUnsupported" to all functions; this is a function that handles all MJ functions and tells the user that the operation is not supported. The main body of this function is like this:

12345678910NTSTATUS DrvUnsupported(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp){ DbgPrint(«[*] This function is not supported 🙁 !»);  Irp->IoStatus.Status = STATUS_SUCCESS; Irp->IoStatus.Information = 0; IoCompleteRequest(Irp, IO_NO_INCREMENT);  return STATUS_SUCCESS;}

We also introduce other major functions that are essential for our device, we’ll complete the implementation in the future, let’s just leave them alone.

12345678910111213141516171819202122232425262728293031323334353637383940414243NTSTATUS DrvCreate(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp){ DbgPrint(«[*] Not implemented yet 🙁 !»);  Irp->IoStatus.Status = STATUS_SUCCESS; Irp->IoStatus.Information = 0; IoCompleteRequest(Irp, IO_NO_INCREMENT);  return STATUS_SUCCESS;} NTSTATUS DrvRead(IN PDEVICE_OBJECT DeviceObject,IN PIRP Irp){ DbgPrint(«[*] Not implemented yet 🙁 !»);  Irp->IoStatus.Status = STATUS_SUCCESS; Irp->IoStatus.Information = 0; IoCompleteRequest(Irp, IO_NO_INCREMENT);  return STATUS_SUCCESS;} NTSTATUS DrvWrite(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp){ DbgPrint(«[*] Not implemented yet 🙁 !»);  Irp->IoStatus.Status = STATUS_SUCCESS; Irp->IoStatus.Information = 0; IoCompleteRequest(Irp, IO_NO_INCREMENT);  return STATUS_SUCCESS;} NTSTATUS DrvClose(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp){ DbgPrint(«[*] Not implemented yet 🙁 !»);  Irp->IoStatus.Status = STATUS_SUCCESS; Irp->IoStatus.Information = 0; IoCompleteRequest(Irp, IO_NO_INCREMENT);  return STATUS_SUCCESS;}

Now let’s see IRP MJ Functions list and other types of Windows Driver Kit handlers routine.

IRP Major Functions List

This is a list of IRP Major Functions which we can use in order to perform different operations.

123456789101112131415161718192021222324252627282930#define IRP_MJ_CREATE                   0x00#define IRP_MJ_CREATE_NAMED_PIPE        0x01#define IRP_MJ_CLOSE                    0x02#define IRP_MJ_READ                     0x03#define IRP_MJ_WRITE                    0x04#define IRP_MJ_QUERY_INFORMATION        0x05#define IRP_MJ_SET_INFORMATION          0x06#define IRP_MJ_QUERY_EA                 0x07#define IRP_MJ_SET_EA                   0x08#define IRP_MJ_FLUSH_BUFFERS            0x09#define IRP_MJ_QUERY_VOLUME_INFORMATION 0x0a#define IRP_MJ_SET_VOLUME_INFORMATION   0x0b#define IRP_MJ_DIRECTORY_CONTROL        0x0c#define IRP_MJ_FILE_SYSTEM_CONTROL      0x0d#define IRP_MJ_DEVICE_CONTROL           0x0e#define IRP_MJ_INTERNAL_DEVICE_CONTROL  0x0f#define IRP_MJ_SHUTDOWN                 0x10#define IRP_MJ_LOCK_CONTROL             0x11#define IRP_MJ_CLEANUP                  0x12#define IRP_MJ_CREATE_MAILSLOT          0x13#define IRP_MJ_QUERY_SECURITY           0x14#define IRP_MJ_SET_SECURITY             0x15#define IRP_MJ_POWER                    0x16#define IRP_MJ_SYSTEM_CONTROL           0x17#define IRP_MJ_DEVICE_CHANGE            0x18#define IRP_MJ_QUERY_QUOTA              0x19#define IRP_MJ_SET_QUOTA                0x1a#define IRP_MJ_PNP                      0x1b#define IRP_MJ_PNP_POWER                IRP_MJ_PNP      // Obsolete….#define IRP_MJ_MAXIMUM_FUNCTION         0x1b

Every major function is triggered only when we call its corresponding API from user mode. For instance, there is a user-mode function called CreateFile (and its variants CreateFileA and CreateFileW for ASCII and Unicode), so every time we call CreateFile, the function registered as IRP_MJ_CREATE is called; likewise, ReadFile triggers IRP_MJ_READ and WriteFile triggers IRP_MJ_WRITE. You can see that Windows treats its devices like files, and everything we need to pass from user mode to kernel mode is available in the PIRP Irp as a buffer when the function is called.

In this case, Windows is responsible to copy user-mode buffer to kernel mode stack.

Don't worry, we use them frequently in the rest of the project, but we only support IRP_MJ_CREATE in this part and leave the others unimplemented for future parts; a minimal user-mode example of exercising IRP_MJ_CREATE and IRP_MJ_CLOSE follows.
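The sketch below opens our device through its DosDevices symbolic link (which dispatches DrvCreate via IRP_MJ_CREATE) and then closes the handle (dispatching IRP_MJ_CLOSE); error handling is kept to a minimum and the code is illustrative only.

#include <Windows.h>
#include <stdio.h>

int main(void)
{
    // Opening "\\.\MyHypervisorDevice" triggers IRP_MJ_CREATE in the driver.
    HANDLE handle = CreateFileW(L"\\\\.\\MyHypervisorDevice",
                                GENERIC_READ | GENERIC_WRITE,
                                0, NULL, OPEN_EXISTING,
                                FILE_ATTRIBUTE_NORMAL, NULL);
    if (handle == INVALID_HANDLE_VALUE)
    {
        printf("Failed to open device: %lu\n", GetLastError());
        return 1;
    }

    // Closing the handle triggers IRP_MJ_CLOSE.
    CloseHandle(handle);
    return 0;
}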

IRP Minor Functions

IRP minor functions are mainly used by the PnP manager to notify the driver of a special event. For example, the PnP manager sends IRP_MN_START_DEVICE after it has assigned hardware resources, if any, to the device, and it sends IRP_MN_STOP_DEVICE to stop a device so it can reconfigure the device's hardware resources.

We will need these minor functions later in these series.

A list of IRP Minor Functions is available below:

1234567891011121314151617181920212223IRP_MN_START_DEVICEIRP_MN_QUERY_STOP_DEVICEIRP_MN_STOP_DEVICEIRP_MN_CANCEL_STOP_DEVICEIRP_MN_QUERY_REMOVE_DEVICEIRP_MN_REMOVE_DEVICEIRP_MN_CANCEL_REMOVE_DEVICEIRP_MN_SURPRISE_REMOVALIRP_MN_QUERY_CAPABILITIES IRP_MN_QUERY_PNP_DEVICE_STATEIRP_MN_FILTER_RESOURCE_REQUIREMENTSIRP_MN_DEVICE_USAGE_NOTIFICATIONIRP_MN_QUERY_DEVICE_RELATIONSIRP_MN_QUERY_RESOURCESIRP_MN_QUERY_RESOURCE_REQUIREMENTSIRP_MN_QUERY_IDIRP_MN_QUERY_DEVICE_TEXTIRP_MN_QUERY_BUS_INFORMATIONIRP_MN_QUERY_INTERFACEIRP_MN_READ_CONFIGIRP_MN_WRITE_CONFIGIRP_MN_DEVICE_ENUMERATEDIRP_MN_SET_LOCK

Fast I/O

To optimize the VMM, you can use Fast I/O, which is a different way to initiate I/O operations that is faster than using IRPs. Fast I/O operations are always synchronous.

According to MSDN:

Fast I/O is specifically designed for rapid synchronous I/O on cached files. In fast I/O operations, data is transferred directly between user buffers and the system cache, bypassing the file system and the storage driver stack. (Storage drivers do not use fast I/O.) If all of the data to be read from a file is resident in the system cache when a fast I/O read or write request is received, the request is satisfied immediately. 

When the I/O Manager receives a request for synchronous file I/O (other than paging I/O), it invokes the fast I/O routine first. If the fast I/O routine returns TRUE, the operation was serviced by the fast I/O routine. If the fast I/O routine returns FALSE, the I/O Manager creates and sends an IRP instead.

The definition of Fast I/O Dispatch table is:

typedef struct _FAST_IO_DISPATCH {
  ULONG                                  SizeOfFastIoDispatch;
  PFAST_IO_CHECK_IF_POSSIBLE             FastIoCheckIfPossible;
  PFAST_IO_READ                          FastIoRead;
  PFAST_IO_WRITE                         FastIoWrite;
  PFAST_IO_QUERY_BASIC_INFO              FastIoQueryBasicInfo;
  PFAST_IO_QUERY_STANDARD_INFO           FastIoQueryStandardInfo;
  PFAST_IO_LOCK                          FastIoLock;
  PFAST_IO_UNLOCK_SINGLE                 FastIoUnlockSingle;
  PFAST_IO_UNLOCK_ALL                    FastIoUnlockAll;
  PFAST_IO_UNLOCK_ALL_BY_KEY             FastIoUnlockAllByKey;
  PFAST_IO_DEVICE_CONTROL                FastIoDeviceControl;
  PFAST_IO_ACQUIRE_FILE                  AcquireFileForNtCreateSection;
  PFAST_IO_RELEASE_FILE                  ReleaseFileForNtCreateSection;
  PFAST_IO_DETACH_DEVICE                 FastIoDetachDevice;
  PFAST_IO_QUERY_NETWORK_OPEN_INFO       FastIoQueryNetworkOpenInfo;
  PFAST_IO_ACQUIRE_FOR_MOD_WRITE         AcquireForModWrite;
  PFAST_IO_MDL_READ                      MdlRead;
  PFAST_IO_MDL_READ_COMPLETE             MdlReadComplete;
  PFAST_IO_PREPARE_MDL_WRITE             PrepareMdlWrite;
  PFAST_IO_MDL_WRITE_COMPLETE            MdlWriteComplete;
  PFAST_IO_READ_COMPRESSED               FastIoReadCompressed;
  PFAST_IO_WRITE_COMPRESSED              FastIoWriteCompressed;
  PFAST_IO_MDL_READ_COMPLETE_COMPRESSED  MdlReadCompleteCompressed;
  PFAST_IO_MDL_WRITE_COMPLETE_COMPRESSED MdlWriteCompleteCompressed;
  PFAST_IO_QUERY_OPEN                    FastIoQueryOpen;
  PFAST_IO_RELEASE_FOR_MOD_WRITE         ReleaseForModWrite;
  PFAST_IO_ACQUIRE_FOR_CCFLUSH           AcquireForCcFlush;
  PFAST_IO_RELEASE_FOR_CCFLUSH           ReleaseForCcFlush;
} FAST_IO_DISPATCH, *PFAST_IO_DISPATCH;
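We don't actually use Fast I/O in this series, but for completeness, a minimal sketch of attaching a Fast I/O dispatch table (the global and the helper name below are mine, not part of the project) would look roughly like this:

// Sketch only: the table must outlive DriverEntry, so keep it in a global.
static FAST_IO_DISPATCH g_FastIoDispatch;

VOID SetupFastIo(PDRIVER_OBJECT DriverObject)
{
	RtlZeroMemory(&g_FastIoDispatch, sizeof(g_FastIoDispatch));
	g_FastIoDispatch.SizeOfFastIoDispatch = sizeof(FAST_IO_DISPATCH);
	// Individual entries (e.g., g_FastIoDispatch.FastIoDeviceControl) would be
	// set here to routines matching the PFAST_IO_* prototypes from wdm.h.
	DriverObject->FastIoDispatch = &g_FastIoDispatch;
}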

Defined Headers

I created the following headers (source.h) for my driver.

#pragma once
#include <ntddk.h>
#include <wdf.h>
#include <wdm.h>

extern void inline Breakpoint(void);
extern void inline Enable_VMX_Operation(void);

NTSTATUS DriverEntry(PDRIVER_OBJECT  pDriverObject, PUNICODE_STRING  pRegistryPath);
VOID DrvUnload(PDRIVER_OBJECT  DriverObject);
NTSTATUS DrvCreate(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);
NTSTATUS DrvRead(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);
NTSTATUS DrvWrite(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);
NTSTATUS DrvClose(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);
NTSTATUS DrvUnsupported(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);
NTSTATUS DrvIOCTLDispatcher(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);

VOID PrintChars(_In_reads_(CountChars) PCHAR BufferAddress, _In_ size_t CountChars);
VOID PrintIrpInfo(PIRP Irp);

#pragma alloc_text(INIT, DriverEntry)
#pragma alloc_text(PAGE, DrvUnload)
#pragma alloc_text(PAGE, DrvCreate)
#pragma alloc_text(PAGE, DrvRead)
#pragma alloc_text(PAGE, DrvWrite)
#pragma alloc_text(PAGE, DrvClose)
#pragma alloc_text(PAGE, DrvUnsupported)
#pragma alloc_text(PAGE, DrvIOCTLDispatcher)

// IOCTL codes and their meanings
#define IOCTL_TEST 0x1 // In case of testing

Now just compile your driver.

Loading the Driver and Checking the Presence of the Device

In order to load our driver (MyHypervisorDriver), first download OSR Driver Loader, then run Sysinternals DbgView as administrator and make sure that your DbgView captures the kernel (you can check by going to Capture -> Capture Kernel).

Enable Capturing Event

After that, open OSR Driver Loader (go to OsrLoader -> kit -> WNET -> AMD64 -> FRE) and run OSRLOADER.exe (in an x64 environment). Now, if you built your driver, find the .sys file (in MyHypervisorDriver\x64\Debug\ there should be a file named "MyHypervisorDriver.sys"). In OSR Driver Loader, click Browse, select MyHypervisorDriver.sys, and click "Register Service"; after the message box shows that your driver registered successfully, click "Start Service".

Please note that you should have the WDK installed for your Visual Studio in order to be able to build your project.

Load Driver in OSR Driver Loader

Now go back to DbgView; you should see that your driver loaded successfully and that the message "[*] DriverEntry Called." appears.

If there is no problem, then you're good to go; otherwise, if you have a problem with DbgView, check the next step.

Keep in mind that now that you have registered your driver, you can use Sysinternals WinObj to see whether "MyHypervisorDevice" is available or not.

WinObj

The Problem with DbgView

Unfortunately, for some unknown reason, I'm not able to view the results of DbgPrint(). If you can see the results, you can skip this step, but if you have a problem, perform the following steps:

As I mentioned in part 1:

In regedit, add a key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Debug Print Filter

Under that key, add a DWORD value named IHVDRIVER with a value of 0xFFFF.

Reboot the machine and you'll be good to go.

It has always worked for me, and I've tested it on many computers, but my MacBook seems to have a problem.

In order to solve this problem, you need to find a Windows kernel global variable called nt!Kd_DEFAULT_Mask. This variable is responsible for showing the results in DbgView; it holds a mask whose exact bits I'm not aware of, so I just put 0xffffffff in it to simply make it show everything!

To do this, you need a Windows Local Kernel Debugging using Windbg.

  1. Open a Command Prompt window as Administrator. Enter bcdedit /debug on
  2. If the computer is not already configured as the target of a debug transport, enter bcdedit /dbgsettings local
  3. Reboot the computer.

After that, you need to open Windbg with Administrator (UAC) privileges, go to File > Kernel Debug > Local, press OK, and in your local Windbg session find nt!Kd_DEFAULT_Mask using the following command:

lkd> x nt!kd_Default_Mask
fffff801`f5211808 nt!Kd_DEFAULT_Mask = <no type information>

Now change its value to 0xffffffff.

lkd> eb fffff801`f5211808 ff ff ff ff
kd_DEFAULT_Mask

After that, you should see the results, and now you'll be good to go.

Remember, this is an essential step for the rest of the topic, because if we can't see any kernel details, then we can't debug.

DbgView

Detecting Hypervisor Support

Discovering support for VMX is the first thing you should do before enabling VT-x; this is covered in the Intel Software Developer's Manual, Volume 3C, Section 23.6 (DISCOVERING SUPPORT FOR VMX).

You can detect the presence of VMX using CPUID: if CPUID.1:ECX.VMX[bit 5] = 1, then VMX operation is supported.

First of all, we need to know whether we're running on an Intel-based processor or not; this can be determined by executing the CPUID instruction and checking for the vendor string "GenuineIntel".

The following function returns the vendor string from the CPUID instruction.

string GetCpuID()
{
	//Initialize used variables
	char SysType[13]; //Array consisting of 13 single bytes/characters
	string CpuID; //The string that will be used to add all the characters to

	//Starting coding in assembly language
	_asm
	{
		//Execute CPUID with EAX = 0 to get the CPU producer
		XOR EAX, EAX
		CPUID
		//MOV EBX to EAX and get the characters one by one by using shift out right bitwise operation.
		MOV EAX, EBX
		MOV SysType[0], al
		MOV SysType[1], ah
		SHR EAX, 16
		MOV SysType[2], al
		MOV SysType[3], ah
		//Get the second part the same way but these values are stored in EDX
		MOV EAX, EDX
		MOV SysType[4], al
		MOV SysType[5], ah
		SHR EAX, 16
		MOV SysType[6], al
		MOV SysType[7], ah
		//Get the third part
		MOV EAX, ECX
		MOV SysType[8], al
		MOV SysType[9], ah
		SHR EAX, 16
		MOV SysType[10], al
		MOV SysType[11], ah
		MOV SysType[12], 00
	}
	CpuID.assign(SysType, 12);
	return CpuID;
}

The last step is checking for the presence of VMX; you can check it using the following code:

bool VMX_Support_Detection()
{
	bool VMX = false;
	__asm
	{
		xor    eax, eax
		inc    eax
		cpuid
		bt     ecx, 0x5
		jc     VMXSupport
		VMXNotSupport:
		jmp    NopInstr
		VMXSupport:
		mov    VMX, 0x1
		NopInstr:
		nop
	}
	return VMX;
}

As you can see, it executes CPUID with EAX = 1, and if bit 5 of ECX (the 6th bit, counting from 1) is set, then VMX operation is supported. We can also perform the same check in the kernel driver.
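For instance, a minimal sketch of the same check using the MSVC __cpuid intrinsic (the helper name is mine, not yet part of the project), which also works inside a kernel driver:

#include <intrin.h>

BOOLEAN IsVmxSupported(void)
{
	int cpuInfo[4] = { 0 };

	__cpuid(cpuInfo, 1);                   // CPUID leaf 1
	return (cpuInfo[2] & (1 << 5)) != 0;   // ECX bit 5 = VMX
}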

All in all, our main code should be something like this:

int main()
{
	string CpuID;
	CpuID = GetCpuID();
	cout << "[*] The CPU Vendor is : " << CpuID << endl;
	if (CpuID == "GenuineIntel")
	{
		cout << "[*] The Processor virtualization technology is VT-x. \n";
	}
	else
	{
		cout << "[*] This program is not designed to run in a non-VT-x environment !\n";
		return 1;
	}
	if (VMX_Support_Detection())
	{
		cout << "[*] VMX Operation is supported by your processor .\n";
	}
	else
	{
		cout << "[*] VMX Operation is not supported by your processor .\n";
		return 1;
	}
	_getch();
	return 0;
}

The final result:

User-mode app

Enabling VMX Operation

If our processor supports VMX operation, then it's time to enable it. As I told you above, IRP_MJ_CREATE is the first function that should be used to start the operation.

From the Intel Software Developer's Manual (23.7 ENABLING AND ENTERING VMX OPERATION):

Before system software can enter VMX operation, it enables VMX by setting CR4.VMXE[bit 13] = 1. VMX operation is then entered by executing the VMXON instruction. VMXON causes an invalid-opcode exception (#UD) if executed with CR4.VMXE = 0. Once in VMX operation, it is not possible to clear CR4.VMXE. System software leaves VMX operation by executing the VMXOFF instruction. CR4.VMXE can be cleared outside of VMX operation after executing of VMXOFF.
VMXON is also controlled by the IA32_FEATURE_CONTROL MSR (MSR address 3AH). This MSR is cleared to zero when a logical processor is reset. The relevant bits of the MSR are:

  •  Bit 0 is the lock bit. If this bit is clear, VMXON causes a general-protection exception. If the lock bit is set, WRMSR to this MSR causes a general-protection exception; the MSR cannot be modified until a power-up reset condition. System BIOS can use this bit to provide a setup option for BIOS to disable support for VMX. To enable VMX support in a platform, BIOS must set bit 1, bit 2, or both, as well as the lock bit.
  •  Bit 1 enables VMXON in SMX operation. If this bit is clear, execution of VMXON in SMX operation causes a general-protection exception. Attempts to set this bit on logical processors that do not support both VMX operation and SMX operation cause general-protection exceptions.
  •  Bit 2 enables VMXON outside SMX operation. If this bit is clear, execution of VMXON outside SMX operation causes a general-protection exception. Attempts to set this bit on logical processors that do not support VMX operation cause general-protection exceptions.
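Based on the bits above, a driver can sanity-check (and, if the MSR is still unlocked, set) IA32_FEATURE_CONTROL before executing VMXON. The following is only a sketch; the constant names are mine:

#define IA32_FEATURE_CONTROL_MSR        0x3A
#define FEATURE_CONTROL_LOCK_BIT        (1ULL << 0)
#define FEATURE_CONTROL_VMXON_OUTSIDE   (1ULL << 2)

BOOLEAN IsVmxEnabledByBios(void)
{
	UINT64 FeatureControl = __readmsr(IA32_FEATURE_CONTROL_MSR);

	if ((FeatureControl & FEATURE_CONTROL_LOCK_BIT) == 0)
	{
		// Not locked yet: we are still allowed to enable VMXON outside SMX
		// ourselves and then lock the MSR.
		__writemsr(IA32_FEATURE_CONTROL_MSR,
		           FeatureControl | FEATURE_CONTROL_LOCK_BIT | FEATURE_CONTROL_VMXON_OUTSIDE);
		return TRUE;
	}

	// Locked by the BIOS: VMXON outside SMX only works if bit 2 was set.
	return (FeatureControl & FEATURE_CONTROL_VMXON_OUTSIDE) != 0;
}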

Setting CR4 VMXE Bit

Do you remember the previous part, where I told you how to create inline assembly in a Windows Driver Kit x64 project?

Now you should create some function to perform this operation in assembly.

In your header file (in my case, Source.h), declare your function:

extern void inline Enable_VMX_Operation(void);

Then, in the assembly file (in my case, SourceAsm.asm), add this function (which sets bit 13 of CR4, the 14th bit counting from 1).

Enable_VMX_Operation PROC PUBLIC
	push rax			; Save the state

	xor rax, rax		; Clear the RAX
	mov rax, cr4
	or rax, 02000h		; Set the 14th bit
	mov cr4, rax

	pop rax				; Restore the state
	ret
Enable_VMX_Operation ENDP

Also, declare your function as PUBLIC at the top of SourceAsm.asm.

PUBLIC Enable_VMX_Operation

The above function should be called in DrvCreate:

NTSTATUS DrvCreate(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp)
{
	Enable_VMX_Operation();	// Enabling VMX Operation
	DbgPrint("[*] VMX Operation Enabled Successfully !");
	return STATUS_SUCCESS;
}

Finally, you should call the following function from user mode:

HANDLE hWnd = CreateFile(L"\\\\.\\MyHypervisorDevice",
	GENERIC_READ | GENERIC_WRITE,
	FILE_SHARE_READ | FILE_SHARE_WRITE,
	NULL, /// lpSecurityAttributes
	OPEN_EXISTING,
	FILE_ATTRIBUTE_NORMAL | FILE_FLAG_OVERLAPPED,
	NULL); /// lpTemplateFile

If you see the following result, then you completed the second part successfully.

Final Show

Important Note: Please note that your .asm file should have a different name from your driver's main file (the .c file). For example, if your driver file is "Source.c", then using the name "Source.asm" causes weird linking errors in Visual Studio; you should change the name of your .asm file to something like "SourceAsm.asm" to avoid these kinds of linker errors.

Conclusion

In this part, you learned the basics you need to know in order to create a Windows Driver Kit program, and then we took the first step toward our virtual environment, building a cornerstone for the rest of the parts.

In the third part, we'll go deeper into Intel VT-x and make our driver even more advanced, so stay tuned; it'll be ready soon!

The source code of this topic is available at :

[https://github.com/SinaKarvandi/Hypervisor-From-Scratch/]

References

[1] Intel® 64 and IA-32 architectures software developer's manual combined volumes 3 (https://software.intel.com/en-us/articles/intel-sdm)

[2] IRP_MJ_DEVICE_CONTROL (https://docs.microsoft.com/en-us/windows-hardware/drivers/kernel/irp-mj-device-control)

[3]  Windows Driver Kit Samples (https://github.com/Microsoft/Windows-driver-samples/blob/master/general/ioctl/wdm/sys/sioctl.c)

[4] Setting Up Local Kernel Debugging of a Single Computer Manually (https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/setting-up-local-kernel-debugging-of-a-single-computer-manually)

[5] Obtain processor manufacturer using CPUID (https://www.daniweb.com/programming/software-development/threads/112968/obtain-processor-manufacturer-using-cpuid)

[6] Plug and Play Minor IRPs (https://docs.microsoft.com/en-us/windows-hardware/drivers/kernel/plug-and-play-minor-irps)

[7] _FAST_IO_DISPATCH structure (https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/wdm/ns-wdm-_fast_io_dispatch)

[8] Filtering IRPs and Fast I/O (https://docs.microsoft.com/en-us/windows-hardware/drivers/ifs/filtering-irps-and-fast-i-o)

[9] Windows File System Filter Driver Development (https://www.apriorit.com/dev-blog/167-file-system-filter-driver)

Hypervisor From Scratch – Part 1: Basic Concepts & Configure Testing Environment

( Original text by Sinaei )

Hello everyone!

Welcome to the first part of a multi-part series of tutorials called “Hypervisor From Scratch”. As the name implies, this course contains technical details to create a basic Virtual Machine based on hardware virtualization. If you follow the course, you’ll be able to create your own virtual environment and you’ll get an understanding of how VMWare, VirtualBox, KVM and other virtualization softwares use processors’ facilities to create a virtual environment.

Introduction

Both Intel and AMD support virtualization in their modern CPUs. Intel introduced VT-x technology, previously codenamed "Vanderpool", on November 13, 2005, in the Pentium 4 series. The CPU flag for VT-x capability is "vmx", which stands for Virtual Machine eXtensions.

AMD, on the other hand, developed its first generation of virtualization extensions under the codename “Pacifica“, and initially published them as AMD Secure Virtual Machine (SVM), but later marketed them under the trademark AMD Virtualization, abbreviated AMD-V.

There are two types of hypervisors. A type 1 hypervisor is called a "bare metal" or "native" hypervisor because it runs directly on a bare-metal physical server; a type 1 hypervisor has direct access to the hardware. With a type 1 hypervisor, there is no operating system to load underneath the hypervisor.

Contrary to a type 1 hypervisor, a type 2 hypervisor loads inside an operating system, just like any other application. Because the type 2 hypervisor has to go through the operating system and is managed by the OS, the type 2 hypervisor (and its virtual machines) will run less efficiently (slower) than a type 1 hypervisor.

Even though most of the concepts of virtualization are the same, you need different considerations for VT-x and AMD-V. The rest of these tutorials mainly focus on VT-x because Intel CPUs are more popular and more widely used. In my opinion, AMD describes virtualization more clearly in its manuals, but Intel somehow confuses its readers, especially in the virtualization documentation.

Hypervisor and Platform 

These concepts are platform independent; you can easily run the same routines on both Linux and Windows and expect the same behavior from the CPU, but I prefer to use Windows as it's more easily debuggable (at least for me), though I'll try to give some examples for Linux whenever needed. Personally, since the Linux kernel manages faults like #GP and other exceptions and tries to avoid a kernel panic and keep the system up, it's better for testing something like a hypervisor or any CPU-related work. On the other hand, Windows never tries to manage unexpected exceptions, which leads to a blue screen of death whenever you don't handle your exceptions, so you might get lots of BSODs. By the way, you'd better test it on both platforms (and other platforms too).

Finally, I might (and definitely will) make mistakes, such as a wrong implementation or misinformation, or forget to mention an important detail, so I apologize in advance for any faults, and I'll be glad for every comment that points out my mistakes in the technical information.

That’s enough, Let’s get started!

The Tools you’ll need

You should have Visual Studio with the WDK installed. You can get the Windows Driver Kit (WDK) here.

The best way to debug Windows and any kernel-mode work is using Windbg, which is available in the Windows SDK here. (If you installed the WDK with the default installation options, then you probably installed the WDK and SDK together, so you can skip this step.)

You should be able to debug your OS (in this case, Windows) using Windbg; more information is available here.

Hex-rays IDA Pro is an important part of this tutorial.

OSR Driver Loader, which can be downloaded here; we use this tool in order to load our driver into the Windows machine.

SysInternals DebugView for printing the DbgPrint() results.

Chameleon

Creating a Test Environment

Almost all of the code in this tutorial has to run at kernel level, so you must set up either a Linux kernel module or a Windows Driver Kit (WDK) module. As configuring a VMM involves lots of assembly code, you should know how to run inline assembly within your kernel project. In Linux you don't need to do anything special, but in the case of Windows, the WDK no longer supports inline assembly in an x64 environment, so if you haven't worked through this problem before, you might struggle creating a simple x64 project with assembly. Don't worry; in one of my posts I explained it step by step, so I highly recommend reading that topic to solve the problem before continuing with the rest of this part.

Now it's time to create a driver!

There is a good article here if you want to start with Windows Driver Kit (WDK).

The whole driver is this :

#include <ntddk.h>
#include <wdf.h>
#include <wdm.h>

extern void inline AssemblyFunc1(void);
extern void inline AssemblyFunc2(void);

VOID DrvUnload(PDRIVER_OBJECT  DriverObject);
NTSTATUS DriverEntry(PDRIVER_OBJECT  pDriverObject, PUNICODE_STRING  pRegistryPath);

#pragma alloc_text(INIT, DriverEntry)
#pragma alloc_text(PAGE, DrvUnload)

NTSTATUS DriverEntry(PDRIVER_OBJECT  pDriverObject, PUNICODE_STRING  pRegistryPath)
{
	NTSTATUS NtStatus = STATUS_SUCCESS;
	UINT64 uiIndex = 0;
	PDEVICE_OBJECT pDeviceObject = NULL;
	UNICODE_STRING usDriverName, usDosDeviceName;

	DbgPrint("DriverEntry Called.");

	RtlInitUnicodeString(&usDriverName, L"\\Device\\MyHypervisor");
	RtlInitUnicodeString(&usDosDeviceName, L"\\DosDevices\\MyHypervisor");

	NtStatus = IoCreateDevice(pDriverObject, 0, &usDriverName, FILE_DEVICE_UNKNOWN, FILE_DEVICE_SECURE_OPEN, FALSE, &pDeviceObject);

	if (NtStatus == STATUS_SUCCESS)
	{
		pDriverObject->DriverUnload = DrvUnload;
		pDeviceObject->Flags |= IO_TYPE_DEVICE;
		pDeviceObject->Flags &= (~DO_DEVICE_INITIALIZING);
		IoCreateSymbolicLink(&usDosDeviceName, &usDriverName);
	}

	return NtStatus;
}

VOID DrvUnload(PDRIVER_OBJECT  DriverObject)
{
	UNICODE_STRING usDosDeviceName;
	DbgPrint("DrvUnload Called\r\n");
	RtlInitUnicodeString(&usDosDeviceName, L"\\DosDevices\\MyHypervisor");
	IoDeleteSymbolicLink(&usDosDeviceName);
	IoDeleteDevice(DriverObject->DeviceObject);
}

AssemblyFunc1 and AssemblyFunc2 are two external functions that are defined as x64 assembly code.

Our driver needs to register a device so that we can communicate with our virtual environment from user-mode code. On the other hand, I defined DrvUnload, which uses the Windows driver unload feature: you can easily unload your driver and remove the device, then reload it and create a new device.

The following code is responsible for creating a new device :

	RtlInitUnicodeString(&usDriverName, L"\\Device\\MyHypervisor");
	RtlInitUnicodeString(&usDosDeviceName, L"\\DosDevices\\MyHypervisor");

	NtStatus = IoCreateDevice(pDriverObject, 0, &usDriverName, FILE_DEVICE_UNKNOWN, FILE_DEVICE_SECURE_OPEN, FALSE, &pDeviceObject);

	if (NtStatus == STATUS_SUCCESS)
	{
		pDriverObject->DriverUnload = DrvUnload;
		pDeviceObject->Flags |= IO_TYPE_DEVICE;
		pDeviceObject->Flags &= (~DO_DEVICE_INITIALIZING);
		IoCreateSymbolicLink(&usDosDeviceName, &usDriverName);
	}

If you use Windows, then you should disable Driver Signature Enforcement to load your driver; that's because Microsoft prevents unverified code from running in the Windows kernel (Ring 0).

To do this, press and hold the shift key and restart your computer. You should see a new Window, then

  1. Click Advanced options.
  2. On the new Window Click Startup Settings.
  3. Click on Restart.
  4. On the Startup Settings screen press 7 or F7 to disable driver signature enforcement.

The last thing I remember is enabling Windows debugging messages through the registry; this way you can get DbgPrint() results through Sysinternals DebugView.

Just perform the following steps:

In regedit, add a key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Debug Print Filter

Under that key, add a DWORD value named IHVDRIVER with a value of 0xFFFF.

Reboot the machine and you'll be good to go.

Some thoughts before the start

There are some keywords that will be used frequently in the rest of this series, and you should know about them (most of the definitions are derived from the Intel Software Developer's Manual, Volume 3C).

Virtual Machine Monitor (VMM): VMM acts as a host and has full control of the processor(s) and other platform hardware. A VMM is able to retain selective control of processor resources, physical memory, interrupt management, and I/O.

Guest Software: Each virtual machine (VM) is a guest software environment.

VMX Root Operation and VMX Non-root Operation: A VMM will run in VMX root operation and guest software will run in VMX non-root operation.

VMX transitions: Transitions between VMX root operation and VMX non-root operation.

VM entries: Transitions into VMX non-root operation.

Extended Page Table (EPT): A modern mechanism which uses a second layer for converting the guest physical address to host physical address.

VM exits: Transitions from VMX non-root operation to VMX root operation.

Virtual machine control structure (VMCS): a data structure in memory that exists exactly once per VM and is managed by the VMM. With every change of the execution context between different VMs, the VMCS is restored for the current VM, defining the state of the VM's virtual processor; the VMM controls its guest software using the VMCS.

The VMCS consists of six logical groups:

  •  Guest-state area: Processor state saved into the guest state area on VM exits and loaded on VM entries.
  •  Host-state area: Processor state loaded from the host state area on VM exits.
  •  VM-execution control fields: Fields controlling processor operation in VMX non-root operation.
  •  VM-exit control fields: Fields that control VM exits.
  •  VM-entry control fields: Fields that control VM entries.
  •  VM-exit information fields: Read-only fields to receive information on VM exits describing the cause and the nature of the VM exit.

I found a great piece of work which illustrates the VMCS; the PDF version is also available here.

VMCS

Don't worry about the fields; I'll explain most of them clearly in later parts. Just remember that the VMCS structure varies between different versions of the processor.

VMX Instructions 

VMX introduces the following new instructions.

Intel/AMD Mnemonic - Description
INVEPT    - Invalidate Translations Derived from EPT
INVVPID   - Invalidate Translations Based on VPID
VMCALL    - Call to VM Monitor
VMCLEAR   - Clear Virtual-Machine Control Structure
VMFUNC    - Invoke VM function
VMLAUNCH  - Launch Virtual Machine
VMRESUME  - Resume Virtual Machine
VMPTRLD   - Load Pointer to Virtual-Machine Control Structure
VMPTRST   - Store Pointer to Virtual-Machine Control Structure
VMREAD    - Read Field from Virtual-Machine Control Structure
VMWRITE   - Write Field to Virtual-Machine Control Structure
VMXOFF    - Leave VMX Operation
VMXON     - Enter VMX Operation

Life Cycle of VMM Software

The following items summarize the life cycle of a VMM and its guest software, as well as the interactions between them:
    • Software enters VMX operation by executing a VMXON instruction.
    • Using VM entries, a VMM can then turn guests into VMs (one at a time). The VMM effects a VM entry using instructions VMLAUNCH and VMRESUME; it regains control using VM exits.
    • VM exits transfer control to an entry point specified by the VMM. The VMM can take action appropriate to the cause of the VM exit and can then return to the VM using a VM entry.
    • Eventually, the VMM may decide to shut itself down and leave VMX operation. It does so by executing the VMXOFF instruction.

That’s enough for now!

In this part, I explained the general keywords that you should be aware of, and we created a simple lab for our future tests. In the next part, I will explain how to enable VMX on your machine using the facilities we created above; then we'll survey the rest of the virtualization basics, so make sure to check the blog for the next part.

References

[1] Intel® 64 and IA-32 architectures software developer's manual combined volumes 3 (https://software.intel.com/en-us/articles/intel-sdm)

[2] Hardware-assisted Virtualization (http://www.cs.cmu.edu/~412/lectures/L04_VTx.pdf)

[3] Writing Windows Kernel Driver (https://resources.infosecinstitute.com/writing-a-windows-kernel-driver/)

[4] What Is a Type 1 Hypervisor? (http://www.virtualizationsoftware.com/type-1-hypervisors/)

[5] Intel / AMD CPU Internals (https://github.com/LordNoteworthy/cpu-internals)

[6] Windows 10: Disable Signed Driver Enforcement (https://ph.answers.acer.com/app/answers/detail/a_id/38288/~/windows-10%3A-disable-signed-driver-enforcement)

[7] Instruction Set Mapping » VMX Instructions (https://docs.oracle.com/cd/E36784_01/html/E36859/gntbx.html)

CVE-2018-5407 (a flaw in execution engine sharing on SMT (e.g., Hyper-Threading) architectures of Intel processors) PoC

Summary

This is a proof-of-concept exploit of the PortSmash microarchitecture attack, tracked by CVE-2018-5407.

Alt text

Setup

Prerequisites

A CPU featuring SMT (e.g. Hyper-Threading) is the only requirement.

This exploit code should work out of the box on Skylake and Kaby Lake. For other SMT architectures, customizing the strategies and/or waiting times in spy is likely needed.

OpenSSL

Download and install OpenSSL 1.1.0h or lower:

cd /usr/local/src
wget https://www.openssl.org/source/openssl-1.1.0h.tar.gz
tar xzf openssl-1.1.0h.tar.gz
cd openssl-1.1.0h/
export OPENSSL_ROOT_DIR=/usr/local/ssl
./config -d shared --prefix=$OPENSSL_ROOT_DIR --openssldir=$OPENSSL_ROOT_DIR -Wl,-rpath=$OPENSSL_ROOT_DIR/lib
make -j8
make test
sudo checkinstall --strip=no --stripso=no --pkgname=openssl-1.1.0h-debug --provides=openssl-1.1.0h-debug --default make install_sw

If you use a different path, you’ll need to make changes to Makefile and sync.sh.

Tooling

freq.sh

Turns off frequency scaling and TurboBoost.

sync.sh

Sync trace through pipes. It has two victims, one of which should be active at a time:

  1. The stock openssl running dgst command to produce a P-384 signature.
  2. A harness ecc that calls scalar multiplication directly with a known key. (Useful for profiling.)

The script will generate a P-384 key pair in secp384r1.pem if it does not already exist.

The script outputs data.bin which is what openssl dgst signed, and you should be able to verify the ECDSA signature data.sig afterwards with

openssl dgst -sha512 -verify secp384r1.pem -signature data.sig data.bin

In the ecc tool case, data.bin and secp384r1.pem are meaningless and data.sig is not created.

For the taskset commands in sync.sh, the cores need to be two logical cores of the same physical core; sanity check with

$ grep '^core id' /proc/cpuinfo
core id		: 0
core id		: 1
core id		: 2
core id		: 3
core id		: 0
core id		: 1
core id		: 2
core id		: 3

So the script is currently configured for logical cores 3 and 7 that both map to physical core 3 (core_id).

spy

Measurement process that outputs measurements in timings.bin. To change the spy strategy, check the port defines in spy.h. Only one strategy should be active at build time.

Note that timings.bin is actually raw clock cycle counter values, not latencies. Look in parse_raw_simple.py to understand the data format if necessary.

ecc

Victim harness for running OpenSSL scalar multiplication with known inputs. Example:

./ecc M 4 deadbeef0123456789abcdef00000000c0ff33

Will execute 4 consecutive calls to EC_POINT_mul with the given hex scalar.

parse_raw_simple.py

Quick and dirty hack to view 1D traces. The top plot is the raw trace. Everything below is a different digital filter of the raw trace for viewing purposes. Zoom and pan are your friends here.

You might have to adjust the CEIL variable if the plots are too aggressively clipped.

Python packages:

sudo apt-get install python-numpy python-matplotlib

Usage

Turn off frequency scaling:

./freq.sh

Make sure everything builds:

make clean
make

Take a measurement:

./sync.sh

View the trace:

python parse_raw_simple.py timings.bin

You can play around with one victim at a time in sync.sh. Sample output for the openssl dgst victim is in parse_raw_simple.png.

Credits

  • Alejandro Cabrera Aldaya (Universidad Tecnológica de la Habana (CUJAE), Habana, Cuba)
  • Billy Bob Brumley (Tampere University of Technology, Tampere, Finland)
  • Sohaib ul Hassan (Tampere University of Technology, Tampere, Finland)
  • Cesar Pereida García (Tampere University of Technology, Tampere, Finland)
  • Nicola Tuveri (Tampere University of Technology, Tampere, Finland)

Apple T2 security chip on new Macbook prevents software from using the mic to eavesdrop

( Original text by BY  )

The new Apple MacBook is equipped with a T2 security chip that uses a hardware-disconnect design to automatically disable the microphone when necessary, such as when the laptop lid is closed. It is reported that the Apple T2 security chip is bundled with the Secure Enclave coprocessor, which is designed to support macOS's Apple File System (APFS) encrypted storage, Touch ID, secure boot and more.

In addition, the chip has a number of controllers that integrate management functions for the system, SSD, audio, and image signal processors. As described in the Apple T2 Chip Security Overview document published in October 2018:

“All Mac portables with the Apple T2 Security Chip feature a hardware disconnect that ensures that the microphone is disabled whenever the lid is closed.”

As a result, when the MacBook lid is closed, even software running with kernel or root privileges cannot use the microphone to eavesdrop on users. The webcam, however, is not disconnected from the hardware when the lid is closed. Apple said: "The camera is not disconnected in hardware because its field of view is completely obstructed with the lid closed." This hardware-based protection makes it extremely difficult for malicious attackers to eavesdrop.

 

What are some fun C++ tricks

This one applies to all languages so far:

a=a+b-(b=a);

A very compact way of swapping a and b without a temporary; note, though, that writing to b and reading it in the same expression is undefined behavior in standard C++, so std::swap is the safer choice.

#include <iostream>
#include <string>

using namespace std;

int main (int argc, char* argv[]) {
float a; cout << "A:"; cin >> a;
float b; cout << "B:"; cin >> b;

cout << "---------------------" << endl;
cout << "A=" << a << ", B=" << b << endl;
a=a+b-(b=a);
cout << "A=" << a << ", B=" << b << endl;
exit(0);
}


void Send(int * to, const int* from, const int count)
{
   int n = (count+7) / 8;
   switch(count%8)
   {
      case 0:
         do
            {
               *to++ = *from++;
      case 7:
               *to++ = *from++;
      case 6:
               *to++ = *from++;
      case 5:
               *to++ = *from++;
      case 4:
               *to++ = *from++;
      case 3:
               *to++ = *from++;
      case 2:
               *to++ = *from++;
      case 1:
               *to++ = *from++;
              } while (--n>0);
    }
}


Preprocessor Tricks

The arraysize macro used in Chrome’s source:

  1. template <typename T, size_t N>
  2. char (&ArraySizeHelper(T (&array)[N]))[N];
  3. #define arraysize(array) (sizeof(ArraySizeHelper(array)))

This is better than the ordinary sizeof(array)/sizeof(array[0]) because it raises compilation errors when the passed in array is just a pointer, or a null pointer whereas the simpler macro silently returns a useless value. For a detailed example, see PVS-Studio vs Chromium.

Predefined Macros:

  1. #define expect(expr) if(!(expr)) cerr << "Assertion " << #expr \
  2. " failed at " << __FILE__ << ":" << __LINE__ << endl;
  3. #define stringify(x) #x
  4. #define tostring(x) stringify(x)
  5. #define MAGIC_CONSTANT 314159
  6. cout << "Value of MAGIC_CONSTANT=" << tostring(MAGIC_CONSTANT);

The tostring macro is a common trick used to expand macro values inside other macros. The Linux kernel uses a lot of macro tricks.

Using iterators for quickly dumping the contents of a container:

  1. #define dbg(v) copy(v.begin(), v.end(), ostream_iterator<typeof(*v.begin())>(cout, " "))

Sadly, this doesn’t work for pair types so maps are out of scope.
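One workaround (my own addition, not from the original answer) is to give ostream a way to print a pair and dump the container with a plain range-based loop instead of ostream_iterator:

#include <iostream>
#include <map>
#include <string>
using namespace std;

// Teach ostream how to print a pair, e.g. map elements.
template <typename A, typename B>
ostream& operator<<(ostream& out, const pair<A, B>& p) {
    return out << "(" << p.first << "," << p.second << ")";
}

#define dbgc(v) { for (const auto& e : (v)) cout << e << " "; cout << "\n"; }

int main() {
    map<int, string> m = {{1, "one"}, {2, "two"}};
    dbgc(m);   // prints: (1,one) (2,two)
}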

Template Voodoo

Recursion: You can specialize your class templates for certain cases, so you can write down the base case of a recursion and then define the generic template as a recursive combination of base cases.
For example, the following code calculates the values of the Choose function at compile time:

  1. template<unsigned n, unsigned r>
  2. struct Choose {
  3. enum {value = (n * Choose<n-1, r-1>::value) / r};
  4. };
  5. template<unsigned n>
  6. struct Choose<n, 0> {
  7. enum {value = 1};
  8. };
  9. int main() {
  10. // Prints 56
  11. cout << Choose<8, 3>::value;
  12. // Compile time values can be used as array sizes
  13. int x[Choose<25, 3>::value];
  14. }

More interesting examples at C++ Programming/Templates/Template Meta-Programming

Mostly Painless Memory Management and RAII

With certain restrictions, you can create templates for "smart" pointers that automatically deallocate resources when they go out of scope or when the reference count goes to 0. This is basically done by overloading operator * and operator =. Based on your use case, you can transfer ownership when operator = is used, or update reference counts.
See Smart Pointer Guidelines — The Chromium Projects and http://code.google.com/searchfra…

Argument-dependent name lookup aka Koenig lookup

When a function call cannot be matched to a name in the current namespace, other namespaces can be searched for a matching signature. This is why std::cout << "Hi"; works even though operator<< is defined in the std namespace.
See Argument-dependent name lookup
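A tiny self-contained illustration (mine, with a made-up mylib namespace): print is found without qualification because one of its arguments lives in mylib, so that namespace is searched as well.

#include <iostream>

namespace mylib {
    struct Widget { int id; };

    void print(const Widget& w) {
        std::cout << "Widget #" << w.id << std::endl;
    }
}

int main() {
    mylib::Widget w{42};
    print(w);   // no mylib:: needed, thanks to argument-dependent lookup
}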


auto keyword
In C++ you can use auto to iterate over map, vector, set, etc. It specifies that the type of the variable being declared will be automatically deduced from its initializer; for functions, the return type will be deduced from the return statements.
So instead of :

  1. vector<int> vs;
  2. vs.push_back(4),vs.push_back(7),vs.push_back(9),vs.push_back(10);
  3. for (vector<int>::iterator it = vs.begin(); it != vs.end(); ++it)
  4. cout << *it << ' '; cout << '\n';

just use :

  1. vector<int> vs;
  2. vs.push_back(4),vs.push_back(7),vs.push_back(9),vs.push_back(10);
  3. for (auto it: vs)
  4. cout << it << ' '; cout << endl;
  5. //you can also change the values using
  6. vector<int> vs;
  7. vs.push_back(4),vs.push_back(7),vs.push_back(9),vs.push_back(10);
  8. for (auto& it: vs) it*=3;
  9. for (auto it: vs)
  10. cout << it << ' '; cout << endl;

Declaring variable

  1. template<class A, class B>
  2. auto mult(A x, B y) -> decltype(x * y){
  3. return x * y;
  4. }
  5. int main(){
  6. auto a = 3 * 2; //the return type is the type of operator (x*y)
  7. cout<<a<<endl;
  8. return 0;
  9. }

The Power of Strings 

  1. int n,m;
  2. cin >> n >> m;
  3. int matrix[n+1][m+1];
  4. //This loop
  5. for(int i = 1; i <= n; i++) {
  6. for(int j = 1; j <= m; j++)
  7. cout << matrix[i][j] << " ";
  8. cout << "\n";
  9. }
  10. // is equivalent to this
  11. for(int i = 1; i <= n; i++)
  12. for(int j = 1; j <= m; j++)
  13. cout << matrix[i][j] << " \n"[j == m];

because " \n" is a char*, " \n"[0] is ' ' and " \n"[1] is '\n'.

Some Hidden function
__gcd(x, y)
you don’t need to code gcd function.

  1. cout<<__gcd(54,48)<<endl; //return 6

__builtin_ffs(x)
This function returns 1 + the index of the least significant 1-bit of x. If x == 0, it returns 0. Here x is an int; the variant with suffix 'l' takes a long argument and the one with suffix 'll' takes a long long argument.
e.g., __builtin_ffs(10) = 2, because 10 is '...1010' in base 2, the first 1-bit from the right is at index 1 (0-based), and the function returns 1 + index.

Pairing tricks 

  1. pair<int, int> p;
  2. //This
  3. p = make_pair(1, 2);
  4. //equivalent to this
  5. p = {1, 2};
  6. //So
  7. pair<int, pair<char, long long> > p;
  8. //now easier
  9. p = {1, {'a', 2ll}};

Super include 
Simply use
#include <bits/stdc++.h>
This header includes many of the libraries we need, like algorithm, iostream, vector and many more. Believe me, you don't need to include anything else 😀 !!

Smart Pointers

Using smart pointers, we can make pointers work in such a way that we don't need to explicitly call delete. A smart pointer is a wrapper class over a pointer with operators like * and -> overloaded. The objects of a smart pointer class look like pointers, but can do many things that a normal pointer can't, like automatic destruction (yes, we don't have to explicitly use delete), reference counting and more.
The idea is to make a class with a pointer, a destructor and overloaded operators like * and ->. Since the destructor is automatically called when an object goes out of scope, the dynamically allocated memory is automatically deleted (or the reference count can be decremented).
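A bare-bones sketch of that idea (illustrative only; it deliberately forbids copying and has no reference counting, unlike a real shared_ptr):

#include <iostream>
using namespace std;

template <typename T>
class SmartPtr {
    T* _ptr;
public:
    explicit SmartPtr(T* p = nullptr) : _ptr(p) {}
    ~SmartPtr() { delete _ptr; }            // automatic cleanup on scope exit

    T& operator*()  { return *_ptr; }
    T* operator->() { return _ptr; }

    SmartPtr(const SmartPtr&) = delete;     // keep the sketch simple
    SmartPtr& operator=(const SmartPtr&) = delete;
};

int main() {
    SmartPtr<int> p(new int(5));
    *p += 1;
    cout << *p << endl;   // prints 6; no explicit delete needed
}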


You can put URIs in your C++ code and the compiler will not throw any error.

  1. #include <iostream>
  2. int main() {
  3. using namespace std;
  4. http://www.google.com
  5. int x = 5;
  6. cout << x;
  7. }

Explanation: Any identifier followed by a : becomes a (goto) label in C++. Anything after // becomes a comment, so in the code above, http is a label and //www.google.com is a comment. The compiler might throw a warning, however, since the label is unused.


Don’t Confuse Assign (=) with Test-for-Equality (==).

This one is elementary, although it might have baffled Sherlock Holmes. The following looks innocent and would compile and run just fine if C++ were more like BASIC:

if (a = b)
cout << "a is equal to b.";

Because this looks so innocent, it creates logic errors requiring hours to track down within a large program unless you’re on the lookout for it. (So when a program requires debugging, this is the first thing I look for.) In C and C++, the following is not a test for equality:

a = b

What this does, of course, is assign the value of b to a and then evaluate to the value assigned.
The problem is that a = b does not generally evaluate to a reasonable true/false condition—with one major exception I’ll mention later. But in C and C++, any numeric value can be used as a condition for “if” or “while.
Assume that a and b are set to 0. The effect of the previously-shown if statement is to place the value of b into a; then the expression a = b evaluates to 0. The value 0 equates to false. Consequently, a and b are equal, but exactly the wrong thing gets printed:

if (a = b)     // THIS ENSURES a AND b ARE EQUAL...
cout << "a and b are equal.";
else
cout << "a and b are not equal.";  // BUT THIS GETS PRINTED!

The solution, of course, is to use test-for-equality when that’s what you want. Note the use of double equal signs (==). This is correct inside a condition.

// CORRECT VERSION:
if (a == b)
cout << «a and b are equal.»;


The most amazing trick I found was in the status of someone's TopCoder profile:
Code:
#include <cstdio>
double m[]= {7709179928849219.0, 771};
int main()
{
m[1]--?m[0]*=2,main():printf((char*)m);
}
You will be seriously amazed by the output; here it is:
C++Sucks

I tried to analyze the code and came up with a reason but not an exact explanation, so I asked about it on Stack Overflow; you can go through the explanation here:
Concept behind this 4 lines tricky C++ code
Read it and you will learn something you wouldn’t have even thought of… 😉


  1. static const unsigned char BitsSetTable256[256] =
  2. {
  3. # define B2(n) n, n+1, n+1, n+2
  4. # define B4(n) B2(n), B2(n+1), B2(n+1), B2(n+2)
  5. # define B6(n) B4(n), B4(n+1), B4(n+1), B4(n+2)
  6. B6(0), B6(1), B6(1), B6(2)
  7. };
  8. unsigned int v; // count the number of bits set in 32-bit value v
  9. unsigned int c; // c is the total bits set in v
  10. // Option 1:
  11. c = BitsSetTable256[v & 0xff] +
  12. BitsSetTable256[(v >> 8) & 0xff] +
  13. BitsSetTable256[(v >> 16) & 0xff] +
  14. BitsSetTable256[v >> 24];
  15. // Option 2:
  16. unsigned char * p = (unsigned char *) &v;
  17. c = BitsSetTable256[p[0]] +
  18. BitsSetTable256[p[1]] +
  19. BitsSetTable256[p[2]] +
  20. BitsSetTable256[p[3]];
  21. // To initially generate the table algorithmically:
  22. BitsSetTable256[0] = 0;
  23. for (int i = 0; i < 256; i++)
  24. {
  25. BitsSetTable256[i] = (i & 1) + BitsSetTable256[i / 2];
  26. }

 

 

  1. float Q_rsqrt( float number )
  2. {
  3. long i;
  4. float x2, y;
  5. const float threehalfs = 1.5F;
  6. x2 = number * 0.5F;
  7. y = number;
  8. i = * ( long * ) &y; // evil floating point bit level hacking
  9. i = 0x5f3759df - ( i >> 1 ); // what the fuck?
  10. y = * ( float * ) &i;
  11. y = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration
  12. // y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed
  13. return y;
  14. }

Iteration: 

  1. #define FOR(i,n) for(int (i)=0;(i)<(n);(i)++)
  2. #define FORR(i,a,b) for(int (i)=(a);(i)<(b);(i)++)
  3. //reverse
  4. #define REV(i,n) for(int (i)=(n)-1;(i)>=0;(i)--)

Handy way to use it like this. 

  1. typedef long long int int64;
  2. typedef unsigned long long int uint64;

FastIO for +ve integers.

    1. inline void frint(int *a){
    2. register char c=0;while (c<33) c=getchar_unlocked();*a=0;
    3. while (c>33){*a=*a*10+c-'0';c=getchar_unlocked();}
    4. }

Try This….

#include <iostream>
using namespace std;
int main()
{
int a,b,c;
int count = 1;
for (b=c=10;a="- FIGURE?, UMKC,XYZHello Folks,\
TFy!QJu ROo TNn(ROo)SLq SLq ULo+\
UHs UJq TNn*RPn/QPbEWS_JSWQAIJO^\
NBELPeHBFHT}TnALVlBLOFAkHFOuFETp\
HCStHAUFAgcEAelclcn^r^r\\tZvYxXy\
T|S~Pn SPm SOn TNn ULo0ULo#ULo-W\
Hq!WFs XDt!" [b+++21]; )
for(; a-- > 64 ; )
putchar ( ++c=='Z' ? c = c/ 9:33^b&1);
return 0;
}


I think one of the coolest tricks of all time is defining an abstract base class in C++, inheriting from it in Python, and passing it back to C++ to call.
It actually works:

  1. struct Interface{
  2. virtual int foo()const=0;
  3. virtual ~Interface(){}
  4. };
  5. void call(Interface const& f){
  6. std::cout<<f.foo()<<std::endl;
  7. }
  8. struct InterfaceWrap final: Interface, boost::python::wrapper<Interface>
  9. {
  10. int foo() const final
  11. {
  12. return this->get_override("foo")();
  13. }
  14. };
  15. BOOST_PYTHON_MODULE(interface){
  16. using namespace boost::python;
  17. class_<Interface ,boost ::noncopyable,boost::shared_ptr<Interface>>("_InterfaceCpp",no_init)
  18. .def("foo",&Interface::foo)
  19. ;
  20. class_<InterfaceWrap ,bases<Interface>,boost::shared_ptr<InterfaceWrap>>("Interface",init<>())
  21. ;
  22. def("call",&call);
  23. }

and then

  1. import interface as i # C++ code
  2. class impl(i.Interface):#inherit from C++ class
  3. def __init__(self):
  4. i.Interface.__init__(self)
  5. def foo(self):
  6. return 100
  7. i.call(impl())#call C++ function with Python derived class

This does exactly what you think it should do.


void qsort ( void * base, size_t num, size_t size, int ( * compar ) ( const void *, const void * ) )

base Pointer to the first element of the array to be sorted.

num Number of elements in the array pointed by base. size_t is an unsigned integral type.

size Size in bytes of each element in the array. size_t is an unsigned integral type.

compar Function that compares two elements. This function is called repeatedly by qsort to compare two elements. It shall follow the following prototype:

int compar ( const void * elem1, const void * elem2 );

Taking a pointer to two pointers as arguments (both type-casted to const void*). The function should compare the data pointed by both: if they match in ranking, the function shall return zero; if elem1 goes before elem2, it shall return a negative value; and if it goes after, a positive value.

Eg :

int values[] = { 40, 10, 100, 90, 20, 25 };

int compare (const void * a, const void * b) { return ( *(int*)a - *(int*)b ); }

int main () {
int n;
qsort (values, 6, sizeof(int), compare);
for (n=0; n<6; n++) printf ("%d ",values[n]); return 0; }


Partial template specialization

C++11 has this cool function get<J> which can be used to access the first and second member of a pair, with a different syntax:

  1. std::pair < std::string, double > pr ( «pi», 3.14 );
  2. std::cout << std::get < 0 > ( pr ); // outputs «pi»
  3. std::cout << std::get < 1 > ( pr ); // outputs 3.14

Note that this is a function and not a function object or a member function.

I do not find it trivial to write a function

  • with three template types template < size_t J, class T1, class T2 >
  • which can get std::pair < T1, T2 > as an argument
  • and outputs pr.first if the template value J is 0
  • and outputs pr.second if the template value J is 1.

In particular consider that in C++ one cannot overload a function based on its return type. So what should the return type of this function be declared as? T1 or T2?

  1. template < size_t J, class T1, class T2>
  2. ??? get ( std::pair < T1, T2 > & );

The interesting thing is that one could already write this function in C++98 using Partial template specialization, which is a really cool trick. The problem is that function templates cannot be partially specialized, but this is easy to solve:

  1. namespace
  2. {
  3. /*!
  4. * helper template to do the work with partial specialization
  5. */
  6. template < size_t J, class T1, class T2 >
  7. struct Get;
  8. template < class T1, class T2>
  9. struct Get < 0, T1, T2 >
  10. {
  11. typedef typename std::pair < T1, T2 >::first_type result_type;
  12. static result_type & elm ( std::pair < T1, T2 > & pr ) { return pr.first; }
  13. static const result_type & elm ( const std::pair < T1, T2 > & pr ) { return pr.first; }
  14. };
  15. template < class T1, class T2>
  16. struct Get < 1, T1, T2 >
  17. {
  18. typedef typename std::pair < T1, T2 >::second_type result_type;
  19. static result_type & elm ( std::pair < T1, T2 > & pr ) { return pr.second; }
  20. static const result_type & elm ( const std::pair < T1, T2 > & pr ) { return pr.second; }
  21. };
  22. }
  23. template < size_t J, class T1, class T2 >
  24. typename Get< J, T1, T2 >::result_type & get ( std::pair< T1, T2 > & pr )
  25. {
  26. return Get < J, T1, T2 >::elm( pr );
  27. }
  28. template < size_t J, class T1, class T2 >
  29. const typename Get< J, T1, T2 >::result_type & get ( const std::pair< T1, T2 > & pr )
  30. {
  31. return Get < J, T1, T2 >::elm( pr );
  32. }

Define operator<< for STL structures to make it easy to add debug outputs to your code. (This is better than special printing functions because it nests automatically! Printing a map< vector<int>, int> works without any additional effort if you can print any map and any vector.)

Additionally, define a macro that makes nicer debug outputs and makes it easy to turn them off using the standard mechanism (same one that is used for assert). Here’s a short example how to do all of this in C++11:

  1. #include <iostream>
  2. #include <string>
  3. #include <map>
  4. #ifdef NDEBUG
  5. #define DEBUG(var)
  6. #else
  7. #define DEBUG(var) { std::cout << #var << ": " << (var) << std::endl; }
  8. #endif
  9. template<typename T1, typename T2>
  10. std::ostream& operator<< (std::ostream& out, const std::map<T1,T2> &M) {
  11. out << "{ ";
  12. for (auto item:M) out << item.first << "->" << item.second << ", ";
  13. out << "}";
  14. return out;
  15. }
  16. int main() {
  17. std::map<std::string,int> age = { {"Joe",47}, {"Bob",22}, {"Laura",19} };
  18. DEBUG(age);
  19. }

This is a very amazing piece of code:

main(a){printf(a,34,a="main(a){printf(a,34,a=%c%s%c,34);}",34);}

It is one of the shortest C programs which, when executed, prints itself (a quine). It was written by Vlad Taeerov and Rashit Fakhreyev and is only 64 characters long.


To Convert list<T> to vector<T>:

  1. std::vector<T> v(l.begin(), l.end());

 

Variadic Templates :

They can be useful in places. You can pass any number of parameters .
Example  :

  1. #include <iostream>
  2. #include <bitset>
  3. #include <string>
  4. using namespace std;
  5.  
  6. void print() {
  7. cout<<"Nothing to print :)" ;
  8. }
  9.  
  10. template<typename T,typename... args>
  11. void print(T x,args... y) {
  12. cout<<x<<endl;
  13. print(y...);
  14. }
  15.  
  16. int main() {
  17. print(10,14.56,"Quora",bitset<20>(28));
  18. return 0;
  19. }

 

Code on ideon : http://ideone.com/b8TNHD

Output :

  1. 10
  2. 14.56
  3. Quora
  4. 00000000000000011100
  5. Nothing to print 🙂

 

Range-based for loops can be used with some STL containers, e.g.:

 

  1. #include <iostream>
  2. #include <list>
  3. #include <vector>
  4.  
  5. using namespace std;
  6.  
  7. int main() {
  8. list<int> x;
  9. x.push_back(10);
  10. x.push_back(20);
  11. for(auto i : x)
  12. cout<<i;
  13. return 0;
  14. }

Build a blockchain with C++

So, you might have heard a lot about something called a blockchain lately and wondered what all the fuss is about. A blockchain is a ledger which has been written in such a way that updating the data contained within it becomes very difficult, some say the blockchain is immutable and to all intents and purposes they’re right but immutability suggests permanence and nothing on a hard drive could ever be considered permanent. Anyway, we’ll leave the philosophical debate to the non-techies; you’re looking for someone to show you how to write a blockchain in C++, so that’s exactly what I’m going to do.

Before I go any further, I have a couple of disclaimers:

  1. This tutorial is based on one written by Savjee using NodeJS; and
  2. It’s been a while since I wrote any C++ code, so I might be a little rusty.

The first thing you’ll want to do is open a new C++ project, I’m using CLion from JetBrains but any C++ IDE or even a text editor will do. For the interests of this tutorial we’ll call the project TestChain, so go ahead and create the project and we’ll get started.

If you’re using CLion you’ll see that the main.cpp file will have already been created and opened for you; we’ll leave this to one side for the time being.

Create a file called Block.h in the main project folder, you should be able to do this by right-clicking on the TestChain directory in the Project Tool Window and selecting: New > C/C++ Header File.

Inside the file, add the following code (if you’re using CLion place all code between the lines that read #define TESTCHAIN_BLOCK_H and #endif):
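Something along these lines (a sketch matching the description below):

#include <cstdint>
#include <iostream>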

These lines above tell the compiler to include the cstdint, and iostream libraries.

Add this line below it:
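For example, the line would be:

using namespace std;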

This essentially creates a shortcut to the std namespace, which means that we don't need to refer to declarations inside the std namespace by their full names, e.g. std::string, but can instead use their shortened names, e.g. string.

So far, so good; let’s start fleshing things out a little more.

A blockchain is made up of a series of blocks which contain data, and each block contains a cryptographic representation of the previous block, which means that it becomes very hard to change the contents of any block without then needing to change every subsequent one; this is where the blockchain essentially gets its immutable properties.

So let’s create our block class, add the following lines to the Block.h header file:
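A sketch of the class declaration that matches the line-by-line description below (member types are my best guess from how they are used later):

class Block {
public:
    string sPrevHash;

    Block(uint32_t nIndexIn, const string &sDataIn);

    string GetHash();

    void MineBlock(uint32_t nDifficulty);

private:
    uint32_t _nIndex;
    int64_t _nNonce;
    string _sData;
    string _sHash;
    time_t _tTime;

    string _CalculateHash() const;
};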

Unsurprisingly, we're calling our class Block (line 1), followed by the public modifier (line 2) and the public variable sPrevHash (remember, each block is linked to the previous block) (line 3). The constructor signature (line 5) takes parameters for nIndexIn and sDataIn; note that the const keyword is used along with the reference modifier (&) so that the parameters are passed by reference but cannot be changed, which is done to improve efficiency and save memory. The GetHash method signature is specified next (line 7), followed by the MineBlock method signature (line 9), which takes a parameter nDifficulty. We specify the private modifier (line 11) followed by the private variables _nIndex, _nNonce, _sData, _sHash, and _tTime (lines 12–16). The signature for _CalculateHash (line 18) also has the const keyword; this ensures the method cannot modify the block's members, which is very useful when dealing with a blockchain.

Now it’s time to create our Blockchain.h header file in the main project folder.

Let’s start by adding these lines (if you’re using CLion place all code between the lines that read #define TESTCHAIN_BLOCKCHAIN_H and #endif):
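A sketch matching the description below:

#include <cstdint>
#include <vector>

#include "Block.h"

using namespace std;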

They tell the compiler to include the cstdint and vector libraries, as well as the Block.h header file we have just created, and create a shortcut to the std namespace.

Now let’s create our blockchain class, add the following lines to the Blockchain.h header file:
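Again, a sketch that matches the line-by-line description below:

class Blockchain {
public:
    Blockchain();

    void AddBlock(Block bNew);

private:
    uint32_t _nDifficulty;
    vector<Block> _vChain;

    Block _GetLastBlock() const;
};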

As with our block class, we're keeping things simple and calling our blockchain class Blockchain (line 1), followed by the public modifier (line 2) and the constructor signature (line 3). The AddBlock signature (line 5) takes a parameter bNew, which must be an object of the Block class we created earlier. We then specify the private modifier (line 7) followed by the private variables _nDifficulty and _vChain (lines 8–9), as well as the method signature for _GetLastBlock (line 11), which also uses the const keyword to denote that the method does not modify the chain.

Since blockchains use cryptography, now would be a good time to get some cryptographic functionality in our blockchain. We're going to be using the SHA256 hashing technique to create hashes of our blocks; we could write our own but really, in this day and age of open source software, why bother?

To make my life that much easier, I copied and pasted the text of the sha256.h, sha256.cpp and LICENSE.txt files shown on the C++ sha256 function page from Zedwood and saved them in the project folder.

Right, let’s keep going.

Create a source file for our block and save it as Block.cpp in the main project folder; you should be able to do this by right-clicking on the TestChain directory in the Project Tool Window and selecting: New > C/C++ Source File.

Start by adding these lines, which tell the compiler to include the Block.h and sha256.h files we added earlier.
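For instance (the <sstream> include is my addition; it is needed for the stringstream used in _CalculateHash further down):

#include <sstream>   // for stringstream, used by _CalculateHash

#include "Block.h"
#include "sha256.h"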

Follow these with the implementation of our block constructor:
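One way to write it, matching the walkthrough below:

Block::Block(uint32_t nIndexIn, const string &sDataIn) : _nIndex(nIndexIn), _sData(sDataIn) {
    _nNonce = -1;
    _tTime = time(nullptr);
}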

The constructor starts off by repeating the signature we specified in the Block.h header file (line 1), but we also add code to copy the contents of the parameters into the variables _nIndex and _sData. The _nNonce variable is set to -1 (line 2) and the _tTime variable is set to the current time (line 3).

Let’s add an accessor for the block’s hash:
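Which is simply:

string Block::GetHash() {
    return _sHash;
}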

We specify the signature for GetHash (line 1) and then add a return for the private variable _sHash (line 2).

As you might have read, blockchain technology was made popular when it was devised for the Bitcoin digital currency, as the ledger is both immutable and public; which means that, as one user transfers Bitcoin to another user, a transaction for the transfer is written into a block on the blockchain by nodes on the Bitcoin network. A node is another computer which is running the Bitcoin software and, since the network is peer-to-peer, it could be anyone around the world; this process is called ‘mining’ as the owner of the node is rewarded with Bitcoin each time they successfully create a valid block on the blockchain.

To successfully create a valid block, and therefore be rewarded, a miner must create a cryptographic hash of the block they want to add to the blockchain that matches the requirements for a valid hash at that time. This is achieved by counting the number of zeros at the beginning of the hash: if the number of zeros is equal to or greater than the difficulty level set by the network, the block is valid. If the hash is not valid, a variable called a nonce is incremented and the hash is created again; this process, called Proof of Work (PoW), is repeated until a valid hash is produced.

So, with that being said, let’s add the MineBlock method; here’s where the magic happens!
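A sketch following the walkthrough below; note that the variable-length character array relies on a GCC/Clang extension:

void Block::MineBlock(uint32_t nDifficulty) {
    char cstr[nDifficulty + 1];
    for (uint32_t i = 0; i < nDifficulty; ++i) {
        cstr[i] = '0';
    }
    cstr[nDifficulty] = '\0';

    string str(cstr);

    do {
        _nNonce++;
        _sHash = _CalculateHash();
    } while (_sHash.substr(0, nDifficulty) != str);

    cout << "Block mined: " << _sHash << endl;
}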

We start with the signature for the MineBlock method, which we specified in the Block.h header file (line 1), and create an array of characters with a length one greater than the value specified for nDifficulty (line 2). A for loop is used to fill the array with zeros, followed by the final array item being given the string terminator character (\0) (lines 3–6); the character array or c-string is then turned into a standard string (line 8). A do…while loop is then used (lines 10–13) to increment _nNonce and assign _sHash the output of _CalculateHash; the front portion of the hash is then compared with the string of zeros we've just created, and the loop repeats until a match is found. Once a match is found, a message is sent to the output buffer to say that the block has been successfully mined (line 15).

We’ve seen it mentioned a few times before, so let’s now add the _CalculateHash method:
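Something along these lines:

inline string Block::_CalculateHash() const {
    stringstream ss;
    ss << _nIndex << _tTime << _sData << _nNonce << sPrevHash;

    return sha256(ss.str());
}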

We kick off with the signature for the _CalculateHash method (line 1), which we specified in the Block.h header file, but we include the inline keyword, which hints to the compiler to place the method's instructions inline wherever the method is called, cutting down on separate method calls. A string stream is then created (line 2), followed by appending the values of _nIndex, _tTime, _sData, _nNonce, and sPrevHash to the stream (line 3). We finish off by returning the output of the sha256 method (from the sha256 files we added earlier), using the string output from the string stream (line 5).

Right, let’s finish off our blockchain implementation! Same as before, create a source file for our blockchain and save it as Blockchain.cpp in the main project folder.

Add this line, which tells the compiler to include the Blockchain.h file we added earlier.
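That is just:

#include "Blockchain.h"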

Follow these with the implementation of our blockchain constructor:
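For example (the difficulty value here is arbitrary; tune it to taste):

Blockchain::Blockchain() {
    _vChain.emplace_back(Block(0, "Genesis Block"));
    _nDifficulty = 6;
}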

We start off with the signature for the blockchain constructor we specified in Blockchain.h (line 1). As blocks are added to the blockchain they need to reference the previous block using its hash, but since the blockchain has to start somewhere, we have to create a block for the next block to reference; we call this a genesis block. A genesis block is created and placed onto the _vChain vector (line 2). We then set the _nDifficulty level (line 3) depending on how hard we want to make the PoW process.

Now it's time to add the code for adding a block to the blockchain; add the following lines:
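One possible version, matching the description below:

void Blockchain::AddBlock(Block bNew) {
    bNew.sPrevHash = _GetLastBlock().GetHash();
    bNew.MineBlock(_nDifficulty);
    _vChain.push_back(bNew);
}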

The signature we specified in Blockchain.h for AddBlock is added (line 1) followed by setting the sPrevHash variable for the new block from the hash of the last block on the blockchain which we get using _GetLastBlock and its GetHash method (line 2). The block is then mined using the MineBlock method (line 3) followed by the block being added to the _vChain vector (line 4), thus completing the process of adding a block to the blockchain.

Let’s finish this file off by adding the last method:
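Which can be as simple as:

Block Blockchain::_GetLastBlock() const {
    return _vChain.back();
}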

We add the signature for _GetLastBlock from Blockchain.h (line 1), followed by returning the last block found in the _vChain vector using its back method (line 2).

Right, that’s almost it, let’s test it out!

Remember the main.cpp file? Now’s the time to update it, open it up and replace the contents with the following lines:
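Again a single line to start with:

#include "Blockchain.h"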

This tells the compiler to include the Blockchain.h file we created earlier.

Then add the following lines:
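A sketch matching the walkthrough below (the block data strings are placeholders; use whatever you like):

int main() {
    Blockchain bChain = Blockchain();

    cout << "Mining block 1..." << endl;
    bChain.AddBlock(Block(1, "Block 1 Data"));

    cout << "Mining block 2..." << endl;
    bChain.AddBlock(Block(2, "Block 2 Data"));

    cout << "Mining block 3..." << endl;
    bChain.AddBlock(Block(3, "Block 3 Data"));

    return 0;
}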

As with most C/C++ programs, everything is kicked off by calling the main method. This one creates a new blockchain (line 2), informs the user that a block is being mined by printing to the output buffer (line 4), then creates a new block and adds it to the chain (line 5); the mining process for that block then runs until a valid hash is found. Once the block is mined, the process is repeated for two more blocks.

Time to run it! If you are using CLion simply hit the ‘Run TestChain’ button in the top right hand corner of the window. If you’re old skool, you can compile and run the program using the following commands from the command line:
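Assuming g++ is on your path and the Zedwood code was saved as sha256.cpp, something along these lines should work:

g++ -std=c++11 main.cpp Block.cpp Blockchain.cpp sha256.cpp -o TestChain
./TestChain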

If all goes well you should see an output like this:

Congratulations, you have just written a blockchain from scratch in C++. In case you got lost, I've put all the files into a GitHub repo. Since the original code for Bitcoin was also written in C++, why not take a look at its code and see what improvements you can make to what we've started off today?

Orig post

How to Program an Arduino in Assembly Language

Reading data from a DHT-11 temperature sensor on bare-metal Arduino Uno (ATmega328p) hardware, using nothing but assembly

Let's use a simple example to see how you can "hack" an Arduino Uno and start writing programs in machine code, i.e. in assembly, for the ATmega328p microcontroller. This is the microcontroller that most of the inexpensive "classic" duino boards are built around. The code will also run on practically any demo board based on the ATmega328p and, with minor tweaks, on any Arduino board built on an Atmel AVR microcontroller. In this example I tried to get as close to the hardware as possible. To better understand how the microcontroller works, we won't use any ready-made libraries, let alone the Arduino IDE. As a training exercise we'll do the simplest useful thing possible: toggle a single microcontroller pin correctly, that is, read data from a DHT-11 temperature and humidity sensor.

Arduino is a really neat thing, but much of what happens on the microcontroller is deliberately hidden in the depths of the Arduino libraries and environment so as not to scare off beginners. After playing with a blinking LED I wanted to understand how the microcontroller actually works. Besides satisfying pure curiosity, knowing how the microcontroller and its standard means of talking to the outside world (the so-called peripherals) work is an advantage when writing code both for Arduino and in C/assembly for microcontrollers in general, and it helps you write more efficient programs. So, we'll do everything as close to the hardware as possible. We have: an Arduino Uno-compatible board, a DHT-11 sensor, three wires, Atmel Studio, and machine code.

First, let's get the necessary equipment ready.

We'll write the code in Atmel Studio 7, a free download from the website of the microcontroller's manufacturer, Atmel.

Atmel Studio 7

All the code was run on an Arduino Uno clone, in my case a DFRduino Uno from DFRobot with an ATmega328p running at 16 MHz, an excellent, reliable board. I haven't noticed any difference from a standard Uno in use. A similar black board from DFRobot, only the "Mega" version, flew for two years as the flight controller of my quadcopter; it ended up in all sorts of places and never gave me any trouble.

DFRduino Uno

To inspect signals lasting mere microseconds (that's one millionth of a second, mind you), I used a device called a logic analyzer; specifically, a clone of the eight-channel USBEE AX Pro. How you would debug processes this fast without an oscilloscope or a logic analyzer, I honestly don't know, so I can't suggest an alternative.

First of all I hooked my Uno clone (as I said, a DFRduino Uno) up to Atmel Studio 7 and decided to try blinking an LED in assembly. How to set this up is described in many places; one example is linked at the end. The code is written right in the Studio, and the board can be flashed over the USB port using the familiar Arduino bootloader via AVRDude. You can also flash through an external programmer; I tried a Chinese USBASP, and in fact both methods worked for me. In either case you only have to configure AVRDude correctly; my settings are shown in the screenshot.

The full argument string:
-C "C:\avrdude\avrdude.conf" -p atmega328p -c arduino -P COM7 115200 -U flash:w:"$(ProjectDir)Debug\$(TargetName).hex:i

In the end, for simplicity, I settled on flashing over the USB port, the standard way for an Arduino. My UNO has an ATmega328P chip, and that's what you need to select when creating the project. You also need to pick the port the Arduino is connected to; on my computer it was COM7.

No extra wiring is needed just to blink an LED; we'll use the LED that is already on the board and wired to Arduino pin D13, which, as a reminder, is bit 5 of the controller's PORTB.

We connect the board to the computer with a USB cable, write the code in the Studio, and flash it straight from the Studio. The main problem here is actually seeing the blinking: the controller hammers along at 16 MHz, and if we switch the LED on and off at that rate all we'll see is a dimly glowing LED and nothing more.

To see when it is lit and when it is off, we'll turn the LED on and then keep the processor busy with useless work for roughly one second. The delay itself can be calculated by hand, knowing the clock frequency (most instructions execute in one cycle), or with the special calculator linked at the bottom. With the delay in place, code that does roughly the same thing as the classic Arduino "Blink" might look something like this:

      			cli
			sbi DDRB, 5	; PORT B, pin 5 - output
			sbi PORTB, 5	; set pin 5 to logic one

loop:						    ; delay 1000 ms
			ldi  r18, 82
			ldi  r19, 43
			ldi  r20, 0
L1:			dec  r20
			brne L1
			dec  r19
			brne L1
			dec  r18
			brne L1
			nop
			
			in R16, PORTB	; toggle bit 5 of the port with an XOR
			ldi R17, 0b00100000
			EOR R16, R17
			out PORTB, R16
			
			rjmp loop

Once again: on my board the Arduino LED (D13) sits on bit 5 of the ATmega's PORTB.

But in fact writing it like this isn't great, because we have completely trashed such important things as the stack and the interrupt vector table (more on those later).

OK, we've blinked the LED; now, to make our GPIO practice a little more meaningful, let's read values from the DHT11 sensor, and do that entirely in assembly as well.

To read data from the sensor we need to drive high and low levels on the sensor's data line in the right sequence; that is exactly what "toggling a microcontroller pin" means. On one hand there is nothing complicated about it; on the other hand it is at least a meaningful activity: we measure temperature and humidity, so you could say we've taken the first step toward building some sort of weather station later on.

Getting one step ahead of ourselves, it's worth asking what we are actually going to do with the data once we've read it. Fine, we read the sensor and set a variable in the controller's memory to, say, 23 degrees Celsius. How do we look at those numbers? There is a solution! I will view the data on the big computer by sending it out through the controller's USART, over a USB cable, into a virtual COM port and straight into a terminal program such as PuTTY. For the computer to read our data we'll use a USB-TTL converter, the gadget that provides the virtual COM port in Windows.

The wiring itself can look roughly like this:

The sensor's signal pin is connected to pin 2 (PIN2) of the controller's PORTD, which is the same thing as Arduino pin D2. The same line is pulled up to the supply "plus" through a 4.7 kOhm resistor. The sensor's plus and minus pins go to the corresponding power wires. The USB-TTL adapter is connected to the Tx output of the Arduino's USART, i.e. pin 1 (PIN1) of the controller's PORTD.

Assembled on a breadboard:

Let's figure out the sensor and look at the datasheet. The sensor itself is simple and uses a single signal wire, which has to be pulled up through a resistor to +5V; this is the idle "high" level on the line. When the line is free, i.e. neither the controller nor the sensor is transmitting, the line sits at that "high" level. When the sensor or the controller transmits something, it takes over the line and drives it "low" for some amount of time. In total the sensor sends 5 bytes. It sends them one after another: first the humidity readings, then the temperature, and it finishes with a checksum, so it looks like "HHTTXX"; anyway, see the datasheet. Five bytes is 40 bits, and each bit is encoded in a special way during transmission.

To simplify, we'll say that a "high" level on the line is a "one" and a "low" level is a "zero". According to the datasheet, to start talking to the sensor the controller has to pull the signal line to ground, i.e. put a "zero" on the line, hold it there for at least 20 ms (milliseconds), and then release the line abruptly. In response the sensor should put its own burst onto the signal line, made up of high and low pulses of varying duration that encode the 40 bits we want. According to the datasheet, if we manage to read that burst with the controller, we immediately know that: a) the sensor actually responded, b) it sent the humidity and temperature data, and c) it sent the checksum. At the end of the transmission the sensor releases the line. The datasheet also says the sensor should not be polled more often than once a second.

So, according to the datasheet, here is what the microcontroller must do to get the sensor to answer: hold the line low for 20 milliseconds, release it, and quickly watch what happens on the line:

The sensor should answer by pulling the line low for 80 microseconds (µs) and then releasing it for another 80 µs; this can be taken as confirmation that the sensor is alive on the line and responding:

Immediately after that, on the falling edge from high to low, the sensor starts transmitting 40 individual bits. Each bit is encoded as a special pulse pair consisting of two intervals. First the sensor takes the line (pulls it low) for a certain time: a sort of first "half-bit". Then the sensor releases the line (it gets pulled up to one), also for a certain time: the second "half-bit". The duration of these intervals, the "half-bits", in microseconds encodes what the sensor is actually trying to transmit: a "zero" bit or a "one" bit.

Let's look at the description of a bit frame: the first "half-bit" is always low and of fixed duration, about 50 µs. The duration of the second "half-bit" determines what the sensor is actually sending.

To transmit a zero, a high pulse of 26–28 µs is used:

To transmit a one, the high pulse is lengthened to 70 microseconds:

We won't measure each interval precisely; it is quite enough to understand that if the second "half-bit" is shorter than the first, a zero was encoded, and if the second "half-bit" is longer, a one was encoded. We have 40 bits in total, each bit is encoded by two pulses, so we need to read 80 intervals. Once we've read all 80 intervals we'll compare them in pairs, the first "half-bit" against the second.

It all looks simple enough, so what is required of the microcontroller to read the data from the sensor? It turns out we need to pull the pin low, then simply read the whole long burst from the sensor on that same pin. Along the way we'll split the burst into "half-bits", figuring out where a zero bit is being sent and where a one. Then we'll assemble the resulting bits into bytes, which will be the humidity and temperature data we are after.

OK, we've started writing the code, and first we'll check whether the sensor works at all: we'll simply hold the line low for 20 ms and watch with the logic analyzer what happens on the line.

Definitions:

;==========		DEFINES =======================================
; definitions for the port the DHT11 is connected to
				.EQU DHT_Port=PORTD
				.EQU DHT_InPort=PIND
				.EQU DHT_Pin=PORTD2
				.EQU DHT_Direction=DDRD
				.EQU DHT_Direction_Pin=DDD2

				.DEF Tmp1=R16
				.DEF USART_ByteR=R17		; byte to be sent via the USART
				.DEF Tmp2=R18
				.DEF USART_BytesN=R19		; how many bytes to send via the USART
				.DEF Tmp3=R20
				.DEF Cycle_Count=R21		; loop counter used in Expect_X
				.DEF ERR_CODE=R22			; error code returned by the subroutines
				.DEF N_Cycles=R23			; counter used in READ_CYCLES
				.DEF ACCUM=R24
				.DEF Tmp4=R25

As I already mentioned, the sensor itself is connected to pin 2 of port D; on the Arduino Uno that is digital pin D2 (check the Arduino pinout if in doubt).

We do everything head-on: configure the port as an output, drive it low, wait 20 milliseconds, release the line, switch the pin to input mode, and wait for signals to appear on the pin.

;============	DHT11 INIT =======================================
; immediately after this init we must !!!! read the sensor's response and then the data itself
DHT_INIT:		CLI	; once more, just in case - this is a time-critical section

				; save X for use in READ_CYCLES - there is no time to initialize it there
				LDI XH, High(CYCLES)	; load the high byte of the CYCLES address
				LDI XL, Low (CYCLES)	; load the low byte of the CYCLES address

				LDI Tmp1, (1<<DHT_Direction_Pin)
				OUT DHT_Direction, Tmp1			; port D, pin 2 as output

				LDI Tmp1, (0<<DHT_Pin)
				OUT DHT_Port, Tmp1			; drive a 0 onto the line

				RCALL DELAY_20MS		; wait 20 milliseconds

				LDI Tmp1, (1<<DHT_Pin)		; release the line - drive a 1
				OUT DHT_Port, Tmp1	

				RCALL DELAY_10US		; wait 10 microseconds

				
				LDI Tmp1, (0<<DHT_Direction_Pin)		; port D, pin 2 as input
				OUT DHT_Direction, Tmp1	
				LDI Tmp1,(1<<DHT_Pin)		; enable the input pull-up, working together with the external resistor on the line
				OUT DHT_Port, Tmp1		

; wait for the sensor's answer - it should pull the line low for 80 us and then release it for 80 us

Let's check with the analyzer: did the sensor answer?

Yes, there is a response: the signals right after our initial 20 ms pulse are the sensor's reply. To view the burst I used the Chinese USBEE AX Pro clone connected to the sensor's signal wire.

Let's zoom in so we can see the end of our 20 ms pulse and get a better look at the start of the sensor's burst. Everything is just as in the datasheet: first the sensor drove the line low and then high for 80 µs each, then it started transmitting bits; in this case the second "half-bit" carries a "0".

So the sensor works and has sent us data; now we have to read that data correctly. Since this is a learning exercise, we'll solve it head-on. At the moment the sensor responds, i.e. on the transition from high to low, we start a loop that counts how many times the loop has run. Inside the loop we constantly watch the signal level on the pin. In other words, in the loop we wait for the signal on the pin to go back high, thereby measuring the duration of the first "half-bit". Our microcontroller runs at 16 MHz, so over a period of, say, 50 microseconds it manages to execute about 800 instructions. When the line goes high we exit the loop cleanly and store in a variable the number of loop iterations we counted.

Once the signal line has gone high we do the same thing again: count loop iterations until the sensor starts transmitting the next bit and pulls the line low. Fortunately we don't need to know the exact duration of the pulses; it is enough to know which interval is longer. Clearly, if the sensor transmits a "zero" bit, the duration of the second "half-bit", and hence the number of iterations we count, will be less than for the first "half-bit". If the sensor transmitted a "one" bit, the count during the second half-bit will be greater than during the first.

And so that we don't hang forever if the sensor suddenly fails to answer or glitches, we run the loop only for a limited period of time, one guaranteed to be longer than the longest possible pulse, so that if the sensor doesn't answer we can bail out on a timeout.

The example below covers the case where the line was low and we count how many times around the loop we sampled the controller's pin before the sensor switched the line to one.

;=============	EXPECT 1 =========================================
; spin in a loop waiting for the required state on the pin
; when it appears - exit
; report how many loop iterations we waited
; or report a timeout error if we never saw it
EXPECT_1:		LDI Cycle_Count, 0			; initialize the loop counter
			LDI ERR_CODE, 2			; error 2 - exit on timeout

			ldi  Tmp1, 2			; load
			ldi  Tmp2, 169			; an 80 us delay

EXP1L1:			INC Cycle_Count			; increment the loop counter

			IN Tmp3, DHT_InPort		; read the port
			SBRC Tmp3, DHT_Pin	; if the pin is 1
			RJMP EXIT_EXPECT_1	; then exit
			dec  Tmp2			; otherwise keep spinning in the delay
			brne EXP1L1
			dec  Tmp1
			brne EXP1L1
			NOP					; we get here on a timeout
			RET

EXIT_EXPECT_1:		LDI ERR_CODE, 1			; error code 1 - all good, Cycle_Count holds the loop count
			RET

A similar subroutine (EXPECT_0) counts how many loop iterations go by until the sensor takes the line from the one state back down to zero.

For the timing delays we'll use the same approach as when we blinked the LED: pick the parameters of an empty loop so that it produces the required pause. I used the special calculator linked below; if you prefer, you can count the instructions by hand.

Our controller has quite a lot of memory, a whole 2 (two) kilobytes, so we won't be stingy with it and will simply store the counter values for all 80 intervals (40 bits, 2 intervals per bit) in memory.

Let's declare a variable:

CYCLES: .byte 80 ; buffer for storing the cycle counts

And save all the counted cycles to memory.

;============== READ CYCLES ====================================
; read the sensor's bits and store the counts in CYCLES
READ_CYCLES:	LDI N_Cycles, 80			; read 80 intervals
READ:		NOP
		RCALL EXPECT_1				; the low interval has elapsed
		ST X+, Cycle_Count			; store the cycle count
			
		RCALL EXPECT_0
		ST X+, Cycle_Count			; store the cycle count
		
		DEC N_Cycles				; decrement the counter
		BRNE READ					
		RET					; all intervals have been read

Now, for debugging, let's see how well the interval durations were measured and make sure we really did read data from the sensor. Clearly the count for the first "half-bit" should be roughly the same in every bit frame, whereas the count for the second "half-bit" will be either noticeably smaller or, on the contrary, noticeably larger.

To get the data to the big computer we'll use the controller's USART, which will send it over the USB cable to a terminal program such as PuTTY. Again we transmit head-on: stuff a byte into the appropriate USART register and wait for it to be sent. For convenience I also wrote a couple of helper routines: send several bytes starting at the address in Y, and move to a new line in the terminal for neatness.

;============	SEND 1 BYTE VIA USART =====================
SEND_BYTE:	NOP
SEND_BYTE_L1:	LDS Tmp1, UCSR0A
		SBRS Tmp1, UDRE0			; once the data register is empty
		RJMP SEND_BYTE_L1
		STS UDR0, USART_ByteR		; send the byte from R17
		NOP
		RET				

;============	SEND CRLF VIA USART ===============================
SEND_CRLF:	LDI USART_ByteR, $0D
		RCALL SEND_BYTE	
		LDI USART_ByteR, $0A
		RCALL SEND_BYTE
		RET			

;============	SEND N BYTES VIA USART ============================
; Y - what to send, USART_BytesN - how many bytes
SEND_BYTES:	NOP
SBS_L1:		LD USART_ByteR, Y+
		RCALL SEND_BYTE
		DEC USART_BytesN
		BRNE SBS_L1
		RET

Having sent the counts for the 80 intervals to the terminal, we can try to assemble the actual data bits. We'll do it by the book, i.e. as the datasheet says: compare, pair by pair, the cycle count of the first "half-bit" with that of the second. If the second half-bit is shorter, a zero was encoded; if longer, a one. After each comparison we accumulate the bits in an accumulator and store them to memory byte by byte, starting at the address BITS.

;=============	GET BITS ===============================================
; turn the counts in CYCLES into bytes in BITS				
GET_BITS:			LDI Tmp1, 5			; set up the counters - five bytes
				LDI Tmp2, 8			; of eight bits each
				LDI ZH, High(CYCLES)	; load the high byte of the CYCLES address
				LDI ZL, Low (CYCLES)	; load the low byte of the CYCLES address
				LDI YH, High(BITS)	; load the high byte of the BITS address
				LDI YL, Low (BITS)	; load the low byte of the BITS address

ACC:				LDI ACCUM, 0			; initialize the accumulator
				LDI Tmp2, 8			; for each bit

TO_ACC:				LSL ACCUM				; shift left
				LD Tmp3, Z+			; read the cycle count [i]
				LD Tmp4, Z+			; and the count [i+1]
				CP Tmp3, Tmp4			; compare the first half-bit with the second: if positive the bit is 0, if negative the bit is 1
				BRPL J_SHIFT		; if positive (0) just shift	
				ORI ACCUM, 1			; if negative (1) set the low bit
J_SHIFT:			DEC Tmp2				; repeat for 8 bits
				BRNE TO_ACC
				ST Y+, ACCUM			; store the accumulator
				DEC Tmp1				; for five bytes
				BRNE ACC
				RET

So now, starting at the BITS label, we have assembled in memory the five bytes that the sensor transmitted. But working with them in this format is not very convenient, because in memory it looks something like
34002100XX, where 34 is the integer part of the humidity, 00 is the fractional part of the humidity, 21 is the integer part of the temperature, 00 is again the fractional part of the temperature, and XX is the checksum. What we'd like is to print something nice to the terminal, such as "Temperature = 21.00". So, for convenience, let's pull the data apart into separate variables.

Definitions

H10:			.byte 1		; number - integer part of the humidity
H01:			.byte 1		; number - fractional part of the humidity
T10:			.byte 1		; number - integer part of the temperature in C
T01:			.byte 1		; number - fractional part of the temperature

And copy the bytes from BITS into the corresponding variables:

;============	GET HnT DATA =========================================
; pull the H10... values out of BITS
; !!! a slight hack: it works because H10 and the rest are laid out consecutively in memory

GET_HnT_DATA:	NOP

				LDI ZH, HIGH(BITS)
				LDI ZL, LOW(BITS)
				LDI XH, HIGH(H10)
				LDI XL, LOW(H10)
												; TODO - switch this to a proper loop counter
				LD Tmp1, Z+			; read
				ST X+, Tmp1			; store
				
				LD Tmp1, Z+			; read
				ST X+, Tmp1			; store

				LD Tmp1, Z+			; read
				ST X+, Tmp1			; store

				LD Tmp1, Z+			; read
				ST X+, Tmp1			; store

				RET

After that we convert the numbers into ASCII codes so the data can be read normally in a terminal, prepend the labels (e.g. "temperature") stored in flash, and send it all out the COM port to the terminal.

PuTTY with the data

To measure the temperature regularly, we add an endless loop with a delay of about 1200 milliseconds, since the DHT11 datasheet says the sensor should not be polled more often than once per second.

After that the main loop looks roughly like this:

;============	MAIN
			;!!! main entry point
RESET:			NOP		

			; Internal Hardware Init
			CLI		; we don't need interrupts for now
				
			; stack init		
			LDI Tmp1, Low(RAMEND)
			OUT SPL, Tmp1
			LDI Tmp1, High(RAMEND)
			OUT SPH, Tmp1

			RCALL USART0_INIT

			; Init data
			RCALL COPY_STRINGS		; copy the strings into RAM
			RCALL TEST_DATA			; prepare the test data

loop:				NOP						; spin in an endless loop ....
				; External Hardware Init
				RCALL DHT_INIT
				; at this point the sensor has acknowledged us and the bits must be read quickly
				RCALL READ_CYCLES
				; the time-critical section is over...
				
				;Test - send CYCLES to the USART		
				;RCALL TEST_CYCLES
				
				; extract the bits from the burst
				RCALL GET_BITS
				
				;Test - send BITS to the USART
				;RCALL TEST_BITS  
				
				; turn BITS into numeric data
				RCALL GET_HnT_DATA
				
				;Test - send 4 bytes starting at H10 to the USART
				;RCALL TEST_H10_T01
				
				; convert the temperature and humidity to ASCII		
				RCALL HnT_ASCII_DATA_EX
				
				; send the finished temperature (label plus ASCII data) to the USART
				RCALL PRINT_TEMPER
				; send the finished humidity (label plus ASCII data) to the USART
				RCALL PRINT_HUMID
				; new line, for neatness				
				RCALL SEND_CRLF
							
				RCALL DELAY_1200MS				; repeat every 1.2 seconds 
				rjmp loop		; and loop forever

Flash the firmware, connect the USB-TTL cable (converter) to the computer, launch the terminal, select the correct virtual COM port, and enjoy our new digital thermometer. As a check you can warm the sensor in your hand: for me the temperature rises while the humidity, oddly enough, drops.

Related links:
AVR Delay Calc
How to connect an Arduino for programming in Atmel Studio 7
DHT11 Datasheet
ATmega DataSheet
Atmel AVR 8-bit Instruction Set
Atmel Studio
Example code on GitHub