“Borderline bulletproof” security for API keys

In episode 5 of my Keep in Touch podcast I shared some thoughts on how I currently think about protecting credentials, API keys, and secrets (collectively referred to as “keys” for the rest of this post) for 1st and 3rd party services and APIs. This blog post is a recap of those ideas, and also a call for feedback and input from you.

The risks

The problem I want to address is that many applications embed keys in their binaries. These keys are required to authenticate (and authorise) the app, the user, or both with remote services. These services could be public or private APIs, and similar considerations apply whether the APIs are 1st party or 3rd party.

If these keys are bundled with the app, a nefarious attacker could extract them and then impersonate the developer.

It’s not my intention to teach bad actors how to execute their plans, but I do want to give a few examples of attack vectors that could lead to sensitive information falling into the wrong hands:

  • the nightmare scenario: the attacker runs a man-in-the-middle attack, either by operating a WiFi hotspot or by controlling one of the nodes on the internet between the device and the APIs the app connects to
  • the easier-than-you’d-think scenario: the attacker runs an on-device proxy (such as the brilliant Charles for iOS app that I use daily for non-nefarious reasons) and monitors all traffic
  • the attacker grabs the app binary (i.e. the .apk or .ipa file) and does a string search for key-like strings

Any of the above could lead to the attacker obtaining the keys that the app uses to authenticate itself against remote servers. Once they have these keys, they can start impersonating the app and its users.

The hypothesis

I believe that by not including any keys in the app’s binary, and by provisioning these credentials only via an operating-system-owned channel that uses SSL pinning, the risks mentioned above can be reduced or even avoided.

SSL pinning is essential in order to prevent an attacker from inspecting the traffic between the client app and the remote endpoints.
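
To illustrate, here is a minimal sketch of certificate pinning with URLSession in Swift. The bundled certificate file name (pinned.der) is a hypothetical choice of mine; since a certificate is public, bundling it does not contradict the no-keys-in-the-binary rule:

```swift
import Foundation
import Security

// Minimal sketch of certificate pinning with URLSession.
// Assumes the server's public certificate ships in the bundle as
// "pinned.der" (a hypothetical name).
final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let serverTrust = challenge.protectionSpace.serverTrust,
              let serverCertificate = SecTrustGetCertificateAtIndex(serverTrust, 0),
              let pinnedURL = Bundle.main.url(forResource: "pinned", withExtension: "der"),
              let pinnedData = try? Data(contentsOf: pinnedURL) else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        // Accept the connection only if the server's leaf certificate
        // matches the pinned copy byte for byte.
        if SecCertificateCopyData(serverCertificate) as Data == pinnedData {
            completionHandler(.useCredential, URLCredential(trust: serverTrust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}

// All key-carrying traffic goes through this session, never a default one.
let pinnedSession = URLSession(configuration: .ephemeral,
                               delegate: PinnedSessionDelegate(),
                               delegateQueue: nil)
```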

[Animated GIF: keys being securely transmitted from the cloud to the app to the server. The flow is described in detail below.]

This animation attempts to illustrate the approach I am suggesting:

  1. Provision the app without any keys
  2. Attempt to detect whether the app is running on a jailbroken / rooted device, and bail if it is (a basic detection sketch follows this list)
  3. Upload the necessary keys to the Trusted Cloud provided by the host operating system (e.g. CloudKit for iOS apps) using another app (e.g. a console, a browser, etc.)
  4. Enforce read-only access to these keys for the client app
  5. Upon (first) launch, fetch the keys from the Trusted Cloud (see the CloudKit sketch below)
  6. If the keys must be stored, store them only in the Keychain or the Secure Enclave
  7. Use the keys to communicate with the server over an SSL-pinned connection
  8. Should the keys become compromised, replace them; the client apps will naturally fetch the new keys. For this reason, fetching the new keys must not require the previous keys.
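
As an aside on step 2, here is a minimal sketch of the kind of best-effort jailbreak detection I have in mind. The file paths checked are common indicators only; these heuristics are easy to defeat and should be treated as a speed bump, not a guarantee:

```swift
import Foundation

// Minimal sketch of step 2: best-effort jailbreak detection.
func deviceLooksJailbroken() -> Bool {
    // Files that typically exist only on jailbroken devices.
    let suspiciousPaths = [
        "/Applications/Cydia.app",
        "/bin/bash",
        "/usr/sbin/sshd",
        "/etc/apt"
    ]
    if suspiciousPaths.contains(where: { FileManager.default.fileExists(atPath: $0) }) {
        return true
    }
    // A sandboxed app should not be able to write outside its container;
    // if this write succeeds, the sandbox is broken.
    let probePath = "/private/jailbreak_probe.txt"
    do {
        try "probe".write(toFile: probePath, atomically: true, encoding: .utf8)
        try FileManager.default.removeItem(atPath: probePath)
        return true
    } catch {
        return false
    }
}
```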

The steps above rely on the mobile OS providing a cloud service that uses SSL pinning to communicate with the client apps. Without it, there can be no secure key provisioning mechanism and the entire flow falls apart.
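
To make step 5 concrete, here is a minimal sketch of fetching a key from CloudKit’s public database. The record type "APIKeys" and the field "value" are hypothetical names; the record itself would be created via the CloudKit dashboard (step 3) with read-only permissions for clients (step 4):

```swift
import CloudKit

// Minimal sketch of step 5: fetching a key from CloudKit's public database.
// "APIKeys" and "value" are hypothetical names. The CloudKit transport is
// handled by the OS, which is what makes this channel trustworthy.
func fetchKey(completion: @escaping (String?) -> Void) {
    let database = CKContainer.default().publicCloudDatabase
    let query = CKQuery(recordType: "APIKeys", predicate: NSPredicate(value: true))
    database.perform(query, inZoneWith: nil) { records, error in
        guard error == nil,
              let record = records?.first,
              let key = record["value"] as? String else {
            completion(nil)
            return
        }
        completion(key)
    }
}
```

Because fetching does not depend on any previously held keys, the same call also covers the rotation case in step 8: on an authentication failure, the app simply fetches again.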

The checklist

Security considerations often come in “late” or “after the fact”. To assess the state of affairs, and to add more protection going forward, I would go through the checklist below:

  • Limit the number of people who have access to the credentials. Replace the credentials when those people leave.
  • Never embed the keys in the codebase
  • Never grant the build (or distribution) servers access to the keys
  • Only trust the cloud service that belongs to the platform vendor for your app (e.g. CloudKit for iOS apps, Firebase for Android apps)
  • Avoid storing the keys anywhere (on the device) unless really necessary
  • If the keys must be stored, then the only acceptable place is the Keychain (see the sketch after this list)
  • Avoid bundling black-box libraries. If the libraries use networking, inspect all networking traffic
  • Do not use a 3rd party networking library unless absolutely necessary. Always check the source code of such libraries
  • Never be the largest consumer of any one technology, framework, or library
  • Only load the keys at the code level that really needs them. Do not pass them through the business layer, particularly if there are 3rd party analytics or logging libraries bundled with the app
  • Obfuscate the code and avoid obvious naming strategies for the keys
  • Check the app’s signature / hash or anything else similar that guarantees that the binary has not been tampered with
  • Be mindful of jailbroken or rooted devices but never change the code in order to support them
  • Do a quid-pro-quo vulnerability analysis with a trusted partner (make sure dinner or beer is at stake if an issue is found)
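
For the Keychain point above, here is a minimal sketch of storing a fetched key. The service and account names are hypothetical placeholders; the ThisDeviceOnly accessibility class keeps the item out of backups and off other devices:

```swift
import Foundation
import Security

// Minimal sketch of storing a fetched key in the Keychain.
// The service and account names are hypothetical.
func storeKey(_ key: String) -> Bool {
    guard let data = key.data(using: .utf8) else { return false }
    let baseQuery: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.app",
        kSecAttrAccount as String: "api-key"
    ]
    // Remove any previous value so key rotation overwrites cleanly.
    SecItemDelete(baseQuery as CFDictionary)

    var addQuery = baseQuery
    addQuery[kSecValueData as String] = data
    addQuery[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlockedThisDeviceOnly
    return SecItemAdd(addQuery as CFDictionary, nil) == errSecSuccess
}
```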

If you have more to add to this list, I would love to hear from you!

Closing comments

I think of myself as a security-minded person, but I am by no means an expert. I offer no guarantees that implementing the steps above will remove or prevent security problems in a system I have not seen, but, based on my current knowledge, I do believe it is the closest thing to a “borderline bulletproof” approach to securing the keys and credentials that native apps use.

Lastly, I sincerely hope that if you read this and identify a flaw in my reasoning, or spot an oversight, you will share it with me (ideally with a solution in tow) so I can update the article.