Information Security & Client Certificates

An information security compromise targeting celebrity data made waves recently, the salaciousness of the contents driving much of the attention.

Entire digital lives unveiled to unknown numbers of people. Messages, contacts, emails, photos, files, phone logs, and all of the metadata that goes with each of them.

This is a nightmare situation for most of us. It is something that is not supposed to happen, despite the immediate and deplorable incantations of “told you so!” by so many.

A surprising amount of victim-blaming has taken place. Even on ostensibly professional sites like LinkedIn, the most common sentiment was some variation of “you shouldn’t have…”. This includes, I’ve found, people who hold positions of responsibility for information security.

Contemplating iCloud Security

We don’t know where all of the data came from, or over what time period it was gathered. We do know, however, that at least some of it came from Apple’s online iCloud storage, gathered from the use of products like the iPhone: products that can automatically upload your photos, conversations, messages, and so on, to the cloud. This is a huge benefit, and if security is managed right, there should be minimal to no risk of compromise.

I’m going to focus on that momentarily not to blame Apple — they follow many of the same practices as their peers — but because it’s an instructive and common case, and represents a very widely used cloud storage mechanism.

The dominant explanation is that the victims used weak passwords, with many commentators waving their hands and declaring themselves safe on account of their, apparently, “non-weak” passwords. This is a non-starter: Apple enforces fairly typical complexity requirements, and rate-limits to such a degree that brute-forcing passwords is untenable.

They did have a gap in one of their product APIs (Find My iPhone) that allowed unrestricted brute-force attempts, but Apple assures us that it wasn’t used in the compromise, and there’s no reason not to trust their statement on that.

It seems probable that the attackers didn’t guess the passwords.

Another possibility is that the victims used the same passwords across sites, and one of those sites was compromised (via a hack, or internally).

Never use the same password across services. I railed about this four years ago, offering up a suggestion for client-side, site-specific hashing that would eliminate this vulnerability entirely, yet the industry has made zero headway since, and the same issue recurs over and over.
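To make the idea concrete, here is a minimal sketch of that client-side derivation using openssl; the master password and domain are placeholders, and a real implementation would live in the browser or a password manager and use a proper key-derivation function rather than a bare hash:

# Derive a site-unique password from a master secret plus the site's domain
# (placeholder values; a real tool would use a KDF such as PBKDF2)
echo -n "my-master-password:apple.com" | openssl dgst -sha256 -binary | openssl base64

Each site sees only its derived value, so a breach or phish of one service reveals nothing usable anywhere else.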

Alternately, they were phished or socially engineered: someone sent them an email, or otherwise got them to visit a site pretending to be Apple, imploring them to enter their username and password. Again, with the simple suggestion I made this would be a non-issue. But that’s a topic for another post.

Speaking of social engineering, and to go on a bit of an aside, I remember being at one organization when a senior manager emailed from some random Yahoo account asking for their account to be unlocked and a new password sent their way. I intercepted the request while it was being fulfilled, in awe that anyone could have gotten into this privileged account simply by claiming to be the person. Side-band confirmation later verified that they were who they claimed to be, but the assumption that “we aren’t targets” was enough for such an egregiously vulnerable avenue to exist, everyone wide-eyed and naive. The truth is that such avenues exist at many organizations, and across many systems: the phone system is notoriously insecure, for instance, and with ease you can call numbers with the caller ID announcing you as whoever you want. Want to be a bank asking for banking information? The government asking for an SSN? Your success rate will be high.

While password re-use and social engineering are both possibilities, it seems most likely that the attackers took advantage of the account recovery system, where answering a couple of “personal” questions allowed for an immediate account takeover.

Do you know what your Apple ID recovery questions are? Until this incident I’d forgotten that I’d even filled them out when I originally set up the account (at the time just to play Angry Birds on a new iPad, that seemingly inconsequential account later becoming a corporate development manager authority and the key to significant personal information), or the extreme amount of liability they presented.

Despite my inconveniently complex password, if you knew my email address, birthdate, the street I grew up on, the band I listened to in high school, or my best friend from that era, you would be given the immediate ability to take over my account. You could download backups and usurp my Apple identity.

This information is not private. Many who I went to school with could easily answer these questions. If I were in the public eye it would be public record.

Your birthdate, mother’s maiden name, the street you grew up on, and the music you listened to at certain times in your life — this information should not secure anything, and it isn’t a secret. If someone with nefarious intentions offhandedly asked me what my favorite band was in high school, I would answer without thinking twice, not realizing that it had become the lynchpin of the security of an enormous trove of private information.

And just to be clear, when Apple asks these questions their intention is that you will provide actual answers. Another blame-the-victim retort is that everyone should know that you aren’t supposed to provide real answers. You are supposed to, the claim goes, provide random characters that you’ve stashed away. But that is in no way the intention of the system.

Indeed, this whole account recovery system is built around the notion that losing access to your account is annoying, so it makes regaining access as easy as possible.

Security questions are terrible. That they allow one to completely side-step other authentication mechanisms is extraordinary, and I have no doubt that Apple is working on significant fixes.

Additionally, you can make use of two-factor verification, as you can on Google, Amazon, and others. This can be inconvenient, but it is becoming a necessity in a very hostile digital world. It is something I had long done on all other accounts, but hadn’t yet for Apple (especially given that, even with 2FA, many of Apple’s critical systems remain unprotected).

Client Certificates – The Authentication / 2FA That Few Use

Much of the audience of this blog tends to be software developers, and we architect and implement the security systems that protect this sort of information.

We are the defense.

One underused authentication/2FA mechanism that is supported across virtually every platform (desktop and mobile), and is surprisingly easy to deploy (especially for in-house or high-security systems), is SSL client certificates.

Just as your browser verifies that the site you are talking to is legitimately who it says it is (without the site having to divulge any secret to the browser), the site can verify that you are who you say you are (without your browser divulging any secret to the site). All major web servers and proxies support this via the functionality of SSL client certificates.

This can be an extremely powerful mechanism to control access, and it doesn’t require any complex deployment of dongles, extraneous authorization techniques, or even paying for special signing certificates.

To give an example, I recently became concerned about the attack surface of a large body of external code used for a solution. While it is a widely known PHP solution, the scale of the code and the number of entry points made me extremely wary about the potential for compromise, in addition to the normal concerns about the misuse of poorly managed credentials.

I couldn’t possibly guarantee that the system was hack-proof against nefarious agents, and there was the added concern of users not securely managing their username/password combinations.

This system housed highly confidential information with a limited number of users. It would be a professional disaster if it were compromised.

The gateway to the platform is nginx, running a standard SSL cert from a trusted issuer (SHA256, 4096-bit, perfect forward secrecy…every best practice). That much was fine. Users demanded access to the site from varying locations, on a multitude of devices, so limiting it to site-to-site tunnels was a non-starter, and of limited benefit regardless. Requiring users to VPN into an isolated zone to use the app was considered but rejected, given the process inconvenience it imposed.

So I had client SSL certs deployed per user. By creating a custom CA on an isolated, high-trust system, and deploying that CA’s cert as the client-cert authority on the web server (still using the universally trusted CA for the transport SSL cert), I could then generate and securely deploy PFX certs to users, installing them on their mobile devices and in their user accounts on their desktops. Simply by setting this authority in nginx (via ssl_client_certificate, and setting ssl_verify_client to on), instantly the only users who could engage with the application were those who held those private certificates. On the server I maintain a cluster-shared list of issued certs and revocations, further cementing assurances and control.

They still had to log in as normal via username and password, but this powerful mechanism provided very effective two-factor authentication — they needed both their login credentials and the certificate — without the complexity and process inconvenience of overlaying a HOTP/TOTP-type solution on a third-party application. And for the users, once the certificate was installed there was no extra work; the added security happened magically and transparently.

There is no way a website could coerce or socially engineer them into providing this certificate on demand; it is securely protected against export or usurpation on most platforms (even if the user had the knowledge to attempt an export); and the interaction between server and client, even if it took place in plaintext, does not reveal the secrets of the certificate.

I love client certificates. I’ve used them as coded 2FA mechanisms on applications (using the reverse proxy functionality to pass through the certificate details to the backend web application), and as a very simple mechanism to robustly secure legacy or external applications.

Client certificates are an extremely secure mechanism that isn’t used nearly enough. There are a multitude of places where client certificates can play a part in processes. For the previously discussed account recovery system, for instance, a 2048-bit client certificate can technically be delivered as a PNG-hosted version 40 QR code, to be printed and hidden away in a vault somewhere. Should it be? Maybe not, but there are a tremendous array of creative uses of this widely deployed but seldom used functionality.

EDIT 2014-09-12

Given emailed queries about implementation details, it’s worth walking through the simple steps of setting up client certificates in a practical configuration, in this case on Linux with openssl and nginx, though it isn’t terribly different with a certificate authority on Windows. I’m ignoring issuance and revocation logs here, to simplify and to get readers quickly to “wow, this is neat and it works and I want to learn more” (this get-running-quickly technique is how PHP and MongoDB both became so popular, as an aside), but you will absolutely want them in any practical implementation. With revocations, if you knew a certificate was possibly compromised (on a stolen laptop, for instance — with best practices that shouldn’t be a concern at all, but if you want to be extremely vigilant), blocking it is trivial.

These are not complete details, and are intended purely to inspire interest; read up on the specifics if you want to deploy this in production. None of this is earth-shattering, and many people already use techniques like this, but I cover it because one thing I’ve noticed working in software architecture and software engineering is that many, far too many, developers have a very limited understanding of the capacity of the tools they work with every day, leading to an unfortunate situation of compromised security and reinvented wheels.

On a Linux server with openssl:

mkdir ~/my_client_ca
cd ~/my_client_ca
openssl genrsa -aes256 -out client_ca.key 4096

You’ll be prompted to enter a password that will be used to protect the issuing key.

openssl req -new -x509 -days 3650 -key client_ca.key -out client_ca.crt

You’ll be prompted for details about the certificate authority, and to enter the password for the key. We’ve now generated the certificate authority. The CRT file is an unprivileged file and can be copied to your web server — not as a server SSL cert, but purely for validating user client certs. The key is a protected file, and is what we use to sign client certs.
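Getting the CRT onto the web server is a plain file copy; the host and destination path here are illustrative, matching the nginx configuration later in this post:

# Host and path are illustrative
scp client_ca.crt admin@web01:/etc/nginx/certs/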

So let’s create a client cert for a new user.

openssl genrsa -aes256 -out bob.key 2048

You’ll be prompted for a password for Bob’s key, as with the CA key.

openssl req -new -key bob.key -out bob.csr

As with the CA, you’ll be asked for details about Bob. These details matter: they form the subject of the certificate, and are what you use later when evaluating certificates.
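If you are scripting issuance, you can skip the interactive prompts and supply the subject inline; the field values here are, of course, hypothetical:

# Non-interactive CSR generation with an inline subject (example values)
openssl req -new -key bob.key -out bob.csr -subj "/C=US/O=Example Corp/CN=bob@example.com"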

Now our CA can sign the certificate, vouching for its authenticity.

openssl x509 -req -days 365 -in bob.csr -CA client_ca.crt -CAkey client_ca.key -CAcreateserial -out bob.crt

Note the -CAcreateserial parameter. This causes openssl to create a file called client_ca.srl in the directory, using it to assign unique serial numbers to certs.
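It is worth sanity-checking the result at this point, confirming that the new certificate chains to our CA and that its subject and validity window are what we expect:

# Confirm the certificate verifies against our private CA
openssl verify -CAfile client_ca.crt bob.crt

# Inspect the subject and validity dates
openssl x509 -in bob.crt -noout -subject -dates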

Now let’s turn that CRT into a protected PFX that we can deploy to Bob’s devices.

openssl pkcs12 -export -out bob.pfx -inkey bob.key -in bob.crt

You’ll be prompted for the password to Bob’s key, and then the password to set on the new PFX export — the one that Bob will need to use to install the certificate.

Now you have a PFX. Bob can install it, with the necessary password, in Windows, on his iPad, on his Android device, and so on. He (or the deploying agent) should choose appropriately secure options, such as marking the key non-exportable, or requiring a password on each use. You can also deliver this new certificate directly from a web server for immediate installation.
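As an aside, some command-line clients prefer PEM over PKCS#12, and you can always unpack the PFX if needed; note that -nodes writes the private key unencrypted, so treat the output with care:

# Unpack the PFX back to PEM key + cert (-nodes leaves the key unencrypted)
openssl pkcs12 -in bob.pfx -out bob.pem -nodes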

On your nginx instance, in the server block, you will have set:


ssl_client_certificate /etc/nginx/certs/client_ca.crt;
ssl_verify_client on;

That instance will now only accept signed, time-valid certificates from that CA. This, I should repeat, is separate from the SSL certificate the server uses to authenticate itself to clients, which should be generated by a trusted root CA. And again, learn about revocations, because the time will come when you want to invalidate a deployed certificate.
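A quick way to confirm the gate from a client machine, using a hypothetical hostname: a request without a certificate is rejected (nginx answers with a 400, “No required SSL certificate was sent”), while one presenting Bob’s certificate and key sails through.

# Rejected: no client certificate presented (hostname is hypothetical)
curl https://app.example.com/

# Accepted: curl will prompt for bob.key's passphrase
curl --cert bob.crt --key bob.key https://app.example.com/

On the revocation front, nginx can be pointed at a CRL published by your CA via the ssl_crl directive; generating that CRL requires the fuller openssl ca setup that these simplified steps skip.

# In the same server block; the path is illustrative
ssl_crl /etc/nginx/certs/client_ca.crl;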

Note that nginx makes some variables available for logic or pass-through use. For instance, if I’m using nginx as a reverse proxy/gatekeeper in front of a backend application, I might set:


proxy_set_header x-client-subject $ssl_client_s_dn;
proxy_set_header x-client-success $ssl_client_verify;

Now that back-end application has details about the client certificate (note that $ssl_client_verify is meaningful only if ssl_verify_client is set to optional above, rather than on).
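For that optional mode, here is a minimal sketch of gating at the proxy while still passing the certificate details through to the application; the backend address is hypothetical:

ssl_client_certificate /etc/nginx/certs/client_ca.crt;
ssl_verify_client optional;

location / {
    # Turn away anyone whose certificate did not verify against our CA
    if ($ssl_client_verify != SUCCESS) {
        return 403;
    }
    proxy_set_header x-client-subject $ssl_client_s_dn;
    proxy_set_header x-client-success $ssl_client_verify;
    proxy_pass http://127.0.0.1:8080; # hypothetical backend application
}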

Cheap empowerment, adding significant security controls, transparently, to web platforms.