- Password Security and Cookies
- Security issues with AccessURL original implementation
- Solving all the issues by simple design change
Security issues found
After using the AccessURL browser extension, I found a few issues that allowed me to obtain almost any shared credentials. To better understand the issues, let's look at a sample URL you get from the extension –
https://accessurl.com/GMRT#yzdne1
This URL has two parts: “GMRT” is the identifier sent to the AccessURL server to retrieve the encrypted data, while the part after the hash, “yzdne1”, is never sent to the server and is used as the passphrase to decrypt the data and recover the cleartext cookie.
Unfortunately, both of these values are too short to be safe from a brute-force attack (an attack where you go through every possible value until you get a valid result).
Please note that although AccessURL has improved its security since I reported the following issues, that does not mean it follows any of the suggested solutions or even the minimum requirements outlined in this article. It is up to you to decide whether you trust any third party you share information with.
Issue 1 – Safety in numbers (Part 1)
The most severe (and easiest to fix) issue is that without rate limiting it is very easy for a single person to make hundreds of requests per second to the AccessURL server and harvest all saved credentials.
Rate limiting prevents a single user (or IP address) from making too many requests to the server. Let's say we limit fetching stored credentials to six requests per minute per user. Most users will never notice the limit, since they will probably not access the URL more than once or twice a day; even if they get the URL wrong and try a few variations, it will probably take them longer than a minute to make six attempts. But even with rate limiting, 4 characters give only ~15 million possible URLs. That might look like a high number, but Netflix has ~30 million users, so if only half of them wanted to share a link at the same time, every URL would be valid. (I can only assume the URLs would get longer as more users adopt AccessURL, but even then your chances of hitting a valid URL with a very limited number of trials would still be very high.) An attacker can also use multiple IPs to increase the attempt rate (as Mirai demonstrated, there are plenty of compromised devices out there).
Increasing the identifier part of the URL to 6 characters gives us ~56 billion options, so every user of Facebook, Netflix, and Google could easily share their access simultaneously. But if we really want to prevent attackers from guessing existing URLs, we would probably want a minimum of 8 characters, which gives us ~200 trillion possible URLs and makes guessing hard even without rate limiting.
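(The numbers above are simply the alphabet size raised to the power of the identifier length; a quick sketch of the arithmetic, assuming the 62-character alphabet a–z, A–Z, 0–9 –)

```js
// Number of possible identifiers = alphabet size ^ identifier length.
const keyspace = (alphabet, length) => Math.pow(alphabet, length);

keyspace(62, 4); // ~15 million
keyspace(62, 6); // ~56 billion
keyspace(62, 8); // ~200 trillion
```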
So should we use 8 characters? Maybe, but when it comes to security you should always go as high as you can.
In the last issue discussed in this article, we will see how with a small design change we can get a lot more security without increasing the total URL length.
If you are using hapi you can use the patova plugin together with limitd. Personally, I prefer fail2ban, since I already use it to monitor and block malicious access across multiple services.
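If you want to roll something yourself, the core idea fits in a few lines. Here is a minimal in-memory sketch (the function names and limits are mine, not from any of those libraries); a real deployment would persist counters in something like Redis and key on more than the raw IP:

```js
// Minimal sliding-window rate limiter: allow at most LIMIT requests
// per WINDOW_MS milliseconds from a single key (e.g. an IP address).
const WINDOW_MS = 60 * 1000; // 1 minute
const LIMIT = 6;             // 6 requests per window
const hits = new Map();      // key -> array of recent request timestamps

function isAllowed(key) {
  const now = Date.now();
  const recent = (hits.get(key) || []).filter(t => now - t < WINDOW_MS);
  if (recent.length >= LIMIT) return false;
  recent.push(now);
  hits.set(key, recent);
  return true;
}

// Reject the request before touching the credential store.
// `req` / `res` stand in for whatever HTTP framework is in use.
function handleFetchCredentials(req, res) {
  if (!isAllowed(req.ip)) {
    res.statusCode = 429; // Too Many Requests
    return res.end('Rate limit exceeded');
  }
  // ... look up the identifier and return the encrypted blob ...
}
```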
Issue 2 – Safety in numbers (Part 2)
OK, so let us say AccessURL decided to stick with a short identifier and no rate limiting. Why should they care? All the information is encrypted so an attacker will not be able to get the plaintext cookie data in any way.
Well, not quite.
First, not all the information is encrypted: actually, only the cookie is encrypted, yet AccessURL also saves a screenshot of the site. The screenshot might contain sensitive information like emails, phone numbers, and usernames which shouldn't be public. It also lets an attacker know which service he might gain access to if he manages to break the encryption. An attacker might not care much about a Netflix account, but if Bob shared his email account (which he shouldn't) or Carol shared her bank account (which she certainly shouldn't!), an attacker might decide to try to crack those first. The solution is easy: encrypt everything. Screenshots, cookies, timestamps – let's encrypt it all. We're already doing it for the cookie, so why wouldn't we do it for all the information?
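A minimal sketch of what that could look like with CryptoJS (the field and variable names here are illustrative, not AccessURL's actual schema): bundle everything into one JSON document and encrypt the whole thing.

```js
// Encrypt the entire record, not just the cookie. The passphrase stays
// in the URL fragment and is never sent to the server.
const record = {
  cookie: cookieJson,              // the session cookie data
  screenshot: screenshotDataUrl,   // base64 screenshot of the site
  createdAt: Date.now(),           // timestamps are sensitive too
};
const ciphertext = CryptoJS.AES.encrypt(JSON.stringify(record), pass).toString();
// Only `ciphertext` is uploaded; the server never sees any plaintext field.
```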
OK, so we encrypted everything, and the encryption algorithm we use is super strong and battle-tested, like AES-256 (which is what AccessURL uses). Are we safe now?
Unfortunately no.
As we mentioned before, the passphrase used to encrypt the information is quite short. AES-256 allows us to use keys with 256 bits of entropy, but a 6-character passphrase has only ~36 bits of entropy, and that's if we use lowercase, uppercase, and numbers. If we dig a bit into the extension's code we can see something similar to –
```js
pass = Math.random().toString(36).substring(2, 6);
CryptoJS.AES.encrypt('COOKIE-JSON', pass);
```
We'll look at this code a bit more in the next issue, but we can see it uses base36 to generate the passphrase, which means we only have about 31 bits. Both values are within brute-force range, but brute-forcing 31 bits takes ~30 times less effort than 36 bits.
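(For reference, the entropy figures are just length × log2 of the alphabet size; a quick sketch of the arithmetic –)

```js
// Entropy in bits of a random string = length * log2(alphabet size).
const entropyBits = (alphabet, length) => length * Math.log2(alphabet);

entropyBits(36, 6); // ~31 bits – lowercase + digits, as in the snippet above
entropyBits(62, 6); // ~36 bits – lowercase + uppercase + digits
```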
Whereas for the identifier part an attacker could, without rate limiting, try hundreds of URLs each second, cracking the passphrase is done locally and is limited only by the attacker's hardware. On a cheap laptop an attacker can attempt thousands of guesses each second, and with more expensive hardware billions or even trillions of guesses per second.
I tested a random URL on my laptop and it only took a couple of days to get the key and decrypt the cookie. But an attacker willing to spend a few bucks on a cloud service would probably get the key in a few minutes for less than 5 USD.
In theory, the solution is quite simple: use a longer passphrase with more entropy. The best passphrase would contain 256 bits of entropy (since that's the AES-256 key size), and for that we would need a base64-encoded string of 43 characters. If you don't care about URL length or data storage, I would say use the maximum key size available. Even half of that (128 bits) would be considerably secure, and it would take all the computing power in the world longer than the life of the universe to brute-force. But from a usability perspective there is probably no difference between 43 characters and 22, so why not use the higher one?
I suspect that AccessURL wanted to keep their URLs as short as possible, so 43 characters just for the key might be too much, and even 22 characters may be too long. Six characters are probably always too short, but can we get a reasonable result with a shorter passphrase?
As we saw with the identifier part, we could use rate limiting to make it harder for an attacker to brute-force the URL. Can we do something similar for the passphrase?
It turns out we can do something very similar for the passphrase, but since it means actually slowing down the attacker's hardware, it is not as straightforward as enabling rate limiting on our server. To explain how this is done, we first have to look at one of the first steps in the encryption process, called key derivation.
We mentioned before that AES-256 uses 256-bit keys (no more, no less), so if we are using a six-character passphrase we need a way to turn it into a 256-bit value. What most key derivation functions (KDFs) actually do is use hash functions to produce a fixed-size key. I will not go into much detail about hash functions here, but they play a key role in password security and encryption, so if you are not familiar with them you should read up on them.
For this topic here is all you need to know about hash functions –
- A given hash function output is always the same length.
- A hash function is deterministic so for identical inputs, it will always return the same output.
- For a small change in the input, there is a large change in the output, and it is impossible to predict the output without running the hash function on the input.
- It is impossible to retrieve the original input from a given output.
The last two points are only valid for cryptographic hash functions and we will assume our KDF is using a cryptographic hash function.
So let's look at a few outputs of the MD5 hash function to better understand what it does –
MD5(‘password’) => 5f4dcc3b5aa765d61d8327deb882cf99
MD5(‘passwOrd’) => 827ccb0eea8a706c4c34a16891f84e7b
MD5(‘Super Long Password That No One Will Ever Guess’) => e36276e3ce9ecfdb5a91484a8593d0c0
As we can see, no matter the length of the input we always get a 32-digit hex number (128 bits) as the output. Even though ‘password’ and ‘passwOrd’ are almost identical, we get a completely different output for each of them. The last ‘Super Long…’ string is larger than 128 bits, but MD5 still returns a 128-bit value no matter how long the input is.
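(If you want to reproduce these numbers yourself, CryptoJS exposes the same hashes; a quick sketch –)

```js
// CryptoJS hash functions return a WordArray; toString() gives the hex digest.
CryptoJS.MD5('password').toString();
// -> '5f4dcc3b5aa765d61d8327deb882cf99' (always 32 hex chars, i.e. 128 bits)

CryptoJS.MD5('Super Long Password That No One Will Ever Guess').toString().length;
// -> 32, no matter how long the input is
```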
What this means is that if we use a hash function to derive the key for our encryption, then even if Eve can guess the password she will have to run the KDF to get the key and decrypt the ciphertext. Most hash functions, including MD5, are very fast to run, so using them naively doesn't really slow down an attacker (which is unfortunate, since CryptoJS clones OpenSSL's implementation, which by default uses MD5 as the base for its KDF). Newer and more secure hash functions have a difficulty parameter built in: bcrypt, scrypt, and more recently Argon2 are considered very secure and should be considered for any modern implementation.
But for this post, I will give an example of using PBKDF2 not because it is more secure but because it is very common and easy to understand.
To secure its output, PBKDF2 adds a salt and runs multiple rounds of a selected hash function.
We will not go into salting here but the multiple rounds are easy to understand.
Assume we already naively use the SHA-1 hash and want to slow the attacker down 1000 times. What we can do is run SHA-1 in a loop 1000 times to derive the key –
```js
let key = 'password';
for (let i = 0; i < 1000; i += 1) {
  key = SHA1(key);
}
```
Now Eve has to run 1000 rounds of the hash for each password she tries.
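In practice you would not hand-roll that loop; CryptoJS already ships a PBKDF2 implementation. Here is a sketch of how the derivation could look (the salt handling is simplified; the salt must be stored alongside the ciphertext so the recipient can derive the same key, and `pass` is the passphrase from the earlier snippet):

```js
// Derive a 256-bit key from the short passphrase using PBKDF2.
// keySize is in 32-bit words (256 / 32 = 8); iterations is the round count
// that slows an attacker down (see the time budget discussed below).
const salt = CryptoJS.lib.WordArray.random(16); // store next to the ciphertext
const key = CryptoJS.PBKDF2(pass, salt, { keySize: 256 / 32, iterations: 2000 });

// Encrypt with the derived key (and an explicit IV) instead of the raw passphrase.
const iv = CryptoJS.lib.WordArray.random(16);
const encrypted = CryptoJS.AES.encrypt('COOKIE-JSON', key, { iv: iv });
```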
Now there is a catch: the more rounds we use, the slower it will be for Alice and Bob to calculate the hash as well. But as we'll see next, this is almost negligible for the users and devastating for the attacker.
Let us budget the KDF run time at 100ms, which is fast enough to be almost transparent to the user. On my laptop, a single round of SHA-1 using CryptoJS takes less than 0.05 milliseconds, so 2000 rounds of SHA-1 come in a bit under 100ms (CryptoJS is pretty slow; see my recommendation at the end of this post about using SJCL). Presume an attacker can run 3.4 million rounds of SHA-1 each millisecond. Eve will need something stronger than my laptop, but those numbers are achievable with a $400 GPU, so we're not talking about a state-level attacker, more like a teenager who saved her allowance.
So with that GPU and its 3.4 billion hashes per second, she can run through the entire 6-character base36 keyspace in just 0.6 seconds. That's crazy fast, but if we use 2000 rounds per key it will take her about 21 minutes. That is a significant slowdown, yet still way too fast for us to be secure, and this is with really cheap hardware. Let's see where we stand with 8 characters of base58 –
With a single round of SHA-1 it would take about 10 hours to break; with 2000 rounds, that becomes 2.4 years!
That might sound like a lot, but remember Eve is only using cheap hardware, and a year or two from now that hardware will be even cheaper. We wouldn't want to change our code every few months just to keep up with the latest hardware, so we should choose values that give us a bit more breathing room.
With 10 characters and 2000 rounds we are talking about ~8,000 years to crack a single key. Even if five years from now hardware is 1000 times faster and twice as cheap, it would still take our attacker four years to break a single key on the same budget.
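(All of these estimates come from the same back-of-the-envelope formula; a sketch, using the ~3.4 billion SHA-1 hashes per second assumed for Eve's GPU –)

```js
// Worst-case brute-force time = keyspace * KDF rounds / attacker hash rate.
const HASHES_PER_SECOND = 3.4e9; // 3.4 million rounds per millisecond

const crackTimeSeconds = (alphabet, length, rounds) =>
  Math.pow(alphabet, length) * rounds / HASHES_PER_SECOND;

crackTimeSeconds(36, 6, 1);     // ~0.6 seconds
crackTimeSeconds(36, 6, 2000);  // ~21 minutes
crackTimeSeconds(58, 8, 1);     // ~10 hours
crackTimeSeconds(58, 8, 2000);  // ~2.4 years
crackTimeSeconds(58, 10, 2000); // ~8,000 years
```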
So are 10 characters enough? Maybe, but probably not. As with the previous issue, in the last issue I'm going to suggest a small design change that will give us much better security.
I think I've mumbled enough numbers, so to summarize this point, there are three things I want you to take away from it –
- Weak passwords are always weak. As we saw, 6-character passwords are very hard to protect even if we add more rounds to the KDF. If the password is short or appears on a common-password list, no amount of rounds will protect it.
- Always choose higher numbers when possible. Sure, we saw that 10 characters with 2000 rounds of SHA-1 seem fairly well protected against brute force, but unfortunately that is actually the bare minimum for being somewhat secure. It is very hard to predict where hardware and technology will go: GPUs are much faster at calculating hashes than CPUs, and ASICs are much faster than GPUs. A year from now someone might find a better algorithm to calculate SHA-1, or break it completely. That is why you should always use a common and tested hash algorithm (like bcrypt or scrypt) with as many rounds and as long a password as possible.
- The more valuable the data you are protecting, the stronger the protection needs to be. That seems kind of obvious, but to be clear, the only reason we might be OK with a mere 10-character passphrase for encrypting a Netflix cookie is its low value, so we can assume whoever tries to attack it won't spend a lot of time and money doing so. If someone wants to share access to their e-mail or online banking accounts, they will need better security, since the attacker may use far more expensive hardware or even try to compromise the underlying system to bypass the encryption. Personally, I wouldn't trust any third party with access to my e-mail account (no matter how secure they are).
Issue 3 – Being truly random
This is just a small bonus issue. I don't know whether it is actually exploitable in the way AccessURL implements its encryption, but it is always better to follow the recommended best practice, which tells us not to use Math.random() for generating encryption keys.
Never use any random function (PRNG) in any language for encryption unless the function is cryptographically secure (CSPRNG) and intended for use in encryption. Math.random is not cryptographically secure.
In modern browsers like Mozilla Firefox and Google Chrome, we can use window.crypto.getRandomValues to get cryptographically secure random numbers. The reason you shouldn't use Math.random is that its values might look random but they are not always hard to predict: an attacker who can observe enough random values might be able to recover the PRNG's inner state and predict future values. What mitigates this issue a bit for AccessURL is that Chrome recently changed its Math.random() implementation to be a lot more random than it used to be (but it is still not safe for cryptography).
The random values are also generated on the client, so it's a lot harder for an attacker to fish random values out of the client and figure out the PRNG's internal state. But that doesn't mean the risk isn't there.
With modern browsers you should use window.crypto.getRandomValues; if it's not available, it's usually advised to use the random function provided by your crypto library. Unfortunately, CryptoJS's PRNG doesn't seem to be cryptographically secure (it uses Donald Knuth's linear congruential PRNG; I couldn't find any specific information about it, but according to Wikipedia linear congruential PRNGs are not safe for cryptography).
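For illustration, here is one way to generate the passphrase with the browser's CSPRNG instead of Math.random() (a sketch; the helper name and alphabet are mine, and the length should be chosen per the previous issue):

```js
// Generate a random string using window.crypto.getRandomValues instead of Math.random().
// (For brevity this keeps the plain modulo; a production version should reject values
// above the largest multiple of alphabet.length to avoid a small bias.)
function securePassphrase(length, alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789') {
  const values = new Uint32Array(length);
  window.crypto.getRandomValues(values);
  let out = '';
  for (let i = 0; i < length; i += 1) {
    out += alphabet[values[i] % alphabet.length];
  }
  return out;
}

// Instead of: Math.random().toString(36).substring(2, 6)
const pass = securePassphrase(10);
```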
CryptoJS has very insecure defaults and a PRNG that shouldn't be used for crypto. If you are using this library I would suggest considering a better alternative. Stanford has its own JS crypto library, SJCL; it was written and tested by some of Stanford's crypto experts and professors and is, I think, considered the only JS crypto library worth using. SJCL comes with relatively secure defaults, has a built-in secure PRNG, and when I tested it, it was a lot more performant than CryptoJS: I got about a 5x speed improvement, running PBKDF2 with 10,000 rounds in 100ms.
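As far as I remember SJCL's convenience API (treat this as a sketch, not a drop-in), raising the PBKDF2 iteration count is a one-liner, and the returned JSON string bundles the salt, IV, and parameters for you:

```js
// Assumed SJCL convenience API: sjcl.encrypt(password, plaintext, params).
// iter raises the PBKDF2 round count; ks is the key size in bits.
const ciphertext = sjcl.encrypt(pass, 'COOKIE-JSON', { iter: 10000, ks: 256 });

// Anyone holding the passphrase (the URL fragment) can later decrypt:
const plaintext = sjcl.decrypt(pass, ciphertext);
```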
In the next part we'll see how we can solve all these issues, plus a bonus issue, with a simple design change –
Part three – Solving all the issues by simple design change