What kind of back door into encryption do the Five Eyes want?

Friday, Jun 30, 2017, 03:16 AM | Source: Pursuit

Vanessa Teague, Suelette Dreyfus, Chris Culnane, Toby Murray

The Australian Prime Minister Malcolm Turnbull and Attorney General George Brandis want to introduce laws that would force technology companies to ensure their systems are capable of decrypting terrorists’ communications.

Following a meeting of the Five Eyes intelligence alliance in Ottawa - comprising Australia, Canada, New Zealand, the United Kingdom and the United States - a careful reading of the group’s recent communique shows agreement on much less: a commitment to “develop our engagement with communications and technology companies to explore shared solutions”.

Messaging apps like WhatsApp use end-to-end encryption, with a key known only to the recipient. Picture: Pixabay

Surely allowing good police officers to read terrorists’ communications could only be good for everyone? But the controversy arises because the mathematical laws that stop good people from cracking terrorists’ communications are exactly the ones that keep your banking, health, voting and other data secure.

“This is not about creating or exploiting back doors, as some privacy advocates continue to say, despite constant reassurance from us,” says Mr Turnbull. Decrypting terrorists’ communications without undermining the security of everyone else sounds great, but this is not an engineering plan, and every known attempt has failed.

The Australian government has also announced a new “information warfare unit”. You can bet that many other countries have well-established units, some of them actively looking for flaws in Australia’s defences. And if we put weaknesses there, there is a good chance that other people will find them.

But, in order to understand the implications of the debate, first we need to understand what encryption is.

Decrypting the message

Encryption means hiding the content of information from everyone except the person you intend to talk to; it does not hide the existence of the information, just what it says.

Modern encryption uses maths. An algorithm on your computer transforms your message into a sequence of ones and zeros that’s meaningless except to a reader who knows the secret key that can transform it back, or decrypt it. Naturally, people have been trying to break encryption since it was invented - trying to recover the message from the encrypted version without knowing the key. This is called cryptanalysis.
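The key’s role can be sketched with a toy cipher - here a one-time pad, which XORs the message with a random key of the same length. This is an illustration only, not a production scheme; real systems use vetted algorithms such as AES.

```python
# Toy illustration of encryption and decryption, not a production cipher.
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the corresponding key byte. Applying the same key
    # twice recovers the original, so one function encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))        # the secret key

ciphertext = xor_cipher(message, key)          # meaningless-looking bytes
assert ciphertext != message                   # (overwhelmingly likely)
assert xor_cipher(ciphertext, key) == message  # the key transforms it back
```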

The best-known encryption algorithm is RSA. It works on the simplest possible mathematical foundation - the fact that it’s easy to multiply but hard to factorise. To generate an RSA key you choose two random, very large prime numbers (hundreds of digits long), then you multiply them together and publish the result. You keep the prime factors secret - this is your decryption key. If you choose large enough primes, it’s just too hard to run enough computation to extract them and decrypt your message.
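That asymmetry - multiplication easy, factorisation hard - can be seen with deliberately tiny primes. The numbers below are purely illustrative; real keys use random primes hundreds of digits long.

```python
# Toy RSA with tiny primes, purely to show the maths.
p, q = 61, 53             # the two secret primes
n = p * q                 # 3233 - published; multiplying is easy
e = 17                    # public encryption exponent
phi = (p - 1) * (q - 1)   # computing this requires knowing the factors
d = pow(e, -1, phi)       # private decryption exponent (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```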

Ordinary posts on Facebook are hidden from eavesdroppers, but not from Facebook itself. You send your encrypted message to Facebook, which decrypts and reads it, then encrypts it again to send it to others. They need to read your communications to assess your mood, decide which of your friends’ posts to show you and display targeted ads for stuff you’re likely to buy. Gmail is similar.

If the government’s proposal is simply to give Australian law enforcement a way of getting a warrant to ask these companies for data that they already have, then that seems perfectly reasonable.

End-to-end encryption

It gets complicated when individuals use end-to-end encryption. Messaging apps such as Signal, Wickr, Telegram and WhatsApp run this kind of encryption on your device, encrypting your message with a key known only to the recipient – it is decrypted only when it arrives on your friend’s device.

The best known encryption algorithm is RSA, which uses two random, very large prime numbers. Picture: Shutterstock

The company or service provider never learns the decryption key, and cannot read the message. It still learns who is communicating with whom - metadata is unaffected. End-to-end encryption delivers on the promise made when metadata retention was introduced: that law enforcement would only be able to access the “address on the front of the envelope”, not the contents.
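The distinction can be sketched as a hypothetical relay that stores the envelope (metadata) while the payload stays opaque. The XOR stand-in cipher and the names here are illustrative assumptions, not any real app’s protocol.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real end-to-end cipher, for illustration only.
    return bytes(b ^ k for b, k in zip(data, key))

# Alice and Bob share a key the provider never learns.
shared_key = secrets.token_bytes(32)
ciphertext = xor_cipher(b"see you at eight", shared_key)

# Everything the relaying provider gets to keep:
provider_record = {
    "from": "alice",        # metadata: the "envelope"
    "to": "bob",            # the provider still sees who talks to whom
    "payload": ciphertext,  # the contents: opaque bytes
}

assert provider_record["payload"] != b"see you at eight"  # provider can't read it
assert xor_cipher(provider_record["payload"], shared_key) == b"see you at eight"  # Bob can
```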

It’s uncertain what the Australian government proposes to do in this case.

One possibility is to compromise the device, a technique that Edward Snowden described early in his revelations about the National Security Agency (NSA), saying: “unfortunately, endpoint security is so terrifically weak that NSA can frequently find ways around it”. This means there’s no need for cryptanalysis – it’s just a matter of reading the message before it has been encrypted or after it has been decrypted.

Perhaps the government is proposing to ask device manufacturers for a guarantee that they will have a method of reading communications or files before they are encrypted. If this is so, then it is very different from asking Facebook or Gmail for messages they already have.

This was the point of Apple’s landmark case against the FBI: whether Apple, as the manufacturer of the device and the provider of its cloud storage, was obliged to provide access. But even the manufacturer of a device might not, in general, be able to access the data. Apple had intended to design a phone that even Apple couldn’t read, but a bug in the logic of its security protections allowed the FBI to read the messages anyway.

So the requirement to allow the manufacturer access means deliberately designing the device to be less secure than it could be, which inevitably introduces the possibility that someone other than legitimate law enforcement might try to use the same process to get the data.

The WannaCry ransomware that had a catastrophic impact on Britain’s NHS exploited a weakness that had allegedly been used by the NSA for surveillance. The exploit, dubbed EternalBlue, was quickly repurposed when it came into the possession of criminals - and WannaCry was born.

Although this vulnerability was probably discovered rather than inserted, and then deliberately weaponised, the same question applies here: how could we be confident that a facility designed for reading people’s communications would be used only by legitimate law enforcement officers with a warrant, and not by organised crime, foreign spy agencies, or terrorists themselves?

Getting the right key

Another possibility is to weaken or backdoor the encryption algorithms so that cryptanalysis becomes easier. The simplest kind of backdoor is key escrow, an arrangement in which the key needed to decrypt encrypted data is held in escrow so that, under certain circumstances, an authorised third party may gain access. The government then promises not to read your messages without a good reason.
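A minimal sketch of the escrow idea, using toy RSA with deliberately tiny primes: the session key is wrapped once for the recipient and once for the escrow authority, so the authority can decrypt whenever it chooses. All names and numbers here are illustrative assumptions.

```python
def make_rsa_key(p, q, e=17):
    # Toy RSA keypair; returns (public, private), each as (exponent, modulus).
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)

def wrap(key_number, public):        # encrypt the session key
    exponent, modulus = public
    return pow(key_number, exponent, modulus)

def unwrap(wrapped, private):        # decrypt the session key
    exponent, modulus = private
    return pow(wrapped, exponent, modulus)

recipient_pub, recipient_priv = make_rsa_key(61, 53)
escrow_pub, escrow_priv = make_rsa_key(67, 71)   # held by the authority

session_key = 99   # the key that actually protects the conversation
for_recipient = wrap(session_key, recipient_pub)
for_escrow = wrap(session_key, escrow_pub)       # the mandated extra copy

assert unwrap(for_recipient, recipient_priv) == session_key
assert unwrap(for_escrow, escrow_priv) == session_key   # and so can they
```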

The opportunities for abuse are obvious. Given that we have already seen metadata accessed in order to monitor journalists’ communication patterns without a warrant, concerns about the abuse of key escrow are justified.

The National Security Agency in the United States had created a backdoor, but it was re-keyed. Picture: Wikimedia

But backdoors can be more subtle. A weakness in a commonly-used random number generator called Dual EC DRBG was allegedly introduced by the NSA. The backdoor consisted of parameters the NSA knew, but nobody else did. It probably allowed them to decrypt Virtual Private Network traffic. Most concerning was the discovery, many years later, that someone had quietly re-keyed the backdoor. In 2015, tech giant Juniper Networks revealed that the software securing its routers used the same weakened algorithm, but with different parameters. Same backdoor, new key - but exactly whose key remains a mystery.

Cryptographic keys need to be long in order to be secure – otherwise the attacker can simply guess. US policy in the 1980s and 1990s restricted the export of cryptography that used long keys in the hope of decrypting foreign communications.
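A toy demonstration of why length matters, using a repeating-XOR stand-in cipher: a 16-bit key falls to exhaustive guessing in a fraction of a second. Here the attacker checks guesses against a known plaintext; in practice they would look for plausible English.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR as a stand-in cipher, for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"attack at dawn"
key = (51715).to_bytes(2, "big")        # a mere 16-bit key
ciphertext = xor_cipher(message, key)

# The attacker simply tries all 65,536 possible keys.
for guess in range(2**16):
    if xor_cipher(ciphertext, guess.to_bytes(2, "big")) == message:
        break

assert guess.to_bytes(2, "big") == key  # key recovered by brute force
```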

Unfortunately, as late as 2015, many servers still communicated using the deliberately-weakened “export grade” cryptographic key lengths. And there are examples here in Australia.

In 2015, during the NSW state election, we found that online voting was vulnerable to tampering because it included code from a server using export-restricted cryptography. By then, the necessary cryptanalysis could be run overnight on the Amazon cloud for about AU$100. A misguided US policy to facilitate NSA cryptanalysis had introduced an open opportunity for vote exposure and manipulation in an Australian voting system some 20 years later.

Do the maths first, then write the legislation

Good-quality cryptography is available all over the Internet. If Australia insists on these weakened standards, law-abiding Australians will adopt them but terrorists will simply download something secure. Meanwhile any weakness in our algorithms or devices will jeopardise the security of our banking, elections, health records and just about everything else.

This is a critical matter of national security, both ways. It’s not a political dispute between people who want to catch terrorists and people who want to protect their privacy – it’s an engineering problem that has to be addressed with a clear understanding of what the options and logical implications are.

Making our communications a little less secure so that good people can read our messages might make us much more vulnerable to bad people in the long run.

Banner: Shutterstock