Cracking the iPhone

Encryption and the San Bernardino case

The room was dimly lit at Espace C, a shared workspace in Montreal. Journalists and activists were drinking beer and eating chips, computers on their laps. On a giant screen was a dark software window that showed the identifiers of everyone’s laptops. We could see a list of the websites we were accessing in plain text.

Anyone in the room with access to the same Wi-Fi router could observe, over the air, which websites we were accessing, and therefore deduce what we were looking at. A skilled hacker could spoof one of those websites and grin as we handed over our usernames and passwords.

We were at a cryptoparty: a gathering during which people learn how to use encryption. It was hosted by Dmitri Vitaliev, a digital security consultant and trainer for human rights activists and independent journalists, as part of the monthly meetings of Hack/Hackers Montreal.

It was the first time I came to realize that encryption is necessary for each and every one of us. Without it, we cannot communicate away from prying ears. How can anyone speak their mind if they always feel spied on? How could today’s booming online economy thrive without secure encrypted online transactions? How could innovation happen in all sectors of our economy if competitors could intercept trade secrets?

The Apple v FBI case threw the media spotlight on encryption. The FBI had recovered an iPhone belonging to Syed Farook, one of the two shooters who killed 14 people and injured 22 in San Bernardino, California. The agency said the phone contained data essential to its investigation and wanted to force Apple to create a backdoor in the iPhone’s encryption to gain access to it.

Its owner was a terrorist; what kind of privacy did he deserve? Isn’t he dead, after all? The real implications of the case, however, ran much deeper than they seemed.

The first thing to understand about encryption is that the Internet does not exist in the cloud. It has a physical presence in real life, and it relies on a vast physical infrastructure. The browser window on your laptop is only the visible end of that network. Your Wi-Fi router connects to a cable modem. Cables run from houses, schools, and offices to data centers operated by your Internet service provider (ISP). Your ISP maintains the links between its data centers and other ISPs, connecting you to the Internet. Each point in this physical infrastructure is an opportunity for your ISP, the government, or hackers to intercept your data.

This is where encryption comes in. It is what automatically keeps anyone who could be snooping on the infrastructure from intercepting your bank account information, your social media usernames and passwords, your browsing habits, or the intimate pictures you just sent to your partner. We can also use the same encryption on data stored on electronic devices. Even if that information never goes on the Internet, it is protected in case of theft or search – be it on a smartphone, tablet, computer, or Farook’s iPhone.
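Encrypting data at rest works symmetrically: one secret key both scrambles and recovers the stored bytes. The sketch below illustrates the idea in Python with a toy XOR keystream derived from SHA-256. It is purely illustrative and not a secure cipher; real devices use vetted algorithms such as AES, and all the names and values here are made up.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a stream of pseudo-random bytes from the key (toy construction).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse: applying it twice restores the original,
    # so the same function both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"device passcode"                                # the only secret
stored = xor_crypt(secret, b"contacts, photos, messages")  # what a thief sees
assert xor_crypt(secret, stored) == b"contacts, photos, messages"
```

Without the key, the stored bytes are noise to whoever steals or searches the device, which is exactly the property at stake with Farook’s iPhone.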

Two people who want to exchange an encrypted message need a shared key to decrypt it. But if we assume someone could be eavesdropping, they cannot agree on that key publicly, because the eavesdropper would intercept it. To get around this problem, we use public key cryptography. Modern public key cryptography involves complex mathematics performed by computers, but its main concepts can be easily explained using colours.

The analogy rests on a simple principle: it is easy to mix two colours together to get a new colour, but extremely hard to separate a mixture back into the exact colours that produced it. Say Alex and John want to send each other a message, but Eve is listening. Alex and John publicly agree on a shared colour, say yellow, which even Eve can see. Each of them then chooses a colour they keep private, say red for Alex and blue for John, mixes it with the public colour, and sends the resulting mixture to the other over the network.

Here is the trick: Alex takes the mixture she received from John and adds her private colour to it, and John does the same in reverse, so both end up with the same final mixture. Eve only ever sees the public colour and the two intermediate mixtures; since she cannot separate those mixtures to recover the private colours, she has no practical way to reproduce the final shared colour.

The same thing happens online, but with numbers. The public colour is called a public key, and the private colours are private keys. By combining keys this way, Alex and John can use a public infrastructure to establish a shared secret and exchange private messages, transactions, or other data without Eve being able to read them.
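The numeric version of the colour exchange is known as the Diffie-Hellman key exchange: "mixing" becomes modular exponentiation, which is easy to compute but hard to undo (the discrete-logarithm problem). A minimal Python sketch, using toy numbers far too small to be secure:

```python
# The shared "yellow": a public base and modulus, agreed on in the open.
public_base, modulus = 5, 23

alex_private = 6    # Alex's secret "red" (never transmitted)
john_private = 15   # John's secret "blue" (never transmitted)

# Each mixes their private number with the public one and sends the result.
alex_mix = pow(public_base, alex_private, modulus)   # Eve sees this
john_mix = pow(public_base, john_private, modulus)   # Eve sees this too

# Each applies their own private number to the mixture they received.
alex_shared = pow(john_mix, alex_private, modulus)
john_shared = pow(alex_mix, john_private, modulus)

assert alex_shared == john_shared   # both arrive at the same secret key
```

Eve sees the base, the modulus, and both mixtures, but recovering a private exponent from them is computationally infeasible at real key sizes (thousands of bits rather than the two digits used here).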

The FBI’s order, which would have forced Apple to create a backdoor in its iOS encryption, would have handed the government what could be called a super private key, itself exposed to theft or leaks. It immediately attracted backlash in the media. In effect, it would have set a precedent under which any tech company could be ordered to give the United States government a way to bypass its encryption.

Apple’s CEO, Tim Cook, started the counterattack on February 16 with an open letter to Apple’s customers, writing:

The FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation. In the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession.

Outrage spread from every corner of the internet, and tech giants filed amicus briefs supporting Apple against the FBI. Michael Geist, a law professor at the University of Ottawa, where he holds the Canada Research Chair in Internet and E-commerce Law, wrote on his blog that the case would establish a legal precedent that could be used to force tech companies to build backdoor access into the devices they make, break their encryption services, or weaken their security measures.

Forcing Apple to create such a piece of software, already dubbed FBiOS online, could have exposed the millions of iOS users around the world to hackers. “Creating security vulnerabilities leaves everyone more vulnerable since there is no mechanism to limit weakened security measures or backdoors to the good guys,” Geist wrote. “If the U.S. government can get it, so too can other foreign governments or criminal organizations.”

The story came to an abrupt end on March 28, however, when the FBI withdrew its case against Apple. The agency had found a way to hack into the iPhone without Apple’s help. Since its whole case hinged on the premise that only Apple could unlock Farook’s device, it could no longer hold up in court.

Still, that unexpected conclusion raises more questions. In a statement, the FBI said a third party had assisted in the feat. That third party most likely provided the FBI with what is called a zero-day exploit: a vulnerability in a piece of software that its maker is unaware of. Zero-day exploits are commonly sold online by hackers, either to the makers of the software on the white market, to other hackers on the black market, or to the United States government on an increasingly unregulated grey market. According to Forbes journalist Andy Greenberg, a zero-day exploit for Apple’s iOS sells for between $100,000 and $250,000, though with the public backlash the FBI was facing, the price might have gone higher.

It has become increasingly common for the United States government to buy zero-day exploits. Journalist Kim Zetter documented the trade in her book Countdown to Zero Day: “In the last few years, [the market for zero-day exploits] has gone commercial and exploded as the number of buyers and sellers has ballooned, along with prices, and the once murky trade has become legitimized with the entry of government dollars into the arena to create an unregulated cyberweapons bazaar.” Zetter also wrote that Apple has no reward program for hackers who find vulnerabilities in its software, so the incentive to sell iOS zero-day exploits to firms specialized in reselling them to governments is higher than for exploits found in software from Microsoft, Facebook, or Google, all of which have long had reward programs in place.

The possibility that the FBI paid for a zero-day exploit affecting the world’s second most used mobile operating system, one relied upon by countless governments, industries, and ordinary users for its security, is deeply troubling. Since Apple has no way of obtaining the exploit, it cannot distribute an update to patch the vulnerability. If the exploit falls into the hands of hackers or foreign governments, it would expose countless people. That possibility is almost certainly a far greater threat to United States national security than anything the FBI may have found on Farook’s iPhone.

Reporting and some pictures by Olivier Sylvestre.
Layout and some other pictures by Guillaume Levasseur.