The standoff between Apple and the FBI is the latest chapter in the escalating battle between technology companies that are encrypting data in order to protect customers’ privacy and security and the US government, which says it needs the ability to access encrypted data to keep America safe.
Sharon Goldberg, a Boston University associate professor of computer science and a fellow at the Hariri Institute for Computing and Computational Science and Engineering, recently discussed her views on how the case relates to security and the growing debate over encryption.
The FBI says it is essential for its investigation to have access to the data on the shooter's iPhone. As an expert on network security and data, what do you think?
Well, the headlines say, “Apple refuses to unlock terrorist phone.” That sounds really bad if you assume that the information needed for this investigation is only available on the shooter’s phone and can’t be obtained in any other way.
But it’s important to remember that the information we see on our phones is not just stored on our phones. For example, every phone call we make—who we call, how long we spoke, how often we called them, at what time we called them—is stored by the phone company. You can see this information in your phone bill every month.
Your phone also has a GPS, and some cell phone providers use it to very accurately track your location. Your Gmail is stored on Google’s servers. Yahoo mail is on Yahoo servers. Facebook messages are stored on Facebook’s servers. None of that information is encrypted—Google, Yahoo, and Facebook can and do share that information in response to a search warrant.
Even iMessage, the iPhone text messaging application that encrypts the individual messages sent from one person to another, reveals “metadata”: who is talking to whom, when they are talking, and how frequently. All of this information is incredibly revealing.
Is the FBI's court order against Apple part of the argument law enforcement officials have been making, that because of technology companies' growing use of encryption for iPhones and other devices, as well as for internet connectivity, they can't gather the information they need for surveillance—that we're "going dark"?
The Berkman Center for Internet and Society at Harvard University recently issued a report analyzing this issue. The report found that we’re at the opposite of “going dark.” With people spending more and more time on the internet, the opportunities for collecting information about people are increasing all the time. This increasing use of encryption is a natural response to the unprecedented amount of information we are putting online every day.
Encryption helps prevent malicious actors from getting our banking information, stealing our identity, gaining access to our health records, reading our tax returns, and eavesdropping on our communications. The internet is so, so insecure. If you feel you’re “going dark,” that you can’t do surveillance with the vast amount of information out there, and the vast number of security vulnerabilities out there, then you’re doing something wrong.
So this case is about Apple being asked to break the encryption on the shooter's phone, right?
Not exactly. What Apple is being asked to do is engineer a new vulnerability into the iPhone. The FBI wants Apple to write a new piece of software that defeats the security features of the iPhone. But the existence of this piece of software would create a large number of security risks for Apple and its users.
What kind of security risks?
The way the iPhone works right now is that to unlock the phone, you have to manually enter a four- or six-digit passcode. If you enter the passcode incorrectly too many times—you get 10 guesses—then you’re locked out of the phone for good. The FBI wants Apple to write software that disables this limit, so that agents can simply try passcodes until one works.
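To make the mechanism concrete, here is a minimal sketch of a retry-limited passcode check. This is a toy model only: Apple’s real implementation is enforced by the Secure Enclave hardware and is not public, and all names and values below are illustrative.

```python
MAX_ATTEMPTS = 10  # the 10 guesses described above


class PasscodeLock:
    """Toy model of a retry-limited passcode check.

    Not Apple's real code, which is enforced in hardware by the
    Secure Enclave; this just illustrates the lockout policy.
    """

    def __init__(self, passcode: str):
        self._passcode = passcode
        self._failed = 0
        self._locked = False

    def try_unlock(self, guess: str) -> bool:
        if self._locked:
            return False  # locked out: even the right passcode is refused
        if guess == self._passcode:
            self._failed = 0
            return True
        self._failed += 1
        if self._failed >= MAX_ATTEMPTS:
            self._locked = True  # too many wrong guesses: lock out for good
        return False
```

With the limit in place, blind guessing is hopeless. With it removed, a four-digit passcode falls to at most 10,000 automated tries, which is exactly why removing the limit is so valuable to an investigator.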
The FBI says this software will only need to run on the specific phone the court order is referring to. Probably this can be done by writing the code so that it only runs on a phone with a specific hardware identification number—that of the shooter’s phone.
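In code, such a gate might look like the sketch below. The identifier format and the function name are invented for illustration; the real check would be buried in Apple’s own software.

```python
# Toy device-ID gate; the identifiers below are made up for the sketch.
TARGET_DEVICE_ID = "DEVICE-0001"  # stand-in for the one phone's hardware ID


def unlock_tool_should_run(hardware_id: str) -> bool:
    """Return True only on the single device named in the court order."""
    return hardware_id == TARGET_DEVICE_ID
```

Any other phone fails the check and the tool refuses to run; the questions below discuss ways a gate like this could nonetheless be undermined.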
Finally, this new software will be loaded onto the shooter’s phone using the “software update” mechanism that iPhone users are so familiar with. However, for an iPhone to accept a software update, the software needs to be cryptographically signed by Apple. So Apple is being asked to write and cryptographically sign software that will allow the FBI to unlock the shooter’s phone.
What's a cryptographic signature?
Think of it like this: In ancient Egypt, the pharaoh would write a letter and then use his signet ring to stamp a wax seal on it. That meant that everything enclosed inside the wax seal—everything inside that letter—was written by the pharaoh, and only by him. That makes the pharaoh’s signet ring an incredibly valuable object. Anyone holding the signet ring could issue decrees in the name of the pharaoh.
A cryptographic signature is sort of the digital equivalent. The software being signed is like the decree from the pharaoh, the cryptographic signature on the software is like the wax seal, and the cryptographic signing key is like the pharaoh’s signet ring.
By cryptographically signing the software, Apple is certifying that the software is written by Apple and only by Apple. Even changing a single line in the code would stop the signature from validating.
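As a rough illustration, the sketch below uses an HMAC from Python’s standard library as a stand-in for Apple’s real code-signing scheme, which is asymmetric: Apple signs with a closely held private key, and the phone verifies with a built-in public key. The key and the snippet of “code” here are invented. The property demonstrated is the one just described: change even one byte of signed code and the signature no longer validates.

```python
import hashlib
import hmac

SIGNING_KEY = b"illustrative-signing-key"  # stand-in for Apple's guarded key


def sign(code: bytes) -> bytes:
    """Compute a signature over the code; only the key holder can do this."""
    return hmac.new(SIGNING_KEY, code, hashlib.sha256).digest()


def signature_valid(code: bytes, signature: bytes) -> bool:
    """Check that the code matches the seal, byte for byte."""
    return hmac.compare_digest(sign(code), signature)


update = b"if device_id == TARGET: disable_retry_limit()"
sig = sign(update)

assert signature_valid(update, sig)        # the untouched update verifies
tampered = update.replace(b"TARGET", b"ANYONE")
assert not signature_valid(tampered, sig)  # one change breaks the seal
```

Unlike this symmetric sketch, a real signature lets anyone holding the public key verify while only Apple can sign, which is precisely why the signing key is so valuable.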
Shouldn't these two steps—that the code checks for the hardware identification number of a specific phone, and that the code is cryptographically signed by Apple—ensure that the code can't be used on any phone other than the shooter's?
Ideally, yes. But in reality, things could be very different.
For one thing, an attacker might be able to tamper with the hardware identification number on an innocent user’s iPhone, changing it so that it matches that of the shooter’s phone. This hardware attack would allow the code written for the shooter’s phone to run on another user’s iPhone.
For another, there could be a bug that allows the code to run even if the signature does not validate. Someone could exploit a bug like this to change the code so that it runs on another phone. This might sound far-fetched, but exactly this kind of bug has happened in code written by Apple. It’s called the “goto fail” bug, and it broke the cryptographic code used by iPhones for network communications.
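The original “goto fail” bug was a duplicated `goto fail;` line in Apple’s C code that caused the final signature check to be skipped while still returning success. Below is a simplified Python rendering of the same control-flow mistake; the function and its parameters are invented to isolate the pattern.

```python
def verify_handshake(hash_ok: bool, signature_ok: bool) -> int:
    """Return 0 on success, nonzero on failure.

    Models the control flow of the "goto fail" bug: a duplicated early
    exit returns before the final check runs, and because err is still
    0, verification wrongly reports success.
    """
    err = 0
    if not hash_ok:
        err = -1
    if err != 0:
        return err
    return err  # <-- the duplicated exit (the bug): always taken
    if not signature_ok:  # unreachable: this check is silently skipped
        err = -1
    return err


# A handshake with a bad signature is accepted (0 means success):
assert verify_handshake(hash_ok=True, signature_ok=False) == 0
```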
There could also be a weakness in the algorithm used to create the cryptographic signature. Then the signature would validate even if the code were changed. Again, someone could exploit this to change the code so that it runs on another phone. This exact bug has also occurred in the past. Attackers were able to load Flame malware as a signed Microsoft Windows update—just like what the FBI is asking Apple to do—because they broke the security of the algorithm used to compute cryptographic signatures on Windows software updates.
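To see why the hash algorithm underneath a signature matters, the sketch below substitutes a deliberately weak toy checksum (the sum of the bytes, mod 256) for a real cryptographic hash. Two different pieces of code can share the same checksum, so a “signature” computed over one also validates the other; the Flame attackers exploited a real, far subtler weakness of this kind in the MD5 hash.

```python
def weak_digest(code: bytes) -> int:
    """Toy checksum standing in for a broken cryptographic hash."""
    return sum(code) % 256


def sign(code: bytes) -> int:
    # In a real scheme the digest would then be signed with a secret
    # key; that step is omitted to keep the sketch focused on the hash.
    return weak_digest(code)


def signature_valid(code: bytes, signature: int) -> bool:
    return weak_digest(code) == signature


legit = b"benign update AB"
malicious = b"benign update BA"  # different code, same byte sum
sig = sign(legit)

assert legit != malicious
assert signature_valid(malicious, sig)  # the forgery validates
```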
I could go on.
The point is that security engineering is really hard. Modern systems are complicated. People make mistakes. This is the whole reason we have software updates in the first place—to fix these mistakes. So by signing a new piece of software that defeats the iPhone’s passcode security, Apple is creating a whole new set of vulnerabilities that attackers could exploit to attack innocent users.
But wouldn't this code only be seen and used by Apple and the FBI? How could attackers get their hands on it?
There’s a really nice blog post by Jonathan Zdziarski, an expert in forensics. He points out that if this new software Apple is being asked to write is going to be used as a forensic tool, then for the evidence it produces to be admissible in court, the tool must be validated by many independent parties and provided to the defense. And, as we know by now, the more people who have access to sensitive information, the more likely it is to be breached.
Are there additional vulnerabilities that would be introduced, especially each time Apple might be forced to comply with such a court order in the future?
Apple would probably need to write and sign a device-specific piece of code in response to every court order.
Apart from giving increasing numbers of people access to this sensitive code, it also means that Apple’s code-signing key would have to be used frequently. That key is extremely valuable. It’s like the pharaoh’s signet ring. If the signing key is stolen or compromised, an attacker would be able to load malicious software onto any Apple phone. The attacker could then make the phone do anything they wanted: turn on the microphone and eavesdrop on conversations, track the user’s movements with its GPS, activate the camera and secretly film the user, and so on.
That’s why Apple likely has extremely robust processes to protect its signing key from theft. The few times a year that iOS software updates are released, the software is signed through what is likely a very slow and painstaking process that involves a very small number of authorized people at Apple.
Now imagine the signing key was used multiple times a month to sign device-specific code for forensic purposes, like the FBI is asking for here. This process would need to be run more frequently and involve more people, and so it’s more likely to be attacked. This is a massive attack surface for perhaps the most valuable piece of cryptographic material that Apple has—its code-signing key.
A lot of people say that Apple should comply, that it's worth some of the security risks you've raised for the FBI to be able to get access to information that could help it catch terrorists and other criminals. What do you think about the tradeoffs involved?
I think that our efforts to secure the internet are nowhere near where they need to be, and this would be a step backward at a time when we need to improve cybersecurity, not weaken it. This would weaken everyone’s security in hopes of making it easier to trap a few bad actors. But if I were a bad actor, why would I use a product that I know has extra features written in to help law enforcement? I would just go buy some other product, made elsewhere in the world, that I know the FBI can’t hack into.
Source: Boston University