The interesting case of a court demanding that Apple unlock a single iPhone has opened up a whole new can of worms in the domain of governing the computing and communication that occur over the internet. The thought that Apple would be asked to “break into” one of its own devices is at first off-putting, as it should be. After all, we keep any number of private conversations, images, and documents on every device now, considering that a phone has the same or greater capability than most computers from twenty years ago. I agree with Apple’s claim that once it builds a single piece of “malware” to break into one of its devices, that code, if it falls into the wrong hands, can be used over and over as a “master key” to break into any number of iPhones, so writing such code comes with the grave responsibility of keeping it under close wraps. Having taken a cryptography course where we learned that the best cryptographic systems are ones where the algorithm is public and only the key is private (and no key is used for a large number of transmissions), this need for physical security around the malware seems daunting. Furthermore, if Apple were to budge and write the malware, the precedent would be both a slippery slope of tech companies spending development time to serve all sorts of warrants issued under the All Writs Act and an opening of the door to government legislation of encryption schemes for big companies. I really don’t think the government needs to snoop on the phone of a dead terrorist to gather more evidence against him and his co-conspirator.
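To make that classroom principle (often called Kerckhoffs’s principle) concrete, here is a minimal sketch of my own in Python, using the open-source `cryptography` package rather than anything from the Apple case: the cipher is completely public and heavily scrutinized, and every bit of secrecy rests in the key.

```python
# Kerckhoffs's principle in practice: the algorithm (Fernet, i.e. AES-128-CBC
# plus an HMAC) is fully public; only the key is secret.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the ONLY secret in the whole system
cipher = Fernet(key)

token = cipher.encrypt(b"private conversation")
print(cipher.decrypt(token))  # b'private conversation'

# Anyone can read the Fernet spec and this source code; without `key`,
# the ciphertext remains computationally out of reach.
```

The point is that nothing about the design needs hiding, which is precisely why a secret “master key” piece of malware would be such an unusual and fragile thing to have to guard.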
Unfortunately, I can’t see any merit behind the government forcing large companies like Apple and Google to spend time building backdoors into systems they have taken so long, and put so much care into, building securely. This feels to me like the government taking the high levels of privacy we’ve achieved in computer science and turning them into a controlled substance. Requiring only the big companies that sell phones to build in backdoors doesn’t change the mathematics behind cryptography; the demand for strong encryption will simply move to a black market where developers are contracted privately to anonymously author code that provides secure channels of communication for terrorists and others scheming evil.
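As a rough illustration of why the mathematics is already out of the bag, consider how little code strong encryption takes today. This is again my own sketch with the open-source `cryptography` package; nothing in it depends on Apple, Google, or any vendor a backdoor mandate could reach.

```python
# Authenticated encryption in a handful of lines, built only on public,
# standardized math (AES-256-GCM). No big vendor required.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)   # 96-bit nonce; must never be reused with the same key

ciphertext = AESGCM(key).encrypt(nonce, b"meet at midnight", None)
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert plaintext == b"meet at midnight"
```

Anyone determined to communicate securely can assemble something like this from public libraries, which is why regulating only the mainstream products mostly inconveniences ordinary users.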
There is a good counterargument to my stance in the claim “If you’ve got nothing to hide, you’ve got nothing to fear,” because one would hope that our government would only use the data it obtains when it needs to and for the right purposes. Even if you have something a little embarrassing on your phone or laptop, one would hope that a government agency would only see it if you’re under suspicion and, if it has no relevance to the investigation, would ignore it and pretend it never saw it. However, given the NSA’s recent behavior and the fact that government agencies are made up of imperfect humans who can abuse their power, this is not the level of trust that a lot of Americans have in their government when it comes to being watched by “big brother.” Any backdoor would have to have some sort of accountability check built in, making it even harder to invent and implement.
I enjoyed the article by Benjamin Wittes, as I thought it brought up fair arguments against my stance, especially regarding the validity of the claim that the security risks computer scientists say would be introduced by built-in backdoors are “grave.” His arguments seem very persuasive at first, but they strike me as flawed. In my mind, it is still very dangerous to build a single point of failure, that is, a backdoor, into your code, because it becomes a prime target for hackers to focus on, with a very high reward. Although code bases are expanded all the time, expanding one to introduce a new single point of failure, built on fresh-rolled, untested cryptography that the government might simply pass legislation requiring, is dangerous. Unless there is a proven system for ensuring that only a person with a warrant can, but can easily, decrypt a past communication, the government can’t force a company to implement such a non-existent policy. Even if that policy did exist, a flawed implementation could be devastating: it would be more likely to be attacked, and would guard perhaps more valuable data, than any other secrets tech companies are trying to keep.
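To show where that single point of failure would sit, here is a hypothetical key-escrow sketch, entirely my own construction and not any proposed standard: every message key is wrapped once for the recipient and once for an escrow authority, so a single escrow private key retroactively opens all traffic.

```python
# Hypothetical key-escrow sketch: each message key is also wrapped under an
# escrow authority's public key. Whoever holds escrow_private can decrypt
# ALL past traffic -- a single, high-value point of failure.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def send(message: bytes):
    msg_key = Fernet.generate_key()                     # per-message key
    ciphertext = Fernet(msg_key).encrypt(message)
    for_recipient = recipient_private.public_key().encrypt(msg_key, oaep)
    for_escrow = escrow_private.public_key().encrypt(msg_key, oaep)  # the backdoor
    return ciphertext, for_recipient, for_escrow

ct, w_rcpt, w_escrow = send(b"ordinary private email")

# Normal delivery: the recipient unwraps the message key.
assert Fernet(recipient_private.decrypt(w_rcpt, oaep)).decrypt(ct) == b"ordinary private email"

# The escrow path works identically: steal escrow_private once, read everything.
assert Fernet(escrow_private.decrypt(w_escrow, oaep)).decrypt(ct) == b"ordinary private email"
```

One stolen `escrow_private` is worth more than any individual breach, which is exactly why such a key would draw the most sophisticated attackers.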
I find it peculiar that, as the Wittes article says, code could become “the world’s largest ungoverned space” if legislation governing communications over it is not put in place soon. I think it would be great if the government could catch all of the bad guys in a perfectly ethical manner all of the time, but in cases like these it can’t: the system that could make this a trustworthy process does not exist, and building it requires the not-so-trusted government to be a player. If the government wants a backdoor, it is going to have to do the math and the proofs itself and have its system scrutinized by the cryptographic public before making it a standard, because it is too great a burden to put on tech companies to comply with a backdoor standard by some fixed date, asking them either to just drop their current level of security and make their systems more vulnerable, or to come up with an entirely new algorithm by then.