I’d like to start with a cliché: if a tree falls in the forest and no one is there to hear it, does it make any sound? To most, the answer is probably “yes.” Conspiracy theorists might answer with less certainty, while philosophers will quote a bunch of people you were supposed to read in high school. Lawyers and scientists, on the other hand, will quibble over definitions. After all, what is “sound?” Is “sound” defined as vibrations in the atmosphere hitting your eardrums? If so, then the absence of eardrums makes the presence of “sound” impossible. Or do we define “sound” as the potential for eardrums to hear something, if eardrums were present? In this case the tree probably would make “sound.”
Right now I imagine the only “sound” on your mind is the pounding headache I’m instigating, so I’ll cut the semantics short. What I want to talk about is whether the 4th Amendment requires “human eyes,” (although we will be returning to our hypothetical forest eventually). I’ve mentioned several times that I think entirely computerized surveillance is normatively different, at least in a privacy sense, from human surveillance, and that this difference is important for a workable understanding of privacy in the modern world. Computers are everywhere, a part of everything, and computers like to record things. I often use the phrase “data collection” rather than “surveillance” because “data collection” sounds automated and mundane, whereas “surveillance” makes you look over your shoulder. But maybe data collection and surveillance are truly different. Would we say that a computer “data collecting” is the same as a man in a helicopter looking through your window, “surveilling?” Perhaps the man’s “human eyes” make him normatively different.
The Jewel of Denial?
I am not the only one to make this distinction. The US Government has argued on several occasions that a lack of “human eyes” should alter the 4th Amendment analysis. In response to the Snowden disclosures, several lawsuits challenging different forms of dragnet domestic surveillance were filed, (and some earlier suits were reinvigorated). In one of these cases, Jewel v. NSA, the plaintiffs are challenging a group of NSA surveillance programs utilizing “Upstream collection,” a surveillance method for collecting information in bulk from the Internet backbone (major telecom sites that handle large amounts of international Internet traffic). One of the government’s arguments defending the legality of Upstream collection is that entirely automated surveillance should be treated differently under the 4th Amendment. Although it is unclear whether the case will actually reach the 4th Amendment question, (national security cases tend to fizzle due to the “state secrets privilege” and problems with standing), the “human eyes” argument presents an interesting question in 4th Amendment jurisprudence, and deserves some attention.
To paraphrase the Government’s argument: most 4th Amendment cases on “searches” (a 4th Amendment buzzword) involved humans doing the searching, so computer searches (with no humans involved) should be treated differently. To bolster this argument, the Government points to other non-human searches where the Supreme Court has applied different standards. For instance, the Supreme Court has repeatedly said that drug dog sniffs are not 4th Amendment “searches,” and thus the police can use drug dogs in schools, airports, or during traffic stops without a warrant. Furthermore, the Court says that chemical drug tests that only identify a substance as contraband are not “searches” (although how the police obtained the substance might be treated differently). The premise in these cases is that an invasion of privacy requires a human to do the invading, and so chemical reactions and dogs cannot invade your privacy. And if a dog cannot invade your privacy, neither can a computer. Or at least that is the Government’s reading.
Dogs and Katz
I like the analogy between computers and drug dogs, but I’m not sure it will hold much precedential weight. Drug dogs are sui generis – unique under the law – so they arguably don’t represent any underlying principle that we might apply to computers. And even if there were a clear principle, the justification the Court uses is not that a dog cannot invade your privacy, (a drug dog sniffing your home is an invasion of privacy), but rather that you have “no reasonable expectation of privacy in contraband.” This line has never made much sense, (even Justice Kennedy thinks so), but it suggests that dogs can invade your privacy if they sniff for something that isn’t contraband. This certainly casts doubt on any human/non-human dichotomy. I think the principle the Court is ultimately moving towards in its drug dog cases is that a non-invasive search that only identifies contraband can be used in public spaces without a warrant. This is more nuanced than a simple “human eyes” argument, but it might still support automated computer searches.
Looking at the more traditional cases, though, the Government has an uphill climb. Putting aside the drug dog analogy, arguably the most important case is Katz v. United States. This was the landmark Supreme Court case that expanded what the 4th Amendment protected from our “persons, houses, papers, and effects,” to cover everything you reasonably expect to be private. So although Charles Katz was in a public phone booth, he reasonably expected his conversation to be private, making the Government’s warrantless wiretap a search. Katz is important because it specifically discounts any police attempts to minimize the intrusiveness of their search. It doesn’t matter if the police do everything in their power to minimize the amount of information gathered, or to minimize the number of people that information is exposed to; a search is a search, judged purely by the defendant’s reasonable expectation of privacy. As the Court famously said, “the 4th Amendment protects people, not places.”
But protects people from what, exactly? Going back to our falling tree metaphor, if the Government looks through your information, but no one is there to see it, including the Government, is your privacy invaded? Can a computer, by itself, invade your privacy?
Phoning it In
To elucidate this question, I want to look at smartphones. I think it is fair to assume that no one’s privacy is invaded by their own smartphone. And I mean literally the phone – not an app developer, or a hacker, or Apple – the phone itself. Quite to the contrary, smartphones are the modern diary, except profoundly more detailed and utterly ubiquitous. The recent case Riley v. California talked at length about how much personal information we share with our phones, and used this to support the decision to treat phones differently. Phones have become an extension of ourselves, with a staggering number of people having their phone within arm’s reach at all times. So I would ask, does your phone begin to invade your privacy if it is seized by the police? (Again, speaking strictly about the phone.) Does the person in control of the phone determine whether the phone invades your privacy? The answer has to be no. The phone never invades your privacy; it is the police that invade your privacy when they read your phone’s contents.
Using the falling tree question as an analogy, this method of thinking is akin to saying that “sound” requires an eardrum to hear it. Without an eardrum, it is just vibrations: the potential for sound. Without anyone there to see your information, your privacy isn’t invaded. I don’t want to strain the metaphor too much, but from a practical sense, trees are falling in forests all around us, and we don’t seem to hear anything. Computers are so embedded into our daily lives that most people don’t even notice them. On the contrary, we readily trade personal information for the benefits they provide. Rather than claim that people “just don’t care about privacy,” however, I would argue that this isn’t what people mean when they say “privacy.” My relationship with Google, or Facebook, or the NSA, is substantially different than my relationship with my family, or neighbors, or friends. My definition of privacy changes depending on who it is privacy from. With regard to the former category, I’m less concerned with what they know about me, (or in this case, what their computers know), than with what they do with that information. We probably don’t really care if that falling tree makes sound; we care if it knocks out our electricity.
This is not to say that we shouldn’t be concerned with information that is processed by these larger institutions, but rather that the standards we apply should reflect the different privacy concerns they raise. Automated scanning of Internet traffic strikes me as a less substantial privacy invasion than a host of police activities for which we don’t require warrants, but that doesn’t mean it is nothing. So even if a lack of “human eyes” deserves a different legal standard, this doesn’t necessarily mean that anything is fair game as long as it is automated; it simply means that the body of law that has developed for “human eyes” searches shouldn’t necessarily apply to those without “human eyes.” And in this latter category, rather than emphasize restricting collection, something I think is not intrinsically harmful, we should emphasize restricting data use.
What Else Don’t You See?
Naturally I see the problems with this line of thinking. In my hypothetical world, I can easily separate data collection and analysis from data use and abuse. In practice, this is more problematic. The falling tree still creates the potential for sound, and computerized data collection certainly creates the potential for privacy invasions. Such an approach necessitates robust transparency and oversight, two things which we currently lack. The “state secrets privilege” I alluded to earlier is a perfect example of how our current national security apparatus makes transparency difficult, and while I am not particularly concerned with the automated data collection at issue in Jewel, I am much more concerned with the fact that it was not publicly known.
I also don’t mean to suggest that “human eyes” is the only or even the most important argument in the larger domestic surveillance debate. There are several lawsuits challenging several different programs and methods of surveillance, and I doubt that they will rise or fall as one. Despite my apparent support for it, I actually think the “human eyes” argument is one of the weaker cases for the Government. It would be a substantial departure from established law, and courts are typically reluctant to make decisions with such far-reaching impact. Rather, I suspect that the importance of these issues means that courts will continue to search for jurisdictional “outs” to avoid answering the question. While we might bemoan the application of the “state secrets privilege,” some issues really are better determined by the other branches of Government.
So while I doubt that the domestic surveillance debate will be resolved by a court, I do think that the political process should embrace a more nuanced view of automated surveillance. Computers have the potential to be a drug sniffing dog for all forms of illegal activity, or at least any illegal activity that involves computers, and utilizing them as such holds the potential to reap the benefits of domestic surveillance while simultaneously providing greater privacy protections than we currently have. While I certainly have my concerns, and I acknowledge that my opinion is not widely shared, overall I think such a system would be a net benefit for society.
Until next time.