I want to talk about Stingrays. (Yes, I’ve decided to give this whole law thing a rest and focus on my true passion: marine biology!) Alas, the Stingray I’m referring to is not an arrow-tailed aquatic leviathan, but a secretive cellphone surveillance device used by law enforcement to trick our phones into giving away our precious data! The Stingray has been (as you’ll see, rather fittingly) floating outside my field of view for some time now: I’ve been vaguely aware of it, and generally understood the concept, but until now I haven’t given it a hard look. Recently, however, FBI planes believed to be carrying Stingrays have made the media rounds, so I’ve decided to give some thought to their legality, and to some of the broader implications this type of technology raises.
To start: the tech. The Stingray is essentially a device that masquerades as a cell phone tower and tricks cellphones into connecting to it instead of to an actual tower. (The technical term most often used is “IMSI catcher,” although I think defining the technology broadly is more useful.) Once a phone is caught in the Stingray’s clutches, the Stingray can do some very cool (and very sketchy) things, ranging from simply identifying individual phones to conducting man-in-the-middle attacks. On the surface, Stingrays look suspiciously similar to Evil Twin Wi-Fi networks (basically fake public Wi-Fi hotspots that trick people into connecting to them), and so my assessment of their legality comes with an immediately elevated eyebrow.
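To make the trick concrete, here is a toy sketch (in Python, with entirely hypothetical tower names and signal values) of the behavior an IMSI catcher is widely reported to exploit: a phone that simply camps on whichever nearby base station broadcasts the strongest signal will happily pick a fake tower that out-broadcasts the real ones. This is a simplification for illustration, not an implementation of real cellular attachment procedures.

```python
# Toy model: a phone choosing among nearby "towers" purely by signal
# strength, which is the naive behavior a fake tower can exploit.
# All names and dBm values here are made up for illustration.
from dataclasses import dataclass

@dataclass
class Tower:
    name: str
    signal_dbm: int    # received signal strength; higher (closer to 0) = stronger
    legitimate: bool

def select_tower(towers):
    """Mimic a naive phone: camp on whichever tower has the strongest signal."""
    return max(towers, key=lambda t: t.signal_dbm)

towers = [
    Tower("carrier-tower-A", -85, legitimate=True),
    Tower("carrier-tower-B", -95, legitimate=True),
    Tower("stingray", -60, legitimate=False),   # fake tower, broadcasting loudly
]

chosen = select_tower(towers)
print(chosen.name)   # prints "stingray"
```

Real phones weigh more than raw signal strength when selecting a cell, but the core vulnerability (trusting whichever tower looks best) survives even in this simplification.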
With regard to direct sources on the Stingray’s legality, however, there is relatively little to go on. Despite similar devices popping up in court as early as 1995, the government is particularly tight-lipped about the details, claiming that revealing too much would hinder law enforcement, and court cases addressing them are few. When pressed, the government seems to treat these devices as analogous to pen registers (an old-school method for tracking the numbers a phone dials and receives), which are regulated by the Electronic Communications Privacy Act (ECPA). Pen registers are a low bar for law enforcement, but at least there is a bar. Yet some have suggested that Stingrays are not even covered by that statute, meaning that police usage is currently completely unregulated, at least outside of the police’s self-imposed restrictions. (I should note that similar technologies are used and regulated overseas, as well as in the private sector among surveillance enthusiasts, but not knowing the specifics ultimately makes the legal analysis tricky, regardless.)
Access and Data
From a regulatory perspective, there are two levels of privacy protection to consider: protection of the phone itself against “unauthorized access” (i.e., hacking), and protection of the data that the phone transmits. While often lumped together into a generalized “privacy,” these rights are quite distinct when you consider their sources. “Unauthorized access” derives from property law and the right to physical privacy, which is extremely robust, particularly against law enforcement. Physical privacy includes the right not to have your body touched, the right to exclude others from your home, and so forth. These are old rights, and still very powerful.
The protection of data, however, is an information privacy right. Information privacy rights are much newer, much narrower, and subject to the powerful competing right of free speech. In the United States, data must be specifically protected (e.g., health information by HIPAA; communications by ECPA), and these protections tend to be limited. In practice, relatively few types of data have overt protection, and notably absent from that list is location data, at least at the federal level. This reflects our country’s strong emphasis on free speech and the free flow of information, particularly when that information is shared with third parties (like Comcast or Google) or “knowingly exposed to the public” (like our public movements). When information is shared with others, we must consider their free speech rights too, and sharing information is typically assumed to reflect a diminished expectation of privacy in that information.
I bring up this access/data distinction because the challenge faced by contemporary surveillance cases is that they tend to rely on data rights, the weaker of the two, whereas I suspect that the Stingray might violate access rights, making the case against it much stronger. (See also United States v. Jones.) Based on my understanding of the technology, the Stingray is suspiciously close to committing an “unauthorized access” under the Computer Fraud and Abuse Act (CFAA). The CFAA is basically the federal anti-hacking statute, and while “access” is left undefined, it is designed to encompass as much activity as possible, so I have little doubt that it could apply to the Stingray; the critical detail is how the Stingray interacts with your phone.
I envision three scenarios:

1. The Net: the Stingray is a passive net, collecting the encrypted transmissions that your phone actively emits.
2. The Hacker: the Stingray actively seizes control of your phone, forcing it to connect to the Stingray or to hand over data.
3. The Trickster: the Stingray uses its knowledge of your phone’s algorithms to trick your phone into connecting to it.

While all are potentially troubling, I suspect the first scenario is probably legal and the second is probably illegal; the most likely, and most difficult, scenario is the third. (And recall this is purely on the access question. The contents of our communications are also protected by information privacy laws like ECPA.)
The difficulty with my “Trickster” scenario is that it skirts the traditional active/passive dichotomy that makes the access question easy. In the “Hacker” scenario, the “unauthorized access” was active; in the “Net” scenario, the data collection was passive. Easy. But the “Trickster” uses a bit of both. By exploiting its knowledge of how the phone’s algorithms operate, the Trickster actively makes itself look like something else, and then passively waits for the phone to connect to it. It would be like building a fake Post Office blue mailbox and then waiting for people to drop off private letters. That’s not really a question of physical privacy; it’s fraud. And despite its name, the Computer Fraud and Abuse Act doesn’t have a provision prohibiting fraud without access.
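The Trickster’s two-step pattern can be sketched in the same toy style (all names and identifiers below are hypothetical): one active step, disguising itself as something trustworthy, followed by a purely passive step, recording whatever connects of its own accord.

```python
# Toy sketch of the "Trickster" pattern: active disguise, passive collection.
# Nothing here models real cellular protocols; names are illustrative only.
class TricksterTower:
    def __init__(self, broadcast_name):
        self.broadcast_name = broadcast_name   # the disguise it will wear
        self.collected = []                    # whatever wanders in

    def advertise(self):
        # Active step: mimic a legitimate tower's identity.
        return f"I am {self.broadcast_name}"

    def accept(self, phone_id):
        # Passive step: record whoever connects; never reach into the phone.
        self.collected.append(phone_id)

tower = TricksterTower("carrier-tower-A")
tower.advertise()                      # actively looks like the real thing
tower.accept("IMSI-310150123456789")   # then passively waits for visitors
print(tower.collected)
```

The legal puzzle in the text maps onto the two methods: `advertise` is the act of building the fake mailbox, while `accept` never touches the phone at all.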
Therefore, a lot rides on whether the Stingray engages in behavior that could be deemed an “access.” The difficulty with computers is that communication between devices is extremely common, and it’s unclear whether a deceptive police communication that elicits information qualifies as an “unauthorized access.” For a physical analogy, imagine that a gang has a secret handshake, and an undercover cop does the handshake to prove his or her authenticity. That physical contact was “authorized,” and therefore legal, even though it probably wouldn’t have been authorized had the suspect known the officer’s true identity. We allow for a certain degree of deception by police officers engaged in investigative work, whether that means going undercover or lying to suspects to elicit confessions.
Since the rules are different for police officers as compared to normal citizens, it is not immediately clear to me that using cyber-deception to induce a suspect’s phone to transmit private information is necessarily different from an undercover officer using deception to get the suspect to reveal incriminating information. The problem on the cyber side is that most “hacks” could be characterized as deceiving someone else’s computer into doing what you want (e.g., code injection), and it seems unlikely that anything characterized as a “hack” would be permissible for the police without a warrant. So perhaps the better question is:
What is a Police Hack?
I have a lot of thoughts about this question, but they aren’t quite congealed into a cohesive framework. Putting aside issues of entrapment, which are separate, I suspect that the suspect’s consent is an important part of traditional police deception: an undercover officer is allowed to enter a suspect’s home as long as the suspect consents, notwithstanding the deception. But the police cannot simply put on a disguise and break into your house, because there you didn’t consent. Likewise if a police officer pretends to be someone else via text message (think To Catch a Predator), and thereby elicits incriminating information, that too should be ok, again because the suspect consented. So the police tricking your phone into giving up information might be treated differently because you are not given a meaningful opportunity to consent. Your phone cannot consent for you.
But I’m not sure that consent tells the whole story. Suppose the police wrote a computer program to monitor all of your text messages and disguised it as a software update from Apple. We agree to these updates all the time, and no one actually reads the details. So the police might technically get consent, yet the behavior suddenly seems unlawful. And a purely consent-focused framework would overlook the fact that most networked computer interactions operate completely independently of the user. There is some evidence that the Stingray writes metadata onto the phones it surveils, but writing metadata is so common that I doubt it would qualify as an unauthorized access, regardless of consent. I don’t micromanage these details, and my lack of consent shouldn’t matter if my consent is never normally required.
So maybe the distinction for these non-consent activities should be how active or passive the police activity is: if the police actively inject code into the suspect’s computer, that is illegal, but if they passively create a decoy, which the suspect then connects to, that is legal. (This wouldn’t clarify the fake Apple update, but perhaps there are some police activities where the suspect’s consent simply isn’t a valid defense.)
A debate similar to this active/passive dichotomy is ongoing in the security community regarding just how far cybersecurity measures can go in response to a hack, ranging from pure intrusion detection, to more aggressive active defense, to the retaliatory “hackback,” but it isn’t clear how the police fit into this debate. Regardless, it is fairly well established that decoys (called honeypots) are perfectly legal, lending credence to the legality of a “Net”-scenario Stingray. Yet placing these decoys on your own private network is substantially different from placing cyber-landmines in public places designed to be triggered, and the Stingray seems more akin to the latter. And without more details on exactly what the Stingray does and how it’s used, we can’t say for certain whether it is, or should be, legal.
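For contrast, a honeypot-style decoy really is passive end to end. This minimal sketch, assuming nothing more than a plain TCP listener on localhost (real honeypots are far more elaborate), only logs whoever chooses to connect and never reaches out to anyone:

```python
# Minimal honeypot-style decoy: listen, log the visitor, do nothing back.
import socket
import threading

def run_decoy(host="127.0.0.1"):
    """Start a one-shot listener; return its port, its log, and its thread."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, addr = srv.accept()
        log.append(addr)         # purely passive: record who connected
        conn.close()
        srv.close()

    t = threading.Thread(target=serve)
    t.start()
    return port, log, t

# A simulated "visitor" connecting to the decoy of its own accord:
port, log, t = run_decoy()
client = socket.create_connection(("127.0.0.1", port))
client.close()
t.join()
print(len(log))   # prints 1
```

The design point is the asymmetry: the decoy initiates nothing, so every entry in its log represents a connection the other side chose to make.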
Privacy Through Algorithms
What ultimately makes the Stingray challenging is that it asks us to determine an individual’s expectation of privacy in technology that they do not understand, and often have no control over. I’ve discussed in previous posts the importance of understanding privacy metrics: we understand that loud conversations held in public spaces aren’t private because the metric – sound – is one we all intuitively understand. Understanding these metrics allows us to take steps to protect our privacy, either by whispering or moving to a more private location. Based on this assumed understanding of privacy metrics, courts typically assume that what we “knowingly expose to the public” is not private.
But how should we treat what we “unknowingly expose to the public”? The Supreme Court addressed this to some degree in Kyllo v. United States (discussing thermal imaging technology), which limited the police’s warrantless use of technology “not in general public use.” But that case relied heavily on the fact that the surveillance was directed at the defendant’s home, arguably the thing most strongly protected by the 4th Amendment, and it nonetheless emphasized the type of information obtained. Kyllo could be interpreted as a broad protection against new technologies, and therefore a type of “access” protection, but it could also be viewed as essentially protecting data about the inside of the home, and therefore a “data” protection. And while the latter is arguably narrower, it is also more robust. Tying privacy rights to technological availability essentially guarantees a diminution of rights over time, whereas the protection of the home appears to be timeless.
There is plenty more to talk about with the Stingray, but this post is running long. The government’s secrecy about the details is troubling in its own right; the state of cellphone encryption deserves a post unto itself; and the Stingray is ultimately just another technological development forcing us to ask hard questions about what our civil liberties really mean. Invalidating the Stingray on what amounts to a technicality would do little to further that debate, but it certainly gives law-minded folks an excuse to talk for far too long.
Until next time.