I like to think that I am open to changing my mind. Despite a tendency to take hard lines for the sake of argument, I try to maintain a healthy skepticism of what I argue. Belief in our own brilliance is often more blinding than illuminating, and I will forever dread the thought of being unrelentingly wrong. I mention this because although I generally think that my methodology towards privacy is preferable, there are situations where my approach strikes many as inadequate. When I discuss issues like doxing and revenge porn with non-lawyers, especially women, they are struck by a sense of injustice. The phrase that frequently arises is “once it’s out there,” articulating a sense of powerlessness at the extent of our society’s valuation of the free flow of information above personal privacy. And while I’m not convinced that these problems warrant a complete reevaluation of my world-view, the existence of such strong opposing beliefs certainly warrants some examination.
For some background, my general approach to privacy is fairly hands-off, preferring to regulate concrete harms rather than impose limitations on data collection, data sharing, etc. The underlying theme of such an approach is that we want to identify the activities that are truly harmful, while allowing businesses to innovate as much as possible. So while many people dislike Facebook (or any other company) collecting data on our online browsing behavior, I would argue that this isn’t really harmful to the user, and thus shouldn’t be the focus of regulation. What companies do with the data, however, could be harmful, and that is where the law should focus. There are some caveats, naturally, the most important of which is adequate transparency, in both the public and private sectors. But one potentially problematic assumption of this approach is that data collection and data sharing are not normally harmful in themselves, and I think therein lies the difficulty with things like revenge porn.
The More You Know
Among the myriad unfinished ideas filling my Google Docs, one is titled “When mere knowledge is a harm.” Are there some things so private that simply having someone else know them hurts you? This idea, while simple enough to understand, presents serious problems for our current understanding of free speech, which posits that speech is usually not inherently harmful. People may use information to harm you through discrimination, social backlash, or countless other ways, but the information itself is technically harmless. This is different from information that we keep private because we associate its knowledge with harmful behavior, like our credit card number. We assume that disclosure is very likely to lead to harmful behavior, so we choose to keep it private (see also: doxing), but the disclosure itself is not harmful.
Information deemed inherently harmful seems to occupy what I would call the “none of your business” category of privacy, which is determined largely by societal norms dictating what we do and don’t talk about. Our bathroom habits, our religious beliefs, and most prominently: sex. Yes, most of this “inherently harmful information” seems to be about or closely related to our sexual habits. Nude photographs, illicit internet browsing history, and the topic of this post, revenge porn: these are the types of information we cannot bear another human being to see, at least without our consent.
The area of sexual privacy has received increasing media attention in recent months, starting with the massive leak of celebrity nude photos in August, up to just this past week with the FTC enforcement action against the owner of a major revenge porn website. These issues have proved particularly difficult for privacy professionals, primarily because “once it’s out there,” information (including photographs and videos) is very difficult to remove from the internet. Indeed, the FTC case was highlighted for its novel approach to the issue, leading some to call it “the end of revenge porn.” And while I don’t think the ruling deserves that much credit, it is certainly a step in the right direction.
For those who don’t know, revenge porn is essentially the publishing of nude photographs or sexually explicit videos of ex-girlfriends, spouses, or lovers (they are overwhelmingly, though not exclusively, women), without their consent, often with the victim’s name, address, and place of work attached. The typical narrative is one where the couple assumed the content would be kept private, but when the relationship ended, one party took out their frustration against the other by posting the content online, typically through sites specifically for that purpose. With the increasing prevalence of sexting, the ease of video recording with smartphones, and numerous outlets for this type of material, it is not surprising that revenge porn has become a substantial problem.
There are numerous difficulties with revenge porn, ranging from problems of consent to implications for gender politics to questions of free speech. Yet the largest problem is arguably the practical difficulty of restricting the flow of information on the internet: a highly distributed, largely anonymous information-sharing superhighway. Even assuming you have legal recourse to force sites to remove content (which I’ll get to), the process becomes a game of cyber Whac-A-Mole, where you are forced to constantly monitor a nearly limitless number of sites for uploads and reuploads. You may be able to effectively police the major content distributors, but the internet is awash with small-scale websites and informal public servers on which to post illegal material. Bad actors may even utilize Tor and its hidden services, making removal of the content almost impossible. And even then, you can never truly hope to remove the copies that may be stored on an individual’s personal computer.
But before you can remove the content, you must show that it is removable. While this sounds simple enough, the problem is that the website hosting the content isn’t actually the one that violated your rights. There are numerous statutes you can use to go after the person who actually disclosed this private information, but it is much more difficult to restrict those who received the information through legitimate means. Internet service providers are generally immune from liability for content posted by their users, and any statute that attempts to regulate revenge porn will necessarily implicate free speech. (The few state statutes that have emerged have been criticized as overbroad and vague, a death sentence under the First Amendment.) To better understand this distinction, imagine you are simply dealing with a fact. You may sue someone for disclosing the fact and breaching your trust, but it is much harder to sue the person they told and attempt to stop them from knowing the thing they now know. The website may remove the content on its own terms, but forcing it to do so can be challenging.
Rather surprisingly, one of the best solutions to this problem is through copyright law. Copyright law can be a bit complicated, and I am no expert, but the general idea is that the person who takes the photo or shoots the video automatically holds the copyright to that content. So the person who takes a selfie actually is the copyright holder of that picture, which empowers them with the traditional copyright infringement remedies. Any website that publishes these photos or videos without the copyright holder’s consent is in violation of US copyright law, and the copyright holder can send a DMCA takedown notice to have the website remove the photographs. While this is an imperfect solution, the DMCA takedown is a powerful remedy, and the expansion of intellectual property rights in private photos may offer a more comprehensive solution to this problem.
Yet even if a broad remedy were implemented, this simply raises the question: what was consented to? Consent in the privacy context is problematic because people simply don’t think about the privacy implications of their actions. Privacy agreements have become something of a punchline in popular culture, but their complexity stems from the difficulty of articulating exactly when, where, and with whom information can be shared. And while these agreements may be tedious, the alternative is to leave these issues to unspoken assumptions and situational implications. Absent a clear agreement or express terms, it is very difficult to determine what was allowable, especially considering that the people making these agreements often haven’t thought through the issues themselves.
And perhaps most damningly, the remedies that are available can never adequately compensate for the harm that is inflicted. Even if the video is completely removed, we cannot erase the memories of those who saw it. Money is nice, but there comes a point where irreparable harms really are irreparable, and no post-hoc remedy will suffice.
Oh no you don’t
Despite the problems discussed above, I think the underlying issue is really one of deterrence, or rather the lack thereof. Put simply, technology makes certain offenses too easy to commit, and the normal deterrent effect of our post-hoc laws is rendered ineffective. After all, privacy-related offenses have always dealt with issues of consent, free speech, and inadequate remedies; the difference now is that these crimes can be committed essentially on a whim. Without the practical difficulty of carrying out the offense, it is simply too easy for people to disregard the consequences in a moment of passion. And as I mentioned above, “once it’s out there . . .”
This problem of deterrence is most likely tied to the online disinhibition effect. I’ve discussed this before, but for a number of reasons, we all tend to act differently online. Our actions may seem less consequential, or the ease of perpetration may lead to a “screw it” effect (the late-night Amazon purchase; the ill-considered text), or we may think we are less likely to be caught. Regardless of the exact reasons, the impact is clear: people act more cruelly online, and this has led to the widespread commission of privacy offenses that were previously comparatively minor problems.
So how do we deal with this problem? One approach would be to improve our cultural education around online actions and to create stronger social norms reinforcing good behavior and discouraging bad behavior. Schools are increasingly teaching children from a young age to consider how easily photos can be shared across the internet, and a generation raised with cell phones will likely be less inclined to view them as detached from normal social behavior. We need people to intuitively consider their online actions to be just as consequential as their offline ones, if not more so. Yet I’m unsure a purely societal response will truly be enough, as the existence of this technology means the potential for abuse is always looming. We may understand that it is bad, but that doesn’t necessarily mean people won’t still do it.
As far as legal remedies go, the issue is less clear. We can always argue for stricter punishments, greater enforcement, and so forth, to increase the deterrent effect, but at some point these are likely to have diminishing returns. Even the threat of the death penalty is insufficient to deter individuals in some cases, and in all likelihood the perpetrators of these crimes are not thinking through the consequences of their actions.
One solution I alluded to above is to give victims of revenge porn and similar privacy offenses an automatic copyright in the content that was illegally distributed. This would empower victims to use the already robust content control mechanisms under copyright law to protect their privacy. While this would still subject them to the problem of cyber Whac-A-Mole, it would at least empower victims with a sense of autonomy and societal justice. But to a large degree this is already occurring. Most mainstream websites don’t allow this type of content per their terms and conditions, and will remove infringing content and ban or suspend violating users. While the harm is not completely removed, it is certainly minimized. Furthermore, the FTC is taking steps to make profiting from such activity illegal, even if the website wasn’t involved in the initial disclosure. If the website encourages, or perhaps even merely allows, bad behavior and profits through advertising, this would arguably be sufficient for the FTC to initiate an enforcement action against the website, as was the case with Craig Brittain.
But these are hardly complete solutions. Copyright law still has large exceptions for parody and fair use, both of which could theoretically allow the disclosure of the content without a DMCA remedy. Attempts to regulate revenge porn websites require some nexus with commerce, so websites that don’t generate any profit would theoretically be exempt. And none of the proposed remedies can erase the public’s memory, even if the images and videos are no longer easily accessible.
I often wonder if the problems raised in these “none of your business” categories of privacy are really fleeting ones. Do our social phobias of sex, religion, human excrement, etc., stem from something substantively different, or will they slowly fade into obscurity? When I try to look at them objectively (or as objectively as I can), they often strike me as holdovers from our more primitive societal origins. At some point, a picture really is just a picture, and if society wouldn’t get riled up over it, neither would we.
Because remember, as Sam the Eagle discovered: underneath our clothes, we are all walking around naked!