Vive is a wearable health tracker in the form of a wristband designed to track intoxication through a combination of biomarkers and social media. Its logic is twofold: the wristband tracks markers of dehydration and intoxication directly through transdermal sensors, and the device syncs with social media to alert friends if you are unresponsive to occasional vibrations. Vive therefore facilitates the safer consumption of alcohol, warning you and your friends if it appears you have been overserved.
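The check-in logic described above can be sketched in a few lines. This is purely a hypothetical reconstruction: the threshold, attempt count, and function names are my assumptions, not Vive's actual implementation.

```python
def check_in(bac, respond, max_attempts=3):
    """Hypothetical sketch of Vive's alert logic: if the wearer appears
    intoxicated, prompt them a few times; if no response, alert friends."""
    if bac < 0.08:  # assumed threshold; only check in when intoxicated
        return "ok"
    for _ in range(max_attempts):
        # each iteration represents one vibration prompt;
        # respond() polls whether the wearer acknowledged it
        if respond():
            return "ok"
    return "alert_friends"  # sync with social media to notify chosen contacts
```

The interesting design question is the last line: the device escalates from a private nudge to third-party disclosure automatically, which is exactly what makes the data-sharing issues discussed below so pointed.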
I love this device, partly because I think it’s a pretty cool idea, but mostly because it so perfectly encapsulates all of the problems I’ve been prognosticating for wearable health devices. Specifically, when wearable health trackers start giving accurate and important health information, what are the implications of all of that data?
Drunk Driving and Drug Testing
I’ll start with the most obvious problem: police subpoenas for drunk driving. This problem was hinted at with things like the Fitbit, but Vive brings it right to the forefront. The Vive is a full-time blood alcohol content (BAC) monitoring device, complete with timestamps and third party sharing: basically, a drunk driver’s nightmare. Under traditional 4th Amendment jurisprudence, the police might not even need a warrant to access these records, thanks to the third party doctrine. Even if Vive promises to respect your privacy and not share your data, the police can request those records and Vive will most likely have to comply. Putting aside issues of accuracy (which will presumably be resolved as the technology advances), the Vive raises serious questions about our expectation of privacy in health data. While step counters and heart rate monitors raise minor issues, BAC can put people in jail.
So do we have an expectation of privacy in our blood alcohol content? These discussions are difficult because you have to separate the information acquired from the means of acquiring it. For instance, the current gold standard for BAC monitoring is a blood test, which of course requires drawing blood. Drawing blood is a serious invasion of bodily autonomy, and as such requires a warrant. (Police can’t even say our liver creates an exigent circumstance.) Less invasive methods, like the breathalyzer, typically require a lower level of suspicion, but are hampered by the requirement that the suspect exhale into the machine, something that cannot be forced, and often cannot be accomplished (say, if the suspect is really drunk). But if the suspect is wearing a Vive, they are effectively testing themselves: is the police examining the Vive really a substantial invasion of privacy? You are not being forced to take the test; you are being forced to provide the information. Savvy readers may raise the 5th Amendment right against self-incrimination, but that right is limited to statements, and typically doesn’t apply to computers and electronic devices. (However, whether they can compel you to provide a password is debated.) The police may not be able to compel the information based on no suspicion, but on reasonable suspicion or probable cause, it certainly seems likely.
I think this would probably be a good thing. Drunk driving is enabled by our inability to accurately perceive just how impaired we are, and having an unambiguous and permanent record of our level of intoxication can only serve to deter potential drunk drivers. People might make silly arguments against this involving “freedom,” but there’s no freedom to break the law simply because you don’t know how drunk you are.
But let’s take it one step further. If the Vive knows how drunk you are and is connected to the internet, why not sync it with your car? Rather than let a knowingly drunk person drive anyway, we could simply make the car not start. (I’m assuming it can distinguish the intoxicated Vive wearer from his/her sober friend.) Several states already use a similar technology for convicted drunk drivers to ensure they aren’t repeat offenders, but these systems are typically unwieldy and a major inconvenience. (For instance, ignition interlock essentially requires you to pass a breathalyzer test before the ignition will operate.) But if it’s just a wrist sensor, why not make it mandatory? Or what if we put the sensor on the steering wheel? There are obvious quibbles: what if you’re wearing gloves, the potential for glitches, etc., but they don’t undermine the principle. Assuming the BAC tests are reliable and we remove all inconvenience, is there any reason not to make it mandatory? Do you have a privacy right to drive your car without being drug tested, no matter how minimal the test may be?
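A minimal sketch of the car-sync idea, under stated assumptions: the car polls the identified driver's wristband BAC before enabling ignition. The 0.08 threshold reflects the typical US per se limit; the function names and the fail-safe behavior on sensor error are hypothetical design choices, not any existing interlock's API.

```python
LEGAL_LIMIT = 0.08  # typical US per se BAC limit for drivers (assumed here)

def ignition_allowed(driver_bac, sensor_ok=True):
    """Allow the car to start only if the identified driver's BAC
    reading is valid and below the legal limit."""
    if not sensor_ok:
        # the "glitches and gloves" quibble: a missing or faulty reading
        # has to resolve somehow; this sketch fails safe and refuses to start
        return False
    return driver_bac < LEGAL_LIMIT
```

Even this toy version surfaces the real policy question: the hard part isn't the comparison against a threshold, it's deciding what happens when the reading is absent or untrusted.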
I acknowledge that I tend to be less privacy protective on issues like these, but I have never understood the argument against drug tests per se. Surely you must balance the invasiveness of the test against the asserted government interest. So to draw blood (everyone hates needles), the government must meet a high burden. Even peeing in a cup is a minor hassle. But for completely noninvasive tests, there should be no objection so long as there is a valid government interest (say, preventing drunk driving). There is always the looming threat of discriminatory requirements (e.g., drug testing as a precondition for welfare benefits but not Wall Street bailouts) or discriminatory enforcement, but these are problems of application, and don’t address the underlying issue. And while these discussions always stir rhetoric forecasting the end of freedom or the rise of totalitarianism, that is pure alarmism.
I doubt this view will take hold, but I’m not sure it’s for any principled reason. Maybe we have an inherent desire to be able to break the rules: a right to get away with small crimes. A technical solution feels unfair, even though the only freedom it would take away is one we technically never had. And I’m sure it partly stems from an inherent distrust of the government and the belief that we can never control abusive practices. The irony in this position is that often the proposed check on government power is to increase surveillance in the other direction, such as requiring dash-cams for police officers. Personally, I tend to favor both. We live in the information age; restricting information gathering strikes me as swimming against the current.
Before moving on to the broader implications of health data tracking, I want to briefly peer into the world of social media. After all, Vive is specifically designed to share your level of intoxication with friends. The extent to which Vive will communicate this is a bit unclear, but it seems to be limited to a select group of trustworthy friends, presumably within close proximity. Yet when you extend control to users, inevitably some people will share more than they probably should. And at its worst, this could turn into a homing beacon identifying the drunkest people at a given party/bar/club. The issues this raises for consent, sexual predators, and date rape practically write themselves, so I won’t elaborate. I don’t know if this is substantially different from drunk status updates, drunk texts, and even just drunk body language, but somehow having a quantifiable number attached to someone’s level of intoxication makes it much more real.
Beyond Blood Alcohol Content
Although BAC tracking presents the most acute problem, biomarker tracking generally raises many similar issues, and the practice is becoming increasingly common. Personal health trackers are all the rage among fitness enthusiasts, the military foresees using health monitoring devices on soldiers to identify nutrient deficiencies (which can then be 3D printed into their food), and smartphones are monitoring sleep patterns to identify sleep disturbances. Even medical device manufacturers are increasingly networking their products à la the Internet of Things to transmit performance logs from within the patient, from pacemakers to insulin pumps. If these also tracked relevant biomarkers, this data would be a boon to the health profession in general. Big data is already a medical research powerhouse, and supplementing this with continuous health information for a large minority of the population could yield unimaginable benefits.
And for those who are skeptical of the growth potential of personal health tracking, I’d suggest looking into the quantified self movement. Among health and fitness enthusiasts, data about biomarkers is the newest trend. The Fitbit and Nike+ are actually fairly mainstream, with more advanced technologies employed by professional athletes to optimize athletic performance and recovery. Even among the purely health focused, data is oddly addictive, with blood sugar meters, ketonuria strips, and activity trackers as increasingly common diet aids. Personal health trackers may currently be limited to gross measurements like heart rate, blood sugar, and now BAC, but their potential is virtually unlimited, and increasing demand for data will ensure that more biomarkers can and will be tracked.
Yet what are the implications of this data? Of course there are issues of de-identification, re-identification, and data sharing generally, but that’s just the tip of the iceberg. Although we may not like to admit it, most of our actions are influenced by physiological processes, and awareness of these processes can shape how we respond to human behavior. If someone is attempting to mitigate a charge of murder to manslaughter with a defense akin to “I wasn’t in control of my actions,” surely there are biomarkers we could track that would be relevant to that discussion. A spike in dopamine, a rush of adrenaline, these would certainly suggest the suspect wasn’t in complete control of their faculties. I don’t mean to suggest that we convict or acquit solely upon biomarkers, but a constant stream of data about the defendant’s physiological state would certainly be persuasive evidence.
Or suppose we were able to use this data to accurately predict adverse health events hours prior to their occurrence, purely through monitoring biomarkers. Would this information impose a duty upon the data holder to inform the customer? Or the customer’s doctor? Would it impose a duty to do the analytics, knowing its potential benefits? We obviously could ask for the customer’s consent, but is there any good reason not to simply require it? The costs saved through preventative medicine would surely cover any cost of data retention and analytics. And would providing this information constitute the unlicensed practice of medicine? Companies would naturally be reluctant to impose this potential liability nightmare upon themselves, but the trend lines are clear, and it’s only a matter of time before the issue is on their doorstep.
I pose so many questions because health data has weighty concerns on both sides. Diagnoses of serious illnesses can carry the threat of social stigma, both real and perceived, and maintaining privacy in this area is often particularly important. Yet I also think most agree that some degree of privacy invasion is justified if it saves the patient’s life. But rarely are the issues so clear cut, and more often the data shared only minimally affects the patient’s privacy, while providing minimal immediate benefit. Crafting rules that are tuned to this gradation is often frustrating, compounded as always by the broad spectrum of personal views on privacy.
I’m interested to see where the trend of health tracking leads. References to Brave New World will inevitably arise (a welcome relief from the usual Orwellian ones) as the potential for manipulating biomarkers is exploited to alter human behavior. And if we can change behavior by changing biology, what broader implications does this carry? Snickers already coined “you’re not you when you’re hungry,” and that may be more prescient than they realize. Our conception of “self” blurs when discussed in terms of physiology, and increasing awareness of the impact of physiology may call into question fundamental notions of individual autonomy. But that’s a subject for another day.