45 Forsyth Street
34 Cargill Hall
Boston, MA 02115
ATTN: Woodrow Hartzog, 120 Knowles
360 Huntington Avenue
Boston, MA 02115
- Human-computer interaction
- Privacy and surveillance
- PhD in Mass Communication, University of North Carolina at Chapel Hill
- LLM in Intellectual Property, The George Washington University Law School
- JD, Samford University’s Cumberland School of Law
- BA, Samford University
Woodrow Hartzog holds a joint appointment with the School of Law and the Khoury College of Computer Sciences. His research focuses on modern privacy and data protection problems. He tries to understand the rules and ethics for personal data processing, surveillance, and media. He is currently working on projects in three main areas: (1) privacy, surveillance, and mediated social interaction; (2) data protection and data security; and (3) robotics and automated technologies in everyday life.
Professor Hartzog’s work has been published in numerous scholarly publications, such as the Yale Law Journal, Columbia Law Review, California Law Review, and Michigan Law Review, and in popular national publications, such as The New York Times, The Washington Post, The Guardian, Wired, BBC, CNN, Bloomberg, New Scientist, Slate, The Atlantic, and The Nation. His book, Privacy’s Blueprint: The Battle to Control the Design of New Technologies, was published in 2018 by Harvard University Press.
Professor Hartzog has testified three times before Congress on data protection and data security issues. His work has won numerous awards, including the International Association of Privacy Professionals Scholarship award. He is a four-time recipient of the Future of Privacy Forum’s “Privacy Papers for Policy Makers” Award, which recognizes privacy research deemed most relevant to policy makers and distributes it in an annual digest.
Professor Hartzog is a Non-resident Fellow at The Cordell Institute for Policy in Medicine & Law at Washington University, and an Affiliate Scholar at the Center for Internet and Society at Stanford Law School. Prior to joining Northeastern in 2017, Professor Hartzog was the Starnes Professor of Law at Samford University’s Cumberland School of Law. He has served as a Visiting Professor at Notre Dame Law School and the University of Maine School of Law. He previously worked as an attorney in private practice and as a trademark attorney for the United States Patent and Trademark Office. He also served as a clerk for the Electronic Privacy Information Center.
Hartzog, Woodrow. Privacy's Blueprint: The Battle to Control the Design of New Technologies. Harvard University Press, 2018.
Every day, Internet users interact with technologies designed to undermine their privacy. Social media apps, surveillance technologies, and the Internet of Things are all built in ways that make it hard to guard personal information. And the law says this is okay because it is up to users to protect themselves—even when the odds are deliberately stacked against them.
In Privacy’s Blueprint, Woodrow Hartzog pushes back against this state of affairs, arguing that the law should require software and hardware makers to respect privacy in the design of their products. Current legal doctrine treats technology as though it were value-neutral: only the user decides whether it functions for good or ill. But this is not so. As Hartzog explains, popular digital tools are designed to expose people and manipulate users into disclosing personal information.
Against the often self-serving optimism of Silicon Valley and the inertia of tech evangelism, Hartzog contends that privacy gains will come from better rules for products, not users. The current model of regulating use fosters exploitation. Privacy’s Blueprint aims to correct this by developing the theoretical underpinnings of a new kind of privacy law responsive to the way people actually perceive and use digital technologies. The law can demand encryption. It can prohibit malicious interfaces that deceive users and leave them vulnerable. It can require safeguards against abuses of biometric surveillance. It can, in short, make the technology itself worthy of our trust.
Halpern, Sue, Woodrow Hartzog, Mary Ziegler, Cyrus Farivar, and Sarah E. Igo. "The Known Known." The New York Review of Books, 2018.
Hartzog, Woodrow, The Public Information Fallacy (December 7, 2017). 98 Boston University Law Review 459 (2019).
The concept of privacy in “public” information or acts is a perennial topic for debate. It has given privacy law fits. People struggle to reconcile the notion of protecting information that has been made public with traditional accounts of privacy. As a result, successfully labeling information as public often functions as a permission slip for surveillance and personal data practices. It has also given birth to a significant and persistent misconception — that public information is an established and objective concept.
In this article, I argue that the “no privacy in public” justification is misguided because nobody knows what “public” even means. It has no set definition in law or policy. This means that appeals to the public nature of information and contexts in order to justify data and surveillance practices are often just guesswork. There are at least three different ways to conceptualize public information: descriptively, negatively, or by designation. For example, is the criterion for determining publicness whether information was hypothetically accessible to anyone? Or is public information anything that’s controlled, designated, or released by state actors? Or maybe what’s public is simply everything that’s “not private?”
If the concept of “public” is going to shape people’s social and legal obligations, its meaning should not be assumed. Law and society must recognize that labeling something as public is both consequential and value-laden. To move forward, we should focus on the values we want to serve, the relationships and outcomes we want to foster, and the problems we want to avoid.
Richards, Neil M. and Hartzog, Woodrow, The Pathologies of Digital Consent (April 11, 2019). Washington University Law Review, 2019.
This article offers four contributions to improve our understanding of consent in the digital world. First, we offer a conceptual vocabulary of “the pathologies of consent” — a framework for talking about different kinds of defects that consent models can suffer, such as unwitting consent, coerced consent, and incapacitated consent. Second, we offer three conditions for when consent will be most valid in the digital context: when choice is infrequent, when the potential harms resulting from that choice are vivid and easy to imagine, and when we have the correct incentives to choose consciously and seriously. The further we fall from these conditions, the more a particular consent will be pathological and thus suspect. Third, we argue that our theory of consent pathologies sheds light on the so-called “privacy paradox” — the notion that there is a gap between what consumers say about wanting privacy and what they actually do in practice. Understanding the “privacy paradox” in terms of consent pathologies shows how consumers are not hypocrites who say one thing but do another. On the contrary, the pathologies of consent reveal how consumers can be nudged and manipulated by powerful companies against their actual interests, and that this process is easier when consumer protection law falls far from the gold standard. In light of these findings, we offer a fourth contribution — the theory of consumer trust we have suggested in prior work and which we further elaborate here as an alternative to our over-reliance on consent and its many pathologies.
Hartzog, Woodrow, The Case Against Idealising Control (December 12, 2018). 4 European Data Protection Law Review 423 (2018).
Everyone, from scholars, industry, and privacy advocates to lawmakers, regulators, and judges, seems to have settled on the idea that the key to privacy is control over personal information. But in practice, there is only so much a person can do. Control is far too precious and finite a concept to meaningfully scale. It will never work for personal data mediated by technology.
Now we have an entire empire of data protection built around the crumbling edifice of control. The idealisation of control in modern data protection regimes like the GDPR and the ePrivacy Directive creates a pursuit that is actually adversarial to safe and sustainable data practices. It deludes us about the efficacy of rules and dooms future regulatory proposals to walk down the same, misguided path. We should dislodge and minimise the concept of control as a goal of data protection.
In mediated environments, the control we users get is illusory, overwhelming, and myopic. Justifying control measures on privacy grounds requires so much effort and ties us in such knots that control seems to serve merely as a proxy for some other protective goal that’s just out of reach. Lawmakers and companies should pursue more direct values like trust, obscurity, and autonomy. They should embrace more direct strategies like mandatory deletion, collection and purpose limitations, and non-waivable duties of care, loyalty, and discretion. People’s trust in companies should be protected regardless of the control they are given.
Hartzog, Woodrow, Body Cameras and the Path to Redeem Privacy Law (October 17, 2018). 96 North Carolina Law Review 1257 (2018). Available at SSRN: https://ssrn.com/abstract=3268988
From a privacy perspective, the movement towards police body cameras seems ominous. The prospect of a surveillance device capturing massive amounts of data concerning people’s most vulnerable moments is daunting. These concerns are compounded by the fact that there is little consensus and few hard rules on how and for whom these systems should be built and used. But in many ways, this blank slate is a gift. Law and policy makers are not burdened by the weight of rules and technologies created in a different time for a different purpose. These surveillance and data technologies will be modern. Many of the risks posed by the systems will be novel as well. Our privacy rules must keep up.
In this Article, I argue that police body cameras are an opportunity to chart a path past privacy law’s most vexing missteps and omissions. Specifically, lawmakers should avoid falling back on the “reasonable expectation of privacy” standard. Instead, they should use body cameras to embrace more nuanced theories of privacy, such as trust and obscurity. Trust-based relationships can be used to counter the harshness of the third party doctrine. The value of obscurity reveals the misguided nature of the argument that there is “no privacy in public.”
Law and policy makers can also better protect privacy by creating rules that address how body cameras and data technologies are designed in addition to how they are used. Since body-camera systems implicate every stage of the modern data life cycle from collection to disclosure, they can serve as a useful model across industry and government. But if law and policy makers hope to show how privacy rules can be improved, they must act quickly. The path to privacy law’s redemption will stay clear for only so long.
Neil Richards & Woodrow Hartzog, "Privacy's Trust Gap," 126 Yale Law Journal 1180 (2017)
It can be easy to get depressed about the state of privacy these days. In an age of networked digital information, many of us feel disempowered by the various governments, companies, and criminals trying to peer into our lives to collect our digital data trails. When so much is in flux, the way we think about an issue matters a great deal. Yet while new technologies abound, our ideas and thinking — as well as our laws — have lagged in grappling with the new problems raised by the digital revolution. In their important new book, Obfuscation: A User’s Guide for Privacy and Protest (2016), Finn Brunton and Helen Nissenbaum offer a manifesto for the digitally weak and powerless, whether ordinary consumers or traditionally marginalized groups. They call for increased use of obfuscation, the deliberate addition of bad information to interfere with surveillance, a strategy that can be “good enough” to do the job for individuals much or even most of the time. Obfuscation is attractive because it offers to empower individuals against the shadowy government and corporate forces of surveillance in the new information society. While this concept represents an important contribution to the privacy debates, we argue in this essay that we should be hesitant to embrace obfuscation fully.
We argue instead that as a society we can and should do better than relying on individuals to protect themselves against powerful institutions. We must think about privacy instead as involving the increasing importance of information relationships in the digital age, and our need to rely on (and share information with) other people and institutions to live our lives. Good relationships rely upon trust, and the way we have traditionally thought about privacy in terms of individual protections creates a trust gap. If we were to double down on obfuscation, this would risk deepening that trust gap. On the contrary, we believe that the best solution for problems of privacy in the digital society is to use law to create incentives to build sustainable, trust-promoting information relationships.
We offer an alternative frame for thinking about privacy problems in the digital age, and propose that a conceptual revolution based upon trust is a better path forward than one based on obfuscation. Drawing upon our prior work, as well as the growing community of scholars working at the intersection of privacy and trust, we offer a blueprint for trust in our digital society. This consists of four foundations of trust — the commitment to be honest about data practices, the importance of discretion in data usage, the need for protection of personal data against outsiders, and the overriding principle of loyalty to the people whose data is being used, so that it is data and not humans that become exploited. We argue that we must recognize the importance of information relationships in our networked, data-driven society. There exist substantial incentives already for digital intermediaries to build trust. But when incentives and markets fail, the obligation for trust-promotion must fall to law and policy. The first-best privacy future will remain one in which privacy is safeguarded by law, in addition to private ordering and self-help.
Woodrow Hartzog, Maryland Law Review 952 (2017) (symposium).
Privacy law is in a bit of a pickle thanks to our love of the Fair Information Practices (“FIPs”). The FIPs are the set of aspirational principles developed over the past fifty years used to model rules for responsible data practices. Thanks to the FIPs, data protection regimes around the world require those collecting and using personal information to be accountable, prudent, and transparent. They give data subjects control over their information by bestowing rights of correction and deletion. While the FIPs have been remarkably useful, they have painted us into a corner.
A sea change is afoot in the relationship between privacy and technology. FIPs-based regimes were relatively well-equipped for the first wave of personal computing. But automated technologies and exponentially greater amounts of data have pushed FIPs principles like data minimization, transparency, choice, and access to the limit. Advances in robotics, genetics, biometrics, and algorithmic decision making are challenging the idea that rules meant to ensure fair aggregation of personal information in databases are sufficient. Control over information in databases isn’t even the half of it anymore. The mass connectivity of the “Internet of Things” and near ubiquity of mobile devices make the security and surveillance risks presented by the isolated computer terminals and random CCTV cameras of the ‘80s and ‘90s seem quaint.
Richards, Neil M. and Hartzog, Woodrow, Taking Trust Seriously in Privacy Law (September 3, 2015). 19 Stanford Technology Law Review 431 (2016). Available at SSRN: https://ssrn.com/abstract=2655719 or http://dx.doi.org/10.2139/ssrn.2655719
Trust is beautiful. The willingness to accept vulnerability to the actions of others is the essential ingredient for friendship, commerce, transportation, and virtually every other activity that involves other people. It allows us to build things, and it allows us to grow. Trust is everywhere, but particularly at the core of the information relationships that have come to characterize our modern, digital lives. Relationships between people and their ISPs, social networks, and hired professionals are typically understood in terms of privacy. But the way we have talked about privacy has a pessimism problem – privacy is conceptualized in negative terms, which leads us to mistakenly look for “creepy” new practices, focus excessively on harms from invasions of privacy, and place too much weight on the ability of individuals to opt out of harmful or offensive data practices.
But there is another way to think about privacy and shape our laws. Instead of trying to protect us against bad things, privacy rules can also be used to create good things, like trust. In this paper, we argue that privacy can and should be thought of as enabling trust in our essential information relationships. This vision of privacy creates value for all parties to an information transaction and enables the kind of sustainable information relationships on which our digital economy must depend.
Drawing by analogy on the law of fiduciary duties, we argue that privacy laws and practices centered on trust would enrich our understanding of the existing privacy principles of confidentiality, transparency, and data protection. Reconsidering these principles in terms of trust would move them from procedural means of compliance for data extraction towards substantive principles to build trusted, sustainable information relationships. Thinking about privacy in terms of trust also reveals a principle that we argue should become a new bedrock tenet of privacy law: the Loyalty that data holders must give to data subjects. Rejuvenating privacy law by getting past Privacy Pessimism is essential if we are to build the kind of digital society that is sustainable and ultimately beneficial to all – users, governments, and companies. There is a better way forward for privacy. Trust us.
Woodrow Hartzog & Evan Selinger, "Surveillance as Loss of Obscurity," 72 Washington and Lee Law Review 1343 (2015)
Everyone seems concerned about government surveillance, yet we have a hard time agreeing when and why it is a problem and what we should do about it. When is surveillance in public unjustified? Does metadata raise privacy concerns? Should encrypted devices have a backdoor for law enforcement officials? Despite increased attention, surveillance jurisprudence and theory still struggle for coherence. A common thread for modern surveillance problems has been difficult to find.
In this article we argue that the concept of ‘obscurity,’ which deals with the transaction costs involved in finding or understanding information, is the key to understanding and uniting modern debates about government surveillance. Obscurity can illuminate different areas where transaction costs for surveillance are operative and explain why making surveillance hard but possible is the central issue in the government-surveillance debates. Obscurity can also explain why the solutions to the government-surveillance problem should revolve around introducing friction and inefficiency into the process, whether legally through procedural requirements like warrants or technologically through tools like robust encryption.
Ultimately, obscurity can provide a clearer picture of why and when government surveillance is troubling. It provides a common thread for disparate surveillance theories and can be used to direct surveillance reform.
Neil Richards & Woodrow Hartzog, DePaul Law Review 579 (2017) (symposium).
Hartzog, Woodrow and Selinger, Evan, The Internet of Heirlooms and Disposable Things (June 1, 2016). 17 North Carolina Journal of Law & Technology 581 (2016). Available at SSRN: https://ssrn.com/abstract=2787511
The Internet of Things (“IoT”) is here, and we seem to be going all in. We are trying to put a microchip in nearly every object that is not nailed down and even a few that are. Soon, your cars, toasters, toys, and even your underwear will be wired up to make your lives better. The general thought seems to be that “Internet connectivity makes good objects great.” While the IoT might be incredibly useful, we should proceed carefully. Objects are not necessarily better simply because they are connected to the Internet. Often, the Internet can make objects worse and users worse-off. Digital technologies can be hacked. Each new camera, microphone, and sensor adds another vector for attack and another point of surveillance in our everyday lives. The problem is that privacy and data security law have failed to recognize that some “things” are more dangerous than others as part of the IoT. Some objects, like coffee pots and dolls, can last long after the standard life-cycle of software. Meanwhile, cheap, disposable objects, like baby wipes, might not be worth outfitting with the most secure hardware and software. Yet they all are part of the network. This essay argues that the nature of the “thing” in the IoT should play a more prominent role in privacy and data security law. The decision to wire up an object should be coupled with responsibilities to make sure its users are protected. Only then can we trust the Internet of Heirlooms and Disposable Things.
Hartzog, Woodrow and Solove, Daniel J., The Scope and Potential of FTC Data Protection (November 1, 2015). 83 George Washington Law Review 2230 (2015); GWU Law School Public Law Research Paper No. 2014-40; GWU Legal Studies Research Paper No. 2014-40. Available at SSRN: https://ssrn.com/abstract=2461096
For more than fifteen years, the FTC has regulated privacy and data security through its authority to police deceptive and unfair trade practices as well as through powers conferred by specific statutes and international agreements. Recently, the FTC’s powers for data protection have been challenged by Wyndham Worldwide Corp. and LabMD. These recent cases raise a fundamental issue, and one that has surprisingly not been well explored: How broad are the FTC’s privacy and data security regulatory powers? How broad should they be?
In this Article, we address the issue of the scope of FTC authority in the areas of privacy and data security, which together we will refer to as “data protection.” We argue that the FTC not only has the authority to regulate data protection to the extent it has been doing, but that its granted jurisdiction can expand its reach much more. Normatively, we argue that the FTC’s current scope of data protection authority is essential to the United States data protection regime and should be fully embraced to respond to the privacy harms unaddressed by existing remedies available in tort or contract, or by various statutes. In contrast to the legal theories underlying these other causes of action, the FTC can regulate with a much different and more flexible understanding of harm than one focused on monetary or physical injury.
Thus far, the FTC has been quite modest in its enforcement, focusing on the most egregious offenders and enforcing the most widespread industry norms. Yet the FTC can and should push the development of norms a little more (though not in an extreme or aggressive way). We discuss steps the FTC should take to change the way it exercises its power, such as with greater transparency and more nuanced sanctioning and auditing.
Woodrow Hartzog, 5 Journal of Human-Robot Interaction 70 (2016).
Consumer robots like personal digital assistants, automated cars, robot companions, chore-bots, and personal drones raise common consumer protection issues, such as fraud, privacy, data security, and risks to health, physical safety, and finances. They also raise new consumer protection issues, or at least call into question how existing consumer protection regimes might be applied to such emerging technologies. Yet it is unclear which legal regimes should govern these robots and what consumer protection rules for robots should look like.
This paper argues that the FTC’s grant of authority and existing jurisprudence are well-suited for protecting consumers who buy and interact with robots. The FTC has proven to be a capable regulator of communications, organizational procedures, and design, which are the three crucial concepts for safe consumer robots.
Woodrow Hartzog, Gregory Conti, John Nelson, and Lisa A. Shay, Inefficiently Automated Law Enforcement, 2015 Michigan State Law Review 1763.
For some crimes the entire law enforcement process can now be automated. No humans are needed to detect the crime, identify the perpetrator, or impose punishment. While automated systems are cheap and efficient, governments and citizens must look beyond these obvious savings as manual labor is replaced by robots and computers.
Inefficiency and indeterminacy have significant value in automated law enforcement systems and should be preserved. Humans are inefficient, yet more capable of ethical and contextualized decision-making than automated systems. Inefficiency is also an effective safeguard against perfectly enforcing laws that were created with implicit assumptions of leniency and discretion.
This Article introduces a theory of inefficiently automated law enforcement built around the idea that those introducing or increasing automation in one part of an automated law enforcement system should ensure that inefficiency and indeterminacy are preserved or increased in other parts of the system.
Woodrow Hartzog, Lisa Shay, Greg Conti, Dominic Larkin, and John Nelson, in ROBOT LAW (Ryan Calo, Michael Froomkin & Ian Kerr eds., Edward Elgar 2016).
Due to recent advances in computerized analysis and robotics, automated law enforcement has become technically feasible. Unfortunately, laws were not created with automated enforcement in mind, and even seemingly simple laws have subtle features that require programmers to make assumptions about how to encode them. We demonstrate this ambiguity with an experiment where a group of 52 programmers was assigned the task of automating the enforcement of traffic speed limits. A late-model vehicle was equipped with a sensor that collected actual vehicle speed over an hour-long commute. The programmers (without collaboration) each wrote a program that computed the number of speed limit violations and issued mock traffic tickets. Despite quantitative data for both vehicle speed and the speed limit, the number of tickets issued varied from none to one per sensor sample above the speed limit. Our results from the experiment highlight the significant deviation in number and type of citations issued during the course of the commute, based on legal interpretations and assumptions made by programmers untrained in the law. These deviations were mitigated, but not eliminated, in one sub-group that was provided with a legally-reviewed software design specification, providing insight into ways to automate the law in the future. Automation of legal reasoning is likely to be the most effective in contexts where legal conclusions are predictable because there is little room for choice in a given model; that is, they are determinable. Yet this experiment demonstrates that even relatively narrow and straightforward “rules” can be problematically indeterminate in practice.
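The encoding ambiguity described in this experiment can be made concrete with a minimal sketch. The data, threshold, and function names below are hypothetical illustrations, not the study's actual code: two equally defensible readings of "a speeding violation" yield different ticket counts from the same sensor trace.

```python
SPEED_LIMIT = 55  # mph (illustrative)

# Simulated per-second speed samples from a short stretch of the commute
speeds = [53, 54, 56, 57, 58, 54, 53, 56, 57, 53, 52]

def tickets_per_sample(samples, limit):
    """Strictest reading: every sensor sample above the limit is a violation."""
    return sum(1 for s in samples if s > limit)

def tickets_per_episode(samples, limit):
    """Lenient reading: one ticket per continuous episode of speeding."""
    tickets, speeding = 0, False
    for s in samples:
        if s > limit and not speeding:
            tickets += 1
        speeding = s > limit
    return tickets

print(tickets_per_sample(speeds, SPEED_LIMIT))   # 5 tickets
print(tickets_per_episode(speeds, SPEED_LIMIT))  # 2 tickets
```

Both programs are faithful to the statute's text; the fivefold gap between them comes entirely from an interpretive assumption the law never specifies, which is the indeterminacy the experiment documents.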
Woodrow Hartzog, Lisa Shay, Greg Conti, and John Nelson, in ROBOT LAW (Ryan Calo, Michael Froomkin & Ian Kerr eds., Edward Elgar 2016).
We are rapidly approaching a time when automated law enforcement will no longer be an aberration, but rather a viable option for many law enforcement agencies. Consider the following hypothetical based on existing technology: Driving down a highway where the speed limit is 65mph, your vehicle’s built-in GPS receiver detects that you are approaching a large city. Cross-referencing your location with a database of speed limits, the car determines that the speed limit reduces to 55mph in another mile. A pleasant computer-generated contralto emits from the speaker system, “Warning! Speed limit reducing to 55 mph.”
However, there is excellent weather and visibility, and traffic is moving briskly. Unaware of new law enforcement policies in effect, you decide to maintain the prevailing traffic flow at 63 mph. As you cross into the 55mph zone, your vehicle’s pleasant contralto announces, “Posted speed limit exceeded, authorities notified.” Simultaneously, your vehicle’s onboard communications system notifies a nationwide moving-violation tracking system indicating the date, time, location, vehicle registration, and recorded speed. The tracking system determines, based on location, the appropriate agency. The police agency’s computer looks up the appropriate fine and emails a ticket to the person registered as the vehicle’s owner as well as to the company insuring the vehicle. This is an example of “perfect surveillance and enforcement.” Alternatively, the vehicle could have been programmed to simply reduce speed to the posted speed limit, a sort of reverse cruise control. This would be an example of “perfect prevention” or “preemption.”
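The enforcement pipeline in this hypothetical can be sketched as a simple data flow. Every name, lookup table, and fine schedule below is an invented illustration of the sequence the scenario describes (report, jurisdiction routing, fine computation, notification), not a real system:

```python
from dataclasses import dataclass

@dataclass
class SpeedReport:
    """Illustrative record a vehicle might transmit on exceeding the limit."""
    timestamp: str
    location: str
    registration: str
    recorded_speed: int
    posted_limit: int

def route_to_agency(report: SpeedReport) -> str:
    # The tracking system maps location to the agency with jurisdiction;
    # a toy lookup table stands in for that database here.
    jurisdictions = {"I-90 mile 12": "City Police Dept"}
    return jurisdictions.get(report.location, "State Highway Patrol")

def issue_ticket(report: SpeedReport) -> dict:
    """'Perfect enforcement': compute the fine and notify owner and insurer."""
    fine = 10 * (report.recorded_speed - report.posted_limit)  # toy fine schedule
    return {
        "agency": route_to_agency(report),
        "fine": fine,
        "notify": ["registered_owner@example.com", "insurer@example.com"],
    }

report = SpeedReport("2015-06-01T08:14", "I-90 mile 12", "ABC-123", 63, 55)
print(issue_ticket(report))
```

The sketch shows why the process needs no human in the loop once the sensor fires: detection, adjudication, and punishment collapse into a handful of deterministic lookups, which is precisely what makes the preservation of discretion an explicit design question.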
Ira Rubinstein & Woodrow Hartzog, "Anonymization and Risk," 91 Washington Law Review 703 (2016)
Perfect anonymization of data sets that contain personal information has failed. But the process of protecting data subjects in shared information remains integral to privacy practice and policy. While the deidentification debate has been vigorous and productive, there is no clear direction for policy. As a result, the law has been slow to adopt a holistic approach to protecting data subjects when data sets are released to others. Currently, the law is focused on whether an individual can be identified within a given set. We argue that the best way to move data release policy past the alleged failures of anonymization is to focus on the process of minimizing risk of reidentification and sensitive attribute disclosure, not preventing harm. Process-based data release policy, which resembles the law of data security, will help us move past the limitations of focusing on whether data sets have been “anonymized.” It draws upon different tactics to protect the privacy of data subjects, including accurate deidentification rhetoric, contracts prohibiting reidentification and sensitive attribute disclosure, data enclaves, and query-based strategies to match required protections with the level of risk. By focusing on process, data release policy can better balance privacy and utility where nearly all data exchanges carry some risk.
Woodrow Hartzog, "Unfair and Deceptive Robots," 74 Maryland Law Review 785 (2015)
Robots, like household helpers, personal digital assistants, automated cars, and personal drones are or will soon be available to consumers. These robots raise common consumer protection issues, such as fraud, privacy, data security, and risks to health, physical safety, and finances. Robots also raise new consumer protection issues, or at least call into question how existing consumer protection regimes might be applied to such emerging technologies. Yet it is unclear which legal regimes should govern these robots and what consumer protection rules for robots should look like.
The thesis of the Article is that the FTC’s grant of authority and existing jurisprudence make it the preferable regulatory agency for protecting consumers who buy and interact with robots. The FTC has proven to be a capable regulator of communications, organizational procedures, and design, which are the three crucial concepts for safe consumer robots. Additionally, the structure and history of the FTC shows that the agency is capable of fostering new technologies as it did with the Internet. The agency generally defers to industry standards, avoids dramatic regulatory lurches, and cooperates with other agencies. Consumer robotics is an expansive field with great potential. A light but steady response by the FTC will allow the consumer robotics industry to thrive while preserving consumer trust and keeping consumers safe from harm.
Woodrow Hartzog, "Reviving Implied Confidentiality," Indiana Law Journal, Vol. 89, Iss. 2, Article 6 (2014)
The law of online relationships has a significant flaw—it regularly fails to account for the possibility of an implied confidence. The established doctrine of implied confidentiality is, without explanation, almost entirely absent from online jurisprudence in environments where it has traditionally been applied offline, such as with sensitive data sets and intimate social interactions.
Courts’ abandonment of implied confidentiality in online environments should have been foreseen. The concept has not been developed enough to be consistently applied in environments such as the Internet that lack obvious physical or contextual cues of confidence. This absence is significant because implied confidentiality could be the missing piece that helps resolve the problems caused by the disclosure of personal information on the Internet.
This Article urges a revival of implied confidentiality by identifying from the relevant case law a set of implied confidentiality norms based upon party perception and inequality that courts should be, but are not, considering in online disputes. These norms are used to develop a framework for courts to better recognize implied agreements and relationships of trust in all contexts.
Woodrow Hartzog, 12 Colorado Technology Law Journal 332 (2014) (symposium)
Two of the greatest modern challenges to protecting personal information are determining how to protect information that is already known by many and how to create an adequate remedy for privacy harms that are opaque, remote, or cumulative. Both of these challenges are front and center for those who seek to protect socially shared information. Social media and wearable communication technologies like Google Glass present vexing questions about whether information that is known by many can ever be “private,” what the privacy harm might be from this information’s misuse, and how to remedy such harms in balance with competing values such as free speech, transparency, and security. Some law and policy makers have responded to the challenge of protecting privacy in an era of m
Daniel J. Solove & Woodrow Hartzog, "The FTC and the New Common Law of Privacy," 114 Columbia Law Review 583 (2014)
One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite over fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States — more so than nearly any privacy statute or any common law tort.
In this Article, we contend that the FTC’s privacy jurisprudence is functionally equivalent to a body of common law, and we examine it as such. We explore how and why the FTC, and not contract law, came to dominate the enforcement of privacy policies. A common view of the FTC’s privacy jurisprudence is that it is thin, merely focusing on enforcing privacy promises. In contrast, a deeper look at the principles that emerge from FTC privacy “common law” demonstrates that the FTC’s privacy jurisprudence is quite thick. The FTC has codified certain norms and best practices and has developed some baseline privacy protections. Standards have become so specific they resemble rules. We contend that the foundations exist to develop this “common law” into a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, extends far beyond privacy policies, and involves a full suite of substantive rules that exist independently from a company’s privacy representations.
Woodrow Hartzog & Frederic D. Stutzman, "Obscurity by Design," 88 Washington Law Review 385 (2013)
Design-based solutions to confront technological privacy threats are becoming popular with regulators. However, these promising solutions have left the full potential of design untapped. With respect to online communication technologies, design-based solutions for privacy remain incomplete because they have yet to successfully address the trickiest aspect of the Internet — social interaction. This Article posits that privacy-protection strategies such as “Privacy by Design” face unique challenges with regard to social software and social technology due to their interactional nature.
This Article proposes that design-based solutions for social technologies benefit from increased attention to user interaction, with a focus on the principles of “obscurity” rather than the expansive and vague concept of “privacy.” The main thesis of this Article is that obscurity is the optimal protection for most online social interactions and, as such, is a natural locus for design-based privacy solutions for social technologies. To that end, this Article develops a model of “obscurity by design” as a means to address the privacy problems inherent in social technologies and the Internet.
Woodrow Hartzog & Frederic D. Stutzman, "The Case for Online Obscurity," 101 California Law Review 1 (2013)
On the Internet, obscure information has a minimal risk of being discovered or understood by unintended recipients. Empirical research demonstrates that Internet users rely on obscurity perhaps more than anything else to protect their privacy. Yet, online obscurity has been largely ignored by courts and lawmakers. In this Article, we argue that obscurity is a critical component of online privacy, but it has not been embraced by courts and lawmakers because it has never been adequately defined or conceptualized. This lack of definition has resulted in the concept of online obscurity being too insubstantial to serve as a helpful guide in privacy disputes. In its place, courts and lawmakers have generally found that the unfettered ability of any hypothetical individual to find and access information on the Internet renders that information public, and therefore ineligible for privacy protection.

Drawing from multiple disciplines, this Article develops a more focused, clear, and workable definition of online obscurity: information is obscure online if it lacks one or more key factors that are essential to discovery or comprehension. We have identified four of these factors: (1) search visibility, (2) unprotected access, (3) identification, and (4) clarity. This framework could be applied as an analytical tool or as part of an obligation.

Viewing obscurity as a continuum could help courts and lawmakers determine if information is eligible for privacy protections. Obscurity could also serve as a compromise protective remedy: instead of forcing websites to remove sensitive information, courts could mandate some form of obscurity. Finally, obscurity could form part of an agreement where Internet users bound to a “duty to maintain obscurity” would be allowed to further disclose information so long as they kept the information generally as obscure as they received it.
Woodrow Hartzog, "Website Design as Contract," 60 American University Law Review 1635 (2011)