Nicholas Bagley, The Procedure Fetish, 118 Mich. L. Rev. __ (forthcoming, 2019), available at SSRN
Every administrative law professor has been there. Perhaps you are discussing hard-look review, notice-and-comment rulemaking, or procedural challenges to non-legislative rules. Students, perhaps puzzled by the courts’ (mostly the D.C. Circuit’s) indifference to the spare requirements of the Administrative Procedure Act, may wonder where this layer of doctrine comes from or, more importantly, why it is there. At that point you go back to the beginning of the class. Remember concerns about how the “fourth branch of the Government . . . has deranged our three branch legal theories much as the concept of a fourth dimension unsettles our three-dimensional thinking”? Remember the theory about agency behavior that posits regulators’ incentives will steer them toward servicing the industry they are supposed to monitor in the public interest? These additional procedures are here to compensate for those worries about legitimacy, capture, and public participation, thus justifying and improving the workings of the administrative state.
So far, so familiar. But then the plot takes a twist. Professor Nicholas Bagley bursts like Kool-Aid Man through the wall of your classroom. This intruder, however, is telling you to stop drinking the Procedural Kool-Aid that has sustained so many administrative law jurists and scholars. (Not so much “OH YEAH!” as “No.”) In The Procedure Fetish, forthcoming in the Michigan Law Review, Bagley contends that procedural constraints on agency action can sometimes bolster legitimacy and improve governance, but lawyers’ unexamined fealty to the cult of procedure does not hold up to scrutiny. Further, Bagley argues that for progressive lawyers and scholars this faith is misguided and plays into the hands of those who seek to undermine an activist state. Although Bagley speaks primarily here to his progressive fellows-in-arms, this sharply argued paper merits the attention of administrative lawyers of every stripe. It changes the way I will teach the subject. (Also, it is a great read; the prose sings and sometimes even struts.) Continue reading "How to Learn to Stop Worrying and Love the Administrative State"
In 2019, Oregon became the first state to pass legislation that essentially bans single-family zoning. As states across the country struggle to respond to the housing affordability crisis, Oregon’s actions do not stand alone. John Infranca’s recent article, The New State Zoning: Land Use Preemption Amid a Housing Crisis, may have been published before Oregon’s historic vote, but it is essential reading for those interested in the future of zoning.
The article does a masterful job collecting examples of similar moves by states to preempt local zoning as a way to facilitate the construction of more dense housing. It also persuasively argues that states are going to increasingly use state preemption through state regulation as a way to respond to the housing affordability crisis. Continue reading "Reclaiming State Authority Over Zoning"
As tort reform heated up in the United States late in the last century, so too did the debate over the appropriateness of punitive damages awards, especially where those damages were seen to be excessive. Complicating the picture, of course, is what it means for such damages to be excessive in the first place, for, unlike traditional damages intended to compensate the injured party, punitive damages are intended to punish and deter the wrongdoing party. As a starting point, most courts and scholars are in agreement that the reprehensibility of the wrongdoing party and the amount needed to deter similar conduct in the future are important considerations that should be taken into account before awarding punitive damages. After this, however, all bets are off. For instance, scholars disagree with one another as to whether punitive damages are really out of control in the first place (most, but not all, seem to think that they are), and even if they are, they further disagree on what should be done about the problem. For instance, how predictable should punitive damages awards be, and what role, if any, should be played by the defendant’s wealth, or by other civil or criminal penalties the wrongdoer might be subject to, or by the probability of the defendant’s behavior escaping detection, or by the ratio between the compensatory and punitive damages, or by whether the claim is being reviewed as excessive on common law grounds or as unconstitutional on due process grounds, and how does all of this tie in to the twin (but frequently at odds) goals of punishment and deterrence? Indeed, there are few principles in all of remedies more contentious (and confusing!) than those governing the current punitive damages landscape, as a stack of recently-graded remedies exams sitting next to my desk will readily attest.
It is in part due to this confusion that hundreds of law review articles have been written on punitive damages since the 1980s alone, just when tort reform started to find its feet under the Reagan administration and initiated a cataclysmic shift in the punitive damages landscape whose aftershocks are still being felt today. Fortunately, one of the newest contributions to the literature—a well-researched, enjoyably-written, and cogently-argued Article called Taming Blockbuster Punitive Damages Awards by Professors Benjamin J. McMichael and W. Kip Viscusi—has found something new to say. The Article not only provides “the first empirical analysis of the effect of state punitive damages caps on blockbuster awards” (i.e., those awards exceeding $100 million, which arguably pose the biggest threat to fundamental notions of fairness), but also is the first to explore the dynamic interplay between the attempt of individual states to rein in and render more predictable punitive damages awards “with the effect of the Supreme Court’s current constitutional doctrine on punitive damages.” (P. 171.) Continue reading "Making Punitive Damages More Predictable"
All lawyers in private practice must recognize the possibility of opening a summons and seeing their names listed as defendants. Many private practitioners are more concerned about malpractice than professional discipline. The Preface to the Restatement of the Law Governing Lawyers captures the regulatory role of malpractice in stating that “the remedy of malpractice liability and the remedy of disqualification are practically of greater importance in most law practice than the risk of disciplinary proceedings.”
Despite the important role that malpractice plays in influencing lawyer conduct, only a small number of empirical scholars have studied legal malpractice claims. That is one reason why we should welcome the recent book by Herbert M. Kritzer and Neil Vidmar, When Lawyers Screw Up: Improving Access to Justice for Legal Malpractice Victims. As its title suggests, the book persuasively makes the case for change, because a large percentage of victims are deprived of a meaningful remedy when pursuing legal malpractice claims. Continue reading "What Lawyers Can Learn from Their Mistakes: An Empirical Examination of Legal Malpractice"
Everyone agrees that law has a conduct-guiding function. Moreover, most legal theorists assume that this conduct-guiding occurs, or is supposed to occur, by providing reasons for action. This very readable book is about the kind of reasons to comply with the law that law can provide and—under favorable conditions—does provide. As most of us know, officials applying legal requirements largely act as if these requirements trump (nearly) everything else for law subjects. In terms made famous by Joseph Raz, they treat law as giving rise to pre-emptive reasons to comply. These are reasons that (a) are ordinary reasons in favor of conduct and (b) exclude some opposing reasons, in the sense that they are not to be considered in a law subject’s practical reasoning. But this is not how civil disobedients and otherwise law-abiding motorists treat many legal requirements. (The latter, notoriously, consider what appear to be excluded considerations, such as the speed of traffic and the apparent likelihood that speeders will be apprehended, to reach decisions about obeying the posted speed limit.)
This gives us two views about what sort of reasons law (potentially) provides for action: (1) reasons that pre-empt competing reasons, and (2) reasons that compete with others in terms of weight. Gur carefully criticizes the two positions as inadequate before developing a refreshingly different sort of answer. The reader will be surprised to learn what this difference implies about the law and its authority. Continue reading "Reasons to Comply With the Law"
Jeremy N. Sheff, Jefferson’s Taper (Feb. 11, 2019), available at SSRN
It’s not news that normatively fraught debates in legal academia tend to become polarized and then stuck. Scholarship often tends to cohere around preexisting camps, causing debate to focus on which camp (and who within each camp) is right and to ignore the possibility that the available framings may have missed something important. In light of this, one of the most valuable and refreshing moves an article can make is to throw a bomb into the long-accepted binary of a given academic debate by suggesting an entirely new way of thinking about an issue. This is precisely what Jeremy Sheff does to the debate over foundational concepts of information ownership in his fascinating and provocative draft, Jefferson’s Taper.
Here’s the backstory: Some scholars favor a limited vision of information owners’ rights and tend to embrace what has become known as the utilitarian theory of copyright and patent. According to this view, property in creative expression or inventions is not rooted in any notion of “right” other than the state’s positive law. Rather, the state grants monopolies in information only because (and to the extent that) doing so is necessary to incentivize the creation of things that would earn no profits for their owners absent law’s imposition of exclusive rights. Other scholars prefer a more expansive vision of owners’ rights; these scholars tend to advocate an alternative view of copyright and patent rooted in the writings of John Locke. This approach locates a pre-political right to ideas in the labor expended in creating them and rejects the notion that copyright and patent are nothing more than state-created monopolies designed to calibrate the optimal level of creative and inventive production. Continue reading "A Classical Perspective on Information Ownership"
Sharona Hoffman, What Genetic Testing Teaches About Predictive Health Analytics Regulation, __ N.C. L. Rev. __ (forthcoming), available at SSRN
Professor Sharona Hoffman is one of our most prominent health law scholars. She is particularly interested in the intricacies of health privacy and quality in the context of pervasive healthcare technologies such as electronic health records and big data. Her expertise extends to a deep understanding of the Genetic Information Nondiscrimination Act of 2008 (GINA) and the scope of the Americans with Disabilities Act (ADA). In her excellent article, What Genetic Testing Teaches about Predictive Health Analytics Regulation, Hoffman neatly combines these interests, providing a thoughtful critique of predictive health analytics founded on a detailed description of our legal and policy experiences with genetic testing. The comparison is particularly pertinent because, to an extent, algorithmic medicine is stepping into a space that many had hoped would by now be occupied by precision medicine.
Hoffman identifies the policy and regulatory issues raised by both genetic testing and what she labels as long-term predictive analytics as “clinical validity and accuracy, privacy and discrimination, and psychological harms.” (P. 14.) At root, these raise the question of what Jessica Roberts and Elizabeth Weeks call “healthism,” “[p]ermitting—and even encouraging—discriminatory treatment based on an individual’s health status.” Continue reading "Algorithmic Medicine and the Lessons of Genomic Testing"
Michael J. Higdon, Parens Patriae and the Disinherited Child (July 2, 2019), available at SSRN
In the United States, parents can disinherit their dependent children. This rule, which I’ll call the “disinheritance power,” is one of the most blazingly idiosyncratic strands of American law. Indeed, no other legal system gives decedents this cruel freedom. And although scholars have criticized the disinheritance power for decades, it remains firmly on the books.
Michael Higdon’s engaging new article attacks this problem from a new angle. Higdon proposes that states use the venerable doctrine of parens patriae as a safety valve against egregious exercises of the disinheritance power. Continue reading "A Novel Limit on the Power to Disinherit Children"
The Fifth Amendment to the federal Constitution and virtually all state constitutions require the government to pay compensation when it “takes” private property. But many state constitutions also require compensation for government actions that “damage” property. Until now, these “Damagings Clauses” have largely been ignored by legal scholars, particularly constitutional law scholars—and even by property rights advocates. But an outstanding 2018 article by Professor Maureen “Molly” Brady (who has just moved from the University of Virginia to Harvard) could help change that. She sheds light on the origins of these clauses in the late nineteenth and early twentieth centuries, the ways in which they have been largely gutted by court decisions, and what can be done to resuscitate them today.
Twenty-seven state constitutions have clauses prohibiting the “damaging” or “injuring” of private property for public use without just compensation. In the article, Prof. Brady explains how damagings clauses were enacted in order to compensate owners for harm inflicted by new infrastructure development that was not covered by the then-dominant interpretation of state takings clauses, which generally required either a physical invasion or occupation of the property or (in the case of regulatory takings) direct restrictions on the owner’s right to use the land. That interpretation did not cover such situations as the creation of various types of pollution, debris, and access barriers that sometimes rendered property difficult or impossible to use. But while the wording of the clauses and their originally understood meaning suggested they should apply broadly, Brady shows that over time courts in most states effectively gutted them, restricting compensation to cases where it was already likely to be required by state or federal takings clauses. Continue reading "Learning from the History of State Damagings Clauses"
As distributed ledger or “blockchain” technology continues to offer decentralised and distributed decision-making, Yeung considers the way in which those automated processes (code as law) are likely to interact with conventional means of governance (code of law). This technology is based on peer-to-peer verification of transactions: it takes various forms, but the common theme is that the record of transactions is shared with all users of a given system, and transactions only make it on to that record after a fierce process of mathematical ratification. As a result, the intermediaries on which transactions have for so long depended, such as banks, clearing houses and property registries, are no longer required. Altruism and self-interest are aligned because all users have a vested interest in the continued integrity and success of the closed system, and third party intervention is neither required nor (for many users, at least in principle) desired.
Distribution and decentralisation are the crucial components of distributed ledger technology, and are the principal features which distinguish them from those forms of electronic payments which use intermediaries and electronic bank money, such as Paypal, WorldPay and BACS, for example. These characteristics also explain why cybercurrencies are often described as “trustless”, meaning that transacting parties need not have any trust for one another in the real world, so long as they trust the payment protocol (which, for reasons which will soon become apparent, they probably should). Decentralisation in this context simply means that everyone who might want to use the currency, and so has a copy of the relevant software, also has a copy of the ledger. The ledger is a record of every transaction made using that currency, and each computer operating the software (known as a node) has a copy of the entire thing: from the beginning (the “Genesis Block”) to today’s latest block. This is where the term Distributed Ledger Technology (DLT) comes from: Blockchain, which was created to underpin Bitcoin, was the first distributed ledger, but there are now distributed ledgers of several different forms. Common to every one, however, is the idea that all participants have access to the full history of transactions made using that protocol. This is a novel way of dealing with the age-old double spend problem. Historically, the challenge of how to prevent double spending has been met in two ways: the first is by using physical tokens, whose corporeal form physically prevents their being spent more than once, and the second is by employing an independent third party, such as a bank, to keep a record of transactions and their effects on the subsequent spending power of the parties involved. Cybercurrencies achieve the same thing by sharing information with every user and by ensuring that the information so shared is perfectly synchronised.
This way, “coins” cannot be spent twice because everyone would know that this is what was being attempted, and the consensus necessary for validation and recording would not be reached. Security is thus achieved through complete transparency, and distributed ledgers have no need for any centralised record-keeping, nor for any third party intermediary to verify the integrity of transactions. Continue reading "Computer Code as Law: A New Frontier?"
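For readers who want a concrete sense of the mechanics described above, the core ideas — an append-only record in which each block commits to its predecessor, plus a shared balance check that blocks double spending — can be sketched in a few lines of code. This is a deliberately simplified illustration, not any real cybercurrency protocol: the `Ledger` class, its `credit` issuance method, and the single-node design are all hypothetical teaching devices, and real systems add peer-to-peer consensus, signatures, and proof-of-work on top.

```python
import hashlib
import json


def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents (JSON with sorted keys)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


class Ledger:
    """Toy append-only ledger: full history from the genesis block onward,
    with each new block committing to the hash of the previous one."""

    def __init__(self):
        genesis = {"prev": None, "tx": None}  # the "Genesis Block"
        self.chain = [genesis]
        self.balances = {}

    def credit(self, owner: str, amount: int) -> None:
        # Simplified issuance, so there is something to spend.
        self.balances[owner] = self.balances.get(owner, 0) + amount

    def append(self, sender: str, receiver: str, amount: int) -> bool:
        # Double-spend check: the shared record shows whether the funds exist.
        if self.balances.get(sender, 0) < amount:
            return False
        block = {
            "prev": block_hash(self.chain[-1]),  # commitment to prior history
            "tx": {"from": sender, "to": receiver, "amount": amount},
        }
        self.chain.append(block)
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True


ledger = Ledger()
ledger.credit("alice", 10)
print(ledger.append("alice", "bob", 10))    # True: the funds exist
print(ledger.append("alice", "carol", 10))  # False: the same coins cannot be spent twice
```

The second transaction fails for exactly the reason the text gives: once the transfer to "bob" is on the shared record, every participant can see that the coins are gone, so an attempted re-spend never achieves the consensus needed for validation.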
Chaz Arnett, From Decarceration to E-Carceration, 41 Cardozo L. Rev. ___ (forthcoming, 2019), available at SSRN
Almost six months ago, best-selling author and legal scholar Michelle Alexander wrote for the New York Times, in reference to electronic monitoring devices used in the criminal process: “If the goal is to end mass incarceration and mass criminalization, digital prisons are not the answer.” But why not? States are increasingly considering alternatives to incarceration, including electronic monitoring, as a means to reduce the economic and social pressures of the phenomenon of mass incarceration. The notable and bipartisan First Step Act passed by Congress in December 2018 encourages further use of electronic monitoring devices in the federal system. Why not embrace this ever-improving technology to reduce the deleterious effects of this phenomenon? Indeed, many Americans believe electronic monitoring can and should be a part of the solution.
Chaz Arnett’s powerful article, From Decarceration to E-Carceration, forthcoming in the Cardozo Law Review, argues to the contrary. He asserts that the expansion of electronic monitoring devices in community corrections threatens to entrench the most deleterious effects of mass incarceration (its operation as a mechanism of social stratification and racialized marginalization) without reducing the expanding footprint of the carceral state. Because his novel contribution reframes how we engage with the introduction of technologies as criminal justice reform, this is a must-read piece for those interested in resolving the problems of mass incarceration in the United States. Continue reading "Is E-Carceration a Problem? Confronting the Shortcomings of Technological Criminal Justice Reforms"