Abstracts & Bios
Hall of Graduate Studies, Faculty Lounge
ABSTRACT: Rationally Choosing to Act Corruptly: The Role of Uncertainty
This paper examines the complex, dynamic nature and causes of various forms of corruption by focusing on the conditions and incentives that either encourage or deter individuals from engaging in such illicit behaviors. Expanding on Becker’s rational choice model of criminal behavior, it explores how different forms and levels of uncertainty affect how a decision maker assesses the following four core variables when contemplating corrupt behavior: (1) the value of the potential windfalls of corruption and (2) the likelihood of securing such advantages successfully (the expected benefits), compared with (3) the magnitude of possible penalties or sanctions and (4) the probability of their imposition (the expected costs).
The certainty with which the expected benefits of corruption can be assessed depends considerably on the form of corruption pursued. In particular, corrupt transactions that require the participation of two or more parties (e.g., bribery, extortion, collusion) will generally create more uncertainty about both the expected size and probability of benefit than illicit diversions pursued by a solo actor (e.g., fraud, embezzlement). With regard to the magnitude of the benefit, in negotiations, one or more parties may conceal relevant information concerning the true value of the exchanged item or privilege. More significantly, because corrupt transactions require agreement and coordination between players, they are subject to the same constraints as other self-interested, voluntary exchanges, including the threat of default. Agreements to behave illicitly are not legally enforceable, so, in the absence of extralegal security and assurance mechanisms, if one party reneges or performs imperfectly, the probability that the other will receive the desired windfall is correspondingly reduced. The paper analyzes how parties may seek to reduce the probability of defection by increasing the frequency and duration of their exchanges and investigates how the pursuit of such strategies may create or perpetuate endemic, systemic corruption within certain environments.
On the cost side of the equation, individuals deciding whether to act corruptly will consider potential formal and informal sanctions. The paper examines how the specificity with which formal sanctions are articulated ex ante affects individual decision making, as well as how knowledge and the content of community norms influence how a potentially corrupt actor assesses the magnitude of possible informal social sanctions (e.g., peer disapproval, shaming, stigmatization, ostracism, damage to future interests). It also considers how opportunity and transaction costs affect the effective expected utility of the contemplated activity. The paper then explores the challenges associated with evaluating the likelihood of legal and other penalties, focusing on the difficulties of assessing the probability of detection when contemplating a clandestine crime like corruption in which the total number of actual offenders is unquantifiable. The paper also examines how an individual’s likelihood of sanction may be affected by her identity, position, or the incidence of corruption in her operating environment, and it considers ways in which the uncertainty associated with punishment may encourage accessory offenses such as records falsification, bribery of law enforcement or judicial officials, or money laundering.
Lindsey Carson is an SJD candidate at the University of Toronto where her dissertation examines the unintended consequences of laws against foreign bribery on economic and political institutions in developing countries. She holds a JD from the University of Pennsylvania and an MSc in Development Studies from the London School of Economics. Her research interests include law and development, international economic law, international business transactions, trade and foreign investment, criminal law, and white-collar crime.
ABSTRACT: The Implications of Heuristics and Biases Research on Moral and Legal Responsibility
Do the findings of the heuristics and biases tradition of cognitive psychology have implications for moral and legal responsibility?
Research in the field of cognitive psychology has demonstrated that human reasoning processes are often non-normative; that people exhibit “bounded” rationality. Specifically, such research has shown that human reasoning processes often rely upon inaccurate rules of thumb, or heuristics, and are subject to a wide variety of identifiable biases, often as a result of reliance upon such heuristics. These, in turn, lead to errors in judgment, which may impact human behavior. This paper shall argue that this impact cannot be ignored when dealing with questions of moral and legal responsibility. It will argue that when heuristic reasoning has a decisive impact upon human judgment and decision-making, this fact can ultimately negate moral culpability. Further, it will argue that where legal responsibility is predicated upon the culpability of the agent, such absence of culpability is reason to refrain from imposing legal liability as well. These arguments will be explored in the case of “judgment under uncertainty” – situations in which a person is required to estimate probabilities and evaluate future risks. Research has shown that in situations of uncertainty, heuristics and biases can cause the gross underestimation of risks involved in potential activities. It will be argued that where agents act based on such erroneous assessments, they are not morally culpable for their actions; and that the negation of moral culpability warrants the revisiting of the question of certain forms of legal responsibility. Specifically, the paper will argue that in light of heuristics and biases research, the current reasonable person standard employed in negligence offences is unjustifiable and must be replaced with a standard of responsibility that better reflects individual culpability.
The paper will further explore the implications of bias research for ex-ante regulation, considering the systematic and predictable nature of heuristic error.
Leora Dahan Katz is a J.S.D. candidate at the Yale Law School and a fellow at the Yale Center for Law and Philosophy. She is also a graduate student at the Hebrew University of Jerusalem Philosophy Department. She holds an LL.M. from the Yale Law School, and LL.M., LL.B. and B.A. (literature) degrees from the Hebrew University of Jerusalem. Her doctoral dissertation addresses the differential treatment of risk and harm in criminal law.
Sterling Memorial Library, International Room
ABSTRACT: Whose Constitution is it Anyway?: Interrogating the ‘Nimietic’ Constitutional Commitments in India
Uncertainty is often seen as both the core problem of law and an opportunity for it. While some see uncertainty as a threat to our notions of the Rule of Law, others see in it the possibility of popular contestation of law leading to its legitimation. Robert Cover in Nomos and Narrative invited us to reframe this conception of uncertainty. He understood the judicial role as arising not from “unclear” law, but from “too much” law. Law, especially constitutional law, evolves not from the cracks in the text of the constitution, but from the multiplicity of constitutional narratives and “nomoi”. The core of law is not the problem or potential of indeterminacy, but the surplus and nimiety of legal imaginaries. When citizens, through ‘redemptive’ social movements, contest the meaning of the constitution, they do so not by presuming the constitution’s vagueness, but from the perspective of their commitment to their constitutional interpretation. In this paper, I interrogate such nimietic constitutional commitments in the context of the currently raging debate on affirmative action for the Muslim religious minority in India. This debate has witnessed conflicting interpretations of equal and secular constitutional citizenship in India, and it has not been exhausted by legal battles in the judicial space, extending instead into the political and social spaces. Thus, the constitution is not merely a document for settling interpretive disagreements, but has become the site of defining the identity of India’s polity.
I track the text and context of three petitions: the challenge against the affirmative action policies for Muslims by a Hindu right-wing organization, Rashtriya Mukti Morcha (‘National Emancipation Front’); the defense of the policy by Muslim socio-religious organizations like the Jamiat-Ulema-e-Hind (‘Organization of Muslim Clerics of India’); and the petition in a separate case filed by low caste Muslim and Christian organizations for the recognition of their communities for special treatment by the state. Reading these three texts against the background of the emerging constitutional politics, of both the street and institutional variety, I flesh out how they construct distinct constitutional narratives. Further, I argue that these three distinct constitutional narratives, by sharing the same public space, have influenced each other (a phenomenon which Cover might have overlooked), leading to a process of constitutional learning. Within this competition and interaction of constitutional ‘alterities’ and visions, I locate the notion of shared yet distinct commitment to constitutional values. Thus, moving towards seeing nimiety, rather than uncertainty, as an artifact of law gives us a vocabulary for understanding shared constitutional commitment despite deep disagreements about law’s meaning.
Mohsin Bhat is currently a JSD candidate at Yale Law School. He is a graduate of NALSAR University of Law (India), and before joining YLS as an LL.M. candidate in 2010, he clerked at the Indian Supreme Court. His interests include constitutional law, law and religion, and law and society. In his doctoral research, he is studying the religious minority Muslim social movements in India and their claims for affirmative action. He is exploring the interaction of legal meaning and politics, rights consciousness and creation of constitutional cultures.
ABSTRACT: Cultural Experts and the Culture of Courts
When judges give meaning to the cases in front of them, they use not only legal knowledge, but also knowledge about the world. The source of the judges’ knowledge about the world is “common sense”, the intangible cultural system that contains people's informal knowledge about the world from their social group's point of view. Insofar as the judges' interpretation of the world is limited to their social group's interpretation, what ought they to do when the cases in front of them involve persons who do not share their group's interpretation and knowledge about the world?
In the US, judges frequently rely upon testimony or opinions from cultural experts (e.g. anthropologists or sociologists), and the general practice of using cultural experts in courts is well developed and regulated, even if adherence to these experts' findings is inconsistent among different courts. In contrast to the US, Israel has no formal method for introducing cultural content from experts into legal proceedings, and there are only sporadic examples of opinions or testimony of cultural experts in the Israeli legal system, despite its diverse and divided cultural nature.
Keeping the opposing examples of the US and Israel in mind, the article will discuss the following question: Is it possible for judges to eliminate the inherent cultural element that dominates their rulings and clouds their objectivity in culture-related cases by deferring to cultural experts, and what are the dangers in adopting this practice?
Masua is a Ph.D. Candidate at the Zvi Meitar Center for Advanced Legal Studies. She acquired her LL.B. (Magna Cum Laude) in Law and Political Science from Bar Ilan University (2009), and her LL.M. (With Honors) from Columbia University School of Law, N.Y. (2011), where she also obtained the Parker School of Foreign and Comparative Law Certificate (2011). Her areas of research are law and culture, religious feminism, constitutional law and family law.
ABSTRACT: The Making of National Law in the Uncertainty of Civil War: A Reassessment of the Senate’s Crafting of the Conscription Act of 1863
In the opening months of 1863, as a civil war that marked one of the most uncertain moments in American history raged across the country, those tasked with crafting laws for the nation wondered aloud what would be left of their beloved Republic when the last bullet finally sounded on the battlefield. Two years earlier, eleven states had proclaimed their departure from the United States. A month before, twelve hundred soldiers had died in a single battle trying to bring them back. Now, in letters and newspapers and dinner conversations, commentators predicted the imminent departure of additional states and questioned the Union’s ability to defeat those who had first set the nation’s borders adrift.
According to conventional wisdom, lawmakers in Washington D.C. responded to this unprecedented level of uncertainty by casting aside the Constitution’s logic of enumerated powers. Focusing on the text of federal statutes while glossing over the words and deeds of the lawmakers, scholars have concluded that older doctrines of states’ rights ceased to matter to the northern lawmakers who assembled in the nation’s council halls. War’s terrible uncertainty, this most recent scholarship suggests, swept aside inherited understandings of states’ rights and gave way to a new constitutional order of broad federal power that brought freedom to America.
A return to the evidence, however, reveals a far more nuanced response by northern lawmakers. Drawing on previously overlooked congressional debates, correspondence, and speeches, this paper examines how senators tasked with representing a state defined their constitutional obligations as they embarked on one of the war’s most significant legislative feats: the creation of a national military that encompassed black and white soldiers alike.
An analysis of the debates that culminated in this legislation, known as the 1863 Conscription Act, suggests that in the face of the uncertainties unleashed by eleven secessions and subsequent military defeats, northern senators relied upon the familiar constitutional doctrines of interstate comity, states’ rights, and state equality to accomplish their goals. At a time when the nation had ripped apart at the seams, these familiar doctrines provided anxious senators with a means of fulfilling their duties of representation, placating their opposition, and mitigating the risk of a feared revival of the slave power’s control of the federal government. In the end, this senatorial deference to the rights of the northern states as co-equal members of the Union helped bring freedom to the southern states, while ensuring that the struggle for civil rights remained a century and a half more in the making.
Maeve Herbert Glass is a Ph.D. candidate in history at Princeton University. She received her BA in history from Yale and a JD from Columbia Law School. Maeve studies the political and constitutional history of the United States, with a focus on the Civil War era.
ABSTRACT: The Court of Chancery’s equity jurisdiction: introducing uncertainty into the common law?
“Equity is a roguish thing: for law we have a measure, know what to trust to; equity is according to the conscience of him that is Chancellor, and as that is larger or narrower, so is equity. ‘Tis all one as if they should make the standard for the measure we call a foot, a Chancellor’s foot; what an uncertain measure would this be? One Chancellor has a long foot, another a short foot, a third an indifferent foot: ‘tis the same thing in a Chancellor’s conscience.” - John Selden (1584–1654)
Selden’s quote owes its longevity both to its simplicity and to the vividness of its imagery. What it signifies is that for 16th century common lawyers, uncertainty in law was indeed ‘a roguish thing’ – an evil best avoided.
Nevertheless, contemporaries frequently qualified the virtues of the ideal of legal certainty. St. German’s early justification for the supplementary jurisdiction of the Court of Chancery, for instance, emphasised that the strict application of certain laws needed to be complemented by a concern for justice in each individual case. In modern jurisprudence, the striving for certainty was challenged even more fundamentally. Pound identified ‘scientific’ certainty with excessive formalism, which was in turn characterised as inherently conservative and predisposed against progressive developments in law.
Whereas the virtue of certainty has thus been called into question, uncertainty has mostly retained the negative connotation exemplified by Selden’s quote. The progressive critique of certainty in law, however, already sets the agenda for a re-evaluation of uncertainty: Drawing on the circumstances of the rise of the English Court of Chancery between the 14th and the 16th century, I will address the question of whether uncertainty as a temporary phenomenon can be an effective remedy against excessive formalism and serve as a catalyst for legal evolution.
After a rapid extension in both scope and doctrine in its first centuries, the English common law had reached a stage of procedural and remedial rigidity by the end of the 13th century. Its perceived certainty came at the cost of excluding a relevant share of societal conflicts from solution in the royal courts.
The rise of the Court of Chancery from the late 14th century onwards is best understood as a reaction against these developments. The court answered to the perceived rigidity of the common law by offering a more flexible procedure and a greater variety of remedies. In contrast to the common law, it allowed the examination of parties and witnesses under oath in order to receive a fuller picture of the underlying factual situation. This openness further facilitated the recognition of new legal instruments like the trust or the oral contract. Its judgments were then enforced by a variety of personal and monetary sanctions at the discretion of the court. The remarkable rise of the Court of Chancery suggests that there was popular demand for its procedures and remedies that the common law had failed to accommodate.
How, then, could these new procedures and remedies develop? Judgments from the formative years of the court have not been preserved. On the demand side, however, parties apparently opted for a trial-and-error approach. Petitions from the early years of the court provide ample evidence that plaintiffs consciously risked testing the limits of its growing jurisdiction by advancing a broad variety of claims. The petitions suggest reliance on the uncertainty of rules and remedies in the early Court of Chancery. It was only through its openness to actually adjudicate these contingent cases that the Court of Chancery could then develop its procedures and remedies. Uncertainty thus can be identified as an important factor in the rise of the court.
Against this historical background, I will consider why and under what conditions uncertainty might become a positive force in legal development.
David is a Ph.D student at the Max-Planck-Institute for European Legal History in Frankfurt/Main, Germany. His research focuses on the early modern English Court of Chancery. David received an LL.B degree from University College London in 2009 and graduated with a German law degree from Cologne University in 2011. In 2013, he was a Visiting Researcher at Yale Law School.
ABSTRACT: Loopholes, Equity and Morality
How does law deal with uncertainty about itself? Due to the limits of language and human foresight, legal loopholes are unavoidable in any legal system. It is, however, difficult to know how to regulate such loopholes - when they should be closed and when they should be left open? Should the decision be left to judicial discretion, as in a Hartian model? Should all loopholes be legitimate, as in a Holmesian paradigm? Can Dworkin’s Herculean judge provide direction by discovering the very best answer, consistent with legal values and principles in each and every case?
This paper explores the parallel development of two ancient legal systems grappling with these very questions. In Roman Palestine, during the third and fourth centuries of the Common Era, Jewish legal scholars debated a phenomenon known as ha‘arama (lit., cunning), clever legal dodges circumventing laws in various arenas. The most extensive discussions are found in the Palestinian Talmud, a corpus which betrays substantial Roman influence. Indeed, the Palestinian rabbis’ attempts to systematize their approach to such uncertainty came on the heels of the very same initiative among Roman jurists. The first and second centuries of the Common Era witnessed a sea change in Roman jurisprudence. Whereas earlier, a rigid and formalistic interpretatio of the Twelve Tables had ruled the day, Cicero’s famed statement of summum ius summa iniuria (the utmost law, the utmost injustice) ushered in a time when the spirit of the law, and not simply its letter, was recognized as significant. This came in the form of closing loopholes, known as fraus legi, or legal fraud. Ultimately it was the second century jurist Julian who was best known for leadership in this area.
Upon analysis, the Palestinian rabbis and the Roman jurists have much in common in dealing with the uncertainty that loopholes pose. Both emphasize the importance of the law’s own values in determining the response to a given legal dodge. In Roman law, the concept of “equity,” defined as what the lawgiver would have said had s/he been present, becomes the decisive factor. Which consequences would the lawgiver have wanted and which would s/he have balked at? In the rabbinic material, the discussion is likewise about the ultimate goal of the loophole, whether it is in line with the values of the law or not. Significantly, neither the jurists nor the rabbis leave the decision up to judicial discretion, as positivists would have it, or permit unlimited legal activism, as legal realists would envision. Instead, these ancient legal scholars betray the roots of Dworkinian natural law theory, suggesting that the legal system as a whole has principles and values which are just as important as the written letter itself.
Elana Stein is a doctoral candidate in Religion at Columbia University, where she received a BA in History and Masters in Religion. Her dissertation, which she has recently submitted, is entitled, "Rabbinic Legal Loopholes: Formalism, Equity and Subjectivity," and its focus is the evolution of rabbinic thinking about loopholes in both the Palestinian and Babylonian Talmuds. Elana is a graduate of the Center for Jewish Law and Contemporary Civilization Fellowship at Cardozo Law School, where she began to apply legal theory to her study of rabbinic law. She is also an alumna of the Graduate Program in Advanced Talmudic Studies at Yeshiva University and has served as the Community Scholar at Lincoln Square Synagogue in Manhattan for the past five and a half years. She lives on the Upper West Side of Manhattan with her husband Yonah and son Azzan.
Sterling Memorial Library, International Room
ABSTRACT: Enhancing Consumer Decision-Making by Improving Disclosure Regulation
Most of the 92.4 million Americans who invest in mutual funds lack the financial knowledge to cope adequately as consumers with the complexity of these investments, reducing their ability to sufficiently support themselves during retirement. At the core of this complexity is the difficulty of understanding the relative importance of information that has an uncertain impact on future returns, such as a fund’s past performance, and information with a clear future impact, such as fund fees.
The concern over fund investor protection has been recognized to some degree by the SEC, which in 2008 adopted a simplified disclosure regime requiring funds to provide a summary prospectus containing key information in plain English and in a standardized form. Following the SEC’s reform, Beshears et al. conducted an experiment to estimate the effect of the simplified disclosure, concluding that the summary prospectus did not alter investment decisions.
The failure of the summary prospectus to improve consumer decision-making reveals two problematic aspects of the process by which the SEC adopts regulation. First, the regulation was not tested before being implemented. Empirical testing prior to adoption is particularly important when regulation is meant to improve an aspect of consumer behavior, which cannot easily be predicted.
Second, the reform is problematic as it is not based on an adequate theory of consumer behavior but rather on the vague notion that people are not perfectly rational. The SEC could have developed a theory of consumer behavior by bridging the gap between the research on consumer behavior of psychologists and economists on the one hand, and the legal literature on mutual fund regulation on the other. Building on the existing social science literature I propose a theory of consumer behavior that can be used as a basis for identifying the causes of regulation failure and how to effectively address it. Such a theory identifies the shortcomings of the SEC’s simplified disclosure, namely that it maintains the salience of past performance and that it still requires consumers to make complex calculations.
To illuminate the problems with the SEC’s simplified disclosure I discuss the parallel reform of the European Commission (EC). In 2010 the EC adopted the Key Information for Investors Document (KIID) requirement. Prior to the adoption of the KIID, the EC conducted extensive empirical testing leading to requirements that differ from the SEC summary prospectus.
The failure of the SEC to improve consumer welfare in the case of simplified disclosure can be understood as resulting from an insufficient welfare analysis framework for consumer protection policy-making. Regulation of funds that enhances consumer protection and decision-making depends largely on the strength of the theories used by regulators in identifying problems and justifying regulation.
In the past, cost-benefit analysis has not been central in SEC rulemaking. This is likely to change as the SEC faces pressure to better justify its policies. The criticism of the simplified disclosure, as well as the theory of consumer behavior I propose, is a contribution toward filling this gap, pointing toward a more robust and comprehensive SEC welfare analysis.
Talia received her LLB in Law and Economics from the Hebrew University and a BCL from Oxford University where she was a Weidenfeld Scholar. She is currently an SJD candidate at Harvard University focusing on regulation of financial institutions.
ABSTRACT: From Bungee Cords to Safety Nets: Evaluating and Rethinking Status Maintenance in a Climate of Uncertainty
Developed democracies spend billions of dollars on status maintenance programs, such as Social Security, unemployment insurance, and mortgage assistance, that shield individuals from the disruptive effects of economic uncertainty. These status-maintaining programs often seem indistinguishable in public discourse from universalist or social adequacy programs like Medicare, food stamps, and universal public education. Yet, in light of transformative changes in labor markets and increased budgetary pressures, status maintenance and social adequacy often end up competing for the same, shrinking pool of resources.
My paper draws on both legal and philosophical sources to argue for a revised—though not entirely dismissive—stance toward status maintenance as a response to economic uncertainty. In Part I, I identify several programs, with a focus on United States legislation and case law, that share a common purpose—to maintain, completely or partially, individuals’ social and economic status. These include home mortgage assistance; Social Security; unemployment insurance; trade adjustment assistance; asset and income exemptions in bankruptcy law; and educational support requirements in family law. Status maintenance programs and entitlements help those who were doing well continue to do well but often provide less help to those who have not climbed far up the economic ladder.
In Part II, I classify, reconstruct, and evaluate what I take to be the best or most prominent normative arguments in favor of status maintenance. The arguments I evaluate claim that status maintenance programs are justified because they:
(a) replicate private insurance;
(b) protect people’s ability to form and carry out life plans;
(c) respond to the psychological propensity to avoid losses;
(d) protect dependents and children from disruptive change;
(e) help promote social solidarity;
(f) ensure the continued existence of a middle class; and/or
(g) promote productivity and so increase revenue for other programs.
In Part III, I consider how legal and policy innovations could protect those aspects of social and economic status that genuinely deserve protection, while freeing more resources for other social goals such as upward social mobility for the worst-off. I focus on four types of proposals:
(a) Those that unbundle status by tailoring status maintenance more narrowly to normatively compelling interests.
(b) Those that cap status maintenance and so differentiate middle-class status maintenance from the maintenance of wealthy individuals’ status.
(c) Those that smooth status transitions, decreasing the steepness and irregularity of downward mobility rather than trying to prevent it altogether.
(d) Those that encourage individuals to pursue forms of self-sustaining status that do not require expensive support.
Such policy innovations include German-style unemployment insurance that aims to keep workers employed at the same companies but at lower wages; “right to rent” programs that allow homeowners facing foreclosure to rent their houses from mortgage lenders; caps on tax exemptions for wealthy individuals’ retirement accounts; a revised version of Jacob Hacker’s Economic Security Index; and limitations on consumer credit. These innovations can free resources for the promotion of upward mobility and social adequacy while continuing to protect individuals against the increased uncertainty of employment and housing markets.
Govind received a J.D. in 2013 with pro bono distinction from Stanford Law School, where he was a Student Fellow at the Stanford Center for Law and Biosciences, received the Steven M. Block Civil Liberties Award, and was a member of the Stanford Law Review. He is also a Ph.D. candidate in Philosophy at Stanford, where he is working on a dissertation project on socioeconomic mobility and theories of justice, supported by a Richard & Dixie Grossman Stanford Interdisciplinary Graduate Fellowship. In 2014, he will begin a clerkship with the Hon. Carlos F. Lucero, U.S. Court of Appeals for the 10th Circuit, in Denver, CO.
ABSTRACT: Potential Harms, Probable Opportunism: A Balanced Solution for Contingent Creditors of Corporations
Millions of shipbuilders working in proximity to asbestos, women prescribed DES medication, and children sleeping in lead-paint-coated rooms have suffered injuries long after their contact with the injurious products. When an element of uncertainty exists regarding the likelihood of an injury arising in the future from prior exposure to a defective product, the victims often have neither the status nor the power to protect their interests or seek redress for the harm caused to them. Instead, with relative impunity, product manufacturing firms rife with contingent liability can behave opportunistically at the potential expense of these involuntary claimants, with the firm standing to benefit from profits while the burden of losses is shifted onto contingent claimants. This paper offers an innovative framework to remedy the problematic effect uncertainty has on product liability victims’ ability to obtain legitimate compensation.
Contingent creditors are individuals or parties who are potentially owed a debt or obligation, depending on the outcome of a future event. By definition, in order to qualify as a contingent creditor, some degree of uncertainty that can only be resolved by a future event must exist regarding the probability or scope of the liability. Contingent creditors’ identities commonly remain unknown for a significant period of time, and they often become creditors involuntarily. Frequently, they possess neither the ability to protect their interests via ex-ante negotiated security interests or other concessions, nor the ability to induce receivership or bankruptcy until it is too late. Involuntary contingent creditors in general, and “long-tail” contingent tort creditors in particular, thus prove inadequately protected and uniquely vulnerable, and their ability to attain legitimate compensation is highly uncertain. In reality, with varying degrees of likelihood, any person coming into contact with any product at any time unknowingly and unwillingly represents a potential involuntary contingent tort claimant.
This proposal focuses on the two primary scenarios in which corporations may behave opportunistically at contingent creditors’ expense – asset sales by solvent firms and excessive risk taking by insolvent ones. Despite this problem’s prevalence, no existing mechanism alleviates these uncertainties in an efficient and fair manner. In this paper I introduce a remedy that combines a capped-insurance framework with a requirement that asset-selling firms dissolve and distribute profits. This mechanism successfully “solves” the uncertainties and externalities caused by solvent firms engaging in asset sales. The potential for excessive risk taking by insolvent firms, however, seemingly remains. The paper therefore puts forth a second proposition, explaining why, under particularized scenarios of opportunism in insolvency, contingent creditors should be granted a right to bring direct breach-of-fiduciary-duty suits against directors. Directors, in turn, would be able to avoid the uncertainty of potential personal liability by requesting the appointment of a court-approved contingent creditor examiner upon the firm’s entrance into insolvency.
By eliminating uncertainty for transacting corporations, directors of insolvent firms, and contingent creditors, this comprehensive framework proves both efficient and fair, providing better protection and less uncertainty for future victims of faulty products.
Born and raised in Los Angeles, CA, Jeremy emigrated to Israel in 2000. After completing military service, Jeremy spent four years at Bar-Ilan University in Ramat Gan, Israel, earning an LLB, LLM and MBA, and serving as a senior editor of "Mechkaray Mishpat" (the Bar-Ilan Law Review). Following a year working in the international corporate division at a leading Israeli law firm, Jeremy commenced studies toward a PhD in law at Tel Aviv University's Zvi Meitar Center for Advanced Legal Studies. During 2011 Jeremy was appointed a Visiting Fellow at the Program on Corporate Governance at Harvard Law School, and the following year a Visiting Researcher, also at Harvard Law School. During the summer of 2012, Jeremy explored questions in legal ethics in Poland and Germany as a member of FASPE (Fellowships at Auschwitz for the Study of Professional Ethics). Jeremy's primary research interests are corporate governance and law & technology.
Room 129 (Yale Law School)
ABSTRACT: On the Instrumental Value of Vagueness in the Law
It is natural to think that law ought not to be vague. After all, law is supposed to guide conduct and vague law seems poorly suited to do that. Contrary to this common impression, however, a number of authors have argued that vagueness in the law is sometimes a good thing, because it is – in one way or another – a means to achieving certain valuable legislative ends. Timothy Endicott argues, for example, that vagueness in the law is valuable in virtue of being a necessary means to adequately regulating certain forms of conduct, while Jeremy Waldron argues that it is valuable because it is a facilitating – or partial – means to invoking people’s capacity for practical deliberation in a special way. On views like these, then, vagueness in the law is – or at least can be – instrumentally valuable.
In this paper, I want to point out what I take to be a rather common mistake underlying the attribution of instrumental value to vagueness in the law. I argue that many authors – including Endicott and Waldron – wrongly associate vagueness with instrumental roles that are really played by a closely related semantic phenomenon – what I call incommensurate multidimensionality. Incommensurate multidimensionality entails vagueness, so it is perhaps unsurprising that the former is sometimes mistaken for the latter. Such a mistake, however, has significant consequences when it comes to the proper attribution of instrumental value, because value only “transmits” from ends to means, but not to necessary consequences of those means.
I begin by explaining very briefly what kind of vagueness is supposed to be a valuable feature of law and how incommensurate multidimensionality entails such vagueness. Next, I examine the arguments made by Endicott and Waldron, arguing that their key premises are based on a mistake – incommensurate multidimensionality, rather than vagueness, facilitates the relevant legal ends. I devote most of my discussion to Endicott’s argument and then go on to show how my critique carries over to Waldron’s. After that, I consider what seems like the most natural general response to my critique and argue that there is a general reason why this strategy fails. Lastly, I note that – independently of any considerations of means and ends – standard deontic logic (SDL) validates a principle that, given my concession that incommensurate multidimensionality is good, seems to force on me the conclusion that vagueness is good, too. I mention in brief a number of ways to avoid this conclusion, in order to acknowledge that – and illustrate how – in making claims about how to reason with value claims, one incurs significant commitments in the logic of value.
Hrafn Asgeirsson completed his Ph.D. in philosophy at the University of Southern California in May 2012. He is currently a part-time lecturer at the University of Iceland (Philosophy) and at Bifrost University (Law). Prior to that, he was a Postdoctoral Research Fellow at Monash University, Faculty of Law, where he participated in the ARC Discovery Project “A Principled Theory of Legal Interpretation.” His main interests lie in philosophy of law, philosophy of language, and metaethics.
ABSTRACT: The Constitutive Contingency of Uncertainty in the Construction of Contemporary Jurisprudence
Through the affirmation of individual subjectivity and autonomy, the progressive enforcement of individual freedom, and the consolidation of the constitutional State, among other developments, the modern era created the conditions for science to ground itself in a rationally critical and pluralist intersubjectivity. In this process, truth becomes a relational and intersubjective construction in cultural space and time. Gradually, the ideas of finitude, contingency, indeterminacy and incompleteness are integrated into the process of self-criticism of the so-called hard sciences. With Bachelard and Kuhn, historicity and social practices become part of scientific elaboration. Scientific knowledge is understood as a permanent, approximate and fallible construction, and concepts such as objectivity, truth and correctness are considered relations in permanent construction. Uncertainty becomes a constitutive contingency of scientific knowledge and even a propelling element of research.
In the Legal Philosophy produced especially since Kant, there have been diverse attempts to rationally justify the existence of juridical orders on the basis of freedom and autonomy. This implies responsibility for, and participation in, the rational process of decision-making in society. However, as science shows us, we have only approximate and fallible knowledge of the themes on which we ought to deliberate. The task is then to (attempt to) fix provisional beliefs intersubjectively, to be shared as long as their premises can be sustained, which necessarily involves uncertainty and the risk we take when we face it.
But Law seems to have a certain difficulty in considering itself a discursive, experiential process, and consequently in situating itself in the present context of scientific discussion; it still behaves much more like a formal science, relating concepts to one another and applying them syllogistically. It is unable to deal with uncertainty as constitutive of its own scientificity. In this sense, it has not yet fully consolidated itself as an applied social science.
As for us, citizens and holders of the State of Law, we have not taken Law into our own hands in the sense of exercising autonomy, partly out of fear of uncertainty. Consequently, we delegate the function of dealing with it to institutions that assume a paternalistic, heteronomous role in regulating various spheres of our lives. But fear is not a rational means of dealing with uncertainty. The difficulty some still find in dissolving the alleged duality of Law versus uncertainty makes explicit the presence of a pre-modern element in Jurisprudence that has not yet been overcome. For Law structures itself not on an order of reality different from the one we live in, but upon the very facts of our lifeworld. The advantage of such a conception of Law is that it deals with autonomy in a detranscendentalized and at the same time social, consensual normative sense.
The paper explores in more detail uncertainty’s character as a constitutive contingency in the permanent construction of contemporary Jurisprudence, as well as its effects on the citizens’ process of autonomization.
ABSTRACT: The Permanent Emergency in Environmental Law
This paper is part of a broader project that argues that environmental issues confront us as a permanent emergency, such that our understanding of how the rule of law functions in emergency times can inform how the rule of law functions in the environmental context. The paper focuses on the analogy between emergencies and environmental issues. It argues that the analogy arises from the combination of epistemic frailty and existential threats, which challenge the commitment to govern through the rule of law.
I first examine the nature of uncertainty faced in environmental law. I argue that uncertainty is not a monolithic concept, but rather, that environmental issues are layered with different kinds of uncertainty. I focus on two kinds of uncertainty: (1) indeterminacy, that which is fundamentally unpredictable due to the intricate networks of complex systems, and (2) ignorance, the ‘unknown unknowns’. Neither indeterminacy nor ignorance can be resolved with more or better science. Therefore, environmental issues characterized by indeterminacy and ignorance are like emergencies in that they challenge our ability to govern through law in the face of irresolvable uncertainty.
The problem, however, is that the dominant decision-making paradigms of environmental law treat permanent uncertainty as temporary and exceptional — gaps in scientific knowledge that will eventually be filled. I argue that cost-benefit analysis and the precautionary principle have, thus far, failed to confront the more fundamental problem of how to make decisions in the face of permanent uncertainty. I therefore turn to the emergency context as a starting point for how the rule of law can confront permanent uncertainty head-on.
I have previously argued that emergencies cannot be treated as temporary and exceptional, because there is nothing, conceptually speaking, that prevents a temporary and exceptional emergency regime from normalizing and degrading fundamental rule of law principles. Therefore, we must have a conception of the rule of law that can function in all times. Here I refer to a common law conception of the rule of law, which, at its core, requires that all public exercises of power be justified. This paper builds on the requirement of justification and queries what adequate justification would look like in environmental law where we face permanent uncertainty. I examine two paradigms of risk decision-making set out by Elizabeth Fisher: a rational-instrumental model and a deliberative-constitutive model. Unlike Fisher, however, I argue that only the deliberative-constitutive model has the potential to respond to permanent uncertainty and fulfill the requirement of adequate justification. By building on common law constitutionalism and Fisher’s deliberative-constitutive paradigm, I map how public officials can make environmental decisions that take permanent uncertainty seriously and fulfill a substantive conception of the rule of law.
Jocelyn Stacey is a Doctor of Civil Law Candidate at the McGill University Faculty of Law. She holds a Joseph-Armand Bombardier Canadian Graduate Scholarship and a McGill Doctoral Teaching Fellowship. Jocelyn completed her LLM at Yale Law School in 2011. She also has a LLB from the University of Calgary and a Bachelor of Science from the University of Alberta. In 2009-2010, she clerked for the Honourable Justice Marshall Rothstein at the Supreme Court of Canada.
The working title of her doctoral thesis is "Permanent Emergency: Rethinking Environment and the Rule of Law" and her areas of interest are environmental, administrative and constitutional law, common law methodology, and legal theory.
Sterling Memorial Library, International Room
ABSTRACT: Time-Shifted Morality: A Critique of the Legal Discourse on Online Copyright Infringement
The article critically examines the discourse against the widespread practice of unauthorized sharing of copyrighted content on the Internet. Legal discourse condemning this behavior and trying to persuade Internet users of its moral unacceptability relies on a rhetoric that lacks resonance and credibility: the “download as theft” rhetoric. The theft metaphor has been widely used in case law and in legislation.
It is argued that this reliance is explained by an indifference to the fact that the deeply embedded norm against theft that we hold is maladaptive in the contemporary technological predicament. To explicate this, the article uses the concept of “time-shifted morality”. The article’s hope is to serve as a stimulus for scholars and legal decision-makers to reconsider the rhetoric used in writing and arguing about online copyright infringement.
Tito Rendas is a Ph.D. candidate at Católica Global School of Law, in Lisbon, Portugal, where he also works as a teaching assistant. His fields of interest are intellectual property law – online copyright law in particular – and the intersection between law and the mind sciences. He recently received an LL.M. from Harvard Law School (2012), where he contributed to the Harvard International Law Journal and was a member of the board of the Harvard European Law Association. Prior to that, he received his law degree from the School of Law of the Catholic University of Portugal (2010) and an LL.M. from Católica Global School of Law (2011).
ABSTRACT: Managing Uncertainty in Intellectual Property Law
In many ways, intellectual property is largely about managing uncertainty. Unlike traditional property, in which property boundaries are clearly delineated and property rights can be marked tangibly, the boundaries in intellectual property are, by their very nature, uncertain. Intellectual property law is an attempt to create man-made boundaries between intangible concepts, whether it be to incentivize innovation using patents, reward creativity through copyrights, or encourage investment in a brand by trademark protection. While intellectual property law manages uncertainty, it does not eliminate it. This is partly because some uncertainty cannot be eliminated. Words sometimes cannot capture what an inventor has actually created, for example.
Other forms of uncertainty, however, are reducible. Yet intellectual property law does not reduce uncertainty to its limit. Indeed, in many circumstances, intellectual property law deliberately maintains some level of uncertainty when defining or interpreting property rights.
This Article explores why this is the case. In particular, it explores how legal uncertainty can be used as a form of regulatory control. By not making the legal boundaries between intellectual concepts as sharp as they could be, intellectual property law allows some form of reducible uncertainty to persist.
This use of legal uncertainty as regulatory control has a number of consequences. First, it defers decisions on legal claims to the future. Allowing or even fostering uncertainty in the law allows more of a “know-it-when-I-see-it” approach to legal interpretation – judges can adjudicate legal rights in the context in which they arise in a particular case. In the lexicon of law and economics, uncertainty transforms legal rules into legal standards, allowing greater flexibility for ex-post determination by judges and other adjudicators. In areas where we feel adjudication should be context-specific, uncertainty can be a helpful tool that implicitly delegates legal authority to a future adjudicator.
Second, uncertain boundaries, as compared to certain ones, can chill the behavior of risk-averse actors. This may or may not be socially beneficial, depending on the context. In particular, if a regulation is intended to promote the creation of a social good, legal uncertainty will likely decrease production of this good. If a regulation is preventing a social harm, however, an uncertain regulation might actually deter behavior more than a certain one. In other words, leaving some level of ambiguity in a regulation might actually be more effective than making everything crystal clear.
The Article explores how these general principles are especially salient in the context of intellectual property. In particular, the Article examines the doctrine of equivalents in patent law and the fair use doctrine in copyright law as examples of maintained uncertainty in intellectual property law. The Article also discusses how the legal uncertainty injected by these doctrines helps promote the rationales underlying intellectual property.
Neel is a fourth-year Ph.D. student in Economics at Princeton University. In 2005, Neel received his J.D. from Harvard Law School, where he served as Notes Editor of the Harvard Law Review. After law school, he clerked for the Hon. Vaughn R. Walker on the U.S. District Court for the Northern District of California and the Hon. Ann Claire Williams on the U.S. Court of Appeals for the Seventh Circuit. Neel is licensed to practice law in California and Illinois and previously worked as an associate at the law firm of Latham & Watkins. He received his Bachelor's Degree in Computer Engineering with a minor in Mathematics from the University of Illinois in 2001. He relied on his technical background both when practicing law as a patent attorney, and as lead programmer and patent counsel for Spindrop, a music technology company that he co-founded in 2010.
Room 128 (Yale Law School)
ABSTRACT: A Failure to Communicate: The Legal Response to Hate Crimes and Civil Disobedience
In this paper, I examine the legal response to two unique violations of criminal law: hate crimes and civil disobedience. Hate crime laws, which impose harsher punishments, are sometimes criticized because they wrongly criminalize “thought crimes”, or because their additional penalty is thought to constitute a sort of double jeopardy. Conversely, civil disobedients are often penalized in a similar manner as ordinary offenders of the same law, which many believe is unjust.
The paper consists of two parts. In the first part, I argue that the communicative function of criminal law makes a salient difference for why hate crimes are given harsher penalties, and why many think that civil disobedients should be given more lenient penalties. I find that those who embrace such a communicative aspect of criminal law may be able to justify the disparate treatment of these crimes more easily than those who hold a strictly consequentialist or retributivist view.
The second part of the paper raises a potential worry about communicative theories of punishment, which is that they lack clear principles guiding their application and hence introduce an unacceptable element of uncertainty or arbitrariness to punishment. While I agree that this is potentially troubling, I dispute the notion that there is nothing guiding or restricting communicative theories of punishment, especially in liberal societies. Because hate crimes and acts of civil disobedience are typically communicative acts themselves, I suggest viewing such violations through the lens of speech act theory. Examining the illocutionary and perlocutionary acts associated with these acts, I believe, can give us insight into the appropriate legal response to them. I conclude with two additional suggestions regarding the application of communicative theory to these issues: to help delineate the categories of protected class in hate crime, and to provide potential distinctions amongst civil disobedients who are more (or less) deserving of leniency.
Samuel is a doctoral candidate in philosophy at Rice University. Prior to studying philosophy, he worked in investment banking, and studied at the Wharton School of the University of Pennsylvania (B.S.) and Stanford University (M.S. Engineering). His research interests are in moral, legal, and political philosophy, especially regarding his dissertation topic of commodification, as well as the nature and limits of state power in areas such as punishment, civic obligation, and civil disobedience.
ABSTRACT: Happiness and Constitutional Law: Promise, Challenges, and Next Steps
Is there a legal right to happiness? In this paper, I make the normative claim that the fundamental purpose of law is to create a happy society. I argue that a right to happiness is not only consistent with this purpose, but also with reasonable interpretations of international law as well as various domestic constitutions. I argue further that happiness should be the guiding principle for constitutional law because it is superior to all other organizing principles in its ability to address the uncertainty that arises from subjective conceptions of what constitutes a just society.
Even though the importance of happiness is not without precedent in a variety of constitutions, arguing that it should be the guiding principle is a bold claim. It is a bold claim precisely because the tide in political and constitutional philosophy has shifted significantly away from utilitarianism and towards a focus on rights. This shift, in my view, is wrong.
Proponents of rights-based conceptions of justice unfairly use idealized theory to attack utilitarianism and advance the case for rights; no consideration is given to whether the idealized models can be extended to non-idealized settings; and very little consideration is given to whether the subjective realities or biases of the rights formulators skew their apparently neutral approach. This creates significant uncertainty in that the rights that are formulated may not have any congruence with the social reality in a particular jurisdiction, let alone transferability between jurisdictions.
A focus on happiness is an answer to this uncertainty, particularly in light of recent advances in measuring and understanding happiness—what some call the science of happiness—because these advances allow us to “proxy the concept of utility in a satisfactory way” that was not previously possible. The law does not and cannot live in a theoretical vacuum. What the science of happiness provides is a way of checking whether existing formulations of rights, based on idealized theories, are actually consistent with social realities, that is to say, whether rights (as we currently perceive them) make sense outside the theoretical vacuum. What the science of happiness also has the potential to do, if we accept happiness as a guiding principle of justice, is to assist in the construction and interpretation of rights as well as in the application of proportionality analysis. This paper will discuss, using thought experiments, some of the ways that the science of happiness could inform current debates on constitutional construction, interpretation, and application (especially at the proportionality phase) in ways that are superior to existing rights discourse.
Benjamin Perryman is a doctoral candidate at Yale Law School, where he is the Canadian Bar Association Viscount Bennett Fellow and a Humphrey Fellow in International Human Rights Law. Benjamin obtained his LLM degree from Yale Law School as a Fulbright Scholar and his JD degree from Osgoode Hall. He also holds a Master of Development Economics degree. Benjamin is called to the Bars of Ontario and Nova Scotia, and was previously a law clerk at the Federal Court (Canada) and Supreme Court of Nova Scotia. He lives in Dartmouth, Nova Scotia.
ABSTRACT: Uncertainty and Justice: Regulating Physician-Assisted Suicide
Vermont has recently become the fourth state to legalize physician-assisted suicide (PAS), while in 39 states PAS is subject to criminal prosecution. Needless to say, legalization and proscription of PAS are both enacted in the name of justice, but how can two legislative actions both be fair if they are poles apart? I think the answer lies in an inherent uncertainty, or rational indeterminacy, in the initial phase of the process during which the notion of justice is conceptualized.
Any legal decision needs to be just; but what is justice really about? I assert that for legal purposes justice should be defined as a set of values on the basis of which legal rules can be formulated. In American legal discourse, these values can be divided into five categories: rights, morality, social utility, formal administrability and institutional competence. Furthermore, within each category there are opposing arguments leading to different legal outcomes (e.g., PAS is bad for or neutral to public morality; a right to PAS exists or not; the Supreme Court is or is not competent to pass decisions like Vacco v. Quill, etc.).
Now, in order for a legal decision to be perfectly fair, it should embrace all five categories of values equally, but in controversial cases like PAS this never happens, since different value categories start contradicting each other. Therefore, one must choose the main value category and then “tune” the arguments from the others in accordance with it. For example, in deciding upon PAS one can single out rights as the most important value (a right to PAS is constitutional) and then harmonize the others with it (conventional morality does not apply to PAS; spending fewer resources on terminally ill patients promotes social utility; courts are competent to judge upon PAS’ constitutionality; legalizing PAS will make it formally administrable).
Since one value cannot logically take precedence over another (e.g., rights over morality and vice versa), the choice of a main value cannot be explained in strictly rational terms – it is heavily influenced by irrational factors (emotions and traditions) and is therefore uncertain, producing different visions of justice (the resulting concept of justice is rationalized after the initial uncertain choice is made).
In other words, uncertainty is a bifurcation point of justice where two opposite legal actions can both be fair provided the categories of values were harmonized for each of them. I argue that as long as proscription and legalization of PAS have this value-based foundation of justice, both measures can honestly be considered just.
Konstantin Tretyakov is a doctoral candidate at Harvard Law School, where he is working on the dissertation "Bioethics and Law: Towards a Cultural Interpretation of Justice." Konstantin holds Bachelor of Laws and PhD in Law degrees from Moscow State Institute of International Relations (MGIMO-University), where he majored in Chinese legal studies, constitutional law and international law. He was also enrolled in the Harvard Law School LL.M. Program in the 2012-2013 academic year (requirements fulfilled, degree waived), concentrating on philosophy of law, bioethics and constitutional law.
During his S.J.D. studies, Konstantin focuses on the right to die and organ transplantation problems in the United States and China in light of different theories of social justice.
Room 129 (Yale Law School)
ABSTRACT: Uncertainty and Firm Rationality in Antitrust Law
Antitrust law regulates firm behaviour on the basis that firms are rational profit maximisers. Rational models employed in antitrust law assume that firms take decisions based on complete knowledge of available alternatives and effective evaluation of each of these alternatives. However, in reality firms operate in complex and uncertain environments in which they do not have complete information. Organisational and behavioural theories investigating the manner and extent to which uncertainty affects the assumption of firm rationality have found firms to be boundedly rational. Yet, attempts to integrate this literature into antitrust law have met with stiff criticism. This paper furthers the enquiry into the bounded rationality of firms and studies its relevance to antitrust law. The paper finds evidence that managers in firms suffer from behavioural biases, that these biases may persist at the firm level, and that they may affect the conduct of firms in the market. Further, behavioural theories of the firm suggest that the conduct of firms in the market is affected by their internal operations and that a firm’s internal processes are removed from the rationality standard currently adopted in antitrust law.
The paper concludes by suggesting that the theoretical framework of antitrust law may benefit from incorporating the insights from management studies and business strategy, in order to be more closely aligned with empirical observations of firm behaviour.
Shilpi Bhattacharya is currently pursuing a Ph.D. at the Erasmus University Rotterdam as part of the European Doctorate in Law and Economics. She was previously an Assistant Professor of Law at the O.P. Jindal Global University in India and has also practiced at the law firm Linklaters LLP in Singapore. Her primary research is in Antitrust Law and Behavioral Law and Economics.
ABSTRACT: Debias them!? – Towards a Critical Theory of Reducing Biases and Uncertainty in Private Law
Much of private law rests on the idea of private autonomy, a concept deeply interwoven with the ideal of rational agency by a homo economicus. Since the 1970s, however, this model actor and its cognitive and volitional premises have come under increasing attack. People fall prey to biases when processing even simple pieces of information. These effects are particularly strong in decisions under risk and uncertainty. As behavioral economics bids farewell to the long-cherished homo economicus, the law is bound to recalibrate a number of its basic concepts, too.
One frequent reaction in the field of behavioral law and economics is the call for debiasing, i.e. measures which reduce cognitive biases. By, for example, reframing or narratively highlighting the potential risks of an investment decision, such as the purchase of a house or of stocks, one could possibly reduce investor overoptimism.
The conceptual attractiveness of debiasing resides in its status as a middle category between content-based ius cogens on the one hand and the mere disclosure of information on the other. It promises to help people to more rationally deal with conditions of risk and uncertainty while preserving freedom of choice. The key question from a legal perspective, however, should be: What are the normative implications inherent in the framework of debiasing? Current scholarly debate for the most part shuns this question. My thesis is that this is a serious mistake. Actively reducing human biases raises crucial normative questions. I therefore would like to suggest a “normative turn” of the discourse, which should comprise at least two dimensions: the necessary normativity and the normative potential of debiasing measures.
Necessary normativity means that debiasing must be conceived as an inherently normative concept. This claim rests on two main arguments. First, debiasing is not a neutral tool, despite its frequent portrayal as such. Rather, it has far-reaching implications for the concept of human autonomy. This holds true even though, at the level of the content of the decision, freedom of choice prevails: debiasing may still interfere with what we call an autonomous decision-making process. A philosophical and legal analysis of traditional concepts of autonomy (Kant) as well as of contemporary doctrines (Robert Paul Wolff, Harry Frankfurt, Gerald Dworkin) underscores this point. Second, debiasing may even induce what may be called a “pecuniary bias”. Most debiasing interventions target the correct assessment of the potential financial impact of a decision with an uncertain outcome. They thereby highlight the monetary implications and inadvertently make them mentally more “available”. The decision-maker is thus likely to attach greater decision weight to financial considerations, to the possible detriment of other factors such as health, fairness or sustainability. Necessary normativity therefore requires that debiasing be normatively grounded and that these implicit normative issues be made explicit. If the law aims at steering people’s decisions in some direction through debiasing, it needs a normative compass that enables it to determine which directions are socially desirable and which are not.
In turn, this implies a normative potential of debiasing that has so far gone unexploited: its techniques might be used not only to reduce certain biases but also to foster socially desired behavior in private decisions. Through such “contractual pro-biasing” one could promote public-good-oriented goals, e.g. sustainability or fairness, on a large scale within the workings of a market economy. The approach may thus contribute to a conscious politicization and de-neutralization of the core area of private law while preserving the cherished principle of freedom of choice to the greatest extent possible.
Philipp Hacker studied Law at the Universities of Munich and Salamanca from 2004–09. During that time he was a research assistant at the Max-Planck-Institute for Intellectual Property and Competition Law and a fellow of the Maximilianeum Foundation and of the German National Academic Foundation. After further studies in Philosophy and German Literature, he passed the second German bar exam in 2012. He is currently preparing his Ph.D. thesis at the Humboldt University of Berlin on behavioral law and economics in a European private law context, and is an LL.M. candidate at Yale Law School. He is a co-author of “FairEconomy – Crises, Culture, Competition and the Role of Law” (with Wolfgang Fikentscher and Rupprecht Podszun).
Room 121 (Yale Law School)
ABSTRACT: Embracing Contingency: Toward a Responsive Emergency Law
Such leading theorists as Hegel and Kuhn argued that progress occurs via punctuated evolution triggered by episodes of uncertainty. History teaches us that such moments are often sparked by contingencies such as disasters and economic crises. The manner by which law interprets and responds to instability shapes its transformative potential. The New Deal is a prime example. Policy innovation in the wake of the Great Depression, facilitated by emergency delegation of legislative authority, instigated one of the century’s most important social paradigm shifts. In the twenty-first century, however, law often appears to exacerbate instability and impede productive transformation. The past decade has been marked by contingencies that provoked public debate about fundamental socio-political arrangements. Hurricane Katrina, for example, metamorphosed into a political crisis implicating structural inequality in America. More recently, the global recession triggered social movements challenging the operation of contemporary democracy. In each case, legal responses were widely characterized as symptomatic of underlying problems. In New Orleans, authorities were criticized for elevating ‘law and order’ over social justice. Likewise, legislative rescue packages following 2008’s financial crisis were perceived to reflect illegitimate political agency, and governments were criticized for repressing social movements by force of law. These events signal a challenge to law’s role in the face of contingency: Can law embrace uncertainty and facilitate transformation, or will law instead suppress dialogue and reinforce injustice?
This paper critiques the preservative logic and avoidance of political uncertainty that underlies jurisprudential thought and state policy in the past decade. While legal academics have rapidly become interested in contingency following 9/11, emergency jurisprudence has defined itself through mistrust of law’s politically generative potential. The prospect of social change through contingency has come to be associated with the notorious case of Weimar. Thus the central problematic of emergency law is taken to be status quo preservation and deflation of uncertainty. This perspective, endorsed by Schmittians and liberals alike, promotes a narrowed conception of contingency politics. When extended to a paradigm for dealing with uncertainty, it amounts to what Nonet & Selznick termed repressive law. It is imperative that we reimagine emergency law as contingencies become more frequent and severe.
To inform such re-imagination, this paper develops a theory of law’s influence on contingency politics. First, law channels politics into spaces and processes, both institutional and extra-institutional. Second, law focuses politics on particular constructions of events, actors, and issues. Different variations of channeling and focusing modalities contribute to political dynamics that vary in their transformative potential. Where law’s influence is more repressive, it channels contingency politics to extra-institutional spaces and promotes a radicalized dynamic. Where law’s influence is more facilitative, it channels politics into structured spaces where fundamental socio-political questions are pursued without undermining law’s legitimacy. This latter dynamic represents an alternate logic of contingency that embraces uncertainty and moves toward what Nonet & Selznick termed responsive law.
Andrew Brighten is a Ph.D. candidate in Jurisprudence & Social Policy at the University of California, Berkeley. He holds a B.C.L./LL.B. from McGill University and an M.A. in Economics from Queen’s University. Andrew’s dissertation investigates how law influences the politics of social change in times of emergency and crisis.
ABSTRACT: On the Misappropriation of Law through Executive Actors: Enforcing Dominant Rationality through Political Utilization of Law
Law is the dominant instrument of modern social organization (e.g. Weber 2008), and it is assumed that law and the rule of law are the only legitimate means of democratic or just organization (Habermas 1998; Luhmann 2001). The argument here is that legal rule prevents the arbitrary exercise of power. Critical contributions emphasize that law is in itself problematic: because it constructs society in the simplistic categories of ‘in’ and ‘out’, because it is always open to colonization through discipline, and because it claims universality for social organization (Biebricher 2003, 2012; Litowitz 2000). Inquiring into the weaknesses of law, one comes across its indeterminacy: no law, statute, bill or constitution is thoroughly determined. Generally speaking, law is always more or less defined; a more favorable account would term it flexible. What has been identified as a weakness here is conceptualized by law-makers and rather positivist legal philosophers as a quality, since it leaves a certain extent of discretion in the application of law. It is argued that this is necessary because law is applied in different contexts. Nevertheless, the consequences are Janus-faced: in the ideal case, law can respond to the plurality and complexity of the social world; in the worst case, actors applying the law can sneak in their own interests. This leads us to hypothetically identify the following dimensions as most relevant for the appropriate or inappropriate rule of law: (1) the conceptual determinacy of legal texts (cf. Perez/Teubner 2006), and (2) the praxis of interpretation and application of law (cf. Dworkin 1982). In legal texts, especially vague and open concepts (e.g. indeterminate legal terms, ILTs) open the floodgates to misappropriation (cf. Heck 1914, Jesch 1957, Koch 1979, Engisch 2010).
And even though legal indeterminacy is intended to enable the delegation of decision power to executing agencies (Osterloh 1995), vagueness can open loopholes for enforcing and promoting private, economic, political and other ends. The means by which this can be achieved is interpretation, and this key characteristic of legal praxis has been well researched (cf. Dallmayer 1992, Marmor 1995, Raz 1996 and 2009, Cleveland 2004, Robertson 2010). For critical political science and public law approaches, it is evidently interesting to inquire into the praxis of interpretation of law by executive agencies, namely governmental bodies and bureaucracies. This is even more the case if we assume with Max Weber that authority in contemporary society is mainly experienced as administration (Weber 2008: 162).
In the following article we show that executive actors can misuse law to realize a political agenda by stretching the meaning of open legal concepts through highly idiosyncratic interpretation. The examples we draw on are proceedings on the granting of building permits to the German Muslim minority. By arguing on legal grounds (here, building codes and laws) that a mosque or cultural center does not fit into the overall appearance of a locality, representatives and executives of societal majorities help realize an exclusionary construction of social reality. These acts of misappropriation show that dominant groups can in this way enforce and promote their rationality through law. With this we want to call attention to the fact that the rule of law is problematic if utilized to solve political disputes, because a solution that claims legitimacy and adequacy by virtue of its legal form may in fact be merely political and arbitrary.
Ilya Levin is a research assistant at the Chair of Public Law, Russian Law and Comparative Law at the Humboldt University Berlin. He studied law in Berlin and recently completed a Ph.D. thesis on "Public Private Partnership and the Olympic Games in Sochi".
ABSTRACT: Federal Enforcement of Police Reform
Congress passed 42 U.S.C. §14141 in an effort to combat police misconduct and incentivize widespread reform. The statute gives the Attorney General a public right of action against police agencies that engage in a pattern or practice of unconstitutional wrongdoing. While academics initially praised the law’s passage, many have since worried that the Justice Department has not effectively administered the measure. But no research has systematically and empirically analyzed how the Justice Department has applied this statute. Using a combination of qualitative and quantitative methods, I fill this gap in the available literature by comprehensively detailing the Justice Department’s enforcement of §14141 over time. I conclude that changes in leadership and internal policies have directly influenced the administration of the statute. These changes at the Justice Department have affected both the breadth and depth of §14141 enforcement. Ambiguity in the statutory language handed down by Congress has opened the door for the Justice Department to mediate the impact of the legislation. As currently written, §14141 appears to be an unreliable method for instigating widespread police reform. This realization has grave implications for the future utility of public rights of action. Structural reform litigation often relies on statutory language authorizing public rights of action to be initiated by the executive branch. These findings suggest that, in such cases, the executive branch can easily mediate the impact of ambiguous statutory authorizations.
Based on these empirical findings, I argue that Congress should reformulate §14141 to grant private parties a limited equitable right of action against police departments engaged in a pattern or practice of unconstitutional misconduct. This would likely increase the number of §14141 cases, thereby incentivizing the spread of proactive reform in police agencies across the country. In order to prevent these private §14141 claims from interfering with active public claims, this proposed statutory change would provide the Attorney General with narrow authority to intervene and block a private §14141 claim. The Attorney General could only take advantage of this limited power when, prior to the initiation of the private claim, the Justice Department has already initiated a public §14141 investigation. This statutory change would permit the Justice Department to continue the important job of structurally reforming problematic police departments, while empowering a new group of plaintiffs to fill the gaps left by the Justice Department’s historically uneven enforcement policies.
I am a Visiting Assistant Professor of Law at the University of Illinois College of Law, where I teach criminal law and information privacy. I am also a Ph.D. candidate in the Jurisprudence and Social Policy Program (JSP) at the University of California, Berkeley. My ongoing dissertation uses a combination of qualitative and quantitative methods to examine the Justice Department’s implementation of structural reform litigation in American police departments.
Room 128 (Yale Law School)
ABSTRACT: Some Skepticism About Bright-Line Rules: A Case Study in First Nations Property Law
Influential strands of contemporary legal theory regard predictable formal rights and standard default rules as the sine qua non of a modern property law. Property theorists tend to be skeptical, if not outwardly hostile, toward localized and context-contingent variations in property that defy the conventional vision of a seamless, state-centric system. “Local property” presumably raises information costs between owners and non-owners, and creates fundamental uncertainties in translation between the internal life of local communities, external markets and the predominant legal system. Nevertheless, such local property arrangements persist in a surprising number and diversity of settings around the world—though they remain understudied as such.
Local property regimes have seen a marked resurgence in recent years among First Nations (indigenous communities) in Canada, as several communities grapple with newly available options for tenure change on their reserve and treaty lands. These movements include a complex mix of statutory pathways, negotiated treaties, governance agreements, and indigenous or customary law. But two contrasting outcomes of reform efforts are now becoming clear. Some communities have opted to develop “standard” property regimes that are modelled on the freehold estate and closely resemble the private-individual and public property forms familiar to the common law. Other communities resist standardization and have maintained or developed customized forms of local property—for example, by retaining a combination of collective and individual interests, allocating specific use rights based on the functions of certain resources, placing restraints on alienation, and adapting the leasehold and other contractual instruments as key mechanisms for exchange and transfer with third parties.
Based on preliminary fieldwork, this paper builds a theory to address why some First Nations choose to build local property institutions against the conventional wisdom of standardization broadly promoted by both private law theorists and law and development scholars. I argue that, at one level, communities have deployed a set of unique institutional tools to manage the uncertainties of local property and thereby reduce inefficiencies and promote community autonomy and human flourishing. But at another level, the “uncertainty” of localized property regimes is itself an affirmative feature of these new institutions that reinforces the long-term commitment of outside investors and other actors to the success of First Nations communities.
ABSTRACT: Rethinking our Renewables: Who Owns the Wind?
Although there is still much scientific uncertainty regarding the best mitigation measures to address climate change, it is widely agreed that one way to reduce its effects, or at least to slow the process, is to change our energy mix and rely increasingly on renewable sources, such as wind. Indeed, wind energy production has risen significantly over the past two decades and is likely to become even more prominent in the years to come, given the uncertainties associated with climate change and the ever-growing need for new and clean energy sources. This paper will argue that while wind energy has many ecological and economic advantages, harvesting the kinetic energy locked in the wind also presents some compelling challenges with respect to property allocation and the use of natural resources. The nub of the problem is that despite its naturally replenishing properties, the kinetic energy in wind is not entirely endless. Because of their ability to regenerate, we tend to think of renewables as endlessly abundant. Yet the extraction of energy by one wind farm can reduce the wind available to others downwind, which might lead to conflicts over the use of existing wind resources. Policy-makers are thus soon likely to face a challenge in managing our winds.
From a law and economics perspective, this can be seen as a problem of externalities. Each farmer enjoys the gains produced by the turbine but does not shoulder the costs imposed on downwind neighbors. This paper focuses on the regimes employed to address this externality and to manage and protect wind, and specifically on the creation of rights to use the wind as it blows over the land, termed “wind rights”. After providing a snapshot of existing wind rights in the United States, the paper will examine the development of wind rights in light of the prevailing theories of property evolution, asking if and how they can explain the evolution of wind rights thus far and the specific ways in which they have evolved. In addition, drawing on the example of wind rights, the paper will explore how scientific uncertainties regarding the behavior and availability of a resource may affect property evolution, and the possible effect the ‘invisibility’ of a resource could have on property development.
ABSTRACT: Commitment without Constitution: Chinese Property Law Reform and Beyond
According to Douglass North, “for economic growth to occur the sovereign or government must not merely establish the relevant set of rights, but must make a credible commitment to them.” North’s prescription is to constrain the government with a set of rules that do not permit leeway for violating commitments, such as a political system of checks and balances and an independent judiciary. For post-Communist countries, it is often uncertain whether a democratic constitution could work and how long it would take to make it work. Under this limitation, how to build a credible private property system in post-Communist countries becomes a difficult question. Privatization is a process of transferring resources from the government to private holders; why should the government do that?
Absent a motivation to define and protect property rights, government agencies at different levels and in different departments will try their best to retain their control and distort the privatization process. This is exactly what happened in Russia: the tragedy of the anti-commons. As Michael Heller has said, transition regimes have often failed to endow any individual with a bundle of rights that represents full ownership of storefronts or other scarce resources. Instead, those regimes have ratified the expectations of powerful socialist-era stakeholders by making them rights-holders in the new economy. That is why a private holder of one stick in the bundle of property rights could not make use of the storefronts in Moscow. In contrast, China, a post-Communist country that maintains state ownership of urban land, began urban land use reform in the 1980s and experienced its first real estate boom as early as 1993. Like it or not, there is no tragedy of the commons or of the anti-commons, and the market seems to work very well, a sign that property rights in urban real estate have been defined and protected to an extent that encourages continuous investment. How has China accomplished this?
The answer lies in the governmental land sales system in urban China. It is an institutional buyout of local governments, which exercised the state land ownership during the Communist era. Government sales of land use rights are an exchange: private companies get real property rights defined and protected for a fixed period and local governments get the land revenue. This mechanism has successfully won the support of local governments for defining and protecting property rights and is the legal foundation for the housing boom in urban China.
When a constitution is unavailable to constrain the government into making credible commitments, another way is to align the interests of the government and other stakeholders with the reform. That is why I replace North’s “constitution and commitment” with “land sales and commitment” in the Chinese context. This approach, however, has a drawback: the interest groups fostered by the reform might oppose carrying it further. How to overcome this problem is a difficult issue for the Chinese government.
Room 129 (Yale Law School)
ABSTRACT: Constitutional Supremacy and Counter-Interpretation in the US and UK
The New Commonwealth Model of Judicial Review (NCM) aimed to end the standoff between democracy and uncertainty. Popular constitutionalists argue that elected representatives ought to interpret the constitution because it expresses the people’s political ideals. Proponents of judicial supremacy respond that multiple interpreters invite chaos: judges must have the final word to end dangerous conflicts. NCM promised the best of both worlds: judges interpret rights, but legislatures have the final say on the law. By comparing conflicts between legislatures and courts in the US and UK, I argue that NCM failed because it introduced uncertainty over the status of constitutional rights. As a result, parliament and the court speak past each other, endangering the legitimacy of the constitution. NCM should dispel this uncertainty by following the US model of constitutional supremacy.
When fighting against a court, legislatures can wield two different languages. The first is rights misgivings. Parliament asserts a right to violate the constitution because it represents the people’s will. The second language is counter-interpretation. The legislature disagrees, not with the right, but with the judge’s interpretation of the right. In this counter-interpretive battle over constitutional meaning, the odds are stacked against the legislature. Courts are more popular, focused, and wield superior expertise. Incentives are structured against counter-interpretation.
Indeed, in the UK right-wing politicians and the press wield the language of rights misgivings. They have demonized the Human Rights Act (HRA), which empowers judges to protect rights. They have denounced it as the “world’s worst law” and a “perverts’ charter.” Relatedly, Parliament has never counter-interpreted a right, usually deferring to the judiciary on its meaning.
Constitutional supremacy removes the option of rights misgivings. The constitution is the highest law and the only language of legitimation. If the legislature disagrees with the court, its only move is to counter-interpret.
In the US, constitutional supremacy has forced counter-interpretation. After extensive opposition to ratification of the Constitution, support became almost universal and counter-interpretation took off. Those who opposed the constitution now called upon it, claiming to understand it better than those who wrote it. This tradition continues today.
After the 2010 UK election, the HRA was briefly de-facto constitutionally supreme and counter-interpretation flourished. In the campaign, Conservatives railed against the HRA, specifically immigrants’ use of the “right to family life” to prevent deportation. They promised to “scrap” the HRA. The election produced a hung parliament, and the Conservatives formed a coalition with a third party, the Liberal Democrats. The Liberals strongly support the HRA, and Conservatives scaled back their campaign against it. Politics gave the HRA de-facto supremacy. Yet, the Conservatives still had to appease their base. They began to counter-interpret. Cabinet promulgated executive rules asserting a narrower interpretation of a right to family life than the Court’s. When the Court struck the rules down on technical grounds, Parliament vowed to pass a new law to reassert its interpretation. However, after the coalition between Liberals and Conservatives fell apart, Conservatives returned to the language of rights misgivings.
Joshua Braver holds a J.D. from Yale Law School and is a Ph.D. student in Political Science at Yale. He works on democratic theory as well as related issues in comparative constitutional law, including judicial review, constitution-making and conflicts between legislatures and courts.
ABSTRACT: The Principle of Legal Certainty in the Case-Law of the ECJ: From Certainty to Trust
The principle of legal certainty is inherent to any western legal system, although it appears in many different shapes. At the crossroads of European legal traditions, the case-law of the Court of Justice of the European Union is testament to that fact. According to the Court, the principle of legal certainty requires «that rules of law be clear, precise and predictable as regards their effects» (16/02/2012, Marcello Costa, C-72/10-C-77/10). How can these requirements be reconciled with the fact that this principle is probably the most uncertain, ambiguous and unpredictable of all European norms?
The goal of the paper will be to show that the uncertainties and incoherencies of the Court’s judgments regarding the general principle of legal certainty (almost 2,500 so far) are the result of unquestioned postulates as to what the principle means. The first part will set out the sprawling, confused and contradictory nature of this case-law which, according to the Court, covers or is at the origin of, among others, the principles of legitimate expectations, vested rights, non-retroactivity and res judicata.
The second part of the paper will show that the jurisprudential inconsistencies can be explained, at least in part, by the history of legal theory. It appears that from ancient Greek and Roman law until today, with a very significant turn in the 17th century, four ‘logics’ have built up that animate the modern principle of legal certainty. Two of these logics propose alternative bases for the principle: the Cartesian logic is grounded in the requirement for (absolute) certainty, while the fiduciary logic is built on the notion of trust. The two remaining logics relate to the beneficiaries of the principle: the subjects of law (subjective logic) or the power of the Prince (political logic). This last logic can be traced back to Antiquity, while the three other logics mostly arose during the 17th century, with the scientific revolution of Descartes and Newton (Cartesian logic), the Lockean theory of the social contract (fiduciary logic) and the development of individual rights (subjective logic). This brief historical overview will also give an account of the evolution of the principle of legal certainty on American soil: while its presence, established during the 19th century, suffered from the attack of Jerome Frank, the predictability of law remains a goal of many American legal realists.
Finally, I will propose two simple ideas: to abandon the (unrealistic) Cartesian logic as a basis for legal certainty and to give precedence instead to the fiduciary logic. In terms of beneficiaries, I will argue that legal certainty should operate mainly in favour of the individual and not of the powers that be, in this case the European Union.
Dr. Jérémie Van Meerbeeck is a Judge at the Brussels Court of First Instance in Belgium. He is also a guest lecturer at University Saint-Louis in Brussels. He defended his Ph.D. in May 2012 on the principle of legal certainty in the case-law of the European Court of Justice. He has published numerous articles on a variety of subjects including criminal law, supreme court proceedings, legal theory and European Union law.
ABSTRACT: Three Levels of Dialogue in Precedent Formation at the ECJ and the ECHR
This paper schematically presents three considerations on how the European Court of Human Rights (ECtHR) and the Court of Justice of the European Union (CJEU) attempt to respond to legal uncertainty. To that end, I introduce the concept of ‘reasoning from context’ in precedent formation. The paper suggests that the traditional way in which the metaphor of ‘judicial dialogue’ is used in European constitutional theory, to show the interconnection between the CJEU and the domestic courts of the Member States with an emphasis on the preliminary reference procedure, is deficient. Instead, I explore a broader, threefold way of conceptualising dialogue: not only between courts, but also between courts and civil society, and between courts and legislatures. My main argument is that this alternative conceptualisation of dialogue allows courts to confront legal uncertainty without endangering judicial legitimacy. Thus, the Luxembourg and Strasbourg Courts would often create or overrule a precedent while enjoying input from their main constituents, the Member States, as well as from civil society and the legislatures. In terms of legal dialogue, I first explore reliance on majoritarian trends inspired by, but not limited to, the constitutional law of member countries. Second, dialogue between the judiciary and civil society can take the shape of non-state third-party interventions (amici curiae briefs), in which representatives of NGOs and human rights organisations take part by feeding the judiciary information on the development of the law beyond the narrow context of an individual dispute. Finally, the notion of a dialogue between the judiciary and the legislative branch is envisaged as weak judicial review.
In this broader account of dialogue, the ECtHR scores better than the CJEU; in other words, I show that the precedents of the Strasbourg Court are established through ‘reasoning from context’: they settle uncertainty in the law while consulting a wider range of relevant stakeholders.
Bilyana Petkova is reading for a PhD in International Relations at the University of Kent and is a candidate for a Master in Studies of Law at the Yale Law School. She earned her BA degree from the Panteion University of Athens and her MA degree from Maastricht University. She spent the spring term of 2011 as a Research Assistant at the Yale Law School and was later a Visiting PhD Student at the Law Department of the European University Institute in Florence. Bilyana’s academic interests are in Comparative Constitutional Law, European law and European public policy.
Room 121 (Yale Law School)
ABSTRACT: Liberties Behind an Uncertain Veil of Privacy? An Experimental (Law and Economics) Approach to Chilling Effects and the Right to Be Forgotten
A major problem of privacy law is that we lack a compelling account of how voluntary privacy losses and permanent information storage might corrode our civil liberties. Consider the simple case of an Internet service provider offering you a discount if you consent to the disclosure of information about your book taste. Would you prefer a discount over your ability to be unobserved when reading Mao’s Little Red Book or Milne’s Winnie-the-Pooh? In the case of consent, would you stop reading books that others consider deviant from their normative expectations? And would it allay your privacy fears if you anticipated that stored information would be deleted automatically after a while? I will argue that privacy law has incompletely theorized the risk of chilling effects in the case of private-sector surveillance. I will also argue that automated forgetfulness may reduce chilling effects.
Tying in with First Amendment doctrine and fear-based theories of freedom, privacy scholars have repeatedly claimed that uncertainty about a watchful gaze will deter people from making choices that do not conform with the normative expectations of the mainstream. This chilling effects hypothesis is grounded in the assumption that people cannot choose whether they want to disclose personal information. This typically holds for government surveillance. A more complex behavioral problem arises in the case of private-sector surveillance, which is usually based on consent. Traditional accounts of privacy law assume that consent is the ultimate expression of autonomous choice, irrespective of the consequences that this choice entails. The right to consent is purely procedural; it is specified only as a choice of actions or strategies. My account starts from the claim that consent should rather be seen as the resolution of a conflict between normative preferences and preferences for money. Providing people with an incentivized ex ante consent option may lure them into surveillance and induce them to forgo the benefits from norm deviations – the benefits from exercising their civil liberties. The right to have personal information deleted (right to be forgotten) might mediate these behavioral effects or privacy valuations.
Using methods from experimental economics, I will show that the monetary benefits from consent constitute a price that people are willing to accept for norm-compliant behavior resulting from a chilling effect. Providing people with a prior consent option is sufficient to generate chilling effects. Nudging people towards potential publicity, however, does not increase the value they place on privacy. Surprisingly, the right to be forgotten reduces neither chilling effects nor privacy valuations. Moreover, the experiment provides evidence for status quo bias in the case of a permanent storage default.
Yoan Hermstrüwer holds a Licence en droit (LL.B.) from the Université Panthéon-Assas (Paris II) and completed the First State Examination at the University of Bonn. He is currently a Research Fellow at the Max Planck Institute for Research on Collective Goods and a Ph.D. candidate at the University of Bonn. He has been a Visiting Researcher at Yale Law School. In his dissertation, he investigates the behavioral and constitutional dimensions of consent rules in European privacy law. His research interests include behavioral law and economics, Internet regulation, constitutional law and public international law.
ABSTRACT: New Technologies, Old Doctrines: The Case for Abandoning “Abandoned DNA”
As we go about our daily routines, we leave trails of genetic material in our wake. When we toss out coffee cups or tissues, we throw away saliva and skin cells along with them. Police officers can collect these items and test them for DNA. Under governing case law, they are free to do so without triggering the Fourth Amendment right to be secure against unreasonable searches and seizures. For, the reasoning goes, when one tosses away a cup or napkin, one thereby abandons that object along with the DNA on it – and for Fourth Amendment purposes, one has no reasonable expectation of privacy in something one abandons. Numerous courts have accepted this analysis and denied Fourth Amendment challenges to surreptitiously collected DNA. Their decisions have entrenched the controversial concept of “abandoned DNA.” Abandoned DNA stirs up deep anxieties about personal privacy in the genomics era. Because it gives state officials relatively unfettered access to people’s DNA samples, it both facilitates criminal investigations and raises serious privacy concerns. How ought we to balance these competing considerations? This question raises a more general one: how should we adapt Fourth Amendment principles to address the rapidly evolving – and thus perpetually uncertain – privacy implications of new investigative technologies?
This article aims to address these questions through a critical analysis of the abandoned DNA case law. That case law is inconsistent. Some courts faced with abandoned DNA claims focus their abandonment analyses on discarded objects like napkins, while others focus on the DNA samples gleaned from those objects. Both approaches find support in settled Fourth Amendment doctrine. To select between them, the article considers the more basic question of whether there is a prima facie, direct privacy interest in DNA. If there is such an interest, then the Fourth Amendment analysis ought to address it by focusing on whether the DNA samples were themselves abandoned. The article concludes that there is a privacy interest in DNA, and that the abandonment analysis should therefore focus directly on DNA samples. This conclusion addresses the inconsistency in the case law but gives rise to a further complication. Unlike most objects, DNA samples are rarely discarded deliberately; we shed them constantly, unwittingly, and unavoidably. As such, DNA samples do not fit well with the standard Fourth Amendment abandonment analysis, which turns on the intent to abandon. To address this complication, the article considers when and how Fourth Amendment doctrine should be adapted to deal with the uncertainties caused by new technologies with indeterminate implications for privacy. The article proposes a model for distinguishing proper judicial restraint from mere judicial abdication in this context. Applying this model, it finds the abandonment doctrine cannot be applied to DNA samples. It therefore concludes the abandoned DNA doctrine should itself be abandoned.
Palma Paciocco is an S.J.D. candidate at Harvard Law School. She holds a Joint Honors B.A. in philosophy and history, a B.C.L., and an LL.B. from McGill University. Before beginning her doctoral studies, she served as a law clerk to the Honorable Justice Louise Charron of the Supreme Court of Canada and completed the LL.M. program at Harvard Law School. Her research interests include criminal law, criminal procedure and evidence, sentencing, and ethics. Her dissertation considers how we should understand the prosecutor’s ethical obligation to “seek justice” in the context of a criminal justice system dominated by plea-bargaining.
ABSTRACT: Constitutional Uncertainty
Legal scholarship treats uncertainty in different, potentially incommensurable, ways. Law and economics, concerned with efficiency and information symmetries, and legal theory, concerned about procedural fairness and predictability, generally guard against uncertainty. By contrast, particular provisions of constitutional law (Fourth Amendment protections among others) stand for the proposition that protecting uncertainty is a positive, even constitutive function of American law. Conventional accounts of the relationship between constitutional rights and economics, which equate increasing public information with advancing the cause of freedom, prosperity, and democracy, need to be reexamined in light of these provisions. This paper zeroes in on constitutional uncertainty’s intersection with economics via the last redoubt of uncertainty readily available to the average citizen: privacy law.
Privacy law works through two interrelated concepts: property and domesticity. Privacy treats property as both protection and alienable resource. Privacy also stylizes domesticity as free of government and market interference. However, technological change allowing for the speedy monetization of personal information has allowed economic access to domestic spaces and personal property previously protected by privacy norms. Because constitutional uncertainty produces information asymmetries and distributional inequities, legal scholarship has been sanguine in the face of weakening privacy. Freedom-of-information narratives, backed by First Amendment considerations, encourage this trend. As a result, objections to deteriorating privacy protections typically reduce to ethical objections to invasive new technology.
This view omits privacy’s functional role in American political and economic development. Everybody knows big data and social media have economic as well as social and political consequences. But what are they? At least since Knight (1921), uncertainty has been linked to that most American trait: profit-seeking. Uncertainty often protects what, under an economic lens, looks like irrationality or inefficiency. Yet incubation – indistinguishable from uncertainty from a public vantage point – is a precondition of innovation. Thus privacy generates change. Likewise, while ideas about constitutional change require methods of popular mobilization to attain legal status, their origins rarely rest in the public sphere alone. By shielding ideas and resources from public perception, and thus regulation, privacy – in protecting uncertainty – can drive constitutional development and changes in systems of public profit.
If privacy protections shrink too far, individual citizens and the economy writ large may pay significant costs. Constitutional protections for uncertainty allow for individual innovation, the base unit of inputs into the larger economic and political structure. Firms, increasingly forced to hide research and development from the informational requirements of a public sphere designed to maximize predictability, face ever greater transaction costs. If American exceptionalism relies on belief in the possibility of beating the odds, doing so requires spaces of constitutionally-protected uncertainty rather than carefully calculated risk management strategies. American economics is thus incomprehensible without an appreciation of how uncertainty functions. Minimally, constitutional law’s emphasis on privacy demands a normative reevaluation of uncertainty away from its bogeyman role in legal scholarship. More stringent privacy protection may be necessary, not because of an irreducible ethical benefit, but because privacy provides constitutional protection for the uncertainty that is constitutive of economic and political change alike.
Caitlin Tully is a third year law student at Yale. She received her undergraduate degree from Princeton in history in 2010, where she worked on the emergence of risk during religious disestablishment. Current research focuses on the intersection of economics and corporate law with constitutional law.
Lucas Thompson is a lecturer in the political science department at Yale and a post-doctoral associate with the MacMillan Center for International and Area Studies. He received his Ph.D. from Yale in 2013 with a dissertation on the politics of presidential emergency power. His research explores how uncertainty in institutional design can improve exercises of executive power and protect citizen liberties.
Room 128 (Yale Law School)
ABSTRACT: The SEC and the Courts’ Cooperative Policing of Related Party Transactions
A transaction between a corporation and its director or officer (“related party transaction”) presents a risk of conflicts of interest; but it can also benefit the corporation and often is inevitable. To sort beneficial related party transactions from detrimental ones, the current legal regime, which consists of securities regulation and corporate law, regulates related party transactions through ex ante screening procedures and ex post litigation. Disclosure plays an essential role in both stages. Based on a set of hand-collected data on actual disclosures in proxy statements of Fortune top 50 companies, I find that regulating related party transactions through the U.S. Securities and Exchange Commission (SEC) regulations may not be very effective, mainly because each company’s approving committee, consisting of independent directors, has considerable discretion not only over approval of a proposed related party transaction, but also over determining whether the transaction’s details should be disclosed. On the corporate law side, in fiduciary duty of loyalty litigation, there is considerable uncertainty over the applicable standard of review for related party transactions that satisfy certain conditions of state safe harbor rules, such as an approval by disinterested directors. This paper proposes linking the selective disclosure problem to the question of the applicable standard of review in fiduciary duty of loyalty litigation. Specifically, this paper argues that the courts should consider using non-disclosure of a related party transaction to shareholders as one of the justifications for applying the fairness standard rather than the business judgment rule.
This paper identifies several potential benefits from this proposal, such as creating better incentives for directors to disclose more details on related party transactions, giving litigants more predictable rules, and allowing for better accumulation of disclosure data over time, thus providing better guidance to companies and market participants in distinguishing between beneficial and harmful related party transactions.
Geeyoung Min received her B.A. in anthropology and B.L. from Seoul National University, where she was awarded numerous scholarships. She also received an LL.M. from Yale Law School and was a visiting scholar there. While at Yale, she was a recipient of the Lillian Goldman Scholarship and the John M. Olin Summer Fellowship. In the summer of 2005, she worked as an associate at Kim & Chang, the largest law firm in South Korea. She is currently pursuing a J.S.D. at Yale Law School. Her doctoral dissertation focuses on issues of corporate law and corporate governance, including how the charters of large, publicly-traded companies in the U.S. are determined and evolve, and cross-national comparison of corporate governance regimes.
ABSTRACT: Counterparty Risk Management in OTC Derivatives Market: From Private to Public Decision Making
This paper evaluates the post-crisis regulatory reforms in the EU and US that introduce a mandatory central clearing system in the OTC derivatives market. The effectiveness of this reform is heavily dependent upon the correct determination of the scope of the mandatory clearing obligation. This paper argues that the decision making process in the current regulatory framework in the US and EU is problematic in that respect, as it heavily relies upon the ability of public entities to determine the correct scope of the mandatory clearing obligation. The argument proceeds in five parts. The first part defines OTC derivatives and addresses the most common reasons to enter into a derivatives contract. The second part analyzes the problems the OTC derivatives market encountered during the 2007–2008 financial crisis and identifies the risks involved. The third part focuses on one specific risk, namely counterparty risk, and how that risk can be managed. In the fourth part, the paper evaluates the two mechanisms that exist to manage this risk, namely bilateral and central clearing. Finally, this paper concludes with an evaluation of the introduction of a mandatory central clearing system in the EU and US. It casts doubt on the effectiveness of the system to reduce systemic risk in the OTC derivatives market, as the system relies too much on the ability of public entities to correctly assess the scope of the mandatory clearing system.
Katrien Morbée obtained her LL.B. and LL.M. from Ghent University. She is currently an LL.M. candidate at Yale Law School and a Ph.D. candidate at the KU Leuven – University of Leuven Faculty of Law. Her research agenda focuses on corporate law and financial regulations, with special attention to the current regulatory reforms in the OTC derivatives market.
ABSTRACT: Reforming Derivatives Markets after the Financial Crisis: Systemic Risk and Transaction Costs Analysis
The 2008 financial crisis revealed how many countries suffered from highly myopic views and approaches in regulating (or de-regulating) their markets. Two causes in particular – an overly strong belief in the effectiveness of financial industry self-regulation, combined with an underestimation of the systemic and global spread of risk – precipitated the financial meltdown.
This paper is structured in three parts. In the first part, we address the areas of opaqueness and uncertainty that characterized the regulation of financial markets before the financial crisis, with a specific focus on financial derivatives. Pre-crisis national regulators lacked effective tools to supervise derivative dealers and transactions. Derivatives markets were ‘subsidized’ by the US and EU governments through de-regulation policies based on the myth of the expertise and ability of the private industry to regulate itself. Moreover, the absence of functional international agreements fostered the cross-border development of the derivatives market and the consequent systemic spread of financial risk. These two factors contributed to the global contagion of the financial crisis, which, on the one hand, prompted national governments to rapidly adopt internationally harmonized guidelines to re-regulate financial markets and, on the other, persuaded regulators to re-evaluate global financial market policies in order to better harmonize regulatory strategies.
Part II of this paper examines the extent to which current reforms and proposed reforms both in the US and in the EU will expand ‘public’ derivatives markets, while correspondingly reducing the scope of ‘private’ markets (which broadly coincide with the ‘unregulated’ OTC markets). We also question whether these reforms will, on the whole, reduce the scope and impact of systemic risk – a macro level issue – and reduce transaction costs of derivatives trading – thus promoting liquidity and creating a more efficient market.
In the third part, we analyze how market participants are reacting to the current changes in the regulatory scenario, with a focus on both cross-market and cross-border regulatory arbitrage. Indeed, while national regulators are still working on adopting the final regulation of derivatives, market participants are already anticipating the implementation of the new rules – which might increase compliance costs for the financial industry – by re-modeling and transferring their derivatives activities to existing platforms.
Room 129 (Yale Law School)
ABSTRACT: Uncertainty, Statistics and International Institutions – Towards a “New International Epistemic Order” in Int’l Law?
How do international institutions and their law attempt to regulate and manage uncertainty? And conversely, how is international law itself shaped and transformed by these attempts? To address these questions, this paper inquires into the relationship between uncertainty, international law and statistical knowledge.
The main argument is that, firstly, international institutions and their law manage global uncertainty through a particular technique of knowledge production: quantification – the collection, harmonization and dissemination of numerical data, statistics and indicators worldwide. Quantification is a particular form of “epistemic” uncertainty management, i.e. it makes risk and uncertainty manageable through a particular technique of “knowing” the world. Secondly, the paper shows how this form of “governance by numbers” transforms basic concepts and areas of international law. To react to these developments, international legal scholarship needs to rethink, and reclaim, international law as an epistemic order governing the production, dissemination and harmonization of specific forms of knowledge.
The paper develops these arguments in four steps. Part II reviews two basic approaches to uncertainty, knowledge and international law in the existing literature and situates this contribution within them. Part III provides a historical vignette on the internationalization of statistics to illustrate the long-term influence of statistical uncertainty management. Part IV then addresses how global governance by numbers has impacted three core areas of international law, namely international institutional law, sovereignty, and human rights law. Part V reviews possible approaches to reconstruct and reclaim international law as an epistemic order. Part VI concludes.
Methodologically, the paper attempts to combine a doctrinal reconstruction of applicable law with insights from decision theory, international relations, sociology of knowledge, and critical approaches to international law.
Michael Riegner, currently a Hauser Global LLM scholar at NYU Law School, is senior research fellow at Giessen Law School in Germany and a member of the research group “Law and Governance of Development Cooperation”. His work focuses on international and comparative constitutional law, particularly human rights, development and legal perspectives from the Global South. He was a research fellow at the Max-Planck-Institute in Heidelberg from 2008 to 2011 and studied law at the Universities of Passau and Geneva. He is admitted to the bar in Germany and worked, inter alia, for the German development cooperation agency GIZ in Kosovo and for the International Criminal Tribunal for the Former Yugoslavia. His current research interests concern the use of indicators in development cooperation and the role of international institutional law in the global production and distribution of knowledge.
ABSTRACT: Inventing Intervention: The Battle of Navarino, the Pacific Blockade, and the Contingent Origins of a (Quasi-) Legal Concept
This paper explores the uncertainty which surrounded, and the legal consequences which followed, the 1827 British, French, and Russian intervention in the Greek struggle for independence from the Ottoman Empire. This otherwise obscure action, which culminated in the Battle of Navarino, has had a long life in international law: in the nineteenth century, it stood for the limited use of military force, especially for reasons of “humanity”; in the twenty-first century, it has been resurrected as a precedent for humanitarian intervention. Oddly, however, the legal history of Navarino has been missed: what did the actors involved think and say they were doing, as they did it?
By answering this question, using primary sources from archives in London and Istanbul, I suggest that well-established legal categories governed the action up until the very last moment, when the three powers used force, in the form of a blockade. At that point, a combination of geopolitical imperatives and astute Ottoman diplomacy led the powers to deny they were at war. But in light of the blockade and the battle, every foreign minister wondered if his state really was at war. Each one groped for new legal templates to describe and guide his actions—showing, I argue, policymakers’ need for law to make sense of complex situations.
But more importantly, I contend, the uncertainty around Navarino was legally generative, in two ways. The first legacy was a reason to use force. All of the powers settled on the view that they were not, in fact, at war, which was incoherent in view of existing international law on the use of force. This uncertainty about precedents sent scholars searching for a new explanation for the action – and they found it in the view that there had been something legally significant about the Ottomans’ atrocities, or their level of “civilization,” which had justified the action. Navarino’s exceptionalism is also important for modern invocations – because today, with the resort to force generally banned by the U.N. Charter, a humanitarian intervention undertaken without Security Council authorization is, inherently, an exceptional act, requiring exceptional justifications.
Second, Navarino left a legacy of a means of using force. Though they had not intended to do so, the British and French had found themselves maintaining a blockade by force, while denying they were at war. This legally incoherent situation could later be adduced to support a convenient new view of customary international law. The contours of neutrality law made it very advantageous for these two maritime powers to be able to blockade weaker states while still denying that they were at war—and they did so repeatedly throughout the nineteenth century. Navarino served as the key precedent, even though the powers’ motives were usually far from humanitarian.
This story, then, illustrates policymakers’ search for legal templates to structure their actions and avoid uncertainty, and I argue that the template they settled on both made Navarino a useable example for humanitarian intervention, and made it a cautionary tale.
Will Smiley is a third-year JD student at Yale. He received his PhD in Middle Eastern Studies from Cambridge in 2012, and is a Graduate Research Associate at the Harvard Center for History and Economics. His research focuses on the history of international law.
ABSTRACT: Embracing Uncertain Outcomes: A Procedural Certainty View of Int’l Law
The majority of ‘domestic’ legal theories consider legal uncertainty undesirable, as it would prevent law from fulfilling its social function. According to this reasoning, law’s function is to orient behavior, and it can only do so by providing predictability based on stable and certain law. The certainty of law itself, as well as of judicial and other outcomes, is considered paramount for the certainty and security of social relations.
Applying this view to international legal affairs can be tricky, as the characteristics and mechanisms traditionally thought to make law stable and certain hardly withstand closer scrutiny in international law. Several systemic features affect the capacity of international law to conform to such domestic checklists. To list just a few: stability is threatened as much by the proliferation of sources with unclear normativity as by changes in more traditional sources like treaties and unilateral acts. The indeterminacy inherent in language undermines the certainty of international legal norms and the predictability this system can provide. This becomes even more pronounced because, in contrast to domestic legal systems, the international setting rarely has centralized compulsory judicial bodies to harmonize interpretation. These systemic features affect the capacity of some areas of international law to ensure the certain and stable outcomes that domestic legal theories require. From this standpoint, international law is thus labeled unpredictable and inconsistent, under buzzwords like fragmentation, indeterminacy, and changing normativity.
While international law might not be best placed to provide predictability through certainty of outcomes, it can nonetheless provide predictability of process. Based on an empirical qualitative investigation, I argue that actors resort to procedural solutions as an alternative to the outcome-oriented approach. Broadly understood to cover circumstances beyond the judicial process, procedures provide a predictable time frame within which parties can adjust their expectations and conduct, working as a signaling and coordinating mechanism. In contrast to the outcome-focused theories, which assume that actors would be paralyzed if faced with uncertainty of outcomes, international legal actors do continue to operate – based on what more closely resembles a procedural predictability scenario.
Melanie Wahl is a PhD candidate in International Law at the Graduate Institute of International and Development Studies (IHEID) in Geneva, Switzerland. During the 2013-2014 academic year, she is affiliated with Yale Law School as a visiting researcher. Her current research uses empirical methods to assess the perception of legal certainty of several domains of international law.
While in Geneva, she was a teaching assistant at IHEID, at the Geneva Academy of International Humanitarian Law and Human Rights and for WTO law at the University of Geneva. Prior to joining the PhD program, she worked on the legal protection of refugees at UNHCR-Brazil. She also coordinated treaty negotiations and implementation in matters of mutual legal assistance and assets recovery for the Brazilian Ministry of Justice.
She holds both German and Brazilian nationality and speaks English, French, German, Portuguese and Spanish fluently.
Room 121 (Yale Law School)
ABSTRACT: Peer Review: Managing Uncertainty in the United States Jury System
The jury system injects an inherently unpredictable, human variable into the United States justice system. Prior scholarship has documented, for example, the inability of open court questioning to elicit meaningful information about prospective juror bias. Acquittals in nonviolent drug prosecutions raise concern about the possibility of jury nullification; if jurors set aside the law in favor of their own intuitions about justice, laws may not be enforced uniformly. Most recently, popular coverage of the jury that acquitted George Zimmerman has focused on the prosecutor’s failure to identify “red flags” in particular jurors’ responses that might have influenced the trial’s outcome. Citing specific examples, I show that the process of jury selection is only made more uncertain by the idiosyncrasies of particular judges and jurisdictions.
I focus, here, on voir dire. First, in the context of a review of the pertinent literature, I propose an anthropological approach to studying lawyers’ approaches to assessing and evaluating jurors. Drawing on American anthropologist Clifford Geertz’s discussion of law as a symbol system, I argue that lawyers synthesize the information they elicit from jurors with an eye towards ordering the disorderly. The approaches lawyers take to voir dire represent an attempt to systematize and regularize a process that might otherwise feel immune to prediction or strategic intervention. This impulse to impose order, I argue, permeates culture as it does legal processes. But the analytic and predictive potential of anthropological methods to illuminate patterns in lawyers’ thinking is remarkably absent from legal scholarship. This is particularly striking given that “jury consulting” expertise has emerged as a marketable antidote to the perceived uncertainty of the trial by jury.
To date, there is little empirical data or scholarship examining subjective and variable approaches to jury selection. I posit that lawyers have highly developed and strongly felt intuitions about the jurors they encounter during voir dire, intuitions that take shape long before a lawyer enters a courtroom. I argue that the patterns that emerge in assessments of jurors’ character and truthfulness rely on understandings of human psychology and behavior rooted in everyday social interactions and relationships. To illuminate these patterns, both qualitative and quantitative research methods are essential. I conclude by outlining an interdisciplinary approach to studying how lawyers tacitly translate people into (purportedly) known entities, and the impact of these assumptions. This kind of project is a key first step toward understanding how uncertainty can be managed to promote the greater effectiveness and efficiency of the U.S. justice system.
Anna Offit is a doctoral candidate in the Department of Anthropology at Princeton University. She received an MPhil in Social Anthropological Analysis from the University of Cambridge in 2009, and a J.D. from the Georgetown University Law Center in 2012, after which she was admitted to the New York and New Jersey bars. As a law student, she was Editor-in-Chief of the Georgetown Journal of Legal Ethics and served as a law clerk for the Department of Justice’s Office for Civil Rights, where she assisted in efforts to oversee the implementation of the Prison Rape Elimination Act. Her dissertation project examines the role that hypothetical jurors play in prosecutors’ case preparation.
ABSTRACT: The Practice of Law and the Intolerance of Certainty
This paper considers whether the practice of law requires an intolerance of certainty. Such a view goes beyond the implicit epistemological assumption that law is intolerant of uncertainty and in search of certainty, an assumption itself grounded in a perspective that sees uncertainty as intrinsically negative, as something to be eliminated or regulated. Informed by a psychological exploration of the context of ordinary lawyering, this artificial elevation of certainty is exposed as incomplete and potentially harmful. Not only does the practice of law take place in an environment saturated with uncertainty, but uncertainty is an essential part of being a lawyer. While uncertainty is indomitable, it also has both positive and negative potential depending on how it is appraised. Lawyers can use uncertainty adaptively in order to be integrative and ethical professionals, as well as to craft good legal outcomes that depart from the status quo. To that extent, legal practice resists the certainty of law as a wholly predictable, static and inflexible system. Two examples illustrate this perspective: the self-organising nature of professional norms, and the relationship between legal narratives, certainty and uncertainty.
Stephen Tang is a PhD (Clinical Psychology) Candidate at the Australian National University (ANU) Research School of Psychology, and an Associate Lecturer at the ANU College of Law. Stephen’s current law-related research includes projects on the psychological wellbeing, motivations, ethical practice and professional identity of law students and lawyers. His doctoral research examines the psychology of indecision and indecisiveness, particularly experiential and self-regulatory aspects. Stephen was formerly a lawyer in commercial practice and has since worked as a psychologist in acute and community mental health settings.
ABSTRACT: Setting the Stage for Legal Research on Judicial Deliberation
Judicial deliberation – i.e., the phase in the civil process during which the judges withdraw to deliberate on their judgment (hereinafter: JD) – is characterized by legal uncertainty. So much so that JD is often perceived as a black box: it is fed a certain input (the debate between the parties) and generates an output (the judgment) without anyone really knowing how the mechanisms inside the box work. The following questions, for example, remain unanswered: Which judicial acts fall under the secrecy of deliberation? Is it possible to deliberate by email? What is the impact of a unus judex on the quality of jurisprudence? What should be done when judges have irreconcilable differences during deliberation? What should be done when a judge postpones judgment beyond reasonable time limits? This list of problematic issues regarding JD is long and continues to grow.
This uncertainty is the result of a lack of fundamental reflection on JD. Both legislators and scholars appear uninterested in deliberation from a procedural point of view, because it is considered a formality, a mere administrative phase, which arises only after the real action – i.e., the debate between the parties – has taken place. Moreover, in cases where the uncertainty is recognized as an issue, it is often justified under the guise of legal custom, the secrecy of deliberation or even judicial independence.
In this paper, however, the author will demonstrate that this legal uncertainty is not justifiable and is, indeed, harmful. First, it will be emphasised that JD is not a purely administrative matter but an essential part of the civil process and of the judge’s mission. The paper will therefore argue that parties to a civil procedure should be able, especially during deliberation, to fall back on a regulatory framework that clearly indicates the guarantees offered to them and the responsibilities of each of the procedural actors. Secondly, the paper will examine the idea and extent of judges’ freedom of movement during the deliberation phase, and how legal uncertainty often prevents judges from making use of this freedom. It will be established that this ‘chilling effect’ is a source of concern, as it blocks innovation and threatens judicial independence. Lastly, the paper will demonstrate how a more extensive study of JD can deepen our understanding of the characteristic interplay in procedural law between different types of rules – i.e., procedural rights, rules of judicial organization, rules of professional conduct and soft law. By way of conclusion, the paper ends with a plea for increased scholarly attention to JD, setting out a detailed research agenda that touches upon all of the above-mentioned issues.
Els Vandensande is a junior researcher at the Institute for Procedural Law of the University of Leuven (Belgium). Her research focuses on judicial deliberation, harmonization of procedural law and judicial administration. She graduated from the University of Leuven in 2011 with a Master’s Degree in Law and a Bachelor’s Degree in Philosophy. During her Master’s, she took part in the Research Master in Law offered in cooperation with Tilburg University (the Netherlands). Currently, she is an LL.M. Candidate at Yale Law School.