Contract Law Illusions and Delusions, forthcoming, J. Leg. Stud. in Bus. (2024)
This essay explores the extent to which lawyers’ beliefs about contract law emanate from illusions and delusions embedded in and around the doctrinal canon. It praises the recent piece on generative contract interpretation by Arbel and Hoffman as a means of skewering metaphysical nonsense like the “mutual” or “shared” intention of the parties. It also assesses the extent to which belief in, and justification of, contract law as an institution borders on the delusional. The claim is that our very ability to cope with the world entails “dark trust” we barely notice and cannot measure. Thus contracts, whether highly meaningful or far less meaningful than lawyers think, are means of groping for order, reassurance, and certainty in the face of chaos, insecurity, and the unknown unknowns. They can be, in a phrase, delusions of order.
Governance ≠ Leadership: What Blockchain and AI Won’t Do for Corporate Lawyers, 46 J. Corp. Law 965 (2021)
This is a contribution to the Journal of Corporation Law’s 2020 symposium on blockchain technology and corporate governance. The thesis is that blockchain technology is well suited to the monitoring function in corporate governance; that monitoring as the primary function of corporate governance is a particularly legal conception; and that the business conception of governance has far more to do with leadership, strategy, and operations. If the legal and business conceptions of governance tend to be ships passing in the night (at least in this somewhat exaggerated rendering), it is because prevailing economic and legal theoretical models have a difficult time incorporating the human qualities that underlie leadership, intuition, insight, and creativity. Law schools have long taught litigation skills, and transactional skills have come into vogue as well. Teaching leadership to aspiring business lawyers is the next challenge.
Lawyering Somewhere Between Computation and the Will to Act, in Larry DiMatteo et al., eds., THE CAMBRIDGE HANDBOOK OF LAWYERING IN THE DIGITAL AGE (Cambridge University Press, 2021).
This is a reflection on machine and human contributions to lawyering in the digital age. Increasingly capable machines can already unleash massive processing power on vast stores of discovery and research data to assess relevancies and, at times, to predict legal outcomes. At the same time, there is wide acceptance, at least among legal academics, of the conclusions from behavioral psychology that slow, deliberative “System 2” thinking (perhaps replicated computationally) needs to control the heuristics and biases to which fast, intuitive “System 1” thinking is prone. Together, those trends portend computational deliberation – artificial intelligence or machine learning – substituting for human thinking in more and more of a lawyer’s professional functions.
Yet, unlike machines, human lawyers are self-reproducing automata. They can perceive purposes and have a will to act, characteristics that resist easy scientific explanation. For all its power, computational intelligence is unlikely to evolve intuition, insight, creativity, and the will to change the objective world, characteristics as human as System 1 thinking’s heuristics and biases. We therefore need to be circumspect about the extent to which we privilege System 2-like deliberation (particularly that which can be replicated computationally) over uniquely human contributions to lawyering: those mixed blessings like persistence, passion, and the occasional compulsiveness.
The Persistence of “Dumb” Contracts, 2 Stan. J. Blockchain L. & Pol’y 1 (2019)*
“Smart contracts” are a hot topic. Presently, smart contracts mostly exist as evidence of property, like cryptocurrencies or mortgages, created and/or transferred on blockchain technology. This is an exploration of the theoretical possibilities of artificial intelligence in a far broader range of complex and heretofore negotiated transactions that occur over time. My goal is to understand what it means to make a contract smarter, i.e., to delegate more and more of the creation, performance, and disposition of legally binding transactions to machine thinking. Moreover, I want to do so from the perspective of one who is neither a true believer in the purported technological singularity to come nor a digital Luddite. There are two primary themes. First, the extent to which complex transactions occurring over time can be embodied in computer programs – the ability of the contracts to be smarter rather than dumber – depends on the extent to which the subject of the transaction becomes not just a social fact, but an institutional reality. The dumb contract is merely a map of an antecedent reality, but the smart one is a real thing in itself. Second, smart rather than dumb contracts will require the translation of often fuzzy legal predicates, otherwise capable of expression in truth-functional logic, into digital proxies expressible in the non-ambiguous discrete units of code. The upshot of these two themes is that, at least until there is better evidence that a technological singularity will occur, deciding will remain something fundamentally different from reasoning by way of logic or code. Hence, for the time being, dumb contracts, ones that leave open the possibility of what Karl Llewellyn called “situation sense,” will persist.
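To make the second theme concrete, here is a minimal Python sketch (mine, not the article’s; the 72-hour threshold and all names are hypothetical) of how a fuzzy legal predicate like “delivery within a reasonable time” might be reduced to a discrete digital proxy:

    from datetime import datetime, timedelta
    from typing import Optional

    # Hypothetical crisp proxy for the fuzzy legal standard "reasonable time."
    REASONABLE_WINDOW = timedelta(hours=72)

    def payment_due(shipped_at: datetime, delivered_at: Optional[datetime]) -> bool:
        """Smart-contract-style predicate: pay if and only if delivery is
        confirmed within the fixed window. The proxy is discrete and
        machine-decidable; the standard it stands in for is not."""
        return delivered_at is not None and delivered_at - shipped_at <= REASONABLE_WINDOW

    shipped = datetime(2019, 1, 2, 9, 0)
    print(payment_due(shipped, shipped + timedelta(hours=48)))  # True: inside the proxy
    print(payment_due(shipped, shipped + timedelta(hours=80)))  # False: outside the proxy,
    # though a court applying "situation sense" might still find 80 hours reasonable

The gap between the crisp proxy and the fuzzy standard it replaces is exactly the space in which Llewellyn’s situation sense would otherwise operate.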
Halting, Intuition, Heuristics, and Action: Alan Turing and the Theoretical Constraints on AI-Lawyering, 5 Savannah L. Rev. 133 (2018).
This is a reflection on the relationship of lawyering and artificial intelligence. Its goal is a better understanding of the theoretical constraints on the latter. The first part is an assessment of one particular and crucially important aspect of the theory of machine thinking – determining whether the program being run will reach a conclusion. This is known as the “Halting Problem.” One question at the far reaches of AI capability is whether any physical machine presently conceivable could always, on its own, for every possible program, determine whether the program will ultimately generate an answer. The essence of the Halting Problem is that the answer to that specific question is “no.” Hence, unless a human programs the machine to stop short of a final answer being generated, the machine won’t itself be able to decide whether it has thought enough and it is time to fish or cut bait. The second part is a philosophical reflection on what it means to decide something as opposed to merely thinking about it. Humans don’t have a Halting Problem. Even if they think as logically and formally as a machine, they also act. The thesis is that humans seem able, for every problem, to stop thinking and start doing, even if they don’t know whether the thinking is or ever will be complete. The third part is an assessment of what a law school of the future ought to look like, given this moderate view of the interaction between thinking machines and deciding humans.
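The “fish or cut bait” point can be illustrated with a minimal Python sketch (mine, not the article’s; all names are illustrative): since no program can decide halting in general, a human imposes a step budget, and the machine simply reports that it gave up when the budget runs out:

    # A human-chosen budget substitutes for the undecidable question
    # "will this computation ever finish?"
    def run_with_budget(step_fn, state, max_steps):
        """Iterate a transition function until it signals completion
        (returns None) or the budget is exhausted."""
        for _ in range(max_steps):
            state = step_fn(state)
            if state is None:
                return "halted"
        return "gave up"  # undecided: more steps might or might not have helped

    # A computation that clearly halts: count down from 3.
    countdown = lambda n: None if n == 0 else n - 1
    print(run_with_budget(countdown, 3, 100))    # halted

    # The Collatz map: whether it halts for every input is a famous open question.
    collatz = lambda n: None if n == 1 else (n // 2 if n % 2 == 0 else 3 * n + 1)
    print(run_with_budget(collatz, 27, 50))      # gave up (27 needs over 100 steps)
    print(run_with_budget(collatz, 27, 1000))    # halted

The budget is the human’s decision, not the machine’s: the machine cannot know whether “gave up” means the computation was one step from an answer or would have run forever.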
*An online version of the article can be found at the Stanford Journal of Blockchain Law & Policy.
© Jeffrey Lipshaw 2013