Artificial Intelligence and the Law

The Persistence of “Dumb Contracts” (working paper)

“Smart contracts” are a hot topic. At present, smart contracts mostly consist of evidence of property, like cryptocurrencies or mortgages, created and/or transferred on blockchain technology. This is an exploration of the theoretical possibilities of artificial intelligence in a far broader range of complex and heretofore negotiated transactions that occur over time. My goal is to understand what it means to make a contract smarter, i.e., to delegate more and more of the creation, performance, and disposition of legally binding transactions to machine thinking. Moreover, I want to do so from the perspective of one who is neither a true believer in the purported technological singularity to come nor a digital Luddite.

There are two primary themes. First, the extent to which complex transactions occurring over time can be embodied in computer programs, that is, the ability of contracts to be smarter rather than dumber, depends on the extent to which the subject of the transaction becomes not just a social fact but an institutional reality. The dumb contract is merely a map of an antecedent reality; the smart one is a real thing in itself. Second, smart rather than dumb contracts will require the translation of often fuzzy legal predicates, otherwise capable of expression in truth-functional logic, into digital proxies expressible in the unambiguous discrete units of code. The upshot of these two themes is that, at least until there is better evidence that a technological singularity will occur, deciding will remain something fundamentally different from reasoning by way of logic or code. Hence, for the time being, dumb contracts, ones that leave open the possibility of what Karl Llewellyn called “situation sense,” will persist.
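
To make the second theme concrete, here is a minimal sketch of how a fuzzy legal predicate might be collapsed into a digital proxy. The clause, the 20% threshold, and the data fields are hypothetical illustrations of my own, not drawn from any actual contract or smart-contract platform:

    # A minimal sketch of a fuzzy legal predicate ("material adverse
    # change") collapsed into a discrete digital proxy. The clause, the
    # threshold, and the data fields are hypothetical illustrations.

    from dataclasses import dataclass

    @dataclass
    class QuarterlyReport:
        revenue: float
        prior_revenue: float

    def material_adverse_change_proxy(report: QuarterlyReport,
                                      threshold: float = 0.20) -> bool:
        """Digital proxy: treat a revenue decline greater than
        `threshold` as a "material adverse change". The fuzzy legal
        question (was the decline material in context?) becomes a
        bright-line arithmetic test a machine can decide."""
        decline = (report.prior_revenue - report.revenue) / report.prior_revenue
        return decline > threshold

    # A 21% decline trips the proxy; a 19% decline does not, however
    # devastating it may be in context. That gap is what the dumb
    # contract leaves to human "situation sense".
    print(material_adverse_change_proxy(QuarterlyReport(79.0, 100.0)))  # True
    print(material_adverse_change_proxy(QuarterlyReport(81.0, 100.0)))  # False

The proxy is decidable by a machine precisely because it is over- and under-inclusive relative to the legal standard it stands in for; that trade-off is the crux of the second theme.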

Halting, Intuition, Heuristics, and Action: Alan Turing and the Theoretical Constraints on AI-Lawyering, __ Savannah L. Rev. ___ (2018).

This is a reflection on the relationship between lawyering and artificial intelligence. Its goal is a better understanding of the theoretical constraints on the latter. The first part is an assessment of one particular and crucially important aspect of the theory of machine thinking: determining whether the program being run will ever reach a conclusion. This is known as the “Halting Problem.” One question at the far reaches of AI capability is whether any physical machine presently conceivable could always, on its own, for every possible program, determine whether the program will ultimately generate an answer. The essence of the Halting Problem is that the answer to that specific question is “no.” Hence, unless a human programs the machine to decide short of a final answer being generated, the machine won’t itself be able to decide whether it has thought enough and it is time to fish or cut bait. The second part is a philosophical reflection on what it means to decide something, as opposed merely to thinking about it. Humans don’t have a Halting Problem. Even if they think as logically and formally as a machine, they also act. The thesis is that humans, facing any problem, seem always able to stop thinking and start doing, even if they don’t know whether the thinking is, or ever will be, complete. The third part is an assessment of what a law school of the future ought to look like, given this moderate view of the interaction between thinking machines and deciding humans.
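
For readers who want the formal intuition behind the first part, here is the standard diagonalization sketch of Turing’s result, written as Python pseudocode. The function names are mine, and `halts` is deliberately unimplemented: it is the universal decider the argument shows cannot exist:

    # The classic diagonalization argument behind the Halting Problem,
    # sketched in Python. `halts` is the hypothetical universal decider;
    # Turing's result is that no correct total implementation of it can
    # exist, so it deliberately raises an error here.

    def halts(program, argument) -> bool:
        """Hypothetically returns True iff program(argument) eventually
        stops. No real implementation can be right on every input."""
        raise NotImplementedError("no such decider can exist")

    def diagonal(program):
        # Do the opposite of whatever `halts` predicts about a program
        # run on its own source.
        if halts(program, program):
            while True:      # loop forever if told it would halt
                pass
        return "halted"      # halt if told it would loop

    # The contradiction: consider diagonal(diagonal). If `halts` says it
    # halts, it loops forever; if `halts` says it loops, it halts. Either
    # way `halts` is wrong about at least one input, so no machine can
    # compute it.

The sketch is why no machine can always know, on its own, when it has thought enough; the essay’s claim is that humans escape the bind not by solving the problem but by acting anyway.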

© Jeffrey Lipshaw 2013