Coding into cracks – How inherent flaws of law can be exploited by artificial intelligence

Essay submitted for the 47th Saint Gallen Symposium.

For more than 50 years, researchers have been developing the theoretical foundations for using Artificial Intelligence (A.I.) in law. But only recently have the first A.I. systems made their way into large law firms. For now, A.I. systems act as assistants; however, their responsibilities are expected to grow over the next years. As they do so, computers will face the complicated challenge of understanding human language and, more specifically, law. But could it be that they will also do this better than us? A common opinion is "yes, because laws are sets of logical, hierarchical rules, which computers handle well" and, conversely, "no, because computers follow fixed instructions and therefore cannot reason". We will know the answer in the foreseeable future, but both justifications are wrong. In this essay, we will try to show why laws cannot be interpreted as simple mathematical statements, how computers are able to cope with this, and how this may challenge judicial procedures and help us see flaws in how laws are made and amended today.

The cracks

It is often believed that law is a logical and consistent set of rules or, more modestly, that inconsistencies make up only a small set within the vast universe of laws discussed and written by parliamentary institutions based on democratic principles and constantly under review. However, this is not strictly true. Holmes, an associate justice of the US Supreme Court in the early twentieth century, famously wrote: "The life of the law has not been logic; it has been experience".

Seminal work by the philosophers Jacques Derrida, Niklas Luhmann and Rudolf Wiethölter discussed the fact that law is riddled with inconsistencies [1], even though lay intuition may insist that laws follow strict logical reasoning and are accepted and legitimated on that basis. Although there are many examples of such contradictions, which can be drawn from the discussions in the book "Paradoxes and Inconsistencies in the Law" [2], we would like to avoid the interpretation that these flaws belong to a specific law, country or political inclination, and will instead discuss law in the broadest sense possible.

Laws contain inconsistencies for several reasons. Legislation adapts in reaction to its historical and social context, and there is usually little extended deliberation before laws have to be changed. This is directly related to the close ties between law and tradition, a relation that is at once a cause of inconsistencies and a palliative for them: laws make no commitment to mathematical rigour, but they are subject to further judgment, review and iteration that keeps them adequate and legitimate. And, most importantly for this essay, laws contain inconsistencies because they are almost always made and interpreted using classical logic [3], which cannot be applied to systems with paradoxical definitions. Surprisingly to many, there are other logic systems, some of which handle inconsistencies better, such as paraconsistent [4] and defeasible logic [5].
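To make the idea of defeasible logic concrete, here is a minimal sketch (not the formalism of the cited literature; the rules and names are invented for illustration): rules fire when their premises hold, but a conclusion can be defeated by a superior rule concluding the opposite.

```python
# Minimal sketch of defeasible reasoning: conflicting rules are
# resolved by a superiority relation, so a more specific rule can
# override a general one. Illustrative example, not a real system.

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    premises: frozenset   # facts required for the rule to fire
    conclusion: str       # a literal, e.g. "binding" or "-binding"

def negate(literal):
    return literal[1:] if literal.startswith("-") else "-" + literal

def conclude(facts, rules, superior):
    """Fire every applicable rule, then drop conclusions that are
    defeated by a superior rule concluding the opposite literal."""
    fired = [r for r in rules if r.premises <= facts]
    results = set()
    for r in fired:
        attackers = [a for a in fired if a.conclusion == negate(r.conclusion)]
        if not any((a.name, r.name) in superior for a in attackers):
            results.add(r.conclusion)
    return results

# Classic pattern: contracts bind, but contracts signed under duress do not.
rules = [
    Rule("r1", frozenset({"contract"}), "binding"),
    Rule("r2", frozenset({"contract", "duress"}), "-binding"),
]
superior = {("r2", "r1")}  # the more specific rule wins the conflict

print(conclude({"contract"}, rules, superior))            # {'binding'}
print(conclude({"contract", "duress"}, rules, superior))  # {'-binding'}
```

Note how classical logic would simply collapse under the contradictory pair of rules, whereas the superiority relation lets both coexist and still yields a single answer.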

But the stated reasons why laws are flawed are by no means nihilistic and do not imply that laws should be disregarded. In a context in which law is made and interpreted by humans, vagueness is precisely what allows laws to conform [6]. Furthermore, making sense of legislation requires extensive knowledge not only of the laws themselves, but also of the current political situation and of human reasoning and nature. This knowledge is cumulative and therefore so extensive that it takes years to train a lawyer to reason through a complex case and, most importantly, to express her or his reasoning understandably and convincingly. For the complexity and importance of this job, the legal profession has for many centuries been praised and held in high esteem. Fairness and persuasion were commodities that few could offer, let alone computers. But this was to change in this decade.

The code

Attempts to automate activities related to law date back to the 1970s [7, 8, 9]. But, until recently, researchers could not tell whether computers would remain unable to do tasks such as collecting evidence, ordering data by relevance, synthesizing unstructured text and images, interacting with lawyers, sensing political and public moods, creating and supporting theses, and conveniently presenting a summary in a way humans can understand. Computers still do not excel at some of these tasks, but not long ago many researchers in the A.I. field believed it would be impossible for computers to execute them even at a basic level [7, 10, 11].

As a consequence, the practical use of A.I. until the 1990s was limited to simple tasks such as counting the frequency of words, simple word-context guessing and matching short text excerpts to input keywords [7, 12]. The technology at the time was highly experimental, and it took almost 50 years from the first attempts before A.I. software and humans shared tasks in large firms.
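The keyword matching mentioned above can be sketched in a few lines (the excerpts and query are invented for illustration; real systems of the era were more elaborate): excerpts are ranked by how many of the query keywords they contain.

```python
# Sketch of early keyword-based legal retrieval: rank short text
# excerpts by the number of query keywords they contain.
# The sample texts below are invented for illustration.

from collections import Counter
import re

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def keyword_score(excerpt, keywords):
    counts = Counter(tokenize(excerpt))
    return sum(counts[k] for k in keywords)

excerpts = [
    "The contract was signed under duress and is void.",
    "Negligence requires a duty of care owed to the plaintiff.",
    "A void contract confers no rights on either party.",
]
query = {"contract", "void"}

ranked = sorted(excerpts, key=lambda e: keyword_score(e, query), reverse=True)
for e in ranked:
    print(keyword_score(e, query), e)
```

The limitation is obvious: the scoring knows nothing about meaning, precedent or context, which is exactly the gap that the modern techniques discussed next try to close.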

Ross, for example, is a modern A.I. system already employed by law firms to do the same job as junior lawyers. Ross searches for relevant cases and is able to extract facts and conclusions from documents [13]. It can only do so because it combines multiple techniques from natural language processing, information retrieval, machine learning, computational linguistics, knowledge representation and knowledge reasoning [14]. Theoretical studies in these areas are relatively recent, and this is the first reason why A.I. software fell short of expectations until now. The second reason is that the computational power needed for such intensive tasks was not available until a few years ago.

So, what would a scenario look like in which A.I. is completely integrated with law and used in the judiciary system? Prof. Richard Susskind, from the University of Oxford and one of the earliest promoters of the use of A.I. in law, suggests that in the beginning two parallel processes will occur: the use of A.I. as assistants and a progressive increase in A.I. autonomy in judiciary tasks [15]. At the moment, the first seems to be true, as Ross is already used for this purpose, but Susskind proposes that soon people will use A.I. online platforms as consultants and that the role of A.I. in law will be to democratize access to legal advice. This scenario is optimistic but feasible, since many technology companies today adopt a business model of democratic access to technology. But a much less discussed aspect of this scenario is how our laws and public servants will cope with it.

Suppose a litigation in which both parties use human and A.I. lawyers, arbitrated by a judge who also uses an A.I. aid. It is not uncommon for very complex scenarios to develop, further coloured by the parties, and as a consequence there may be no clear precedents. The plaintiff and her or his A.I. colleague would be able to bring a statistically chosen set of previous legislation and metadata (information that describes information) to be presented by the lawyer. This metadata, which will guide the lawyer on how to make the case, concerns not only the law itself; it will also take into account the expertise of the judge and of the defence lawyer, making accurate guesses about the probability of winning the case. For example, software from the University of Liverpool, developed by the group led by Prof. Katie Atkinson, was able to correctly predict the outcome of 31 out of 32 real law cases sampled [16].

And, fairly, the judge and the defence lawyer will have the same awareness. In a case regarding a long-standing law, the plaintiff will be able to find a few hundred related precedents and pieces of legislation, a couple of them especially relevant, only to find out that those are themselves contradictory to other laws, as indicated by the defence lawyer. In the end, this potentially infinite loop will inevitably be terminated by a verdict. But the main problem is that all of this discussion will be added to the pile, increasing the risk of making the whole set of laws even more inconsistent.

One useful analogy for understanding this situation is the Socratic method. Socrates would walk through the street market of Athens talking to different people. As soon as an interlocutor made an assertion, Socrates would ask for its premises; usually 4 or 5 would suffice. Then, Socrates would show that, based on these premises, the interlocutor's thesis did not hold. And the point is: the more premises one adds, or the more of the original premises one changes from the beliefs the interlocutor started with, the less likely it becomes that the inconsistencies pointed out by Socrates can be untangled. Back to the court case: if we imagine that we are trying to make the laws sound like one consistent system, and that court decisions are the answers to Socrates' questions, we find one remarkable difference. Instead of a handful of premises, the law system of every country deals with literally dozens of millions of premises which, in our analogy, encompass legislation and precedents. And one point that may have passed unnoticed in this anecdote is that computers played the role of Socrates.
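For a handful of premises, Socrates' hunt for contradictions can even be mechanized by brute force (the premises below are an invented toy example, not drawn from any real statute): encode each premise as a constraint over truth values and check whether any assignment satisfies them all.

```python
# Sketch of a brute-force consistency check over a small premise set.
# Each premise is a function from a truth assignment to True/False;
# the set is consistent iff some assignment satisfies all premises.
# The premises below are a hypothetical toy example.

from itertools import product

def consistent(variables, premises):
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(p(assignment) for p in premises):
            return True
    return False

# "If a law is unjust, it does not bind", "every law binds",
# "this law is unjust" -- jointly contradictory.
premises = [
    lambda a: (not a["law_unjust"]) or (not a["binds"]),
    lambda a: a["binds"],
    lambda a: a["law_unjust"],
]
print(consistent(["binds", "law_unjust"], premises))       # False
print(consistent(["binds", "law_unjust"], premises[:2]))   # True
```

The catch, as the essay notes, is scale: with dozens of millions of premises, exhaustive checking is hopeless, which is why detecting legal inconsistencies is a research problem rather than an exercise.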

In the same way that Socrates led his interlocutors to think critically by making them realize inconsistencies, the use of A.I. might force us to see that our law systems may, in the long term, not reflect what we expect from them. Maybe Socrates himself reached this conclusion before being ordered to poison himself by a jury in Athens in 399 B.C. The point being: how will we react to the realization that law may become cumulatively less intuitive? And why would we listen to computers if philosophers and jurists have already pointed this out? One of the reasons is that the use of A.I. will allow us to look at law on a broader scale rather than as self-contained pieces and, with that, inconsistencies will become much clearer and quantifiable. For example, a team from Griffith University, Australia, now seeks to work with the Australian Taxation Office on the detection of loopholes in taxation laws and regulations [5]. As we said before, inconsistencies are often not a source of serious grief, thanks to the consensus that law should be interpreted and that those interpretations should be made hierarchical, archived and used for posterity. Therefore, the point is not the mere existence of inconsistencies, but their abundance to a degree at which they can be criminally exploited and undermine public confidence in the law.

While this may help shed light on the romantic view of the fairness of law, it might also diminish confidence in it below the threshold required for the legitimacy of institutions, giving the feeling that, as long as a law is old enough or complex enough that inconsistencies are likely to have been introduced, A.I. can be used as a tool to revert and postpone decisions. We may be harshly reminded that, in the same way a fruitful discussion requires common knowledge, consensus requires common ignorance. However, this seemingly dystopian scenario assumes that the way we make and interpret law remains the same in the coming years.

Concluding remarks

In the book Artificial Legal Intelligence, Pamela Gray writes: "There is now an opportunity to review legal intelligence and consciously determine any evolutionary leap in the form of codification" [17]. The author suggests that the challenging advance of A.I. in law is in fact an opportunity to reform the law system. Big changes in law systems are indeed rare and have happened only a few times in history, in reaction to moments of huge turmoil. However, if reforms driven by the use of A.I. become imminent, it will be an advantage to have thought beforehand about the implications of modifying our current law model. At this point, resistance should be expected from an institution largely based on tradition. To aid any transition, it is a requirement that world leaders, lawmakers and the judiciary are acquainted with the ongoing changes in the use of A.I. in law, and that the population has a minimum degree of programming literacy to understand and concur. To hold an uneducated opinion on this matter is to willingly assume the risk of choosing a sub-optimal solution when changes in what law is and how we do law become increasingly imperative.


[1] Gunther Teubner. Dealing with paradoxes of law: Derrida, Luhmann, Wiethölter. In Oren Perez and Gunther Teubner, editors, Paradoxes and Inconsistencies in the Law, chapter 2, pages 41–64. Bloomsbury Publishing, 2005.
[2] Oren Perez and Gunther Teubner. Paradoxes and Inconsistencies in the Law. Bloomsbury Publishing, 2005.
[3] Lee Loevinger. An introduction to legal logic. Indiana Law Journal, 27(4):1, 1952.
[4] Newton CA Da Costa and Walter A Carnielli. On paraconsistent deontic logic. Philosophia, 16(3):293–305, 1986.
[5] Grigoris Antoniou, David Billington, Guido Governatori, and Michael J Maher. On the modeling and analysis of regulations. In Proceedings of the Australian conference information systems, pages 20–29, 1999.
[6] Edward H Levi. An introduction to legal reasoning. The University of Chicago Law Review, 15(3):501–574, 1948.
[7] Bruce G Buchanan and Thomas E Headrick. Some speculation about artificial intelligence and legal reasoning. Stanford Law Review, pages 40–62,1970.
[8] L Thorne McCarty. Reflections on "Taxman": An experiment in artificial intelligence and legal reasoning. Harvard Law Review, pages 837–893, 1977.
[9] John McCarthy and Patrick J Hayes. Some philosophical problems from the standpoint of artificial intelligence. Readings in artificial intelligence, pages 431–450, 1969.
[10] Dan Hunter. Out of their minds: Legal theory in neural networks. Artificial Intelligence and Law, 7(2-3):129–151, 1999.
[11] David G Stork. HAL’s Legacy: 2001’s Computer as Dream and Reality. MIT Press, 1997.
[12] Richard E Susskind. Expert systems in law: A jurisprudential inquiry. Clarendon, 1987.
[13] BBC News. The tech start-up planning to shake up the legal world. http://, 2016. [Online; accessed 31-Jan-2017].
[14] David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. Building watson: An overview of the deepqa project. AI magazine, 31(3):59–79, 2010.
[15] Bloomberg Law. How A.I. Will Excel at Legal Work. 2016. [Online; accessed 31-Jan-2017].
[16] BBC Radio 4 Law in Action. Artificial Intelligence and the Law. 2016. [Online; accessed 31-Jan-2017].
[17] Pamela N Gray. Artificial legal intelligence. Ashgate Publishing Company, 1997.
