Predicting Supreme Court Decisions Using Artificial Intelligence

Predicting Supreme Court Outcomes Using AI?

Is it possible to predict the outcomes of legal cases – such as Supreme Court decisions – using Artificial Intelligence (AI)?  I recently had the opportunity to consider this question in a talk I gave at Stanford entitled “Machine Learning Within Law.”

At that talk, I discussed a very interesting new paper entitled “Predicting the Behavior of the Supreme Court of the United States” by Prof. Dan Katz (Mich. State Law), data scientist Michael Bommarito, and Prof. Josh Blackman (South Texas Law).

Katz, Bommarito, and Blackman used machine-learning AI techniques to build a computer model capable of predicting the outcomes of arbitrary Supreme Court cases with an accuracy of about 70% – a strong result.  This post will discuss their approach and why it was an improvement over prior research in this area.

Quantitative Legal Prediction

The general idea behind such approaches is to use computer-based analysis of existing data (e.g. data on past Supreme Court cases) to predict the outcome of future legal events (e.g. pending cases).  The approach of using data to inform legal predictions (as opposed to pure lawyerly analysis) has been largely championed by Prof. Katz – something he has dubbed “Quantitative Legal Prediction” in recent work.

Legal prediction is an important function that attorneys perform for clients.  Attorneys predict all sorts of things, ranging from the likely outcome of pending cases, the risk of liability, and estimates of damages, to the importance of various laws and facts to legal decision-makers.  Attorneys use a mix of legal training, problem-solving, analysis, experience, analogical reasoning, common sense, intuition, and other higher-order cognitive skills to engage in sophisticated, informed assessments of likely outcomes.

By contrast, the quantitative approach takes a different tack: applying advanced algorithms to data in order to produce data-driven predictions of legal outcomes (instead of, or in addition to, traditional legal analysis).  These data-driven predictions can provide additional information to support attorney analysis.

Predictive Analytics: Finding Useful Patterns in Data

Outside of law, predictive analytics has been widely applied to produce automated predictions in many contexts.  Real-world examples of predictive analytics include the automated product recommendations made by Amazon.com, the movie recommendations made by Netflix, and the search terms automatically suggested by Google.

Scanning Data for Patterns that Are Predictive of Future Outcomes

In general, predictive analytics approaches use advanced computer algorithms to scan large amounts of data to detect patterns.  These patterns can often be used to make intelligent, useful predictions about never-before-seen future data.  Many of these approaches employ “machine learning” techniques to engage in prediction.  I have written about some of the ways that machine-learning-based analytical approaches are starting to be used within law and the legal system.

Broadly speaking, machine learning refers to a research area studying computer systems that are able to improve their performance on some task over time with experience.  Such algorithms are specifically designed to detect patterns in data that can highlight non-obvious relationships or that can be predictive of future outcomes (such as detecting that Netflix users who like movie X also tend to like movie Y, and concluding that because you like movie X, you are likely to like movie Y).

Importantly, these algorithms are designed to “learn” – in the sense that they can change their own behavior to get better at some task – like predicting movie preferences – over time by detecting new, useful patterns within additional data.  Thus, the general idea behind predictive legal analytics is to examine data concerning past legal cases and use machine learning algorithms to detect and learn patterns that could be predictive of future case outcomes.
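As a toy illustration of this kind of pattern detection, here is a minimal Python sketch – with entirely made-up viewing data, none of it from Netflix – that finds the most frequently “liked together” pair of movies, the sort of co-occurrence pattern a recommender can exploit:

```python
# A toy co-occurrence sketch with made-up viewing data.
from collections import Counter
from itertools import combinations

# Each entry: the set of movies one (fictional) user liked.
user_likes = [
    {"Movie X", "Movie Y"},
    {"Movie X", "Movie Y", "Movie Z"},
    {"Movie X", "Movie Z"},
    {"Movie Y"},
]

# Count how often each pair of movies is liked together.
pair_counts = Counter()
for likes in user_likes:
    for pair in combinations(sorted(likes), 2):
        pair_counts[pair] += 1

# The most common pair suggests a rule: "likes A, so recommend B".
(movie_a, movie_b), count = pair_counts.most_common(1)[0]
print(f"Users who like {movie_a} often also like {movie_b} ({count} co-occurrences)")
```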

In such a machine learning approach – called supervised learning – we “train” the algorithm by providing it with examples of past data that have been definitively classified.  For example, there may be a body of existing data about Supreme Court cases, along with confirmed labels indicating whether the outcome was an affirmance or a reversal, together with other potentially predictive data, such as the lower circuit and the subject matter at issue.  The algorithm examines this training data to detect patterns and statistical correlations between variables and outcomes (e.g. 9th Circuit cases being more likely to be reversed) and builds a computer model that can be predictive of future outcomes.
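To make the supervised-learning workflow concrete, here is a minimal sketch in Python using scikit-learn.  The features (a lower-circuit number and an issue-area code) and the tiny training set are hypothetical stand-ins invented for illustration – this is not the authors’ actual model, feature set, or data:

```python
# A minimal supervised-learning sketch (scikit-learn); the features and
# labels below are invented for illustration, not the authors' data.
from sklearn.tree import DecisionTreeClassifier

# Each row is one past case: [lower circuit (1-13), issue-area code].
X_train = [[9, 2], [9, 5], [2, 2], [5, 1], [9, 1], [4, 5]]
y_train = [1, 1, 0, 0, 1, 0]  # 1 = reversed, 0 = affirmed

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)  # learn patterns from classified past cases

# Predict the outcome of a new, never-before-seen case.
print(model.predict([[9, 3]]))  # e.g. array([1]) -> predicted reversal
```

In this toy data the model would pick up the (invented) pattern that 9th Circuit cases tend to be reversed, which is exactly the kind of variable-to-outcome correlation a supervised learner extracts from training examples.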

To understand the contribution of Katz, Bommarito, and Blackman’s paper, it is helpful to briefly review some earlier research using data analytics to predict Supreme Court outcomes.

Prior Work in Analytical Supreme Court Prediction

Pioneering work in the area of quantitative legal prediction began in 2004 with a seminal project by Prof. Ted Ruger (U Penn), Andrew D. Martin (now dean at U Michigan), and other collaborators, employing statistical methods to predict Supreme Court outcomes.  That project pitted experts in legal prediction – law professors and attorneys – against a statistical model that had analyzed data about hundreds of past Supreme Court cases.

Somewhat surprisingly, the computer model significantly outperformed the experts in predictive ability.  The computer model correctly forecasted 75% of Supreme Court outcomes, while the experts had only a 59% success rate in predicting affirm/reverse decisions.  (The computer and the experts performed roughly the same in predicting the votes of individual justices – as opposed to the ultimate outcome – with the computer getting 66.7% of predictions correct vs. the experts’ 67.9%.)

Improvements by Katz, Bommarito, and Blackman (2014)

The work by Ruger, Martin, et al. – while pioneering – left some room for improvement.  One limitation was that their predictive model – while highly predictive over the relatively short time frame examined (the October 2002 term) – was not thought to be broadly generalizable to predicting arbitrary Supreme Court cases across any timespan.  A primary reason was that the period of Supreme Court cases that they examined to build their model – roughly 1994–2000 – involved an unusually stable Court.  Notably, this period exhibited no change in personnel (i.e. no justices leaving the Court and no new justices being appointed).

A model that was “trained” on data from an unusually stable period of the Supreme Court, and tested on a short caseload from a similarly stable term, might not perform as accurately when applied to a broader or less homogeneous period, and might not handle changes in Court composition in a robust manner.

Ideally, we would want any such predictive model to be flexible and generalizable enough to handle significant changes in personnel and still produce accurate predictions.  Additionally, such a model should be general enough to predict case outcomes with a relatively consistent level of accuracy regardless of the term or period of years examined.

Katz, Bommarito, and Blackman: Machine Learning And Random Forests

Building upon Ruger et al.’s pioneering work, Katz, Bommarito, and Blackman improved upon it by employing a relatively new machine learning approach known as “Random Forests.”  Without getting into the details, it is important to note that Random Forest approaches have been shown to be quite robust and generalizable compared to other modeling approaches in contexts such as this.  The authors applied this algorithmic approach to data about past Supreme Court cases found in the Supreme Court Database.  In addition to the outcome (e.g. affirmed or reversed), this database contains hundreds of variables about nearly every Supreme Court decision of the past 60 years.

Recall that machine learning approaches often work by providing an algorithm with existing data (such as data concerning past Supreme Court case outcomes and potentially predictive variables such as the lower circuit) in order to “train” it.  The algorithm looks for patterns and builds an internal computer model that can then be used to make predictions about future, never-before-seen data – such as a pending Supreme Court case.
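As a rough sketch of what this training/prediction loop might look like in code, the following uses scikit-learn’s RandomForestClassifier on synthetic data.  The feature columns are hypothetical stand-ins for Supreme Court Database variables; the paper’s actual feature engineering and modeling pipeline are considerably more elaborate:

```python
# A minimal sketch, assuming synthetic stand-in data: train a random
# forest on "past cases" and measure accuracy on held-out cases.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cases = 1000
# Hypothetical feature matrix: columns might encode lower circuit,
# issue area, petitioner type, etc. (invented for illustration).
X = rng.integers(0, 13, size=(n_cases, 4))
y = rng.integers(0, 2, size=n_cases)  # 1 = reverse, 0 = affirm (fake labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)  # "train" on past cases
print("held-out accuracy:", forest.score(X_test, y_test))
# With random labels this hovers near 50%; real case data with genuinely
# predictive variables is what pushed the paper's model toward ~70%.
```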

Katz, Bommarito, and Blackman did this and produced a new, robust machine-learning-based computer model that correctly forecasted ~70% of Supreme Court affirm/reverse decisions.

This was a significant improvement over prior work.  Although Ruger et al.’s model had a 75% prediction rate for the single period it was tested against, Katz et al.’s model was a much more robust, generalizable model.

The new model is able to withstand changes in Supreme Court composition and still produce accurate results, even when applied across widely variable Supreme Court terms with varying levels of case predictability.  In other words, it is unlikely that the Ruger model – focused only on the October 2002 term – would produce a 75% rate across a 50-year range of Supreme Court jurisprudence.  By contrast, the model produced by Katz et al. consistently delivered a 70% prediction rate across nearly 8,000 cases spanning 50+ years.
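One common way to test that kind of robustness is a term-by-term, walk-forward evaluation: for each term, train only on cases decided in earlier terms, then predict that term’s outcomes.  The sketch below illustrates the idea on synthetic data; it is an assumption-laden toy, not the authors’ evaluation code:

```python
# A walk-forward sketch on synthetic data: for each "term", train only
# on earlier terms, then predict that term's outcomes. Everything here
# is invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
terms = list(range(1960, 1970))  # hypothetical range of terms
data = {t: (rng.integers(0, 13, size=(80, 4)),  # ~80 cases per term
            rng.integers(0, 2, size=80))        # affirm/reverse labels
        for t in terms}

for term in terms[3:]:  # keep a few early terms as initial training data
    past = [data[t] for t in terms if t < term]
    X_train = np.vstack([X for X, _ in past])
    y_train = np.concatenate([y for _, y in past])
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)
    X_term, y_term = data[term]
    print(term, "accuracy:", round(model.score(X_term, y_term), 2))
```

A model whose per-term accuracy stays roughly flat under this kind of protocol – including across terms where the Court’s membership changes – is the sense in which the Katz, Bommarito, and Blackman result is “robust.”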

Conclusion: Prediction in Law Going Forward

Katz, Bommarito, and Blackman’s paper is an important contribution.  In the not-too-distant future, such data-driven approaches to legal prediction are likely to become more common within law.  Outside of law, data analytics and machine learning have been transforming industries ranging from medicine to finance, and it is unlikely that law will remain as comparatively untouched by such sweeping changes as it is today.

In future posts I will discuss machine learning within law more generally, principles for understanding what such AI techniques can, and cannot, do within law given the current state of the technology, and some implications of these technological changes.

Machine Learning and Law Talk @ Stanford Law School (October 9)

Please join me on October 9, 2014 at Stanford Law School for a talk entitled “Machine Learning Within Law”.

As the name suggests, this talk will be focused on current and future applications involving machine-learning automation in the practice of law.

The paper upon which the talk is based, “Machine Learning and Law” (2014 Univ. Wash. Law Review), can be downloaded here.

More details about the talk can be found here on Stanford’s website.