Guest Post: Is the US Supreme Court in lockstep with Congress when it comes to abortion?

This guest post is part of the CAP Research Community Series. This series highlights research, applications, and projects created with Caselaw Access Project data.

Abdul Abdulrahim is a graduate student at the University of Oxford completing a DPhil in Computer Science. His primary interests are in the use of technology in government and law and developing neural-symbolic models that mitigate the issues around interpretability and explainability in AI. Prior to the DPhil, he worked as an advisor to the UK Parliament and a lawyer at Linklaters LLP.

The United States of America (U.S.) has seen declining public support for its major political institutions and a general disengagement from the processes and outcomes of the branches of government. According to Pew's Public Trust in Government survey earlier this year, "public trust in the government remains near historic lows," with only 14% of Americans stating that they can trust the government to do "what is right" most of the time. We believed this falling support could affect the relationship between the branches of government and the independence each branch enjoys.

One indication of this was a study on congressional law-making which found that Congress was more than twice as likely to overturn a Supreme Court decision when public support for the Court was at its lowest level than when it was at its highest (Nelson & Uribe-McGuire, 2017). Furthermore, another study found that it was more common for Congress to legislate against Supreme Court rulings that ignored legislative intent or rejected positions taken by federal, state, or local governments, owing to ideological differences (Eskridge Jr, 1991).

To better understand how the interplay between the U.S. Congress and the Supreme Court has evolved over time, we developed a method for tracking the ideological changes in each branch using word embeddings trained on text corpora from each institution. For the Supreme Court, we used the case opinions provided in the CAP dataset, which we extended to include other federal court opinions to ensure our results were stable. For Congress, we used the transcribed speeches of Congress from Stanford's Social Science Data Collection (SSDS) (Gentzkow & Taddy, 2018). We used the case study of reproductive rights (particularly the target word "abortion"), arguably one of the more contentious topics that ideologically divided Americans have struggled to agree on. Over the decades, we have seen shifts in the interpretation of rights by both the U.S. Congress and the Supreme Court that have arguably led to an expansion of reproductive rights in the 1960s and a contraction in the subsequent decades.

What are word embeddings?

To track these changes, we used a quantitative method from computational linguistics for measuring semantic shift, based on the co-occurrence statistics of words in our corpora of Congress speeches and the Court's judicial opinions. The resulting representations are known as word embeddings: distributed representations of text that are among the key breakthroughs behind the impressive performance of deep learning methods on challenging natural language processing problems. Using the text corpora as a proxy, they allow us to see how each institution has leaned ideologically on the issue of abortion over the years, and whether any particular case led to an ideological divide or alignment.

For a more detailed account of word embeddings and the different algorithms used, I highly recommend Sebastian Ruder's "On word embeddings".

Our experimental setup

In tracking the semantic shifts, we evaluated two approaches built on the word2vec algorithm. Conceptually, we formulate the task of discovering semantic shifts as follows: given a time-sorted series of corpora (corpus 1, corpus 2, ..., corpus n), we locate our target word and its meanings in the different time periods. We chose word2vec based on comparisons of the performance of different algorithms (count-based, prediction-based, and hybrids of the two) on a corpus of U.S. Supreme Court opinions. We found that although the choice of algorithm introduces variability in coherence and stability, the word2vec models show the most promise in capturing the wider interpretation of our target word. Between the two word2vec algorithms, Continuous Bag of Words (CBOW) and Skip-Gram with Negative Sampling (SGNS), we observed similar performance; however, the latter showed more promising results in capturing case law related to our target word at a specific time period.

We tested one algorithm, a low-dimensional representation learned with SGNS, under both the incremental updates method (IN) and the diachronic alignment method (AL), giving us two models: SGNS (IN) and SGNS (AL). In our implementation, we used parts of the Python library gensim and supplemented these with the implementations by Dubossarsky et al. (2019) and Hamilton et al. (2016b) for tracking semantic shifts. For the SGNS (AL) model, we extracted regular word-context pairs (w, c) for each time slice and trained SGNS on each slice separately. For the SGNS (IN) model, we similarly extracted the word-context pairs (w, c), but rather than dividing the corpus and training on separate time bins, we trained on the first time period and then incrementally added new words, updating and saving the model.
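
A minimal sketch of the two training regimes using gensim's Word2Vec class (gensim 4.x parameter names, with the hyperparameters reported below); this is illustrative rather than the exact pipeline, and load_tokenized_slices is a hypothetical loader returning one tokenized corpus per time period.

# Illustrative sketch of the two SGNS training regimes described above.
from gensim.models import Word2Vec

corpus_slices = load_tokenized_slices()  # hypothetical: one tokenized corpus per time period

params = dict(vector_size=300, window=5, min_count=200,
              sg=1, negative=5, ns_exponent=0.75, epochs=1)

# SGNS (AL): train an independent model per time slice; the resulting vector
# spaces are then aligned (e.g., with an orthogonal Procrustes rotation).
aligned_models = [Word2Vec(sentences=corpus, **params) for corpus in corpus_slices]

# SGNS (IN): train on the first slice, then incrementally update the same model
# with each later slice, saving it after every period.
incremental = Word2Vec(sentences=corpus_slices[0], **params)
for corpus in corpus_slices[1:]:
    incremental.build_vocab(corpus, update=True)
    incremental.train(corpus, total_examples=incremental.corpus_count,
                      epochs=incremental.epochs)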

To tune our algorithm, we performed two main evaluations (intrinsic and extrinsic) on samples of our corpora, comparing the performance across different hyperparameters (window size and minimum word frequency). Based on these results, the parameters used were MIN = 200 (minimum word frequency), WIN = 5 (symmetric window cut-off), DIM = 300 (vector dimensionality), CDS = 0.75 (context distribution smoothing), K = 5 (number of negative samples) and EP = 1 (number of training epochs).

What trends did we observe in our results?

We observed some notable trends in the changes among the nearest neighbours to our target word. The nearest neighbours to "abortion" indicate how the speakers or writers who generated our corpus associate the word and what connotations it might have within each group.
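
Once a model is trained, nearest-neighbour lists like the ones in the tables below can be produced with gensim's most_similar query; a minimal sketch, assuming model is a trained Word2Vec model:

# Query the ten nearest neighbours to the target word by cosine similarity.
neighbours = model.wv.most_similar("abortion", topn=10)
for word, cosine in neighbours:
    print(f"{word}\t{cosine:.3f}")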

To better assess our results, we conducted an expert interview with a Women's and Equalities Specialist to categorise the words as: (i) a medically descriptive word, i.e., it relates to common medical terminology on the topic; (ii) a legally descriptive word, i.e., it relates to case, legislation, or opinion terminology; and (iii) a potentially biased word, i.e., it is not a legal or medical term and thus was chosen by the user as a descriptor.

Nearest Neighbours Table Key. Description of keys used to classify words in the nearest neighbours by type of terminology. These were based on the insights derived from an expert interview.

Table showing "Category", "Colour Code", and "Description" for groups of words.

A key observation we made about the approaches to tracking semantic shifts is that, depending on what type of cultural shift we intend to track, we might want to pick a different method. The incremental updates approach helps identify how parts of a word sense from preceding time periods change in response to cultural developments in the new time period. For example, we see how the relevance of Roe v. Wade (1973) changes across all time periods in our incremental updates model for the judicial opinions.

In contrast, the diachronic alignment approach better reflects what the issues of that specific period are in the top nearest neighbours. For instance, the case of Roe v. Wade (1973) appears in the nearest neighbours for the judicial opinions shortly after it is decided in the decade up to 1975 but drops off our top words until the decades up to 1995 and 2015, where the cases of Webster v. Reproductive Health Services (1989), Planned Parenthood v. Casey (1992) and Gonzales v. Carhart (2007) overrule aspects of Roe v. Wade (1973) — hence, the new references to it. This is useful for detecting the key issues of a specific time period and explains why it has the highest overall detection performance of all our approaches.

Local Changes in U.S. Federal Court Opinions. The top 10 nearest neighbours to the target word "abortion" ranked by cosine similarity for each model.

Table displaying "Incremental Updates" for the years 1965, 1975, 1985, 1995, 2005, and 2015.

Table displaying "Diachronic Alignment" for the years 1965, 1975, 1985, 1995, 2005, and 2015.

Local Changes in U.S. Congress Speeches. The top 10 nearest neighbours to the target word "abortion" ranked by cosine similarity for each model.

Table displaying "Incremental Updates" for the years 1965, 1975, 1985, 1995, 2005, and 2015.

Table displaying "Diachronic Alignment" for the years 1965, 1975, 1985, 1995, 2005, and 2015.

These preliminary insights allow us to understand some of the interplay between the Courts and Congress on the topic of reproductive rights. The method also offers a way to identify bias and how it may feed into the process of lawmaking. For future work, we aim to refine the methods to serve as a guide for operationalising word embedding models to identify bias, as well as to address the issues that arise when they are applied to legal or political corpora.

Using Machine Learning to Extract Nuremberg Trials Transcript Document Citations

In Harvard's Nuremberg Trials Project, being able to link to cited documents in each trial's transcript is a key feature of site navigation. Each document submitted into evidence by prosecution and defense lawyers is introduced in the transcript and discussed, and at each mention the site user can click open the document and view its contents and attendant metadata. While document references generally follow various standard patterns, deviations from those patterns, large and small, are numerous, and correctly identifying the type of document reference (is this a prosecution or defense exhibit, for example?) can be quite tricky, often requiring teasing out contextual clues.

While manual linkage is highly accurate, it is infeasible over a corpus of 153,000 transcript pages and more than 100,000 document references to manually tag and classify each mention of a document, whether it is a prosecution or defense trial exhibit or a source document from which those exhibits were often chosen. Automated approaches offer the most likely promise of a scalable solution, with strategic, manual, final-mile workflows responsible for cleanup and optimization.

Initial prototyping by Harvard of automated document reference capture focused on the use of pattern matching in regular expressions. Targeting only the most frequently found patterns in the corpus, Harvard was able to extract more than 50,000 highly reliable references. While continuing with this strategy could have found significantly more references, it was not clear that once identified, a document reference could be accurately typed without manual input.
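
The exact patterns aren't shown here, but a regular expression of the general kind described might look like the following sketch, which captures simple references such as "Exhibit 435" or "Document NO-123" (the pattern is illustrative only, not the project's actual rule set):

# Illustrative pattern only: capture simple document references in a passage.
import re

pattern = re.compile(r"\b(?:Exhibit|Document)\s+(?:No\.\s*)?([A-Z]{0,3}-?\d+)\b",
                     re.IGNORECASE)

text = "The prosecution then offered Exhibit 435 and Document NO-123 into evidence."
print(pattern.findall(text))  # ['435', 'NO-123']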

At this point Harvard connected with Tolstoy, a natural language processing (NLP) AI startup, to ferret out the rest of the tags and identify them by type. Employing a combination of machine learning and rule-based pattern matching, Tolstoy was able to extract and classify the bulk of remaining document references.

Background on Machine Learning

Machine learning is a comprehensive branch of artificial intelligence. It is, essentially, statistics on steroids. Working from a “training set” – a set of human-labeled examples – a machine learning algorithm identifies patterns in the data that allow it to make predictions. For example, a model that is supplied many labeled pictures of cats and dogs will eventually find features of the cat images that correlate with the label “cat,” and likewise, for “dog.” Broadly speaking, the same formula is used by self-driving cars learning how to respond to traffic signs, pedestrians, and other moving objects.

In Harvard's case, a model was needed that could learn to extract and classify, using a labeled training set, document references in the court transcripts. To enable this, one of the main features used was surrounding context, including possible trigger words that can be used to determine whether a given trial exhibit was submitted by the prosecution or defense. To be most useful, the classifier needed to be accurate (references correctly labeled as either prosecution or defense), precise (minimal false positives), and high in recall (few missed references).

Feature Engineering

The first step in any machine learning project is to produce a thorough, unbiased training set. Since Harvard staff had already identified 53,000 verified references, Tolstoy used those, along with an additional set generated using more precise heuristics, to train a baseline model.

The model is the predictive algorithm. There are many different families of models a data scientist can choose from. For example, one might use a support vector machine (SVM) if there are fewer examples than features, a convolutional neural net (CNN) for images, or a recurrent neural net (RNN) for processing long passages requiring memory. That said, the model is only a part of the entire data processing pipeline, which includes data pre-processing (cleaning), feature engineering, and post-processing.

Here, Tolstoy used a "random forest" algorithm. This method builds an ensemble of decision-tree classifiers, with nodes, or branches, representing points at which the training data is subdivided based on feature characteristics. The random forest classifier aggregates the final decisions of a suite of decision trees, predicting the class most often output by the trees. The entire process is randomized, as each tree selects a random subset of the training data and a random subset of features to use for each node.

Models work best when they are trained on the right features of the data. Feature engineering is the process by which one chooses the most predictive parts of available training data. For example, predicting the price of a house might take into account features such as the square footage, location, age, amenities, recent remodeling, etc.

In this case, we needed to predict the type of document reference involved: was it a prosecution or defense trial exhibit? The exact same sequence of characters, say "Exhibit 435," could be either defense or prosecution, depending on – among other things – the speaker and how they introduced it. Tolstoy used features such as the speaker, the presence or absence of prosecution or defense attorneys' names (or that of the defendant), and the presence or absence of country name abbreviations to classify the references.
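
Putting the two previous ideas together, a hypothetical sketch of this kind of pipeline, with simplified hand-built features feeding scikit-learn's RandomForestClassifier (the cue words, feature choices, and toy data are assumptions, not Tolstoy's actual implementation):

# Hypothetical sketch: classify a reference as prosecution (1) or defense (0).
from sklearn.ensemble import RandomForestClassifier

PROSECUTION_CUES = {"prosecution", "prosecutor"}   # illustrative cue words
DEFENSE_CUES = {"defense", "defendant", "counsel"}

def featurize(context_words, speaker_is_prosecutor):
    """Turn the words around a reference into a small numeric feature vector."""
    words = {w.lower() for w in context_words}
    return [
        int(speaker_is_prosecutor),
        len(words & PROSECUTION_CUES),
        len(words & DEFENSE_CUES),
    ]

# Toy training data standing in for the labeled references.
X = [
    featurize(["the", "prosecution", "offers", "exhibit"], True),
    featurize(["counsel", "for", "the", "defendant", "submits"], False),
]
y = [1, 0]  # 1 = prosecution exhibit, 0 = defense exhibit

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([featurize(["prosecution", "exhibit", "435"], True)]))  # [1]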

Post-Processing

Machine learning is a great tool in a predictive pipeline, but in order to gain very high accuracy and recall rates, one often needs to combine it with heuristics-based methods as well. For example, in the transcripts, phrases like “submitted under” or “offered under” may precede a document reference. These phrases were used to catch references that had previously been missed. Other post-processing included catching and removing tags from false positives, such as years (e.g. January 1946) or descriptions (300 Germans). These techniques allowed us to preserve high precision while maximizing recall.
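
A small sketch of this kind of heuristic filter, assuming candidate strings have already been captured (the specific patterns are illustrative, not the project's actual rules):

# Drop candidate matches that are really dates or descriptions, not references.
import re

MONTHS = r"January|February|March|April|May|June|July|August|September|October|November|December"
false_positive = re.compile(rf"\b(?:{MONTHS})\s+\d{{4}}\b|\b\d+\s+Germans\b")

candidates = ["Exhibit 435", "January 1946", "300 Germans"]
kept = [c for c in candidates if not false_positive.search(c)]
print(kept)  # ['Exhibit 435']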

Collaborative, Iterative Build-out

In the build-out of the data processing pipeline, it was important for both Tolstoy and Harvard to carefully review interim results, identify and discuss error patterns and suggest next-step solutions. Harvard, as a domain expert, was able to quickly spot areas where the model was making errors. These iterations allowed Tolstoy to fine-tune the features used in the model, and amend the patterns used in identifying document references. This involved a workflow of tweaking, testing and feedback, a cycle repeated numerous times until full process maturity was reached. Ultimately, Tolstoy was able to successfully capture more than 130,000 references throughout the 153,000 pages, with percentages in the high 90s for accuracy and low 90s for recall. After final data filtering and tuning at Harvard, these results will form the basis for the key feature enabling interlinkage between the two major data domains of the Nuremberg Trials Project: the transcripts and evidentiary documents. Working together with Tolstoy and machine learning has significantly reduced the resources and time otherwise required to do this work.

Getting Started with Caselaw Access Project Data

Today we’re sharing new ways to get started with Caselaw Access Project data using tutorials from The Programming Historian and more.

The Caselaw Access Project makes 360 years of U.S. case law available as a machine-readable text corpus. In developing a research community around the dataset, we’ve been creating and sharing resources for getting started.

In our gallery, we’ve been developing tutorials and our examples repository for working with our data alongside research results, applications, fun stuff, and more:

The Programming Historian shares peer-reviewed tutorials for computational workflows in the humanities. Here is a group of their guides for working with text data, from processing to analysis:

We want to share and build ways to start working with Caselaw Access Project data. Do you have an idea for a future tutorial? Drop us a line to let us know!

Guest Post: Creating a Case Recommendation System Using Gensim’s Doc2Vec

This guest post is part of the CAP Research Community Series. This series highlights research, applications, and projects created with Caselaw Access Project data.

Minna Fingerhood graduated from the University of Pennsylvania this past May with a B.A. in Science, Technology, and Society (STSC) and Engineering Entrepreneurship. She is currently a data scientist in New York and is passionate about the intersection of technology, law, and data ethics.

The United States Criminal Justice System is the largest in the world. With more than 2.3 million inmates, the US holds more than 25% of the world's prison population, even though it accounts for only about 5% of the world's general population. As a result, and perhaps unsurprisingly, the system has become increasingly congested, inefficient, and at times indifferent.

The ramifications of an overpopulated prison system are far-reaching: from violent and unsanitary prison conditions; to backlogged criminal courts; to overworked and often overwhelmed public defenders. These consequences are severe and, in many cases, undermine the promise of fair and equal access to justice. In an effort to help address these challenges, the government has employed various technologies that aim to increase efficiency and decrease human bias throughout many stages of the system. While these technologies and innovations may have been implemented with earnest intentions, in practice, they often serve bureaucratic imperatives and can reinforce and magnify the bias they intend to eliminate.

Given this context, it is particularly important to emphasize technologies that serve to democratize and decentralize institutional and governmental processes, especially in connection with the Criminal Justice System as each decision carries enormous gravity. Therefore, with the help of the Caselaw Access Project (CAP), I have developed a legal case recommendation system that can be used by underserved defendants and their attorneys.

This effort was inspired by the work of the Access to Justice Movement, which uses data to lower barriers to the legal system, as well as by the landmark Supreme Court decision Gideon v. Wainwright (1963), which guarantees underserved defendants the right to counsel under the Sixth Amendment. As Justice Black wrote in the majority opinion for this case: “[The American Constitution is] designed to assure fair trials before impartial tribunals in which every defendant stands equal before the law. This noble ideal cannot be realized if the poor man charged with crime has to face his accusers without a lawyer to assist him.” While this case afforded poor defendants the right to free counsel, the meaning of ‘fair trial’ has become compromised in a system where a public defender can have, on average, over 200 cases.

My case recommendation system attempts to empower defendants throughout this flawed process by finding the 15 cases most similar to their own based primarily on text. In theory, my application could be made available to individuals awaiting trial, thereby facilitating their own thoughtful contributions to the work of their public defender. These contributions could take many forms such as encouraging the use of precedent cited in previously successful cases or learning more about why judges deemed other arguments to be unconvincing.

My application hosted locally through Plotly Dash.

My project primarily relies on natural language processing, as I wanted to match cases through text. For this, I chose to use the majority opinion texts for cases from 1970 to 2018. I used majority opinion text because it was available in the CAP dataset, but in the future I look forward to expanding beyond the scope of CAP to include text from news reports, party filings, and even case transcripts. Further, while 1970 is a fairly arbitrary cut-off date, it also marks the year in which the term “War on Drugs” was coined, after which the prison system rapidly evolved into one marked by mass incarceration; I believed this starting point would make the dataset more relevant to criminal court cases. Lastly, the data does not include cases after June 2018, because the CAP dataset does not contain volumes past that date.

In creating semantic meaning from the text, I used Doc2Vec (through Python’s Gensim package), a derivative of the better-known Word2Vec. This method of language processing relies on a shallow neural net to generate a document vector for every court case. The vectors are built from word embeddings, that is, representations of words based on where they appear relative to other words, which give the computer a working sense of the language. These vector representations of case text can then be compared to all other document vectors using cosine similarity to quantify likeness.
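
A minimal sketch of this setup with Gensim's Doc2Vec (gensim 4.x names; the two toy opinions stand in for the real majority-opinion texts):

# Build one tagged document per case, train Doc2Vec, then rank similar cases.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

opinions = {
    "case_1": "the defendant appealed the conviction".split(),
    "case_2": "the court affirmed the judgment below".split(),
}

documents = [TaggedDocument(words=tokens, tags=[case_id])
             for case_id, tokens in opinions.items()]

model = Doc2Vec(documents, vector_size=300, window=5, min_count=1, epochs=20)

# The 15 most similar cases to case_1 by cosine similarity of document vectors.
print(model.dv.most_similar("case_1", topn=15))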

Example: a PCA representation of chosen words in my Gensim Doc2Vec model. Note: I adjusted my output vectors to have a length of 300.

In addition to text, I included the year of the majority decision as a feature for indicating similarity, though at a much smaller weight: 7% of the weight given to the cosine similarity of the text vectors. In a scenario where two cases were equally textually similar to a sample case, I wanted the more recent one to appear first, but did not want date to overshadow text similarity.
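
One simple way to blend the two signals, assuming the 7% relative weight mentioned above (the exact formula used in the project is not spelled out, so this is only a sketch):

# Blend text similarity with recency; higher scores rank higher.
def blended_score(cosine_sim, case_year, query_year, year_weight=0.07, span=48):
    """span = 2018 - 1970, the range of decision years in the data."""
    year_closeness = 1.0 - abs(case_year - query_year) / span
    return cosine_sim + year_weight * year_closeness

# Two equally similar cases: the more recent one scores (and ranks) higher.
print(blended_score(0.80, 2015, 2018))  # ~0.866
print(blended_score(0.80, 1990, 2018))  # ~0.829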

I believe this project is a promising starting point for educating and empowering underserved defendants. The CAP dataset is a rich and expansive resource, and much more can be done to further develop this project. I am looking forward to including more text into the program as a means of increasing accuracy for my unsupervised model and providing the user with more informative resources. I also believe that creating additional optional parameters for recommendation, such as geography or individual judges, could substantially improve the efficacy of this search engine. Ultimately, I look forward to collaborating with others to expand this project so that individuals caught within the criminal justice system can better use technology as a tool for empowerment and transparency, thereby creating more opportunities for fairness and justice.

In that regard, I have posted additional information regarding this project on my GitHub at https://github.com/minnaf/Case_Recommendation_System. Please contact me if you are interested in collaborating or learning more at minnaf@sas.upenn.edu.

A Thought on Digitization

Although it is excellent, and I recommend it very highly, I had not expected Roy Scranton's Learning to Die in the Anthropocene to shed light on the Caselaw Access Project. Near the end of the book, he writes,

The study of the humanities is nothing less than the patient nurturing of the roots and heirloom varietals of human symbolic life. This nurturing is a practice not strictly of curation, as many seem to think today, but of active attention, cultivation, making and remaking. It is not enough for the archive to be stored, mapped, or digitized. It must be worked.

The value of the Caselaw Access Project is not primarily in preservation, saving space, or the abstraction of being machine-readable; it is in its new uses, the making of things from the raw material. We at LIL and others have begun to make new things, but we're only at the beginning. We hope you will join us, and surprise us.

Creating a Data Analysis Workspace with Voyant and the CAP API

This tutorial is an introduction to creating a data analysis workspace with Voyant and the Caselaw Access Project API. Voyant is a computational analysis tool for text corpora.

Import a Corpus

Let’s start by retrieving all full cases from New Mexico:

https://api.case.law/v1/cases/?jurisdiction=nm&full_case=true

Copy and paste that API call into the Add Texts box and select Reveal. Here’s more on how to create your own CAP API call.
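
If you want to preview what Voyant will receive, or build a corpus for a different state, the same call can be made directly; here is a quick sketch using Python's requests library (swap the "nm" jurisdiction slug to query another jurisdiction):

# Fetch the first page of full New Mexico cases from the CAP API.
import requests

resp = requests.get(
    "https://api.case.law/v1/cases/",
    params={"jurisdiction": "nm", "full_case": "true"},
)
print(resp.json()["count"])  # total number of matching cases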

Create Stopwords

You’ve just created a corpus in Voyant! Nice 😎. Next we’re going to create stopwords to minimize noise in our data.

In Voyant, hover over a section header and select the sliding bar icon to define options for this tool.

Blue sliding bar icon shown displaying text "define options for this tool".

From the Stopwords field shown here, select Edit List. Scroll to the end of default stopwords, and copy and paste this list of common metadata fields, OCR errors, and other fragments:

id
url
name
name_abbreviation
decision_date
docket_number
first_page
last_page
citations
volume
reporter
court
jurisdiction
https
api.case.law
slug
tbe
nthe

Once you’re ready, Save and Confirm.

Your stopwords list is done! Here’s more about creating and editing your list of stopwords.

Data Sandbox

Let’s get started. Voyant has out of the box tools for analysis and visualization to try in your browser. Here are some examples!

Summary: "The Summary provides a simple, textual overview of the current corpus, including (as applicable for multiple documents) number of words, number of unique words, longest and shortest documents, highest and lowest vocabulary density, average number of words per sentence, most frequent words, notable peaks in frequency, and distinctive words."

Here’s our summary for New Mexico case law.

TermsBerry: "The TermsBerry tool is intended to mix the power of visualizing high frequency terms with the utility of exploring how those same terms co-occur (that is, to what extent they appear in proximity with one another)."

Here's our TermsBerry.

Collocates Graph: "Collocates Graph represents keywords and terms that occur in close proximity as a force directed network graph."

Here's our Collocates Graph.

Today we created a data analysis workspace with Voyant and the Caselaw Access Project API.

To see how words are used in U.S. case law over time, try Historical Trends. Share what you find with us at info@case.law.

Guest Post: Do Elected and Appointed Judges Write Opinions Differently?

Unlike judges almost anywhere else in the world, most judges in the United States today are elected. But it hasn’t always been this way. Over the past two centuries, the American states have taken a variety of different paths, alternating among a variety of elective and appointive methods. Opponents of judicial elections charge that these institutions detract from judicial independence, harm the legitimacy of the judiciary, and put unqualified jurists on the bench; those who support judicial elections counter that, by publicly involving the American people in the process of judicial selection, judicial elections can enhance judicial legitimacy. To say this has been an intense debate of academic, political, and popular interest is an understatement.

Surprisingly little attention has been paid by scholars and policymakers to how these institutions affect legal development. Using the enormous dataset of state supreme court opinions CAP provides, we examined one small piece of this puzzle: whether opinions written by elected judges tend to be better grounded in law than those written by judges who will not stand for election. This is an important question. Given the central role that the norm of stare decisis plays in the American legal system, opinions that cite many existing precedents are likely to be perceived as persuasive due to their extensive legal reasoning. More persuasive precedents, in turn, are more likely to be cited and to increase a court’s policymaking influence among its sister courts.

State Courts’ Use of Citations Over American History

The CAP dataset provides a particularly rich opportunity to examine state courts’ usage of citations because we can see how citation practices vary as the United States slowly builds its own independent body of caselaw.

We study the 52 existing state courts of last resort, as well as their predecessor courts. For example, our dataset includes cases from the Tennessee Supreme Court as well as the Tennessee Supreme Court of Errors and Appeals, a court that was previously Tennessee’s court of last resort. We exclude the decisions of the colonial and territorial courts, as well as decisions from early courts that were populated by legislators rather than judges.

The resulting dataset contains 1,702,404 cases from 77 courts of last resort. The three states with the greatest number of cases in the dataset are Louisiana (86,031), Pennsylvania (70,804), and Georgia (64,534). Generally, courts in highly populous states, such as Florida and Texas, tend to carry a higher caseload than those who govern less populous states, such as North and South Dakota.

To examine citation practices in state supreme courts, we first needed to extract citations from each state supreme court opinion. For this purpose, we utilize the LexNLP Python package released by LexPredict, a data-driven consulting and technology firm. In addition to parsing the citation itself (e.g., 1 Ill. 19), we also extract the reporter the opinion is published in and the court of the cited case (e.g., the Illinois Supreme Court). Most state supreme court cases (about 68.7% of majority opinions longer than 100 words) cite another case. About one-third of cases cite between 1 and 5 other cases, while about 5% of cases cite 25 or more other cases. The number of citations in an opinion trends upward with time, as Figure 1 shows.
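
A minimal sketch of this extraction step with LexNLP (the exact fields returned by the extractor should be checked against the LexNLP documentation; the sample text is illustrative):

# Extract citations from an opinion's text with LexNLP.
from lexnlp.extract.en.citations import get_citations

opinion_text = "We follow the reasoning of 1 Ill. 19 and 347 U.S. 483."
for citation in get_citations(opinion_text):
    print(citation)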

plot of the average number of citations between the late 1700s and early 2000s, increasing exponentially from about 0 to about 15
Figure 1: The average number of citations in a state supreme court opinion since the American founding.

The number of citations in a case varies by state, as well. Some state courts tend to write opinions with a greater number of citations than other state courts. Figure 2 presents the proportion of opinions (with at least 100 words) in each state with at least three citations since 1950. States like Florida, New York, Louisiana, Oregon, and Michigan produce the greatest proportion of opinions with fewer than three citations. It may be no coincidence that Louisiana and New York have two of the highest-caseload state courts in the country; judges with many cases on their dockets may be forced to publish opinions more quickly, with less research and legal writing allocated to citing precedent. Conversely, courts with low caseloads, like those of Montana and Wyoming, produce the greatest proportion of opinions with at least three citations. When judges have more time to craft an opinion, they produce opinions that are better grounded in existing precedent.

choropleth map of the United States
Figure 2: The proportion of state supreme court opinions citing at least three cases by state since 1950 (the two Texas and Oklahoma high courts are aggregated).

Explaining Differences in State Supreme Court Citation

We expected that the number of citations included in a state supreme court opinion would vary based on the method through which a state supreme court’s justices are retained. We use linear regression to model the median number of citations in a state-year as a function of selection method, caseload, partisan control of the state legislature, and general state expenditures. We restrict this analysis to the 1942-2010 period.
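
A rough sketch of this kind of specification using statsmodels' formula API, assuming a pandas DataFrame df of state-year observations with illustrative column names (a schematic of the model described above, not the exact estimation code):

# OLS on state-year medians with state and year fixed effects.
import statsmodels.formula.api as smf

model = smf.ols(
    "median_citations ~ C(selection_method) + caseload"
    " + legislature_control + expenditures + C(state) + C(year)",
    data=df,
).fit()
print(model.summary())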

regression results with confidence intervals and coefficient estimates
Figure 3: Linear Regression results of the effects of judicial retention method on the average number of citations in a state supreme court opinion, including state and year fixed effects.

The results are shown in Figure 3. Compared to judges who face nonpartisan elections, judges who are appointed, who face retention elections, or who face partisan elections all include more citations in their opinions. In appointed systems, the median opinion contains about 3 more citations (about three-fifths of a standard deviation shift) than in nonpartisan election systems. In retention election systems, the median opinion contains almost 5 more citations (about a full standard deviation shift in citations) than in nonpartisan election systems. Even in partisan election systems, the median opinion contains a little under 3 more citations.

Some Conclusions

These differences suggest that the type of judicial selection method in a state can have drastic consequences for implementation and broader legal development. Because opinions with more citations tend, in turn, to be more likely to be cited in the future, the relationship we have uncovered between selection method and opinion quality suggests that judicial selection and retention methods have important downstream consequences for the relative influence of state supreme courts in American legal development. These consequences are important for policymakers to consider as they weigh altering the methods by which their judges reach the bench.

CAP Code Share: Get Opinion Author

This month we're sharing new ways to start working with data from the Caselaw Access Project. This CAP code share from Anastasia Aizman shows us how to get opinion authors from cases with the CAP API and CourtListener: Get opinion author!

There are millions of court opinions that make up our legal history. With data, we can learn new things about individual opinions, who authored them, and how that activity influences the larger landscape. This code share reviews how to get started with the names of opinion authors.

This code finds opinion authors from cases using the CAP API and CourtListener. By forming a query to the CAP API and returning the cases from that query, this code connects keywords that match with individual opinion authors using the CourtListener API. The final output creates a data frame of those authors and related data in CourtListener. Nice 🙌
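
As a rough sketch of the first half of that workflow, a keyword query to the CAP API might look like the following (the search term and parameters are illustrative; see the linked code share for the full author-matching step against CourtListener):

# Query the CAP API for cases matching a keyword.
import requests

resp = requests.get(
    "https://api.case.law/v1/cases/",
    params={"search": "habeas corpus", "jurisdiction": "us", "page_size": 5},
)
for case in resp.json()["results"]:
    print(case["name_abbreviation"], case["decision_date"])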

Have you created or adapted code for working with data from the Caselaw Access Project? Send it our way or add it to our shared repository.

We want to share new ways to start working with data from the Caselaw Access Project. Looking for code to start your next project? Try our examples repository and get started today.

Tutorial: Retrieve Cases by Citation with the CAP Case Browser

In this tutorial we’re going to learn how to retrieve a case by citation using the Caselaw Access Project's case browser.

The CAP case browser is a way to browse 6.7 million cases digitized from the collections of the Harvard Law School Library.

Retrieve Case by Citation: Brown v. Board of Education

  1. Find the citation of a case you want to retrieve. Let’s start with Brown v. Board of Education: Brown v. Board of Education, 347 U.S. 483 (1954).

  2. In the citation, find the case reporter, volume, and page: Brown v. Board of Education, 347 U.S. 483 (1954).

  3. We’re going to create our URL using this template: cite.case.law/<reporter>/<volume>/<page>

  4. In the reporter, volume, and page fields, add the information for the case you want to retrieve. Your URL for Brown v. Board of Education, 347 U.S. 483 (1954) should look like this: cite.case.law/us/347/483/

  5. Let’s try it out! Add the URL you’ve just created to your browser’s search bar, and press Enter.

You just retrieved a case by citation using the CAP case browser! Nice job. You can now read and share this case at this address: cite.case.law/us/347/483.
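
If you want to build these addresses programmatically, the template from step 3 translates directly into a small helper (a sketch, using Brown as the example):

# Build a cite.case.law URL from a citation's reporter, volume, and page.
def citation_url(reporter, volume, page):
    return f"https://cite.case.law/{reporter}/{volume}/{page}/"

# Brown v. Board of Education, 347 U.S. 483 (1954)
print(citation_url("us", 347, 483))  # https://cite.case.law/us/347/483/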

This tutorial shares one way to retrieve a case by citation in the CAP case browser. Find and share your first case today!