Using Machine Learning to Extract Nuremberg Trials Transcript Document Citations

In Harvard's Nuremberg Trials Project, being able to link to cited documents from each trial's transcript is a key feature of site navigation. Each document submitted into evidence by prosecution and defense lawyers is introduced and discussed in the transcript, and at each mention the site user can click open the document to view its contents and attendant metadata. While document references generally follow various standard patterns, deviations from those patterns, large and small, are numerous, and correctly identifying the type of document reference – is this a prosecution or defense exhibit, for example? – can be quite tricky, often requiring teasing out contextual clues.

While manual linkage is highly accurate, it becomes infeasible over a corpus of 153,000 transcript pages and more than 100,000 document references to manually tag and classify each mention of a document, whether it be a prosecution or defense trial exhibit, or a source document from which those exhibits were often chosen. Automated approaches offer the best promise of a scalable solution, with strategic, manual final-mile workflows handling cleanup and optimization.

Initial prototyping by Harvard of automated document reference capture focused on pattern matching with regular expressions. Targeting only the most frequently found patterns in the corpus, Harvard was able to extract more than 50,000 highly reliable references. While continuing with this strategy could have found significantly more references, it was not clear that, once identified, a document reference could be accurately typed without manual input.
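
For illustration, here is a minimal sketch of this kind of regular-expression capture in Python; the patterns below are invented for the example and are not the expressions Harvard actually used.

import re

# Illustrative patterns only: real transcript references vary far more widely.
EXHIBIT_PATTERNS = [
    # e.g. "Exhibit 435", "Prosecution Exhibit 12", "Defense Exhibit No. 7"
    re.compile(r"(?:Prosecution|Defense)?\s*Exhibit\s+(?:No\.\s*)?\d+", re.IGNORECASE),
    # e.g. "Document NO-1234", a common style for source documents
    re.compile(r"Document\s+[A-Z]{1,4}-\d{1,5}", re.IGNORECASE),
]

def find_references(page_text):
    """Return (matched_text, start_offset) pairs for candidate references."""
    hits = []
    for pattern in EXHIBIT_PATTERNS:
        for match in pattern.finditer(page_text):
            hits.append((match.group(0), match.start()))
    return hits

print(find_references("The prosecution offers Document NO-1234 as Exhibit 435."))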

At this point Harvard connected with Tolstoy, a natural language processing (NLP) AI startup, to ferret out the rest of the tags and identify them by type. Employing a combination of machine learning and rule-based pattern matching, Tolstoy was able to extract and classify the bulk of remaining document references.

Background on Machine Learning

Machine learning is a comprehensive branch of artificial intelligence. It is, essentially, statistics on steroids. Working from a “training set” – a set of human-labeled examples – a machine learning algorithm identifies patterns in the data that allow it to make predictions. For example, a model that is supplied many labeled pictures of cats and dogs will eventually find features of the cat images that correlate with the label “cat,” and likewise, for “dog.” Broadly speaking, the same formula is used by self-driving cars learning how to respond to traffic signs, pedestrians, and other moving objects.

In Harvard’s case, a model was needed that could learn, from a labeled training set, to extract and classify document references in the court transcripts. One of the main features used was surrounding context, including possible trigger words indicating whether a given trial exhibit was submitted by the prosecution or the defense. To be most useful, the classifier needed to be accurate (references correctly labeled as either prosecution or defense), precise (few false positives), and achieve high recall (few missed references).

Feature Engineering

The first step in any machine learning project is to produce a thorough, unbiased training set. Since Harvard staff had already identified 53,000 verified references, Tolstoy used those, along with an additional set generated using more precise heuristics, to train a baseline model.

The model is the predictive algorithm. There are many different families of models a data scientist can choose from. For example, one might use a support vector machine (SVM) if there are fewer examples than features, a convolutional neural net (CNN) for images, or a recurrent neural net (RNN) for processing long passages requiring memory. That said, the model is only a part of the entire data processing pipeline, which includes data pre-processing (cleaning), feature engineering, and post-processing.

Here, Tolstoy used a "random forest" algorithm. This method builds an ensemble of decision-tree classifiers, whose nodes, or branches, represent points at which the training data is subdivided based on feature values. The random forest classifier aggregates the decisions of the whole suite of trees, predicting the class most often output by them. The process is randomized: each tree is trained on a random subset of the training data, and a random subset of features is considered at each node.

Models work best when they are trained on the right features of the data. Feature engineering is the process by which one chooses the most predictive parts of available training data. For example, predicting the price of a house might take into account features such as the square footage, location, age, amenities, recent remodeling, etc.

In this case, we needed to predict the type of document reference involved: was it a prosecution or defense trial exhibit? The exact same sequence of characters, say "Exhibit 435," could be either defense or prosecution, depending on – among other things – the speaker and how they introduced it. Tolstoy used features such as the speaker, the presence or absence of prosecution or defense attorneys' names (or that of the defendant), and the presence or absence of country name abbreviations to classify the references.
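
As a rough sketch of how contextual features like these can feed a random forest classifier, the toy example below uses scikit-learn with invented feature names and a two-example training set; it illustrates the shape of the approach rather than Tolstoy's actual pipeline.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

def featurize(mention):
    # Turn the text surrounding a candidate reference into simple features.
    context = mention["context"].lower()
    return {
        "speaker_role": mention.get("speaker_role", "unknown"),  # e.g. prosecutor, defense counsel
        "mentions_prosecution": "prosecution" in context,
        "mentions_defense": "defense" in context or "defendant" in context,
        "has_country_abbrev": any(tok in context for tok in ("usa", "ussr", "uk", "fr")),
    }

train = [
    {"context": "The prosecution offers Exhibit 12", "speaker_role": "prosecutor"},
    {"context": "Counsel for the defendant submits Exhibit 7", "speaker_role": "defense_counsel"},
]
labels = ["prosecution", "defense"]

vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform([featurize(m) for m in train])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)

new = {"context": "The defense offers Exhibit 435", "speaker_role": "defense_counsel"}
print(clf.predict(vectorizer.transform([featurize(new)])))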

Post-Processing

Machine learning is a great tool in a predictive pipeline, but to reach very high accuracy and recall one often needs to combine it with heuristic, rule-based methods. For example, in the transcripts, phrases like “submitted under” or “offered under” may precede a document reference; these phrases were used to catch references that had previously been missed. Other post-processing included catching and removing tags from false positives, such as years (e.g., “January 1946”) or descriptions (“300 Germans”). These techniques allowed us to preserve high precision while maximizing recall.
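
A toy version of this kind of post-processing might look like the following; the trigger phrases come from the examples above, while the specific filters are illustrative.

import re

TRIGGER = re.compile(r"(?:submitted|offered)\s+under\s+", re.IGNORECASE)
NUMBERED = re.compile(r"\b(?:Exhibit|Document)\s+\d+", re.IGNORECASE)
MONTHS = ("january", "february", "march", "april", "may", "june",
          "july", "august", "september", "october", "november", "december")

def looks_like_false_positive(tag_text):
    """Reject tags that are really dates ("January 1946") or counts ("300 Germans")."""
    lowered = tag_text.lower()
    if any(month in lowered for month in MONTHS):
        return True
    # A bare number followed by a capitalized plural noun is usually a count, not a reference.
    return bool(re.match(r"^\d+\s+[A-Z][a-z]+s\b", tag_text))

def recover_missed_references(page_text):
    """Use trigger phrases to catch references missed by earlier passes."""
    hits = []
    for trigger in TRIGGER.finditer(page_text):
        following = page_text[trigger.end():trigger.end() + 40]
        match = NUMBERED.search(following)
        if match:
            hits.append(match.group(0))
    return hits

print(recover_missed_references("This affidavit is offered under Exhibit 118."))
print(looks_like_false_positive("January 1946"), looks_like_false_positive("300 Germans"))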

Collaborative, Iterative Build-out

In the build-out of the data processing pipeline, it was important for both Tolstoy and Harvard to carefully review interim results, identify and discuss error patterns and suggest next-step solutions. Harvard, as a domain expert, was able to quickly spot areas where the model was making errors. These iterations allowed Tolstoy to fine-tune the features used in the model, and amend the patterns used in identifying document references. This involved a workflow of tweaking, testing and feedback, a cycle repeated numerous times until full process maturity was reached. Ultimately, Tolstoy was able to successfully capture more than 130,000 references throughout the 153,000 pages, with percentages in the high 90s for accuracy and low 90s for recall. After final data filtering and tuning at Harvard, these results will form the basis for the key feature enabling interlinkage between the two major data domains of the Nuremberg Trials Project: the transcripts and evidentiary documents. Working together with Tolstoy and machine learning has significantly reduced the resources and time otherwise required to do this work.

Getting Started with Caselaw Access Project Data

Today we’re sharing new ways to get started with Caselaw Access Project data using tutorials from The Programming Historian and more.

The Caselaw Access Project makes 360 years of U.S. case law available as a machine-readable text corpus. In developing a research community around the dataset, we’ve been creating and sharing resources for getting started.

In our gallery, we’ve been developing tutorials and our examples repository for working with our data, alongside research results, applications, fun stuff, and more.

The Programming Historian shares peer-reviewed tutorials for computational workflows in the humanities, including a group of guides for working with text data, from processing to analysis.

We want to share and build ways to start working with Caselaw Access Project data. Do you have an idea for a future tutorial? Drop us a line to let us know!

Guest Post: Creating a Case Recommendation System Using Gensim’s Doc2Vec

This guest post is part of the CAP Research Community Series. This series highlights research, applications, and projects created with Caselaw Access Project data.

Minna Fingerhood graduated from the University of Pennsylvania this past May with a B.A. in Science, Technology, and Society (STSC) and Engineering Entrepreneurship. She is currently a data scientist in New York and is passionate about the intersection of technology, law, and data ethics.

The United States Criminal Justice System is the largest in the world. With more than 2.3 million inmates, the US incarcerates more than 25% of the world’s prison population, even as its general population only accounts for 5%. As a result, and perhaps unsurprisingly, the system has become increasingly congested, inefficient, and at times indifferent.

The ramifications of an overpopulated prison system are far-reaching: from violent and unsanitary prison conditions; to backlogged criminal courts; to overworked and often overwhelmed public defenders. These consequences are severe and, in many cases, undermine the promise of fair and equal access to justice. In an effort to help address these challenges, the government has employed various technologies that aim to increase efficiency and decrease human bias throughout many stages of the system. While these technologies and innovations may have been implemented with earnest intentions, in practice, they often serve bureaucratic imperatives and can reinforce and magnify the bias they intend to eliminate.

Given this context, it is particularly important to emphasize technologies that serve to democratize and decentralize institutional and governmental processes, especially in connection with the Criminal Justice System as each decision carries enormous gravity. Therefore, with the help of the Caselaw Access Project (CAP), I have developed a legal case recommendation system that can be used by underserved defendants and their attorneys.

This effort was inspired by the work of the Access to Justice Movement, which uses data to lower barriers to the legal system, as well as by the landmark Supreme Court decision Gideon v. Wainwright (1963), which guarantees underserved defendants the right to counsel under the Sixth Amendment. As Justice Black wrote in the majority opinion for this case: “[The American Constitution is] designed to assure fair trials before impartial tribunals in which every defendant stands equal before the law. This noble ideal cannot be realized if the poor man charged with crime has to face his accusers without a lawyer to assist him.” While this case afforded poor defendants the right to free counsel, the meaning of a ‘fair trial’ has become compromised in a system where a public defender can have, on average, over 200 cases.

My case recommendation system attempts to empower defendants throughout this flawed process by finding the 15 cases most similar to their own based primarily on text. In theory, my application could be made available to individuals awaiting trial, thereby facilitating their own thoughtful contributions to the work of their public defender. These contributions could take many forms such as encouraging the use of precedent cited in previously successful cases or learning more about why judges deemed other arguments to be unconvincing.

My application, hosted locally with Plotly Dash.

My project primarily relies on natural language processing, as I wanted to match cases through text. For this, I chose to use the majority opinion texts for cases from 1970 to 2018. I used majority opinion text because it was available in the CAP dataset, but in the future I look forward to expanding beyond the scope of CAP to include text from news reports, party filings, and even case transcripts. Further, while 1970 is a fairly arbitrary cut-off date, it also marks the year in which the term “War on Drugs” was coined; from this time forward, the prison system rapidly evolved into one marked by mass incarceration, which I believed would make this period especially relevant for criminal court case data. Lastly, the data does not include cases after June 2018, because the CAP dataset does not contain volumes past that date.

In creating semantic meaning from the text, I used Doc2Vec (through Python’s Gensim package), a derivative of the more well-known Word2Vec. This method of language processing relies on a shallow neural net to generate document vectors for every court case. Vectors are created by looking at word embeddings, or the location of words relative to other words, creating an understanding of language for the computer. These vector representations of case text can be compared to all other document vectors using cosine similarity to quantify likeness.
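
For readers who want to try this themselves, here is a minimal Doc2Vec sketch using Gensim (the 4.x API is assumed); the tiny corpus is a stand-in, and vector_size is set to 300 to match the model described here.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy "opinions" keyed by case id; in practice these would be full majority opinion texts.
opinions = {
    "case_1": "the defendant was convicted of possession with intent to distribute",
    "case_2": "the court held that the search violated the fourth amendment",
}

documents = [
    TaggedDocument(words=text.split(), tags=[case_id])
    for case_id, text in opinions.items()
]

model = Doc2Vec(documents, vector_size=300, window=5, min_count=1, epochs=20)

# Most similar cases to case_1, ranked by cosine similarity of document vectors.
print(model.dv.most_similar("case_1", topn=1))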

Example: a PCA representation of chosen words in my Gensim Doc2Vec model. Note: I adjusted my output vectors to have a length of 300.

In addition to text, I included the year of the majority decision as a feature for indicating similarity, but at a much smaller proportion: 7% of the weight given to the cosine similarity of the document vectors. In a scenario where two cases were equally textually similar to a sample case, I wanted the more recent one to appear first, but did not want date to overshadow text similarity.
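
One simple way to express that weighting is sketched below; the normalization over the 1970-2018 window is my own assumption, not necessarily the exact formula used in the project.

def blended_score(cosine_sim, candidate_year, query_year, year_weight=0.07):
    # Normalize the year gap to [0, 1] over the 48-year span of the corpus (1970-2018).
    year_closeness = 1.0 - min(abs(candidate_year - query_year), 48) / 48.0
    return cosine_sim + year_weight * year_closeness

# Two equally similar cases: the more recent one should rank first.
print(blended_score(0.80, 2015, 2018))  # newer candidate scores slightly higher
print(blended_score(0.80, 1980, 2018))  # older candidate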

I believe this project is a promising starting point for educating and empowering underserved defendants. The CAP dataset is a rich and expansive resource, and much more can be done to further develop this project. I am looking forward to including more text into the program as a means of increasing accuracy for my unsupervised model and providing the user with more informative resources. I also believe that creating additional optional parameters for recommendation, such as geography or individual judges, could substantially improve the efficacy of this search engine. Ultimately, I look forward to collaborating with others to expand this project so that individuals caught within the criminal justice system can better use technology as a tool for empowerment and transparency, thereby creating more opportunities for fairness and justice.

In that regard, I have posted additional information regarding this project on my GitHub at https://github.com/minnaf/Case_Recommendation_System. Please contact me if you are interested in collaborating or learning more at minnaf@sas.upenn.edu.

A Thought on Digitization

Although it is excellent, and I recommend it very highly, I had not expected Roy Scranton's Learning to Die in the Anthropocene to shed light on the Caselaw Access Project. Near the end of the book, he writes,

The study of the humanities is nothing less than the patient nurturing of the roots and heirloom varietals of human symbolic life. This nurturing is a practice not strictly of curation, as many seem to think today, but of active attention, cultivation, making and remaking. It is not enough for the archive to be stored, mapped, or digitized. It must be worked.

The value of the Caselaw Access Project is not primarily in preservation, saving space, or the abstraction of being machine-readable; it is in its new uses, the making of things from the raw material. We at LIL and others have begun to make new things, but we're only at the beginning. We hope you will join us, and surprise us.

Creating a Data Analysis Workspace with Voyant and the CAP API

This tutorial is an introduction to creating a data analysis workspace with Voyant and the Caselaw Access Project API. Voyant is a computational analysis tool for text corpora.

Import a Corpus

Let’s start by retrieving all full cases from New Mexico:

https://api.case.law/v1/cases/?jurisdiction=nm&full_case=true

Copy and paste that API call into the Add Texts box and select Reveal. Here’s more on how to create your own CAP API call.
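
If you'd like to preview the same data outside Voyant, the request can also be issued directly, for example with Python's requests library; results are paginated, so this fetches only the first page.

import requests

response = requests.get(
    "https://api.case.law/v1/cases/",
    params={"jurisdiction": "nm", "full_case": "true"},
)
data = response.json()
print(data["count"])                            # total number of matching cases
print(data["results"][0]["name_abbreviation"])  # first case on this page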

Create Stopwords

You’ve just created a corpus in Voyant! Nice 😎. Next we’re going to create stopwords to minimize noise in our data.

In Voyant, hover over a section header and select the sliding bar icon to define options for this tool.


From the Stopwords field shown here, select Edit List. Scroll to the end of default stopwords, and copy and paste this list of common metadata fields, OCR errors, and other fragments:

id
url
name
name_abbreviation
decision_date
docket_number 
first_page
last_page
citations
volume 
reporter 
court 
jurisdiction
https
api.case.law
slug
tbe
nthe

Once you’re ready, Save and Confirm.

Your stopwords list is done! Here’s more about creating and editing your list of stopwords.

Data Sandbox

Let’s get started. Voyant has out-of-the-box tools for analysis and visualization to try in your browser. Here are some examples!

Summary: "The Summary provides a simple, textual overview of the current corpus, including (as applicable for multiple documents) number of words, number of unique words, longest and shortest documents, highest and lowest vocabulary density, average number of words per sentence, most frequent words, notable peaks in frequency, and distinctive words."

Here’s our summary for New Mexico case law.

TermsBerry: "The TermsBerry tool is intended to mix the power of visualizing high frequency terms with the utility of exploring how those same terms co-occur (that is, to what extent they appear in proximity with one another)."

Here's our TermsBerry.

Collocates Graph: "Collocates Graph represents keywords and terms that occur in close proximity as a force directed network graph."

Here's our Collocates Graph.

Today we created a data analysis workspace with Voyant and the Caselaw Access Project API.

To see how words are used in U.S. case law over time, try Historical Trends. Share what you find with us at info@case.law.

Guest Post: Do Elected and Appointed Judges Write Opinions Differently?

Unlike almost anywhere else in the world, most judges in the United States today are elected. But it hasn’t always been this way. Over the past two centuries, the American states have taken a variety of different paths, cycling through elective and appointive methods. Opponents of judicial elections charge that these institutions detract from judicial independence, harm the legitimacy of the judiciary, and put unqualified jurists on the bench; supporters counter that, by publicly involving the American people in the process of judicial selection, judicial elections can enhance judicial legitimacy. To say this has been an intense debate of academic, political, and popular interest is an understatement.

Yet scholars and policymakers have paid surprisingly little attention to how these institutions affect legal development. Using the enormous dataset of state supreme court opinions CAP provides, we examined one small piece of this puzzle: whether opinions written by elected judges tend to be better grounded in law than those written by judges who will not stand for election. This is an important topic. Given the central role that the norm of stare decisis plays in the American legal system, opinions that cite many existing precedents are likely to be perceived as persuasive due to their extensive legal reasoning. More persuasive precedents, in turn, are more likely to be cited and to increase a court’s policymaking influence among its sister courts.

State Courts’ Use of Citations Over American History

The CAP dataset provides a particularly rich opportunity to examine state courts’ usage of citations because we can see how citation practices vary as the United States slowly builds its own independent body of caselaw.

We study the 52 existing state courts of last resort, as well as their predecessor courts. For example, our dataset includes cases from the Tennessee Supreme Court as well as the Tennessee Supreme Court of Errors and Appeals, a court that was previously Tennessee’s court of last resort. We exclude the decisions of the colonial and territorial courts, as well as decisions from early courts that were populated by legislators rather than judges.

The resulting dataset contains 1,702,404 cases from 77 courts of last resort. The three states with the greatest number of cases in the dataset are Louisiana (86,031), Pennsylvania (70,804), and Georgia (64,534). Generally, courts in highly populous states, such as Florida and Texas, tend to carry a higher caseload than courts in less populous states, such as North and South Dakota.

To examine citation practices in state supreme courts, we first needed to extract citations from each state supreme court opinion. For this purpose, we use the LexNLP Python package released by LexPredict, a data-driven consulting and technology firm. In addition to parsing the citation itself (e.g., 1 Ill. 19), we also extract the reporter the opinion is published in and the court of the cited case (e.g., the Illinois Supreme Court). Most state supreme court cases (about 68.7% of majority opinions longer than 100 words) cite another case. About one-third of cases cite between one and five other cases, while about 5% cite 25 or more. The number of citations in an opinion trends upward over time, as Figure 1 shows.
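
As a simplified stand-in for what LexNLP does (the real package handles many more citation formats and edge cases, and maps citations to courts), a bare-bones extractor for reporter citations such as "1 Ill. 19" might look like this:

import re

# Volume / reporter / page, e.g. "1 Ill. 19" or "347 U.S. 483". Deliberately naive.
CITATION = re.compile(r"\b(\d{1,4})\s+([A-Z][A-Za-z.\s]{1,15}?)\s+(\d{1,5})\b")

def extract_citations(opinion_text):
    return [
        {"volume": volume, "reporter": reporter.strip(), "page": page}
        for volume, reporter, page in CITATION.findall(opinion_text)
    ]

print(extract_citations("See 1 Ill. 19 and Brown v. Board of Education, 347 U.S. 483 (1954)."))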

Figure 1: The average number of citations in a state supreme court opinion since the American founding, rising roughly exponentially from near zero in the late 1700s to about 15 in the early 2000s.

The number of citations in a case varies by state as well. Some state courts tend to write opinions with a greater number of citations than others. Figure 2 presents the proportion of opinions (with at least 100 words) in each state with at least three citations since 1950. States like Florida, New York, Louisiana, Oregon, and Michigan produce the greatest proportion of opinions with fewer than three citations. It may be no coincidence that Louisiana and New York have two of the highest-caseload state courts in the country; judges with many cases on their dockets may be forced to publish opinions more quickly, with less research and legal writing allocated to citing precedent. Conversely, low-caseload courts such as Montana's and Wyoming's produce the greatest proportion of opinions with at least three citations. When judges have more time to craft an opinion, they produce opinions that are better grounded in existing precedent.

Figure 2 (choropleth map of the United States): The proportion of state supreme court opinions citing at least three cases by state since 1950 (the two Texas and Oklahoma high courts are aggregated).

Explaining Differences in State Supreme Court Citation

We expected that the number of citations in a state supreme court opinion would vary based on the method through which the court's justices are retained. We use linear regression to model the median number of citations in a state-year as a function of selection method, caseload, partisan control of the state legislature, and general state expenditures. We restrict the analysis to the 1942-2010 period.
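
The code below sketches that specification in Python with statsmodels, using hypothetical column names for the state-year data; C(state) and C(year) add the fixed effects noted in Figure 3.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_year_citations.csv")   # hypothetical state-year panel
df = df[(df["year"] >= 1942) & (df["year"] <= 2010)]

model = smf.ols(
    "median_citations ~ C(selection_method) + caseload"
    " + legislature_party + general_expenditures"
    " + C(state) + C(year)",
    data=df,
)
print(model.fit().summary())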

Figure 3: Linear regression results (coefficient estimates with confidence intervals) for the effects of judicial retention method on the average number of citations in a state supreme court opinion, including state and year fixed effects.

The results are shown in Figure 3. Compared to judges who face nonpartisan elections, judges who are appointed, who face retention elections, or who face partisan elections all include more citations in their opinions. In appointed systems, the median opinion contains about 3 more citations (roughly a three-fifths standard deviation shift) than in nonpartisan election systems. In retention election systems, the median opinion contains almost 5 more citations (roughly a full standard deviation shift). Even in partisan election systems, the median opinion contains a little under 3 more citations than in nonpartisan election systems.

Some Conclusions

These differences suggest that the type of judicial selection method in a state can have drastic consequences for implementation and broader legal development. Because opinions with more citations tend, in turn, to be more likely to be cited in the future, the relationship we have uncovered between selection method and opinion quality suggests that judicial selection and retention methods have important downstream consequences for the relative influence of state supreme courts in American legal development. These consequences are important for policymakers to consider as they weigh altering the methods by which their judges reach the bench.

CAP Code Share: Get Opinion Author

This month we're sharing new ways to start working with data from the Caselaw Access Project. This CAP code share from Anastasia Aizman shows us how to get opinion authors from cases with the CAP API and CourtListener: Get opinion author!

There are millions of court opinions that make up our legal history. With data, we can learn new things about individual opinions, who authored them, and how that activity influences the larger landscape. This code share reviews how to get started with the names of opinion authors.

This code finds opinion authors from cases using the CAP API and CourtListener. It forms a query to the CAP API, returns the cases from that query, and then uses the CourtListener API to connect matching keywords with individual opinion authors. The final output is a data frame of those authors and related data from CourtListener. Nice 🙌
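
The gist of that approach, sketched with endpoints and field names that are my assumptions and should be checked against both APIs' documentation (CourtListener requests may also require an API token), looks roughly like this:

import requests

def cap_cases(query, page_size=5):
    # Search the CAP API for cases matching a keyword.
    resp = requests.get(
        "https://api.case.law/v1/cases/",
        params={"search": query, "page_size": page_size},
    )
    return resp.json().get("results", [])

def courtlistener_author(case_name):
    # Look for a matching opinion in CourtListener and return its judge/author field.
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v3/search/",
        params={"q": case_name, "type": "o"},  # "o" = opinions
    )
    results = resp.json().get("results", [])
    return results[0].get("judge") if results else None

for case in cap_cases("habeas corpus"):
    name = case["name_abbreviation"]
    print(name, "->", courtlistener_author(name))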

Have you created or adapted code for working with data from the Caselaw Access Project? Send it our way or add it to our shared repository.

We want to share new ways to start working with data from the Caselaw Access Project. Looking for code to start your next project? Try our examples repository and get started today.

Tutorial: Retrieve Cases by Citation with the CAP Case Browser

In this tutorial we’re going to learn how to retrieve a case by citation using the Caselaw Access Project's case browser.

The CAP case browser is a way to browse 6.7 million cases digitized from the collections of the Harvard Law School Library.

Retrieve Case by Citation: Brown v. Board of Education

  1. Find the citation of a case you want to retrieve. Let’s start with Brown v. Board of Education: Brown v. Board of Education, 347 U.S. 483 (1954).

  2. In the citation, find the case reporter, volume, and page: Brown v. Board of Education, 347 U.S. 483 (1954).

  3. We’re going to create our URL using this template: cite.case.law/<reporter>/<volume>/<page>

  4. In the reporter, volume, and page fields, add the information for the case you want to retrieve. Your URL for Brown v. Board of Education, 347 U.S. 483 (1954) should look like this: cite.case.law/us/347/483/

  5. Let’s try it out! Add the URL you’ve just created to your browser’s search bar, and press Enter.

You just retrieved a case by citation using the CAP case browser! Nice job. You can now read and share this case at this address: cite.case.law/us/347/483.
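
If you'd rather build these URLs in code, a tiny helper like the one below works for the example above; the slug rule (lowercase, drop periods, hyphenate spaces) is a guess that may not hold for every reporter.

def cite_url(reporter, volume, page):
    slug = reporter.lower().replace(".", "").replace(" ", "-")
    return f"https://cite.case.law/{slug}/{volume}/{page}/"

print(cite_url("U.S.", 347, 483))  # https://cite.case.law/us/347/483/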

This tutorial shares one way to retrieve a case by citation in the CAP case browser. Find and share your first case today!

Computational Support for Statutory Interpretation with Caselaw Access Project Data

This post is about a research paper (preprint) on sentence retrieval for statutory interpretation that we presented at the International Conference on Artificial Intelligence and Law (ICAIL 2019), held in June in Montreal, Canada. The paper describes some of our recent work on computational methods for statutory interpretation carried out at the University of Pittsburgh. The idea is to focus on vague statutory concepts and enable a program to retrieve sentences that explain the meaning of such concepts. The Library Innovation Lab's Caselaw Access Project (CAP) provides an ideal corpus of case law for such work.

Abstract rules in statutory provisions must account for diverse situations, even those not yet encountered. That is one reason why legislators use vague, open textured terms, abstract standards, principles, and values. When there are doubts about the meaning, interpretation of a provision may help to remove them. Interpretation involves an investigation of how the term has been referred to, explained, recharacterized, or applied in the past. While court decisions are an ideal source of sentences interpreting statutory terms, manually reviewing the sentences is labor intensive and many sentences are useless or redundant.

In our work we automate this process. Specifically, given a statutory provision, a user’s interest in the meaning of a concept from the provision, and a list of sentences, we rank more highly the sentences that elaborate upon the meaning of the concept, such as:

  • definitional sentences (e.g., a sentence that provides a test for when the concept applies).
  • sentences that state explicitly in a different way what the concept means or state what it does not mean.
  • sentences that provide an example, instance, or counterexample of the concept.
  • sentences that show how a court determines whether something is such an example, instance, or counterexample.

We downloaded the complete bulk data from the Caselaw Access Project. Altogether the data set comprises more than 6.7 million unique cases. We ingested the data set into an Elasticsearch instance. For the analysis of the textual fields we used the LemmaGen Analysis plugin, which is a wrapper around a Java implementation of the LemmaGen project.

To support our experiments we indexed the documents at multiple levels of granularity. Specifically, the documents were indexed at the level of full cases, as well as segmented into the head matter and individual opinions (e.g., majority opinion, dissent, concurrence). This segmentation was performed by the Caselaw Access Project using a combination of human labor and automatic tools. We also used our U.S. case law sentence segmenter to segment each case into individual sentences and indexed those as well. Finally, we used the sentences to create paragraphs. We considered a line-break between two sentences as an indication of a paragraph boundary.
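
A simplified sketch of indexing at these granularities with the official Python Elasticsearch client (the 8.x API is assumed) is shown below; the index names, the toy document, and the naive sentence split are placeholders for the project's actual segmenter and LemmaGen analyzer.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

case = {
    "id": "12345",
    "head_matter": "...",
    "opinions": [{"type": "majority", "text": "First sentence. Second sentence.\nNew paragraph sentence."}],
}

# 1. Whole case
es.index(index="cases", id=case["id"], document=case)

# 2. Head matter and individual opinions
es.index(index="head_matter", id=case["id"], document={"text": case["head_matter"]})
for i, opinion in enumerate(case["opinions"]):
    es.index(index="opinions", id=f"{case['id']}-{i}", document=opinion)

# 3. Paragraphs (a line break between sentences marks a paragraph boundary) and sentences
for p, paragraph in enumerate(case["opinions"][0]["text"].split("\n")):
    es.index(index="paragraphs", id=f"{case['id']}-p{p}", document={"text": paragraph})
    for s, sentence in enumerate(paragraph.split(". ")):  # naive split; the paper uses a dedicated segmenter
        es.index(index="sentences", id=f"{case['id']}-p{p}-s{s}", document={"text": sentence})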

For our corpus we initially selected three terms from different provisions of the United States Code:

  1. independent economic value (18 U.S. Code § 1839(3)(B))
  2. identifying particular (5 U.S. Code § 552a(a)(4))
  3. common business purpose (29 U.S. Code § 203(r)(1))

For each term we have collected a set of sentences by extracting all the sentences mentioning the term from the court decisions retrieved from the Caselaw Access Project data. In total we assembled a small corpus of 4,635 sentences. Three human annotators classified the sentences into four categories according to their usefulness for the interpretation:

  1. high value - sentence intended to define or elaborate upon the meaning of the concept
  2. certain value - sentence that provides grounds to elaborate on the concept’s meaning
  3. potential value - sentence that provides additional information beyond what is known from the provision the concept comes from
  4. no value - no additional information over what is known from the provision

The complete data set including the annotation guidelines has been made publicly available.

We performed a detailed study of a number of retrieval methods. We confirmed that retrieving the sentences directly, by measuring similarity between the query and a sentence, yields mediocre results. Taking into account the contexts of sentences turned out to be the crucial step in improving the performance of the ranking. We observed that query expansion and novelty detection techniques are also able to capture information that could be used as an additional layer in a ranker’s decision. Based on the detailed error analysis, we integrated the context-aware ranking methods with components based on query expansion and novelty detection into a specialized framework for retrieval of case-law sentences for statutory interpretation. Evaluation of different implementations of the framework shows promising results (.725 NDCG at 10 and .662 at 100; Normalized Discounted Cumulative Gain is a measure of ranking quality).
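
For reference, NDCG@k can be computed in a few lines; the relevance grades below are illustrative (for instance, mapping the four annotation categories to 0-3).

import math

def dcg(relevances, k):
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg(relevances, k):
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0

# Relevance of a ranker's top results, in ranked order (3 = high value ... 0 = no value).
print(round(ndcg([3, 2, 0, 3, 1], k=5), 3))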

To provide an intuitive understanding of the performance of the best model, we list below the top five sentences retrieved for each of the three terms. Finally, it is worth noting that in future work we plan to significantly increase the size of the data set and the number of statutory terms.

Independent economic value

  1. [. . . ] testimony also supports the independent economic value element in that a manufacturer could [. . . ] be the first on the market [. . . ]
  2. [. . . ] the information about vendors and certification has independent economic value because it would be of use to a competitor [. . . ] as well as a manufacturer
  3. [. . . ] the designs had independent economic value [. . . ] because they would be of value to a competitor who could have used them to help secure the contract
  4. Plaintiffs have produced enough evidence to allow a jury to conclude that their alleged trade secrets have independent economic value.
  5. Defendants argue that the trade secrets have no independent economic value because Plaintiffs’ technology has not been "tested or proven."

Identifying particular

  1. In circumstances where duty titles pertain to one and only one individual [. . . ], duty titles may indeed be "identifying particulars" [. . . ]
  2. Appellant first relies on the plain language of the Privacy Act which states that a "record" is "any item . . . that contains [. . . ] identifying particular [. . . ]
  3. Here, the district court found that the duty titles were not numbers, symbols, or other identifying particulars.
  4. [. . . ] the Privacy Act [. . . ] does not protect documents that do not include identifying particulars.
  5. [. . . ] the duty titles in this case are not "identifying particulars" because they do not pertain to one and only one individual.

Common business purpose

  1. [. . . ] the fact of common ownership of the two businesses clearly is not sufficient to establish a common business purpose.
  2. Because the activities of the two businesses are not related and there is no common business purpose, the question of common control is not determinative.
  3. It is settled law that a profit motive alone will not justify the conclusion that even related activities are performed for a common business purpose.
  4. It is not believed that the simple objective of making a profit for stockholders can constitute a common business purpose [. . . ]
  5. [. . . ] factors such as unified operation, related activity, interdependency, and a centralization of ownership or control can all indicate a common business purpose.

In conclusion, we have conducted a systematic study of sentence retrieval from case law with the goal of supporting statutory interpretation. Based on a detailed error analysis of traditional methods, we proposed a specialized framework that mitigates some of the challenges we identified. As evidenced above, the results of applying the framework are promising.