Telling Stories with CAP Data: The Prolific Mr. Cartwright

When I think about data, caselaw isn't the first thing that comes to mind.

The word “data” evokes tabulated click-through rates, aggregated housing statistics, and short, readily classifiable chunks of born-digital text. Multi-page 19th century legal documents don’t exactly fit the archetype.

Practically speaking, of course, all it takes for a body of material to become usable ‘data’ is a person or organization willing to make that material accessible to analysis. The HLS Library Innovation Lab’s Caselaw Access Project represents an effort to do just that for centuries of American caselaw. Through resources generated and maintained by the Caselaw Access Project, researchers can explore the rich legal history of the United States byte by byte.

If the rise of big data has taught us one thing, however, it is that “can” does not necessarily imply “should.” Indeed, the practice of subjecting core texts from the humanities and social sciences to data-driven analysis has been met with sharp resistance from some quarters. A widely discussed essay by Daniel Allington, Sarah Brouillette, and David Golumbia criticizing the “Digital Humanities” movement recently argued that the application of quantitative methods to such material has driven “the displacement of… humanities scholarship and activism in favor of the manufacture of digital tools and archives.” To the essay’s writers and others of similar mind, the expansion of data’s domain comes as a threat to the integrity of a long tradition of scholarship.

In this post, I present my experience working with the Caselaw Access Project’s publicly available Illinois dataset as evidence for a more optimistic narrative – namely, that applying quantitative techniques to corpora primarily associated with the qualitative disciplines can help us uncover and relate stories that might otherwise go unnoticed.

I uncovered this particular story while messing around with measures of “prolificness” amongst Illinois judges between 1850 and the present. I had generated a plot tracking the number of opinions judges had published per year over the timespan (each point corresponds to one judge’s output over the course of one year):

Yearly output by judges in the dataset
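The counting behind a plot like this is simple. Below is a minimal sketch; the `author` and `year` keys are placeholders for whatever fields a cleaned CAP export actually provides, and the sample records are invented:

```python
from collections import Counter

def opinions_per_judge_year(cases):
    """Count published opinions per (judge, year) pair.

    `cases` is any iterable of dicts carrying 'author' and 'year'
    keys -- stand-ins for the fields in a cleaned CAP export.
    """
    counts = Counter()
    for case in cases:
        if case.get("author") and case.get("year"):
            counts[(case["author"], case["year"])] += 1
    return counts

# Invented sample records, not real CAP data:
sample = [
    {"author": "Cartwright, J.", "year": 1900},
    {"author": "Cartwright, J.", "year": 1900},
    {"author": "Carter, J.", "year": 1900},
    {"author": "Cartwright, J.", "year": 1901},
]
counts = opinions_per_judge_year(sample)
```

Each (judge, year) count then becomes one point on the scatter plot.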

I noticed an interesting trend – in a window of time between about 1890 and 1930, many justices were publishing upwards of 50 opinions per year (it’s worth noting that modern publication numbers have likely been pushed down by the rise of unpublished opinions, which are not indexed in reporters and therefore cannot be cited as precedent). Digging down a little further, I plotted yearly publication volume for the 5 Illinois judges who wrote the most opinions over the course of their careers.

The 5 most prolific judges in the dataset

All of these judges fall more or less into the timespan discussed, and all were justices of the Illinois Supreme Court. Running the numbers, it became apparent that one Mr. Justice Cartwright was firmly in the lead as the most prolific publisher of legal opinions in the history of the state of Illinois.

My efforts to investigate Cartwright’s life and times through internet research were largely unfruitful. Among the most complete sources I found was a short profile on a website dedicated to social reformer Florence Kelley, which cites just two brief articles about Cartwright – both of them published in the 1920s. A brief Wikipedia entry provides a portrait of the justice taken in 1919, about five years before his death.

Justice James H. Cartwright, 1919

From these paltry sources, I learned that Cartwright was born in Iowa Territory on December 1st, 1842. After serving with some distinction in the Civil War, he attended Michigan Law School starting in 1865. Between 1868 and 1876, he served as general attorney for a regional railroad company. After a period of private practice, Cartwright was elected as a circuit court judge in Oregon, Illinois in 1888. In 1895, he became a justice of the Illinois Supreme Court – a position which he held until his death on May 18th, 1924.

However, none of the sources I was able to locate shed much light on Cartwright’s amazing prolificness, though some of the articles written around the time of his death do reference it offhand. For further insights, I turned to the data. After cleaning and standardizing data corresponding to Cartwright and his peers on the Illinois Supreme Court across his almost 30-year career, I visualized the yearly output of each justice present in the dataset.

Cartwright's published opinion output relative to that of his peers

With a few exceptions, Cartwright was among the most prolific publishers on the court throughout his time as a justice. He was particularly active in his early years of service, with a marked drop off in the two years immediately preceding his death. However, it is clear from the visualization above that Cartwright wasn’t writing enormously more than his peers – there is often at least one justice who authors more than him in a given year, and he occasionally winds up in the middle of the pack. Where, then, does his dominance come from? To find out, I generated a cumulative plot of the number of opinions written by each justice in the span between 1895 and 1925.

Cumulative published opinion counts for Cartwright and peers

As we can see, it is not just Cartwright’s yearly rate of production that catapulted him to dominance – it is also his consistency. In the years between 1896 and 1922, just once did Cartwright have an annual output of fewer than 50 opinions. Over the course of a lengthy career, he kept up this breakneck pace with a degree of longevity and persistence that seems to have eluded his peers.
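A cumulative curve like the one above is just a running sum of each justice's yearly counts. A minimal sketch, with invented numbers, shows how consistency wins out even without a dominant single year:

```python
from itertools import accumulate

def cumulative_output(yearly_counts):
    """Turn per-year opinion counts into a running career total."""
    return list(accumulate(yearly_counts))

# Invented numbers: two justices with similar peak capacity, one
# steady and one bursty. The steady sequence ends higher despite
# never having the biggest single year.
steady = cumulative_output([60, 55, 58, 62])
bursty = cumulative_output([90, 10, 80, 15])
```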

Perhaps a bit of this relative immunity to fatigue can be attributed to the style of Cartwright’s writing. Per the visualization below, Cartwright tended to write shorter opinions than the majority of his peers – his average opinion totaled about 1,724 words, as compared to the court-wide average of 1,949 words. Justice Orrin Carter, the second most prolific justice on the court in the period examined, averaged about 2,209 words per opinion. Carter’s 1,129 opinions contain a total of 2,493,649 words, whereas Cartwright’s 1,978 opinions contain 3,411,869. Interestingly, Cartwright’s word counts were at their lowest at the beginning and end of his career.

Cartwright's average opinion lengths relative to those of his peers
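The word-count comparison reduces to a simple average over each justice's opinion texts. A sketch using whitespace tokenization follows; the notebook's exact method may differ, and OCR noise makes all such counts approximate:

```python
def average_opinion_length(opinions):
    """Mean word count across a list of opinion texts."""
    word_counts = [len(text.split()) for text in opinions]
    return sum(word_counts) / len(word_counts)

# Toy strings, not real opinions:
avg = average_opinion_length([
    "short terse opinion",                 # 3 words
    "a somewhat longer opinion here",      # 5 words
])
# avg == 4.0
```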

This basic investigation demonstrates just a few of the insights that this dataset offers into the professional life of Cartwright and his peers. In the hands of an interested researcher with questions to ask, a few gigabytes of digitized caselaw can speak volumes to the progress of American legal history and its millions of little stories.

The data used in this blog post can be downloaded on the Caselaw Access Project Website: https://capapi.org/bulk-access/. An iPython Notebook containing all of the analysis and visualization code used in this post can be found on Github here: https://github.com/john-bowers/capexamples/blob/master/CAPDemo.ipynb. Please note that this dataset contains OCR errors and was not cleaned completely – figures are approximate.

The CAP Tracking Tool

Evelin Heidel (@scannopolis on Twitter) recently asked me to document our Caselaw Access Project (website, video) digitization workflow, and open up the source for the CAP "Tracking Tool." I'll dig into our digitization workflow in my next post, but in this post, I'll discuss the Tracking Tool or TT for short. I created the TT to track CAP's physical and digital objects and their associated metadata. More specifically, it:

  • Tracked the physical book from receipt, to scanning, to temporary storage, to permanent storage
  • Served as a repository for book metadata, some of which was retrieved automatically through internal APIs, but most of which was keyed in by hand
  • Tracked the digital objects from scanning to QA, to upload, to receipt from our XML vendor
  • Facilitated sending automated delivery requests to the Harvard Depository, which stored most of our reporters
  • Provided reports on the progress of the project and the fitness of the data we were receiving from our XML vendor

If I might toot my own horn, I'd say it drastically improved the efficiency and accuracy of the project, so it's no wonder Evelin is not the first person to request I open up the source. If doing so were a trivial undertaking I certainly wouldn't hesitate, but it's not. While we have a policy of making all new projects public by default in LIL, that was not the case in the position I held when I created the tracking tool. And while there's nothing particularly sensitive in the code, I'm not comfortable releasing it without a thorough review. I also don't believe that after all that work the code would be particularly useful to people. There's so much technical debt, and it's so tightly coupled with our process, data, vendors, and institutional resources that I'm sure adapting it to a new project would take significantly more effort than starting over. I'm confident that development of Capstone — the tool which manages and distributes the fruits of this project — is a much better use of my time.

Please allow me to expound.

The Tracking Tool - The Not So Great Parts

During the project's conception in 2013, I conceived of the TT as a small utility to track metadata and log the receipt, scanning, and shipping of casebooks. Turning a small utility into a monolithic data management environment by continually applying ad hoc enhancements under significant time constraint is the perfect recipe for technical debt, and that's precisely what we ended up with.

S3 bucket names are hard-coded into models. Recipients' email addresses are hard-coded into automated reports. Tests? Ha!

The only flexibility I designed into the application, such as the ability to configure the steps each volume would proceed through during the digitization process, was there to mitigate not knowing exactly what the workflow would look like when I started coding, not because I was trying to make a general-purpose tool. It was made, from the ground up, to work with our project-specific idiosyncrasies. For example, code peppered throughout the application handles a volume's reporter series, which is a critical part of this workflow but nonexistent in most projects. Significant bits of functionality depend on access to internal Harvard APIs, or on having data formatted in the CaseXML, VolumeXML, and ALTO formats.

If all of that weren't enough, it's written in everybody's favorite language, PHP5, using Laravel 4, which was released in 2013 and isn't the most straightforward framework to upgrade. I maintain that this was a good design choice at the time, but it's not something I'd recommend adopting today.

Now that I've dedicated a pretty substantial chunk of this post to how the TT is a huge, flaming pile of garbage, let's jump right over to the "pro" column before I get fired.

The Tracking Tool - The Better Parts

Despite all of its hacky bits, the TT is functional, stable, and does its jobs well.

Barcodes

Each book is identified in the TT by its barcode, so users can quickly bring up a book's metadata/event log screen with the wave of a barcode scanner. Harvard's cataloging system assigned most of the barcodes, but techs could generate new CAP-only barcodes for the occasional exception, such as when we received a book from another institution. Regardless of the barcode's source, every book needed an entry in the TT's database. Techs could create those entries individually if necessary, but most often created them in bulk. If a book had a cataloging system barcode, the TT pulled some metadata, such as the volume number and publication year, from the cataloging API.

Reporters

A crucial part of the metadata and organization of this tool is the reporter table — a hand-compiled list of every reporter series in the scope of this project. Several expert law librarians constructed the table by combing through a few hundred years of Harvard cataloging data, which, after many generations of library management and cataloging systems, had varying levels of accuracy. If you're interested, check out our master reporter list on github! The application guesses each volume's reporter based on its HOLLIS number — another internal cataloging identifier — but its guess must be double-checked by the tech.
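The guessing step can be imagined as a simple lookup keyed on HOLLIS number. The sketch below is purely illustrative: the real TT is PHP/Laravel and works against full database tables, and these numbers and reporter names are invented.

```python
# Illustrative only: HOLLIS numbers and reporter names are made up.
HOLLIS_TO_REPORTER = {
    "001234567": "Ill. (Illinois Reports)",
    "007654321": "Ill. App. (Illinois Appellate Court Reports)",
}

def guess_reporter(hollis_number):
    """Guess a volume's reporter from its HOLLIS number.

    Returns (reporter, needs_review). The guess is never trusted
    outright -- a tech must always confirm it.
    """
    reporter = HOLLIS_TO_REPORTER.get(hollis_number)
    return reporter, True  # always flag for double-checking

reporter, needs_review = guess_reporter("001234567")
```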

Automated Expertise

Several data points created during the in-hand metadata analysis stage could trigger outside review. If a book was automatically determined to be rare using criteria set by our Special Collections department, or the tech flagged it as needing bibliographic review, the TT included the barcode in its daily email to the appropriate group of specialists.

Process Steps and Book Logs

The system has a configurable set of process steps each volume must complete, such as in-hand metadata analysis or scanning, with configurable prerequisites. Such a system ensures all books proceed through all of the steps, in the intended order, and facilitates very granular progress reports. Each step is recorded in the book's log, which also contains:

  • Info Entries: e.g., user x changed the publication year for this book
  • Warnings: e.g., the scan job was put on hold
  • Exceptions: e.g., the scanned book failed the QA test

Control Flow

Each of those process steps has a configurable set of prerequisites. For example, to mark a book as "analyzed," it must have several metadata elements recorded. To mark it as "stored on X shelf," the log must contain a "scanned" event.
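In rough Python terms (the real TT is PHP/Laravel, and these step names are invented), the gating amounts to checking a book's log against each step's configured prerequisites:

```python
# Hypothetical step names; in the TT, steps and their prerequisites
# are configurable rather than hard-coded like this.
PREREQUISITES = {
    "analyzed": [],
    "scanned": ["analyzed"],
    "stored_on_shelf": ["scanned"],
}

def can_apply_step(step, book_log):
    """A step may be applied only when every prerequisite is logged."""
    return all(prereq in book_log for prereq in PREREQUISITES[step])

log = ["analyzed"]
can_apply_step("scanned", log)          # True: "analyzed" is logged
can_apply_step("stored_on_shelf", log)  # False: no "scanned" event yet
```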

If a supervisor needs to track down a book during the digitization process, they can put that book "on hold." The next person to scan that barcode sees a prominent warning and must engage with a confirmation prompt before taking any action. Generally, the person who placed the volume on hold would put instructions in the volume's notes field.

Efficiency

Accessing each volume's page to record an event, such as receipt of a book from the repository, is terribly inefficient for more than a few books. In the TT's streamlined mode, techs specify a process step, which they can then bulk-apply to any book by scanning its barcode. An audio cue indicates whether or not the status was applied correctly, so the technician doesn't even have to look at the computer unless there's a problem.

External Communication

The TT has a simple REST API to communicate with daemons that run on other systems. Through the API, external processes can trigger uploading metadata once the file upload is complete, monitor our scanner output, discover newly uploaded objects from our vendor, sync scan timestamp and QA status, and a few other things.

Quality Assurance

Within the TT lies a system to inspect the output received from our XML vendor. The user can view statistics about the number of volumes received per state or jurisdiction, drill down to see XML tag statistics at different levels of granularity, or even drill down to individual cases to view page images overlaid with interactive ALTO text. The higher-level overviews were quite useful in ironing out some vendor process problems.

The Long and Short of It

The tracking tool was an invaluable part of the CAP workflow, but vast swaths of the code would only be useful to people replicating this exact project: using Harvard's internal cataloging systems, using the same highly automated scanner configured precisely as ours was, and receiving XML in the exact format we designed for this project. While a subset of the TT's features would be pretty useful to most people doing book digitization, I am very confident that anybody interested in using it would be much better off creating a simpler, more generalizable tool from scratch, using a better language. I've considered starting such an open source tool for digitization projects, but if someone else gets to it first, I'd be happy to share the architectural wisdom I've gained from writing the TT. If someone knows of another open source project already doing this, let me know; I'd love to check it out. Reach out to lil@law.harvard.edu with any questions, comments, or hate mail!

The 'Library' in Library Innovation Lab

A roomful of people sitting in armchairs with laptops may not appear at first glance to be a place where library work is happening. It could look more like a tech startup, or maybe a student lounge (modulo the ages of some of the people in the armchairs). You don't have to be a librarian to see it, but, as the only librarian presently working at LIL, I'll try to show how LIL's work is at the heart of librarianship.

Of our main projects, the Nuremberg project is the closest to a notion of traditional library work: scanning, optical character recognition, and metadata creation for trial documents and transcripts from the Nuremberg Military Tribunals, a collection of enormous historical interest. This is squarely in the realm of library collections, preservation, and access.

In its broad outline, the work on Nuremberg is similar to that of the Caselaw Access Project, the digitization of all U.S. case law. This project, however, is what Jonathan Zittrain has referred to as a systemic intervention. By making the law freely accessible online, we are not only going to alter the form of and access to the print collection, but we are going to transform the relationships of libraries, lawyers, courts, scholars, and citizens to the law. By freeing the law for a multitude of uses, the Caselaw Access Project will support efforts like H2O, LIL's free casebook platform, another intervention into the field of publishing.

Over the last forty years or so, as computers have become more and more essential to library work, libraries have ceded control to vendors. For example, not only does a library subscribing to an online journal database lose the ability to make collection development decisions autonomously (though LOCKSS, a distributed preservation system, helps address this), but, in relinquishing control of the platform, it relinquishes the power to protect patron confidentiality, and consequently intellectual freedom.

Perma.cc is an intervention of a different sort, a tool to combat link rot. As a means of permanently archiving web links, it's close to libraries' preservation efforts, but the point of action is generally the author or editor of a document, not an archivist working post-publication. Further, Perma's reliability rests on the authority of the library to maintain collections in perpetuity.

As library work, these interventions are radical, in the sense of at-the-root: they address core activities of the library, they engage long-standing problems in librarianship, and they expand on and distribute traditional library work.

Announcing the 2018 Cohort of LIL Summer Fellows

Each summer brings to LIL a new cohort of Summer Fellows to inspire and challenge us with their visions of what libraries make possible.

Over the past two summers, we've learned with and from these colleagues about building online collections of local news stories (Alexander Nwala), connecting citizens of Nigeria with information about human rights law (Jake Effoduh), exploring the Guantanamo Detainee Library (Muira McCammon), creating high fidelity web archives (Ilya Kreymer), imagining Palestine-Israel through maps (Zena Agha) and many other things (more about our previous cohorts here, here and here).

Next month, we'll welcome eight new Summer Fellows to LIL:

  • Hannah Brinkmann - Hannah is a comic artist and student at the University of Applied Sciences in Hamburg, Germany. She'll be connecting with our Nuremberg Trials Project and our Library's foreign collections to develop a graphic novel about "conscience trials" in Germany.

  • Alexandra Dolan-Mescal - Alexandra is a UX designer focused on ethics in design and research. She'll be developing a social media data label system inspired by the Traditional Knowledge Labels project.

  • Tim Walsh - Tim is a digital archivist and programmer. He'll be working on tools to help librarians and archivists manage sensitive personal information in digital archives.

  • Carrie Bly - Carrie is an architect studying at the Harvard Graduate School of Design. She'll be exploring connections and contrasts between library and garden classification systems.

  • Shira Feldman - Shira is an artist and writer. This summer she will be exploring the intersection of internet and art, and what it means to live in a networked, digital culture.

  • Kendra Greene - Kendra is a writer and researcher from Dallas. She'll be working on a book about dangerous library collections.

  • Evelin Heidel - Evelin (aka scann) is a teacher and open knowledge advocate from Argentina with deep experience in DIY digitization. She'll be developing learning resources to help small libraries, community archives, underrepresented groups and others build their own digital collections.

  • Franny Corry - Franny is a digital history researcher studying at the USC Annenberg School for Communication and Journalism. She'll be working this summer on combining personal narrative with web archives to collect social histories of the web.

We invited these eight explorers to join us after reviewing over 120 applications and conducting roughly 60 interviews, including multiple interviews with all of the finalists. This year's applicant pool amazed and challenged us, and we are so grateful to everyone who applied and who helped spread the word about the opportunity. Thank you!

LIL Takes Toronto: the Creative Commons Summit 2018

The Creative Commons Conference in Toronto was wonderful this past weekend! It was a pleasure to meet the mix of artists, educators, civil servants, policymakers, journalists, and copyright agents (and more) who were there.

Talks touched on everything from how the Toronto Public Library manages their digital collections and engages their local cultural and tech communities with collections, to feminist theory and open access, to the state of copyright/open access worldwide.

The range of stakeholders and interested parties involved with open access was greater than I realized. While I'm familiar with libraries and academics being interested in OA and OER, the number of government policymakers and artists who were there to learn and discuss was heartening.

Until next year, Creative Commons! And thank you, Ontario! -Brett and Casey

Brett and Casey

Overheard in LIL - Episode 2

This week:

A chat bot that can sue anyone and everything!

Devices listening to our every move

And an interview with Jack Cushman, a developer in LIL, about built-in compassion (and cruelty) in law, why lawyers should learn to program, weird internet, and lovely podcast gimmicks (specifically that of Rachel and Griffin McElroy's Wonderful! podcast)

Starring Adam Ziegler, Anastasia Aizman, Brett Johnson, Casey Gruppioni, and Jack Cushman.

LITA, Day One

We're off to a great start here in Denver at the LITA 2017 Forum.

Casey Fiesler set the mood for the afternoon with a thought-provoking discussion of algorithmically-aided decision-making and its effects on our daily lives. Do YouTube's copyright-protecting algorithms necessarily put fetters on Fair Use? Do personalized search results play to our unconscious tendency to avoid things we dislike? Neither "technological solutionism" nor technophobia is an adequate response. Fiesler calls for algorithmic openness (tell us when algorithms are in use, and what they are doing), and for widespread acknowledgment that human psychology and societal factors are deeply implicated as well.

In a concurrent session immediately afterwards, Sam Kome took a deep dive into the personally identifiable information (PII) his library (and certainly everyone else's) has been unwittingly collecting about their patrons, simply by using today's standard technologies. Kome is examining everything from the bottom up, scrubbing data and putting in place policies to ensure that little or no PII touches his library systems again.

Jayne Blodgett discussed her strategy for negotiating the sometimes tense relationship between libraries and their partners in IT; hot on the heels of the discussion about patron privacy and leaky web services, the importance of this relationship couldn't be more plain.

Samuel Willis addressed web accessibility and its centrality to the mission of libraries. He detailed his efforts to survey and improve the accessibility of resources for patrons with print disabilities, and offered suggestions for inducing vendors to improve their products. The group pondered how to maintain the privacy of patrons with disabilities, providing the services they require without demanding that they identify themselves as disabled, and without storing that personal information in library systems.

The day screeched to a close with a double-dose of web security awareness: Gary Browning and Ricardo Viera checked the security chops of the audience, and offered practical tips for foiling the hackers who can and do visit our libraries and access our libraries' systems. (Word to the wise: you probably should be blocking any unneeded USB ports in your public-facing technology with USB blockers.)

And that's just one path through the many concurrent sessions from this afternoon at LITA.

Looking forward to another whirlwind day tomorrow!

Git physical

This is a guest blog post by our summer fellow Miglena Minkova.

Last week at LIL, I had the pleasure of running a pilot of git physical, the first part of a series of workshops aimed at introducing git to artists and designers through creative challenges. In this workshop I focused on covering the basics: three-tree architecture, simple git workflow, and commands (add, commit, push). These lessons were fairly standard but contained a twist: The whole thing was completely analogue!

The participants, a diverse group of fellows and interns, engaged in a simplified version control exercise. Each participant was tasked with designing a postcard about their summer at LIL. Following basic git workflow, they took their designs from the working directory, through the staging index, to the version database, and to the remote repository where they displayed them. In the process they “pushed” five versions of their postcard design, each accompanied by a commit note. Working in this way allowed them to experience the workflow in a familiar setting and learn the basics in an interactive and social environment. By the end of the workshop everyone had ideas on how to implement git in their work and was eager to learn more.
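For readers who want to replay the exercise digitally, the same sequence looks like this in git itself, with a local bare repository standing in for the workshop's "remote" display wall (the paths, identity, and commit message below are invented):

```shell
workdir=$(mktemp -d)
git init --bare "$workdir/display.git"       # the "remote" display wall
git init "$workdir/postcard"
cd "$workdir/postcard"
git config user.name  "Summer Fellow"        # placeholder identity
git config user.email "fellow@example.org"
echo "postcard v1: sailboat sketch" > postcard.txt   # working directory
git add postcard.txt                         # staging index
git commit -m "v1: first draft"              # version database
git remote add origin "$workdir/display.git"
git push origin HEAD                         # remote repository
```

Repeating the edit/add/commit/push cycle five times mirrors the five postcard versions the participants "pushed" on paper.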

Timelapse gif by Doyung Lee (doyunglee.github.io)

Not to mention some top-notch artwork was created.

The workshop was followed by a short debriefing session and Q&A.

Check GitHub for more info.

Alongside this overview, I want to share some of the thinking that went behind the scenes.

Starting with some background: artists and designers perform version control in their work, but in a much different way than developers do with git. They often use error-prone strategies to track document changes, such as saving files in multiple places under obscure file naming conventions, working in large master files, or relying on built-in software features. At best, these strategies result in inconsistencies, duplication, and wasted disk storage; at worst, irreversible mistakes, loss of work, and multiple conflicting documents. Despite experiencing some of the same problems as developers, artists and designers are largely unfamiliar with git (exceptions exist).

The impetus for teaching artists and designers git was my personal experience with it. I had not been formally introduced to the concept of version control or git through my studies, nor through my work. I discovered git during the final year of my MLIS degree, when I worked with an artist to create a modular open source digital edition of an artist’s book. This project helped me see git as a ubiquitous tool with versatile application across multiple contexts and practices, the common denominator of which is making, editing, and sharing digital documents.

I realized that I was faced with a challenge: How do I get artists and designers excited about learning git?

I used my experience as a design-educated digital librarian to create relatable content and to tailor delivery to the specific characteristics of the audience: highly visual, creative, and non-technical.

Why create another git workshop? There are, after all, plenty of good-quality learning resources out there, and I have no intention of reinventing the wheel or competing with them. However, I noticed some gaps that I wanted to address through my workshop.

First of all, I wanted to focus on accessibility and have everyone start on equal ground with no prior knowledge or technical skills required. Even the simplest beginner level tutorials and training materials rely heavily on technology and the CLI (Command Line Interface) as a way of introducing new concepts. Notoriously intimidating for non-technical folk, the CLI seems inevitable given the fact that git is a command line tool. The inherent expectation of using technology to teach git means that people need to learn the architecture, terminology, workflow, commands, and the CLI all at the same time. This seems ambitious and a tad unrealistic for an audience of artists and designers.

I decided to put the technology on hold and combine several pedagogies to leverage learning: active learning, learning through doing, and project-based learning. To contextualize the topic, I embedded elements of the practice of artists and designers by including an open-ended creative challenge to serve as a trigger and an end goal. I toyed with different creative challenges using deconstruction, generative design, and surrealist techniques. However, these seemed to steer away from the main goal of the workshop. They also made it challenging to narrow down the scope, especially as I realized that no single workflow can embrace the diversity of creative practices. In the end, I chose to focus on versioning a combination of image and text in a single document. This helped to define the learning objectives and to cover only one functionality: the basic git workflow.

I considered it important to introduce concepts gradually in a familiar setting, using analogue means to visualize black-box concepts and processes. I wanted to employ abstraction to present the git workflow in a tangible, easily digestible, and memorable way. To achieve this, the physical environment and setup were crucial to the delivery of the learning objectives. In designing the workspace, I assigned and labelled different areas of the space to represent the components of git’s architecture. I used directional arrows to illustrate the workflow sequence alongside the commands that needed to be executed, and used a “remote” as a way of displaying each version on a timeline. Low-tech and no-tech solutions such as carbon paper were used to make multiple copies. It took several experiments to get the sketchpad layering right, especially as I did not want to introduce manual redundancies that do little justice to git.

Thinking over the audience interaction, I considered role play and collaboration. However, these modes did not enable each participant to go through the whole workflow, and fell short of addressing the learning objectives. Instead, I provided each participant with initial instructions to guide them through the basic git workflow and to repeat it over and over again using their own design work. The workshop concluded with a debriefing that articulated the specific benefits for artists and designers, outlined use cases depending on the type of work they produce, and featured some existing examples of artwork done using git. This was to emphasize that the workshop did not offer a one-size-fits-all solution, but rather a tool that artists and designers can experiment with and adopt in many different ways in their work.

I want to thank Becky and Casey for their editing work.

Going forward, I am planning to develop a series of workshops introducing other git functionality, such as basic merging and branching, diff-ing, and more, with a lab exercise attached to each. By providing multiple ways of processing the same information, I hope that participants will successfully connect the workshop experience with git practice.