Overheard in LIL - Episode 2

This week:

A chat bot that can sue anyone and everything!

Devices listening to our every move

And an interview with Jack Cushman, a developer in LIL, about built-in compassion (and cruelty) in law, why lawyers should learn to program, weird internet, and lovely podcast gimmicks (specifically that of Rachel and Griffin McElroy's Wonderful! podcast)

Starring Adam Ziegler, Anastasia Aizman, Brett Johnson, Casey Gruppioni, and Jack Cushman.

LITA, Day One

We're off to a great start here in Denver at the LITA 2017 Forum.

Casey Fiesler set the mood for the afternoon with a thought-provoking discussion of algorithmically-aided decision-making and its effects on our daily lives. Do YouTube's copyright-protecting algorithms necessarily put fetters on Fair Use? Do personalized search results play to our unconscious tendency to avoid things we dislike? Neither "technological solutionism" nor technophobia is an adequate response. Fiesler calls for algorithmic openness (tell us when algorithms are in use, and what they are doing), and for widespread acknowledgment that human psychology and societal factors are deeply implicated as well.

In a concurrent session immediately afterwards, Sam Kome took a deep dive into the personally identifiable information (PII) his library (and certainly everyone else's) has been unwittingly collecting about its patrons, simply by using today's standard technologies. Kome is examining everything from the bottom up, scrubbing data and putting in place policies to ensure that little or no PII touches his library systems again.

Jayne Blodgett discussed her strategy for negotiating the sometimes tense relationship between libraries and their partners in IT; hot on the heels of the discussion about patron privacy and leaky web services, the importance of this relationship couldn't be more plain.

Samuel Willis addressed web accessibility and its centrality to the mission of libraries. He detailed his efforts to survey and improve the accessibility of resources for patrons with print disabilities, and offered suggestions for inducing vendors to improve their products. The group pondered how to maintain the privacy of patrons with disabilities, providing the services they require without demanding that they identify themselves as disabled, and without storing that personal information in library systems.

The day screeched to a close with a double dose of web security awareness: Gary Browning and Ricardo Viera checked the security chops of the audience, and offered practical tips for foiling the hackers who can and do visit our libraries and access our libraries' systems. (Word to the wise: you should probably be blocking any unneeded USB ports in your public-facing technology with USB blockers.)

And that's just one path through the many concurrent sessions from this afternoon at LITA.

Looking forward to another whirlwind day tomorrow!

Git physical

This is a guest blog post by our summer fellow Miglena Minkova.

Last week at LIL, I had the pleasure of running a pilot of git physical, the first part of a series of workshops aimed at introducing git to artists and designers through creative challenges. In this workshop I focused on covering the basics: three-tree architecture, simple git workflow, and commands (add, commit, push). These lessons were fairly standard but contained a twist: The whole thing was completely analogue!

The participants, a diverse group of fellows and interns, engaged in a simplified version control exercise. Each participant was tasked with designing a postcard about their summer at LIL. Following basic git workflow, they took their designs from the working directory, through the staging index, to the version database, and to the remote repository where they displayed them. In the process they “pushed” five versions of their postcard design, each accompanied by a commit note. Working in this way allowed them to experience the workflow in a familiar setting and learn the basics in an interactive and social environment. By the end of the workshop everyone had ideas on how to implement git in their work and was eager to learn more.
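For readers who want to try the digital equivalent, the cycle the participants acted out maps onto a handful of commands. This is a minimal sketch, not the workshop's materials; the repository and file names are invented for illustration, with a local bare repository standing in for the "remote" display wall:

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the "remote" where finished designs were displayed.
git init --bare -q "$tmp/display.git"

# The working directory, where a postcard design is edited.
git init -q "$tmp/postcard"
cd "$tmp/postcard"
git remote add origin "$tmp/display.git"

# One round of the workflow the participants repeated five times:
echo "postcard v1: my summer at LIL" > postcard.txt
git add postcard.txt                     # working directory -> staging index
git -c user.name=fellow -c user.email=fellow@example.org \
    commit -q -m "First postcard draft"  # staging index -> version database
git push -q origin HEAD                  # local repository -> remote
```

Repeating the last three commands with a revised design produces the five "pushed" versions, each with its own commit note.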

Timelapse gif by Doyung Lee (doyunglee.github.io)

Not to mention some top-notch artwork was created.

The workshop was followed by a short debriefing session and Q&A.

Check GitHub for more info.

Alongside this overview, I want to share some of the thinking that went on behind the scenes.

Starting with some background: artists and designers perform version control in their work, but in a much different way than developers do with git. They often use error-prone strategies to track document changes, such as saving files in multiple places, using obscure file-naming conventions, working in large master files, or relying on built-in software features. At best these strategies result in inconsistencies, duplication, and heavy disk usage; at worst, irreversible mistakes, loss of work, and multiple conflicting documents. Despite experiencing some of the same problems as developers, artists and designers are largely unfamiliar with git (exceptions exist).

The impetus for teaching artists and designers git was my personal experience with it. I had not been formally introduced to the concept of version control or git through my studies or my work. I discovered git during the final year of my MLIS degree when I worked with an artist to create a modular open source digital edition of an artist’s book. This project helped me see git as a ubiquitous tool with versatile application across multiple contexts and practices, the common denominator of which is making, editing, and sharing digital documents.

I realized that I was faced with a challenge: How do I get artists and designers excited about learning git?

I used my experience as a design-educated digital librarian to create relatable content and tailor delivery to the specific characteristics of the audience: highly visual, creative, and non-technical.

Why create another git workshop? There are, after all, plenty of good quality learning resources out there and I have no intention of reinventing the wheel or competing with existing learning resources. However, I have noticed some gaps that I wanted to address through my workshop.

First of all, I wanted to focus on accessibility and have everyone start on equal ground with no prior knowledge or technical skills required. Even the simplest beginner level tutorials and training materials rely heavily on technology and the CLI (Command Line Interface) as a way of introducing new concepts. Notoriously intimidating for non-technical folk, the CLI seems inevitable given the fact that git is a command line tool. The inherent expectation of using technology to teach git means that people need to learn the architecture, terminology, workflow, commands, and the CLI all at the same time. This seems ambitious and a tad unrealistic for an audience of artists and designers.

I decided to put the technology on hold and combine several pedagogies to leverage learning: active learning, learning by doing, and project-based learning. To contextualize the topic, I embedded elements of the practice of artists and designers by including an open-ended creative challenge to serve as a trigger and an end goal. I toyed with different creative challenges using deconstruction, generative design, and surrealist techniques. However, these seemed to steer away from the main goal of the workshop. They also made it challenging to narrow down the scope, especially as I realized that no single workflow can embrace the diversity of creative practices. In the end, I chose to focus on versioning a combination of image and text in a single document. This helped to define the learning objectives and to cover only one piece of functionality: the basic git workflow.

I considered it important to introduce concepts gradually in a familiar setting, using analogue means to visualize black-box concepts and processes. I wanted to employ abstraction to present the git workflow in a tangible, easily digestible, and memorable way. To achieve this, the physical environment and setup were crucial for the delivery of the learning objectives. In designing the workspace, I assigned and labelled different areas of the space to represent the components of git’s architecture. I used directional arrows to illustrate the workflow sequence alongside the commands that needed to be executed, and used a “remote” as a way of displaying each version on a timeline. Low-tech or no-tech solutions such as carbon paper were used to make multiple copies. It took several experiments to get the sketchpad layering right, especially as I did not want to introduce manual redundancies that do little justice to git.

Thinking over the audience interaction, I had considered role play and collaboration. However, these modes did not enable each participant to go through the whole workflow and fell short of addressing the learning objectives. Instead, I provided each participant with initial instructions to guide them through the basic git workflow and repeat it over and over again using their own design work. The workshop was followed by a debriefing which articulated the specific benefits for artists and designers, outlined use cases depending on the type of work they produce, and featured some existing examples of artwork done using git. This was to emphasize that the workshop did not offer a one-size-fits-all solution, but rather a tool that artists and designers can experiment with and adopt in many different ways in their work.

I want to thank Becky and Casey for their editing work.

Going forward, I am planning to develop a series of workshops introducing other git functionality such as basic merging and branching, diff-ing, and more, and to attach a lab exercise to each of them. By providing multiple ways of processing the same information, I am hoping that participants will successfully connect the workshop experience and git practice.

AALL 2017: The Caselaw Access Project + Perma.cc Hit Austin

Members of the LIL team including Adam, Anastasia, Brett and Caitlin visited Texas this past weekend to participate in the American Association of Law Libraries Conference in Austin. Tacos were eaten, talks were given (and attended) and friends were made over additional tacos.

Brett and Caitlin had the chance to meet dozens of law librarians, court staff and others while manning the Perma.cc table in the main hall:

On Monday, Adam and Anastasia presented “Case Law as Data: Making It, Sharing It, Using It”, discussing the CAP project and exploring ways to use the new legal data the project is surfacing.

After their presentation they asked attendees for ideas on ways to use the data and received an incredible response — over 60 ideas were tossed out by those there!

This year’s AALL was a hot spot of good ideas, conversation and creative thought. Thanks AALL and inland Texas!

A Million Squandered: The “Million Dollar Homepage” as a Decaying Digital Artifact

In 2005, British student Alex Tew had a million-dollar idea. He launched www.MillionDollarHomepage.com, a website that presented initial visitors with nothing but a 1000×1000 canvas of blank pixels. At the cost of $1/pixel, visitors could permanently claim 10×10 blocks of pixels and populate them however they’d like. Pixel blocks could also be embedded with URLs and tooltip text of the buyer’s choosing.

The site took off, raising a total of $1,037,100 (the last 1,000 pixels were auctioned off for $38,100). Its customers and content demonstrate a massive range of variation, from individuals bragging about their disposable income to payday loan companies and media promoters. Some purchased minimal 10×10 blocks, while others strung together thousands of pixels to create detailed graphics. The biggest graphic on the page, a chain of pixel blocks purchased by a seemingly defunct domain called “pixellance.com”, contains $10,800 worth of pixels.

The largest graphic on the Million Dollar Homepage, an advertisement for www.pixellance.com

While most of the graphical elements on the Million Dollar Homepage are promotional in nature, it seems safe to say that the buying craze was motivated by a deeper fixation on the site’s perceived importance as a digital artifact. A banner at the top of the page reads “Own a Piece of Internet History,” a fair claim given the coverage that it received in the blogosphere and in the popular press. To buy a block of pixels was, in theory, to leave one’s mark on a collective accomplishment reflective of the internet’s enormous power to connect people and generate value.

But to what extent has this history been preserved? Does the Million Dollar Homepage represent a robust digital artifact 12 years after its creation, or has it fallen prey to the ephemerality common to internet content? Have the forces of link rot and administrative neglect rendered it a shell of its former self?

The Site

On the surface, there is little amiss with www.MillionDollarHomepage.com. Its landing page retains its early-2000s styling, save for an embedded Twitter link in the upper left corner. The (now full) pixel canvas remains intact, saturated with the eye-melting color palettes of an earlier internet era. Overall, the site’s landing page gives the impression of having been frozen at the time of its completion.

A screenshot of the Million Dollar Homepage captured in July of 2017

However, efforts to access the other pages linked on the site’s navigation bar return unformatted 404 messages. The “contact me” link redirects to the creator’s Twitter page. It seems that the site has been stripped of its functional components, leaving little but the content of the pixel canvas itself.

Still, the canvas remains a largely intact record of the aesthetics and commercialization patterns of the internet circa 2005. It is populated by pixelated representations of clunky fonts, advertisements for sketchy-looking internet gambling sites, and promises of risqué images. Many of the pixel blocks bear a familial resemblance to today’s clickbait banner ads, with scantily clothed models and promises of free goods and content. Of course, this eye-catching pixel art serves a specific purpose: to get the user to click, redirecting to a site of the buyer’s choosing. What happens when we do?

Internet links are not always permanent. As pages are deleted or renamed, backends are restructured, and domain namespaces change hands, previously reachable content and resources can be replaced by 404 pages. This “link rot” is the target of the Library Innovation Lab’s Perma.cc project, which allows individuals and institutions to create archived snapshots of webpages, hosted at trustworthy, static URLs.

Over the decade or so since the Million Dollar Homepage sold its last pixel, link rot has ravaged the site’s embedded links. Of the 2,816 links embedded on the page (accounting for a total of 999,400 pixels), 547 are entirely unreachable at this time. A further 489 redirect to a different domain or to a domain resale portal, leaving 1,780 reachable links. Most of the domains to which these links correspond are for sale or devoid of content.
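The post doesn't include the survey code, but the three-way bucketing used above (unreachable, off-domain redirect, reachable) can be sketched in a few lines of Python. This is an illustrative assumption about the method, not the original tooling; function names and the hostname-comparison heuristic are my own:

```python
from urllib.parse import urlsplit

def classify(original_url, final_url=None, status=None, error=None):
    """Bucket one embedded link into the three categories used above."""
    # Connection failures and HTTP errors count as unreachable.
    if error is not None or status is None or status >= 400:
        return "unreachable"
    # A response served from a different hostname counts as a redirect
    # (e.g. a domain resale portal).
    if final_url and urlsplit(final_url).hostname != urlsplit(original_url).hostname:
        return "redirect"
    return "reachable"

def check(url, timeout=10):
    """Fetch a URL and classify it; kept separate so classify() is testable offline."""
    import urllib.request, urllib.error
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(url, resp.geturl(), resp.status)
    except (urllib.error.URLError, OSError) as exc:
        return classify(url, error=exc)
```

Running `check()` over all 2,816 URLs and tallying the three buckets would reproduce a survey of this shape, though judging the "content value" of reachable pages still takes manual review, as noted below.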

A visualization of link rot in the Million Dollar Homepage. Pixel blocks shaded in red link to unreachable or entirely empty pages, blocks shaded in blue link to domain redirects, and blocks shaded in green are reachable (but are often for sale or have limited content) [Note: this image replaces a previous image which was not colorblind-safe]

The 547 unreachable links are attached to graphical elements that collectively take up 342,000 pixels (face value: $342,000). Redirects account for a further 145,000 pixels (face value: $145,000). While it would take a good deal of manual work to assess the reachable pages for content value, the majority do not seem to reflect their original purpose. Though the Million Dollar Homepage’s pixel canvas exists as a largely intact digital artifact, the vast web of sites which it publicizes has decayed greatly over the course of time.

The decay of the Million Dollar Homepage speaks to a pressing challenge in the field of digital archiving. The meaning of a digital artifact to a viewer or researcher is often dependent on the accessibility of other digital artifacts with which it is linked or otherwise networked — a troubling proposition given the inherent dynamism of internet links and addresses. The process of archiving a digital object does not, therefore, necessarily end with the object itself.

What, then, is to be done about the Million Dollar Homepage? While it has clear value as an example of the internet’s ever-evolving culture, emergent potential, and sheer bizarreness, the site reveals itself to be little more than an empty directory upon closer inspection. For the full potential of the Million Dollar Homepage as an artifact to be realized, the web of sites which it catalogues would optimally need to be restored as it existed when the pixels were sold. Given the existence of powerful and widely accessible tools such as the Wayback Machine, this kind of restorative curation may well be within reach.

LIL Talks: Comedy

This is a guest post by our LIL interns — written by Zach Tan with help from Anna Bialas and Doyung Lee

This week, LIL’s resident comic (and staff member) Brett Johnson taught a room full of LIL staff, interns, and fellows the finer intricacies of stand-up comedy, including the construction of a set, joke writing, and the challenges and high points of the craft.

As one example, Brett showed and broke down multiple jokes into the core structure of setup and punch line (or, platform and dismount) for analysis. We were also given an insight into the industry, where we often take for granted the sheer amount of work, honing, and refining that goes into a set.

We also explored what it meant to be a comic, and how the immediacy of audience reaction and enjoyment means that stand up comedy is one of the only art forms with an extremely evident (and sometimes, brutal) line between success and failure.

Though the talk was littered with choice jokes and funny bits, we definitely came away with a refreshing look into some aspects of stand-up comedy that rarely go noticed.

Warc.games at IIPC

At IIPC last week, Jack Cushman (LIL developer) and Ilya Kreymer (former LIL summer fellow) shared their work on security considerations for web archives, including warc.games, a sandbox for developers interested in exploring web archive security.

Slides: http://labs.rhizome.org/presentations/security.html

Warc.games repo: https://github.com/harvard-lil/warcgames

David Rosenthal of Stanford also has a great write-up on the presentation: http://blog.dshr.org/2017/06/wac2017-security-issues-for-web-archives.html

IIPC 2017 – Day Three

On day three of IIPC 2017 (day 1, day 2), we heard more about what I see as the two main themes of the conference: archives users and metadata for provenance.

On the user front, I’ll point out Sumitra Duncan’s talk on NYARC Discovery; like WALK, presented yesterday, this project aggregates search across multiple archives, improving access for users. Peter Webster of Webster Research & Consulting and Chris Fryer from the Parliamentary Archives spoke about their study of the archive’s users: the questions of what users want and need, and how they actually use the archive, are fundamental. How we think archives should or could be used may not be as pertinent as we imagine…

On the metadata front, Emily Maemura and Nicholas Worby from the University of Toronto spoke about the ways in which documentation and curatorial process affect users’ experience of and access to archives — the staffing history of a collecting organization, for example, could be an important part of understanding why a web archive contains what it does. Jackie Dooley (OCLC Research), Alexis Antracoli (Princeton University), and Karen Stoll Farrell (Frick Art Reference Library) presented their work on developing web archiving metadata best practices to meet user needs — and it becomes clear that my two main themes could really be seen as one. OCLC Research will issue their reports in July.

I’ll also point out Nicholas Taylor’s excellent talk on the legal use cases for archives, and, of course, LIL’s Anastasia Aizman and Matt Phillips, who gave a super talk on their ongoing work on comparing web archives. Thanks again, and hope to see you all next year!