The study, stewardship, and creation of tools and communities that empower all seekers and disseminators of knowledge to make well-informed decisions about AI technologies.

For the past few years, we have been using the term “Librarianship of AI” to describe an emerging practice and guiding framework for sense-making in this AI moment through the lens of librarianship. As AI models become a new gateway to knowledge, they mark just the latest in a long line of paradigm shifts in how we access information. Libraries have long played a fundamental role in collecting, curating, preserving, and sharing knowledge, but the ways libraries carry out this mission shift sharply with changes in technology. Understanding that technology is never neutral, we must draw upon the skills and principles of librarians as trusted stewards of knowledge.

We define “Librarianship of AI” as the study, stewardship, and creation of tools and communities that empower all seekers and disseminators of knowledge to make well-informed decisions about AI technologies. This includes fostering critical AI literacies, guiding responsible use, and reimagining our knowledge futures by encouraging all to “think like a librarian.” If AI literacy is part of information literacy, then it is essential to consider how library principles can be translated into this new domain, and how libraries can lead in navigating and shaping AI integration and governance. As we explore how AI may change how we engage with knowledge, the principles libraries champion, such as access, transparency, and care, are more relevant than ever.

The inherent variability of generative AI output is a challenge, and metacognition plays a crucial role in both literacy and experimentation. To calibrate trust effectively, individuals must form a grounded mental model of AI’s error boundaries and engage in critical reflection about their own thinking and learning throughout the process. In practice, this means helping users think through questions like:

  • When should I use this generative AI tool?
  • What are the strengths and limitations of this tool?
  • What process did this tool use to generate its search results?
  • How can I effectively calibrate trust in AI output?
  • Who made this tool and how trustworthy are they?
  • What “problem” is this tool trying to solve or not solve?
  • How might AI change how we communicate, learn, and make sense of the world?
  • How might AI impact search, research, and learning processes?

To lower the barrier to entry for librarians, legal researchers, students, and others to understand AI in their particular domains, we are building human-centered AI tools that recenter user agency. We are also developing artifacts that articulate how we apply techniques and outlooks common to the library and literacy worlds. Librarians should collaborate with technologists, domain experts, patrons, and stakeholders in the building, evaluation, and stewardship of AI tools.

Above all, we need to adopt the same mindset libraries had when adapting to the invention of books, databases, and other new forms of knowledge. By assessing the sociotechnical systems that underpin AI through an accurate functional lens, we can find the deeper patterns that help us continue to collect, steward, and share the world’s knowledge.

The projects listed below help different communities understand AI in their particular contexts, and empower them to form their own opinions and frameworks for thinking about AI and about new ways to discover knowledge.