Purpose & Goals
Assessment is indispensable for libraries seeking to create inclusive programs, evaluate policies, and substantiate ongoing initiatives. Despite this importance, however, it is not inevitable that library assessment practices (including methods, instruments, and platforms) are themselves rooted in equity, diversity, and inclusion (EDI). As public and academic libraries move toward EDI, every facet of library work must align in this direction, including assessment work. Without EDI-informed assessment practices, libraries risk perpetuating harm to the very marginalized communities we aim to serve equitably. With this challenge in mind, this paper presents the product of a year-long working group tasked with defining and executing an EDI-informed meta-assessment process: examining the methods, gaps, and limitations of our libraries' current assessment practices and exploring ways to align our current and future practices with our libraries' stated values. While we will touch upon all aspects and results of our work, we will focus primarily on our meta-assessment process, including grounding it in anti-racism and anti-oppression frameworks; determining effective methodologies; providing reproducible survey and analysis materials; and offering next steps. We believe this focus will give readers a clear model of meta-assessment that can be selectively reproduced at their own institutions, enabling practitioners to develop findings and next steps tailored to their libraries.
Design & Methodology
Our team of seven library staff and faculty, representing assessment perspectives from across the library, first tackled the “meta” aspect of meta-assessment: how are we currently approaching assessment, and what gaps emerge specifically around EDI? To answer these questions, we conducted a survey shared across all 10 of our libraries and 31 departments. As part of our survey design, it was important to have a clear, collective definition of EDI-informed assessment, so we conducted a preliminary literature review examining common assessment challenges related to racism, disability, homophobia, gendered oppression, and autonomy and privacy. We chose these specific areas to create sharper avenues for examining EDI. Our survey, which had a 90% response rate, consisted of 27 questions (19 qualitative, 8 quantitative) and addressed the following areas: what departments assess, how (e.g., tools, platforms, workflows), and why; what principles around power, anti-racism, or privacy they consider, and how; and how ethical assessment is operationalized. We then created shared tracking tools to divide the work and monitor our outreach. To analyze the data, we divided the dataset by question in order to better track themes across departments, assigning each set of one to two questions to a primary analyzer and a secondary reviewer so we could check whether group members interpreted the data differently. Because several members shared the work and the volume of data varied by question, we decided that manual data analysis worked best. Informed by traditional qualitative data analysis methodologies, we created an analysis worksheet that walked group members through a process of coding and thematic analysis, thereby doubling as an instructional artifact. We compiled the completed worksheets to identify salient themes, predominantly around gaps in EDI-informed assessment practices.
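The primary/secondary assignment step described above can be illustrated with a minimal sketch in Python. This is hypothetical (the working group used shared tracking documents, not code, and the function and member labels below are invented for illustration); it simply pairs each survey question with a primary analyzer and a distinct secondary reviewer in round-robin fashion:

```python
def assign_questions(questions, analysts):
    """Give each question a primary analyzer and a distinct
    secondary reviewer, cycling through the team round-robin."""
    if len(analysts) < 2:
        raise ValueError("need at least two analysts for independent review")
    assignments = {}
    for i, question in enumerate(questions):
        primary = analysts[i % len(analysts)]
        # the next person in the rotation independently reviews the coding
        secondary = analysts[(i + 1) % len(analysts)]
        assignments[question] = (primary, secondary)
    return assignments

# Example: 27 survey questions spread across a seven-person team
# (letters stand in for hypothetical group members).
questions = [f"Q{n}" for n in range(1, 28)]
team = ["A", "B", "C", "D", "E", "F", "G"]
plan = assign_questions(questions, team)
```

The design point is simply that every question is read by two different people, so divergent interpretations surface before themes are compiled.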
Findings
We found that our libraries use both quantitative assessments (reference, instruction, circulation, and collections data) and qualitative assessments (instructional feedback, user experience data, employee satisfaction surveys). The survey also revealed that our library departments wanted to create EDI-informed assessments but did not know how to start that work beyond digital accessibility. A few departments considered positionality and tokenization in demographic data collection and noted interest in learning how to reduce harm to marginalized communities. The majority of responses mentioned privacy and the accessibility of physical and digital assessment forms as their primary EDI considerations. More than half of respondents were not relying on any resources, internal or external, to guide them through assessments; those who did use resource guides tended to use them for very specific types of assessment (e.g., applying W3C standards to assess web accessibility). Despite these uncertainties, we discovered that the majority of departments, regardless of the type of assessment, consider the who, what, where, when, why, and how of a project, along with its cost, whom it may affect, its relevance, short- and long-term considerations, safety, accessibility, and impact, before conducting it. Our libraries were also concerned with how assessment impacts students, departmental relations, and faculty-liaison relationships; overwhelmingly, however, departments placed students at the center of their assessment work.
Synthesizing these responses, we found that our libraries generally wanted to center EDI in assessment through: ethical representation (the inclusion of marginalized or oppressed student communities in assessment projects); user autonomy and safety (patron consent and the anonymization of patron information); the ethics of data retention, storage, and reuse (how patron information is maintained and used so as to protect patrons' safety and autonomy); and data presentation (how data is synthesized and presented, and whether it authentically reflects what was collected).
Action & Impact
Our post-assessment action plan consists of two phases: a comprehensive literature review and a pilot program to act on the review’s recommendations. The literature review provided an initial foundation of knowledge in response to the confusion and lack of direction that our colleagues, our group included, felt around EDI-informed assessment. We structured the review into three broad categories: assessment design; execution and analysis; and data retention and sharing. Within each section, we addressed the major themes from our findings, offering clear and approachable recommendations around questions such as: how to identify researcher positionality, how to ethically recruit marginalized participants, what informed consent looks like in assessment, why we must have data destruction policies, and more. In addition to our recommendations, we found an anti-racist assessment checklist created by the educational non-profit WestEd, which aligned closely with our findings and broke many of our recommendations into concrete, discrete items. Our group’s final report to senior leadership included the survey and literature review findings and was approved in January 2024. In the coming months, we will reconvene to detail and implement the following action plan. In keeping with our findings around data transparency, we will host a series of workshops to share the report; these workshops will also serve to gather interest in our pilot program. The pilot will ask three to four assessment practitioners to choose a past, existing, or new assessment project and use our recommendations to determine and implement adaptations to their practice. We will evaluate the pilot’s efficacy through listening sessions with participants, assessing: the ease of digesting our group’s recommendations; the need for additional research; individual practitioners’ capacity to implement our findings; and any remaining confusion or gaps in navigating our report.
We hope that, after revision, the pilot can serve as a model for departments to reproduce for every assessment project they take on.
Practical Implications & Value
Through this paper presentation, we aim to share a clear and comprehensive meta-assessment process that audiences can amend and reproduce at their own libraries, whatever their size. Readers will take away a blueprint that includes resources to: design and implement meta-assessment surveys; work with colleagues using template tools to divide labor effectively and collectively analyze large qualitative and quantitative datasets; and determine impactful avenues tailored to their institution’s assessment gaps and needs. We believe that our meta-assessment practice, with its particular lens of anti-racism, equity, and inclusion, is a vital reflexive process in any library’s assessment strategy. Above all, this process ensures that libraries pause to evaluate whether their assessment work is materially aligned with their stated values. With our paper, we hope those looking to engage with this process can do so without creating workflows, assessment instruments, and other materials from scratch, and can instead focus on the questions they would like to engage their peers in.
Alexandra Provo, NYU
Evonn Stapleton, NYU
Lia Warner, NYU
Nicholas Wolf, NYU
Rachel Mahre, NYU