Purpose & Goals
Can we create an evaluation survey for course-integrated instruction and drop-in workshops that complements the for-credit course evaluation tools used at the campus level? Many librarians and archivists at this institution teach, but most of this instruction is not for course credit. The few who teach for-credit courses benefit from campus-level evaluation through the Course-Instructor Opinion Survey (CIOS) from the Office of Academic Effectiveness, and CIOS results are attached to librarian and archivist promotion dossiers. Our library has piloted a survey for non-credit library instruction. When this project started, changes to our department's promotion process, and its integration with campus-level promotion review, motivated us to demonstrate instructional effectiveness. Like CIOS, the pilot survey is meant to offer meaningful feedback to individual instructors, their supervisors (annual review), the library review committee (promotion and cumulative reviews), library leadership, and our non-tenure-track faculty peers (promotion reviews).
Design & Methodology
The non-credit instruction evaluation survey takes a mixed-methods approach, implemented in Qualtrics. The tool was designed in two parts: 1) the survey itself, consisting of five Likert-scale questions and one open-ended essay response field, and 2) a URL generator that embeds metadata in each survey link, allowing for consistent formatting and easier categorization of responses. In designing this project and creating a methodology, one goal was to develop a consistent vocabulary and approach to evaluating instruction; part of the design process included creating a shared glossary, which helped determine the scope of instructional types. A working group developed the questions and refined the survey's focus, gathered feedback from library faculty colleagues, and launched a pilot program with training.
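As an illustration, here is a minimal sketch of how such a URL generator might work, assuming the standard Qualtrics pattern of capturing query-string parameters as embedded data; the base URL and field names (instructor, session_type, course) are hypothetical, not the project's actual schema.

```python
from urllib.parse import urlencode

# Hypothetical survey link; SV_xxxxxxxx stands in for the real survey ID.
BASE_URL = "https://example.qualtrics.com/jfe/form/SV_xxxxxxxx"

def make_survey_url(instructor: str, session_type: str, course: str = "") -> str:
    """Build a survey link whose query string carries session metadata.

    Qualtrics can capture query-string parameters as embedded data
    (provided matching fields are declared in the survey flow), so each
    response arrives pre-tagged with instructor and session type.
    """
    params = {"instructor": instructor, "session_type": session_type}
    if course:
        params["course"] = course
    return f"{BASE_URL}?{urlencode(params)}"

# Example links for a course-integrated session and a drop-in workshop.
print(make_survey_url("J. Smith", "course-integrated", course="ENGL 1102"))
print(make_survey_url("J. Smith", "drop-in"))
```

Generating links this way keeps the survey instrument itself identical across sessions, while the embedded metadata does the work of categorizing responses.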
Findings
After launching the pilot project in spring 2023, we analyzed the first round of results and gathered feedback from the pilot participants. This initial debrief focused on understanding the experience of administering the survey and surfacing any issues. Following the debrief, we revised the tool itself, editing the questions based on instructor feedback and attendee responses. At the time of this proposal, one year into the pilot, we have data from 12 instructors across 85 classes: more than 1,000 responses and more than 250 open-ended comments. The poster will show average scores, the estimated response rate, a selection of typical comments, and any differences between responses for course-integrated instruction and drop-in library workshops. One finding was that such a short survey does not lend itself to deep or actionable criticism of instructors or courses. This was purposeful: the survey designers sought to maximize student response rates in an environment where students suffer from survey fatigue. In reviewing the literature during survey design, we found that students are not motivated to complete a long survey (Hoel & Dahl, 2019) and that low response rates can indicate lower validity of results (Chapman & Joines, 2017). Moreover, librarians and archivists can use other means to solicit more substantive feedback, such as peer teaching observations or surveying the professors in whose courses we provide instruction.
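To show how the embedded metadata supports these tabulations, the following sketch aggregates a hypothetical response export; the file names and column names (class_id, session_type, q1 through q5, attendees) are assumptions rather than the pilot's actual export schema.

```python
import pandas as pd

# Hypothetical Qualtrics export: one row per response, carrying the
# embedded metadata from the URL generator plus five Likert items (1-5).
df = pd.read_csv("survey_export.csv")  # columns: class_id, session_type, q1..q5

likert_cols = ["q1", "q2", "q3", "q4", "q5"]

# Average score per question, split by instruction type, mirroring the
# course-integrated vs. drop-in comparison shown on the poster.
print(df.groupby("session_type")[likert_cols].mean().round(2))

# Rough response-rate estimate, assuming a separate attendance tally.
attendance = pd.read_csv("attendance.csv")  # columns: class_id, attendees
rate = len(df) / attendance["attendees"].sum()
print(f"Estimated response rate: {rate:.0%}")
```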
Action & Impact
The Georgia Tech Library now has a tool we can use to show the quality, impact, and value of Library non-credit instruction, for both workshops and course-integrated instruction. At the time of this proposal, we have presented the first year's worth of findings to library faculty and leadership, and general results will be included in the Library's 2023 impact report. In the coming year or two, we will be able to test whether the survey results are meaningful additions to annual review or promotion dossiers; this will be evaluated by gathering feedback from the librarians and archivists using the tool, their managers, and the library and Institute faculty review committees. Another area of focus will be whether the comments and scores produce actionable feedback that improves instruction or leads to follow-up processes such as peer or campus-led teaching observations. We hope that the tool and the resulting critiques will improve instruction and, in turn, student success.
Practical Implications & Value
Student success is a nationwide concern, and the extent to which improvements to library instruction contribute to better student outcomes is a question we hope to shed light on. A review of the literature shows ongoing efforts to assess student learning in library instruction sessions, but fewer examples of student evaluations of library teaching. We hope our poster will contribute useful evidence to that body of research and to librarianship. Additionally, there is a trend toward data-driven decision making within libraries, mirroring trends across the academy. Our project offers a potential model for applying student evaluations of teaching to non-credit instruction formats, one that complements existing evaluations of credit-bearing courses.

References

Chapman, D. D., & Joines, J. A. (2017). Strategies for increasing response rates for online end-of-course evaluations. International Journal of Teaching and Learning in Higher Education, 29(1), 47-60.

Hoel, A., & Dahl, T. I. (2019). Why bother? Student motivation to participate in student evaluations of teaching. Assessment & Evaluation in Higher Education, 44(3), 361-378.
View Poster (PDF)