Gathering Evidence by Asking Library Users about Memorable Experiences

by Kathleen Reed
Assessment and Data Librarian, Vancouver Island University

For this week’s blog, I thought I’d share a specific question to ask library users, one that has proven highly useful but that I haven’t seen used much in library assessment:

“Tell me about a memorable time in the library.”

Working with colleagues Cameron Hoffman-McGaw and Meg Ecclestone, I first used this question during the in-person interview phase of an ongoing study on information literacy (IL) practices in academic library spaces. In response, participants gave detailed accounts of studying with friends, moments that increased or decreased their stress levels, and insight into the 24/7 Learning Commons environment, a world that librarians at my place of work see very infrequently, as the library proper is closed after 10pm. The main theme of the answers was the importance of supportive social networks that form and are maintained in the library.

The question was so successful in the qualitative phase of our IL study that I was curious how it might translate to another project: an upcoming major library survey to be sent to all campus library users in March 2016. Here’s the text of the survey question we used:

“Tell us about a memorable time in the library. It might be something that you were involved in, or that you witnessed. It might be a positive or negative experience.”

It wasn’t a required question; people were free to skip it. But 47% (404/851) of survey takers answered it, and the answers ranged in length from a sentence to several paragraphs. While analysis of the data generated by this question isn’t complete, some obvious themes jump out. Library users wrote about how library services and spaces can both ease and cause anxiety and stress, the importance of social connections and the accompanying support received in our spaces, the role of the physical environment, and the value placed on the library as a space where diverse people can be encountered, among many other topics.

To what end are we using data from this question? First, we’re doing the usual analysis – looking at the negative experiences and emotions users expressed and evaluating whether changes need to be made, policies created, etc. Second, the question helped surface some of the intangible benefits of the library, which we hadn’t spent a lot of time considering (emotional support networks, the library’s importance as a central place on campus where diverse groups interact). Now librarians are able to articulate a wider range of these benefits – backed up with evidence in the form of answers to the “memorable time” question – which helps when advocating for the library on campus, and connecting to key points in our Academic Plan document.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Assessment and evidence based library and information practice

by Lorie Kloda
Assessment Librarian, McGill University

I have held the position of Assessment Librarian for almost three years, and I have been involved in the evidence-based library and information practice (EBLIP) movement for over a decade. Since taking on this position, I have been trying to make sense of EBLIP in my job: trying to understand how these two concepts complement each other, overlap, or even contradict one another.

In a 2006 article, “EBL and Library Assessment: Two Solitudes?”, Pam Ryan, then the Assessment Librarian at the University of Alberta, asked of assessment and EBLIP: “Are these separate movements within librarianship forming theoretical bridges? Is some sort of merger, fusion, or takeover in the future?” Almost 10 years later, I think this question remains unanswered. Part of the answer, I believe, lies in the way assessment and EBLIP relate to one another, not just on a theoretical level but on a practical one.

In my work, I see assessment as having two (not mutually exclusive) goals: one, to inform decision-making for quality improvement to anticipate and meet users’ needs, and two, to demonstrate impact or value. There are, however, some occasions (OK, there are a lot of occasions) when one cannot conduct assessment. Hurdles to assessment include a lack of time, data, resources, experience, and skills. In cases where one cannot conduct assessment, whatever the reason, one can make use of evidence (credible, transferable findings from published research) to inform decision-making.

One of the roles of an assessment librarian, or really, any librarian working in assessment and evaluation, is to foster a culture of assessment in the organization in which they work. According to Lakos and Phipps,

“A culture of assessment is an organizational environment in which decisions are based on facts, research, and analysis, and where services are planned and delivered in ways that maximize positive outcomes for customers and stakeholders.”

I understand the above quote to mean that librarians need research, analysis of local data, and facts in order to plan and make decisions to best serve library users. A culture of assessment, then, is also one that is evidence-based. I find this idea encouraging and I plan to spend some time thinking more about how the steps in EBLIP and assessment overlap. While I think the realm of library and information practice is still far from a takeover or merger when it comes to assessment and EBLIP, I think the two will continue to mingle and hopefully foster a culture which leads to increasingly improved services.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Library Assessment Data Management ≠ Post-It Notes

by Kathleen Reed
Assessment and Data Librarian, Vancouver Island University

Over the last few years, data librarians have become increasingly focused on data management planning as major funders and journals insist that researchers have data management plans (DMPs) in place. A DMP is a document that outlines how data will be taken care of during its life cycle. Lately I’ve spent a lot of time thinking about how my data service portfolio dovetails nicely with library assessment activities. A lot of discussion in the library evidence-based practice community is about obtaining and analyzing data and stats, with little emphasis on the stewardship of that information. Recently, I gave a talk at an assessment workshop put on by the Council of Post-Secondary Library Directors where I reflected on data management for assessment activities. This post introduces some of the steps in working out a DMP. Please note that while DMPs usually refer to data, not statistics, in my world the ‘D’ stands for both.

Step 1: Take an Inventory
You can’t manage what you don’t know about! Spend some time identifying all your sources of evidence and what format they’re in. I like to group by theme: reference, instruction, e-resources, and so on. While you’re doing this, it’s also helpful to reflect on whether the data you’re collecting are meeting your needs. Collecting something that you don’t need? Ditch it. Not getting the evidence you need? Figure out how to collect it. Also think about the format in which the data are coming in. Are you downloading a PDF that’s making your life miserable when what you need is a .csv file? See if that option is available.
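
To make this concrete, here is a minimal sketch in Python of what an inventory pass might look like, assuming the evidence sits in theme folders (reference, instruction, e-resources, and so on) on a shared drive. The root path and folder layout are hypothetical, not a prescription.

    from pathlib import Path
    from collections import Counter

    root = Path("S:/library-assessment")   # hypothetical shared-drive location

    for theme_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        # Tally the files under each theme folder by extension (.csv, .pdf, .xlsx, ...)
        counts = Counter(f.suffix.lower() or "(no extension)"
                         for f in theme_dir.rglob("*") if f.is_file())
        print(theme_dir.name)
        for ext, n in counts.most_common():
            print(f"  {ext}: {n} file(s)")

Even a quick listing like this makes it obvious which evidence is arriving in awkward formats and which folders have gone quiet.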

Step 2: The ‘Hit by the Bus’ Rule (aka Documentation)
If you’re going to archive assessment data, you need to give the data some context. I like to think of this as the ‘hit by a bus’ rule: if a bus hits me tomorrow, will one of my colleagues be able to step into my job and carry on with minimal problems? What documentation would someone else need to understand, collect, and use your data? Every single year when I’m working on stats for external bodies, I have to pull out my notes to see how I calculated various numbers in previous years. This is documentation that should be stored in a safe yet accessible place.
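
One low-effort way to keep that context next to the data it describes is a short README in each theme folder. The sketch below writes a fill-in-the-blanks template wherever one doesn’t already exist; the field names and the shared-drive path are suggestions only, not a standard.

    from pathlib import Path
    from datetime import date

    TEMPLATE = """Dataset: {name}
    Maintained by / contact:
    Last updated: {today}
    Source (vendor report, survey tool, gate counter, ...):
    Collection schedule (monthly, annually, ...):
    How derived figures were calculated:
    Known quirks or gaps:
    """

    root = Path("S:/library-assessment")   # hypothetical shared-drive location
    for theme_dir in (p for p in root.iterdir() if p.is_dir()):
        readme = theme_dir / "README.txt"
        if not readme.exists():   # never overwrite documentation that already exists
            readme.write_text(TEMPLATE.format(name=theme_dir.name, today=date.today()))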

Post-its don’t count as ‘documentation.’ Neither does a random sheet of paper in a towering stack on your desk. Photo by Wade Morgan

Step 3: Storage and Backup
You’ve figured out what evidence and accompanying documentation you need to archive. Now what are you actually going to do with it? For routine data and stats, we use a shared drive at my institution. Within the shared drive I’ve built a simple website that sits on top of all the individual data files; instead of having to scroll through hundreds of file names, users can just click links that are nicely divided by themes, years, vendors, etc. IT backs up the shared drive, as do I on an external hard drive. If your institution has access to Dataverse hosted on a Canadian server, that is also a good option.
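
The ‘simple website on top of the data files’ can be as modest as a single generated index page. Here is a rough sketch of that idea; the path is hypothetical, and a real version might also group links by year or vendor.

    from pathlib import Path
    import html

    root = Path("S:/library-assessment")   # hypothetical shared-drive location
    lines = ["<html><body><h1>Library assessment data</h1>"]
    for theme_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        lines.append(f"<h2>{html.escape(theme_dir.name)}</h2><ul>")
        for f in sorted(theme_dir.rglob("*")):
            if f.is_file():
                rel = f.relative_to(root).as_posix()   # link relative to the drive root
                lines.append(f'<li><a href="{rel}">{html.escape(f.name)}</a></li>')
        lines.append("</ul>")
    lines.append("</body></html>")
    (root / "index.html").write_text("\n".join(lines), encoding="utf-8")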

Step 4: Preservation
For key documents, you might consider archiving them in a larger university archive. LibQUAL+ results, documentation, and raw data are currently being archived via my institution’s instance of DSpace.

Step 5: Sharing
I always felt like a hypocrite, imploring researchers to make their data open while I squirreled library data away in closed-access shared drives. Starting with LibQUAL+, this past year I’ve tried to make as much library data open as possible. This wasn’t just a matter of uploading the files to our DSpace; it also involved anonymizing the data to ensure no one was identifiable.
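
As a rough illustration of that anonymization step, the sketch below drops directly identifying columns from a survey export before the file is shared. The file names and column names are hypothetical, and real exports need a closer look for indirect identifiers as well.

    import csv

    IDENTIFYING = {"name", "email", "student_id", "ip_address"}   # columns to drop

    with open("survey_raw.csv", newline="", encoding="utf-8") as src, \
         open("survey_public.csv", "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        keep = [c for c in reader.fieldnames if c.lower() not in IDENTIFYING]
        writer = csv.DictWriter(dst, fieldnames=keep)
        writer.writeheader()
        for row in reader:
            writer.writerow({c: row[c] for c in keep})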

If you’re going to share data with your university community and/or the general public, keep in mind that you’ll need to identify this right away when you’re designing your evidence-collection strategies. For example, if you’re doing a survey, participants need to be informed that their responses will be made available. Ethics boards will also want to know if you’re doing research with humans (monkeys too, but if you’ve got monkeys in your library, you’ve got bigger problems than figuring out a DMP…).

Perhaps the most important aspect of sharing is setting context around stats and data that go out into the wild. If you’re going to post information, make sure there’s a story around it to explain what viewers are seeing. For example, there’s a very good reason that most institutions score below expectations in the “Information Control” category on LibQUAL+: there isn’t a library search tool that’s as good as Google, which is what our users expect. Adding context that explains that poor scores are part of a wider trend in libraries, and why that trend is happening, will help people understand that your library isn’t necessarily doing a bad job compared to other libraries.

Want more info on data management planning? Here are a few good resources:

DMP Builder by the California Digital Library

VIU’s Data Management Guide

Research Data Management @ UBC

What are your thoughts about library assessment data and DMPs? Does your institution have a DMP for assessment data? If so, how does your institution keep assessment data and stats safe? Let’s keep the conversation going in the comments below, or contact me at kathleen.reed@viu.ca or on Twitter @kathleenreed.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Are Students Succeeding with a Library Credit Course? C-EBLIP Journal Club, October 6, 2014

by Rachel Sarjeant-Jenkins
Client Services, University Library, University of Saskatchewan

I recently had the opportunity to lead our C-EBLIP Journal Club in a discussion of Jean Marie Cook’s article “A library credit course and student success rates: A longitudinal study” in College & Research Libraries 75, no. 3 (2014) (available at http://crl.acrl.org/content/75/3/272.full.pdf+html). This article had been sitting on my desk for a few months waiting for that magical moment when I found the time to read it thoroughly. Then came my turn to host journal club. What a perfect opportunity to finally delve into Cook’s article! And it couldn’t have come at a better time in light of our library’s focus on developing a programmatic approach to library instruction and the broader teaching and learning environment in which academic libraries currently find themselves.

Following some ‘proper’ journal club discussion about the article’s methodology and findings, Cook’s article proved a wonderful catalyst for a conversation about library instruction at our institution. Initially we were simply envious of Cook’s situation, where a library-focused course falls within her institution’s priorities. But then the questions started.

• Is there value in having a stand-alone library course or is it better to have instruction firmly embedded or integrated into academic program courses? (Of course, this question did not mean we ever stopped desiring that institutional commitment to information literacy — who would!?)
• How do you assess student learning? And, more importantly, how do you gauge the actual ongoing use of that learning by students?

We also talked about library value. The impetus for Cook’s work was institutional interest in ROI; the result was her quantitative research project.

• How, we asked, can qualitative data be used to support (and enhance) quantitative data when demonstrating library value to the parent institution?

So many questions, and only a lunchtime to discuss.

Not surprisingly, our hour just wasn’t enough. What that hour did do, however, was get us thinking. We talked about the known information literacy courses on campus and learned about pockets of embedded instruction by our librarians that we had been completely unaware of. We had a lively debate about quantitative and qualitative research and the benefits of each. And of course we talked about assessment: not only the need to do more of it and to do it more consistently, but also the importance of knowing what we are trying to assess and, therefore, when we want to assess it.

Our journal club hour got me excited and primed for the next steps in developing our library’s programmatic approach to instruction. Cook’s article, and the energetic conversation it inspired, was an excellent beginning.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.