by Kathleen Reed
Assessment and Data Librarian, Vancouver Island University
Over the last few years, data librarians have become increasingly focused on data management planning as major funders and journals insist that researchers have data management plans (DMPs) in place. A DMP is a document that outlines how data will be taken care of throughout its life cycle. Lately I’ve spent a lot of time thinking about how my data service portfolio dovetails nicely with library assessment activities. A lot of discussion in the library evidence-based practice community is about obtaining and analyzing data and stats, with little emphasis on the stewardship of that information. Recently, I gave a talk at an assessment workshop put on by the Council of Post-Secondary Library Directors where I reflected on data management for assessment activities. This post introduces some of the steps in working out a DMP. Please note that while DMPs usually refer to data, not statistics, in my world the ‘D’ stands for both.
Step 1: Take an Inventory
You can’t manage what you don’t know about! Spend some time identifying all your sources of evidence and what format they’re in. I like to group by themes – reference, instruction, e-resources, etc. While you’re doing this, it’s also helpful to reflect on whether the data you’re collecting is meeting your needs. Collecting something that you don’t need? Ditch it. Not getting the evidence you need? Figure out how to collect it. Also think about the format in which the data are coming in. Are you downloading a PDF that’s making your life miserable when what you need is a .csv file? See if that option is available.
Step 2: The ‘Hit by a Bus’ Rule (aka Documentation)
If you’re going to archive assessment data, you need to give the data some context. I like to think of this as the ‘hit by a bus’ rule: if a bus hit me tomorrow, would one of my colleagues be able to step into my job and carry on with minimal problems? What documentation would someone else need to understand, collect, and use your data? Every single year when I’m working on stats for external bodies, I have to pull out my notes to see how I calculated various numbers in previous years. This is documentation that should be stored in a safe yet accessible place.
[Image: Post-its don’t count as ‘documentation.’ Neither does a random sheet of paper in a towering stack on your desk. Photo by Wade Morgan]
Step 3: Storage and Backup
You’ve figured out what evidence and accompanying documentation you need to archive. Now what are you actually going to do with it? For routine data and stats, at my institution we use a shared drive. Within the shared drive I’ve built a simple website that sits on top of all the individual data files; instead of having to scroll through hundreds of file names, users can just click links that are neatly divided by themes, years, vendors, etc. IT backs up the shared drive, as do I on an external hard drive. If your institution has access to a Dataverse hosted on a Canadian server, that’s another good option.
Step 4: Preservation
For key documents, you might consider archiving them in a larger university archive. LibQUAL+ results, documentation, and raw data are currently being archived via my institution’s instance of DSpace.
Step 5: Sharing
I always felt like a hypocrite, imploring researchers to make their data open while I squirreled library data away on closed-access shared drives. Starting with LibQUAL+, this past year I’ve tried to make as much library data open as possible. This wasn’t just a matter of uploading the files to our DSpace; it also involved anonymizing the data to ensure no one was identifiable.
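For tabular data like survey exports, the anonymization step can be sketched in a few lines. This is a minimal illustration only, and the column names (“email”, “respondent_id”, and so on) are hypothetical, not taken from an actual LibQUAL+ export; a real de-identification pass would also need to consider indirect identifiers such as rare department or status combinations.

```python
import hashlib

# Hypothetical columns that directly identify a respondent.
DIRECT_IDENTIFIERS = {"name", "email", "student_number"}

def anonymize(record: dict, salt: str = "change-me") -> dict:
    """Drop direct identifiers and replace the respondent ID with a
    salted one-way hash, so rows can still be linked without names."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "respondent_id" in cleaned:
        digest = hashlib.sha256(
            (salt + str(cleaned["respondent_id"])).encode()
        ).hexdigest()
        cleaned["respondent_id"] = digest[:12]  # short pseudonym
    return cleaned

row = {"respondent_id": "1042", "email": "someone@example.org",
       "department": "Biology", "satisfaction": 7}
print(anonymize(row))
```

The salted hash keeps a stable pseudonym per respondent (useful for linking answers across files) while making it impractical to recover the original ID without the salt.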
If you’re going to share data with your university community and/or the general public, keep in mind that you’ll need to identify this right away when you’re designing your evidence-collection strategies. For example, if you’re doing a survey, participants need to be informed that their responses will be made available. Ethics boards will also want to know if you’re doing research with humans (monkeys too, but if you’ve got monkeys in your library, you’ve got bigger problems than figuring out a DMP…).
Perhaps the most important aspect of sharing is setting context around stats and data that go out into the wild. If you’re going to post information, make sure there’s a story around it to explain what viewers are seeing. For example, there’s a very good reason that most institutions score below expectations in the “Information Control” category on LibQUAL+ – there isn’t a library search tool that’s as good as Google, which is what our users expect. Adding context that explains that the poor scores are part of a wider trend in libraries, and why that trend is happening, will help people understand that your library isn’t necessarily doing a bad job compared to other libraries.
Want more info on data management planning? Here are a few good resources:
What are your thoughts about library assessment data and DMPs? Does your institution have a DMP for assessment data? If so, how does your institution keep assessment data and stats safe? Let’s keep the conversation going in the comments below, or contact me at firstname.lastname@example.org or on Twitter @kathleenreed.
This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.