Evidence for Big Deal Decisions: The Importance of Consultation

by Kathleen Reed
Assessment and Data Librarian, Vancouver Island University

As the loonie tanks against the USD, my place of work finds itself in the same situation as many libraries – needing to make cuts to cover the shortfall and/or beg admin for more money. Inevitably, this means talking about Big Deals, “an online aggregation of journals that publishers offer as a one-price, one size fits all package” (Frazier, 2001). Are they worth the cost? And if they’re worth it on a cost-per-title basis, are they still worth it when you factor in how much of our budget they eat up? Are they worth it when the underlying business model is unsustainable and controlled by a handful of major publishers? These questions have been at the forefront of my mind as I run the numbers on Big Deal packages.

If you’re looking for a good introductory article on assessing Big Deals, I recommend “Deal or No Deal? Evaluating Big Deals and Their Journals” by Blecic et al. (or you can read the EBLIP Evidence Summary of the article). However, like much of the literature on evaluating Big Deals, it’s written from a quantitative perspective and places great emphasis on cost-per-use data. Relying so heavily on one metric has always made me uncomfortable. How fortuitous, then, that a recent trip to the 2015 Canadian Library Assessment Workshop (CLAW) included a very interesting presentation on Big Deals – “Unbundling the Big Deal” by Dr. Vincent Larivière and Stéphanie Gagnon at the Université de Montréal, and Arnald Desrochers at the Université du Québec à Montréal. Both institutions had recently undertaken large-scale analyses of their periodicals collections, led by Dr. Larivière.
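
For anyone new to the metric, cost-per-use is simply the price of a title (or package) divided by its JR1 full-text request count – something like the toy calculation below (every price and request count here is made up for illustration):

    # Toy cost-per-use calculation from COUNTER JR1 totals.
    # All prices and request counts are invented for illustration.
    annual_cost = {"Journal A": 1200.00, "Journal B": 450.00, "Journal C": 3100.00}
    jr1_requests = {"Journal A": 980, "Journal B": 12, "Journal C": 2440}

    for title, cost in annual_cost.items():
        uses = jr1_requests.get(title, 0)
        cpu = cost / uses if uses else float("inf")
        print(f"{title}: {uses} uses, ${cpu:,.2f} per full-text request")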

In addition to quantitative analysis of COUNTER JR1 (Number of Successful Full-Text Article Requests) data and citations, a survey was sent to faculty, post-docs, and grad students. It asked each respondent for the 10 journal titles most important to their research and teaching, and the 5 most important to their field of study. At U. de M., 2,213 people responded, and what they said was the stunning part of this presentation: 50% of the journal titles respondents listed as critical to their research, teaching, and disciplines didn’t show up as essential titles in the COUNTER reports and citation analyses. If librarians had simply relied on quantitative data to break up a Big Deal, they would have missed a significant number of titles that faculty, post-docs, and grad students deemed essential!
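
To make that gap concrete, here’s a much-simplified, hypothetical version of the comparison – treat the survey responses and the usage/citation results as two sets of “essential” titles and look at what the quantitative side misses (the titles below are placeholders, not real data from either institution):

    # Compare titles named as essential in a survey with titles that clear a
    # usage/citation threshold. All titles here are hypothetical placeholders.
    survey_essential = {"Journal A", "Journal B", "Journal C", "Journal D"}
    quantitatively_essential = {"Journal A", "Journal C", "Journal E"}  # high JR1 use + citations

    missed = survey_essential - quantitatively_essential
    share = len(missed) / len(survey_essential)
    print(f"{len(missed)} of {len(survey_essential)} survey-essential titles ({share:.0%}) "
          "would be missed by usage and citation data alone:")
    for title in sorted(missed):
        print(" -", title)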

While there’s lots to unpack about why so many journals are deemed essential yet don’t show up above the “cut” threshold in JR1 (i.e., they’re not being heavily used), this one finding should give librarians pause. Much of the research describing how best to make evidence-based choices about Big Deals mentions only off-handedly that faculty should be consulted, but Dr. Larivière’s research has me convinced that this consultation needs to be rigorous, not an afterthought.

The presentation also had me once again appreciating the value qualitative research brings to library assessment. The literature on Big Deals is mainly based on quantitative analysis of usage reports, and Dr. Larivière’s research makes it clear that librarians cannot rely solely on this type of data (especially simplistic cost-per-article data) for a thorough analysis of Big Deals. If we do, we risk misunderstanding the needs of faculty, post-docs, and grad students.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

4 thoughts on “Evidence for Big Deal Decisions: The Importance of Consultation”

  1. (Please excuse the shameless self-promotion, but…)
    Of possible interest as well might be this article: “A Triangulation Method to Dismantling a Disciplinary ‘Big Deal’”: http://istl.org/15-spring/refereed3.html
    I compared three data points (full-text downloads, citation analysis of faculty publications, and user feedback) to “triangulate” to a decision on titles to keep if breaking up a big deal – there’s a rough sketch of the idea at the end of this comment. It’s not practical for a large, multi-disciplinary bundle, but it is helpful for smaller, single-discipline-focused “big deals”.

    I also agree that the input of faculty/users is critical – not just to help with decision-making… but to include them in the process (give them a voice) so that they do not feel blindsided later when/if their favourite journal gets cancelled. And also to raise their awareness about the challenges of the current journal publishing market. They are really the ones with the power to change the system to a more equitable, sustainable one.
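
    For anyone curious what that kind of triangulation can look like, here is a rough, simplified sketch – not the actual method from the article, and every title, count, and cut-off below is invented – where a title is kept when at least two of the three signals clear a threshold:

        # Rough triangulation sketch: keep a title when at least two of the three
        # signals clear a cut-off. Titles, counts, and cut-offs are all invented.
        titles = {
            "Journal A": {"downloads": 950, "faculty_citations": 14, "user_votes": 3},
            "Journal B": {"downloads": 40,  "faculty_citations": 0,  "user_votes": 0},
            "Journal C": {"downloads": 35,  "faculty_citations": 6,  "user_votes": 5},
        }
        cutoffs = {"downloads": 100, "faculty_citations": 2, "user_votes": 1}

        for title, signals in titles.items():
            hits = sum(signals[k] >= cutoffs[k] for k in cutoffs)
            decision = "keep" if hits >= 2 else "candidate for cancellation"
            print(f"{title}: {hits}/3 signals -> {decision}")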

  2. Kathleen – I, too, have been doing a lot of JR1 analysis over the last few months, and have also been concerned about using the data as the only basis for decision-making. Apart from the need to consider our users – and the fact that, in small disciplines, what looks like low use can actually be very high use – we stumbled on so many flaws in the data we were trying to use as evidence that I worried about what we might actually be basing our decisions on. I found this article very interesting reading: Bucknell, Terry (2012). “Garbage In, Gospel Out: Twelve Reasons Why Librarians Should Not Accept Cost-per-Download Figures at Face Value”, The Serials Librarian 63(2). doi: 10.1080/0361526X.2012.680687. I’m really interested in ways to engage our users in this kind of decision-making, so if you come across anything else, please share. Thanks for the post!
