The Problem with the Present: C-EBLIP Journal Club, June 21, 2016

by Stevie Horn
University Archives and Special Collections, University of Saskatchewan

Article: Dupont, Christian & Elizabeth Yakel. "What's So Special about Special Collections? Or, Assessing the Value Special Collections Bring to Academic Libraries." Evidence Based Library and Information Practice [Online], 8.2 (2013): 9-21. https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/19615/15221

I was pleased to have the opportunity to lead the last C-EBLIP Journal Club session of the season. I chose an article which looked at the difficulties of employing performance measures to assess the value of a special collection or archives to the academic library. The article has some failings: it is written largely from a business perspective, and it uses the terms "special collections" and "archives" interchangeably in a way that becomes problematic if you consider the archives' responsibility as a repository for institutional records (which go through many different phases of use). However, it did serve as a useful springboard for our talk.

What interested me was that those present immediately latched on to the problem of "What about preservation value?" when considering the article's model of measuring performance. The article proposes that the best way to measure a special collection/archives' "return on investment" is not simply by counting the number of times an item is used (a collection-based method), but rather by reporting the number of hours a user spends working with an item, and what the learning outcomes of that use are determined to be (a user-based method) (Dupont and Yakel, 11).

In some ways, a user-centric approach to measuring performance in archives and special collections makes good sense. A single researcher may spend five weeks exploring fifteen boxes, or taking a close look at a single manuscript, so recording the user hours spent may prove a more accurate measure of use. Reinforcing this, there are a number of difficulties in applying collection-based metrics to manuscript collections. Individual documents studied within an archival collection are almost impossible to track: generally a file is treated as an "item", and the number of files in a box might be averaged. The article points out, accurately, that this imprecision renders collection-based tabulation of archival documents, images, and ephemera virtually "meaningless" (Dupont and Yakel, 14).
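To make the contrast concrete, here is a minimal sketch, in Python, of how the two tallies diverge. The reading-room log, the data model, and all the names are invented for illustration; the article prescribes none of this.

```python
# A toy reading-room log: (user, item, hours spent). Entirely hypothetical.
from collections import Counter

visits = [
    ("researcher_a", "fonds_12/box_3", 6.5),
    ("researcher_a", "fonds_12/box_3", 7.0),  # same box, second day
    ("researcher_b", "ms_coll_4/item_1", 2.0),
]

# Collection-based metric: how many times was each item retrieved?
retrievals = Counter(item for _, item, _ in visits)

# User-based metric: how many user hours went into each item?
user_hours = Counter()
for _, item, hours in visits:
    user_hours[item] += hours

print(retrievals)  # Counter({'fonds_12/box_3': 2, 'ms_coll_4/item_1': 1})
print(user_hours)  # Counter({'fonds_12/box_3': 13.5, 'ms_coll_4/item_1': 2.0})
```

The same two retrievals that look negligible under the collection-based count represent 13.5 hours of engagement under the user-based one, which is precisely the distinction Dupont and Yakel are after.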

However, if the end goal is determining “return on investment”, user-centric data also leaves out a large piece of the picture. This piece is the previously mentioned “preservation value”, or the innate value in safeguarding unique historical documents. Both collection-based and user-based metrics record current usage in order to determine the value of a collection at the present time. This in-the-present approach becomes problematic when applied to a special collections or archives, however, for the simple reason that these bodies not only preserve the past for study in the present, but also for study in the distant future.

To pull apart this problem of using present-based metrics to measure the worth of a future-purposed unit of the academic library, consider the recent surge in scholarship surrounding Aboriginal histories. As Truth and Reconciliation surfaces in the public consciousness, materials which may have been ignored for decades within archival/special collections are now in high demand. Questions of this nature accounted for approximately forty percent of our usage in the last month alone. Had collection-centric or user-centric metrics been applied during those decades of non-use, these materials would have appeared to be of little worth, and the special collections/archives' "return on investment" might also have been brought into question. The persistence of archives and special collections in preserving unique historic materials regardless of patterns of use means that these materials can play a role in changing perspectives and changing lives nationwide.

If, as Albie Sachs says in his 2006 article "Archives, Truth, and Reconciliation", archives and special collections preserve history "for the unborn . . . not, as we used to think, to guard certainty [but] to protect uncertainty because who knows how the future might use those documents" (Sachs, 14), might not the employment of only present-centric metrics do more damage than good? And, if the value of an archives or special collections cannot be judged solely in the present, but must take an unknown and unknowable future into account, perhaps the formulation of a truly comprehensive measure of "return on investment" in this field is impossible.

Sources:
Dupont, Christian & Elizabeth Yakel. "What's So Special about Special Collections? Or, Assessing the Value Special Collections Bring to Academic Libraries." Evidence Based Library and Information Practice [Online], 8.2 (2013): 9-21. https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/19615/15221

Sachs, Albie. "Archives, Truth, and Reconciliation". Archivaria, 62 (2006): 1-14.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

The Many Benefits of OA and Open Peer Review: C-EBLIP Journal Club, May 10, 2016

by DeDe Dawson
Science Library, University of Saskatchewan

Article:
Tennant JP, Waldner F, Jacques DC et al. The academic, economic and societal impacts of Open Access: an evidence-based review [version 1; referees: 4 approved, 1 approved with reservations]. F1000Research 2016, 5:632 (doi: 10.12688/f1000research.8460.1)

This is arguably the perfect journal club article! A juicy topic with a few points of contention, a journal platform with many innovative features, and open post-publication peer review. Lots and lots to discuss, and indeed we ran out of time. Here I try to summarize our conversation:

I always gravitate towards review articles and highly recommend them to students. They are often the most efficient way to get up to speed on all the relevant literature in a complex area. Open Access (OA) is just such a complex area, with multiple overlapping layers of issues all progressing so rapidly that it is the ideal topic for a review. This is ironic, because OA itself is such a simple concept. The complexity comes from the challenges of implementation and the multiple stakeholders (and vested interests) involved.

This review article summarizes the main evidence to date on the impact of OA from three perspectives: academic, economic, and societal. These are essentially three lines of reasoning in support of OA. We thought that the strongest, most well-developed argument in favour of OA in this article was the academic one. It certainly had the most citations behind it, thanks to the highly productive research area documenting the OA citation effect. We also thought that the authors may have focused on the academic perspective because it was the most likely to persuade researchers reading the review.

So, was the point of the article to persuade researchers to support OA? The authors have an obvious bias as proponents of OA. But how important is it for authors to be neutral? We thought it was unrealistic to expect authors not to have a bias, and for most kinds of papers authors indeed argue a particular point. But should review papers be different? It was suggested that if authors are clear and upfront about their objectives and competing interests this shouldn’t be a problem.

This brought us to the question of what evidence against OA there might be anyway. (We expose our own pro-OA bias here!) One of the online commenters on the article challenged the authors to provide a more balanced review, but he could not provide the authors with links to literature supporting these other, anti-OA perspectives. Some of the obvious counter-arguments were already dealt with in the article, such as the rise of deceptive ("predatory") publishing and the challenges of paying article processing charges (APCs) for authors without funding or those from underdeveloped countries. Otherwise, it is pretty hard to argue against OA unless you are a commercial publisher (or shareholder) with a financial interest in sustaining the current system. The commenter argued that jobs will be lost in the transition. But this is a weak point. Are we to prop up an entire dysfunctional, and inequitable, system for the sake of some jobs? Besides, these jobs will likely morph into other, more relevant and useful functions. What seemed to emerge from this back-and-forth was that "sustainable" means something completely different to commercial publishers (and their allies) than it does to OA proponents! Publishers are from Mars; OA proponents are from Venus.

Beyond the article itself we had a lot to say about the platform and the open peer review model. The article is essentially still in its pre-print version. It was posted on the F1000Research site before peer review. It was a fascinating process to see the reviewers’ reports as they were submitted, and to watch as others commented on the article and the authors responded. It gave the impression of a proper scholarly conversation taking place. This is ideally what journals should be facilitating. Technology allows this now – so why are so many journals still clinging to outdated formats from the print era?

The "open" nature of the reviews and comments also ensured an appropriate level of civility. Who has not received rude and unproductive comments from a reviewer who feels protected by their anonymity? (There is an entire Tumblr site devoted to such remarks!) However, if reviewers are obliged to reveal themselves, not just to the authors but to the whole of the readership, then they are more likely to behave diplomatically and to provide constructive and substantiated critiques. This also works in the reviewer's favour: readers (and evaluators) can plainly see the amount of work and time the reviewer has invested. If the reviewer has spent considerable time providing a thoughtful review, they can justifiably link to it on their CV, and collegial committees can see for themselves the energy the reviewer expended.

We also spoke of how we might use this kind of journal format in information literacy instruction with students. This would more clearly make the point that scholarship is a conversation, and that there are multiple points of view. It would demystify the peer review process too: we can see the issues raised by the reviewers and can follow the paper into its next version, seeing how the authors might address these concerns. This process is usually completely hidden from the average reader, so it is difficult for a student to imagine a paper other than in its final version.

These various versions of papers do present challenges for the reader when it comes to citing, though! It seems that all the versions remain on the site and each has its own DOI, but the added complexity in citing remains. This is a relatively minor issue, though, compared to the benefits of an open scholarly conversation that such a model of peer review allows.
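As far as I can tell from the DOI cited above (10.12688/f1000research.8460.1), F1000Research carries the version number in the final segment of the DOI itself, so each revision resolves separately. Here is a tiny sketch of that reading; the parsing rule is my own inference from that one example, not documented policy:

```python
# Split an F1000Research-style versioned DOI into article and version parts.
# The trailing-integer convention is inferred from the DOI cited above.
def split_versioned_doi(doi: str) -> tuple[str, int]:
    base, _, version = doi.rpartition(".")
    return base, int(version)

base, version = split_versioned_doi("10.12688/f1000research.8460.1")
print(base, version)  # 10.12688/f1000research.8460 1
```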

We look forward to seeing the next version of this article and continuing the conversation on the benefits of OA!

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Carrots & Sticks: Encouraging self-archiving in an IR. C-EBLIP Journal Club, Mar 31, 2016

by Shannon Lucky
IT Librarian
University Library, University of Saskatchewan

Article: Betz, S., & Hall, R. (2015). Self-Archiving with Ease in an Institutional Repository: Microinteractions and the User Experience. Information Technology and Libraries, 34(3), 43–58. http://doi.org/10.6017/ital.v34i3.5900

One of the things I love about the C-EBLIP journal club is the ease of having one of my colleagues pick out an interesting article from their area of specialization so I can poke my head into their world for an hour and see what ideas they are wrestling with. As an IT librarian, picking an article creates some anxiety because systems and technology aren’t always that accessible (or interesting) for a diverse audience. I was happy to see Sonya Betz and Robyn Hall’s article pop up on a library tech listserv as it was a great fit for our group.

The University Library currently doesn't have an institutional repository (IR) for the entire campus, but we do have a DSpace eCommons repository for research by UofS librarians. Because we have all deposited our own work into eCommons, our conversation started with a unanimous (good-natured) rant about how hard self-archiving is. It is time-consuming, and the technology was deemed frustrating and unsatisfying. As with other tedious institutional reporting systems, we assumed this was the only way. As one member put it, "I didn't know we could expect better".

While we talked about how frustrating the process could be, we also wondered just how much effort, time, and money should be invested in improving a system that we all have to use but that our library users will never see. When do we make the call that something is good enough and we, or our fellow faculty, can suck it up and figure it out or ask for help? One of my favourite suggestions was that a "good enough" scenario would have the user feeling "the absence of anger". Apparently the bar is quite low. Betz and Hall talk about some of the barriers to self-archiving but don't ask why, when contributing to IRs is so difficult, many academics voluntarily submit their work to sites like academia.edu and ResearchGate. What is it they are doing right that we could learn from?

This led to a discussion about what libraries could do to encourage faculty, both within and outside the library, to deposit in an IR. We saw two routes: the carrot and the stick.

Carrots:
• Link academic reporting systems together to cut down on the number of places this information needs to be input (e.g. have citations from the IR export to formatted CVs, link ORCID accounts with IR entries for authority control and better exposure, etc.; see the sketch after this list)
• Group scholarly output for colleges, departments, or research groups together in the IR to show the collective impact of their work
• Gamify the submission process with progress bars, badges, and the ability to level up your scholarly work
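On the ORCID carrot specifically, here is a minimal sketch of what such linking might look like, using ORCID's public works endpoint. The title-matching glue and every name below are my own illustration, not anything proposed by Betz and Hall:

```python
# A sketch of the ORCID-to-IR linking idea. fetch_orcid_work_titles() calls
# ORCID's real public API; the matching strategy is invented for illustration.
import requests

def fetch_orcid_work_titles(orcid_id: str) -> set[str]:
    """Return the lowercased titles of a researcher's public ORCID works."""
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    titles = set()
    for group in resp.json().get("group", []):
        for summary in group.get("work-summary", []):
            title = ((summary.get("title") or {}).get("title") or {}).get("value")
            if title:
                titles.add(title.strip().lower())
    return titles

# Hypothetical IR-side check: which deposited titles are missing from the
# depositor's ORCID profile?
def missing_from_orcid(ir_titles: set[str], orcid_id: str) -> set[str]:
    return {t.lower() for t in ir_titles} - fetch_orcid_work_titles(orcid_id)
```

Matching on titles is crude (a real integration would key on DOIs where available), but even this much glue would spare depositors one round of re-keying metadata.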

Sticks:
• Money. Canada Council requires submission to an IR as a part of their funding model
• Librarians armed with actual sticks going office to office “persuading” scholars to deposit their research

We agreed that libraries don’t wield an effective stick in this scenario. Research services, colleges, and departments have to be the ones to put on the pressure to deposit. Librarians can help make that happen and (hopefully) make it as pain-free as possible.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Describing a phenomenon through experience: C-EBLIP Journal Club, February 16, 2016

by Carolyn Pytlyk
Research Facilitator
University Library, University of Saskatchewan

Article:
Forster, M. (2015). Phenomenography: A methodology for information literacy research. Journal of Librarianship and Information Science, 1-10. doi:10.1177/0961000614566481

Way back in October at the C-EBLIP Fall Symposium, Margy MacMillan from Mount Royal University talked about phenomenography as a new methodology for conducting research on information literacy. Phenomenography is "the empirical study of the limited number of qualitatively different ways in which various phenomena in, and aspects of, the world around us are experienced" (Marton quoted in Forster, p. 1). Margy's enthusiasm and excitement for phenomenography certainly piqued my interest. In my conversations with library researchers, research methodology is often the topic of discussion when planning research projects, applying for grants, or developing research budgets, and it can sometimes be a stumbling block for researchers. As such, when it came to my turn to convene Journal Club, I thought Forster's review article might be a good opportunity to explore phenomenography as a viable methodology for library researchers.

The majority of our conversation revolved around whether or not phenomenography is a useful new methodology for conducting library research. For the most part, we agreed that, from the perspective of the review article, it seemed a rather complex and involved methodology. However, in the end, we couldn't really tell without actually following a researcher through the process. This review article was a fairly good introduction to and overview of phenomenography, but we agreed that to really understand its complexity we would need to read research employing phenomenography as a methodology, to see how it works and whether it is really as complex as it seems at the outset.

While presenting an intriguing and possible methodological alternative, this article left us with many more questions than answers. Some questions stemming from this review article include:
1. Is this a useful methodology? Would library researchers use it?
2. Is it a methodology about how we think?
3. How do researchers unobtrusively interview people without priming the participants? Is it even possible?
4. Is it a complex methodology, or does it just seem like it?
5. What are the steps involved? How does someone actually do it?
6. Could it be appropriate for library research other than information literacy (like usability or librarians as researchers)?
7. What other methodologies are out there in other disciplines that are possible for library research?
8. What sorts of learning/training would researchers need before undertaking phenomenography?
9. Do researchers have to be experienced interviewers to use it?

Still, despite the numerous unanswered questions, we were not deterred; we all agreed that we are keen to learn more about phenomenography and its process.

Finally, we rounded out our conversation with the value of review articles, although not all of us are keen on them. (Don't worry; I won't name names.) Forster's article not only opened our eyes to phenomenography as a new methodology; it also opened our eyes, as both consumers and producers of knowledge, to the value of review articles in providing overviews of new methodologies.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Useful, Usable, Desirable: C-EBLIP Journal Club, January 7, 2016

by Jaclyn McLean
Collection Services, University Library
University of Saskatchewan

At our first journal club meeting of 2016, I chose an article from Weave, a new peer-reviewed, OA, web-based journal, to start a discussion about usability principles and teams in academic libraries:

Godfrey, K. (2015). Creating a culture of usability. Weave: Journal of Library User Experience, 1(3). http://dx.doi.org/10.3998/weave.12535642.0001.301

I've been reading a lot in the areas of usability and user experience (UX), especially in libraries, as I build a foundation of knowledge for my program of research. This article seemed like an interesting introduction for those less familiar with usability principles, and the idea of a culture of usability across the library intrigued me. I also like the Weave editorial philosophy, especially their primary aim "to improve the practice of UX in libraries, and in the process, to help libraries be better, more relevant, more useful, more accessible places." This aligns well with some of the reasons I picked usability and UX for my research, and with ideas I keep in mind in my practice as well. Before I dig into the article and our discussion, I just want to mention something about usability and UX. Weave is a journal aiming to improve UX, but the article we read was about usability teams and principles. Usability and UX are not the same (though you'll see the terms used nearly interchangeably at times, incorrectly).

Godfrey's article could be divided into two broad sections: a description of usability and usability teams, and an examination of the local experience at Memorial University Libraries. In the first section, she frames her discussion with a literature search on usability principles and practices, and the newer concept of standing usability teams in libraries. She also discusses the importance of making usability a core concept in all areas of library development: physical, virtual, and service. She describes the core concepts of usability, and how Memorial is consciously applying the idea of examining pain points, and other concepts usually confined to online environments, to their physical spaces. The challenges of creating a culture of usability (or of changing any culture), and especially the concept of join-in rather than buy-in when attempting such a significant change, were very interesting to think about.

The second section gives an overview of the Memorial University Libraries context, and how the implementation of a usability team went there. Godfrey outlines how the team was formed, what’s been done so far, and some plans for the future. She identifies the creation of their usability team as “the first step to creating a culture of usability and improving the user experience.”

Our discussion ranged widely, from the style of the article, to ideas of usability beyond the web, concepts of building culture, and beyond. Several of us were hungry for more – details of the actual projects undertaken by the usability team and their outcomes – but recognized that this wasn’t the article we had in our hands. This article felt more like an introduction to the concept of standing usability teams in libraries, an overview of usability concepts, and some local experiences rather than a full case study or assessment of a usability team in a library.

The bulk of our discussion focused on local context. We already do a lot of talking about our different cultures and how to build them here, and have focused recently on building cultures in the areas of leadership, project management, assessment, and EBLIP. How many cultures can one workplace consciously foster, we wondered? Could we honestly see something like a standing usability team happening here? In the end, we thought that adopting usability concepts and ideas into work we already do, and into good standing committees that are already in place, would be more successful in our context. We talked specifically about EBLIP in this regard, because by its very definition EBLIP takes our users into account. So maybe rather than adding a new culture shift to our agenda, it's more about keeping the user aspect of EBLIP in mind when we implement or assess services and programs, and using that as a reminder to stop assuming we know what our users need and to check in with our users on a regular basis.

Libraries have a bad reputation for looking inward and forgetting about our users, so even broad discussions of user preferences and initial user consultation could be a significant improvement. I know from my own area of work (technical services) that a key example of how we fall down on user consultation is when a discovery system needs to be reconfigured and only library staff, rather than users, are consulted about needs and preferences.

In the end, this article made us hungry for more. As practitioners, we were immediately curious about the how and the what of the work. We wanted to see the outcomes of the iterative testing, the aggregated responses from the survey, and the results from this standing team. We hope that Godfrey is planning a follow-up with more of the details from on the ground, so we can continue to learn from what seems to be a unique project. Krista, if you’re reading this, I hope that you are planning to share more about the work you’ve done so far and what’s planned next!

Publish or practice, never that simple: C-EBLIP Journal Club, November 17, 2015

by Selinda Berg
Schulich School of Medicine – Windsor Program
Leddy Library, University of Windsor
University Library Researcher in Residence, University of Saskatchewan

As the Researcher-in-Residence, I was very eager to convene the November gathering of the University of Saskatchewan Library's C-EBLIP Journal Club. I think that this initiative by the Centre (C-EBLIP) is incredibly valuable to librarians: it expands our understanding of the research landscape, increases our understanding of our colleagues' research interests, and diversifies our perspectives and deepens our knowledge about research.

The article we discussed in November was:
Finlay, C. F., Ni, C., Tsou, A., & Sugimoto, C. R. (2013). Publish or practice? An examination of librarians' contributions to research. portal: Libraries and the Academy, 13(4), 403-421.

In this article, the researchers share the results of their investigation into the authorship of LIS literature, with an emphasis on understanding the contributions and attributes of practitioner scholarship. The article intersects well with my own research interests and aligns with many of the ongoing conversations about the research outputs and productivity of academic librarians. The conversation was lively, informative, and thoughtful.

The article was well received by those at journal club, with members highlighting its clear methods and style of writing. The discussion was wide-ranging and led us into many different conversations, but three themes did emerge.

Other possible interpretations and explanations:
The authors found that there was a decrease in the proportion of articles published by practitioners between 2006 and 2011. The authors made a couple of suggestions as to why this may have occurred, including the increase in non-traditional publications and the decrease in expectations for research. In addition to these explanations, we discussed other possibilities, including a movement away from LIS journals as librarians' research interests become more diverse; a decrease in tenure-track/tenured librarian positions (resulting in more contract positions without research opportunities and perhaps more practice-heavy positions); and/or a change in the nature of articles, with a movement away from a focus on quantity of articles to a focus on quality of research.

Application of method and findings to the development of institutional standards and a disciplinary research culture:
The discussion led to interesting conversation about how contributions to scholarship are measured, both in relation to our disciplinary research culture and to institutional standards. As scholarly communications evolve, is the counting of articles in respected journals the only (or best) way to evaluate research contributions? This led us to further consider how disciplinary differences in research culture shape the interpretation of contributions, and how the relatively young and immature research culture in academic libraries makes it difficult to name our disciplinary criteria and, in turn, to develop institutional standards.

Related research questions:
The article was really well received, and from good research come more questions. The article raised some interesting discussion about related research questions that were not within the scope of the research article. There was an interest in knowing more about the qualities and attributes of the librarians who have been publishing (including their position, their length of service, their motivations for research, and the factors that determined where they publish). There were also questions as to whether the librarians who are contributing to scholarship through "traditional" scholarly venues are also contributing to scholarly conversations through non-traditional formats (blogs, open publishing, etc.). Lastly, there was an underlying assumption that these two bodies of literature, by two sets of authors (LIS scholars and practitioner-scholars), interact and impact each other; however, there was an interest in knowing how they actually do interact. For example: are they citing each other, or do they cite only their own communities?

Great discussion ensued at the meeting, and some stimulating ideas were generated from the many interesting findings within the paper and beyond. Looking forward to January!

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Data for Librarians: C-EBLIP Journal Club, October 1, 2015

by Kristin Bogdan
Engineering Library, University of Saskatchewan

At the second meeting of the C-EBLIP Journal club for 2015-2016, held on October 1, 2015, we discussed the article:

MacMillan, D. (2014). “Data Sharing and Discovery: What Librarians Need to Know”. The Journal of Academic Librarianship, 40, 541-549. http://dx.doi.org/10.1016/j.acalib.2014.06.011

I chose this article because it is a nice overview of the key things that librarians should be familiar with about data and data management. MacMillan does a great job of synthesizing the information out there and applying it in a Canadian context, where current data management trends are not as driven by granting agencies as they are in other jurisdictions (although that could be coming). There was general agreement that the article was a useful place to start when it comes to understanding where data management can fit into library services and systems.

The flow of the discussion changed as we looked at data sharing and discovery based on the roles that librarians and information scientists fulfill in this context. We recognized that the library is a possible home for research data and that we have a role as educators, curators, and stewards of data, but we are also researchers who consume and produce data. These points of view overlap and complement each other, but also offer different ways of looking at how the library can be involved.

When it comes to our role as curators and stewards of data, we discussed the kinds of things that could make data sharing difficult. The members of the Journal Club acknowledged that there is a difference between being able to find data and being able to provide those data to patrons in a way that is usable and sustainable. Infrastructure is required for data sharing and discovery, and there are many possible ways to make this happen. Should libraries have their own repositories or take advantage of existing repositories? What are the possible downsides of housing data in institutional repositories instead of those that are discipline-specific (highlighted by MacMillan on page 546)? How can we work together to make the most of our limited resources and provide the most comprehensive services for Canadian researchers? Resources are being collected by the Canadian Association of Research Libraries (CARL), including a list of institutional repositories and adoptive repositories (http://www.carl-abrc.ca/ir.html). We talked briefly about data journals as dissemination venues, but wondered about the implications of publishers owning this content.

Issues around data privacy also came up in the discussion. Concerns were raised around security and the measures in place to make sure that individuals' identities are protected. The Saskatchewan Research Data Centre (SKY-RDC) was identified as an example of how data can be distributed in a controlled way to protect research subjects (more about the SKY-RDC here: http://library.usask.ca/sky-rdc/index.html). We came to the conclusion that, for sensitive research data, privacy will trump sharing.

Our role as data producers and consumers brought up concerns about when it is appropriate to release data that is still being written about. The idea of being scooped came up as a possible deterrent to making data public. This applies to "small" data as much as to "big" data. There were also concerns about how data sets would be used after they were made public. What if they were not used in a way that was consistent with their intended purpose? Data documentation can help users understand the data and use it in a way that enriches their research but acknowledges the possible limitations of the original data set. Data citation is an important, if still relatively new, practice, and part of our role as stewards and creators will be to make citing data as easy and commonplace as citing other materials.
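On that last point, here is a minimal sketch of how a repository might hand users a ready-made data citation, following DataCite's commonly recommended "Creator (Year). Title. Publisher. Identifier" pattern. The record below is invented for illustration:

```python
# Assemble a data citation from a dataset's metadata fields.
# Every value here is a placeholder, including the DOI.
record = {
    "creator": "Doe, J.",
    "year": 2015,
    "title": "Example prairie hydrology dataset",
    "publisher": "University of Saskatchewan",
    "doi": "10.0000/example-doi",  # placeholder, not a real DOI
}

citation = "{creator} ({year}). {title}. {publisher}. https://doi.org/{doi}".format(**record)
print(citation)
```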

In the end, I think this article was a great place to begin the discussion of data sharing and discovery in the context of libraries for the C-EBLIP Journal Club. The discussion generated more questions than answers, which made it clear that this is a topic worthy of further investigation.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Organizational Innovation: C-EBLIP Journal Club, August 20, 2015

by Virginia Wilson
Director, Centre for Evidence Based Library and Information Practice

The first C-EBLIP journal club meeting of the 2015/16 academic year took place on Thursday, August 20, 2015. Despite many librarians being in full-on holiday mode at the time, six participants discussed the following article:

Jantz, R.C. (2015). The Determinants of Organizational Innovation: An Interpretation and Implications for Research Libraries. College & Research Libraries, 76(4), 512-536. http://crl.acrl.org/content/76/4.toc

I chose this article with the idea that innovation was a light and exciting topic and that it would be perfect for an August journal club meeting. While the article did have some redeeming qualities, the club meeting had a bit of a rocky start: this article is long and slightly unwieldy! Jantz conducted a research study which focused on innovation as a specific type of change, and he identified five factors that had a significant impact on how well libraries innovate. The research method consisted of a survey distributed to 50 member libraries of the Association of Research Libraries.

The discussion opened with some problems around the methodology of the research. The details of how the research was conducted were sketchy: there was no word on how Jantz coded the data, and no detailed discussion of analysis or collection methods. The survey instrument was not included in the paper, nor were the raw survey results, which, one journal club member commented, would have been the interesting part! In terms of the instrument, it would have been helpful to have a look at it, as club members wondered whether the author defined the complex terms he used. How can we be sure that all the respondents were on the same page? The author also did not list the specific institutions surveyed, causing us to wonder whether the sample was perhaps skewed to the sciences. Would liberal arts universities/libraries have R&D departments?

Club members found it problematic that the surveys were only administered to senior leadership teams, as views could be quite different down the organizational hierarchy. As well, since the responses were said to have come from senior leadership teams, members were interested in how this might have happened logistically. Did the teams get together to fill out the surveys? Did each team member fill out a survey, with the results then collated? This is an example of insufficient detail around the methods employed for this research; the problems could have been alleviated with more attention to detail. It was a good takeaway for my own research: be detail oriented, especially when it comes to methodology! If the research is to be useful, it has to be seen as valid and reliable. That won't happen if the reader is questioning the methods.

Another issue with the paper was that in several instances the author stated things as established facts but did not cite them. Perhaaps aside, the author was making assumptions, but as we are attuned to the idea of providing evidence for claims, we weren't buying it. Another takeaway: in terms of evidence, more is more! Or at the very least, some is vastly better than none. As well, in the conclusion of the paper, the author used the terms vision and mission interchangeably, which particularly irritated one journal club member and was another example of imprecision.

The discussion moved from the particulars of the article to innovation in general with some observations being made:
• Innovation is not dependent on age: people across the age spectrum can be innovative.
• We are overwhelmed by choice. The more choices we have, the more difficult making a choice becomes. Decision-making is difficult.
• Is there a type of innovation on the other side of radical, i.e. useless? Change for change's sake?
• One participant felt that innovation of all types can be useless. Innovation doesn’t necessarily create more viable choices.
• Libraries are always playing catch up…it may be innovative to us but not elsewhere [depending of course on the library].
• Difference between innovation and library innovation. Is there a difference between innovation and implementation?

At the very least, C-EBLIP Journal Club members felt that reading about innovation could be valuable when heading into a state of change. And let’s face it, we’re generally dealing with change more often than not these days. To conclude, although the article had some methodological gaps and members felt that the author could have been more selective when transforming his PhD dissertation into an article, the article did give us the basis for a fruitful discussion on innovation and the first meeting of 2015/16 was an hour well spent.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Conducting Research on Instructional Practices: C-EBLIP Journal Club, May 14, 2015

by Tasha Maddison
University Library, University of Saskatchewan

Journal club article:
Dhawan, A., & Chen, C.J.J. (2014). Library instruction for first-year students. Reference Services Review, 42(3), 414-432.

I was sent this article through an active Scopus alert that I have running on the topic of flipped classrooms. I had not received an article that met my particular search criteria in a long while, so I was excited to read this offering. The authors had included flipped classrooms as one of their recommended keywords for the article; yet the term does not make an appearance until page 426, in a section entitled 'thoughts for improving library instruction', and makes up just over a paragraph of content. It was interesting to witness firsthand how the use of an inappropriate keyword caused further exposure to research that I probably would not have read otherwise, which is both good and bad. I chose this article for C-EBLIP Journal Club for this reason, as I believed it would generate a spirited debate on the use of keywords, and that it did. Immediately there was a strong reaction from the membership on how dangerous it can be to use deceptive descriptions and/or keywords in the promotion of your work, as you will likely end up frustrating your audience.

I found the scope of the literature review in this article to be overly ambitious, as it covers librarian/faculty collaboration and best practices for instruction in addition to information on the first-year college experience (pg. 415). I wondered if the reader would have been better served by a more specific review of the literature on 'for-credit' first-year library instruction. Another point worth noting is the significant examination of the assessment process throughout the article, including information about the rubric that was used as well as evidence from the ACRL framework and the work of Megan Oakleaf; yet the only quantitative data provided in the case study was briefly summarized on page 423.

The group had a lively discussion on the worth of communicating research on instructional practices in scholarly literature. Members questioned whether or not there is value in the 'how we done it good' type of article, and the validity of reporting observations and details of your approach without providing assessment findings or quantitative data. I would argue that there is a need for this type of information within library literature. Librarians with teaching as part of their assigned duties require practical information about course content, samples of rubrics, and details of innovative pedagogy, as well as best practices when using a certain methodology, ideally outlining both the successes and failures. Despite the advantages to the practitioner in the field, we speculated about how such information could be used within evidence based practice, as the findings from these types of articles are typically not generalizable and often suffer from inconsistent use of research methodology.

We wondered if there is a need to create a new category for scholarly output. If so, do these articles need to be peer reviewed or should they be simply presented as a commentary? There is merit in practitioner journals that describe knowledge and advice from individuals in the field, detailing what they do. This type of scholarly output has the potential to validate professional practice and help librarians in these types of positions develop a reputation by publishing the results of their integration of innovative teaching practices into their information literacy instruction.

In spite of the fact that this article had little to do with flipped classrooms, I did find a lot of interesting takeaways, including details of student learning services within the library and learning communities on their campus, as well as the merit of providing for-credit mandatory information literacy courses.

Suggested further reading: http://blogs.lse.ac.uk/impactofsocialsciences/2015/05/13/reincarnating-the-research-article-into-a-living-document/

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Altmetrics: what does it measure? Is there a role in research assessment? C-EBLIP Journal Club, April 6, 2015

by Li Zhang
Science and Engineering Libraries, University of Saskatchewan

Finally, I had the opportunity to lead the C-EBLIP Journal Club on April 6, 2015! This was originally scheduled for January, but was cancelled due to my injury. The article I chose was:

Zahedi, Z., Costas, R., & Wouters, P. (2014). How well developed are altmetrics? A cross-disciplinary analysis of the presence of 'alternative metrics' in scientific publications. Scientometrics, 101(2), 1491-1513.

There are several reasons why I chose this article on altmetrics. First, in the University of Saskatchewan Library, research is part of our assignment of duties. Inevitably, how to evaluate librarians' research outputs has been a topic of discussion in the collegium. Citation indicators are probably the most widely used tool for the evaluation of publications. But with the advancement of technology and different modes of communication, how do we capture the impact of scholarly activities in those alternative venues? Altmetrics seems to be a timely addition to the discussion. Second, altmetrics is an area in which I am interested in developing my expertise. My research interests encompass bibliometrics and its application in research evaluation; therefore, it is natural to extend my interests to this new, emerging field. Third, this paper not only presents detailed information on the methods used in this research but also provides a balanced view of altmetrics, helping us both to understand how altmetric analysis is conducted and to be aware of the issues around these new metrics.

We briefly discussed the methodology and main findings of the article. Some of the interesting findings include: Mendeley readership was probably the most useful source for altmetrics, while mentions of publications in other types of media (such as Twitter, Delicious, and Wikipedia) were very low; Mendeley readership counts also had a moderate positive correlation with citation counts; and in some fields of the social sciences and humanities, altmetric counts were actually higher than citation counts, suggesting that altmetrics could be a potentially useful tool for capturing the impact of scholarly publications in these fields, in addition to citation indicators.
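For anyone curious what that readership-citation correlation looks like in practice, here is a minimal sketch using Spearman's rank correlation, the usual choice for skewed count data of this kind. The counts are made up for illustration and are not the paper's data:

```python
# Rank-correlate citation counts with Mendeley readership counts.
# The numbers below are invented for illustration only.
from scipy.stats import spearmanr

citations        = [12, 3, 45, 0, 7, 22, 5, 18]
mendeley_readers = [30, 8, 80, 2, 15, 40, 9, 33]

rho, p_value = spearmanr(citations, mendeley_readers)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```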

Later in the session, we discussed a couple of issues related to altmetrics. Although measuring the impact of scholarly publications in alternative sources has gained notice, it is not yet clear why publications are mentioned in these sources. What kind of impact does altmetrics measure? With traditional citation indicators, at least we know that the cited articles stimulated or informed the current research in some way (either positive or negative). In contrast, a paper appearing in Mendeley does not necessarily mean it has been read. Similarly, a paper mentioned on Twitter could be just self-promotion (there is nothing wrong with that!). From here, we extended our discussion to publishing behaviours and promotion strategies. Are social scientists more likely than natural scientists to use social media to promote their research and publications? The award criteria and merit systems in academia will also play a role. If altmetrics is counted as an indication of the quality of publications, we may see a sudden surge in social media use by researchers. Further, it is much easier to manipulate altmetrics than citation metrics. Care needs to be taken before we can confidently use altmetrics as a reliable tool to measure scholarly activities.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.