Conducting Research on Instructional Practices: C-EBLIP Journal Club, May 14, 2015

by Tasha Maddison
University Library, University of Saskatchewan

Journal club article:
Dhawan, A., & Chen, C.J.J. (2014). Library instruction for first-year students. Reference Services Review, 42(3), 414-432.

I was sent this article through an active Scopus alert that I have running on the topic of flipped classrooms. I had not received an article that met my particular search criteria in a long while, so I was excited to read this offering. The authors had included flipped classrooms as one of their recommended keywords for the article; yet the term does not make an appearance until page 426, in a section entitled ‘thoughts for improving library instruction’, and makes up just over a paragraph of content. It was interesting to witness firsthand how an inappropriate keyword exposed me to research that I probably would not have read otherwise, which is both good and bad. I chose this article for C-EBLIP Journal Club for this reason, as I believed it would generate a spirited debate on the use of keywords, and that it did. Immediately there was a strong reaction from the membership about how risky it can be to use misleading descriptions and/or keywords in the promotion of your work, as you will likely end up frustrating your audience.

I found the scope of the literature review in this article to be overly ambitious, as it covers librarian/faculty collaboration and best practices for instruction in addition to information on the first-year college experience (p. 415). I wondered if the reader would have been better served by a more focused review of the literature on ‘for-credit’ first-year library instruction. Another point worth noting is the article's extensive examination of the assessment process, including information about the rubric that was used as well as evidence from the ACRL framework and the work of Megan Oakleaf; yet the only quantitative data provided in the case study was briefly summarized on page 423.

The group had a lively discussion on the worth of communicating research on instructional practices in scholarly literature. Members questioned whether there is value in the ‘how we done it good’ type of article, and the validity of reporting observations and details of your approach without providing assessment findings or quantitative data. I would argue that there is a need for this type of information within library literature. Librarians with teaching as part of their assigned duties require practical information about course content, samples of rubrics, and details of innovative pedagogy, as well as best practices when using a certain methodology, ideally outlining both the successes and the failures. Despite the advantages to the practitioner in the field, we speculated about how such information could be used within evidence based practice, as the findings from these types of articles are typically not generalizable and often suffer from inconsistent use of research methodology.

We wondered if there is a need to create a new category for scholarly output. If so, do these articles need to be peer reviewed or should they be simply presented as a commentary? There is merit in practitioner journals that describe knowledge and advice from individuals in the field, detailing what they do. This type of scholarly output has the potential to validate professional practice and help librarians in these types of positions develop a reputation by publishing the results of their integration of innovative teaching practices into their information literacy instruction.

Although this article had little to do with flipped classrooms, I did find a lot of interesting takeaways, including details of student learning services within the library, learning communities on their campus, and the merit of providing mandatory for-credit information literacy courses.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Altmetrics: what does it measure? Is there a role in research assessment? C-EBLIP Journal Club April 6, 2015

by Li Zhang
Science and Engineering Libraries, University of Saskatchewan

Finally, I had the opportunity to lead the C-EBLIP Journal Club on April 6, 2015! This was originally scheduled for January, but was cancelled due to my injury. The article I chose was:

Zahedi, Z., Costas, R., & Wouters, P. (2014). How well developed are altmetrics? A cross-disciplinary analysis of the presence of ‘alternative metrics’ in scientific publications. Scientometrics, 101(2), 1491-1513.

There are several reasons why I chose this article on altmetrics. First, in the University of Saskatchewan Library, research is part of our assignment of duties. Inevitably, how to evaluate librarians’ research outputs has been a topic of discussion in the collegium. Citation indicators are probably the most widely used tool for evaluating publications, but with the advancement of technology and different modes of communication, how do we capture the impact of scholarly activities in those alternative venues? Altmetrics seems to be a timely addition to the discussion. Second, altmetrics is an area in which I am interested in developing my expertise. My research interests encompass bibliometrics and its application in research evaluation; therefore, it is natural to extend my interests to this newly emerging field. Third, this paper not only presents detailed information on the methods used in this research but also provides a balanced view of altmetrics, thus helping us to understand how altmetric analysis is conducted and to be aware of the issues around these new metrics as well.

We briefly discussed the methodology and main findings of the article. Some of the interesting findings include: Mendeley readership was probably the most useful source for altmetrics, while mentions of publications in other types of media (such as Twitter, Delicious, and Wikipedia) were very low; Mendeley readership counts also had a moderate positive correlation with citation counts; and in some fields of the social sciences and humanities, altmetric counts were actually higher than citation counts, suggesting altmetrics could be a potentially useful tool for capturing the impact of scholarly publications from different sources in these fields, in addition to citation indicators.

Later in the session, we discussed a couple of issues related to altmetrics. Although measuring the impact of scholarly publications in alternative sources has gained notice, it is not yet clear why publications are mentioned in these sources. What kind of impact does altmetrics measure? With traditional citation indicators, at least we know that the cited articles stimulated or informed the current research in some way (either positively or negatively). In contrast, a paper appearing in Mendeley does not necessarily mean it is read. Similarly, a paper mentioned on Twitter could be just self-promotion (there is nothing wrong with that!). From here, we extended our discussion to publishing behaviours and promotion strategies. Are social scientists more likely than natural scientists to use social media to promote their research and publications? The award criteria and merit system in academia will also play a role: if altmetrics is counted as an indication of the quality of publications, we may see a sudden surge in social media use by researchers. Further, it is much easier to manipulate altmetrics than citation metrics. Care needs to be taken before we can confidently use altmetrics as a reliable tool to measure scholarly activities.
