Impactful research

by Nicole Eva-Rice, Liaison Librarian for Management, Economics, Political Science, and Agriculture Studies, University of Lethbridge Library

Why do we do research? Is it simply to fulfill our obligations for tenure and promotion? Is it to satisfy our curiosity about some phenomenon? Or is it to help our fellow librarians (or researchers in another discipline) to do their jobs, or further the knowledge in our field?

I find myself grappling with these thoughts when embarking on a new research project. Sometimes it’s difficult to see the point of our research when we are stuck on the ‘publish or perish’ hamster wheel, and I suspect it’s all the more so for faculty outside of librarianship. It’s wonderful when we have an obvious course set out for us and can see the practical applications of our research – finding a cure for a disease, for example, or a way to improve school curriculum – but what if the nature of our research is more esoteric? Does the world need another article on the philosophy of librarianship, or the creative process in research methods? Or are these ‘make work’ projects for scholars who must research in order to survive in academe?

My most satisfying research experiences, and the ones I most appreciate from others, have to do with the practical aspects of my job. I love research that can directly inform my day-to-day work, because I know that any decisions I make based on that research are grounded in evidence. If someone has researched the effectiveness of flipping a one-shot and can show me whether it’s better or worse than the alternative, I appreciate their efforts both in performing the study and in publishing their results, because I can benefit directly from their experience. Likewise, if someone publishes an article on how they systematically analyzed their serials collections to make cuts, I can put their practices to use in my own library.

I may not cite those articles – in fact, most people won’t unless they do further research along that line – but they have a direct impact on the field of librarianship. Unfortunately, that impact is invisible to the author-researchers unless we make a point of contacting them and telling them how we were able to apply their research in our own institutions (and I don’t know about you, but I have never done that, nor had it occurred to me to do so until just this minute). So measuring ‘impact’ by citations, tweets, or downloads just doesn’t do justice to the true impact of that article. Even a philosophy-of-librarianship article could have serious ‘impact’ in the way it affects how someone approaches their job – but unless the reader goes on to write another article citing it, nothing exists to prove the very real impact the original article has made.

In fact, the research doesn’t even have to result in a scholarly article – if I read a blog post on some of these topics, I might still be able to benefit from them and use the ideas in my own practice. Of course, this depends on exactly what the content is and how much rigor you need in replicating the procedure in your own institution, but sometimes I find blog posts more useful in my day-to-day practice than the actual scholarly articles. Even the philosophical-type posts are more easily digested and contemplated in the length and tone provided in a more informal publication.

This is all to say that I think the way we measure and value academic research is seriously flawed – something many librarians (and other academics) would agree with, but that others in academia still strongly adhere to. This is becoming almost a moral issue for me. Why does everything have to be measurable? Why can’t STP committees take the research project as described at face value, and accept other types of impact it could have on readers/policy makers/practitioners rather than assigning a numerical value based on where it was published and how many times it was cited?

When I hear other faculty members discussing their research, even if I don’t know anything about their subject area, I can often tell whether it will have ‘real’ impact. The health sciences researcher whose report to the government resulted in policy change obviously had a real impact – but she won’t have a peer-reviewed article to list on her CV (unless she goes out of her way to create one to satisfy the process), nor will she likely have citations (unless that article is written). It also makes me think about my next idea for a research project, which is truly just something I’ve been curious about, and for which I can’t see many practical implications other than serving others’ curiosity. It’s a departure for me because I am usually the most practical of people, and my research usually has to serve the dual purpose of having application in my current workplace as well as becoming fodder for another line on my CV.

As I have been thinking about the implications of impact more and more, I realize that as publicly paid employees, perhaps we have an obligation to make our research have as wide a practical impact as possible. What do you think? Have we moved beyond the luxury of researching for research’s sake? As employees of public institutions, do we have a societal obligation to produce practical outcomes? I’m curious as to what others think and would love to continue the conversation.

For more on impact and what can count as evidence of it, please see Farah Friesen’s previous posts on this blog, What “counts” as evidence of impact? Part 1 and Part 2.


This article gives the views of the author and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

What “counts” as evidence of impact? (Part 1 of 2)

by Farah Friesen
Centre for Faculty Development (CFD)
University of Toronto and St. Michael’s Hospital

I work at the Centre for Faculty Development (CFD), a joint partnership between the University of Toronto and St. Michael’s Hospital, a fully affiliated teaching hospital. CFD is composed of educators and researchers in the medical education/health professions education field.

As a librarian who has been fully integrated into a research team, I have applied my training and skills to different aspects of this position. One of the areas for which I am now responsible is tracking the impact of the CFD.

While spending time tracking the work that CFD does, I have started to question what “counts” as evidence of impact and why certain types of impact are more important than others.

So what exactly is impact? This is an important question to discuss because how we define impact affects what we count, and what we choose to count changes our behaviour.

We are all familiar with the three traditional areas in which academic impact is counted: 1) research, 2) teaching, and 3) service. Yet the three are not treated equally; research is often given the most weight when it comes time for annual reviews and tenure decisions.

What we select as indicators of impact actively shapes and constrains our focus and endeavours. If research is worth the most, might this not encourage faculty to put most of their efforts into research and less into teaching or service?

Hmm… does this remind us of something? Oh yes! Our allegiance to research is strong and reflected in other ways. Evidence-based practice also rests on a “three-legged stool” comprising 1) research evidence, 2) practice knowledge and expertise, and 3) client preferences and values,1 yet research is often treated as synonymous with evidence and is the most valued of the three types of evidence that should be taken into consideration in EBP.2,3 It is not accidental that what is given the most weight in academia is the same as what is given the most weight in EBP: research. We have established similar hierarchies of what counts, and they permeate both our scholarly work and our decision-making in practice!

Research impact is traditionally tracked through the number of grants, publications, and citations (and perhaps conference presentations). Attention to altmetrics is growing, but altmetrics tends to track the very same traditional research products, merely speeding up the time between production and dissemination (the actual use or impact of altmetrics is a whole other discussion worth having).

Why is it that an academic’s impact (or an academic department’s impact) is essentially dependent on research impact?

There are practical reasons for this, of course: research productivity influences an institution’s academic standing and influences the distribution of funding. As an example of the former, one of the performance indicators used by the University of Toronto is research excellence,4 which is based on comparing the number of publications and citations generated by UofT faculty (in the sciences) to those of faculty at other Canadian institutions. For an example of the latter, one can refer to the UK’s Research Excellence Framework (REF), which assesses “the quality of research in UK higher education institutions”5 and allocates funding based on these REF scores.

While these practical considerations cannot be ignored, might it not benefit us to broaden our definition of impact in education scholarship? (Note that the comparisons of research excellence above are based on sciences faculty only. We must think critically about the type of metrics that are appropriate for different fields/disciplines).

This is tied to the question of the ‘value’ and purpose of education. What is it that we hope to achieve as educators and education researchers? The “rise of measurement culture”6 creates the expectation that “educational outcomes can and should be measured.”6 But those of us working in education intuit that there are potentially unquantifiable benefits in the work that we do.

  • How do we account for the broad range of educational impacts that we have?
  • How might we better capture the complex social processes/impacts in education?
  • What other types of indicators might we choose to measure, to ‘make count’ as impact, beyond traditional metrics and altmetrics?
  • How do we encourage researchers/faculty to start conceiving of impact more broadly?

While considering these questions, we must be wary of the pressure to produce and play the tracking ‘game,’ lest we fall into “focus[ing] on what is measurable at the expense of what is important.”7

Part 2 in June will examine some possible responses to the questions above regarding alternative indicators to help (re)define educational impact more broadly. A great resource for further thoughts on the topic of impact and metrics: http://blogs.lse.ac.uk/impactofsocialsciences/

I would like to thank Stella Ng and Lindsay Baker for their collaboration and guidance on this work, and Amy Dionne and Carolyn Ziegler for their support of this project.

  1. University of Saskatchewan. What is EBLIP? Centre for Evidence Based Library & Information Practice. http://library.usask.ca/ceblip/eblip/what-is-eblip.php. Accessed Feb 10, 2016.
  2. Mantzoukas S. A review of evidence-based practice, nursing research and reflection: levelling the hierarchy. J Clin Nurs. 2008;17(2):214-23.
  3. Mykhalovskiy E, Weir L. The problem of evidence-based medicine: directions for social science. Soc Sci Med. 2004;59(5):1059-69.
  4. University of Toronto. Performance Indicators 2014 Comprehensive Inventory. https://www.utoronto.ca/performance-indicators-2014-comprehensive-inventory. Accessed Feb 10, 2016.
  5. Higher Education Funding Council for England (HEFCE). REF 2014. Research Excellence Framework. http://www.ref.ac.uk/. Accessed Feb 10, 2016.
  6. Biesta G. Good education in an age of measurement: on the need to reconnect with the question of purpose in education. Educ Assess Eval Acc. 2009; 21(1): 33-46.
  7. Buttliere B. We need informative metrics that will help, not hurt, the scientific endeavor – let’s work to make metrics better. The Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2015/10/08/we-need-informative-metrics-how-to-make-metrics-better/. Published Oct 8, 2015. Accessed Feb 10, 2016.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Some Musings on Metrics

by Marjorie Mitchell
Librarian, Learning and Research Services
UBC Okanagan Library

As the first few weeks of the new academic year wrap up in Canada, academic librarians can now shift their focus from orienting new students back to supporting faculty and graduate students, especially with research-focused support. Many researchers are preparing grant funding applications for the fall round of deadlines, and the systems for assessing these applications are becoming ever more complex.

As research funding becomes a global competition, how are funders to decide which research deserves their support?

Over the past few years, global discussions regarding various metrics determining research impact have increased. Within their institutional research communications, administrators use impact metrics to compare their institutions to others, either nationally or internationally. Within their funding applications, researchers use impact factors to indicate the importance and worthiness of their research. One real appeal of metrics is that they are tangible, objective measures of the real use of a product of scholarly research. Or are they?

Since the 1950s, bibliographic citation databases have been in continuous development and have formed a broad base for different publication metrics, especially article and journal metrics. These metrics have not been without issues, not the least of which is the variation in citation patterns between disciplines and the potential for researchers to attempt to “play” the system to make it appear that their research has had greater impact than it actually has had.
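Metrics built on these citation databases are, at bottom, simple computations over per-publication citation counts. As one concrete illustration (the h-index is not named in this post, and the citation counts below are purely hypothetical), here is a minimal sketch of how such a metric is derived:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h publications
    have at least h citations each (Hirsch, 2005)."""
    h = 0
    # Rank publications from most to least cited; a publication at
    # rank r contributes to h only if it has at least r citations.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's publications
print(h_index([25, 8, 5, 4, 3, 1]))  # → 4
```

The sketch also makes the post’s caveat visible: two researchers with very different citation patterns (one highly cited paper vs. many modestly cited ones) can land on similar scores, which is exactly the kind of disciplinary variation the paragraph above warns about.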

Coined by Priem in 2010, “alternative metrics” (altmetrics) measure the impact of newer, non-traditional forms of scholarship published and discussed outside academic journals or conference proceedings. Digital humanities, community-involved research, and emerging forms of scholarship prove challenging for grant funding bodies and administrators to assess. Interestingly, books have neither been extensively covered in bibliographic citation databases nor been the subject of computerized citation analysis to the same degree as journal articles or newer, non-traditional forms of scholarly publication. All of these instances are fertile ground for conversations led by librarians.

Does this matter?

Institutionally, librarians can help both researchers and administrators to gain a fuller understanding of the uses, and potential pitfalls from misuse, of metrics of all varieties. The broader the understanding of the subtleties of metrics, the less likely they are to be misunderstood and/or misrepresented. Ultimately, this greater understanding could form the basis for a more balanced and equitable story of research happening within our universities.

Priem, J., & Hemminger, B. (2010). Scientometrics 2.0: New metrics of scholarly impact on the social Web. First Monday, 15(7). Retrieved September 21, 2015, from http://firstmonday.org/ojs/index.php/fm/article/view/2874/2570

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.