by Farah Friesen
Centre for Faculty Development (CFD)
University of Toronto and St. Michael’s Hospital
I work at the Centre for Faculty Development (CFD), a joint partnership between the University of Toronto and St. Michael’s Hospital, a fully affiliated teaching hospital. CFD is composed of educators and researchers in the medical education/health professions education field.
As a librarian who has been fully integrated into a research team, I have applied my training and skills to different aspects of this position. One of the areas for which I am now responsible is tracking the impact of the CFD.
While tracking the work that the CFD does, I have started to question what “counts” as evidence of impact and why certain types of impact are treated as more important than others.
So what exactly is impact? This is an important question to discuss because how we define impact affects what we count, and what we choose to count changes our behaviour.
We are all familiar with the three traditional categories that count as impact in academia: 1) research, 2) teaching, and 3) service. Yet the three are not treated equally: research is often given the most weight when it comes time for annual reviews and tenure decisions.
What we select as indicators of impact actively shapes and constrains our focus and endeavours. If research is worth the most, might this not encourage faculty to put most of their efforts into research and less into teaching or service?
Hmm… does this remind us of something? Oh yes! Our allegiance to research is strong and reflected in other ways. Evidence-based practice (EBP) is often described as a “three-legged stool” comprising 1) research evidence, 2) practice knowledge and expertise, and 3) client preferences and values,1 yet research is often treated as synonymous with evidence and valued most highly of the three types of evidence that EBP says should be taken into consideration.2,3 It is not accidental that what is given the most weight in academia is the same as what is given the most weight in EBP: research. We have established similar hierarchies of what counts, and they permeate both our scholarly work and our decision-making in practice!
Research impact is traditionally tracked through numbers of grants, publications, and citations (and perhaps conference presentations). Attention to altmetrics is growing, but altmetrics tends to track these very same traditional research products; it merely shortens the time between production and dissemination (the actual use or impact of altmetrics is a whole other worthy discussion).
Why is it that an academic’s impact (or an academic department’s impact) is essentially dependent on research impact?
There are practical reasons for this, of course: research productivity influences an institution’s academic standing and the distribution of funding. As an example of the former, one of the performance indicators used by the University of Toronto is research excellence,4 which is based on comparing the number of publications and citations generated by UofT faculty (in the sciences) to those of faculty at other Canadian institutions. For an example of the latter, one can look to the UK’s Research Excellence Framework (REF), which assesses “the quality of research in UK higher education institutions”5 and allocates funding based on these REF scores.
While these practical considerations cannot be ignored, might it not benefit us to broaden our definition of impact in education scholarship? (Note that the comparisons of research excellence above are based on sciences faculty only; we must think critically about the types of metrics that are appropriate for different fields and disciplines.)
This is tied to the question of the ‘value’ and purpose of education. What is it that we hope to achieve as educators and education researchers? The “rise of measurement culture”6 creates the expectation that “educational outcomes can and should be measured.”6 But those of us working in education intuit that there are potentially unquantifiable benefits in the work that we do.
- How do we account for the broad range of educational impacts that we have?
- How might we better capture the complex social processes/impacts in education?
- What other types of indicators might we choose to measure, to ‘make count’ as impact, beyond traditional metrics and altmetrics?
- How do we encourage researchers/faculty to start conceiving of impact more broadly?
While considering these questions, we must be wary of the pressure to produce and play the tracking ‘game,’ lest we fall into “focus[ing] on what is measurable at the expense of what is important.”7
Part 2 in June will examine some possible responses to the questions above regarding alternative indicators to help (re)define educational impact more broadly. A great resource for further thoughts on the topic of impact and metrics: http://blogs.lse.ac.uk/impactofsocialsciences/
I would like to thank Stella Ng and Lindsay Baker for their collaboration and guidance on this work, and Amy Dionne and Carolyn Ziegler for their support of this project.
1. University of Saskatchewan. What is EBLIP? Centre for Evidence Based Library & Information Practice. http://library.usask.ca/ceblip/eblip/what-is-eblip.php. Accessed Feb 10, 2016.
2. Mantzoukas S. A review of evidence-based practice, nursing research and reflection: levelling the hierarchy. J Clin Nurs. 2008;17(2):214-23.
3. Mykhalovskiy E, Weir L. The problem of evidence-based medicine: directions for social science. Soc Sci Med. 2004;59(5):1059-69.
4. University of Toronto. Performance Indicators 2014 Comprehensive Inventory. https://www.utoronto.ca/performance-indicators-2014-comprehensive-inventory. Accessed Feb 10, 2016.
5. Higher Education Funding Council for England (HEFCE). REF 2014. Research Excellence Framework. http://www.ref.ac.uk/. Accessed Feb 10, 2016.
6. Biesta G. Good education in an age of measurement: on the need to reconnect with the question of purpose in education. Educ Assess Eval Acc. 2009;21(1):33-46.
7. Buttliere B. We need informative metrics that will help, not hurt, the scientific endeavor – let’s work to make metrics better. The Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2015/10/08/we-need-informative-metrics-how-to-make-metrics-better/. Published Oct 8, 2015. Accessed Feb 10, 2016.
This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.