A Lesson Not Impossible




Each year about this time I start to imagine: what if this year things were different? The possibilities of lively discussions, great feats of learning, and engaged students arise with excitement, and then that doubting voice drifts in…

But what if it was not impossible?

What if the needs I perceived in my students and in my goals could be met?

Inspiration is offered by Mick Ebeling and Not Impossible Labs through their work that began when they heard of a boy named Daniel, an individual with a particular need, and asked the question, “What if this is not impossible?” After pulling together a team to think through and test possibilities, they found a solution that worked. The solution they created not only gave Daniel an arm, it is changing a community – “help one, help many.”

Meet Daniel, or hear Mick speak of creating an EyeWriter for graffiti artist TEMPT.

What if what will help one will help many? Who is your Daniel?

My Daniel is Sara (pseudonym), the second-year psychology student I taught back in 2005 who wanted to learn statistics and needed to succeed in her honours program, but was afraid of math. I began developing explanations for Sara (e.g., is the difference between groups larger than the difference within groups?). The rationale and analogies allowed Sara to see the function of a t-test, know when to use it, and know what information is needed first and then what math to use to compute a t statistic. My approach became to link a statistical test’s purpose, function, and formula by pairing the mathematical formula on the board with a description of its purpose – specifically, why this test might be used. I then illustrate the functional patterns that I would look for, noting the variables of the formula they represent.
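
To make the between-versus-within comparison concrete, here is a minimal sketch of the logic in Python. It is not from the original lesson: the scores, group names, and use of SciPy are my own illustrative assumptions.

```python
# A hedged sketch: the t statistic asks whether the difference BETWEEN two
# groups is large relative to the variability WITHIN them. All data invented.
import numpy as np
from scipy import stats

group_a = np.array([72.0, 75, 68, 80, 77, 74])  # hypothetical scores, group A
group_b = np.array([65.0, 70, 63, 69, 72, 66])  # hypothetical scores, group B

between = group_a.mean() - group_b.mean()               # difference between groups
within = np.sqrt(group_a.var(ddof=1) / len(group_a)     # variability within groups,
                 + group_b.var(ddof=1) / len(group_b))  # scaled by sample size

print("t by hand:", between / within)

# The same statistic via SciPy (Welch's t-test, no equal-variance assumption)
t, p = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```

The hand computation and the library call agree, which is the point of the purpose–function–math pairing: the formula on the board and the question it answers are the same object.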

For example, I would show the formula for a Pearson correlation and describe its purpose: determining to what extent two variables are related. The Pearson correlation functions by detecting co-occurrence, such as when survey respondents answer two questions similarly: if variable A is rated highly, is variable B? The analogy I use is pair dancing – if the dancers are in sync, they get higher ratings (closer to 100, or in this case 1.0). The math reflects this purpose and function in the numbers it uses and generates.
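
A similarly small sketch for the dancing analogy, again with invented survey ratings (the variable names and SciPy call are my assumptions, not part of the original post):

```python
# Do respondents who rate question A highly also rate question B highly?
# r near 1.0 means the two "dancers" are in sync; all ratings are invented.
import numpy as np
from scipy import stats

question_a = np.array([5, 4, 4, 2, 1, 3, 5, 2])  # hypothetical 1-5 ratings
question_b = np.array([4, 5, 3, 2, 2, 3, 5, 1])

r, p = stats.pearsonr(question_a, question_b)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```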

The resulting cognitive scaffold of purpose–function–math not only helped Sara face and complete assignments, it helped her classmates compare statistical tests and better understand when to select them. Since then, I have continued to develop materials and teach sessions locally, nationally, and internationally on statistics for individuals with little or no background in math and stats. The approach that was designed to meet Sara’s needs now benefits many.

A key part of Not Impossible Labs’ success is identifying a collection of others who have the knowledge or skills needed to make the impossible not impossible. Mick needed knowledge of prosthetics and access to equipment. I talked with individuals who disliked and struggled with statistics, as well as those who used statistics daily. What do you need? And who among our community of excellent educators, librarians, course (re)designers, curriculum discussion facilitators, mentors, students, and the wider community will be on your team?

I would love to learn of your experiences of Not Impossible and wrestling with the “seems impossible”!

wâhkôhtowin 2014: Linking Kindred Spirits




The Beadwork Committee of the College of Education at the University of Saskatchewan had a vision for a national conference that would bring together “kindred spirits” to unpack decolonization and kindle Indigenization processes and methods to transform educational practices. This vision is coming to fruition from September 18 to 20, when the University will welcome delegates from the province, the country, and the world.

The wâhkôhtowin conference is structured uniquely, in that on the first full day, papers will be presented in concurrent sessions, where delegates might share ideas regarding Indigenous theory and application, decolonizing practices, the value of Story-telling, working with Elders, examining land-based pedagogies, and ethics, research, and protocols. On the second day, delegates gather at wanuskewin, bringing together their collective experiences and knowledge, and work collaboratively to determine “next steps” toward decolonizing and Indigenizing. “Witnesses” from the four directions will speak at the end of the conference to reflect on the work that has been accomplished and the relationships that have been built.

The conference could not have proceeded without the voices and prayers of the Elders Mary Lee, Mike Maurice, Darlene Speidel, and Martha Peet, who are guiding us through the conference.

Although the conference is full, the entire campus and community beyond are warmly invited to attend the plenary talk “Indigenous Education: My Journey,” offered by the Right Honourable Paul Martin. We will convene for this talk in Convocation Hall, in the Peter MacKinnon Building, on Friday, September 19, from 10 to 11:45 a.m. There is no cost for admission to this event.

GMCTE To Host Annual Celebration of Teaching



The Gwenna Moss Centre for Teaching Effectiveness will host the annual Celebration of Teaching in recognition of the past academic year’s award-winning teachers on Friday, September 12. At this year’s Celebration, the Sylvia Wallace Sessional Lecturer Award and the Provost’s Outstanding Teaching Awards will be presented.

The Celebration will take place at the U of S in Arts 241 from 3:30 to 5:30 p.m. If you are planning to attend, please RSVP to Sharilyn Lee at the GMCTE at sharilyn.lee@usask.ca.

The award winners are listed below. Click on the individual names to learn more about the recipients.

Sylvia Wallace Sessional Lecturer Award

Rod Johnson and Bert Weichel, Geography and Planning

Provost’s Awards

The recipients of the campus-wide Provost’s Teaching Awards are:

  • Provost’s Award for Excellence in Aboriginal Education: Verna St. Denis
  • Provost’s Award for Outstanding New Teacher: Dionne Pohler
  • Provost’s Award for Outstanding Graduate Teaching: Jan Gelech

The winners of the Provost’s College Awards for Outstanding Teaching (college specific) are:

Master Teacher Award

2014 Spring: Ronald C.C. Cuming

2013 Fall: Debbie Pushor

New Research Guides at the University Library: LibGuides2 Update



By Shannon Lucky, Information Technology Library

As we enter a new fall semester, the University Library has launched a major update to our Research Guides. These guides, built on the new LibGuides2 platform, are carefully curated selections of discipline- and course-specific resources combined with information on how to conduct research, writing skills, and other valuable Library tools. To explore the new guides, go to the University Library homepage and choose “Research Guides” under the Tools and Services column in the left-hand menu, or go directly to http://libguides.usask.ca.


There are three types of Research Guides you can find through the University Library:

  • Subject Guides are maintained by your subject librarian. These guides present carefully chosen selections of subject-specific, high-quality, scholarly resources, saving you and your students time by highlighting the best resources in your discipline. Browse Subject Guides.
  • Topic Guides cover how-to topics such as Finding Journal Articles, How to Evaluate Information Sources, and Citation Style Guides. They also present resources and information that are not subject-specific, such as Open Access, Copyright for Educators, and Research Metrics. To browse Topic Guides, go to the Research Guides homepage and choose “By Type” from the menu.
  • Course Guides are built to support a specific course or individual class. They can direct students to course-specific resources, including permanent links to full-text articles, recommended research databases, books and other library holdings, video tutorials, and much more. They can also be used to easily create reading lists that link directly to any items in the Library’s digital or physical collections. Explore an example of a Course Guide that DeDe Dawson, Science Librarian, has created for a Biology 301 class. To browse other Course Guides, go to the Research Guides homepage and choose “By Type” from the menu.

Research Guides are fully integrated with the Library collections and can also include online resources from outside the Library that fall within copyright permissions or can be linked to from the guides. This platform is very flexible and user-friendly, and we encourage you to make use of this newly updated resource. If you are interested in using Research Guides as a teaching tool, please get in touch with your subject librarian and let them know you want to create a Course Guide using LibGuides2. If you have suggestions or questions about your Subject Guide, please contact your subject librarian or send us your comments through Tell Us. We welcome suggestions for improving the guides and tailoring them to better serve our patrons.

Peer-to-Peer Writing Feedback: That’s what friends are for!




Peer-to-peer writing feedback is a process by which students judge other students’ written work, produce and provide feedback to them, and then, in turn, receive feedback on their own work. By feedback, I mean commentary of a formative kind: that is, students have the chance to consider and incorporate the feedback received from their peers. Peers are not assigning grades (that would be “peer assessment”), but they may be evaluating the work using a set of provided standards or a rubric. The process can be anonymous, but it does not need to be.

This fall, I will be trying out peer-to-peer writing feedback in a course I teach on leadership and professionalism. For me, this learning activity serves two related learning outcomes that I intend for students: (1) to write more clearly and concisely, and (2) to effectively provide and receive feedback (not restricted to the area of writing or to the context of peers).

I was very encouraged this week when I came upon an article by Nicol, Thomson, and Breslin (2014), in which they reported on a study of how producing peer feedback affected learning for first-year engineering students. Below, I’ve integrated and summarized some of the learning benefits that stood out to me.

Receivers of peer feedback tend to find it…

  • written in more accessible language and therefore more easily understood
  • more like dialogue than a one-way transmission and also less directive, allowing students to locate the feedback they need
  • timed to allow improvements to be made

Producers of peer feedback tend to…

  • develop a better understanding of the standards being applied, and an appreciation for the role of the summative assessor (i.e., grader)
  • compare the work of their peers to their own and benefit from the examples of other approaches to writing
  • build critical thinking capacity about both the writing of peers and their own writing

And an overarching zinger appears in the article, where the authors quote another research team: ‘Students seem to improve their writing more by giving comments than by receiving them’ (p. 104). To me this finding aligns with the oft-quoted saying “the best way to learn is to teach.”

Let me know via your own comments if you’d like to learn more about how I go about incorporating this learning activity into my course this term and how it turns out.  Wish me luck.

Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102-122.

Photograph by Susan Bens

How Do We Define Success in an Open Course?




A version of this post was originally published on Heather Ross’s blog on June 24, 2014.

In June I attended the Society for Teaching and Learning in Higher Education (STLHE) conference in Kingston, Ontario. As part of the conference I presented, along with Nancy Turner and Jaymie Koroluk (University of Ontario Institute of Technology), a poster about the Introduction to Learning Technologies (ILT) open course that the GMCTE offered earlier this year. During discussions around our poster, as well as in other sessions related to open courses, I had a number of conversations with colleagues about just what “success” is in an open course.

Completion rates are often used as measures of success by administrators and the media, but is that really a fair measurement? Open courses, whether we call them MOOCs (Massive Open Online Courses) or the TOOCs (Truly Open Online Courses) that we’re advocating at the GMCTE, aren’t like traditional face-to-face or distance courses: students don’t pay tuition, there are no prerequisites for entry, and no formal credit is given. Why do we try to measure success in open courses using the same metrics that we use for traditional courses when they are so different? (Of course, the argument can absolutely be made that rates of attrition in traditional courses shouldn’t be measures of success either.)

While I was at the conference, an article appeared in The Chronicle of Higher Education about a new paper from a study conducted jointly by researchers at Cornell University and Stanford University looking at types of engagement in “Massive Online Courses”. The authors of the study argue that there are five types of participants in open courses: Viewers (watch the videos), Solvers (complete assignments without watching videos or reading lecture notes), All-Rounders (do at least some of both), Collectors (download materials for viewing later), and Bystanders (they registered, but there’s no evidence that they did anything in the course). I think that these categories have merit and provide a more nuanced picture of participants, taking us beyond simply grouping everyone into those who complete and those who don’t.

Very few people completed all of the assignments in ILT, so if we looked at completion rates as the measure of success, then this course was a failure. If, however, we look at different metrics, another picture emerges. After the course ended (it’s a truly open course, so all of the materials are still open), we sent a survey to the 300 participants, and 15 percent completed it (yes, I know that’s a very low response rate, but it’s an open course and most people may have been ignoring my emails by the end). Of those who completed the survey, 81.3 percent said that they applied what they had learned to their own professional development and 69.6 percent said that they shared what they learned with colleagues and/or students.

Learning technologies are constantly changing, and as such, I saw it as important that the course should produce an increase in participants’ comfort and skill in using a variety of types of tools rather than expertise in the use of specific ones. A key success of the course for me was therefore the response to the survey question regarding the effect the course had on comfort level with learning technologies: 55.3 percent reported a moderate increase and 21.3 percent said they experienced a considerable increase.

Of course, the low response rate does mean we have to interpret these results with caution, but the data do add to the argument that success for these courses shouldn’t be measured by how many students do all of the work. I’m currently completing an overall program review of the course for one of my Ph.D. courses and will then be revising the course for another offering next January (watch for details about the course dates and registration to appear on Educatus in the fall). We’re also working with Ken Coates, the Canada Research Chair in Regional Innovation at the Johnson-Shoyama Graduate School of Public Policy and the Director of the International Centre for Northern Governance and Development, on an open course that he’ll be teaching early in 2015. Both courses will provide us with valuable information on what students actually do in an open course, as well as how they define success for themselves.

Problem Solving = Great! But what kind of problems are our students really learning to solve?




What learning are we really asking our students to demonstrate, and what are we saying actually matters through our assessments?

Within statistics, exams require students to apply a statistical procedure such as a t-test to questions like, “Is there a significant difference between boys and girls on self-confidence or neural activity when the mean is…?” The criterion of significance is the conventional one, the problem to solve is clear and familiar, the variables are provided, and even the values are given. Just plug them into memorized equations. In contrast, what if I were to ask, on assignments (for practice) and on the exam, questions such as presenting a news story and asking students to outline the information and statistical analyses they would need in order to take a stance? They might then have to look up prior studies to find likely values, debate whether gender is a dichotomous category or a continuous variable for their purposes, determine how to operationalize the topic, set a 1/20 or more conservative cutoff for significance, and select and apply a statistical analysis. Which assessment would better measure the learning I want my students to carry forward? Which learning would you want that A+ to represent when you are deciding whether they will be your honours or graduate student?
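
To underline how mechanical the first kind of exam question is, here is what “just plug into memorized equations” amounts to in code. This is a hypothetical sketch: the means, standard deviations, and sample sizes are invented, and the pooled degrees of freedom are a textbook simplification.

```python
# The "plug in the given values" exam problem as code: every input is handed
# to the student, including the 1/20 significance cutoff. All values invented.
import math
from scipy import stats

mean_boys, sd_boys, n_boys = 3.4, 1.1, 30
mean_girls, sd_girls, n_girls = 3.9, 1.0, 30
alpha = 1 / 20  # the conventional 0.05 criterion, given rather than chosen

t = (mean_girls - mean_boys) / math.sqrt(sd_boys**2 / n_boys
                                         + sd_girls**2 / n_girls)
df = n_boys + n_girls - 2       # simple pooled-df approximation
p = 2 * stats.t.sf(abs(t), df)  # two-tailed p value
print(f"t = {t:.2f}, p = {p:.3f}, significant: {p < alpha}")
```

The open-ended version of the question starts several steps earlier: the student must decide what to measure, find plausible values, and choose the cutoff before any of this arithmetic begins.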

Several years have passed since I heard Dr. Eric Mazur speak of changing the activity in the classroom to engage students in learning and increase their conceptual understanding of physics. His approach of peer instruction is widely shared (the video was included in an earlier blog post about participatory learning and transfer).

In the June 2014 opening plenary of the Society for Teaching and Learning in Higher Education conference in Kingston, Dr. Eric Mazur’s pursuit of improving learning remained, but his focus had shifted:

“For 20 years I have been working on approach to teaching, never realizing that assessment was directing the learning … assessment is the silent killer of learning.”

As educators, we do not teach so that students simply learn the concept, lens, or procedure for tomorrow, but for the days and weeks that follow. If delaying an exam by one day disadvantages students, or if achieving a high grade cannot predict understanding of the fundamental concepts of force, he asked, have they really learned? If our assessments only reflect and demand a low level of learning, then our students will not learn at the higher levels that we desire them to achieve. Do exams that promote cramming, or that could be answered with a quick Google search, really measure the type of transfer or retention of information that we should be aiming for?

Of the several changes that Dr. Mazur outlined to improve assessment, the one that really caused me to pause was his comment about what kind of problems we are asking students to solve.

Think of the problems typically found in your field – the ones where the outcome is desired but the procedure and path to get there are not known (e.g., design a new mechanism, identify the properties of what is before you, or write a persuasive statement). In our assessments, by contrast, as Dr. Mazur noted, the problems students are asked to solve involve applying known procedures to a set of clearly outlined information to solve for an unknown outcome.

During the plenary, he presented a series of versions of a question asking students to estimate the number of piano tuners. The first version required students to make assumptions about frequency and populations; then, to reduce students’ questions and uncertainty, the second version provided the assumptions, the third the name of the formula, and so on, until the students were simply asked to remember the formula and input numbers… moving down the levels of Bloom’s taxonomy from creating and evaluating to simple remembering.
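
The fully open version of such a question is a classic Fermi estimate. Here is a hypothetical sketch in which every assumption is the student’s to supply and defend (all numbers below are invented for illustration):

```python
# Fermi estimate: how many piano tuners serve a city? The student must
# create and evaluate the assumptions; only then does arithmetic remain.
population = 1_000_000      # assumed city population
people_per_household = 2.5  # assumed household size
piano_share = 1 / 20        # assume 5% of households own a piano
tunings_per_piano_year = 1  # assume one tuning per piano per year
tunings_per_tuner_day = 4   # assumed daily capacity of one tuner
working_days_per_year = 250

pianos = population / people_per_household * piano_share
tunings_needed = pianos * tunings_per_piano_year
tuner_capacity = tunings_per_tuner_day * working_days_per_year
print(f"Estimated tuners: {tunings_needed / tuner_capacity:.0f}")
```

Each successive version of the question deletes one of these assumptions from the student’s responsibility, until only the final arithmetic is left to perform.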

Add in the removal of the resources that I would reach for when running a statistical analysis or citing sources for a journal article, and the removal of the collaboration and consultation that my research enjoys but my teaching of research does not, and the distinction between the exam and the reality I think I am preparing students for becomes even more disparate.

The question is how pre-defined and easily remembered or repeated the “information” is that students are being asked to identify, note as missing, and connect.

Resources

Video: Asking Good Questions, Humber College
http://www.humber.ca/centreforteachingandlearning/instructional-strategies/teaching-methods/course-development-tools/blooms-taxonomy.html
Asking questions that foster problem solving based on Bloom’s taxonomy

Bloom’s Taxonomy, University of Victoria
http://www.coun.uvic.ca/learning/exams/blooms-taxonomy.html
Lists example verbs and descriptions for each competence level

Bloom’s taxonomy, www.bloomstaxonomy.org
http://www.bloomstaxonomy.org/Blooms%20Taxonomy%20questions.pdf
Question stems and example assignments

Educatus Taking a Summer Hiatus

Throughout most of the year, a new post is added to this blog at least twice per week. We understand that many of our readers, as well as much of the staff at the GMCTE, take some time off in the summer. This summer, the Educatus blog will be taking about six weeks off before returning to our usual schedule of postings in mid-August, just as we and the rest of the University of Saskatchewan community are busily preparing for the new academic year. Have a great summer. See you in August.

Students’ expectations are formed early




I have been enjoying a series of blog posts written by the acclaimed UK-based higher education researcher Professor Graham Gibbs (you can start with the first of the series here). The posts have been drawn from a comprehensive publication called 53 Powerful Ideas All Teachers Should Know About, with one idea presented on the blog each week. I was particularly struck by the post from a few weeks ago, as the ideas presented resonated with the approach of the University of Saskatchewan’s undergraduate research initiative. A key approach has been embedding such experiences in large first-year courses, which addresses Professor Gibbs’ key takeaway message: have students start as you mean them to go on. I hope you enjoy it, and perhaps sample some of Professor Gibbs’ other thought-provoking ideas!

Idea 7 – Students’ expectations are formed early

Posted on May 28, 2014 at http://thesedablog.wordpress.com/2014/05/28/53ideas-7-students-expectations-are-formed-early/, reproduced with permission of the Staff and Educational Development Association (SEDA)

Professor Graham Gibbs

What goes on in higher education must appear somewhat strange to a student of 18 who has recently left school, or even to a mature student whose educational experience involved school some while ago and maybe some ‘on the job’ training or evening classes since. Class sizes may have increased from the dozen or so they were used to in 6th form to over 100 (or even over 500). Instead of a small group of friends you got to know fairly well from years together, your fellow students will mostly be strangers who you may never get to know, and who may be different every time you start a new module. Instead of you being amongst the high achievers you may feel average or even below average. The teachers you encounter will all be new to you, and may change every semester. You may never get to know them, or in some cases even meet them outside of large classes. Whether you can ask questions, ask for help, be informal or visit their offices may not be clear. Weekly cycles of classes and small, short, tasks at school may be replaced by much longer cycles and much bigger assignments – and in some cases the first required work may not be until week 8 in the first semester. What you are supposed to do in the meantime may not be at all clear, and as the ratio of class time to study time is, at least in theory, much lower than you are used to, what you are supposed to be doing out of class may become quite an issue.

The course documentation may only list what the teacher does, not what you are supposed to do, other than phrases such as ‘background reading’ or ‘independent study’. Instead of being asked to read Chapter 6 of the textbook you might be given extended reading lists of seemingly impossible breadth and depth, some of which will be too expensive to buy, out of the library, or, even if you can get hold of them, opaque or of uncertain relevance. The volume of material ‘covered’ in lectures may appear daunting, and it may be unclear if this is meant to be merely the tip of a hidden, huge and undefined iceberg of content, or the whole iceberg. If you managed to scribble down a comprehensive set of notes, would that be enough? What an essay or a report is supposed to look like and what is good enough to pass or get a top grade may be quite different from what was expected at school, but you may be unclear in what way. Rules about plagiarism or working with other students may seem alarmingly tough yet confusing. It may all feel weird, no matter how routine it feels to teachers, but somehow you have to get used to it.

Most students of course do manage to work out a way of dealing with all this ambiguity and complexity that, if not ideal, is tolerably effective in that they do not usually fail the first assignment or the first module. But once a student has gone through this disorienting and anxiety provoking process of adjustment they are not keen to go through it again anytime soon.

In order to operate at all, new students have to make some quick guesses about what is expected and work out a modus operandi – and this is usually undertaken on their own without discussion with others. It is very easy to get this wrong. In my own first year as an undergraduate I tried to operate in a ‘week by week’ ‘small task’ way as if I was preparing for regular test questions, as I had done at the Naval College where I had crammed for A-levels alongside my naval training – and I failed several of my University first year exams that made much higher level demands than I had anticipated and that would have taken a lot more work of a very different kind than I had managed. My conception of knowledge, and what I was supposed to be doing with it, was well articulated by William Perry’s description of the first stage in his scheme of student development: “Quantitative accretion of discrete rightness”. It was not what my teachers were hoping for from me – but I didn’t understand that and I was too uncertain to do anything else. Students who are driven by fear of failure, rather than hope for success, may become loath to change the way they study in case it works even less well than what they have tried thus far. It is the high performing students who are more likely to experiment and be flexible.

Many first year courses are dominated by large class lectures, little discussion, little independence and fairly well defined learning activities and tasks (at least compared with later years) and no opportunity to discuss feedback on assignments. By the end of the first year, students may have turned into cabbages in response to this regime, with little development of independence of mind or study habits. In the second year students may be suddenly expected to work collaboratively, undertake peer assessment, undertake much bigger, longer, less well defined learning activities, deal with multiple perspectives and ambiguity, develop their own well argued positions, and so on. They may throw up their hands in despair or resist strongly.

Teachers’ best response to this phenomenon involves getting their own expectations in early and explicitly, and not changing them radically as soon as students have got used to them. If you eventually want students to work collaboratively, require group work in the first week, not the second year. If you want them to read around and pull complex material together, require it in the first week and give them plenty of time and support to do it. If you want them to establish a pattern of putting in a full working week of 40 hours then expect that in the first week, and the second week….and make it clear what those hours might be spent on, and put class time aside to discuss what it was spent on and what proved productive and what did not. If you want students to lift their sights from Chapter 1 to what the entire degree is about, have a look at some really excitingly good final year student project reports in week one, and bring the successful and confident students who wrote them into the classroom to discuss how they managed it, talking about their pattern of studying that led to getting a first and a place to do a Doctorate. In brief, get your clear and high expectations in early, with plenty of opportunity to discuss what they mean.

Students will find this alarming and amazing – but they will get used to it just as they got used to whatever you did before. It will seem equally strange, but no more so than before. The crucial issue is that they will now be getting used to the right thing.

Defining Open Access



By Jeff Martin

The Internet has transformed the ways in which academic research can be accessed. Researchers can now grant any person connected to the Internet unfettered access to their work at any time without cost. This free access is commonly called open access (OA).

Open access is a property of a research article. An OA article does not require payment from a customer (no price barriers such as subscriptions) and has reduced permissions barriers (such as most copyright and licensing restrictions). Some commentators also argue that OA is the ideal way that academic research should be published.

The four main types of open access are “green” repositories, “gold” academic journals, hybrid journals, and predatory journals. Repositories are online storage sites in which articles can be deposited, indexed and searched. Repository administrators do not conduct peer review themselves. Uploaded articles, however, typically have been reviewed elsewhere. See http://www.opendoar.org/ for a list of repositories.

Open access journals share many similarities with subscription journals. For example, articles submitted to OA journals are subject to peer review (assuming the journal administrators want to publish peer-reviewed research!). The key differences between the journal types are who pays the costs of making content accessible and the reduced permissions barriers for authors who publish in OA journals.

Free access is granted when payment comes from the “producer” side of the publishing process. Three examples of funding sources are subsidies from an author’s host institution, government subsidies, and hard-copy sales of the OA journal (online access is free). Authors are also often able to retain more copyright when publishing in OA journals compared to subscription journals. See PLoS ONE for an example of a “gold” OA journal.

Hybrid journals, on the other hand, are subscription journals that offer free access to some content. In other words, these journals use a mixed revenue model, such as subscriptions and Article Processing Charges. Examples of this model include the journal Physiological Genomics and Springer’s “Open Choice” program. For an extensive discussion of the “green”, “gold” and hybrid models, see the work of Peter Suber.

The owners of predatory journals use the “gold” journal model as a profit-making scheme. They use a variety of unethical practices. For example, academics, particularly graduate students and new researchers, are often targeted and enticed into submitting research. Manuscripts are quickly accepted for publication and a fee is then charged. Peer review is claimed to occur, despite evidence to the contrary. Some publishing academics are spammed with e-mails, whereas others are listed as journal editors without their consent.

Watch the following video for an explanation of why OA journals are good not only for researchers but also for the general public.