Aligning assessment and experiential learning

I didn’t know what to expect as I rode the elevator up the Arts tower to interview for a research assistant position with a SOTL group. I certainly didn’t expect the wave of information and Dr. McBeth’s joyful energy. She, Harold Bull, and Sandy Bonny explained the project in a unique dialect: a mix of English and their shared academic speak. I hope they didn’t catch onto my confusion when they were throwing around the term MCQ, or multiple-choice question (which, in my former profession, refers to the Medical Council exam). I realized that I had quite a lot to learn if I was going to succeed in this position. I’d need to learn their language.

The scholarship of teaching and learning cluster working group shared the project through a concept map that linked “Assessment” to different kinds of students, subjects, and teaching strategies. The concept map itself was overwhelming at first, but organizing information is in my skill set, so creating an index was a straightforward matter. Connecting that index to resources with EndNote was quite a different affair. I closed the program angrily after multiple failed attempts to format the citations the way I wanted. I had listened carefully and taken notes with Dr. McBeth, but the gap between theory and practice was large. With some perseverance, I am now able to bend the program to my will. It is a useful tool, but like a large table saw or pliers used to pull teeth, it still frightens me.

Working with the SOTL concept map, I identified the three areas, and their sub-topics, that the group is most interested in exploring:

  1. Examination/Assessment
    1. Ease of grading
    2. After experiential learning
    3. Multiple choice questions (MCQ)
  2. Type of Experience
    1. Designed
    2. Emergent
    3. Prompted reflection and relativistic reasoning
  3. Subject Permeability
    1. Alignment to common knowledge
    2. Access to affordances for self-teaching and tangential learning

Well, I might as well move my things to the library and stay there until May. These topics cover a huge swath of pedagogical research. As I began reading, though, I soon saw patterns and overlaps emerging among the topics. Designed experiences overlapped with assessments. Multiple choice questions and cognition intersected. It was clear that while my index was neatly laid out in discrete cells in Microsoft Excel, the reality of the discourse was far more fluid and messy, more accurately reflected in the hand-written topic names, lines, and stickers of the concept map.

An interesting discovery was that although I struggled at times in my methodology class in Term 1, the information and skills I learned there were useful in evaluating sources. I can ask questions and identify gaps where methodological choices aren’t outlined clearly. Being able to use my skills in a practical manner immediately after acquiring them is very exciting.

“…student views and assumptions about experiential learning and peer assessment may not align with data on actual learning.”

Currently I am focused on the topic of Examination/Assessment, which has the broadest scope of all the topics identified. Two articles, about student perception of experiential learning and peer assessment, intrigued me. They make clear that student views and assumptions about experiential learning and peer assessment may not align with data on actual learning. This resonates with all the learning I’ve been doing about subjectivity/objectivity and research ethics. Our perceptions and assumptions can be very powerful, but they shouldn’t be taken as dominant knowledge without further inquiry.

Some authors make strong claims about their findings even though their description of their methodological processes is lacking. Little, Bjork, Bjork, and Angello (2012) assert that their findings “vindicate multiple-choice tests, at least of charges regarding their use as practice tests” (p. 1342). I am hesitant to agree with their claim based on this article alone because certain methodological details aren’t addressed, such as participants’ demographics and level of study. They also changed variables in Experiment 2 (feedback and timing for answering questions; those without feedback got more time to answer) and used a pool of participants from a different region of the country (United States). The work of Gilbert, Banks, Houser, Rhodes, and Lees (2014) likewise lacks discussion of certain design choices, such as appending their interview and questionnaire items and explicating the level of supervision and mentorship (and any variation thereof) that different students received in different settings. This doesn’t necessarily mean that the authors failed to make careful and thoughtful choices, but it does mean that either the description of their research needs to be amended or further study is necessary before making definitive claims.

Conversely, VanShenkhof, Houseworth, McCord, and Lannin (2018), writing on peer evaluation, and Wilson, Yates, and Purton (2018), writing on student perception of experiential learning assessment, both describe their research designs and methodological choices in detail.

I wonder if the variability in data presentation is reflective of the varying skills of researchers as writers. Perhaps it is more reflective of the struggle of researchers to move toward an evidence-based practice in the scholarship of teaching and learning. Maybe it is both.

While I will not be creating a nest in the library and making it my primary residence, there is still a lot to read, learn, and uncover. I look forward to journeying together with you.

Summary

  • Sources must be carefully evaluated to ensure quality of research design and findings.
  • Delayed elaborate feedback produced a “small but significant improvement in learning in medical students” (Levant, Zuckert, & Peolo, 2018, p. 1002).

  • Well-designed multiple-choice practice tests with detailed feedback may facilitate recall of information pertaining to incorrect alternatives, as well as correct answers (Little, Bjork, Bjork, & Angello, 2012).

VanShenkhof, Houseworth, McCord, and Lannin (2018) have created an initial perception of peer assessment (PPA) tool for researchers who are interested in studying peer assessment in experiential learning courses. They found that positive and rich peer assessment likely occurs in certain situations:

  • With heterogeneous groups
  • In a positive learning culture (created within groups and by the instructor)
  • With clear instructions and peer assessment methodology

Wilson, Yates, and Purton (2018) found:

  • “Student understanding is not necessarily aligned with student engagement, depending on choice of assessment”: journaling seemed best at demonstrating understanding but received a low engagement score from students (p. 15).
  • Students seemed to prefer collaborative assessments, which were seen as having more learning value in addition to being more engaging (p. 14).
  • This pilot indicates that student discomfort doesn’t necessarily have a negative impact on learning (pp. 14-15).

This is the first in a series of blog posts by Lindsay Tarnowetzki. Their research assistantship is funded by and reports to the Scholarship of Teaching and Learning “Aligning assessment and experiential learning” cluster at USask.

Lindsay Tarnowetzki is a PhD student in the College of Education. They completed their Master’s degree in Communication (Media) Studies at Concordia University and their undergraduate degree in English at the University of Saskatchewan. They worked at the Clinical Learning Resource Centre at the University of Saskatchewan for three years as a Simulated Patient Educator. They are interested in narrative as it relates to social power structures. Lindsay shares a home with their brother and one spoiled cat named Peachy Keen.

 

Transparent assessment

Assessment practice is shifting away from comparing students to each other, or to grades derived from professors’ experiences and preferences. Increasingly, it focuses on comparing students to a clear learning outcome or goal for the assessment that everyone in the class knows in advance. The process of clearly articulating that goal, and what we consider good evidence of it, is called “transparent assessment.” The goal of all transparent assessment is to ensure students understand what they are trying to achieve or learn, so they can be more effective partners in that learning. Our Learning Charter includes three educator commitments related to assessment:

  • Provide a clear indication of what is expected of students in a course or learning activity, and what students can do to be successful in achieving the expected learning outcomes as defined in the course outline
  • Ensure that assessments of learning are transparent, applied consistently and are congruent with learning outcomes
  • Design tools to both assess and enable student learning

5 techniques to make your assessments more transparent:

  1. Clearly articulate the specific skills and knowledge you want to see students demonstrate right before they start learning them in each class. While it is important to put learning outcomes or objectives on a syllabus, students need our help connecting those outcomes to the specific learning they are about to do.
  2. Double-check the alignment between what you teach, your outcomes, and your assessments. Are there parts of your assessment task that are unrelated to your outcomes? Are you testing things you haven’t taught, like specific ways of thinking or presentation skills? Is too much of the assessment focused on one area relative to the time you spent teaching it? Does the test or assignment use the same language you used when you taught?
  3. Share or co-construct assessment criteria before students start work on assessments. Discuss them overtly and compare them to models and samples, until you are confident students know what “good” looks like and how to achieve it. It might cost you time in class, but it will save you far more time marking, and you’ll mark better work. Think your students understand? Ask what they are trying to demonstrate when they do the assessment. If they tell you the parts of the task (what) instead of the purpose of the task (why, how), the assessment is still not transparent to them.
  4. Use assessment tools, like checklists and rubrics, that a student can interpret without knowing what you are thinking. If the categories on your rubric are ratings like “good” or “well-developed,” a student still has to guess what you mean. Substitute descriptions that include specifics, like: “The argument is specific and illustrated through examples. The essay explains why the argument matters.”
  5. Use students as resources to increase transparency. Have them try small examples of the main skill you are looking to see, and then give each other feedback using the criteria. This ensures they read the criteria and prompts them to ask about criteria or assessment processes they don’t understand. You’ll ensure students get more early feedback without increasing your marking load.

Increased transparency is about everyone in the class working together so that students learn as much as possible and demonstrate that learning as effectively as possible. For professors, it means fewer questions and challenges about grades, and better student work to mark. When done well, it results in students better understanding the learning goals and being more invested in them.


 

Outcomes-based Assessment

Traditional forms of assessment, often norm-referenced, are increasingly mixed with outcomes-based assessment on campuses across Canada. Often, outcomes-based systems start in professional programs with accreditation standards, where it is important that all graduates meet minimum standards of competence and are not just rated in comparison to their peers. As the use of outcomes-based teaching and assessment becomes more common, people are wondering what the difference between traditional and outcomes-based assessment is.

What is outcomes-based assessment?

  • It starts with faculty members articulating what they want students to be able to do when they complete the learning. This is called an outcome, and it is different from thinking about what you will teach, because it is focused on the end result for students that you built the course for (learn more about outcomes and how to write them).
  • Outcomes-based assessment is the deliberate collection of evidence of student learning based on outcomes. It yields a mark relative to the outcomes (criterion-referenced) rather than relative to other students.

What do you do differently in an outcomes-based system?

  • Planning: You need to know clearly what level of skill and knowledge you will accept as competent or successful performance by students (instead of what mark), then plan the assessments to measure it and the instruction to teach it.
  • Assessment: You make an effort to give more feedback early and to allow more attempts. You pay more attention to the most recent evidence. A good way of thinking about this is a driver’s test. You don’t average your first practice drive into your final road test. Even if you fail your road test once, you get the score from your second attempt, not the combined score from your first and second, because the goal is to measure how competent you’ve become, rather than to average all attempts.
  • Calculating grades: In an outcomes-based system, you group and weight by outcome, not by assessment task. This means that outcome 1 might be worth 10% and outcome 2 might be worth 8% of the final grade, while an individual test might have 50% of its marks in outcome 1 and the other 50% in outcome 2. In a syllabus, you’d explain your weighting by outcome, not by how much the assignments and the final are worth (see the sketch after this list).
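To make the grade arithmetic concrete, here is a minimal sketch of outcome-weighted grading in Python. The outcome weights, assessments, and scores below are hypothetical illustrations for this post, not figures from any real course or grading system:

```python
# A minimal, hypothetical sketch of outcome-based grade calculation.
# Each assessment records (marks earned, marks available) per outcome,
# so a single test can split its marks across outcomes (here 50/50).

# Hypothetical weights: outcome 1 is 10% of the final grade, outcome 2 is 8%.
OUTCOME_WEIGHTS = {"outcome_1": 0.10, "outcome_2": 0.08}

midterm = {"outcome_1": (18, 25), "outcome_2": (20, 25)}  # marks split across two outcomes
essay = {"outcome_1": (8, 10)}                            # contributes to outcome 1 only
assessments = [midterm, essay]

def outcome_percentages(assessments):
    """Pool earned and available marks per outcome across all assessments."""
    earned, available = {}, {}
    for assessment in assessments:
        for outcome, (got, out_of) in assessment.items():
            earned[outcome] = earned.get(outcome, 0) + got
            available[outcome] = available.get(outcome, 0) + out_of
    return {o: earned[o] / available[o] for o in earned}

def weighted_contribution(assessments, weights):
    """Sum each outcome's percentage times its weight toward the final grade."""
    percents = outcome_percentages(assessments)
    return sum(percents[o] * weights[o] for o in percents)

for outcome, pct in outcome_percentages(assessments).items():
    print(f"{outcome}: {pct:.0%}")  # e.g., outcome_1: 74%, outcome_2: 80%
print(f"Toward final grade: {weighted_contribution(assessments, OUTCOME_WEIGHTS):.1%}")
```

A “most recent evidence” policy, like the driver’s test analogy above, would simply replace an outcome’s earlier entries with the latest attempt before pooling, rather than averaging all attempts together.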

Why would you bother to do this?

  • Students know exactly what you are trying to have them learn, and are more able to take responsibility for learning it.  As a result, they learn more deeply, and they more accurately identify what specifically they need to work harder on.
  • Because students understand more about what you want them to know and be able to do, you mark fewer weak assignments, which saves you time and frustration.
  • It allows you to clearly understand how your students are improving over time on each of the outcomes you set. That makes it easy to modify your course to make it more effective.
  • Outcomes-based assessment is helpful for seeing how your course contributes to student success in the program, and it gives a clear indication of how well students learn the specific prerequisite knowledge and skills you are trying to teach them.

Why might you choose not to do this?

Aside from the effort involved in trying something new, even if it might improve student learning, there are two common reasons to be concerned:

  1. The outcomes might be too focused on some government or institutional agenda, limiting your choice
  2. Students might focus on the outcomes instead of the learning

In response to #1, if you set your own outcomes, it is unlikely you would pick something you thought was unworthy or too low-level, but there are times when an external body (as in accreditation) may set an outcome you would not have chosen, to ensure a progression of learning over courses and years. With regard to #2, I think we all want students to focus on the learning, but if they don’t know the outcome, they typically focus on the assessments instead, asking questions like “will this be on the test?”, which indicates they are unclear on the learning and focused only on the grade.

What goals of the university is this connected to?

Outcomes-based assessment is directly connected to 3 key commitments we have as educators on campus, according to our Learning Charter:

  • Ensure that assessments of learning are transparent, applied consistently and are congruent with learning outcomes
  • Design tools to both assess and enable student learning
  • Learn about advances in effective pedagogies/andragogies

Read the other chats related to Our Learning Charter to learn about other educator commitments.

Top Hat: How is it being used at the U of S?

The University of Saskatchewan has a continuing commitment to a technology-enhanced learning environment for students and in January 2016 acquired a campus-wide license for the Top Hat student response system. Top Hat is a software-based student response system, incorporating a “bring-your-own-device” solution, that is available at no direct cost to instructors and students. The primary goal of Top Hat is to enhance the teaching and learning experience for both instructors and students by bringing more engagement and interaction into traditional passive lecture-style learning approaches.

Who we are

We are a research team at the University of Saskatchewan interested in student response systems, with a specific focus on Top Hat: their pedagogical effectiveness and the best teaching practices for these systems. Our team is organized under the Scholarship of Teaching and Learning (SoTL) cluster titled “Technology-Enhanced Learning: An Assessment of Student Response Systems in the University Classroom.”

  • Carleigh Brady, PhD, Instructor, Dept. of English
  • Soo Kim, PhD, B.Sc.PT, Professor, School of Rehabilitation Science
  • Landon Baillie, PhD, Professor, Dept. of Anatomy, Physiology and Pharmacology
  • Raymond Spiteri, PhD, Professor, Dept. of Computer Science
  • Neil Chilton, PhD, Professor, Dept. of Biology
  • Katherina Lebedeva, PhD, Instructor, Dept. of Anatomy, Physiology and Pharmacology

In March of 2018, we invited all individuals with a Top Hat instructor account at the University of Saskatchewan to participate in a survey about the use of Top Hat on campus and to share their experiences.

Results

 A total of 58 instructors responded to the survey. We found the majority of instructors using Top Hat at the University of Saskatchewan:

  • incorporate it in class to assess student concept understanding, test student recall, and share student perspectives (opinions, experience, and demographics)
  • use it for asking questions, creating discussions, and monitoring attendance
  • prefer “multiple choice question,” “word answer,” and “click on target” formats
  • think that the greatest advantages of Top Hat are: increased participation and engagement, student assessment, instant feedback from students, and the system’s ease of use/functionality
  • consider Top Hat’s biggest disadvantages to be the time investment for software set-up and grading, design issues, and technical issues (e.g., room connectivity)

In summary, we found that most instructors using Top Hat found it effective in facilitating a collaborative teaching and learning environment. Top Hat encourages students to participate actively during lectures by asking questions and polling student responses online. Despite some disadvantages, Top Hat is still preferred over clickers for its increased functionality (various question formats, interactive functions, and use of graphics), as well as its instant feedback and results polling.

However, further studies should be conducted to systematically evaluate the effect of Top Hat on student academic performance.


Is Your Instruction Designed to Produce Student Learning?

Lecture is an efficient way to transmit information, especially in large classes. We inevitably feel there is a lot of content to cover, since the gap between what novice students know and what expert professors know is large. However, large, uninterrupted blocks of lecture are a very inefficient way to learn, because they are passive. Learners experience cognitive overload and stop processing, have trouble paying attention, and remember some ideas that they then struggle to apply or connect conceptually. All of these occur even with strong learners, and even with instructors who provide exceptionally focused, clear delivery of information. The mind just learns more when it is actively engaged in thinking.

As a method of direct instruction, lecture is focused on a well-organized, clear presentation of information.  Its cousin, explicit instruction, is much better aligned with what we know about how the brain learns, because it is active.

Explicit instruction:

  • students are guided through the learning process with clear statements about the purpose of learning the new skill
  • teachers give clear explanations and demonstrations
  • students engage in supported practice with feedback at intervals throughout the entire class, not just at the end

The key distinction is that while there are periods of telling information, students are asked to demonstrate the skill they are learning and practice it with feedback. As a result, they are much more likely to remember, make fewer errors, stay focused, and remain motivated. They are also more likely to describe the learning as important and to describe how to keep improving. There is clear alignment between the goal of having students understand more deeply and the activities they are asked to participate in to support their learning.

Why does all this matter?

When we set the goals for what our students will be able to know and do by the end of class (outcomes), we need to think carefully about how remembering information is essential, but not sufficient, for learning. We want students to be able to correctly apply the new information in a process, to make decisions and informed judgments, and to use new information in reasoned arguments. That means our classes need to help students develop these competencies and practice them with feedback. Our outcomes, our instruction, and how we assess all need to align and work cohesively together to support effective learning processes. If they don’t, we become Professor Dancelot of video fame, with good intentions and little actual learning.

Learn more:

  • Interesting Book: Donald A. Bligh, What’s the Use of Lectures? (San Francisco: Jossey-Bass, 2000), pp. 252, 11.
  • Oft cited scholarly history: C. Bane, “The Lecture vs. the Class-Discussion Method of College Teaching,” School and Society 21 (1925); B.S. Bloom, “Thought-Processes in Lectures and Discussions,” Journal of General Education 7 (1953).

It’s All About Your Outcomes


Structurally, outcomes are obligations. You need outcomes for your course syllabus, and your program as a whole has some form of outcomes. From a teaching and learning perspective, however, an outcome is much more than just a hoop. It’s at the heart of why you’d bother to teach the course you do. Each outcome (and you don’t need that many) describes a skill, disposition, or set of complex knowledge that is essential for your students to demonstrate to be successful in the course.

What does a good outcome look like?

You can read more about definitions of outcomes (what a student will do) and objectives (what an educator will teach) in another post from Gwenna Moss, but sometimes good examples can help clarify a definition.  A good outcome satisfies key criteria, including:

  • It starts with a specific, rigorous verb that reflects the type of thinking, attitude, or understanding you need students to demonstrate
  • Each outcome is worthy enough that you’ll spend a good chunk of the course returning to it and building your students’ strength with it
  • The outcome is written in language students understand and can explain in their own words

A bad outcome: Understand the definition of a just society

This outcome is not good because there are too many ways the word “understand” could be interpreted. Good evidence of an outcome should be easy for all students to understand in the same way. Also, this outcome might be satisfied with a definition from the professor’s PowerPoint, so it isn’t worthy or long-lasting enough. It is easy to make the mistake of basically describing content in your outcome, rather than what your students will demonstrate.

A much better outcome: Justify arguments about social justice using precise, accurate examples.

This is better because it specifies the type of thinking and skill students will need to demonstrate (justify an argument) and how (using precise, accurate examples). Social justice is a complex concept that the course will spend a long time on, deepening students’ conceptual knowledge over time. Also, the skill of building an argument about social justice will be built upon multiple times in the course, sometimes in class discussions, sometimes in an essay, and sometimes in an examination. A student reads the outcome and knows the course will help them refine their skills in building arguments, and that the content will relate to social justice.

How do I write good outcomes?

  1. List the key concepts, skills, and dispositions/attitudes you’ll want in the course.  Check to ensure you aren’t just listing content.
  2. Group related things together until you have a smaller number of bigger things.
  3. Try writing statements describing things you’d accept as evidence that a student actually had the understanding, skill, or disposition you are trying to teach.
  4. Look at the statements you’ve written, and ensure they each start with an active, specific verb. Try using this list to ensure you are asking for rigorous thinking, not something students can just memorize and forget.
  5. Get someone who is not an expert to read each outcome and tell you what it says, just to make sure it is clear enough.
  6. Double check that each outcome represents something you’ll want to see from students multiple times in the class. If you wouldn’t want to grade things related to it more than once, it is not important enough to be an outcome.

 

Promoting Academic Integrity: Some design questions for instructors



Here are some propositions about students’ academic integrity that I’ve been working with:

  1. Students are more likely to do their work honestly when they see the personal value in what is to be learned.
  2. Students are more likely to do their work honestly when they believe the assessment produces actual evidence of what they have learned.
  3. Students are more likely to do their work honestly when they’ve had the chance for practice and feedback.
  4. Students are more likely to do their work honestly when they know the rules and expect them to be enforced.

Designing assessments for academic integrity is much more than tight invigilation processes and tools like Turnitin and SafeAssign (thankfully). There is much that instructors can do to set students on honest learning paths when they design and teach their courses.   Below, I offer some prompting questions instructors can ask themselves when designing course materials, assessments, and learning activities that relate to the four propositions above.

“See-the-Value” Questions for Instructors:

  • How can I convey/demonstrate the value of what students learn in my course?
  • How can I share my enthusiasm for learning this and the value I place on it?
  • How can I connect students to the benefits this learning brings for them individually, for their families or communities, for society or the world?
  • How can I provide opportunities for students to follow their individual interests and values in the context of this course?

“Evidence” Questions for Instructors:

  • What kind of evidence does this assessment provide that students have learned what I wanted them to learn?
  • What other forms of evidence could I use to determine this?
  • What alternatives could I offer students to show me what they have learned?
  • How can I make explicit to students that an assessment is transferrable to other contexts?

“Practice and Feedback” Questions for Instructors:

  • What do students need to be able to self-assess their progress before grades are at stake?
  • How can I provide early feedback so that students still have the opportunity to improve?
  • How can I stage larger assignments with feedback so that students have time to improve (and avoid last minute temptations)?
  • How could I best equip students to provide feedback to each other?

“Rules” Questions for Instructors:

  • What are my rules for my assessments within the academic integrity policy framework?
  • How can I clearly explain both the assessments and the rules so that students know how to best proceed?
  • What are some common misconceptions/errors that I can address early on?
  • How can I help students learn how to follow the rules, especially when it involves technical components like a new citation or referencing protocol?
  • How can I show students that I am committed to enforce my own rules?

We have a workshop coming up at the GMCTL on November 14 that will explore assessment practices that promote academic integrity. Please consider registering.

When Performing Gets in the Way of Improving



I encountered the following video in the spring and have been sharing it with faculty and groups with an interest in questions of assessment.  I think it lays a useful foundation for discussions on (1) what it takes to master skills and knowledge, (2) the value of lower stakes practice, (3) the necessity of formative feedback for learning, and (4) recognition that moments of “performance” or assessment for grades are also needed.

Additionally, this video supports the thinking behind a core element of the Instructional Skills Workshop, an internationally recognized workshop and certification offered regularly at our Centre. In that workshop, participants practice facilitating a 10-minute “mini lesson,” which is so valuable for improving instructional skills. Here’s a link to more information about that workshop.

Happy to discuss the learning zone and the performance zone with Educatus readers!

First-time Thoughts on a Student Blog Assignment



By Yin Liu, Associate Professor, Department of English
History and Future of the Book Blog

Why I did it

In 2016-2017 I taught, for the first time, a full-year (6 credit unit) English course, “History and Future of the Book,” which is one of our Foundations courses – that is, it is one of a few 200-level courses required for our majors. As in all our courses, there is a substantial writing component, usually in the form of essay assignments. I decided to complicate my life further by trying out a type of student assignment also new to me: a student-written course blog.

I had been thinking about using a student blog assignment ever since I heard a talk given by Daniel Paul O’Donnell (U Lethbridge) about using blogs in his own teaching. The point that struck me most forcibly about Dan’s argument was his observation that students wrote better when they were blogging. Since one of my goals in teaching writing is to help students write better, I thought I should give the idea a try.

Setting it up

From the outset, I had to make some fundamental decisions about how the blog assignment should work within the course. It became one of the writing assignments, taking the place of a regular (2000-word) essay: the blog post itself was to be 500-1000 words long and accompanied by a commentary (read by me only) in which students discussed the process of writing the blog post, especially the challenges they encountered and the solutions they developed. The commentary gave students a chance to reflect on and thus to learn from their own writing processes; it also helped me to evaluate the effectiveness of the assignment. The blog was made publicly available on the Web, but students could opt out of having their own work posted, although it still needed to be submitted to me for grading. Thus students also needed to supply signed permission to have their work published on the blog.

Heather Ross of the GMCTL guided me to the U of S blog service (words.usask.ca) and gave me valuable advice about permission forms and other such matters. The ICT people set me up and increased my storage quota, I fiddled with the WordPress templates, and we were good to go.

Results and learning

Each student wrote one post for the course blog, and thus the assignment was like a regular term paper except that (a) it was not an essay, and (b) it was published to the Web. Acting as the blog editor, I suggested changes to students’ first submissions, which they could incorporate into the final, published version if they wished; but I resisted the temptation to tinker with their final versions, which were published warts and all. I also used the course blog to post a series of Writing Tips for the class.

Students did, for the most part, write noticeably better in their blog posts than in their regular essay assignments. More was at stake in the blog posts; students knew that their work would be read not solely by their professor, but also by their peers and possibly by others outside the class. The informal nature of a blog also allowed students to write, in many cases, with a more genuine voice than for an essay assignment, and thus more effectively. This less formulaic, less familiar genre compelled students to rethink the basics of writing: ideas, information, audience, organisation, clarity. There was a higher chance that they would write about something that truly interested them, and quite a few expressed enthusiasm about the assignment. Students could also read and learn from the work of others in the class. The experiment was a success, and I would do it again.

Our course blog, History and Future of the Book, can be found at https://words.usask.ca/historyofthebook/. Some of the students’ posts have been removed at their request, but most remain, and you are welcome to browse through the Archive and read them – the best of them are excellent.

Open Pedagogy: Using OER to change how we teach




There has been a considerable increase in the number of courses assigning open rather than commercial textbooks at the University of Saskatchewan. During the 2014-2015 academic year, there were approximately 300 students enrolled in three courses using open textbooks. This year more than 2,650 students are enrolled in at least 20 courses that have open textbooks as the assigned resource. Since the university started promoting and tracking the use of open textbooks in 2014, this use has resulted in students at the U of S saving close to $400,000 on textbook costs.

The benefits of using open textbooks and other open educational resources (OER) instead of commercial texts aren’t limited to cost savings for students, however. The lack of copyright restrictions on OER allows instructors to modify these materials to meet the specific needs of their courses. For example, the Edwards School of Business recently released an adaptation of an open book from the United States that not only saved their students money, but also meets the learning needs of the students better than the original edition. University Success will be used by more than 475 students in the course, as well as by students at other institutions, and other instructors will be free to make their own changes to this resource to better meet local needs.

Just as instructors are able to adapt existing OER, so are students. Learners can become contributors to existing open materials, or use OER to create new learning materials for themselves, their peers, and future learners (and instructors).

John Kleefeld, a professor in the U of S College of Law, created an assignment that offers students in his course The Art of Judgement the chance to improve Wikipedia articles on one of the topics covered in the course. Professor Kleefeld and one of those students, Katelyn Rattray, wrote an article on the design of the assignment and the experience, which was published in the Journal of Legal Education.

Robin DeRosa from Plymouth State University in New Hampshire created an open textbook for her early American literature course by having undergraduate research assistants find appropriate public domain content. As a core assignment in the class, students then wrote introductions for each reading based on their research about the authors and time periods. While she served as the editor, students did much of the research and compiling of content for this new open textbook. This assignment replaced a traditional paper that would have been seen only by the instructor and the student, and likely discarded by the student soon after marking. Read more about this process on her blog.

Moving away from private “throw away” assignments can shift student activity away from knowledge consumption and toward developing skills in knowledge creation. In the examples above and many others, this has led to increased student engagement and improved learning outcomes, and has freed instructors from reading the same assignments repeatedly.

If you would like to learn more about open pedagogy, the GMCTE is offering a session on November 8 as part of our Introduction to Learning Technologies series.  You can also contact the GMCTE directly with any questions or to schedule a consultation.