Aligning assessment and experiential learning

I didn’t know what to expect as I rode the elevator up the Arts tower to interview for a research assistant position with a SOTL group. I certainly didn’t expect the wave of information and Dr. McBeth’s joyful energy. She, Harold Bull, and Sandy Bonny explained the project in a unique dialect: a mix of English and their shared academic speak. I hope they didn’t catch on to my confusion when they were throwing around the term MCQ, or multiple-choice question (in my former profession, MCQ referred to the Medical Council exam). I realized that I had quite a lot to learn if I was going to succeed in this position. I’d need to learn their language.

The scholarship of teaching and learning cluster working group shared the project through a concept map that linked “Assessment” to different kinds of students, subjects, and teaching strategies. The concept map itself was overwhelming at first, but organizing information is in my skill set, so creating an index was a straightforward matter. Connecting that index to resources with EndNote was quite a different affair. I closed the program angrily after multiple failed attempts to format the citations in the way I wanted. I had listened carefully and taken notes with Dr. McBeth, but the gap between theory and practice was large. With some perseverance, I am now able to bend the program to my will. It is a useful tool, but like a large table saw or pliers used to pull teeth, it still frightens me.

Working with the SOTL concept map, I identified the three areas, and their sub-topics, that the group is most interested in exploring:

  1. Examination/Assessment
    1. Ease of grading
    2. After experiential learning
    3. Multiple choice questions (MCQ)
  2. Type of Experience
    1. Designed
    2. Emergent
    3. Prompted reflection and relativistic reasoning
  3. Subject Permeability
    1. Alignment to common knowledge
    2. Access to affordances for self-teaching and tangential learning

Well, I might as well move my things to the library and stay there until May. These topics cover a huge swath of pedagogical research. As I began reading, though, I soon saw patterns and overlaps emerging among the topics. Designed experiences overlapped with assessments. Multiple choice questions and cognition intersected. It was clear that while my index was neatly laid out in discrete cells in Microsoft Excel, the reality of the discourse was far more fluid and messy, more accurately reflected in the hand-written topic names, lines, and stickers of the concept map.

An interesting thing I discovered was that although I struggled at times in my methodology class in Term 1, the information and skills I learned there were useful in evaluating sources. I can ask questions and identify gaps where methodological choices aren’t outlined clearly. To be able to use my skills in a practical manner immediately after acquiring them is very exciting.

“…student views and assumptions about experiential learning and peer assessment may not align with data on actual learning.”

Currently I am focused on the topic of Examination/Assessment, which has the broadest scope of all topics identified. Two articles about student perception of experiential learning and peer assessment were intriguing to me. They make clear that student views and assumptions about experiential learning and peer assessment may not align with data on actual learning. This resonates with all the learning I’ve been doing about subjectivity/objectivity and research ethics. Our perceptions and assumptions can be very powerful but they shouldn’t be taken as dominant knowledge without further inquiry.

Some authors make strong claims about their findings even though their description of their methodological processes is lacking. Little, Bjork, Bjork, and Angello (2012) assert that their findings “vindicate multiple-choice tests, at least of charges regarding their use as practice tests” (p. 1342). I am hesitant to agree completely with their claim based on this article alone because certain methodological details aren’t addressed, such as participants’ demographics and level of study. They also changed variables in Experiment 2 (feedback and timing for answering questions; those without feedback got more time to answer) and used a pool of participants from another area of the country (United States). The work of Gilbert, Banks, Houser, Rhodes, and Lees (2014) also lacks discussion of certain design choices, such as including their interview and questionnaire items in an appendix and explicating the level of supervision and mentorship (and any variation thereof) that different students received in different settings. This doesn’t necessarily mean that the authors didn’t make careful and thoughtful choices, but it does mean that either the description of their research needs to be amended or further study is necessary before making definitive claims.

Conversely, the work of VanShenkhof, Houseworth, McCord, and Lannin (2018) on peer evaluation and of Wilson, Yates, and Purton (2018) on student perception of experiential learning assessment both offered very detailed descriptions of their research design and the methodological choices made.

I wonder if the variability in data presentation is reflective of the varying skills of researchers as writers. Perhaps it is more reflective of the struggle of researchers to move toward an evidence-based practice in the scholarship of teaching and learning. Maybe it is both.

While I will not be creating a nest in the library and making it my primary residence, there is still a lot to read, learn, and uncover. I look forward to journeying together with you.

Summary

· Sources must be carefully evaluated to ensure quality of research design and findings.

· Delayed elaborate feedback produced a “small but significant improvement in learning in medical students” (Levant, Zuckert, & Peolo, 2018, p. 1002).

· Well-designed multiple-choice practice tests with detailed feedback may facilitate recall of information pertaining to incorrect alternatives, as well as correct answers (Little, Bjork, Bjork, & Angello, 2012).

VanShenkhof, Houseworth, McCord, and Lannin (2018) created an initial perception of peer assessment (PPA) tool for researchers who are interested in studying peer assessment in experiential learning courses. They found that positive and rich peer assessment likely occurs in certain situations:

  • With heterogeneous groups
  • In a positive learning culture (created within groups and by the instructor)
  • With clear instructions and a clear peer assessment methodology

Wilson, Yates, and Purton (2018) found:

  • “Student understanding is not necessarily aligned with student engagement, depending on choice of assessment”: journaling seemed best at demonstrating understanding but received a low engagement score from students (p. 15).
  • Students seemed to prefer collaborative assessments, which were seen as having more learning value in addition to being more engaging (p. 14).
  • This pilot indicates that student discomfort doesn’t necessarily have a negative impact on learning (pp. 14-15).

This is the first in a series of blog posts by Lindsay Tarnowetzki. Their research assistantship is funded by and reports to the Scholarship of Teaching & Learning “Aligning assessment and experiential learning” cluster at USask.

Lindsay Tarnowetzki is a PhD student in the College of Education. They completed their Master’s degree in Communication (Media) Studies at Concordia University and their undergraduate degree in English at the University of Saskatchewan. They worked at the Clinical Learning Resource Centre at the University of Saskatchewan for three years as a Simulated Patient Educator. They are interested in narrative as it relates to social power structures. Lindsay shares a home with their brother and one spoiled cat named Peachy Keen.

 

Internationalization of Teaching & Learning: Featured Instructor

Photo provided by Dr Lucy R. Hinnie

Dr Lucy R. Hinnie
Postdoctoral Fellow

Lucy is a postdoctoral fellow in the department of English and completed her PhD at the University of Edinburgh, Scotland. In her work, she looks at written text through the frame of intersectionality, interrogating the accepted ‘canon’ of white male scholars and looking to find relevance to every student, regardless of their background.

She has a desire to strengthen her teaching practice and do better by all of her students.

She took the internationalization short course because she has a desire to strengthen her teaching practice and do better by all of her students, especially those who face difficulties in what is perceived to be standard classroom situations. For her, successful internationalization will look like an enthusiastic and multicultural student body who can engage in safe learning spaces with cultural sensitivity and awareness.

Connect with Lucy to learn more: lucyrhinnie.co.uk

Internationalization of Teaching and Learning: Featured Instructor

Jocelyn Peltier-Huntley, M. Sc., P. Eng.

Photo provided by Jocelyn Peltier-Huntley

Lecturer, College of Engineering

Jocelyn is a professional mechanical engineer. Her research focuses on understanding the gender gap in the Canadian mining industry. At a personal level, she wants to see positive change happen to move towards equity within our society. As an instructor of engineering design and communications, and as a professional, she feels it is vitally important to know how to understand and work with stakeholders who may be from a variety of backgrounds and have different ways of knowing.

Successful internationalization allows for all people to be fully included and empowered…

She took the internationalization short course to improve her teaching practices and also to help inform how she frames the messaging of her research on gender equity. Jocelyn believes that successful internationalization allows for all people to be fully included and empowered to fully participate and achieve their full potential in their education, careers, and lives.

Connect with Jocelyn to learn more: https://engineering.usask.ca/people/sopd/Peltier-Huntley,Jocelyn.php

Online Homework Systems: How to Protect Student Privacy and Keep Materials Costs Down

Online homework systems (OHS) are online tools that can grade questions assigned to students as homework, track formative practice, or assess examinations. Students can receive immediate feedback on the activities they complete using an OHS, giving them a clear picture of how they are progressing and where they may need to do some additional work.

During the 2018-2019 academic year, approximately 70 courses included the use of online homework systems (OHS) that were registered through the U of S Bookstore. Several additional courses made use of these tools by sending students directly to publisher websites. OHS are used extensively throughout the STEM disciplines, but are also used in other fields, including psychology and business. While they have benefits for both instructors and students, there are concerns that both should be aware of.

Concerns

The Cost of OHS for Students

In about half of the courses using OHS purchased through the bookstore, students are required to make that purchase as part of their grade in the course. Often, the OHS is bundled with a textbook, making it impossible for students to purchase a used textbook or even use one they purchased in a previous year, as when repeating a course. Publishers are also moving toward a model where bundled textbooks are only available online and for a limited time.

While instructors at the U of S have saved students almost $2 million in the past five years with a steady increase in the use of open textbooks, the costs associated with homework systems could potentially counter those savings, something publishers are aiming for with methods such as bundling textbooks and OHS.

Student Purchases Directly From Publishers

As I noted above, in some courses students are sent directly to publisher websites to purchase access to an OHS. This requires them to have a credit card and opens them up to privacy breaches of their financial information as well as any additional information that publishers require them to submit. This risk can be mitigated by instructors having students purchase required access codes through the Bookstore.

Limitations for Instructors

While OHS can make grading easier for instructors and TAs, especially in large classes, there are also limitations. In many cases, instructors have little control over the content being assessed. In addition, some systems, especially if students are going directly through the publisher’s website, won’t integrate with the grade book in Blackboard.

Supports For Instructors

The U of S Bookstore Wants to Help Keep Costs Low for Students

I’m often asked how the Bookstore feels about the growth of open textbooks, and the questioner is surprised when I say “they love it; they’re the ones who offer the print-on-demand service for open textbooks.” The U of S Bookstore wants to speak with instructors about how to lower materials costs for students. Most classes use traditional course materials, which are made by publishers but ordered by instructors. In its role as supplier of these materials, the Bookstore is limited in what it can do and needs instructor help in reducing costs for students. To learn more about how to reduce costs for students, please visit their website here or reach out.

ICT Can Help Protect Student Privacy

In order to protect student privacy, the U of S has established formal terms of use agreements with a number of software and textbook publishers addressing required privacy, legal, security, and business requirements. ICT has a webpage listing the publishers who have signed agreements with the U of S. 

Help and Funding For Alternatives to Commercial OHS

The GMCTL can assist you in finding alternatives to requiring students to purchase access to commercial OHS in three ways:

  • Advise instructors on alternative forms of assessment
  • Discuss with instructors options for non-commercial OHS
  • Provide funding for the development of assessments such as test-banks

For more information about these options, please contact Heather M. Ross at the GMCTL.

Guidelines for the Use of OHS at USask

The U of S is currently developing guidelines for the use of OHS at the university. These guidelines will take into account the needs of both students and instructors.

Transparent assessment

Assessment practice is shifting away from comparing students to each other, or grading derived from professors’ experiences and preferences. Increasingly, it is focused on comparing students to a clear learning outcome or goal for the assessment that everyone in the class knows in advance. The process of clearly articulating that goal, and what we consider good evidence of it, is called “transparent assessment.” The goal of all transparent assessment is to ensure students understand what they are trying to achieve or learn, so they can be more effective partners in that learning. Our Learning Charter has three educator commitments related to assessment:

  • Provide a clear indication of what is expected of students in a course or learning activity, and what students can do to be successful in achieving the expected learning outcomes as defined in the course outline
  • Ensure that assessments of learning are transparent, applied consistently and are congruent with learning outcomes
  • Design tools to both assess and enable student learning

5 techniques to make your assessments more transparent:

  1. In each class, clearly articulate the specific skills and knowledge you want to see students demonstrate right before they start learning. While it is important to put learning outcomes or objectives on a syllabus, students need our help connecting those outcomes to the specific learning they are about to do.
  2. Double-check your alignment between what you teach, your outcomes, and your assessments. Are there some parts of your assessment task that are unrelated to your outcomes? Are you testing things you haven’t taught, like specific ways of thinking or presentation skills? Is too much of the assessment focused on one area relative to the time you spent teaching it? Does the test or assignment use the same language you used when you taught?
  3. Share or co-construct assessment criteria before students start work on assessments. Discuss them overtly and compare them to models and samples until you are confident students know what “good” looks like, and how to achieve it. It might cost you time in class, but it will save you a lot more time marking, and you’ll mark better work. Think your students understand? Ask what they are trying to demonstrate when they do the assessment. If they tell you the parts of the task (what) instead of the purpose of the task (why, how), the assessment is still not transparent to them.
  4. Use assessment tools, like checklists and rubrics, that a student can interpret without understanding what you are thinking.  If the categories on your rubric are ratings like “good” or “well-developed” a student still has to guess what you mean.  Substitute descriptions that include specifics like: “The argument is specific and illustrated through examples. The essay explains why the argument matters.”
  5. Use students as resources to increase transparency. Have them try small examples of the main skill you are looking to see, and then give each other feedback using the criteria. This will ensure they read the criteria and prompt them to ask about criteria or assessment processes they don’t understand. You’ll ensure students get more early feedback without increasing your marking load.

Increased transparency is about everyone in the class working together to have students learn as much as possible and demonstrate that learning as effectively as possible. For professors, it means fewer grade challenges and better student work to mark. When done well, it results in students better understanding the learning goals and being more invested in them.


All aligned – Instruction

In higher education, we have our students do all the hardest learning by themselves.  As academics, our greatest strength is expertise, but we routinely select passive instructional strategies that have our students mostly listening to lectures in our classes and doing their learning later.  Choosing passive listening robs us of the opportunity to provide the nuance and clarification that learners need while they learn. This post focuses on selecting the right type of instructional approaches to have our students actively learning the most important and challenging things they will need.

Relationship to our Learning Charter: There are two educator commitments in our Learning Charter related to our instructional approaches to learning tasks:

  • Be aware of the range of instructional methods and assessment strategies, and select and utilize teaching methods that are effective in helping students achieve the learning outcome of a course or learning activity
  • Ensure that content is current, accurate, relevant to learning outcomes/objectives, representative of the knowledge and skills being taught and appropriate to the position of the learning experience within a program of study

Aligning the type of learning and your outcome: The type of learning you want your students to do dictates your instructional approach. If the task is to recall factual information, but not to be able to use it in any way, lecture is actually a very effective way to communicate that information. Students will still need to rehearse (memorize) it by studying in order to learn, and, sadly, will often forget much of it six months out. In addition, the most useful things taught by an expert are rarely basic facts. They are skills, concepts, and refined understandings, which novice students learn most effectively while actively engaging in learning facilitated by an expert. When we intersperse passive teaching with the right type of active learning given our outcomes, students are much more likely to learn the most challenging things we have to teach.

Choosing the right strategy:

  1. Determine the type of learning you want students to do (not just the content you want to cover) by writing or using a good learning outcome.
  2. Select an appropriate active approach, and intermix it with your passive approaches to increase the amount of student learning.
Type of learning, with matched instructional approaches:

Knowledge: factual information like terms, classifications, and theorists
· Passive: Tell students about the knowledge (lecture, video, reading)
· Active: Have students use the facts in meaningful ways to learn them (mind-mapping, listing, drill and practice, sorting/drag and drop)

Conceptual: ideas understood well enough to apply in new situations to assess or evaluate, like the concept of a successful argument or the concept of balance
· Passive: Read a complex explanation, hear someone describe the concept
· Active: Classify or sort parts of the concept using criteria, refine an example of the concept, find errors, render judgement, construct an example of the concept, compare personal understanding to an example or rubric, reflect on growth of conceptual understanding over time

Process (cognitive): use a series of mental steps to accomplish a task, like solving for x
Process (psycho-motor or physical): use a series of physical steps with the right degree of acuity, like a neat set of the correct stitches
· Passive: Observe someone do the steps
· Active: Try to do the steps, put the steps in order, find errors in someone else doing the steps, predict what will happen if the steps are done wrong, reflect on personal success in completing the steps

Skill: combining multiple types of learning to accomplish a goal, for example identifying the critical parts of a complex problem, choosing the order to work in, and solving the problem correctly
· Passive: Hear about or see someone else using the skill
· Active: Try the skill in context (experiential learning) and reflect on success, complete a simulation, generate a decision-making tree or matrix, construct an argument on the implications of the application of the skill by someone else, provide feedback to another person by comparing their use of the skill to criteria

Learn more:

Read the other blogs in this sequence about constructive alignment:

Read the other chats related to Our Learning Charter to learn about other educator commitments.

How do I internationalize my course?

Self-reflection

Step 1: Know my position and privilege. Who am I as a teacher? (This idea isn’t new, check out this article from 1958: Teacher, Know Thyself)

Step 2: Does the way I design my course plan for access and diversity?

Step 3: Do I want to “add-on”, “infuse”, or “transform” my course through internationalization?

Some direction

If you are working on step 3, there is an excellent resource of teaching tips here: Strategies for Course Internationalization. Centre for Teaching Excellence, University of Waterloo.

A simple way to start internationalization is to add assigned readings from international perspectives. This can be a way to start conversations and look for similarities and differences in findings. Even the writing and presentation structure might reflect cultural differences.

Next, take a look at your course outcomes – are students expected to develop or use intercultural competencies? How might the next version of your course highlight internationalized or global community skills?

Further along this journey, it’s time to look at evaluation. Inclusive assessment should include students using a metacognitive process to track their development. If that sentence doesn’t make sense on first reading, try this: a student needs to be able to know what they know and how they know it at any stage of learning. If they are just beginning, they should be able to identify that, recognize when they are building knowledge/skills/attitudes, and ultimately know when they’ve mastered or achieved the outcome of the learning. When students are involved in the assessment process, they are demonstrating choice, responsibility, and reflection. These are all attributes of inclusive learning, which is fundamental to internationalization.

Here is another list of tips and tricks to start internationalizing your course.

This post is part of a series in internationalization. You can follow along here.

Come say hi! We’re at the Gwenna Moss Centre for Teaching and Learning. We can help individually or direct you to one of our workshops to meet your needs.

 

Outcomes-based Assessment

Traditional forms of assessment, often norm-referenced, are increasingly mixed with outcomes-based assessment on campuses across Canada. Often, outcomes-based systems start in professional programs with accreditation standards, where it is important that all graduates meet minimum standards of competence and are not just rated in comparison to their peers. As the use of outcomes-based teaching and assessment becomes more common, people are wondering what the difference between traditional and outcomes-based assessment is.

What is outcomes-based assessment?

  • It starts with faculty members articulating what they want students to be able to do when they complete the learning. This is called an outcome, and it is different from thinking about what you will teach because it is focused on the end result for students that you built the course for (learn more about outcomes and how to write them)
  • Outcomes-based assessment is the deliberate collection of evidence of student learning based on outcomes. It yields a mark relative to the outcomes (criterion referenced) rather than other students.

What do you do differently in an outcomes-based system?

  • Planning: You need to clearly know what level of skills and knowledge you will accept as competent or successful performance by students (instead of what mark), then plan the assessments to measure it and the instruction to teach it.
  • Assessment: You make an effort to give more feedback early, and give more attempts. You pay more attention to the most recent evidence. A good way of thinking about this is a driver’s test. You don’t average in the first time you practiced with your final road test. Even if you fail your road test once, you get the score from your second attempt, not the combined score from your first and second, because the goal is to measure how competent you’ve become, rather than average in all attempts.
  • Calculating grades: In an outcomes-based system, you group and weight by outcome, not by assessment task. This means that outcome 1 might be 10% and outcome 2 could be 8% of the final grade. An individual test might have 50% of its marks in outcome 1 and another 50% in outcome 2. In a syllabus, you’d explain your weighting by outcome, not by how much the assignments and final are worth.
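To make the arithmetic concrete, here is a minimal sketch of the outcome-based grade roll-up described above. It is purely illustrative: the outcome names, assessment names, marks, and weights are all invented, and a real system might weight recent evidence more heavily rather than averaging.

```python
# Hypothetical sketch: each assessment contributes marks to one or more
# outcomes; the final grade weights each outcome, not each task.

# Marks earned per (assessment, outcome), as fractions of full marks
scores = {
    ("test_1", "outcome_1"): 0.80,   # half of test 1 assesses outcome 1
    ("test_1", "outcome_2"): 0.60,   # the other half assesses outcome 2
    ("project", "outcome_1"): 0.90,
    ("project", "outcome_2"): 0.70,
}

# Course weighting stated per outcome (e.g., outcome 1 = 10% of the grade)
outcome_weights = {"outcome_1": 0.10, "outcome_2": 0.08}

def final_contribution(scores, outcome_weights):
    """Average each outcome's evidence, then apply that outcome's weight."""
    by_outcome = {}
    for (_, outcome), mark in scores.items():
        by_outcome.setdefault(outcome, []).append(mark)
    return {
        outcome: sum(marks) / len(marks) * outcome_weights[outcome]
        for outcome, marks in by_outcome.items()
    }

print(final_contribution(scores, outcome_weights))
```

A system that emphasizes the most recent evidence, as in the driver’s test example, would replace the simple average with one weighted toward later attempts.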

Why would you bother to do this?

  • Students know exactly what you are trying to have them learn, and are more able to take responsibility for learning it.  As a result, they learn more deeply, and they more accurately identify what specifically they need to work harder on.
  • Because students understand more about what you want them to know and be able to do, you mark fewer weak assignments, which saves you time and frustration
  • It allows you to clearly understand how your students are improving over time on each of the outcomes you set.  That makes it easy to modify your course to make it more effective
  • Outcomes-based assessment is helpful for seeing how your course contributes to student success in the program, and gives a clear indication of how well students learn specific pre-requisite knowledge and skills you are trying to teach them

Why might you choose not do this?

Aside from the effort involved in trying something new, even if it might improve student learning, there are two common reasons to be concerned:

  1. The outcomes might be too focused on some government or institutional agenda, limiting your choice
  2. Students might focus on the outcomes instead of the learning

In response to #1, if you set your own outcomes, it is unlikely you would pick something you thought was unworthy or too low-level, but there are times when an external body (as in accreditation) may set an outcome you would not have chosen, in order to ensure a progression of learning over courses and years. With regards to #2, I think we all want students to focus on the learning, but if they don’t know the outcome, they typically focus on the assessments instead, asking questions like “will this be on the test?”, which indicates they are unclear on the learning and just focused on the grade.

What goals of the university is this connected to?

Outcomes-based assessment is directly connected to three key commitments we have as educators on campus, according to our Learning Charter:

  • Ensure that assessments of learning are transparent, applied consistently and are congruent with learning outcomes
  • Design tools to both assess and enable student learning
  • Learn about advances in effective pedagogies/andragogies

Read the other chats related to Our Learning Charter to learn about other educator commitments.

From Modelling to Designing Intercultural Curricula

You now know that you have pretty decent intercultural teaching capacities.

You have continued to develop an awareness of your own identity and are modelling perspective-taking. Students in your course have the opportunity to interact with different worldviews because you know that makes them smarter. You actively create opportunities to build relationships between ‘others’ and can recognize barriers to student participation – you’ve essentially mastered using your intercultural capacity to inform teaching practices. So now you must be wondering, “What’s next? How can I further internationalize in my course?”  No fear, you are not alone. Dimitrov & Haque (2016) have some suggestions for “curriculum design competencies”.

“Effective instructors are able to critically evaluate the curriculum and create learning materials that transcend the limitations of monocultural disciplinary paradigms, scaffold student learning so students have a chance to master intercultural skills relevant to their discipline, and design assessments that allow students to demonstrate learning in a variety of ways.” – Dimitrov, N., & Haque, A. (2016). Intercultural teaching competence: A multi-disciplinary model for instructor reflection. Intercultural Education, 27(5), 437–456. https://doi.org/10.1080/14675986.2016.1240502

Key questions to ask yourself on your internationalization journey:

  • Does my course syllabus have a specific learning outcome where a student is asked to demonstrate specific knowledge, skills, or attitudes of a global or international design?
  • Do all the authors of my selected articles look or sound like me and if so, why – and can I change this?
  • Are students asked to take different perspectives in assessed work (work that is evaluated for marks)?
  • Do students have any choice in their assessment? Are different communication styles encouraged?
  • Does my course allow students the opportunity to develop a more robust disciplinary identity aligned with their cultural or personal identity?

If answering these questions leaves you with more questions, it’s likely a good time for a conversation with the Gwenna Moss Centre for Teaching and Learning. We can help individually or direct you to one of our workshops to meet your needs.

How Might Intercultural Capacity inform our Teaching?

Once we develop the capacity for intercultural competence, we can start to infuse the associated skills into our teaching practice. This can take many forms, but all the elements connect to the group of knowledge and skills we associate with facilitation. Pedagogy is the study of leading learners, and facilitators make a process easier. So, facilitation is the process we use to make the learning possible. In adult education, we know that our learners come with valuable prior knowledge, skill, and experience. We can draw on these to enhance the learning experience for both instructor-facilitator and student.

How do we start facilitating?

As an instructor, or facilitator, you may wish to try some of these summarized strategies suggested by Dimitrov & Haque (2017):

When can we use it?

Using your intercultural capacity can happen any time you are interacting with someone else. Everyone has inherent uniqueness that together creates diversity. As our ‘village’ grows, so does our diversity.

“…more and more of us do not live in closed circles of like-minded, similarly raised people. Think of the last few gatherings you attended – a work meeting, a class, a trade show. Chances are, you sat next to and talked with people from places other than where you’re from, people with different cultural norms, people of different races and religions and histories. And chances are, therefore, that you sat next to people who do practice etiquette – but etiquette different from yours, and perhaps in conflict with it on certain points.”

Parker, Priya (2018). The art of gathering: How we meet and why it matters. New York: Riverhead Books.

Why do we care?

Figure 1 Depiction of instructor trying to introduce too many new strategies at once. Illustration by author.

We care about using good facilitation because we want our learners to achieve the desired outcomes of the course in the most efficient and effective way possible. This means using strategies we know will allow learners to thrive. As instructors, we want to leverage learners’ pre-existing knowledge, skills, and attitudes to make the new learning accessible and within reach. If the learning curve is too steep, learners may just fall off. Even as the instructor/facilitator, try to keep a growth mindset: we are all working to get along and want to be successful in our relationships. A positive disposition and honesty about one’s own positionality and areas for growth will go a long way!

If you’re looking for more help with developing your intercultural capacity, please reach out to the Gwenna Moss Centre for Teaching and Learning. We can help individually or direct you to one of our workshops or short courses that can meet your needs.

NB: If you tried the text verification link above, hemingwayapp.com, you may be interested to know that this article is at a grade 10 reading level and 9 of the 39 sentences are “very hard to read”. Ideally, public text should be at a grade 9 level 🙂