Assessment Equity and Alignment with Experiential Learning

When I met with Sandy and Harold, I was stressed. I was worried that I was falling behind. Having come from a very busy workplace with many competing deadlines and defined work hours, I find that starting a PhD program and managing my time independently is a huge challenge. Most days feel chaotic, and I’m often overwhelmed. Being a student has given me space, mentally and emotionally, to think and to focus on my health. But this “room to think” can also be a dangerous thing. Sometimes hours, even days, slip by in an unfocused haze of meandering reading if I’m not careful. This skill of balancing time and energy is the true test of graduate school.

The chaos is exacerbated by the scope of my SOTL research. I often feel that I’m lost in a forest and blind to the connections, discourses, and secret paths within the literature. What has helped is staying in the meadow of multiple-choice questions. As I read more, I am beginning to create a complex picture of this learning and assessment modality. But before I made this decision, I wandered, as you’ll see.

I was getting tired of assessment, so I experimented with other search terms to see what I could find. I found Breunig’s article, which brings together experiential education and critical pedagogy, both of which are challenging to implement in practice. Breunig’s discussion of their own practice is very helpful for those interested in using experiential learning and developing a student-directed classroom. Breunig claims that one of the aims of experiential education is the development of a more just world (p. 107). I was intrigued by this because I didn’t necessarily agree (although I believe this should be the goal of education in general). The paper they referenced when making this assertion brought me to Itin. Itin’s article attempts to differentiate between experiential learning and experiential education. This paper is a good foundation for instructors who want to explore the philosophy of experiential education.

After this foray into more theoretical papers, I decided to hold off on reading any more until I had spoken to Sandy and Harold. I wasn’t sure how far the group wanted me to go when it came to philosophy and theory. This was a good decision, because the group agreed that they want to see more practical research and studies than theory.

The other articles can be summarized as follows:

  • Bowen discusses a college mathematics program at Haskell Indian Nations University. Of particular interest to mathematics educators is the handful of experiential activities Bowen provides. Most of the activities are not suitable for large or mega class sizes (although adaptation is possible). Others, such as narrative word problems or the Problem of the Week, may be useful with larger classes.
  • Butler, Karpicke, and Roediger clearly describe the methodology used. Results of this study indicate that “delayed feedback produced better long-term retention than immediate feedback” (p. 279). What is interesting is that “there was no difference between the two types of feedback” (answer-until-correct and standard correct/incorrect) (p. 279).
  • Clandinin and Connelly are two of the originators of narrative inquiry, and their book offers researchers new to the methodology a guide to follow. From epistemological concerns to the nuts and bolts of how to actually conduct a narrative inquiry, this handbook is a wonderful starting point for those interested in the methodology.
  • Ernst and Steinhauser suggest that the P300 and early frontal positivity “are related to two different stages of learning. The P300 reflects a fast learning process based on working memory processes. In contrast, the frontal positivity reflects an attentional orienting response that precedes slower learning of correct response information” (p. 334).
  • Koretsky, Brooks, and Higgins very clearly outline their research design and methodology. Their findings “suggest that asking students for written explanations helps their thinking and learning, and we encourage instructors to solicit written explanations when they use multiple-choice concept questions” (p. 1761).

 

References

Bowen, C. (2010). Indians can do math. In P. Boyer (Ed.), Ancient wisdom, modern science: The integration of Native knowledge in math and science at tribally controlled colleges and universities (pp. 43-62). Salish Kootenai College Press.

Breunig, M. (2005). Turning experiential education and critical pedagogy theory into praxis. Journal of Experiential Education, 28(2), 106-122.

Butler, A. C., Karpicke, J. D., & Roediger, H. L., III. (2007). The effect of type and timing of feedback on learning from multiple-choice tests. Journal of Experimental Psychology: Applied, 13(4), 273-281.

Clandinin, D. J. & Connelly, F. M. (2000). Narrative inquiry: Experience and story in qualitative research. Jossey-Bass.

Ernst, B., & Steinhauser, M. (2012). Feedback-related brain activity predicts learning from feedback in multiple-choice testing. Cognitive, Affective, & Behavioral Neuroscience, 12(2), 323-336. doi:10.3758/s13415-012-0087-9

Itin, C. M. (1999). Reasserting the philosophy of experiential education as a vehicle for change in the 21st century. The Journal of Experiential Education, 22(2), 91-98.

Koretsky, M. D., Brooks, B. J., & Higgins, A. Z. (2016). Written justifications to multiple-choice concept questions during active learning in class. International Journal of Science Education, 38(11), 1747-1765. doi:10.1080/09500693.2016.1214303

This is part of a series of blog posts by Lindsay Tarnowetzki. Their research assistantship is funded by and reports to the Scholarship of Teaching & Learning Aligning assessment and experiential learning cluster at USask.

Lindsay Tarnowetzki is a PhD student in the College of Education. They completed their Master’s degree at Concordia University in Communication (Media) Studies and Undergraduate degree in English at the University of Saskatchewan. They worked at the Clinical Learning Resource Centre at the University of Saskatchewan for three years as a Simulated Patient Educator. They are interested in narrative and as it relates to social power structures. Lindsay shares a home with their brother and their cats Peachy Keen and MacKenzie.

Image provided by Lindsay.

Aligning assessment and experiential learning

I didn’t know what to expect as I rode the elevator up the Arts tower to interview for a research assistant position with a SOTL group. I certainly didn’t expect the wave of information and Dr. McBeth’s joyful energy. She, Harold Bull, and Sandy Bonny explained the project in a unique dialect: a mix of English and their shared academic speak. I hope they didn’t catch onto my confusion when they threw around the term MCQ, or multiple-choice question (which, in my former profession, referred to the Medical Council exam). I realized that I had quite a lot to learn if I was going to succeed in this position. I’d need to learn their language.

The scholarship of teaching and learning cluster working group shared the project through a concept map that linked “Assessment” to different kinds of students, subjects, and teaching strategies. The concept map itself was overwhelming at first, but organizing information is in my skill set, so creating an index was a straightforward matter. Connecting that index to resources with EndNote was quite a different affair. I closed the program angrily after multiple failed attempts to format the citations the way I wanted. I had listened carefully and taken notes with Dr. McBeth, but the gap between theory and practice was large. With some perseverance, I am now able to bend the program to my will. It is a useful tool, but like a large table saw or pliers used to pull teeth, it still frightens me.

Working with the SOTL concept map, I identified the three areas, and their sub-topics, that the group is most interested in exploring:

  1. Examination/Assessment
    1. Ease of grading
    2. After experiential learning
    3. Multiple choice questions (MCQ)
  2. Type of Experience
    1. Designed
    2. Emergent
    3. Prompted reflection and relativistic reasoning
  3. Subject Permeability
    1. Alignment to common knowledge
    2. Access to affordances for self-teaching and tangential learning

Well, I might as well move my things to the library and stay there until May. These topics cover a huge swath of pedagogical research. As I began reading, though, I soon saw patterns and overlaps emerging among the topics. Designed experiences overlapped with assessments. Multiple-choice questions and cognition intersected. It was clear that while my index was neatly laid out in discrete cells in Microsoft Excel, the reality of the discourse was far more fluid and messy, more accurately reflected in the hand-written topic names, lines, and stickers of the concept map.

An interesting thing I discovered was that although I struggled at times in my methodology class in Term 1, the information and skills I learned there were useful in evaluating sources. I can ask questions and identify gaps where methodological choices aren’t outlined clearly. To be able to use my skills in a practical manner immediately after acquiring them is very exciting.

“…student views and assumptions about experiential learning and peer assessment may not align with data on actual learning.”

Currently I am focused on the topic of Examination/Assessment, which has the broadest scope of all topics identified. Two articles about student perception of experiential learning and peer assessment were intriguing to me. They make clear that student views and assumptions about experiential learning and peer assessment may not align with data on actual learning. This resonates with all the learning I’ve been doing about subjectivity/objectivity and research ethics. Our perceptions and assumptions can be very powerful but they shouldn’t be taken as dominant knowledge without further inquiry.

Some authors make strong claims about their findings even though the description of their methodological processes is lacking. Little, Bjork, Bjork, and Angello (2012) assert that their findings “vindicate multiple-choice tests, at least of charges regarding their use as practice tests” (p. 1342). I am hesitant to agree completely with this claim on the basis of this article alone because certain methodological details aren’t addressed, such as participants’ demographics and level of study. The authors also changed variables in Experiment 2 (feedback and timing for answering questions: those without feedback got more time to answer) and used a pool of participants from another area of the country (United States). The work of Gilbert, Banks, Houser, Rhodes, and Lees (2014) is also missing discussion of certain design choices, such as appending their interview and questionnaire items and explicating the level of supervision and mentorship (and any variation thereof) that different students received in different settings. This doesn’t necessarily mean that the authors didn’t make careful and thoughtful choices, but rather that either the description of their research needs to be amended or further study is necessary before definitive claims can be made.

Conversely, the work of VanShenkhof, Houseworth, McCord, and Lannin (2018) on peer evaluation and of Wilson, Yates, and Purton (2018) on student perception of experiential learning assessment is very detailed in describing the research design and the methodological choices made.

I wonder if the variability in data presentation is reflective of the varying skills of researchers as writers. Perhaps it is more reflective of the struggle of researchers to move toward an evidence-based practice in the scholarship of teaching and learning. Maybe it is both.

While I will not be creating a nest in the library and making it my primary residence, there is still a lot to read, learn, and uncover. I look forward to journeying together with you.

Summary

  • Sources must be carefully evaluated to ensure quality of research design and findings.
  • Delayed elaborate feedback produced a “small but significant improvement in learning in medical students” (Levant, Zuckert, & Peolo, 2018, p. 1002).

  • Well-designed multiple-choice practice tests with detailed feedback may facilitate recall of information pertaining to incorrect alternatives, as well as correct answers (Little et al., 2012).

VanShenkhof, Houseworth, McCord, and Lannin (2018) have created an initial perception of peer assessment (PPA) tool for researchers interested in studying peer assessment in experiential learning courses. They found that positive and rich peer assessment likely occurs in certain situations:

  • With heterogeneous groups
  • In a positive learning culture (created within groups and by the instructor)
  • With clear instructions and a clear peer assessment methodology

Wilson, Yates, and Purton (2018) found:

  • “Student understanding is not necessarily aligned with student engagement, depending on choice of assessment”: journaling seemed best at demonstrating understanding but received a low engagement score from students (p. 15).
  • Students seemed to prefer collaborative assessments, which were seen as having more learning value in addition to being more engaging (p. 14).
  • This pilot indicates that student discomfort doesn’t necessarily have a negative impact on learning (pp. 14-15).

This is the first in a series of blog posts by Lindsay Tarnowetzki. Their research assistantship is funded by and reports to the Scholarship of Teaching & Learning Aligning assessment and experiential learning cluster at USask.

Lindsay Tarnowetzki is a PhD student in the College of Education. They completed their Master’s degree at Concordia University in Communication (Media) Studies and Undergraduate degree in English at the University of Saskatchewan. They worked at the Clinical Learning Resource Centre at the University of Saskatchewan for three years as a Simulated Patient Educator. They are interested in narrative and as it relates to social power structures. Lindsay shares a home with their brother and one spoiled cat named Peachy Keen.

 

What works with the study of teaching and learning?

The Gwenna Moss Centre for Teaching and Learning (GMCTL) offers grants for groups of faculty investigating what works in teaching and learning practice. For most of us, the process of teaching causes us to question, asking ourselves things like:

  • Why did that class work so well?
  • Why did students struggle so much with a particular question on a test?
  • What can I do to help with the increase in anxiety my students are reporting?

Gary Poole tells us it is important to trust ourselves that these are great questions, and to work to explore them. We need to consult the research in case there is already a well-established answer, we need to start small, and we need to work with others.

The GMCTL offers cluster grants to groups of faculty to help them do all three. However, many faculty still aren’t sure how to proceed with research into teaching specifically, so the process can feel daunting. Dr. Nicola Simmons from Brock University recently shared a set of videos that demystify the process, and this one with Gary Poole is a great, simple introduction to what works in the scholarship of teaching and learning (SoTL).

If you are interested in learning more about studying your teaching practice, or want support working on a SoTL project with colleagues, contact GMCTL.

Top Hat: How is it being used at the U of S?

The University of Saskatchewan has a continuing commitment to a technology-enhanced learning environment for students and in January 2016 acquired a campus-wide license for the Top Hat student response system. Top Hat is a software-based student response system, incorporating a “bring-your-own-device” solution, that is available at no direct cost to instructors and students. The primary goal of Top Hat is to enhance the teaching and learning experience for both instructors and students by bringing more engagement and interaction into traditional passive lecture-style learning approaches.

Who we are

We are a research team at the University of Saskatchewan interested in student response systems, with a specific focus on Top Hat, their pedagogical effectiveness, and the best teaching practices for these systems. Our team is organized under the Scholarship of Teaching and Learning (SoTL) cluster titled “Technology-Enhanced Learning: An Assessment of Student Response Systems in the University Classroom.”

  • Carleigh Brady, PhD, Instructor, Dept. of English
  • Soo Kim, PhD, B.Sc.PT, Professor, School of Rehabilitation Science
  • Landon Baillie, PhD, Professor, Dept. of Anatomy, Physiology and Pharmacology
  • Raymond Spiteri, PhD, Professor, Dept. of Computer Science
  • Neil Chilton, PhD, Professor, Dept. of Biology
  • Katherina Lebedeva, PhD, Instructor, Dept. of Anatomy, Physiology and Pharmacology

In March of 2018, we invited all individuals with a Top Hat instructor account at the University of Saskatchewan to participate in a survey about the use of Top Hat on campus and to share their experiences.

Results

A total of 58 instructors responded to the survey. We found that the majority of instructors using Top Hat at the University of Saskatchewan:

  • incorporate it in class to assess student concept understanding, test student recall, and share student perspectives (opinions, experience, and demographics)
  • use it for asking questions, creating discussions, and monitoring attendance
  • prefer “multiple choice question,” “word answer,” and “click on target” formats
  • think that the greatest advantages of Top Hat are: increased participation and engagement, student assessment, instant feedback from students, and the system’s ease of use/functionality
  • consider Top Hat’s biggest disadvantages to be the time investment for software set-up and grading, design issues, and technical issues (e.g., room connectivity)

In summary, we found that most instructors using Top Hat found it effective in facilitating a collaborative teaching and learning environment. Top Hat encourages students to participate actively during lectures by asking questions and polling student responses online. Despite some disadvantages, Top Hat is still preferred over clickers for its increased functionality (various question formats, interactive functions, and use of graphics), as well as its instant feedback and results polling.

However, further studies should be conducted to systematically evaluate the effect of Top Hat on student academic performance.


Creating Time for Intellectual Creation: Deep Work and Maker Time




The familiar challenge:
We are six weeks into summer, and in the pile on our desk, about mid-way down, is that proposal, paper, or course redesign for which there has yet to be time.

Each week offers 40+ hours, yet we can barely string together two hours of continuous, focused work time. How can this be?

What’s going on:
We have time, but how we use it changes the quality of that time for worse or for better. Just as fractures weaken the structural integrity of a beam, or aesthetics transform an object into art, time’s productivity is transformed by how we use it.

Within computer science and programming there is a distinction between maker time and manager (meeting) time. The first involves chunks of time in which one can focus on conceptualizing and working through the depth of a design without surfacing to respond or shift to other topics. Managerial time, conversely, is broken up and additive. One more meeting in a day full of meetings does not have the same cost as a meeting in the middle of a block of maker time.

[Image: a pair of glasses focusing on the word “focus” in a dictionary. Photo credit: Mark Hunter, CC-BY]

At a recent SoTL writing retreat, a faculty member commented that it was the first time they had read a full article uninterrupted. It can even take a few hours to settle into the comfortable realization that the knock will not come and the (turned-off) email will not ding. By then, having become reacquainted with the project, work starts to build, and more is accomplished in a day or two than in the weeks before.

This call to create space in our week (or at least in our summer) for focused work is the drumbeat of Cal Newport’s 2016 book Deep Work: Rules for Focused Success in a Distracted World. He posits that creating space and scheduling time for deep intellectual work is the act of taking a chunk of time and maximizing its productivity. In a recent podcast, he speaks of both what is gained and how focus is weakened by quick novel stimuli like email, online browsing, and social media. It is a shift from talking and thinking about a project to conceptualizing and creating.

Synthesis, analysis, conceptualization, and creation (including writing) are higher levels of learning and thinking that require deeper focus, executive cognitive functioning, more active engagement, and reflection.

So what can I do now?
The GMCTL has four Deep Work opportunities that provide focused time to work, with expertise and facilitation as you need it.

  1. SoTL Writing Retreats (July 20, 21 & 22) with uninterrupted time and on-hand consultations for faculty, instructors, or staff working toward publishing on a teaching and learning research project. Select a morning, come for all three days, or choose any combination in between. Register, and we will confirm your choice of days and times via email.
  2. Time for Course Design & Prep – In addition to the course design institute offered each year, we offer Drop-in mornings or afternoons. July 26 9:30am – 12:00pm is the next Consultations & Coffee Drop-In Morning. Bring what you have so far including any questions and ideas, and stay for time to work. *Registration is not required.
  3. Book “Deep Work” space for you, your course team or SoTL project colleagues to meet. We can arrange for a space and customize the level of facilitation and support you want from a few minutes to more. Contact GMCTL.
  4. Get unstuck – book a consultation to think through a course idea/challenge, a research design for a SoTL project, or an upcoming term. Contact GMCTL.

Over time, identify what works for you. Each person’s approach to Deep Work, like preferences in morning beverages, is unique, though with shared key ingredients and conditions to percolate or steep.

What is the science behind your course design madness?



By Fred Phillips, Professor, Baxter Scholar, Edwards School of Business

As we begin another year, students are encountering some of the course design decisions made by their instructors. Some will be introduced to “flipped classrooms”, where students prepare by reading/viewing/responding to a learning prompt before it is formally taken up in class. Others will encounter new learning tools, such as adaptive reading systems that embed interactive questions within reading materials with the goal of assessing each student’s comprehension so that new topics can be delivered the moment he or she is ready to comprehend them.

Just as instructors have questions about these approaches and tools, students are likely to be curious about whether there is a method to our course design madness. To help explain the underlying learning science, I have made a few videos that describe relevant (and fun) studies that lend support to these pedagogies. Each video focuses on a particular question that students (and possibly instructors) are likely to have about elements of their courses. Each video describes two or three relevant studies in just enough depth to convey the gist of how they were designed and what they discovered. And, in the spirit of a TED Talk, they are each less than 10 minutes in length.

My thought with these videos is that instructors can send each link to students at the moment they expect their students will be asking the particular question, or they can provide them en masse. My hope is that the videos will help students appreciate why our courses might be designed as they are. And, if we’re really lucky, the videos will inspire our campus community to learn more about the scholarship of teaching and learning. Enjoy!

1. Why do we have so many tests? (7 min 24 sec)

  • Students often wonder why I plan frequent quizzes and exams throughout the term.

2. Why attempt to answer questions before “being taught”? (7 min 22 sec)

  • Students often think that there is no benefit in attempting to answer questions before they are formally taught the content.

3. Is easier and more convenient learning better? (8 min 54 sec)

  • Is it more effective for students to have a cramming study session or to study throughout the term? When practicing, should students group questions of similar type or mix different question types? Does use of analogies help or hinder student learning?

How Do We Define Success in an Open Course?




A version of this post was originally published on Heather Ross’s blog on June 24, 2014.

In June I attended the Society for Teaching and Learning in Higher Education (STLHE) conference in Kingston, Ontario. As part of the conference I presented, along with Nancy Turner and Jaymie Koroluk (University of Ontario Institute of Technology), a poster about the Introduction to Learning Technologies (ILT) open course that the GMCTE offered earlier this year. During discussions around our poster, as well as in other sessions related to open courses, I had a number of conversations with colleagues about just what “success” is in an open course.

Completion rates are often used as measures of success by administrators and the media, but is that really a fair measurement? Open courses, whether we call them MOOCs (Massive Open Online Courses) or the TOOCs (Truly Open Online Courses) that we’re advocating at the GMCTE, aren’t like traditional face-to-face or distance courses: students don’t pay tuition, there are no prerequisites for entry, and no formal credit is given. Why do we try to measure success in open courses using the same metrics that we use for traditional courses when they are so different? (Of course, the argument can absolutely be made that rates of attrition in traditional courses shouldn’t be measures of success either.)

While I was at the conference, an article appeared in The Chronicle of Higher Education about a new paper from a study conducted jointly by researchers at Cornell University and Stanford University looking at types of engagement in “Massive Online Courses”. The authors of the study argue that there are five types of participants in open courses: Viewers (watch the videos), Solvers (complete assignments without watching videos or reading lecture notes), All-Rounders (do at least some of both), Collectors (download materials for viewing later), and Bystanders (they registered, but there’s no evidence that they did anything in the course). I think these categories have merit and provide a more nuanced picture of participants, taking us beyond simply grouping everyone into those who complete and those who don’t.

Very few people completed all of the assignments in ILT, so if we looked at completion rates as the measure of success, then this course was a failure. If, however, we look at different metrics, another picture emerges. After the course ended (it’s a truly open course, so all of the materials are still open), we sent a survey to the 300 participants, and 15 percent completed it (yes, I know that’s a very low response rate, but it’s an open course and most people may have been ignoring my emails by the end). Of those who completed the survey, 81.3 percent said that they applied what they had learned to their own professional development, and 69.6 percent said that they shared what they learned with colleagues and/or students.
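As a back-of-the-envelope check, the reported rates can be converted into approximate respondent counts. This is an illustrative sketch only: the figures come from the post above, but the assumption that each percentage applies cleanly to a whole-person count (and the rounding) is mine.

```python
# Convert the reported survey rates into approximate head counts.
# Numbers are from the post; rounding to whole people is an assumption.

invited = 300            # participants who received the survey
response_rate = 0.15     # 15 percent completed it

respondents = invited * response_rate   # about 45 people
applied = respondents * 0.813           # applied learning to their own PD
shared = respondents * 0.696            # shared with colleagues and/or students

print(f"respondents \u2248 {respondents:.0f}")
print(f"applied to own professional development \u2248 {applied:.0f}")
print(f"shared with others \u2248 {shared:.0f}")
```

Even rough counts like these make the trade-off concrete: a "failed" completion rate can still correspond to dozens of people reporting real professional-development value.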

Learning technologies are constantly changing, and as such, I saw it as important that there be an increase in participant comfort and skill in using a variety of types of tools, rather than the development of expertise in specific ones. A key success of the course for me was therefore the response to the survey question regarding the effect the course had on participants’ comfort level with learning technologies: 55.3 percent reported a moderate increase and 21.3 percent said they experienced a considerable increase.

Of course the low rate of response does mean we have to interpret these results with caution, but the data does add to the argument that success for these courses shouldn’t be measured by how many students do all of the work. I’m currently completing an overall program review of the course for one of my Ph.D. courses and will then be revising the course for another offering next January (watch for details about the course dates and registration to appear on Educatus in the Fall). We’re also working with Ken Coates, the Canada Research Chair in Regional Innovation at the Johnson-Shoyama Graduate School of Public Policy and the Director of the International Centre for Northern Governance and Development on an open course that he’ll be teaching early in 2015. Both courses will provide us with valuable information on what students actually do in an open course, as well as how they define success for themselves.

Students’ expectations are formed early




I have been enjoying a series of blog posts written by the acclaimed UK-based higher education researcher Professor Graham Gibbs (you can start with the first of the series here). The blogs have been drawn from a comprehensive publication called 53 Powerful Ideas All Teachers Should Know About, with one idea presented on the blog each week. I was particularly struck by the blog post from a few weeks ago, as the ideas presented resonated with the approach of the University of Saskatchewan’s undergraduate research initiative. A key approach has been embedding such experiences in large first-year courses, which addresses Professor Gibbs’ key take-away message: have students start as you mean them to go on. I hope you enjoy it, and perhaps sample some of Professor Gibbs’ other thought-provoking ideas!

Idea 7 – Students’ expectations are formed early

Posted on May 28, 2014 at http://thesedablog.wordpress.com/2014/05/28/53ideas-7-students-expectations-are-formed-early/, reproduced with permission of the Staff and Educational Development Association (SEDA)

Professor Graham Gibbs

What goes on in higher education must appear somewhat strange to a student of 18 who has recently left school, or even to a mature student whose educational experience involved school some while ago and maybe some ‘on the job’ training or evening classes since. Class sizes may have increased from the dozen or so they were used to in 6th form to over 100 (or even over 500). Instead of a small group of friends you got to know fairly well from years together, your fellow students will mostly be strangers who you may never get to know, and who may be different every time you start a new module. Instead of you being amongst the high achievers you may feel average or even below average. The teachers you encounter will all be new to you, and may change every semester. You may never get to know them, or in some cases even meet them outside of large classes. Whether you can ask questions, ask for help, be informal or visit their offices may not be clear. Weekly cycles of classes and small, short tasks at school may be replaced by much longer cycles and much bigger assignments – and in some cases the first required work may not be until week 8 in the first semester. What you are supposed to do in the meantime may not be at all clear, and as the ratio of class time to study time is, at least in theory, much lower than you are used to, what you are supposed to be doing out of class may become quite an issue.

The course documentation may only list what the teacher does, not what you are supposed to do, other than phrases such as ‘background reading’ or ‘independent study’. Instead of being asked to read Chapter 6 of the textbook you might be given extended reading lists of seemingly impossible breadth and depth, some of which will be too expensive to buy, out of the library, or, even if you can get hold of them, opaque or of uncertain relevance. The volume of material ‘covered’ in lectures may appear daunting, and it may be unclear if this is meant to be merely the tip of a hidden, huge and undefined iceberg of content, or the whole iceberg. If you managed to scribble down a comprehensive set of notes, would that be enough? What an essay or a report is supposed to look like and what is good enough to pass or get a top grade may be quite different from what was expected at school, but you may be unclear in what way. Rules about plagiarism or working with other students may seem alarmingly tough yet confusing. It may all feel weird, no matter how routine it feels to teachers, but somehow you have to get used to it.

Most students of course do manage to work out a way of dealing with all this ambiguity and complexity that, if not ideal, is tolerably effective in that they do not usually fail the first assignment or the first module. But once a student has gone through this disorienting and anxiety provoking process of adjustment they are not keen to go through it again anytime soon.

In order to operate at all, new students have to make some quick guesses about what is expected and work out a modus operandi – and this is usually undertaken on their own without discussion with others. It is very easy to get this wrong. In my own first year as an undergraduate I tried to operate in a ‘week by week’, ‘small task’ way, as if I was preparing for regular test questions, as I had done at the Naval College where I had crammed for A-levels alongside my naval training – and I failed several of my University first year exams, which made much higher level demands than I had anticipated and would have taken a lot more work of a very different kind than I had managed. My conception of knowledge, and what I was supposed to be doing with it, was well articulated by William Perry’s description of the first stage in his scheme of student development: “Quantitative accretion of discrete rightness”. It was not what my teachers were hoping for from me – but I didn’t understand that and I was too uncertain to do anything else. Students who are driven by fear of failure, rather than hope for success, may become loath to change the way they study in case it works even less well than what they have tried thus far. It is the high performing students who are more likely to experiment and be flexible.

Many first year courses are dominated by large class lectures, little discussion, little independence and fairly well defined learning activities and tasks (at least compared with later years) and no opportunity to discuss feedback on assignments. By the end of the first year, students may have turned into cabbages in response to this regime, with little development of independence of mind or study habits. In the second year students may be suddenly expected to work collaboratively, undertake peer assessment, undertake much bigger, longer, less well defined learning activities, deal with multiple perspectives and ambiguity, develop their own well argued positions, and so on. They may throw up their hands in despair or resist strongly.

Teachers’ best response to this phenomenon involves getting their own expectations in early and explicitly, and not changing them radically as soon as students have got used to them. If you eventually want students to work collaboratively, require group work in the first week, not the second year. If you want them to read around and pull complex material together, require it in the first week and give them plenty of time and support to do it. If you want them to establish a pattern of putting in a full working week of 40 hours then expect that in the first week, and the second week….and make it clear what those hours might be spent on, and put class time aside to discuss what it was spent on and what proved productive and what did not. If you want students to lift their sights from Chapter 1 to what the entire degree is about, have a look at some really excitingly good final year student project reports in week one, and bring the successful and confident students who wrote them into the classroom to discuss how they managed it, talking about their pattern of studying that led to getting a first and a place to do a Doctorate. In brief, get your clear and high expectations in early, with plenty of opportunity to discuss what they mean.

Students will find this alarming and amazing – but they will get used to it just as they got used to whatever you did before. It will seem equally strange, but no more so than before. The crucial issue is that they will now be getting used to the right thing.

Lee Shulman Tells Us to ‘Break Bad’ and Engage in SoTL


“Walter White is dead. Heisenberg is no longer someone of uncertain fate.”

These were the opening words of Lee Shulman’s talk, Situated Studies of Teaching and Learning: The New Mainstream. Intriguing. What on earth could the main character of the television series Breaking Bad have to do with the Scholarship of Teaching and Learning (SoTL)?

Shulman continued: “And I must say that I have this fleeting image of my colleagues in the Scholarship of Teaching and Learning, sneaking away from their Chemistry classrooms or Biology or English or History to their SOTL labs and mixing a brew intended to undermine the clarity of thought, the certainty, the dogmatism, and the ease with which their colleagues, and their colleges and universities continue to do the same work that they’ve done for many years.”

I was inspired. Not to become an over-educated high school Chemistry teacher and family man who creates a secret alter-ego criminal mastermind and drug lord in order to provide for my family. But this analogy has changed the way I think about SoTL.

The term “break bad” means to rebel against the accepted norms of a society. Essentially, Shulman argued that those of us who engage in situated research (such as SoTL) are “breaking bad” – rebelling against the “accepted tradition” of research. Traditional research aims to generalize knowledge, to create broad and sweeping overviews that contribute to theories and principles not limited by details or particular circumstances. This type of research is often (though unjustifiably) viewed as a “superior” or more legitimate form of research than situated research.

Situated research, on the other hand, focuses on the details: the particulars, the individuals, contexts, and environments considered unimportant in traditional research. Situated research does not attempt to create broad generalizations, but rather “seeks to describe, explain and evaluate the relationships among intentions, actions and consequences in a carefully recounted local situation”. Shulman argues that situated research will soon become mainstream in SoTL because it provides a rich, deep and detailed contribution to knowledge that traditional forms of research simply cannot.

In a way, the premise of the series Breaking Bad is like situated research. It does not seek to create broad generalizations or theories (e.g. “over-educated high school Chemistry teachers with cancer are likely to create drug empires”), but rather it “seeks to describe, explain and evaluate the relationships among intentions, actions and consequences” in the life of Walter White. The complexity, the uncertainties, the contextual details are where the brilliance of the series Breaking Bad truly lies. And that is where the brilliance of SoTL truly lies as well.

View Lee Shulman’s talk, presented at the International Society for the Scholarship of Teaching and Learning (ISSOTL) 2013 Conference:

4th Annual SoTL Conference to Be Held at USask


I am extremely pleased to promote and encourage participation in the 4th annual Scholarship of Teaching and Learning (SoTL) symposium. The day will be strengthened by a diversity of perspectives, so we welcome all who would like to attend; no prior experience undertaking SoTL is necessary.

The event will be held on the 1st and 2nd of May on the University of Saskatchewan campus. In addition to plenary presentations, there will be various opportunities to present your SoTL work or ideas. We invite participation from those interested in dipping a toe in the SoTL waters, those part way through a SoTL project, as well as those experienced with and wishing to present results of their SoTL research. We hope the event will be a chance to gather and learn from colleagues interested in improving teaching and learning at the University and beyond.

In an effort to open up the event to individuals at all points in their exploration of SoTL, we have created four different types of presentations:

  1. Watercooler chats – these sessions will be appropriate for those wanting to discuss a new area of teaching and learning research. This is an opportunity to share ideas in a more informal way with a small group of colleagues, discuss options, and get feedback from fellow participants.
  2. World Café – these sessions will be appropriate for those who wish to share and discuss approaches to, or issues arising in, their SoTL work with colleagues at the conference. The World Café begins with a short (5 to 10 minute) presentation on your topic to the whole group, followed by table-based small group discussions with colleagues who wish to hear more and discuss your project, approach or issue in more detail.
  3. Poster session – these sessions will be appropriate for those with a completed SoTL project they would like to share via virtual poster with colleagues. There will be time during the symposium for attendees to view and ask questions about each poster.
  4. Research Presentation – these sessions will be appropriate for sharing completed SoTL projects with results. We would welcome participants presenting research to include a few minutes spent sharing lessons learned in undertaking the research.

This year we have added a writing retreat to the end of the SoTL symposium.  We invite you to join us for this part of the event at Boffins on Friday afternoon.  The retreat will provide some time to share your writing project and aspirations for it with colleagues and, most importantly, give you time to work on your project in a supportive and comfortable environment.

To register or submit a proposal, please visit http://fluidsurveys.usask.ca/s/2014_SoTL_Symposium/. The submission deadline is April 14.

I look forward to seeing you and learning with you on May 1 and 2.