There Are No “Dumb” Questions, But There Are Intelligently Guessed Answers




The weather turning colder, the snow starting to fall, the days growing shorter, and people bustling about ever more busily are sure signs that “the most wonderful time of the year” on our campus is fast approaching: final exam season.

Few, if any, types of questions appear more prolifically on final exams than multiple-choice questions (MCQs). However, there are good MCQs and there are not-so-good MCQs. An exam containing poorly written questions will produce inaccurate measures of your students’ learning; if the purpose of a final exam is to measure student learning, one consisting of poorly constructed questions is essentially just “going through the motions” of assessment. A student who knows nothing about your subject matter could easily earn a higher mark through strategic guessing than a student who is well prepared.

For example, see these Rules for Intelligent Guessing on Multiple Choice Exams. These “rules” make a lot of sense because they capitalize on the most common errors that instructors make when constructing multiple-choice questions.

MCQs contain a stem (the lead-in to the question), the correct choice(s), and distractors (the incorrect choices). Many MCQ construction errors occur when “question-writing fatigue” hits… at some point, one can end up feeling a bit desperate for another distractor (or at least that has been my personal experience: the stem and the correct answer are usually pretty easy to come up with; it’s devising plausible distractors that is wearying).

One way to avoid this is to write two or three MCQs on each topic area while you prepare to teach it in class. That way, you have a set of new and original questions to choose from when it comes time to put your final exam together. Review all your questions through the lens of a student using the “rules for intelligent guessing” and make changes where required.

It takes a lot of time to construct final exam questions, so make sure that time is not wasted. You want to be testing students on their knowledge of your subject matter, not on their ability to exploit any oversights in your question construction.

14 Rules for Writing Multiple-Choice Questions, from the Brigham Young University Faculty Center, is a succinct yet thorough list of best practices for writing good MCQs. It may be helpful to print this document and keep it close at hand for easy reference.

Defining Shared Thresholds for Dealing with Academic Dishonesty




The Academic Misconduct Policy at the University of Saskatchewan recognizes that, as instructors, we are often in a great position to judge the severity of an act of dishonesty and to situate that act in the context of our course. The informal procedures available through the U of S academic misconduct policy set clear parameters: to apply a grade penalty on the assignment or test that is of concern, the matter must be dealt with using the “informal procedures”. The formal procedures, by contrast, may be invoked when the grade penalty you see as deserved extends beyond the assignment or test to the overall grade for the course.

However, each of us likely has a different threshold for when a concern about academic dishonesty warrants a penalty and for what its severity should be. Depending on the situation, some of us will be more apt to ask a student found to have plagiarized to submit, after a stern warning, a re-write addressing the errors or omissions for re-grading. Others of us will instead be inclined to advance the matter to the formal procedures and participate in a hearing, seeing the plagiarism as a far more serious matter.

So, why does this variation matter?

Students come to know that different instructors handle the same kinds of academic dishonesty differently. When students see their teachers as less diligent or less vigilant about such matters, the problematic shortcut (the majority of academic dishonesty takes this form) may seem a lower risk than it would in another class. In this situation, students committed to academic integrity can lose faith and question whether the assessment playing field is really that even after all. That is, are the rules really the rules? And, to use this year’s Academic Integrity Awareness Week catch phrase without its intended twist, previously honest students may wonder to themselves, “Why not Cheat?”

What can be done?

Today, my colleague from the ULC, Elana Geller, and I will facilitate a discussion at the College of Kinesiology, at their request, about developing a common approach to enacting the academic misconduct policy, especially when to use the informal procedures. We will talk about the policy as it exists, acknowledge the complexities of discovering and confirming academic dishonesty, and help identify common principles the faculty and instructors in the College want to use going forward.

If other academic units are interested in our assistance facilitating something similar, feel free to contact us (or check in with your friends in Kinesiology to see how it turned out).

John Boyer Touches Down on Tuesday at the U of S




Sometimes, the time is right to reach into the past for a “re-post”. Now is such a time to look again at the February 24, 2014 post by Susan Bens, since we are in the wonderful position of hosting John Boyer at the U of S on Tuesday, October 7. He’ll be speaking from 2:30 – 3:30 in the GSA Commons on the very structure of assessment he uses in his huge, blended course on World Regions.

Check out this event, and other events appearing under the Academic Integrity Awareness Week banner.


What? A Menu of Assessment Options?

By Susan Bens
I have recently come upon a few interesting ideas about the conditions we create for assessment in higher education, especially with respect to deterring academic dishonesty. Standing out to me right now is a 2013 book I’ve been reading by James Lang titled “Cheating Lessons.” This book provides inspiration, encouragement, and practical advice to teachers in higher education. Lang’s premise is that cheating is an inappropriate response by students to environments that convey an emphasis on performance within the context of extremely high stakes and where extrinsic motivators overpower the “intrinsic joy or utility of the task itself” (p. 30).

Slide of a Weird Grading System

Lang points his readers to an innovative assessment practice I found quite intriguing. Professor John Boyer, in his apparently infamous World Regions class of 2,670 (!) students at Virginia Tech, affords students maximum flexibility in assessment. He structures a multi-choice assessment system that pushes students away from performance orientation and instead puts the responsibility on students to choose ways of demonstrating their learning via a point system. I highly recommend a visit to Boyer’s web page for more information on his innovative approach at http://www.thejohnboyer.com/new-education/.

The Academic Dishonesty Redirect: Be Explicit, Know Your Policies, Assess Authentically




At the Gwenna Moss Centre for Teaching Effectiveness, when faculty and instructors ask us about academic integrity, we will inevitably steer the conversation to three main values:

  1. the value of being very explicit with students about the rules you expect them to follow,
  2. the value of understanding the rules of your home department or college, as well as the university policy on academic misconduct, and
  3. the value of designing assessment for authentic learning.

Here’s a video that demonstrates this tendency quite nicely, if I do say so myself:

And, for further evidence of our redirect: coming up on Monday, October 6, 1:30 – 2:15 in the GMCTE Classroom, as part of Academic Integrity Awareness Week, there will be a short session on assessment practices by Carolyn Hoessler and Barb Schindelka titled “Reduce Uncertainty, Increase Integrity: How to create relevant and effective assessments.” Register for this practical session at http://www.usask.ca/gmcte/events.

Problem Solving = Great! But what kind of problems are our students really learning?




What learning are we really asking our students to demonstrate, and what are we saying actually matters through our assessments?

Within statistics, exams typically require students to apply a statistical procedure such as a t-test to a question like “Is there a significant difference between boys and girls on self-confidence or neural activity when the mean is…”, where the criterion of significance is the conventional one, the problem to solve is clear and familiar, the variables are provided, and even the values are given. Students just plug into memorized equations. In contrast, what if, on assignments (for practice) and on the exam, I asked questions such as presenting a news story and having students outline the information and statistical analyses they would need in order to take a stance? They might then have to look up prior studies to find likely values, debate whether gender is a dichotomous category or a continuous variable for their purposes, determine how to operationalize the topic, set a 1/20 or more conservative cut-off for significance, and select and apply a statistical analysis. Which assessment would better measure the learning I want my students to have when they go forward? Which learning would you want that A+ to represent when you are deciding whether they will be your honours or graduate student?
Problem solving process
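To make the contrast concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers) of the “plug into memorized equations” version of such a question: every value is supplied in the stem, and the only work left for the student is arithmetic.

```python
from math import sqrt

# Hypothetical exam question: every value below is given in the stem.
mean_boys, mean_girls = 3.8, 4.2   # group means on a self-confidence scale
sd_boys, sd_girls = 0.9, 1.0       # group standard deviations
n_boys, n_girls = 30, 30           # group sizes

# Independent-samples t-test with pooled variance -- the memorized equation.
pooled_var = ((n_boys - 1) * sd_boys**2 + (n_girls - 1) * sd_girls**2) \
             / (n_boys + n_girls - 2)
t = (mean_boys - mean_girls) / sqrt(pooled_var * (1 / n_boys + 1 / n_girls))

# The student compares |t| against a critical value at the conventional
# 1/20 (p < .05) cut-off -- the criterion is given too, not chosen.
print(f"t = {t:.2f}")
```

In the richer, news-story version of the question, every one of those inputs (the variables, the values, even the significance cut-off) is left for the student to find, argue for, and defend.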

Several years have passed since I heard Dr. Eric Mazur speak about changing the activity in the classroom to engage students in learning and increase their conceptual understanding of physics. His approach of peer instruction is widely shared. (The video was included in an earlier blog post about participatory learning and transfer.)

In the June 2014 opening plenary of the Society for Teaching and Learning in Higher Education conference in Kingston, Dr. Eric Mazur’s pursuit of improved learning remained, but his focus had shifted:

“For 20 years I have been working on approach to teaching, never realizing that assessment was directing the learning … assessment is the silent killer of learning.”

As educators, we do not teach so that students simply learn the concept, lens, or procedure for tomorrow, but for the days and weeks that follow. If delaying an exam by one day disadvantages students, or if achieving a high grade cannot predict understanding of the fundamental concepts of force, he asked, have they really learned? If our assessments reflect and demand only a low level of learning, then our students will not learn at the higher levels we desire them to achieve. Do exams that promote cramming, or that could be answered with a quick Google search, really measure the kind of transfer or retention of information we should be aiming for?

Of the several changes that Dr. Mazur outlined to improve assessment, the one that really caused me to pause was his comment about what kinds of problems we are asking students to solve.

Think of the problems typically found in your field – the ones where the outcome is desired but the procedure and path to get there are not known (e.g., design a new mechanism, identify the properties of what is before you, or write a persuasive statement). In our assessments, as Dr. Mazur contrasted, the problems students are asked to solve instead involve applying known procedures to a set of clearly outlined information to solve for an unknown outcome.

During the plenary, he presented a series of possible questions about estimating the number of piano tuners: the first version required students to make assumptions about frequency and populations; then, to reduce students’ questions and uncertainty, the second version provided the assumptions, the third the name of the formula, and so on, until students were simply asked to remember the formula and input numbers… moving down the levels of Bloom’s taxonomy from creating and evaluating to simple remembering.
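For readers who haven’t seen the piano-tuner problem, here is a minimal sketch (in Python, with entirely hypothetical assumptions) of what the first, open-ended version demands: the student must invent and defend every input before any arithmetic happens.

```python
# Fermi-style estimate of the number of piano tuners in a city.
# Every value below is a hypothetical assumption the student must
# supply and justify; none of it is given in the question.
population = 1_000_000            # people in the city
people_per_household = 2.5
households_with_piano = 1 / 20    # fraction of households owning a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = (population / people_per_household) * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year

print(round(tunings_needed / tunings_per_tuner_per_year))  # about 20 tuners
```

Each successive version of Mazur’s question hands the student one more of these lines for free, until only the final division is left.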

Add in the removal of the resources I would reach for when running a statistical analysis or citing sources for a journal article, and the removal of the collaboration and consultation that my research enjoys but my teaching of research does not, and the gap between the exam and the reality I think I am preparing students for grows wider still.

The question is how pre-defined and how easily remembered or repeated the “information” is that students are being asked to identify, note as missing, and connect.

Resources

Video: Asking Good Questions, Humber College
http://www.humber.ca/centreforteachingandlearning/instructional-strategies/teaching-methods/course-development-tools/blooms-taxonomy.html
Asking questions that foster problem solving, based on Bloom’s taxonomy

Bloom’s Taxonomy, University of Victoria
http://www.coun.uvic.ca/learning/exams/blooms-taxonomy.html
Lists example verbs and descriptions for each competence level

Bloom’s Taxonomy, bloomstaxonomy.org
http://www.bloomstaxonomy.org/Blooms%20Taxonomy%20questions.pdf
Question stems and example assignments

‘Driving’ the Lesson: The Pre-Assessment (Part I)




In my last post, I wrote about objectives and the value of pausing in the “everyday” experiences of learning.  In a lesson, one place to really pause and pay attention is during the pre-assessment.  This is the part of the lesson when instructors can assess what students already know and where students contribute their own experiences or ideas to the lesson.

Daniel Pink, the author of Drive: The Surprising Truth About What Motivates Us, suggests that all human beings have a need “to direct our own lives, to learn and create new things, and to do better by ourselves and our world.” He claims three principles are central to motivation: autonomy, mastery, and purpose. In this post, I’ll discuss the first of these principles, autonomy, including how and why it drives a lesson.

Autonomy

In his book, Daniel Pink describes an experiment conducted by a CEO who took a chance by introducing a results-oriented work environment (ROWE) within his company. In this type of workplace, employees are granted autonomy over when and where they work. There are no office hours and no obligation to be in the office at certain times. Instead, employees are given only one parameter: they must get their work done on time.

When I read this, I thought of the road map I described in my last post. I’ve already argued it’s not the destination that matters; it’s the path we take to get there. If, as a teacher, I were to follow in the footsteps of this CEO, what would a results-oriented classroom look like? Should students be responsible for more of their learning? Will they ever be able to “get their work done on time”? Does the role of the instructor change in a results-oriented classroom, or does it stay the same?

A perfect time for students to exercise some autonomy might be during the pre-assessment, but autonomy goes deeper than teachers finding out what their students know. One step might be for instructors to find out more about what truly motivates their students. At its core, however, autonomy is really about the idea that students should sometimes sit in the driver’s seat. Autonomy makes learning much more purposeful because all human beings need an opportunity to create what is most meaningful for them, especially when learning.

One pre-assessment activity I use to promote autonomy is asking my students to develop a learning goal. For instance, I may use a brainstorming activity, a two-minute memo, or a question to introduce the lesson, e.g.: “As a teacher preparing to teach, what do you need to know about the topic at hand today, and why? What can you do during the lesson today to help or drive that learning to happen?” In asking this question, I not only learn more about what my students want to know, I also transfer some of the responsibility and autonomy for learning to them. It’s simple to return to this question at the end of the lesson, during the post-assessment, when students can evaluate for themselves what they did during the lesson and what they’ve learned.

Pink, D.H. (2009).  Drive: The surprising truth about what motivates us.  New York: Riverhead Books.

Picture courtesy of Christopher Ziemnowicz via Wikimedia Commons released into the public domain.

Course Design Institute Being Offered as ‘Flipped’ Workshop




Course Design Process

For several years, the GMCTE has offered the Course Design Institute (CDI), a four- to five-day intensive workshop that walks instructors through the development or redevelopment of one of their courses. This May, the CDI will be delivered in an entirely different format than in the past, “flipped” to provide participants with more hands-on work time.

While in the past participants attended all day for four to five days during a single week, this offering will require participants to attend three Thursday mornings over three weeks in May. They will also watch videos and complete assignments outside of these meeting times, posting their assignments to the discussion forum, where they will receive feedback from the facilitators and fellow participants.

The CDI is built around the resources on our Course Design Process web page and includes videos and other materials related to learner and context analysis, developing learning outcomes, creating assessments, deciding on teaching strategies, and evaluating and revising the course that participants develop.

Participants who qualify are also eligible for a $1,000 grant to assist in the development or redevelopment of their course.

Spots in the CDI are limited to 10 participants and applications are now being accepted. For more information about the CDI, the application and the grant, please see the CDI Web site.

What? A Menu of Assessment Options?




I have recently come upon a few interesting ideas about the conditions we create for assessment in higher education, especially with respect to deterring academic dishonesty.  Standing out to me right now is a 2013 book I’ve been reading by James Lang titled “Cheating Lessons.”  This book provides inspiration, encouragement, and practical advice to teachers in higher education. Lang’s premise is that cheating is an inappropriate response by students to environments that convey an emphasis on performance within the context of extremely high stakes and where extrinsic motivators overpower the “intrinsic joy or utility of the task itself” (p. 30).

Slide of a Weird Grading System

Lang points his readers to an innovative assessment practice I found quite intriguing.  Professor John Boyer, in his apparently infamous World Regions class of 2,670 (!) students at Virginia Tech, affords students maximum flexibility in assessment.  He structures a multi-choice assessment system that pushes students away from performance orientation and instead puts the responsibility on students to choose ways of demonstrating their learning via a point system.  I highly recommend a visit to Boyer’s web page for more information on his innovative approach at http://www.thejohnboyer.com/new-education/.
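To make the mechanics of such a point system concrete, here is a minimal sketch in Python. The assessment options, point values, and grade cut-offs below are entirely hypothetical (Boyer’s actual menu is on his web page); what matters is that students choose which subset of tasks to complete.

```python
# A minimal, hypothetical sketch of a menu-style point system.
# The options, point values, and cut-offs are invented for
# illustration; see Boyer's web page for his actual menu.
MENU = {
    "weekly quizzes": 150,
    "midterm exam": 100,
    "final exam": 150,
    "map project": 100,
    "current-events blog": 100,
    "film responses": 50,
}

# Letter grades depend only on total points earned, not on which
# options a student chooses; more points are on offer than any one
# grade requires, so students pick the mix that suits them.
THRESHOLDS = [(500, "A"), (450, "B"), (400, "C"), (350, "D")]

def letter_grade(chosen_options):
    """Sum the points for a student's chosen assessments and map to a letter."""
    total = sum(MENU[option] for option in chosen_options)
    for cutoff, letter in THRESHOLDS:
        if total >= cutoff:
            return letter
    return "F"

# One student skips the midterm entirely and still earns an A.
print(letter_grade(["weekly quizzes", "final exam",
                    "map project", "current-events blog"]))
```

The design point is that responsibility shifts to the student: the instructor fixes the menu and the cut-offs, and each student assembles their own path to the grade they want.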

Four Student Misconceptions About Learning



The main section of this blog post is a reprint of an article from Faculty Focus by Maryellen Weimer. It follows a brief introduction by Nancy Turner.

I thought readers of this blog would be interested in the article reprinted below on common student misconceptions about learning. These points are usefully discussed openly with students at the start of a course or year of study, but they are also points for faculty to be aware of when planning curriculum and learning experiences. Both explicit discussion of the misconceptions and curriculum, assessment, and session design that implicitly counter their effects (specific examples for each are included in the text of the article) should go a long way toward supporting deep student learning.

This article originally appeared in Faculty Focus.  © 2014 Magna Publications. All rights reserved. Reprinted with permission.

“Efficient and effective learning starts with a proper mindset,” Stephen Chew writes in his short, readable, and very useful chapter, “Helping Students to Get the Most Out of Studying.” Chew continues, pointing out what most of us know firsthand: students harbor some fairly serious misconceptions that undermine their efforts to learn. He identifies four of them.

  1. Learning is fast – Students think that learning can happen a lot faster than it does. Take, for example, the way many students handle assigned readings. They think they can get what they need out of a chapter with one quick read through (electronic devices at the ready, snacks in hand, and ears flooded with music). Or, they don’t think it’s a problem to wait until the night before the exam and do all the assigned readings at once. “Students must learn that there are no shortcuts to reading comprehension.” (p. 216) Teachers need to design activities that regularly require students to interact with course text materials.
  2. Knowledge is composed of isolated facts – Students who hold this misconception demonstrate it when they memorize definitions. Chew writes about the commonly used student practice of making flash cards with only one term or concept on each card. The approach may enable students to regurgitate the correct definition, but they “never develop a connected understanding of how to reason with and apply concepts.” (p. 216) The best way for teachers to correct this misconception is by using test questions that ask students to relate definitions, use definitions to construct arguments, or apply them to some situation.
  3. Being good at a subject is a matter of inborn talent rather than hard work – All of us have had students who tell us with great assurance that they can’t write, can’t do math, are horrible at science, or have no artistic ability. Chew points out that if students hold these beliefs about their abilities, they don’t try as hard in those areas and give up as soon as any difficulty is encountered. Then they have even more evidence of those absent abilities. Students need to bring to learning a “growth mindset,” recognized by statements like this: “Yes, I’m pretty good at math, but that’s because I’ve spent a lot of time doing it.” Teacher feedback can play an important role in helping students develop these growth mindsets.
  4. I’m really good at multi-tasking, especially during class or studying – We’ve been all over this one in the blog. “The evidence is clear: trying to perform multiple tasks at once is virtually never as effective as performing the tasks one at a time focusing completely on each one.” (p. 217) Chew also writes here about “inattentional blindness” which refers to the fact that when our attention is focused on one thing, we aren’t seeing other things. “The problem of not knowing what we missed is that we believe we haven’t missed anything.” (p.217)

Pointing out these misconceptions helps but probably not as much as demonstrations. Students, especially those in the 18-24 age range, don’t always believe what their teachers tell them. The evidence offered by a demonstration is more difficult to ignore.

Please be encouraged to read Chew’s whole chapter (it’s only eight pages). It’s in an impressive new anthology which is reviewed in the February issue of The Teaching Professor newsletter. Briefly here, the book contains 24 chapters highlighting important research on the science of learning. The chapters are highly readable! They describe the research in accessible language and explore the implications of those findings. Very rarely do researchers (and most of these chapters are written by those involved with research) offer implementable suggestions. This book is full of them.

And here’s the most impressive part about this book: you can download it for free. It’s being made available by the American Psychological Association’s Society for the Teaching of Psychology. Yes, it’s a discipline-based piece of scholarly work, but as the editors correctly claim it’s a book written for anyone who teaches and cares about learning. Kudos to them for providing such a great resource!

Reference and link: Benassi, V. A., Overson, C. E., & Hakala, C. M. (Eds.). (2014). Applying science of learning in education: Infusing psychological science into the curriculum. Available at the Teaching of Psychology website: http://teachpsych.org/ebooks/asle2014/index.php


Catching a Falling Star or Lost in Outer Space? That’s What Feedback Is For!




What would it be like to wait 31 months before finding out whether you were on your way to success or had burst into flames? The European Space Agency had such a wait to learn how its Rosetta mission to study a comet was going, hearing this week for the first time from its spacecraft, which had finally travelled close enough to the sun to have the solar power to wake up.

Rosetta Calls Home

In comparison, most four-year undergraduate programs run about 32 months (e.g., September 2013 to April 2017, not counting summers) – a long time to wait for student feedback on their orientation and first-year experience.

A one-term course lasts just over three months – still a long time to wait for feedback on whether all students can see the screen or hear your voice, something determinable in the first five minutes of lecture.

Why bother?

So why should we bother with feedback? Ever gotten mid-way through a term unsure of how it is going, but plowed ahead anyway? Ever been surprised by students’ ratings at the end of the year?

Feedback from our students about our course (or our program) at regular intervals can help us adjust mid-flight to correct for any variations in students, class timing, room layout, course materials etc. The feedback can help with requesting changes to the room, suggesting topics that require additional review, or offering ideas for new approaches to the material.

How?

While the European Space Agency may have had no choice but to wait for its comet-chasing spacecraft to signal back, students are generally around from September to May and willing to provide feedback on their learning experiences, for their own sake and for the sake of the next cohort.

  • Quick: Ask students, through a show of hands (or clickers, if used in the class), who can clearly see the font on the screen. Are all hands up?
  • In Depth: Want more detailed and anonymous data? We at the GMCTE are happy to come by to survey students in class, facilitate a focus group, or provide our own observation (all confidential). We can also help with surveys, focus groups, and other methods for program review. Carnegie Mellon University’s resources describe focus groups with students as “particularly effective for identifying agreement across a group and for eliciting suggestions for improvement…[and] much more flexible”
  • In the Middle: Have a one-minute questionnaire answered and dropped off as students leave, such as “What can be improved? What helps my learning?” or “Something I learned today? Something I am not sure about?”

More resources and examples are available about feedback on courses (http://www.usask.ca/gmcte/resources/assessment/teaching) and on program curriculum (http://www.usask.ca/gmcte/resources/curriculum_cycle/step-2-inventory).

This new term has launched; our courses have cleared the start-up atmosphere and are settling into orbit. Curious whether your course is on the right trajectory? Now is the time to get feedback.

Contact us at gmcte@usask.ca or see our Website.

Photograph courtesy of the European Space Agency.