MCQ musings


A few weeks ago I logged into a Zoom session being run by the staff development unit at Strathclyde Uni. The topic was multiple choice questions (MCQs) and seemed particularly relevant as we – as HE teaching staff – re-design our teaching methods to cope with the Covid-19 pandemic.

The session was led by Patrick Thomson (Pure and Applied Chemistry) and Rosanne English (Computing and Information Science).

This post consists of a short paragraph about ‘non-content’ MCQs and a longer section on some of the literature around MCQs and the PeerWise software.

‘Non-content’ MCQs

There’s an interesting exercise of completing a nonsense, content-free set of MCQs. There’s a Google Form version of the quiz here; if you’re setting MCQ questions it’s worth a go 🙂

MCQ literature

One of the papers referenced in the workshop was by David Nicol (reference 1). I’ve read his papers before and like his ideas, and this one didn’t disappoint (although I thought it ran out of new ideas about two thirds of the way through).

The paper points out that MCQs have several problems:

they test recall only;

feedback is very limited and is fixed at the time the quiz is designed, rather than tailored to what the student understands;

the development of MCQs is often driven by a lack of teacher resources rather than by educational considerations;

MCQs work by having students recognise answers rather than comprehend the material and construct their own answers;

there are no options for clarifying the questions, or for entering into dialogue, during the tests.


The paper proposes schemes for using MCQs, for promoting effective learning and self-learning among students, and ways for staff to measure the effectiveness of MCQs. It takes the seven key principles of effective feedback and asks how they can be applied to MCQs. I will look at and digest each principle, and its impact on MCQs, separately.

Principle 1: Clarifying goals, standards and criteria.

The students’ understanding of the assessment task must be as close as possible to the teacher’s understanding of the assessment criteria. Under this section the authors talk about group-based and dialogue learning. (Although it’s not clear to me how this falls under principle 1, unless it’s in the clarifying that the discussions occur.) For MCQs it’s proposed that the students themselves come up with the MCQs, and in doing so they set the criteria for the standards to be achieved. Case study 4 in the paper cites a system called the ‘multiple-choice item development assignment’ to instruct students in developing MCQ questions, the answers and suitable feedback. This allowed the students to lead the learning, rather than taking a top-down approach (reference 2, Fellenz, 2004).

I wanted to dig into Fellenz’s paper a bit more: how many times has that approach been used? Google Scholar shows that the paper has been cited 113 times, and there were 17 references using the idea of student-generated questions in the first 50 citing references listed on Google Scholar. The PeerWise software system came up in those papers several times (see below for further comment).

One quote stood out to me from the case study: “Indeed in professional practice, experts both create the criteria that apply to their work and assess their performance against these criteria … Higher education should help develop this capability.” It seems to argue that in the activity of setting a question there is understanding of the problem, the formulation of the real answer and the proposal of dummy answers (distractor options) that are close but inaccurate. Setting MCQs is hard work, and so perhaps it should form a realistic task for students to do.

Principle 2: Self assessment and reflection

In case studies that use this principle, students are asked open-book MCQs so they can reflect on their answers. Additionally, the questions are complex, and so take the students to the edge of their understanding rather than relying on recall alone.

In one case study students were asked to assign a confidence level to their answer. This is called confidence-based marking: students get a boost for being confident of a correct answer (low confidence = +1, medium confidence = +2 and high confidence = +3) and a penalty for being confident of an incorrect answer (low confidence = 0, medium confidence = -2 and high confidence = -6). The idea is that students need to reflect on the quality of their answer before and after they select their choice. (Hence the connection with reflection in this case study.)
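The scoring rule above is simple enough to sketch in a few lines of Python (the function name and structure are my own illustration, not from the paper):

```python
# A minimal sketch of the confidence-based marking scheme described above.
# Correct answers: low = +1, medium = +2, high = +3.
# Incorrect answers: low = 0, medium = -2, high = -6.

CORRECT_MARKS = {"low": 1, "medium": 2, "high": 3}
INCORRECT_MARKS = {"low": 0, "medium": -2, "high": -6}

def cbm_score(correct: bool, confidence: str) -> int:
    """Return the mark for one question under confidence-based marking."""
    table = CORRECT_MARKS if correct else INCORRECT_MARKS
    return table[confidence]

print(cbm_score(True, "high"))   # 3
print(cbm_score(False, "high"))  # -6
print(cbm_score(False, "low"))   # 0
```

Note the asymmetry in the table: a confidently wrong answer (-6) costs twice what a confidently right answer earns (+3), which is what pushes students to reflect honestly on how sure they really are.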

Principle 3: High quality feedback.

The idea here is that students get more than the polarised ‘right or wrong’ feedback, with (or without) some indication of the correct answer: if they get the wrong answer, they have the chance to go to online resources that let them examine again the principles being tested in the question. I guess we’re looking for that ‘wtf-to-ftw’ insight and learning experience. The learning can be topical, targeted and timely if the quiz is designed right. According to Nicol, this is termed ‘just in time teaching’. The paper also highlights the possibility of allowing the MCQs to inform the content of teaching sessions. For example, if a teaching programme happens over several weeks, poorly answered MCQ questions could be discussed in class.

Principle 4: Encouraging dialogue around learning.

Under this principle the case studies looked at various ways of introducing dialogue around MCQ questions: for example, having a tutorial based on previous test questions, either to find the correct answers or to examine the quality of the MCQ questions. The main case study here used the flipped classroom model with MCQs and ‘clickers’.

Principle 5: Feedback and motivation

I am getting a bit confused now, because I thought feedback was principle 3… The idea presented here was that repetition of MCQs appears to be motivating for students (although I can think of nothing less motivating than having to do the same or similar tests again!). There were no case studies in the paper as an example of this principle.

Principle 6: Closing the gap

Under this principle the idea presented was that students gain from repeated MCQs, getting better and better as the number of repetitions increases. There were no case studies in the paper as an example of this principle.

Principle 7: Feedback shaping teaching

Again, that seems to be covered by the feedback principle (principle 3) above. There were no case studies in the paper as an example of this principle.


PeerWise

PeerWise is a website set up and hosted by the University of Auckland (and is currently free). It requires an account to be set up, linked to your university.

Reference 3 is a recent report on the impact of PeerWise in the science faculties of six UK universities, correlating student engagement with the PeerWise system to attainment. However, another case study highlights some issues with lack of engagement. There is a more comprehensive paper in J Chem Ed Res here.


  1. Nicol, D. (2007) E-assessment by design: using multiple-choice tests to good effect, Journal of Further and Higher Education, 31(1), 53–64. An online version can be found here.
  2. Fellenz, M. (2004) Using assessment to support higher level learning: the multiple choice item development assignment, Assessment and Evaluation in Higher Education, 29(6), 703–719.
  3. Kay, A. E., Hardy, J. and Galloway, R. K. Student use of PeerWise: a multi-institutional, multidisciplinary evaluation, British Journal of Educational Technology.




