I missed the TLAD session on feedback two months ago 😦 So now I have to spend some time catching up. In my digging around I came across this interesting quote:
“In a comprehensive review of 87 meta-analyses of studies of what makes a difference to student achievement… the most powerful single influence is feedback.” (Gibbs and Simpson)
Really. Feedback? (Although see note 1 at the end of this piece.)
One of the first TLAD resources I looked at on the myplace page (the VLE) was the set of JISC videos on feedback. They seem to have some quality stuff in them – so here are my notes on the videos.
The presenters on the ‘Reconceptualising feedback’ video had an interesting idea about following up feedback after the end of a module, but they seem to suggest that the ‘next tutor’ should be responsible for that. Personally, I think the ‘previous tutor’ should extend (and taper) the previous module so that the follow-up on feedback is part of the previous class, but is still merged or integrated with the next one.
I liked the idea of a ‘dialogue record’ for feedback: that at least provides some evidence that feedback has been thought about and an ‘action plan’ created as a ‘learner’s way forward’. There were four questions on the (short) VLE screenshot, but for me the three interesting ones were:
- What did you learn from the feedback?
- What actions do you plan to take because of what you have learned?
- Is there anything unclear from your feedback?
The proposal about formative assessments throughout the class was interesting. With our online tutorials (which I’ve just spent the morning looking at!), I think that could be a useful ‘formative tool’. The trick will be to try and make sure the tutorials have a solid basis in the rest of the taught material.
Peer-based feedback was also brought up, and I’ve thought about this before. I think the key is making sure that the students are somehow equipped to provide good and useful feedback. In the pharmaceutical industry, ‘checking’ and ‘verifying’ work is a key component of quality: this could be ‘re-packaged’ in an educational context as peer review.
In this video the presenter proposes that the student body forms one of the best sources of information about a course and should be considered as stakeholders and change agents. There’s not much detail as to what that means, so the video seems to act as a promo for the ‘change-agents-network’ (a now-archived project). The CAN project seemed to be related to digital teaching, and these online webinars might be interesting.
There was a third video in the series we were given, but there didn’t seem to be much for me to take away from it.
The ‘embedding employability skills’ video talked about the ‘competency gap’ (the gap between the skills graduates have and the ones employers need), and the use of assessment as a tool to plug that gap (and lead to meta-learning). OK, nice idea. They showed a cool computer program used for designing an assessment and then finding a tool to do it. In the example given in the video, the ‘assessor’ determined the different criteria to be applied (which were structure, review, collaboration, time, audience and problem/data) and the program gave a list of possible assessment tools. It struck me that resource and time (both important factors for both staff and students) were not included in the list.
The idea of using technology (as they would in a job) – like blogs and wikis etc. – was a valid one, and I agree with their argument that exams aren’t a normal activity in an employment setting, but that “report writing” would be. However, I have a couple of issues with this. Firstly, if exams were a poor reflection of employment skills, why have they been so popular historically, and why do employers look for graduates with good degrees? Secondly, I consider the statement that ‘report writing’ is an employment skill subjective.
Pre-reading for the TLAD session was Chapter 5, “Using Effective Assessment to Promote Learning”, of the book “University Teaching in Focus: A Learning-Centred Approach” (eds Hunt and Chalmers, 2012).
The chapter has comments on helping students understand feedback, and on trying to get more peer review going <aside>I wonder whether it’s worth getting students to ‘buy into assessment’ at the start of the course?</aside>. I wonder if it might be worth highlighting the University’s student feedback booklet at the assessment-setting stage.
I like the idea of thinking about assessments in terms of how we make them: valid (measuring what we are trying to measure), reliable (as objective as possible), transparent (so everyone can trust the process), inclusive (with no bias that would act against certain groups) and authentic (the student’s own work).
Table 5.1 was eye-opening: the list of possible assessment techniques seemed vast, and the inclusion of disadvantages was very helpful in trying to understand which ideas to use. The list included exams (open-book), ‘take away’ papers, short-answer questions, essays, reports, MCQs, computer-based tests, portfolios, vivas, presentations, posters, projects, simulations, case studies, OSCEs (which are called Practically Assessed Structured Scenarios – PASS – elsewhere in the chapter), reflective journals, seminars, critical incident accounts, annotated bibliographies (where students write about the texts), in-tray exercises and artefact assessments.
The concept of the ‘sudden death assessment’, where everything hangs on one piece of work at the end of the term (usually an exam), seems a sensible way to describe the way I was taught!
The suggested approaches to make feedback easier and more efficient for staff (assignment return sheets, model answers, verbal feedback delivered to a group, developing a statement bank, and peer review) seem interesting. I wonder if a group of students could be asked to get together to go through their answers (and feedback), identify any inconsistencies in the marking, and develop strategies to improve. The issue is that this would require trust and respect between the assessed (the students) and the assessors (the lecturers).
There is a ton of literature out there on this, but I like “Conditions Under Which Assessment Supports Students’ Learning” by Gibbs and Simpson. (I use Gibbs’ Reflective Cycle quite a lot… and it’s the same Gibbs – I checked!) The paper has two useful sections.
Firstly, a summary of the advantages of assessments over exams. It appears that students often get higher marks in assessments than in exams, and there is better-quality learning on courses that prioritise assessments. There is also a poor correlation between exam marks and work performance/success, with assessment marks being a better indicator. Looking at the paper, I did notice that (with the exception of the poor correlation of exam marks to work performance, which was based on a review of 150 studies!) the other ‘stated advantages’ seem to be based on the conclusions of one or two cited papers.
Secondly, a set of criteria for successful assessment and feedback (perhaps this could be seen as a ‘toolkit’). The criteria are explained in the table below:
| Criterion | Explanation |
| --- | --- |
| Sufficient assessments are provided for students to capture sufficient study time | Using assessments to promote effective study of the material. |
| Tasks orientate students to allocate appropriate amounts of time and effort to the most important aspects of the course | This is designed to avoid students focusing only on work that is covered by a few assessments.\* |
| The task is productive and appropriate to the discipline | The antithesis of this would be rote learning for an MCQ assessment. |
| Feedback is prompt, detailed, and ‘enough’ | This is a combination of two of the criteria in the Gibbs and Simpson paper. |
| The feedback focuses on students’ performance, on their learning and on actions under the students’ control | The feedback is specific enough to tell students what ‘actions’ they can take to improve. |
| Feedback is appropriate to the purpose of the assignment and to its criteria for success | Feedback relates to defining what success looks like and tries to motivate a student to keep learning. |
| Feedback tries to relate to the students’ understanding of what they are supposed to be doing | This quote says it all: “Many academic tasks make little sense to students. This inevitably causes problems when they come to read feedback about whether they have tackled this incomprehensible task appropriately.” |
| Feedback is attended to (by the student) and acted upon | This is a combination of two of the criteria in the original paper. An interesting idea about only adding ‘grades’ once the student has reflected on the feedback! |

\* Note that a class exam will cover all the material, whereas assessments only cover part of it!
Gibbs and Simpson at least discuss the staff time/resource constraints in modern tertiary education, which none of the other resources did.
Thinking more generally about my teaching, I can see that assessments are a useful way of promoting ‘higher learning’, either in terms of Bloom’s taxonomy or other models (like Mayes’ conceptualisation cycle).
Hmmm…more on feedback to think about.
Note 1: This quote references a paper by Hattie (1987) that doesn’t exist. The reference should be Fraser et al., International Journal of Educational Research, Vol. 11 (2), pp. 145-252 (Chapter 4, which starts at p. 187). Hattie is the last author on the paper. The paper itself looks like a very comprehensive review, but I couldn’t find in it the data that supports the statement I’ve quoted above: even after text-searching for ‘feedback’ and ‘87’. If I were marking Gibbs and Simpson’s paper as an assignment, that would cost them 5%! Actually, I’m a bit gutted this quote can’t be properly supported: it was a nice tidy idea! 😦
“Conditions Under Which Assessment Supports Students’ Learning” by Gibbs and Simpson, Learning and Teaching in Higher Education, Issue 1, 2004-05.