I was discussing ‘student feedback’ with a friend this week and we started talking about scenarios (real and imaginary) where consumer feedback (be that from students, patients, customers, clients etc) impacted on an employee’s job security. If I’m honest, I had a strong, negative reaction to the idea (as I think others would too), but to me strong feelings mean I should reach for my ‘thinking cap’…
Just ‘the person at the front’
My thoughts generally relate to my situation as a University educator and teacher, and so the first ‘objection’ I had was that I see ‘the teacher’ as ‘the person at the front‘. If a good teacher needs great colleagues and good students, what does a ‘bad’ teacher lack? I’d agree that the teacher is an important part of a class, lecture or lab, but any poorly rated educational interaction needs to be evaluated in a broader context than just ‘the teacher’. A low-scoring pedagogical activity may come about as much from an unengaged class, or disruptive individuals, as from a poor teacher or ineffective support structures. Singh argues that learning (and teaching) are community activities: I’d agree. Although I’d add a caveat that I – as a teacher – believe that I have a greater responsibility in designing and implementing an effective ‘learning culture’ than my students do.
‘Dead wood’ and wonga
There seems to be some logic to (as they say) ‘getting rid of dead wood’, but where’s the evidence for that? There are, however, counterarguments – effects such as ‘social loafing‘ (where in large groups employees naturally default to letting others do the work) and Price’s Law (where half of the productive work is done by the square root of the number of employees) exist as a result of a working culture rather than underperforming individuals. Companies using the controversial Vitality Curve idea – where they rank all staff and fire the lowest 10% regardless of their actual performance – have done so without any evidence that the policy works! Workplaces and work teams are made from complicated, dynamic connections and structures: simple ‘quick fix’ solutions rarely seem to work the way we’d expect.
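Price’s Law is easy to illustrate with a little arithmetic. The sketch below (a toy Python example with invented team sizes, not based on any real staffing data) shows how the ‘productive core’ predicted by the law shrinks, proportionally, as a team grows:

```python
import math

def prices_law_core(team_size: int) -> int:
    """Under Price's Law, roughly sqrt(n) people produce half of a group's output."""
    return round(math.sqrt(team_size))

for n in (10, 100, 1000):
    core = prices_law_core(n)
    print(f"Team of {n}: ~{core} people ({100 * core / n:.1f}%) do half the work")
```

If the law holds, a team of 100 has ten people doing half the work – and firing the ‘bottom 10%’ does nothing to change that ratio, because it is a property of the group, not of the individuals.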
Consumer feedback will always be subjective and ‘rank-based’: in my case, I could deliver good lectures, but if my colleagues are even better, the ‘performance bar’ is higher and I’m ranked as poor. To paraphrase the old joke about the two guys running away from a lion: ‘I just need to teach better than you!’
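The ranking problem can be shown in a couple of lines. In this toy Python sketch (invented scores, purely illustrative), every lecturer performs well in absolute terms, yet a rank-based scheme still labels one of them ‘worst’:

```python
# Hypothetical scores out of 100 for four lecturers: all objectively 'good'.
scores = {"Lecturer A": 82, "Lecturer B": 85, "Lecturer C": 88, "Lecturer D": 91}

# Rank from best to worst; someone must always land at the bottom.
ranked = sorted(scores, key=scores.get, reverse=True)
print("Bottom of the ranking:", ranked[-1])  # Lecturer A, despite a good score
```

A policy that punishes the bottom rank will always find a victim, however high the absolute standard.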
There’s always the idea that salary can be used to reward productive activities and penalise negative, unproductive behaviours. Yes: that works, but only for routine ‘production-like’ activities. As Dan Pink points out, for creative tasks (which I would consider teaching to be) using financial incentives can actually be counterproductive.
I’ve spoken to my medical friends in the past about the Quality and Outcomes Framework (QOF). This was a scheme that awarded extra money to GPs to promote and implement certain healthcare activities (for example, making sure a high percentage of the patients in their practice were vaccinated). It seems like a classic ‘carrot approach’ to driving the right process, but the results were, at best, mixed (Roland and Guthrie, 2016). The QOF has now been replaced across the UK health authorities.
Root cause and poor performers
My thinking on problems like this is influenced a lot by my time working in the Formulation Unit, to the pharmaceutical quality standard, Good Manufacturing Practice. In that context any ‘investigation’ into poor performance, errors, mistakes etc. must be focused on ‘root cause’, and avoid blaming ‘human error’. I’ve blogged on these ideas before. ‘Root cause’ may involve developing an understanding of the broader landscape in which poor staff performance occurs. A difficult question for any organisation is to find out how someone became a ‘poor’ worker (doctor or teacher) rather than developing into a ‘better’ one.
Everybody has different skills: some people will be good at (for example) lecturing, while others might be great at online teaching through a VLE. Somehow we need to work out how to play to individual strengths, so that various members of a working community can bring alternative skills to a team’s purpose and focus. But there are two challenges in ‘valuing’ the different skills of individuals in a group: firstly, valuing empathetically and enthusiastically a skill set we see in others but don’t have ourselves; secondly, evaluating the ‘worth’ of that skill set to the overall purpose of the group. Avoiding both of these challenges – by insisting on one, uniform skill set across the whole team – is far easier, but far more erroneous!
Key Performance Indicators
In the conversation with my friend, the issue of Key Performance Indicators (KPIs) came up. The trick to KPIs is to design a measurement that is effective and ‘drives’ the right behaviour or activity in a process or workplace. That’s difficult enough, but once the KPIs are in place there are two key challenges: firstly, the KPIs can become the focus of the work, rather than a tool to guide performance (as the saying goes, ‘what gets measured gets done’); and secondly, the number of KPIs usually increases, leaving ‘the system’ overloaded and confused (“death by a thousand KPIs!”).
It seems to be tricky to find ‘solid data’ on how effective KPIs are, but I did come across this quote from the book Key Performance Indicators: Developing, Implementing, and Using Winning KPIs by David Parmenter –
“Many organisations that have operated with KPI’s have found (they) made little or no difference to performance”
I get the impression that KPIs are a double-edged sword: powerful and useful when handled correctly, but disembowelling if not!
Innovation and change
Teaching in a modern context needs innovative change in order to develop. One of my research interests is ‘gamification’ – the idea that people learn through an element of play. It’s a relatively new, but legitimate, teaching tool. However, experience has taught me that I need to be careful of the phrases I use in describing these teaching activities, because some students resent spending time, money and resources turning up to workshops where ‘they just play’. I have now adapted my teaching descriptions to accommodate student feedback, but the principle I’m trying to get across is that education needs to change through innovation, and that change is not always comfortable (and may attract negative student feedback initially). Innovation always carries risk.
Pedagogy is a relational activity and so it’s by default a cultural phenomenon. The ‘culture’ (and I use that word deliberately because I don’t mean nationality, class, or any of the other labels we may be tempted to put on the word) of teachers and students will have an impact on how easy, smooth, effective and constructive the interactions will be in a teaching/learning community.
I commented above about underperformance and different skill sets, but education has a strong component of relationships (and how do you quantify that?), so ‘learning/teaching’ moves from our ‘intellectual/left brain’ to our ’emotional/right brain’. Our relational patterns are set in childhood, but as we grow into adults we find we are awarded grades and degrees based upon our intellectual skill sets, not our relational abilities. It may take time and training (and maybe even radical mental rewiring!) to develop the sorts of relational skills I think some students might look for. (And as I reflect on myself, I feel that I still need to develop some key skills in the area of pedagogical relationships!)
Thoughts from the literature
I struggled to find anything sensible on students and KPIs in the published literature, but I did find ‘Educational Consumers or Educational Partners: A Critical Theory Analysis’ by Geeta Singh, and she raises some great points which made me think about different aspects of ‘Teaching Quality’ and ‘Student Surveys’.
Unlike pharmaceutics, where the term ‘quality’ can be defined (for example as ‘fit for medicinal purpose’ by meeting a set of established measurable values), in the context of education ‘quality’ has no technical meaning. That means that determining the quality of teaching is very difficult, and probably impossible. For the rest of this post I shall use the term ‘quality’ where parameters can be set, or defined, such that something could be stated as ‘fit for purpose’, and ‘para-quality’ where parameters cannot be set, or a value may only give an indication of true quality.
In terms of student satisfaction surveys, to me the problem is not the collecting of student survey data but the use of that data to determine ‘quality’, rather than ‘para-quality’. Student surveys are a mechanised, rather than a relational, way of collecting feedback. They’re easy to administer, but because they condense experience into a scale from 1 to 5 they can only ever be a low-resolution ‘snapshot’ of experience. We (and I mean ‘we’ as mankind!) make decisions based on the data we have, even if the information is incomplete, or the image low resolution: and that’s the problem with these surveys.
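The ‘low resolution’ point is easy to demonstrate: very different experiences can collapse to an identical survey score. A toy Python example (invented responses, purely illustrative):

```python
from statistics import mean

# Two hypothetical classes rate the same lecture on a 1-5 scale.
polarised = [1, 1, 1, 5, 5, 5]  # half loved it, half hated it
lukewarm = [3, 3, 3, 3, 3, 3]   # everyone felt indifferent

print(mean(polarised), mean(lukewarm))  # both average 3.0
```

Reported as a single average, the two classes look identical, yet they describe completely different teaching situations.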
In terms of ‘relational feedback’, Singh points out that there should be a shared learning community between students and lecturers. However, to me that makes some form of ‘back-dialogue’ vital. A learning community is a two-way street: if lecturers ‘deliver teaching’, students need to ‘feed back learning’. The current process seems to be, in part, student feedback forms.
Singh argues that student satisfaction surveys can result in a ‘consumer’ approach to education, whereas learning is not consuming. Singh reports that staff have admitted to lowering grading standards to obtain better student survey returns. She argues that the effect of education is ‘long term’ (since it sets someone up for a lifetime), and that the quality of a course cannot be determined at the end of the class. It could be argued that effective education is that which helps the students overcome the obstacles of life, not those of the classroom. The ‘quality of a student’s education’ is not apparent until the student moves out of the academic system, where (to my personal chagrin) feedback is very difficult to get. Singh cites a reference (as an example) where commercial considerations are compromising academic standards, and where those academic standards are not effectively represented in para-Quality Measurement schemes (italics mine).
Singh highlights that learning can be an uncomfortable, difficult process (see my post on thresholds and liminality) and because of this a learner should not be considered as a consumer. I agree, but only to a limited extent. To me it would seem too easy (as the teacher) to say that ‘learning is difficult’ and leave it at that. My objective would be to help students recognise, embrace and overcome difficulties. Challenges are a key part of studying, living and working, so if – as educators – we can train students to give ‘positive scores’ for ‘difficult challenges’, that’s a significant achievement.
One point I found interesting (because I put this same question into the last student feedback form I designed) was Singh’s argument that student feedback forms should be augmented with self-assessment exercises, completion of the additional reading, or a survey of self-study hours. She also points out that objectively measuring the quality of a lecturer is as difficult as measuring the quality of a student. Yet – and Singh doesn’t mention this – the objective grading of students is what universities are supposed to do! Can I (as a teacher) not take my own medicine?
A final thought on ‘our medicine’
I imagine myself teaching in front of a group of students, knowing that my performance will be evaluated on a sheet of paper, graded by some ‘mechanism’ that will decide if I get a promotion, or my annual increment, or disciplinary action. The irony is that last week I watched student presentations, rated them on a score sheet, then (at the exam boards) graded the students by a mechanism that decided if they could do a PhD, or passed or failed the course. Maybe that’s an element of feedback: what goes around, comes around.