Primary assessment is more than a fiasco – it’s completely wrong

I’ve written my submission to the Education Committee’s inquiry on primary assessment, for what it’s worth. I can’t imagine that they’re interested in what we have to say, given that this government have ignored just about all the expert advice they’ve ever received or requested on nearly everything else. This country has ‘had enough of experts’, after all.

I won’t paste my submission here – there are various restrictions on publishing them elsewhere, it seems. However, it’s a good time to get some thoughts off my chest. Primary assessment (and school-based assessment generally) has all gone a bit wrong. OK, a lot wrong. It’s so wrong that it’s actually very damaging. Conspiracy theorists might have good cause to think it is deliberate; my own cynicism tells me it is underpinned by a string of incompetencies and a distinct failure to listen to any advice at all.

In thinking about why it has all gone wrong, I want to pose a possibly contentious question: is the attainment we are attempting to measure something that should dominate all educational efforts and discourse? I’ve written before about my growing doubts about the over-emphasis on attainment and how I think it detracts from the deeper issue of education. The further we get down this line, particularly with the current nonsense about bringing back selective education, the more this crystallises for me. Just to be clear, this is not an anti-intellectual stance, nor a woolly, liberal dumbing-down view. I fully embrace the idea that we should not put a ceiling on any kind of achievement for anybody. Having a goal and working towards it – having a way of demonstrating what you have achieved – that’s an admirable thing. What I find ridiculous is that the kind of attainment that is obsessing the nation doesn’t actually mean very much, and yet somehow we are all party to serving its ends.

Put it this way – tiny fluctuations in scores in a set of very narrow domains make headlines for pupils, teachers, schools, counties etc. Every year we sweat over the %. If there’s a rise above the ‘expectation’ we breathe a sigh of relief. If, heaven forbid, we had a difficult cohort and a couple of boxes are in the ‘blue zone’, we dread the repercussions because now we’re no longer an outstanding school. But, as Jack Marwood writes here, there’s no pattern. We’ve even begun to worry about whether we’re going to be labelled a ‘coasting school’! Good should be good enough, because the hysteria over these measures is sucking the life out of the most important resource – us. Of course the inspectorate needs to be on the lookout for genuinely bad schools. Are these really going to be so difficult to spot? Is it really the school that was well above average in 2014 and 15 but dipped in 16? Is the child who scores 99 on the scaled score so much more of a failure than the one who scored 101? Is our group of 4 pupil premium children getting well above average, in a small set of tests, an endorsement of our good teaching compared to another school’s 4 getting well below?

Attainment has become an arms race, and teachers, pupils and parents are caught in the crossfire. In spite of the ‘assessment without levels’ rhetoric, all our accountability processes are driven by a focus on attainment against a single level. This is incredibly destructive in my experience. Notwithstanding those self-proclaimed paragons of good practice who claim that they’ve got the balance right, what I’ve mainly seen in schools are teachers at their wits’ end, wondering what on earth more they can do (what miracle of intervention they can concoct) to ‘boost’ a group of ‘under-performing’ children to ‘meeting’, whilst maintaining any kind of integrity with regard to the children who have never been anywhere near. I was recently told in a leadership meeting that all children should make the same amount of progress: those ‘middle achievers’ should be able to progress at the same rate as the ‘high achievers’. The opposite is true. The high achievers are where they are exactly because they made quicker progress – but the ‘middle achievers’ (and any other category – good grief!) will also get there, given time. And while all this talk of progress is on the table, let’s be honest – we’re talking about ‘attainment’ again: a measure taken from their KS2 assessments, aggregated, and compared to KS1 in a mystical algorithm.

It’s not as if the issues surrounding assessment have never been considered. Just about all the pitfalls of the recent primary debacle have been written about endlessly, and frequently predicted. High-stakes testing has always been the villain of the piece: perverse incentives to teach to the test, narrowing of the curriculum, invalidity of the testing domain, unreliability/bias/downright cheating etc. The problem is that the issues won’t go away, because testing is the wrong villain. Testing is only the blunt tool used to fashion the club of attainment with which to beat us (apologies for the extended metaphor). I’m a big fan of testing. I read Roediger and Karpicke’s (pdf) research on the ‘testing effect’ in the early days, long before it became a fashionable catchphrase. I think we should test as many things in as many ways as we can: to enhance recall; to indicate understanding; to identify weaknesses; to demonstrate capacity; to achieve certification etc. I was all in favour of Nicky Morgan’s proposal to introduce an online tables test. What a great idea! Only – make it available all the time and don’t use the results against the pupil or the teacher.

No – testing doesn’t cause the problem. The problem is caused by the narrow, selective nature, the timing and the pressure of attaining an arbitrary ‘meeting expectations’ (one big level, post-levels). The backwash on the curriculum is immense. Nothing has any status anymore: not art, not music, not D&T, not history nor geography, and certainly not science – that ‘core subject’ of yore! Some might argue that it’s because they’re not tested, and of course I agree up to a point, but the real issue is that they’re not seen as being important in terms of attainment.

I shall add a comment here on teacher assessment, just because it continues to drag on in primary assessment like some old ghost that refuses to stop rattling its chains. If teacher assessment is finally exorcised, I will be particularly grateful. It is an iniquitous, corrupted sop to those who believe ‘teachers are best placed to make judgements about their own pupils’. Of course they are – in the day to day running of their class and in the teaching of lessons – but teacher assessment should not be used in any way to measure attainment. I am not arguing that teachers are biased, that they make mistakes or inflate or deflate their assessments. I am arguing that there is simply no common yardstick and so these cannot be considered reliable. The ‘moderated’ writing debacle of 2016 should have put that fact squarely on the table for all doubters to see. Primary assessments are used in accountability. How can we expect teachers to make judgements that could be used against them in appraisal and in pay reviews?

I’m an idealist in education. I think that it has a purpose beyond the establishment of social groups for different functions (leadership, administrative work, manual labour). I don’t think that it is best served by a focus on a narrow set of objectives and an over-zealous accountability practice based on dubious variations in attainment. I tried to sum up my proposals for the Education Committee, and I will try to sum up my summing up:

  • Stop using small variations in flawed attainment measures for accountability
  • Give us fine-grained, useful but low-stakes testing, for all (use technology)
  • If we have to measure, get rid of teacher assessment and give us lots of common, standardised tools throughout the primary phase
  • Give us all the same technology for tracking the above (how many thousands of teacher hours have been spent on this?)
  • If you have to have end of stage tests, listen to the advice of the experts and employ some experts in test design – the 2016 tests were simply awful
  • Include science
  • Be unequivocal in the purposes of assessment and let everybody know

I didn’t say ‘get rid of the end of key stage assessments altogether and let us focus again on educating our pupils’. Maybe I should have.

Got the T-shirt (a moderate tale)

Given that teacher assessment is a nonsense which lacks reliability, and that moderation cannot really reduce this, nor ensure that gradings are comparable, our moderation experience was about as good as it could be! It was thus:

We two Y6 teachers each submitted all our assessments and three children in each category (more ridiculous, inconsistent and confusable codes, here), of which one each was selected, plus another two from each category at random. So, nine children from each class. We were told who these nine were a day in advance. Had we wanted to titivate, we could have, but with our ‘system’ it really wasn’t necessary.

The ‘system’ was basically making use of the interim statements and assigning each one of them a number. Marking since April has involved annotating each piece of work with these numbers, to indicate each criterion. It was far less onerous than it sounds and was surprisingly effective in terms of formative assessment. I shall probably use something similar in the future, even if not required to present evidence.

The moderator arrived this morning and gave us time to settle our classes whilst she generally perused our books. I had been skeptical. I posted on Twitter that though a moderator would have authority, I doubted they’d have more expertise. I was concerned about arguing points of grammar and assessment. I was wrong. We could hardly have asked for a better moderator. She knew her stuff. She was a Y6 teacher. We had a common understanding of the grammar and the statements. She’d made it her business to sample moderation events as widely as possible and therefore had had the opportunity to see many examples of written work from a wide range of schools. She appreciated our system and the fact that all our written work from April had been done in one book.

Discussion and examination of the evidence led, by and large, to agreed assessments. One was raised from working towards; one, whom I had tentatively put forward as ‘greater depth’, but only recently, was agreed to have not quite made it. The other 16 went through as previously assessed, along with all the others in the year group. Overall, my colleague and I were deemed to know what we were doing! We ought to, but a) the county moderation experience unsettled us and fed my ever-ready cynicism about the whole business, and b) I know that it’s easy to be lulled into a false belief that what we’ve agreed is actually the ‘truth’ about where these pupils are. All we can say is that we roughly agreed between the three of us. The limited nature of the current criteria makes this an easier task than the old levels (we still referred to the old levels!), but the error in the system makes it unusable for accountability or for future tracking. I’m most interested to see what the results of the writing assessment are this year – particularly in moderated v non-moderated schools. Whatever they are, they won’t be reliable assessments but, unfortunately, they will still be used (for good or ill) by senior leaders and other agencies to make judgements about teaching.

Nevertheless, I’m quite relieved the experience was a positive one and gratified and somewhat surprised to have spent the day with someone with sense and expertise. How was it for you?

Trialling moderation

A quick one today to cover the ‘trialling moderation’ session this afternoon.

We had to bring all the documents and some samples of pupils’ writing, as expected.

Moderators introduced themselves. They seemed to be mainly Y6 teachers who also were subject leaders for English. Some had moderated before, but obviously not for the new standards.

The ‘feel’ from the introduction to the session was that it wasn’t as big a problem as we had all been making it out to be. We were definitely to use the interim statements, and ‘meeting’ was indeed equivalent to a 4b.

At my table, we expressed our distrust of this idea and our fear that very few of our pupils would meet expected standards. Work from the first pupil was shared and the criteria ticked off. We looked at about three pieces of work. It came out as ‘meeting’, even though I felt it was comparable to the exemplar, ‘Alex’. The second pupil, from the next school, was ‘nearly exceeding’. I wasn’t convinced. There were lots of extended pieces in beautiful handwriting, but the sentence structures were rather unsophisticated. There was arguably a lack of variety in the range and position of clauses and transitional phrases. There was no evidence of writing for any other curriculum area, such as science.

I put forward the work from a pupil I had previously thought to be ‘meeting’ but had then begun to doubt. I wanted clarification. Formerly, I would have put this pupil at a 4a/5c with the need to improve consistency of punctuation. Our books were the only ones on our table (and others) that had evidence of writing across the curriculum; we moved a few years ago to putting all work in a ‘theme book’ (it has its pros and cons!).

Unfortunately, the session was ultimately pretty frustrating, as we didn’t get to agree on the attainment of my pupil; I was told that there needed to be evidence of the teaching process that had underpinned the writing in the books. That is to say, there should be the grammar exercises where we had taught such things as ‘fronted adverbials’ etc., and then the written pieces in which that learning was evidenced. I challenged that and asked why we couldn’t just look at the writing, as we had done for the first pupil. By then the session was pretty much over. In spite of the moderator’s attempt to finish the moderation for me, we didn’t. The last part of the session was given over to the session leader coming over and asking if we felt OK about everything, and my replying that no, I didn’t. I still didn’t know which of the multiplicity of messages to listen to, and I hadn’t had my pupil’s work moderated. I had seen other pieces of work, but I didn’t trust the judgements that had been made.

The response was ‘what mixed messages?’ and the suggestion that it may take time for me to ‘get my head around it’ just like I must have had to do for the previous system. She seemed quite happy that the interim statements were broadly equivalent to a 4b and suggested that the government certainly wouldn’t want to see the data showing a drop in attainment. I suggested that if people were honest, that could be the only outcome.

My colleague didn’t fare much better. She had deliberately brought samples from a pupil who doesn’t write much, but when he does, it is accurate, stylish and mature. He had a range of pieces, but most of them were short. The moderator dismissed his work as insufficient evidence, but did inform my colleague that she would expect to see the whole range of text types, including poetry – because otherwise how would we show ‘figurative language and metaphor’?

I’m none the wiser, but slightly more demoralised than before. One of my favourite writers from last year has almost given up writing altogether because he knows his dyslexia will prevent him from ‘meeting’. Judging the writing of pupils as effectively a pass or fail is heart-breaking. I know how much effort goes into their writing. I can see writers who have such a strong grasp of audience and style missing the mark by just a few of the criteria. This is like being faced with a wall – if you can’t get over it, stop bothering.

We are likely to be doing a lot of writing over the next few weeks.

Moderation still doesn’t tell us the weight of the pig.

The recent culture of leaving more and more of the process of assessment in the hands of teachers raises the important question of reliability. Much research into teacher assessment, even by strong proponents of its advantages, reveals that it is inherently unreliable. We might have guessed this from our experience of human beings and the reliability of their subjective judgements! This is difficult even for quantitative measures, such as the weight of a pig at a fair, but much more so for such qualitative aspects as those in the wording of rubrics. These are what we are currently working with in the English primary school system. We teachers are required to be: assessing formatively and feeding back; summing up and evaluating; reporting in an unbiased way; and all along being held accountable for progress, which we ourselves are expected to be measuring. Imagine, if you will, the aforementioned pig. Judge its weight yourself now, and then again when you have fed it for a month, but bear in mind that you will be accountable for the progress it has made. How reliable will either of these judgements be?

So, in an attempt to improve the reliability of teacher assessments (in order for them to serve high-stakes, accountability purposes), we introduce the idea of moderation. This usually takes the form of a colleague or external moderator assisting in the judgement, based on the ‘evidence’ produced by the teacher. Now, whilst I can see the value to the teacher of the moderation process, if it involves discussion of criteria and evidence with colleagues and supposed ‘experts’ (who, exactly?), I’m skeptical that simply introducing more people into the discussion will lead to greater reliability. The problem is that the external yardstick is still missing. Even if the teacher and all those involved in the moderation process agree on the level, objective or whatever measurement is required of us, we are still making subjective judgements. Are collective, subjective judgements any better than individual ones? Sometimes they may be, if they genuinely have the effect of moderating extremes. However, we also need to consider the impact of cultural drift. By this, I mean that there is a group effect that reinforces bias, and this does have an impact on assessment. I am convinced that I witnessed this over the years in the assessment of writing, where the bar for attaining each level seemed to be continually raised by teachers afraid that they would be accused of inflating results – a real shame for the pupils who were being judged unfairly. In these instances, the moderation process doesn’t improve reliability; all it does is give a false sense of it, which is then resistant to criticism or appeal. This is where we all stand around staring at the pig and all agree that he looks a bit thinner than he should. Without the use of a weighing device, we really do not know.
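For the statistically minded, the point can be sketched in a few lines of code. This is a toy simulation (all the numbers are invented for illustration): each judge’s guess at the pig’s weight carries their own random error plus a bias shared by the whole group – the ‘cultural drift’ above. Averaging a panel of judges smooths out the random error, so the panel agrees with itself nicely, but the shared bias never averages away.

```python
import random

random.seed(42)

TRUE_WEIGHT = 100.0   # the 'true' value we wish we could measure
SHARED_BIAS = -8.0    # cultural drift: every judge under-marks by this much
NOISE_SD = 10.0       # each judge's individual random error (std deviation)

def judge():
    """One judge's guess: truth + shared bias + personal random error."""
    return TRUE_WEIGHT + SHARED_BIAS + random.gauss(0, NOISE_SD)

def moderated(n):
    """'Moderation' modelled as averaging n independent judges' guesses."""
    return sum(judge() for _ in range(n)) / n

TRIALS = 2000
solo_err = sum(abs(judge() - TRUE_WEIGHT) for _ in range(TRIALS)) / TRIALS
panel_err = sum(abs(moderated(5) - TRUE_WEIGHT) for _ in range(TRIALS)) / TRIALS

print(f"average error, one judge:   {solo_err:.1f}")
print(f"average error, five judges: {panel_err:.1f}")
# The panel's error shrinks a little, but it can never fall below the
# shared bias of 8: more judges buys agreement, not accuracy.
```

The panel looks more reliable than the lone judge, yet its guesses still sit about eight units from the truth – which is precisely the false sense of reliability described above.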

June 2016

I had a look back at this post – moderation being in the wind at the moment. I was interested in articles such as this one, and I wonder what it will take to stop such pointless, meaningless practices in education. Do we not know? Do some people still believe these things work? Isn’t it a bit obvious that teacher assessment for high-stakes purposes is completely counter-productive, and that moderation can in no way be considered a strategy to achieve greater reliability?

I’d like to extend the ubiquitous pig metaphor now. In the case of primary writing moderation in 2016, it’s not even a case of staring at the pig and guessing its weight. We have a farmer with a whole field of pigs – he has been told to guess all their weights, but he’d better not have more than 30% underweight! To make sure he doesn’t cheat, another farmer comes along, equally clueless, and tells him whether his own guesses match the first farmer’s. The farmer next door doesn’t have to go through this pointless ritual. Strangely, that farmer’s pigs are all just a little fatter.