Ed Select Committee report – improvements to come?

The Education Select Committee has published its report into the impact of the changes to primary assessment. It’s been an interesting journey from the point at which I submitted written evidence on primary assessment; I wrote a blog back in October doubting there would be much response, but in fact I was wrong. Not only did the Committee draw widely on practitioners, stakeholders and experts for evidence, but the report suggests that they listened quite well and, more to the point, understood the gist of what we were all trying to say. For anyone who has followed assessment research, most of this is nothing new. Similar things have been said for decades. Nevertheless, it’s gratifying to have some airing of the issues at this level.

Summative and formative assessment

The introduction to the report clarifies that the issues being tackled relate to summative assessment and not the ongoing process of formative assessment carried out by teachers. For me, this is a crucial point, since I have been trying, with some difficulty sometimes, to explain to teachers that the two purposes should not be confused. This is important because the original report on assessment without levels suggested that schools had ‘carte blanche’ to create their own systems. Whilst it also emphasised that purposes needed to be clear, many school systems were either extensions of formative assessment that failed to grasp the implications and the requirements of summative purposes, or they were clumsy attempts to create tracking systems based on data that really had not been derived from reliable assessment!

Implementation and design

The report is critical of the timescale and the numerous mistakes made in the administration of the assessments. The Committee was particularly critical of the STA, which was seen to be chaotic and insufficiently independent. Furthermore, it criticises Ofqual for a lack of quality control, in spite of Ofqual’s own protestations that it had scrutinised the materials. The report recommends an independent panel to review the process in future.

This finding is pretty damning. This is not some tin-pot state setting up its first exams – how is incompetence becoming normal? In a climate of anti-expertise, I suppose it is to be expected, but it will be very interesting to see if the recommendations have any effect in this area.

The Reading Test

The report took on board the widespread criticism of the 2016 Reading Test. The STA defence was that it had been properly trialled and performed as expected. Nevertheless, the good news (possibly) is that the Department has supposedly “considered how this year’s test experience could be improved for pupils”.

Well we shall see on Monday! I really hope they manage to produce something that most pupils will at least find vaguely interesting to read. The 2016 paper was certainly the least well-received of all the practice papers we did this year.

Writing and teacher assessment

Teacher assessment of writing emerged as something that divided opinion. On the one hand there were quotes from heads who suggested that ‘teachers should be trusted’ to assess writing. My view is that they miss the point, and I was very happy to be quoted alongside Tim Oates as having deep reservations about teacher assessment. I’ve frequently argued against it for several reasons (even when moderation is involved), and I believe that those who propose it may be confusing the different purposes of assessment, or failing to see that it’s not about ‘trust’ but about fairness to all pupils and an unacceptable burden on teachers.

What is good to see, though, is how the Committee have responded to our suggested alternatives. Many of us referred to ‘Comparative Judgement’ as a possible way forward. The potential of comparative judgement as an assessment method is not new, but is gaining credibility and may offer some solutions – I’m glad to see it given space in the report. Something is certainly needed, as the way we currently assess writing is really not fit for purpose. At the very least, it seems we may return to a ‘best-fit’ model for the time being.

For more on Comparative Judgement, see:

Michael Tidd  The potential of Comparative Judgement in primary

Daisy Christodoulou Comparative judgment: 21st century assessment

No More Marking

David Didau  10 Misconceptions about Comparative Judgement
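To make the idea concrete, here is a rough sketch of how comparative judgement turns a pile of pairwise ‘which is better?’ decisions into a measurement scale. This is my own illustration, not any official tool: the script names and judgement data are invented, and I’ve used a simple Bradley–Terry model, which is the kind of statistical machinery these systems typically rest on.

```python
import math

# Invented pairwise judgements: each tuple is (winner, loser) from a judge
# deciding which of two pieces of writing is the better.
judgements = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("B", "A"), ("A", "C"), ("C", "B"),
]

scripts = sorted({s for pair in judgements for s in pair})
wins = {s: sum(1 for w, _ in judgements if w == s) for s in scripts}

# Bradley-Terry model: estimate a quality p for each script such that
# P(i beats j) = p_i / (p_i + p_j), fitted by simple fixed-point iteration.
p = {s: 1.0 for s in scripts}
for _ in range(200):
    new_p = {}
    for s in scripts:
        denom = 0.0
        for w, l in judgements:
            if s in (w, l):
                other = l if s == w else w
                denom += 1.0 / (p[s] + p[other])
        new_p[s] = wins[s] / denom
    # Normalise so the values stay on a comparable scale each iteration.
    total = sum(new_p.values())
    p = {s: v * len(scripts) / total for s, v in new_p.items()}

# Report on a log scale, as comparative-judgement tools typically do.
for s in scripts:
    print(s, round(math.log(p[s]), 2))
```

The point is that no criteria are ticked off anywhere: the scale emerges purely from repeated holistic comparisons, pooled across judges. Real systems pool thousands of judgements, but the principle is the same.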

Support for schools

The report found that the changes were made without proper training or support. I think this is something of an understatement. Systems were changed radically without anything concrete to replace them. Schools were left to devise their own systems, and it’s difficult to see how anyone could have failed to foresee that these would be inconsistent and often inappropriate. As I said in the inquiry, there are thousands of primary schools finding thousands of different solutions. How can that be an effective national strategy, particularly as, by their own admission, schools lacked assessment expertise?

Apparently some schools adopted commercial packages which were deemed ‘low quality’. This, too, is not a surprise. I know that there are teachers and headteachers who strongly support the notion of ‘doing their own thing’, but I disagree, and have referred to it in the past as the ‘pot-luck’ approach. Some ways of doing things are better than others; what we need to do is make sure we implement the most effective methods, not leave it to the whim of individuals. Several times, Michael Tidd has repeated that we were offered an ‘item bank’ to help teachers with ongoing assessment. The report reiterates this, but I don’t suggest we hold our collective breath.

High-stakes impact and accountability

I’m sure the members of the Assessment Reform Group, and other researchers of the 20th century, would be gratified to know that this far down the line we’re still needing to point out the counter-productive nature of high-stakes assessment for accountability! Nevertheless, it’s good to see it re-emphasised in no uncertain terms, and the report is very clear about the impact on well-being and on the curriculum.

I’m not sure that their recommendation that Ofsted broadens its focus (again), particularly including science as a core subject, is going to help. Ofsted has already reported on the parlous state of science in the curriculum, but the subject has continued to lose status since 2009, as a direct result of the assessment of the other subjects. What is assessed for accountability has status. What is not, does not. The ASE argues (and I totally understand why) that science was impoverished by the test at the end of the year. Nevertheless, science has been impoverished far more, subsequently, in spite of sporadic ‘success stories’ from some schools. This is a matter of record (pdf).

Teacher assessment of science for any kind of reliable purpose is even more fraught with difficulties than the assessment of writing. The farce, last year, was schools trying to decide whether they really were going to give credence to the myth that their pupils had ‘mastered’ all 24 of the objectives, or whether they were going to ‘fail’ them. Added to this is the ongoing irony that primary science is still ‘sampled’ using an old-fashioned conventional test. Our inadequacy in assessing science is an area that is generally ignored or, to my great annoyance, completely unappreciated by bright-eyed believers who offer ‘simple’ solutions. I’ve suggested that complex subjects like science can only be adequately assessed using more sophisticated technology, but edtech has stalled in the UK, so I hold out little hope for developments in primary school!

When I think back to my comments to the inquiry, I wish I could have made myself clearer in some ways. I said that if we want assessment to enhance our pupils’ education, then what we currently have is not serving that purpose. At the time, we were told that if we wished to comment further on the problem of accountability, we could write to the Committee, which I did. The constant argument has always been ‘…but we need teachers to be accountable.’ I argued that they need to be accountable for the right things, and that a single yearly sample of small populations in test conditions did not ensure this. This was repeated by so many of those who wrote evidence for the Committee that it was obviously hard to ignore. The following extract from their recommendations is probably the key statement from the entire process. If something changes as a result of this, there might be a positive outcome after all.

Many of the negative effects of assessment are in fact caused by the use of results in the accountability system rather than the assessment system itself. Key Stage 2 results are used to hold schools to account at a system level, to parents, by Ofsted, and results are linked to teachers’ pay and performance. We recognise the importance of holding schools to account but this high-stakes system does not improve teaching and learning at primary school. (my bold)

Timings and Tides: the Chartered College of Teaching inaugural conference – Sheffield

I’ve followed the development of the Chartered College of Teaching with some interest and much scepticism. In this mode, I joined as a founder member and spent not an inconsiderable amount of money and time attending the inaugural conference in Sheffield. I’d have liked to attend the London conference, but they saw fit to hold it during the week when only half of us were actually on half-term and many of us could not attend.

Nevertheless, I went with an open mind. I’m aware that there are great enthusiasts out there who see this as a bright beacon of hope on our general plain of educational misery. I wanted to see if there was any basis for this. The answer is that I’m not sure; I’m still sceptical. This blog is my discussion of the conference itself and the College overall.

Why the profession needs a collective voice

I’m afraid I am unable to comment on the first two items on the agenda, as it was impossible to arrive on time coming by train, but I was in time to catch the talk by Professor Chris Husbands (Vice-Chancellor of Sheffield Hallam University). He drew well on his experience as a teacher and spoke convincingly, I thought, on notions of ownership and what matters. He sought to redirect the idea that ‘teachers matter’ towards ‘teaching matters’. I think he was making the point that we need to focus less on individuals being the key to a successful education system and more on systematic improvement of the process. If so, then I would agree this is probably correct – we need to address education in this country at a level that is more than just ‘holding teachers to account’. Nevertheless, there were dissenting voices in the room, arguing (rightly) that teaching is dependent on individuals, in terms of defending teachers’ well-being, and because teaching requires complex ‘on-the-hoof’ analysis and seems inextricably tied up with human interactions and relationships.

I question, too, the fundamental assumption of a collective voice. Whilst I hate the ‘pot-luck’ approach to education that the English seem unable (and unwilling) to challenge, I know that there are many voices, and I worry that a ‘collective’ voice may turn out to be a dominant one. I’m unconvinced that the cult of evidence is going to prevent teachers being censured, yet again, by the opinionated but ill-informed, for doing the ‘wrong thing’. I know this is certainly the fear of the neo-trads, even though the tide seems currently in their favour; in fact it’s a cultural, not a political, problem: fads and flavours of the month cut all ways, and always have. Which brings me to the next item.

Why we need evidence

The panel, chaired by Ann Mroz, comprised Sinead Gaffney, Lisa Pettifer, Aimee Tinkler, John Tomsett and Professor Samantha Twiselton. John opened by proposing the need to weigh up the forces of authority and evidence, suggesting that we should not be afraid to swim against the tide if necessary. At this point I couldn’t agree more, although I suspected that not all tides were equal in his mind. There was much discussion about the need for an evidence-based approach, but I was prompted to tweet my misgivings.

There was a lot said about the importance of an ‘evidence-based’ profession, the use of evidence, and about teachers conducting research in their own environments; this is where I derive my concern that we’re heading towards something more like a ‘cult of evidence’ than an informed profession that questions assumptions (I’m not alone, I imagine). To those without a scientific or a sceptical background, the use of ‘evidence’ as a holy grail is as dangerous as it is essential. I felt that Sinead was something of a lone voice calling hard for the critical evaluation of evidence rather than the gullible application of a set of tools condoned by the EEF. Personally, I have found that most people are easily persuaded by rhetoric and quickly descend into ritual. Teachers generally are ignorant. (If you wish to rebut that, consider – my anecdotal evidence – that not a single member of the very large staff at my school had actually heard of the College of Teaching, as I was leaving on Friday.)

It is very difficult to impress upon people that research evidence almost never says we should do something one way or the other. On the contrary, its power is in calling into question things for which there is very little or no evidence. Most educational research would be considered worthless by any scientific standards, and much of it is contradictory. Almost none of it stands up to replication. See this timely article. Like Sinead, I have looked past the meta-analyses on the EEF toolkit and examined some of the original research. If you do the same, you’ll find much of it evaporates into thin air. Try it for ‘feedback’ and see what happens. Moreover, there should be serious doubts about encouraging widespread experimentation and research conducted by teachers. It’s difficult to obtain rigour in research even under the best experimental conditions – biology is notoriously tricky. If you add to that the ethical and social considerations of working with young children, and then of sharing those unreliable findings, we’re opening a massive can of worms (no biological pun intended). Whilst some on the panel were arguing that we needed to be able to judge whether the evidence was robust, audience members, without irony, were still calling for the application of instinct, and John T reminded us that any consideration of evidence at all was still ‘miles away’ in most institutions.

The other elephant in the room is that there is probably far too great an acceptance of the way in which we measure effects in educational research. I don’t mean a statistical issue, but a logistical and a philosophical one. When we try to determine if a practice has an effect, how do we measure the effect if the product is learning? It may be easy enough to determine if an intervention on multiplication facts has worked – simply test pupils to see if they know those facts. But what if the outcome is trickier to measure? I entered a discussion recently where it was argued that allowing pupils to do practical science might not be important, because the evidence showed that didactic methods trumped investigation! My question would be, ‘in what way?’ If the measure is filling in the answer boxes on a test paper (and believe me when I say I’m a strong advocate of that in appropriate ways), then perhaps teaching the pupils to do just that will produce a greater effect. Yet practical science is about being able to do practical science! Investigations should enable us to become better at investigations. I’m not alone in arguing for appropriate measures – yet most evidence is based on a very narrow set. It’s difficult to see a move away from this in an educational system that now expects secondary teachers to predict art grades from KS2 aggregated English and Maths scores!

Going beyond your comfort zone

Penny Mallory was extremely engaging, and I was extremely discomfited by the implications of her speech. Penny overcame self-doubt and domestic adversity to become a champion rally driver. Her questions to the audience were, ‘Can anyone become “world class”?’ and ‘What qualities does a “world class” person have?’ I was gratified to hear some of the answers along the lines of, ‘It depends what you mean’ and ‘Good genes’. I know what the motivational intention was: we limit ourselves; we need a growth mindset; we should take risks etc. I’m slightly, but not entirely, on board with the growth mindset philosophy. I believe it is true that we can play what Eric Berne’s patients used to refer to as the game of ‘Wooden Leg’, and I work hard to counter that with my pupils. However, I profoundly dislike the contemporary message of ‘social mobility’ and the new populism which the College also seems to be promoting. Winning depends on there being others who will lose. Climbing the greasy pole will require stepping on the competitors. It’s a toxic message in a ruthless climate, one which seeks to replace the greater aspiration of social justice. Aiming to be ‘world class’ as an individual is a very selfish pursuit which by necessity will always be limited to a few. Becoming world class as an organisation (or as a country!) needs a different approach altogether – one that I feel we’ve departed from rapidly since the 1980s.

Why being brave is important

Tim O’Brien (Visiting Fellow in Psychology and Human Development, UCL Institute of Education) chaired this panel. It being after lunch, my note-taking had decreased and I had moved myself to the back of the room to exit if need be, but it was interesting to consider ideas of bravery. Perhaps the College could be a force for good, recognising that the profession is currently driven more by fear than it should be.

I know that many pin their hopes on the College to remedy this. Tim’s an eminent psychologist who comes across as knowing his stuff. I was in one of the focus groups he led in the ‘grounded theory’ research he conducted when the College was deciding its remit, and he spoke of this, thanking those of us who were there. In the midst of all the concurrence on the need for bravery, however, I wished I could have had the opportunity to point out that there’s a reason for the fear in education; being brave comes at a cost. Do those advocating it understand the risks they are asking teachers to take?

Networking

I networked just enough to find that most of the attendees were enthusiasts and that some of them, at least, were waiting specifically for the last part of the day – ‘Improving Wellbeing in the Classroom’ with Professor Tanya Byron. She was a bona fide TV celebrity, so to speak, and the audience seemed engaged. I left before the end – it was old territory for me.

So was it worth it? Well, I still feel I have done the right thing in joining and in attending. This is a novel development that may bring something good. At the very least, access to research is something that I’ve missed since finishing the Master’s. In the conference itself, I would have benefited enormously from a more structured approach to networking. This was left largely to us to do informally during the breaks. I knew that there were twitter contacts there I would have liked to meet, but it was not easy to discern who they were and my social ineptitude hindered me in approaching people ‘cold’, particularly if they were already talking in apparently established groups.

Ultimately, I’d make a plea to those who are sceptical, members or otherwise. Keep it up. To the enthusiasts, I’d say the same, alongside the request that you allow all manner of criticism. There was much enthusiasm evident among the attendees; this in itself can create a charismatic tide. Those who swim against it are always needed.

Perverse incentives are real

I’ve just spent a few pleasurable hours looking at the science writing from my y6 class. I say pleasurable, because they’re very good writers this year (thanks Mr M in y5!), but also because there were elements of their writing that hinted at an education. Some children had picked up on, and correctly reinterpreted, the higher level information I had given in reply to their questions on the chemistry of the investigation. All of them had made links with ‘the real world’ following the discussions we’d had.

It all sounds good doesn’t it?

The sad truth is that in spite of the fact that I’m an advocate of education not attainment, the knowledge of what will and will not form part of the end of year measurement is still there, influencing my decisions and having a detrimental impact on my education of the children.

This is because while I am marking their work, I am making decisions about feedback and whether to follow up misconceptions, or take understanding further. Let’s remember that this is science. Although I personally view its study as crucial, and its neglect as the source of most of the world’s ills, it has nevertheless lost its status in the primary curriculum. So my thoughts are, ‘Why bother? This understanding will not form part of any final assessment and no measurement of this will be used to judge the effectiveness of my teaching, nor of the school’. Since this is true for science, still nominally a ‘core subject’, how much more so for the non-entities of art, music, DT, etc.? Is there any point in pursuing any of these subjects in primary school in an educational manner?

The argument, of course, is that we have an ethical responsibility as educators to educate; that teachers worth their salt should not be unduly swayed by the knowledge that a narrow set of criteria, applied to a small population of pupils at the end of KS2, is used to judge our success or failure. It reminds me of the argument that senior leaders shouldn’t do things just for OFSTED. It’s an unreasonable argument. It’s like saying to the donkeys, ‘Here’s a carrot and a very big stick, but just act as you would if they weren’t there!’

I’m not in favour of scrapping tests and I’m no fan of teacher assessment, but it’s undeniable that what I teach is influenced by the KS2 SATs, and not all in a good way. The primary curriculum is vast; the attainment tests are narrow. This also brings into question all research based on using attainment data as a measure of success. Of course it’s true that the things they measure are important – they may even indicate something – but there are a lot of things which aren’t measured which may indicate a whole lot of other things.

I can’t see how we can value a proper primary education – how we can allow the pursuit of further understanding – if we set such tight boundaries on how we measure it. Testing is fine – but if it doesn’t measure what we value then we’ll only value what it measures. I’m resistant to that fact, but I’m not immune. I’m sure I’m no different to every other primary teacher out there. Our assessment system has to change so that we can feel fine about educating our pupils and not think we’re wasting our time if we pursue an area that doesn’t count towards a final mark.


Primary assessment is more than a fiasco – it’s completely wrong

I’ve written my submission to the Education Committee’s inquiry on primary assessment, for what it’s worth. I can’t imagine that they’re interested in what we have to say, given that this government has ignored just about all the expert advice it has ever received or requested on nearly everything else. This country has ‘had enough of experts’, after all.

I won’t paste my submission here – there are various restrictions on publishing them elsewhere, it seems. However it’s a good time to get some thoughts off my chest. Primary assessment (and school-based assessment generally) has all gone a bit wrong. OK, a lot wrong. It’s so wrong that it’s actually very damaging. Conspiracy theorists might have good cause to think it is deliberate; my own cynicism is that it is underpinned by a string of incompetencies and a distinct failure to listen at all to any advice.

In thinking about why it has all gone wrong, I want to pose a possibly contentious question: is the attainment we are attempting to measure a thing that should dominate all educational efforts and discourse? I’ve written before about my growing doubts about the over-emphasis on attainment and how I think it detracts from the deeper issue of education. The further we get down this line, particularly with the current nonsense about bringing back selective education, the more this crystallises for me. Just to be clear, this is not an anti-intellectual stance, nor a woolly, liberal dumbing-down view. I fully embrace the idea that we should not put a ceiling on achievement for anybody. Having a goal and working towards it – having a way of demonstrating what you have achieved – is an admirable thing.

What I find ridiculous is that the kind of attainment that is obsessing the nation doesn’t actually mean very much, and yet somehow we are all party to serving its ends. Put it this way: tiny fluctuations in scores in a set of very narrow domains make headlines for pupils, teachers, schools, counties etc. Every year we sweat over the percentages. If there’s a rise above the ‘expectation’ we breathe a sigh of relief. If, heaven forbid, we had a difficult cohort and a couple of boxes are in the ‘blue zone’, we dread the repercussions because now we’re no longer an outstanding school. But, as Jack Marwood writes here, there’s no pattern. We’ve even begun to worry about whether we’re going to be labelled a ‘coasting school’! Good should be good enough, because the hysteria over these measures is sucking the life out of the most important resource – us. Of course the inspectorate needs to be on the lookout for actually bad schools. Are these really going to be so difficult to spot? Is it really the school that was well above average in 2014 and 2015 but dipped in 2016? Is the child who scores 99 on the scaled score so much more of a failure than the one who scored 101? Is our group of 4 pupil premium children getting well above average, in a small set of tests, an endorsement of our good teaching compared to another school’s 4 getting well below?

Attainment has become an arms race, and teachers, pupils and parents are caught in the crossfire. In spite of the ‘assessment without levels’ rhetoric, all our accountability processes are driven by a focus on attainment against one level. This is incredibly destructive in my experience. Notwithstanding those self-proclaimed paragons of good practice who claim that they’ve got the balance right etc., what I’ve mainly seen in schools are teachers at their wits’ end, wondering what more they can possibly do (what miracle of intervention they can concoct) to ‘boost’ a group of ‘under-performing’ children to get to ‘meeting’, whilst maintaining any kind of integrity with regard to the children who have never been anywhere near. I was recently told in a leadership meeting that all children should make the same amount of progress: those ‘middle achievers’ should be able to progress at the same rate as the ‘high achievers’. It’s the opposite which is true. The high achievers are where they are exactly because they made quicker progress – but the ‘middle achievers’ (and any other category – good grief!) will also get to achieve, given time. And while all this talk of progress is on the table – let’s be honest – we’re talking about ‘attainment’ again: a measure taken from their KS2 assessments, aggregated, and compared to KS1 in a mystical algorithm.
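For what it’s worth, the ‘mystical algorithm’ is at heart a value-added calculation. Here is a deliberately simplified sketch – invented numbers, and a much cruder baseline grouping than the actual prior-attainment bands – just to show the shape of it: each pupil’s KS2 score is compared with the average KS2 score of pupils nationally who had the same KS1 starting point, and the school’s progress figure is the average of those differences.

```python
from statistics import mean

# Hypothetical 'national' data: KS1 baseline group -> KS2 scaled scores.
national = {
    "low": [95, 97, 98, 100, 96],
    "middle": [100, 102, 104, 103, 101],
    "high": [106, 108, 110, 107, 109],
}
# Expected KS2 score for each baseline group.
expected = {group: mean(scores) for group, scores in national.items()}

# Hypothetical school cohort: (KS1 baseline group, KS2 scaled score).
cohort = [("middle", 104), ("middle", 100), ("high", 106), ("low", 99)]

# A pupil's progress score is the difference from the expected score for
# their baseline group; the school's figure is the cohort average.
progress = [ks2 - expected[group] for group, ks2 in cohort]
print(round(mean(progress), 2))
```

Note how a school’s headline figure can swing on a handful of pupils: with a cohort of four, one child having a bad day moves the average by a quarter of their individual dip.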

It’s not as if the issues surrounding assessment have never been considered. Just about all the pitfalls of the recent primary debacle have been written about endlessly, and frequently predicted. High-stakes testing has always been the villain of the piece: perverse incentives to teach to the test, narrowing of the curriculum, invalidity of the testing domain, unreliability/bias/downright cheating etc. The problem is the issues won’t go away, because testing is the wrong villain. Testing is only the blunt tool used to fashion the club of attainment with which to beat us (apologies for the extended metaphor). I’m a big fan of testing. I read Roediger and Karpicke’s (pdf) research on the ‘testing effect’ in the early days, long before it became a fashionable catchphrase. I think we should test as many things in as many ways as we can: to enhance recall; to indicate understanding; to identify weaknesses; to demonstrate capacity; to achieve certification etc. I was all in favour of Nicky Morgan’s proposal to introduce an online tables test. What a great idea! Only – make it available all the time and don’t use the results against the pupil or the teacher. No – testing doesn’t cause the problem. It’s caused by the narrow, selective nature, the timing and the pressure of attaining an arbitrary ‘meeting expectations’ (one big level, post levels). The backwash on the curriculum is immense. Nothing has any status anymore: not art, not music, not D&T, not history nor geography, and certainly not science – that ‘core subject’ of yore! Some might argue that it’s because they’re not tested, and of course I agree up to a point, but the real issue is that they’re not seen as being important in terms of attainment.

I shall add a comment here on teacher assessment, just because it continues to drag on in primary assessment like some old ghost that refuses to stop rattling its chains. If teacher assessment is finally exorcised, I will be particularly grateful. It is an iniquitous, corrupted sop to those who believe ‘teachers are best placed to make judgements about their own pupils’. Of course they are – in the day to day running of their class and in the teaching of lessons – but teacher assessment should not be used in any way to measure attainment. I am not arguing that teachers are biased, that they make mistakes or inflate or deflate their assessments. I am arguing that there is simply no common yardstick and so these cannot be considered reliable. The ‘moderated’ writing debacle of 2016 should have put that fact squarely on the table for all doubters to see. Primary assessments are used in accountability. How can we expect teachers to make judgements that could be used against them in appraisal and in pay reviews?

I’m an idealist in education. I think that it has a purpose beyond the establishment of social groups for different purposes (leadership, administrative work, manual labour). I don’t think that it is best served by a focus on a narrow set of objectives and an over-zealous accountability practice based on dubious variations in attainment. I tried to sum up my proposals for the Education Committee, and I will try to sum up my summing up:

  • Stop using small variations in flawed attainment measures for accountability
  • Give us fine-grained, useful but low-stakes testing, for all (use technology)
  • If we have to measure, get rid of teacher assessment and give us lots of common, standardised tools throughout the primary phase
  • Give us all the same technology for tracking the above (how many thousands of teacher hours have been spent on this?)
  • If you have to have end of stage tests, listen to the advice of the experts and employ some experts in test design – the 2016 tests were simply awful
  • Include science
  • Be unequivocal in the purposes of assessment and let everybody know

I didn’t say ‘get rid of the end of key stage assessments altogether and let us focus again on educating our pupils’. Maybe I should have.


Not good is sometimes good

I was reading Beth Budden’s blog on the cult of performativity in education and thinking of the many times when I’ve thanked the gods no-one was watching a particular lesson. It’s gratifying that there is a growing perception that a single performance in a 40 minute session is no kind of measure of effectiveness – I’ve railed against that for many years. During observations, I’ve sometimes managed to carry off the performance (and it’s always a hollow victory) and sometimes I haven’t (it always leads to pointless personal post-mortems). Lately I’ve managed to introduce the idea that I will give a full briefing of the lesson, the background, my rationale, the NC, the focus, the situation etc. etc. before any member of the leadership team sets foot in my classroom to make a formal observation. It’s been a long time coming and it goes some way to mitigating the performance effects. Not everyone in my school does it.

But what about the lessons that I really didn’t want anyone to watch? If they had been seen, would I be recognised as a bad teacher? When I think about lessons that seemed pretty poor by my own judgement, they almost always lead on to a better understanding overall. A recent example is a lesson I taught (nay crammed) on the basics of electricity. It was a rush. The pupils needed to glean a fair amount of information in a short time from a number of sources. The resultant writing showed that it was poorly understood by everyone. Of course, it was my fault, and I’d have definitely failed that lesson if I were grading myself. Fortunately I wasn’t being graded and nobody was watching. Fortunately, also, I could speak to the pupils the day after, having looked at their confused writing on the subject, tell them that I took responsibility for it being below par, and say that we needed to address the myriad misconceptions that had arisen. We did. The subsequent work was excellent and suggested a far higher degree of understanding from all; I assumed that something had been learned. Nowhere in here was a ‘good lesson’, but somewhere in here was some actual education – and not just about electricity.

Trialling moderation

A quick one today to cover the ‘trialling moderation’ session this afternoon.

We had to bring all the documents and some samples of pupils’ writing, as expected.

Moderators introduced themselves. They seemed to be mainly Y6 teachers who also were subject leaders for English. Some had moderated before, but obviously not for the new standards.

The ‘feel’ from the introduction to the session was that it wasn’t as big a problem as we had all been making it out to be. We were definitely using the interim statements and that ‘meeting’ was indeed equivalent to a 4b.

At my table, we expressed our distrust of this idea and our fear that very few of our pupils would meet the expected standards. Work from the first pupil was shared and the criteria ticked off. We looked at about three pieces of work. It came out as ‘meeting’, even though I felt it was comparable to the exemplar, ‘Alex’. The second pupil, from the next school, was ‘nearly exceeding’. I wasn’t convinced. There were lots of extended pieces in beautiful handwriting, but the sentence structures were rather unsophisticated. There was arguably a lack of variety in the range and position of clauses and transitional phrases. There was no evidence of writing for any other curriculum area, such as science.

I put forward the work from a pupil I had previously thought to be ‘meeting’ but had then begun to doubt. I wanted clarification. Formerly, I would have put this pupil at a 4a/5c with the need to improve consistency of punctuation. Our books were the only ones on our table (and others) that had evidence of writing across the curriculum; we moved a few years ago to putting all work in a ‘theme book’ (it has its pros and cons!).

Unfortunately the session was ultimately pretty frustrating, as we didn’t get to agree on the attainment of my pupil; I was told that there needed to be evidence of the teaching process that had underpinned the writing in the books. That is to say, there should be the grammar exercises in which we had taught such things as ‘fronted adverbials’, and then the written pieces in which that learning was evidenced. I challenged that and asked why we couldn’t just look at the writing, as we had done for the first pupil. By then the session was pretty much over. In spite of the moderator’s attempt to finish the moderation for me, we didn’t. The last part of the session was given over to the session leader coming over and asking whether we felt OK about everything, and my replying that no, I didn’t. I still didn’t know which of the multiplicity of messages to listen to, and I hadn’t had my pupil’s work moderated. I had seen other pieces of work, but I didn’t trust the judgements that had been made.

The response was ‘what mixed messages?’ and the suggestion that it may take time for me to ‘get my head around it’ just like I must have had to do for the previous system. She seemed quite happy that the interim statements were broadly equivalent to a 4b and suggested that the government certainly wouldn’t want to see the data showing a drop in attainment. I suggested that if people were honest, that could be the only outcome.

My colleague didn’t fare much better. She deliberately brought samples from a pupil who writes little, but when he does, it is accurate, stylish and mature. He had a range of pieces, but most of them were short. The moderator dismissed his work as insufficient evidence, but did inform my colleague that she would expect to see the whole range of text types, including poetry, because otherwise how would we show ‘figurative language and metaphor’?

I’m none the wiser but slightly more demoralised than before. One of my favourite writers from last year has almost given up writing altogether because he knows his dyslexia will prevent him from ‘meeting’. Judging the writing of pupils as effectively a pass or fail is heart-breaking. I know how much effort goes into their writing. I can see writers who have such a strong grasp of audience and style missing the mark by just a few of the criteria. It’s like being faced with a wall – if you can’t get over it, stop bothering.

We are likely to be doing a lot of writing over the next few weeks.


If I were the school leader…

I have a student teacher on placement in my class at the moment. It’s interesting to remind myself of the long list of criteria in the Teachers’ Standards that we have to consider in observations. As a teacher giving advice, I know which of these are important and which I’d give a lot less weight to when making any kind of value judgement.

I’ve never been a fan of classroom observations – for all the reasons that are now part of general discussion – particularly those that attempt to grade the teacher based on a snapshot of 20-40 minutes. It’s not how I’d do it. But the job of a school leader is a tough one, I believe, and nowhere tougher than in securing quality of teaching among the staff. If it were me, what would I look for?

When teachers are worrying about trying to tick the increasing number of boxes put in front of us, actual performance deteriorates. We focus on what we think the assessment of our practice will be, not on what we are actually doing. Humans can’t multi-task: by attending to the process, the process itself suffers. This is a well-documented tactic used by those who would seek to remove unwanted personnel: increase the level of scrutiny and nit-pick every move so that eventually the subject can hardly function. It is a hard-nosed game that often ends in resignation, mental breakdown and sometimes suicide.

It’s not really the way I would go, and if micro-management is not the best way to ensure the pupils are getting a good education, then perhaps it boils down to a much smaller but more important set of desirable skills. I think my list would be brief. I’d be looking specifically for evidence that the teacher:

  • Knows the subject(s) (and the curriculum) well
  • Knows what the pupils have learned and what to teach next
  • Manages behaviour so that pupils can focus
  • Teaches clearly so that pupils can understand
  • Picks up on issues and remedies them
  • Is compassionate

Everything else, surely, is either part of the craft or derived from opinion?

Challenge to this is welcome.