Testing their resilience

After nearly three decades of primary teaching, you’d think I’d have a secure philosophy, but we’re currently caught in the variable winds of different approaches – sometimes considered more traditional, sometimes more progressive, and often seemingly contradictory. I can’t follow the apparently narrow path of either, but I do question myself on what my own philosophy is. If anything, it’s that education is paramount – and by that I don’t mean ‘learning’ and I definitely don’t mean ‘attainment’, as I have blogged about here before.

I tried to give my class an analogy last week. I drew it as a bowl into which different pieces of knowledge were put. There’s no predicting what will be useful knowledge, but what is certain is that the more there is and the wider the range, the better equipped they will be to link it together in an effective way. This bears no relationship to fixed notions of attainment, cleverness, ability, SEND, class or all the other supposed divisions this country likes to impose on its citizens. Anybody can add to their bowl of knowledge at any time.

With that in mind, I want to ride roughshod over anything I see as an impediment to the education of each and every human being. It’s a global goal for good reason, and it’s not difficult to think of examples where the high-attaining but poorly educated have had a negative impact on all of us – on the planet.

I try hard to explain to my pupils that the important thing is ‘knowing stuff’. They think it’s important to be ‘good at stuff’, and increasingly I’m stunned by some of the reactions and self-denigration I’m seeing: a small error causes a child to throw their book on the floor; not immediately ‘getting’ long division makes another throw his hands up and begin to weep; the feeling that something is hard stops a third from ever tackling a new area. Simply talking ‘growth mindset’ has no impact.

Is it really all down to the high-stakes testing culture that we’re in? Though I totally agree with the arguments against it, I struggle to accept that this is the whole issue – mainly because there has been high-stakes testing for as long as I’ve been teaching, yet this feels like a new phenomenon. I don’t think I’ve ever before encountered pupils with such a fragile sense of their own ability to overcome small setbacks, no matter how much I tell them that learning happens when we put right what we got wrong.

Recent whole-staff CPD seems to pull us in opposite directions. In some sessions, we’re told to challenge pupils to step outside their comfort zone; in others, that the stress of our system is damaging their mental health. It’s difficult to know when to persevere and when to back off, and I’ve always found it hard to do the latter. I’m a fan of tests, for example, even when they’re not embraced by every child in the class. In the past, my classes were tested regularly on old SATs papers without any of the symptoms I’m seeing recently. Tests help us establish what we can remember and identify what we don’t know. Furthermore, they aid memory. This is now common knowledge, though it wasn’t six years ago when I first came across Roediger’s work.

This year, a SATs-style arithmetic test caused Ferdy* to cry and put his head on the desk when we marked it. The second one caused him to wail when it was announced, and he was furious with himself again when we marked it. He wasn’t ‘doing badly’; he just wasn’t perfect. It was nearly impossible to help him with the misconceptions that we could identify – it was as if he thought it was magic and he didn’t have the special ability. At this point it seemed like cruelty to force the poor lad to go through more. I was conscious, however, that it’s not actually torture – it’s just maths. I feel pretty sure that giving up and giving in – avoidance – reinforces the negativity and doesn’t help Ferdy in the long run. So I persevered and gave them all another test two weeks later, and then another. Ferdy has pretty much sorted out every misconception that was revealed in the first test. He correctly calculated all the long divisions – his nemesis. So it paid off. Ferdy’s sense of his ability to overcome obstacles is strengthened and he’s very pleased with himself. It’s a bit of a relief to me, too.


*He’s not really called Ferdy.


Assessment for Accountability – Taking the Biscuit

Thinking about assessment and accountability again. I adapted this from a letter I wrote to the then Ed Select Committee.

The problem of accountability

If we take it to be the case that teachers and schools need to be ‘held to account’, then we need to ask ourselves some questions.

Held to account for what?

The answer to this is crucial. For a long time we were held to account for pupil ‘attainment’. Recently there has been the reasonable suggestion that there are many factors outside of our control which impact on attainment, and that progress might be a better measure of how good or bad we are. The measurement of progress, nevertheless, remains a massive challenge, in spite of attempts to contextualise it or to use national trends for comparison: baseline data is not reliable (it’s ludicrous to believe that you can use the behaviour of 4-year-olds to derive data that will hold teachers to account at the end of KS2 – and beyond!); pupils do not make standard amounts of progress; the domains at the start and end of the progress measurement are different (GCSE art teachers, beware!); cohorts are different; statistical significance is difficult with small groups, and so on.
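To illustrate that last point about small groups, here’s a quick back-of-the-envelope simulation (my own toy numbers, not any official model): even if nothing about a school’s teaching changed from year to year, single-form-entry cohorts of thirty would still produce noticeably different headline percentages by chance alone.

```python
# Toy simulation: a 'school' where every pupil genuinely has the same 60%
# chance of reaching the expected standard. The year-to-year swing in the
# headline figure is pure sampling noise - nothing about the teaching changes.
import random

random.seed(1)

def headline_percentage(n_pupils=30, p_meet=0.6):
    meeting = sum(random.random() < p_meet for _ in range(n_pupils))
    return round(100 * meeting / n_pupils)

ten_years = [headline_percentage() for _ in range(10)]
print(ten_years)  # ten 'years' of results for an identical school
```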

Whilst ‘progress’ seems at first sight a fairer and preferable measure to ‘attainment’, neither is sufficient for our purposes – not in the way either is currently measured, and not when matched against the aspirations of the National Curriculum.

Does the current system even serve the right purpose?

In the drive to measure ‘attainment’ and ‘progress’, I think we sometimes forget that we are using these things as proxies for the quality of the education being provided. We need to return to the drawing board for how we might ensure this happens. Currently we use an assessment system that cannot do this; the measurement is too narrow, subject to chance variables, and too much driven by fear of failure, leading to all the perverse incentives the assessment experts have been writing about for so many decades. A quality primary education is not ensured by testing the very few items that are currently measured at the end of KS2, any more than a quality factory is ensured by eating one of its biscuits.

I think schools and teachers do want to provide a good education to their pupils and that a climate of fear is unnecessary and counterproductive. Real accountability must involve a move away from looking only at outcomes and focus instead on quality input: we need well-educated teachers with excellent and maintained subject knowledge, quality text books produced by experts and thoroughly vetted by the profession, online materials and required reading, use of evidence and avoidance of fads. Quality training is essential, as is development and retention of teachers with expertise.

How can we ensure accountability where it counts?

Realistically, I don’t expect a quick move away from summative assessments for the purpose of accountability, in spite of all the arguments against. But I feel that we could address some of the issues that arise with the current system by generating and providing (to the DfE if need be) not less but more information:

  • Frequent, low-stakes tests help both teaching and learning – require/provide tests throughout the year, every year.
  • Fine-grained, specific tests provide useful information – test what we want to know about and keep that data.
  • Assessing the same domain more than once and in different ways helps to reduce unreliability – do not rely on one single end-of-year test.
  • Testing earlier in the cycle gives useful feedback for teaching – do not wait until the end of the year or the end of the Key Stage.
  • Random selection from a broad range of criteria helps to reduce ‘teaching to the test’ – test knowledge in all curriculum areas without publishing a narrow list of criteria.
  • Use assessment experts and design assessments that test what we want pupils to know or do. Criteria need to be reasonable – not obscure and mystical as they have been recently.

If these aspects were applied to an assessment system throughout the primary phase, I believe we could enhance learning, improve accountability in what really matters and provide vast amounts of data.

We really need to make better use of technology at all stages; this is the only way in which we can feasibly make assessment serve multiple purposes. There would need to be a move away from the high stakes pass/fail system which is not fit for purpose, towards a timely monitoring and feedback system that could alert all stakeholders to issues and provide useful tools for intervention. Data collected from continuous low-stakes assessments provides a far more valid picture of teaching and learning.
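As a purely illustrative sketch (no particular product or DfE system in mind, and with invented pupils and objectives), the sort of thing I’m imagining is nothing more exotic than recording every short, low-stakes quiz against the curriculum objective it samples and letting the software flag where a class is struggling, in time for it to be useful:

```python
# Illustrative only: a minimal record of continuous low-stakes assessment.
# Each result is stored against a curriculum objective; weak objectives are
# flagged as a prompt for reteaching, not as a judgement of anyone.
from collections import defaultdict
from statistics import mean

results = []  # (pupil, objective, score between 0 and 1, week)

def record(pupil, objective, score, week):
    results.append((pupil, objective, score, week))

def weak_objectives(threshold=0.6):
    """Objectives whose average score so far falls below the threshold."""
    by_objective = defaultdict(list)
    for _, objective, score, _ in results:
        by_objective[objective].append(score)
    return {obj: round(mean(scores), 2)
            for obj, scores in by_objective.items()
            if mean(scores) < threshold}

# Toy data (invented names and objectives):
record("Ferdy", "long division", 0.4, week=2)
record("Ferdy", "long division", 0.7, week=5)
record("Asha", "fractions of amounts", 0.9, week=2)
print(weak_objectives())  # e.g. {'long division': 0.55}
```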

Whole class reading (and Macbeth)

It was several years ago – long before the latest surge in popularity of ‘whole class reading’ – that I found I no longer wanted anything to do with the old carousel method of guided reading, and I abandoned it in favour of working on a text as a whole class instead. When, later, it felt that I might need to justify this, I sought out information online and came across Mrs P Teach’s blog. I was glad, then, that I wasn’t alone in thinking it preferable to teach five different lessons than to teach the same one five times while the rest of the class worked through the other activities independently. When I took on the role of English subject-leader for UKS2, it didn’t take me long to encourage some of the teachers to use a whole class approach. Many jumped at the idea. Some were a little resistant but realised the benefits pretty quickly. One is still attached to the intimacy of the guided group, and I understand why. I have suggested that for now we use a combination, and that my colleague tries at least one week of whole class reading. There have been too many imposed practices in my history for me to want to take that approach here.

At the time of looking, Jo Payne’s blog was one of the few that came up in the search engine. Googling it now reveals just how much the idea has been taken up and welcomed. Some good links are:

The Teaching Booth

Solomon Kingsnorth

(Not so) New and Quietly Terrified

DM Crosby in the TES

And the powerhouse of whole class reading:

MissWilsonSays

I’ve drawn on these, and on various Twitter exchanges and discussions, for the simple approach I take. There might be more honing necessary, but the following is my current practice. I felt it was time I shared something, so I have also included resources for Jon Blake’s adaptation of ‘Macbeth’ for Oxford Reading Tree. I used this text this term alongside Shakespeare’s original, and I have included the witches’ poem comprehension.

Reading sessions are daily – 30 minutes.

Monday: I read the text aloud. Pupils sometimes follow. Often they listen and make accompanying illustrations. I may get them to read aloud as a class.

Tuesday: We look at specific vocabulary from the text, discuss synonyms and usage. This is also the day for reciting and learning things by heart and for addressing spelling issues.

Wednesday: Read-your-own-book day. It may include an activity based on their own book; often it’s just the luxury of uninterrupted reading. Five pupils a week use iPads to access a set reading activity.

Thursday: Reading comprehension – 5 questions based on the part of the text that has already been read aloud by me. Pupils have access to a printout of the text. The questions are based on retrieval, interpretation and explanation. I use the acronym APE to help pupils write longer answers. I don’t overdo acronyms, but this one works:

  • Answer
  • Prove
  • Explain

We look at the answers together. Strong answers are shared. A good version is always modelled.

Friday: Pupils who were less confident with the comprehension activity are supported to give stronger answers, including looking at the model. The rest of the time is for reading their own book again. I have tried to increase the amount of time they spend ‘free reading’, as it used to be called. In recent years it has felt that they’re almost desperate to do this and never have enough time, so that books are creeping out surreptitiously during other lessons!

Here are the resources for the Macbeth text:

Vocabulary witches’ spell

Vocabulary pp 82–83

Vocabulary pp 50–51

Vocabulary pp 26–27

Vocabulary pp 18–19

Vocabulary pp 7–9

Macbeth questions pp 82–83

Macbeth questions pp 50–51

Macbeth questions pp 26–27

Macbeth questions pp 18–19

Macbeth questions pp 7–9


Cognitive Load

Well, Edutwitter was unusually unforthcoming with teacher-friendly presentations on cognitive load, so I’ve resorted to making one from scratch. I’m grateful, though, to Greg Ashman for some curation of this subject and for pointing us in the direction of useful materials such as this, which formed the basis of the presentation.

Here I’m including the slides I’ve made for the presentation I will give to the upper key stage 2 staff at our school. I’ve tried to adhere to the principles in the making of it – not presenting text and speech at the same time, and so on. I think I could be criticised for not making the images into even simpler diagrams.

Feel free to use it if any of you need to do the same in your schools. Feel free to give critical feedback, too.

cognitive load


A Tale of Two Teachers

Mr White* was legendary in my primary school; he was the teacher everybody wanted to have. He was young, dynamic and funny, and his lessons were quirky and exciting. Pupils did not earn house-points – they played a continual ‘game of life’ in which they could build ‘houses’, have ‘jobs’ (even get ‘married’!) and earn money. My brother was taught by him four years ahead of me and raved about him so much that I couldn’t wait to be taught by him myself. He was particularly well known for setting a long list of weird and challenging activities for the pupils to do for holiday homework. Of course there were lessons, too, though I struggle to remember them. Despite the ‘progressive’ sound of all this, our core education was decidedly traditional. I finally got to have Mr White in standard 4 (aged 10) and was duly delighted. He left us in the middle of the year to take up a job as a deputy head in another school, and we never did get to earn our dollars for ‘taking a swim before sunrise’ or ‘making a cheese toasty with the iron’.

Mr Smith was different. He was the deputy head in our school and had been since the dawn of time, as far as we knew. We once had a lesson in which we were to identify what had happened on certain dates. He threw in one date that nobody could guess, until someone mentioned it was the year their grandmother had been born. That’s when we realised how old he was. We had him in standard 5, the year after Mr White. I mainly remember him sitting at his desk. He taught us from his age-old knowledge and from printed materials of all sorts, and he tested us often on everything. For example, alongside a rigorous maths and English curriculum, we studied the life-cycles of several tropical parasites to a fine degree, knowing in great detail the scientific descriptions of the characteristics of each stage and their implications for health. He made no attempt to make the materials ‘child-friendly’ and he required that we did our own research projects to a high standard. I learned an incredible amount about a wide range of topics that year and I still remember much of it. As a teacher, I now can’t believe how much we crammed in, never seeming to be rushed or stressed for time. Nobody in our school would have called Mr Smith ‘fun’ or ‘funny’. He was a crusty old cove who gave up chain-smoking and tried to take up snuff instead – which he used in front of us in class.

I’m not sure, but I can guess, which teacher would have won the popularity contest, had we been asked as children. Fortunately for us, it was not our choice. The more I reflect on it through the years, the more I realise that Mr Smith was the outstanding teacher of my junior school, possibly my school years as a whole.

Teachers now seem to worry a lot about how they can be the ‘best’. There’s a huge amount of rhetoric about relationships, teaching styles, progressive and traditional practices, and accountability for results, and the problem is that most of it is either wrong, unfounded or measured in short-term, limited ways. Mr Smith didn’t have to worry about any of those things.


*Names changed, of course.


Ed Select Committee report – improvements to come?

The Education Select Committee has published its report into the impact of the changes to primary assessment. It’s been an interesting journey from the point at which I submitted written evidence on primary assessment; I wrote a blog back in October, where I doubted there would be much response, but in fact I was wrong. Not only did they seem to draw widely from practitioners, stakeholders and experts to give evidence, but the report actually suggests that they might have listened quite well and, more to the point, understood the gist of what we were all trying to say. For anyone who has followed assessment research, most of this is nothing new; similar things have been said for decades. Nevertheless, it’s gratifying to have some airing of the issues at this level.

Summative and formative assessment

The introduction to the report clarifies that the issues being tackled relate to summative assessment and not the ongoing process of formative assessment carried out by teachers. For me, this is a crucial point, since I have been trying, with some difficulty sometimes, to explain to teachers that the two purposes should not be confused. This is important because the original report on assessment without levels suggested that schools had ‘carte blanche’ to create their own systems. Whilst it also emphasised that purposes needed to be clear, many school systems were either extensions of formative assessment that failed to grasp the implications and the requirements of summative purposes, or they were clumsy attempts to create tracking systems based on data that really had not been derived from reliable assessment!

Implementation and design

The report is critical of the time-scale and the numerous mistakes made in the administration of the assessments. It is particularly critical of the STA, which was seen to be chaotic and insufficiently independent. Furthermore, it criticises Ofqual for lack of quality control, in spite of Ofqual’s own protestations that it had scrutinised the materials. The report recommends an independent panel to review the process in future.

This finding is pretty damning. This is not some tin-pot state setting up its first exams – how is incompetence becoming normal? In a climate of anti-expertise, I suppose it is to be expected, but it will be very interesting to see if the recommendations have any effect in this area.

The Reading Test

The report took on board the widespread criticism of the 2016 Reading Test. The STA defence was that it had been properly trialled and had performed as expected. Nevertheless, the good news (possibly) is that the Department has supposedly "considered how this year's test experience could be improved for pupils".

Well we shall see on Monday! I really hope they manage to produce something that most pupils will at least find vaguely interesting to read. The 2016 paper was certainly the least well-received of all the practice papers we did this year.

Writing and teacher assessment

Teacher assessment of writing emerged as something that divided opinion. On the one hand, there were quotes from heads who suggested that ‘teachers should be trusted’ to assess writing. My view is that they miss the point, and I was very happy to be quoted alongside Tim Oates as having deep reservations about teacher assessment. I’ve frequently argued against it for several reasons (even when moderation is involved) and I believe that those who propose it may be confusing the different purposes of assessment, or failing to see that it’s not about ‘trust’ but about fairness to all pupils – and about the unacceptable burden it places on teachers.

What is good to see, though, is how the Committee have responded to our suggested alternatives. Many of us referred to ‘Comparative Judgement’ as a possible way forward. The potential of comparative judgement as an assessment method is not new, but is gaining credibility and may offer some solutions – I’m glad to see it given space in the report. Something is certainly needed, as the way we currently assess writing is really not fit for purpose. At the very least, it seems we may return to a ‘best-fit’ model for the time being.

For more on Comparative Judgment, see:

Michael Tidd  The potential of Comparative Judgement in primary

Daisy Christodoulou Comparative judgment: 21st century assessment

No More Marking

David Didau  10 Misconceptions about Comparative Judgement
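For anyone unfamiliar with how comparative judgement actually produces a result, here is a minimal sketch (my own illustration, not the method used by No More Marking or any other provider): judges repeatedly choose the better of two pieces of writing, and a simple Bradley–Terry-style model – the kind of statistical model typically used for pairwise comparisons – turns those decisions into a common scale.

```python
# A toy illustration of comparative judgement: pairwise 'this script is better'
# decisions are fitted to a Bradley-Terry model, giving each script a score.
# Invented data; real tools use far more judgements plus reliability checks.
import math
from collections import defaultdict

def fit_bradley_terry(judgements, n_iters=500, lr=0.1):
    """judgements: list of (winner, loser) script ids. Returns id -> score."""
    scripts = {s for pair in judgements for s in pair}
    theta = {s: 0.0 for s in scripts}
    for _ in range(n_iters):
        grad = defaultdict(float)
        for winner, loser in judgements:
            # Probability the model currently assigns to this judgement
            p_win = 1.0 / (1.0 + math.exp(theta[loser] - theta[winner]))
            grad[winner] += 1.0 - p_win
            grad[loser] -= 1.0 - p_win
        for s in scripts:
            theta[s] += lr * grad[s]  # gradient ascent on the log-likelihood
    return theta

# Six judgements over three pieces of writing (toy example).
judgements = [("A", "B"), ("B", "C"), ("A", "C"),
              ("C", "A"), ("A", "B"), ("B", "C")]
for script, score in sorted(fit_bradley_terry(judgements).items(),
                            key=lambda kv: -kv[1]):
    print(script, round(score, 2))
```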

Support for schools

The report found that the changes were made without proper training or support. I think this is something of an understatement. Systems were changed radically without anything concrete to replace them. Schools were left to devise their own systems, and it’s difficult to see how anyone could not have foreseen that these would be inconsistent and often inappropriate. As I said in the enquiry, there are thousands of primary schools finding thousands of different solutions. How can that be an effective national strategy, particularly as, by their own admission, schools lacked assessment expertise? Apparently some schools adopted commercial packages which were deemed ‘low quality’. This, too, is not a surprise. I know that there are teachers and head-teachers who strongly support the notion of ‘doing their own thing’, but I disagree with this idea and have referred to it in the past as the ‘pot-luck’ approach. There will be ways of doing things that are better than others. What we need to do is make sure that we are trying to implement the most effective methods, and not leave it to the whim of individuals. Several times, Michael Tidd has repeated that we were offered an ‘item bank’ to help teachers with ongoing assessment. The report reiterates this, but I don’t suggest we hold our collective breath.

High-stakes impact and accountability

I’m sure the members of the Assessment Reform Group, and other researchers of the 20th century, would be gratified to know that this far down the line we’re still needing to point out the counter-productive nature of high-stakes assessment for accountability! Nevertheless, it’s good to see it re-emphasised in no uncertain terms and the report is very clear about the impact on well-being and on the curriculum. I’m not sure that their recommendation that OFSTED broadens its focus (again), particularly including science as a core subject, is going to help. OFSTED has already reported on the parlous state of science in the curriculum, but the subject has continued to lose status since 2009. This is as a direct result of the assessment of the other subjects. What is assessed for accountability has status. What is not, does not. The ASE argues (and I totally understand why) that science was impoverished by the test at the end of the year. Nevertheless, science has been impoverished far more, subsequently, in spite of sporadic ‘success stories’ from some schools. This is a matter of record. (pdf). Teacher assessment of science for any kind of reliable purpose is even more fraught with difficulties than the assessment of writing. The farce, last year, was schools trying to decide if they really were going to give credence to the myth that their pupils had ‘mastered’ all 24 of the objectives or whether they were going to ‘fail’ them. Added to this is the ongoing irony that primary science is still ‘sampled’ using an old-fashioned conventional test. Our inadequacy in assessing science is an area that is generally ignored or, to my great annoyance, completely unappreciated by bright-eyed believers who offer ‘simple’ solutions. I’ve suggested that complex subjects like science can only be adequately assessed using more sophisticated technology, but edtech has stalled in the UK and so I hold out little hope for developments in primary school!

When I think back to my comments to the enquiry, I wish I could have made myself clearer in some ways. I said that if we want assessment to enhance our pupils’ education, then what we currently have is not serving that purpose. At the time, we were told that if we wished to comment further on the problem of accountability, we could write to the Committee, which I did. The constant argument has always been ‘…but we need teachers to be accountable.’ I argued that they need to be accountable for the right things, and that a single yearly sample of small populations in test conditions does not ensure this. This was repeated by so many of those who wrote evidence for the Committee that it was obviously hard to ignore. The following extract from their recommendations is probably the key statement from the entire process. If something changes as a result of this, there might be a positive outcome after all.

Many of the negative effects of assessment are in fact caused by the use of results in the accountability system rather than the assessment system itself. Key Stage 2 results are used to hold schools to account at a system level, to parents, by Ofsted, and results are linked to teachers’ pay and performance. We recognise the importance of holding schools to account but this high-stakes system does not improve teaching and learning at primary school. (my bold)

Timings and Tides: the Chartered College of Teaching inaugural conference – Sheffield

I’ve followed the development of the Chartered College of Teaching with some interest and much scepticism. In this mode, I joined as a founder member and spent a not inconsiderable amount of money and time attending the inaugural conference in Sheffield. I’d have liked to attend the London conference, but they saw fit to hold it during a week when only half of us were actually on half-term, so many of us could not attend.

Nevertheless, I went with an open mind. I’m aware that there are great enthusiasts out there who see this as a bright beacon of hope on our general plain of educational misery. I wanted to see if there was any basis for this. The answer is that I’m not sure; I’m still sceptical. This blog is my discussion of the conference itself and the College overall.

Why the profession needs a collective voice

I’m afraid I am unable to comment on the first two items on the agenda, as it was impossible to arrive on time coming by train, but I was in time to catch the talk by Professor Chris Husbands (Vice-Chancellor of Sheffield Hallam University). He drew well on his experience as a teacher and spoke convincingly, I thought, on notions of ownership and what matters. He sought to redirect the idea that ‘teachers matter’ towards ‘teaching matters’. I think he was making the point that we need to focus less on individuals being the key to a successful education system and more on systematic improvement of the process. If so, then I would agree this is probably correct – we need to address education in this country at a level that is more than just ‘holding teachers to account’. Nevertheless, there were dissenting voices in the room, arguing (rightly) that teaching is dependent on individuals, both in terms of defending teachers’ well-being and because teaching requires complex ‘on-the-hoof’ analysis and seems inextricably tied up with human interactions and relationships.

I question, too, the fundamental assumption of a collective voice. Whilst I hate the ‘pot-luck’ approach to education that the English seem unable (and unwilling) to challenge, I know that there are many voices, and I worry that ‘collective’ may turn out to mean ‘dominant’. I’m unconvinced that the cult of evidence is going to prevent teachers being censured, yet again, by the opinionated but ill-informed for doing the ‘wrong thing’. I know this is certainly the fear of the neo-trads, even though the tide currently seems in their favour; in fact it’s a cultural, not a political, problem: fads and flavours of the month cut all ways, and always have. Which brings me to the next item.

Why we need evidence

Chaired by Ann Mroz, the panel comprised Sinead Gaffney, Lisa Pettifer, Aimee Tinkler, John Tomsett and Professor Samantha Twiselton. John opened by proposing the need to weigh up the forces of authority and evidence, with the suggestion that we should not be afraid to swim against the tide if necessary. Well, at this point, I couldn’t agree more, although I suspected that not all tides were equal in his mind. There was much discussion about the need for an evidence-based approach, but I was prompted to tweet thus:

There was a lot said about the importance of an ‘evidence-based’ profession, the use of evidence, and about teachers conducting research in their own environments; this is where I derive my concern that we’re heading towards something more like a ‘cult of evidence’ than an informed profession that questions assumptions (I’m not alone, I imagine). To those without a scientific or a sceptical background, the use of ‘evidence’ as a holy grail is as dangerous as it is essential. I felt that Sinead was something of a lone voice calling hard for the critical evaluation of evidence rather than the gullible application of a set of tools condoned by the EEF. Personally, I have found that most people are easily persuaded by rhetoric and quickly descend into ritual. Teachers generally are ignorant of all this. (If you wish to rebut that, consider – my anecdotal evidence – that not a single member of the very large staff at my school had actually heard of the College of Teaching, as I was leaving on Friday.)

It is very difficult to impress upon people that research evidence almost never says we should do something one way or the other. On the contrary, its power is in calling into question things for which there is very little or no evidence. Most educational research would be considered worthless by any scientific standards, much of it is contradictory, and almost none of it stands up to replication. See this timely article. Like Sinead, I have looked past the meta-analyses on the EEF toolkit and examined some of the original research. If you do the same, you’ll find much of it evaporates into thin air. Try it for ‘feedback’ and see what happens. Moreover, there should be serious doubts about encouraging widespread experimentation and research conducted by teachers. It’s difficult to obtain rigour in research even under the best experimental conditions, and biology is notoriously tricky. If you add to that the ethical and social considerations of working with young children and then sharing those unreliable findings, we’re opening a massive can of worms (no biological pun intended). Whilst some on the panel were arguing that we needed to be able to judge whether the evidence was robust, audience members, without irony, were still calling for the application of instinct, and John T reminded us that any consideration of evidence at all was still ‘miles away’ in most institutions.

The other elephant in the room is that there is probably far too great an acceptance of the way in which we measure effects in educational research. I don’t mean a statistical issue, but a logistical and a philosophical one. When we try to determine whether a practice has an effect, how do we measure the effect if the product is learning? It may be easy enough to determine if an intervention on multiplication facts has worked – simply test pupils to see if they know those facts. But what if the outcome is trickier to measure? I entered a discussion recently in which it was argued that allowing pupils to do practical science might not be important, because the evidence showed that didactic methods trumped investigation! My question would be, ‘in what way?’ If the measure is filling in the answer boxes on a test paper (and believe me when I say I’m a strong advocate of that in appropriate ways), then perhaps teaching the pupils to do just that will produce a greater effect. Yet practical science is about being able to do practical science! Investigations should enable us to become better at investigations. I’m not alone in arguing for appropriate measures – yet most evidence is based on a very narrow set. It’s difficult to see a move away from this in an educational system that now expects secondary teachers to predict art grades from aggregated KS2 English and maths scores!

Going beyond your comfort zone

Penny Mallory was extremely engaging, and I was extremely discomfited by the implications of her speech. Penny overcame self-doubt and domestic adversity to become a champion rally driver. Her questions to the audience were, ‘Can anyone become “world class”?’ and ‘What qualities does a “world class” person have?’ I was gratified to hear some of the answers along the lines of ‘It depends what you mean’ and ‘Good genes’. I know what the motivational intention was: we limit ourselves; we need a growth mindset; we should take risks, etc. I’m slightly, but not entirely, on board with the growth mindset philosophy. I believe it is true that we can play what Eric Berne’s patients used to refer to as the game of ‘Wooden Leg’, and I work hard to counter that with my pupils. However, I profoundly dislike the contemporary message of ‘social mobility’ and the new populism which the College also seems to be promoting. Winning depends on there being others who will lose. Climbing the greasy pole will require stepping on the competitors. It’s a toxic message in a ruthless climate, one which seeks to replace the greater aspiration of social justice. Aiming to be ‘world class’ as an individual is a very selfish pursuit which by necessity will always be limited to a few. Becoming world class as an organisation (or as a country!) needs a different approach altogether – one that I feel we’ve departed from rapidly since the 1980s.

Why being brave is important

Tim O’Brien (Visiting Fellow in Psychology and Human Development, UCL Institute of Education) chaired this panel. It being after lunch, my note-taking had decreased and I had moved myself to the back of the room to exit if need be, but it was interesting to consider ideas of bravery. Perhaps the College could be a force for good, recognising that the profession is currently driven more by fear than it should be.

I know that many pin their hopes on the College to remedy this. Tim’s an eminent psychologist who comes across as knowing his stuff. I was in one of the focus groups he led in the ‘grounded theory’ research he conducted when the College was deciding its remit, and he spoke of this, thanking those of us who were there. In the midst of all the concurrence on the need for bravery, however, I wished I could have had the opportunity to point out that there’s a reason for the fear in education; being brave comes at a cost. Do those advocating it understand the risks they are asking teachers to take?

Networking

I networked just enough to find that most of the attendees were enthusiasts and that some of them, at least, were waiting specifically for the last part of the day – ‘Improving Wellbeing in the Classroom’ with Professor Tanya Byron. She was a bona fide TV celebrity, so to speak, and the audience seemed engaged. I left before the end – it was old territory for me.

So was it worth it? Well, I still feel I have done the right thing in joining and in attending. This is a novel development that may bring something good. At the very least, access to research is something that I’ve missed since finishing the Master’s. In the conference itself, I would have benefited enormously from a more structured approach to networking, which was left largely to us to do informally during the breaks. I knew that there were Twitter contacts there I would have liked to meet, but it was not easy to discern who they were, and my social ineptitude hindered me in approaching people ‘cold’, particularly if they were already talking in apparently established groups.

Ultimately, I’d make a plea to those who are sceptical, members or otherwise. Keep it up. To the enthusiasts, I’d say the same, alongside the request that you allow all manner of criticism. There was much enthusiasm evident among the attendees; this in itself can create a charismatic tide. Those who swim against it are always needed.

Perverse incentives are real

I’ve just spent a few pleasurable hours looking at the science writing from my y6 class. I say pleasurable because they’re very good writers this year (thanks, Mr M in y5!), but also because there were elements of their writing that hinted at an education. Some children had picked up on, and correctly reinterpreted, the higher-level information I had given in reply to their questions on the chemistry of the investigation. All of them had made links with ‘the real world’ following the discussions we’d had.

It all sounds good, doesn’t it?

The sad truth is that, in spite of the fact that I’m an advocate of education, not attainment, the knowledge of what will and will not form part of the end-of-year measurement is still there, influencing my decisions and having a detrimental impact on my education of the children.

This is because, while I am marking their work, I am making decisions about feedback and whether to follow up misconceptions or take understanding further. Let’s remember that this is science. Although I personally view its study as crucial, and its neglect as the source of most of the world’s ills, it has nevertheless lost its status in the primary curriculum. So my thoughts are, ‘Why bother? This understanding will not form part of any final assessment and no measurement of this will be used to judge the effectiveness of my teaching, nor of the school.’ Since this is true for science, still nominally a ‘core subject’, how much more so for the non-entities of art, music, DT, etc.? Is there any point in pursuing any of these subjects in primary school in an educational manner?

The argument, of course, is that we have an ethical responsibility as educators to educate; that teachers worth their salt should not be unduly swayed by the knowledge that a narrow set of criteria, applied to a small population of pupils, is used at the end of KS2 to judge our success or failure. It reminds me of the argument that senior leaders shouldn’t do things just for OFSTED. It’s an unreasonable argument. It’s like saying to the donkeys, ‘Here’s a carrot and a very big stick, but just act as you would if they weren’t there!’

I’m not in favour of scrapping tests and I’m no fan of teacher assessment, but it’s undeniable that what I teach is influenced by the KS2 SATs, and not all in a good way. The primary curriculum is vast; the attainment tests are narrow. It also brings into question all research based on using attainment data as a measure of success. Of course it’s true that the things they measure are important – they may even indicate something – but there are a lot of things which aren’t measured which may indicate a whole lot of other things.

I can’t see how we can value a proper primary education – how we can allow the pursuit of further understanding – if we set such tight boundaries on how we measure it. Testing is fine – but if it doesn’t measure what we value, then we’ll only value what it measures. I’m resistant to that fact, but I’m not immune, and I’m sure I’m no different from any other primary teacher out there. Our assessment system has to change so that we can feel fine about educating our pupils and not think we’re wasting our time if we pursue an area that doesn’t count towards a final mark.


Primary assessment is more than a fiasco – it’s completely wrong

I’ve written my submission to the Education Committee’s inquiry on primary assessment, for what it’s worth. I can’t imagine that they’re interested in what we have to say, given that this government has ignored just about all the expert advice it has ever received or requested on nearly everything else. This country has ‘had enough of experts’, after all.

I won’t paste my submission here – there are various restrictions on publishing them elsewhere, it seems. However, it’s a good time to get some thoughts off my chest. Primary assessment (and school-based assessment generally) has all gone a bit wrong. OK, a lot wrong. It’s so wrong that it’s actually very damaging. Conspiracy theorists might have good cause to think it is deliberate; my own cynicism is that it is underpinned by a string of incompetencies and a distinct failure to listen at all to any advice.

In thinking about why it has all gone wrong, I want to pose a possibly contentious question: is the attainment we are attempting to measure a thing that should dominate all educational efforts and discourse? I’ve written before about my growing doubts about the over-emphasis on attainment and how I think it detracts from the deeper issue of education. The further we get down this line, particularly with the current nonsense about bringing back selective education, the more this crystallises for me. Just to be clear, this is not an anti-intellectual stance, nor a woolly, liberal, dumbing-down view. I fully embrace the idea that we should not put a ceiling on all kinds of achievement for everybody. Having a goal and working towards it – having a way of demonstrating what you have achieved – that’s an admirable thing. What I find ridiculous is that the kind of attainment that is obsessing the nation doesn’t actually mean very much, and yet somehow we are all party to serving its ends. Put it this way – tiny fluctuations in scores in a set of very narrow domains make headlines for pupils, teachers, schools, counties, etc. Every year we sweat over the percentages. If there’s a rise above the ‘expectation’, we breathe a sigh of relief. If, heaven forbid, we had a difficult cohort and a couple of boxes are in the ‘blue zone’, we dread the repercussions, because now we’re no longer an outstanding school. But, as Jack Marwood writes here, there’s no pattern. We’ve even begun to worry about whether we’re going to be labelled a ‘coasting school’! Good should be good enough, because the hysteria over these measures is sucking the life out of the most important resource – us. Of course the inspectorate needs to be on the lookout for genuinely bad schools. Are these really going to be so difficult to spot? Is it really the school that was well above average in 2014 and 2015 but dipped in 2016? Is the child who scores 99 on the scaled score so much more of a failure than the one who scored 101? Is our group of 4 pupil premium children getting well above average in a small set of tests an endorsement of our good teaching, compared to another school’s 4 getting well below?

Attainment has become an arms race, and teachers, pupils and parents are caught in the crossfire. In spite of the ‘assessment without levels’ rhetoric, all our accountability processes are driven by a focus on attainment against a single level. This is incredibly destructive, in my experience. Notwithstanding those self-proclaimed paragons of good practice who claim that they’ve got the balance right, what I’ve mainly seen in schools are teachers at their wits’ end, wondering what on earth they can further do (what miracle of intervention they can concoct) to ‘boost’ a group of ‘under-performing’ children up to ‘meeting’, whilst maintaining any kind of integrity with regard to the children who have never been anywhere near. I was recently told in a leadership meeting that all children should make the same amount of progress – that ‘middle achievers’ should be able to progress at the same rate as the ‘high achievers’. It’s the opposite that is true. The high achievers are where they are exactly because they made quicker progress – but the ‘middle achievers’ (and any other category – good grief!) will also get to achieve, given time. And while all this talk of progress is on the table – let’s be honest – we’re talking about ‘attainment’ again: a measure taken from their KS2 assessments, aggregated, and compared to KS1 in a mystical algorithm.

It’s not as if the issues surrounding assessment have never been considered. Just about all the pitfalls of the recent primary debacle have been written about endlessly, and frequently predicted. High-stakes testing has always been the villain of the piece: perverse incentives to teach to the test, narrowing of the curriculum, invalidity of the testing domain, unreliability, bias, downright cheating, etc. The problem is that the issues won’t go away, because testing is the wrong villain. Testing is only the blunt tool used to fashion the club of attainment with which to beat us (apologies for the extended metaphor). I’m a big fan of testing. I read Roediger and Karpicke’s (pdf) research on the ‘testing effect’ in the early days, long before it became a fashionable catch-phrase. I think we should test as many things in as many ways as we can: to enhance recall; to indicate understanding; to identify weaknesses; to demonstrate capacity; to achieve certification, etc. I was all in favour of Nicky Morgan’s proposal to introduce an online tables test. What a great idea! Only – make it available all the time and don’t use the results against the pupil or the teacher. No – testing doesn’t cause the problem. It’s caused by the narrow, selective nature, the timing and the pressure of attaining an arbitrary ‘meeting expectations’ (one big level, post-levels). The backwash on the curriculum is immense. Nothing has any status any more: not art, not music, not D&T, not history nor geography, and certainly not science – that ‘core subject’ of yore! Some might argue that it’s because they’re not tested, and of course I agree up to a point, but the real issue is that they’re not seen as being important in terms of attainment.

I shall add a comment here on teacher assessment, just because it continues to drag on in primary assessment like some old ghost that refuses to stop rattling its chains. If teacher assessment is finally exorcised, I will be particularly grateful. It is an iniquitous, corrupted sop to those who believe ‘teachers are best placed to make judgements about their own pupils’. Of course they are – in the day-to-day running of their class and in the teaching of lessons – but teacher assessment should not be used in any way to measure attainment. I am not arguing that teachers are biased, that they make mistakes or that they inflate or deflate their assessments. I am arguing that there is simply no common yardstick, and so these judgements cannot be considered reliable. The ‘moderated’ writing debacle of 2016 should have put that fact squarely on the table for all doubters to see. Primary assessments are used in accountability. How can we expect teachers to make judgements that could be used against them in appraisal and in pay reviews?

I’m an idealist in education. I think that it has a purpose beyond the establishment of social groups for different purposes (leadership, administrative work, manual labour). I don’t think that it is best served by a focus on a narrow set of objectives and an over-zealous accountability practice based on dubious variations in attainment. I tried to sum up my proposals for the Education Committee, and I will try to sum up my summing up:

  • Stop using small variations in flawed attainment measures for accountability
  • Give us fine-grained, useful but low-stakes testing, for all (use technology)
  • If we have to measure, get rid of teacher assessment and give us lots of common, standardised tools throughout the primary phase
  • Give us all the same technology for tracking the above (how many thousands of teacher hours have been spent on this?)
  • If you have to have end of stage tests, listen to the advice of the experts and employ some experts in test design – the 2016 tests were simply awful
  • Include science
  • Be unequivocal in the purposes of assessment and let everybody know

I didn’t say ‘get rid of the end of key stage assessments altogether and let us focus again on educating our pupils’. Maybe I should have.


Not good is sometimes good

I was reading Beth Budden’s blog on the cult of performativity in education and thinking of the many times when I’ve thanked the gods no-one was watching a particular lesson. It’s gratifying that there is a growing perception that a single performance in a 40 minute session is no kind of measure of effectiveness – I’ve railed against that for many years. During observations, I’ve sometimes managed to carry off the performance (and it’s always a hollow victory) and sometimes I haven’t (it always leads to pointless personal post-mortems). Lately I’ve managed to introduce the idea that I will give a full briefing of the lesson, the background, my rationale, the NC, the focus, the situation etc. etc. before any member of the leadership team sets foot in my classroom to make a formal observation. It’s been a long time coming and it goes some way to mitigating the performance effects. Not everyone in my school does it.

But what about the lessons that I really didn’t want anyone to watch? If they had been observed, would I be recognised as a bad teacher? If I think about lessons that seem to have been pretty poor by my own judgement, they almost always lead on to a better understanding overall. A recent example is a lesson I taught (nay, crammed) on the basics of electricity. It was a rush. The pupils needed to glean a fair amount of information in a short time from a number of sources. The resultant writing showed that it was poorly understood by everyone. Of course, it was my fault, and I’d definitely have failed that lesson if I were grading myself. Fortunately I wasn’t being graded and nobody was watching. Fortunately, also, I could speak to the pupils the day after looking at their confused writing on the subject, tell them that I took responsibility for it being below par, and say that we needed to address the myriad misconceptions that had arisen. We did. The subsequent work was excellent and suggested a far higher degree of understanding from all; I assumed that something had been learned. Nowhere in here was a ‘good lesson’, but somewhere in here was some actual education – and not just about electricity.