Ed Select Committee report – improvements to come?

The Education Select Committee has published its report into the impact of the changes to primary assessment. It’s been an interesting journey from the point at which I submitted written evidence on primary assessment; I wrote a blog back in October in which I doubted there would be much response, but in fact I was wrong. Not only did they seem to draw widely from practitioners, stakeholders and experts to give evidence, the report actually suggests that they might have listened quite well and, more to the point, understood the gist of what we were all trying to say. For anyone who has followed assessment research, most of this is nothing new. Similar things have been said for decades. Nevertheless, it’s gratifying to have some airing of the issues at this level.

Summative and formative assessment

The introduction to the report clarifies that the issues being tackled relate to summative assessment and not the ongoing process of formative assessment carried out by teachers. For me, this is a crucial point, since I have been trying, with some difficulty sometimes, to explain to teachers that the two purposes should not be confused. This is important because the original report on assessment without levels suggested that schools had ‘carte blanche’ to create their own systems. Whilst it also emphasised that purposes needed to be clear, many school systems were either extensions of formative assessment that failed to grasp the implications and the requirements of summative purposes, or they were clumsy attempts to create tracking systems based on data that really had not been derived from reliable assessment!

Implementation and design

The report is critical of the timescale and the numerous mistakes made in the administration of the assessments. The Committee was particularly critical of the STA, which it saw as chaotic and insufficiently independent. Furthermore, it criticises Ofqual for a lack of quality control, in spite of Ofqual’s own protestations that it had scrutinised the materials. The report recommends an independent panel to review the process in future.

This finding is pretty damning. This is not some tin-pot state setting up its first exams – how is incompetence becoming normal? In a climate of anti-expertise, I suppose it is to be expected, but it will be very interesting to see if the recommendations have any effect in this area.

The Reading Test

The report took on board the widespread criticism of the 2016 Reading Test. The STA’s defence was that it had been properly trialled and had performed as expected. Nevertheless, the good news (possibly) is that the Department has supposedly “considered how this year’s test experience could be improved for pupils”.

Well we shall see on Monday! I really hope they manage to produce something that most pupils will at least find vaguely interesting to read. The 2016 paper was certainly the least well-received of all the practice papers we did this year.

Writing and teacher assessment

Teacher assessment of writing emerged as something that divided opinion. On the one hand, there were quotes from heads who suggested that ‘teachers should be trusted’ to assess writing. My view is that they miss the point, and I was very happy to be quoted alongside Tim Oates as having deep reservations about teacher assessment. I’ve frequently argued against it for several reasons (even when moderation is involved) and I believe that those who propose it may be confusing the different purposes of assessment, or failing to see that it’s not about ‘trust’ but about fairness to all pupils and an unacceptable burden on teachers.

What is good to see, though, is how the Committee have responded to our suggested alternatives. Many of us referred to ‘Comparative Judgement’ as a possible way forward. The potential of comparative judgement as an assessment method is not new, but is gaining credibility and may offer some solutions – I’m glad to see it given space in the report. Something is certainly needed, as the way we currently assess writing is really not fit for purpose. At the very least, it seems we may return to a ‘best-fit’ model for the time being.
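For anyone unfamiliar with the mechanics, comparative judgement replaces absolute marking with many quick ‘which of these two scripts is better?’ decisions, which are then combined statistically – typically via a Bradley–Terry model – into a quality scale. Here is a minimal sketch of that combining step in Python; the function name and data format are my own for illustration, not those of any particular tool:

```python
from collections import defaultdict

def bradley_terry(judgements, n_iter=100):
    """Estimate a quality score per script from pairwise judgements.

    judgements: list of (winner, loser) pairs of script identifiers.
    Returns a dict mapping script -> estimated strength (normalised to sum to 1).
    Uses the standard minorise-maximise update for the Bradley-Terry model.
    """
    scripts = {s for pair in judgements for s in pair}
    wins = defaultdict(int)    # W_i: comparisons won by script i
    pairs = defaultdict(int)   # n_ij: number of comparisons between i and j
    for winner, loser in judgements:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1

    p = {s: 1.0 / len(scripts) for s in scripts}  # uniform start
    for _ in range(n_iter):
        new_p = {}
        for i in scripts:
            denom = 0.0
            for j in scripts:
                if i == j:
                    continue
                n = pairs[frozenset((i, j))]
                if n:
                    denom += n / (p[i] + p[j])
            new_p[i] = wins[i] / denom if denom else p[i]
        total = sum(new_p.values())
        p = {s: v / total for s, v in new_p.items()}  # renormalise
    return p
```

With judgements such as `[("A", "B"), ("A", "C"), ("B", "C")]`, the estimated strengths order the scripts A > B > C. Real systems add much more than this sketch: judge-consistency checks, adaptive pairing, and anchoring to a common scale across schools.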

For more on Comparative Judgement, see:

Michael Tidd  The potential of Comparative Judgement in primary

Daisy Christodoulou Comparative judgment: 21st century assessment

No More Marking

David Didau  10 Misconceptions about Comparative Judgement

Support for schools

The report found that the changes were made without proper training or support. I think this is something of an understatement. Systems were changed radically without anything concrete to replace them. Schools were left to devise their own systems, and it’s difficult to see how anyone could not have foreseen that the results would be inconsistent and often inappropriate. As I said in the inquiry, there are thousands of primary schools finding thousands of different solutions. How can that be an effective national strategy, particularly as, by their own admission, schools lacked assessment expertise?

Apparently some schools adopted commercial packages which were deemed ‘low quality’. This, too, is not a surprise. I know that there are teachers and headteachers who strongly support the notion of ‘doing their own thing’, but I disagree, and have referred to it in the past as the ‘pot-luck’ approach. There will be ways of doing things that are better than others. What we need to do is make sure that we are implementing the most effective methods, not leaving it to the whim of individuals.

Several times, Michael Tidd has repeated that we were offered an ‘item bank’ to help teachers with ongoing assessment. The report reiterates this, but I don’t suggest we hold our collective breath.

High-stakes impact and accountability

I’m sure the members of the Assessment Reform Group, and other researchers of the 20th century, would be gratified to know that this far down the line we’re still needing to point out the counter-productive nature of high-stakes assessment for accountability! Nevertheless, it’s good to see it re-emphasised in no uncertain terms, and the report is very clear about the impact on well-being and on the curriculum.

I’m not sure that their recommendation that Ofsted broadens its focus (again), particularly by including science as a core subject, is going to help. Ofsted has already reported on the parlous state of science in the curriculum, but the subject has continued to lose status since 2009, as a direct result of the assessment of the other subjects. What is assessed for accountability has status. What is not, does not. The ASE argues (and I totally understand why) that science was impoverished by the test at the end of the year. Nevertheless, science has been impoverished far more, subsequently, in spite of sporadic ‘success stories’ from some schools. This is a matter of record. (pdf)

Teacher assessment of science for any kind of reliable purpose is even more fraught with difficulties than the assessment of writing. The farce, last year, was schools trying to decide whether they really were going to give credence to the myth that their pupils had ‘mastered’ all 24 of the objectives, or whether they were going to ‘fail’ them. Added to this is the ongoing irony that primary science is still ‘sampled’ using an old-fashioned conventional test. Our inadequacy in assessing science is an area that is generally ignored or, to my great annoyance, completely unappreciated by bright-eyed believers who offer ‘simple’ solutions. I’ve suggested that complex subjects like science can only be adequately assessed using more sophisticated technology, but edtech has stalled in the UK, and so I hold out little hope for developments in primary school!

When I think back to my comments to the inquiry, I wish I could have made myself clearer in some ways. I said that if we want assessment to enhance our pupils’ education, then what we currently have is not serving that purpose. At the time, we were told that if we wished to comment further on the problem of accountability, we could write to the Committee, which I did. The constant argument has always been ‘…but we need teachers to be accountable.’ I argued that they need to be accountable for the right things, and that a single yearly sample of small populations in test conditions did not ensure this. This was repeated by so many of those who submitted evidence to the Committee that it was obviously hard to ignore. The following extract from their recommendations is probably the key statement from the entire process. If something changes as a result of this, there might be a positive outcome after all.

Many of the negative effects of assessment are in fact caused by the use of results in the accountability system rather than the assessment system itself. Key Stage 2 results are used to hold schools to account at a system level, to parents, by Ofsted, and results are linked to teachers’ pay and performance. We recognise the importance of holding schools to account but this high-stakes system does not improve teaching and learning at primary school. (my bold)


It’s 2017 – What on Earth can we do?

Though I would have preferred to be at home drinking cocoa, I played saxophone at a small, local gig on New Year’s Eve. The revelry seemed suitably subdued as the clock struck midnight, and the guitarist wished me a ‘Happy New Year’, saying that there was no way 2017 could possibly be worse than 2016. I sadly disagreed, and prophesied that we would look back on 2016 as the last year of Recognisable Things, before we really began to notice that nothing was ever the same again.

Anyone who has read my blogs before will see that they tend not to be very upbeat. Nobody would describe me as a ‘bubbly’ personality, and I’m generally inspired to write when I have something to critique. As much as I admire spirit-lifting attempts, I perceive them as fundamentally flawed and self-centred, in the sense that they seem to ignore reality.

So how do I manage to work with primary school pupils? Basically, I lie by omission. I cannot possibly tell them what I believe their future holds, and were I to discuss openly with them what’s going on in the world, I would risk censure for ‘extremist views’.

It was in a staff room, over 20 years ago, that I said I was pretty sure climate change would be the biggest challenge we would face in the new millennium. The reaction then was along the lines of, ‘Oh, really? Is that because of CFCs and things? I don’t really know much about it. We can’t be doom-mongers. Well I’m not really into the environment and all that – it’s more your sort of thing.’ Over the decades, everything that I said was likely to happen has happened, and sadly the reaction I get now is pretty much the same, in spite of the global scientific consensus and the general acceptance that it is no longer a conspiracy.

On a day-to-day basis, I engage with the business of ‘business as usual’, and in 2016 I made some efforts to push against what I felt to be detrimental to the education of our pupils. I actively responded to every government consultation and was gratified to give evidence on primary assessment to the Education Select Committee. I try to promote an agenda that a quality education is a global citizen entitlement and is not about toxic notions of ‘attainment’ and ‘social mobility’. I agree that curriculum subjects should be rigorously taught by teachers with excellent subject knowledge and I welcome the increase in attention to evidence over mythology. Perverse incentives aside, I do continue to try to do my best to develop pupils’ knowledge and understanding in the ‘core’ and ‘foundation’ subjects of our National Curriculum, as though the future will resemble the past. Deep down, I have misgivings; I probably should spend more time teaching them basic survival skills. From how things are currently panning out, the next few decades will be an escalation of the challenges we have faced this year:

It was wrong to make heroes of those who have climbed the greasy pole over their fellows, those who have risen to the top of their chosen career and gained huge amounts of wealth, and those who have dominated nations through displays of power and authority, because their big noise drowned out the voices of reason to which we should have listened; now we have to face the consequences as best we can. Knowing that we have tipped the climate balance, it’s very difficult to see how things could improve or even stay the same, but there is something to do.

If things are going to get a lot trickier, then we need to remember that many of us are not psychopaths. We know about co-operation, consideration and compassion, and we should exercise these. If we possess the trait of empathy, we know about the suffering of others and we have to be kinder – to humans and non-human animals. If we know the difference, we need to be emphatically better to each other, because there are those who will be emphatically worse. How we treat each other should be a matter of concern – in school, in the supermarket, on the road, and in our (t)wittering online, which often appears to deteriorate into childish insults and point-scoring. If we have the wit, let us use it to exercise consideration and circumspection in 2017.

Primary assessment is more than a fiasco – it’s completely wrong

I’ve written my submission to the Education Committee’s inquiry on primary assessment, for what it’s worth. I can’t imagine that they’re interested in what we have to say, given that this government has ignored just about all the expert advice it has ever received or requested on nearly everything else. This country has ‘had enough of experts’, after all.

I won’t paste my submission here – there are various restrictions on publishing them elsewhere, it seems. However, it’s a good time to get some thoughts off my chest. Primary assessment (and school-based assessment generally) has all gone a bit wrong. OK, a lot wrong. It’s so wrong that it’s actually very damaging. Conspiracy theorists might have good cause to think it is deliberate; my own cynical view is that it is underpinned by a string of incompetencies and a distinct failure to listen to any advice at all.

In thinking about why it has all gone wrong, I want to pose a possibly contentious question: is the attainment we are attempting to measure a thing that should dominate all educational efforts and discourse? I’ve written before about my growing doubts about the over-emphasis on attainment and how I think it detracts from the deeper issue of education. The further we get down this line, particularly with the current nonsense about bringing back selective education, the more this crystallises for me. Just to be clear, this is not an anti-intellectual stance, nor a woolly, liberal dumbing-down view. I fully embrace the idea that we should not put a ceiling on any kind of achievement for anybody. Having a goal and working towards it – having a way of demonstrating what you have achieved – that’s an admirable thing. What I find ridiculous is that the kind of attainment that is obsessing the nation doesn’t actually mean very much, and yet somehow we are all party to serving its ends.

Put it this way – tiny fluctuations in scores in a set of very narrow domains make headlines for pupils, teachers, schools, counties etc. Every year we sweat over the percentages. If there’s a rise above the ‘expectation’, we breathe a sigh of relief. If, heaven forbid, we had a difficult cohort and a couple of boxes are in the ‘blue zone’, we dread the repercussions, because now we’re no longer an outstanding school. But, as Jack Marwood writes here, there’s no pattern. We’ve even begun to worry about whether we’re going to be labelled a ‘coasting school’! Good should be good enough, because the hysteria over these measures is sucking the life out of the most important resource – us. Of course the inspectorate needs to be on the lookout for actually bad schools. Are these really going to be so difficult to spot? Is it really the school that was well above average in 2014 and 2015 but dipped in 2016? Is the child who scores 99 on the scaled score so much more of a failure than the one who scored 101? Is our group of 4 pupil premium children getting well above average, in a small set of tests, an endorsement of our good teaching compared to another school’s 4 getting well below?

Attainment has become an arms race, and teachers, pupils and parents are caught in the crossfire. In spite of the ‘assessment without levels’ rhetoric, all our accountability processes are driven by a focus on attainment against a single level. This is incredibly destructive in my experience. Notwithstanding those self-proclaimed paragons of good practice who claim that they’ve got the balance right, what I’ve mainly seen in schools are teachers at their wits’ end, wondering what on earth they can do (what miracle of intervention they can concoct) to ‘boost’ a group of ‘under-performing’ children to ‘meeting’, whilst maintaining any kind of integrity with regard to the children who have never been anywhere near. I was recently told in a leadership meeting that all children should make the same amount of progress – that ‘middle achievers’ should be able to progress at the same rate as the ‘high achievers’. It’s the opposite which is true. The high achievers are where they are exactly because they made quicker progress – but the ‘middle achievers’ (and any other category – good grief!) will also get there, given time. And while all this talk of progress is on the table, let’s be honest – we’re talking about ‘attainment’ again: a measure taken from their KS2 assessments, aggregated, and compared to KS1 in a mystical algorithm.
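For the record, the ‘mystical algorithm’ is, in essence, a value-added calculation: each pupil’s KS2 score is compared with the average KS2 score of pupils who started from a similar KS1 baseline, and the differences are averaged per school. A toy sketch – my own simplification for illustration, not the DfE’s actual method:

```python
from collections import defaultdict
from statistics import mean

def value_added(pupils):
    """Illustrative value-added: each pupil's KS2 score is compared with
    the mean KS2 score of all pupils sharing the same KS1 prior-attainment
    group.

    pupils: list of (ks1_group, ks2_score) tuples.
    Returns a list of value-added scores in the same order.
    """
    by_group = defaultdict(list)
    for group, score in pupils:
        by_group[group].append(score)
    # Benchmark for each prior-attainment group: the group's mean KS2 score
    benchmarks = {g: mean(scores) for g, scores in by_group.items()}
    return [score - benchmarks[group] for group, score in pupils]
```

Under this kind of measure, two pupils with identical KS2 scores can receive opposite progress scores depending on their KS1 starting points, which is exactly why small cohort fluctuations dominate the headline figures.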

It’s not as if the issues surrounding assessment have never been considered. Just about all the pitfalls of the recent primary debacle have been written about endlessly, and frequently predicted. High-stakes testing has always been the villain of the piece: perverse incentives to teach to the test, narrowing of the curriculum, invalidity of the testing domain, unreliability, bias, downright cheating etc. The problem is that the issues won’t go away, because testing is the wrong villain. Testing is only the blunt tool used to fashion the club of attainment with which to beat us (apologies for the extended metaphor). I’m a big fan of testing. I read Roediger and Karpicke’s (pdf) research on the ‘testing effect’ in the early days, long before it became a fashionable catchphrase. I think we should test as many things in as many ways as we can: to enhance recall; to indicate understanding; to identify weaknesses; to demonstrate capacity; to achieve certification etc. I was all in favour of Nicky Morgan’s proposal to introduce an online tables test. What a great idea! Only – make it available all the time, and don’t use the results against the pupil or the teacher. No – testing doesn’t cause the problem. It’s caused by the narrow, selective nature, the timing, and the pressure of attaining an arbitrary ‘meeting expectations’ (one big level, post-levels). The backwash on the curriculum is immense. Nothing has any status any more: not art, not music, not D&T, not history or geography, and certainly not science – that ‘core subject’ of yore! Some might argue that it’s because they’re not tested, and of course I agree up to a point, but the real issue is that they’re not seen as being important in terms of attainment.

I shall add a comment here on teacher assessment, just because it continues to drag on in primary assessment like some old ghost that refuses to stop rattling its chains. If teacher assessment is finally exorcised, I will be particularly grateful. It is an iniquitous, corrupted sop to those who believe ‘teachers are best placed to make judgements about their own pupils’. Of course they are – in the day-to-day running of their class and in the teaching of lessons – but teacher assessment should not be used in any way to measure attainment. I am not arguing that teachers are biased, that they make mistakes, or that they inflate or deflate their assessments. I am arguing that there is simply no common yardstick, and so these judgements cannot be considered reliable. The ‘moderated’ writing debacle of 2016 should have put that fact squarely on the table for all doubters to see. Primary assessments are used in accountability. How can we expect teachers to make judgements that could be used against them in appraisal and in pay reviews?

I’m an idealist in education. I think that it has a purpose beyond the establishment of social groups for different purposes (leadership, administrative work, manual labour). I don’t think that it is best served by a focus on a narrow set of objectives and an over-zealous accountability practice based on dubious variations in attainment. I tried to sum up my proposals for the Education Committee, and I will try to sum up my summing up:

  • Stop using small variations in flawed attainment measures for accountability
  • Give us fine-grained, useful but low-stakes testing, for all (use technology)
  • If we have to measure, get rid of teacher assessment and give us lots of common, standardised tools throughout the primary phase
  • Give us all the same technology for tracking the above (how many thousands of teacher hours have been spent on this?)
  • If you have to have end of stage tests, listen to the advice of the experts and employ some experts in test design – the 2016 tests were simply awful
  • Include science
  • Be unequivocal in the purposes of assessment and let everybody know

I didn’t say ‘get rid of the end of key stage assessments altogether and let us focus again on educating our pupils’. Maybe I should have.

Of Wasps and Education

A long time ago I lived with Jim, a zoologist – the sort that actually liked to know about animals. He taught me, contrary to all popular English culture, to be friendly to wasps – to sit still and observe rather than flap about, leap up, scream, etc. Actually, I was an easy pupil because I’d not had that particular education, and was stunned and appalled when, as a 15-year-old, newly arrived and attending my first English school, I witnessed a fellow pupil smash a stray wasp to death rather than simply let it out of the window as we would have done ‘back home’. Anyway, Jim used to let the wasps land on his finger and drink lemonade – a trick I subsequently performed (without being stung) in front of many a bemused audience. Since then, I’ve learned lots about these clever insects. They can recognise each other as individuals, and they can recognise human faces. Allow a wasp to do its zig-zagging buzz in front of you and it will learn what you look like, and generally fly off, leaving you alone.

This year, I’m one of the very few to be concerned that there are practically no wasps about. Nor many other insects. I take their absence as a bad sign, where I suspect most people are just happy not to be ‘pestered’ by them. I did see one last night, though, whilst waiting with my fellow band members before a gig. It took me a while to realise why they were jumping up and flapping their hands – it was a lone wasp, interested in the meat in their pork baps, so I did the trick; the wasp landed on my fingers and took the small piece I offered. I didn’t get stung, it didn’t get tangled in my hair, it didn’t land on my face or do any of the other things that terrify people. It didn’t bother me at all, and I could continue to sit on my hay bale and calmly contemplate the beautiful evening.

So how does this relate to anything? Well, it’s something like this: what I have learned about wasps trumps popular culture and folk knowledge, and allows me to make both a compassionate and a superior decision. This is what I consider to be the goal of education. Yet it’s a losing battle – education is pointless in the face of both a widespread, ignorant culture and a ruling minority that makes decisions for us based not on evidence and expertise (the badger cull, the abolition of the Department of Energy and Climate Change), but on some other agenda, unnoticed by the majority and unfathomable to the rest.

Final report of the Commission on Assessment without Levels – a few things.

I’ve read the report and picked out some things. This is not a detailed analysis, but more of a selection of pieces relevant to me and anyone else interested in primary education and assessment:

Our consultations and discussions highlighted the extent to which teachers are subject to conflicting pressures: trying to make appropriate use of assessment as part of the day-to-day task of classroom teaching, while at the same time collecting assessment data which will be used in very high stakes evaluation of individual and institutional performance. These conflicted purposes too often affect adversely the fundamental aims of the curriculum,

Many of us have been arguing that for years.

the system has been so conditioned by levels that there is considerable challenge in moving away from them. We have been concerned by evidence that some schools are trying to recreate levels based on the new national curriculum.

Some schools are hanging on to them like tin cans in the apocalypse.

levels also came to be used for in-school assessment between key stages in order to monitor whether pupils were on track to achieve expected levels at the end of key stages. This distorted the purpose of in-school assessment,

Whose fault was that?

There are three main forms of assessment: in-school formative assessment, which is used by teachers to evaluate pupils’ knowledge and understanding on a day-to-day basis and to tailor teaching accordingly; in-school summative assessment, which enables schools to evaluate how much a pupil has learned at the end of a teaching period; and nationally standardised summative assessment,

Try explaining that to those who believe teacher assessment through the year can be used for summative purposes at the end of the year.

many teachers found data entry and data management in their school burdensome.

I love it, when it’s my own.

There is no intrinsic value in recording formative assessment;

More than that – it degrades the formative assessment itself.

the Commission recommends schools ask themselves what uses the assessments are intended to support, what the quality of the assessment information will be,

I don’t believe our trial system using FOCUS materials and assigning a score had much quality. It was too narrow and unreliable. We basically had to resort to levels to try to achieve some sort of reliability.

Schools should not seek to devise a system that they think inspectors will want to see;

!

Data should be provided to inspectors in the format that the school would ordinarily use to monitor the progress of its pupils

‘Ordinarily’ we used levels! This is why I think we need data based on internal summative assessments. I do not think we can just base it on a summative use of formative assessment information!

The Carter Review of Initial Teacher Training (ITT) identified assessment as the area of greatest weakness in current training programmes.

We should not expect staff (e.g. subject leaders) to devise assessment systems, without having had training in assessment.

The Commission recommends the establishment of a national item bank of assessment questions to be used both for formative assessment in the classroom, to help teachers evaluate understanding of a topic or concept, and for summative assessment, by enabling teachers to create bespoke tests for assessment at the end of a topic or teaching period.

But don’t hold your breath.

The Commission decided at the outset not to prescribe any particular model for in-school assessment. In the context of curriculum freedoms and increasing autonomy for schools, it would make no sense to prescribe any one model for assessment.

Which is where it ultimately is mistaken, since we are expected to be able to make comparisons across schools!

Schools should be free to develop an approach to assessment which aligns with their curriculum and works for their pupils and staff

We have a NATIONAL CURRICULUM!

Although levels were intended to define common standards of attainment, the level descriptors were open to interpretation. Different teachers could make different judgements

Well good grief! This is true of everything they’re expecting us to do in teacher assessment all the time.

Pupils compared themselves to others and often labelled themselves according to the level they were at. This encouraged pupils to adopt a mind-set of fixed ability, which was particularly damaging where pupils saw themselves at a lower level.

This is only going to be made worse, however, by the ‘meeting’ aspects of the new system.

Without levels, schools can use their own assessment systems to support more informative and productive conversations with pupils and parents. They can ensure their approaches to assessment enable pupils to take more responsibility for their achievements by encouraging pupils to reflect on their own progress, understand what their strengths are and identify what they need to do to improve.

Actually, that’s exactly what levels did do! However…

The Commission hopes that teachers will now build their confidence in using a range of formative assessment techniques as an integral part of their teaching, without the burden of unnecessary recording and tracking.

They hope?

Whilst summative tasks can be used for formative purposes, tasks that are designed to provide summative data will often not provide the best formative information. Formative assessment does not have to be carried out with the same test used for summative assessment, and can consist of many different and varied tasks and approaches. Similarly, formative assessments do not have to be measured using the same scale that is used for summative assessments.

OK – this is a key piece of information that is misunderstood by nearly everybody working within education.

However, the Commission strongly believes that a much greater focus on high quality formative assessment as an integral part of teaching and learning will have multiple benefits:

We need to make sure this is fully understood. We must avoid formalising what we think is ‘high quality formative assessment’ because that will become another burdensome and meaningless ritual. Don’t get me started on the Black Box!

The new national curriculum is founded on the principle that teachers should ensure pupils have a secure understanding of key ideas and concepts before moving onto the next phase of learning.

And they do mean 100% of the objectives.

The word mastery is increasingly appearing in assessment systems and in discussions about assessment. Unfortunately, it is used in a number of different ways and there is a risk of confusion if it is not clear which meaning is intended

By leading politicians too. A common understanding of terms is rather important, don’t you think?

However, Ofsted does not expect to see any specific frequency, type or volume of marking and feedback;

OK, it’s been posted before, but it’s worth reiterating. Many subject leaders and headteachers are still fixated on marking.

On the other hand, standardised tests (such as those that produce a reading age) can offer very reliable and accurate information, whereas summative teacher assessment can be subject to bias.

Oh really? Then why haven’t we been given standardised tests, and why is there still so much emphasis on teacher assessment?

Some types of assessment are capable of being used for more than one purpose. However, this may distort the results, such as where an assessment is used to monitor pupil performance, but is also used as evidence for staff performance management. School leaders should be careful to ensure that the primary purpose of assessment is not distorted by using it for multiple purposes.

I made this point years ago.

Awaiting some ministerial decisions

What a joke! This from them today:

Changes to 2016 tests and assessments

We are aware that schools are waiting for additional information about changes to the national curriculum tests and assessments to be introduced for the next academic year. We are still awaiting some ministerial decisions, in particular in relation to teacher assessment. We will let you know in September, as more information becomes available.

Only they’re not kidding. Mike Tidd comments on the same here, but I was unrealistically (and uncharacteristically) optimistic that something would come out before we had to have everything in place in September. Should we laugh or tear our hair out that they are ‘awaiting ministerial decisions’? What – the ministers haven’t been able to decide after 2 years? I won’t hold my breath for anything sensible then. Of course, ‘teacher assessment’ should be a matter for serious consideration, but I doubt that their decisions are delayed for the types of reservations I have on the matter. Whilst it seems to have become the global panacea for all assessments that are too complex to manage, I keep banging on about how inappropriate and unreliable it is. If we are to expect pupil attainment to be a criterion for teacher appraisal and progression, then how can we possibly expect teachers to carry out that assessment themselves? That would be wrong, even if we had extremely reliable tools with which to do it, but we don’t. We have nothing of the sort and we never will have, as long as we assess by descriptive objectives.

So what do I really want? Well, to be honest, although I believe in the essential role of testing within learning, I really want to stop assessing attainment in the way it has become embedded within English culture. It’s a red herring and has nothing to do with education. I never thought I’d say that – I always had highly ‘successful’ results within the old levels system – but I’m very much questioning the whole notion of pupil attainment as currently understood. It’s based on a narrow set of values which, in spite of all the rhetoric of ‘closing the gap’, are never going to be brilliantly addressed by all pupils. That’s an inescapable statistical fact. And why should they be? Attainment is not the same as education, in the same way that climbing the ladder is not the same as being equipped to make informed decisions.

But if we must, then give us all the same tools – the same yardstick. At the end of Year 6, all pupils will be assessed by written tests for maths, reading, spelling and grammar. Their results will then be effectively norm referenced (after a fashion). Do that for all the year groups. I’d prefer it if we moved into the latter half of the 20th century in terms of the effective use of technology, but even an old Victorian-style paper is better than the vague nonsense we are currently working with.
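For readers unfamiliar with the mechanics, norm referencing of the kind described above usually means converting raw marks to a standardised scale anchored to the cohort’s average. A minimal sketch, assuming the common convention of a mean of 100 and a standard deviation of 15 (the function name and parameters are illustrative, not any official DfE method):

```python
from statistics import mean, stdev

def norm_reference(raw_scores, target_mean=100, target_sd=15):
    """Convert raw test marks to standardised scores.

    Each pupil's mark is expressed as a distance from the cohort
    mean (a z-score), then rescaled so the cohort average lands on
    target_mean with a spread of target_sd.
    """
    m = mean(raw_scores)
    s = stdev(raw_scores)
    return [round(target_mean + target_sd * (x - m) / s) for x in raw_scores]

# A cohort scoring 10, 20 and 30 raw marks comes out one standard
# deviation below, at, and above the scaled average respectively.
print(norm_reference([10, 20, 30]))  # [85, 100, 115]
```

The point of the exercise is that a score only says where a pupil sits relative to the cohort, which is exactly why the same yardstick must be applied to everyone.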

So, anyway, as it stands, are we justified, when Ofsted visits us in Autumn 2015, in having an assessment system in disarray, or are we supposed to have sorted it all out, even though they haven’t?

Can we ditch ‘Building Learning Power’ now?

Colleagues in UK primary schools might recognise the reference, ‘Building Learning Power’, which was another bandwagon that rolled by a few years ago. As ever, many leaped aboard without stopping to check exactly what the evidence was. Yes, there did appear to be a definite correlation between the attitudinal aspects (‘dispositions’ and ‘capacities’) outlined in the promotional literature and pupil attainment, but sadly few of us seem to have learned the old adage that correlation does not necessarily imply causation. Moreover, we were faced with the claim that ‘it has a robust scientific rationale for suggesting what some of these characteristics might be, and for the guiding assumption that these characteristics are indeed capable of being systematically developed’. And who are we, as the nation’s educators, to question such an authoritative basis as a ‘robust scientific rationale’ (in spite of the apparent lack of references)?

So, instead of simply acknowledging these characteristics, we were expected somehow to teach them, present assemblies on them and unpick them to a fine degree. It didn’t sit comfortably with many of us – were we expecting pupils to use those dispositions and capacities whilst learning something else, or were we supposed to teach them separately and specifically? When planning lessons, we were told to list the BLP skills we were focussing on, but we were confused. It seemed like we would always be listing all the skills – inevitably, since they were the characteristics which correlated with attainment. But still, teachers do what they’re told, even if it ties them up in knots sometimes.

So it was with interest that I came across this piece of research from the USA:

Little evidence that executive function interventions boost student achievement

As I’m reading, I’m wondering what exactly ‘executive function’ is and why I haven’t really heard about it in the context of teaching and learning in the UK. But as I read on, I see that it is ‘the skills related to thoughtful planning, use of memory and attention, and ability to control impulses and resist distraction’, and it dawns on me that this is the language of BLP! So I read a little more closely and discover that in a 25-year meta-analysis of the research, there is no conclusive evidence that interventions aimed at teaching these skills have had any impact on attainment. To quote:

“Studies that explore the link between executive function and achievement abound, but what is striking about the body of research is how few attempts have been made to conduct rigorous analyses that would support a causal relationship,” said Jacob [author].

The authors note that few studies have controlled for characteristics such as parental education, socioeconomic status, or IQ, although these characteristics have been found to be associated with the development of executive function. They found that even fewer studies have attempted randomized trials to rigorously assess the impact of interventions.

Not such a robust scientific rationale, then? Just to be clear: a lack of evidence doesn’t mean there is no causation, but isn’t that exactly what we should be concerned with? This is only one of a multitude of initiatives thrown our way in the past decade, many of which have since fallen into disuse or become mindlessly ritualised. We are now led to believe, however, given the catchphrase bandied about by government ministers and the funding channelled through bodies such as the Education Endowment Foundation, that there is an increased drive for ‘evidence-based education’, which of course raises the question: what exactly has underpinned the cascade of initiatives up to this point?

Shouldn’t we just say ‘no’?

I’m beginning to wonder why we are playing their game at all. Why are we not questioning the basis for the assumptions about what children should know and be able to do by whatever year, as prescribed in the new curriculum and in the soon-to-be-published, rapidly cobbled-together waste of time and paper that is the new set of ‘descriptors’? Have they based these on any actual research, other than what Michael Gove dimly remembered from his own school days?

We recently purchased some published assessments, partly, I’m sorry to say, on my suggestion that we needed something ‘external’ to help us measure progress, now that levels no longer work. It wasn’t what I really wanted – I favour a completely different approach involving sophisticated technology, personal learning and an open curriculum, but that’s another long story and potential PhD thesis! Applying these assessments, though, is beginning to look unethical, to say the least. I’ve always been a bit of a fan of ‘testing’ when it’s purposeful, aids memory and feeds back at the right level, but these tests are utterly demoralising for pupils and staff and I’m pretty sure that’s not a positive force in education. I’m not even sure that I want to be teaching the pupils to jump through those hoops that they’re just missing; I strongly suspect they are not even the right hoops – that there are much more important things to be doing in primary school that are in no way accounted for by the (currently inscrutable) attaining/not attaining/exceeding criteria of the new system.

So what do we do when we’re in the position of being told we have to do something that is basically antagonistic to all our principles? Are we really, after all this time, going to revert to telling pupils that they’re failures? It seems so. Historically, apart from the occasional union bleat, teachers in England have generally tried their best to do what they’re told, as if, like the ‘good’ pupils they might have been when they were at school, they believe and trust in authority. Milgram would have a field day. Fingers on buttons, folks!

Another pointless consultation

The DfE are apparently ‘seeking views on draft performance descriptors for determining pupil attainment at the end of key stages 1 and 2’.

https://www.gov.uk/government/consultations/performance-descriptors-key-stages-1-and-2

They have previously ‘sought views’ on the draft national curriculum and the assessment policy, which they acknowledged and then proceeded to largely ignore. I should imagine this will be no different. Needless to say, I still responded, as I did with the others, if only for the opportunity to point out how vague and meaningless their descriptors are.

My response in brief:

It is really important that you remove all vague terminology, such as ‘increasing’ or ‘wider’. In removing levels, you acknowledged the unreliability of the system and the difficulty faced by teachers in agreeing levels. This document falls into the same trap. It would be far better to provide examples of what is expected at each key stage (and in each year) than these vague descriptions, some of which could apply to any level of study (Reception to post-doctoral). Many teachers have worked for years on helping colleagues to understand exactly what was required to show a pupil’s attainment, and in one fell swoop the new curriculum has demolished all that work without replacing it with anything effective. Give us a standardised set of concrete examples and explanations (not exemplars of pupils’ work), along the lines of those provided by Kangaroo Maths when we were grappling with what the levels represented in the old curriculum. Give us some e-assessment software that will allow us to quickly determine and collate this information.

I did also want to say, ‘Give us some mid-20th-century textbooks, since that’s obviously the source of your “new” curriculum.’ In actual fact this isn’t just a bitter jibe. A textbook would at least guide us through the current morass. We could really do with some clarity and consistency. I suggest a state-of-the-art information source written by actual experts, rather than the range of opportunistic publications that will be cobbled together by ill-prepared commercial companies jumping on this latest bandwagon.