Final report of the Commission on Assessment without Levels – a few things.

I’ve read the report and picked out some things. This is not a detailed analysis, but more of a selection of pieces relevant to me and anyone else interested in primary education and assessment:

Our consultations and discussions highlighted the extent to which teachers are subject to conflicting pressures: trying to make appropriate use of assessment as part of the day-to-day task of classroom teaching, while at the same time collecting assessment data which will be used in very high stakes evaluation of individual and institutional performance. These conflicted purposes too often affect adversely the fundamental aims of the curriculum,

Many of us have been arguing that for years.

the system has been so conditioned by levels that there is considerable challenge in moving away from them. We have been concerned by evidence that some schools are trying to recreate levels based on the new national curriculum.

Some schools are hanging on to them like tin cans in the apocalypse.

levels also came to be used for in-school assessment between key stages in order to monitor whether pupils were on track to achieve expected levels at the end of key stages. This distorted the purpose of in-school assessment,

Whose fault was that?

There are three main forms of assessment: in-school formative assessment, which is used by teachers to evaluate pupils’ knowledge and understanding on a day-to-day basis and to tailor teaching accordingly; in-school summative assessment, which enables schools to evaluate how much a pupil has learned at the end of a teaching period; and nationally standardised summative assessment,

Try explaining that to those who believe teacher assessment through the year can be used for summative purposes at the end of the year.

many teachers found data entry and data management in their school burdensome.

I love it when it’s my own.

There is no intrinsic value in recording formative assessment;

More than that – it degrades the formative assessment itself.

the Commission recommends schools ask themselves what uses the assessments are intended to support, what the quality of the assessment information will be,

I don’t believe our trial system using FOCUS materials and assigning a score had much quality. It was too narrow and unreliable. We basically had to resort to levels to try to achieve some sort of reliability.

Schools should not seek to devise a system that they think inspectors will want to see;

!

Data should be provided to inspectors in the format that the school would ordinarily use to monitor the progress of its pupils

‘Ordinarily’ we used levels! This is why I think we need data based on internal summative assessments. I do not think we can just base it on a summative use of formative assessment information!

The Carter Review of Initial Teacher Training (ITT) identified assessment as the area of greatest weakness in current training programmes.

We should not expect staff (e.g. subject leaders) to devise assessment systems without having had training in assessment.

The Commission recommends the establishment of a national item bank of assessment questions to be used both for formative assessment in the classroom, to help teachers evaluate understanding of a topic or concept, and for summative assessment, by enabling teachers to create bespoke tests for assessment at the end of a topic or teaching period.

But don’t hold your breath.

The Commission decided at the outset not to prescribe any particular model for in-school assessment. In the context of curriculum freedoms and increasing autonomy for schools, it would make no sense to prescribe any one model for assessment.

Which is where it ultimately is mistaken, since we are expected to be able to make comparisons across schools!

Schools should be free to develop an approach to assessment which aligns with their curriculum and works for their pupils and staff

We have a NATIONAL CURRICULUM!

Although levels were intended to define common standards of attainment, the level descriptors were open to interpretation. Different teachers could make different judgements

Well good grief! This is true of everything they’re expecting us to do in teacher assessment all the time.

Pupils compared themselves to others and often labelled themselves according to the level they were at. This encouraged pupils to adopt a mind-set of fixed ability, which was particularly damaging where pupils saw themselves at a lower level.

This is only going to be made worse, however, by the ‘meeting’ aspects of the new system.

Without levels, schools can use their own assessment systems to support more informative and productive conversations with pupils and parents. They can ensure their approaches to assessment enable pupils to take more responsibility for their achievements by encouraging pupils to reflect on their own progress, understand what their strengths are and identify what they need to do to improve.

Actually, that’s exactly what levels did do! However…

The Commission hopes that teachers will now build their confidence in using a range of formative assessment techniques as an integral part of their teaching, without the burden of unnecessary recording and tracking.

They hope?

Whilst summative tasks can be used for formative purposes, tasks that are designed to provide summative data will often not provide the best formative information. Formative assessment does not have to be carried out with the same test used for summative assessment, and can consist of many different and varied tasks and approaches. Similarly, formative assessments do not have to be measured using the same scale that is used for summative assessments.

OK – this is a key piece of information that is misunderstood by nearly everybody working within education.

However, the Commission strongly believes that a much greater focus on high quality formative assessment as an integral part of teaching and learning will have multiple benefits:

We need to make sure this is fully understood. We must avoid formalising what we think is ‘high quality formative assessment’ because that will become another burdensome and meaningless ritual. Don’t get me started on the Black Box!

The new national curriculum is founded on the principle that teachers should ensure pupils have a secure understanding of key ideas and concepts before moving onto the next phase of learning.

And they do mean 100% of the objectives.

The word mastery is increasingly appearing in assessment systems and in discussions about assessment. Unfortunately, it is used in a number of different ways and there is a risk of confusion if it is not clear which meaning is intended

By leading politicians too. A common understanding of terms is rather important, don’t you think?

However, Ofsted does not expect to see any specific frequency, type or volume of marking and feedback;

OK, it’s been posted before, but it’s worth reiterating. Many subject leaders and headteachers are still fixated on marking.

On the other hand, standardised tests (such as those that produce a reading age) can offer very reliable and accurate information, whereas summative teacher assessment can be subject to bias.

Oh really? Then why haven’t we been given standardised tests and why is there still so much emphasis on TA?

Some types of assessment are capable of being used for more than one purpose. However, this may distort the results, such as where an assessment is used to monitor pupil performance, but is also used as evidence for staff performance management. School leaders should be careful to ensure that the primary purpose of assessment is not distorted by using it for multiple purposes.

I made this point years ago.

Too much stirring is spoiling the pudding

The world of education seems to me to be currently in a state of frenzy, particularly in England, but probably fuelled in good part by US ideology. Teachers, like myself, who actually read the bulletins, follow the research, go on Facebook, watch the news etc. (maybe there are some who do none of these) are assailed from all directions, with the underlying message that something must be done. For example – in random order:

  • RI/Good/Outstanding
  • Ofsted’s new directives
  • Just Ofsted!
  • Coasting schools
  • Failing teachers
  • Failing heads
  • Teachers want to leave
  • Workload
  • The New Curriculum is good/bad/indifferent
  • Mastery
  • Levels were a bad idea
  • Assessment for Learning is a wonderful thing
  • Progressivism was a terrible idea
  • Trojan horses
  • Text books are great/not great
  • Phonics is good (as is grammar!)
  • Academies will save us/damn us all
  • Parents have the right to choose
  • State schools should be more like private schools
  • Close the gap
  • Practice should be based on research – take your pick which piece
  • Marking is essential feedback/not essential/done badly
  • Independent learning
  • Individualised trajectories
  • Whole class teaching
  • Age related expectations
  • Progressive targets
  • Accountability measures
  • Observations are important/detrimental
  • New technologies are going to save us/damn us all
  • SMCS
  • British Values

I could go on, and readers of this blog could probably add hundreds more items to the list. When I read articles, blogs and research online, everyone has an opinion. Sometimes there is ‘evidence’, although not the kind of evidence that would be accepted within the ‘hard’ sciences. If we teachers were to try to take on board everything that they tell us, so that we are not ‘failing teachers’, we’d become useless. And what are all these methods, tools and strategies for, exactly? An improvement in attainment of 3 months? Really? Is that anything? I meet successful former pupils – I cannot begin to think how I could relate their success to something as nebulous as a 3 month difference in attainment in primary school, even if I could believe that such things can be measured. (In fact, give us that measuring tool – it would help us all a lot!)

And then, what is the measure of their success? Are they making a useful contribution to the economy? Is that what it’s about? I really don’t think it is, or that it should be. I would like it to stop, now. Nothing can operate well within a climate of such unremitting, frequent and conflicting input, and I don’t believe it’s as complicated as all that. There have been successful educators in the past – we have to admit that teachers must have managed it before we had so many directives and all this ‘evidence based practice’. Some of my own teachers were brilliant, but that’s not even the issue.

The responsibility for learning lies with the learner, not the teacher! If we continue to believe we can ‘fix’ things by directing our remedies at the teachers, we’ll fail. The main issue with the teachers is not what they do but what they (don’t) know, and a focus on teaching distracts us from that issue. English teachers are themselves the product of the system and the result of a culture that has removed the responsibility of learning from the learner.
I’ve seen this myself, where, if a teacher lacks subject knowledge (for example in the new computing requirements), they do nothing until the CPD is provided for them, yet we live in a technologically advanced world where access to information has never been easier. If we really want to remedy the ills of the English education system, we should:

  • stop making up new responsibilities for teachers
  • stop endlessly tweaking the system
  • recognise that we can’t ‘close the gap’*
  • require excellent subject knowledge
  • recognise that the learner is responsible for their learning

*’closing the gap’ is a phrase for another tirade. Try closing the gap between my sprinting time and that of Usain Bolt!

Watching the speedo

I teach the school band on a Friday, after school. I have done for about 20 years. It started out as an antidote to the stifling ‘orchestra’ that was causing children to give up their music lessons. The orchestra focussed on very simple arrangements of chamber music, welcomed mainly violinists, and at a push some woodwind. Brass could get lost and join a ‘wind band’ somewhere. It haemorrhaged members and I picked them up. All comers were welcome if they knew which way up the instrument was. We’ve always had a rhythm section with drummer, bass, guitars and keyboards, and then a random assortment of instruments, including, at times, djembes, steel pans and saxophones. Every year we lose our ‘seasoned’ players to secondary school. We play two or three gigs a year and we’re pretty good for a primary school! I think we’re the only one of our kind in the county.

So what’s this all about? Well I was thinking about it this Friday. I have no external objectives; I have no classroom observations by ‘senior leaders’; I have no written plans; I don’t have to fill in any boxes or record progress; I don’t have to give a percentage of pupils ‘developing, meeting or exceeding’; I have no ‘accountability’ in that regard, whatsoever. In short, I can drive it myself without having to check to see if I’m doing it right according to some outside observer or objective. And yet, AMAZINGLY, we go from a rag tag bunch at the beginning of each year and by the end of the summer we’re playing rock and pop and a bit of jazz as a group. I’d claim that the ‘progress’ of each pupil is massively enhanced by the expectations of playing with the band. In other words I don’t need any external encouragement or censure. I do not need to demonstrate anything to anyone and I still work hard to get quality out of my students. Which is what I’d do in the classroom, too, if I could, without having to ‘check the speedo’ all the time to see if I’m following some external, arbitrary set of conventions and directives.

Not unrelated to this is a growing feeling that nobody has come up with anything useful since teaching began. I’ve been immersed in educational research for the past 5 years and the more I find out, the more I see that there is nothing conclusive at all. Pet theories abound, citing ‘impact weightings’, but on closer inspection the evidence disappears in a puff of unreliable smokescreen. We’ve had a variety of memes inexpertly thrust at us through local ‘experts’, including, off the top of my head: AfL, exploratory learning, child-centred learning, reverse planning, the creative curriculum, back to basics, real books, key words, phonics, Building Learning Power, hydration, learning styles, literacy and numeracy ‘hours’, mixed ability, ability grouping, interventions… OK, I realise that’s a completely mixed bag, but when I pick out anything meaningful from any of that, I’m left with the realisation that there was nothing revolutionary there. Anything useful was practised decades ago by my own teachers all through my education, and I’m doubtful that the rest of the profession was full of crap teachers for the want of some external directive and earth-shattering research finding.

Teachers are so drilled into doing what they’re told that we’re now tying ourselves up in knots to please whoever we perceive is going to judge us, by whatever measure, be it our own school leaders, OFSTED or some imaginary ‘them’ that is telling us we now have to do ‘this’. It’s stultifying. I’ve watched many new practitioners over the years and their journey has two paths – lose all initiative and become another stressed-out drone, or leave. The latter have often been those with the most to offer.

Lately I’ve gone back to my own basics: teach by showing how or what; let the pupils do it themselves; modify, practise, move on. It seems to work. I don’t know about the research evidence.

Can we ditch ‘Building Learning Power’ now?

Colleagues in UK primary schools might recognise the reference ‘Building Learning Power’, which was another bandwagon that rolled by a few years ago. As ever, many leaped aboard without stopping to check just exactly what the evidence was. Yes, there did appear to be a definite correlation between the attitudinal aspects (‘dispositions’ and ‘capacities’) outlined in the promotional literature and pupil attainment, but sadly few of us seem to have learned the old adage that correlation does not necessarily imply causation. Moreover we were faced with the claim that ‘it has a robust scientific rationale for suggesting what some of these characteristics might be, and for the guiding assumption that these characteristics are indeed capable of being systematically developed’. And who are we, as the nation’s educators, to question such an authoritative basis as a ‘robust scientific rationale’ (in spite of the apparent lack of references)?

So, instead of simply acknowledging these characteristics, we were expected somehow to teach them, present assemblies on them and unpick them to a fine degree. It didn’t sit comfortably with many of us – were we expecting pupils to use those dispositions and capacities whilst learning something else, or were we supposed to teach them separately and specifically? When planning lessons, we were told to list the BLP skills we were focussing on, but we were confused. It seemed like we would always be listing all the skills – inevitably, since they were the characteristics which correlated with attainment. But still, teachers do what they’re told, even if it ties them up in knots sometimes.

So it is with interest I came across this piece of research from the USA:

Little evidence that executive function interventions boost student achievement

As I’m reading, I’m wondering what exactly ‘executive function’ is and why I haven’t really heard about it in the context of teaching and learning in the UK, but, as I read on I see that it is ‘the skills related to thoughtful planning, use of memory and attention, and ability to control impulses and resist distraction’ and it dawns on me that that is the language of BLP! So I read a little more closely and discover that in a 25 year meta-analysis of the research, there is no conclusive evidence that interventions aimed at teaching these skills have had any impact on attainment. To quote:

“Studies that explore the link between executive function and achievement abound, but what is striking about the body of research is how few attempts have been made to conduct rigorous analyses that would support a causal relationship,” said Jacob [author]

The authors note that few studies have controlled for characteristics such as parental education, socioeconomic status, or IQ, although these characteristics have been found to be associated with the development of executive function. They found that even fewer studies have attempted randomized trials to rigorously assess the impact of interventions.

Not such a robust scientific rationale, then? Just to be clear – lack of evidence doesn’t mean there isn’t causation, but isn’t that exactly what we should be concerned with? This is only one of a multitude of initiatives that have been thrown our way in the past decade, many of which have since fallen into disuse or become mindlessly ritualised. We are now led to believe, however – given the catchphrase bandied about by government ministers and a good degree of funding through bodies such as the Education Endowment Foundation – that there is an increased drive for ‘evidence-based education’, which of course raises the question: what’s been going on – what exactly has underpinned the cascade of initiatives – up to this point?