Of Wasps and Education

A long time ago I lived with Jim, a zoologist – the sort that actually liked to know about animals. He taught me, contrary to all popular English culture, to be friendly to wasps – to sit still and observe rather than flap about, leap up, scream, etc. Actually, I was an easy pupil because I’d not had that particular education and was stunned and appalled when, as a 15-year-old newly arrived at my first English school, I witnessed a fellow pupil smash a stray wasp to death rather than simply let it out of the window as we would have done ‘back home’. Anyway, Jim used to let the wasps land on his finger and drink lemonade – a trick I subsequently performed (without being stung) in front of many a bemused audience. Since then, I’ve learned lots about these clever insects. They can recognise each other as individuals and they can recognise human faces. Allow a wasp to do its zig-zagging buzz in front of you and it will learn what you look like and generally fly off, leaving you alone.

This year, I’m one of the very few to be concerned that there are practically no wasps about – nor many other insects. I take their absence as a bad sign, whereas I suspect most people are just happy not to be ‘pestered’ by them. I did see one last night, though, whilst waiting with my fellow band members before a gig. It took me a while to realise why they were jumping up and flapping their hands – it was a lone wasp, interested in the meat in their pork baps. So I did the trick; the wasp landed on my fingers and took the small piece I offered. I didn’t get stung, it didn’t get tangled in my hair, it didn’t land on my face or do any of the other things that terrify people. It didn’t bother me at all and I could continue to sit on my hay bale and calmly contemplate the beautiful evening.

So how does this relate to anything? Well, it’s something like this: what I have learned about wasps trumps popular culture and folk knowledge, and allows me to make a decision that is both compassionate and better informed. This is what I consider to be the goal of education. Yet it’s a losing battle – education is pointless in the face of both a widespread, ignorant culture and a ruling minority that makes decisions for us based not on evidence and expertise (the badger cull, the abolition of the Department of Energy and Climate Change), but on some other agenda, unnoticed by the majority and unfathomable to the rest.


Got the T-shirt (a moderate tale)

Given that teacher assessment is a nonsense which lacks reliability, and that moderation cannot really reduce this, nor ensure that gradings are comparable, our moderation experience was about as good as it could be! It was thus:

We two Y6 teachers each submitted all our assessments and three children in each category (more ridiculous, inconsistent and confusable codes, here), of which one from each was selected, plus another two from each category at random. So, nine children from each class. We were told who these nine were a day in advance. Had we wanted to titivate, we could have, but with our ‘system’ it really wasn’t necessary.

The ‘system’ was basically making use of the interim statements and assigning each one of them a number. Marking since April has involved annotating each piece of work with these numbers, to indicate each criterion. It was far less onerous than it sounds and was surprisingly effective in terms of formative assessment. I shall probably use something similar in the future, even if not required to present evidence.
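For anyone curious, here is a minimal sketch of that kind of bookkeeping – purely illustrative, with invented statement texts, numbers and names; it is my own reconstruction of the idea, not the actual system or the real interim framework wording:

```python
from collections import defaultdict

# Hypothetical numbering of a few interim-style statements -- invented
# for illustration, not the real framework wording or our actual codes.
STATEMENTS = {
    1: "writes effectively for a range of purposes",
    2: "uses a range of cohesive devices",
    3: "uses a full range of punctuation mostly correctly",
}

# pupil -> set of criterion numbers evidenced so far
evidence = defaultdict(set)

def annotate(pupil, criteria):
    """Record the numbered criteria a piece of work demonstrates,
    mirroring the numbers written against the work in the book."""
    evidence[pupil].update(criteria)

annotate("Pupil A", {1, 3})   # first piece of work
annotate("Pupil A", {2, 3})   # second piece of work

# Which statements still lack evidence for this pupil?
gaps = set(STATEMENTS) - evidence["Pupil A"]
print(sorted(gaps))  # [] -- every criterion evidenced at least once
```

The appeal is that the margin numbers do double duty: they act as formative notes against specific criteria at the point of marking, and they accumulate into a running record of which statements still need evidence before moderation.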

The moderator arrived this morning and gave us time to settle our classes whilst she perused our books. I had been sceptical: I had posted on Twitter that though a moderator would have authority, I doubted they’d have more expertise, and I was concerned about arguing points of grammar and assessment. I was wrong. We could hardly have asked for a better moderator. She knew her stuff. She was a Y6 teacher. We had a common understanding of the grammar and the statements. She’d made it her business to sample moderation events as widely as possible and had therefore had the opportunity to see examples of written work from a wide range of schools. She appreciated our system and the fact that all our written work since April had been done in one book.

Discussion and examination of the evidence by and large led to agreed assessments. One pupil was raised from ‘working towards’; another, whom I had tentatively (and only recently) put forward as ‘greater depth’, was agreed to have not quite made it. The other 16 went through as previously assessed, along with all the others in the year group. Overall, my colleague and I were deemed to know what we were doing! We ought to, but a) the county moderation experience had unsettled us and fed my ever-ready cynicism about the whole business, and b) I know it’s easy to be lulled into the false belief that what we’ve agreed is actually the ‘truth’ about where these pupils are. All we can really say is that the three of us roughly agreed. The limited nature of the current criteria makes this an easier task than the old levels (to which we still referred!), but the error in the system makes it unusable for accountability or for future tracking. I’m most interested to see what the results of the writing assessment are this year – particularly in moderated vs non-moderated schools. Whatever they are, they won’t be a reliable assessment but, unfortunately, they will still be used (for good or ill) by senior leaders and other agencies to make judgements about teaching.

Nevertheless, I’m quite relieved the experience was a positive one and gratified and somewhat surprised to have spent the day with someone with sense and expertise. How was it for you?


Not good is sometimes good

I was reading Beth Budden’s blog on the cult of performativity in education and thinking of the many times I’ve thanked the gods no-one was watching a particular lesson. It’s gratifying that there is a growing recognition that a single performance in a 40-minute session is no kind of measure of effectiveness – I’ve railed against that for many years. During observations, I’ve sometimes managed to carry off the performance (and it’s always a hollow victory) and sometimes I haven’t (which always leads to pointless personal post-mortems). Lately I’ve managed to introduce the idea that I will give a full briefing on the lesson – the background, my rationale, the National Curriculum links, the focus, the situation, etc. – before any member of the leadership team sets foot in my classroom to make a formal observation. It’s been a long time coming and it goes some way towards mitigating the performance effects. Not everyone in my school does it.

But what about the lessons that I really didn’t want anyone to watch? If they had been watched, would I be recognised as a bad teacher? If I think about lessons that seemed pretty poor by my own judgement, they almost always led on to a better understanding overall. A recent example is a lesson I taught (nay, crammed) on the basics of electricity. It was a rush. The pupils needed to glean a fair amount of information in a short time from a number of sources, and the resultant writing showed that it was poorly understood by everyone. Of course, it was my fault, and I’d definitely have failed that lesson if I were grading myself. Fortunately I wasn’t being graded and nobody was watching. Fortunately, also, having looked at their confused writing on the subject, I could speak to the pupils the next day, tell them that I took responsibility for the work being below par, and say that we needed to address the myriad misconceptions that had arisen. We did. The subsequent work was excellent and suggested a far higher degree of understanding from all; I assumed that something had been learned. Nowhere in here was a ‘good lesson’, but somewhere in here was some actual education – and not just about electricity.


Trialling moderation

A quick one today to cover the ‘trialling moderation’ session this afternoon.

We had to bring all the documents and some samples of pupils’ writing, as expected.

The moderators introduced themselves. They seemed to be mainly Y6 teachers who were also subject leaders for English. Some had moderated before, but obviously not for the new standards.

The ‘feel’ from the introduction to the session was that this wasn’t as big a problem as we had all been making it out to be. We were definitely to use the interim statements, and ‘meeting’ was indeed equivalent to a 4b.

At my table, we expressed our distrust of this idea and our fear that very few of our pupils would meet expected standards. Work from the first pupil was shared and the criteria ticked off. We looked at about three pieces of work. It came out as ‘meeting’, even though I felt it was comparable to the exemplar, ‘Alex’. The second pupil, from the next school, was ‘nearly exceeding’. I wasn’t convinced. There were lots of extended pieces in beautiful handwriting, but the sentence structures were rather unsophisticated. There was arguably a lack of variety in the range and position of clauses and transitional phrases. There was no evidence of writing for any other curriculum area, such as science.

I put forward the work from a pupil I had previously thought to be ‘meeting’ but had then begun to doubt. I wanted clarification. Formerly, I would have put this pupil at a 4a/5c, with the need to improve consistency of punctuation. Our books were the only ones on our table (and, it seemed, on others) that had evidence of writing across the curriculum; we moved a few years ago to putting all work in a ‘theme book’ (it has its pros and cons!).

Unfortunately, the session was ultimately pretty frustrating, as we didn’t get to agree on the attainment of my pupil; I was told that there needed to be evidence of the teaching process that had underpinned the writing in the books. That is to say, there should be the grammar exercises in which we had taught such things as ‘fronted adverbials’ etc., and then the written pieces in which that learning was evidenced. I challenged that and asked why we couldn’t just look at the writing, as we had done for the first pupil. By then the session was pretty much over. In spite of the moderator’s attempt to finish the moderation for me, we didn’t. The last part of the session was given over to the session leader coming over and asking whether we felt OK about everything, and to my reply that no, I didn’t. I still didn’t know which of the multiplicity of messages to listen to, and I hadn’t had my pupil’s work moderated. I had seen other pieces of work, but I didn’t trust the judgements that had been made.

The response was ‘what mixed messages?’ and the suggestion that it might take time for me to ‘get my head around it’, just as I must have had to do for the previous system. She seemed quite happy that the interim statements were broadly equivalent to a 4b and suggested that the government certainly wouldn’t want to see data showing a drop in attainment. I suggested that if people were honest, that could be the only outcome.

My colleague didn’t fare much better. She had deliberately brought samples from a pupil who writes very little but whose writing, when it comes, is accurate, stylish and mature. He had a range of pieces, but most of them were short. The moderator dismissed his work as insufficient evidence, but did inform my colleague that she would expect to see the whole range of text types, including poetry – because otherwise how would we show ‘figurative language and metaphor’?

I’m none the wiser, but slightly more demoralised than before. One of my favourite writers from last year has almost given up writing altogether because he knows his dyslexia will prevent him from ‘meeting’. Judging pupils’ writing as effectively a pass or fail is heart-breaking. I know how much effort goes into their writing. I can see writers who have such a strong grasp of audience and style missing the mark by just a few of the criteria. This is like being faced with a wall – if you can’t get over it, stop bothering.

We are likely to be doing a lot of writing over the next few weeks.


An upbeat end to a long, hard week

This is an uncharacteristic post for me. I started this blog as a vehicle for sharing critical views on aspects of education and I suppose a lot of them are approaching what some might term ‘rants’. In any case, it’s never been a blog for sharing good practice or teacher resources (although some of those are actually available elsewhere!) and yes, much of it is somewhat negative.

This, however, is simply a feel-good story after a week I had expected to be hard-going – hard-going because maintaining concentration following six hours of after-school parents’ evenings is always quite a slog. In the event it was all fine, in spite of the very depressing and alarming expectations thrust at us by the DfE on Monday.

Today, however, was a delight. We had set aside the whole day to cover some of the art curriculum, which has taken such a back seat recently, and the pupils apparently had an incredibly good time. It wasn’t complicated. We were developing the theme I had introduced earlier and extending the creative bit through printing and 3D mobile sculpture. It was almost a guilty pleasure not to be shoehorning them into passive sentences and modal verbs for a change.

So that was fun – but to cap it all, I run the band after school on Fridays. While we were waiting for the keyboard players to sort out their parts, I picked up the bass (my bass player was away ill) and started plucking out the line to Zawinul’s ‘Mercy, Mercy, Mercy’, at which point both guitarists and the drummer started playing along like it was a bona fide jam session. Bear in mind that the age range is 6 to 11 years. We followed that with a full play-through of ‘Stairway to Heaven’, and it appeared that the keyboard players had actually learned their (synth flute) parts at last. Several players took solos on ‘Watermelon Man’ and then, as we were packing up, we got into conversation about other music they would all like to play. The list included: ‘Smells Like Teen Spirit’; ‘Paranoid’ or ‘Iron Man’; anything by AC/DC; ‘something by Steppenwolf’; and possibly ‘Oye Como Va’, Santana-style. I couldn’t be more chuffed.

Coming to a stadium near you soon!


If I were the school leader…

I have a student teacher on placement in my class at the moment. It’s interesting to remind myself of the long list of criteria in the teachers’ standards that we have to consider in observations. As a teacher giving advice, I know which of these are important and which I’d give a lot less weight to when making any kind of value judgement.

I’ve never been a fan of classroom observations – for all the reasons that are now part of general discussion – particularly those that attempt to grade the teacher based on a snapshot of 20-40 minutes. It’s not how I’d do it. But the job of a school leader is a tough one, I believe, and nowhere tougher than in securing quality of teaching among the staff. If it were me, what would I look for?

When teachers are worrying about trying to tick the increasing number of boxes put before us, actual performance deteriorates. We focus on what we think will be assessed rather than on what we are actually doing. Humans can’t multi-task: attend to the process and the process itself suffers. This is a well-documented tactic used by those who would seek to remove unwanted personnel: increase the level of scrutiny and nit-pick every move so that eventually the subject can hardly function. It is a hard-nosed game that often ends in resignation, mental breakdown and sometimes suicide.

It’s not really the way I would go, and if micro-management is not the best way to ensure the pupils are getting a good education, then perhaps it boils down to a much smaller but more important set of key desirable features and skills. I think my list would be brief. I’d be looking specifically for evidence that the teacher:

  • Knows the subject(s) (and the curriculum) well
  • Knows what the pupils have learned and what to teach next
  • Manages behaviour so that pupils can focus
  • Teaches clearly so that pupils can understand
  • Picks up on issues and remedies them
  • Is compassionate

Everything else, surely, is either part of the craft or derived from opinion?

Challenge to this is welcome.


Primary Science Assessment – no miracles here

In April I wrote here on the draft science assessment guidance from the TAPS group. The final version is now out in the public domain (pdf), described thus:

“The Teacher Assessment in Primary Science (TAPS) project is a 3 year project based at Bath Spa University and funded by the Primary Science Teaching Trust (PSTT), which aims to develop support for a valid, reliable and manageable system of science assessment which will have a positive impact on children’s learning.”

I was vainly hoping for a miracle: valid, reliable AND manageable! Could they pull off the impossible? Well, if you read my original post, you’ll know that I had already abandoned that fantasy. I’m sorry to be so disappointed – I had wished to be supportive, knowing the time, effort (money!) and best of intentions put into the project. Others may feel free to pull out the positive aspects, but here I am only going to point out some of the reasons why I feel so let down.

Manageable?

At first glance, we could probably dismiss the guidance on the last of the three criteria straight away. Five layers and 22 steps would simply not look manageable to most primary school teachers. As subject leader, I’m particularly focussed on teaching science, and yet I would take one look at that pyramid and put it away for another day. Science has such low priority, regardless of the best efforts of primary science enthusiasts like myself, that any system which takes more time and effort than that given to the megaliths of English and Maths is highly unlikely to be embraced by class teachers. If we make assessment more complicated, why should we expect anything else? Did the team actually consider the time it would take to carry out all of the assessment steps for every science objective in the new curriculum? We do need to teach the subject, after all, even if we pretend that we can assess at every juncture.

Reliable?

In my previous post on this subject, I did include a question about the particular assessment philosophy of making formative assessment serve summative aims. I question it because I do not believe it can. It is strongly contested in the research literature and counter-indicated in my own experience. More importantly, if we use AfL (assessment for learning/formative assessment) practices to generate summative data, then in no way can we expect that data to be reliable! Even the pupils recognise that it is unfair to make judgements about their science based on their ongoing work. Furthermore, if it is teacher assessment for high-stakes or data-driven purposes, then it cannot be considered reliable, even if the original purpose is summative. At the very least, the authors of this model should not be ignoring the research.

Valid?

Simply put, this means ‘does what it says on the tin’ – hence the impossibility of assessing science adequately. I’m frequently irritated by the suggestion that we can ‘just do this’ in science. Even at primary school (or perhaps especially there) it’s a massive and complex domain. We purport to ‘assess pupils’ knowledge, skills and understanding’, but these are not simply achieved. At best we can touch on knowledge, where at least we can apply a common yardstick through testing. Skills may be observed, but there are so many variables in performance assessment that we immediately lose a good deal of reliability. Understanding can only be inferred, through a combination of lengthy procedures. Technology could address many of the problems of assessing science but, as I’ve complained before, England seems singularly uninterested in moving forward with this.

Still, you’d expect the examples at least to demonstrate what the authors mean teachers to understand by the term ‘valid’. Unfortunately, they include some which blatantly don’t. Of course it’s always easy to nit-pick details, but one example from the guidance of exactly not assessing what you think you are assessing is ‘I can prove air exists’ (now there’s a fine can of worms!), which should result from an assessment of being able to prove something about air, not from the actual assessment criterion ‘to know air exists’ (really? In Year 5?).

1. Ongoing formative assessment

This is all about pupil and peer assessment, and it is full of discomforting old ideas and lingering catchphrases. I admit I’ve never been keen on WALTs or WILFs and their ilk. I prefer to be explicit about my expectations and for the pupils to develop a genuine understanding of what they are doing, rather than cultivate ritualised, knee-jerk operations. Whilst I concede that this model focusses on assessment, it’s not very evident where the actual teaching takes place. Maybe we are meant to infer that it has already happened, but my concern is that this would not be obvious to many teachers. The guidance suggests, instead, that teachers ‘provide opportunities’, ‘involve pupils in discussions’, ‘study products’, ‘adapt their pace’ and ‘give feedback’. I would have liked to see something along the lines of ‘pick up on misconceptions and gaps in knowledge and then teach’.

Most disheartening is to see the persistence of ideas and rituals to do with peer assessment. Whilst peer assessment has come under some scrutiny recently for possibly not being as useful as has been claimed, I think it does have a place – but only with some provisos. In my experience, the most useful feedback comes not when we insist it be reduced to a basic format (tick a box, etc.) but when pupils can genuinely offer a thoughtful contribution. As such, it has to be monitored for misinformation; the pupils have to be trained to understand that their peers might be wrong, and this takes time. After fighting hard against mindless practices such as ‘two stars and a wish’, my heart sinks to find it yet again enshrined in something intended for primary teachers across the country.

2. Monitoring pupil progress

In this layer, we move from the daily activities which are considered part of ongoing, formative assessment to the expectation that teachers will now use something to monitor ‘progress’. This involves considerable sleight of hand, and I would have to caution teachers and leadership against assuming that they can just do the things in the boxes. Let’s see:

TEACHERS BASE THEIR SUMMATIVE JUDGEMENTS OF PUPILS’ LEARNING ON A RANGE OF TYPES OF ACTIVITY

When? To get a good range, it would have to start early in the year, particularly if it is to include all the science coverage from the curriculum. In that case, summative judgements are not reliable, because the pupils should have progressed by the end of the year. If it takes place at the end of the year, do we include the work from the earlier part of the year? Do we ignore the areas covered up to February? If we don’t, do we have time to look at a range of types of activity in relation to everything they should have learned? Neither ongoing work nor teacher observation is reliable or fair if we need this to be used for actual comparative data.

TEACHERS TAKE PART IN MODERATION/DISCUSSION WITH EACH OTHER OF PUPILS’ WORK IN ORDER TO ALIGN JUDGEMENTS

Oh, how I despise the panacea of moderation! It is supposed to reduce threats to reliability, and I’m constantly calling it out in that regard. Here they state:

“Staff confidence in levelling is supported by regular moderation. The subject leader set up a series of 10 minute science moderation slots which take place within staff meetings across the year. Each slot consists of one class teacher bringing along some samples of work, which could be children’s writing, drawings or speech, and the staff agreeing a level for each piece. This led to lengthy discussions at first, but the process became quicker as staff developed knowledge of what to look for.”

Where to begin? Staff confidence does not mean increased reliability; all it does is reinforce group beliefs. Ten-minute slots within staff meetings are unrealistic, both in what they assume about how long moderation takes and in the expectation that science will be given any slots at all. Whatever staff ‘agree’, it cannot be considered reliable: a few samples of work are insufficient to agree anything; the staff may not have the science or assessment expertise to be qualified to make the judgement; more overtly confident members of staff may influence others; and there may be collective misunderstanding of the criteria or attainment. Carrying out a ten-minute moderation for one pupil in one aspect of science does not translate to all the other pupils in all the aspects of science we are expected to assess. It might also have been a good idea to vet this document for mentions of levels, given that it was brought out to address their removal.

3. Summative reporting

A MANAGEABLE SYSTEM FOR RECORD-KEEPING IS IN OPERATION TO TRACK AND REPORT ON PUPILS’ LEARNING IN SCIENCE

I just want to laugh at this. I have some systems for record-keeping which are, in themselves, quite manageable once we have some real data. Where we have testable information – factual knowledge, for example – they might also mean something, but as most of us will know, such records quickly become a token gesture simply because they are not manageable. Very quickly, records become ‘rule of thumb’ exercises, because teachers do not have the time to gather sufficient evidence to back up every statement. I note that one of the examples in the guide is the use of the old APP rubric, which is no longer relevant to the new curriculum. We made the best of this in our school in a way that I devised to be as sure of the level as possible, but even then we knew that our observations were best guesses. A recording system is only as good as the information which is entered, despite a widespread misconception that records and assessment are the same thing! I’m no longer surprised, although still dismayed, at the number of people that believe the statistics generated by the system.

I didn’t intend this to be a balanced analysis – I’d welcome other perspectives – and I apologise to all involved for my negativity, but we’re clearly still a long way from a satisfactory system for assessing primary science. The model cannot work unless we don’t care about reliability, validity or manageability – but in that case, we need no model. If we want a fair assessment of primary science, with data on pupils’ attainment and progress that we feel is dependable, then we need something else. In my view, that only begins to be attainable if we make creative use of technology. Otherwise, perhaps we have been led on a wild goose chase, pursuing something that may be neither desirable nor achievable. Some aspects of science are amenable to testing, as they were in the SATs. I conceded to the arguments that those tests were inadequate for assessing the whole of science – particularly the important parts, enquiry and practical skills – but I don’t believe anything we’ve been presented with since has been adequate either, and the loss of science’s status was not a reasonable pay-off. To be workable, assessment systems have to be as simple and sustainable as possible. Until we can address that, if we have to have tracking data (and that’s highly questionable), perhaps we should consider returning to testing to assess science knowledge and forget trying to obtain reliable data on performance and skills – descriptive reporting on these aspects may have to be sufficient for now.

Can we ditch ‘Building Learning Power’ now?

Colleagues in UK primary schools might recognise the reference: ‘Building Learning Power’ was another bandwagon that rolled by a few years ago. As ever, many leaped aboard without stopping to check exactly what the evidence was. Yes, there did appear to be a definite correlation between the attitudinal aspects (‘dispositions’ and ‘capacities’) outlined in the promotional literature and pupil attainment, but sadly few of us seem to have learned the old adage that correlation does not necessarily imply causation. Moreover, we were faced with the claim that it ‘has a robust scientific rationale for suggesting what some of these characteristics might be, and for the guiding assumption that these characteristics are indeed capable of being systematically developed’. And who are we, as the nation’s educators, to question such an authoritative basis as a ‘robust scientific rationale’ (in spite of the apparent lack of references)?

So, instead of simply acknowledging these characteristics, we were expected somehow to teach them, present assemblies on them and unpick them to a fine degree. It didn’t sit comfortably with many of us – were we expecting pupils to use those dispositions and capacities whilst learning something else, or were we supposed to teach them separately and specifically? When planning lessons, we were told to list the BLP skills we were focussing on, but we were confused: it seemed we would always be listing all the skills – inevitably, since they were the characteristics which correlated with attainment. But still, teachers do what they’re told, even if it ties them up in knots sometimes.

So it was with interest that I came across this piece of research from the USA:

Little evidence that executive function interventions boost student achievement

As I’m reading, I’m wondering what exactly ‘executive function’ is and why I haven’t really heard about it in the context of teaching and learning in the UK; but as I read on, I see that it covers ‘the skills related to thoughtful planning, use of memory and attention, and ability to control impulses and resist distraction’, and it dawns on me that this is the language of BLP! So I read a little more closely and discover that, in a 25-year meta-analysis of the research, there is no conclusive evidence that interventions aimed at teaching these skills have had any impact on attainment. To quote:

“Studies that explore the link between executive function and achievement abound, but what is striking about the body of research is how few attempts have been made to conduct rigorous analyses that would support a causal relationship,” said Jacob [author]

The authors note that few studies have controlled for characteristics such as parental education, socioeconomic status, or IQ, although these characteristics have been found to be associated with the development of executive function. They found that even fewer studies have attempted randomized trials to rigorously assess the impact of interventions.

Not such a robust scientific rationale, then? Just to be clear: lack of evidence doesn’t mean there isn’t causation, but isn’t that exactly what we should be concerned with? This is only one of a multitude of initiatives that have been thrown our way in the past decade, many of which have since fallen into disuse or become mindlessly ritualised. We have recently been led to believe, however – given the catchphrase bandied about by government ministers and a good degree of funding through such bodies as the Education Endowment Foundation – that there is an increased drive for ‘evidence-based education’, which of course raises the question: what’s been going on – what exactly has underpinned the cascade of initiatives – up to this point?

Shouldn’t we just say ‘no’?

I’m beginning to wonder why we are playing their game at all. Why are we not questioning the basis for the assumptions about what children should know and be able to do by whatever year, as prescribed in the new curriculum and the soon-to-be-published, rapidly cobbled-together waste of time and paper that is the new ‘descriptors’? Have they based these on any actual research, other than what Michael Gove dimly remembered from his own school days?

We recently purchased some published assessments, partly, I’m sorry to say, on my suggestion that we needed something ‘external’ to help us measure progress, now that levels no longer work. It wasn’t what I really wanted – I favour a completely different approach involving sophisticated technology, personal learning and an open curriculum, but that’s another long story and potential PhD thesis! Applying these assessments, though, is beginning to look unethical, to say the least. I’ve always been a bit of a fan of ‘testing’ when it’s purposeful, aids memory and feeds back at the right level, but these tests are utterly demoralising for pupils and staff and I’m pretty sure that’s not a positive force in education. I’m not even sure that I want to be teaching the pupils to jump through those hoops that they’re just missing; I strongly suspect they are not even the right hoops – that there are much more important things to be doing in primary school that are in no way accounted for by the (currently inscrutable) attaining/not attaining/exceeding criteria of the new system.

So what do we do when we’re in the position of being told we have to do something that is basically antagonistic to all our principles? Are we really, after all this time, going to revert to telling pupils that they’re failures? It seems so. Historically, apart from the occasional union bleat, teachers in England have generally tried their best to do what they’re told, as if, like the ‘good’ pupils they might have been when they were at school, they believe and trust in authority. Milgram would have a field day. Fingers on buttons, folks!

OFSTED at the ASE

On Friday I went to Reading to attend the ASE (Association for Science Education) conference, where one of the sessions was run by an OFSTED HMI. I took some notes, which are reproduced below for your interest (!).
Looking at books to monitor progress

Apparently, since they rarely see any actual science going on, they tend to look in pupils’ books to see what science is happening. Hopefully we can point them in more directions than that, e.g. pictures and videos. Talking to teachers and pupils would be nice.

Levels or not – so what?

They don’t care what we call the levels/degrees/grades/points etc. They want to know how we use assessment to identify whether or not individuals are making progress, how we identify those falling behind and what we do about it.

Evidence of feedback making a difference

It’s crucial to allow time to feed back to pupils and for them to respond in a way that shows they have overcome misconceptions or improved understanding. This really needs to be built into the time we give to lessons. I know I have to do this, but I still sometimes start ‘new’ lessons without thinking about whether I have finished the previous one and done all the follow-up properly. My junior school teachers were brilliant at this. Why do we still need to be told?

General statements to parents will be fine

Just like the ones we gave out after parents’ evening last time. We wrote a descriptive summary on each of the core subjects, instead of just giving the level. The parents actually preferred it.

Heads up on schools paying lip service to evolution

They’ve been given instructions to look out for schools teaching evolution only because they ‘have to’ and giving any kind of weight to ‘alternative theories’ – these are not scientific theories – they are religious indoctrination by the back door.

Detailed formative and summative information

OK.

Show high expectations

Be careful with any ‘differentiation by task’, since this frequently consigns the lower-attaining pupils to lower expectations. Pupils should have access to the curriculum relevant to their age. Good – because I’ve been saying this for years. Differentiation by preplanned task is counter-productive.

We need to have local cluster moderation

Or we’ll deceive ourselves about our assessments (?).

Make sure pupils finish what they start

Unfinished work is a dead giveaway that we’re not allowing for follow-up time. Make sure we allow for pupils to finish in subsequent sessions.

Make sure the work is by and from the children

There should not be work by the teacher in the pupils’ books. Think about it – how much of the content of the books (backgrounds, printouts, learning-intention decorations, worksheets, proformas etc.) is currently produced by you?

It should not look all the same

Avoid ‘production line’ outcomes. Pupils’ work should demonstrate individuality.

Writing up science is literacy

I think we knew that.

Use past papers to assess units

Interestingly, the use of ‘test’ papers in a constructive way, to give good feedback etc., is recommended.
He also said that OFSTED inspectors were not allowed to say how any teacher ‘should have’ done anything. That’s considered giving advice. He said that they should only say what happened, what was successful and what was missing or not successful. Hmm…