In England we’ve just had a revision of what used to be the ICT (Information and Communications Technology) curriculum for primary schools into what is now going to be ‘computing’, focussing on what are imagined to be the necessary skills for modern life, such as programming and ‘digital citizenship’. Interestingly, there is little mention of the use of the computer in any other subject of the new curriculum.
Having just completed the excellent FutureLearn courses ‘Teaching Computing 1 and 2’, I initially thought there was little new here. The programming part is straight out of the latter half of the 20th century. That’s not really surprising, though, given the paradigms of that time. What I find difficult to understand, however, is the anachronistic attitude towards the assessment of this subject. Like everything else in the English primary system at the moment, it’s ‘teacher assessment’ – that old fall-back and catch-all that miraculously covers everything. This is to be done by observing, of course: observing 32 pupils all carrying out a multiplicity of activities at a computer.
So why not use the computer?
This is my rant straight from the CAS (Computing At School) site:
Are we failing to exploit the potential for e-assessment to capture rich data on pupil behaviour as well as give instant feedback to pupils? Why are we trying to construct elaborate rubrics with a multitude of descriptors which require hours of teacher observation? What is the likelihood that teachers are going to manage to dedicate any time to the assessment of computing, given that the core subjects alone will have something like a total of 4600 items for a class of 32? I feel that, on the one hand, we’re supposedly promoting 21st-century skills (though I’ve yet to see that, in fact) whilst using 19th-century systems of assessment.
The ‘disastrous’ attempts to use e-assessment can largely be put down to the inappropriate use of the technology. I’m not advocating a ham-fisted approach which replicates online what pupils could do on paper. I’m amazed that computing practitioners are even considering that a paper-based approach is somehow more ‘academic’. Computers have enormous potential for tracking and analysing what is done on the computer (and let’s face it, most computing is done on the computer!). We haven’t even begun to tap into the formative potential of such things as ‘serious games’ and immersive software, and yet these are everywhere in the commercial world and in high-risk occupations (medicine, aerospace, motor racing). Pupils’ behaviour is being assessed every day by the games they play online. It really is an issue for policy makers, educationalists and software companies, but I imagined that here, in the computing curriculum domain, there would at least be that kind of thinking.
There are interesting developments in the US and in Australia, as well as some promising work in further and higher education. However, we really don’t seem to be aware of what is possible in English primary education, which is overly influenced by the belief that ‘teacher assessment’ covers everything. It doesn’t, it can’t, and the current model is severely flawed.
I’m growing increasingly frustrated with the education community in England and its inability to see the benefits of digital technology in teaching and assessment, even when the very subject at hand is the use of digital technology.