I once had a data tracking system I was quite proud of. It involved recording every exam question students completed in lessons and then colour-coding each grade on an Excel sheet: red for below target, amber for on target and green for above. It was quite time-consuming, but I thought it was worth it because it allowed me to assess patterns of performance for individuals and for the group as a whole. Theoretically it also enabled me to plan ‘interventions’ for underperforming students, although in reality such extensive recording made working out priorities very difficult.
It was a waste of time and I’m not doing it anymore. Working out why it was so flawed has got me thinking about how curriculum planning, assessment and data tracking should happen in history.
I’m going to begin with the problems with my old tracking system. Firstly, it suggested many students were performing far better than they really were. This was because the exam questions students did in my lessons always took place right after I’d taught the content. They did well because they weren’t required to remember anything that happened more than an hour before. This became clear when students sat mock examinations which assessed material they’d covered months previously. Children who I’d been tracking at As and Bs sometimes got Fs and Gs. Conversely, a few students who I’d been tracking at low grades did significantly better than my class-based data indicated they would.
This happened because, in effect, I was teaching a linear course in a modular fashion and wasn’t teaching the importance of memory and revision explicitly enough. This sometimes created real conflict when I nagged students for not doing something they’d never been taught to do, which was to learn large amounts of content over a long period of time.
This issue was most visible at KS4 because GCSE examinations require students to retain at least two years of material. As I considered the issue more closely I became sure that the problem was caused by curriculum and assessment issues lower down the school.
My KS3 curriculum, like those of many schools in England, was in effect a modular one. Students studied one unit each half term and then sat an assessment based only on that material. This is very damaging both to the development of good history and to student performance in examinations. Students who follow curriculums like this one are never required to remember anything for more than six or seven weeks. This makes it too easy for students to see topics as islands, isolated from everything else that ever happened, which inhibits their ability to draw links between topics or to assess significance meaningfully.
Of course this model did not develop in a vacuum. It emerged as a result of data tracking systems which required teachers of all subjects to submit some form of grade for students each half term. This may have worked well enough for skills-based subjects such as English Language and Maths, but it was inappropriate for knowledge-based subjects like History. Carts were placed firmly before horses as Schemes of Work were revised to fit half-termly assessments, which were in turn planned to fit whole-school half-termly data tracking. Worse followed as the discipline itself was contorted to meet the demands of skills-based assessment. History was dismembered into a number of ‘skills’, often lifted from superficial versions of Bloom’s taxonomy. An increasing knowledge of events in the past was side-lined as an indicator of improvement; teachers assessed students on their ability to move through a hierarchy of ‘skills’ instead. Content was deprioritised and some teachers began arguing that knowledge wasn’t really important at all. For these teachers, and I’m embarrassed to admit for a while myself, developing ‘transferable skills’ became the most important reason history was taught in schools.
Through all this, GCSE Assessment Objectives and mark schemes didn’t change very much. KS4 teachers understood that embedded knowledge remained a significant driver of student success and taught GCSE accordingly. A system emerged in which KS3 and KS4 curriculums assessed different things: history was often one subject at KS3 and a completely different one at KS4. Many departments muddled along like this for years, with National Curriculum Levels masking the issue.
The demise of these NC Levels has brought these issues to the surface and offers an opportunity to recouple the two systems into one coherent whole.
Many schools now assess learning through 1-9 GCSE numbers applied from Y7. Under this system student progress should be very clear: students begin at a low number and then, as they become more accomplished, achieve higher ones. This does seem to work for subjects such as maths, in which the skills students use in Y7 will, after development, eventually be examined at the end of Y11. It does not, however, work well in history.
In history, despite the developments I discussed earlier, ‘getting better’ does not just mean becoming more skilful. ‘Getting better’ also means knowing more, and being able to remember it over longer and longer periods of time. A student may be able to explain the causes of the Battle of Hastings really well, but this does not mean they can explain the causes of the Night of the Long Knives. In history, skill without knowledge is, quite rightly, worth very little.
This makes tracking student improvement tough. The only truly authentic method of tracking would involve teaching KS4 courses to KS3 students and then repeating them endlessly, creating a thrumming exam machine. Of course, in a Local Education Authority school this would fail to meet statutory requirements and, more importantly, would be deeply unethical.
A more sensible solution is to create KS3 assessments based on KS4 Assessment Objectives. If students might be expected to “explain one reason for Hitler’s consolidation of power in 1934” in their Y11 exam, it might be helpful to ask Y7 students to “explain one reason William won the Battle of Hastings” in their Norman Conquest assessment. A system like this one allows students to cover a broad curriculum while also preparing them for assessment at GCSE. One problem, especially for schools which want to assess students’ current position, is that it can only ever generate a predicted grade; students achieving an ‘8’ or ‘9’ in Y9 are not guaranteed to achieve that at GCSE because most of the content they will study is completely new to them. Students who can explain one thing well can’t be expected to explain the reasons for an event they haven’t studied. But this, I think, isn’t that significant a concern. We can reasonably assume that if a student demonstrates the capacity to learn one thing to a given level, they probably have the capacity to learn something else to the same level. So most students should meet their prediction, with one final proviso, which brings me right back to where I started.
Predictions generated by KS3 data will be accurate only if assessments test learning over a significant period of time. The simplest, most effective way I’ve seen of doing this is a method Michael Fordham has described, and one my department will be adopting from next year. I’ll only be recording marks for these six tests, but I will be acting on the results far more decisively than I was when I was recording more data. If a student bombs a test there will be consequences.
Assessment | Content in assessment
1 | 1
2 | 1+2
3 | 1+2+3
4 | 1+2+3+4
5 | 1+2+3+4+5
Final examination | 1+2+3+4+5+6
This model did seem strange to me at first. It means that students might complete an entire module in half term five only to find there is just one low-mark question on that material. Odd as it first seems though, I’m confident it will lead to students knowing more and being able to understand and retain material they’ve learned for longer. If it works, our curriculum and assessment should help children learn more about the past and pass exams. And that should make everyone, from parents to my Head Teacher, very happy.