One of the most common questions I get asked is about how to set grade boundaries for mocks or other internal exams. Many teachers will suggest using past paper grade boundaries, but I’d like to explain why this is not always good practice.

If the paper you are setting is a genuine, unedited past paper and you want to benchmark your students’ performance, then yes, use the grade boundaries that were set for that paper. However, be aware that if your students are taking this exam paper in Year 10, they are unlikely to have covered all the topics and will have less experience with programming and algorithms, which will inevitably lead to lower grades than you would expect at the end of the course. You need to be very careful when reporting these grades to students, as they only reflect what a student would achieve if they took the final exam at that moment in time, rather than what they are likely to achieve at the end of the course.

The biggest problem with the strategy above in 2021 is that there are no past exam papers, and therefore no grade boundaries, for the current GCSEs. Using a past exam paper would mean using a paper targeted at a significantly different specification, with grade boundaries set for that specification rather than the current one, which will be first examined in 2022.

However, I don’t think it is helpful to set mock exams containing questions that students cannot answer. If setting a mock at the end of Year 10 or in January/February of Year 11, I would suggest selecting questions from past papers that cover the topics students have met up to that point in their learning journey. That way you test what students know rather than what they don’t know. You can also write your own questions in a similar style to real exam papers. The downside is that there are then no tried and tested grade boundaries for the paper you have set: past paper grade boundaries would not match the exam paper you have written and would give an incorrect picture of your students’ achievement.

So what should you do instead? The simple answer is ‘make them up’! But of course you’re not just going to create the boundaries out of thin air. You could use a normal distribution, but that would assume the students in your class are normally distributed in terms of ability, which is extremely unlikely. Instead, you should look at the evidence you already have available.

You can look at past paper grade boundaries to see what the distribution of marks was like and the gaps between each grade boundary. You can look at the percentage of students who achieve each grade nationally for the exam board you are following, but this only tells you percentages based on national data, and you may have a particularly strong class (e.g. in a grammar school), a particularly weak class, or a class whose ability is not normally distributed. You can also look at your students’ KS2 data and the predictions provided by your school’s data manager.

These predictions will help you to see the distribution of ability among your students and, combined with the other information discussed, let you see how those students have performed and apply grade boundaries that reflect their abilities. There will of course be anomalies, and this will be more difficult with smaller cohorts, but it is much more effective than simply choosing past grade boundaries that bear little resemblance to the paper you have set.

Imagine the data below:

Each of the two exam papers is worth 100 marks, giving a total of 200 marks. The target grades are based on KS2 data, although your school may use different methods. The grades for 2019–2021 apply the grade boundaries from the exam papers for those years, but this is only to give us some general information. The average grade column shows the mean of those three grades. The difference column shows how far each student’s average grade, based on previous exam papers, is from their target. The average difference is -0.37, which shows that if we just used past paper boundaries, students would on average be given a grade 0.37 below what they should expect.
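As a sketch of the arithmetic behind the difference column (the marks and targets below are entirely made up for illustration, not the data from the table), the average difference could be computed like this:

```python
# Hypothetical rows: (target grade, grades under the 2019/2020/2021 boundaries).
rows = [
    (8, [8, 7, 7]),
    (6, [6, 6, 5]),
    (5, [5, 4, 5]),
]

diffs = []
for target, past_grades in rows:
    # Mean grade this student would have received under past boundaries.
    avg_grade = sum(past_grades) / len(past_grades)
    # Negative means the past boundaries place them below their target.
    diffs.append(avg_grade - target)

average_difference = sum(diffs) / len(diffs)
```

A negative average difference across the class is the signal that past paper boundaries would under-grade your students relative to their targets.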

The students have been sorted into rank order based on their total mark. This rank order should be used for setting your grade boundaries. In this class, there were 3 students with a target of 9, so I give a grade 9 to the top 3 students, regardless of their individual targets. The class had 4 students with a target of 8, so the next 4 students get a grade 8, and 4 students with a target of 7 means the next 4 students get a grade 7. The pattern continues with 3 x 6, 7 x 5, 4 x 4, 4 x 3 and 1 x 2. This gives a grade to each student without having to work out grade boundaries, and we can also see how it compares with grades based on previous exam papers. Most importantly, we can see the performance of each individual and how well that compares with their target.
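The allocation above can be sketched in code. The class below is entirely hypothetical (invented student labels, marks and targets), but the logic follows the rank-order method described: count how many students hold each target grade, then hand out that many of each grade from the top of the rank order downwards.

```python
from collections import Counter

# Hypothetical class: student -> (total mark out of 200, KS2-based target grade).
students = {
    "A": (171, 9), "B": (175, 8), "C": (180, 9),
    "D": (150, 7), "E": (162, 8), "F": (140, 6),
    "G": (120, 5), "H": (130, 5), "I": (90, 4),
    "J": (70, 3),
}

def award_grades(students):
    """Award grades by rank order: if N students had a target of 9,
    the N highest-scoring students receive a 9, and so on down."""
    # How many of each target grade the class holds, e.g. {9: 2, 8: 2, ...}.
    target_counts = Counter(target for _, target in students.values())
    # Students ranked by total mark, highest first.
    ranked = sorted(students, key=lambda name: students[name][0], reverse=True)
    awarded = {}
    i = 0
    for grade in sorted(target_counts, reverse=True):
        for _ in range(target_counts[grade]):
            awarded[ranked[i]] = grade
            i += 1
    return awarded

grades = award_grades(students)
```

Note that student B, whose target was 8, receives a 9 here because their mark ranks second in the class: the grades follow the rank order, not the individual targets.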

This is a simplistic model: for a single class it only measures students against their peers, does not reflect what is being achieved nationally, and does not take into account quality of teaching. If there were more classes in the year group, there would be a larger sample size and it would be possible to compare students across classes. With a whole year group, all students in that year would be put into rank order rather than working one class at a time.

Now that these initial grades have been awarded, there may need to be some tinkering. For example, you may know that some of your top students are not performing particularly well and suspect you made some of the questions a bit too easy; in that case, you would reduce the number of grade 9s, 8s and 7s that you award. You may feel that your target grade 2 student really deserves a 3, and so you award an extra grade 3. The most important thing is that you keep the rank order; the boundaries themselves are less relevant.