What student exam data should we measure?

With more and more processes and systems being computerised, there is the potential to collect far more data as we "swipe" into buildings and log in to applications. Universities and associations are becoming increasingly interested in this broader range of student data, rather than focusing solely on academic results. Analysis of this new data can provide some very interesting insights into student behaviour, enabling organisations to predict future trends and make smart changes to address potential issues before they arise. It can be as simple as tracking visits to the library or books borrowed by students, which, when correlated with other key data, allows predictive analytics to rate how likely it is that a student will drop out of their course.
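As a rough illustration of that idea (the numbers, engagement signals and threshold below are entirely made up), a very simple dropout-risk model might look something like this:

```python
# Minimal sketch: estimating dropout risk from simple engagement signals.
# All data, column meanings and figures here are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [library visits per term, books borrowed, logins to the learning platform]
engagement = np.array([
    [12, 8, 40],
    [ 2, 0,  5],
    [20, 15, 60],
    [ 1, 1,  3],
    [ 9, 4, 25],
    [ 0, 0,  2],
])
# 1 = student dropped out of the course, 0 = student completed it
dropped_out = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(engagement, dropped_out)

# Estimated dropout probability for a new student with low engagement
new_student = np.array([[3, 1, 8]])
risk = model.predict_proba(new_student)[0, 1]
print(f"Estimated dropout risk: {risk:.0%}")
```

In practice you would need far more data, and careful validation, before acting on predictions like these, but the principle is the same: routine engagement data becomes an early-warning signal.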

In the world of online assessment and exams, where systems can track every action taken, this raises a few questions. Apart from the actual answers given by candidates, what other data can be collected and measured during the exam process? Is this data useful? What can it predict? Indeed, should we be collecting this data at all, and what are the data privacy implications? (This last question deserves a blog post in its own right when we look at the GDPR in 2018.)

Looking more specifically at exams, one area of interest is how long students spend on each question or section of their exam. We recently worked with a university that transitioned a 5-section, 2-hour exam from paper-based to online. Based on feedback from previous candidates, they knew that one of the sections was perceived as more difficult; what the exam data showed was that candidates were actually spending around 45% of the overall exam time on this section. Time data alone can be very useful, but when correlated with actual score data you get a much fuller picture. It might raise a red flag if students are spending a lot of time on a specific question, and if many of them are also answering that question incorrectly or scoring low marks, the issue is compounded. In the example given, the insights revealed prompted a review of the exam structure.
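As a sketch of that kind of analysis (the section names, timings and scores below are invented for illustration), correlating per-section time with per-section scores might look something like this:

```python
# Minimal sketch: flagging exam sections where candidates spend a large share
# of their time but still score poorly. All data here is hypothetical.
import pandas as pd

# One row per candidate per section: minutes spent and percentage score
responses = pd.DataFrame({
    "section": ["A", "B", "C", "D", "E"] * 3,
    "minutes": [18, 22, 54, 15, 11,  20, 25, 50, 14, 11,  17, 20, 58, 13, 12],
    "score":   [72, 68, 41, 80, 75,  65, 70, 38, 78, 82,  70, 66, 45, 74, 79],
})

summary = responses.groupby("section").agg(
    mean_minutes=("minutes", "mean"),
    mean_score=("score", "mean"),
)
summary["time_share"] = summary["mean_minutes"] / summary["mean_minutes"].sum()

# Red flag: sections taking a large share of exam time with a low average score
flagged = summary[(summary["time_share"] > 0.3) & (summary["mean_score"] < 50)]
print(summary.round(2))
print("\nSections to review:\n", flagged.round(2))
```

In this made-up data, section C accounts for roughly 45% of the average exam time while averaging well under half marks, which is exactly the combination that warrants a review of the exam structure.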

Analysis of a wider range of exam data, which could never be collected with a pen-and-paper approach, can yield insights you may not even have thought of until you actually spend some time doing the analysis. It’s a very worthwhile exercise. Another example, which yields incredible insights, is the use of confidence ratings, where candidates are asked to indicate how confident they are in their answer. A statement such as "I am confident that I have answered this question correctly", rated on an agree/disagree scale, can give more detailed insights not only into that candidate’s ability, but also into the effectiveness of the underlying learning programmes. For example, if you have a module that is part of two separate courses, with two different tutors, you might find that the academic results of assessments for that module are similar, yet confidence levels are very different, perhaps due to differing delivery styles. Alternatively, if both groups have a proportion of low-confidence/right answers or high-confidence/wrong answers, it might indicate a problem with the learning materials and resources. When correlated with Learning Objectives (or Outcomes), this can pinpoint specific areas of a course which may require more in-class time, a change in approach or a revamp of learning resources, as the sketch below illustrates.
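Here is a rough sketch of that confidence/correctness analysis (again with invented data and hypothetical learning objective codes):

```python
# Minimal sketch: cross-tabulating confidence ratings against correctness per
# learning objective. Data, objective codes and labels are hypothetical.
import pandas as pd

answers = pd.DataFrame({
    "learning_objective": ["LO1", "LO1", "LO1", "LO2", "LO2", "LO2", "LO2", "LO3"],
    "confidence": ["high", "high", "low", "low", "high", "high", "low", "low"],
    "correct":    [True,   False,  True,  False, False,  False,  True,  True],
})

# For each learning objective, count the confidence/correctness combinations
table = pd.crosstab(
    [answers["learning_objective"], answers["confidence"]],
    answers["correct"],
)
print(table)

# Combinations worth a closer look: confident but wrong (possible misconception)
# and unsure but right (possibly under-reinforced in the learning materials).
misconceptions = answers[(answers["confidence"] == "high") & (~answers["correct"])]
print("\nHigh confidence, wrong answer:\n",
      misconceptions.groupby("learning_objective").size())
```

A cluster of high-confidence wrong answers against a particular learning objective is the sort of pattern that points to a misconception worth addressing in class or in the learning resources.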

The world of online assessment offers a wealth of useful exam data, giving insights into the effectiveness of course material and into student behaviour during exams. I suspect that in the future this will become more and more valuable to anyone involved in running formal assessments.

If this blog has been of interest, you can read more about other Benefits of Online Assessment here.