Continuous Assessment in the Age of AI - Why Remote Proctoring Is Becoming Essential
Artificial intelligence (AI) has changed education faster than almost any other technology in recent decades. Tools that can generate fluent text, analyse data, summarise research and propose solutions are now widely available to learners at the click of a button. While this brings genuine opportunities for learning and productivity, it also presents a profound challenge for education providers, awarding organisations and professional bodies. How do you fairly and reliably assess competence when AI can complete large parts of the work on a learner’s behalf?

Nowhere is this challenge more acute than in continuous assessment. Essays, projects, coursework and assignments are typically completed without supervision, often over days or weeks. In theory, these assessments are designed to evaluate understanding, analysis and applied thinking. In practice, they are increasingly vulnerable to widespread and sophisticated use of AI.
In its earliest and most obvious form, AI misuse was easy to spot. Entire assignments were generated in seconds and pasted verbatim into submissions, often riddled with tell-tale patterns: unnatural phrasing, repetitive sentences and a distinctive punctuation style (most of us can immediately spot elongated hyphens and an overuse of “, and”). For the most part, detection tools could flag these with reasonable confidence, and markers could often recognise them instinctively.
Unfortunately, that phase has passed.
Today’s AI use is far more subtle. Candidates may engage with AI to brainstorm ideas, structure arguments, summarise academic papers, suggest methodologies or rephrase their own writing. The final submission may be technically original, free of obvious markers and entirely undetectable by conventional plagiarism or AI detection software. Yet the cognitive work has still been outsourced.
From an assessment perspective, this is deeply problematic. The candidate may never have engaged meaningfully with the material. They may not understand the argument they are presenting. They may be unable to reproduce the same level of work independently. The assessment outcome no longer reflects individual capability.
For professional qualifications and regulated environments, the implications are even more serious. Employers, regulators and the public rely on these qualifications as signals of competence. If continuous assessment cannot be trusted, the credibility of the entire pathway comes into question.
Why this matters to regulators
This issue is now firmly on the radar of regulators. Bodies such as Ofqual have been increasingly vocal about the risks posed by AI to assessment validity. At stake is not simply academic fairness, but the credibility of qualifications themselves. If continuous assessment cannot be trusted to measure what it claims to measure, then confidence in the entire qualification framework begins to erode.
For professional and vocational qualifications in particular, this is critical. These awards underpin employment decisions, public safety and regulatory confidence. If employers or regulators begin to doubt whether a qualification holder genuinely possesses the required knowledge or skills, the reputational damage can be severe and long-lasting.
As a result, many organisations are being pushed to rethink their continuous assessment models. The question is no longer whether AI is being used in continuous assessment (it clearly is), but how institutions can respond in a way that is proportionate, effective and fair.
The limitations of traditional online assignment systems
Most online assignment platforms were never designed with AI in mind. They typically allow candidates to upload documents, type into text fields or submit files from their own devices, with little or no control over what else is happening on that device at the same time.
Even where remote proctoring is “bolted on” to an assignment system, there are often fundamental gaps. If the underlying platform does not support full computer lockdown, candidates may still be able to open other browser windows, access AI tools, consult documents or use secondary devices out of view. In these cases, proctoring becomes largely symbolic, a deterrent rather than a true control.
This distinction is critical. Deterrence alone is not enough when the integrity of a qualification is at stake. To create a genuinely robust assignment environment, proctoring must be combined with deep technical controls that restrict what candidates can access during the assignment itself.
Re-thinking continuous assessment to enable greater control
Remote proctoring offers a practical way to bring continuous assessment onto a more formal and defensible footing, while ensuring that flexibility is retained.
As the assessment landscape evolves, so do the threats - and so do our defences. We continually assess and introduce technologies that support integrity without compromising the candidate experience.
Rather than allowing unrestricted access over long periods, candidates can be given controlled windows in which they work on their assignment while being proctored. These windows can be flexible and candidate-friendly, allowing access at times that suit different schedules and time zones. During each session, however, the assessment environment is tightly controlled.
In a properly designed model:
- The candidate’s computer is locked down, preventing access to AI tools, other websites, local files or unauthorised applications.
- Proctors monitor the candidate, either live in real time or via record-and-review models, ensuring that no secondary devices or materials are used.
- All activity is logged and auditable, creating a defensible record of how the work was completed.
- Candidates retain flexibility in when they work, while institutions retain control over how the work is done.
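To make the controls above concrete, here is a minimal sketch of how a locked-down session policy might be evaluated against logged candidate activity. The policy fields, event names and domain are all hypothetical illustrations, not a real product schema:

```python
from dataclasses import dataclass, field

@dataclass
class SessionPolicy:
    """Illustrative policy for a locked-down assignment session."""
    allowed_domains: set = field(default_factory=lambda: {"assignments.example.org"})
    allow_local_files: bool = False   # local documents blocked by default
    require_webcam: bool = True       # proctor must be able to see the candidate

def is_violation(policy: SessionPolicy, event: dict) -> bool:
    """Flag logged events that break the lockdown rules described above."""
    kind = event.get("kind")
    if kind == "navigation":
        # Any site outside the assessment platform (e.g. an AI tool) is blocked.
        return event["domain"] not in policy.allowed_domains
    if kind == "file_open":
        return not policy.allow_local_files
    if kind == "webcam_lost":
        return policy.require_webcam
    # Unknown events are still logged for the audit trail, just not flagged here.
    return False
```

In a real deployment these checks would run inside the lockdown client itself; the point of the sketch is simply that every rule in the list above can be expressed as an enforceable, auditable condition rather than an honour-system instruction.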
This approach does not require candidates to complete an entire project in one sitting. Instead, it allows structured, supervised access that balances integrity with practicality.
Cost-effective and scalable options
One of the common misconceptions about remote proctoring is that it is inherently expensive. In reality, there are multiple models that can be applied depending on the stakes and risk profile of the assignment.
Live proctoring is appropriate for high-stakes components or final submissions. Record-and-review (R&R) models can be used for longer work periods, where sessions are monitored automatically and reviewed only where risk indicators are present. Hybrid approaches combine human oversight with platform-level controls to achieve scale without compromising quality.
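The cost saving in the record-and-review model comes from triage: humans review only the sessions where automated monitoring raised risk indicators. A rough sketch of that selection step, with illustrative field names rather than any real platform's API:

```python
def sessions_for_review(sessions, risk_threshold=1):
    """Record-and-review triage: return the IDs of sessions whose
    automated monitoring raised at least `risk_threshold` risk
    indicators (e.g. a second face detected, audio from another
    person, the candidate leaving the frame)."""
    return [
        s["session_id"]
        for s in sessions
        if len(s.get("risk_indicators", [])) >= risk_threshold
    ]
```

Under this kind of triage, a cohort of hundreds of recorded sessions might yield only a handful requiring human review, which is what keeps the per-candidate cost proportionate to risk.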
Used thoughtfully, these models can be applied selectively, targeting the areas of greatest risk while keeping costs proportionate. Importantly, they also reduce downstream costs associated with appeals, disputes and reputational damage.
Fairness for candidates and confidence for institutions
A well-designed remote proctoring model benefits everyone involved. Candidates know the rules are clear and consistently applied. Those who do their own work are protected from being undercut by unfair practices. Institutions gain confidence that the outcomes they certify genuinely reflect individual capability.
Perhaps most importantly, this approach restores trust at a time when trust is under pressure. As AI continues to evolve, assessment models must evolve with it. Relying solely on detection after the fact is no longer sufficient. Control at the point of assessment is becoming essential.
A path forward
AI is not going away, and nor should it. But its impact on assessment must be addressed head on. Remote proctoring, when combined with secure assignment environments and flexible delivery models, offers a credible and practical response to one of the biggest integrity challenges facing education today.
For awarding organisations, universities and professional bodies, the question is no longer whether change is needed, but how quickly and how confidently that change can be delivered.

