Attention to assessment security has clearly accelerated since the rapid shift to online, remote and hybrid learning. Institutions across the Asia Pacific, and indeed the world, did their best to accommodate assessment in a digital mode of delivery. Conducting high-stakes, summative assessment securely proved particularly difficult, leading many to restructure curriculum on the fly. And even institutions better prepared via existing digital infrastructure faced unresolved issues in validating assessments to the usual standards at such an unprecedented scale.
Having passed this initial transition period, what is the long-term outlook for assessment security? There is a growing consensus that online and hybrid settings are the logical way forward in a digital world, but that does not mean a uniform assessment response. Institutional policies have ranged from adjusting the weighting of assessments within courses to offset student pressure and any temptation to cheat, to relying on more formative assessment methodologies, to introducing exam proctoring/online invigilation to prevent unauthorised external help or materials from skewing assessment outcomes.
Towards fair, accurate assessment, how can institutions shore up assessment security to validate learning outcomes without adversely affecting the student experience? Here, we explore the practices and trends that are gaining traction as we collectively rethink assessment.
A discussion of what constitutes assessment security is in order, before we unpack approaches to it in the current and future education landscape. In his book ‘Assessment Security in a Digital World’, Professor Phillip Dawson, an authority on assessment security from Deakin University in Australia, defines assessment security as: “Measures taken to harden assessment against attempts to cheat. This includes approaches to detect and evidence attempts to cheat, as well as measures to make cheating more difficult.”
Previously, the concept of securing the testing environment would conjure the thought of educators or exam invigilators physically walking up and down the aisles of seated students, visibly checking for unauthorised materials such as ‘cheat sheets’ and preventing student communication and collusion. We’ve come a long way from a 100% in-person assessment setting, first switching to a wholly remote setting - at least for higher education - and now settling into some cadence of a blended or hybrid environment. Assessment in the online, digital realm, where the screen of a computer or related device limits educator visibility, clearly demands an expansion of traditional verification practices.
That being said, it’s important not to view assessment security narrowly, purely in terms of the platform or delivery mode, but rather as something embedded in the assessment design process and the institutional culture as a whole.
Phill’s aforementioned definition draws out the proactive and reactive elements of assessment security, or in other words, prevention and detection. He makes the distinction between assessment security measures as “adversarial, punitive and evidence-based” and academic integrity as the opposite side of the same coin - “positive, educative, and values-based”. The premise is that institutions need to both educate students and deter the urge to cheat by fostering an ongoing culture of academic integrity, while backing this up with mechanisms that combat cheating and detect it when it does occur. In fact, it’s this duality that drives Turnitin products; unfolding in the use of our Similarity Report that flags potential plagiarism for educators, while empowering students through feedback and writing guidance tools that offer formative learning opportunities for achieving honest, authentic work.
It’s important to recognise that forms of cheating such as plagiarism can be inadvertent, but for the purpose of this blog, we focus on deliberate or intentional academic misconduct. Attempts to cheat during assessment can be motivated by a variety of intersecting factors. In their research review of academic integrity in online assessment published in 2021, Holden, Norris and Kuhlmeier explore four key dimensions of student motivation to cheat:
- Individual factors
- Institutional factors
- Medium of delivery
- Assessment-specific factors
We’ll now look at how the last two factors - most relevant to the topic at hand - impact student experience and rates of cheating. If you’re wondering how the online realm has affected the state of play, Holden et al. confirm that “the belief that cheating occurs more often in online courses than in in-person courses— particularly for high-stakes assessments like exams—is widespread, with approximately 42–74% of students believing it to be easier to cheat in an online class.” It’s a perception that supports action being taken by institutions to improve online assessment security and set student expectations.
Professor Phill Dawson’s body of work seeks to redress the myth that certain types of assessment are ‘cheat proof’, as do related studies from prominent academic integrity researchers including Professors Cath Ellis and Tracey Bretag, showcased in their analysis on contract cheating data. Such findings suggest that a multilayered approach to assessment design is required, with Phill identifying seven standards for assessment security that institutions ought to consider.
- Coverage across a program - how much of a degree should be secured?
- Authentication - how do we ensure the student is who they say they are?
- Control of circumstances - how can we be sure the task was done in the intended circumstances?
- Difficulty to cheat metrics - we need to know how hard tasks are to cheat in
- Detection accuracy metrics - we need to know if our detection methods work
- Proof metrics - we need to be able to prove cases of cheating
- Prevalence metrics - we need to know approximate rates of undetected, detected and proven cheating
Let’s zoom in on two specific approaches from opposite ends of the educational/adversarial spectrum that are seen to impact the integrity of assessment outcomes, with implications for the student experience.
- Authentic assessment
There is belief amongst many educators that the more ‘authentic’ an assessment is - that is, the more it involves students using and applying knowledge and skills in scenarios that mimic real life - the less chance they will cheat. Creating less abstracted assessments that are more relevant to student lives also intersects with a broader movement to equip students with future-ready skills required in the workforce. Data from Phill and others suggests that authentic learning garners favour with students themselves, and generally proves less prone to cheating than other assessment types.
In her recent Integrity Matters episode, Professor Roseanna Bourke, educator from Massey University and qualified psychologist, draws the link between student cheating in assessment and their understanding of, and investment in, the assigned tasks. She points to stress and a lack of confidence stemming from assessment tasks as precursors to plagiarism and other forms of cheating, indicating that such assessment doesn’t work for learning and itself becomes an unintended barrier.
Calling out an overemphasis on grades that can prove counterproductive to students’ healthy ambition, and remarking on the possibilities for inclusive learning and co-design of assessment to strengthen learner identity, Roseanna concludes: “assessment shouldn’t be the driver that students use to necessarily direct or determine their learning - we need to encourage their driver to be around their aspirations or their motivation or internal urge to learn and know.” In this way, she attributes design of the student experience as pivotal to assessment security.
- Proctoring / online invigilation
Although there have been shifts away from high-stakes summative assessment in favour of more frequent low-stakes and formative assessment, exams still serve an important role in meeting discipline expectations. For instance, courses legitimised through accreditation bodies, such as those in medicine and law, still rely on summative tests for graduating students. In the past two years, faced with the dilemma of administering such tests remotely, where students could bypass the usual in-person invigilation, some institutions opted to introduce online invigilation or proctoring.
The strategy raised challenges of privacy, data integrity and access - leading some institutions to refrain altogether - and students and educators alike voiced concerns about the justification of proctoring as a solution to assessment security. Aside from questions of ethical responsibility when imposing such assessment controls, its efficacy and impact on rates of cheating have also been examined. Although online invigilation software and protocols vary considerably and are best measured on a case-by-case basis, Holden et al.’s research offers us a yardstick of sorts: “evidence suggests that online invigilation offers a degree of anti-cheating protection, especially with respect to authenticating who the test-taker is. They are most effective at discouraging and detecting opportunistic attempts by students who have not researched how to cheat”.
TEQSA’s Strategies For Using Online Invigilated Exams guide - developed in collaboration with Phillip Dawson - outlines 10 key principles for deciding to implement proctoring/online invigilation. Two standout recommendations are to use it sparingly, as the last port of call in the assessment design decision-making process, and to ensure that staff and student capacity building and support are always made available.
Moving forward, we can expect to see the sector focus on enhancing the way it approaches online invigilation, by investing in technology which allows remote exams to be supervised in a seamless, humanised way.
It’s evident that a combination of tactics is the best arsenal for teachers looking to shore up assessment security. In their previously mentioned research, Holden et al. (2021) refer to ‘online exam control procedures (OECPs)’ or ‘non-proctor alternatives’ that elicit academic integrity behaviours. These tactics include: offering exams at one set time, randomising the question sequence, designing the exam to fill the allotted time, allowing access to the exam only once, requiring the use of a lockdown browser, and changing at least one third of exam questions every term.
And beyond an exam setting, here’s a selection of Phill’s advice:
- Maintain a dialogue with students on the risks of cheating
- Adopt programmatic assessment that optimises learning function
- Design authentic assessments such as reflections and personalised tasks, plus nested tasks
- Develop a system of cheating detection to show consequences for students
- Audit your assessments regularly to check for holes in security
- Evaluate use of anti-cheating software
Although it caused massive disruption to learning that negatively affected most, if not all, students at one point or another, the pandemic-led shift to remote learning has offered a silver lining. It pushed the education community to reevaluate its relationship and dynamic with students, the efficacy of assessment, and the strength of learning outcomes outside of the tightly controlled ecosystem.
Just look at how the sector has embraced methods such as asynchronous learning, which weren’t commonly experienced or accepted practices just two years ago. Despite initial teething problems and fears that students would flounder, a recent survey of students indicates that the majority prefer asynchronous, self-paced learning to wholly synchronous delivery. Of course, with change and innovation come risks and unpredictability that need to be mitigated. In his book, Phillip Dawson reminds us to focus on securing the assessment that truly matters to program outcomes. Trying to cheat-proof everything will not only exceed an institution’s resources, but potentially cause mistrust that hampers student-teacher relationships and the demonstration of learning.
To conclude, assessment security is a balance that must also factor in student wellbeing. In their research on the influence of negative emotions and stress on cheating intentions, Guy Curtis et al. advocate for resolving negative emotionality within the assessment framework and “urge higher education practitioners to consider the potential impact of stresses caused by assessment design and deadlines on their students as a potential risk factor that may contribute to academic misconduct.”