The disruption that accompanied the pandemic in 2020 and 2021 also affected the administration of academic literacy tests. These tests are employed to place incoming students at institutions of higher education into courses appropriate for developing their ability to handle the demands of academic discourse. For many students, the conventional tests, deployed online, were inaccessible. We reflect here on how possible alternatives might be employed to identify students who are at risk as a result of low levels of academic literacy. The first is an algorithm that aims to predict whether a student might be a candidate for an intensive academic literacy course; the other, which constitutes the primary focus of this paper, is a conventional post-admission academic literacy test available in-house, which had the potential to be refined for such a purpose. Since the test had initially been designed to assess prospective postgraduate students’ preparedness to engage in academic writing, and had been piloted on a range of undergraduate students, it presented an opportunity to explore whether it could be used more widely. Analyses yielding descriptive, inferential, and probability statistics are presented to show that the test was indeed capable of being employed in this way, and that it could be refined further. At the same time, this exploration had the further benefit of enhancing the assessment literacy of those presenting the actual academic literacy interventions. We envisage a further exploration of adapting tests of similar design for assessments that are more field-specific.