Ed/Tech must-reads

Ranking online learning, GenAI guidance for assessment and detection

*Image: two ladders side by side; one old and classical, made of marble and gold, the other futuristic, with neon and circuitry.*

While we may question the validity and value of any HE ranking scheme, there is no question that they feature prominently in university marketing and also influence student choice. (Chinese students are only permitted to attend top-200 ranked unis.) I have a strong feeling that this announcement from THE last week about a new ranking system for online learning will shake things up dramatically.

Participating universities will be ranked on four criteria: the resources devoted to online learning, student engagement, student outcomes, and the environment for online learning.

I have seen some very silly takes on this from ed tech ‘commentators’ which imply that readers are unable to discern between courses and LMSes, and so will somehow conclude that ed tech companies own universities?!? (honestly, it was kind of garbled and nonsensical), and that ed tech companies will use these rankings to promote their products. (Gasp.)

I think the most interesting part of this will be watching the establishment institutions, which often underinvest in online, work through Olympic-level mental gymnastics explaining how these results lack nuance and validity while their preferred rankings (also from THE) remain credible and important.

I have a feeling that we are also going to see more attention on Online Program Management (OPM) companies like OES and Keypath, which often outshine their partner institutions in student satisfaction and retention by providing more intensive support to smaller cohorts. How this plays out is anyone’s guess, but I wouldn’t be surprised to see more ridiculous shock-horror stories in the press about online students not knowing who is actually teaching them.

I have mentioned this document previously, but it was officially launched at the TEQSA conference in Melbourne last week and appears to have reached its final form. Driven by TEQSA and Jason Lodge (UQ), Sarah Howard (UoW), Margaret Bearman and Phillip Dawson (Deakin), with contributions from the brightest and best in Oz HE, it presents a set of general guiding principles for designing assessment in the generative AI age.

Honestly, many of these are things good educators have already been pushing for years but, as they say, now more than ever it’s time. Authentic assessment, programmatic approaches to design, process rather than outputs, collaborative learning - it’s all there. (If you squint, you can even find my name in the acknowledgements.)

While some educators and institutions cling to the hope that the one true GenAI content detector will emerge and solve all our pesky academic integrity problems, there is still no sign of this happening. Not that this stops the dream. Desaire et al. (2023) make some big claims in their Cell Reports Physical Science article, Accurately detecting AI text when ChatGPT is told to write like a chemist. Ethan Mollick, who has emerged as one of the more interesting GenAI in education writers, kicks off a lively chat in his post on LinkedIn with the observation that the method appears to be limited to the introductory sections of chemistry papers and that the approach is open to manipulation.