Ed & Tech must-reads 240326

So, so many student views on AI, third space identity webinar, OES OPM on AI, RIP metaverse

[Image: abstracted image of green grass]

Time to touch some grass

This mega-study of the AI use and attitudes of 8,000 students across four Australian universities (Deakin, Monash, University of Queensland, and University of Technology Sydney) by 19 big-name researchers contains a lot of numbers. Perhaps the two most interesting: around 80% of participants reported using GenAI, but around 70% felt that it could be inaccurate or make things up. Even so, 47% of all participants used it for “finding information or conducting research”. The researchers noted that students recognised the limitations of GenAI but believed they had enough ability to account for them. (Not sure if this is the Dunning-Kruger effect or a new AI-inspired reverse Dunning-Kruger effect.) The report covers a lot of ground, as one might hope given the number of people involved, including frequency of use, common tasks, cheating, motivators and discouraging factors. One challenge that we constantly face in research is the turnaround time for studies like this - the survey went out to students in Oct 2024. I look forward to seeing, in a few years, how people feel about GenAI now. Nonetheless, it is a comprehensive overview of where students are with this technology and well worth a read.

One more quick reminder that I will be kicking off our webinar series for the year with my old colleague Ingrid D’Souza from Monash. We will be talking about findings from our respective research into who third space practitioners - learning designers and more - are, and proposing a new model for describing them/us. It should be pretty great.

It’s funny, in some ways, that many people (including third spacers) on the teaching and learning side of tertiary education have been saying for years and years and years that we need to do better with assessment to ensure that students are actually learning. Now that GenAI has upended assessment, the rest of the sector has caught up. This post describes a recent brief report from the Australian OPM (online program manager) OES (Online Education Services). (You need to give OES your deets if you want the original report.) OPMs are pretty widespread in tertiary education these days, and while I don’t always love the corporate vibe, my experiences interacting with them have been that many of their people care about good learning and teaching. They cop some flak in the current ‘HE is managerialist’ discourse, but they also often get better student satisfaction scores than conventional unis, so, used judiciously, they probably aren’t that problematic. Well, that was a small diversion. The core ideas in this report aren’t necessarily anything new - strengthen assessment validity, programmatic design, and process over product - but they seem to track with current good practice (as far as we have been able to work out, anyway).
