Ed/Tech must-reads 030625

What do large groups of students think about GenAI and how do we pick the right tools?

Lame tech themed graffiti on the mean streets of Hawthorn (pic by me)

Student Perceptions of AI 2025 from JISC Artificial Intelligence

I’m partial to the reports that JISC produces because they frequently have far larger samples than most other research. This report is a little smaller than usual but still includes focus groups with 173 students and survey responses from an additional 1,274. The results show that students are highly mindful of both the opportunities and challenges presented as we all navigate these uncharted territories. (Clearly there may be some self-selection bias in the sample, but when isn’t there?) Concerns that they might lose core skills by overusing the technology mesh with worries that their skills may be made redundant in entry-level positions. They continue to use it in innovative ways, improving the scholarly tone of their work and getting (hopefully accurate) interpretations and clarifications of key concepts that they struggle with. The report provides further examples across planning, revision and discipline-specific approaches. There is a sense that, as key users of these tools, they want a say in institutional policies, but they also want guidance on acceptable use. So, in some ways, we are still very much figuring everything out.

The frenetic scramble to stake out territory in the GenAI Gold Rush (Silicon Rush?) has meant that, alongside regular updates from the key players, an array of other ed tech tools is coming to market. Keeping up and making informed decisions about which tools will last and make a meaningful contribution to learning and teaching is becoming a task of vital importance. Which is why it is not surprising that this presentation at the THETA conference last week by Joan Sutherland and Nicholas English (Deakin) won the ‘Best Lightning Talk Presentation’ award. There isn’t too much information available yet, but this page in the program details a six-stage evaluation process and some sensible metrics, including pedagogical purpose and tool uptake. Hopefully more will be written about this in the near future.

When ChatGPT dropped in late 2022 and the whole GenAI palaver began in earnest, much of the discussion centred on its use by students. Some of my immediate thoughts were about what impact use by educators might have. This article by an HE who’s who (Michael Henderson, Tim Fawns, Jimena de Mello Heredia - Monash; Margaret Bearman & Jennifer Chung - Deakin; Simon Buckingham Shum - UTS; and Kelly Matthews - UQ) draws on data from nearly 7,000 student respondents about their attitudes towards educator use of GenAI for feedback. As might be expected, responses were a mixed bag - they liked the accessibility and immediacy of the feedback (and preferred its tone over that of their lecturers, who could be more critical), but they thought their teachers’ feedback was probably more trustworthy and relevant. The question now is, as always, what do we do with this understanding? (And will institutions decide that they can save costs by outsourcing feedback to Clippy?)
