Ed/Tech must-reads 120324

Moocs, books, AI grading and enshittification

Listicles and MOOCs - when is this, 2012? In all seriousness though, this post about the scholarly articles on MOOCs with the highest numbers of Google Scholar citations is a handy resource for people with an interest in the last educational disruptor (before GenAI and, smirk, the metaverse). Articles range across the use of video, quality management, lit reviews, and learning strategies. Papers cover 2009-2017.

Ethan Mollick is one of the most interesting writers on GenAI in education at the moment, so good on him for getting a book out. (It’s hard to see how this doesn’t become wildly dated within weeks, but I guess the core ideas will remain). You can pre-order from a bunch of US sellers at the top link; my Australian colleagues might prefer Brunswick Bound or Booktopia.

This post does have a strong press-release energy, with its breathless explanation of a tool that teachers can upload student work to, which then sends it to ChatGPT for feedback. But it gives me a chance to bang on about one of my under-reported pet topics in the GenAI space - the ethics of using GenAI to assess. The article emphasises that there must be a human link in the chain, with teachers (this is more K-12 focused) expected to sign off on the feedback before sending it to students, but this appears to be entirely on the honour system.

I also know how tedious providing feedback and grading work is, and I’ve heard students comment that if it means they get useful feedback sooner, they don’t see a problem, but something about all this still feels deeply wrong, particularly when it comes to summative assessment. Maybe it is that it feels disrespectful to the time and effort that students have put into their work (yes, I know that some don’t), as well as to the student-teacher relationship. I’m also concerned that a certain type of writing - that which aligns with ‘correct’ practice - will be favoured, and that students will gravitate towards safe answers. Honestly, I still don’t trust the machines to always get things right either. Yet in so much of the discussion about the impact of GenAI, its use in assessment is virtually invisible.

Cory Doctorow is one of those thought-leaders in the tech/futurism/society space where you don’t instantly translate that term to wanker. This talk (and subsequent panel discussion) explores the idea of enshittification, where the once-promising frontier of the online world has gradually been tamed and made progressively worse as big business has worked out how to squeeze another dollar from it. (It’s a two-hour video, by the by.)

He describes this as:

a three stage process: First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.