Open Thinkering


Tag: assessment

TB872: Concept map to help with my EMA

Note: this is a post reflecting on one of the modules of my MSc in Systems Thinking in Practice. You can see all of the related posts in this category.


A concept map based on the structure of TB872's End of Module Assessment questions.

It’s 15 years since I spent days creating a concept map for my Ed.D. thesis. Thankfully, the End of Module Assessment (EMA) for this MSc module is a mere 4,000 words, meaning it’s only taken me a few hours to create this one using Whimsical.

The requirements for the EMA are outlined in a previous post. All I’ve got to do now is write it. It’s such an interesting topic that I have to keep reminding myself that my deadline is next Friday. I’m moving house the week after, and I want this to be done.

In related news, although I’d originally planned not to do the other compulsory introductory module for this MSc (TB871) until 2025, I’ve changed my mind. Never one to shirk a challenge, I’ll be starting that one on May 1st — a couple of weeks after finishing this assignment 😅

Elevator pitch on Open Badges for SQA Expert Assessment Group

Update: Martin Hamilton from Jisc kindly recorded my elevator pitch. You can watch it here.


Tomorrow, I’m in London to take part in the Scottish Qualifications Authority’s Expert Assessment Group. The SQA have been forward-thinking about Open Badges over the last few years, so I’m delighted to have been asked to attend.

Five people have been asked to give input in the morning from a ‘future of assessment’ point of view, and five more in the afternoon on ways technology might help enable that future. I’ve got a very short slot, so I’ve boiled my pitch down to the slides below.


Backup locations: Slideshare / Internet Archive

The flow for my pitch starts with a tweet I saw earlier today from the influential Paul Graham. He links to an article in The New York Times which talks about skills-based hiring, but which completely disregards digital credentialing. From there, I discuss Michael Feldstein’s recent post about badging gaining huge traction in very specific areas. And then I launch into a pretty familiar flow using Bryan Mathers’ excellent visuals.

There’s loads more I want to say about how version 2.0 of the Open Badges specification allows for really interesting dynamic badges that ‘grow’ over time. Kerri Lemoie and Lucas Blair recently wrote about this from a technical point of view, and I presented my thoughts last week at the University of Dundee, including this slide:

Dynamic badging

Perhaps I’ll get a chance to discuss these new developments if my pitch is selected to be discussed further. I’d bring up blockchain technologies and their potential uses in credentialing, but I’ve got to catch a train back home in the evening…

Photo by John-Mark Kuznietsov on Unsplash

Some (brief) thoughts about online peer assessment.

When I was a classroom teacher, peer assessment was something I loved to do. Once you’ve shown learners the basics, it’s as easy as asking them to swap books with the person next to them. Not only do they get to focus on writing for a particular purpose, but it’s a decentralised system, meaning there’s no single point of failure (or authority).

Online, however, things are a little more problematic. When we go web scale, issues (e.g. around identity, privacy and trust) become foregrounded in ways that they often aren’t in offline settings. This is something I need to think carefully about in terms of the Web Literacies framework I’m working on, as I’m envisaging the following structure:

  • Skills level – granular badges awarded for completing various tasks (most badges will be awarded automatically – as is currently the case with Mozilla Thimble)
  • Competencies level – peer assessment based on a portfolio comprising the work completed towards the skills badges
  • Literacies level – self- and peer-assessment based on work completed at the competencies level
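The three-level structure above can be sketched as data. A minimal, speculative Python sketch follows; the level names come from the list, but the assessment labels and field names are my own illustrative assumptions, not part of the framework:

```python
# Hypothetical encoding of the three-level Web Literacies structure.
# Level names are from the post; everything else is an assumption.
FRAMEWORK = {
    "skills": {
        "assessment": "automatic",          # badges mostly awarded automatically
        "evidence": "completed tasks",
    },
    "competencies": {
        "assessment": "peer",               # peer-assessed portfolio
        "evidence": "portfolio of skills-badge work",
    },
    "literacies": {
        "assessment": "self + peer",        # self- and peer-assessment
        "evidence": "competencies-level work",
    },
}

def assessment_for(level):
    """Return the assessment mode for a given level of the framework."""
    return FRAMEWORK[level]["assessment"]
```

This is just one way of making the hierarchy explicit; the point is that each level has a different evidence base and a different assessment mechanism.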

I’ll figure out (hopefully with the help of many others) what the self-assessment looks like once we’ve sorted out the peer-assessment. The reason we need both is explained in this post.

Some of the xMOOCs such as Coursera have ‘peer-grading’, but I don’t particularly like what they’ve done, for the reasons pointed out by Audrey Watters. I do, however, very much like the model that P2PU have been iterating (see, for example, this article co-written by one of the founders of P2PU). The (very back-of-an-envelope) way that I see this working for the Web Literacies framework is something like:

  1. A learner completes various activities and earns ‘skills’ badges.
  2. These skills badges are represented on some kind of matrix.
  3. Once the learner has enough badges to ‘level-up’ to a competencies-level badge they are required to complete a public portfolio featuring their skills badges along with some context.
  4. This portfolio is submitted to a number (3? 5? 7? more?) of people who already have the competencies-level badge.
  5. If a certain percentage (75%? 90%?) agree that the portfolio fulfils the criteria for the badge, the learner successfully levels-up.
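Steps 4 and 5 above boil down to a threshold vote. Here’s a minimal sketch of that logic in Python; the specific numbers (five reviewers, 75% agreement) are just two of the possibilities floated in the questions above, picked arbitrarily for illustration:

```python
# Sketch of the level-up decision (steps 4-5). REVIEWERS and THRESHOLD
# are illustrative assumptions, not settled parameters.
REVIEWERS = 5      # portfolio sent to five holders of the badge (step 4)
THRESHOLD = 0.75   # 75% must agree the criteria are met (step 5)

def levels_up(votes):
    """votes: list of booleans, one per reviewer; True = 'criteria met'.
    Returns True if the share of agreeing reviewers meets the threshold."""
    if len(votes) != REVIEWERS:
        raise ValueError(f"expected {REVIEWERS} reviews, got {len(votes)}")
    return sum(votes) / len(votes) >= THRESHOLD

# 4 of 5 agree -> 0.8 >= 0.75, so the learner levels up
# 2 of 5 agree -> 0.4 < 0.75, so they don't
```

The interesting design questions are exactly the ones in brackets above: how many reviewers, and what threshold, give a result people will trust?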

There’s a lot of work to be done thinking through potential extra mechanisms such as rating-the-raters as well as making the whole UX piece seamless, but I think that could be a fairly solid way to get started.
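‘Rating-the-raters’ is only mentioned in passing above, but one speculative way it could work is to weight each reviewer’s vote by how often they have agreed with past outcomes. Everything in this sketch (names, starting weight, update rule) is an assumption of mine, not a worked-out design:

```python
# Speculative 'rating-the-raters' sketch: reviewers who repeatedly
# diverge from the group outcome gradually lose voting weight.
from collections import defaultdict

weights = defaultdict(lambda: 1.0)  # every reviewer starts at weight 1.0

def weighted_decision(votes, threshold=0.75):
    """votes: dict of reviewer -> bool. True if the weighted share of
    'yes' votes meets the threshold."""
    total = sum(weights[r] for r in votes)
    yes = sum(weights[r] for r, v in votes.items() if v)
    return yes / total >= threshold

def update_weights(votes, outcome):
    """Nudge weights after a decision: reviewers who matched the outcome
    gain a little, those who diverged lose a little (floored at 0.1)."""
    for reviewer, vote in votes.items():
        delta = 0.1 if vote == outcome else -0.1
        weights[reviewer] = max(0.1, weights[reviewer] + delta)
```

A mechanism like this would need care to avoid punishing honest dissent, which is part of why the UX and trust questions are the hard part.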

What do you think? Any suggestions? 🙂
