Open Thinkering

Some (brief) thoughts about online peer assessment.

When I was a classroom teacher, peer assessment was something I loved to do. Once you’ve shown learners the basics, it’s as easy as asking them to swap books with the person next to them. Not only do they get to focus on writing for a particular purpose, but it’s a decentralised system, meaning there’s no single point of failure (or authority).

Online, however, things are a little more problematic. When we go web scale, issues (e.g. around identity, privacy and trust) become foregrounded in ways that they often aren’t in offline settings. This is something I need to think carefully about in terms of the Web Literacies framework I’m working on, as I’m envisaging the following structure:

  • Skills level – granular badges awarded for completing various tasks (most badges will be awarded automatically – as is currently the case with Mozilla Thimble)
  • Competencies level – peer assessment based on a portfolio comprising the work completed towards the skills badges
  • Literacies level – self- and peer-assessment based on work completed at the competencies level (a rough sketch of how these levels might fit together follows this list)
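
To make that concrete, here’s a rough sketch (in TypeScript, purely for illustration) of how the three levels might hang together as data. Every name here (SkillBadge, requiredSkillIds and so on) is a placeholder I’ve made up, not anything from an existing implementation:

```typescript
// Hypothetical data model for the three levels described above.
// All names and fields are illustrative placeholders.

interface SkillBadge {
  id: string;
  task: string;                  // the granular task completed to earn it
  awardedAutomatically: boolean; // most would be, as with Mozilla Thimble
}

interface CompetencyBadge {
  id: string;
  requiredSkillIds: string[]; // SkillBadge ids that feed into this competency
  portfolioUrl?: string;      // public portfolio submitted for peer assessment
}

interface LiteracyBadge {
  id: string;
  requiredCompetencyIds: string[]; // CompetencyBadge ids this builds on
  selfAssessmentUrl?: string;      // self-assessment alongside peer assessment
}
```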

I’ll figure out (hopefully with the help of many others) what the self-assessment looks like once we’ve sorted out the peer-assessment. The reason we need both is explained in this post.

Some of the xMOOCs, such as Coursera, have ‘peer-grading’, but I don’t particularly like what they’ve done, for the reasons pointed out by Audrey Watters. I do, however, very much like the model that P2PU have been iterating (see this article, co-written by one of the founders of P2PU, for example). The (very back-of-an-envelope) way I see this working for the Web Literacies framework is something like:

  1. A learner completes various activities and earns ‘skills’ badges.
  2. These skills badges are represented on some kind of matrix.
  3. Once the learner has enough badges to ‘level-up’ to a competencies-level badge, they are required to complete a public portfolio featuring their skills badges along with some context.
  4. This portfolio is submitted to a number (3? 5? 7? more?) of people who already have the competencies-level badge.
  5. If a certain percentage (75%? 90%?) agree that the portfolio fulfils the criteria for the badge, the learner successfully levels-up (see the sketch just after this list).
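
Here’s a very back-of-an-envelope sketch of what step 5 might look like in code (TypeScript again, with the reviewer count and agreement threshold left as the open questions they are above):

```typescript
// Step 5, sketched: the learner levels up if a high-enough percentage of
// badge-holding reviewers approve the portfolio. The defaults (5 reviewers,
// 75% agreement) are placeholders, not settled values.

interface Review {
  reviewerId: string; // must already hold the competencies-level badge
  approved: boolean;
}

function levelsUp(reviews: Review[], threshold = 0.75, minReviewers = 5): boolean {
  if (reviews.length < minReviewers) {
    return false; // not enough qualified reviewers have responded yet
  }
  const approvals = reviews.filter((r) => r.approved).length;
  return approvals / reviews.length >= threshold;
}

// Example: 4 of 5 reviewers approve, so 80% >= 75% and the learner levels up.
console.log(
  levelsUp([
    { reviewerId: 'a', approved: true },
    { reviewerId: 'b', approved: true },
    { reviewerId: 'c', approved: false },
    { reviewerId: 'd', approved: true },
    { reviewerId: 'e', approved: true },
  ])
); // true
```

Requiring a minimum number of reviewers before counting the percentage stops a single early vote from deciding the outcome either way.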

There’s a lot of work to be done thinking through potential extra mechanisms such as rating-the-raters as well as making the whole UX piece seamless, but I think that could be a fairly solid way to get started.
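
For what it’s worth, rating-the-raters could start as something as simple as weighting each reviewer’s vote by a reputation score earned from their past reviews. This is pure speculation, again with made-up names:

```typescript
// One possible rating-the-raters mechanism: weight each vote by the
// reviewer's reputation (0 = untrusted, 1 = fully trusted), then compare
// the weighted approval share against the same threshold as before.

interface WeightedReview {
  approved: boolean;
  reviewerReputation: number; // 0..1, earned from past reviewing
}

function weightedApprovalShare(reviews: WeightedReview[]): number {
  const totalWeight = reviews.reduce((sum, r) => sum + r.reviewerReputation, 0);
  if (totalWeight === 0) return 0;
  const approvedWeight = reviews
    .filter((r) => r.approved)
    .reduce((sum, r) => sum + r.reviewerReputation, 0);
  return approvedWeight / totalWeight;
}
```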

What do you think? Any suggestions? 🙂

4 thoughts on “Some (brief) thoughts about online peer assessment.”

  1. Hi Doug
    This sounds great. Would you see this being used with any learner? Adult or young person?
    Could 10-year-olds use this process? I’m not sure of your background and therefore the purpose of this proposal. Do you also have a particular platform in mind?

    I work in an 11-18 school, NW, UK.

    best wishes
    Jasmine

  2. I think the innate problem of peer assessment is the one Juvenal identified: who watches the watchmen? If the badge is worth having, then those who already hold it might not want to let others earn it (this would seem logical if the badge carries some sort of economic/social capital; contrast it with, say, ACCA and its model of central provision).

    Also, how do the first people get the badge? That seems to be a problem as well (a sort of proof by induction, with no base case).

    Given the flux of what “web literacy” could mean, how would those with the badge keep their skills up to date, so as to remain valid judges of those whose web literacy may be more current?
