Tomorrow, I’m in London to take part in the Scottish Qualifications Authority’s Expert Assessment Group. The SQA have been forward-thinking about Open Badges over the last few years, so I’m delighted to have been asked to attend.
Five people have been asked to give input in the morning from a ‘future of assessment’ point of view, and five in the afternoon on ways technology might help enable that future. I’ve got a very short amount of time, so I’ve boiled it down to the slides below.
(Note: go fullscreen by clicking the arrows in the black bar at the bottom)
The flow for my pitch starts with a tweet I saw earlier today from the influential Paul Graham. He links to an article in The New York Times which talks about skills-based hiring, but which completely disregards digital credentialing. From there, I discuss Michael Feldstein’s recent post about badging gaining huge traction in very specific areas. And then I launch into a pretty familiar flow using Bryan Mathers’ excellent visuals.
There’s loads more I want to say about how version 2.0 of the Open Badges specification allows for really interesting dynamic badges that ‘grow’ over time. Kerri Lemoie and Lucas Blair recently wrote about this from a technical point of view, and I presented my thoughts last week at the University of Dundee, including this slide:
Perhaps I’ll get a chance to discuss these new developments if my pitch is selected to be discussed further. I’d bring up blockchain technologies and their potential uses in credentialing, but I’ve got to catch a train back home in the evening…
When I was a classroom teacher, peer assessment was something I loved to do. Once you’ve shown learners the basics it’s as easy as asking them to swap books with the person next to them. Not only do they get to focus in on writing for a particular purpose, but it’s a decentralised system meaning there’s no single point of failure (or authority).
Online, however, things are a little more problematic. When we go web scale, issues (e.g. around identity, privacy and trust) become foregrounded in ways that they often aren’t in offline settings. This is something I need to think carefully about in terms of the Web Literacies framework I’m working on, as I’m envisaging the following structure:
Skills level – granular badges awarded for completing various tasks (most badges will be awarded automatically – as is currently the case with Mozilla Thimble)
Competencies level – peer assessment based on a portfolio comprising the work completed towards the skills badges
Literacies level – self- and peer-assessment based on work completed at the competencies level
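The three levels above could be modelled as a simple data structure. This is a minimal sketch only: all class names, level labels, and methods are my own illustrative assumptions, not part of the Web Literacies framework or the Open Badges specification.

```python
from dataclasses import dataclass, field

# Hypothetical level labels for the three tiers described above.
# The names are illustrative assumptions, not drawn from any spec.
SKILLS = "skills"               # granular, mostly auto-awarded badges
COMPETENCIES = "competencies"   # peer-assessed portfolio of skills work
LITERACIES = "literacies"       # self- and peer-assessed competencies work

@dataclass
class Badge:
    name: str
    level: str  # one of SKILLS, COMPETENCIES, LITERACIES

@dataclass
class Learner:
    name: str
    badges: list = field(default_factory=list)

    def badges_at(self, level: str) -> list:
        """Return this learner's badges at the given level."""
        return [b for b in self.badges if b.level == level]
```

Something like `badges_at(SKILLS)` would then give the pool of granular badges a learner could present in a competencies-level portfolio.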
I’ll figure out (hopefully with the help of many others) what the self-assessment looks like once we’ve sorted out the peer-assessment. The reason we need both is explained in this post.
Some of the xMOOCs such as Coursera have ‘peer-grading’, but I don’t particularly like what they’ve done, for the reasons pointed out by Audrey Watters. I do, however, very much like the model that P2PU have been iterating (see, for example, this article co-written by one of the founders of P2PU). The (very back-of-an-envelope) way that I see this working for the Web Literacies framework is something like:
A learner completes various activities and earns ‘skills’ badges.
These skills badges are represented on some kind of matrix.
Once the learner has enough badges to ‘level-up’ to a competencies-level badge they are required to complete a public portfolio featuring their skills badges along with some context.
This portfolio is submitted to a number (3? 5? 7? more?) of people who already have the competencies-level badge.
If a certain percentage (75%? 90%?) agree that the portfolio fulfils the criteria for the badge, the learner successfully levels up.
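The level-up check in the steps above could be sketched in a few lines. Note that the reviewer count and agreement threshold below are placeholder values: the post deliberately leaves them open (3? 5? 7? reviewers; 75%? 90%? agreement).

```python
def can_level_up(votes: list, threshold: float = 0.8) -> bool:
    """Decide whether a learner levels up to the next badge tier.

    votes: booleans from reviewers who already hold the target badge
           (True = the portfolio fulfils the criteria).
    threshold: placeholder agreement fraction -- an assumption, since
               the post leaves the exact percentage open.
    """
    if not votes:
        return False  # no reviews submitted yet
    return sum(votes) / len(votes) >= threshold
```

With five reviewers and an 80% threshold, four approvals would be enough; any rating-the-raters mechanism would sit on top of this, weighting each vote rather than counting them equally.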
There’s a lot of work to be done thinking through potential extra mechanisms such as rating-the-raters as well as making the whole UX piece seamless, but I think that could be a fairly solid way to get started.
My (remote, somewhat helicopter-like) contribution was pretty much summed up by the following:
After reading Audrey Watters’ post about the gathering (as well as those by others), I’d like to expand upon that and highlight some thoughts from others with whom I’m in agreement.
I want us to weigh classroom practices, power, authority, politics, publishing, assessment, expertise, attribution, and the culture(s) of the education system. I would argue that the textbook in its current form — and frankly in almost all of the digital versions we’re also starting to see now — is tightly woven into that very fabric, and once we tug hard enough at the “textbook” thread, things come undone.
The textbook is easy to talk about. It’s a physical thing that people have known as students and, in some cases, as educators. The trouble is that, just as with any technology, it’s difficult to separate the thing from the practices that surround it.
There’s nothing inherently wrong with textbooks – especially if you define them as Bud Hunt does as “A collection of information organized around thoughtful principles intended to provide support to instruction.” I’m not so keen on the word ‘instruction’ (I’d substitute ‘learning’) but like his basis in ‘thoughtful principles’.
Getting assessment right
One of the reasons I’m such a big fan of badges for lifelong learning is that assessment is broken. I don’t mean ‘broken’ in the sense that a bit of a repair job would fix. I mean structurally unsound and falling apart. Liable to collapse at any moment. That kind of broken.
It’s a problem I felt as a classroom teacher. It’s an issue I had to deal with as a senior manager. It’s evident in my sector-wide role in Higher Education. The hoops through which we’re asking people to jump not only don’t mean anything any more, but they don’t necessarily lead anywhere.
To me, that constitutes a crisis of relevance. So when we’ve got textbooks solely focused on providing content in bite-sized chunks in order to allow people to pass summative tests, then we’ve got a problem. A huge problem.
But let’s be clear: the problem is to do with the high-stakes assessment. It’s akin to the current attacks on the efficacy of teachers. The problem isn’t with (most) teachers, it’s with what you’re asking them to do. Likewise, with textbooks, it’s not the collecting of information in one place – it’s what people are expected to do with that information.
Open content and the blank page
I’ve seen many state their belief that the best kind of textbook is the blank page. By that, they mean that textbooks should be co-constructed. I certainly can’t argue with that, but we must always be careful that we don’t replace one form of top-down structure with another.
Back in 2006 I wrote a couple of posts on my old teaching blog. One covered the idea of teachers as lifeguards, and the other focused on the teacher as DJ. In the former I talk about the importance of teachers ‘knowing the waters’ so that they can allow students to explore, growing in confidence (but be there when things go wrong). In the latter I discuss the similarities between teachers and DJs around ‘tempo’ and ‘playlists’.
Both the lifeguard and DJ analogies work with textbooks, I think. The difficulties are always going to be around time and competency. It’s all very well for those new to the profession, willing to burn the candle at both ends to remix the curriculum and create their own textbooks to move #beyondthetextbook. But that’s a recipe for burnout.
As usual, I’ve more questions than answers, but if I have one contribution to the #beyondthetextbook debate it’s that our current use of textbooks is a symptom of the problem, not the problem itself. It’s difficult to debate nuanced things online, and even more so via Twitter.
I think we need a renaissance in blogging – and the kind of blogging where we reference other people’s work. If we’re going to debate problems in education, let’s do so at length, with some nuance, and in a considered way.
Thanks for reading this far. I’d love to read any comments you have below!