Over the last month, I’ve been working with the Bonfire team to perform some initial user research on the Zappa project. The aim of the project, funded by a grant from the Culture of Solidarity Fund, is to empower communities with a dedicated tool to deal with the coronavirus “infodemic” and online misinformation in general.
We ended up speaking with 11 individuals and organisations, and have synthesised our initial user research into the first version of a report which is now available.
One of the things about working openly is, fairly obviously, sharing your work as you go. This can be difficult for many reasons, not least because of the human tendency toward narrative: toward completed stories with a beginning, middle, and end.
As I wrote in my previous post about the project, we’d identified the following:
a list of people we can/should speak with
themes we should be aware of
groups of people we should talk with
Inevitably, since this initial work, we’ve discovered some obvious gaps in the people we should speak to (UX designers!). The people we’ve spoken with have recommended other people to contact, as well as avenues of enquiry to follow. This is such an interesting topic that we need to be careful the project doesn’t grow legs and run away with us…
10 interesting things people have told us so far
We haven’t started synthesising any of what our user research participants have said so far, but as we’re around halfway through the process of conducting interviews, I thought it might be worth sharing 10 interesting things they’ve told us. These are not in any particular order.
Countering misinformation is time-consuming — to fact-check articles takes time and by the time the result is published the majority of the people who were going to read it have done so anyway.
Chat apps — public social networks are blamed for not dealing with mis/disinformation but some of the most problematic stuff is being shared via messaging services such as WhatsApp and Telegram.
Difference between human and bot accounts — it’s possible to reason with a human being, but impossible to do so with a bot account.
Metaphor of adblock list — a way of reducing the burden of moderation on administrators and moderators of a federated social network instance by creating a more systematised version of something like the #Fediblock hashtag.
Subscribing to moderator(s) — delegating moderation explicitly to another user, perhaps by automatically blocking/muting whatever they do.
Different categories of approaches — for example, reputational solutions that deal with trusted parties, technical solutions that prove something hasn’t been tampered with, and process-based solutions which make transparent the context in which the content was created and transmitted.
Visualising connections — visualising the social graph could make it easier to spot outlier accounts which may be less trusted than those that lots of your other contacts are connected to.
Fact-checking platforms can be problematic — they promote an assumption that there is a single ‘Truth’ and one version of events. They can be useful in some instances but also be used to present a distorted view of the world.
Frictionless design — by ‘decomplexifying’ the design of user interfaces we hide the system behind the tool and the trade-offs that have been made in creating it.
Disappearing content — content that no longer exists can be a problem for derivative works / articles / posts that reference and rely on it to make valid claims.
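The “adblock list” and “subscribing to moderators” ideas above could work a little like subscribing to an adblock filter list. Here’s a minimal sketch of that mechanism — all names and structures are hypothetical, invented purely for illustration; no real fediverse software or API is assumed:

```python
# Hypothetical sketch: an instance subscribes to a published blocklist
# (like an adblock filter list) and applies its entries automatically,
# delegating moderation decisions to the list's maintainer.
from dataclasses import dataclass, field


@dataclass
class BlocklistSubscription:
    """A shared list of accounts/domains, each mapped to an action."""
    source: str                                   # where the list is published
    entries: dict = field(default_factory=dict)   # target -> "block" or "mute"

    def update(self, published_entries: dict) -> None:
        """Pull the latest entries from the subscribed list."""
        self.entries.update(published_entries)


@dataclass
class Instance:
    name: str
    subscriptions: list = field(default_factory=list)
    blocked: set = field(default_factory=set)
    muted: set = field(default_factory=set)

    def apply_subscriptions(self) -> None:
        """Apply every subscribed list, as adblock applies filter lists."""
        for sub in self.subscriptions:
            for target, action in sub.entries.items():
                if action == "block":
                    self.blocked.add(target)
                elif action == "mute":
                    self.muted.add(target)


# Usage: subscribe to a trusted moderator's published list and apply it.
shared = BlocklistSubscription(source="https://example.org/fediblock.json")
shared.update({"spam.example": "block", "noisy.example": "mute"})

instance = Instance(name="my.instance", subscriptions=[shared])
instance.apply_subscriptions()
```

The appeal of this model is that trust becomes explicit and revocable: an admin can unsubscribe from a list (or a moderator) at any time, rather than maintaining every decision by hand.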
It’s been fascinating to see the different ways that people have approached our conversations, whether from a technical, design, political, scientific, or philosophical perspective (or, indeed, all five!).
We’ve still got some people to talk with next week, but we are always looking to ensure a diverse range of user research participants with a decent geographical spread. As such, we could do with some help identifying people located in Asia (yes, the whole continent!) who might be interested in talking about their experiences, as well as people from minority and historically under-represented backgrounds in tech.
In addition, we could also do with talking with people who have suffered from mis/disinformation, any admins or moderators of federated social network instances, and UX designers who have a particular interest in mis/disinformation. You can get in touch via the comments below or at: firstname.lastname@example.org
In the past few weeks there have been a couple of occasions where the ‘why’ has been missing from some of the work in which I’ve been asked to be involved.
I’m not talking about the ‘why’ from the supply side, from the organisation that wants to provide the thing; I’m talking about the ‘why’ from the demand side, from the people who might want the thing.
This is not new to me. It was one of the major reasons it was so difficult to get systems of digital credentials based on the Open Badges standard off the ground in the early days: they made sense for the badge issuers, but not necessarily to the badge earners!
During the Catalyst Discovery work I led for We Are Open Co-op last month, we kept returning to one central theme with the nine charities involved in the programme. It’s summed up in this excellent illustration from Bryan Mathers:
In other words, if you show people who you already know something that you’ve made and ask them their opinion of it, they will say things to please you. “What do you think of my cool idea?” is not a fair question to ask people with whom you’re in a relationship. It’s the equivalent of asking your partner “does my bum look big in this?”
Instead, you have to do the hard work of audience definition and then user research. If this were an easy thing to do, then every workshop would have a waiting list, every newsletter would have millions of subscribers, and every product would have made its inventors rich.
It sounds obvious, but if you don’t know who your audience is, then you can only be successful: (i) by accident, (ii) by designing for yourself (as part of the audience group), or (iii) by copying other people. These are not long-term strategies for success.
Once you have defined your audience, congratulations! You now need to find out as much about them as possible. You can do this in passive ways, through reading other people’s research and sifting through data. That’s valuable, but nothing beats being active and going out of your way to actually talk to people about their pains, gains, and jobs to be done.
I tend to use Strategyzer’s Value Proposition Design (VPD) approach for this. I used it when designing MoodleNet, and I use it with clients. In its simplest form, you boil down the thing you create to a series of ad-libs which define your audience, product, and how it helps them:
Our ______ helps ______ who want to ______ by ______ and ______ (unlike ______).
I see too much what I would term ‘magical thinking’ in the world of product design and development. It’s equivalent to the fallacy of build it and they will come which plagues us all from time to time.
If your idea is worth putting into the world, and the main audience is someone other than yourself, then it’s worth talking in advance to the people who you want to buy, read, or use your product.