Open Thinkering

Tag: disinformation

Some interesting findings from user research for the Zappa project (so far!)

Squirrels around a bonfire

One of the things about working openly is, fairly obviously, sharing your work as you go. This can be difficult for many reasons, not least because of the human tendency toward narrative: toward completed stories with a beginning, middle, and end.

The value of resisting this tendency and sitting in ambiguity for a while is that it allows slow hunches to form and serendipitous connections to be made. So it is with the user research I’m doing as part of the Zappa project for the Bonfire team. We need time to talk to lots of different types of people who meet our criteria, and to spend some time reflecting on what they’ve told us.

As I wrote in my previous post about the project, we’d identified the following:

  • a list of people we can/should speak with
  • themes of which we should be aware/cognisant
  • groups of people we should talk with

Inevitably, since this initial work, we’ve identified some obvious gaps in the people we should speak to (UX designers!). The people we’ve spoken with have recommended others to contact, as well as avenues of enquiry to follow. This is such an interesting topic that we need to be careful the project doesn’t grow legs and run away with us…

10 interesting things people have told us so far

We haven’t started synthesising what our user research participants have said so far but, as we’re around halfway through the process of conducting interviews, I thought it might be worth sharing 10 interesting things they’ve told us. These are not in any particular order.

  • Countering misinformation is time-consuming — fact-checking an article takes time, and by the time the result is published most of the people who were going to read the original have already done so.
  • Chat apps — public social networks are blamed for not dealing with mis/disinformation but some of the most problematic stuff is being shared via messaging services such as WhatsApp and Telegram.
  • Difference between human and bot accounts — it’s possible to reason with a human being, but impossible to do so with a bot account.
  • Metaphor of adblock list — a way of reducing the burden of moderation on administrators and moderators of a federated social network instance by creating a more systematised version of something like the #Fediblock hashtag (see the first sketch after this list).
  • Subscribing to moderator(s) — delegating moderation explicitly to another user, perhaps by automatically blocking/muting whatever they do (also covered in the first sketch below).
  • Different categories of approaches — for example, reputational solutions that deal with trusted parties, technical solutions that prove something hasn’t been tampered with, and process-based solutions which make transparent the context in which the content was created and transmitted.
  • Visualising connections — visualising the social graph could make it easier to spot outlier accounts, which may be less trusted than those that many of your other contacts are connected to (see the second sketch after this list).
  • Fact-checking platforms can be problematic — they promote an assumption that there is a single ‘Truth’ and one version of events. They can be useful in some instances, but can also be used to present a distorted view of the world.
  • Frictionless design — by ‘decomplexifying’ the design of user interfaces we hide the system behind the tool and the trade-offs that have been made in creating it.
  • Disappearing content — content that no longer exists can be a problem for derivative works / articles / posts that reference and rely on it to make valid claims.
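A couple of the ideas above lend themselves to small illustrations. First, the adblock-list metaphor and moderator subscriptions. Below is a minimal sketch, in Python, of how subscribable blocklists might work: an admin subscribes to one or more published lists (a systematised #Fediblock) or to a trusted moderator, and the union of their decisions is applied locally. All names, fields, and data here are hypothetical and invented for illustration; this is not Bonfire’s actual API or data model.

```python
from dataclasses import dataclass, field


@dataclass
class Blocklist:
    """A published, subscribable set of moderation decisions.

    This could be a community-maintained list (the adblock metaphor)
    or simply another user whose blocks/mutes you trust.
    """
    name: str
    blocked_domains: set[str] = field(default_factory=set)
    muted_accounts: set[str] = field(default_factory=set)


@dataclass
class InstancePolicy:
    """Local moderation state derived entirely from subscriptions."""
    subscriptions: list[Blocklist] = field(default_factory=list)

    def subscribe(self, blocklist: Blocklist) -> None:
        self.subscriptions.append(blocklist)

    def is_blocked(self, domain: str) -> bool:
        # A domain is blocked if any subscribed list blocks it.
        return any(domain in bl.blocked_domains for bl in self.subscriptions)

    def is_muted(self, account: str) -> bool:
        return any(account in bl.muted_accounts for bl in self.subscriptions)


# Usage: subscribe to a community list and to a trusted moderator,
# then filter incoming activity against the combined policy.
fediblock = Blocklist("fediblock-export", blocked_domains={"spam.example"})
trusted_mod = Blocklist("@mod@social.example",
                        muted_accounts={"@bot@spam.example"})

policy = InstancePolicy()
policy.subscribe(fediblock)
policy.subscribe(trusted_mod)

print(policy.is_blocked("spam.example"))     # True
print(policy.is_muted("@bot@spam.example"))  # True
```

One design choice worth noting: because the local policy is just the union of subscribed decisions, unsubscribing from a list immediately reverses its effects, which keeps the delegation explicit and revocable.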
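Second, the social-graph idea. This toy sketch scores each account by how many of your existing contacts also follow it, so that outliers (accounts none of your contacts connect to) stand out; in a real interface, this is the kind of thing a visualisation would surface at a glance. Again, the graph data and function names are made up purely for illustration.

```python
# Follower graph: account -> set of accounts it follows (invented data).
follows: dict[str, set[str]] = {
    "me":       {"alice", "bob", "carol", "stranger"},
    "alice":    {"bob", "carol"},
    "bob":      {"alice", "carol"},
    "carol":    {"alice", "bob"},
    "stranger": set(),  # shares no connections with the rest of the network
}


def overlap_score(graph: dict[str, set[str]], viewer: str, account: str) -> float:
    """Fraction of the viewer's other contacts that also follow `account`."""
    contacts = graph[viewer] - {account}
    if not contacts:
        return 0.0
    endorsers = sum(1 for contact in contacts if account in graph.get(contact, set()))
    return endorsers / len(contacts)


for account in sorted(follows["me"]):
    print(f"{account}: {overlap_score(follows, 'me', account):.2f}")
# alice: 0.67, bob: 0.67, carol: 0.67, stranger: 0.00 <- the outlier
```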

It’s been fascinating to see the different ways that people have approached our conversations, whether from a technical, design, political, scientific, or philosophical perspective (or, indeed, all five!).

Next steps

We’ve still got some people to talk with next week, but we are always looking to ensure a diverse range of user research participants with a decent geographical spread. As such, we could do with some help identifying people located in Asia (yes, the whole continent!) who might be interested in talking about their experiences, as well as people from minority and historically under-represented backgrounds in tech.

In addition, we could do with talking with people who have suffered from mis/disinformation, any admins or moderators of federated social network instances, and UX designers who have a particular interest in mis/disinformation. You can get in touch via the comments below or at: [email protected]

Countering misinformation in federated social networks: an introduction to the Zappa project

Illustration of birds from Bonfire website

One thing I’ve learned from spending all of my adult life online and being involved in lots of innovation projects is that you can have the best bookmarking system in the world, but it means nothing if you don’t do something with the stuff you’ve bookmarked. Usually, for me, that means turning what I’ve filed away into some kind of blog post. It’s basically the reason Thought Shrapnel exists.


Last week I started some new work with the Bonfire team called the Zappa project. Bonfire is a fork of CommonsPub, the underlying codebase for MoodleNet.

Self-host your online community and shape your experience at the most granular level: add and remove features, change behaviours and appearance, tune, swap or turn off algorithms. You are in total control.

Bonfire is modular, with different extensions allowing communities to customise their own social network. The focus of Zappa is shaped by a grant from the Culture of Solidarity Fund.

The grant will be used to release a beta version of Bonfire Social and to develop Zappa – a custom bonfire extension to empower communities with a dedicated tool to deal with the coronavirus “infodemic” and online misinformation in general.

The announcement blog post talks of “experimental artificial intelligence engines” and “Zappa scores” which may be longer-term goals, while my job is to talk to people with real-world needs right now. As I’ve learned from being involved in quite a few innovation projects over the last 20 years, there’s a sweet spot between what’s useful, theoretically sound, and technically achievable.


Last week, I met with Ivan to start defining user groups and the initial scope of the project. It’s easy to think that the possible target audience is ‘everyone’, but it’s of much more value to think about who the Zappa project is likely to be useful for in the near future.

Priority areas for stakeholders, user groups, and themes

The above Whimsical board shows:

  • a list of people we can/should speak to (we’ve spoken with two orgs so far)
  • themes of which we should be aware/cognisant
  • groups of people we should talk with

The latter two lists are prioritised based on our current thinking and, as you can see, they’re biased towards action: towards those who don’t have merely an academic interest in the Zappa project, but who have some skin in the anti-misinformation game.


A note in passing: many people use ‘misinformation’ and ‘disinformation’ as near-synonyms of one another. But, even in common usage, it’s clear that they have an important difference in meaning.

We’d say, for example, that someone was ‘misinformed’, in which case their lack of correct information wouldn’t necessarily be their fault. On the other hand, we might talk about state actors waging a ‘disinformation’ campaign, which very much would be intentional, and probably focused on creating a mixture of fear, uncertainty, and/or doubt.

The line between misinformation and disinformation can be blurry, but it’s probably helpful to conceptualise what we’re doing in the terms of the grant: to help “empower communities with a dedicated tool to deal with the coronavirus ‘infodemic’ and online misinformation in general”.


One of the resources that I’ve found particularly helpful (and which I wish I’d seen before presenting on Truth, Lies & Digital Fluency a couple of years ago) is Fake news. It’s complicated. Its author, Claire Wardle from First Draft, lays out 7 Types of Mis- and Disinformation on a spectrum from ‘satire or parody’ (which some wouldn’t even conceptualise as misinformation) through to ‘fabricated content’ (which most people would definitely consider disinformation).

7 Types of Mis- and Disinformation

Some of the differences between these types can be quite nuanced, and so I found the Misinformation Matrix in the post really useful for looking at the reasons for the misinformation being published in the first place. These range from sloppy journalistic practices, through to flat-out propaganda.

Misinformation Matrix

The user research we’re doing at the moment focuses on which types of misinformation human rights organisations, scientists, and other front-line orgs are suffering from, how and where these manifest, and what they’ve tried to do about it.

So far, we’ve discovered that countering misinformation can be a huge time suck for people who are often volunteering for charities, non-profits, or loosely-organised groups. Some areas of the world seem to suffer more than others, and particular platforms are currently doing worse than others. All of them could, of course, do much better.


We’re still gathering people and organisations for this project. So if, based on the above, you know someone you think we should talk to, then please get in touch! You can leave a comment below, or get in contact via email.

Weeknote 41/2020

Traffic cones in a large puddle

This week has been much like last week — busy, somewhat fraught, and involving lots of thinking about the future. It’s been typical autumn weather, with bright sunshine one moment and a torrential downpour the next!


I applied for a role at the Wikimedia Foundation entitled Director of Product, Anti-Disinformation after a few people I know and respect said that they thought I’d be a good fit:

The Wikimedia Foundation is looking for a Director of Product Management to design and implement our anti-disinformation program. This unique position will have a global impact on preventing Disinformation through Wikipedia and our other Wikimedia projects. You will gain a deep understanding of the ways in which our communities have fought disinformation for the last two decades and how this content is used globally. You will work cross-functionally with Legal, Security, Research and other teams at the Foundation and imagine and design solutions that enable our communities to achieve our Vision: a world in which every single human being can freely share in the sum of all knowledge.

As a result, I ended up writing about my issues with Twitter’s attempts at anti-disinformation in the run-up to the US Presidential election.

On Friday evening, a recruiter for a different global product director role got in touch, seemingly slightly baffled that I’d applied for it given my career history and credentials. I suppose it’s easy to undervalue yourself when various people chip away at your self-worth over a period of months during a pandemic.


I’m very much enjoying working with Outlandish at the moment. It’s particularly nice to work alongside people who not only work openly and co-operatively, but are genuinely interested in improving communication, trust, and empathy within their organisation.

During October, due to my commitment to a four week Catalyst-funded discovery programme with nine charities, I’m only working with Outlandish the equivalent of one day per week. However, from November to January, I’ll be spending half of my week (2.5 days) divided between two things:

  1. Helping them productise existing projects and train/support new ‘product managers’ (although that role will look slightly different initially)
  2. Helping them expand their ‘Building OUT’ programme, which stands for Openness, Understanding, and Trust.

One of Outlandish’s values is that they are ‘doers’, meaning that the space between verbally proposing something, gaining the consent of colleagues, and getting on with it is really short. It’s so refreshing, and it meant that on Friday I was able to publish the MVP of a playbook using existing Building OUT-related resources.


On Thought Shrapnel this week I published:

Here, I published:


Other than the above, I’ve been making final preparations for a milestone birthday for my wife, Hannah, next week. I’ll reach the same age as her in a couple of months’ time, so at the start of 2020 we’d begun to draw up plans to celebrate both birthdays. Those plans went out of the window due to COVID-19, so I’m trying to make the day as nice as possible, with an eye on a belated celebration later.


Image of traffic cones in large puddle at Morpeth, UK.
