Open Thinkering


Tag: Zappa project

Exploring the sweet spot for Zappa project approaches to misinformation

Venn diagram of three overlapping circles labelled Technical, Procedural, and Relational. The words ‘Zappa Project’ sit at the overlap of all three circles, with ‘Consistency’ at the overlap of Technical and Relational, ‘Reliability’ at the overlap of Procedural and Relational, and ‘Efficiency’ at the overlap of Technical and Procedural. The Bonfire logo is at the bottom of the graphic, with the version number 0.1.

While we know that misinformation is not a problem that can ever be fully ‘solved’, it is important to try and reduce its harms. Last week, we published the first version of a report based on user research as part of the Zappa project, an initiative from the Bonfire team.

This week, I’ve narrated a Loom video giving an overview of the mindmap embedded in the report. Ivan requested this, as he found that the way I explain it captures some nuance that perhaps isn’t in the report itself (which is more focused on recommendations).

Another thing I was tasked with this week was creating a Venn diagram based on the three types of approaches that could be taken for the technical development of the Zappa project. These were slightly tweaked from suggestions made by one of our user research participants. As you can see in the above diagram, these are:

  • Technical — improving the way that users interact with information
  • Procedural — improving the process of capturing and displaying information
  • Relational — improving the way that humans interact with one another

It’s unlikely that any single approach would fall squarely into only one of these categories. For example, spending time thinking about the way that information is presented to users, and allowing them to control that, sits right in the middle of all three.

There are three overlaps other than the one right in the middle. These are:

  • Technical / Procedural — we’ve currently labelled this intersection Efficiency, because using technical approaches to improve processes usually makes them more efficient. This might include making it easier to block certain types of posts, for example (see the sketch after this list).
  • Procedural / Relational — we’ve labelled this intersection Reliability, because process, when considered in terms of relationships, is often focused on repeatable patterns. This might include, for example, being able to validate that the account you’re interacting with hasn’t been hijacked.
  • Relational / Technical — we’ve used the label Consistency for this one, as one of the things we found in our research is that users are often overwhelmed by information. We can do something about that, so this might include helping users feel in charge of their feeds to help avoid context collapse or aesthetic flattening.
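
To make the Efficiency overlap a little more concrete, here’s a minimal sketch of what ‘making it easier to block certain types of posts’ might look like. Everything in it, from the Post and FilterRule names to the filter_feed helper, is a hypothetical illustration made up for this post rather than anything from the Bonfire codebase or the Zappa project plans.

```python
from dataclasses import dataclass, field


@dataclass
class Post:
    """A minimal stand-in for a post on a federated social network."""
    author: str
    text: str
    hashtags: set = field(default_factory=set)


@dataclass
class FilterRule:
    """A user-defined rule: hide posts containing any listed hashtag or phrase."""
    blocked_hashtags: set = field(default_factory=set)
    blocked_phrases: set = field(default_factory=set)

    def allows(self, post: Post) -> bool:
        if self.blocked_hashtags & post.hashtags:
            return False
        text = post.text.lower()
        return not any(phrase.lower() in text for phrase in self.blocked_phrases)


def filter_feed(posts: list, rule: FilterRule) -> list:
    """Apply a single rule to a whole feed in one pass."""
    return [post for post in posts if rule.allows(post)]


# Example: hide anything tagged with a hashtag the user has chosen to block.
rule = FilterRule(blocked_hashtags={"#miraclecure"})
feed = [
    Post("alice@example.social", "Morning everyone!"),
    Post("spammer@elsewhere.social", "This cures everything", {"#miraclecure"}),
]
print([post.author for post in filter_feed(feed, rule)])  # ['alice@example.social']
```

The point is simply that once rules like this exist as data, they can be applied to a whole feed at once, which is where the efficiency comes from.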

You will notice the version number appended to this diagram is ‘v0.1’. It might be that we haven’t found the right words for these overlaps. It might be that some are more important than others. We’d love feedback from anyone paying attention to the Zappa project, whether you’ve been following the work around Bonfire since the beginning, or whether this is the first you’re hearing of it.

If it helps, feel free to grab the Google Slides original of the above Venn diagram, or comment below on a) what you think is good, b) anything you have questions about, or c) anything that concerns you.

First version of report published sharing findings from Zappa project user research

Illustration of a bear, fox, badger, and squirrel looking into a fire. Taken from the Bonfire website.

Over the last month, I’ve been working with the Bonfire team to perform some initial user research on the Zappa project. The aim of the project, funded by a grant from the Culture of Solidarity Fund, is to empower communities with a dedicated tool to deal with the coronavirus “infodemic” and online misinformation in general.

We ended up speaking with 11 individuals and organisations, and have synthesised our initial user research into the first version of a report which is now available.

📄 Click here to access the report

Some interesting findings from user research for the Zappa project (so far!)

Squirrels around a bonfire

One of the things about working openly is, fairly obviously, sharing your work as you go. This can be difficult for many reasons, not least because of the human tendency toward narrative: toward completed stories with a beginning, a middle, and an end.

The value of resisting this tendency and sitting in ambiguity for a while is that it allows slow hunches to form and serendipitous connections to be made. So it is with the user research I’m doing as part of the Zappa project for the Bonfire team. We need time to talk to lots of different types of people who meet our criteria, and to spend some time reflecting on what they’ve told us.

As I wrote in my previous post about the project, we’d already identified the following:

  • a list of people we can/should speak with
  • themes of which we should be aware/cognisant
  • groups of people we should talk with

Inevitably, since this initial work, we’ve discovered some obvious gaps in the list of people we should speak to (UX designers!). The people we’ve spoken with have recommended other people to contact, as well as avenues of enquiry to follow. This is such an interesting topic that we need to be careful the project doesn’t grow legs and run away with us…

10 interesting things people have told us so far

We haven’t started synthesising any of what our user research participants have said so far, but as we’re around halfway through the process of conducting interviews, I thought it might be worth sharing 10 interesting things they’ve told us. These are not in any particular order.

  • Countering misinformation is time-consuming — fact-checking an article takes time, and by the time the result is published, the majority of the people who were going to read the original have done so anyway.
  • Chat apps — public social networks are blamed for not dealing with mis/disinformation but some of the most problematic stuff is being shared via messaging services such as WhatsApp and Telegram.
  • Difference between human and bot accounts — it’s possible to reason with a human being, but impossible to do so with a bot account.
  • Metaphor of an adblock list — a way of reducing the burden of moderation on administrators and moderators of a federated social network instance by creating a more systematised version of something like the #Fediblock hashtag (a sketch of this idea follows this list).
  • Subscribing to moderator(s) — delegating moderation explicitly to another user, perhaps by automatically blocking/muting whatever they do.
  • Different categories of approaches — for example, reputational solutions that deal with trusted parties, technical solutions that prove something hasn’t been tampered with, and process-based solutions which make transparent the context in which the content was created and transmitted.
  • Visualising connections — visualising the social graph could make it easier to spot outlier accounts which may be less trusted than those that lots of your other contacts are connected to (a second sketch follows this list).
  • Fact-checking platforms can be problematic — they promote an assumption that there is a single ‘Truth’ and one version of events. They can be useful in some instances but also be used to present a distorted view of the world.
  • Frictionless design — by ‘decomplexifying’ the design of user interfaces we hide the system behind the tool and the trade-offs that have been made in creating it.
  • Disappearing content — content that no longer exists can be a problem for derivative works / articles / posts that reference and rely on it to make valid claims.
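
A couple of rough sketches might help make two of the ideas above more concrete. First, the ‘adblock list’ metaphor and ‘subscribing to moderator(s)’ could be combined as subscribable blocklists. The Blocklist and UserModeration structures below, and their fields, are assumptions invented for this example; they are not how Bonfire, or any existing Fediverse software, actually implements moderation.

```python
from dataclasses import dataclass, field


@dataclass
class Blocklist:
    """A published, subscribable list of accounts/instances to block or mute,
    loosely analogous to an adblock filter list or the #Fediblock hashtag."""
    maintainer: str
    blocked_accounts: set = field(default_factory=set)
    blocked_instances: set = field(default_factory=set)


@dataclass
class UserModeration:
    """A user's own blocks plus any blocklists they've chosen to subscribe to."""
    own_blocks: set = field(default_factory=set)
    subscriptions: list = field(default_factory=list)

    def is_blocked(self, account: str) -> bool:
        instance = account.split("@")[-1]
        if account in self.own_blocks:
            return True
        return any(
            account in bl.blocked_accounts or instance in bl.blocked_instances
            for bl in self.subscriptions
        )


# Subscribing to a trusted moderator's list delegates those decisions to them.
trusted_list = Blocklist(maintainer="mod@example.social",
                         blocked_instances={"spam.example"})
me = UserModeration(subscriptions=[trusted_list])
print(me.is_blocked("bot@spam.example"))      # True, via the subscribed list
print(me.is_blocked("alice@example.social"))  # False
```

The design choice being illustrated is simply that a subscription delegates block/mute decisions to a list maintainer you trust, in much the same way an adblocker delegates them to a filter-list maintainer.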
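
Second, for ‘visualising connections’: one very rough way to surface outlier accounts is to count how many of your existing contacts also follow a given account. Again, the social graph and the mutual_contact_count helper below are invented purely for illustration.

```python
# A toy social graph: each account maps to the set of accounts it follows.
# All accounts and relationships here are made up for this example.
follows = {
    "me": {"alice", "bob", "carol", "stranger"},
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob"},
    "stranger": set(),
}


def mutual_contact_count(graph: dict, viewer: str, account: str) -> int:
    """Count how many of the viewer's other contacts also follow the account."""
    return sum(
        1
        for contact in graph.get(viewer, set())
        if contact != account and account in graph.get(contact, set())
    )


# Accounts followed by few (or none) of your other contacts stand out as outliers.
for account in sorted(follows["me"]):
    print(account, mutual_contact_count(follows, "me", account))
# alice 2, bob 2, carol 2, stranger 0  <- the outlier
```

In a real interface this kind of signal would feed a visualisation rather than a printout, but the underlying idea, that accounts with few mutual connections stand out, is the same.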

It’s been fascinating to see the different ways that people have approached our conversations, whether from a technical, design, political, scientific, or philosophical perspective (or, indeed, all five!).

Next steps

We’ve still got some people to talk with next week, but we are always looking to ensure a diverse range of user research participants with a decent geographical spread. As such, we could do with some help identifying people located in Asia (yes, the whole continent!) who might be interested in talking about their experiences, as well as people from minority and historically under-represented backgrounds in tech.

In addition, we could also do with talking with people who have suffered from mis/disinformation, any admins or moderators of federated social network instances, and UX designers who have a particular interest in mis/disinformation. You can get in touch via the comments below or at: [email protected]
