As a member of social.coop, a cooperative social network that uses Mastodon, I’ve recently observed our community grappling with a significant decision. Meta, the company behind Facebook, Instagram, and WhatsApp, announced that their new platform, Threads, would join the ‘Fediverse’ — a collective of instances compatible with federated social network protocols like ActivityPub. This sparked a debate within social.coop about whether to block any federated instances created by Meta, a decision that had to be made democratically.
However, the way this decision was introduced was problematic. A member, who hadn’t been active in prior discussions, suddenly proposed a vote. This rushed approach led to a low turnout, with only 68 out of several thousand members voting. The result was inconclusive and not representative of the community.
What followed was a convoluted discussion with multiple threads (no pun intended!) that were hard to follow. Many comments were made without considering previous discussions. Two more ‘formal’ proposals were brought forward, but neither provided a clear path forward. The lack of structure and process was evident and concerning.
The issue escalated to the point where some members suggested splitting the co-op along the lines of those for and against defederating with Threads. This is a situation we should strive to avoid. Cooperatives work best when there are defined and well-understood processes, leading to productive discussions and timely decisions. Unfortunately, this wasn’t the case in our response to Meta’s announcement about Threads.
My concern isn’t so much about the decision to defederate from Threads, but rather the process by which we arrived at this point. The discussion was exhausting and unproductive, with endless notifications about new opinions that often repeated what had already been said. This felt like an endless cycle of debate without resolution.
Cooperatives should not rely solely on consensus or voting. Instead, they should use consent-based decision making, which focuses on whether members object to a proposal rather than whether they agree with it. This approach acknowledges different perspectives and experiences and allows us to operate together towards a shared aim.
To improve our decision-making process, I suggest the following:
Proposals should follow agreed guidelines. If a member is unsure how to proceed, they should consult with a working group.
There should be separate areas for discussion and decision-making.
Proposals should be high-level and only brought to the whole membership if they aren’t covered by an existing policy.
We should use consent-based decision-making, asking whether people object (i.e. have critical concerns) rather than necessarily wholeheartedly agreeing.
Our mantra should be: is this good enough for now and safe enough to try?
By adopting these suggestions, we can ensure that our cooperative remains a place for productive cooperation and informed decision-making. It’s easy to become overwhelmed by discussion and debate, but the cooperative movement has already developed processes that solve these problems, and I think social.coop would benefit from adopting them.
One of the things about working openly is, fairly obviously, sharing your work as you go. This can be difficult for many reasons, not least because of the human tendency toward narrative: we prefer completed stories with a beginning, middle, and end.
As I wrote in my previous post about the project, we’d identified the following:
a list of people we can/should speak with
themes we should be aware of
groups of people we should talk with
Inevitably, since this initial work, we’ve identified some obvious gaps in the people we should speak to (UX designers!). The people we’ve spoken with have recommended other people to contact, as well as avenues of enquiry to follow. This is such an interesting topic that we need to be careful the project doesn’t grow legs and run away with us…
10 interesting things people have told us so far
We haven’t started synthesising any of what our user research participants have said so far, but as we’re around halfway through the process of conducting interviews, I thought it might be worth sharing 10 interesting things they’ve told us. These are not in any particular order.
Countering misinformation is time-consuming — fact-checking an article takes time, and by the time the result is published, most of the people who were going to read the original article have already done so.
Chat apps — public social networks are blamed for not dealing with mis/disinformation but some of the most problematic stuff is being shared via messaging services such as WhatsApp and Telegram.
Difference between human and bot accounts — it’s possible to reason with a human being but impossible to do with a bot account.
Metaphor of adblock list — a way of reducing the burden of moderation on administrators and moderators of a federated social network instance by creating a more systematised version of something like the #Fediblock hashtag.
Subscribing to moderator(s) — delegating moderation explicitly to another user, perhaps by automatically applying whatever blocks or mutes they make.
Different categories of approaches — for example, reputational solutions that deal with trusted parties, technical solutions that prove something hasn’t been tampered with, and process-based solutions which make transparent the context in which the content was created and transmitted.
Visualising connections — visualising the social graph could make it easier to spot outlier accounts which may be less trusted than those that lots of your other contacts are connected to.
Fact-checking platforms can be problematic — they promote an assumption that there is a single ‘Truth’ and one version of events. They can be useful in some instances but also be used to present a distorted view of the world.
Frictionless design — by ‘decomplexifying’ the design of user interfaces we hide the system behind the tool and the trade-offs that have been made in creating it.
Disappearing content — content that no longer exists can be a problem for derivative works / articles / posts that reference and rely on it to make valid claims.
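The ‘adblock list’ and moderator-subscription ideas above could be combined: an instance subscribes to one or more shared blocklists and merges them with its own local rules. Here’s a minimal sketch in Python of what that merge might look like — the field names and severity levels are my own illustrative assumptions, loosely modelled on Mastodon’s domain-block exports, not an existing format:

```python
# Sketch of a subscribable blocklist: an instance merges entries from
# lists it subscribes to with its own local overrides. Field names and
# severities here are hypothetical, loosely inspired by Mastodon's
# domain-block CSV exports.

from dataclasses import dataclass

@dataclass(frozen=True)
class BlockEntry:
    domain: str
    severity: str  # "silence" (mute) or "suspend" (full block)

# Rank severities so we can always keep the strictest one.
SEVERITY_RANK = {"silence": 1, "suspend": 2}

def merge_blocklists(subscribed: list[list[BlockEntry]],
                     local_overrides: dict[str, str]) -> dict[str, str]:
    """Combine subscribed lists, taking the strictest severity per domain.
    Local overrides always win; "none" removes a domain entirely, so an
    admin can make exceptions to the lists they subscribe to."""
    merged: dict[str, str] = {}
    for blocklist in subscribed:
        for entry in blocklist:
            current = merged.get(entry.domain)
            if current is None or SEVERITY_RANK[entry.severity] > SEVERITY_RANK[current]:
                merged[entry.domain] = entry.severity
    for domain, severity in local_overrides.items():
        if severity == "none":
            merged.pop(domain, None)
        else:
            merged[domain] = severity
    return merged

# Example: two subscribed lists plus one local exception.
list_a = [BlockEntry("spam.example", "suspend"),
          BlockEntry("noisy.example", "silence")]
list_b = [BlockEntry("noisy.example", "suspend")]
result = merge_blocklists([list_a, list_b], {"noisy.example": "none"})
```

The design choice worth noting is the local-override step: delegation without an escape hatch would hand moderation policy entirely to the list maintainer, whereas this keeps the instance admin in ultimate control.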
It’s been fascinating to see the different ways that people have approached our conversations, whether from a technical, design, political, scientific, or philosophical perspective (or, indeed, all five!).
We’ve still got some people to talk with next week, but we are always looking to ensure a diverse range of user research participants with a decent geographical spread. As such, we could do with some help identifying people located in Asia (yes, the whole continent!) who might be interested in talking about their experiences, as well as people from minority and historically under-represented backgrounds in tech.
In addition, we could also do with talking with people who have suffered from mis/disinformation, any admins or moderators of federated social network instances, and UX designers who have a particular interest in mis/disinformation. You can get in touch via the comments below or at: firstname.lastname@example.org
After spending a long time researching various options for MoodleNet last year, I recently revisited the Fediverse with fresh eyes. I enjoy using Mastodon regularly, and have written about it here before, so didn’t include it in this roundup.
Here are some of the social networks I played around with recently, in no particular order. It’s not meant to be a comprehensive overview, just what grabbed my attention given the context in which I’m currently working. That’s why I’ve called it a ‘field trip’ 😉
Weird name but pretty awesome social network that’s very popular in Japan. Like MoodleNet and Mastodon, it’s based on the ActivityPub protocol. In fact, if you’re a Mastodon user, it will feel somewhat familiar.
Things I like:
Drive (2TB storage!)
Lots of options for customisation, including ‘dark mode’