Open Thinkering

Tag: Bonfire

Defederation and governance processes

I’ve noticed this week some Mastodon instances ‘defederating’ not only from those generally thought to be toxic, but also from large, general-purpose instances such as mastodon.social. This post is about governance processes and trying to steer a way between populism and oligarchy.

The first thing I should say in all of this, is that I’m a middle-aged, straight, white guy playing life on pretty much the easiest difficulty level. So I’m not commenting in this post about any specific situation but rather zooming out to think about this on a wider scale.

What I’ve seen, mainly via screenshots as I rarely visit Twitter now except to keep the @WeAreOpenCoop account up-to-date, is that Elon Musk has run some polls. As others have commented, this is how a Roman Emperor would make decisions: through easy-to-rig polls that suggest that an outcome is “the will of the people”.

Tweet from Elon Musk: "Should Twitter offer a general amnesty to suspended accounts, provided that they have not broken the law or engaged in egregious spam?"

Yes 72.4%
No 27.6%

This is obviously an extremely bad, childish, and dangerous way to run a platform that, until recently, was almost seen as infrastructure.

On the other side of the spectrum is the kind of decision making that I’m used to as a member of a co-op that is part of a wider co-operative network. These daily decisions around matters large and small require not necessarily consensus, but rather processes that allow for alignment around a variety of issues. As I mentioned in my previous post, one good way to do this is through consent-based decision making.

Screenshot of the Loomio for social.coop with multiple discussion threads

Using Loomio, social.coop — the instance that I currently call home on the Fediverse — makes decisions in a way that is open for everyone to view, and for members of the instance to help decide. It’s not a bad process at all, but a difficult one to scale — especially when rather verbose people with time on their hands decide to have An Opinion. It also happens in a place (Loomio) other than that which the discussion concerns (Mastodon).

So when I had one of my regular discussions with Ivan, one of the Bonfire team, I was keen to bring it up. He, of course, had already been thinking about this and pointed me towards Ukuvota, an approach which uses score voting to help with decision making:

To “keep things the way they are” is always an option, never the default. Framing this option as a default position introduces a significant conservative bias — listing it as an option removes this bias and keeps a collective evolving.

To “look for other options” is always an option. If none of the other current options are good enough, people are able to choose to look for better ones — this ensures that there is always an acceptable option for everyone.

Every participant can express how much they support or oppose each option. Limiting people to choose their favorite or list their preference prevents them from fully expressing their opinions — scoring clarifies opinions and makes it much more likely to identify the best decision.

Acceptance (non-opposition) is the main determinant for the best decision. A decision with little opposition reduces the likelihood of conflict, monitoring or sanctioning — it is also important that some people actively support the decision to ensure it actually happens.
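The four rules above lend themselves to a simple tally. What follows is a minimal sketch in that spirit, not Ukuvota’s actual algorithm: each ballot scores every option from -3 (strong opposition) to +3 (strong support), opposition is weighted more heavily than support (the 2× weight is my assumption, reflecting acceptance being the main determinant), and the two standing options are always on the ballot.

```python
# Sketch of score voting in the spirit of Ukuvota's rules (hypothetical,
# not the project's real implementation). Ballots score each option from
# -3 (strong opposition) to +3 (strong support); opposition counts double
# so that the least-opposed option tends to win.

OPPOSITION_WEIGHT = 2  # assumption: acceptance matters more than enthusiasm

# These two options are always present, never defaults:
STANDING_OPTIONS = ["keep things the way they are", "look for other options"]


def tally(ballots, proposals):
    """ballots: list of dicts mapping option -> score in [-3, 3]."""
    options = STANDING_OPTIONS + proposals
    totals = {}
    for option in options:
        total = 0
        for ballot in ballots:
            score = ballot.get(option, 0)  # an unscored option is neutral
            if not -3 <= score <= 3:
                raise ValueError(f"score out of range for {option!r}")
            total += score if score > 0 else OPPOSITION_WEIGHT * score
        totals[option] = total
    # the winner is the most-supported, least-opposed option
    return max(totals, key=totals.get), totals


# Hypothetical instance-blocking decision with three ballots:
ballots = [
    {"defederate from example.social": 2, "keep things the way they are": -1},
    {"defederate from example.social": 3, "look for other options": 1},
    {"defederate from example.social": -1, "keep things the way they are": 2},
]
winner, totals = tally(ballots, ["defederate from example.social"])
```

Note how one voter’s mild opposition to defederating counts double, yet the proposal can still carry if overall support outweighs the weighted opposition — which is exactly the “non-opposition as main determinant” idea.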

The examples given on the website are powerful but quite complicated, which is why I think there’s immense power in the default. To my mind, democratic decision making is the kind of thing that you need to practise, but which shouldn’t become a burden.

I’m hoping that after the v1.0 release of Bonfire, one of the extensions to emerge will be a powerful way of making democratic governance processes available right there in the social networking tool. If this were the case, I can imagine decisions around instance-blocking being made in a positive, timely, and democratic manner.

Watch this space! If you’re reading this and are involved in thinking about these kinds of things for projects you’re involved with, I’d love to have a chat.

Sticks and stones (and disinformation)

AI-generated image of sticks and stones

I guess like most people growing up in the 1980s and 1990s, the phrase “sticks and stones may break my bones, but words will never hurt me” was one I heard a lot. Parroted by parents and teachers alike, the sentiment may have been fairly unproblematic, but it’s a complete lie. In truth, whereas broken bones may heal relatively quickly, for some people it can take years of therapy to get over things that they experience during their formative years.

This post is about content moderation and is prompted by Elon Musk’s purchase of Twitter, which he’s promised to give a free speech makeover. As many people have pointed out, he probably doesn’t realise what he’s let himself in for. Or maybe he does, and it’s the apotheosis of authoritarian nationalism. Either way, let’s dig into some of the nuances here.

Here’s a viral video of King Charles III. It’s thirteen seconds long, and hilarious. One of the reasons it’s funny is that it pokes fun at monarchy, tradition, and an older, immensely privileged, white man. It’s obviously a parody and it would be extremely difficult to pass it off as anything else.

While I discovered this on Twitter, it also did the rounds on the Fediverse, and of course on chat apps such as WhatsApp, Signal, and Telegram. I shared it with others because it reflects my anti-monarchist views in a humorous way. It’s also a clever use of deepfake technology — although it’s not the most convincing example. I can imagine other people, including members of my family, not sharing this video partly because every other word is a profanity, but mainly because it undermines their belief in the seriousness and sanctity of monarchy.

In other words, and this is not exactly a deeply insightful point but one worth making nevertheless, the things we share with one another are social objects which are deeply contextual. (As a side note, this is why cross-posting between social networks seems so janky: each one has its own modes of discourse which only loosely translate elsewhere.)


A few months back I wrote a short report for the Bonfire team’s Zappa project. The focus was on disinformation, and I used First Draft’s 7 Types of Mis- and Disinformation spectrum as a frame.

First Draft - 7 Types of Mis- and Disinformation

As you can see, ‘Satire or Parody’ is way over on the left side of the spectrum. However, as we move to the right, it’s not necessarily the content that shifts but rather the context. That’s important in the next example I want to share.

Unlike the previous video, this one of Joe Biden is more convincing as a deepfake. Not only is it widescreen with a ‘news’ feel to it, the voice is synthesised to make it sound original, and the lip-syncing is excellent. Even the facial expression when moving to the ‘Mommy Shark…’ verse is convincing.

It is, however, still very much a parody as well as a tech demo. The video comes from the YouTube channel of Synthetic Voices, which is a “dumping ground for deepfakes videos, audio clones and machine learning memes”. The intentions here therefore may be mixed, with some videos created with an intent to mislead and deceive.


Other than the political implications of deepfakes, some of the more concerning examples are around deepfake porn. As the BBC has reported recently, while it’s “already an offence in Scotland to share images or videos that show another person in an intimate situation without their consent… in other parts of the UK, it’s only an offence if it can be proved that such actions were intended to cause the victim distress.” Trying to track down who created digital media can be extremely tricky at the best of times, and even if you do discover the culprit, they may be in a basement on the other side of the world.

So we’re already at the stage where, with enough money or technological expertise, you can make it appear that anyone said or did anything you like. Soon, there’ll be an app for it. In fact, I’m pretty sure I saw on Hacker News that there’s already an app for creating deepfake porn. Of course there is. The genie is out of the bottle, so what are we going to do about it?


While I didn’t necessarily foresee deepfakes and weaponised memes, a decade ago in my doctoral thesis I did talk about the ‘Civic’ element as one of the Eight Essential Elements of Digital Literacies. And then in 2019, just before the pandemic, I travelled to New York to present on Truth, Lies, and Digital Fluency — taking aim at Facebook, who had a representative in the audience.

The trouble is that there isn’t a single way of preventing harms when it comes to the examples on the right-hand side of First Draft’s spectrum of mis- and disinformation. You can’t legislate it away or ban it in its entirety. It’s not just a supply-side problem. Nor can you deal with it on the consumption side through ‘digital literacy’ initiatives aiming to equip citizens with the mindsets and skillsets to be able to detect and deal with deepfakes and the like.

That’s why I think that the future of social interaction is federated. The aim of the Zappa project is to develop a multi-pronged approach which empowers communities. That is to say, instead of content moderation either being a platform’s job (as with Twitter or YouTube) or an individual’s job, it becomes the role of communities to deem what they consider problematic.

Many of those communities will be run by a handful of individuals who will share blocklists and tags with admins and moderators of other instances. Some might be run by states, news organisations, or other huge organisations and have dedicated teams of moderators. Still others might be run by individuals who decide to take all of that burden on themselves for whatever reason.
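The blocklist-sharing described above could, mechanically, be quite simple. Here’s a hypothetical sketch (not any real Fediverse or Mastodon API) of how a receiving admin might merge a shared blocklist into their own, keeping the reason and severity alongside each domain for review, and never letting a shared entry override a local decision:

```python
# Hypothetical data structure for admins sharing blocklists between
# instances. The "silence"/"suspend" severities follow Mastodon's
# terminology; everything else here is an illustrative assumption.

from dataclasses import dataclass


@dataclass(frozen=True)
class BlockEntry:
    domain: str
    severity: str  # e.g. "silence" or "suspend"
    reason: str    # kept so receiving admins can review before adopting


def merge_blocklists(ours, shared):
    """Adopt shared entries we don't already have; our own entries win on conflict."""
    merged = {entry.domain: entry for entry in shared}
    merged.update({entry.domain: entry for entry in ours})
    return sorted(merged.values(), key=lambda entry: entry.domain)


ours = [BlockEntry("spam.example", "suspend", "spam")]
shared = [
    BlockEntry("spam.example", "silence", "unsolicited ads"),
    BlockEntry("troll.example", "suspend", "harassment"),
]
merged = merge_blocklists(ours, shared)
```

Keeping a human-readable reason with every entry matters: it turns a blocklist from an opaque import into something a community’s governance process can actually deliberate over.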

There are no easy answers. But conspiracy theories have been around since the dawn of time, mainly because there really are people in power doing terrible things. So yes, we need appropriate technological and sociological approaches to things which affect democracy, mental health, and dignity. But we also need to engineer a world where billionaires don’t exist, partly so that an individual can’t buy an (albeit privatised) digital town square for fun.

One thing’s for sure: if Musk gets his way, we’ll be able to test the phrase “sticks and stones may break my bones…” on a whole new generation. Perhaps show them the Fediverse instead?


Main image created using DALL-E 2 (it seemed appropriate!)

Bonfire beta

In my weeknotes over the last few months you’ll have seen reference to Bonfire, a federated app toolkit. Today, I’m delighted to be able to share that the core team behind Bonfire have launched the beta, and you can go and give it a whirl in the playground.

You may recognise my dulcet tones from the screencast video. It’s been my pleasure to be able to help with that, as well as the Zappa project and some user testing.

Bonfire has a big future ahead of it, and as more than just a microblog! Read more on what you can expect from the Bonfire beta.
