Open Thinkering


Tag: moderation

I am so tired of moving platforms

An abstract image depicting the theme of conflict in community moderation, featuring fragmented shapes and warm colors to symbolise the intensity and challenges of decision-making.

It’s only been a few months since I switched my Thought Shrapnel newsletter to Substack. The WordPress plugin I was using, MailPoet, was great, but despite my best efforts it kept marking subscribers as ‘inactive’. This happened to Laura as well, meaning that I had to keep re-subscribing to her newsletter.

Substack not only doesn’t suffer from this problem, but it’s got some really nice features. One of them, launched this year, is Substack Notes, which is a social network made up of writers and readers of publications. I’ve discovered some absolutely wonderful writing as a result.

No good thing can last, however, and of course, like every platform, Substack has a Nazi problem. The thing is, they’ve decided to essentially do nothing about it. Doing nothing is a choice. Doing nothing keeps the money flowing. For now.

Many people who have decided to leave Substack have cited the Nazi bar problem. This is based on an anecdote highlighting the importance of nipping things in the bud before a place becomes overrun with bad actors.

Venkatesh Rao thinks that the Nazi bar analogy is “an example of a bad metaphor contagion effect” and points to a 2010 post of his about warren vs plaza architectures. He believes that Twitter, for example, is a plaza, whereas Substack is a warren:

A warren is a social environment where no participant can see beyond their little corner of a larger maze. Warrens emerge through people personalizing and customizing their individual environments with some degree of emergent collaboration. A plaza is an environment where you can easily get to a global/big picture view of the whole thing. Plazas are created by central planners who believe they know what’s best for everyone.

No matter how Substack is organised, once good, influential people decide to move (e.g. Audrey Watters, Molly White, Ryan Broderick), it’s game over. Just as with Twitter/X, a platform can still exist, but it’s become a toxic brand, synonymous with a certain type of person or politics. This article describes an original post by Xianhang Zhang, who coined the term ‘evaporative cooling effect’ to describe this exodus:

The Evaporative Cooling Effect describes the phenomenon that high value contributors leave a community because they cannot gain something from it, which leads to the decrease of the quality of the community. Since the people most likely to join a community are those whose quality is below the average quality of the community, these newcomers are very likely to harm the quality of the community. With the expansion of community, it is very hard to maintain the quality of the community.
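As an aside, the dynamic described in that quote is easy to illustrate with a toy simulation. The sketch below is mine, not Zhang’s, and all of the numbers (quality scores, departure probabilities, arrival rates) are invented purely to show the shape of the effect: average quality drifts downward as above-average contributors leave and below-average newcomers arrive.

# Toy simulation of the evaporative cooling effect (illustrative only;
# not a model from Zhang's original post).
import random

community = [random.gauss(0.6, 0.15) for _ in range(200)]  # members' 'quality' scores

for step in range(20):
    avg = sum(community) / len(community)
    # Above-average contributors drift away with some probability each step...
    community = [q for q in community if not (q > avg + 0.2 and random.random() < 0.3)]
    # ...while newcomers tend to arrive with quality below the current average.
    community += [min(1.0, max(0.0, random.gauss(avg - 0.1, 0.15))) for _ in range(20)]
    print(f"step {step:2d}: members={len(community):4d}, average quality={avg:.3f}")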

Moderation is hard. Everyone disagrees about who and what shouldn’t be ‘allowed’. As my experience on the Fediverse among very well-intentioned people has shown, it’s not just free speech absolutists vs everyone else. There are people, for example, who believe that pre-emptively blocking an entire platform is reasonable. You can end up in endless debates about theoretical situations.

By its very nature, moderation is a form of censorship. You, as a community, space, or platform, are deciding who and what is unacceptable. In Substack’s case, for example, they don’t allow pornography but they do allow Nazis. That’s not “free speech” but rather a business decision. If you’re making moderation decisions based on financials, fine, but say so. Then platform users can make choices appropriately.


A lot of people seem to be migrating to Ghost, which is a solid option: open source software from a non-profit foundation. I guess my use case is slightly different, in that I’ve only been sending out the newsletter roundup of my Thought Shrapnel posts on Substack. The posts themselves exist, as they have done for years, on a self-hosted installation of WordPress.

I’ll probably just end up defaulting back to MailPoet, or perhaps just not send out a newsletter while I figure out what to do. It’s such a shame, because I was really enjoying the Substack experience. I’m not sure if it would be enough for me and for others if the founders were to change their mind, but it has reminded me about how important it is to own and control your own content.


Image: DALL-E 3

Sticks and stones (and disinformation)

AI-generated image of sticks and stones

I guess like most people growing up in the 1980s and 1990s, the phrase “sticks and stones may break my bones, but words will never hurt me” was one I heard a lot. Parroted by parents and teachers alike, the sentiment may have seemed harmless enough, but it’s a complete lie. In truth, whereas broken bones may heal relatively quickly, for some people it can take years of therapy to get over things that they experience during their formative years.

This post is about content moderation and is prompted by Elon Musk’s purchase of Twitter, which he’s promised to give a free speech makeover. As many people have pointed out, he probably doesn’t realise what he’s let himself in for. Or maybe he does, and it’s the apotheosis of authoritarian nationalism. Either way, let’s dig into some of the nuances here.

Here’s a viral video of King Charles III. It’s thirteen seconds long, and hilarious. One of the reasons it’s funny is that it pokes fun at monarchy, tradition, and an older, immensely privileged, white man. It’s obviously a parody and it would be extremely difficult to pass it off as anything else.

While I discovered this on Twitter, it also did the rounds on the Fediverse, and of course on chat apps such as WhatsApp, Signal, and Telegram. I shared it with others because it reflects my anti-monarchist views in a humorous way. It’s also a clever use of deepfake technology — although it’s not the most convincing example. I can imagine other people, including members of my family, not sharing this video partly because every other word is a profanity, but mainly because it undermines their belief in the seriousness and sanctity of monarchy.

In other words, and this is not exactly a deeply insightful point but one worth making nevertheless, the things we share with one another are social objects which are deeply contextual. (As a side note, this is why cross-posting between social networks seems so janky: each one has its own modes of discourse which only loosely translate elsewhere.)


A few months back I wrote a short report for the Bonfire team’s Zappa project. The focus was on disinformation, and I used First Draft’s 7 Types of Mis- and Disinformation spectrum as a frame.

First Draft - 7 Types of Mis- and Disinformation

As you can see, ‘Satire or Parody’ is way over on the left side of the spectrum. However, as we move to the right, it’s not necessarily the content that shifts but rather the context. That’s important in the next example I want to share.
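To make that point about context concrete, here’s a hypothetical sketch. The classification rules and labels below are my own invention for illustration, not First Draft’s methodology; the idea is simply that the same fabricated clip lands at different points on the spectrum depending on how it’s framed when shared.

# Hypothetical sketch: the same clip sits at different points on the
# spectrum depending on the context it is shared with. These rules are
# invented for illustration and are not First Draft's methodology.
def classify(fabricated, presented_as_real, intent_to_harm):
    if fabricated and not presented_as_real:
        return "Satire or parody (low intent to deceive)"
    if fabricated and presented_as_real and not intent_to_harm:
        return "Misleadingly framed fabricated content"
    if fabricated and presented_as_real and intent_to_harm:
        return "Fabricated content shared to deceive (high intent to harm)"
    return "Genuine content"

# The King Charles video, shared as an obvious joke vs. passed off as real:
print(classify(fabricated=True, presented_as_real=False, intent_to_harm=False))
print(classify(fabricated=True, presented_as_real=True, intent_to_harm=True))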

Unlike the previous video, this one of Joe Biden is more convincing as a deepfake. Not only is it widescreen with a ‘news’ feel to it, the voice is synthesised to sound like the real thing, and the lip-syncing is excellent. Even the facial expression when moving to the ‘Mommy Shark…’ verse is convincing.

It is, however, still very much a parody as well as a tech demo. The video comes from the YouTube channel of Synthetic Voices, which is a “dumping ground for deepfakes videos, audio clones and machine learning memes”. The intentions here therefore may be mixed, with some videos created with an intent to mislead and deceive.


Other than the political implications of deepfakes, some of the more concerning examples are around deepfake porn. As the BBC has reported recently, while it’s “already an offence in Scotland to share images or videos that show another person in an intimate situation without their consent… in other parts of the UK, it’s only an offence if it can be proved that such actions were intended to cause the victim distress.” Trying to track down who created digital media can be extremely tricky at the best of times, and even if you do discover the culprit, they may be in a basement on the other side of the world.

So we’re getting to the stage where right now, with enough money / technological expertise, you can pretend anyone said or did anything you like. Soon, there’ll be an app for it. In fact, I’m pretty sure I saw on Hacker News that there’s already an app for creating deepfake porn. Of course there is. The genie is out of the bottle, so what are we going to do about it?


While I didn’t necessarily foresee deepfakes and weaponised memes, a decade ago in my doctoral thesis I did talk about the ‘Civic’ element as one of the Eight Essential Elements of Digital Literacies. And then in 2019, just before the pandemic, I travelled to New York to present on Truth, Lies, and Digital Fluency — taking aim at Facebook, who had a representative in the audience.

The trouble is that there isn’t a single way of preventing harms when it comes to the examples on the right-hand side of First Draft’s spectrum of mis- and disinformation. You can’t legislate it away or ban it in its entirety. It’s not just a supply-side problem. Nor can you deal with it on the consumption side through ‘digital literacy’ initiatives aiming to equip citizens with the mindsets and skillsets to be able to detect and deal with deepfakes and the like.

That’s why I think that the future of social interaction is federated. The aim of the Zappa project is to develop a multi-pronged approach which empowers communities. That is to say, instead of content moderation either being a platform’s job (as with Twitter or YouTube) or an individual’s job, it becomes the role of communities to deem what they consider problematic.

Many of those communities will be run by a handful of individuals who will share blocklists and tags with admins and moderators of other instances. Some might be run by states, news organisations, or other huge organisations and have dedicated teams of moderators. Still others might be run by individuals who decide to take all of that burden on themselves for whatever reason.
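Here’s a minimal sketch of what that blocklist-sharing might look like in code. It’s loosely inspired by how Fediverse servers handle domain-level blocks, but the names and structures are hypothetical rather than any particular server’s API.

# Minimal sketch of community-level moderation via shared blocklists.
# Hypothetical data structures; not any particular Fediverse server's API.
from dataclasses import dataclass, field

@dataclass
class Instance:
    domain: str
    blocked_domains: set = field(default_factory=set)

    def block(self, domain):
        self.blocked_domains.add(domain)

    def import_blocklist(self, shared_domains):
        # An admin reviews and merges a blocklist shared by a trusted peer instance.
        self.blocked_domains |= set(shared_domains)

    def accepts(self, post_origin):
        return post_origin not in self.blocked_domains

community = Instance("example.social")
community.block("spam.example")
community.import_blocklist({"awfulplace.example", "harassment.example"})
print(community.accepts("awfulplace.example"))  # False
print(community.accepts("friendly.example"))    # True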

There are no easy answers. But conspiracy theories have been around since the dawn of time, mainly because there really are people in power doing terrible things. So yes, we need appropriate technological and sociological approaches to things which affect democracy, mental health, and dignity. But we also need to engineer a world where billionaires don’t exist, partly so that an individual can’t buy an (albeit privatised) digital town square for fun.

One thing’s for sure: if Musk gets his way, we’ll be able to test the phrase “sticks and stones may break my bones…” on a whole new generation. Perhaps show them the Fediverse instead?


Main image created using DALL-E 2 (it seemed appropriate!)

The problems with Twitter’s attempts at anti-disinformation in the run-up to the US Presidential election

This week, Twitter published an article summarising the steps they are taking to avoid being complicit in negatively affecting the result of the upcoming US Presidential election:

Twitter plays a critical role around the globe by empowering democratic conversation, driving civic participation, facilitating meaningful political debate, and enabling people to hold those in power accountable. But we know that this cannot be achieved unless the integrity of this critical dialogue on Twitter is protected from attempts — both foreign and domestic — to undermine it.

Vijaya Gadde and Kayvon Beykpour, Additional steps we’re taking ahead of the 2020 US Election (Twitter)

I’m not impressed by what they have come up with; this announcement, coming merely a month before the election, is too little, too late.

Let’s look at what they’re doing in more detail, and I’ll explain why they’re problematic both individually and when taken together as a whole.


There are five actions we can extract from Twitter’s article:

  1. Labelling problematic tweets
  2. Forcing users to use quote retweet
  3. Removing algorithmic recommendations
  4. Censoring trending hashtags and tweets
  5. Increasing the size of Twitter’s moderation team

1. Labelling problematic tweets

We currently may label Tweets that violate our policies against misleading information about civic integrity, COVID-19, and synthetic and manipulated media. Starting next week, when people attempt to Retweet one of these Tweets with a misleading information label, they will see a prompt pointing them to credible information about the topic before they are able to amplify it.

[…]

In addition to these prompts, we will now add additional warnings and restrictions on Tweets with a misleading information label from US political figures (including candidates and campaign accounts), US-based accounts with more than 100,000 followers, or that obtain significant engagement. People must tap through a warning to see these Tweets, and then will only be able to Quote Tweet; likes, Retweets and replies will be turned off, and these Tweets won’t be algorithmically recommended by Twitter. We expect this will further reduce the visibility of misleading information, and will encourage people to reconsider if they want to amplify these Tweets.

Vijaya Gadde and Kayvon Beykpour, Additional steps we’re taking ahead of the 2020 US Election (Twitter)

The assumption behind this intervention is that misinformation is spread by people with a large number of followers, or by a small number of tweets that gain a large number of retweets.

However, as previous elections have shown, people are influenced by repetition. If users see something numerous times in their feed, from multiple different people they are following, they assume that there’s at least an element of truth to it.
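For what it’s worth, the rule Twitter describes in the passage quoted above can be written out in a few lines, which makes its narrowness fairly obvious. This is my reading of the announcement, with made-up field names, not Twitter’s actual implementation:

# Sketch of the restriction rule as described in the announcement.
# Field names and structure are my own, for illustration only.
def restrictions_for(tweet):
    restricted = tweet["has_misleading_info_label"] and (
        tweet["author_is_us_political_figure"]
        or (tweet["author_is_us_based"] and tweet["author_followers"] > 100_000)
        or tweet["has_significant_engagement"]
    )
    return {
        "show_warning_interstitial": restricted,
        "allow_like": not restricted,
        "allow_retweet": not restricted,
        "allow_reply": not restricted,
        "allow_quote_tweet": True,               # quote tweeting stays available
        "algorithmically_recommended": not restricted,
    }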


2. Forcing users to use quote retweet

People who go to Retweet will be brought to the Quote Tweet composer where they’ll be encouraged to comment before sending their Tweet. Though this adds some extra friction for those who simply want to Retweet, we hope it will encourage everyone to not only consider why they are amplifying a Tweet, but also increase the likelihood that people add their own thoughts, reactions and perspectives to the conversation. If people don’t add anything on the Quote Tweet composer, it will still appear as a Retweet. We will begin testing this change on Twitter.com for some people beginning today.

Vijaya Gadde and Kayvon Beykpour, Additional steps we’re taking ahead of the 2020 US Election (Twitter)

I’m surprised Twitter haven’t already tested this approach, as it’s cutting things a little close to one of the most important elections in history to begin testing now.

However, the assumption behind this approach is that straightforward retweets amplify disinformation more than quote retweets. I’m not sure this is the case, particularly as a quote retweet can be used passive-aggressively, and to warp, distort, and otherwise manipulate information provided by others in good faith.

One of the things that really struck me when moving to Mastodon was that it’s not possible to quote retweet. This is a design decision based on observing user behaviour. It’s my opinion that Twitter removing the ability to quote retweet would significantly improve their platform, too.
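The mechanic described in the quoted announcement boils down to a small branch: tapping Retweet opens the Quote Tweet composer, and an empty comment falls back to a plain retweet. A rough sketch (purely illustrative, not Twitter’s code):

# Rough sketch of the retweet flow change; illustrative only.
def retweet_flow(original_tweet_id, user_comment=""):
    if user_comment.strip():
        return {"type": "quote_tweet", "ref": original_tweet_id, "text": user_comment}
    # Nothing added in the composer: behaves like a normal retweet.
    return {"type": "retweet", "ref": original_tweet_id}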


3. Removing algorithmic recommendations

[W]e will prevent “liked by” and “followed by” recommendations from people you don’t follow from showing up in your timeline and won’t send notifications for these Tweets. These recommendations can be a helpful way for people to see relevant conversations from outside of their network, but we are removing them because we don’t believe the “Like” button provides sufficient, thoughtful consideration prior to amplifying Tweets to people who don’t follow the author of the Tweet, or the relevant topic that the Tweet is about. This will likely slow down how quickly Tweets from accounts and topics you don’t follow can reach you, which we believe is a worthwhile sacrifice to encourage more thoughtful and explicit amplification.

Six years ago, in Curate or Be Curated, I outlined the dangers of social networks like Twitter moving to an algorithmic timeline. What is gained through any increase in shareholder value and attention conservation is lost in user agency.

I’m pleased that Twitter is questioning the value of this form of algorithmic discovery and recommendation during the election season, but remain concerned that this will return after the US election. After all, elections happen around the world all the time, and politics is an everyday area of discussion for humans.
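In effect, the change amounts to a filter over the home timeline: anything injected only because someone outside your follow graph liked or followed it gets dropped. A sketch, with field names invented for illustration:

# Sketch of the timeline change described above; field names are invented.
def filter_timeline(items, following):
    kept = []
    for item in items:
        reason = item.get("injection_reason")      # e.g. "liked_by", "followed_by", or None
        via = item.get("injected_via_account")
        if reason in {"liked_by", "followed_by"} and via not in following:
            continue  # drop out-of-network like/follow recommendations
        kept.append(item)
    return kept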


4. Censoring trending hashtags and tweets

[W]e will only surface Trends in the “For You” tab in the United States that include additional context. That means there will be a description Tweet or article that represents or summarizes why that term is trending. We’ve been adding more context to Trends during the last few months, but this change will ensure that only Trends with added context show up in the “For You” tab in the United States, which is where the vast majority of people discover what’s trending. This will help people more quickly gain an informed understanding of the high volume public conversation in the US and also help reduce the potential for misleading information to spread.

Twitter has been extremely careful with their language here by talking about ‘adding’ context for users in the US, rather than taking away the ability for them to see what is actually trending across the country.

If only trends with context are shown, this means that they are being heavily moderated. That moderation is a form of gatekeeping, with an additional burden upon the moderators of explaining the trending topic in a neutral way.
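Spelled out, the gatekeeping step is simple: a trend only surfaces in the US ‘For You’ tab if a human (or team) has attached context to it. A sketch, with my own field names:

# Sketch of the 'trends with context' filter; field names are my own.
def trends_for_you(all_trends, country):
    if country != "US":
        return all_trends
    return [t for t in all_trends if t.get("context_description")]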

While I’m not sure that a pure, unfiltered trending feed would be wise, Twitter is walking a very fine line here as, effectively, a news service. Again, as I commented in Curate or Be Curated years ago, there is no such thing as ‘neutrality’ when it comes to news, no ‘view from nowhere’.

Twitter needs to be very careful here not to make things even worse by effectively providing mini editorials of ongoing news stories.


5. Increasing the size of Twitter’s moderation team

In addition to these changes, as we have throughout the election period, we will have teams around the world working to monitor the integrity of the conversation and take action when needed. We have already increased the size and capacity of our teams focused on the US Election and will have the necessary staffing to respond rapidly to issues that may arise on Twitter on Election night and in the days that follow.

A post on the Twitter blog counted 6.2 million tweets during last year’s EU elections. The population of countries making up the EU is only slightly larger than that of the USA, but next month’s election is much more controversial.

In this scenario, Twitter cannot afford (or hire) a moderation team large enough to moderate this number of tweets in realtime. As a result, they will have to rely on heuristics and the vigilance of users reporting tweets. However, because of the ‘filter bubble’ effect, the chances are that users who would be likely to report problematic tweets may never see them.
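A quick back-of-envelope calculation shows the scale of the problem. The 6.2 million figure is from the EU-election blog post mentioned above; the review rate and shift length below are assumptions of mine, purely for illustration:

# Back-of-envelope: why realtime human moderation doesn't scale.
# Review rate and shift length are assumed values for illustration.
tweets = 6_200_000                      # tweets counted during the EU elections
reviews_per_moderator_per_hour = 60     # assumption: one tweet per minute
shift_hours = 8                         # assumption: a standard working day

moderator_days = tweets / (reviews_per_moderator_per_hour * shift_hours)
print(f"{moderator_days:,.0f} moderator-days to review every tweet once")
# Roughly 12,900 moderator-days, i.e. thousands of moderators working
# full time, before considering context, appeals, or multiple languages.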


In conclusion…

If we step back a little and look at the above with some form of objectivity, we see that Twitter has admitted that its algorithmic timeline is an existential threat to the US election. As a result, it is stepping in to remove most elements of it, and replacing it with a somewhat-authoritarian approach which relies on its moderation team.

From my point of view, this is not good enough. It’s too little, too late, especially when the writing has been on the wall for years — certainly the last four years. I’m deeply concerned about social networks’ role in undermining our democratic processes, and I’d call on Twitter to learn from what works well elsewhere.

For example, on the Fediverse, where I spend more time these days instead of Twitter, developers of platforms and administrators of instances have developed features, policies, and procedures that strike a delicate balance between user agency and protection from disinformation. Much of this comes from a federated architecture, something that I’ve pointed out elsewhere as being much more like how humans interact offline.

This post is already too long to rehash things I’ve discussed at length before, but Twitter has already started looking into how it can become a decentralised social network. In the meantime, I’m concerned that these anti-disinformation measures don’t go far enough.
