Open Thinkering


Tag: disinformation

Sticks and stones (and disinformation)

AI-generated image of sticks and stones

I guess, like most people growing up in the 1980s and 1990s, I heard the phrase “sticks and stones may break my bones, but words will never hurt me” a lot. Parroted by parents and teachers alike, it may have been well-meant, but it’s a complete lie. In truth, while broken bones may heal relatively quickly, it can take some people years of therapy to get over things they experienced during their formative years.

This post is about content moderation and is prompted by Elon Musk’s purchase of Twitter, which he’s promised to give a free speech makeover. As many people have pointed out, he probably doesn’t realise what he’s let himself in for. Or maybe he does, and it’s the apotheosis of authoritarian nationalism. Either way, let’s dig into some of the nuances here.

Here’s a viral video of King Charles III. It’s thirteen seconds long, and hilarious. One of the reasons it’s funny is that it pokes fun at monarchy, tradition, and an older, immensely privileged, white man. It’s obviously a parody and it would be extremely difficult to pass it off as anything else.

While I discovered this on Twitter, it also did the rounds on the Fediverse, and of course on chat apps such as WhatsApp, Signal, and Telegram. I shared it with others because it reflects my anti-monarchist views in a humorous way. It’s also a clever use of deepfake technology — although it’s not the most convincing example. I can imagine other people, including members of my family, not sharing this video partly because every other word is a profanity, but mainly because it undermines their belief in the seriousness and sanctity of monarchy.

In other words, and this is not exactly a deeply insightful point but one worth making nevertheless, the things we share with one another are social objects which are deeply contextual. (As a side note, this is why cross-posting between social networks seems so janky: each one has its own modes of discourse which only loosely translate elsewhere.)


A few months back I wrote a short report for the Bonfire team’s Zappa project. The focus was on disinformation, and I used First Draft’s 7 Types of Mis- and Disinformation spectrum as a frame.

First Draft - 7 Types of Mis- and Disinformation

As you can see, ‘Satire or Parody’ is way over on the left side of the spectrum. However, as we move to the right, it’s not necessarily the content that shifts but rather the context. That’s important in the next example I want to share.

Unlike the previous video, this one of Joe Biden is more convincing as a deepfake. Not only is it widescreen with a ‘news’ feel to it, but the voice is synthesised to sound authentic, and the lip-syncing is excellent. Even the facial expression when moving to the ‘Mommy Shark…’ verse is convincing.

It is, however, still very much a parody as well as a tech demo. The video comes from the YouTube channel of Synthetic Voices, which is a “dumping ground for deepfakes videos, audio clones and machine learning memes”. The intentions behind such a channel may therefore be mixed, with some videos created with an intent to mislead and deceive.


Beyond the political implications of deepfakes, some of the most concerning examples involve deepfake porn. As the BBC has reported recently, while it’s “already an offence in Scotland to share images or videos that show another person in an intimate situation without their consent… in other parts of the UK, it’s only an offence if it can be proved that such actions were intended to cause the victim distress.” Tracking down who created a piece of digital media can be extremely tricky at the best of times, and even if you do discover the culprit, they may be in a basement on the other side of the world.

We’re already at the stage where, with enough money or technological expertise, you can make it appear that anyone said or did anything you like. Soon, there’ll be an app for it. In fact, I’m pretty sure I saw on Hacker News that there’s already an app for creating deepfake porn. Of course there is. The genie is out of the bottle, so what are we going to do about it?


While I didn’t necessarily foresee deepfakes and weaponised memes, a decade ago in my doctoral thesis I did talk about the ‘Civic’ element as one of the Eight Essential Elements of Digital Literacies. And then in 2019, just before the pandemic, I travelled to New York to present on Truth, Lies, and Digital Fluency — taking aim at Facebook, who had a representative in the audience.

The trouble is that there isn’t a single way of preventing harms when it comes to the examples on the right-hand side of First Draft’s spectrum of mis- and disinformation. You can’t legislate it away or ban it in its entirety; it’s not just a supply-side problem. Nor can you deal with it solely on the consumption side, through ‘digital literacy’ initiatives aiming to equip citizens with the mindsets and skillsets to detect and deal with deepfakes and the like.

That’s why I think that the future of social interaction is federated. The aim of the Zappa project is to develop a multi-pronged approach which empowers communities. That is to say, instead of content moderation either being a platform’s job (as with Twitter or YouTube) or an individual’s job, it becomes the role of communities to deem what they consider problematic.

Many of those communities will be run by a handful of individuals who will share blocklists and tags with admins and moderators of other instances. Some might be run by states, news organisations, or other huge organisations and have dedicated teams of moderators. Still others might be run by individuals who decide to take all of that burden on themselves for whatever reason.
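To make the blocklist-sharing idea concrete, here’s a minimal sketch in Python of how one instance might fold a peer’s published blocklist into its own. The format is an assumption, loosely modelled on the kind of CSV domain-block exports Fediverse admins already pass around; none of the names below come from Bonfire itself.

```python
import csv

def load_blocklist(path):
    """Read a CSV blocklist into a {domain: severity} dict.

    Assumes columns named 'domain' and 'severity'; adjust to match
    whatever format your instance actually publishes.
    """
    with open(path, newline="", encoding="utf-8") as f:
        return {row["domain"]: row["severity"] for row in csv.DictReader(f)}

def merge_blocklists(ours, theirs):
    """Fold a peer's blocklist into ours, keeping the stricter
    severity when both lists mention the same domain."""
    rank = {"silence": 1, "suspend": 2}
    merged = dict(ours)
    for domain, severity in theirs.items():
        if rank.get(severity, 0) > rank.get(merged.get(domain), 0):
            merged[domain] = severity
    return merged

# Inline example so the sketch runs without any files:
ours = {"spam.example": "silence"}
theirs = {"spam.example": "suspend", "troll.example": "silence"}
print(merge_blocklists(ours, theirs))
# {'spam.example': 'suspend', 'troll.example': 'silence'}
```

The point isn’t the code so much as the shape of the workflow: moderation decisions become portable artefacts that admins can publish, audit, and adopt from communities they trust.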

There are no easy answers. But conspiracy theories have been around since the dawn of time, mainly because there really are people in power doing terrible things. So yes, we need appropriate technological and sociological approaches to things which affect democracy, mental health, and dignity. But we also need to engineer a world where billionaires don’t exist, partly so that an individual can’t buy an (albeit privatised) digital town square for fun.

One thing’s for sure: if Musk gets his way, we’ll be able to test the phrase “sticks and stones may break my bones…” on a whole new generation. Perhaps show them the Fediverse instead?


Main image created using DALL-E 2 (it seemed appropriate!)

Exploring the sweet spot for Zappa project approaches to misinformation

Venn diagram of three overlapping circles labelled Technical, Procedural, and Relational. ‘Zappa Project’ sits at the overlap of all three; the pairwise overlaps are labelled Consistency (Technical/Relational), Reliability (Procedural/Relational), and Efficiency (Technical/Procedural). The Bonfire logo is at the bottom of the graphic, with the version number 0.1.

While we know that misinformation is not a problem that can ever be fully ‘solved’, it is important to try and reduce its harms. Last week, we published the first version of a report based on user research as part of the Zappa project, an initiative from the Bonfire team.

This week, I’ve narrated a Loom video giving an overview of the mindmap embedded in the report. This was requested by Ivan, who found that the way I explain it captures some nuance that perhaps isn’t in the report itself (which is more focused on recommendations).

Another thing I was tasked with this week was creating a Venn diagram of the three types of approaches that could be taken to the technical development of the Zappa project. These were slightly tweaked from suggestions made by one of our user research participants. As you can see in the above diagram, they are:

  • Technical — improving the way that users interact with information
  • Procedural — improving the process of capturing and displaying information
  • Relational — improving the way that humans interact with one another

It’s unlikely that any given intervention would sit squarely within only one of these types. For example, thinking about how information is presented to users, and allowing them to control that presentation, sits right in the middle of all three.

There are three overlaps other than the one right in the middle. These are:

  • Technical / Procedural — we’ve currently labelled this intersection Efficiency, as using technical approaches to improve processes usually makes them more efficient. This might include making it easier to block certain types of posts, for example (see the sketch after this list).
  • Procedural / Relational — we’ve labelled this intersection Reliability, because process, considered in terms of relationships, is often about repeatable patterns. This might include, for example, being able to validate that the account you’re interacting with hasn’t been hijacked.
  • Relational / Technical — we’ve used the label Consistency for this one, as one of the things we found from our research is that users are often overwhelmed by information. We can do something about that, so this might include helping users feel in charge of their feeds, to help avoid context collapse or aesthetic flattening.
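To make the Efficiency overlap a little more concrete, here’s a minimal sketch of the kind of user-controlled post filtering it points at. The rule format and function names are hypothetical illustrations, not Bonfire’s actual API.

```python
import re

# Hypothetical per-user filter rules: a pattern plus an action.
# ('hide' drops the post from the feed; 'warn' puts it behind a
# click-through notice.)
FILTER_RULES = [
    (re.compile(r"crypto\s*giveaway", re.IGNORECASE), "hide"),
    (re.compile(r"\bmiracle\s+cure\b", re.IGNORECASE), "warn"),
]

def moderate_post(text):
    """Return the first matching action for a post, or 'show'."""
    for pattern, action in FILTER_RULES:
        if pattern.search(text):
            return action
    return "show"

for post in [
    "Huge crypto giveaway, click the link now!",
    "New paper on vaccine efficacy published today.",
]:
    print(moderate_post(post), "->", post)
```

Whatever the eventual implementation looks like, the efficiency gain comes from letting users express a rule once rather than making the same moderation decision over and over.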

You will notice the version number appended to this diagram is ‘v0.1’. It might be that we haven’t found the right words for these overlaps. It might be that some are more important than others. We’d love feedback from anyone paying attention to the Zappa project, whether you’ve been following the work around Bonfire since the beginning, or whether this is the first you’re hearing of it.

If it helps, feel free to grab the Google Slides original of the above Venn diagram, or comment below on a) what you think is good, b) anything you have questions about, or c) anything that concerns you.

First version of report published sharing findings from Zappa project user research

Illustration of a bear, fox, badger, and squirrel looking into a fire. Taken from the Bonfire website.

Over the last month, I’ve been working with the Bonfire team to perform some initial user research on the Zappa project. The aim of the project, funded by a grant from the Culture of Solidarity Fund, is to empower communities with a dedicated tool to deal with the coronavirus “infodemic” and online misinformation in general.

We ended up speaking with 11 individuals and organisations, and have synthesised our initial user research into the first version of a report which is now available.

📄 Click here to access the report
