Open Thinkering

Tag: trust

Trust no-one: why ‘proof of work’ is killing the planet as well as us

Note: subtlety ahead. This post uses cryptocurrency as a metaphor.

Painting of women working in a field. One has been cut out of the painting and is sitting in the corner of the frame, smoking.

As you may have read in the news recently, the energy requirements of Bitcoin are greater than those of some countries. This is because of the ‘proof of work’ required to run a cryptocurrency without a centralised authority. It’s a ‘trustless’ system.

While other cryptocurrencies and blockchain-based systems use less demanding cryptographic proofs (e.g. proof of stake), Bitcoin’s approach requires ever-increasing amounts of computational power as the proofs get harder.

As the cryptographic proofs serve no function other than ensuring the trustless system continues operating, it’s tempting to see ‘proof of work’ as inherently wasteful. Right now, it’s almost impossible to purchase a graphics card, as the GPUs in them are being bought up and deployed en masse to ‘mine’ cryptocurrencies like Bitcoin.
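The mechanism itself is easy to sketch. What follows is a toy illustration of hash-based proof of work, not Bitcoin’s actual protocol (which double-hashes a structured block header against a numeric target): finding a valid nonce requires brute-force guessing, while checking someone else’s answer takes a single hash.

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Brute-force a nonce so that sha256(data + nonce) starts with
    `difficulty` hex zeros. Finding it is expensive; checking is one hash."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero multiplies the expected number of guesses by 16,
# but verifying a claimed nonce is still a single hash -- that
# asymmetry is the entire trick, and the entire energy bill.
nonce = mine("block-header", 4)
assert hashlib.sha256(f"block-header{nonce}".encode()).hexdigest().startswith("0000")
```

All that electricity buys nothing except the guessing itself: the hashes are thrown away once found.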

Building a system to be trustless comes with huge externalities; the true cost surfaces elsewhere in the overall system.

🏭 🏭 🏭

Let’s imagine for a moment that, instead of machines, we decided to deploy humans to do the cryptographic proofs. We’d probably question the whole endeavour and the waste of human life.

The late David Graeber railed against the pointless work inherent in what he called ‘bullshit jobs’. He listed five types of such jobs, which he argued comprise more than half of the work currently carried out by people in employment:

  1. Flunkies — make their superiors feel more important (e.g. door attendants, receptionists)
  2. Goons — oppose other goons hired by other people/organisations (e.g. corporate lawyers, lobbyists)
  3. Duct Tapers — temporarily fix problems that could be fixed permanently (e.g. programmers repairing shoddy code, airline desk staff reassuring passengers)
  4. Box Tickers — create the appearance that something useful is being done when it is not (e.g. in-house magazine journalists, corporate compliance officers)
  5. Taskmasters — manage, or create extra work for, those who do not need it (e.g. middle management, leadership professionals)

What cuts across all of these is the ‘proof of work’ required to keep the status quo in operation. This is most obvious in the ‘Box Tickers’ category, but it is equally true of middle management ensuring work is seen to be done (and that hierarchical systems prevail).

✅ ✅ ✅

There is much work that is pointless, and an important reason for this, it could be argued, is that we have a trustless society. For example, when some of the most marginalised people in our communities ask for help between jobs, we require them to prove that they are spending 35 hours per week looking for one. It’s almost as if someone in government has taken the pithy phrase “looking for a job is a full-time job” and run with it.

Western societies have been entirely captured by the classic economic argument that everything will turn out well if we all act in our own self-interest. I’m not sure if you’ve looked around you recently, but it seems to me that this model isn’t exactly… working?

It’s my belief, therefore, that we need to engender greater trust in society. Ideally, this trust should be inter-generational and multicultural, seeking to build bridges between different groups, rather than building solidarity in one group at the expense of others.

This is not a call to naivety: I’m well aware that trust comes in different shapes and sizes. What I think we’re losing, however, is an ability to trust people with small things. As a result, we’re out of practice when it comes to bigger things.

👁️ 👁️ 👁️

The Russian phrase Доверяй, но проверяй means, I believe, “trust, but verify”. It’s a useful approach to life, and an approach I use with everyone from members of my family to colleagues on various projects I’m working on.

The important thing here is the ‘trust’ part, with the occasional ‘verify’ to ensure that people don’t, well, take the piss. What we’re seeing instead is ‘verify and verify’: a creeping verificationism in which we spend our lives proving who we are as well as our eligibility. This disproportionately affects already-marginalised people, and is a burden and a tax on living a flourishing human existence.

🧑‍🤝‍🧑 🧑‍🤝‍🧑 🧑‍🤝‍🧑

Back in 2013, I wrote a series of blog posts reflecting on a talk by Laura Thomson entitled Minimum Viable Bureaucracy. In one of these, entitled Scale, Chaordic Systems, and Trust, I wrote:

You can build trust “by making many small deposits in the trust bank” which is a horse-training analogy. It’s important to have lots of good interactions with people so that one day when things are going really badly you can draw on that. People who have had lots of positive interactions are likely to work more effectively to solve big problems rather than all pointing fingers.

To finish, then, I want to reiterate two things that Laura Thomson recommended that anyone can do to build trust:

  1. Begin by trusting others
  2. Be trustworthy

Solidarity begins at home.


This post is Day 87 of my #100DaysToOffload challenge. Want to get involved? Find out more at 100daystooffload.com. Image by Banksy.

Some thoughts on Keybase, online security, and verification of identity

I’m going to stick my neck out a bit and say that, online, identity is the most important factor in any conversation or transaction. That’s not to say I’m a believer in tying these things to real-world, offline identities. Not at all.

Trust models change when verification is involved. For example, if I show up at your door claiming to be Doug Belshaw, how can I prove that’s the case? The easiest thing to do would be to use government-issued identification such as my passport or driving licence. But what if I haven’t got any, or I’m unwilling to use it? (See the use case for CheapID.) In those kinds of scenarios, you’re looking for multiple, lower-bar verification touchstones.

As human beings, we do this all of the time. When we meet someone new, we look for points of overlapping interest, often based around human relationships. This helps situate the ‘other’ in terms of our networks, and people can inherit trust based on existing relationships and interactions.

Online, it’s different. Sometimes we want to be anonymous, or at least pseudo-anonymous. There’s no reason, for example, why someone should be able to track all of my purchases just because I’m participating in a digital transaction. Hence Bitcoin and other cryptocurrencies.

When it comes to communication, we’ve got encrypted messengers, the best of which is widely regarded to be Signal from Open Whisper Systems. For years, we’ve tried (and failed) to use PGP/GPG to encrypt and verify email transactions, meaning that trusted interactions are increasingly taking place in locations other than your inbox.

On the one hand, we’ve got purist techies who constantly question whether a security/identity approach is the best way forward, while on the other end of the spectrum there’s people using the same password (without two-factor authentication) for every app or service. Sometimes, you need a pragmatic solution.


I remember being convinced to sign up for Keybase.io when it launched thanks to this Hacker News thread, and particularly this comment from sgentle:

Keybase asks: who are you on the internet if not the sum of your public identities? The fact that those identities all make a certain claim is a proof of trust. In fact, for someone who knows me only online, it’s likely the best kind of trust possible. If you meet me in person and I say “I’m sgentle”, that’s a weaker proof than if I post a comment from this account. Ratchet that up to include my Twitter, Facebook, GitHub, personal website and so forth, and you’re looking at a pretty solid claim.

And if you’re thinking “but A Scary Adversary could compromise all those services and Keybase itself”, consider that an adversary with that much power would also probably have the resources to compromise highly-connected nodes in the web of trust, compromise PKS servers, and falsify real-world identity documents.

I think absolutism in security is counterproductive. Keybase is definitionally less secure than, say, meeting in person and checking that the person has access to all the accounts you expect, which is itself less secure than all of the above and using several forms of biometric identification to rule out what is known as the Face/Off attack.

The fight isn’t “people use Keybase” vs “people go to key-signing parties”, the fight is “people use Keybase” vs “fuck it crypto is too hard”. Those who need the level of security provided by in-person key exchanges still have that option available to them. In fact, it would be nice to see PKS as one of the identity proof backends. But for practical purposes, anything that raises the crypto floor is going to do a lot more good than dickering with the ceiling.

Since the Trump inauguration, I’ve seen more notifications that people are using Keybase. My profile is here: https://keybase.io/dajbelshaw. Recently, cross-platform apps for desktop and mobile devices have been added, meaning not only can you verify your identity across the web, but you can chat and share files securely.

It’s a great solution. The only word of warning I’d give is don’t upload your private key. If you don’t know how public and private keys work, then please read this article. You should never share your private key with anyone. Keep it to yourself, even if Keybase claim it will make your life easier.
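To see why the private half must stay private, here’s a toy RSA sketch using textbook-sized primes (p=61, q=53). This is purely illustrative, not production cryptography (real keys use primes hundreds of digits long): anyone holding the public pair can verify a signature, but only the private exponent can produce one.

```python
# Toy RSA with tiny primes -- illustration only, never real security.
p, q = 61, 53
n = p * q                 # public modulus (shared freely)
phi = (p - 1) * (q - 1)
e = 17                    # public exponent (shared freely)
d = pow(e, -1, phi)       # private exponent (never, ever shared)

msg = 65
sig = pow(msg, d, n)           # "sign": only the private-key holder can do this
assert pow(sig, e, n) == msg   # "verify": anyone with (n, e) can check it
```

Hand over d and anyone can sign as you; that is exactly what uploading your private key risks.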

To my mind, all of this fits into my wider work around Open Badges. Showing who you are and what you can do on the web is a multi-faceted affair, and I like the fact that I can choose to verify who I am. What I opt to keep separate from this profile (e.g. my gamertag, other identities) is entirely my choice. But verification of identity on the internet is kind of a big deal. We should all spend longer thinking about it, I reckon.

Main image: Blondinrikard Fröberg

Minimum Viable Bureaucracy: Scale, Chaordic Systems & Trust

Recently I came across Laura Thomson’s excellent talk on Minimum Viable Bureaucracy. This is the first in a series of posts writing up Laura’s ideas. Everything in this post should be attributed to her, not me (except any mistakes, which are mine!).

Posts in the series:

  1. Introduction
  2. Scale, Chaordic Systems & Trust
  3. Practicalities
  4. Problem Solving and Decision Making
  5. Goals, scheduling, shipping
  6. Minimum Viable Bureaucracy: Why have managers?

I chopped up the audio from her talk; you should find the two parts relating to this post below. Slides are here and it’s all backed up at the Internet Archive.


Scale

As organisations grow you experience pain.

(Laura Thomson)

When you’re working on your own you don’t need formal processes or things written down. The first pain point you experience is when you’ve got one other person working with you. The second pain point is at around 50 people: that’s when you stop knowing what everyone else is doing. A third pain point comes at 150-250 people where you don’t know everyone’s names. Then around 1,000 there’s a pain point where you say “should we behave like a big company or not?” That’s kind of where Mozilla is now.

Minimum Viable Bureaucracy - Scale

Organisational growth is like scaling a Web app, in that the technology you use depends on the number of users you have. Every so often you get to a phase change point and you have to rethink the way that you do things.

Dunbar’s Number is the cognitive limit on the number of people with whom you can effectively maintain relationships. It’s supposedly based on the size of part of the brain, with a relationship across species between brain size and the size of the minimum cultural unit. Dunbar’s Number says that humans can maintain relationships with around 150-230 people. Laura thinks there are tools and practices that can increase this number – structuring your organisation so it’s remote and distributed makes that number “a whole lot higher”. Mozilla ‘dodged’ Dunbar’s Number until it reached about 500 people.

Chaordic System

A ‘chaordic’ system is:

any self-organizing, adaptive, nonlinear complex system, whether physical, biological or social, the behavior of which exhibits characteristics of both order and chaos.

(Dee W. Hock)

Chaordic systems have order, but this is not imposed. It’s emergent order from the way we do things and very typical of Open Source projects. Chaordic systems tend to be robust, distributed and failure-tolerant. In fact, chaordic organisations mirror the Internet itself.

People from more corporate organizations than Mozilla say they need processes, paperwork and meetings to “get things done”. Laura says she always asks those kinds of people how many Open Source projects they’re familiar with and what processes they use. It’s usually a lot lighter – e.g. Apache.

Instead of having ‘all your ducks in a row’ the analogy in chaordic management is to have ‘self-organising ducks’. The idea is to give people enough autonomy, knowledge and skill to be able to do the management themselves.

Minimum Viable Bureaucracy - Organisational charts

The above is a cartoon version of organisational charts, but Laura says “there’s a lot of truth in this”. Mozilla, she believes, sits in amongst this – “we’ve been more Facebook-like but are getting more Google-ish.”

Trust

If you want self-organising ducks you need to start with trust. Laura mentions that an interesting book about this is The Field Guide to Understanding Human Error by Sidney Dekker, which is actually about plane crashes. Dekker focuses on post-mortems and how to discover how things go wrong when they go wrong. One thing he talks about is how pointless it is to assign blame. No-one goes out of their house in the morning trying to do the worst job they possibly can. They might do a bad job, but they don’t set out to do one. We should start with that, says Laura: when people behave that way at work, they do so for a reason. People don’t act randomly, and tend not to act in an evil way. We should assume that it’s the best they could do with the knowledge and skills they had at the time. That’s the basis of trust in your organisation.

Trusting people helps you stop saying things like “I’d be able to get this done if it wasn’t for IT”. It’s best to step back and ask why IT is acting like that – is it (for example) because they don’t have the resources?

The key thing for building trust is time. We tend to like people to ‘earn’ trust, but Laura encourages people to step back and get people to earn trust in the hiring process. “Once you’ve decided you want to hire someone, you should by default trust them,” she says. Once you’re in, you’re in. You can build trust “by making many small deposits in the trust bank” which is a horse-training analogy. It’s important to have lots of good interactions with people so that one day when things are going really badly you can draw on that. People who have had lots of positive interactions are likely to work more effectively to solve big problems rather than all pointing fingers.

There are two things Laura recommends you can do to build trust in your organisation:

  1. Begin by trusting others
  2. Be trustworthy

Don’t commit to things you know you’re not going to be able to do. Be reliable. Show up on time.

Trust should scale within an organisation: if you’ve hired someone then I should trust your decision instead of trying to second-guess them. Once you’ve got trust in an organisation you enable autonomy. This means you don’t feel like you have to micro-manage staff or have lots of meetings. It means leaders can have larger teams. Also, when you give people autonomy they are instantly happier. Nobody likes people looking over their shoulder all the time. You want to be trusted to do your job.


Having worked in schools, a university, and now a tech company, I can see universally-applicable lessons here. What do you think?

You can follow Laura Thomson as @lxt on Twitter.
