
More on the mechanics of GDPR

Note: I’m writing this post on my personal blog as I’m still learning about GDPR. This is me thinking out loud, rather than making official Moodle pronouncements.

‘Enjoyment’ and ‘compliance-focused courses’ are rarely uttered in the same breath. I have, however, enjoyed my second week of learning from FutureLearn’s course on Understanding the General Data Protection Regulation. This post summarises some of my learning and builds upon my previous post.

This week, the focus was on the rights of data subjects, and started with a discussion about the ‘modalities’ by which communication between the data controller or processor and the data subject takes place:

By modalities, we mean different mechanisms that are used to facilitate the exercise of data subjects’ rights under the GDPR, such as those relating to different forms of information provision (in writing, spoken, electronically) and other actions to be taken when data subjects invoke their rights.

Although the videos could be improved (I just use the transcripts), the mix of real-world examples, quizzes, and reflection is great and suits the way I learn best.

I discovered that the GDPR makes provision not only for what should be communicated by data controllers, but also for how this should be done:

In the first place, measures must be taken by data controllers to provide any information or any communication relating to the processing to these individuals in a concise, transparent, intelligible and easily accessible form, using the language that is clear and plain. For instance, it should be done when personal data are collected from data subjects or when the latter exercise their rights, such as the right of access. This requirement of transparent information and communication is especially important when children are data subjects.

Moreover, unless the data subject is somehow attempting to abuse the GDPR’s provisions, the data controller must provide the requested information free of charge.

The number of times my surname is spelled incorrectly (often ‘Bellshaw’), or companies have other details wrong, is astounding. It’s good to know, therefore, that the GDPR focuses on rectification of individuals’ personal data:

In addition, the GDPR contains another essential right that cannot be disregarded. This is the right to rectification. If controllers store personal data of individuals, the latter are further entitled to the right to rectify, without any undue delay, inaccurate information concerning them. Considering the purpose of the processing, any data subject has the right to have his or her personal data completed such as, for instance, by providing a supplementary statement.

So far, I’ve focused on me as a user of technologies — and, indeed, the course uses Google’s services as an example. However, as lead for Project MoodleNet, the reason I’m doing this course is as the representative of Moodle, an organisation that would be both data controller and processor.

There are specific things that must be built into any system that collects personal data:

At the time of the first communication with data subjects, the existence of the right to object – as addressed earlier – must be indicated to data subjects in a clear manner and separately from other information. This right can be exercised by data subjects when we deal with the use of information society services by automated means using technical specifications. Importantly, the right to object also exists when individuals’ personal data are processed for scientific or historical research or statistical purposes. This is, however, not the case if the processing is carried out for reasons of public interest.

Project MoodleNet will be a valuable service, but not from a scientific, historical, or statistical point of view. Nor will the data processing be carried out for reasons of public interest. As such, the ‘right to object’ should be set out clearly when users sign up for the service.
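To make that concrete, here’s a rough sketch (in Python, with hypothetical names like `SignupConsentRecord`) of how the right to object could be surfaced separately at sign-up and recorded. This is just me thinking out loud about the shape of the thing, not how Project MoodleNet will actually be built:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: record that the right to object was presented clearly
# and separately at sign-up, plus whether the data subject exercised it.
@dataclass
class SignupConsentRecord:
    user_id: str
    right_to_object_shown: bool           # shown separately from other information
    objected_to_processing: bool = False  # the user's choice, changeable at any time
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def handle_objection(record: SignupConsentRecord) -> SignupConsentRecord:
    """Mark that the data subject has objected; downstream processing must then stop."""
    record.objected_to_processing = True
    return record
```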

In addition, users need to be able to move their data out of the service and erase what was previously there:

The right to erasure is sometimes known as the right to be forgotten, though this denomination is not entirely correct. Data subjects have the right to obtain from data controllers the erasure of personal data concerning them without undue delay.

I’m not entirely clear what ‘undue delay’ means in practice, but when building systems we should keep these requirements in mind. Being able to add, modify, and delete information is a key part of a social network. I wonder what happens when blockchain is involved, given that it’s immutable?
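Setting the blockchain question aside, here’s a minimal sketch of what an erasure-request handler could look like, assuming a hypothetical in-memory store standing in for a real database. In practice you’d also have to purge backups, caches, and search indexes:

```python
from datetime import datetime, timezone

PROFILES = {}     # hypothetical store: user_id -> personal data
ERASURE_LOG = []  # audit trail of erasure requests

def handle_erasure_request(user_id: str) -> bool:
    """Erase a data subject's personal data and record when the request was honoured."""
    existed = user_id in PROFILES
    PROFILES.pop(user_id, None)  # remove personal data from the primary store
    ERASURE_LOG.append({
        "user_id": user_id,
        "erased": existed,
        "handled_at": datetime.now(timezone.utc).isoformat(),
    })
    return existed
```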

The thing that concerns most organisations when it comes to GDPR is Article 79, which states that data subjects have legal recourse if they’re not happy with the response they receive:

Furthermore, we should mention the right to an effective judicial remedy against a controller or processor laid down in Article 79. It allows data subjects to initiate proceedings against data controllers or processors before a court of the Member State of the establishment of controllers or processors or in the Member State where they have their habitual residence unless controllers or processors are public authorities of the Member States and exercise their public powers. Thus, data subjects can directly complain before a judicial institution against controllers and processors, such as Google or others.

I’m particularly interested in what effect data subjects having the right “not to be subjected to automated individual decision-making” will have. I can’t help but think that (as Google has already started to do through granular opt-in questions) organisations will find ways to make users feel like it’s in their best interests. They already do that with ‘personalised advertising’.

There’s a certain amount of automation that can be useful, the standard example being Amazon’s recommendations system. However, I think the GDPR focuses more on things like decisions about whether or not to give you insurance based on your social media profile:

There are three additional rights of data subjects laid down in the General Data Protection Regulation, and we will cover them here. These rights are – the right not to be subjected to automated individual decision-making, the right to be represented by organisations and others, and the right to compensation. Given that we live in a technologically advanced society, many decisions can be taken by the systems in an automatic manner. The GDPR grants to all of us a right not to be subjected to a decision that is based only on an automated processing, which includes profiling. This decision must significantly affect an individual, for example, by creating certain legal effects.

Thankfully, when it comes to challenging organisations on the provisions of the GDPR, data subjects can delegate their representation to a non-profit organisation. This is a sensible step, and prevents lawyers from becoming rich on GDPR challenges. Otherwise, I can imagine data sovereignty becoming the next personal injury industry.

If an individual feels that he or she can better give away his or her representation to somebody else, this individual has the right to contact a not-for-profit association– such as European Digital Rights – in order to be represented by it in filing complaints, exercising some of his or her rights, and receiving compensation. This might be useful if an action is to be taken against such a tech giant as Google or any other person or entity. Finally, persons who have suffered material or non-material damage as a result of an infringement of the GDPR have the right to receive compensation from the controller or processor in question.

Finally, and given that the GDPR applies not only across European countries, but to any organisation that processes EU citizen data, the following is interesting:

The European Union and its Member States cannot simply impose restrictions addressed in Article 23 GDPR when they wish to. These restrictions must respect the essence of the fundamental rights and freedoms and be in line with the requirements of the EU Charter of Fundamental Rights and the European Convention for the Protection of Human Rights and Fundamental Freedoms. In addition, they are required to constitute necessary and proportionate measures in a democratic society, meaning that there must be a pressing social need to adopt these legal instruments and that they must be proportionate to the pursued legitimate aim. Also, they must be aiming to safeguard certain important interests. So, laws adopted by the EU or its Member States that seek to restrict the scope of data subjects’ rights are required to be necessary and proportionate and must protect various interests discussed below.

I learned a lot this week which will stand me in good stead as we design Project MoodleNet. I’m looking forward to putting all this into practice!

Image by Erol Ahmed available under a CC0 license

Destroying capitalism, one stately home at a time

This week, I spent Monday evening to Wednesday evening at Wortley Hall, near Sheffield, England. It’s a stately home run by a worker-owned co-op and I was there with my We Are Open colleagues for the second annual Co-operative Technologists (CoTech) gathering. CoTech is a network of UK-based co-operatives who are focused on tech and digital.

We Are Open crew

The ‘not unattractive’ We Are Open crew (Bryan, John, Laura, Doug)

Last year, at the first CoTech gathering, we were represented by John Bevan — who was actually instrumental in getting the network off the ground. This time around, not only did all four members of We Are Open attend, but one of us (Laura Hilliger) actually helped facilitate the event.

Wortley Hall ceiling

The ceilings were restored by the workers who bought the hall from a lord

I wasn’t too sure what to expect, but I was delighted by the willingness of the 60+ people present to get straight into finding ways we can all work together. We made real progress over the couple of days I was there, and I was a little sad that other commitments meant I couldn’t stay until the bitter end on Thursday lunchtime.

Wortley Hall post-its

People dived straight in and started self-organising

We self-organised into groups, and the things I focused on were introducing Nextcloud as a gap in the CoTech shared services landscape, and helping define processes for using the various tools we have access to. Among the many other things that people collaborated on were sales and marketing, potentially hiring our first CoTech member of staff, games that could help people realise that they might be better off working for a co-op, defining a constitution, and capturing the co-operative journeys that people have been on.

Wortley Hall - CoTech landscape

This diagram helped us orient ourselves within the landscape we share

There was a lot of can-do attitude and talent in the room, coupled with a real sense that we’re doing important work that can help change the world. There’s a long history of co-operation that we’re building upon, and the surroundings of Wortley Hall certainly inspired us in our work! Our co-op will definitely be back next year, and I’m sure most of us will meet at CoTech network events again before then.

Wortley Hall plaque

Each room at Wortley Hall has been ‘endowed’ by a trade union to help with its restoration

The CoTech wiki is available here. As with all of these kinds of events, we had a few problems with the wifi, which means that, at the time of publishing this post, not everything has been uploaded to the wiki. It will appear there in due course.

Wortley Hall artwork

All of the artwork was suitably left-wing and revolutionary in nature

Although there are member-only spaces (and benefits), anyone – whether currently a member of a worker-owned co-op or not – is also welcome to join the CoTech community discussion forum.

New blog: Doug, uncensored

TL;DR: Head to or for my new blog about freedom and decentralised technologies.

One of the great things about the internet, and one of the things I think we’re losing, is the ability to experiment. I like to experiment with my technologies, my identity, and my belief systems. This flies in the face of services like Facebook that insist on a single ‘real’ identity while slowly deskilling their users.

I’ve been messing about with ZeroNet, which is something I’ve mentioned before, and which gets close to something I’ve wanted now for quite some time: an ‘untakedownable’ website. Whether it’s DDoS attacks, DNS censorship, or malicious code injection, I want a platform that, no matter what I choose to say, will stay up.

To access sites via ZeroNet, you have to be running the ZeroNet service. By default, you view a clone of the site you want to visit on your own machine, accessed in the web browser. That means it’s fast. When the site creator updates the site/blog/wiki/whatever, that update is then sent to peers to distribute. It’s all lightning-quick, and built on BitTorrent technology and Bitcoin cryptography.
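My (possibly oversimplified) understanding is that site updates are signed with the owner’s key, and peers only redistribute updates whose signature verifies. Here’s a conceptual sketch of that idea using Ed25519 signatures from Python’s `cryptography` package. ZeroNet itself uses Bitcoin-style keys and addresses, so treat this as an illustration of the principle rather than ZeroNet’s actual implementation:

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The site owner holds the private key; peers only know the public key.
owner_key = ed25519.Ed25519PrivateKey.generate()
site_public_key = owner_key.public_key()

update = b'{"content.json": "new blog post ..."}'
signature = owner_key.sign(update)  # only the key holder can produce this

def peer_accepts_update(public_key, payload: bytes, sig: bytes) -> bool:
    """A peer redistributes an update only if the owner's signature verifies."""
    try:
        public_key.verify(sig, payload)
        return True
    except InvalidSignature:
        return False

print(peer_accepts_update(site_public_key, update, signature))  # True
```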

The trouble, of course, comes when someone who isn’t yet running ZeroNet wants to visit a site. Thankfully, there’s a way around that using a ‘proxy’ or bridge. This is ZeroNet running on a public server for everyone to use. There are several of these, but I’ve set up my own using this guide.

I encourage you to download and experiment with ZeroNet but, even if you don’t, please check out my new blog. You can access it via or — the rather long and unwieldy actual IP address of the server running the public-facing copy is

Finally, if you’re thinking, “What is this?! It’ll never catch on…” then I’d like to remind you about technologies that people didn’t ‘get’ at first (e.g. Twitter in 2007), as well as that famous Wayne Gretzky quotation, “I skate to where the puck is going to be, not where it has been”.

Decentralised technologies mean censorship-resistant websites

As I write this, I’m in an apartment in Barcelona, after speaking and running a workshop at an event.

On Sunday, there was a vote for Catalonian independence. It went ahead due to the determination of teachers (who kept schools open as voting centres), the bravery of firemen and Catalan police (who resisted Spanish police), and… technology.

As I mentioned in the first section of my presentation on Wednesday, I’m no expert on Spanish politics, but I am very interested in the Catalonian referendum from a technological point of view. Not only did the Spanish government take a heavy-handed approach by sending in masked police to remove ballot boxes, but they applied this to the digital domain, raiding internet service providers, blocking websites, and seizing control of referendum-related websites.

Yet, people still accessed websites that helped them vote. In fact, around 42% managed to vote, despite all of the problems and the potential danger involved. By way of contrast, no more than 43% of the population has ever voted in a US Presidential election (see comments section). There have been claims of voting irregularities (which can be expected when Spanish police were using batons and rubber bullets), but of those who voted, 90% voted in favour of independence.

People managed to find out the information they required through word of mouth and via websites that were censorship-resistant. The technologists responsible for keeping the websites up despite interference from Madrid used IPFS, which stands for the InterPlanetary File System. IPFS is a decentralised system which removes the reliance on a single point of failure (or censorship) while simultaneously addressing the inefficiencies caused by unnecessary file duplication.
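The core trick, as I understand it, is content addressing: you ask for a file by a hash of its contents, so any peer can serve it and you can check you received the right thing. A toy sketch of the idea (real IPFS uses multihashes and Merkle DAGs, not a plain SHA-256 lookup table):

```python
import hashlib

SWARM = {}  # stands in for content held by many peers

def add(content: bytes) -> str:
    """Address content by a hash of the content itself."""
    address = hashlib.sha256(content).hexdigest()
    SWARM[address] = content
    return address

def fetch(address: str) -> bytes:
    """Retrieve content from any peer and verify it matches its address."""
    content = SWARM[address]
    assert hashlib.sha256(content).hexdigest() == address, "content was tampered with"
    return content

addr = add(b"Where to vote on Sunday ...")
print(addr[:16], len(fetch(addr)))
```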

The problem with IPFS, despite its success in this situation, is that it’s mainly used via the command line. As much as I’d like everyone to have some skills around using terminal windows, realistically that isn’t likely to happen anytime soon in a world of Instagram and Candy Crush.

Instead, I’ve been spending time investigating ZeroNet, which is specifically positioned as providing “open, free and uncensorable websites, using Bitcoin cryptography and BitTorrent network”. Instead of there being ‘gateways’ through which you can access ZeroNet sites via the open web, you have to install it and then run it locally in a web browser. It’s a lot easier than it sounds, and the cross-platform app has an extremely good-looking user interface.

I’ve created a ‘Doug, uncensored’ blog using ZeroNet. This can be accessed by anyone who is running the service and knows the (long) address. When you access the site, you’re accessing it on your own machine and then serving it up to other peers — just like with BitTorrent. It’s the realisation of the People’s Cloud idea that Vinay Gupta came up with back in 2013. The great thing about that is the websites work even when you’re offline, and sync when you re-connect.

As with constant exhortations for people to be more careful about their privacy and security, decentralised technologies might seem ‘unnecessary’ to most people when everything is going fine. However, just as we put curtains on our windows and locks on our doors, and sign contracts ‘just in case’ something goes wrong, so I think decentralised technologies should be our default.

Why do we accept increased centralisation and surveillance as the price of being part of the digital society? Why don’t we take back control?

Again, as I mentioned in my presentation on Wednesday, we look backwards too much when we’re talking about digital skills, competencies, and literacies. Instead, let’s look forward and ensure that the next generation of technologies don’t sell us down the river for advertising dollars.

Have a play with ZeroNet and, if you want to really think through where we might be headed with all of this, check out Bitnation.

Image CC BY-NC-ND Adolfo Luhan

My blog was without an ‘LMS isn’t dead’ post, so I thought I’d write one.

Back in November 2007, Martin Weller, a Professor at the Open University, wrote that, in his opinion, the VLE/LMS is dead – “but we’ll probably take five years to realise it”. It’s been almost a decade since his post, and there has been plenty more written about the LMS. In fact, Google returns almost 20,000 results for the search term “LMS is dead”, and just recently Jim Groom wrote a widely-shared and commented-upon post about it.

Yet, it seems, the truth is that the LMS is not going away anytime soon. Why is that? Why have the alternative solutions mentioned in Martin’s post withered and died while the LMS lives on? Why would anyone in 2017 use an LMS? Curiously, the answers are right there in the post from 10 years ago:

  • Authentication
  • Convenience
  • Support
  • Reliability
  • Monitoring

Meanwhile, the reasons Martin gives in that post for moving away from an LMS have largely been negated by developments over the last ten years. Here’s his original list of the benefits of using a ‘small pieces, loosely joined’ approach instead of an LMS:

  • Better quality tools
  • Modern look and feel
  • Appropriate tools
  • Cost
  • Avoids software sedimentation
  • Disintermediation happens

Back when he wrote this post, I would have agreed with all of Martin’s points, envisioning a future filled with users merrily skipping between platforms into the sunset. I’ve learned a lot since then, and it’s pretty clear that a ‘small pieces, loosely joined’ approach is unlikely ever to happen. The LMS market is growing, not shrinking.

My reason for thinking about all this is because I’ve just started doing some work with Totara, an organisation I first came across back in 2012 when they built the Open Badges functionality for Moodle. Since then, while their code remains open source, they’ve ‘forked’ from the Moodle codebase. They’ve also got Totara Social, an ‘enterprise social network’ platform.

Interestingly, Totara are in the process of removing ‘LMS’ from their branding. That doesn’t mean that the concept of the learning management system is dead. No. What’s happening here is that the term ‘LMS’ has become a ‘dead metaphor’. It no longer does any useful work.

To quote myself, elsewhere:

The problem is that people will, either purposely or naïvely, use human-invented terms in ‘incorrect’ ways. This can lead to exciting new avenues, but it also spells the eventual death of the original term as it loses all explanatory power. A dead metaphor, as Richard Rorty says, is only good as the ‘coral reef’ on which to build other terms.

A learning management system, in essence, is a digital space to support learning. It doesn’t particularly matter what you call it so long as it:

  1. Has the functionality you require
  2. Costs what you can afford
  3. Is reliable

The reason I’ve accepted this piece of work with Totara is that they tick all of my boxes in their approach to this space. They’re innovative. They’re open source. They’ve got a sustainable business model. I’m looking forward to helping them develop a workable vision and strategy around their community that fits with their pretty unique partner network approach.

As regular readers will be aware, and as betrayed by the introduction to this post, my background is in formal and informal learning. The Learning & Development (L&D) space is relatively new to me, so if you’ve got tips on people to follow, places to hang out, and things to read, please do let me know!

Photo by Jon Sullivan used under a Creative Commons BY-NC licence.

Why I’ve just ditched my cloud-based password manager

TL;DR: I’ve ditched LastPass in favour of LessPass. The former stores your passwords in the cloud and requires a master password. The latter uses ‘deterministic password generation’ to keep things on your own devices.

Although I’ve used LastPass for the past six years, I’ve never been completely happy with it. There have been breaches, and a couple of years ago it was acquired by LogMeIn, a company not exactly revered in terms of trust and customer service. Their ‘emergency break-in’ feature makes me feel that my passwords are just one serious hack or government request away from being exposed.

I read Hacker News on pretty much a daily basis and I’m particularly interested in the underlying approaches to technology that change over time. There are certain assumptions and habits of mind that come to be questioned which lead to different, usually better, solutions to certain problems. Today, the issue of cloud-based password managers was again on the front page.

From the linked article:

When passwords are stored, they must be encrypted and then retrieved later when needed. Storage, of any type, is a burden. Users are required to backup stored passwords and synchronize them across devices and implement measures to protect the stored passwords or at least log access to the stored passwords for audit purposes. Unless backups occur regularly, if the encrypted password file becomes corrupt or is deleted, then all the passwords are lost.

Users must also devise a “master password” to retrieve the encrypted passwords stored by the password management software. This “master password” is a weak point. If the “master password” is exposed, or there is a slight possibility of potential exposure, confidence in the passwords are lost.


I believe that password management should only occur locally on end use devices, not on remote systems and not in the client web browser.

Remote systems are outside the user’s control and thus cannot be trusted with password management. These systems may not be available when needed and may not be storing or transmitting passwords correctly. Externally, the systems may seem correct (https, etc.) but behind the scenes, no one really knows what’s going on, how the passwords are being transmitted, generated, stored, or who has access to them.

It’s pretty difficult to argue against these two points. Having felt uneasy for a while, I knew it was time to do something different. It was time to ditch LastPass.

I looked at a couple of different solutions: the one proposed by the author of the above quotations (too complex to set up), as well as one which looked promising, but now seems to be unsupported. In the end, I decided upon LessPass, which has been recommended to me by a few people this year.

How is LessPass different from LastPass? This gif from their explanatory blog post is helpful:


All of this happens in the browser, without your data being transmitted anywhere else.

Basically, you enter the following:

  1. Name of the site or thing for which you need a password
  2. Your username
  3. A secret passphrase

…and, from these three pieces of information, LessPass uses complex algorithms and entropy stuff that I don’t understand to generate a password that you can then copy.
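From what I can gather, the general shape is something like the sketch below: derive the password from the three inputs with a slow hash, so nothing ever needs to be stored or synced. To be clear, this is an illustration of deterministic derivation in general, not LessPass’s actual algorithm or character rules:

```python
import hashlib
import string

def derive_password(site: str, login: str, passphrase: str, length: int = 16) -> str:
    """Deterministically derive a password from (site, login, passphrase); nothing is stored."""
    salt = f"{site}:{login}".encode()
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    # Map bytes onto the allowed characters (modulo bias ignored for brevity).
    return "".join(alphabet[b % len(alphabet)] for b in key[:length])

# The same three inputs always produce the same password, on any device.
print(derive_password("example.com", "doug", "correct horse battery staple"))
```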


The fact that I don’t understand it is fine, because there are people who do, and the code is Open Source. It can be inspected for bugs and vulnerabilities — unlike the proprietary solution provided by LastPass.

The options button at the bottom-right of the LessPass window gives the user advanced options such as:

  • Length of password
  • Types of character to include in the password
  • Increment number (if you’re forced to rotate passwords regularly)

My favourite LessPass feature, though, solves a nagging problem I’ve had for ages. If you have a long passphrase, then sometimes it can be very easy to mistype it. You don’t want to reveal your obfuscated passphrase to the world, so how can you be sure that you’ve typed it correctly?


Simple! LessPass adds an emoji triplet to the right of the secret passphrase box. You’ll notice that it changes as you type and, when you finish, it should always look the same. If it doesn’t, then you’ve mistyped your passphrase.
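I’d guess the fingerprint works along these lines: hash what has been typed so far and map a few bytes onto a small set of emoji. Again, this is a sketch of the idea rather than LessPass’s exact scheme:

```python
import hashlib

EMOJI = ["🐱", "🚀", "🌻", "🎲", "🍩", "🔑", "🐙", "🎧"]  # small illustrative set

def passphrase_fingerprint(passphrase: str) -> str:
    """Map the first three bytes of a hash to an emoji triplet; a typo changes the triplet."""
    digest = hashlib.sha256(passphrase.encode()).digest()
    return "".join(EMOJI[b % len(EMOJI)] for b in digest[:3])

print(passphrase_fingerprint("correct horse battery staple"))
print(passphrase_fingerprint("correct horse battery stapel"))  # mistyped: different triplet
```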

I’ll be making the transition from LastPass to LessPass over the next few weeks. It’s not as simple as just exporting from one database into another, as the whole point of doing this is that there is no single place from which someone can hoover up my passwords.

So my plan of action is:

  1. Every time I use a service, create a new password using LessPass.
  2. Delete existing password in LastPass.
  3. Rinse and repeat until most of my passwords are generated via LessPass.
  4. Delete my LastPass account.
  5. Celebrate my higher levels of personal security.

Questions? Ask away in the comments section!

Photo: Crypt by Christian Ditaputratama under a CC BY-SA license

Friends don’t let friends use Facebook

Facebook, on the other hand, only offers its users a forum to connect and share information. Facebook’s income derives from selling targeted advertising to be delivered to those same users, based on preferences the site has learned from their comments, friends, and preferences. It has no goods or services to sell, and its users don’t buy anything. Thus, its only product to take to market is, in fact, its users’ data. (source)

I don’t use Facebook. You shouldn’t either.

I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages. (Jonathan Albright, source)

Personalised advertising isn’t useful. It’s invasive, and it’s used to build a profile to manipulate you and your ‘friends’.

Using personality targeting, Facebook posts can attract up to 63 percent more clicks and 1,400 more conversions. (source)

There are several pretty scary implications of where this could take us by 2020:

  1. Public sentiment as high-frequency trading — algorithms compete to sway the opinions of the electorate / consumers.
  2. Personalized, automated propaganda — not just lies by politicians, but auto-generated lies created by bots who know which of your ‘buttons’ to press.
  3. Ideological filter matrices — what happens when all of the other ‘people’ in your Facebook group are actually bots?

So, not only will I not use Facebook, but (like Dave Winer and John Gruber) I won’t link to it. Nor will I accept organisations that I’m part of setting up Facebook groups ‘for convenience’ or using a Facebook page in lieu of a website.

 Facebook is designed from the ground up as an all-out attack on the open web. (John Gruber)

The web is a huge force for good. We shouldn’t let inertia and a lack of digital skills turn it into a series of walled data mines.

Get a blog. If your ideas have any value put them on the open web. (Dave Winer)

From a business point of view, you’re mad to put all of your eggs in one basket. Get a website. Facebook’s content is, by design, not indexed by search engines; it’s invisible to them.

Look, I get that I’m the nut who doesn’t want to use Facebook. I’m not even saying don’t post your stuff to Facebook. But if Facebook is the only place you are posting something, know that you are shutting out people like me for no good reason. Go ahead and post to Facebook, but post it somewhere else, too. Especially if you’re running a business.


It’s 2017. There are a million ways to get a web site set up inexpensively that you can easily update yourself. Setting up a Facebook page and letting your web site rot, or worse, not even having a web site of your own, is outsourcing your entire online presence. That’s truly insane. It’s a massive risk to your business, and frankly, stupid. (source)

I feel more strongly about Facebook’s threat to the web than I did about Microsoft’s Internet Explorer at the turn of the millennium. Scarily, it looks like Twitter might be going the same way. I blame venture capital and invasive advertising.

Individual actions build up to movements. Resist. Find alternatives. Don’t be a boiled frog.

Header image based on an original by rodrigo

3 reasons I’ll not be returning to Twitter

This month I’ve been spending time away from Twitter in an attempt to explore Mastodon. I’ve greatly enjoyed the experience, discovering new people and ideas, learning lots along the way.

I’ve decided, for three reasons, that Twitter from now on is going to be an ‘endpoint’, somewhere I link to my thoughts and ideas. It’s the way I already use LinkedIn, for example, and the way I used to use Facebook — until I realised that the drawbacks of being on there far outweighed any benefits. This model, for those interested, is known as POSSE: Publish (on your) Own Site, Syndicate Elsewhere.

There are three main reasons I came to this decision:

1. Social networks should be owned by their users

Last week, at Twitter’s 2017 Annual Meeting of Stockholders, there was a proposal to turn the service into a user-owned co-operative. It failed, but these kinds of things are all about the long game. You can find out more about the movement behind it here.

However, it’s already possible to join a social network that’s owned by its users. I’m a member of, which is an instance of Mastodon, a decentralised, federated approach to social media. I’m paying $3/month and have access to a Loomio group for collective decision-making.

I imagine some people reading this will be rolling their eyes, thinking “this will never scale”. I’d just like to point out a couple of things. First, services backed by venture capital can grow rapidly, but this doesn’t necessarily mean they’re sustainable. Second, because Mastodon is a protocol rather than a centralised service, it can provide communities of practice within a wider ecosystem. In that sense, it’s a bit like Open Badges.

2. Twitter’s new privacy policy

Twitter’s new privacy policy, which comes into effect on 15th June 2017, signals the end of their support for Do Not Track. Instead, they have brought in ‘more granular’ privacy settings.

The Electronic Frontier Foundation is concerned about this:

Twitter has stated that these granular settings are intended to replace Twitter’s reliance on Do Not Track. However, replacing a standard cross-platform choice with new, complex options buried in the settings is not a fair trade. Although “more granular” privacy settings sound like an improvement, they lose their meaning when they are set to privacy-invasive selections by default. Adding new tracking options that users are opted into by default suggests that Twitter cares more about collecting data than respecting users’ choice.

It’s also worth noting that Twitter talks about privacy in terms of ‘sharing’ data, rather than its collection. They’ll soon be invasively tracking users around the web, just like Facebook. Why? Because they need to hoover up as much data as possible, to sell to advertisers, to increase the value of their stock to shareholders. Welcome to the wonders of surveillance capitalism.

3. Anti-individualism

There’s a wonderful interview with Adam Curtis on Adam Buxton’s podcast, parts of which I’ve found myself re-listening to over the past few days. Curtis discusses many things, but the central narrative is about the problems that come with individualism underpinning our culture.

We’re all expected to express how individual we are, but the way that we do this is through capitalism, meaning that we end up living in an empty, hollow simulacrum, mediated by the market. Guy Debord had it right in The Society of the Spectacle. It also reminds me of this part of Monty Python’s Life of Brian: “Yes, we’re all individuals.”


So, in my own life, I’m trying to counter this by advocating for a world that’s more co-operative, more sustainable, and more focused on collective action rather than the glorification of individuals.

To be clear: I’ll get around to replying to Twitter direct messages, but I am no longer looking to engage in conversation either in public or private on that platform. I’ve updated my self-hosted Twitter archive and am considering using the open source Cardigan app to delete my tweets before May 2017 to prevent data-mining.

Image CC BY-NC Miki J.

Some thoughts on Keybase, online security, and verification of identity

I’m going to stick my neck out a bit and say that, online, identity is the most important factor in any conversation or transaction. That’s not to say I’m a believer in tying these things to real-world, offline identities. Not at all.

Trust models change when verification is involved. For example, if I show up at your door claiming to be Doug Belshaw, how can I prove that’s the case? The easiest thing to do would be to use government-issued identification such as my passport or driving license. But what if I haven’t got any, or I’m unwilling to use it? (see the use case for CheapID) In those kinds of scenarios, you’re looking for multiple, lower-bar verification touchstones.

As human beings, we do this all of the time. When we meet someone new, we look for points of overlapping interest, often based around human relationships. This helps situate the ‘other’ in terms of our networks, and people can inherit trust based on existing relationships and interactions.

Online, it’s different. Sometimes we want to be anonymous, or at least pseudonymous. There’s no reason, for example, why someone should be able to track all of my purchases just because I’m participating in a digital transaction. Hence Bitcoin and other cryptocurrencies.

When it comes to communication, we’ve got encrypted messengers, the best of which is widely regarded to be Signal from Open Whisper Systems. For years, we’ve tried (and failed) to use PGP/GPG to encrypt and verify email transactions, meaning that trusted interactions are increasingly taking place in locations other than your inbox.

On the one hand, we’ve got purist techies who constantly question whether a security/identity approach is the best way forward, while at the other end of the spectrum there are people using the same password (without two-factor authentication) for every app or service. Sometimes, you need a pragmatic solution.


I remember being convinced to sign up for Keybase when it launched thanks to this Hacker News thread, and particularly this comment from sgentle:

Keybase asks: who are you on the internet if not the sum of your public identities? The fact that those identities all make a certain claim is a proof of trust. In fact, for someone who knows me only online, it’s likely the best kind of trust possible. If you meet me in person and I say “I’m sgentle”, that’s a weaker proof than if I post a comment from this account. Ratchet that up to include my Twitter, Facebook, GitHub, personal website and so forth, and you’re looking at a pretty solid claim.

And if you’re thinking “but A Scary Adversary could compromise all those services and Keybase itself”, consider that an adversary with that much power would also probably have the resources to compromise highly-connected nodes in the web of trust, compromise PKS servers, and falsify real-world identity documents.

I think absolutism in security is counterproductive. Keybase is definitionally less secure than, say, meeting in person and checking that the person has access to all the accounts you expect, which is itself less secure than all of the above and using several forms of biometric identification to rule out what is known as the Face/Off attack.

The fight isn’t “people use Keybase” vs “people go to key-signing parties”, the fight is “people use Keybase” vs “fuck it crypto is too hard”. Those who need the level of security provided by in-person key exchanges still have that option available to them. In fact, it would be nice to see PKS as one of the identity proof backends. But for practical purposes, anything that raises the crypto floor is going to do a lot more good than dickering with the ceiling.

Since the Trump inauguration, I’ve seen more notifications that people are using Keybase. My profile is here: Recently, cross-platform apps for desktop and mobile devices have been added, meaning that not only can you verify your identity across the web, but you can also chat and share files securely.

It’s a great solution. The only word of warning I’d give is don’t upload your private key. If you don’t know how public and private keys work, then please read this article. You should never share your private key with anyone. Keep it to yourself, even if Keybase claim it will make your life easier.
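If you want to see what keeping things local looks like, here’s a sketch using Python’s `cryptography` package: generate the keypair on your own machine, share only the public half, and keep the private half encrypted on disk (the passphrase here is obviously just a placeholder):

```python
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

# Generate the keypair locally; the private key never needs to leave this machine.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

# Only the public half gets shared (e.g. attached to an online profile).
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

# The private half stays on disk, protected by a passphrase.
private_pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"a long local passphrase"),
)

print(public_pem.decode().splitlines()[0])  # -----BEGIN PUBLIC KEY-----
```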

To my mind, all of this fits into my wider work around Open Badges. Showing who you are and what you can do on the web is a multi-faceted affair, and I like the fact that I can choose to verify who I am. What I opt to keep separate from this profile (e.g. my gamertag, other identities) is entirely my choice. But verification of identity on the internet is kind of a big deal. We should all spend longer thinking about it, I reckon.

Main image: Blondinrikard Fröberg

So it turns out that you can pretty much do whatever you like on your own website

Last week, Audrey Watters blocked Hypothes.is and Genius on her website. These two tools allow a ‘layer’ to be added to websites for annotation and discussion that can’t necessarily be controlled by the person who owns that site.

Blocking annotation tools does not stop you from annotating my work. I’m a fan of marginalia; I am. I write all over the books I’ve bought, for example. Blocking annotations in this case merely stops you from writing in the margins here on this website.

My first reaction? Audrey can do whatever she likes. Just as when she removed the ability to comment on her site a few years back, I didn’t understand the decision at first, but then it kind of made sense. Either way, it’s her site, and she can do whatever she wants.

So far, so why-are-you-even-writing-a-post-about-this? Discussions on Twitter, Mastodon, Slack, and elsewhere show that this is a live issue. So, naturally, I’ve been thinking about it. I have to say that I agree with Mike Caulfield’s sentiments:

My take (of course) is that annotation works best through a system of copies. Anyone should be able to annotate a copy of your work. But it’s not clear to me that people have the right to piggyback on the popularity of an address that you’ve worked your butt off to promote. It’s not clear to me that they should get to annotate the master file. This has always been the problem with comments as well — they work best on small sites, and go bad when they give users a much larger platform than they have earned. As with everything online, the phenomenon is gendered as well.

It seems what Audrey is doing is protecting her ‘means of production’ from what she considers to be an active assault from those who wish to piggyback on the success of her work. Some people have questioned how that works with the explicitly ‘open’ stance that Audrey takes. However, I think any perceived tension between her move and open licensing goes away when we think of some other examples.

Here are three:

  1. Pokémon Go — this location-based, augmented reality game used some people’s residences as ‘gyms’ where characters in the game did battle. This caused real-world issues. Most people thought that random strangers pulling on to their drive to play games was an infringement of their civil liberties.
  2. Google Street View — this service involves a car mounted with 360° cameras taking photographs to improve Google’s mapping service. Faces were blurred out, but this wasn’t good enough for Germany’s stringent privacy laws, and Google has been prevented from capturing images at least once, especially where people are on their own property.
  3. Robots.txt — this text file that website owners can include in the root folder of their domain specifies what web crawlers can and cannot do. If you say that you don’t want your site to be indexed, then search engines and other aggregation engines should (legally?) comply (see the sketch after this list).
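As promised above, here’s a small sketch using Python’s standard-library robots.txt parser, showing how a well-behaved crawler is supposed to ask permission first. The bot name and URLs are made up:

```python
from urllib import robotparser

# A well-behaved crawler consults robots.txt before fetching anything.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the site's robots.txt

if rp.can_fetch("MyAnnotationBot", "https://example.com/some-post/"):
    print("Allowed to fetch and index this page")
else:
    print("The site owner has asked crawlers like this one to stay away")
```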

Using these as touchstones, it seems fair enough for someone to insist that you create a copy of their work to be able to annotate it. As Mike Caulfield hints at, giving people the ability to comment on the master document seems like a privilege rather than a right.

Perhaps those creating annotation engines should find a way to seek the domain owner’s permission? An easy way to do that would be to get them to add the necessary code to activate annotation (as we did with OB101), rather than make it a free-for-all…

Image CC BY-NC-SA Karl Steel